Updated May 8, 2026

Raff Object Storage is S3-compatible: any AWS S3 SDK or tool can talk to it once you point the SDK at Raff’s endpoint with valid Raff access keys. You don’t need a Raff-specific client library — the same boto3, aws-cli, aws-sdk-go-v2, aws-sdk-js, rclone, s3cmd, mc, Cyberduck, etc. that work against AWS S3 work against Raff.
Under the hood, Raff Object Storage runs on Ceph RGW (the RADOS Gateway), the same battle-tested S3 implementation behind many production object stores at scale. Ceph RGW implements the S3 API directly, which is why the compatibility surface is broad — the constraints below mirror the standard S3 spec, not Raff-specific quirks.
This page covers what that actually means in practice. It’s not just “we accept S3 API calls” — there are practical compatibility lines that matter when you’re porting code from AWS or building against Raff for the first time.
What “S3-compatible” guarantees
The S3 API surface that’s effectively universal across compatible providers (AWS, DigitalOcean Spaces, Cloudflare R2, Backblaze B2, Wasabi, MinIO, Vultr, Raff):

| Operation | What it does | Supported on Raff |
|---|---|---|
| `CreateBucket` / `DeleteBucket` | Bucket lifecycle | ✅ |
| `ListBuckets` | List the buckets you own | ✅ |
| `ListObjects` / `ListObjectsV2` | List a bucket’s contents | ✅ |
| `PutObject` / `GetObject` / `HeadObject` / `DeleteObject` | Object CRUD | ✅ |
| `CopyObject` | Server-side copy | ✅ |
| `CreateMultipartUpload` / `UploadPart` / `CompleteMultipartUpload` / `AbortMultipartUpload` | Multipart upload protocol | ✅ |
| `PutObjectAcl` / `GetObjectAcl` | Per-object ACL | ✅ |
| `PutBucketAcl` / `GetBucketAcl` | Bucket ACL | ✅ |
| `PutBucketPolicy` / `GetBucketPolicy` / `DeleteBucketPolicy` | JSON policy management | ✅ — see the policy templates |
| `PutBucketCors` / `GetBucketCors` | CORS configuration | ✅ |
| Presigned URL generation (SigV4) | Time-limited signed download/upload links | ✅ |
| Bucket versioning | Keep multiple versions of an object | ✅ — see the Properties tab |
What’s AWS-only — won’t work on Raff (or any S3-compatible provider)
These are AWS-specific extensions; they’re not part of the S3 API “spec” that compatible providers implement.

| AWS-only feature | What it is | Use instead |
|---|---|---|
| S3 Select / Glacier Select | Run SQL-style queries against object contents server-side | Read the object, query client-side; or use a database |
| Object Lambda | Transform objects on-read with AWS Lambda | Do transforms in your application layer |
| Storage Classes beyond Standard (Glacier, Intelligent-Tiering, IA, Deep Archive) | AWS’s tiered storage with lifecycle transitions | Single-class today; an HDD class is on Raff’s roadmap for cold data |
| S3 Transfer Acceleration | AWS edge-network upload speedup | Standard upload to the regional endpoint; multipart parallelism is your performance lever |
| S3 Access Points / Multi-Region Access Points | Named alternative endpoints with their own policies | Use bucket policies directly |
| S3 Object Lock (full WORM compliance) | Write-once-read-many with regulatory retention modes | Use Deny Delete policies for similar effect; full compliance modes not available |
| S3 Replication (CRR / SRR / RTC) | Cross-region or same-region async replication | App-level replication, or copy + cron |
| Requester Pays | Charge the requester not the bucket owner for egress | Not available on Raff — egress is billed to the bucket owner (first 1 TB/month free, then $0.01/GB) |
| IAM principals in bucket policies | "Principal": "arn:aws:iam::..." | Use access-key-based permissions; for fine-grained access, use Limited access keys |
| SSE-KMS with customer-managed keys | Server-side encryption tied to a specific KMS key | Encryption-at-rest is on by default; per-object KMS configuration not exposed |
| S3 Tables / S3 Tables Buckets | New AWS Iceberg-table primitive | Use a database / data-lake layer above the bucket |
Endpoint URL — virtual-host vs path style
AWS S3 (and Raff) primarily use virtual-hosted-style URLs, where the bucket name appears as a subdomain of the endpoint; in path-style URLs the bucket name is the first path segment instead. Switch the SDK to path style (`addressing_style="path"` in boto3, or `s3ForcePathStyle: true` / `usePathStyle: true` in the JS / Go SDK v2) when:
- Bucket names contain dots (the wildcard TLS cert doesn’t cover nested dots)
- You’re testing against a local S3-compatible server without DNS wildcards
- You hit `SSL: certificate verify failed` errors that look TLS-related
Authentication — SigV4 and the region trap
Raff signs requests with AWS SigV4. The SDK handles the signing automatically once you give it the access key + secret. The thing to get right is the region: Raff’s region is `us-east`, not `us-east-1` (AWS naming). This bites every new integration. Symptoms:
- `SignatureDoesNotMatch` errors
- `The authorization header is malformed; the region '<x>' is wrong`
- Requests work intermittently, then break when the SDK tries a different code path
| SDK | Setting |
|---|---|
| boto3 (Python) | region_name="us-east" in boto3.client(...) |
| aws-cli | aws --region us-east s3 ls (or AWS_REGION=us-east env var, or ~/.aws/config) |
| AWS SDK v3 (JS) | region: "us-east" in client constructor |
| AWS SDK v2 (Go) | config.WithRegion("us-east") |
| rclone | region = us-east in the remote config |
| s3cmd | bucket_location = us-east in ~/.s3cfg |
Do not fall back to `us-east-1` — it’s similar enough to silently mostly-work and then fail mysteriously.
Tooling — what works out of the box
Anything that speaks S3 with a configurable endpoint:

Official AWS SDKs
- Python — `boto3` / `aiobotocore`
- JavaScript / TypeScript — AWS SDK v2 and v3
- Go — `aws-sdk-go-v2`
- Java — AWS SDK for Java
- .NET — AWS SDK for .NET
- Rust — `aws-sdk-s3`
- Ruby — `aws-sdk-s3`
- PHP — AWS SDK for PHP

CLI tools
- `aws-cli` — `aws s3` and `aws s3api` commands
- `s3cmd` — long-standing Python CLI
- `mc` (MinIO Client) — fast, multi-cloud, scriptable
- `rclone` — best for syncs, mirrors, mass-copy operations
- `s5cmd` — high-performance bulk transfers

GUI clients
- Cyberduck — macOS / Windows
- Transmit — macOS
- WinSCP — Windows
- S3 Browser — Windows
- Cloudberry Explorer — Windows / macOS
How “compatible” is “fully compatible”?
S3 has many edge cases. Compatibility across providers is excellent for the operations you’ll actually use day-to-day, and degrades for the AWS-only extensions listed above. A practical rule:
- If a tutorial uses only the basic API (`PutObject`, `GetObject`, presigned URLs, ACLs, multipart), it works on Raff with the endpoint + region change.
- If it uses `s3:Select*`, `iam:`-prefixed permissions, AWS-specific Storage Classes, or KMS-managed keys, it won’t, and you’ll need to redesign that portion.
Common pitfalls — read this once
These are the things that break first-time integrations:
- Forgetting `endpoint_url` — the SDK silently calls `*.amazonaws.com` and you get authentication errors that look weird. Every SDK call must point at the Raff endpoint.
- Region not set to `us-east` — SDKs default to `us-east-1`; SigV4 signs with the wrong region; requests fail with `SignatureDoesNotMatch`.
- Virtual-host vs path style mismatch — buckets with dots in the name, or SDKs that don’t auto-detect, need `addressing_style="path"`.
- Public-bucket assumptions don’t port — AWS needs both ACL and Block Public Access disabled; R2 needs a public binding; Raff needs the bucket ACL or a policy. “Just make it public” isn’t portable code.
- Multipart parts smaller than 5 MiB (except the last part) — providers reject `CompleteMultipartUpload` with `EntityTooSmall`; the upload silently bills storage until you abort it. Most SDKs default to ≥ 5 MiB parts; if you hand-roll multipart, respect the floor.
Related
Use the S3 SDK
Code samples for boto3, aws-cli, JS, Go, rclone.
Generate access keys
Get the credentials you’ll need.
Set public or private
Bucket-level and object-level ACL plus 7 ready-made policies.