Updated May 8, 2026

The most-seen failures with Raff Object Storage and the fix that resolves each. Most of these come from the SDK side; for ACL and policy questions see Set public or private. If your issue isn’t here, contact support@rafftechnologies.com.

Raff Object Storage runs on Ceph RGW (RADOS Gateway), so error responses follow standard S3 conventions — the error codes (`SignatureDoesNotMatch`, `EntityTooSmall`, `AccessDenied`, etc.) are the same ones you’d see from AWS, MinIO, or other Ceph-backed services. Most fixes are universal across S3-compatible providers.
Authentication and signing
`SignatureDoesNotMatch` — request fails with auth errors
Most common cause: the client signs for us-east-1 (the AWS default) but Raff’s region is us-east.

Fix: set region explicitly:

| SDK | Setting |
|---|---|
| boto3 | `region_name="us-east"` |
| aws-cli | `--region us-east` or `AWS_REGION=us-east` |
| AWS SDK v3 (JS) | `region: "us-east"` |
| aws-sdk-go-v2 | `config.WithRegion("us-east")` |
| rclone | `region = us-east` |
| s3cmd | `bucket_location = us-east` |
- Wrong endpoint — confirm `endpoint_url = https://s3.raffusercloud.com`.
- System clock skew — SigV4 rejects requests more than 15 min off real time. Run `ntpdate` or `chronyc sources` to verify.
- Mangled secret key — copy/paste lost a character; regenerate the key if unsure.
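The region and endpoint fixes above can be sketched in Python. This is a hedged sketch: `RAFF_REGION`, `RAFF_ENDPOINT`, and both helper names are illustrative; the returned kwargs feed straight into `boto3.client("s3", **kwargs)` if you use boto3, and the skew check works against any HTTPS response’s Date header.

```python
from datetime import datetime, timezone
from email.utils import parsedate_to_datetime

RAFF_REGION = "us-east"  # not us-east-1, the AWS default
RAFF_ENDPOINT = "https://s3.raffusercloud.com"


def s3_client_kwargs(access_key: str, secret_key: str) -> dict:
    """Keyword arguments for boto3.client("s3", **kwargs).

    Region and endpoint must both be set explicitly; neither is
    picked up from the environment unless you read it yourself.
    """
    return {
        "region_name": RAFF_REGION,
        "endpoint_url": RAFF_ENDPOINT,
        "aws_access_key_id": access_key,
        "aws_secret_access_key": secret_key,
    }


def clock_skew_seconds(http_date_header: str) -> float:
    """Seconds between the local clock and a server's Date header.

    SigV4 rejects requests more than ~15 minutes off, so a skew
    anywhere near 900 seconds explains a SignatureDoesNotMatch.
    """
    server_time = parsedate_to_datetime(http_date_header)
    return abs((datetime.now(timezone.utc) - server_time).total_seconds())
```

Grab the Date header from any `curl -sI https://s3.raffusercloud.com` response to feed the skew check.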
`InvalidAccessKeyId` — key not recognized
- Confirm you copied the key from the right Raff account (check the dashboard).
- Confirm the key wasn’t deleted or disabled — see Object Storage → Access Keys.
- If you saved a typo’d ID, generate a new key — there’s no way to retrieve the original from the dashboard.
`AccessDenied` even with valid keys
- Limited-scope access key that doesn’t include this bucket — see Generate access keys → Limited. Re-scope or create a new key.
- Bucket policy denies the operation — open the bucket’s Permissions tab and check the policy. A `Deny` always wins. If your Deny Delete policy is blocking your own cleanup, Delete Policy first.
- Object owned by another account in a multi-account setup — the current account doesn’t have ACL grants on this object.
- Customer-edited bucket policy broke a Limited key — Limited access keys depend on a Raff-managed policy. If someone edited the policy directly via the S3 API, the auto-grant for that key may have been removed. Either restore the policy or re-create the key.
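As an illustration of the explicit-Deny precedence above, a bucket policy statement like the following blocks deletes for everyone, including the bucket owner, no matter what other statements allow. This is a sketch: the `Sid` and bucket name are made up.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyDeleteExample",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:DeleteObject",
      "Resource": "arn:aws:s3:::my-bucket/*"
    }
  ]
}
```

Until a statement like this is removed (Delete Policy in the dashboard, or `DeleteBucketPolicy` via the API), every delete on the bucket returns `AccessDenied`.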
boto3 / AWS SDK silently calls `*.amazonaws.com`
Cause: the SDK defaults to AWS endpoints unless overridden. Make sure `endpoint_url` (boto3) / `endpoint:` (JS v3) / `BaseEndpoint` (Go v2) is set on every client constructor. The setting doesn’t propagate from the environment unless you explicitly read it. This is the most common copy-paste mistake.

Connection and TLS
`SSL: certificate verify failed` for buckets with dots in the name
Cause: the wildcard TLS certificate covers `*.s3.raffusercloud.com` (one DNS label). Bucket names with dots like `my.app.bucket` resolve to `my.app.bucket.s3.raffusercloud.com` — multi-label, which the wildcard doesn’t cover.

Fix: force path-style addressing.

| SDK | Setting |
|---|---|
| boto3 | `Config(s3={"addressing_style": "path"})` |
| aws-cli | `aws configure set default.s3.addressing_style path` |
| AWS SDK v3 (JS) | `forcePathStyle: true` |
| aws-sdk-go-v2 | `o.UsePathStyle = true` |
| rclone | `force_path_style = true` |
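A quick way to see why dotted bucket names break the wildcard certificate, as a pure-Python sketch (the helper names are illustrative):

```python
def virtual_host_url(bucket: str, key: str) -> str:
    # Virtual-host style: the bucket becomes part of the hostname.
    return f"https://{bucket}.s3.raffusercloud.com/{key}"


def path_style_url(bucket: str, key: str) -> str:
    # Path style: the hostname is fixed, the bucket moves into the
    # path, so the *.s3.raffusercloud.com wildcard always matches.
    return f"https://s3.raffusercloud.com/{bucket}/{key}"


def extra_tls_labels(bucket: str) -> int:
    # A wildcard certificate covers exactly one DNS label; every dot
    # in the bucket name adds a label the certificate cannot cover.
    return bucket.count(".")
```

`my.app.bucket` adds two extra labels, so only the path-style URL passes certificate verification; `my-app-bucket` adds zero and works either way.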
Buckets without dots, like `my-app-bucket`, work with virtual-host style; no path-style override is needed.
`Could not resolve host: s3.raffusercloud.com`
- Local DNS issue — try `dig s3.raffusercloud.com` or `nslookup s3.raffusercloud.com`. If that fails, your DNS resolver is broken; restart networking or switch resolvers.
- Corporate proxy / firewall — some networks block third-party S3 endpoints. Check with your IT team or test from outside the network.
- Typo in the endpoint — `raffusercloud`, not `raffusercloud.io` or other variants. The exact endpoint is `https://s3.raffusercloud.com`.
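The same DNS check can be scripted with only the standard library (a sketch; `resolves` is an illustrative name):

```python
import socket


def resolves(host: str) -> bool:
    """True if the local resolver can turn host into an address."""
    try:
        socket.getaddrinfo(host, 443)
        return True
    except socket.gaierror:
        return False
```

If `resolves("s3.raffusercloud.com")` is False but `resolves("example.com")` is True, the problem is specific to that name (typo or proxy block); if both fail, your resolver itself is down.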
Connection times out, no response
Connection times out, no response
Symptom: the TCP connection to `s3.raffusercloud.com:443` doesn’t establish.

Fix:

- Confirm you can reach the internet at all (`curl https://example.com`).
- Confirm port 443 isn’t blocked (`curl -v https://s3.raffusercloud.com`).
- Check status.rafftechnologies.com for an Object Storage incident.
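Equivalent to the curl checks above, a minimal TCP probe (stdlib-only sketch; `port_open` is an illustrative helper):

```python
import socket


def port_open(host: str, port: int = 443, timeout: float = 5.0) -> bool:
    """True if a TCP connection to host:port establishes in time."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # refused, unreachable, or timed out
        return False
```

`port_open("s3.raffusercloud.com")` returning False while `port_open("example.com")` returns True points at a firewall rule for that endpoint specifically.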
Uploads
`EntityTooSmall` on `CompleteMultipartUpload`
Cause: a part other than the last was smaller than the 5 MiB multipart minimum. Fix: `AbortMultipartUpload` for the upload ID, or apply the Abort Incomplete Uploads lifecycle pattern, then retry with larger parts.
Upload hangs partway, never completes
- Network instability — multipart upload retries individual parts; if your network drops more than the retry budget, the whole upload fails. Check your connection’s stability.
- Single-part PUT on a large file — for files larger than ~100 MB, prefer multipart (every modern SDK auto-switches above 5-10 MB; if your code disables it, files past ~5 GB will fail outright).
- Server-side timeout — large single-PUT requests have a connection time limit. Switch to multipart.
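If you manage multipart uploads by hand rather than through the SDK helpers, planning the part size up front avoids both `EntityTooSmall` and the part-count ceiling. A sketch, assuming the standard S3 limits of a 5 MiB minimum part size and 10,000 parts per upload:

```python
MIN_PART = 5 * 1024 * 1024   # every part except the last must be >= 5 MiB
MAX_PARTS = 10_000           # standard S3 cap on parts per upload


def plan_part_size(total_bytes: int, target_part: int = 100 * 1024 * 1024) -> int:
    """Pick a part size that satisfies both multipart limits."""
    size = max(target_part, MIN_PART)
    # Grow the part size until the object fits within the part cap.
    while (total_bytes + size - 1) // size > MAX_PARTS:
        size *= 2
    return size
```

For example, a 1 TiB object does not fit in 10,000 parts of 100 MiB, so the planner doubles the part size to 200 MiB.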
In boto3, `s3.upload_file` and `Object.upload_file` use multipart by default. In aws-cli, `aws s3 cp` does too. In aws-sdk-go-v2, use `s3manager.Uploader` instead of raw `PutObject`.
Files appear duplicated or partially uploaded
- Multipart upload was aborted before completion — the parts are billed as storage but invisible. Run `ListMultipartUploads` against the bucket to find them, then `AbortMultipartUpload` on each.
- Two clients raced to write the same key. Last-write-wins; the loser sees its data overwritten.
- Object listed before the consistency window settled — Raff’s listing is strongly consistent, but if you `PutObject` then immediately `ListObjects`, retry once if the object isn’t there.
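The abort-all cleanup described above can be scripted against any S3 client that exposes `list_multipart_uploads` / `abort_multipart_upload` (boto3’s does). A hedged sketch; the function takes the client as a parameter, and pagination (`IsTruncated` plus the key/upload-id markers) is omitted for brevity:

```python
def abort_incomplete_uploads(s3, bucket: str) -> list:
    """Abort every in-progress multipart upload in a bucket.

    Aborted parts stop being billed once the abort completes.
    Returns the (key, upload_id) pairs that were aborted.
    """
    aborted = []
    resp = s3.list_multipart_uploads(Bucket=bucket)
    for upload in resp.get("Uploads", []):
        s3.abort_multipart_upload(
            Bucket=bucket, Key=upload["Key"], UploadId=upload["UploadId"]
        )
        aborted.append((upload["Key"], upload["UploadId"]))
    return aborted
```

With boto3: `abort_incomplete_uploads(boto3.client("s3", endpoint_url="https://s3.raffusercloud.com"), "my-bucket")`.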
Dashboard upload progresses, then nothing happens
- Refresh the page — the bucket listing doesn’t always live-update.
- Check the file size — files above ~100 MB use multipart; the dashboard does this transparently but it can take a moment for the assembly to finish on the server side.
- If still missing after 1-2 minutes, contact support with the bucket name and approximate upload time.
ACLs and access
Public bucket but Object URL returns 403
Symptom: the bucket ACL is Public Read but accessing the Object URL in a browser returns Forbidden.

Likely causes:

- Object has its own `Private` ACL override — open the object’s Details page → Access Control. Set to Public.
- Bucket policy explicitly denies anonymous access — the bucket ACL grants public, but a `Deny` in the policy wins. Open the Permissions tab and review the policy JSON.
- Wrong URL format — the public URL is `https://<bucket>.s3.raffusercloud.com/<key>`, not `https://s3.raffusercloud.com/<bucket>/<key>` (path style). The latter requires authentication for browser access.
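A helper that always produces the virtual-host public URL, with the key percent-encoded so spaces and special characters survive the browser (stdlib sketch; the function name is illustrative):

```python
from urllib.parse import quote


def public_object_url(bucket: str, key: str) -> str:
    """Anonymous-access URL: https://<bucket>.s3.raffusercloud.com/<key>."""
    # quote() leaves "/" intact so nested keys keep their structure.
    return f"https://{bucket}.s3.raffusercloud.com/{quote(key)}"
```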
Limited access key suddenly stopped working
Symptom: requests with a Limited access key that used to work now return `AccessDenied`.

Why: Limited access keys depend on a Raff-managed bucket policy that grants them access. If anyone edited that bucket’s policy directly via the S3 API, aws-cli, or the dashboard’s policy JSON editor, the auto-grant for the Limited key may have been overwritten or removed.

Fix:

- Open the bucket’s Permissions tab. Review the JSON policy — look for statements granting access to your access key ID.
- Easiest path: delete the existing Limited access key and create a new one — Raff regenerates the bucket policy for the new key.
- Alternative: restore the missing grant manually in the bucket policy JSON.
- Prevent recurrence: if your team edits bucket policies often, consider using Full Access keys for those buckets — Full Access doesn’t depend on bucket policies and can’t be broken by policy edits.
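The policy review step can be roughly automated: pull the bucket policy and check whether it still mentions the Limited key’s access key ID anywhere. This is a deliberately crude, hedged sketch — it assumes Raff’s auto-grant statements reference the key ID in the policy text, and it takes the S3 client as a parameter (boto3’s `get_bucket_policy` returns the policy as a JSON string under `"Policy"`):

```python
def policy_mentions_key(s3, bucket: str, access_key_id: str) -> bool:
    """Crude check: does the bucket policy text reference this key at all?

    If this returns False for a Limited key that should have access,
    the auto-grant was likely edited away; recreate the key or
    restore the statement.
    """
    policy_json = s3.get_bucket_policy(Bucket=bucket)["Policy"]
    return access_key_id in policy_json
```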
CORS errors when calling Raff from a browser
Symptom: the browser console shows No 'Access-Control-Allow-Origin' header or CORS policy blocked the request.

Fix: configure CORS on the bucket using the S3 SDK’s `PutBucketCors` API. Avoid `"AllowedOrigins": ["*"]` for buckets that hold private or authenticated content — it lets any site embed your URLs.

Pricing and billing
Storage usage looks higher than what I uploaded
- Aborted multipart uploads — uploaded parts that never completed assembly are still stored and billed. List with `ListMultipartUploads` and abort.
- Versioning enabled — every PUT to the same key keeps the previous version. Old versions stay billed until cleaned up. Disable versioning if you don’t need it, or expire old versions via lifecycle.
- Hidden delete markers (versioned buckets) — deleting an object in a versioned bucket leaves a marker; previous versions stay until you explicitly purge.
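Both the aborted-parts and old-versions cleanup can be automated with one lifecycle configuration of the shape the S3 `PutBucketLifecycleConfiguration` API accepts. A sketch: the rule IDs and day counts are illustrative, and it assumes Raff’s RGW exposes these standard lifecycle actions.

```json
{
  "Rules": [
    {
      "ID": "abort-stale-multipart",
      "Status": "Enabled",
      "Filter": {"Prefix": ""},
      "AbortIncompleteMultipartUpload": {"DaysAfterInitiation": 7}
    },
    {
      "ID": "expire-old-versions",
      "Status": "Enabled",
      "Filter": {"Prefix": ""},
      "NoncurrentVersionExpiration": {"NoncurrentDays": 30}
    }
  ]
}
```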
Hit by the $7/month minimum on a small bucket
Migration
Migrating from AWS S3 — what doesn't port?
- Doesn’t port: calls to S3 Select, Object Lambda, Storage Classes (Glacier / IA / Intelligent-Tiering), Object Lock compliance modes, S3 Replication, Access Points, Transfer Acceleration, IAM-principal bucket policies, KMS-managed keys.
- Ports cleanly: bucket / object CRUD, multipart, ACLs, bucket policies (rewritten to use access keys instead of IAM principals), CORS, versioning, presigned URLs.
Easiest path: `rclone` with both providers configured as remotes; `rclone sync source:bucket raff:bucket`. Or contact support — Raff can run end-to-end migrations and hand you a working bucket. See S3 compatibility for the full feature matrix.

Still stuck?
Contact support@rafftechnologies.com with:

- The bucket name and the access key ID in use
- The exact error message from your SDK / CLI / browser console
- The request ID (most SDKs surface this — `e.response['ResponseMetadata']['RequestId']` in boto3, `Error.RequestId` in JS, etc.)
- The timestamp (with timezone) of the failed request
- A minimal repro if possible — the smallest code that triggers the problem