Raff Object Storage speaks the standard S3 API. Point any AWS S3 SDK or S3-compatible tool at Raff’s endpoint with your access keys and you’re connected — no Raff-specific library needed. This page is the practical “drop into your code” guide. For the conceptual side (which AWS features port and which don’t), see S3 compatibility.

Before you start

  • Access key + secret — see Generate access keys. The secret is shown once at creation; if you don’t have it, generate a new key.
  • At least one bucket — see Create a bucket.
  • The Raff connection details below.

Connection details

These are the same for every SDK and tool:
| Setting | Value |
| --- | --- |
| Endpoint URL | https://s3.raffusercloud.com |
| Region | us-east (not us-east-1) |
| Access Key ID | From your access key creation |
| Secret Access Key | From your access key creation |
| Signature version | SigV4 (default in all modern SDKs) |
| Addressing style | Virtual-host by default (<bucket>.s3.raffusercloud.com); use path style if your bucket name contains dots |
Plug those into any of the SDK examples below.

Set region to us-east explicitly. SDKs default to us-east-1 (AWS naming), SigV4 bakes the region into every signature, and Raff rejects signatures computed for the wrong region. Skipping this step is the #1 reason first-time integrations fail.

Python — boto3

The de facto S3 client for Python.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.raffusercloud.com",
    region_name="us-east",
    aws_access_key_id="YOUR_ACCESS_KEY_ID",
    aws_secret_access_key="YOUR_SECRET_ACCESS_KEY",
)

# List buckets
for bucket in s3.list_buckets()["Buckets"]:
    print(bucket["Name"])

# Upload a file
s3.upload_file("local.txt", "my-bucket", "uploads/local.txt")

# Download a file
s3.download_file("my-bucket", "uploads/local.txt", "downloaded.txt")

# Generate a presigned URL valid for 1 hour
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "my-bucket", "Key": "uploads/local.txt"},
    ExpiresIn=3600,
)
For bucket names with dots, force path-style addressing:
from botocore.config import Config

s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.raffusercloud.com",
    region_name="us-east",
    aws_access_key_id="YOUR_ACCESS_KEY_ID",
    aws_secret_access_key="YOUR_SECRET_ACCESS_KEY",
    config=Config(s3={"addressing_style": "path"}),
)
upload_file and download_file use multipart automatically for files above 8 MB. Configure thresholds via boto3.s3.transfer.TransferConfig.
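A minimal sketch of tuning those thresholds, reusing the s3 client from above (the values are illustrative, not recommendations):
from boto3.s3.transfer import TransferConfig

transfer_config = TransferConfig(
    multipart_threshold=16 * 1024 * 1024,  # switch to multipart above 16 MB
    multipart_chunksize=16 * 1024 * 1024,  # upload in 16 MB parts
    max_concurrency=8,                     # number of parallel part uploads
)
s3.upload_file("big.bin", "my-bucket", "uploads/big.bin", Config=transfer_config)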

Command line — aws-cli

The official CLI works against any endpoint with --endpoint-url. Configure once:
aws configure --profile raff
# AWS Access Key ID: YOUR_ACCESS_KEY_ID
# AWS Secret Access Key: YOUR_SECRET_ACCESS_KEY
# Default region name: us-east
# Default output format: json
Then point every command at Raff’s endpoint:
# List buckets
aws --profile raff --endpoint-url https://s3.raffusercloud.com s3 ls

# Upload a file
aws --profile raff --endpoint-url https://s3.raffusercloud.com \
    s3 cp local.txt s3://my-bucket/uploads/local.txt

# Sync a local directory to a bucket
aws --profile raff --endpoint-url https://s3.raffusercloud.com \
    s3 sync ./public-site s3://my-bucket --delete

# Generate a presigned URL valid for 1 hour
aws --profile raff --endpoint-url https://s3.raffusercloud.com \
    s3 presign s3://my-bucket/uploads/local.txt --expires-in 3600
To avoid repeating --endpoint-url, set it via env var:
export AWS_ENDPOINT_URL="https://s3.raffusercloud.com"
export AWS_PROFILE=raff
aws s3 ls
(AWS_ENDPOINT_URL is supported in AWS CLI v2.13+.)

JavaScript / TypeScript — AWS SDK v3

The current AWS SDK for JS (the modular @aws-sdk/client-s3).
import { S3Client, PutObjectCommand, GetObjectCommand } from "@aws-sdk/client-s3";
import { getSignedUrl } from "@aws-sdk/s3-request-presigner";

const s3 = new S3Client({
  endpoint: "https://s3.raffusercloud.com",
  region: "us-east",
  credentials: {
    accessKeyId: "YOUR_ACCESS_KEY_ID",
    secretAccessKey: "YOUR_SECRET_ACCESS_KEY",
  },
  forcePathStyle: false, // set true only if bucket name has dots
});

// Upload
await s3.send(
  new PutObjectCommand({
    Bucket: "my-bucket",
    Key: "uploads/local.txt",
    Body: "hello world",
  })
);

// Presigned URL (1 hour)
const url = await getSignedUrl(
  s3,
  new GetObjectCommand({ Bucket: "my-bucket", Key: "uploads/local.txt" }),
  { expiresIn: 3600 }
);
For multipart uploads above ~5 MB, use the high-level @aws-sdk/lib-storage package — it handles chunking and parallelism automatically.

Go — AWS SDK v2

package main

import (
    "context"
    "log"

    "github.com/aws/aws-sdk-go-v2/aws"
    "github.com/aws/aws-sdk-go-v2/config"
    "github.com/aws/aws-sdk-go-v2/credentials"
    "github.com/aws/aws-sdk-go-v2/service/s3"
)

func main() {
    cfg, err := config.LoadDefaultConfig(context.TODO(),
        config.WithRegion("us-east"),
        config.WithCredentialsProvider(credentials.NewStaticCredentialsProvider(
            "YOUR_ACCESS_KEY_ID", "YOUR_SECRET_ACCESS_KEY", "",
        )),
    )
    if err != nil {
        log.Fatal(err)
    }

    client := s3.NewFromConfig(cfg, func(o *s3.Options) {
        o.BaseEndpoint = aws.String("https://s3.raffusercloud.com")
        o.UsePathStyle = false // set true only if bucket has dots
    })

    out, err := client.ListBuckets(context.TODO(), &s3.ListBucketsInput{})
    if err != nil {
        log.Fatal(err)
    }
    for _, b := range out.Buckets {
        log.Println(*b.Name)
    }
}
For multipart, use manager.NewUploader(client) from github.com/aws/aws-sdk-go-v2/feature/s3/manager.

rclone — best for syncs and mass copies

Add a remote once, then use it like a local path:
rclone config
# n) New remote
# name> raff
# Storage> s3
# provider> Other
# env_auth> false
# access_key_id> YOUR_ACCESS_KEY_ID
# secret_access_key> YOUR_SECRET_ACCESS_KEY
# region> us-east
# endpoint> https://s3.raffusercloud.com
# location_constraint> (leave blank)
# acl> private
Then:
rclone ls raff:my-bucket
rclone copy ./local-dir raff:my-bucket/path --progress
rclone sync raff:source-bucket raff:dest-bucket --progress
rclone mount raff:my-bucket /mnt/raff   # mount a bucket as a filesystem
rclone is excellent for migrating from AWS / DO / Vultr to Raff: point one remote at the source, another at Raff, and run rclone sync between them.

s3cmd — long-standing Python CLI

~/.s3cfg:
[default]
access_key = YOUR_ACCESS_KEY_ID
secret_key = YOUR_SECRET_ACCESS_KEY
host_base = s3.raffusercloud.com
host_bucket = %(bucket)s.s3.raffusercloud.com
bucket_location = us-east
use_https = True
signature_v2 = False
Then:
s3cmd ls
s3cmd put local.txt s3://my-bucket/uploads/local.txt
s3cmd get s3://my-bucket/uploads/local.txt downloaded.txt
s3cmd sync ./local-dir s3://my-bucket/path/

mc (MinIO Client) — fast, scriptable

mc alias set raff https://s3.raffusercloud.com YOUR_ACCESS_KEY_ID YOUR_SECRET_ACCESS_KEY

mc ls raff/
mc cp local.txt raff/my-bucket/uploads/
mc mirror ./local-dir raff/my-bucket/path
mc share download raff/my-bucket/uploads/local.txt --expire 1h
mc is the fastest CLI for bulk operations and has good defaults for parallelism.

Cyberduck — GUI

For Mac / Windows users who want a Finder-style file browser:
  1. Open Cyberduck → Open Connection.
  2. Choose Amazon S3 from the dropdown.
  3. Server: s3.raffusercloud.com
  4. Access Key ID and Secret Access Key from your Raff access key.
  5. More Options → set the region to us-east.
  6. Connect.
Drag-and-drop uploads, presigned URL generation (right-click → Share), folder mirroring all work out of the box.

Multipart upload — what every SDK does

Every modern SDK transparently switches to multipart upload for files above ~5-10 MB:
  • The file is split into parts (typically 5-25 MB each).
  • Parts upload in parallel.
  • Failed parts retry individually — the whole file doesn’t restart.
  • The server assembles parts on CompleteMultipartUpload.
You don’t have to think about this for normal use — boto3.upload_file, aws s3 cp, rclone copy, and the JS / Go transfer managers all handle multipart automatically. When hand-rolling multipart (rare), respect the protocol limits below; a bare-bones sketch follows the list.
  • Minimum part size: 5 MiB (except the last part)
  • Maximum part size: 5 GiB
  • Maximum parts per upload: 10,000
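If you do hand-roll it, here is a minimal boto3 sketch of the three protocol calls, reusing the s3 client from the Python section (file name, bucket, and part size are illustrative):
upload = s3.create_multipart_upload(Bucket="my-bucket", Key="big.bin")
parts = []
with open("big.bin", "rb") as f:
    part_number = 1
    while chunk := f.read(8 * 1024 * 1024):  # 8 MiB parts, above the 5 MiB minimum
        resp = s3.upload_part(
            Bucket="my-bucket", Key="big.bin", PartNumber=part_number,
            UploadId=upload["UploadId"], Body=chunk,
        )
        parts.append({"PartNumber": part_number, "ETag": resp["ETag"]})
        part_number += 1
s3.complete_multipart_upload(
    Bucket="my-bucket", Key="big.bin", UploadId=upload["UploadId"],
    MultipartUpload={"Parts": parts},
)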
Incomplete or failed multipart uploads keep their parts billed as storage until they are cleaned up. Use the Abort Multipart Upload template in bucket policies, or call AbortMultipartUpload from your code, to clean up after failures.
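A minimal cleanup sketch with boto3, again assuming the s3 client from above (the bucket name is illustrative):
for upload in s3.list_multipart_uploads(Bucket="my-bucket").get("Uploads", []):
    s3.abort_multipart_upload(
        Bucket="my-bucket", Key=upload["Key"], UploadId=upload["UploadId"],
    )
# Re-run list_multipart_uploads afterwards to confirm nothing is left behind.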

Presigned URLs — share without sharing keys

Every SDK can generate a presigned URL: a time-limited HTTPS link that lets anyone download (or upload) without having Raff credentials. The recipient just opens the URL.
| Use when | Don’t use when |
| --- | --- |
| Email a one-off download link | Permanent public assets (use a public bucket / object instead) |
| Upload from a browser without exposing keys | Long-lived service-to-service auth (use access keys) |
| Limit a third party’s access to one file with a deadline | You need to revoke before expiry (you can’t; only key rotation revokes) |
See examples above for boto3, the AWS SDK v3 (getSignedUrl), aws-cli (s3 presign), and mc share. The mechanism and signing semantics are identical to AWS S3.
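The browser-upload pattern from the table works by presigning a PUT instead of a GET. A minimal boto3 sketch, reusing the s3 client from the Python section (key and expiry are illustrative):
put_url = s3.generate_presigned_url(
    "put_object",
    Params={"Bucket": "my-bucket", "Key": "uploads/from-browser.txt"},
    ExpiresIn=900,  # 15 minutes
)
# Anyone holding put_url can now upload to that one key, e.g.:
#   curl -X PUT --data-binary @local.txt "<put_url>"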

Migrating from another provider

Moving from AWS / DigitalOcean Spaces / Vultr / Cloudflare R2 to Raff is mostly a config swap:
  1. Generate a Raff access key.
  2. Create the destination bucket(s) on Raff.
  3. Use rclone with both providers configured as remotes (a verification sketch follows this list):
    rclone sync source-provider:source-bucket raff:destination-bucket --progress
    
  4. Update your application’s endpoint, region, and credentials.
  5. Remove any code that calls AWS-only features — see S3 compatibility.
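Because both sides speak the same API, the same SDK code can sanity-check the sync from step 3. A minimal boto3 sketch comparing object counts (endpoints, region names, and bucket names are illustrative; fill in each provider’s credentials):
import boto3

def count_objects(client, bucket):
    # Page through the bucket and tally the keys
    total = 0
    for page in client.get_paginator("list_objects_v2").paginate(Bucket=bucket):
        total += page.get("KeyCount", 0)
    return total

source = boto3.client("s3", region_name="us-east-1",
                      aws_access_key_id="SOURCE_KEY", aws_secret_access_key="SOURCE_SECRET")
raff = boto3.client("s3", endpoint_url="https://s3.raffusercloud.com", region_name="us-east",
                    aws_access_key_id="YOUR_ACCESS_KEY_ID", aws_secret_access_key="YOUR_SECRET_ACCESS_KEY")
print(count_objects(source, "source-bucket"), count_objects(raff, "destination-bucket"))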
If you have a sizable migration coming, support can help — we can run the migration end-to-end and hand you a working Raff bucket with your data in place.

Next steps

  • S3 compatibility: what’s universal, what’s AWS-only.
  • Set public or private: ACLs, policies, presigned URLs.
  • Troubleshooting: SignatureDoesNotMatch, 403s, slow uploads.
Last modified on May 8, 2026