Scalable object storage with a fully S3-compatible API and a built-in global CDN. Store media, backups, datasets, and static assets at a fraction of the cost — with 1 TB of outbound transfer included every month.
One storage primitive that handles backups, media, datasets, logs, and static assets — with S3 compatibility from day one.
Point any existing AWS SDK, CLI, or tool at our endpoint and it works. Change one line — the endpoint URL — and migrate without rewriting a single line of application code.
Built for applications requiring thousands of requests per second. Low-latency reads for hot assets and high-throughput writes for bulk uploads.
Every bucket comes with a CDN toggle. Enable it once and your assets are cached across 200+ edge locations worldwide — faster delivery, lower origin load.
Define per-bucket and per-object ACLs. Generate signed URLs for time-limited access to private content — images, PDFs, downloads — without exposing your storage.
Automatically transition objects to cold storage after a set period and enable versioning to keep a full history of every file — essential for compliance and accidental deletion recovery.
Teams use our object storage for backups, media delivery, ML datasets, static sites, and everything in between.
Store video files and serve them via the built-in CDN for smooth, buffer-free playback. Media is cached on the nearest edge node, keeping streams stable during traffic spikes.
Archive server backups, database dumps, and log files at scale. Capacity grows automatically — you only pay for what you store.
Store petabyte-scale datasets for model training. High-throughput access patterns ensure your training pipelines don't wait on storage reads.
Host build artefacts, container images, or large file downloads behind the CDN. Deliver the same binary to users in Tokyo and Amsterdam equally fast.
Host your static website directly from a bucket or use it as a CDN origin for images, CSS, and JS — eliminating load from your application servers.
A quick environment variable swap is all it takes. No new SDK, no new library — just point your existing code at our storage endpoint and go.
Create a bucket
Log in to your dashboard and create a new storage bucket in any available region.
Generate API credentials
Issue a key pair from the Access Keys panel. Store them securely as environment variables.
Point your tool at our endpoint
Set STORAGE_ENDPOINT, STORAGE_KEY, and STORAGE_SECRET. No other changes needed.
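For example (the endpoint URL and credential values below are placeholders, not real ones):

```shell
# Placeholders -- substitute the endpoint and keys from your dashboard
export STORAGE_ENDPOINT="https://storage.example.com"
export STORAGE_REGION="eu-central-1"
export STORAGE_KEY="YOUR_ACCESS_KEY"
export STORAGE_SECRET="YOUR_SECRET_KEY"
```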
Compatible with
```python
import boto3, os

s3 = boto3.client(
    's3',
    region_name=os.environ['STORAGE_REGION'],
    endpoint_url=os.environ['STORAGE_ENDPOINT'],
    aws_access_key_id=os.environ['STORAGE_KEY'],
    aws_secret_access_key=os.environ['STORAGE_SECRET'],
)

# Upload a file
s3.upload_file('photo.jpg', 'my-bucket', 'uploads/photo.jpg')

# Generate a time-limited signed URL
url = s3.generate_presigned_url(
    'get_object',
    Params={'Bucket': 'my-bucket', 'Key': 'uploads/photo.jpg'},
    ExpiresIn=3600,  # 1 hour
)
```
STORAGE_ENDPOINT, STORAGE_KEY, STORAGE_SECRET in your env
S3 compatible
Inbound bandwidth is always free — pay only for storage and outbound transfer.
One plan, everything included. Start with a generous free quota and scale predictably — you always know what you'll pay.
Everything you need to store and deliver at scale.
Includes 250 GB storage + 1 TB outbound transfer
Create a bucket
Need petabyte-scale or custom SLAs? Talk to our team.
Object storage is a flat storage system where files are stored as "objects" with metadata and retrieved via HTTP(S) URLs. Unlike a traditional disk (block storage), there is no directory hierarchy and files can be accessed directly by applications at any scale without mounting a volume.
Yes. Our storage implements the S3-compatible API, meaning any tool or library that works with AWS S3 — AWS SDK v2 and v3, boto3, rclone, s3cmd, Cyberduck, MinIO clients, and more — works without modification. Just update the endpoint URL.
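With the AWS CLI, for instance, the only addition is the standard `--endpoint-url` flag; a small wrapper keeps commands short (the endpoint below is a placeholder):

```shell
# Placeholder endpoint -- take the real one from your dashboard
export STORAGE_ENDPOINT="${STORAGE_ENDPOINT:-https://storage.example.com}"

# Wrapper: run any `aws s3` subcommand against our endpoint
s3c() {
    aws s3 "$@" --endpoint-url "$STORAGE_ENDPOINT"
}

# Usage (needs valid credentials in AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY):
#   s3c ls s3://my-bucket
#   s3c cp ./photo.jpg s3://my-bucket/uploads/photo.jpg
```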
Yes. Uploading data to your buckets — from your application, from a migration, or from a backup job — is always free. You only pay for storage space used and outbound (download) traffic that exceeds the 1 TB monthly allowance.
Enable CDN on a bucket with a single toggle. We automatically provision a CDN distribution backed by 200+ global edge nodes. Once enabled, objects served from your bucket URL are cached at the nearest edge location — reducing latency and origin load.
Yes. Enable static website hosting on a bucket, upload your HTML/CSS/JS files, set the index and error documents, and your site is live at a bucket subdomain — or point a custom domain with a CNAME record.
Outbound transfer beyond 1 TB/month is billed at $0.01/GB. There are no sudden spikes or throttling — usage scales smoothly and you only pay for what you consume.
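Concretely (assuming 1 TB = 1024 GB and modeling only the transfer overage, not any storage overage):

```python
BASE_PRICE = 10.00            # $/month, includes 1 TB outbound transfer
INCLUDED_TRANSFER_GB = 1024   # 1 TB allowance (assuming 1 TB = 1024 GB)
OVERAGE_PER_GB = 0.01         # $ per GB beyond the allowance

def monthly_transfer_cost(outbound_gb: float) -> float:
    """Base price plus outbound overage; storage overage is not modeled here."""
    overage_gb = max(0.0, outbound_gb - INCLUDED_TRANSFER_GB)
    return BASE_PRICE + overage_gb * OVERAGE_PER_GB

print(monthly_transfer_cost(800))   # within allowance -> 10.0
print(monthly_transfer_cost(1524))  # 500 GB over -> 10.0 + 5.0 = 15.0
```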
Because our storage is S3-compatible, tools like rclone can migrate your data from any other S3-compatible provider (AWS S3, Cloudflare R2, Backblaze B2, etc.) with a single command. Inbound transfer is free, so migration costs nothing extra.
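A sketch of the rclone setup (the remote names, keys, and endpoint below are placeholders; rclone's S3 backend accepts a custom `endpoint` option):

```ini
# ~/.config/rclone/rclone.conf -- two S3 remotes, old and new
[old-s3]
type = s3
provider = AWS
access_key_id = OLD_KEY
secret_access_key = OLD_SECRET
region = us-east-1

[new-storage]
type = s3
provider = Other
access_key_id = STORAGE_KEY
secret_access_key = STORAGE_SECRET
endpoint = https://storage.example.com
```

With both remotes defined, `rclone sync old-s3:source-bucket new-storage:dest-bucket` copies everything across; re-running it transfers only objects that changed.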
Create your first bucket, upload a file, and flip the CDN switch — $10/month for 250 GB and 1 TB of transfer. No hidden fees.