This release introduces a few new features to Supabase Storage: Resumable Uploads, Quality Filters, Next.js support, and WebP support.
As a reminder, Supabase Storage is for file storage, not to be confused with Postgres Storage. Resumable Uploads is the biggest update because it means you can build more resilient apps: your users can continue uploading a file if their internet drops or if they accidentally close a browser tab.
This implementation uses TUS[0], an open source protocol. We opted for it over S3's protocol to support the open source ecosystem. This means you can use several existing libraries and frameworks (like Uppy.js[1]) for multi-part uploads.
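For anyone who wants to try it, a client-side resumable upload with Uppy looks roughly like this - a minimal sketch where the endpoint path and auth header are illustrative assumptions, so check the docs for the exact values:

    import Uppy from '@uppy/core'
    import Tus from '@uppy/tus'

    const accessToken = '<user-or-anon-jwt>' // placeholder
    const uppy = new Uppy({ autoProceed: true })

    uppy.use(Tus, {
      endpoint: 'https://<project-ref>.supabase.co/storage/v1/upload/resumable', // assumed endpoint
      headers: { authorization: `Bearer ${accessToken}` }, // assumed auth header
      chunkSize: 6 * 1024 * 1024, // fixed-size chunks let an interrupted upload resume mid-file
    })

    // Wire files from a file input into Uppy; the TUS plugin handles pause/resume.
    document.querySelector('input[type=file]')!.addEventListener('change', (e) => {
      const file = (e.target as HTMLInputElement).files![0]
      uppy.addFile({ name: file.name, type: file.type, data: file })
    })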
There are also some neat technical details under the hood: it uses Postgres advisory locks to solve concurrency issues.
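For the curious, the advisory-lock pattern looks roughly like this - a simplified sketch using the node-postgres client and a made-up per-object key, not our actual implementation:

    import { Client } from 'pg'

    // Serialize concurrent writers to the same upload: both hash the object
    // key to the same lock id, so only one can touch its metadata at a time.
    async function withUploadLock(client: Client, objectKey: string, fn: () => Promise<void>) {
      await client.query('BEGIN')
      try {
        // hashtext() maps the key into the integer keyspace advisory locks use;
        // pg_advisory_xact_lock blocks until the lock is free and releases it
        // automatically when the transaction commits or rolls back.
        await client.query('SELECT pg_advisory_xact_lock(hashtext($1))', [objectKey])
        await fn()
        await client.query('COMMIT')
      } catch (err) {
        await client.query('ROLLBACK')
        throw err
      }
    }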
The Storage team will be in the comments to cover any technical questions.
[0] TUS: https://tus.io/
[1] Uppy: https://uppy.io/docs/tus/
Thanks for working on such an awesome project and releasing such useful features!
Yet, what stops us from using Supabase is not the set of features, but the state of the current APIs. Last night I was evaluating the Python SDK and some of the examples were broken, baseline features like RLS are unimplemented (https://github.com/supabase-community/supabase-py/issues/58), and progress seems slow (it seems to have been released two years ago and it is still in alpha). Even though supporting many platforms is boring (i.e. no flashy feature announcements) and costly (many FTEs are required for maintenance, let alone adding features), are there any plans to bring more of the SDKs to feature parity so that more developers can leverage this awesome platform?
Client libs are one of our next focuses; we've been discussing internally how to do this. So far we have concentrated on the JS/Dart libs because they make up a large portion of our users. Over the last 3 months the community has added (and documented) libraries for Swift[0] and C#[1].
We hope to continue making this a community-driven endeavor, so if anyone reading this would like to become a maintainer for one of the libraries (especially Python), please reach out. If we can't find maintainers we'll find a polyglot who can work on this full-time.
> We hope to continue making this a community-driven endeavor, so if anyone reading this would like to become a maintainer for one of the libraries (especially Python), please reach out. If we can't find maintainers we'll find a polyglot who can work on this full-time.
We don't have a lot of C++ experience on the team, but I'll share your feedback with the Realtime team - perhaps we can find someone in the community to help.
Awesome to see Supabase constantly improving. I've been using Supabase for the past few weeks and have really enjoyed it!
I was a bit surprised, however, that there's not currently a good way to reference storage objects from my Postgres tables. I found that the recommended way is to store the object's path (as a string) in the database. While that works, it isn't optimal as I'd like to enforce consistency between the object and the table referencing it.
I've tried referencing the id of the corresponding row in the storage.objects table, but (1) apparently the schema supabase uses to manage storage.objects may change, and (2) it still requires separate (non-atomic) operations - or additional triggers - for keeping things in sync. Using buckets (corresponding to tables) and folders with ids is another way to work around it, but still feels suboptimal.
Not 100% sure what the best solution would look like, but ideally the supabase client could emulate storage operations for objects "attached" to a given table record, and supabase (the backend piece) could implement them as atomic operations (e.g., uploading the actual storage asset, storing the necessary metadata, and updating my table row to reference the newly-created storage object), and could expose a helper function to return the URLs for any storage objects attached to a record.
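For reference, here's roughly what the path-as-string workaround looks like with today's client (bucket, table, and column names are made up) - the two steps are exactly the non-atomic part I'd love to see handled for me:

    import { createClient } from '@supabase/supabase-js'

    const supabase = createClient('https://<project-ref>.supabase.co', '<anon-key>')

    async function attachAvatar(userId: string, file: File) {
      const path = `${userId}/${file.name}`

      // Step 1: upload the object. If step 2 fails, the object is orphaned.
      const { error: uploadError } = await supabase.storage
        .from('avatars') // hypothetical bucket
        .upload(path, file)
      if (uploadError) throw uploadError

      // Step 2: record the path on the row that "owns" the object.
      const { error: dbError } = await supabase
        .from('profiles') // hypothetical table
        .update({ avatar_path: path })
        .eq('id', userId)
      if (dbError) throw dbError
    }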
Anyway, just a suggestion. Keep up the great work!
This is a similar issue to one we face with Auth. We want to give direct access to the data/tables, and at the same time we need some flexibility to alter the tables on rare occasions.
We've dabbled with the idea of offering versioned views which have a "set" (i.e., stable) interface (e.g., `storage_v1`, `storage_v2`). We're still debating all the ways that we might do it.
All that to say - we're aware that this experience can be improved and we're working on it.
It's very hard to compare Supabase + Firebase pricing because it depends on which features you're using. For example, API requests are free on Supabase and paid on Firebase.
The 500 connections you quoted are for the "Realtime" feature (which provides multiplayer functionality). You're correct - this is 500 clients connected simultaneously.
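For context, a "connection" here is one subscribed realtime client - each browser tab that does something like the following holds one websocket open and counts against that limit (channel name and event are made up):

    import { createClient } from '@supabase/supabase-js'

    const supabase = createClient('https://<project-ref>.supabase.co', '<anon-key>')

    supabase
      .channel('room-1') // hypothetical channel name
      .on('broadcast', { event: 'cursor-move' }, (payload) => {
        console.log('peer cursor:', payload)
      })
      .subscribe()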
Quick question; since you use AWS under the hood, is there any way to “bring your own aws account” for any of these things? (Example use cases: relevant if I want to use sns alongside your storage solutions, or if I have aws credits I want to use)
Credit to Fabrizio and Inian on this one. I'm personally impressed by how fast they were able to implement this, since we shipped a major update to Storage last Launch Week.
From what I understand, the TUS protocol isn't necessarily simple to implement, but the TUS community has made it a lot easier with their Node server, which they recently converted to TypeScript: https://github.com/tus/tus-node-server
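Standing up a basic tus server with the packages published from that repo looks roughly like this (the local file store is just for illustration - our production setup differs):

    import { Server } from '@tus/server'
    import { FileStore } from '@tus/file-store'

    // A minimal resumable-upload server: clients PATCH chunks to /files/<id>
    // and can resume from the last acknowledged offset after a disconnect.
    const server = new Server({
      path: '/files',
      datastore: new FileStore({ directory: './uploads' }),
    })

    server.listen({ host: '127.0.0.1', port: 1080 })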
Any chance we could choose to use B2 instead of S3?
Even if Supabase keeps the same pricing (though why would you), the folks at Backblaze have had my respect for a long time and I’m happier when I can stay out of the AWS universe.
Yes, it's planned. I believe the only blocker we saw last time we looked was a 1-hour downtime window every week (not that it always went offline, but that they could use that time for maintenance). Do you know if that's still the case?
If you’re talking about https://www.backblaze.com/scheduled-maintenance.html, it’s not intended to affect file usage of B2: the bucket functions may go offline for up to 15 minutes, but uploads, downloads, and listing of files are fine. One related thing to be aware of: the official documentation suggests adding retry logic to every file upload, as some uploads naturally fail (usually for load balancing).
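In practice that retry advice amounts to wrapping every upload in something like this (a generic sketch, not B2-specific code):

    // Retry a flaky upload with exponential backoff. `uploadOnce` stands in
    // for whatever B2 upload call you use.
    async function uploadWithRetry(uploadOnce: () => Promise<void>, maxAttempts = 5): Promise<void> {
      for (let attempt = 1; attempt <= maxAttempts; attempt++) {
        try {
          return await uploadOnce()
        } catch (err) {
          if (attempt === maxAttempts) throw err
          // Back off 1s, 2s, 4s, ... before requesting a fresh upload URL.
          await new Promise((resolve) => setTimeout(resolve, 1000 * 2 ** (attempt - 1)))
        }
      }
    }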
Yes, I believe that is the one. We've engineered the server to support additional providers, so it should be easy enough to add (and we'd welcome any PRs).
One caveat - we are shipping as fast as we can right now so that we can reach feature parity with Firebase. It's a balance of "breadth vs depth" and right now we are going for breadth. We will definitely offer more providers on the platform, but the resilience of S3 is beneficial for our engineers right now - it means they can focus on features rather than devops.