Scheduled removal of MinIO Gateway for GCS, Azure, HDFS #14331
Do you have any alternative to recommend for non-paying users who enjoyed MinIO Gateway? A MinIO distributed setup is not quite the same as Azure Blob Storage, for example.
You don't have to upgrade @fungiboletus
There used to be a standalone gateway repo, but I can't find it now. Was it deleted?
I don't want to have unmaintained software in my stacks though. I have found s3proxy, which could be a replacement, but it's Java-based and lacking some features like the cache or the encryption.
For bullet point #3, does that mean I can run a single node setup with versioning enabled on a bucket?
When it is available. It's not finished yet.
Oh yes, you definitely shouldn't. That's why you should migrate to a more supported deployment model.
Is the team planning on sharing the motivation for this decision?
Gateway was designed as a migration path to MinIO deployments. From the docs:
Gateway has been around for years now; the time given for migration has run its course. It is time to move away and focus on the following things:
- For Azure and GCS, we support tiering directly from MinIO.
- The HDFS gateway is not needed. We have migration tools written (https://github.com/minio/hdfs-to-minio), and HDFS also provides a way to copy data out directly to MinIO.

What is not going away?
I understand, and having the possibility of bucket versioning in a single node setup will be great!
Here, are we referring to single filesystem mode, or something else?
Single drive mode.
Very interesting decision. Many on-premises, enterprise-grade Kubernetes solutions are backed by a highly available storage solution which is either based on [...]. Until now, I've made use of the [...]. If I understand correctly, a [...]. How would I solve such a situation in the "new MinIO world"? Do I have to use the [...]?
@Julius112 Great question! We have the same situation here. I think MinIO Gateway NAS is simple and a great solution for starting out with object storage, and after its death it will be more difficult for new clients to start using it. As an example, we started with MinIO Gateway NAS, and now MinIO has become so important in our solutions that we are in the process of acquiring paid MinIO Support.
@Julius112 Please read the original post. Note this:
Also...
It sounds like you are looking for a distributed setup, but refusing to do a distributed setup. @dboc Feel free to write [...]
@klauspost Our sales team has been in contact with them; independently, we still have interest. Yes, we are now looking for a distributed setup. This is why I think it would be interesting for MinIO Gateway NAS to continue to exist.
@klauspost Sorry, I misread the info about MinIO [...]. Please don't get me wrong - I'm not trying to fight against the decision of discontinuing development on the NAS Gateway. I'm just trying to figure out how the newly available components will fit into the scenario which I described above. I have a few clients that are using one external, highly available, redundant storage solution (e.g. NetApp) as their storage backend for Kubernetes. I'm looking for a highly available setup which, as far as I understand, can only be done by using MinIO in [...]. Maybe I'm getting this whole thing wrong, and then I'm happy to be corrected. However, the [...]
My use case for MinIO Gateway is simpler: I run stuff in various cloud providers. Azure doesn't have an object storage with an S3 API while everyone else does. I could adapt my code, but a lot of software is compatible with S3 and not Azure, for good reasons. So I use MinIO Gateway in front of their object storage, with good success so far. I don't have the capabilities, the resources, nor the motivation to install a distributed MinIO setup as good as a cloud object storage. I understand that my use case doesn't correspond to what MinIO wanted the gateway to be.
@fungiboletus In that case, just keep your setup as-is. @Julius112 I don't see where Harsha states that the new setup would disallow accessing the same data from multiple servers, though I can't really see much benefit to it. One MinIO server should be able to handle everything your storage can deliver just fine. If you are using it only for failover, you won't need concurrent access anyway. Distributed mode is truly scalable. What you are proposing is not.
newer MinIO server removes "gcs" gateway support as per #14331
Unfortunately no, that's not a priority for us - we have given it enough thought. This has been explained very well in https://blog.min.io/deprecation-of-the-minio-gateway/. You can keep using the older release of MinIO; there is nothing new that we have added to the gateway for close to 2 years now anyway.
Hi, will I be able to use older MinIO Docker image versions which support the GCS gateway? Thanks!
@meirsegev please read the issue description - this has been mentioned multiple times already.
Is there a recommended migration path to move existing single drive setups to the new erasure-coded 0-parity mode? It's possible to spin up a second instance, mirror everything over (though that would take 2x space temporarily), move over all policies, etc., but it would be nice to have a direct upgrade path.
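Absent a direct upgrade path, the mirror approach sketched in the comment above can be done with any S3 client library. Below is a minimal, hypothetical helper (the function name and the boto3-style clients are assumptions for illustration; bucket policies, users, and lifecycle rules would still have to be re-created separately):

```python
def migrate_bucket(src, dst, bucket):
    """Copy every object in `bucket` from the `src` S3 client to `dst`.

    `src` and `dst` are boto3-style S3 clients, e.g. one pointed at the
    old FS-mode server and one at the new erasure-coded deployment.
    """
    paginator = src.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=bucket):
        for obj in page.get("Contents", []):
            # Stream each object body directly; no temporary files needed.
            body = src.get_object(Bucket=bucket, Key=obj["Key"])["Body"]
            dst.upload_fileobj(body, bucket, obj["Key"])
```

As noted above, this still costs 2x storage temporarily, but it can run while the old server stays online serving reads.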
I don't see a comment about when versioning and replication will work on single drive setups; the docs say they need a distributed setup. Is there any information on this now that the gateway has been removed?
Watch out for when this issue is closed; that's when it's "available".
Users of the GCS gateway functionality can look into using the GCS XML API. For my use cases it was just a config change and the provisioning of a set of HMAC credentials for my application to switch from the MinIO GCS Gateway straight to the GCS XML API: https://cloud.google.com/storage/docs/xml-api/overview
Here is a list of APIs GCS implements poorly or outright wrong; moving to GCS while using the S3 API has its limitations. For each of these APIs, GCS fails in ways where the same requests succeed on AWS S3 and MinIO.

Summary: GCS's AWS S3 API compatibility is quite limited, since a few basic APIs do not work well. Other APIs that are not mentioned here are simply not part of the GCS S3 compatibility layer implementation: [...]
Single drive mode is now fully XL backend format; pre-existing data is not supported anymore with single drive mode. Legacy FS mode continues to work for folks who have existing content, but no new deployments are allowed. This issue shall be closed now; we haven't moved [...]
I am confused, sorry. We are using [...]
For anyone stuck with the removal of the gateway, I'd recommend looking at SeaweedFS in Cloud Drive mode. It can even read the contents of a current bucket and store the metadata locally. It was a very easy migration for me.
I believe that they want you to set up a new MinIO cluster, which could be running on Azure or multiple clouds, etc. I'm considering forking the old [...]
@Cave-Johnson Do you use that with an Azure File Storage backend? Is Seaweed fully compatible with S3 client libraries?
@astrolox I was thinking the same, but we definitely do not have the capacity :(
I wasn't using the Azure backend, but it's fully compatible with the S3 client libraries (I have used both the minio library and boto3 to speak to it): https://github.com/chrislusf/seaweedfs/wiki/Cloud-Drive-Architecture
You should migrate to https://min.io/product/multicloud-azure-kubernetes-service
Seaweed uses MinIO's authN layers here and there; it looks like they have borrowed some security issues as well and never bothered to fix them:
- https://github.com/chrislusf/seaweedfs/blob/master/weed/s3api/auth_signature_v4.go#L3
- https://github.com/chrislusf/seaweedfs/blob/master/weed/s3api/auth_signature_v2.go#L3
- https://github.com/chrislusf/seaweedfs/blob/master/weed/s3api/policy/post-policy.go#L4
- https://github.com/chrislusf/seaweedfs/blob/master/weed/s3api/chunked_reader_v4.go#L6
MinIO Gateway will be removed by June 1st, 2022 from the MinIO repository:
Community Users
1. Please migrate your MinIO Gateway deployments for Azure, GCS, and HDFS to MinIO distributed setups.
2. MinIO S3 Gateway will be renamed "minio edge" and will only support MinIO backends, to extend the functionality of supporting remote credentials etc. locally as "read-only" for authentication and policy management.
3. Newer MinIO NAS/single drive setups will move to single data and 0 parity mode (re-purposing the erasure-coded backend used for distributed setups, but with 0 parity). This will allow distributed setup features, such as bucket versioning and replication, to be available for single drive deployments as well.
4. Existing NAS/single drive setups will work as-is; nothing changes.
Paid Users
All existing paid customers will be supported as per their LTS support contracts. If there are bugs, they will be fixed and backported fixes will be provided. No new features will be implemented for the Gateway implementations.