EFS Mounts are created for each subnet available in the VPC #2164
Comments
Issues with the creation of EFS mount targets go back at least a year. Issue #1739 points out the problem of mounting an existing EFS that already has mount targets defined: deployment throws the same "already exists" error and destroys everything. There is a hack described in #1739 that uses the x-aws-cloudformation "overlay" feature in the compose file to overwrite the unwanted mount-target definitions, transforming them into EFS Access Points under the same logical names generated by compose-cli; there is no penalty for multiple Access Points. Use the compose convert command to see the generated CloudFormation template and work from there as an interim solution until (or if) this issue gets some attention. (Thanks for the hack, thorfi.)
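The overlay hack above can be sketched roughly as follows. This is a sketch only: the volume name, filesystem ID, and the logical resource ID are illustrative, not the real ones — run `docker compose convert` and copy the exact mount-target logical IDs compose-cli generates for your stack.

```yaml
# Hypothetical compose file using the x-aws-cloudformation overlay
# to replace a conflicting generated mount target with an Access Point.
volumes:
  mydata:
    external: true
    name: fs-0123456789abcdef0      # hypothetical existing EFS filesystem ID

x-aws-cloudformation:
  Resources:
    MydataNFSMountTargetOnSubnetA:  # must match the generated logical ID
      Type: AWS::EFS::AccessPoint   # overwrites the conflicting mount target
      Properties:
        FileSystemId: fs-0123456789abcdef0
```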
@geocertify this also happens when you create a completely new cluster that has more subnets. I think the solution is to create just one mount target per availability zone, or better, to look up only the public subnets and make sure they are each in a different availability zone; that is the optimal resolution to this, I think. It is almost the same story with load balancers.
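The "one per availability zone" selection above can be sketched with the AWS CLI, assuming it is configured and `VPC_ID` is set (names are illustrative): list the VPC's subnets as `<az> <subnet-id>` pairs, then keep only the first subnet seen per AZ, which is all an EFS mount target needs.

```shell
# List subnets of the VPC, one line per subnet: "<az> <subnet-id>".
# sort -u -k1,1 deduplicates on the first field (the AZ), leaving
# exactly one subnet per availability zone.
aws ec2 describe-subnets \
  --filters "Name=vpc-id,Values=$VPC_ID" \
  --query 'Subnets[].[AvailabilityZone,SubnetId]' --output text |
  sort -u -k1,1
```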
My 2 cents: with storage such as NFS/RDS, I create it separately so that you don't lose it in case you need to roll back. With that said, with x-efs (which can be shortened just by setting volume driver properties), my approach is to create the mount targets in each of the storage subnets (usually there is only one per AZ, but that's down to the user). Then it's just a matter of creating access points for the ECS roles (and therefore services) to access the mount targets. My day-to-day test cases are around Kafka, so I don't have much in terms of EFS usage, but I always test it with the usual suspects such as WordPress. Actually, I do use it for Grafana 🤔
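An access point per service, as described above, might look like the following CloudFormation sketch. The logical IDs and the `/grafana` path are assumptions for illustration; the uid/gid 472 is Grafana's default container user.

```yaml
# Hypothetical sketch: one EFS Access Point per service/role.
# Multiple access points on a single filesystem carry no penalty.
Resources:
  GrafanaAccessPoint:
    Type: AWS::EFS::AccessPoint
    Properties:
      FileSystemId: !Ref FileSystem      # assumed filesystem resource
      PosixUser:
        Uid: "472"                       # Grafana's default uid/gid
        Gid: "472"
      RootDirectory:
        Path: /grafana
        CreationInfo:
          OwnerUid: "472"
          OwnerGid: "472"
          Permissions: "0755"
```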
Still looks like this is an issue. Volumes just aren't working right in deployments with multiple subnets.
Still facing this issue. I used the CloudFormation edit/substitution workaround from here: #1739 (comment) I then tried to use variable substitution to sub in the names of my subnets from my test and prod environments, but then I ran into docker/compose#5567 and moby/moby#33540 and https://forums.docker.com/t/variable-substitution-in-a-key-of-a-yml-file-in-docker-compose/82939 and docker/compose#3108
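Since compose does not expand variables that appear in YAML *keys* (the limitation tracked in docker/compose#5567), one workaround is to render the file yourself before handing it to the CLI. A minimal sketch, assuming a hypothetical `docker-compose.tmpl.yml` template that uses a `__SUBNET_ID__` placeholder:

```shell
# Render the template for the current environment, then deploy.
# SUBNET_ID and the template/placeholder names are assumptions.
export SUBNET_ID=subnet-0abc123
sed "s/__SUBNET_ID__/$SUBNET_ID/g" docker-compose.tmpl.yml > docker-compose.yml
docker compose up
```

Plain placeholder substitution with `sed` sidesteps compose's interpolation rules entirely, at the cost of keeping a template file alongside the rendered one.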
@samskiter if you can compile it yourself, you can try my fix for the issue from here: https://github.com/komatom/compose-cli/tree/efs-subnets-fixes (note the branch)
This is not right: NFS mount targets need to be created once per availability zone, not for each subnet.
For example, if you have 3 public and 3 private subnets across 3 separate availability zones, the error is the following:
fsmt-0b65688085d114ae1 already exists in stack arn:aws:cloudformation:us-east-2:446807330699:stack/XXXXXXXX/1e2861a0-e7e6-11ec-a395-02518b3e9a32
and various other errors like that
so I believe we need a way to specify where mount targets and access points are created, or to pull only the subnets that are required for them to work.
Actually, the whole EFS feature works only if you have exactly 3 public subnets in 3 different availability zones, in other words subnet-public-a, subnet-public-b and subnet-public-c.
Thanks
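For reference, what the per-AZ layout described above amounts to is one `AWS::EFS::MountTarget` per availability zone rather than per subnet. A hypothetical CloudFormation sketch (logical IDs and parameter names are illustrative):

```yaml
# One mount target per AZ, each pointing at a single subnet in that AZ.
Resources:
  MountTargetAZa:
    Type: AWS::EFS::MountTarget
    Properties:
      FileSystemId: !Ref FileSystem     # assumed filesystem resource
      SubnetId: !Ref PublicSubnetA      # exactly one subnet per AZ
      SecurityGroups:
        - !Ref EfsSecurityGroup
```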