Solutions for Docker Swarm
Hi All,
I am learning Docker Swarm to make stack deployments easier, and I am looking for solutions or ideas for the problems below.
1) Persistent storage
I want a volume available for all the containers in all nodes in the swarm.
Use case:
to store MySQL data
to store static files and sessions
I see people using GlusterFS and making the volume available on all nodes, but I want something that can be deployed as a Docker Swarm service via a compose file instead of a manual setup.
2) Syncing files across containers
I want to sync folders and files from one container to all the others. I could use lsyncd, but since I will be scaling the services, I need the host details to be dynamic.
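One way to keep the host details dynamic is Swarm's built-in service discovery: the DNS name `tasks.<service>` resolves to the IP of every running task of that service, so a sync loop can re-enumerate replicas as you scale. A rough sketch (the service name `web`, the paths, and the rsync daemon module are all placeholders, and this assumes an rsync daemon is reachable in each container):

```shell
# tasks.web resolves to all current task IPs of the "web" service
# (re-run this after scaling; the list updates automatically)
for ip in $(getent hosts tasks.web | awk '{print $1}'); do
  rsync -az /app/shared/ "rsync://$ip/shared/"
done
```

This avoids hard-coding hostnames in an lsyncd config, though for two-way sync a purpose-built tool may be a better fit.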
Note: I will use this thread to post any new problems as I learn, but I am not going to use it for support. I will do extensive research first, and only if I find no solution will I ask for ideas from LET members.
Thanks
Comments
Common NFS mount on all servers and then mount said NFS mount as a volume in each container?
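For reference, the `local` volume driver can mount NFS directly from a compose file, so no manual host mounts are needed. A minimal sketch, assuming an NFS server at a placeholder address with a placeholder export path:

```yaml
version: "3.8"
services:
  db:
    image: mysql:8.0
    volumes:
      - mysql-data:/var/lib/mysql
volumes:
  mysql-data:
    driver: local
    driver_opts:
      type: nfs
      o: "addr=10.0.0.10,rw,nfsvers=4"
      device: ":/exports/mysql"
```

Each node still needs network access to the NFS server, but Docker creates the mount itself when the task is scheduled.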
If you're open to using K8s, persistent volumes are an option.
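In K8s the usual pattern is a PersistentVolumeClaim that a provisioner fulfils. A minimal sketch, where the storage class name is an assumption (whatever your cluster provides):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-data
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi
  storageClassName: longhorn   # assumption: replace with your cluster's storage class
```

The pod then references `mysql-data` in a `volumes:` entry, and the provisioner handles where the data actually lives.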
I set up a three-node Docker Swarm cluster around 1.5 years ago. For storage, it was GlusterFS. Everything worked perfectly until I decommissioned the environment a month ago.
Usage was low. It was around ten docker services and a small DB for an SMB.
Then it was Nginx on all nodes with rolling DNS failover if a node failed.
I suggest using Portainer once it's set up. It's a great UI and saves a lot of time. Unfortunately, I had only been using it for around two months before realising the system design needed changing.
I am learning K8s in the meantime.
My app requires more IOPS; I see a big difference when the app is deployed on NVMe versus SSD.
Won't GlusterFS degrade performance?
Example: storing MariaDB data and PHP code on GlusterFS volumes.
Another alternative is to move the DB outside of Docker entirely, perhaps as a Galera master-master cluster. I've set that up before, but it would have been overkill for my use case.
I would suggest trialing the different options and testing things before committing to something.
I have found that SeaweedFS is the best solution for persistent storage in Docker Swarm. https://github.com/seaweedfs/seaweedfs/wiki/SeaweedFS-in-Docker-Swarm
You don't need to store MySQL data on several nodes at the same time. The right way is to build a MySQL cluster (Percona) where each node will have its own storage.
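One way to get "each node has its own storage" in Swarm terms is to run one DB service per node, pinned with a placement constraint, so each task always lands on the same node and reuses its local volume. A sketch, where the image, node label, and required cluster bootstrap/env configuration are assumptions and omitted for brevity:

```yaml
version: "3.8"
services:
  db1:
    image: percona/percona-xtradb-cluster:8.0   # cluster env vars omitted; see Percona docs
    volumes:
      - db1-data:/var/lib/mysql
    deploy:
      placement:
        constraints:
          - node.labels.db == db1   # assumes you've labelled a node: docker node update --label-add db=db1 <node>
volumes:
  db1-data:   # a local volume, created on whichever node the task is pinned to
```

Replication then happens at the database layer (Percona/Galera) rather than at the filesystem layer, which is usually much faster than a shared filesystem for DB workloads.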
I have started using K8s now, but I still run into shared storage issues here. I tested Longhorn, which is simple to use, but it has poor IOPS.
Example:
Longhorn - 1300 IOPS
Local Path - 40K IOPS
Because of this, my app performs three times slower. Is there any self-hosted solution that integrates well with K8s?
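For anyone wanting to reproduce this kind of comparison, a 4k random read/write fio run against a file on the mounted volume is the usual benchmark. The mount path is a placeholder; run it once per storage backend:

```shell
# 4k random read/write, direct I/O, 30 s per run (path is a placeholder)
fio --name=randrw --filename=/mnt/volume/fio.test --size=1G \
    --rw=randrw --bs=4k --ioengine=libaio --iodepth=32 \
    --direct=1 --runtime=30 --time_based --group_reporting
```

Comparing the reported IOPS across Longhorn, local-path, and any other storage class makes the overhead of each replicated backend concrete.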