Comments
reverse proxy goes down and everything goes down.
why not just use one nginx on bare metal and name-based virtual hosting, like it's supposed to be used? the more complicated your setup, the more things can go wrong.
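The single-nginx approach above boils down to name-based virtual hosts: one daemon, one `server` block per site, selected by the Host header. A minimal sketch (hostnames and paths are placeholders):

```nginx
# Two of the 200 sites on one nginx instance, picked by Host header.
server {
    listen 80;
    server_name site1.example.com;
    root /var/www/site1;
}
server {
    listen 80;
    server_name site2.example.com;
    root /var/www/site2;
}
```

Adding site 201 is just another `server` block and a reload, no extra processes.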
Indeed... RR DNS, MySQL replication, etc. leave no room for a single point of failure. Or if you're playing in the big boy clouds, load balancers.
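Round-robin DNS here just means publishing multiple A records for the same name, so clients rotate across servers. A sketch of the zone fragment (addresses are placeholders from the documentation range):

```
; BIND zone fragment: round-robin DNS across three web servers.
; Resolvers hand the records out in rotating order, spreading clients.
www    IN  A    203.0.113.10
www    IN  A    203.0.113.11
www    IN  A    203.0.113.12
```

Note RR DNS spreads load but doesn't health-check: a dead server keeps receiving a share of clients until its record is pulled or its TTL expires.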
^
Depends on the sites, but if they're not static then there are some advantages to keeping them all in separate containers. Provided the containers are set up properly, one hacked site shouldn't mean they're all compromised.
I've got no quantitative metrics to hand, but 200 separate Docker containers will not be a problem even on relatively modest hardware.
From an architectural standpoint Docker is meant to be used horizontally, so 200 containers makes much more sense. You can then scale or provide HA (using a swarm) and you provide better inter-site isolation.
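Scaling horizontally with a swarm, as mentioned above, might look like this (service name and image are placeholders; requires a running Docker daemon):

```shell
# Turn this host into a single-node swarm manager.
docker swarm init

# Run one site as a replicated service: 3 copies behind swarm's built-in LB.
docker service create --name site1 --replicas 3 -p 8080:80 nginx:alpine

# Scale it up or down without touching the other 199 services.
docker service scale site1=5
```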
Pro tip: If you're running 200 near-identical instances of a binary, turning on some form of memory compression will help keep your overall memory usage down. Don't be tempted to overcommit, that can be dangerous. But it does give you more headroom, just in case.
My preferred method (swap out 512M with a value equal to approximately half of your system memory, else just have a play around):
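A minimal sketch, assuming zram swap is the memory-compression mechanism meant here (run as root; 512M is the value referenced above):

```shell
# Assumption: zram compressed swap. Pages swapped to /dev/zram0 are
# compressed in RAM instead of hitting disk.
modprobe zram num_devices=1
echo lz4  > /sys/block/zram0/comp_algorithm   # fast, low-overhead compressor
echo 512M > /sys/block/zram0/disksize         # ~half of system RAM on a 1G box
mkswap /dev/zram0
swapon -p 100 /dev/zram0                      # high priority: kernel uses zram first
```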
Docker is LXC. And provides better security defaults than LXC too...
When Docker started, it used LXC exclusively to access the container APIs. But now Docker mainly uses its own libcontainer, with just some remaining interfaces to libvirt, LXC, and systemd...
These types of questions can only have one answer: test it yourself by setting up a test case comparing both configurations and measuring their resource usage and performance. It ultimately depends on how you configure Docker and what is running within these Docker containers, i.e. how the containers use memory, swap, kernel memory, networking, etc.
Then just spin up another reverse proxy. This is Docker we're talking about. You can literally destroy the container and recreate it with the same settings in under 10 seconds.
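The destroy-and-recreate cycle above could look like this (container name, ports, and config path are hypothetical; the config lives on the host, so the container is disposable):

```shell
# Kill the broken proxy container; its config is a host bind-mount, so nothing is lost.
docker rm -f proxy

# Recreate it with the same settings in seconds.
docker run -d --name proxy \
  -p 80:80 -p 443:443 \
  -v /etc/nginx/conf.d:/etc/nginx/conf.d:ro \
  nginx:alpine
```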
What exactly does a reverse proxy get you that RR DNS does not?
Of course it doesn't solve everything. I agree with you. It even introduces new problems, such as how to configure the cluster, how to design for high availability, how to configure persistent storage, which reverse proxy to use, how to upgrade the cluster without downtime, and so on.
But it does solve many problems related to deploying (orchestrating) 200 containers (as asked by @yokowasis) across many nodes (sorry, I'm assuming 1 web server is 1 container). I also agree that it needs much more resources compared to "1 web server in 1 server", but that is why the hosting business exists, right?
@yokowasis if you want to use docker, just do it.
1 web server in a container serving 200 websites is a good choice if you're concerned about resources.
200 web servers in 200 containers will use more resources, but it gives you more isolation (1 hacked website shouldn't affect the others, although I agree the isolation isn't perfect beyond a certain level). It will also make your life easier if you use Traefik as the reverse proxy and integrate it with Docker.
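The Traefik-plus-Docker integration mentioned above works via container labels: Traefik watches the Docker socket and builds its routing table automatically. A sketch (hostnames, images, and service names are placeholders):

```yaml
# docker-compose sketch: Traefik routes by Host header to labelled containers.
services:
  traefik:
    image: traefik:v2.10
    command: --providers.docker          # discover routes from Docker labels
    ports:
      - "80:80"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro

  site1:                                 # one of the 200 site containers
    image: nginx:alpine
    labels:
      - traefik.http.routers.site1.rule=Host(`site1.example.com`)
```

Adding site 2 through 200 is just more services with their own `Host(...)` labels; no proxy restart needed.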
I don't know if RR DNS can do that, but the reverse proxy is used for forwarding requests from the public IP to the local private IPs, IF I choose to spin up 200 containers.
I'm just amazed at how byzantine modern IT has become.
20 years ago: stand up a webserver, put your 200 virtual hosts on it, you're done. (Even back then, shared hosters had thousands of sites on a single Apache.)
2020: we need 200 docker containers, kubernetes to manage, reverse proxy to route it...
20 years ago, internet access was a privilege. Now even a toddler has access to the internet. Yes, the internet was much, much simpler in those old days. Nobody tried to DDoS or hack your server. No such thing as web apps. Now everything is cloud based. Even a toaster.
and run everything using root 😂
All of that existed 20 years ago and I’m not talking about “simpler times”. I’m just saying that people have injected a lot of needless complexity.
s/20/10/ and the point still holds.
Sometimes folks make the mistake of trying to find problems to fit a solution, rather than finding solutions to fit the problem.