
1 Docker with 200 websites vs. 200 Dockers with 1 website each: which one is better performance-wise?


Comments

  • @yokowasis said:

    I don't need them to listen on their own IP. I will just spin up an nginx reverse proxy with 1 public IP and direct each domain to a specific upstream / container.

    If the reverse proxy goes down, everything goes down.

    Why not just use one nginx on bare metal with name-based virtual hosting, the way it's supposed to be used? The more complicated your setup, the more things can go wrong.

    Thanked by quicksilver03
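
    For reference, name-based virtual hosting on a single nginx is just one server block per domain on the same IP. A minimal sketch (the domain names and web roots below are placeholders):

    # one server block per site; nginx picks the block whose server_name
    # matches the Host header, so all 200 sites can share one public IP
    printf '%s\n' \
      'server {' \
      '    listen 80;' \
      '    server_name site1.example.com;' \
      '    root /var/www/site1;' \
      '}' \
      'server {' \
      '    listen 80;' \
      '    server_name site2.example.com;' \
      '    root /var/www/site2;' \
      '}' > /etc/nginx/conf.d/sites.conf
    nginx -t && systemctl reload nginx
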
  • raindog308 Administrator, Veteran

    @Abdussamad said: If the reverse proxy goes down, everything goes down.

    Indeed... RR DNS, MySQL replication, etc. leave no room for a single point of failure. Or, if you're playing in the big boy clouds, load balancers.
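
    For what it's worth, round-robin DNS is nothing more than publishing several A records for the same name. A rough sketch (the zone file path and the addresses are placeholders):

    # two A records for the same hostname; resolvers hand the addresses out
    # in rotation, spreading traffic across both servers
    echo "www IN A 203.0.113.10" >> /etc/bind/db.example.com
    echo "www IN A 203.0.113.11" >> /etc/bind/db.example.com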

  • kkrajk Member

    @adilolv said: use one big server

    ^

  • @Abdussamad said:

    @yokowasis said:

    I don't need them to listen on their own IP. I will just spin up an nginx reverse proxy with 1 public IP and direct each domain to a specific upstream / container.

    If the reverse proxy goes down, everything goes down.

    Why not just use one nginx on bare metal with name-based virtual hosting, the way it's supposed to be used? The more complicated your setup, the more things can go wrong.

    Depends on the sites, but if they're not static then there are some advantages to keeping them all in separate containers. Provided the containers are set up properly, one hacked site shouldn't mean they're all compromised.

  • JackH Member

    I've got no quantitative metrics to hand, but 200 separate Docker containers will not be a problem even on relatively modest hardware.

    From an architectural standpoint, Docker is meant to be scaled horizontally, so 200 containers makes much more sense. You can then scale out or provide HA (using a swarm), and you get better inter-site isolation.

    Pro tip: If you're running 200 near-identical instances of a binary, turning on some form of memory compression will help keep your overall memory usage down. Don't be tempted to overcommit; that can be dangerous. But it does give you more headroom, just in case.

    My preferred method (replace 512M with a value roughly equal to half of your system memory, or just have a play around):

    echo "zram" > /etc/modules-load.d/zram.conf
    echo 'KERNEL=="zram0", ATTR{disksize}="512M", RUN+="/sbin/mkswap /dev/zram0", TAG+="systemd"' > /etc/udev/rules.d/99-zram.rules
    echo "/dev/zram0 none swap defaults,pri=10 0 0" >> /etc/fstab
    
  • JackH Member

    @pbx said: Keep in mind that Docker wasn't created with security in mind; it's not gonna give you Xen/KVM or even LXC's isolation if an attacker wants to play with your stuff...

    Docker is LXC. And provides better security defaults than LXC too...

  • Jarry Member

    @JackH said:
    Docker is LXC...

    When Docker started, it used LXC exclusively to access the container API. But now Docker mainly uses its own libcontainer, with just some remaining interfaces for libvirt, LXC and systemd...
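
    A quick way to check which runtime a current install is actually using:

    docker info 2>/dev/null | grep -i runtime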

    Thanked by pbx
  • eva2000 Veteran

    @yokowasis said: How much overhead though? If the overhead is small / negligible then I don't really mind, for the benefit of more isolated containerization.

    For the context of the discussion, I am talking about WordPress. I manage WordPress websites for my customers, with very specific templates, plugins and functionality.

    These types of questions can only have one answer: test it yourself. Set up a test case comparing both configurations and measure their resource usage and performance. It ultimately depends on how you configure Docker and what is running within these Docker containers, i.e. how the containers use memory, swap, kernel memory, networking, etc.
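
    For example, even something as rough as per-container resource usage plus overall host memory gives a first-order comparison between the two layouts:

    # snapshot of CPU / memory / network usage per running container
    docker stats --no-stream
    # overall host memory, including what the kernel keeps in cache
    free -m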

  • yokowasis Member
    edited June 2020

    @Abdussamad said:

    @yokowasis said:

    I don't need them to listen on their own IP. I will just spin up an nginx reverse proxy with 1 public IP and direct each domain to a specific upstream / container.

    If the reverse proxy goes down, everything goes down.

    Why not just use one nginx on bare metal with name-based virtual hosting, the way it's supposed to be used? The more complicated your setup, the more things can go wrong.

    Then just spin up another reverse proxy. This is Docker we're talking about. You can literally destroy the container and recreate it with the same settings in under 10 seconds.
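
    Something like the following, assuming a hypothetical reverse-proxy container named revproxy whose config lives on the host (names, paths and image tag are placeholders):

    # tear the reverse proxy down and bring it straight back up
    # from the same image and the same config file
    docker rm -f revproxy
    docker run -d --name revproxy -p 80:80 \
      -v /srv/nginx/nginx.conf:/etc/nginx/nginx.conf:ro \
      nginx:stable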

  • raindog308 Administrator, Veteran

    @yokowasis said: Then just spin up another reverse proxy.

    What exactly does a reverse proxy get you that RR DNS does not?

  • akhfa Member
    edited June 2020

    @Jarry said:

    @akhfa said:
    This is why container orchestration exists :smile:
    All the issues in your comment have already been solved by people who think the same way you do :smile:

    I have been using docker & kubernetes for a few years and can tell you it does not solve everything. The problem of running many web servers (not websites) on a single public IP is far more complicated...

    Of course it doesn't solve everything. I agree with you. It even introduces new problems, such as how to configure the cluster, how to design for high availability, how to configure persistent storage, which reverse proxy to use, how to upgrade the cluster without downtime, and so on.

    But it does solve many of the problems around deploying (orchestrating) 200 containers across many nodes, as asked by @yokowasis (sorry, I'm assuming 1 web server is 1 container). I also agree that it needs much more resources compared to one "web server in one server"; that is why the hosting business exists, right? ;)

    @yokowasis if you want to use docker, just do it.

    One web server in one container serving 200 websites is a good choice if you are concerned about resources.

    200 web servers in 200 containers will use more resources, but you get more isolation (one hacked website should not affect another, although I agree the isolation is only good up to a point), and it also makes your life easier if you use Traefik as the reverse proxy and integrate it with Docker ;)
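
    A minimal sketch of that Traefik + Docker setup (the image tag, container names and domain below are placeholders): Traefik watches the Docker socket and routes each request by the Host rule in the container's labels.

    # shared network so Traefik can reach the site containers
    docker network create web
    # Traefik terminates port 80 and discovers containers via labels
    docker run -d --name traefik --network web -p 80:80 \
      -v /var/run/docker.sock:/var/run/docker.sock:ro \
      traefik:v2.4 \
      --providers.docker=true \
      --providers.docker.exposedbydefault=false \
      --entrypoints.web.address=:80
    # one of the 200 site containers; only the labels change per site
    docker run -d --name site1 --network web \
      --label traefik.enable=true \
      --label 'traefik.http.routers.site1.rule=Host(`site1.example.com`)' \
      --label traefik.http.services.site1.loadbalancer.server.port=80 \
      wordpress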

  • @raindog308 said:

    @yokowasis said: Then just spin up another reverse proxy.

    What exactly does a reverse proxy get you that RR DNS does not?

    I don't know if RR DNS can do that, but the reverse proxy is there to forward requests from the single public IP to each container's local private IP, IF I choose to spin up 200 containers.
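
    As a concrete sketch of that (ports, names and the domain are placeholders): publish each container only on a loopback port, then let nginx on the public IP pick the upstream by Host header.

    # site1's container listens only on the host loopback, not publicly
    docker run -d --name site1 -p 127.0.0.1:8001:80 wordpress
    # nginx on the public IP forwards matching requests to that container
    printf '%s\n' \
      'server {' \
      '    listen 80;' \
      '    server_name site1.example.com;' \
      '    location / {' \
      '        proxy_pass http://127.0.0.1:8001;' \
      '        proxy_set_header Host $host;' \
      '    }' \
      '}' > /etc/nginx/conf.d/site1.conf
    nginx -t && systemctl reload nginx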

  • raindog308 Administrator, Veteran

    I'm just amazed at how byzantine modern IT has become.

    20 years ago: stand up a webserver, put your 200 virtual hosts on it, you're done. (Even back then, shared hosters had thousands of sites on a single Apache.)

    2020: we need 200 docker containers, kubernetes to manage, reverse proxy to route it...

    Thanked by angelius, vimalware
  • @raindog308 said:
    I'm just amazed at how byzantine modern IT has become.

    20 years ago: stand up a webserver, put your 200 virtual hosts on it, you're done. (Even back then, shared hosters had thousands of sites on a single Apache.)

    2020: we need 200 docker containers, kubernetes to manage, reverse proxy to route it...

    20 years ago, internet access was a privilege. Now even a toddler has access to the internet. Yes, the internet was much, much simpler in those old days. Nobody tried to DDoS or hack your server. There was no such thing as web apps. Now everything is cloud based. Even a toaster.

  • @raindog308 said:
    2020: we need 200 docker containers, kubernetes to manage, reverse proxy to route it...

    and run everything using root 😂

  • raindog308 Administrator, Veteran

    @yokowasis said:
    20 years ago, internet access was a privilege. Now even a toddler has access to the internet. Yes, the internet was much, much simpler in those old days. Nobody tried to DDoS or hack your server. There was no such thing as web apps. Now everything is cloud based. Even a toaster.

    All of that existed 20 years ago and I’m not talking about “simpler times”. I’m just saying that people have injected a lot of needless complexity.

    s/20/10/ and the point still holds.

    Thanked by Falzo
  • eva2000 Veteran

    @raindog308 said: 2020: we need 200 docker containers, kubernetes to manage, reverse proxy to route it...

    Sometimes folks make the mistake of trying to find problems to fit a solution, rather than finding solutions to fit the problem :)

    Thanked by raindog308, vimalware