When would you run things in Docker?

jeromeza Member
edited October 2015 in Help

As an example, my VPSes run the following:

PowerDNS Master
PowerDNS Slave
Netflix Proxy
Mailcow
Websites
VPN

When should I consider using Docker and why is it better for certain use cases?

I assume it's:

a) easier / quicker to deploy if a server goes down.

b) easier to roll out changes via Puppet / Ansible / Chef etc. as you can just push a new Docker image.

Any other reasons why it's a must-have? I haven't done much more than play with it at this point...

Thanks,

Comments

  • You create your image - you push it into your private registry (if you want to be fancy) - you deploy it anywhere Docker is running with two commands (roughly as sketched below). No need to go through your configuration notes ever again (well, until you need to do some major update to the image :p)
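
    A minimal sketch of that build/push/deploy flow (the image and registry names are placeholders, not from the thread):

        docker build -t registry.example.com/myapp:1.0 .    # build the image locally
        docker push registry.example.com/myapp:1.0           # push it to your private registry
        # ...then on any host that runs Docker, the two commands:
        docker pull registry.example.com/myapp:1.0
        docker run -d --name myapp registry.example.com/myapp:1.0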

  • I'm currently just running Discourse in docker, mainly because I got lazy when I installed it and saw they recommended the docker method :P but so far no complaints

    Thanked by vimalware
  • raindog308 Administrator, Veteran

    It's exactly the same anywhere you deploy it, and it's easy to move.

    Thanked by vimalware
  • The downside of any "image" based approach is that development cycles are longer. If compatibility tweaks are necessary, it will be a lot easier to tweak and re-run a simple shell script than to build and upload a whole container as part of a test cycle.

    Essentially, since you're usually going to build a container by script, you can cut out the middle man (the docker image) and run the script directly in the target environment.

    Where Docker really makes sense is if you have lots of machines that are to be managed uniformly, or you are going to store your Docker images in the cloud, from where they can be retrieved and run faster than they could be rebuilt. I wouldn't do this unless it is clear that these benefits are worth the extra time spent maintaining such a setup versus simple scripts.
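
    As a hypothetical illustration of that "cut out the middle man" approach (the script name and host are placeholders):

        scp setup-nginx.sh admin@target.example.com:/tmp/
        ssh admin@target.example.com 'bash /tmp/setup-nginx.sh'    # tweak the script and re-run as needed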

  • singsing said: The downside of any "image" based approach is that development cycles are longer. If compatibility tweaks are necessary, it will be a lot easier to tweak and re-run a simple shell script than building and uploading a whole container as part of a test cycle.

    Damn, I actually agree with this guy, that's new.

    I'd rather install a KVM/OVZ/LXC container manually and either run one of my deploy scripts for what I need (nginx, mysql, redis, whatever) or use something like asyd/puppet/chef than create a new image with it already installed.

    Thanked by netomx
  • In any case, learning something like Ansible should be a prerequisite to deploying docker for production.

  • deadbeef Member
    edited October 2015

    @singsing @William

    Your concern is valid, but docker's purpose is not to replace Ansible or Salt. It does offer similar functionality, yes. What I do when creating a new image for something I'm not really familiar with is:

    • I build it manually (launch an ubuntu container, get a shell, install, tweak)
    • write down the steps
    • delete the container
    • create a Dockerfile build script based on my notes
    • Build an image with the Dockerfile (roughly as sketched below)
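
    A minimal sketch of that kind of Dockerfile and build (the nginx example is illustrative, not deadbeef's actual image):

        # Dockerfile, based on the manual notes (assumes nginx.conf sits next to it)
        FROM ubuntu:14.04
        RUN apt-get update && apt-get install -y nginx
        COPY nginx.conf /etc/nginx/nginx.conf
        CMD ["nginx", "-g", "daemon off;"]

        # then build and tag the image:
        docker build -t myname/nginx-example .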

    Isn't that similar to what you do with say Ansible but instead of a throwaway container you use a throwaway Vagrant VM?

    Where Docker shines is how easily it lets you operate your containers, their interconnections, and their scaling across more nodes; a rough example of that wiring follows the list below. For my personal needs this is very useful for:

    • Micro-services
    • Multi-tenancy
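
    A rough example of the kind of wiring that makes micro-services convenient (the image names are illustrative; this uses the 2015-era --link flag):

        docker run -d --name redis redis                          # one service per container
        docker run -d --name api --link redis:redis myname/api    # the app reaches redis by the hostname "redis"
        docker run -d -p 80:8080 --link api:api myname/frontend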
  • deadbeef Member
    edited October 2015

    @vimalware said:
    In any case, learning something like Ansible should be a prerequisite to deploying docker for production.

    Not really - see Docker Swarm (container deployment across multiple nodes) & Docker Machine (server provisioning)
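
    A hedged sketch of what Docker Machine gives you (the driver and node name are illustrative; flags as of the 2015-era tooling):

        docker-machine create --driver virtualbox node1    # provision a new Docker host
        eval "$(docker-machine env node1)"                 # point the local docker client at it
        docker run -d -p 80:80 nginx                       # deploy a container on that host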

  • @deadbeef said:
    Not really - see Docker Swarm (container deployment in multiple nodes) & Docker Machine (server provision)

    I'd actually have to try them all before making the paranoid comment about vendor lock-in.

    Abstraction pyramid never stops growing. :/

    Thanked by deadbeef
  • @vimalware said:
    vendor lock-in.

    Well, it's open source, the company is well funded, the project has huge traction and even Microsoft implemented Docker support for Windows Servers. I wouldn't worry about vendor lock-in in this case :)

  • @deadbeef said:
    singsing William

    • Build an image with the Dockerfile

    Just out of curiosity, does the Dockerfile provide feature parody with Salt or Ansible?

  • deadbeef Member
    edited October 2015

    @bookstack said:
    feature parody

    I assume you mean parity :)

    A Dockerfile is just a way to run shell commands in a structured way during the image build process. You can even inject bash/ansible/whatever scripts into the image building process and execute them. So there's no feature comparison; it's just a defined way to run commands while the image is being built.
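
    For example (a hypothetical sketch; provision.sh stands in for whatever bash/ansible script you want to run during the build):

        # Dockerfile
        FROM ubuntu:14.04
        COPY provision.sh /tmp/provision.sh
        RUN bash /tmp/provision.sh    # any script - could just as well invoke ansible-playbook

        # then:
        docker build -t myname/provisioned .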

  • deadbeef said: Micro-services

    Ok, so it's the second time you're mentioning micro-services ... do you actually use this paradigm? Link to a decent micro-services software stack?

  • @singsing said:
    Ok, so it's the second time you're mentioning micro-services ... do you actually use this paradigm?

    Yes

    Link to a decent micro-services software stack?

    No

  • For me it's the smaller image size: a bare alpine linux docker image is 5MB, a busybox image is 2MB, debian is around 80MB.

    It also helps a lot with the devops team's workload, especially when dealing with service configuration: docker makes it easy for the developer team to configure the services as they like (port forwarding, web server, proxy, etc.) and push the final image to the devops team for deployment.
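
    A quick way to see those size differences yourself (alpine/busybox/debian are the official library images; tags are whatever is current):

        docker pull alpine && docker pull busybox && docker pull debian
        docker images    # compare the SIZE column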

  • sman Member
    edited October 2015

    I was surprised after all the hoopla that I was unable to find any justification for using it for much of anything. Sounds like it's great for some things like application development. However, there are some really hard limitations that come out of the woodwork once you start taking a close look at it. Try to do something as simple as creating a LAMP stack, for example.

  • sman said: Try to do something as simple as creating a LAMP stack, for example.

    But then again, you might not need to ... sudo docker pull linode/lamp.
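
    A hedged sketch of actually using that image (whether the services start on their own depends on how the image is built - check its docs):

        sudo docker pull linode/lamp
        sudo docker run -d -p 80:80 linode/lamp    # assumes the image starts Apache/MySQL itself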

  • sman said: I was surprised after all the hoopla that I was unable to find any justification for using it for much of anything.

    I use docker mainly for tool-set grouping or testing - my most-used docker image bundles nghttp2 and all the ssl/http2 testing tools I use: https://hub.docker.com/r/centminmod/docker-ubuntu-nghttp2/
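
    A rough example of that "toolbox" pattern (the h2load invocation is illustrative - check the image's docs for the tools it actually ships):

        docker pull centminmod/docker-ubuntu-nghttp2
        docker run --rm -it centminmod/docker-ubuntu-nghttp2 h2load -n 100 -c 10 https://example.com/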
