Is it a good idea to run different servers on multiple VPSes?

sandro Member
edited December 2012 in General

Hi,
Does anyone run heavy-traffic websites with PHP/MySQL/web servers split across different machines to balance the load, but inside the same datacenter to keep pings low?


Comments

  • If you want to be pretentious, sure.

  • Well, you could do this.

    BuyVM offers this model with an offloaded MySQL server in the same facility on a VLAN.

    This is traditionally how I ran everything: different servers on a shared VLAN. It works as long as your install and network setup are clean and run at full throughput. Gbit makes a world of difference. 100Mbps is acceptable, but if you have a lot of traffic heading to MySQL, it gets sluggish and the latency climbs quickly.

  • Yes, I meant using a local network between them. Are you sure MySQL requires high bandwidth? Usually it's only a few KB/s per process between PHP (to name one) and the DB.

    I know it depends a lot on the type of website, but from your experience, which process needs dedicated RAM/CPU power?

  • If you are going to consider something like this, you may want to find a provider that uses AppLogic and build an application stack. AppLogic lets you create your own mini-network so all of your appliances can communicate on their own private virtual LAN. It also has pre-made load-balancing appliances you can use in your application. You can have multiple web servers using a central private data store, have them share a MySQL server, or even run multiple replicating MySQL servers. You can read more about it on CA's website: http://www.ca.com/us/cloud-platform.aspx

    If you are interested in this technology and would like a recommendation, you can private message me and I will be happy to suggest a provider for you.

    AppLogic is a virtualization platform and makes use of Xen technology; the main difference is the way AppLogic lets you build application stacks.

    Either way, you get to learn a little more about another technology :)

    Please note: I know this would not fit into LEB; it's more of a technology for higher-end users. However, if you have been considering multiple dedicated servers in a datacenter, as you suggested, with separate machines for web servers, database servers, and the like, this may be an option for you.

    Cheers!

  • RobertClarke Member, Host Rep

    I believe that would be a good idea, especially for database stuff.

  • The good part is that you can put MySQL on an internal interface, mostly shielded from the world. Still not bulletproof, but it helps against attacks and cuts out unnecessary network chatter too (see the config sketch below).

    Once you get databases in the gigabyte and above range, offloading makes sense, especially in shared VPS environments.
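
    A minimal my.cnf sketch of that shielding idea, assuming a placeholder VLAN address of 10.0.0.5:

        [mysqld]
        # Listen only on the private VLAN interface, never the public IP
        bind-address = 10.0.0.5

        # Belt and braces: firewall port 3306 on the public interface too;
        # bind-address alone is not a substitute for iptables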

  • In my experience, using a separate VPS for MySQL and for your frontend just increases your exposure to downtime. One front-end VPS plus one backend VPS doubles your risk: if each VPS is up 99% of the time, a site that needs both is only up about 98% of the time. The only way it makes sense to separate out your MySQL server from your web server is if you're going to set up replication and automatic failover for your database.

    As far as performance goes, there's not really any noticeable difference - the added latency from the network traffic is balanced out by the performance gain from running things on two separate servers.

  • raindog308 Administrator, Veteran

    @pubcrawler said: Once you get databases in the gigabyte and above range

    Database scalability is limited more by concurrent use than size. A small DB with high concurrency won't scale as easily as a huge DB with low concurrency.

  • All that depends, @NickM.

    People speak highly of BuyVM's setup for this, although it's limited to maybe 10GB for your database.

    I've done this for years with real, fully controlled servers. It makes a world of difference when you are doing anything significant, complicated, or large.

    Gigabit is really necessary, as a 100Mbit connection falls apart pretty quickly when pushing sustained traffic like this over the link. A database isn't a forgiving, "it can wait" sort of workload.

    Agreed, replication and failover should be part of any recipe, but it gets complicated and doubles or triples the cost to do it "right".

  • I assume concurrent usage is the idea behind anyone's isolation of MySQL/database.

    But the size of the database alone is reason enough to get it off your VPS. Odds are that if it's big, you could use more RAM for storing indexes, cache, etc.

  • Francisco Top Host, Host Rep, Veteran

    @pubcrawler said: People speak highly of BuyVM's setup for this, although it's limited to maybe 10GB for your database.

    Not true. We don't count DB space against a user's space usage. :)

    The 15GB extra is for their own backups or what have you.

    Francisco

  • @NickM has a point. When I was doing multiple front-end webservers, one would always go down, seemingly just for the hell of it.

  • @Francisco, still a bit confused :) Excuse me, I've been up for most of the past 96 hours :(

    With the MySQL offloading, what are the limits on an account?

  • @bamn, front ends should be disposable.

    @raymii has a tutorial on how he balances his multiple mini VPS nodes into one big failover setup... Can't find the exact tutorial:
    https://raymii.org

    My trick to do this is, again, Nginx in proxy mode. Up front, Rage4DNS spits out multiple A records based on geography. I connect front-end nodes to the backend MySQL as needed over an SSH tunnel (sketch below).

    Again, you should be able to lose front ends and keep going. Losing the database, well, that's an entirely different and typically more complicated piece to deal with.
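
    The tunnel part is just plain OpenSSH port forwarding, something along these lines (tunneluser and db.internal are placeholders, not a real setup):

        # Forward the web node's local port 3306 to MySQL on the DB node;
        # -f backgrounds the tunnel, -N runs no remote command
        ssh -f -N -L 3306:127.0.0.1:3306 tunneluser@db.internal

    PHP then connects to 127.0.0.1:3306 as if MySQL were local, and the traffic rides the encrypted tunnel.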

  • My 2 cents:

    Level 1 - public facing
    Level 2 - private, only accessible from the management network or Level 1

    Databases on separate boxes on tier 2, maybe SSDs, lots of RAM, depending on how much you are going to do. MySQL master-slave (master-master is a pain to set up and manage); if your app supports it, go with something like Postgres (replication and failover work much better IMHO).

    Frontend load balancing with nginx if you prefer (level 1), and maybe static file caching (see the sketch at the end of this post).

    Backend web servers (level 2), also multiple. If you do file storage, use something like DRBD, GlusterFS, or NFS for the shared storage. Apache also has good integration with Heartbeat, so failover should work.

    This would be a bit more professional than you maybe want. What I use for my website (https://raymii.org) is multiple LEBs (12 ATM), a round-robin DNS config, and a deploy script which deploys my website to all my webservers. I don't use a database, but if I did, I would also have a clustered database server.
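
    For the level-1 nginx balancing, a minimal sketch; the 10.0.0.x addresses are placeholders for the level-2 web servers:

        upstream backend_pool {
            # Level-2 web servers on the private network;
            # nginx round-robins between them by default
            server 10.0.0.11:8080;
            server 10.0.0.12:8080;
            # Only used when the others are unreachable
            server 10.0.0.13:8080 backup;
        }

        server {
            listen 80;
            location / {
                proxy_pass http://backend_pool;
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
            }
        }

    A backend that stops responding is dropped from rotation after max_fails errors and retried after fail_timeout, so failover of the level-2 boxes mostly comes for free.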

  • sandro Member
    edited December 2012

    What does MySQL like more, RAM or CPU?

    Anyway, very informative input, guys. I like your idea as well, raymii :)

    Is nginx able to load balance across different boxes? How can you handle that with MySQL? Is there a frontend load balancer for that as well?

    If level 1 goes offline, what's your failover?

  • @sandro said: What does MySQL like more, RAM or CPU?

    Anyway, very informative input, guys. I like your idea as well, raymii :)

    Both, and it depends on what MySQL is doing.

  • 24khost Member
    edited December 2012

    RAM, RAM, and more RAM. CPU depends on how many concurrent requests you get, but it's mostly memory.

  • A few datacentres have a 1Gbps internal network, and I just have two dedicated servers that use it to connect to each other: one for MySQL, one for Minecraft/VPSes. I find MySQL takes a huge load and pretty much requires a separate server.

  • @sandro, MySQL is a pain in the arse in general. It takes up all resources (disk, CPU, RAM). To have it run right, you need a config file that truly fits your needs and the hardware it is running on.

    @sandro said: Is nginx able to load balance across different boxes? How can you handle that with MySQL? Is there a frontend load balancer for that as well?

    Nginx can do all sorts of load balancing and redirection magic. But most people use Nginx up front as the web server, with the config proxying a backend app server like PHP. PHP is typically the one talking to MySQL.

    So, typically, it would be good and redundant to give PHP a way to talk to its local MySQL instance and, in case of failure, a remote one (basically, talk to the master or the slave node). You can accomplish that with something like MySQL Proxy (rough example below) ---> http://www.cyberciti.biz/tips/mysql-proxy-howto.html

    There are other MySQL proxy solutions out there though.
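
    If I remember the howto right, it boils down to an invocation roughly like this (the addresses are placeholders):

        # PHP connects to 127.0.0.1:4040; mysql-proxy forwards each
        # connection to one of the live backends
        mysql-proxy \
          --proxy-address=127.0.0.1:4040 \
          --proxy-backend-addresses=10.0.0.5:3306 \
          --proxy-backend-addresses=10.0.0.6:3306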

  • Why do you keep saying that a fast local network is important for MySQL? Since when does it need all that bandwidth? Usually (at least for my applications, or forums to name one) it handles a few KB of data per query ... just "text". Why do I need a 1Gbps connection? Latency seems much more important, considering how fast the queries are executed.

    Anyway, regarding nginx load balancing: from the documentation I get that you don't get redirected; nginx actually acts like a proxy (duh!), so all data has to go back through the front end and then to the user. That adds travel time, so wouldn't it make the overall experience SLOW? The data would need to be uploaded to the front end from each backend server and then from the front end to the user, so you'd pay for each backend server's BW plus the front end's BW, which would be HUGE (the total of all the backends). What if one of the backends is slow at uploading?

    Am I thinking correctly?

  • Can someone comment on my last post about the nginx BW? :) Thank you

  • @sandro, we haven't used anything MySQL-wise across a network slower than gigabit in 10 years.

    Depending on the application, traffic and packet counts can climb quickly between the front end and MySQL. Lots of people bump their heads quick and hard when moving MySQL from the same server onto its own server. Depending on the software, a single page can make hundreds of queries and move megabytes of sustained traffic per second, and it only gets worse with load. For this reason, MySQL and the apps that touch it often keep a connected socket open to eliminate the negotiation and connection management overhead.

    Nginx load balancing, like most load-balancing solutions, is indeed a "proxy". There is certainly overhead to that. Slow? No, not unless something is really wrong or the backends are slow. Is it wire speed? I doubt it, but it's probably under 10ms, and could be a fraction of a ms depending.

    I try to segment slow backends from fast ones. Think, for instance, of separating your fast reads from your slow writes (see the sketch at the end of this post).

    None of this matters though where you have simple apps, no traffic, no use, etc. :)
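
    One hypothetical way to do the segmenting in nginx; the pool names, addresses, and paths are made up for illustration:

        upstream fast_pool { server 10.0.0.21; server 10.0.0.22; }
        upstream slow_pool { server 10.0.0.31; }

        server {
            listen 80;
            # Cheap, read-heavy traffic goes to the fast pool
            location / {
                proxy_pass http://fast_pool;
            }
            # Heavy endpoints get their own backends so they
            # cannot drag down the rest of the site
            location /reports/ {
                proxy_pass http://slow_pool;
            }
        }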

  • Thanks. I was thinking that other problems would arise. When you have one frontend and three backends, each connected with a 1Gbps port, the backends can push 3Gbps at a frontend port that can only receive 1Gbps... how would you handle that?
    A mega front end with a lot of RAM to cache and serve the static file traffic, and only PHP/MySQL on the backends, which are unlikely to saturate the bandwidth?

  • Don't multiple front ends need a front end anyway? Or do you mean controlled by round-robin DNS?

  • That sounds neat :)

  • I separate my web servers from my DB servers with Linode. There's an internal network between the web and DB boxes, but I'm looking to move away from Linode due to their high pricing. It's difficult to find a VPS provider that offers private networking using Xen.

    Anyways, my DB server handles both MySQL and MongoDB, and it does pretty well on a 1GB VPS. Our web server loves using swap, and I'm going to have to move it to a 2GB VPS soon. So yeah, without the offloading I would need a much bigger VPS to run both. BUT the point is that if my web server gets hacked, at least the DB server remains intact (or so I'd like to think.)

  • I thought getting a local connection between VPSes would be easier!

  • A local connection can be done inside the same box (localhost / 127.0.0.1), but that is the worst approach if the box is compromised.

    Otherwise, you go to a separate backend server instance for MySQL, and that typically sits on a VLAN.

  • Yeah, I meant via VLAN between different boxes on different nodes in the same datacenter. Do providers offer that?
