Comments
I have a deal with him: I tell him who to go after, and he leaves me alone.
It's not these guys is it?
http://www.hackermurphy.com/
Main sites are in PhoenixNAP with SecuredServers; the blog is on RamNode. We use two CDNs (EdgeCast & MaxCDN).
Backups in Dropbox & SecureDragon.
Also have 5 redundant boxes (so far) for our uptime monitoring nodes:
RamNode
Fusioned
Prometeus
DigitalOcean
Hostigation
I have a dedicated server with VolumeDrive running XenServer. I have a virtual machine set up for my websites, and it syncs with a server I have at home for redundancy. For images and other files I use MaxCDN.
http://en.wikipedia.org/wiki/Murphy's_law
Sites hosted on 4-8 VPSes from different providers in different locations, plus 1 or 2 more "hot spares" not exposed to the public as live backup/disaster recovery. Drupal/LAMP-based sites using MySQL master-master replication to keep databases in sync, and DNS-based GSLB to provide load balancing, failover, and high availability between the servers. All services (web, email, DNS, etc.) are on separate providers. If any provider dumps & runs (eNetSouth, UptimeVPS, DixHost) or starts to perform poorly (won't name these :-), I just drop them out of the ring, cancel service, and keep going.
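For anyone curious what the master-master side looks like, here's a minimal my.cnf sketch for a two-node pair. The server IDs and increment/offset values are illustrative, not from my actual setup; the offsets keep auto-increment IDs from colliding when both masters accept writes.

```ini
# Node A
[mysqld]
server-id                = 1
log_bin                  = mysql-bin
auto_increment_increment = 2   # step by the number of masters
auto_increment_offset    = 1   # node A writes odd ids

# Node B
[mysqld]
server-id                = 2
log_bin                  = mysql-bin
auto_increment_increment = 2
auto_increment_offset    = 2   # node B writes even ids
```

Each node is then pointed at the other with `CHANGE MASTER TO ...` followed by `START SLAVE`, so writes on either side replicate to its partner.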
Downside is the nightmare of managing the payments -- all providers are intentionally paid month-to-month to reduce dollars lost to the "runners" and "summer hosts." Upside is I can quickly take advantage of better offers as they come out.
So for far less than the cost of most dedicated boxes (or even some individual VPS's :rolleyes:), this has been a global, risk-averse, reliable, scalable setup.
To do list:
0) Make money, lol
1) Want to do the GSLB myself
2) Do the same for Joomla, and other select services
3) Automated RSync to sync static content
4) Nginx or Varnish to serve static content and/or caching
5) SNI based SSL for multiple domains (or a multi-domain SSL Proxy?)
6) Dedicated boxes
7) Colocation of a couple of boxes
8) Dump MySQL and move to PostgreSQL; don't know if Oracle can be trusted with "open source"
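On item 5, a rough sketch of what SNI-based SSL for multiple domains on one IP could look like in Nginx (domain names and cert paths here are placeholders, not my real config). With SNI, Nginx picks the right certificate from the hostname the client sends in the TLS handshake:

```nginx
server {
    listen 443 ssl;
    server_name example-one.com;
    ssl_certificate     /etc/ssl/example-one.com.crt;
    ssl_certificate_key /etc/ssl/example-one.com.key;
}

server {
    listen 443 ssl;
    server_name example-two.com;
    ssl_certificate     /etc/ssl/example-two.com.crt;
    ssl_certificate_key /etc/ssl/example-two.com.key;
}
```

Caveat: this needs Nginx built against an SNI-capable OpenSSL, and some older clients (e.g. IE on Windows XP) don't send SNI, which is where a multi-domain (SAN) cert would still help.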
@KuJoe, see you went the route recommended when you were getting DDoS'd along with the LEB/LET sites. Glad it's working out.
What we do, although right now just for one main project site is this:
Colo in 2 locations. Real stuff stored here. No public traffic goes directly there (well, working on that -- still weeding out some legacy exposure).
VPS front ends geographically scattered around the world. Running publicly now: 2 in the US, 1 in Europe, and 1 in Asia. Real traffic hits these. Traffic is somewhat regulated -- DDoS scripts and iptables to limit request frequency, maximum speed per IP, etc. Have another half dozen in the install-and-testing phase that we will bring live (some are there just as "oh crap!" just-in-case spares).
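For the iptables side, a sketch of the kind of per-IP rate limit a front end might carry (the port, limits, and rule name are made up for illustration, not our actual rules). This uses the `hashlimit` match to drop new HTTP connections from any single source IP above a threshold:

```shell
# Drop new connections to port 80 from any source IP that exceeds
# 20 new connections/minute (allowing an initial burst of 40).
iptables -A INPUT -p tcp --dport 80 -m conntrack --ctstate NEW \
  -m hashlimit --hashlimit-above 20/min --hashlimit-burst 40 \
  --hashlimit-mode srcip --hashlimit-name http_limit -j DROP
```

Per-IP bandwidth caps are a different job -- that's usually `tc` territory rather than iptables alone.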
These VPS nodes, in addition to filtering, run a caching proxy for static elements. Dynamic pages and uncached elements are fetched from the real servers over a secure SSH connection; both real servers are live. The main server is designated per location via load balancing, with the other in failover mode. All Nginx wizardry, although still somewhat crude and basic.
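A minimal sketch of what one of these edge nodes might run (paths, zone names, and the local tunnel port are invented -- the real config is more involved): Nginx caches static elements locally and proxies everything else to the colo box, here assumed to be reachable through an SSH tunnel on a local port.

```nginx
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=edge:10m max_size=1g;

server {
    listen 80;
    location / {
        proxy_pass http://127.0.0.1:8080;    # e.g. SSH tunnel to the colo backend
        proxy_cache edge;
        proxy_cache_valid 200 10m;           # keep successful responses briefly
        proxy_cache_use_stale error timeout; # serve stale copies if backend is down
    }
}
```

The `proxy_cache_use_stale` line is what gives you a degraded-but-up site when the backend disappears.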
Did away with the CDN since we kinda accomplish that on our own. Would reintroduce a CDN if found one cheap enough with no monthly fees with an API and stats type package that was programmable so could build into our software. MaxCDN is about the best we've tried cost wise, would probably revisit them to see about API in the future depending on growth and need.
Static elements are rsync'd every 10 minutes between our real colo'd servers.
DNS up front is geographic-aware, using Rage4 as an outsourced service. It's okay, but the geographic portions are less than perfect at this point; working to help them improve that with examples of wrong geographic labeling of IPs. Been through other DNS providers offering competing services, but their pricing has sucked, and all were absent any real customer service. C'est la vie.
Weakness now for us is the MySQL backend we use. Detest MySQL. Wary of connecting the two sites with live replication: it's one more security issue, and we could end up with bad or no data in both locations. Doing full dumps regularly of content that is vital. Have issues with some of our data being just too darn big to be manageable easily (think tables with 40-100 million rows). Segmenting some of this data is in the cards. Reducing it to smaller blocks is important, since MySQL can be very slow on optimizes and other operations where the entire table gets re-created (like adding a new field to the schema).
Other part of the solution for redundancy is DNS serving two A-B IPs for resolution (i.e. www.whatever.com = 1.1.1.1 and 2.2.2.2). Nginx on either box has the same routing to the actual backend: colo box A first, colo box B second.
So everything is A-B dual-optioned.
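That A-first, B-failover routing maps naturally onto an Nginx upstream with a `backup` server. A sketch with placeholder backend IPs and port (not our real addressing): Nginx only tries colo box B when colo box A is unreachable.

```nginx
upstream colo_backend {
    server 10.0.0.1:8080;          # colo box A, primary
    server 10.0.0.2:8080 backup;   # colo box B, used only if A fails
}

server {
    listen 80;
    location / {
        proxy_pass http://colo_backend;
    }
}
```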
We didn't get DDoSed; our DC forced us to take our website down when we were threatened.
@KuJoe, doh! Threatened with DDoS = takedown by DC. Ouch. Not fond of those sorts of DCs.
@pubcrawler Yup, it's one of the reasons we left.
Running my website on a self-built cluster:
It runs on a GlusterFS-based webroot with a round-robin DNS config.
Running the same setup (webroot on GlusterFS) with a 4-server MySQL master-master + Heartbeat for the database and round-robin DNS. This runs a huge Joomla site. Have had 2 "disasters": one where the DC hosting 2 of the MySQL servers pulled the rack's power, and one where another admin ran shutdown in 4 wrong terminal windows, bringing down half of the webservers. Neither caused any downtime.
https://raymii.org/cms/p_Gluster_webroot_cluster
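For anyone who wants the flavor of the GlusterFS part before reading the write-up, here's a rough provisioning sketch. Hostnames and brick paths are made up; the linked article has the real steps. A `replica 4` volume keeps a full copy of the webroot on every node, so any single webserver can die without losing files:

```shell
# From one node, join the others into the trusted pool:
gluster peer probe web2
gluster peer probe web3
gluster peer probe web4

# Create and start a 4-way replicated volume:
gluster volume create webroot replica 4 \
    web1:/data/webroot web2:/data/webroot \
    web3:/data/webroot web4:/data/webroot
gluster volume start webroot

# Each webserver then mounts the volume as its document root:
mount -t glusterfs web1:/webroot /var/www
```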
@KuJoe
Which DC was that? The one that you left -- just to make sure I stay away.
GoRack IIRC