Can Atom N2800 handle high traffic for static images?
I'm thinking of buying an Atom N2800 dedi from Kimsufi during the flash sale, but I was wondering whether an Atom N2800 can host millions of images. It won't run PHP or MySQL, so it's purely static; I'll just be hotlinking the images from my main site. It will receive about 50k hits daily with about 200 concurrent users viewing the images, so it will have a much higher read rate than write rate.
Do you think the N2800 can handle this traffic? If so, what kind of setup should I use?
Comments
Yes. With nginx you should be able to serve that much traffic even on a VPS.
50k hits/day is virtually nothing (<1 hit/sec). People were saturating 100Mbps lines with Pentium II machines with 64MB RAM. KS server will have absolutely no problem as long as your setup is at least half sane.
Should be okay CPU-wise, but I'm not sure about a single SATA HDD with 200 concurrent users. Also remember it only has a 100 Mbit uplink; if your images are large, that might be a problem as well.
Consider getting two or three, this kind of task sounds like you can easily balance the load.
OK, so I should install nginx. Is there anything else, like optimization or caching, I should set up?
The images aren't big; they're roughly 500 KB each.
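A quick back-of-the-envelope check on those numbers (assuming ~500 KB per hit and traffic spread evenly over the day, which real traffic never is):

```python
# Rough bandwidth estimate for 50k hits/day of ~500 KB images.
# Peak-hour traffic will be several times the average.
hits_per_day = 50_000
image_bytes = 500 * 1024

avg_bits_per_sec = hits_per_day * image_bytes * 8 / 86_400
print(f"average: {avg_bits_per_sec / 1e6:.1f} Mbit/s")  # well under a 100 Mbit uplink
```

Even with a generous 10x peak factor, that stays comfortably inside a 100 Mbit port.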
For static images this CPU is more than enough. As said for many concurrent requests the disks could bottleneck long before the CPU. Getting a version with SSD instead of HDD would greatly help in this case (if the space is enough for you).
Atoms do have limited I/O, but sufficiently optimised you should easily get 500 Mbps sustained out of that hardware, and 1 Gbps with a reasonable amount of work.
You will need to use nginx or perhaps lighttpd (or any other lightweight web server, i.e. not Apache). If it's a limited number of files, they should be served from the RAM cache and the disks shouldn't be an issue.
Don't run unnecessary daemons on this server. You won't need more than nginx, Postfix (or another lightweight MTA), SSH, crond and iptables-persistent at boot. Use Debian Jessie.
As for caching the OS itself will cache disk operations.
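To make the nginx advice concrete, here is a minimal sketch of what a vhost for this box might look like; the domain, root path and cache sizes are placeholders, not a drop-in config:

```nginx
server {
    listen 80;
    server_name img.example.com;    # placeholder domain
    root /srv/images;               # placeholder path

    sendfile on;                    # let the kernel copy file -> socket directly
    tcp_nopush on;
    access_log off;                 # don't burn HDD I/O logging static hits

    # keep descriptors of hot files open instead of re-opening per request
    open_file_cache max=10000 inactive=5m;
    open_file_cache_valid 10m;

    location ~* \.(jpe?g|png|gif)$ {
        expires 30d;                # let browsers/proxies cache aggressively
        add_header Cache-Control "public";
    }
}
```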
Try the Caddy web server. I'm curious to see how it handles the traffic.
I had a website doing about 30-50k hits per day, and an Atom pushed it with MariaDB on Apache2, but the load was near 100%, and that was mostly because of MariaDB. So I'd guess it would be no problem for static serving.
Serving static images is fairly light on both CPU and I/O. Couple that with nginx and you shouldn't have any issues at all.
PHP based? Was it snappy?
Storing too many files/subdirectories under the same directory will harm your system performance heavily. So the way you structure the storage is important. Other than that, I think an Atom will do fine.
So how would you structure the storage?
Normally you look at something like:
A/BC/blah.jpg
Nope, "normally" in this case it's
b/bl/blah.jpg
...and disable hotlinking for all other requests.
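In nginx that kind of referer-based hotlink control is only a few lines; a sketch, assuming the main site is `yoursite.example` (a placeholder):

```nginx
location ~* \.(jpe?g|png|gif)$ {
    # "none" allows empty referers so direct visits and some proxies still work
    valid_referers none blocked yoursite.example *.yoursite.example;
    if ($invalid_referer) {
        return 403;
    }
}
```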
Let's assume that you have 1M image files which are named in 6 digits from "000000" to "999999." If you put them all in a single directory, on average it needs to scan 500K (half 1M) file entries to look up the file you want, assuming the file system scans linearly.
Now let's assume you use a 2-level directory structure instead, with the 1st level directory based on the first 3 digits of the file name, and 2nd based on the last 3 digits of the file name. At most 1K subdirectories/files under any single directory. The image files are stored under the 2nd level subdirectories (the file "012345" is stored at /012/345/012345, for example). To look up a certain file, on average you need to scan 500 (half 1K, 1st level) + 500 (half 1K, 2nd level) = 1K file entries, which is 1/500 of the previous case.
So if I need to store so many files like you do, I would try to spread the files evenly among multi-level directories, with no more than 1K files/subdirectories under any single directory.
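The layout described above can be computed from the file name alone; a small sketch (the root path is arbitrary):

```python
import os

def shard_path(root: str, name: str) -> str:
    """Map a 6-digit image name to the two-level layout from the post:
    "012345" -> <root>/012/345/012345."""
    return os.path.join(root, name[:3], name[3:], name)

def store(root: str, name: str, data: bytes) -> None:
    """Write an image into its shard, creating directories as needed."""
    path = shard_path(root, name)
    os.makedirs(os.path.dirname(path), exist_ok=True)
    with open(path, "wb") as f:
        f.write(data)
```

With 6-digit names this caps every directory at 1,000 entries, matching the 1K limit suggested above.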
That assumption would be the problem, since we haven't had "linear" filesystems for decades now.
Most (all?) modern filesystems use some sort of B-tree or the like. I think ext2 was the last "modern" filesystem still stuck with linear directory scans, and I haven't used ext2 for anything other than boot partitions in many years. If someone is indeed using ext2 at this juncture, that points to a problem beyond storing millions of files per directory.
Putting millions of files in a directory and picking out a single one, if you know the exact name of it, will not be a problem. Managing the enormous mess, trying to back up those files, or move them around will be another story, but that's not the discussion here.
true!
It was OK; page load was 1-5 s. You can't expect an Atom to run MariaDB + Apache + PHP and handle all that traffic without hiccups. The PHP web app could have been better optimized, but it was easier to migrate to a stronger dedicated server than to optimize the app.
So basically these Atom CPUs are "useless" for PHP websites?
A bit of a generalization. The most powerful server you've ever experienced will be brought to its knees by PHP websites that are poorly optimized and very popular.
That being said, 99% of dynamic websites really do not need to be dynamic. Most of our recent customer education has been getting people to reconfigure their CMSes to serve content that ends up being static as static pages. There's no need to keep re-executing PHP for content that hasn't changed: execute it once, save the output, serve the output, and have a mechanism to invalidate that output when it should change.
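That execute-once/serve-static pattern can be done at the web-server layer; a sketch using nginx's FastCGI cache (the zone name, socket path and TTLs are illustrative, not a drop-in config):

```nginx
http {
    # cache rendered PHP output on disk, keyed by URL
    fastcgi_cache_path /var/cache/nginx levels=1:2 keys_zone=CMS:10m inactive=60m;

    server {
        location ~ \.php$ {
            fastcgi_pass unix:/run/php/php-fpm.sock;   # placeholder socket
            include fastcgi_params;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;

            fastcgi_cache CMS;
            fastcgi_cache_key "$scheme$request_method$host$request_uri";
            fastcgi_cache_valid 200 10m;   # PHP re-executes at most every 10 min
            add_header X-Cache $upstream_cache_status;  # HIT/MISS, for debugging
        }
    }
}
```

Invalidation is the part each CMS has to handle itself, e.g. by purging the cache or versioning URLs when content changes.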