
I'm stupid in physics. Help me out. (bandwidth vs. download speed)

OhJohnOhJohn Member
edited April 2020 in Help

So I'm always looking at the bandwidth speed offered for a VPS or dedi, but then I started thinking (or tried to, correct me where I'm wrong):

I mostly host small files, some really small, talking about one digit KB file sizes.

So a 1gbps connection would only be faster compared to a 100mbps if the server would have to handle lots of those in parallel, right?
But if I'm only using e.g. 5mbps of the bandwidth on average, the actual download speed would be the same for both port speed options.

And then I should rather look at I/O to see if the dedi or VPS is even able to deliver files from disk fast enough to use the connection speed?

Any errors in my thoughts?

(And then again, it's probably more important whether the bandwidth is shared or dedicated, and what the latency to a user's region is.)
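A rough way to sanity-check this, as a toy model (the RTT and file size below are made-up numbers, not measurements):

```python
# Toy model of one small-file download: connection setup + time on the wire.
# All numbers are illustrative assumptions.

def transfer_time_ms(file_bytes: int, port_mbps: float, rtt_ms: float) -> float:
    """One TCP handshake round trip + one request/response round trip + wire time."""
    wire_ms = file_bytes * 8 / (port_mbps * 1_000_000) * 1000
    return 2 * rtt_ms + wire_ms

# A 5 KB file with 30 ms RTT:
print(round(transfer_time_ms(5_000, 100, 30), 2))   # 100 Mbps port -> 60.4 ms
print(round(transfer_time_ms(5_000, 1000, 30), 2))  # 1 Gbps port   -> 60.04 ms
```

In this sketch the round trips contribute 60 ms either way; the port speed only changes the last fraction of a millisecond.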

Comments

  • FAT32FAT32 Administrator, Deal Compiler Extraordinaire

    I think this is more or less correct. You can also make use of an in-memory cache to make it perform even better.

    Alternatively, a CDN might be a better choice for hosting those static files; it can also serve different regions effectively. I would recommend BunnyCDN; perhaps you can try it out.

  • Some of the factors that determine your min and max speed:

    • File Size
    • Disk I/O
    • Port Speed
    • Actual Throughput of your server during peak/non-peak

    But the actual download speed also depends on:

    • User's Last Mile Connection Speed
    • Latency/Speed from client to your server
  • rcxbrcxb Member
    edited April 2020

    @OhJohn said:
    I mostly host small files, some really small, talking about one digit KB file sizes.

    So a 1gbps connection would only be faster compared to a 100mbps if the server would have to handle lots of those in parallel, right?

    Not really. A connection that is 10x faster can potentially send files of any given size in 1/10th the time. EXCEPT you're right that with very, very small files in the real world... latency probably dominates the total transaction time, so you should indeed be looking at that first, yes.

    And then I should rather look at I/O to see if the dedi or VPS is even able to deliver files from disk fast enough to use the connection speed?

    If your files are so small, they'll be cached in memory upon first access, and the disk doesn't need to be touched again. Use the noatime mount option to be sure... UNLESS these are dynamically generated, and you really meant you need lots of disk IO for the database to generate them...

  • FHRFHR Member, Host Rep

    At 1kB files, you'll be bottlenecked by your CPU long before you can reach 1Gbps average.

  • @rcxb said:
    Not really. A connection that is 10x faster can potentially send files of any given size in 1/10th the time. EXCEPT you're right that with very, very small files in the real world... latency probably dominates the total transaction time, so you should indeed be looking at that first, yes.

    That's where I was stuck with my physics theories (I mean the first sentence). And then, thinking practically about your second sentence (regarding latency), I'd say a closer server location with better latency but a lower connection speed is better than a location much farther away with a better connection speed but higher latency.

    I'm actually working on a self-built CDN (with about 15 PoPs set up right now) for a project, as no CDN provider can provide exactly what I need (BunnyCDN from @BunnySpeed is actually the closest and I would love to use them instead, but even they only fulfill 95% of my requirements; still a lot more than all the other CDN providers).

  • OhJohnOhJohn Member
    edited April 2020

    I mean in physics: why is a 1000 mbit/s connection faster than a 100 mbit/s connection in sending one 1kb file? Is it really that the first one needs only 1/1000000 s to send it while the slower one needs 1/100000 s? This does not seem like much of a real-life / felt difference.

    @FHR: you are right in pointing out another bottleneck. I just calculated that I'll probably only be able to serve 60 mbit/s software- or CPU-wise (even though, yes, I use memory to speed things up).
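    For reference, the raw wire time can be computed directly (a minimal sketch, taking 1 kB as 1000 bytes and ignoring latency and protocol overhead):

```python
# Pure serialization delay for one file, ignoring latency and overhead.
def serialization_us(file_bytes: int, mbps: float) -> float:
    # Result is in microseconds, since 1 Mbps moves 1 bit per microsecond.
    return file_bytes * 8 / mbps

print(serialization_us(1_000, 100))   # 1 kB at 100 Mbps -> 80.0 µs
print(serialization_us(1_000, 1000))  # 1 kB at 1 Gbps   -> 8.0 µs
```

    Either way the wire time is tens of microseconds, which is why no human would ever feel the difference on a single tiny file.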

  • OhJohn said: I mean in physics: why is a 1000 mbit/s connection faster than a 100 mbit/s connection in sending one 1kb file? Is it really that the first one needs only 1/1000000 s to send it while the slower one needs 1/100000 s? This does not seem like much of a real-life / felt difference.

    It doesn't work like that.
    You can think of them as car lanes: a road with 4 lanes vs. 40 lanes. Having 40 lanes doesn't mean you can run a car on 1 lane (transfer a 1MB file) faster. Sometimes the 4-lane road (100mbps) is just part of a 40-lane road (1gbps) that has been limited to 4 lanes, so speed-wise they may give you the same result.

  • OhJohnOhJohn Member
    edited April 2020

    @khuongcomputer: yes, that's what I thought when deciding what speed I need. For small files the question is rather one of concurrent requests. And if those are not too high (with 1 KB files it would take roughly 12,500 requests per second to hit 100 mbit/s), 100 mbit/s shouldn't be too bad compared to 1 gbps.

    I'm just asking because I'm starting to move into regions like India, South Korea, South Africa and the like, where connection speed and bandwidth are handled really differently than in e.g. the US or Europe.
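    The saturation math, sketched out (assuming 1 kB = 1000 bytes and ignoring protocol overhead):

```python
# Requests per second needed to fill a port with fixed-size responses.
def reqs_per_sec(port_mbps: float, file_bytes: int) -> float:
    return port_mbps * 1_000_000 / (file_bytes * 8)

print(reqs_per_sec(100, 1_000))   # 12500.0 req/s to fill 100 Mbps with 1 kB files
print(reqs_per_sec(1000, 1_000))  # 125000.0 req/s to fill 1 Gbps with 1 kB files
```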

  • OhJohn said: I'm just asking because I'm starting to move into regions like India, South Korea, South Africa and the like, where connection speed and bandwidth are handled really differently than in e.g. the US or Europe.

    Nope, should be same.

  • dustincdustinc Member, Patron Provider, Top Host

    Hello,

    Based on what you described, you will definitely be bottlenecked by the CPU before anything else. That said, I would recommend looking for a server with high single-threaded performance.

  • jsgjsg Member, Resident Benchmarker
    edited April 2020

    @OhJohn

    Bandwidth typically says something about the server, while download speed typically says something about the user's end.

    Bandwidth tells you how fast your data can get into and out of the server. You can think of it as a kind of pipe with a smaller (e.g. 20 Mb/s) or a larger (e.g. 1 Gb/s) diameter. While we're at it: "traffic" or "data volume" is how much data is actually transported through that "pipe".

    Latency is a different beast with many meanings depending on the context. Generally speaking, it is a time delay. Within a switch, for example, latency typically means how long a network packet stays in the switch, but in your context it usually means how much time is needed between your server and some other location. Similarly, in software, latency can have lots of meanings.

    One (of many) variables when sending/receiving data is the size of the data packets sent/received, largely for two reasons: (a) the network packets into which your data are put, and (b) the (sometimes related) way your software generates those data packets.

    As for (a): network packets carry some overhead, e.g. headers, in fact at multiple layers. Secondly, network packets are tied to processing events, and those events carry build-up and tear-down costs. That's why sending/receiving, say, 1000 packets with 12 bytes of payload each is much more expensive than 10 packets with 1200 bytes of payload each.
    As for (b): if your software must access slow or remote storage and has lots of client connections, it obviously needs more time to generate the data to be sent. Plus, as with the network, build-up and tear-down also play a role here, e.g. thread switching.
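    In rough numbers (assuming ~40 bytes of TCP/IPv4 headers per packet; real overhead varies with options and the link layer):

```python
# Header overhead for the same 12,000 B of payload, split two different ways.
HEADER_BYTES = 40  # assumed TCP + IPv4 headers, no options

def bytes_on_wire(packets: int, payload: int) -> int:
    return packets * (payload + HEADER_BYTES)

print(bytes_on_wire(1000, 12))  # 1000 packets x 12 B   -> 52000 bytes on the wire
print(bytes_on_wire(10, 1200))  # 10 packets x 1200 B   -> 12400 bytes on the wire
```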

    So the question of how wide your network pipe is, is only one among many issues.

    And than I should rather look for i/o to see if the dedi or vps is even able to deliver files fast enough from disk to use the connection speed?

    What you actually want is a well-balanced solution, that is, one where the dedi's or VPS's config (e.g. processor, memory, type of storage, etc.) is good enough and matches your needs, and the same for the network part.
    Example: a 64 core Zen dedi with 128 GB memory and very fast NVMe and a 10 Gb/s pipe won't do you any good if you only need to respond to 200 http requests per second by generating and sending 2 MB of data.

  • rcxbrcxb Member

    @khuongcomputer said:

    OhJohn said: Is it really that the first one needs only 1/1000000 s to sent it while the slower one needs 1/100000 s?

    It doesn't work like that.

    Actually, he's right and now you're just confusing him. It absolutely DOES work like this in the simple case. There are, however, several complicating factors (TCP 3-way handshake, TCP sliding window, customer ISP speeds, etc.).

    Having 40 lanes doesn't mean you can run a car on 1 lane (transfer 1MB file) faster.

    That's a terrible analogy because a 1Gbps network connection is a single "lane" and you absolutely CAN use all of it to transfer a single file at (nearly) 1Gbps, assuming every segment along the way can sustain 1Gbps speeds.

    A more useful car analogy would be:

    • There's a super-fast interstate freeway (internet backbone).
    • You choose between a 50MPH on-ramp or a 500MPH on-ramp (your server bandwidth).
    • It's unlikely any single one of your customers has a 500MPH off-ramp (Cable/DSL/FIOS/Cellular ISPs). So, it's more likely you'll make good use of your high-speed on-ramp with multiple simultaneous customers.
    • With any delivery (file transfer), you have to FIRST send an EMPTY truck back and forth to ask them if it is okay to send them a package now (TCP 3-way handshake).
    • They will ALWAYS start by saying you can only send the first few (e.g. 10) boxes. (TCP initial congestion window)
    • After each successful delivery, they'll let you deliver several more boxes than in the previous trip. (TCP slow start)
    • But if any of the boxes gets lost or damaged, they'll tell you to go back to half as many per truck. (TCP multiplicative decrease on loss)
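    The truck analogy in a toy calculation (assuming an initial congestion window of 10 segments of ~1460 B each, doubling every round trip, no losses; real stacks differ):

```python
# Toy slow-start model: round trips needed to deliver a file, no losses assumed.
def rtts_to_send(file_bytes: int, mss: int = 1460, initcwnd: int = 10) -> int:
    segments = -(-file_bytes // mss)   # ceiling division: packets needed
    cwnd, sent, rtts = initcwnd, 0, 0
    while sent < segments:
        sent += cwnd
        cwnd *= 2      # window doubles each round trip during slow start
        rtts += 1
    return rtts

print(rtts_to_send(5_000))      # small file fits in the first window -> 1 RTT
print(rtts_to_send(1_000_000))  # 1 MB needs several round trips -> 7 RTTs
```

    This is why tiny files are pure round-trip-bound: they never get past the first window, so the port speed barely matters.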
  • khuongcomputerkhuongcomputer Member
    edited April 2020

    rcxb said: That's a terrible analogy because a 1Gbps network connection is a single "lane" and you absolutely CAN use all of it to transfer a single file at (nearly) 1Gbps, assuming every segment along the way can sustain 1Gbps speeds.

    He's transferring small files (per his first post) which use ~5mbps, so it doesn't work faster on a 1gbps connection. If it's big files then it's another story, if the download can max out the connection. But with 1MB files, that's not likely to happen.

    Plus, if you want to count everything that matters: a VPS on 1gbps shared vs. a VPS on 100mbps shared from a 10gbps node, who will win?

  • @khuongcomputer said:

    rcxb said: That's a terrible analogy because a 1Gbps network connection is a single "lane" and you absolutely CAN use all of it to transfer a single file at (nearly) 1Gbps, assuming every segment along the way can sustain 1Gbps speeds.

    He's transferring small files (per his first post) which use ~5mbps, so it doesn't work faster on a 1gbps connection. If it's big files then it's another story, if the download can max out the connection. But with 1MB files, that's not likely to happen.

    Plus, if you want to count everything that matters: a VPS on 1gbps shared vs. a VPS on 100mbps shared from a 10gbps node, who will win?

    Whoever has the lowest RTT latency.
