
Storage KVM-Romania-1gbps-DDos protection included-Start from 1.66 eur/1tb limited offer


Comments

  • jsg Member, Resident Benchmarker
    edited November 2019

    @TimboJones said:
    Did you ever run jsg's benchmark app on any of your servers? If yes, did the numbers seem as expected or were they nonsensical? For the Linux servers I ran it on, the disk numbers made no sense at all.

    You really know how to stubbornly unnerve someone.

    • That's why I do my tests on FreeBSD whenever possible. Linux cheats with aggressive caching. To break that I'd need to use test files of at least 4 GB - not exactly a smart thing to do on a VPS which might have no more than 5 GB of disk in total.
    • In fact, my disk test routine isn't complicated. Here's an overview of what happens:

    Test file size: 256 MB <- (16K * slice) unless set differently by cmdline param.
    Slice size (unless ...): 16 KB

    WriteSeqTest
       do 16K times:
           write 16 KB sequentially (no file seek)
           every 128th time Sleep
    
    WriteRndTest
       do 16k times:
           choose random slice position (on sector boundary)
           prepare slice full of random data
           write slice to disk (with file seek)  
           every 128th time Sleep
    
    ReadSeqTest
       do 16K times
           read 16 KB sequentially (no file seek)
           every 128th time Sleep
    
    ReadRndTest
       do 16k times
           choose random slice position (on sector boundary)
           read slice from disk (with file seek)  
           every 128th time Sleep
    

    There are just 3 points that are different from usual testing:

    • I use real randomness (via XS128p, a widely used, good-quality PRNG) both to choose a random sector for random reading and writing (RndRd and RndWr) and to create the data to be written (all tests). The latter helps to break some caching trickery by the OS. And we want to learn about the disk performance rather than about the OS caching, right?
    • My timing is (a) microsecond precision, and (b) only timing the actual disk interaction (pretty much all others use less granular timing and time everything, incl. their preparation, etc).
    • All the above tests sleep for a bit (typ. 5 - 20 ms) after every 128 slices so as not to kidnap the whole node but rather to be a good neighbour.
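    The loop structure described above can be sketched in Python. This is an illustration only, not jsg's actual code: the xorshift128+ step is an assumed variant, the seed is arbitrary, and the file is scaled down to 4 MB so it runs anywhere.

    ```python
    import os
    import tempfile
    import time

    SLICE = 16 * 1024   # 16 KB slice, as in the description above
    SLICES = 256        # scaled down from 16K (4 MB file) for illustration
    MASK64 = 0xFFFFFFFFFFFFFFFF

    def xs128p(state):
        """One xorshift128+ step (assumed variant); returns (value, new_state)."""
        s1, s0 = state
        s1 = (s1 ^ (s1 << 23)) & MASK64
        s1 ^= s1 >> 17
        s1 ^= s0 ^ (s0 >> 26)
        return (s0 + s1) & MASK64, (s0, s1)

    def rnd_read_test(path, slices=SLICES):
        """Random-read pass: time only the seek+read, sleep every 128 slices."""
        state = (0x9E3779B97F4A7C15, 0xBF58476D1CE4E5B9)  # arbitrary seed
        io_time = 0.0
        with open(path, "rb") as f:
            for i in range(slices):
                r, state = xs128p(state)
                pos = (r % slices) * SLICE       # slice-aligned position
                t0 = time.perf_counter()         # time only the disk interaction
                f.seek(pos)
                f.read(SLICE)
                io_time += time.perf_counter() - t0
                if i % 128 == 127:               # be a good neighbour
                    time.sleep(0.005)
        return slices * SLICE / io_time / 1e6    # MB/s

    # Random data in the test file defeats compression/dedup tricks.
    with tempfile.NamedTemporaryFile(delete=False) as tf:
        tf.write(os.urandom(SLICE * SLICES))
        path = tf.name
    mbps = rnd_read_test(path)
    os.unlink(path)
    ```

    At this file size, Linux will mostly serve the reads from the page cache, which is exactly the problem described above; a real run needs a file much larger than RAM, or direct I/O.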

    Also, I KNOW that my disk tests seem to not make sense sometimes but that's not due to a flaw in my testing but rather to the fact that most people are used to utterly meaningless numbers from primitive benchmarks that get tricked by the OS and sometimes the disk controllers.
    I do not care. My goal wasn't to replace all those funny meaningless "benchmarks" people use. My aim was to build and have a benchmark that provides relevant and true information.

    You prefer other tests? No problem, just use them and enjoy the numbers that (seem to) make sense.

    Thanked by poisson, isunbejo
  • poisson Member
    edited November 2019

    @jsg said:

    • That's why I do my tests on FreeBSD whenever possible.

    Also, I KNOW that my disk tests seem to not make sense sometimes but that's not due to a flaw in my testing but rather to the fact that most people are used to utterly meaningless numbers from primitive benchmarks that get tricked by the OS and sometimes the disk controllers.

    I wanted to respond with these, but I decided against it because there's no point casting pearls before swine, or, if you prefer the Mandarin equivalent, playing the zither to a cow.

    You have already explained your methodology several times over the last few weeks (at least in the few weeks I have been paying attention), and I understand the context, even if I don't necessarily grasp the technicalities; you have been clear enough for the most part. It is fine not to understand the method as well as you do, but calling it "nonsensical" when you don't understand it is rubbish.

    I am a social scientist and I don't understand the calculations behind the theory of relativity; whatever numbers I throw at it look "nonsensical" to me. So the problem is with the theory of relativity?

    Thanked by jsg
  • @jsg said:

    @TimboJones said:
    Did you ever run jsg's benchmark app on any of your servers? If yes, did the numbers seem as expected or were they nonsensical? For the Linux servers I ran it on, the disk numbers made no sense at all.

    You really know how to stubbornly unnerve someone.

    • That's why I do my tests on FreeBSD whenever possible. Linux cheats with aggressive caching. To break that I'd need to use test files of at least 4 GB - not exactly a smart thing to do on a VPS which might have no more than 5 GB of disk in total.
    • In fact, my disk test routine isn't complicated. Here's an overview of what happens:

    Interesting discussion.

    So which is better for proxy caches with millions of static files like images / JSON - FreeBSD or Linux? Please enlighten me.

  • jsg Member, Resident Benchmarker
    edited November 2019

    @isunbejo said:
    So which is better for proxy caches with millions of static files like images / JSON - FreeBSD or Linux? Please enlighten me.

    Yes and no. First, it depends on the application and how it handles files (and possibly does caching itself). Second, the goals might seem similar but they are not really: for testing a disk's performance one wants to break OS caching, no matter whether that caching is good or bad. The reason I can't fully break Linux's (quite aggressive) caching is my small dataset size, which needs to be small because the test candidates often are small (wrt disk, in my case).
    Third, caches are a tricky thing, and in the case of static file caches with millions of files it's usually not the disk that is the primary problem but (a) finding them fast, and (b) caching them smartly, which (if well done) is quite a complicated task.
    Fourth, disk and file caching has multiple levels, and I wouldn't say that this or that OS is better for it without knowing all the relevant details.

    I'd like to have given a simpler answer but then I would have told BS. The best simple advice I have to offer is (a) to hope that your proxy cache software is designed by smart, experienced professionals, and (b) to ask them for advice.

    In any case I don't see a reason to consider Linux's or FreeBSD's disk and file handling generally better (except maybe that Linux has more file systems available), but for my benchmark FreeBSD happens to be considerably better. For your proxy cache or many other applications Linux might be better.

    Thanked by poisson, uptime, isunbejo
  • Hello, is there a discount on Black Friday?

  • DP Administrator, The Domain Guy

    @152917 said:
    Hello, is there a discount on Black Friday?

    We'll know when the day comes.

    And congrats on your 2nd comment :smiley:

  • @152917 said:
    Hello, is there a discount on Black Friday?

    I heard there are some at lowendbox.com

    Thanked by vyas11
  • @poisson said:

    @152917 said:
    Hello, is there a discount on Black Friday?

    I heard there are some at lowendbox.com

    Thanks. They look awesome.

  • sjkme Member
    edited November 2019

    @vyas11 said:

    @poisson said:

    @152917 said:
    Hello, is there a discount on Black Friday?

    I heard there are some at lowendbox.com

    Thanks. They look awesome.

    BF is just around the corner; if you can wait till then, you might get even awesomer deals.

    Thanked by vyas11
  • @sjkme said:

    @vyas11 said:

    @poisson said:

    @152917 said:
    Hello, is there a discount on Black Friday?

    I heard there are some at lowendbox.com

    Thanks. They look awesome.

    BF is just around the corner; if you can wait till then, you might get even awesomer deals.

    All the CC goodness and goodies. Gotta love 'em.


    I am sure it will be summer somewhere in the world this Black Friday for Summer hosts to catch some sunshine.

  • uptime Member
    edited November 2019

    @vyas11 said:

    @poisson said:

    @152917 said:
    Hello, is there a discount on Black Friday?

    I heard there are some at lowendbox.com

    Thanks. They look awesome.

    Any benchmarks? How is their IPv6 support?

    Also add more stock ok I need it bro

    Thanked by dahartigan, DP, vyas11
  • @jsg said:

    @TimboJones said:
    Did you ever run jsg's benchmark app on any of your servers? If yes, did the numbers seem as expected or were they nonsensical? For the Linux servers I ran it on, the disk numbers made no sense at all.

    You really know how to stubbornly unnerve someone.

    • That's why I do my tests on FreeBSD whenever possible. Linux cheats with aggressive caching. To break that I'd need to use test files of at least 4 GB - not exactly a smart thing to do on a VPS which might have no more than 5 GB of disk in total.
    • In fact, my disk test routine isn't complicated. Here's an overview of what happens:

    Test file size: 256 MB <- (16K * slice) unless set differently by cmdline param.
    Slice size (unless ...): 16 KB

    WriteSeqTest
       do 16K times:
           write 16 KB sequentially (no file seek)
           every 128th time Sleep
    
    WriteRndTest
       do 16k times:
           choose random slice position (on sector boundary)
           prepare slice full of random data
           write slice to disk (with file seek)  
           every 128th time Sleep
    
    ReadSeqTest
       do 16K times
           read 16 KB sequentially (no file seek)
           every 128th time Sleep
    
    ReadRndTest
       do 16k times
           choose random slice position (on sector boundary)
           read slice from disk (with file seek)  
           every 128th time Sleep
    

    There are just 3 points that are different from usual testing:

    • I use real randomness (via XS128p, a widely used, good-quality PRNG) both to choose a random sector for random reading and writing (RndRd and RndWr) and to create the data to be written (all tests). The latter helps to break some caching trickery by the OS. And we want to learn about the disk performance rather than about the OS caching, right?
    • My timing is (a) microsecond precision, and (b) only timing the actual disk interaction (pretty much all others use less granular timing and time everything, incl. their preparation, etc).
    • All the above tests sleep for a bit (typ. 5 - 20 ms) after every 128 slices so as not to kidnap the whole node but rather to be a good neighbour.

    Also, I KNOW that my disk tests seem to not make sense sometimes but that's not due to a flaw in my testing but rather to the fact that most people are used to utterly meaningless numbers from primitive benchmarks that get tricked by the OS and sometimes the disk controllers.
    I do not care. My goal wasn't to replace all those funny meaningless "benchmarks" people use. My aim was to build and have a benchmark that provides relevant and true information.

    You prefer other tests? No problem, just use them and enjoy the numbers that (seem to) make sense.

    I know you don't care, stubborn and hostile as you are; that's why I gave up rather than feeding back the results. Basically, on a shit-ass Cloud at Cost server with undeniably bad disk I/O, your tool reported random reads and writes over 1 GB/s. Other tools put it at 10-60 MB/s. If you know Cloud at Cost, you know that range to be true.

    On other servers with NVMe disks it would report much less, with Cloud at Cost servers benching higher than ImpactVPS. Anyone who has an NVMe from ImpactVPS knows they are fast.

    I went to spin up some Cloud at Cost FreeBSD servers to compare, and even to let you take them over and debug if they also showed crazy disk numbers. But netboot.xyz wasn't working and the CaC template was FreeBSD 10, too old. Seeing how slow it installs, how slow apt updates, the terrible ioping latency - these are things you don't experience when you're handed a server and just run benchmarks, so you have nothing else to sanity-check your numbers against.

    You were touting that your benchmark was more accurate and tests the actual drive speeds. I told you before that random shouldn't be substantially better than sequential; either you're doing it wrong, or the hardware is doing it wrong. Why that doesn't make sense to you puzzles me.

    So I was curious whether anyone else had run your benchmark and with what results, and it sounds like a big no.
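    The ioping-style spot check mentioned above can be approximated in a few lines of Python. This is a sketch under stated assumptions: it reads through the OS page cache (so small files give optimistic numbers), and the file size and request count are illustrative.

    ```python
    import os
    import random
    import statistics
    import tempfile
    import time

    SLICE = 4096  # ioping defaults to 4 KB requests

    def read_latencies(path, count=200):
        """Per-request latency, in microseconds, of random 4 KB reads."""
        size = os.path.getsize(path)
        lats = []
        with open(path, "rb") as f:
            for _ in range(count):
                pos = random.randrange(size // SLICE) * SLICE  # 4 KB aligned
                t0 = time.perf_counter()
                f.seek(pos)
                f.read(SLICE)
                lats.append((time.perf_counter() - t0) * 1e6)
        return lats

    # 1 MB throwaway test file; real checks target the disk under test.
    with tempfile.NamedTemporaryFile(delete=False) as tf:
        tf.write(os.urandom(SLICE * 256))
        path = tf.name
    lats = read_latencies(path)
    os.unlink(path)
    print(f"min {min(lats):.1f} us / median {statistics.median(lats):.1f} us"
          f" / max {max(lats):.1f} us")
    ```

    A median in the milliseconds on an idle NVMe box is the kind of red flag that throughput-only numbers hide.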

  • @poisson said:

    @jsg said:

    • That's why I do my tests on FreeBSD whenever possible.

    Also, I KNOW that my disk tests seem to not make sense sometimes but that's not due to a flaw in my testing but rather to the fact that most people are used to utterly meaningless numbers from primitive benchmarks that get tricked by the OS and sometimes the disk controllers.

    I wanted to respond with these but I decided against it because there's no point casting pearls before swines, or if you prefer the Mandarin equivalent, to play the zither to a cow.

    You have already explained your methodology several times over the last few weeks (at least in the last few weeks I have been paying attention) and I understand the context, although I don't necessarily grasp the technicalities, but you have been clear enough for the most part. It is fine not to understand the method as well as you do, but to call it "nonsensical" when you don't understand is rubbish.

    I am a social scientist and I don't understand the calculations behind the theory of relativity and whatever numbers I throw at it, the numbers are "nonsensical" to me. So, the problem is with the theory of relativity?


    You must have a medicine cabinet full of snake oil with that attitude. You missed the verify part of "trust but verify".

    The test results were nonsensical (not just "to me") and the test method is definitely questionable. I've raised this point before.

  • jsg Member, Resident Benchmarker
    edited November 2019

    @TimboJones

    • My benchmark does not specifically test I/O latency. That's in part due to its declared goal of being a good neighbour on a VPS. One may or may not consider I/O latency important for testing. My personal take is that it's largely nonsensical to measure disk latency on a VPS because, especially with SSDs and NVMe, it changes from moment to moment on a busy node. Plus, at the end of the day I want to know what I can expect from a disk. But I respect your view that I/O latency matters.

    • Meanwhile I've had quite a few providers ask me to test one or some of their VPS products. In fact, I know of many more people who like and value my benchmark than who don't.

    This is an offer thread of cociu's. I took the liberty of responding to your newest outburst, but I suggest we respect the fact that this thread's main topic is cociu's offer and stop this side discussion now.

    Thanked by cociu
  • @jsg said:

    @isunbejo said:
    So which is better for proxy caches with millions of static files like images / JSON - FreeBSD or Linux? Please enlighten me.

    Yes and no. First, it depends on the application and how it handles files (and possibly does caching itself). Second, the goals might seem similar but they are not really: for testing a disk's performance one wants to break OS caching, no matter whether that caching is good or bad. The reason I can't fully break Linux's (quite aggressive) caching is my small dataset size, which needs to be small because the test candidates often are small (wrt disk, in my case).
    Third, caches are a tricky thing, and in the case of static file caches with millions of files it's usually not the disk that is the primary problem but (a) finding them fast, and (b) caching them smartly, which (if well done) is quite a complicated task.
    Fourth, disk and file caching has multiple levels, and I wouldn't say that this or that OS is better for it without knowing all the relevant details.

    I'd like to have given a simpler answer but then I would have told BS. The best simple advice I have to offer is (a) to hope that your proxy cache software is designed by smart, experienced professionals, and (b) to ask them for advice.

    In any case I don't see a reason to consider Linux's or FreeBSD's disk and file handling generally better (except maybe that Linux has more file systems available), but for my benchmark FreeBSD happens to be considerably better. For your proxy cache or many other applications Linux might be better.

    This proxy application is very simple: just Nginx as a reverse proxy, something like a proxy for WSUS (Windows Server Update Services). It works well now; it can handle 5k users with the XFS file system on Linux.

    Now I want to do the same thing but for small cache files like images / JSON API responses. XFS is good for large files; for small files the recommendation is ext4 / btrfs. My focus in this discussion is memory efficiency on OSes like FreeBSD / Linux - which one is good for this case?

  • jsg Member, Resident Benchmarker
    edited November 2019

    As it so happens I just finished a backup operation that I had to do yesterday.

    3 storage VPS were involved, for a triple backup. Two of those are with reputable providers and not dirt cheap; one is about 5 ms away from the VPS I back up, the other about 50 ms. Those I did yesterday, leaving aside the @cociu storage VPS because I didn't want to lose time in case of problems with it.

    Today I did my 3rd backup, the one to the cociu storage VPS. Result: very positive. I'm impressed. The storage VPS is about 20 ms away from the VPS to be backed up, and I saw about 200 Mb/s when pushing my files to it. That cociu storage VPS is about as fast as the two far more expensive storage VPS from reputable providers.

    So, I cannot confirm that cociu's storage VPS is sh_tty or has major problems. At least not currently, and not on the node my VPS is on.

    @isunbejo said:
    This proxy application is very simple: just Nginx as a reverse proxy, something like a proxy for WSUS (Windows Server Update Services). It works well now; it can handle 5k users with the XFS file system on Linux.

    Now I want to do the same thing but for small cache files like images / JSON API responses. XFS is good for large files; for small files the recommendation is ext4 / btrfs. My focus in this discussion is memory efficiency on OSes like FreeBSD / Linux - which one is good for this case?

    In my personal view FreeBSD's UFS2 is better than ext2/3/4. Others may have a different opinion. The main reason I think you might be better off with Linux is that Linux offers/supports more file systems.

    Either way, the file system should not be a major concern, because with what you describe (a) the (many) connections, and (b) cache management will be the bottleneck. But I think that with nginx you are well served (no matter whether on Linux or FreeBSD).
    One certainly could do better, even considerably, with a specialized application (one that might need to be designed and implemented), but for what you seem to need that would be overkill.

    So, my advice is to stay on the OS you are on now (Linux, if I'm not mistaken), stick with its standard file system (because not every available file system supports all the needed functionality - keep that in mind!), and optimize and fine-tune your nginx config.
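    For the small-file proxy-cache case discussed above, the nginx tuning usually comes down to proxy_cache plus file-descriptor caching. A minimal sketch - paths, zone names, sizes, and the upstream name are placeholders, not recommendations:

    ```nginx
    http {
        # On-disk cache for many small objects; levels=1:2 spreads entries
        # across subdirectories so no single directory grows huge.
        proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=smallcache:64m
                         max_size=10g inactive=60m use_temp_path=off;

        # Cache open file descriptors: helps with millions of small files.
        open_file_cache          max=10000 inactive=20s;
        open_file_cache_valid    30s;
        open_file_cache_min_uses 2;

        server {
            listen 80;

            location / {
                proxy_pass        http://backend;   # assumed upstream name
                proxy_cache       smallcache;
                proxy_cache_valid 200 301 10m;
                add_header        X-Cache-Status $upstream_cache_status;
            }
        }
    }
    ```

    The X-Cache-Status header makes HIT/MISS ratios visible per request, which is the first thing worth checking before worrying about the file system underneath.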

    Thanked by isunbejo, uptime
  • stonesnake Member
    edited November 2019

    Hey brother, I came here again for help, because the ticket I sent has had no reply from the beginning.
    The machine I purchased has been unusable since the first day.
    Specifically, it cannot be pinged, I cannot reinstall the system or view the disk, and it doesn't even have a MAC address.
    It's been a week since my purchase; the ticket number is #542219.

    If this can't be solved, I can't accept a machine I can't use, and I can only apply for a refund.

  • DP Administrator, The Domain Guy

    @stonesnake said:
    Hey brother, I came here again for help, because the ticket I sent has had no reply from the beginning.
    The machine I purchased has been unusable since the first day.
    Specifically, it cannot be pinged, I cannot reinstall the system or view the disk, and it doesn't even have a MAC address.
    It's been a week since my purchase; the ticket number is #542219.

    If this can't be solved, I can't accept a machine I can't use, and I can only apply for a refund.

    Hope all goes well from here :)
