Looking for some early feedback on our first MVP: cloud services comparison website


Comments

  • @AnthonySmith said:
    Interesting, nice design however:

    > Is your provider not listed here? Let us know
    > In order to add a new provider, it must meet the following requirements:
    > Hourly billing
    > Public API
    > Docker support

    So you built a site for about 10 hosts in total? Interesting.

    Hi AnthonySmith, there are several reasons behind those requirements; some of them come from the fact that the benchmarking process is fully automated. You will probably ask "why Docker?", and the reason is that this is only the first step of this project. I will be posting news soon :)

  • AnthonySmith Member, Patron Provider

    No, I understand the Docker support; I just don't understand why you are limiting yourself to such a tiny percentage of all hosts, as that flies in the face of good comparison in itself.

    No doubt you will reveal your reason for only wanting to spend time making a site for the big elite few and then posting it on a site where none of them are represented, though :)

  • and is there any comparison of plans?

    nevermind, I found it.

  • @AnthonySmith said:
    No, I understand the Docker support; I just don't understand why you are limiting yourself to such a tiny percentage of all hosts, as that flies in the face of good comparison in itself.

    No doubt you will reveal your reason for only wanting to spend time making a site for the big elite few and then posting it on a site where none of them are represented, though :)

    As I mentioned before, we will soon start benchmarking every server frequently; the smallest instances will be benchmarked every day, maybe more than once per day (switching between regions). That's why we need an API and hourly billing to operate.

    We know that there are fewer than 20-30 (maybe more?) providers that meet these requirements, but for now we will focus on those.

    Maybe in the future we can consider adding more providers, and dedicated servers (with monthly billing), but for now we have neither the time nor the money :P
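
    To make that more concrete, an automated run has roughly the shape of the sketch below. This is a sketch only, not our real code; the provider API, endpoints, fields and image names are all invented for illustration.

    ```python
    # Sketch only: every endpoint, field and image name below is invented for
    # illustration; it is not the site's real benchmarking code.
    import subprocess
    import time

    import requests

    API = "https://api.example-provider.test/v1"      # hypothetical provider API
    HEADERS = {"Authorization": "Bearer YOUR_TOKEN"}   # placeholder credentials

    def run_remote(ip, command):
        """Run a command on the instance over SSH and return its output."""
        result = subprocess.run(
            ["ssh", "-o", "StrictHostKeyChecking=no", "root@" + ip, command],
            capture_output=True, text=True, check=True)
        return result.stdout

    def benchmark_plan(plan_id, region):
        # Hourly billing: we only pay for the hour the benchmark actually runs.
        instance = requests.post(
            API + "/instances", headers=HEADERS,
            json={"plan": plan_id, "region": region, "image": "ubuntu-docker"}).json()
        try:
            time.sleep(120)  # crude wait for boot; a real runner would poll the API
            # Docker support: the same benchmark image runs identically everywhere.
            return run_remote(instance["ip"], "docker run --rm example/benchmark")
        finally:
            # Destroy the instance right away so the cost stays at one billing hour.
            requests.delete(API + "/instances/" + instance["id"], headers=HEADERS)
    ```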

  • @dedicados said:
    and is there any comparison of plans?

    nevermind, I found it.

    We are also adding a comparison of whole providers; it will be ready soon.

    Thanked by 1: dedicados
  • AnthonySmith Member, Patron Provider

    Ok, fair enough. I don't know why you would spend time and money building an echo chamber, but it's always fun being proven wrong :)

    Thanked by 2: k0nsl, arielse
  • k0nsl Member

    I like your design and the name itself.

  • Could you make the single-core geekbench score independent of cost, and add the cost divider to the multi-core score only? Would be so much more accurate, and you can use the single core score just "for reference", not for final score calculation.
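
    Something like this is what I have in mind (just a sketch of the idea, not the site's actual formula; the numbers in the example are made up):

    ```python
    # Sketch of the suggested scoring: only the multi-core score gets the cost
    # divider; the single-core score is reported for reference only.
    def plan_score(single_core, multi_core, hourly_price):
        return {
            "single_core_reference": single_core,                # not ranked
            "multi_core_per_dollar": multi_core / hourly_price,  # ranking metric
        }

    # Example with made-up numbers: 1000 single-core, 8000 multi-core, $0.05/hour.
    print(plan_score(1000, 8000, 0.05))
    ```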

  • @teamacc said:
    Could you make the single-core geekbench score independent of cost, and add the cost divider to the multi-core score only? Would be so much more accurate, and you can use the single core score just "for reference", not for final score calculation.

    @teamacc I used to work at a video startup, and we transcoded thousands of videos. When you are working with transcoding, single-core performance is more important than multi-core performance, because it's not worth using more than 2-3 threads per task (normally you will use only one core per task).

    We are thinking about creating a "context" model, with several scores depending on the tasks the server is doing, so we can assign weights to every benchmark score depending on the context.
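
    Roughly what we mean by "context" (a sketch only; nothing is implemented yet, and the benchmark names and weights below are placeholders):

    ```python
    # Placeholder weights: each usage context weights the normalized benchmark
    # scores differently before they are combined into one number.
    CONTEXT_WEIGHTS = {
        "video-transcoding": {"single_core": 0.7, "multi_core": 0.2, "disk_io": 0.1},
        "web-server":        {"single_core": 0.3, "multi_core": 0.3, "network": 0.4},
    }

    def context_score(benchmarks, context):
        """Weighted sum of normalized benchmark scores for one usage context."""
        weights = CONTEXT_WEIGHTS[context]
        return sum(w * benchmarks.get(name, 0.0) for name, w in weights.items())

    # Example with made-up normalized scores (0-1 scale).
    print(context_score({"single_core": 0.9, "multi_core": 0.5, "disk_io": 0.7},
                        "video-transcoding"))
    ```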

  • @arielse said:

    @teamacc said:
    Could you make the single-core geekbench score independent of cost, and add the cost divider to the multi-core score only? Would be so much more accurate, and you can use the single core score just "for reference", not for final score calculation.

    @teamacc I used to work at a video startup, and we transcoded thousands of videos. When you are working with transcoding, single-core performance is more important than multi-core performance, because it's not worth using more than 2-3 threads per task (normally you will use only one core per task).

    We are thinking about creating a "context" model, with several scores depending on the tasks the server is doing, so we can assign weights to every benchmark score depending on the context.

    That sounds very weird. Video transcoding should be one of the tasks that scales quite well over multi-core (up to a point of course). 8 threads is most definitely not hurting my transcoding.

  • @teamacc said:

    @arielse said:

    @teamacc said:
    Could you make the single-core geekbench score independent of cost, and add the cost divider to the multi-core score only? Would be so much more accurate, and you can use the single core score just "for reference", not for final score calculation.

    @teamacc I used to work at a video startup, and we transcoded thousands of videos. When you are working with transcoding, single-core performance is more important than multi-core performance, because it's not worth using more than 2-3 threads per task (normally you will use only one core per task).

    We are thinking about creating a "context" model, with several scores depending on the tasks the server is doing, so we can assign weights to every benchmark score depending on the context.

    That sounds very weird. Video transcoding should be one of the tasks that scales quite well over multi-core (up to a point of course). 8 threads is most definitely not hurting my transcoding.

    In my case I was using ffmpeg:

    "For ffmpeg, after 4 threads the performance gain becomes insignificant no matter how many CPUs are used, which verifies that ffmpeg does not scale up."

    ffmpeg transcoding

    From: Big Data Benchmarks, Performance Optimization, and Emerging Hardware
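
    If anyone wants to check the plateau on their own hardware, something like this is enough (a sketch; use whatever test clip you have locally):

    ```python
    # Time the same libx264 transcode at increasing thread counts; past ~4
    # threads the wall-clock time barely improves, matching the quote above.
    import subprocess
    import time

    INPUT = "sample.mp4"  # any local test clip

    for threads in (1, 2, 4, 8, 16):
        start = time.time()
        subprocess.run(
            ["ffmpeg", "-y", "-i", INPUT, "-c:v", "libx264",
             "-threads", str(threads), "-f", "null", "/dev/null"],
            check=True, capture_output=True)
        print(f"{threads:2d} threads: {time.time() - start:.1f}s")
    ```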

    Thanked by 2: teamacc, WSS
  • WSS Member

    @teamacc said:
    That sounds very weird. Video transcoding should be one of the tasks that scales quite well over multi-core (up to a point of course). 8 threads is most definitely not hurting my transcoding.

    Things just aren't optimized to the point of handling 32 threads. I think the last time someone even bothered to try, 4 cores was awesomesauce.

  • @arielse said:

    @teamacc said:

    @arielse said:

    In my case I was using ffmpeg:

    "For ffmpeg, after 4 threads the performance gain becomes insignificant no matter how many CPUs are used, which verifies that ffmpeg does not scale up."

    ffmpeg transcoding

    From: Big Data Benchmarks, Performance Optimization, and Emerging Hardware

    VERY interesting, thank you.

  • willie Member
    edited March 2017

    arielse said:

    @teamacc I used to work at a video startup, and we transcoded thousands of videos. When you are working with transcoding, single-core performance is more important than multi-core performance, because it's not worth using more than 2-3 threads per task (normally you will use only one core per task).

    I don't understand this. If you're transcoding 1000s of videos wouldn't the total throughput (i.e. multi-thread performance) be the only thing that mattered? Just keep all the cores busy all the time.

    I don't mean use a multi-threaded ffmpeg that uses multiple cores to transcode a single video fast. I mean do single thread transcodes on many different videos at the same time.

    Are cloud servers even a sane way to do that, instead of dedis or maybe gpu machines?
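
    To spell out the "many single-threaded transcodes at once" idea, something like this (a sketch; paths and codec settings are placeholders):

    ```python
    # Run one single-threaded ffmpeg transcode per worker process, with as many
    # workers as there are cores, so total throughput is what gets maximized.
    import os
    import subprocess
    from concurrent.futures import ProcessPoolExecutor

    def transcode(path):
        # One single-threaded transcode per call; parallelism comes from the pool.
        out = path.rsplit(".", 1)[0] + ".out.mp4"
        subprocess.run(
            ["ffmpeg", "-y", "-i", path, "-c:v", "libx264", "-threads", "1", out],
            check=True, capture_output=True)
        return out

    if __name__ == "__main__":
        videos = [os.path.join("videos", f) for f in os.listdir("videos")
                  if f.endswith(".mp4")]
        # As many workers as cores: every core stays busy on its own video.
        with ProcessPoolExecutor(max_workers=os.cpu_count()) as pool:
            for done in pool.map(transcode, videos):
                print("finished", done)
    ```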

  • WSS Member

    I don't think he has said anywhere that he only did one at a time. It'd be trivial to set CPU affinity under virtually any host OS.
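
    For example, on Linux (a sketch; taskset ships with util-linux):

    ```python
    # Pin a worker to one core before it runs ffmpeg, so single-threaded
    # transcodes don't hop between cores.
    import subprocess

    def run_pinned(core, cmd):
        # taskset -c <core> restricts the child process to that core.
        subprocess.run(["taskset", "-c", str(core), *cmd], check=True)

    run_pinned(0, ["ffmpeg", "-version"])  # trivial command just to show the call
    ```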

  • arielse Member
    edited March 2017

    @willie said:

    arielse said:

    @teamacc I used to work at a video startup, and we transcoded thousands of videos. When you are working with transcoding, single-core performance is more important than multi-core performance, because it's not worth using more than 2-3 threads per task (normally you will use only one core per task).

    I don't understand this. If you're transcoding 1000s of videos wouldn't the total throughput (i.e. multi-thread performance) be the only thing that mattered? Just keep all the cores busy all the time.

    I don't mean use a multi-threaded ffmpeg that uses multiple cores to transcode a single video fast. I mean do single thread transcodes on many different videos at the same time.

    Yes, we have done that: many transcodes of many different videos at the same time. Your question is interesting; I'm not an expert, but I suppose it depends on how GeekBench performs the multi-threaded tests. The difference is big on the biggest instances, for example on Vultr:

    https://www.cloudbeard.io/providers/vultr/compute/98304

    • GeekBench multi-core score: 19,783
    • GeekBench single-core score: 2,438
    • Number of threads: 24 (12 cores)

    12 × 2,438 = 29,256 (ideal) vs 19,783 (real)

    I can also say that although they are probably using an E5-2697 v4 or something similar, you can't use 100% of the hardware as if it were a dedicated server, so we don't know how they are limiting single-core and multi-core processes.
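
    Spelled out with the same numbers as above:

    ```python
    # Scores from the Vultr page linked above (GeekBench).
    single_core = 2438
    multi_core = 19783
    cores = 12

    ideal = cores * single_core        # 29,256 if all 12 cores scaled perfectly
    efficiency = multi_core / ideal    # ~0.68, i.e. about 68% of linear scaling
    print(f"ideal {ideal}, real {multi_core}, efficiency {efficiency:.0%}")
    ```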
