
www.verelox.com accused of using fake SSD?


Comments

  • @mayer22 said:

    Is this thread about Kloxo-MR or 'www.verelox.com fake SSD'?

  • comXyz said: I have a server using SSD hard drives.

    This SSD expert only has a link that is about 4 years old on an ask site that got voted up as the "best answer". I bet in his spare time he's quite an astrophysicist.

  • @mustafaramadhan said:
    Is this thread about Kloxo-MR or 'www.verelox.com fake SSD'?

    I couldn't resist. Sorry.

  • mayer22 Member
    edited April 2015

    @doughmanes said:
    This SSD expert only has a link that is about 4 years old on an ask site that got voted up as the "best answer". I bet in his spare time he's quite an astrophysicist.

    @comXyz could be.

  • I'm making fun of you for being a fool who doesn't open a support ticket, justifies their stupidity with a 4-year-old link, and is being beaten by a group of people but is so edgy they take a little jab rather than admit failure.

    You couldn't even get the link in the title and the link in the message right as you furiously aired out your own stupidity for everybody to see.

  • mayer22 Member
    edited April 2015

    @doughmanes said:
    I'm making fun of you for being a fool who doesn't open a support ticket, justifies their stupidity with a 4-year-old link, and is being beaten by a group of people but is so edgy they take a little jab rather than admit failure.

    You couldn't even get the link in the title and the link in the message right as you furiously aired out your own stupidity for everybody to see.

    How about some respect, dough? Let the provider post some proof first. Anyone here could sell fake SSDs and no one would ever know with an attitude like yours.

  • Why did you dodge the coathanger?

  • mayer22 Member
    edited April 2015

    [root@test ~]# cat /proc/scsi/scsi
    Attached devices:
    Host: scsi0 Channel: 00 Id: 00 Lun: 00
      Vendor: ATA      Model: QEMU HARDDISK    Rev: 0.12
      Type:   Direct-Access                    ANSI SCSI revision: 05

  • Amitz Member
    edited April 2015

    Just for the record:

    Vultr SSD (I just spun it up for you! <3)


    [root@vultr ~]# cat /sys/block/vda/queue/rotational
    1
    [root@vultr ~]# dd if=/dev/zero of=sb-io-test bs=1M count=1k
    1024+0 records in
    1024+0 records out
    1073741824 bytes (1.1 GB) copied, 6.05496 s, 177 MB/s
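
    (Side note: a plain dd like the one above can be flattered by the page cache, so treat the number as a best case. A hypothetical variant on the same throwaway file, with conv=fdatasync so the data is flushed to the medium before the rate is reported:)

    [root@vultr ~]# dd if=/dev/zero of=sb-io-test bs=1M count=1k conv=fdatasync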
  • @Amitz said:
    Just for the record:

    Vultr SSD (I just spun it up for you! <3)

    > [root@vultr ~]# cat /sys/block/vda/queue/rotational
    > 1
    > [root@vultr ~]# dd if=/dev/zero of=sb-io-test bs=1M count=1k
    > 1024+0 records in
    > 1024+0 records out
    > 1073741824 bytes (1.1 GB) copied, 6.05496 s, 177 MB/s
    > 

    How about

    [root@test ~]# cat /proc/scsi/scsi
    Attached devices:
    Host: scsi0 Channel: 00 Id: 00 Lun: 00
      Vendor: ATA      Model: QEMU HARDDISK    Rev: 0.12
      Type:   Direct-Access                    ANSI SCSI revision: 05

  • coolice Member
    edited April 2015

    @comXyz said:
    I have a server using SSD hard drives.

    When I run this command on the server, it returns 0:

    cat /sys/block/sda/queue/rotational
    0

    When I run the above command on a KVM running on the server, it returns 1:

    cat /sys/block/sda/queue/rotational
    1

    Yes, it is the same for me; my server is pure SSD RAID, over 800 MB/s with dd on the node (which is not a relevant test).

    Inside a KVM on a qcow2 partition, /sys/block/sda/queue/rotational was 1.

    @mayer22, rds100 told you it is not about dd. Do a fio test and watch the IOPS you get; a minimal sketch follows.
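
    Something like this (parameters are illustrative, not a prescribed benchmark) issues 4k random reads with direct I/O and reports IOPS; an SSD-backed VM should manage thousands, a single spindle closer to a hundred:

    # 4k random reads, bypassing the page cache; file name and sizes are arbitrary
    fio --name=randread --filename=fio-test --size=256M --rw=randread \
        --bs=4k --direct=1 --ioengine=libaio --iodepth=32 --runtime=30 \
        --time_based --group_reporting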

  • Amitz Member
    edited April 2015

    @mayer22 said:
    How about

    I do not see the relevance, but this shows me a virtual DVD drive at Vultr and simply nothing on my Prometeus SSD VPS.

  • giang Veteran

    mayer22 said: How about

    [root@test ~]# cat /proc/scsi/scsi
    Attached devices:
    Host: scsi0 Channel: 00 Id: 00 Lun: 00
      Vendor: ATA      Model: QEMU HARDDISK    Rev: 0.12
      Type:   Direct-Access                    ANSI SCSI revision: 05

    It's a virtual hard drive...

  • @mayer22 None of your tests is accurate or reliable. For example, rotational is 1 on my dedicated server (running HW RAID10 with 4x SSD). You should also look at ioping and iowait (a quick iowait check is sketched at the end of this post).

    As for DigitalOcean:

    digitalocean:~# cat /sys/block/sda/queue/rotational
    1
    digitalocean:~# dd if=/dev/zero of=sb-io-test bs=1M count=1k
    1024+0 records in
    1024+0 records out
    1073741824 bytes (1.1 GB) copied, 39.5487 s, 27.1 MB/s
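
    For iowait, something along these lines with iostat (from the sysstat package) works; the %iowait column shows how long the CPU sat waiting on disk. Interval and sample count below are arbitrary:

    # extended device statistics, 1-second interval, 5 samples
    iostat -x 1 5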
  • black Member

    @mayer22 said:

    [root@test ~]# cat /proc/scsi/scsi
    Attached devices:
    Host: scsi0 Channel: 00 Id: 00 Lun: 00
      Vendor: ATA      Model: QEMU HARDDISK    Rev: 0.12
      Type:   Direct-Access                    ANSI SCSI revision: 05

    You do realize that KVM instances are running virtual devices that are provided by QEMU, right?

  • black said: You do realize that KVM instances are running virtual devices that are provided by QEMU, right?

    Bro, that's not in his ServerBear benchmark script and his 4-year-old link!

  • black Member

    doughmanes said: Bro, that's not in his ServerBear benchmark script and his 4-year-old link!

    Wat. I have no idea what's going on in this thread anymore... I'm out.

  • cat /proc/scsi/scsi <-- what this shows depends on what kind of virtual disk you use: raw or qcow2, persistent or non-persistent, how far away the moon is, etc. See the sketch below.
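
    For instance, with QEMU/KVM the disk model a guest sees is chosen by whoever starts the VM, regardless of the physical media underneath. A hypothetical invocation (disk.qcow2 is a made-up file name; all other options omitted):

    # IDE emulation: the guest sees "QEMU HARDDISK" in /proc/scsi/scsi
    qemu-system-x86_64 -drive file=disk.qcow2,format=qcow2,if=ide

    # virtio: the guest gets /dev/vda and no SCSI entry at all
    qemu-system-x86_64 -drive file=disk.qcow2,format=qcow2,if=virtio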

  • Master_Bo Member
    edited April 2015

    What's the point of the entire thread, @mayer22?

    "Telling the truth" isn't an answer. "Truth" should be supported by facts. You wish to tell LET readers that Verelox intentionally misinforms their users, offering them magnetic disks instead of SSDs?

    What do multiple tests show? lspci/lshw, hdparm, phpsysinfo? (A few such checks are sketched at the end of this post.) Using dd alone, you cannot tell an HDD from an SSD with 100% precision, especially if I/O is capped for your VM.

    So far, no rock-solid facts, mere emotions and a demonstration of the OP's over-9000 self-esteem.

    Let's wait till they answer.

    Update:

    @mayer22 said:
    This is a pure HDD disk server; I ran this test too, which clarifies things for everyone:

    JFYI: a virtual machine can report any type of device; it's up to the hypervisor. The results of inspecting /proc, /sys etc. are only reliable on physical servers running a bare OS.
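
    For example, on a physical box (not a VM) these identify the medium directly; the commands are standard tools, /dev/sda is just a placeholder:

    # the ROTA column is the same rotational flag, one row per disk
    lsblk -d -o NAME,ROTA,MODEL

    # an SSD reports "Nominal Media Rotation Rate: Solid State Device"
    hdparm -I /dev/sda | grep -i 'rotation rate'

    # smartmontools prints the same field
    smartctl -i /dev/sda | grep -i 'rotation rate'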

  • mayer22 said: Why don't you try to fix your Kloxo-MR panel? Full of bugs from the first install, email falling apart and not working, every update a real danger that can kill the server. I couldn't resist. Sorry.

    First visit here, first post bashing a provider with a false statement; lots of members telling you your claim may be false, people trying to help or suggest things, and you attack a guy who puts all his effort into maintaining and evolving a free control panel?

    Why don't you GTFO?

  • linuxthefish Member
    edited April 2015

    I find ioping to be more helpful than dd when testing disk response times.

    Online.net XC SSD version:

    root@linuxthefish ~ # dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync; unlink test
    16384+0 records in
    16384+0 records out
    1073741824 bytes (1.1 GB) copied, 9.28862 s, 116 MB/s

    root@linuxthefish ~ # ioping -c 10 /
    --- / (ext4 /dev/disk/by-uuid/1d2e1f13-8e26-4ae9-9590-47f72c7fd049) ioping statistics ---
    10 requests completed in 9004.2 ms, 3407 iops, 13.3 mb/s
    min/avg/max/mdev = 0.3/0.3/0.3/0.0 ms

    HW RAID1:

    --- / (ext4 /dev/sda3) ioping statistics ---
    10 requests completed in 9.0 s, 6.5 k iops, 25.6 MiB/s
    min/avg/max/mdev = 84 us / 152 us / 251 us / 45 us

    HW RAID10:

    --- / (ext4 /dev/sda3) ioping statistics ---
    10 requests completed in 9.0 s, 12.4 k iops, 48.3 MiB/s
    min/avg/max/mdev = 63 us / 80 us / 122 us / 19 us

    Single 7200rpm disk:

    --- / (ext4 /dev/md126p3) ioping statistics ---
    10 requests completed in 9.1 s, 100 iops, 403.2 KiB/s
    min/avg/max/mdev = 113 us / 9.9 ms / 32.6 ms / 11.5 ms

  • Just give him a free VPS for life and he will shut up lol

  • AnthonySmith Member, Patron Provider

    @mayer22 said:

    DO embed kernels, so your test is not reliable.

  • Nomad Member
    edited April 2015

    @linuxthefish said:
    I find ioping to be more helpful than dd when testing disk response times.

    Online.net limited 1215, RAID 1

    [13:57] root@Loki: /home/nomad # dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync; unlink test
    16384+0 records in
    16384+0 records out
    1073741824 bytes (1.1 GB) copied, 6.44042 s, 167 MB/s
    
    
    [13:59] root@Loki: /home/nomad # ioping -c 10 /
    4 KiB from / (ext4 /dev/sda2): request=1 time=134 us
    4 KiB from / (ext4 /dev/sda2): request=2 time=156 us
    4 KiB from / (ext4 /dev/sda2): request=3 time=141 us
    4 KiB from / (ext4 /dev/sda2): request=4 time=104 us
    4 KiB from / (ext4 /dev/sda2): request=5 time=144 us
    4 KiB from / (ext4 /dev/sda2): request=6 time=107 us
    4 KiB from / (ext4 /dev/sda2): request=7 time=119 us
    4 KiB from / (ext4 /dev/sda2): request=8 time=150 us
    4 KiB from / (ext4 /dev/sda2): request=9 time=104 us
    4 KiB from / (ext4 /dev/sda2): request=10 time=176 us
    
    --- / (ext4 /dev/sda2) ioping statistics ---
    10 requests completed in 9.00 s, 7.49 k iops, 29.3 MiB/s
    min/avg/max/mdev = 104 us / 133 us / 176 us / 23 us
    
  • Responding as we were mentioned:

    @mayer22 said:
    This is a pure HDD disk server; I ran this test too, which clarifies things for everyone:

    [root@test ~]# cat /sys/block/sda/queue/rotational
    1
    [root@test ~]#

    I should get 1 for hard disks and 0 for an SSD.

    Incorrect. Firstly, this is only guaranteed to work on a physical server, not a virtualized instance, as @Master_Bo explained; it depends on the hypervisor. Here's a question similar to the one you posted, also from a StackExchange site: http://serverfault.com/questions/551453/how-do-i-verify-that-my-hosting-provider-gave-me-ssds

    Secondly, the servers in our SSD category use 100% pure SSD drives; just because the speed is slow doesn't mean they aren't SSDs. We understand that the I/O write speed we currently provide is slow and unsatisfactory, and we have plans to upgrade to a write speed of more than 700 Mb/s, but the servers you receive are SSD servers nevertheless.

    mayer22 said: I did that; even if they respond in seconds for other questions, this one is stuck in the queue...

    Incorrect, again. We couldn't find any ticket in our ticketing system complaining about the speed of our SSD services. We would have appreciated your feedback, however, and would have responded as quickly as we could.

    mayer22 said: People get more offended these days when telling the truth.

    It appears that you insist this is an HDD server, when in fact it isn't. I ran the commands for you on our main node so that you can see that the discrepancy is a virtualization-related issue:

    [root@fr2 ~]# dd if=/dev/zero of=sb-io-test bs=1M count=1k
    1024+0 records in
    1024+0 records out
    1073741824 bytes (1.1 GB) copied, 0.79486 s, 1.4 GB/s
    [root@fr2 ~]# dd if=/dev/zero of=sb-io-test bs=1M count=1k
    1024+0 records in
    1024+0 records out
    1073741824 bytes (1.1 GB) copied, 0.805351 s, 1.3 GB/s
    [root@fr2 ~]# cat /sys/block/sda/queue/rotational
    0
    [root@fr2 ~]# cat /sys/block/sdb/queue/rotational
    0
    [root@fr2 ~]# cat /sys/block/sdc/queue/rotational
    0
    

    If you have really opened a ticket, please post the ticket number and we'll be happy to respond.

  • @Verelox

    Cheers gee! You did good here. Never let yourself get dragged down by people who only brag and brag and can't provide any proof, like @mayer22.

  • @Verelox said:
    Responding as we were mentioned:

    If you have really opened a ticket, please post the ticket number if that's the case and we'll be happy to respond.

    And that's why you really don't want to "shame" a provider before you open a ticket and ask, because you end up looking like an idiot when the provider responds.

  • I'm pretty sure providers claiming to use SSDs while actually using HDDs is a lot more common than we think.

  • @HostMantis said:
    I'm pretty sure providers claiming to use SSDs while actually using HDDs is a lot more common than we think.

    how?

This discussion has been closed.