
I Love Unsustainable, Crazy Offers. This Won't End Well.

124 Comments

  • jsg Member, Resident Benchmarker

    @dane_doherty said:
    Is a PCI-E x4 Gen4 NVMe drive swap space fast enough to pass as a RAM to an unsuspecting user?

    No.

    Side note: Better NVMes use on board DRAM as cache because it's way faster than even the best SLC flash.

  • @jsg said:

    @dane_doherty said:
    Is a PCI-E x4 Gen4 NVMe drive swap space fast enough to pass as a RAM to an unsuspecting user?

    No.

    Side note: Better NVMes use on board DRAM as cache because it's way faster than even the best SLC flash.

    Or just get one with both

  • @dane_doherty said:
    Is a PCI-E x4 Gen4 NVMe drive swap space fast enough to pass as a RAM to an unsuspecting user?

    yes.

  • @dane_doherty said:
    Is a PCI-E x4 Gen4 NVMe drive swap space fast enough to pass as a RAM to an unsuspecting user?

    It would depend on how much I/O the workload generates and how often the data changes. But yes, for an unsuspecting user who already expects a shared host to have the occasional lag.

    Thanked by 1: lirrr
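
As a rough way to put a number on "how much I/O it uses": the minimal sketch below (an editorial addition, not from the thread) samples the Linux swap counters in /proc/vmstat over a short window. The 5-second interval and the 4 KiB page size are assumptions.

```python
# Sketch: sample swap-in/swap-out activity to see how hard an NVMe-backed swap
# device is actually being hit. Assumes a Linux guest with 4 KiB pages; the
# 5-second window is an arbitrary example value.
import time

def swap_counters():
    counters = {}
    with open("/proc/vmstat") as f:
        for line in f:
            key, _, value = line.partition(" ")
            if key in ("pswpin", "pswpout"):
                counters[key] = int(value)
    return counters

INTERVAL = 5
before = swap_counters()
time.sleep(INTERVAL)
after = swap_counters()

# pswpin/pswpout count pages swapped in/out since boot.
mib_in = (after["pswpin"] - before["pswpin"]) * 4 / 1024 / INTERVAL
mib_out = (after["pswpout"] - before["pswpout"]) * 4 / 1024 / INTERVAL
print(f"swap-in:  {mib_in:.1f} MiB/s")
print(f"swap-out: {mib_out:.1f} MiB/s")
```

If this shows sustained swap traffic rather than the occasional spike, that is exactly the kind of workload where even fast NVMe swap stops passing as RAM.
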
  • Daniel15 Veteran
    edited May 2022

    @dane_doherty said:
    Is a PCI-E x4 Gen4 NVMe drive swap space fast enough to pass as a RAM to an unsuspecting user?

    DDR4-3600 CL12 has around 6.7ns first word latency: https://www.cgdirector.com/ram-memory-latency/
    Samsung 980 Pro 1TB has ~100µs (100000ns) average read latency on a full drive: https://www.anandtech.com/show/16087/the-samsung-980-pro-pcie-4-ssd-review/4

    So yes, RAM is still a lot faster in terms of latency. Some users might not notice, I guess, if they're used to a VPS being slow anyways.

    Read speeds are getting closer but there's still a distance. DDR4 is usually 20-30GB/s depending on timings, and the 980 Pro is ~7GB/s read and ~5GB/s write according to Samsung's marketing. It's a lot closer to DDR2 which can be around 12-13GB/s, but systems that take DDR2 probably don't even have enough PCIe lanes to fully handle the throughput of a Gen4 NVMe drive anyways.

    I think NVMe drives will have the ability to get much faster with Gen5!

    @kevinds said:

    @Calin said:
    It depends on how you want to take this - for example, I found a lot of DDR3 ECC 16 GB DIMMs for recycling at €2 per DIMM a long time ago; if you have enough on a 24-DIMM server you could make a profit

    Not counterfeit?

    Where? lol

    Go dumpster diving :tongue: DDR4 is old enough now that demand for DDR3 is a lot lower, and some people are just throwing it out because there's limited use for it.

    Thanked by 2: jsg, lentro
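
A quick back-of-the-envelope check of the latency and bandwidth figures Daniel15 quotes above (editorial sketch; the numbers are simply the ones from his post, not new measurements):

```python
# Rough ratio check using the figures quoted above (illustrative only).
ram_latency_ns = 6.7        # DDR4-3600 CL12 first-word latency
nvme_latency_ns = 100_000   # ~100 us average read latency (980 Pro 1TB, full drive)

ram_bw_gbps = 25            # mid-point of the 20-30 GB/s DDR4 range mentioned above
nvme_bw_gbps = 7            # 980 Pro sequential read, per Samsung's marketing

print(f"latency gap:   ~{nvme_latency_ns / ram_latency_ns:,.0f}x")   # roughly 15,000x
print(f"bandwidth gap: ~{ram_bw_gbps / nvme_bw_gbps:.1f}x")          # roughly 3.6x
```

Which is the point being made: throughput is within the same order of magnitude, latency is not even close.
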
  • SirFoxy Member

    LET: i'm mad.

    Also LET: let's keep giving it attention.

    btw raindog, great headline.

  • jsg Member, Resident Benchmarker

    @Daniel15 said:
    ... the 980 Pro is ~7GB/s read and ~5GB/s write according to Samsung's marketing.

    ... if reading from its cache. After that it drops down quick and hard.

  • Advin Member, Patron Provider
    edited May 2022

    @Daniel15 said:

    @dane_doherty said:
    Is a PCI-E x4 Gen4 NVMe drive swap space fast enough to pass as a RAM to an unsuspecting user?

    DDR4-3600 CL12 has around 6.7ns first word latency: https://www.cgdirector.com/ram-memory-latency/
    Samsung 980 Pro 1TB has ~100µs (100000ns) average read latency on a full drive: https://www.anandtech.com/show/16087/the-samsung-980-pro-pcie-4-ssd-review/4

    So yes, RAM is still a lot faster in terms of latency. Some users might not notice, I guess, if they're used to a VPS being slow anyways.

    Read speeds are getting closer but there's still a distance. DDR4 is usually 20-30GB/s depending on timings, and the 980 Pro is ~7GB/s read and ~5GB/s write according to Samsung's marketing. It's a lot closer to DDR2 which can be around 12-13GB/s, but systems that take DDR2 probably don't even have enough PCIe lanes to fully handle the throughput of a Gen4 NVMe drive anyways.

    I think NVMe drives will have the ability to get much faster with Gen5!

    @kevinds said:

    @Calin said:
    It depends on how you want to take this - for example, I found a lot of DDR3 ECC 16 GB DIMMs for recycling at €2 per DIMM a long time ago; if you have enough on a 24-DIMM server you could make a profit

    Not counterfeit?

    Where? lol

    Go dumpster diving :tongue: DDR4 is old enough now that demand for DDR3 is a lot lower, and some people are just throwing it out because there's limited use for it.

    I doubt the raw read/write speeds would matter too much, I think it's more about the latency or maybe random reads/writes. I wonder how Optane would do compared to a Gen4 NVMe, since Optane iirc is specialized for low latency :)

    I also kind of wonder if it would eventually be possible (in the future) for NVMe to get fast enough to be feasible to use as actual memory. Gen5 is getting closer but still isn't quite there yet.

    On a semi-side note, I really wanna see the RAMdisk PCIe adapter back again

    Super cool and would be cool as a boot drive if you had extra memory. You could basically put extra memory modules in there and it would show up as a disk. While you could technically do this with software, this has the benefit of not taking from your system memory and it had a 12 hour backup battery to retain the data. Unfortunately, Gigabyte hasn't made one since DDR2 :neutral:

  • AXYZE Member
    edited May 2022

    @jsg said:

    @dane_doherty said:
    Is a PCI-E x4 Gen4 NVMe drive swap space fast enough to pass as a RAM to an unsuspecting user?

    No.

    Side note: Better NVMes use on board DRAM as cache because it's way faster than even the best SLC flash.

    Side note is correct for SATA, not NVMe.
    NVMe uses on-board DRAM as cache in order to not use system/host RAM. This is called "Host Memory Buffer" and it's in the NVMe spec. It's built on DMA, which is part of PCIe.

    SATA Dramless can't work that way, so they are shit; the flash is used as a buffer and holds the mapping table etc. But on NVMe Dramless the system RAM is used, so they are still okay - the TLC/QLC modules are the real limitation there.
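
For anyone who wants to check whether a particular drive actually advertises Host Memory Buffer support, a small sketch along these lines can pull it out of the "nvme id-ctrl" output (it assumes the nvme-cli package is installed and the controller is /dev/nvme0; hmpre/hmmin are, as far as I recall, the HMB preferred/minimum size fields of the Identify Controller data):

```python
# Sketch: check whether an NVMe controller advertises Host Memory Buffer (HMB) support.
# Assumes nvme-cli is installed and the controller is /dev/nvme0 (adjust as needed,
# typically needs root). hmpre/hmmin are the HMB preferred/minimum size fields;
# non-zero values mean the drive asks the host to lend it a chunk of system RAM.
import subprocess

out = subprocess.run(
    ["nvme", "id-ctrl", "/dev/nvme0"],
    capture_output=True, text=True, check=True,
).stdout

for line in out.splitlines():
    field = line.split(":")[0].strip().lower()
    if field in ("hmpre", "hmmin"):
        print(line.strip())
```
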

  • NoComment Member
    edited May 2022

    @AXYZE said:

    @jsg said:

    @dane_doherty said:
    Is a PCI-E x4 Gen4 NVMe drive swap space fast enough to pass as a RAM to an unsuspecting user?

    No.

    Side note: Better NVMes use on board DRAM as cache because it's way faster than even the best SLC flash.

    Side note is correct for SATA, not NVMe.
    NVMe uses on-board DRAM as cache in order to not use system/host RAM. This is called "Host Memory Buffer" and it's in the NVMe spec. It's built on DMA, which is part of PCIe.

    SATA Dramless can't work that way, so they are shit; the flash is used as a buffer and holds the mapping table etc. But on NVMe Dramless the system RAM is used, so they are still okay - the TLC/QLC modules are the real limitation there.

    HMB should be slower than on-board dram because there will be some added latency. But I think this doesn't matter. The TLC/QLC modules are usually not the problem either. You used to see MLC on even "consumer" drives, but nowadays even the most high-end "consumer" drives I see are mostly TLC. This is probably due to innovation in NAND layer count; as they keep increasing the layer count, it becomes more performant and more reliable.

    But dramless usually means worse drives, because it means the drive is using a controller without dram support, and those are lower-end than the controllers with dram support. No dram also means less power, thermals become less of an issue, and this probably also means the design constraints can be more relaxed.

    But these things are not what matters most imo. What matters most (which many are unaware of) is the overprovisioning. If you buy an nvme drive with dram, most likely it's more expensive and the manufacturer can afford to overprovision more space, so you get more SLC cache, and this is what affects your sustained write speed most.

    Also, I believe dram extends the longevity of your drive somewhat, because it also helps with write operations.

    @Advin said: I doubt the raw read/write speeds would matter too much, I think it's more about the latency or maybe random reads/writes. I wonder how Optane would do compared to a Gen4 NVMe, since Optane iirc is specialized for low latency

    Unfortunately, I think Intel gave up on Optane memory. But the idea of it was basically something in between NVMe and RAM, and it seemed like a good idea.

  • @jsg said:

    @Daniel15 said:
    ... the 980 Pro is ~7GB/s read and ~5GB/s write according to Samsung's marketing.

    ... if reading from its cache. After that it drops down quick and hard.

    Please read professional review sites and learn how these things work, so you'll know when you're wrong before posting.

    The cache benefits writes, not reads. Read performance is sustained through high queue depths, not the cache.

    Samsung’s 1TB 980 Pro wrote at a rate of 5.2 GBps for roughly 120GB before the TurboWrite SLC cache filled. Once it began writing directly to the TLC flash, average performance measured 1.8GBps until full. After we filled the cache completely, performance increased to an average of 2.2 GBps.

    Thanked by 1: cybertech
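
Taking the review numbers in that quote at face value, the effective speed of one large uninterrupted write can be sketched as follows (editorial illustration; it ignores background cache flushing, so it is on the pessimistic side):

```python
# Effective speed of a single large sequential write, using the 980 Pro 1TB figures
# quoted above: ~120 GB of TurboWrite SLC cache at 5.2 GB/s, then ~1.8 GB/s to TLC.
# Ignores background cache flushing, so real drives should do a little better.
CACHE_GB, CACHED_SPEED, TLC_SPEED = 120, 5.2, 1.8

def avg_write_speed(total_gb):
    cached = min(total_gb, CACHE_GB)
    direct = max(total_gb - CACHE_GB, 0)
    return total_gb / (cached / CACHED_SPEED + direct / TLC_SPEED)

for size in (50, 120, 300, 800):
    print(f"{size:>4} GB burst -> ~{avg_write_speed(size):.1f} GB/s average")
```

For bursts that fit inside the cache the headline figure holds; once a single write runs a few hundred GB past it, the average settles toward the direct-to-TLC rate.
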
  • @Advin said:

    @Daniel15 said:

    @dane_doherty said:
    Is a PCI-E x4 Gen4 NVMe drive swap space fast enough to pass as a RAM to an unsuspecting user?

    DDR4-3600 CL12 has around 6.7ns first word latency: https://www.cgdirector.com/ram-memory-latency/
    Samsung 980 Pro 1TB has ~100µs (100000ns) average read latency on a full drive: https://www.anandtech.com/show/16087/the-samsung-980-pro-pcie-4-ssd-review/4

    So yes, RAM is still a lot faster in terms of latency. Some users might not notice, I guess, if they're used to a VPS being slow anyways.

    Read speeds are getting closer but there's still a distance. DDR4 is usually 20-30GB/s depending on timings, and the 980 Pro is ~7GB/s read and ~5GB/s write according to Samsung's marketing. It's a lot closer to DDR2 which can be around 12-13GB/s, but systems that take DDR2 probably don't even have enough PCIe lanes to fully handle the throughput of a Gen4 NVMe drive anyways.

    I think NVMe drives will have the ability to get much faster with Gen5!

    @kevinds said:

    @Calin said:
    It depends on how you want to take this - for example, I found a lot of DDR3 ECC 16 GB DIMMs for recycling at €2 per DIMM a long time ago; if you have enough on a 24-DIMM server you could make a profit

    Not counterfeit?

    Where? lol

    Go dumpster diving :tongue: DDR4 is old enough now that demand for DDR3 is a lot lower, and some people are just throwing it out because there's limited use for it.

    I doubt the raw read/write speeds would matter too much, I think it's more about the latency or maybe random reads/writes. I wonder how Optane would do compared to a Gen4 NVMe, since Optane iirc is specialized for low latency :)

    I also kind of wonder if it would eventually be possible (in the future) for NVMe to get fast enough to be feasible to use as actual memory. Gen5 is getting closer but still isn't quite there yet.

    On a semi-side note, I really wanna see the RAMdisk PCIe adapter back again

    Super cool and would be cool as a boot drive if you had extra memory. You could basically put extra memory modules in there and it would show up as a disk. While you could technically do this with software, this has the benefit of not taking from your system memory and it had a 12 hour backup battery to retain the data. Unfortunately, Gigabyte hasn't made one since DDR2 :neutral:

    (That's dumb, putting "expensive" RAM on a bandwidth-limited bus instead of increasing system RAM at peak speeds.) How do you boot from a ramdisk that doesn't contain persistent storage? I don't think you thought about the use case enough.

    It's almost like someone told those engineers about PXE and they were like, "ok, that makes much more sense than this if you want to boot an OS from RAM".

  • Advin Member, Patron Provider
    edited May 2022

    @TimboJones said:

    @Advin said:

    @Daniel15 said:

    @dane_doherty said:
    Is a PCI-E x4 Gen4 NVMe drive swap space fast enough to pass as a RAM to an unsuspecting user?

    DDR4-3600 CL12 has around 6.7ns first word latency: https://www.cgdirector.com/ram-memory-latency/
    Samsung 980 Pro 1TB has ~100µs (100000ns) average read latency on a full drive: https://www.anandtech.com/show/16087/the-samsung-980-pro-pcie-4-ssd-review/4

    So yes, RAM is still a lot faster in terms of latency. Some users might not notice, I guess, if they're used to a VPS being slow anyways.

    Read speeds are getting closer but there's still a distance. DDR4 is usually 20-30GB/s depending on timings, and the 980 Pro is ~7GB/s read and ~5GB/s write according to Samsung's marketing. It's a lot closer to DDR2 which can be around 12-13GB/s, but systems that take DDR2 probably don't even have enough PCIe lanes to fully handle the throughput of a Gen4 NVMe drive anyways.

    I think NVMe drives will have the ability to get much faster with Gen5!

    @kevinds said:

    @Calin said:
    It depends on how you want to take this - for example, I found a lot of DDR3 ECC 16 GB DIMMs for recycling at €2 per DIMM a long time ago; if you have enough on a 24-DIMM server you could make a profit

    Not counterfeit?

    Where? lol

    Go dumpster diving :tongue: DDR4 is old enough now that demand for DDR3 is a lot lower, and some people are just throwing it out because there's limited use for it.

    I doubt the raw read/write speeds would matter too much, I think it's more about the latency or maybe random reads/writes. I wonder how Optane would do compared to a Gen4 NVMe, since Optane iirc is specialized for low latency :)

    I also kind of wonder if it would eventually be possible (in the future) for NVMe to get fast enough to be feasible to use as actual memory. Gen5 is getting closer but still isn't quite there yet.

    On a semi-side note, I really wanna see the RAMdisk PCIe adapter back again

    Super cool and would be cool as a boot drive if you had extra memory. You could basically put extra memory modules in there and it would show up as a disk. While you could technically do this with software, this has the benefit of not taking from your system memory and it had a 12 hour backup battery to retain the data. Unfortunately, Gigabyte hasn't made one since DDR2 :neutral:

    (That's dumb, putting "expensive" RAM on a bandwidth-limited bus instead of increasing system RAM at peak speeds.) How do you boot from a ramdisk that doesn't contain persistent storage? I don't think you thought about the use case enough.

    It's almost like someone told those engineers about PXE and they were like, "ok, that makes much more sense than this if you want to boot an OS from RAM".

    You would keep your system running all the time; in the event of a power outage or something, it had a battery backup on the PCIe card to keep the memory powered so it doesn't lose the data.

    Not everyone has enough RAM slots - for example, I have an extra 2x8GB kit but I only have 2 ram slots in my PC, and both are already filled with a 2x32GB DDR4 kit. A PCIe adapter like this would let you install that 2x8GB kit and create a RAMdisk, which you could then maybe use as a swap file or as a cache or something of that sort.

    Thanked by 1: TimboJones
  • Daniel15 Veteran

    @Advin said: I only have 2 ram slots in my PC

    but why? honestly I don't think I've ever owned a PC with only two RAM slots, even going back to when PC133 RAM was standard.

  • @Daniel15 said:

    @Advin said: I only have 2 ram slots in my PC

    but why? honestly I don't think I've ever owned a PC with only two RAM slots, even going back to when PC133 RAM was standard.

    I build only m-ITX form factors now, so only 2 slots. If I know RAM usage will increase within 5 years I will get a single stick of RAM; otherwise, dual channel.

  • jsg Member, Resident Benchmarker
    edited May 2022

    @AXYZE said:

    @jsg said:

    @dane_doherty said:
    Is a PCI-E x4 Gen4 NVMe drive swap space fast enough to pass as a RAM to an unsuspecting user?

    No.

    Side note: Better NVMes use on board DRAM as cache because it's way faster than even the best SLC flash.

    Side note is correct for SATA, not NVMe.
    NVMe uses on-board DRAM as cache in order to not use system/host RAM. This is called "Host Memory Buffer" and it's in the NVMe spec. It's built on DMA, which is part of PCIe.

    No, and funny that you focused on that (DRAM location) but totally missed the really relevant points. Oh, and btw, DMA is by no means defined by PCIe (merely used by it); DMA is a far more general concept.

    So what is the major relevant point? It's obviously the protocols but also, very importantly, albeit far less known/understood, the connection (to put it in layman's terms). SATA is about connections over some distance while NVMe (and friends, e.g. M.2) is on-board. Both transport serial buses/signals, but the former does it over an "unknown" (not exactly known) distance and in an "unknown" EM environment. For an electronics engineer that is a very major difference, but it's also a significant difference on the logic ("system") level.

    But BOTH (usually) need to employ some kind of RAM. Looking at it correctly, both are PCIe linked, but with SATA the controller usually is on the mainboard (or a PCIe card) while with NVMe the controller is on the storage device itself (but also see U.2, which basically is a somewhat strange interbreed of the two). Note that SATA devices also do have an on-device controller (which may or may not have local buffer RAM). As soon as one wants to have the advantage of SATA (devices located somewhere else in the chassis and over some distance) the game changes, and e.g. U.2 also has the SATA problem to solve: how to properly and reliably transmit quite fast serial signals over an "unknown" distance and in an "unknown" EM environment? That's done by (tightly controlled) differential pairs; but no matter, any speed advantage gained by on-device RAM buffers is limited by the bus speed, which is orders of magnitude lower than DRAM (let alone SRAM).

    So one has to ask what's the purpose of those RAM buffers anyway? In both cases it's mainly about compensating for the much, much slower storage medium (be it platters or flash); it's just the case that NVMe looks different (roughly as described, albeit misunderstood by you) because NVMe is directly PCIe coupled while SATA is indirectly coupled and has the controller between the drive and the mainboard; accordingly SATA has the buffer RAM with the controller while NVMe may use the host's RAM, but better ones still also do have some RAM, usually DRAM, on the device. NVMes may also have some SLC as cache/buffer, and in fact a few even have both SLC and RAM.

    SATA Dramless can't work that way, so they are shit; the flash is used as a buffer and holds the mapping table etc. But on NVMe Dramless the system RAM is used, so they are still okay -

    For a start, what is "SATA Dramless"? You seem to not know that even in the old pre-SATA days some drives did have local caches/buffers! And btw, using the host's ("system" as you call it) RAM is NOT somehow an advantage (other than for the vendors who save some pennies). The golden rule is: a device with its own RAM is always preferable to one using the host's RAM.

    ... the TLC/QLC modules are the real limitation there.

    That's the part you got half-right. Why only half? Because all flash storage, even SLC, is orders of magnitude slower than even crappy DRAM. But you are right in that TLC and in particular QLC flash is crappy (compared to SLC), but alas, TLC (at best) or QLC is what's sold nowadays (unless one is willing to pay excessive prices).

    @NoComment said:

    @AXYZE said:

    @jsg said:

    @dane_doherty said:
    Is a PCI-E x4 Gen4 NVMe drive swap space fast enough to pass as a RAM to an unsuspecting user?

    No.

    Side note: Better NVMes use on board DRAM as cache because it's way faster than even the best SLC flash.

    Side note is correct for SATA, not NVMe.
    NVMe uses on-board DRAM as cache in order to not use system/host RAM. This is called "Host Memory Buffer" and it's in the NVMe spec. It's built on DMA, which is part of PCIe.

    SATA Dramless can't work that way, so they are shit; the flash is used as a buffer and holds the mapping table etc. But on NVMe Dramless the system RAM is used, so they are still okay - the TLC/QLC modules are the real limitation there.

    HMB should be slower than on-board dram because there will be some added latency. But I think this doesn't matter.

    Depends. But the more important point is what that RAM is used for. In this context the answer largely is "to compensate for (much) slower flash speed, to buffer (typical) write patterns and to maybe do a little read ahead" ... and for that even crappy DDR2 is damn good enough.

    The TLC/QLC modules are usually not the problem either. You used to see MLC on even "consumer" drives, but nowadays even the most high-end "consumer" drives I see are mostly TLC. This is probably due to innovation in NAND layer count; as they keep increasing the layer count, it becomes more performant and more reliable.

    Nope. TLC and even worse QLC have way lower speed and flash life than MLC and (even better) SLC. Yes, you are right, flash technologies have somewhat improved here and there, but still, QLC is plain crap and TLC is a bad compromise - but it's what one can get for a reasonable price.

    But these things are not what matters most imo. What matters most (which many are unaware of) is the overprovisioning. If you buy an nvme drive with dram, most likely it's more expensive and the manufacturer can afford to overprovision more space, so you get more SLC cache, and this is what affects your sustained write speed most.

    (a) DRAM cache is faster than SLC cache, and (b) No, sustained write speed is always limited and way lower than cached speed, no matter whether RAM or SLC cache. Why? Because every cache is limited in size and usually in the single digit GB range.

    Also, I believe dram extends the longevity of your drive somewhat, because it also helps with write operations.

    IF typical write patterns are used.
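
One way to actually see the distinction jsg draws above (NVMe with the controller on the device, sitting directly on PCIe, versus a SATA disk behind a host-side AHCI controller) is to look at where each block device hangs in sysfs on a Linux box. A purely illustrative editorial sketch; device names differ per system:

```python
# Sketch: print where each block device sits in the Linux device tree.
# NVMe namespaces resolve straight to a PCIe function (.../0000:xx:yy.z/nvme/...),
# while SATA disks resolve through an ATA host behind the AHCI controller
# (.../ataN/host.../target.../...). Purely illustrative.
import glob
import os

for dev in sorted(glob.glob("/sys/block/*")):
    name = os.path.basename(dev)
    if name.startswith(("loop", "ram", "dm-", "zram")):
        continue  # skip virtual block devices
    print(f"{name:<10} {os.path.realpath(dev)}")
```
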

  • jsg Member, Resident Benchmarker

    @Daniel15 said:

    @Advin said: I only have 2 ram slots in my PC

    but why? honestly I don't think I've ever owned a PC with only two RAM slots, even going back to when PC133 RAM was standard.

    Open up your notebook and have a look inside ... *g

    Also, frankly, leaving aside psychological factors and group dynamics, 2 RAM sockets are enough for the vast majority of users. That doesn't mean that I'm preaching for 2 RAM sockets; my point is that most users need not care, because for their 8, 16, or maybe 32 GB, 2 sockets are good enough.

  • Daniel15 Veteran
    edited May 2022

    @jsg said: Open up your notebook and have a look inside ... *g

    A lot of them have soldered RAM these days :(

    @jsg said: 2 RAM sockets are enough for the vast majority of users.

    That's true! I usually see motherboards with at least four slots though, unless it's a very small motherboard.

  • NoComment Member
    edited May 2022

    @jsg said:

    @NoComment said:
    The TLC/QLC modules are usually not the problem either. You used to see MLC on even "consumer" drives, but nowadays even the most high-end "consumer" drives I see are mostly TLC. This is probably due to innovation in NAND layer count; as they keep increasing the layer count, it becomes more performant and more reliable.

    Nope. TLC and even worse QLC have way lower speed and flash life than MLC and (even better) SLC. Yes, you are right, flash technologies have somewhat improved here and there, but still, QLC is plain crap and TLC is a bad compromise - but it's what one can get for a reasonable price.

    You missed the point. Obviously, SLC/MLC is "better" than TLC/QLC. I was pointing out that "consumer" drives have transitioned from MLC for flagship drives to just TLC and this is probably due to NAND advancements.

    In 2015, toshiba's MLC NAND had 30 MB/s program throughput. (source) In 2021, toshiba's TLC NAND had 160 MB/s program throughput. (source) The TLC of today could be better than the MLC of the past.

    For more details you would need access to IEEE journals though.

    @jsg said:

    @NoComment said:
    But these things are not what matters most imo. What matters most (which many are unaware of) is the overprovisioning. If you buy an nvme drive with dram, most likely it's more expensive and the manufacturer can afford to overprovision more space, so you get more SLC cache, and this is what affects your sustained write speed most.

    (a) DRAM cache is faster than SLC cache, and (b) No, sustained write speed is always limited and way lower than cached speed, no matter whether RAM or SLC cache. Why? Because every cache is limited in size and usually in the single digit GB range.

    The SLC cache is usually not in the single digit GB range. Data is written to the SLC cache and when it's full data is written directly to TLC. The drives which can maintain a relatively high sustained write speed after the SLC cache is full simply overprovisioned a lot of SLC cache and they are able to clear the cache at a relatively fast speed.

  • Advin Member, Patron Provider
    edited May 2022

    @Daniel15 said:

    @Advin said: I only have 2 ram slots in my PC

    but why? honestly I don't think I've ever owned a PC with only two RAM slots, even going back to when PC133 RAM was standard.

    I have a Mini-DTX motherboard, but even with 4 RAM slots my point still stands. What if you had 4x8GB in your PC but had a slower 2x8GB kit laying around? Inevitably, once DDR5 becomes standard, some people will definitely have DDR4 laying around. In fact, that adapter I pointed out was made around when DDR2 (iirc) was released, so people could use their extra (older) memory for something.

  • jsg Member, Resident Benchmarker

    @Daniel15 said:

    @jsg said: Open up your notebook and have a look inside ... *g

    A lot of them have soldered RAM these days :(

    Yep, sad, but yet another 3 cents more profit thing it seems ...

    @jsg said: 2 RAM sockets are enough for the vast majority of users.

    That's true! I usually see motherboards with at least four slots though, unless it's a very small motherboard.

    Well, then you probably mainly work with uATX (and larger) mainboards, but the trend (at least currently) is "ever smaller" which, among other things, translates to 2 RAM sockets only. But don't get me wrong, I'm simply talking about observations; I myself prefer 4 sockets too for my main systems (but I also like "micro-PCs", e.g. some thin clients, which almost invariably come with 2 sockets only, if that).

    @NoComment said:
    You missed the point. Obviously, SLC/MLC is "better" than TLC/QLC. I was pointing out that "consumer" drives have transitioned from MLC for flagship drives to just TLC and this is probably due to NAND advancements.

    And those advancements are only valid and relevant for TLC and QLC? I don't think so.

    In 2015, toshiba's MLC NAND had 30 MB/s program throughput. (source) In 2021, toshiba's TLC NAND had 160 MB/s program throughput. (source) The TLC of today could be better than the MLC of the past.

    Nice, but what would be really interesting is what throughput could be achieved with today's Toshiba MLC. And btw, to even use SLC or MLC as a cache for TLC or QLC it must be faster, so there are SLC and MLC parts using the aforementioned achievements.

    Short version: all types show higher performance today - but MLC still is better than TLC and QLC and SLC still is better than MLC, which is why it's used for caches.

    The SLC cache is usually not in the single digit GB range. Data is written to the SLC cache and when it's full data is written directly to TLC. The drives which can maintain a relatively high sustained write speed after the SLC cache is full simply overprovisioned a lot of SLC cache and they are able to clear the cache at a relatively fast speed.

    You're probably right, as my focus was on RAM caches; any flash cache for some "main" flash is but a crutch.

  • ralf Member

    Interesting, I hadn't heard of all the terms SLC, MLC, TLC, QLC before (and so I had no idea the Samsung 980 PRO I just bought was actually a downgrade from the 970 PRO I had in my last desktop system).

    Anyway, I found https://www.kingston.com/unitedkingdom/en/blog/pc-performance/difference-between-slc-mlc-tlc-3d-nand and thought it was pretty interesting that TLC only has an expected lifetime of 3000 P/E cycles and QLC 1000 P/E cycles. I guess that still equates to a full drive rewrite every day for almost 3 years, but it definitely seems like there's a reasonable risk that many of these drives will fail just outside of their warranty period.
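
Turning those P/E figures into rough "drive writes per day" terms (editorial sketch with a crude model: it ignores write amplification, SLC caching and wear-levelling overhead, and the 1 TB capacity and 5-year window are example values, not something from the Kingston article):

```python
# Crude endurance estimate from rated P/E cycles. Ignores write amplification,
# SLC caching and wear-levelling overhead; capacity and lifetime are example values.
CAPACITY_TB = 1.0
YEARS = 5

for name, pe_cycles in (("TLC", 3000), ("QLC", 1000)):
    tbw = CAPACITY_TB * pe_cycles        # total TB written before rated wear-out
    dwpd = pe_cycles / (YEARS * 365)     # full-drive writes per day over the window
    print(f"{name}: ~{tbw:.0f} TBW, ~{dwpd:.1f} drive writes/day sustained for {YEARS} years")
```

Under those assumptions the QLC figure works out to roughly one full rewrite per day for a bit under 3 years, which is the same arithmetic as in the comment above.
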

  • @ralf said:
    Interesting, I hadn't heard of all the terms SLC, MLC, TLC, QLC before (and so I had no idea the Samsung 980 PRO I just bought was actually a downgrade from the 970 PRO I had in my last desktop system).

    Anyway, I found https://www.kingston.com/unitedkingdom/en/blog/pc-performance/difference-between-slc-mlc-tlc-3d-nand and thought it was pretty interesting that TLC only has an expected lifetime of 3000 P/E cycles and QLC 1000 P/E cycles. I guess that still equates to a full drive rewrite every day for almost 3 years, but it definitely seems like there's a reasonable risk that many of these drives will fail just outside of their warranty period.

    Realistically speaking, the NAND is the least likely component to fail for consumer use.

  • @ralf said: I had no idea the Samsung 980 PRO I just bought was actually a downgrade from the 970 PRO I had in my last desktop system

    It's hardly a downgrade when every benchmark is between 10% and 30% faster.
    https://ssd.userbenchmark.com/Compare/Samsung-980-Pro-NVMe-PCIe-M2-500GB-vs-Samsung-970-Pro-NVMe-PCIe-M2-512GB/m1307906vsm498971

    Thanked by 1: Daniel15
  • ralf Member

    @NoComment said:
    Realistically speaking, the NAND is the least likely component to fail for consumer use.

    My disk usage patterns are somewhat different from those of an average consumer. I spend a lot of my days recompiling huge C++ projects and rebuilding large amounts of art assets.

    Like I said, "I guess that still equates to a full drive rewrite every day for almost 3 years", so I will still get a useful life out of the drive, but I hadn't realised just how few write cycles newer NAND tech offered. I assumed the speed gains were from running more cells in parallel, made possible by increased capacity. Which of course is my fault for not researching enough before buying.

  • @Advin said:
    Not everyone has enough RAM slots - for example, I have an extra 2x8GB kit but I only have 2 ram slots in my PC, and both are already filled with a 2x32GB DDR4 kit. A PCIe adapter like this would let you install that 2x8GB kit and create a RAMdisk, which you could then maybe use as a swap file or as a cache or something of that sort.

    The cost to develop this (add a memory controller, driver, etc.) makes poor sense vs putting in bigger sticks and/or upgrading to a 4-slot motherboard, since the system is inadequately designed if it needs swap frequently. But sure, there are some people on the entire planet that could make use of that, if they didn't just solve the issue with off-the-shelf stuff.

  • @jsg said:

    @Daniel15 said:

    @jsg said: Open up your notebook and have a look inside ... *g

    A lot of them have soldered RAM these days :(

    Yep, sad, but yet another 3 cents more profit thing it seems ...

    That's the third factor, behind thinner designs and PCB real estate (more integrated functionality means fewer chips and more space for chip-down designs that don't need RAM risers). The cost of the components matters less than the SKU flexibility of interchangeable RAM. Upgrades at time of purchase are $$$.

  • @dane_doherty said:

    @ralf said: I had no idea the Samsung 980 PRO I just bought was actually a downgrade from the 970 PRO I had in my last desktop system

    It's hardly a downgrade when every benchmark is between 10% and 30% faster.
    https://ssd.userbenchmark.com/Compare/Samsung-980-Pro-NVMe-PCIe-M2-500GB-vs-Samsung-970-Pro-NVMe-PCIe-M2-512GB/m1307906vsm498971

    Benchmarks showed when the write cache was full, the 970 Pro was 2.7GBps and 980 Pro was 1.7-2.2GBps or something like that.

    So if your workload is bursty, with idle periods for recovery, the 980 is better. But if you write for long periods without breaks, the 970 is better. These short benchmarks are not useful for heavy use. The iometer graphs showing sustained write speed over time are pretty much the most meaningful benchmark. They give you an idea of what kinds of workloads you can run before the drive is crippled, and then how badly it's crippled.

  • jsg Member, Resident Benchmarker

    @dane_doherty said:

    @ralf said: I had no idea the Samsung 980 PRO I just bought was actually a downgrade from the 970 PRO I had in my last desktop system

    It's hardly a downgrade when every benchmark is between 10% and 30% faster.
    https://ssd.userbenchmark.com/Compare/Samsung-980-Pro-NVMe-PCIe-M2-500GB-vs-Samsung-970-Pro-NVMe-PCIe-M2-512GB/m1307906vsm498971

    Uhm, depends. I've intentionally designed my benchmark software to allow for extensive disk testing, although what I showed here at LET is but a kind of summary following a (boring but strictly standard) pattern. But it can do funny things like looking at diverse patterns.

    A short example: I can give a device under test a couple of milliseconds in between reading or writing chunks of data (whose size can also be changed). Using the exact same test parameters but giving an NVMe a couple of milliseconds or not often makes a quite significant difference.

    The reason is the device's onboard cache. Keeping the chunk size below the cache size or not, and giving it a few milliseconds to write the cache out or not, can make a device look very fast - or quite slow.

    That's why I prominently mentioned usage patterns. An NVMe that is a "very nice fast device" can be a snail-slow crappy thing for, say, frequently storing vast amounts of data or for a heavily used DB server.

    My advice is to not be fooled or lured by high numbers per se but to (a) "stay high on the ladder" and prefer MLC (if available and affordable) over TLC and to stay away from QLC for any use case where speed, reliability, and long life are of concern, and (b) if you look at benchmark numbers then look at those that are relevant for your use case. And maybe (c) try to avoid the smaller drives of some drive series as they tend to be significantly slower than the larger ones (there's a reason why most benchmarks are done with the high end models).
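
A stripped-down illustration of the "pause between chunks" effect described above (this is not jsg's benchmark, just an editorial sketch; the file path, chunk size, chunk count and delay are arbitrary example values, and the target path should sit on the drive under test):

```python
# Sketch: write the same total amount of data with and without short idle gaps
# between chunks, to show how much a drive's cache recovery can affect the result.
# Not a real benchmark tool; path, sizes and delay are arbitrary example values.
import os
import time

def timed_write(path, chunk_mb=64, chunks=64, pause_s=0.0):
    chunk = os.urandom(chunk_mb * 1024 * 1024)
    start = time.monotonic()
    with open(path, "wb") as f:
        for _ in range(chunks):
            f.write(chunk)
            f.flush()
            os.fsync(f.fileno())      # push data to the device, not just the page cache
            if pause_s:
                time.sleep(pause_s)   # idle gap that lets the drive drain its cache
    busy = (time.monotonic() - start) - pause_s * chunks  # exclude the idle time itself
    os.remove(path)
    return chunk_mb * chunks / busy   # MB/s while actually writing

TARGET = "bench.tmp"  # place this on the drive you want to test (beware tmpfs mounts)
print("no pause  :", round(timed_write(TARGET)), "MB/s")
print("5 ms pause:", round(timed_write(TARGET, pause_s=0.005)), "MB/s")
```

With these example values each run writes only 4 GB, well inside the cache of most modern drives; pushing the total well past the cache size is where the difference jsg describes shows up.
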
