New on LowEndTalk? Please Register and read our Community Rules.
All new Registrations are manually reviewed and approved, so a short delay after registration may occur before your account becomes active.
Comments
Can't argue with that lol
Yes especially for file storage purposes like that, if you care about data integrity
Without ECC, at a minimum you should run regular memtests, and have a checksumming filesystem like BTRFS/ZFS and run regular scrubs, replacing any corrupt files from backup
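On systems without ZFS/BTRFS you can approximate a scrub in userspace by keeping a manifest of known-good checksums and re-verifying against it. A minimal sketch in Python (the function names and manifest scheme are made up for illustration, not any real tool):

```python
import hashlib


def sha256_of(path, bufsize=1 << 20):
    """Stream a file through SHA-256 so large files never fully load into RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(bufsize):
            h.update(chunk)
    return h.hexdigest()


def build_manifest(paths):
    """Record a known-good checksum for every file while it is trusted."""
    return {p: sha256_of(p) for p in paths}


def scrub(manifest):
    """Re-hash each file and return the ones that no longer match."""
    return [p for p, digest in manifest.items() if sha256_of(p) != digest]
```

Anything `scrub()` flags would then be restored from backup by hand — unlike a ZFS scrub there's no redundant copy to self-heal from.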
Fair point.
Thanks! I just booted into Rescue Mode to confirm it wasn't just my OS (FreeBSD 12), and when uploading and downloading at the same time the total is limited to 100Mbps rather than full duplex (100Mbps each way) on my server... guess I'll open a ticket now.
"Server = must have ECC" is a misguided obsession of people who tag along with things they heard somewhere, just because "well everyone knows that's true", without much critical thought or risk assessment of their own (or the capability to do such things in the first place).
To rephrase, if I were explaining ECC (or the lack thereof) to you like you're five, I would just say this:
It's gonna be fine.
My workstations also must have ECC.
One of the key features of ECC is that it reports back to the OS when an error occurs. I've seen my fair share of those reports, more than enough to sell me on the concept.
With a checksumming filesystem, file corruption becomes obvious where it would otherwise be very easily missed. I see it often enough on non-ECC systems to be concerned, in cases where the hard disk can be ruled out (no failed reads/writes logged).
Likewise. I'm using a Supermicro server board in my desktop at the moment, the IPMI power control is a godsend, and I've never had such stability from consumer desktop motherboards in the past, even high end ones
I'll never build another PC out of gamer-grade parts again
I see, so the ECC religion goes even further. Speaking of which, you're not concerned about CPU computation errors, are you? Why not get three CPUs running the same code in parallel, so that in case one makes a mistake, the other two are able to correct it and "report back to the OS when an error occurs". Because otherwise you never know!
Oh and nothing against checksumming filesystems, those are nice. That's one place where the tradeoffs (risk/cost considerations) make a good case for having checksums in the FS rather than not.
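Sarcasm aside, the triple-CPU idea is a real technique: triple modular redundancy (TMR), used in avionics and spaceflight, where the same computation runs three times and a majority vote outvotes a single faulty replica. A toy sketch (the function name `tmr` is mine, purely illustrative):

```python
from collections import Counter


def tmr(*replicas):
    """Triple modular redundancy: return the majority result of
    independently computed replicas; one faulty replica is outvoted."""
    value, votes = Counter(replicas).most_common(1)[0]
    if votes < 2:
        raise RuntimeError("no majority: more than one replica disagreed")
    return value
```

Usage would be `tmr(f(x), f(x), f(x))` — which also shows why nobody does this for commodity servers: you pay triple the compute for every result, versus a few percent for ECC DRAM.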
Then look towards your storage controller first and foremost. A buggy or overheating one will be a couple orders of magnitude more likely to be the cause of corruptions, rather than imagined "RAM errors".
I have one from last time in Canadian data centre that I have idling.
Would someone be willing to pay for it? Can they be transferred?
Same.
Once I got my first Xeon at home, I couldn't imagine going back anytime soon, or ever.
We can talk about "imagined" RAM errors right after you've run heavy calculations on a non-ECC machine and started wondering about the results.
Thanks, wonder if any are left, I've been offline the past few days. I already have an i5 at BHS but it's a little tempting to save some money by downgrading to an i3. It's in RBX that I had an N2800 with 1x2TB at 8.99e/m (ouch), and I got one of the new ones with 2x2TB for the same price with the idea of migrating. I just ended up renewing the 1x2TB for another month because it's about to expire (3 days) and I haven't migrated it yet, so I gave myself some extra time. It takes almost 2 days to move 2TB of data through a 100Mbit port...
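That ~2-day figure checks out on the back of an envelope, assuming 2TB means decimal terabytes and the line sustains a full 100Mbit/s with no protocol overhead:

```python
# Rough transfer-time estimate; assumes decimal TB and zero overhead.
bytes_total = 2 * 10**12          # 2 TB
rate_bytes_per_s = 100e6 / 8      # 100 Mbit/s line rate
seconds = bytes_total / rate_bytes_per_s
days = seconds / 86400            # comes out around 1.85 days
```

Real-world TCP/protocol overhead only pushes it closer to the full 2 days.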
You missed out on the sale.
I hear you. Used to be 6.99 then went to 7.99 and then 8.99 and now back to 5.99 (yay!) - I've never seen the 2TB variant at this price point and so snagged one in GRA (and RBX - that's another minefield - let's not go there )
I'm kind of iffy on the 2x2TB with the N2800s - the rebuild on it is going to be a bear I think esp. with the weak CPU on top of it. Price wise it's great though.
And network-wise, while my thinking has been anti-100Mbit, I've come to see that the KS 100Mbit is a very consistent 100Mbit connection, and that's worth something: it quite often puts other providers' gigabit links to shame. So it's not all that bad considering the reliability+price combination.
Well, true, if you're scared that memory errors could destroy your porn archive.
You could get three servers instead of one, for error correction.
Even that does not fully prevent loss.
So get another three backup solutions.
Maybe the error gets stored on the backup "solutions".
So you get a backup backup solution.
Are you done yet? No.
What if Earth gets destroyed by a giant fucking X-ray burst from a dying star?
You'd better go to SpaceX and build a DC on Mars. Oh wait, the Sun will die at some point.
Let's expand into Andromeda.
Sounds like a lot of effort for a porn collection! That better be some good porn for that effort!
I second the point about consistent speed. I am actually in Asia and despite the distance from FR, I consistently max out 100Mbit both ways. Most of the other providers I have tried are very good in their own way, but none have gotten me close to the advertised maximum in either direction. The predictability is a huge bonus because I can actually plan my life around it, i.e. if I set any job, I can be sure it will be done by a certain time. For this, I think it's every euro well spent.
While I agree storage controllers would be a viable cause, all my issues have been on chipset SATA ports, or even where the SATA controller is integrated into the CPU (e.g. Bay Trail-D).
I don't think error checking needs to extend any further than media that involve leakage and imperfections; CPUs are too stable except in the presence of e.g. radiation. Considering HDDs, tape, and flash memory all need ECC, it seems like a no-brainer to have it in DRAM too. Even CPU caches have ECC!
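For the curious, the single-error-correcting codes behind ECC DRAM are Hamming-style (real modules use wider SECDED variants, but the idea is the same). A toy Hamming(7,4) sketch, assuming the classic bit layout, just to show how a flipped bit is located and corrected:

```python
def hamming74_encode(d1, d2, d3, d4):
    """Encode 4 data bits into a 7-bit Hamming codeword.
    Layout (1-indexed positions): p1 p2 d1 p3 d2 d3 d4."""
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]


def hamming74_correct(code):
    """Recompute the parity checks; a nonzero syndrome is the 1-indexed
    position of a single flipped bit, which gets flipped back."""
    c = list(code)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]   # covers positions 1,3,5,7
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]   # covers positions 2,3,6,7
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]   # covers positions 4,5,6,7
    syndrome = s1 + 2 * s2 + 4 * s3
    if syndrome:
        c[syndrome - 1] ^= 1         # correct the single-bit error
    return [c[2], c[4], c[5], c[6]], syndrome
```

The nonzero syndrome is exactly the "report back to the OS" part mentioned earlier: the hardware knows not just that an error happened, but where.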
I was checking the RAID on a server the other week. I noticed about 8 ECC errors logged over a few hours back in December, and none since. If the server had up and shit the bed, it likely would have ruined my holidays and my customers' busy season. The ECC premium paid for itself in my labour and in not stopping production.
ECC errors in a RAID? This sounds like you have no idea what you're talking about. The specific term ECC happens to refer to error correction in various types of RAM only. If you meant parity mismatches on RAID5/6, then by all means call them just that, but not "ECC".
ECC I think is critical if handling large volumes of data on an extended basis (say fintech or big data analysis where over a hundred GB of data goes into the RAM for extended processing). For plebeians like me just doing some personal syncing or downloading, I don't think I need ECC on my server. It's a cost-benefit comfort zone issue for the average person.
Haha. The purpose of checking on the server was to check the RAID status, but the server manager shows hardware status for the whole server, hence my noticing the ECC error log. I guess my point was that the server continued without bluescreening and I never knew or was impacted by it. Without ECC, I would have had to deal with a bluescreened server, with no ECC error logs to pinpoint the issue immediately, and had downtime.
That being said, you know your RAID card has/can have ECC memory too, right? Not much benefit to RAID if the RAM fucks it all up.
Google search hit:
https://serverfault.com/questions/574068/what-does-single-bit-ecc-errors-were-detected-on-the-raid-controller-mean
It's nothing to do with volume and everything to do with data integrity. But I take your half-point: the larger the file, the more is lost to a single bit error. 1TB in small files isn't the same as a 1TB system image backup.
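That half-point can be made concrete: if bit flips are independent at some per-bit error rate, the chance a given file is touched scales with its size. A quick sketch (the error rate here is an arbitrary illustrative value, not a measured DRAM figure):

```python
def p_file_hit(size_bytes, ber=1e-15):
    """Probability a file of this size contains at least one flipped bit,
    assuming independent flips at bit-error-rate `ber` (illustrative)."""
    bits = size_bytes * 8
    return 1 - (1 - ber) ** bits


small = p_file_hit(10**6)    # a 1 MB document: vanishingly unlikely
big = p_file_hit(10**12)     # a 1 TB system image: ~a million times likelier
```

Same total data, but one corrupt bit in the small-file case costs you one document; in the image case it can cost you the whole backup.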
It is, but it isn't.
Yes, data integrity is important, but the average user doesn't deal with files hundreds of gigabytes in size, so the odds of corruption are a good deal lower even without ECC, and even if it happens, it is probably not a deal breaker, especially if you have a few backups.
In general, my point is that most average people have an outsized tolerance for the occasional data corruption from single bit errors, because it doesn't happen often and the consequences are usually trivial. If your use case has a really low tolerance for error, then of course I will advise ECC RAM together with a Supermicro-grade motherboard.
ECC is a must.
You will come to the same conclusion if you think about it a bit.
ECC is a must on a kimsufi
I've certainly seen memory errors when I've done computation intensive things on many kinds of computers. That includes my i7-3770 Hetzner dedi. Hetzner swapped out the ram when I opened a ticket and I haven't thoroughly tested the new ram yet, but I'd rather have ECC if I could get it. I will probably switch the i7 to an E3 at some point.
With the Kimsufi servers I don't worry about it as much, since I use them mostly for backup storage and don't compute much on them.
Unnecessary for the atom seedboxes, at least from my perspective.
For computation, you need ECC RAM so you don't crash. For backups, you need ECC for data integrity. The rate at which RAM produces errors doesn't depend on the load, and since all data in and out of your storage passes through RAM, an error there can corrupt a lot more data than just the last failed compile.