New on LowEndTalk? Please Register and read our Community Rules.
All new Registrations are manually reviewed and approved, so a short delay after registration may occur before your account becomes active.
Comments
That's true. I pointed to the few .fw binary updates in the Debian package to note how one could look in there to find any 'after Intel release' updates for their CPU. I guess it would have made sense to leave less to the cut'n'paste crew.
Turns out it does not disable that.
https://www.phoronix.com/scan.php?page=news_item&px=AMD-Branch-Prediction-Still
Hi,
To make things easy to track, Intel has released new info with a list of all affected CPUs: https://security-center.intel.com/advisory.aspx?intelid=INTEL-SA-00088&languageid=en-fr
I'm not an expert in hardware, and I have one old computer with an Intel Core2Quad Q8400, but even after checking its official page at https://ark.intel.com/products/38512/Intel-Core2-Quad-Processor-Q8400-4M-Cache-2_66-GHz-1333-MHz-FSB I'm unable to confirm whether it's affected or not.
I just don't understand the Intel Core generation names... Can someone help me with this and confirm whether my Core2Quad Q8400 is affected by Spectre and Meltdown?
Thanks
You're fine
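If you're on Linux, there's also a way to check without decoding Intel's generation names: kernels from around 4.15 onward expose per-vulnerability status files under sysfs. A minimal sketch that reads them (the directory simply doesn't exist on older kernels, so an empty result means "unknown", not "safe"):

```python
import os

# Standard sysfs location on Linux kernels >= ~4.15.
VULN_DIR = "/sys/devices/system/cpu/vulnerabilities"

def vulnerability_status():
    """Return {vulnerability_name: kernel_status_string}.

    On kernels that predate these sysfs entries the directory is
    missing, so an empty dict means "unknown", NOT "not affected".
    """
    status = {}
    if not os.path.isdir(VULN_DIR):
        return status
    for name in sorted(os.listdir(VULN_DIR)):
        with open(os.path.join(VULN_DIR, name)) as f:
            status[name] = f.read().strip()
    return status

if __name__ == "__main__":
    report = vulnerability_status()
    if not report:
        print("Kernel too old to report (no %s)" % VULN_DIR)
    for name, state in report.items():
        # e.g. "meltdown: Mitigation: PTI" or "Not affected"
        print("%-20s %s" % (name, state))
```

Run it on the box in question; a line like "Not affected" for meltdown settles it without guessing from ark.intel.com.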
It would be easier to release a list of CPUs that are not affected. Would be a much shorter list.
Holy shit, all my versions are affected (curious what isn't affected, anyway)
So will we find cheap Intel CPUs on eBay anytime soon?
Here's their page, which explains what "2nd generation", "3rd generation", etc. Core processors are: https://www.intel.com/content/www/us/en/processors/processor-numbers.html
As it says, "2nd Generation" does not mean those ancient Core 2 E/Q ones (as one might think from the "2"); it means the i-series naming. And I run a Core 2 Quad Q8300, which is not affected either.
Well, we made computers to solve complex problems and automate stuff so you'd have less to do, but in the end you wind up having to do more.
Proxmox released patched kernels today (v4 & v5): https://forum.proxmox.com/threads/meltdown-and-spectre-linux-kernel-fixes.39110/
Heavily TL;DR'd version of what Mr. Greg Kroah-Hartman and Mr. Matt Dillon (DragonFly BSD) have to say (cited and slightly commented):
That's for Meltdown. As for Spectre:
Matt Dillon from DragonFly BSD:
the impact by 50% (e.g. 5%->30% becomes 3%->15% performance loss)".
themselves. Meltdown is the 1000 pound gorilla. I won't be buying any new
Intel chips that require the mitigation. I'm really pissed off at Intel."
Intel knowingly manufactured & sold me a defective 8700K - I'll most likely return this system for an AMD equivalent.
Well noted, that post from me was not meant pro or anti anyone but rather as a "state of things" and summary based on the people being deeply involved who should know.
My main motivation was the fact that there are loads and loads of "info", but very many people have a hard time seeing a clear gestalt, getting a reasonable picture of the situation.
Out of order execution will now be known as bang out of order execution ...
For those who are interested:
intel (and others) had good reasons to go super-scalar. It simply enhances performance dramatically by pushing as many instructions through the cores as at all possible.
But there's an ugly but: it adds complexity and doesn't come for free in terms of energy, real estate, etc. For quite some time the tradeoff was attractive. But there was certainly a reason for intel to go a different route with Itanium (simplified, the route of many simple cores). One would probably not be misled to say that intel had obviously understood that super-scalar is a seductive bride, but one that turns into a very ugly wife quite soon.
The current problem - as well as the meta-problem behind it - has very much to do with the ugly side of super-scalar, in particular with its horrendous complexity. Solving it will be much more complicated than changing some details of some internal wiring; it will require a significant change in an utterly complex mechanism where almost every bit depends on many, many other bits, and those relations aren't one-dimensional but multidimensional. In other words: repairing Meltdown and Spectre will, with a very high probability, introduce new issues, plus it won't be done quickly.
It's certainly awkward timing for me. I'm supposed to be building a second PC for home (gaming PC). My plan had been for an 8700/8700K rather than AMD; now I'm not sure which way to go. That Intel still went ahead with that 8700 release irks me, given they knew about the issue. That said, AMD aren't without their issues.
AMD has stated that they plan to support the socket for 3 - 4 years or something like that, which is a lot better than Intel.
I've been wanting to build a new rig (though hard to justify it since I still have 4970k in here), and have been wavering towards AMD.
Francisco
Cheers Francisco I wasn't aware of that. Good to know.
Likewise my current PC is a 4790k, the only reason I'm doing this is that my kids are gaming more and taking up my PC a lot. So time to build a new one, and let them have the 4790k.
AMD SLICE? plz.
https://www.pcworld.com/article/3155496/computers/amd-ryzen-cpus-7-all-new-details-revealed-at-ces-2017.html
Claims that AMD expects AM4 to last until at least 2020 when they expect DDR5 to become a thing.
It's not a big thing, but if you want some room to upgrade down the road, it's not a bad thing. Intel has a bad policy of requiring a new socket & chipset every other generation.
Francisco
Haha
I actually looked into that last year but didn't find any boards I liked.
Francisco
And what pray tell are the AMD Ryzen issues? The new firmware even allows disabling the PSP if you have the tin foil hat on. Try getting Intel to give you the option to disable their remote access component..
I guess those extra 10 FPS above 120 FPS when gaming GPU-bound are the "issues"... You can always wait for Ryzen 2, likely in March, which should hit 4.5 GHz+. But then again, unlike Intel, which changed platforms three times in one year while selling what are now known (and were known to them) to be highly insecure CPUs, AMD's AM4 platform is promised until 2020. You can always upgrade from Ryzen to Ryzen 2 right away.
So benchmarks are out and NVMe is showing big performance losses. Wonder if that's only for setups without dedicated RAID cards. Anyone tested it here?
Yes, it's about the extremes, usually in gaming. Looking at the price/performance ratio, though, amd's Zen products are unbeatable. So (not too smart anyway) gamers will go for intel, but companies are way better served by amd because, looking at the price/performance ratio, it's both cheaper and faster to simply add some more amd boxen to the rack. Also keep in mind that it's exactly at the very upper end where intel's pricing goes insane.
Anything with lots of I/O will see considerable performance losses. Note, though, that buffer size is almost a non-issue (well, within reasonable limits); it's simply about the number of syscalls.
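The syscall-count point is easy to see for yourself: reading the same file with a small buffer multiplies the number of read(2) calls, and with KPTI each of those calls now pays the extra page-table-switch toll. A rough sketch (file size and buffer sizes are arbitrary; we count loop iterations of os.read() as a proxy for read(2) syscalls rather than tracing them):

```python
import os
import tempfile

def read_counting_syscalls(path, bufsize):
    """Read a whole file via os.read(); return (bytes_read, read_call_count)."""
    fd = os.open(path, os.O_RDONLY)
    total, calls = 0, 0
    try:
        while True:
            chunk = os.read(fd, bufsize)   # one read(2) per iteration
            calls += 1
            if not chunk:                  # empty read => EOF
                break
            total += len(chunk)
    finally:
        os.close(fd)
    return total, calls

if __name__ == "__main__":
    # 1 MiB scratch file for the comparison
    with tempfile.NamedTemporaryFile(delete=False) as f:
        f.write(os.urandom(1 << 20))
        path = f.name
    for bufsize in (512, 4096, 1 << 20):
        total, calls = read_counting_syscalls(path, bufsize)
        print("bufsize=%8d: %5d read() calls for %d bytes" % (bufsize, calls, total))
    os.unlink(path)
```

Same bytes either way, but the 512-byte buffer issues thousands of syscalls where the 1 MiB buffer issues a handful - which is exactly why syscall-heavy NVMe and network workloads hurt more than workloads that batch their I/O.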
I agree
And that is my main problem.
In terms of virtualization, I would rather have more cores each doing nothing except housekeeping when waiting for data, so the overall heat dissipation is much lower.
As for the real estate, well, more cores will add a lot; I still hope the future is more cores rather than further adding complexity and tricks to stay backwards compatible.
I see ARM going that way, or so it looks; I would love a 100-core CPU the size of a palm running at 40-50 degrees, doing much better than a 24-core one using three times that power and practically impossible to cool well enough at full load.
I think that encapsulates the problem entirely. People are greedy and people like more bang for their buck. Never mind the problems down the road. It'd be interesting to know whether issues were raised when they made the conscious decision 20+ years ago. It seems fairly blatant for this extra performance squeeze to bypass all the security constraints... though on the flip side it's hard to see how devs would keep quiet about such a monumental flaw for so long.
@ricardo
To be fair: intel didn't do it for the fun of it. They did it because there was tremendous demand for ever higher performance. Keep in mind that ibm's basic motivation was to play the nice guy in a market segment they - very much erroneously - estimated to be an insignificant toy segment. This is clearly symbolized in the choice of the 8088 rather than the 8086, with the former being an 8086 (16 bits) with a crippled 8 bit external interface; obviously that was deemed damn enough for that toy segment.
But the toy segment took off wildly, and intel, which was, bluntly speaking, not at all prepared and a relatively inexperienced company, was pushed and begged by the market to squeeze ever more performance out of that architecture. It seems to me that by the time it was finally realized how insanely important the PC (and hence its processor) was becoming, it was too late for major changes. Note that even the next generation was available both as a "full" 16-bit 80186 and as an 80188, the most important reasons being compatibility and keeping the cost (of the whole system) low. In short, even with what was already clearly recognizable as a major boom, intel's next generation was basically just a somewhat faster (clock) "pimped up" 8086/8088.
This in a way repeated even 2 generations later when there were also 16 bit versions of the 386 which was the first (x86) 32-bit processor.
There obviously were smart people at intel who understood that the x86 was, bluntly speaking, a creepy monster born way too early and pushed with diverse tricks and band-aids to ever more performance. Their answer was the 960 (after some other series like the 432 and the 860), which interestingly was a RISC processor. There were others, too, from other companies, some of them quite nice processors (e.g. AMD 29k or ns32x32), but the x86 architecture was already king of the hill; everybody and his dog wanted an x86-based PC (and btw, by that time another race had begun and attracted more attention, namely networking).
In short, intel was under immense pressure almost from day 1 to - no matter how - squeeze more performance out of the x86 architecture. For those of you who love to have a culprit and a name: that might quite probably be Fred Pollack, the super-scalar proponent and big shot at intel.
Another point is "did intel know and since when?" - I might be wrong but my take is that intel did not know, at least not officially. I'm absolutely sure that there were intel engineers who had hurting bellies with a violently screaming hunch, sure, but: What was intel supposed to do? Let's be realistic. Even carefully touching that issue would come down to either going ahead, somehow, no matter the tricks and band aids, and make billions - or - to confess that x86, frankly, was a shitty architecture, born too early, immature even at the beginning, and way beyond repairability.
And, frankly, even if, just suppose, intel did the latter. You can bet your house and your car that the next day millions would revolt and demand that intel went ahead. One can't simply stop a behemoth market, one can't tell millions of companies "tough luck. No more x86".
So, in a way, we are all guilty too, and intel simply doesn't have a lot of options.
There have been alternatives and attempts, even from intel themselves but everyone and his dog just wanted yet another generation of x86.
Wow, I never knew the 80188 even existed, and I am one of the rather few people who have seen an 80186-based PC first hand. Quick research seems to suggest that this beast, at least, never made it into anything that could be called a PC and was just used as an embedded CPU, as most 80186s were.
That, and the fact that by the time the 80186 machines were released, the 80286 was already being built, featuring a 16-bit internal and external data path.