New on LowEndTalk? Please Register and read our Community Rules.
All new Registrations are manually reviewed and approved, so a short delay after registration may occur before your account becomes active.
Comments
And apparently AMD has released a microcode update which disables branch prediction altogether on Ryzen: https://lists.opensuse.org/opensuse-security-announce/2018-01/msg00004.html
I wonder whether there's really no other way, and what the actual observed performance hit is.
Just to let everyone know, patches are now out for CentOS 7
Patches for CentOS 6, Ubuntu, Debian, and Windows are not available yet.
Couple days ago: "We're not affected by this exploit."
Today: "jk. Let's quietly disable that piece of our architecture..."
Couple days from now: "See! We're not affected!"
I dug through the Project Zero post because this defies my understanding of strategies used in modern processors. I found this paragraph:
I bolded the phrase that I think is relevant. Doesn't this imply that the processor will only speculatively execute the single predicted branch? Otherwise there should be no need to mess with the branch predictor and make it predict a certain way.
I simply have not heard of any CPU speculatively executing both branches. It also makes the performance of programs like the toy example I posted earlier inexplicable -- if the processor speculatively executes both branches, the performance should be the same regardless of the selectivity.
@perennate
No, that's about what the branch prediction will be (and manipulating it), not about whether the loser branch is executed.
Again, note that the whole problem circles around the fact that the losing branch is not in the focus of the memory watchdog, i.e. accesses that would be interdicted in the winning branch are possible during the spec. exec of the losing branch.
I agree that I can be unwilling to accept things that contradict my current understanding, and I should improve this.
Here, though, I hope you can understand why I got frustrated by your replies. On the history of branch prediction, my intention was simply to point out that branch prediction strategies, with speculative execution, have existed since the 1950s, and I linked the Wikipedia article I was referring to. You first pointed out that branch prediction could be used in other ways, but the example I was talking about (IBM Stretch / 7030) does do speculative execution (as far as I can tell). Then you kept bringing up that it's a mainframe and not a processor, but I didn't see how that relates to my statement.
Haha! They must have read my post above about amd being agile.
Note that the linked fix is for family 17h, in other words for Zen processors only (Ryzen, etc.).
Hmm. I guess what I'm asking is, if both branches are speculatively executed, why does the exploit need to manipulate the branch predictor? It seems that it could ignore the branch predictor, since the out-of-bounds array check will always be speculatively executed (but of course dropped once the actual branch is evaluated), and so the effect on the cache will always be seen. I mean in this case it sounds like you're implying the branch predictor is not used at all.
In other words, they have this conditional right?
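Roughly, the conditional from the Project Zero write-up looks like this (the surrounding struct layout and function wrapper here are my own filler, guessed from the field names, to make it self-contained):

```c
#include <stddef.h>

/* Hypothetical layout inferred from the names in the write-up. */
struct array {
    size_t length;
    unsigned char data[256];
};

unsigned char read_checked(struct array *arr1,
                           size_t untrusted_offset_from_caller) {
    if (untrusted_offset_from_caller < arr1->length) {
        /* Architecturally this load only happens in bounds, but it can be
         * executed speculatively if the predictor guesses "taken" -- even
         * when the offset is actually out of bounds. */
        unsigned char value = arr1->data[untrusted_offset_from_caller];
        return value;
    }
    return 0;
}
```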
The sentence I bolded explains how they cause the branch predictor to predict that "untrusted_offset_from_caller < arr1->length" is true. But if the body would be speculatively executed anyway then this shouldn't be necessary.
Edit: maybe I'm simply misunderstanding what you mean by "speculatively execute both branches". I take that to mean that, instead of predicting which of the two branches will be taken, the CPU just executes both of them, and after the branch is evaluated it drops the effects of the loser branch and continues on the winner. However, my understanding is that modern CPUs only speculatively execute the branch that the branch predictor thinks will be taken; if that turns out to be wrong once the branch is evaluated, the CPU drops it and only then starts executing the correct one.
I believe this is very dependent on the workload.
I remember reading some papers a while ago about some new cores (AMD, I think) and how out-of-order execution and prediction work, and I was struggling to work out how this would reflect in actual performance.
At first, it looks like there will be a lot of performance gain; however, this is just an optimization to use the "wasted" core time spent waiting for data, to be prepared just in the unlikely case a losing branch yields usable or needed data at some point.
It is not actually free: you need cache space at least, and some load/dump operations, but those are also most likely done in the "free" time of the controllers.
Think of it like having a bunch of workers waiting for the slow truck to go fetch some materials they only discover they need while actually working. They do not know how good the quality of the material will be, nor exactly how long it will take to arrive; in the meantime they sit twiddling their thumbs. You, as a supervisor, think you could put them to do something useful in this time, so, with the available materials and scraps, you make some things which may or may not be useful or sellable, but the time would mostly be wasted anyway. There is a limited warehouse (cache) locally to keep both the "scraps" and what you make with them, besides the mainline products and the materials for them, so you should also make sure the people carrying stuff to and from it, as well as the available space, are always there for the "mainline" business.
This exploit, and similar ones, is in effect a set of instructions for the part of the workforce that actually works: ask for and study the scraps available locally, and even order some by the truckload, in order to gain some data about the main business and steal the corporate secrets and recipes. This is done under the supervisor's nose, at times even pickpocketing him in search of valuables and secrets. Eventually this leads to the discovery of said secrets, compromising the whole operation, and maybe even to taking control over the supervisor and blackmailing him into doing whatever the spies want, including spying on his own master.
We can ask ourselves whether insisting on "no idle hands" is a good idea in the first place, if what those hands are doing can end up being harmful, keeps power consumption higher, puts a drag on the clock of the cores doing the actual work, and wastes time moving garbage data in and out.
In theory, it can be beneficial, and the benchmarks will now show by how much, and how much less heat will be dissipated in a real-world scenario.
I am sure there is an overall performance gain, but I do not think it is as high as people believe, except for high-I/O, high-thread-density workloads on some servers.
So, for desktops and single-threaded apps it is probably better to turn it off; on a busy server, though, the performance hit from doing so will range from high to seriously disruptive...
And this shit has been "in the wild" for quite a while now... Who knows who has been exploiting it?
Now let's wait for a "fix", and fear browsing webpages with any javascript...
Encrypted or not, your data belongs to us.
For anyone running more rookie systems, Fedora also has a kernel patch out, which I installed by doing "sudo dnf --refresh --security upgrade".
Intel knew about this since June, yet proceeded with a vulnerable coffee lake "launch" in October - lovely.
Gosh, I'm waiting for the first news about 'unknown guy who stole a fortune in whatever-coins due to the Intel bug'. Shitcoins and Spectre are just a perfect match - all you need is just to read one's wallet password using JS in pixelframe leaving no traces. Fscking scary it is!
Even the Pentium M, produced between 2003 and 2008, is affected.
https://gist.github.com/ErikAugust/724d4a969fb2c6ae1bbd7b2a9e3d4bb6
Just as a point of reference, I updated my cPanel shared hosting servers with the CentOS 7 patch this morning, and I'm seeing no perceptible performance changes at all.
So, it looks like the performance implications may not be as bad as feared.
NixOS already has an opt-in patch.
And I think this is how it should be. Nobody knows better than I do whether I need it or not. At least make it opt-out...
Depends on what the patch is actually mitigating. Keep in mind that this a class of vulnerabilities and not "a vulnerability". Some of them can be mitigated cheaper (in terms of performance loss).
Moreover, it was to be expected that the relevant parties would find a "compromise mitigation strategy" that wouldn't look too bad performance wise.
But then, hey, it's up to each and everyone how much he trusts a group that first kept the whole cluster fuckup silent for months (and even launched new vulnerable processors) ...
P.S. Kudos to AMD who chose the safe route (simply disabling spec. exec) ... at least for now. I guess that something cheaper - yet still safe - is in the pipeline.
Everything older than 5 years won't be patched by Intel:
"Intel has already issued updates for the majority of processor products introduced within the past five years. By the end of next week, Intel expects to have issued updates for more than 90 percent of processor products introduced within the past five years. "
https://newsroom.intel.com/news-releases/intel-issues-updates-protect-systems-security-exploits/
It being opt-in is primarily because the patch only exists for the latest kernel version right now; and that's not what NixOS ships with by default. Therefore, 'opting in' just involves switching your kernel version to the latest version (which is one line of configuration in NixOS).
I expect that once patches are backported, it will become enabled by default in the default kernel version.
Hmmm
It doesn’t say so.
Obviously the focus is on the most recent CPUs, since those are probably deployed in the highest numbers. They might then do the others, which have already been decommissioned in large numbers, a bit later, or not at all.
It seems to me that they did not mention a timeline or anything like it.
They just said "5 years" multiple times.
Yep, but the situation is clearly still very fluid for them, so it's a bit unfair to outright assume they have no plans to publish microcode updates for older CPUs. They did not say that; just give them some time.
From where/for which OS?
And: md5? Really?? Sounds like "repairing a security flaw with a security flaw".
Yes, that sounds realistic.
But there's a but: their main plan might have a step in between, namely new processor series. I might be wrong, but from what I see I strongly expect that Intel will try to make a profit out of that cluster fuck.
Well, you don't run a de-facto monopoly to stay competitive, do you? ;p
csf / lfd: it basically does an md5sum of known binaries at some interval, and if something changes it reports it. Despite md5sum having collisions, it's still "good enough" for a simple check of whether something changed or not.
It's just a silly notification.
The software does do a

MD5SUM = "/usr/bin/md5sum"

so you could just change that to /usr/bin/sha512sum if the sum is super important to you.

By Online's and OVH's statements, Supermicro should release a microcode update tomorrow...
Can these patches introduce more crashes? Kappa
Haha. But it's ugly. Look: what alternatives do people have? Jack and Joe could buy AMD, yes, but will they throw away their computers and shell out 500 or 1000 bucks for new ones?
Yes, companies could ... but will they do that? Nope, most won't.
Now imagine Intel comes up with a new product series that is about as fast as the current ones and "spectre-safe" and, say, just to make it more attractive, offers a 50 or 100 bucks trade-in deal.