Your Intel x86 CPU is Deeply Flawed (Meltdown/Spectre)

raindog308raindog308 Administrator
edited January 2018 in General

Thanks to @Infinity for sharing this...

https://www.theregister.co.uk/2018/01/02/intel_cpu_design_flaw/

"It is understood the bug is present in modern Intel processors produced in the past decade. It allows normal user programs – from database applications to JavaScript in web browsers – to discern to some extent the contents of protected kernel memory.

"The fix is to separate the kernel's memory completely from user processes using what's called Kernel Page Table Isolation, or KPTI.

"The downside to this separation is that it is relatively expensive, time wise, to keep switching between two separate address spaces for every system call and for every interrupt from the hardware. These context switches do not happen instantly, and they force the processor to dump cached data and reload information from memory. This increases the kernel's overhead, and slows down the computer. Your Intel-powered machine will run slower as a result."

tl;dr you're going to get patched and will be trading up to 30% of your CPU performance in exchange for protection from a security flaw.

Not saying that's not the right choice, but I see rebellion and forks coming...you know, the "speed is critical, we won't upgrade past Linux 4.14..." crowd, or the "we're building a mining rig, so we want to use Dark Chester's non-isolation patches" tutorial people.

@WSS I think this is the equivalent of the introduction of the catalytic converter. Shade tree coders?

EDIT: https://meltdownattack.com
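The overhead The Register describes is per kernel entry: every system call and interrupt now pays for the extra page-table switch. A rough Python sketch of why syscall-heavy code feels it (not a real KPTI benchmark, and numbers will vary by machine and libc):

```python
# Rough illustration (not a real KPTI benchmark): os.getpid() enters the
# kernel on every call, while the no-op lambda never leaves user space.
# KPTI adds a page-table switch to every one of those kernel entries.
import os
import timeit

N = 100_000
syscall_time = timeit.timeit(os.getpid, number=N)    # one kernel entry per call
python_time = timeit.timeit(lambda: None, number=N)  # stays in user space

print(f"{N} getpid() calls: {syscall_time:.4f}s")
print(f"{N} no-op calls:    {python_time:.4f}s")
```

Comparing the two timings gives a feel for how much of a tight syscall loop is just crossing the user/kernel boundary, which is the cost KPTI inflates.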


Comments

  • MikePTMikePT Moderator, Patron Provider

    This is not good. Not when we spend hundreds of Euros for a damn CPU.

  • SplitIceSplitIce Member, Host Rep
    edited January 2018

    This will be murder on technologies like nfnetlink and similar that do frequent (packet per second like) switches between address space.

    10nm proving too hard? Just slow down your existing CPUs and sell fixed editions.

    Thanked by 1MineCloud
  • WSSWSS Member
    edited January 2018

    @raindog308 Ironically, the cat does a lot of good on turbo cars, but they certainly don't help as much for the butt dyno as open headers.

    This, however, is just a huge "5eyez finally released information now that they're done with those backdoors.." (if you ask @Maounique) grade of fuckage. It's like reimplementing EMS page switching.

    @SplitIce said:
    This will be murder on technologies like nfnetlink and similar that do frequent (packet per second like) switches between address space.

    10nm proving too hard? Just slow down your existing CPUs and sell fixed editions.

    Or, you know, use ASICs instead of gutter x86 hardware.

  • eva2000eva2000 Member

    raindog308 said: "The downside to this separation is that it is relatively expensive, time wise, to keep switching between two separate address spaces for every system call and for every interrupt from the hardware. These context switches do not happen instantly, and they force the processor to dump cached data and reload information from memory. This increases the kernel's overhead, and slows down the computer. Your Intel-powered machine will run slower as a result."

    ouch... wonder if there's any work being done to make context switching faster ?

  • WSSWSS Member
    edited January 2018

    @eva2000 said:
    ouch... wonder if there's any work being done to make context switching faster ?

    ..because working around hardware bugs that can't be patched in CPU-level software is going to exponentially help if you NOP pad it enough? The fact that you bust caching for this is seriously going to limit hardware abilities based upon the few things they've built over the last decade. Shit hasn't been getting much faster, MHz-wise, but it sure has been getting more cores and cache. Now remove that from the equation.

  • MikePTMikePT Moderator, Patron Provider
    edited January 2018

    It will be interesting to see the performance impact and how cloud/VPS providers will cope with it.

  • Well, fuck..

    Thanked by 1flatland_spider
  • HxxxHxxx Member

    patch is probably optional. Nothing crucial here, next thread.

  • FranciscoFrancisco Member, Top Host, Host Rep

    @Hxxx said:
    patch is probably optional. Nothing crucial here, next thread.

    No, it's being merged into every public kernel. Maybe they'll add a boot time flag, no promises though.

    Francisco

  • WSSWSS Member

    @Francisco said:

    @Hxxx said:
    patch is probably optional. Nothing crucial here, next thread.

    No, it's being merged into every public kernel. Maybe they'll add a boot time flag, no promises though.

    Francisco

    Get a kernel page!

  • SplitIceSplitIce Member, Host Rep

    What are the implications of this for HNs (host nodes) in a cloud scenario?

  • mfsmfs Banned, Member

    Francisco said: Maybe they'll add a boot time flag

    both pti=off and nopti are mainlined and referenced in Torvalds' kernel-parameters.txt
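For anyone who does want to trade the protection back for speed, a sketch of where that flag goes on a typical GRUB-based distro (file paths and the regeneration command vary by distro, and booting this way leaves the machine exposed to Meltdown):

```
# /etc/default/grub -- append nopti (or pti=off) to the kernel command line,
# then regenerate the config: `update-grub` on Debian/Ubuntu, or
# `grub2-mkconfig -o /boot/grub2/grub.cfg` on RHEL/CentOS.
GRUB_CMDLINE_LINUX_DEFAULT="quiet nopti"
```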

  • FranciscoFrancisco Member, Top Host, Host Rep

    I wonder what's going on with the 8700K's that there's that big of a drop?

    Francisco

  • eva2000eva2000 Member

    @Francisco said:

    I wonder what's going on with the 8700K's that there's that big of a drop?

    Francisco

    the i7-8700K system used a Samsung 950 PRO NVMe SSD, so that could be related?

    FS-Mark performance appears to be significantly slower with this latest Linux kernel Git code, at least when using faster storage as found with the Core i7 8700K setup. The i7-8700K system was using a Samsung 950 PRO NVMe SSD while the i7-6800K system was using a slower SATA 3.0 Toshiba TR150 SSD.
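The FS-Mark result fits the syscall story: metadata-heavy workloads spend almost all their time in open/write/fsync/close/unlink, so once the storage itself is fast (NVMe), per-syscall overhead dominates. A rough Python imitation of that kind of churn (file counts and sizes made up for illustration, not FS-Mark's actual parameters):

```python
# Rough imitation of an FS-Mark-style run (counts/sizes are made up): each
# file costs open/write/fsync/close/unlink -- nearly pure syscalls, so
# per-syscall KPTI overhead dominates once the disk (e.g. NVMe) is fast.
import os
import tempfile
import time

def churn_files(n=100, size=4096):
    payload = b"\0" * size
    with tempfile.TemporaryDirectory() as tmpdir:
        start = time.perf_counter()
        for i in range(n):
            path = os.path.join(tmpdir, f"f{i}")
            with open(path, "wb") as f:
                f.write(payload)
                f.flush()
                os.fsync(f.fileno())  # force the write out, like FS-Mark does
            os.unlink(path)
        return time.perf_counter() - start

elapsed = churn_files()
print(f"100 create/fsync/unlink cycles took {elapsed:.3f}s")
```

On a slow SATA disk the fsync latency hides the syscall cost; on NVMe the kernel-entry overhead becomes the bottleneck, which is consistent with the 8700K/950 PRO setup regressing hardest.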

  • bsdguybsdguy Member

    xen this or that, kvm, etc - forget it, this animal is a fucking slayer in our field (networking). The problem is about syscalls, i.e. switching to/from ring 0 which fucks you the harder the more syscalls you make, that number typically being between fucking painfully and insanely high.

    Secondly, consider the fact that it's not firmware/microcode repairable which translates to hardwired "smart" shortcuts right in the silicon.

    The good news (if you like amd) is: For amd this may turn out to be just the perfect turbo because now - as well as for some more time (one doesn't change the innards of a complex design like intel cpus in a week or so. plus considerable parts of the production line will need to be adapted) - "just get an amd based system" is about the most sensible alternative.

    I guess this one is far worse than the floating point fuckup many years ago.

    Thanked by 3netomx kmas ricardo
  • bsdguybsdguy Member

    @eva2000

    Those phoronix benchmarks are utterly worthless for most of us as they are game focussed whereas server loads are largely i/o bound.
    In fact, those tests are even worthless for normal desktop scenarios as gaming is among the least crippled scenarios (lots and lots of calculations, not a lot of i/o).

  • WSSWSS Member

    I like the fact that this patch currently forces ALL Intel based CPUs to use PTI.

    Thanked by 1qrwteyrutiyoup
  • jarjar Member, Patron Provider

    Nerds!

    / downs more everclear

    Thanked by 1Aidan
  • bsdguy said: Those phoronix benchmarks are utterly worthless for most of us as they are game focussed whereas server loads are largely i/o bound. In fact, those tests are even worthless for normal desktop scenarios as gaming is among the least crippled scenarios (lots and lots of calculations, not a lot of i/o).

    believe more benchmarks are to come but yeah...

  • jarjar Member, Patron Provider

    Gaming is the only benchmark that matters.

    Sincerely,
    15,000 people on Reddit probably

    Thanked by 3netomx Hxxx cassa
  • MikeAMikeA Member, Host Rep

    @jarland said:
    Gaming is the only benchmark that matters.

    Sincerely,
    15,000 people on Reddit probably

    Did anyone test with Crysis???

  • AidanAidan Member

    @MikeA said:
    Did anyone test with Crysis???

    Nothing can run Crysis, no point in testing it.

    Thanked by 1Wolveix
  • AuroraZAuroraZ Member

    @Aidan said:

    Did anyone test with Crysis???

    Nothing can run Crysis, no point in testing it.

    Cyrix can.

  • jarjar Member, Patron Provider

    @MikeA said:

    @jarland said:
    Gaming is the only benchmark that matters.

    Sincerely,
    15,000 people on Reddit probably

    Did anyone test with Crysis???

    It's not loading under nglide for some reason.

  • WSSWSS Member

    @AuroraZ said:

    @Aidan said:

    Did anyone test with Crysis???

    Nothing can run Crysis, no point in testing it.

    Cyrix can.

    CentaurHauls!

  • Aidan said: Nothing can run Crysis, no point in testing it.

    There goes my hope of running crysis on this sweet new quantum build

  • perennateperennate Member, Host Rep
    edited January 2018

    bsdguy said: xen this or that, kvm, etc - forget it, this animal is a fucking slayer in our field (networking). The problem is about syscalls, i.e. switching to/from ring 0 which fucks you the harder the more syscalls you make, that number typically being between fucking painfully and insanely high.

    Can't you simply boot your system with the nopti option? The attack surface for a router or similar appliance seems pretty small, so avoiding the performance loss seems worth it.

    Edit: or if you're talking about VMs in general, the point is that Xen HVM guests might be unable to exploit the hardware vulnerabilities because of some feature of the hypervisor. In that case, the host can leave page table isolation disabled, right? Whether the guest remains vulnerable doesn't matter too much since the user can choose whether to boot with isolation or not.
