A new speculative execution attack on x86

bulbasaur Member
edited July 2022 in News

Spectre was one of the “OG” speculative execution attacks, revealed along with Meltdown. It was initially thought to have been mitigated by implementing retpolines, but as it turns out, that mitigation doesn’t work as expected either!

Meet “Retbleed”:

https://arstechnica.com/information-technology/2022/07/intel-and-amd-cpus-vulnerable-to-a-new-speculative-execution-attack/

Though I’d suggest reading the papers themselves for anyone looking to get a deeper understanding:

Spectre: https://spectreattack.com/spectre.pdf

Retbleed: https://comsec.ethz.ch/wp-content/files/retbleed_sec22.pdf

Honestly I sometimes feel we just shouldn’t bother with speculative execution and go back to the days of 8086s and their simple architectures.


Comments

  • Please ELI5 speculative execution

    Thanked by 2: 0xbkt, tjn
  • Not_Oles Moderator, Patron Provider
    Thanked by 1: szarka
  • raindog308 Administrator, Veteran

    Where's the cool logo? So disappointing.

    Thanked by 1: rm_
  • Cabbage Member
    edited July 2022

    @vitobotta said:
    Please ELI5 speculative execution

    From what I understood from Meltdown's whitepaper, it's an optimisation technique where the CPU basically calculates some stuff ahead of schedule, even though it doesn't yet know whether the result will be needed. If it turns out to be needed, you get your result pre-calculated. If not, it's discarded.

    Correct me if I'm wrong though.

    @raindog308 said:
    Where's the cool logo? So disappointing.

    Who the hell designs them? It's like giving supervillains really cool names for no reason.

  • bulbasaur Member
    edited July 2022

    @vitobotta said:
    Please ELI5 speculative execution

    Inside the CPU there are separate circuits handling the different parts of an instruction's execution, such as instruction fetch, instruction decode, operand fetch, execution, and memory writeback.

    Since these phases of execution don't have dependencies on each other, a CPU can have a "pipeline" as deep as the number of independent stages, and keep at least that many instructions in flight to speed up the rate at which instructions are executed. In practice, a CPU may look ahead by an order of magnitude more instructions than the pipeline depth, finding chains of independent instructions and reordering them before putting them onto the pipeline.

    However, branches (if statements and returns) can prevent pipelining, as the instructions to fetch in advance aren't known until the branch's operands have been fetched and its result computed. This is where branch prediction mechanisms come in. They use heuristics based on the program's execution history to guess the outcome, so that pipelining can continue working as usual. If the prediction is wrong, the CPU discards the effects of the incorrectly predicted path. This is called "speculative execution".

    However, a complete reversal of such effects isn't entirely possible. For example, data loaded into the CPU cache during speculative execution stays there, even though it would never have been loaded on the correctly predicted path.
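
    To make that concrete, here is a minimal sketch of the well-known "bounds check bypass" gadget shape from the Spectre paper linked above (the array names, sizes and the 512-byte stride are illustrative, not taken from any real code):

    ```c
    #include <stdint.h>
    #include <stddef.h>

    uint8_t array1[16];
    unsigned int array1_size = 16;
    uint8_t array2[256 * 512];          /* one cache line per possible byte value */

    void victim_function(size_t x)
    {
        if (x < array1_size) {                          /* the branch the attacker mistrains */
            uint8_t secret = array1[x];                 /* speculative out-of-bounds read    */
            volatile uint8_t y = array2[secret * 512];  /* which line ends up cached depends */
            (void)y;                                    /* on the secret byte                */
        }
    }
    ```

    If the predictor has been trained to expect the bounds check to pass, the CPU speculatively runs the body even for an out-of-bounds x. The architectural results are discarded, but the cache line of array2 that was touched is not.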

    Executing programs share at least the L3 cache on the CPU, and can also share L1 and L2 caches via hyperthreading or timesharing. Since programs can't be written without branches, when the spy program executes, it can:

    1. Mistrain the branch predictor by biasing the result of a branch to always have the same outcome. When the victim program is executed later, the mistrained branch predictor is tricked into loading sensitive data from the victim program into the shared cache.

    2. Evict cache entries (Flush+Reload) or fill up a cache set with its own data (Prime+Probe), so that when the victim program executes, there is a measurable time difference in loading the data, as it causes a cache miss.

    3. Measure those time differences when it reloads its own data from the cache (a rough timing probe of this kind is sketched after this list). Combined with the fact that branches are extremely common in programming, this can be used to obtain information about the victim program's state and data.
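
    Here is a minimal sketch of the timing probe those techniques rely on, assuming an x86 machine and GCC/Clang intrinsics (the threshold value and all of the surrounding attack logic are left out; this only shows how "cached vs. not cached" becomes measurable):

    ```c
    #include <stdint.h>
    #include <x86intrin.h>   /* __rdtscp, _mm_clflush */

    static uint64_t time_access(volatile uint8_t *addr)
    {
        unsigned int aux;
        uint64_t start = __rdtscp(&aux);   /* timestamp before the access   */
        (void)*addr;                       /* the memory access being timed */
        uint64_t end = __rdtscp(&aux);     /* timestamp after the access    */
        return end - start;
    }

    /* Returns 1 if *addr appears to have been cached (fast access), 0 otherwise,
     * then flushes the line so the next round starts from a known state. */
    static int probe(volatile uint8_t *addr, uint64_t threshold_cycles)
    {
        uint64_t t = time_access(addr);
        _mm_clflush((const void *)addr);
        return t < threshold_cycles;
    }
    ```

    A cache hit is typically tens of cycles and a miss hundreds, so a single threshold separates the two cases well enough for this kind of attack.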

    Note that the "spy program" just needs to be able to run on your system; it doesn't require root access, filesystem permissions or any other special permissions. Even if you say you never execute untrusted programs on your device, you kinda do: whenever you visit a web page, you execute JavaScript and WASM.

    @Cabbage said: From what I understood from Meltdown's whitepaper

    Meltdown is similar in that it uses cache effects as a side channel, but it is a different attack, one that's mitigated by performing permission checks before speculatively loaded data is used. Spectre is more general in nature, or, from the paper:

    Spectre attacks only assume that speculatively executed instructions can read from memory that the victim process could access normally, e.g., without triggering a page fault or exception. Hence, Spectre is orthogonal to Meltdown which exploits scenarios where some CPUs allow out-of-order execution of user instructions to read kernel memory. Consequently, even if a processor prevents speculative execution of instructions in user processes from accessing kernel memory, Spectre attacks still work.

    (A sidenote - please read the papers. They're interesting, and easily approachable as they reference all the information necessary to understand the attack.)

  • @vitobotta said:
    Please ELI5 speculative execution

    @stevewatson301 Jesus Christ, what 5-year-olds do you talk to?

    Thanked by 4: ralf, Erisa, ntlx, martheen
  • bulbasaur Member
    edited July 2022

    @TimboJones said:

    @vitobotta said:
    Please ELI5 speculative execution

    @stevewatson301 Jesus Christ, what 5-year-olds do you talk to?

    A real five year old has no business trying to understand electronics, let alone speculative execution in CPUs.

    I could post some of those simplified explanations by Red Hat etc. (which are perhaps intended for business executives), but they do no better than "something something your data is leaked".

  • @raindog308 said:
    Where's the cool logo? So disappointing.

    Especially with a name like that.

  • TimboJones Member
    edited July 2022

    @stevewatson301 said:

    @TimboJones said:

    @vitobotta said:
    Please ELI5 speculative execution

    @stevewatson301 Jesus Christ, what 5-year-olds do you talk to?

    A real five year old has no business trying to understand electronics, let alone speculative execution in CPUs.

    I could post some of those simplified explanations by Red Hat etc. (which are perhaps intended for business executives), but they do no better than "something something your data is leaked".

    How about:

    You have a friend invited over for a playdate. You ask your friend what they want for lunch so that when your mom asks you what you guys want for lunch, you have the answer ready to go. And depending on how fast you answer your mom "chicken nuggets", your mom could figure out what friend is over. /shrug

    Or, sometimes you can tell when people are lying or telling the truth because they have an answer faster than it takes to think about it.

    Thanked by 1: Erisa
  • jackb Member, Host Rep
    edited July 2022

    Your dog knows he usually gets a walk after lunch. When he sees you preparing your lunch, he starts getting excited for his walk even though you've not told him he's getting one yet.

    People in the park might be able to guess when you had lunch based on when you walked the dog.

  • @vitobotta said:
    Please ELI5 speculative execution

    Even within a single CPU core, there are multiple parts that are not needed for every part of every instruction, so the CPU can effectively do several things at once. The oldest version of this is a simple 3-stage pipeline: for each instruction the CPU needs to load it, execute it, and save the result. You can often store the result of instruction 1 while processing instruction 2 and loading/decoding instruction 3 ready to be processed. In an ideal situation (assuming each stage takes the same amount of effort) this means a sequence of instructions can be performed three times as fast.

    Some instructions take different amounts of effort, and therefore time, to run: a simple set-value is faster than an increment, which is faster than general addition, which is much quicker than multiplication, and so on. So modern CPUs have deeper pipelines, meaning a few adds can be done while the CPU is chewing over a multiply. For instance the instructions “add register r1 to register r2” and “multiply register r3 by register r4” can be done at the same time. So if the CPU can load instructions fast enough from cache, it can effectively do several instructions per clock tick. This, while a disjoint and confusing mess (sorry, no time to edit more), is a massive over-simplification (see https://en.wikipedia.org/wiki/Instruction_pipelining for a starting point towards a more complete picture) but illustrates the point well enough. In fact x86 CPUs since the Pentium have had multiple arithmetic units and similar parts specifically to take advantage of this, so if two independent multiply instructions are next to each other, it can do them at the same time instead of the second waiting for the ALU to finish the first.
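
    A minimal sketch of the same idea in C (the function names are made up for illustration): summing with a single accumulator forces every add to wait for the previous one, while two independent accumulators give the pipeline two chains it can work on at once, which is usually measurably faster on out-of-order CPUs:

    ```c
    #include <stddef.h>

    /* Each add depends on the previous one, so the adds cannot overlap. */
    double sum_dependent(const double *a, size_t n)
    {
        double s = 0.0;
        for (size_t i = 0; i < n; i++)
            s += a[i];
        return s;
    }

    /* The two accumulators are independent chains, so the pipeline can overlap
     * them (the compiler won't merge them for floating point without
     * -ffast-math, since that would change rounding). */
    double sum_two_accumulators(const double *a, size_t n)
    {
        double s0 = 0.0, s1 = 0.0;
        size_t i = 0;
        for (; i + 1 < n; i += 2) {
            s0 += a[i];
            s1 += a[i + 1];
        }
        if (i < n)
            s0 += a[i];
        return s0 + s1;
    }
    ```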

    Sometimes you can't do multiple things at once because the effect of one instruction is needed by the next: you can't combine “add r1 to r2 then multiply r3 by r2” this way, because the result of the first instruction is needed for the second to be performed; otherwise you'd use the old value of r2 and get the wrong result in r3 at the end.

    This is particularly a problem for conditional branches and loops: if you don't know which direction the code is going to go in after a certain instruction, you don't know which other bits to run to save time. Speculative execution basically means the CPU picks one of the two options and runs it, but doesn't commit the results until the conditional jump is decided, at which point it either stores the results (so time is saved, as they were calculated early) or throws them away (as they are no longer needed and shouldn't have been run). You could potentially run both sets of instructions and just drop one, but I don't think this has been found to be practical in real chip designs. Despite sometimes having to throw work away, this can have a significant speed advantage, particularly for some tight loops and decision trees, because of the time saved when the right guess was made.
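
    The effect is easy to see with a data-dependent branch. In the sketch below (illustrative only), count_large() runs the same instructions either way, but the branch is almost always predicted correctly on sorted input and mispredicted roughly half the time on shuffled input, which costs a pipeline flush each time. (Whether the difference shows up also depends on the compiler not turning the if into a branchless conditional move.)

    ```c
    #include <stdint.h>
    #include <stddef.h>

    size_t count_large(const uint8_t *data, size_t n)
    {
        size_t count = 0;
        for (size_t i = 0; i < n; i++) {
            if (data[i] >= 128)   /* the branch the predictor has to guess */
                count++;
        }
        return count;
    }
    ```

    Timing this over sorted versus randomly shuffled data is the classic way to demonstrate how much a mispredicted branch, and the speculative work that gets thrown away, actually costs.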

    Fully dropping all the state from the incorrectly speculatively executed instructions can be quite complicated. In fact it often isn't really needed, so CPUs didn't bother (the state was just going to be overwritten before ever being read again, so why worry?). Speculative execution attacks usually work by inspecting these side effects, which might include data loaded into L1 cache that the attacking code wouldn't normally be able to access in main memory.

    Thanked by 1: bikegremlin
  • @jackb said:
    Your dog knows he usually gets a walk after lunch. When he sees you preparing your lunch, he starts getting excited for his walk even though you've not told him he's getting one yet.

    People in the park might be able to guess when you had lunch based on when you walked the dog.

    My beef with these explanations is how they don’t tell you anything about how it actually works. Analogies are fun, but only as long as they preserve the original message somewhat.

  • @stevewatson301 said:

    @jackb said:
    Your dog knows he usually gets a walk after lunch. When he sees you preparing your lunch, he starts getting excited for his walk even though you've not told him he's getting one yet.

    People in the park might be able to guess when you had lunch based on when you walked the dog.

    My beef with these explanations is how they don’t tell you anything about how it actually works. Analogies are fun, but only as long as they preserve the original message somewhat.

    When someone says "ELI5", they just want to know how good/bad it is. No details on purpose.

  • ralf Member

    @stevewatson301 said:

    @jackb said:
    Your dog knows he usually gets a walk after lunch. When he sees you preparing your lunch, he starts getting excited for his walk even though you've not told him he's getting one yet.

    People in the park might be able to guess when you had lunch based on when you walked the dog.

    My beef with these explanations is how they don’t tell you anything about how it actually works. Analogies are fun, but only as long as they preserve the original message somewhat.

    I actually thought this was the best answer here for an ELI5. If you want to know how it works, you don't want an ELI5.

    There's a bunch of YouTube videos sponsored by Wired that explain physics concepts at 5 different levels of complexity, matching the levels you'd be taught at through your life. Largely, education is like this: at each level it's "that thing we told you before? It's not really like that at all".

    Here's the one on gravity for example:
    https://youtube.com/watch?v=QcUey-DVYjk

  • Well, the bottom line is: how exactly do I incorporate a new speculative execution attack on x86 into my grind? In what way is it going to help me obtain the bag? Thanks.

  • edited July 2022

    Would be a lot easier to just listen on I/O port 61 or use an ICO file to throw a shell up

  • emg Veteran

    I imagine how much more difficult it is to write good reliable POST code (power-on self test) for the newest systems with the newest processors.

    Writing test code for all of the caching, cache flushing, speculative execution, and other obscure processor functionality must be complex. Keep in mind that no test can depend on functionality that has not been tested yet. You cannot fully rely on instruction timing either, because that can vary too.

  • @emg said:
    I imagine how much more difficult it is to write good reliable POST code (power-on self test) for the newest systems with the newest processors.

    What? No. The power on self test is just an electrical test; it'll have no impact at all. It's more correct to say it can have a BIOS impact, but the problem would need to be addressed in CPU microcode, the OS and compilers.

    Writing test code for all of the caching, cache flushing, speculative execution, and other obscure processor functionality must be complex. Keep in mind that no test can depend on functionality that has not been tested yet. You cannot fully rely on instruction timing either, because that can vary too.

    It doesn't do what you think it does.

    Thanked by 1: darkimmortal
  • emg Veteran

    @TimboJones said:

    What? No. The power on self test is just an electrical test; it'll have no impact at all. It's more correct to say it can have a BIOS impact, but the problem would need to be addressed in CPU microcode, the OS and compilers.

    (POST) - It doesn't do what you think it does.

    That does not match my experience, but perhaps times have changed.

    I wrote the POST code in assembly and machine language for several complex computer systems. We never assumed anything at startup, not even the CPU or the POST code itself. Each step in the testing process relied only on elements that had been verified up to that point. It is a bootstrap-like process. You start at the most basic CPU elements - registers, etc. and move up from there.

    The POST code that I am familiar with is in the startup firmware - the first instructions the CPU executes at startup. It happens before OS and compilers, but outside the internal microcode of the CPU itself.
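
    (To give a flavour of the kind of bootstrap-style check described above, here is a minimal sketch of one classic test, a "walking ones" pattern on a RAM word / data bus. Real POST code runs from ROM before RAM can be trusted, so this C version is purely illustrative and the function name is made up.)

    ```c
    #include <stdint.h>

    /* Writes and reads back a single walking '1' bit at the given address.
     * Returns 0 on success, or the first pattern that failed to read back. */
    uint32_t walking_ones_test(volatile uint32_t *addr)
    {
        for (uint32_t pattern = 1; pattern != 0; pattern <<= 1) {
            *addr = pattern;          /* drive one data line high at a time */
            if (*addr != pattern)     /* read back and compare              */
                return pattern;       /* that bit/line is stuck or shorted  */
        }
        return 0;
    }
    ```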

    Thanked by 1: bikegremlin
  • yoursunny Member, IPv6 Advocate

    @TimboJones said:

    @stevewatson301 said:

    @jackb said:
    Your dog knows he usually gets a walk after lunch. When he sees you preparing your lunch, he starts getting excited for his walk even though you've not told him he's getting one yet.

    People in the park might be able to guess when you had lunch based on when you walked the dog.

    My beef with these explanations is how they don’t tell you anything about how it actually works. Analogies are fun, but only as long as they preserve the original message somewhat.

    When someone says "ELI5", they just want to know how good/bad it is. No details on purpose.

    Then it's shortened to one sentence:
    The end is nigh.

    Thanked by 1: ralf
  • jmgcaguicla Member
    edited July 2022

    @emg said:
    I imagine how much more difficult it is to write good reliable POST code (power-on self test) for the newest systems with the newest processors.

    Writing test code for all of the caching, cache flushing, speculative execution, and other obscure processor functionality must be complex. Keep in mind that no test can depend on functionality that has not been tested yet. You cannot fully rely on instruction timing either, because that can vary too.

    @emg said:
    The POST code that I am familiar with is in the startup firmware - the first instructions the CPU executes at startup. It happens before OS and compilers, but outside the internal microcode of the CPU itself.

    Microcode updates can mitigate certain (non-architectural flaw) vulnerabilities so it doesn't make sense to test for those before boot.

    POST failures are supposed to be "halt and catch fire" type of errors in which you're not allowed to proceed until you address the issue; imagine a BIOS update bricking your machine just because they added a check for a new CPU vulnerability.

  • jsg Member, Resident Benchmarker
    edited July 2022

    (a) Is this thread about Retbleed, or about attacking and/or ridiculing "ELI5", and/or about explaining (almost certainly in vain, I guess) Retbleed along with "computers 101 - computers are miraculous things" to "ELI5" people?
    Note that I bring this up for a reason: I don't see why the same attitude and intellectual discipline problems wouldn't exist inside intel and AMD too ... with ugly consequences.

    (b) Probably the most important question in the minds of many here: "so, are we doomed no matter the processor?"
    Short - and superficial - answer: no, don't worry, intel Alder Lake and AMD Zen3 processors are OK.

    (c) the real answer: Yes, we are doomed, absolutely so - but we are culprits and guilty too.
    Explanation:
    Technical factors, let alone safety, are not decisive. What's really decisive is the fact that processors aren't designed in a vacuum; they are designed by companies, and the two factors that really dictate everything are (a) shareholder interests and (b) the customer herds.
    The former simply tick like "I want high dividends" and hence "[intel | AMD] are dividend generators", which is closely related to the latter, 99% of whom tick like "I want performance, performance, performance! Plus some 'cool' stuff". That dictates how the cogs inside intel and AMD turn.
    So, don't hold your breath waiting for safety really becoming a major factor.

    And yes, that's also visible in both the research and the Linux developer discussions. Let me provide a quote as an example: "f_ck [don't care about] Skylake". Why? Because mitigation is too expensive (in terms of performance) on that processor. Which btw also nicely shows another attitude that is very much in the interest of the intel and AMD shareholders: keep the cycle short, have the customer herd upgrade to the most current processor frequently!

    Another very major problem that also shines through in the OS developer discussions is that the whole game is based on (among other things) a pair of false premises, plus ignorance and the shifting of responsibility/guilt from one side to the other. The Linux (and other) kernel developers largely rely on the processor manufacturers not (grossly) lying and designing like responsible engineers, while the processor people rely on the OS developers "cooperating" and acting like team players.

    Unfortunately both are wrong and both premises are false. Engineers at intel and AMD (as well as at the Arm designers and manufacturers) act as exactly that: engineers working at, and directed to an unhealthily large degree by, corporate interests.
    Ergo you'll find e.g. utterly ridiculous and misplaced neural network BS in modern processors instead of the designers (being allowed a budget for) finally cleaning up and repairing some of the cruft and outright dangerous sh_t. Why? Simple: because intel and AMD (and others) are NOT white techno-knights; they are corporations who must generate profit and nice dividends, and to achieve that they must deliver what is (or more often seems to be) what, pardon me, utterly clueless and PR-massaged customer herds ask for; in other words, performance and cool gadgets.

    And that's why we are doomed. We might (in another universe) be able to change the way corporations tick but we will certainly not be able to change how we (well, 99% of us) tick.

    (d) a few practical considerations wrt LET
    "low end" pretty much always translates to "yester-year" processors, plus, to make it worse, it usually also translates to not only shared (hardware) cores but also even to shared hyperthreads, or in other words, to the worst possible scenario regarding the current kind of vulnerabilities. Sorry for the indeed bad news.

    But not all is lost. With AMD, stay away from Zen 1 (and 1+) processors. Zen 2 (unlike Zen 3) is vulnerable, but if the node OS and/or hypervisor chose a smart mitigation scheme (there are diverse options) the performance penalty stays in a bearable 10% range. And with intel, stay away from Skylake processors. Post-Skylake processors (unlike Alder Lake) are vulnerable, but if the node OS and/or hypervisor chose a smart mitigation scheme (there are diverse options) the performance penalty stays in a bearable range.

    And yes, unfortunately that means that you must ask providers which processor the node uses (sorry, providers but something like "Qemu ... processor" doesn't cut it anymore) and what OS and/or hypervisor and what version is running on the node. Of course you also should choose the OS you use in your VM wisely.

    Maybe some LET user will be kind enough to create and publish a table showing which linux distro (and version) uses which mitigation scheme/kernel config. Please note that, again, those can and should be different for different processors (intel vs AMD) and different generations.

    Edit, correction: Alder Lake is vulnerable too, but (like Coffee Lake) only partially.
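
    (For what it's worth, the kernel itself reports what it thinks the mitigation status is. Below is a minimal sketch that just dumps the sysfs vulnerability entries on a Linux box; the directory exists on reasonably recent kernels, and a "retbleed" entry only shows up on kernels that know about the issue.)

    ```c
    #include <dirent.h>
    #include <stdio.h>

    int main(void)
    {
        const char *dir = "/sys/devices/system/cpu/vulnerabilities";
        DIR *d = opendir(dir);
        if (!d) {
            perror(dir);
            return 1;
        }
        struct dirent *e;
        char path[512], line[256];
        while ((e = readdir(d)) != NULL) {
            if (e->d_name[0] == '.')
                continue;                             /* skip "." and ".." */
            snprintf(path, sizeof path, "%s/%s", dir, e->d_name);
            FILE *f = fopen(path, "r");
            if (!f)
                continue;
            if (fgets(line, sizeof line, f))
                printf("%-20s %s", e->d_name, line);  /* e.g. "retbleed  Mitigation: ..." */
            fclose(f);
        }
        closedir(d);
        return 0;
    }
    ```

    `cat /sys/devices/system/cpu/vulnerabilities/*` does the same thing, of course; the point is just that this is where to look on a given node.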

    Thanked by 1: bikegremlin
  • now we're talking

    [image]

  • It's all speculation, really.

    Thanked by 1: ralf
  • emg Veteran

    @jmgcaguicla said:

    Microcode updates can mitigate certain (non-architectural flaw) vulnerabilities so it doesn't make sense to test for those before boot.

    POST failures are supposed to be "halt and catch fire" type of errors in which you're not allowed to proceed until you address the issue; imagine a BIOS update bricking your machine just because they added a check for a new CPU vulnerability.

    True, but you would not want the computer to boot if a critical component on the motherboard is bad. What if the RAM or disk controller were bad and flipping random bits, for example?

    My experience comes from the days when a "microcode update" involved manufacturing a new revision of the processor. I will not speculate on testing when the microcode can be overwritten like a common firmware update.

    Yes, a failed test during POST caused the system to halt. That was the expected and desired result. There was always a primitive way to export an error code to humans, which usually worked, unless the failure affected it. In practice in the field, a POST failure meant replacing the affected unit without additional analysis.

    The purpose of the POST test is to ensure that the system can be trusted and it is in a known operational state. A POST test would not look for a CPU vulnerability that is inherent in the CPU design, but I think you knew that. Note that the systems I am referring to were not typical computers that you find on desktops or in racks.

    An Example Where POST Helps:

    I have been burned more than once by subtle RAM failures that might have been detected by a robust POST. Those various RAM failures resulted in data losses and other "bad" outcomes.

    The side effects of the POST code were helpful in other ways, too. After the POST completed, it left behind an unusual pattern in RAM memory, not all "00" or "FF". I chose the specific pattern as a design feature. That pattern made it easy to tell "virgin RAM" from other data in memory for analysis and debugging. You might find the pattern where you expected data, or see data where you expected virgin RAM ... and know that your code was not behaving properly. I will never forget the time when someone was working on a text to speech feature and the code started reading virgin RAM "data" as if it were ASCII text. Suddenly a computer voice started speaking the RAM pattern aloud, over and over and over. Everyone in the room burst into laughter - they all instantly recognized what the bug was doing.
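
    (The pattern trick described above is easy to picture with a small sketch; the pattern value and function names here are made up, not the ones from that project.)

    ```c
    #include <stdint.h>
    #include <stddef.h>

    #define POST_PATTERN 0xA5A5A5A5u   /* distinctive in a hex dump, not 00 or FF */

    /* What the POST would leave behind in RAM it has tested. */
    void post_fill(uint32_t *region, size_t words)
    {
        for (size_t i = 0; i < words; i++)
            region[i] = POST_PATTERN;
    }

    /* Later, while debugging: anything still equal to the pattern has never been
     * written since startup ("virgin RAM"); anything else has been touched. */
    size_t count_untouched(const uint32_t *region, size_t words)
    {
        size_t untouched = 0;
        for (size_t i = 0; i < words; i++)
            if (region[i] == POST_PATTERN)
                untouched++;
        return untouched;
    }
    ```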

  • @jmgcaguicla said:
    now we're talking

    [image]

    That got two separate belly laughs from me and some spilled Pepsi. I gave up on the previous post even before realizing how long it was.

  • TimboJones Member
    edited July 2022

    @emg said:

    @jmgcaguicla said:

    Microcode updates can mitigate certain (non-architectural flaw) vulnerabilities so it doesn't make sense to test for those before boot.

    POST failures are supposed to be "halt and catch fire" type of errors in which you're not allowed to proceed until you address the issue; imagine a BIOS update bricking your machine just because they added a check for a new CPU vulnerability.

    True, but you would not want the computer to boot if a critical component on the motherboard is bad. What if the RAM or disk controller were bad and flipping random bits, for example?

    Sounds like you're saying, "The power on self test is just an electrical test".

    My experience comes from the days when a "microcode update" involved manufacturing a new revision of the processor. I will not speculate on testing when the microcode can be overwritten like a common firmware update.

    Yes, a failed test during POST caused the system to halt. That was the expected and desired result. There was always a primitive way to export an error code to humans, which usually worked, unless the failure affected it. In practice in the field, a POST failure meant replacing the affected unit without additional analysis.

    The purpose of the POST test is to ensure that the system can be trusted and it is in a known operational state. A POST test would not look for a CPU vulnerability that is inherent in the CPU design, but I think you knew that. Note that the systems I am referring to were not typical computers that you find on desktops or in racks.

    You've gone 180 degrees from your previous post and now seem to understand.

    An Example Where POST Helps:

    I have been burned more than once by subtle RAM failures that might have been detected by a robust POST. Those various RAM failures resulted in data losses and other "bad" outcomes.

    There's no comprehensive testing in a POST test. It's a basic electrical check. You'd only find non-obvious RAM problems with a longer patterned test, done in a BIST.

    The side effects of the POST code were helpful in other ways, too. After the POST completed, it left behind an unusual pattern in RAM memory, not all "00" or "FF". I chose the specific pattern as a design feature. That pattern made it easy to tell "virgin RAM" from other data in memory for analysis and debugging. You might find the pattern where you expected data, or see data where you expected virgin RAM ... and know that your code was not behaving properly. I will never forget the time when someone was working on a text to speech feature and the code started reading virgin RAM "data" as if it were ASCII text. Suddenly a computer voice started speaking the RAM pattern aloud, over and over and over. Everyone in the room burst into laughter - they all instantly recognized what the bug was doing.

    Filed under "stuff that never happened". "Virgin RAM"? Oh boy.

  • jsg Member, Resident Benchmarker

    @jmgcaguicla said:
    POST failures are supposed to be "halt and catch fire" type of errors in which you're not allowed to proceed until you address the issue ...

    Not really; the "not allowed to proceed" part is just what normal users see. Actually it's about not being able, at all or at least not reliably, to continue on the assumption that the hardware is OK and working. If, for example, some I2C bus hardware is not within spec (or broken), running the BIOS makes no sense and would almost certainly be doomed to fail anyway.
    A nice way to get it: why do electronics technicians pretty much always check the power rails first, even in, say, a complex spectrum analyzer? Simple, it's because flaky/not working/out-of-spec power rails are more often than not the cause of problems, and often also the initial cause of error scenarios that have grown large and complex. Often replacing one or a few lowly capacitors "miraculously" brings, say, your complex FPGA back to life again.

  • jsg Member, Resident Benchmarker

    @TimboJones said:

    @emg said:
    I imagine how much more difficult it is to write good reliable POST code (power-on self test) for the newest systems with the newest processors.

    What? No. The power on self test is just an electrical test

    Not really but you're not completely off.

    It's more correct to say it can have a BIOS impact, but the problem would need to be addressed in CPU microcode, the OS and compilers.

    I'm shocked. That's actually more or less right.

    @TimboJones said:
    There's no comprehensive testing in a POST test. It's a basic electrical check. You'd only find non-obvious RAM problems with a longer patterned test, done in a BIST.

    Finally back to normal, spreading BS.
    POST is simply a particular form of BIST and happens to be well known (unlike many, many other forms of BIST).

  • emg Veteran

    @TimboJones said: Filed under "stuff that never happened". "Virgin RAM"? Oh boy.

    What do you mean by that? What are you accusing me of? Please be very specific about what you say "never happened". That is a personal attack. Of course it happened. I was there.

    -> What basis do you have for calling me a liar?
    I have been honest and forthright in every post I have written on LowEndTalk and I stand by them. I do my best to separate facts from opinions. When I make an honest mistake, I admit it, explain the error, and apologize.

    -> When you drop empty accusations in public like that, all you do is expose your own shortcomings.

    We did not have a proper term for it, so "virgin RAM" was my invented term to describe RAM that had not been overwritten since the POST ran at system startup. I never had to define the term for anyone. My coworkers all intuitively understood what the term meant, and it stuck. The first time I used it in my post above, I put the words inside quotation marks to highlight the fact that it was a non-standard, invented term. I may not have described it well enough for TimboJones to understand, or perhaps TimboJones is too inexperienced in this area to figure out what the term meant from my description.

    Repeating what I described:
    The POST code left a distinct data pattern in the RAM when it completed, rather than zero'ing it out. We found it far more useful than you might imagine - it helped with all kinds of development activities, especially debugging. After we all noticed its usefulness, I chose a better data pattern, which I deliberately designed to help developers and testers (and myself) in a variety of ways.

    The incident with the text-to-speech feature happened as I described. There was an errant pointer, and the speech synthesizer started reading the special data pattern in "virgin RAM". Everyone in the large room heard the speaker as the system "read" the pattern aloud. We all understood immediately what had happened and the whole room busted up. There were at least 20 people working at their individual open frame "labs" in that open room at the time.

    I am not going to debate terminology over whether "POST" is an "electrical test". I described POST as I understood the term at the time I wrote the code and helped build the systems. The many engineers who worked on the same projects all used the term POST as I described it. In my opinion, it went far beyond a mere "electrical test" - it verified the operational readiness of the systems from a functional standpoint.

    I wonder how many unique computer systems TimboJones has designed, built, and deployed? How many of them are still in use? How many lines of POST firmware has TimboJones written, debugged (... on the target hardware?), tested, and deployed? ... in machine code? in assembly? in C? Note: Student and hobby projects do not count. Buying ready-made commercial hardware and writing POST firmware for it does not count.

    I leave it for others to judge who is a liar and who is an asshole.

    Thanked by 1: ralf