Bootloading operating systems, some opening bits and the current state in personal computing

December 31, 2019 by Lucian Mogosanu

Since TMSR-OS is still in its first stages of conception -- Mircea Popescu's piece lays the ground for "what uses for a computer are there, anyway"; meanwhile, Dorion does a first breakdown of the system's components and, among others, states its mission and some of the reasons why all of this is needed and useful; last but not least, Trinque analyzes in painstaking detail the feasibility of owning an operating system -- so as I was saying, since TMSR-OS is still at its very beginning, and since I've committed to look into the booting aspect of this, right about now looks like a very good time to explore the fundamentals, and from there proceed to figure out what the fuck do. And it all starts with the moment when, given a piece of iron, the machine is swiftly brought from the stage of simply being powered up to the state where a so-called "fully-fledged" operating system is running and ready to be used.

It's perhaps -- at least I happen to think it's worth noting that the first numerical1 computers didn't come with any sort of bootloaders, nor BIOSen nor operating systems installed, nor any of the other elaborately sophisticated pieces of software employed in commodity2 computing nowadays. In fact all they had was a place where you would stick a roll of punched tape, which encoded a program that the computer would read and execute. Then the same computer would output the result on a piece of paper, or, if you were lucky enough, on a screen, and then goto 10.

Nowadays the personal computer is somewhat of a more complicated beast: PCs come with a ROM chip pre-loaded with a program, literally a "Basic Input/Output System", i.e. a BIOS, which is the very first thing that runs when the central processor is up. Back in the day, that ROM memory was relatively small, in any case not big enough to hold an operating system, which (traditionally) makes the BIOS no more than a bootstrapping mechanism3, checking basic I/O functionality and then loading the operating system.

This then begs the question of how exactly to "load the operating system": let's say that by "operating system" we in this case mean a program which gets the actual fully-fledged system to execute; and that this program resides somewhere on a storage medium, e.g. a magnetic disk; and that the BIOS can work within a protocol whereby the program can be found at a certain address, within certain limits, such that it can be loaded in memory and executed. This protocol is part of the ad-hoc "IBM PC compatible" wannabe-standard, and it states that the first sector4 of any bootable disk should contain a bunch of metadata, describing e.g. how the disk is partitioned, and some code that's to be executed when the BIOS has finished its share of the boot sequence; this sector is called a Master Boot Record, i.e. MBR.

The problem is that the size of a sector is traditionally -- as established in the same wannabe-standard -- only 512 octets, which once again won't fit the entire operating system code. Instead, a so-called "bootstrap code" will be used for some minimal initialization and getting the operating system into execution, this code being what everyone nowadays calls a "bootloader" program5.
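To make the layout concrete, here's a minimal sketch (Python, purely illustrative) of how the 512 octets of an MBR sector break down: 446 octets of bootstrap code, four 16-octet partition records, and the 0xAA55 signature the BIOS checks before handing control to the bootstrap code. The toy disk built below is made up for the example.

```python
import struct

SECTOR_SIZE = 512
BOOTSTRAP_SIZE = 446  # the "bootstrap code area", i.e. the stage-1 bootloader

def parse_mbr(sector: bytes):
    """Split a 512-octet MBR sector into its classic components."""
    assert len(sector) == SECTOR_SIZE
    bootstrap = sector[:BOOTSTRAP_SIZE]
    entries = []
    for i in range(4):  # four 16-octet partition records
        off = BOOTSTRAP_SIZE + i * 16
        # byte 0: boot flag; byte 4: partition type;
        # bytes 8-11: start LBA; bytes 12-15: size in sectors (CHS fields skipped)
        boot_flag, ptype, start_lba, num_sectors = struct.unpack_from(
            "<B3xB3xII", sector, off)
        entries.append({"bootable": boot_flag == 0x80,
                        "type": ptype,
                        "start_lba": start_lba,
                        "sectors": num_sectors})
    signature = struct.unpack_from("<H", sector, 510)[0]
    return bootstrap, entries, signature == 0xAA55

# Toy MBR: one bootable Linux-type (0x83) partition starting at LBA 2048.
mbr = bytearray(SECTOR_SIZE)
struct.pack_into("<B3xB3xII", mbr, 446, 0x80, 0x83, 2048, 204800)
struct.pack_into("<H", mbr, 510, 0xAA55)

bootstrap, parts, valid = parse_mbr(bytes(mbr))
```

Note the 4-octet LBA fields in each record; footnote 10 below explains what that implies for disk sizes.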

So, in short: the BIOS is the first program that runs in personal computers, which loads into memory and executes a bootloader, according to some set of rules; then the bootloader, according to its own configuration, loads into memory and executes the operating system program, which does some bootstrapping of its own, resulting in some setup that's available to interface with the user. Until the beginning of the 2010s, the spec for all this was the "PC AT" standard, which, however poorly specified, worked successfully for two decades. Then the Microsoft/Intel racket decided that they needed more control over what people do with their PCs, so they "consensus"ed a new spec which they call the "Unified Extensible Firmware Interface", i.e. UEFI, which, they said, was due to replace the old PC-BIOS way of bootloading operating systems -- I won't get into all the gory details of this story, but the "way of bootloading OSen" here is directly related to a couple of political goals. For one, the bootloader isn't supposed to be able to load operating system programs unless said programs have been somehow stamped "approved" by the racket in question. For the other, the user and his software are supposed to know less (than they did before) about the internals of the used-to-be-a-PC they "own", which means that driver code that was supposed to be in the bootloader now comes as an opaque mechanism as part of the BIOS and is exposed through a "unified interface".

This entire UEFI specification comes in the form of a 2500-page "limbă de lemn" (wooden-tongue, i.e. bureaucratic cant) document (source) that I'm supposed to systematically review. Let's start with a very non-systematically selected quote:

The "PC-AT" boot environment presents significant challenges to innovation within the industry. Each new platform capability or hardware innovation requires firmware developers to craft increasingly complex solutions, and often requires OS developers to make changes to their boot code before customers can benefit from the innovation. This can be a time-consuming process requiring a significant investment of resources.

or, say, this pile of lulz:

Through the use of abstract interfaces, the specification allows the OS loader to be constructed with far less knowledge [as opposed to the legacy approach] of the platform and firmware that underlie those interfaces.

Really, I'm not sure how I'm supposed to go through all this and not go fucking nuts, so instead I'm going to start by giving an overview of the document's overall structure: the introductory chapter, which presents the proposed interface and booting scheme, is, ironically, the only chapter that discusses anything remotely of interest here; the intro is followed by a. a description of the system architecture, in the second chapter; and b. a description of each component slash protocol, in each of the remaining chapters. The only interesting piece in this second part of the document might be the chapters related to the "Secure Boot" abomination, but for lack of space I won't go in depth into that subject either -- at least not yet.

On the very surface this UEFI thing seems mostly benign, since whatever BIOS is running, whether UEFI or classical, is going to require a few drivers6 in order to access peripherals and load stuff off them in order to get the bootloader running. That's all good, except that when you peel off the surface layer, all sorts of perversions start to pop up: the BIOS is now a "UEFI firmware"/"platform"/"boot manager" which is capable of loading "UEFI applications", a subset of which are so-called "OS loaders", i.e. bootloaders. In other words, UEFI does not consider the bootloader to be a program running on its own power; no, instead the bootloader is just another application dependent on BIOS services. And what's more, who knows what other vendor-specific policies this "unified interface" might impose, e.g. maybe the OS loader needs to be signed using Microsoft's keys, or else. I don't see anything else to add here other than the plain statement that this UEFI shit is very, very harmful. Because you see, this kind of layering isn't a matter of removing a few degrees of freedom, but one of turning the entire machine into no more than a useless toy.

Given this situation, the first line of attack involves disabling anything remotely related to UEFI in one's BIOS and replacing the damned thing with a self-built image where possible7. Unsurprisingly, the UEFI spec does not come with any discussion of replacing the firmware with a self-built thing; moreover, it does not come with a discussion of how to disable "secure boot"; nor of how to disable UEFI boot services or anything else; all it discusses is a PC AT "legacy mode", which discussion is entirely limited to the following couple of paragraphs (in wooden-tongue):

The UEFI specification represents the preferred means for a shrink-wrap OS and firmware to communicate during the boot process. However, choosing to make a platform that complies with this specification in no way precludes a platform from also supporting existing legacy OS binaries that have no knowledge of the UEFI specification.

The UEFI specification does not restrict a platform designer who chooses to support both the UEFI specification and a more traditional "PC-AT" boot infrastructure. If such a legacy infrastructure is to be implemented, it should be developed in accordance with existing industry practice that is defined outside the scope of this specification. The choice of legacy operating systems that are supported on any given platform is left to the manufacturer of that platform.

and there are a couple more places in the document where this "legacy" thing is mentioned, but nothing remotely interesting or useful, so I'll spare the reader. Long story short: yes, the board vendor may implement some sort of "legacy mode" for compatibility reasons; yes, UEFI "is preferred", although we're not told by whom; but no, this compatibility is in no way mandatory, nor (from my reading of the thing) recommended.

This brings us directly to:

mircea_popescu: in any case, my point being, this whole uefi thing needs some serious mapping.
mircea_popescu: there;s at least one major separation in the uefi latrine (plenty others, of course, the thing's fractally broken), which fortunately occured just about that moment in time when intel chips became thoroughly useless. so not supporting uefi-2, "must have to work" is relatively a small loss, as it goes with the shitty spycore intel chips anyways, which nobody wants (though might be tolerated in some roles for cheapness' sake). uefi-1 however is just this jumble of works-with-or-without, maybe-so-maybe-not, up to indeed about 2015 or so.
mircea_popescu: getting concrete details on this partition would certainly help, as a starting point.
mircea_popescu: i don't really know anyone who both a) is technically literate and b) thinks post 2015 intel chips are actually worth money, as it happens. a situation eerily reminiscent of every other socialism's progress, late sovok folk similarily didn't think late sovok artefacts worth deploying.

My own experience with various machines confirms that this partition exists in the wild; however, it turns out that the UEFI-1/2 split is drawn along completely blurred, ad-hoc lines8: for one, there's post-2015 hardware which under certain circumstances can do legacy mode; for the other, there's pre-2015 hardware that has big trouble getting old-style MBR disks to boot. The partition is drawn mainly along the lines of what disk and/or I/O controller drivers are implemented in the legacy BIOS versus the UEFI one; for example, some 2014-era Asus laptops can boot CDs/DVDs in legacy mode -- or what they call "Compatibility Support Module" -- but not USB sticks. Similarly, it should be possible to boot an MBR disk after enabling the board's AHCI controller, but not the RAID one -- overall, your mileage will vary based on whatever retardation the vendor has decided to bake into your motherboard.

So then, what would be the right and proper way to determine whether UEFI boot can be disabled on a given motherboard, ideally before buying it? Sadly I don't see any clear recipe other than digging through web forums and learning from other people's problems with the things9. There's also various "Linux on laptop" databases that might prove useful, though I'd be very cautious when it comes to basing my decisions on such folklore. It would seem that the only reliable rule of thumb here is that older hardware -- say, 2012-era and earlier -- is more likely to boot using the MBR scheme, and the oldest of the lot probably doesn't even come with any sort of UEFI at all.
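For the iron already on one's desk, at least, checking how it was booted is trivial: a Linux kernel exposes the /sys/firmware/efi directory only when it was started through UEFI, so its absence on a running system means legacy/MBR boot. A small sketch (the sysfs_root parameter is merely there to make the thing testable):

```python
import os

def boot_mode(sysfs_root: str = "/sys") -> str:
    """Report how the running Linux kernel was booted.

    The kernel creates <sysfs_root>/firmware/efi only when it was
    started via UEFI; otherwise this was a legacy/MBR boot.
    """
    efi_dir = os.path.join(sysfs_root, "firmware", "efi")
    return "UEFI" if os.path.isdir(efi_dir) else "legacy BIOS"

print(boot_mode())
```

Of course, this only tells you how the current boot went, not whether the board can be talked out of UEFI altogether.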

Then there's the following: while all the UEFI "boot manager" crap is anti-computing, the GUID Partition Table, i.e. GPT scheme it describes is actually useful. MBR-based partitioning is limited to disks of about 2TB10 in size, so if you want to boot off, say, a 4TB disk, it's very possible that you might get some11 amount of pain in the ass trying to get GPT boot working in legacy mode.

At this point, the question "what do?" keeps popping up in my mind and this last paragraph is where I'd usually add a meaningful conclusion and lay down some grounding for the future. Well, I haven't said anything about ARM bootloaders for example, but last I looked the bootloader situation was even more of a mess there, given that every system-on-chip can and very often does define its own bootloading scheme, usually with added "TrustZone" interaction and other crap. In any case, I guess a starting point would be to select one or two boards that are both available and supported by Coreboot, genesize all the software required to get the operating system loaded, then let users ask before buying their hardware12 and spawn new V branches if they boot on something other than the two existing boards. But more generally, I believe most of the stuff here requires some more discussion, so... the comment box awaits you!


  1. As opposed to analog computers, which can be used to compute various mathematical functions through simple electronic circuits, mostly based on various configurations of operational amplifiers in feedback loops. The major advantage of analog computers over their discrete, i.e. digital counterparts is that they're not susceptible to the usual numerical instability that plagues floating-point operations, since there's no such thing as "floating point" there -- the value of a function is determined through measurement, so precision loss usually comes either from defects in the electronic components or from bad measurement equipment.

    Otherwise mathematical functions such as the integral can be very easily -- that is, without all that fuss over "numerical methods" -- computed using such equipment, yet your home computer comes without access to such wonders of technology. Why do you think that's the case? 

  2. That is, based on hardware readily available on the market, as opposed to custom baked iron.

    The only value of the multi-headed hellspawn that is today's "personal computer" architecture, as it emerged from the IBM PC standard introduced circa four decades ago, is that it's readily available. There isn't really any standard in use anymore, given on one hand the lot of motherboards and peripherals with closed, proprietary, patented interfaces, and on the other the gap between Intel-based and AMD-based products. The slots are there though, and you can connect various stuff to them and it sort of works except when it doesn't... and that somehow passes for acceptable nowadays, what the hell else can you do. The 2010s are all but gone and building one's own hardware still requires an insane amount of resources, much despite idiots' pretenses of "progress" in the field. 

  3. Technically speaking though, the BIOS also provides a set of "functions" or "services" -- a set of mechanisms which the bootloader and/or operating system can call via software-generated interrupts. Which actually makes the BIOS a very basic operating system of sorts. 

  4. The sector being the smallest addressable unit of a disk. 

  5. Of course, things are a wee bit more complicated than that: since the bootstrap code area is so small, i.e. 446 octets, the bootloader will have to bootstrap itself by loading a "stage 2" program that comes with e.g. filesystem drivers, which makes it capable of reading the operating system image from a file residing on one of the partitions and so on. Then what resides in the MBR is called the "stage 1" bootloader. 

  6. Now, why some people think the BIOS or the bootloader need a networking stack implementation is beyond me. I've never booted any PC off the network, and I've booted quite a few... but you know, feel free to leave a comment if you think I've missed something. For that matter, by now BIOS software is big and complex enough that it contains an entire operating system, so why not move the actual OS there, since we're here? 

  7. The Coreboot project currently supports some boards. At the time of writing, the reader can clearly notice that there are a lot more items marked red than green; understandably, since newer generations of PC hardware come with more ways to make it harder for honest folks to run their own software on the damned iron, like it's some special snowflake or something. 

  8. Yes, I naively expected the UEFI spec to yield some actual information on what's happening with this and where it's going. Shame on me. 

  9. Does the absence of WWW-chatter on some item indicate that there's no trouble there, though? It could be that the item in question is just not that popular, so if you buy some godforsaken motherboard manufactured in international waters, you're probably on your own. 

  10. The start Logical Block Address, i.e. LBA, and the size of a partition (in number of sectors) each occupy 4 bytes of the partition record, so the maximum addressable sector count is 2 to the power of 32; thus 2^32 sectors times 2^9 bytes per sector totals about 2TB of storage space addressable using the MBR scheme. 
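    The arithmetic, spelled out:

```python
SECTOR = 2 ** 9        # 512 octets per sector, per the PC wannabe-standard
MAX_SECTORS = 2 ** 32  # ceiling of a 4-octet LBA/size field

mbr_limit = MAX_SECTORS * SECTOR  # octets addressable via MBR partitioning
assert mbr_limit == 2 ** 41       # i.e. 2 TiB
```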

  11. I wish I could say the exact amount, if only the number of potentially-hidden variables in this game weren't so fucking high.

    Let's elaborate a bit, though. The GPT scheme relies on "extending" the MBR with additional metadata, as follows: only the first MBR partition record is used (the so-called "protective MBR", with partition type 0xEE), the "starting LBA" field of said record being set to 1, which points to the primary GPT header. The GPT header, similarly to MBR, contains all the metadata required to describe disk partitions, except the size/address fields are 8 bytes long and the partition table can be configured so as to specify an arbitrary number of partitions -- which comes as a "layers of meta upon meta" abomination that's supposed to replace its logical/extended partition grandfather. Additionally, the UEFI spec mandates the existence of a FAT32 "EFI system partition", which is supposed to be the first actual partition in the whole scheme, where among others the bootloader(s) reside.

    Thus, the bootloader could in principle be loaded in legacy mode, i.e. as part of the "bootstrap code area" in the MBR sector, then it would detect that the disk is partitioned using GPT and boot the operating system as usual. Note that this in itself requires no such thing as a "system partition", the whole thing is easily decoupleable from any UEFI baggage. Moreover, it's probably the case that bootloaders such as GRUB already support the booting scheme described here, which would come as a boon for e.g. monster server systems with multi-terabyte disks. 
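    For the curious, a sketch of the meta in question: the GPT header at the start of LBA 1 opens with the "EFI PART" signature and carries 8-octet LBA fields, which is exactly where the 2TB ceiling goes away. Field offsets are per the published spec; the toy header below describes a hypothetical 4TB disk.

```python
import struct

def parse_gpt_header(lba1: bytes) -> dict:
    """Pull the interesting fields out of a GPT header (start of LBA 1)."""
    sig = struct.unpack_from("<8s", lba1, 0)[0]
    # Offsets 24..55: current LBA, backup LBA, first/last usable LBA --
    # all 8-octet fields, unlike the 4-octet ones in MBR partition records.
    current, backup, first_usable, last_usable = struct.unpack_from("<4Q", lba1, 24)
    return {"valid": sig == b"EFI PART",
            "backup_lba": backup,
            "first_usable": first_usable,
            "last_usable": last_usable}

# Toy header for a 4TB disk, i.e. 7814037168 sectors of 512 octets.
hdr = bytearray(92)
struct.pack_into("<8s", hdr, 0, b"EFI PART")
struct.pack_into("<4Q", hdr, 24, 1, 7814037167, 34, 7814037134)

info = parse_gpt_header(bytes(hdr))
```

    Note last_usable sitting well above 2^32, i.e. out of reach of any MBR partition record.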

  12. This might perhaps give some incentive to stock up on TMSR-OS supported hardware and sell it; but I may just be naive. 

Filed under: computing.

10 Responses to “Bootloading operating systems, some opening bits and the current state in personal computing”

  1. #1:
    Mircea Popescu says:

    Keks, send man to examine artefact, watch man come back horrified.

    > ideally before buying it.

    I really don't give shit one about buying a buncha onsies mobos, to supplement whatever l1/2 can report of their own hardware. There's really no rule we must cover "all" nor any need to attempt. Getting a dozen or two alternatives listed is arch-sufficient. Let the rest fucking wither ; and let them learn the miner lesson in the process.

    > I've never booted any PC off the network,

    I have, actually. Not recently, but for a while back there in the pre-pentium era it was the ~only way to handle a fleet.

  2. #2:
    Robinson Dorion says:

    I really enjoyed this; while I had a decent grasp, really helps to break down the details. Apologies for the delayed comment, I'm getting back to normal this week and cutting down on latency moving forward.

    the bootloader will have to bootstrap itself by loading a "stage 2" program

    Is this "stage 2" program "init" ?

    The "PC-AT" boot environment presents significant challenges to innovation within the industry.

    They "just want" s/innovation/subversion/.

    I guess a starting point would be to select one or two boards that are both available and supported by Coreboot, genesize all the software required to get the operating system loaded

    This does look like a good starting point. Start small, build out incrementally, let late comers either scramble of hardware scraps already supported, buy from surplus l1/2 accumulates or invest in extending the support for boards not yet V-ified themselves.

    Jacob worked out the process of flashing the x200 thinkpads with Coreboot using chip clips of offline flashing, though those aren't widely available - for sure a scrap market.

    That being said, I've allowed the all Intel is shit discussion to slide down priority, probably a good time to revisit prior to selecting the worthy chips to start with.

  3. #3:
    spyked says:

    @Mircea Popescu: AFAIK right now we only have APU1 and the IBM x60 board, and sadly the latter is 32-bit only. I have a few more that I could list as "doesn't require EFI", though much of the hardware I have is Intel-based, which I expect will cause some hives among the L1.

    Guess I should get a list/table with known-to-work items started soon.

    @Robinson Dorion: I'm happy you enjoyed reading the article!

    > Apologies for the delayed comment

    I'm off to a slow start this year myself, so I can't really complain.

    > Is this "stage 2" program "init" ?

    Not even close. The bootloader stage2 is usually found somewhere in /boot, e.g. for Grub it's /boot/grub/core.img and kernel.img -- a decent description of these images can be found here. Not only that, but Grub can dynamically load modules for e.g. filesystems, so that it can look up the Linux kernel image, usually found in /boot/vmlinuz-something. Only after the kernel is loaded, the hardware is probed and configured and the root filesystem is mounted, only then will the kernel launch "init" into execution. All in all a long road, paved by a history of ad-hoc (as opposed to principled, I guess) developments in both hardware and software.

    > Start small, build out incrementally, let late comers either scramble of hardware scraps already supported, buy from surplus l1/2 accumulates or invest in extending the support for boards not yet V-ified themselves.

    This sounds reasonable IMHO and resonates with footnote 12 above. I believe there's a market there for x86 (64-bit) vintage without EFI, weird BIOSen and so on. For example I've been finding it hard to get ahold of an APU1 and getting it from an in-WoT supplier would be a great win IMHO.

    > all Intel is shit

    Intel started infecting their CPUs with crapware, e.g. Management Engine, as soon as they switched from Core Duo to Core i3/5/7. I don't know about Nehalems, but Sandy Bridges (the CPUs on which e.g. x220 Thinkpads run) are clearly infected. Still, TMSR-OS would boot on those by virtue of the BIOS not being (completely) stolen by UEFI yet, and perhaps there's lower-criticality applications (gaming, writing) where they'd work.

  4. #4:
    Robinson Dorion says:

    Not even close.

    Trinque's latest hit further cemented how not even close.

    Re APU1, pcengines says it's in production, how many of those named distributors have you looked into so far ? I'm open to splitting the list with you to accelerate the hunt.

    http://logs.ossasepia.com/log/trilema/2019-12-09#1954653

    Intel started infecting their CPUs with crapware, e.g. Management Engine, as soon as they switched from Core Duo to Core i3/5/7

    That is my understanding as well. The x200 that I'm most familiar with use Core 2 Duo Penryn.

  5. #5:
    spyked says:

    > how many of those named distributors have you looked into so far ?

    I haven't looked systematically recently, will definitely have to give it a (documented, this time) go. Last time I looked circa four months ago, Varia were the only ones selling APU1 units, most (all?) others only had APU2 and above.

    Anyway, I'd prefer not dealing with Swiss import/export taxes and the likes if possible, so I'll look deeper at this in February.

    > I'm open to splitting the list with you to accelerate the hunt.

    Sure, that'd work, though I think I'd be able to do it reasonably fast (~1 week?) once I get to it. Notice that the list has many duplicates, so as a matter of fact there are only a few EU providers, another few in the US etc.

    > The x200 that I'm most familiar with use Core 2 Duo Penryn.

    I'll look into x200 as well, the Coreboot page seems to have quite a few details on support. I also have an x220 at home, AFAIK the Coreboot people have struggled for years to get the ME gangrene out. Perhaps worth taking a brief look at that as well.

  6. [...] as some of jfw's writings on the topic. My earlier concerns about UEFI were validated once I read spyked's recent article as well. So, in a sense I was able to get somewhat organized ahead of [...]

  7. [...] stepped up to the plate here and published his initial report on Bootlading Operating Systems December 31st, through which, the decision to support a dozen or two motherboards per architecture [...]

  8. [...] about December, having spent most of it fixing various personal issues and some time examining the sad state of personal computers, with a particular focus on the bootloading stage. Which concludes 2019, which looks like a great [...]

  9. [...] motherboards without EFI, but that's only a small part of the taint. Now do you even begin to grasp the knowledge needed to [...]

  10. [...] likely want to scrape the second-hand markets for items lacking the usual deadweights such as UEFI, ME, TPM and other baked-in modules used for imperial vetting. This approach is especially useful [...]
