From the fabulous world of fantasy consoles: PICO-8

February 11, 2022 by Lucian Mogosanu

The term "fantasy console" is a fancy-schmancy name for a virtual machine1 that emulates non-existing hardware (hence "fantasy"), for the most part used for gaming (hence "console"). Thus fantasy consoles are software platforms inspired by old systems such as CHIP-8, or commercial machines such as the Commodore 64 or ye olde ZX Spectrum, in that they imitate them partially from a functional perspective and as much as possible from a subcultural perspective.

In other words, some kids got old, or they otherwise dove into the minute design details of old gaming computers, which inspired them to somehow attempt to "improve" these consoles, or rather, as fashions tend to go in today's "progressive" world, to recreate them using more modern tooling. For example, while some of them decided to stick to BASIC2, others, PICO-8 included, use Lua as a programming environment. Either way, most of these fantasy consoles draw from their older iron counterparts in unexpected ways, such as the maximum supported resolution -- 128x128 pixels in the case of PICO-8 -- or the number of supported colours -- 16 in PICO-8's case. These limitations, believe it or not, are chosen almost entirely for subcultural reasons rather than from any design principle. Quoting from the official site:

The harsh limitations of PICO-8 are carefully chosen to be fun to work with, to encourage small but expressive designs, and to give cartridges made with PICO-8 their own particular look and feel.

This notion of a specific "look and feel" is reminiscent of (and perhaps an evolution of) the demoscene gang that specifically sought out restrictive environments in which to create their stuff. The fortunate side-effect of this steadily enforced minimalism, for me at least, is that most of the games launched on the platform lack all the sophistication of modern titles, and with it many of their problems, including those related to graphics, loading times or convoluted game mechanics -- actually, let's take a look at a few examples!

For instance, there's a Doom clone that "looks and feels" quite close to the original. There's also an adventure game demo that I found absolutely delicious in the ten minutes that it took to explore it. Furthermore there's a bunch of platformers (plural), roguelikes, a few racers, Breakout/Arkanoid clones and even a train simulator!

As for puzzle games, one of my favourites was Stuck in the Sewers, a rat-in-the-maze game which took about twenty minutes to finish, although I fear that might be too much, since they included a timer and a step counter to measure performance. Also, unexpectedly, I am now addicted to the puzzle game called Pieces of Cake, a "cooking game" which consists of combining elements ("ingredients") with a few simple properties in complex, interrelated ways in order to obtain a recipe -- more precisely, to reach a minimum goal with respect to one of the ingredients, which then leads to yet harder-to-reach goals. I have no idea whether this resonates with the reader, but I am now completely engrossed in these types of games; it's like the mid-nineties are suddenly back and kicking3.

For the brave souls who are interested in attempting to make games using PICO-8, the nets are full of tutorials -- this one looks like a pretty neat example, although this other one doesn't look too bad either. Overall the PICO-8 engine primitives look simple enough to be grasped in a few hours, and in any case, many of the games above were the result of so-called "game jams", i.e. hackathons lasting at most a day or two. If nothing else, to my eye, and on a first, second and third glance, this makes PICO-8 a great platform for prototyping.

Based on the list in footnote 2, there are also quite a few of these fantasy consoles that are open sores/free as in freedom. Who knows, maybe I'm going to review one or two of them sometime.


  1. Like many other notions in computing, "virtual machine" is vaguely defined in the literature, which shall inevitably make it hard for the reader to parse my meaning. On one hand, in the systems and Lunix world, virtual machines are programs that emulate very specific irons, using very specific functionality, such as hardware acceleration specialized for the task, so that from the virtualized perspective, the "guest" has an image of the world that is as close as possible to the "host" iron -- this is how clouds were built, for example. This meaning of "virtual machine" is very specific, though, and it is used as such in those particular social circles.

    The more general meaning of "virtual machine" is, for example, the one in "Java Virtual Machine": here, the programming language and its libraries are used as support to provide an operating system, i.e. an environment where the underlying hardware is abstracted so as to "make it easier for the programmer" to work with the computer, whatever that means. PICO-8, in particular, did not emerge from a particular piece of existing hardware, but was rather inspired by a family of computers now long gone, so when I refer to it as a virtual machine, I mean something closer to the JVM (or Android, for example) than to QEMU. In other words, the virtual machine is an inevitable consequence of the property of Turing machines that they can be stacked on top of one another.

  2. For what it's worth, someone is keeping a list of these fantasy consoles on Shithub. Now, you might wonder -- at least I for one couldn't help but wonder -- why ScummVM wasn't included in this list. It fully qualifies as far as I can tell; it's just that its creators didn't provide it with fancy-schmancy labels.

    Anyway, some of these fantasy consoles are truer to the original hardware than others. Some of them are in fact glorified gaming engines allowing C# as a programming language, while others are quite faithful emulations of the ZX Spectrum or whatever, so they instead provide BASIC as a programming environment. Either way, this may not amount to much overall, and I for one will admit I haven't tried each and every system on that list, although I've looked at most of them; the variety in that list is astounding. So either plenty of kids have a buttload of time on their hands, or the world isn't such a small place after all.

    Anyway, maybe it's not immediately observable, but I'm actually happy to have stumbled upon said list. 

  3. I suspect that upon proper scavenging, one may be able to find further items on sites such as itch.io. This is where I found PICO-8 initially, but I haven't looked at the retro category in more detail, at least not yet.

Filed under: gaming.

12 Responses to “From the fabulous world of fantasy consoles: PICO-8”

  1. #1:
    Verisimilitude says:

    I started reading this article expecting more criticism than was present. I think the core of it is knowing the full system, as an acquaintance of ours so often stresses. With a truly constrained "environment", I know exactly what's possible, and I can do anything within it, even if it won't be easy.

    I'm most interested in machine code hacking, so I was drawn to CHIP-8 over the years. It was originally implemented on the COSMAC VIP, using the RCA 1802, in just 512 octets. Even here, some people extend it beyond recognition. I participate in a game jam, and one year I wrote a forty-octet game while someone else exhausted the extended sixteen-bit address space on animations; the original limit is three and one half kibibytes. This rift, even at the lowest levels, can be a bottomless void.

    At this level, with a small monochrome screen lacking sound beyond a bell, the constraints truly reduce a game to its most distilled essence, or it simply doesn't get made.
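
    To give a concrete idea of the scale: the heart of any CHIP-8 implementation is a two-octet fetch followed by a switch on the top nibble. Below is a minimal sketch of that core in C, covering just a few of the thirty-five opcodes -- the names are illustrative, not taken from any particular implementation.

        #include <stdint.h>

        uint8_t  mem[4096];   /* 4 KiB in total; programs load at 0x200 */
        uint8_t  v[16];       /* registers V0..VF; VF doubles as a flag */
        uint16_t i_reg;       /* the address register I */
        uint16_t pc = 0x200;

        void step(void) {
            /* fetch: every instruction is exactly two octets, big-endian */
            uint16_t op = (uint16_t)((mem[pc] << 8) | mem[pc + 1]);
            pc += 2;
            /* decode: the top nibble selects the instruction family */
            switch (op & 0xF000) {
            case 0x1000: pc = op & 0x0FFF;                break; /* 1NNN: jump     */
            case 0x6000: v[(op >> 8) & 0xF]  = op & 0xFF; break; /* 6XNN: VX = NN  */
            case 0x7000: v[(op >> 8) & 0xF] += op & 0xFF; break; /* 7XNN: VX += NN */
            case 0xA000: i_reg = op & 0x0FFF;             break; /* ANNN: I = NNN  */
            /* ...the rest of the thirty-five opcodes fit in a page or two */
            }
        }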

    I've no special appreciation of it, but this implementation is easy to use, at the least:
    http://johnearnest.github.io/Octo/

    Here one can see a simple line avoidance game having undergone the latest of three iterations:
    http://verisimilitudes.net/2020-10-27

    It doesn't get much simpler than this, for games, and the ability to document it in such extreme detail is oh so satisfying.

  2. #2:
    spyked says:

    While I agree in principle, the problem with "knowing the full system" is establishing in fact where that knowledge begins and ends, as per that old problem of bootstrapping: you don't really *know* the *full* system unless and until you are able to build it from the ground up, down to the transistor level.

    Otherwise, if you agree to build it e.g. as a "virtual machine" on commodity hardware, then you don't in fact know the *full* system, which raises the question: how much knowledge about the system is actually *sufficient* in order to be able to use it to its full extent? I never probed my Z80 with a multimeter, yet by using it alone I learned how to control it fully.

    This is why that criticism is absent, and partly why I stopped adding further to that computing category altogether, at least from the systems perspective: as long as I don't fully own the hardware, I really don't care all that much. More so when it comes to gaming.

  3. #3:
    Verisimilitude says:

    Don't be so obtuse. When I program in CHIP-8, I program against its specification, and avoid the vague areas. If my correct program then fails, it's naught to do with me. That's enough to satisfy me, in this, for now.

  4. #4:
    spyked says:

    Maybe I'm being obtuse, or perhaps it's rather that you're trying to be too clever and I'm not falling for it.

    If a specification is equivalent to "knowing the full system", then by this criterion alone I deem POSIX and C to be enough. So what are we talking about, then? If size by itself, without an underlying reason, is a criterion for this criticism of yours, then I am unimpressed and not interested.

    Sure, I understand that small systems entail small specifications, which may make for good didactic exercises; I also understand that the discussion of size is absolutely needed when establishing economic limitations such as memory size; hell, I may even grant that it's fun to approach the field of systems design from a minimalist perspective, for purely artistic purposes, i.e. that "distilled essence" you mentioned. But you don't really bother to put the subject of critique in context other than some vague mention of "machine code hacking", which isn't a subject in and of itself in the world I inhabit, but rather a small activity in an otherwise vast field.

  5. #5:
    Verisimilitude says:

    The issue is that POSIX and the C language are grotesquely large and have intractable failure cases. Both of these systems force irrelevant problems to be solved for which there's no good solution.

    There's a difference between that and having no instruction timing guarantees and so using the system timer to its fullest, or not assuming a certain value comprises a flag generated by an instruction, both as in CHIP-8.

    The focus here is that, in such machine code hacking, I can establish invariant cases easily, more easily than in some higher-level languages, even good such languages. At the lowest levels, I can compress a world of variation and uncertainty into a single known point.

  6. #6:
    spyked says:

    > POSIX and the C language are grotesquely large

    I know; and I'm not debating this aspect, as I know it from having taken the time to go through the specs. I am however debating the argument against size as a principal issue, as it sounds too similar to the old high school moaning of "I didn't read War and Peace because it was too long". The main argument against this complaint about size is precisely my having read the specs: yes, the documents in question are full of flaws, but this may only be determined by reading them and pointing out the flaws in question, not by just bitching about bloat.

    > and have intractable failure cases

    I don't know what this means. I know what an intractable problem is, and I know what a failure case is, but I have no clue how a specification/system can "have intractable failure cases". Perhaps an example or two would help here?

    > Both of these systems force irrelevant problems to be solved for which there's no good solution

    I am not debating that said systems pose problems, but I'm not sure which of them you consider irrelevant (and more importantly, irrelevant *in what context*?) and to what extent there is no good solution available (and "good" *in what context*?). Can you give two examples of such problems?

    > [...] both as in CHIP-8

    Then why choose CHIP-8 and why not, say, Forth?

    > At the lowest levels

    My point, as stated above, is that the lowest level is always the physical iron. Simply because CHIP-8 ran on RCA hardware back in the '70s does not mean you can wave away the fact that you are running it, say, on a *nix machine in the 2020s, nor that you are using a *nix machine to browse the web. Sure, it's nice to get rid of complexity, but one doesn't simply get rid of it by "changing the language", only by pushing it to another layer. Otherwise you are going to hit that complexity one way or another, simply by using the system; some systems just manage to hide it better than others. In other words, *everything* in engineering (and in economics and life in general) is to be paid for one way or another, there is no such thing as "for free".

    From this point of view, one example would be that C and POSIX are not evil because they "force irrelevant problems", but because they force them upon the application developers, who shouldn't have to be concerned with pointerisms and various undefined behaviours. Still, I personally have written correct programs, using subsets of C and POSIX, which did not require all the complexity specified in the standards and which, similarly, had clear, reasonable invariants. So to my eye the underlying issue is rather that some abstractions are better at solving *certain* problems than others and, circling back to that old problem of operating systems/language environments, that there is no such thing as a "general-purpose" tool -- in other words, things are hammers only inasmuch as they're good at driving nails into certain materials specified beforehand.
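
    To illustrate the sort of subset I mean -- a made-up fragment, not lifted from any of those programs -- every buffer travels together with its capacity, so the usual pointerisms have no room in which to occur:

        #include <stddef.h>

        /* Append at most n octets of src to dst, which holds cap octets.
         * Invariants: dst stays NUL-terminated and no write ever lands
         * outside dst[0..cap-1]; truncation is explicit, not undefined.
         * Precondition: len < cap. */
        size_t append(char *dst, size_t cap, size_t len, const char *src, size_t n) {
            size_t i;
            for (i = 0; i < n && len + 1 < cap; i++)
                dst[len++] = src[i];
            dst[len] = '\0';
            return len;
        }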

    TL;DR: I'm well beyond discussing this stuff in terms of hyperboles and I find arguments along the lines of "X is The Right System" to be a waste of time. If you still think this makes me obtuse, then let's leave it at that.

  7. #7:
    Verisimilitude says:

    An intractable failure case is a failure case for which it's either very difficult or impossible to account. Consider that the write system call can fail indefinitely, with no recourse. Apparently, the pwd command may conform to POSIX by returning a single period. When this nonsense becomes an ingredient for the real recipe, it only gets in the way.
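
    To make the write example concrete, here is the textbook retry loop, as a sketch; it absorbs the benign failures, and what remains is exactly the intractable part:

        #include <errno.h>
        #include <unistd.h>

        /* Write all of buf, retrying whatever can be retried. */
        ssize_t write_all(int fd, const char *buf, size_t len) {
            size_t done = 0;
            while (done < len) {
                ssize_t n = write(fd, buf + done, len - done);
                if (n >= 0)
                    done += (size_t)n;  /* partial writes are legal; go again */
                else if (errno == EINTR)
                    continue;           /* interrupted by a signal: benign    */
                else
                    return -1;          /* ENOSPC, EIO, EPIPE...: no recourse */
            }
            return (ssize_t)done;
        }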

    A machine code is generally better specified than a Forth. When I write machine code, and when I read it, I evaluate it in my head first; if the true execution fails, I may have entered something incorrectly, or the implementation may be flawed. It will be one or the other.

    Still, I suppose this is a tangent. Feel free to have the last word.

  8. #8:
    spyked says:

    I guess this is not so much a last word as yet another tangent.

    > Consider that the write system call can fail indefinitely, with no recourse

    I agree; now let's consider that all software written on top of Unix (including, but not limited to, all emulators and high-level language implementations) is doomed to use it. And let us further consider why this "indefinite failure" is the case (at least as far as I can tell): because all I/O (occurring between the von Neumann machine's "main memory" and "peripheral devices") was squeezed together into the abstraction called "file", which file doesn't even behave consistently: sometimes it's a socket, other times it's a pipe, while other times who even knows what it is... a "device", eh?

    I'm not attempting to exonerate the folks who made POSIX from their idiocy. However, this is the cost of a self-proclaimed general-purpose abstraction for I/O, the alternative being e.g. the DOS way, where each and every application came bundled with drivers for the peripherals it used, with the exception of... actual files, of course. This is where the perversity lies: even though POSIX *claims* to provide a general-purpose abstraction for I/O, in practice the application will be forced to implement an ad-hoc driver for whichever "/dev/X" it attempts to talk to.
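
    To spell out what such an ad-hoc driver looks like in practice, here's a sketch for the terminal case: the moment a program needs anything beyond a byte stream, it falls through the "file" abstraction into ioctl, with request codes specific to whichever device sits behind the descriptor.

        #include <stdio.h>
        #include <sys/ioctl.h>
        #include <unistd.h>

        int main(void) {
            struct winsize ws;
            /* a terminal's window size cannot be read() out of the "file";
             * it takes a tty-specific request, i.e. a slice of tty-driver
             * knowledge embedded in the application */
            if (ioctl(STDOUT_FILENO, TIOCGWINSZ, &ws) == -1) {
                perror("ioctl"); /* fails outright when stdout is a pipe or a file */
                return 1;
            }
            printf("%hu rows, %hu cols\n", ws.ws_row, ws.ws_col);
            return 0;
        }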

    As far as I'm concerned, it's liberating to spell this out, despite the fact that, as I mentioned, I did implement sane applications using write. Now, did they work by accident?

    > the pwd command may conform to POSIX by returning a single period

    The spec pretty clearly states otherwise.

  9. #9:
    Verisimilitude says:

    I suppose I'll continue replying then.

    > because all I/O (occurring between the von Neumann machine's "main memory" and "peripheral devices") was squeezed together into the abstraction called "file", which file doesn't even behave consistently

    Yes, exactly.

    > However, this is the cost of a self-proclaimed general-purpose abstraction for I/O, the alternative being e.g. the DOS way, where each and every application came bundled with drivers for the peripherals it used, with the exception of... actual files, of course.

    There are alternatives. I've written about exactly this here:

    I firmly believe proper system design takes pain unto itself to eliminate edge cases. A proper such system would provide the option of treating wildly different I/O sources as the same, but permit reducing the failure cases of any single one by allowing this to be avoided. It may seem reasonable to give a terminal, file system, and TCP similar interfaces, yet this group poses necessarily different failure cases. The source of input for a terminal is usually a human and so a read can't fail, with a potentially indefinite wait; a file system can make a request, yet fail due to recognized hardware failure or semantic issues with the underlying model; a TCP connection can fail for many reasons and may result in failure from which there's no reasonable recovery mode. Collapsing these under a lone interface collapses their failure modes into a single model, as well, and it should be realized that the addition of invariants through a lack of genericity is valuable.
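
    The same point, put in code (a sketch): after a failed read, the program is left sorting a single errno into per-device meanings that the common interface pretends not to have.

        #include <errno.h>
        #include <unistd.h>

        /* One return convention, three different worlds of failure. */
        ssize_t read_classified(int fd, void *buf, size_t len) {
            ssize_t n = read(fd, buf, len);
            if (n > 0)  return n; /* data: the only uniform case                  */
            if (n == 0) return 0; /* "end of file": EOF? hangup? FIN from a peer? */
            switch (errno) {
            case EINTR:      break; /* any descriptor: a signal arrived        */
            case ECONNRESET: break; /* only meaningful for sockets             */
            case EIO:        break; /* disk gone bad? controlling tty hung up? */
            default:         break;
            }
            return -1;
        }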

    We see similar idiocy with Unicode, and pouring the edge cases of all languages and other nonsense into one vase.

    > Now, did they work by accident?

    No, but the design of the system allows most any program to be deadlocked without recourse, meaning it's garbage.

    As for pwd, it seems I had outdated information. Consider a different example: Some interface requires a file, so a unique file must be created, which opens it to naming conflicts, space exhaustion, and other nonsense. To repeat myself, when this nonsense becomes an ingredient for the real recipe, it only gets in the way.

    One of the primary issues with computing is how an idiot can build a broken interface, which humans must then work around indefinitely. This wasn't an issue with mathematics, if only because the practitioners were their computers, and so valued concision and beauty, lest they rewrite something to have those qualities.

  10. #10:
    spyked says:

    > There are alternatives. I've written about exactly this

    I went through your easy-peasy-tcp example and... well, I'll digress and instead challenge you to consider a less trivial task, that of a WWW server written in Common Lisp. Consider that it took me ten humongous articles (minus the summary) to fully document the code, *minus* its dependencies; and consider that that task made me realize that I was attempting to rewrite Wordpress in Common Lisp, which task I gave up and instead moved on to the actual Wordpress, based upon that PHP abomination we all know. Do you think me a fool for resorting to this, rather than someone who values his time?

    Similarly, consider my attempt at a much more approachable task, i.e. to write a fault-tolerant IRC bot in the same language, and the reasons why I gave up. I wish all the luck to whomever may try this again in the future, and I don't particularly blame Common Lisp or SBCL for this failure, just don't come biting me in the ass about how "everything would be ok had I just used FFI". I'd rather use C than resort to this sort of nonsense.

    > a TCP connection can fail for many reasons and may result in failure from which there's no reasonable recovery mode

    Precisely this -- I don't think C is particularly the problem here; rather, it's the bizarre design of TCP and all the implementation issues that derive from it. I'm not talking out of my ass here either, as I've read reasonable chunks of the Linux TCP implementation numerous times, when dealing with some of those intractable failure cases. Those failure cases did not come from C or Linux or whatever, they came from the TCP implementation itself, which is an unmaintainable horror that should have never made it into the kernel, for it fails to adhere to even the most basic "guidelines" laid out by Torvalds himself.

    > the design of the system allows most any program to be deadlocked without recourse, meaning it's garbage.

    The "design" of humans allows each single individual to be deadlocked without recourse, that alone does not make it (or them) garbage.

    > Some interface requires a file, so a unique file must be created

    mkstemp may be of some help here? I really don't know.
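
    For reference, the usual mkstemp pattern is sketched below; as far as I can tell it only relocates the failure cases -- the naming race goes away, space exhaustion and friends do not:

        #include <stdio.h>
        #include <stdlib.h>
        #include <unistd.h>

        int main(void) {
            char path[] = "/tmp/exampleXXXXXX"; /* the XXXXXX suffix is rewritten in place */
            int fd = mkstemp(path);             /* creates and opens the file atomically   */
            if (fd == -1) {
                perror("mkstemp");              /* EEXIST after too many tries, ENOSPC...  */
                return 1;
            }
            /* ... use fd ... */
            close(fd);
            unlink(path);                       /* dispose of the name we never wanted */
            return 0;
        }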

    > One of the primary issues with computing is how an idiot can build a broken interface, which humans must then work around indefinitely

    Verisimilitude, I'm not disagreeing with you here, merely pointing out that quite often, folks tend to attempt to solve problems using the wrong means. Maybe you don't actually *need* to create a unique file for that interface? I don't know, I'm merely considering this side of the issue.

    > This wasn't an issue with mathematics

    This is a very naive view of the history of mathematics, which went through an evolution spanning centuries before reaching the distilled form currently present in the didactic material. Consider that Lord Newton's calculus is as much a consequence of his Lordship as of his genius, or in other words, that politics can't simply be brushed aside from this history. Consider also some of the very unprincipled methods (e.g. approximations) used by some of the practitioners (e.g. physicists), and that new mathematics literally had to be invented (e.g. the Dirac delta function) in order to model their experiments.

    Bringing this back to computing, my best guess is that despite the obvious amplification brought about by computers, the field, aged less than a century, is probably just about nearing its stage of puberty. Whether this stage will be further delayed by the incoming dark ages, or whether war will actually speed up the maturation process (as wars often do), I guess we'll see.

  11. #11:
    Verisimilitude says:

    > I went through your easy-peasy-tcp example

    That was explicitly labelled an experiment, so don't judge it as something in which I've more confidence.

    > Do you think me a fool for resorting to this, rather than someone who values his time?

    Neither of us knows the other well enough to call him a fool, so no. Compare it to what I do, however. I'll try to go without rather than get trapped. I didn't write the HTTP server I use, but I wrote my Gopher server. I prefer implementing for myself what I can, but it's not reasonable for everything, currently, no.

    > I'd rather use C than resort to this sort of nonsense.

    Not everything is worth doing, I agree.

    > Precisely this -- I don't think C is particularly the problem here; rather, it's the bizarre design of TCP and all the implementation issues that derive from it.

    Every network operation is unreliable. No packet is guaranteed. Thus it follows that motion is impossible.

    The "design" of humans allows each single individual to be deadlocked without recourse, that alone does not make it (or them) garbage.

    We're stuck with our design, for now. Just because a man can shit himself to death doesn't mean we should build computers to do the same.

    > mkstemp may be of some help here? I really don't know.

    The point is that it can fail, and the greater operation may have had no naturally intractable failure case. This is a condemnation of bad design.

    > This is a very naive view of the history of mathematics, which went through an evolution spanning centuries before reaching the distilled form currently present in the didactic material.

    I don't recall Euclid's Elements having these issues, although I'm not finished reading my copy, I'll admit.

    Still and again, I suppose this is a tangent. Feel free to have the last word.

  12. #12:
    spyked says:

    Let's bring this full circle then.

    > That was explicitly labelled an experiment, so don't judge it as something in which I've more confidence.

    There's nothing wrong with that, I'm just pointing out that I've seen very few systems that lacked essential complexity and that at the same time were... useful. One of them was my old Romanian ZX Spectrum clone, which sometimes took an hour to load the larger 8-bit games off the magnetic tape, but which didn't even have an Ethernet port; and I suspect the cost of adding Ethernet alone (without the rest of the networking stack) would be non-trivial.

    > I try to prefer implementing for myself what I can, but it's not reasonable for everything, currently, no.

    > Every network operation is unreliable. No packet is guaranteed. Thus it follows that motion is impossible.

    ... theoretically. In practice, the particular HTTP server hosting this blog serves web pages just fine and the blog also has a comment mechanism, which is what it was intended for in the first place. This supposed paradox actually ties pretty well into the mathematics thread above and below, thank teh Lords for infinitesimal calculus!

    > Just because a man can shit himself to death doesn't mean we should build computers to do the same.

    Again, this must sound really annoying: I don't disagree at all, but let's consider that Nature (if anything deserves a personification, I guess it's "nature") has spent millions of years attempting to perfect autonomous systems that nevertheless have this failure mode. The Big Question is, can any non-trivial autonomous systems be built that *don't* have this failure mode?

    > I don't recall Euclid's Elements having these issues

    I'm not saying *some* problems cannot be solved elegantly. I myself am more interested in problems along the lines of acoustic modelling, which are the subject of thick books comprising just as thick equations with partial derivatives that I'm not at all equipped to present here. Or more generally, take some of the fluid dynamics problems which can only be solved numerically... I haven't visited this field in a while, but to my eye it's just one of the many examples of impedance mismatch between the beautiful Platonic ideals and reality.
