A list of scattered definitions

June 22, 2023 by Lucian Mogosanu

Generally speaking, in computing a virtual machine is a software implementation of some particular language/Turing automaton[1]. In this respect there is no difference between "language run-times", "interpreters", "application programming interfaces" and "operating systems". They are all the same sort of thing implemented in different ways.
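To make this less abstract, here's a minimal sketch in C: the dozen lines below implement a deliberately trivial three-instruction language and are therefore, by the definition above, a virtual machine for it. The instruction set is invented purely for illustration.

    /* A toy virtual machine: '+' increments an accumulator, '-'
     * decrements it, '.' prints it; anything else is ignored. */
    #include <stdio.h>

    static void run(const char *program) {
        int acc = 0;                    /* the machine's only register */

        for (; *program != '\0'; program++) {
            switch (*program) {
            case '+': acc++; break;
            case '-': acc--; break;
            case '.': printf("%d\n", acc); break;
            }
        }
    }

    int main(void) {
        run("+++.-.");                  /* prints 3, then 2 */
        return 0;
    }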

A process, in the POSIX sense, is a collection of virtual resources, i.e. resources whose interaction with the concrete physical machine is mediated by the operating system.
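A minimal sketch, assuming a POSIX system: after fork(), parent and child each hold their own copy of the same variable, precisely because a process's memory is a virtual resource handed out by the operating system, not the physical RAM itself.

    #include <stdio.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int main(void) {
        int x = 42;
        pid_t pid = fork();             /* duplicate the virtual resources */

        if (pid == 0) {
            x = 1337;                   /* only the child's copy changes */
            printf("child  %d: x = %d\n", getpid(), x);
        } else {
            wait(NULL);                 /* let the child print first */
            printf("parent %d: x = %d\n", getpid(), x);
        }
        return 0;
    }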

A thread is an abstraction of the execution resource of a process, i.e. of the CPU.
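Again a sketch, this time with POSIX threads (compile with -pthread): the two threads share the process's memory, yet each is a separately schedulable unit of execution, i.e. an abstraction of the CPU.

    #include <stdio.h>
    #include <pthread.h>

    static int shared = 0;              /* one copy, visible to both threads */

    static void *worker(void *arg) {
        (void)arg;
        shared = 1;                     /* runs on its own execution resource */
        return NULL;
    }

    int main(void) {
        pthread_t tid;

        pthread_create(&tid, NULL, worker, NULL);
        pthread_join(tid, NULL);        /* wait for the other CPU abstraction */
        printf("main thread sees shared = %d\n", shared);
        return 0;
    }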

A virtual machine, in the cloud sense, is a collection of virtual resources which closely resemble the underlying physical machine, and whose interaction with physical resources is mediated by the operating system. In other words, the virtual machine in this sense is a process that allows itself to fall victim to the lie[2] that it is running on a physical machine. Now with special hardware support.
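The lie even has a tell, at least on x86: CPUID leaf 1 sets bit 31 of ECX when the code runs under a hypervisor. A sketch, assuming GCC or clang and their <cpuid.h>:

    #include <stdio.h>
    #include <cpuid.h>

    int main(void) {
        unsigned int a, b, c, d;

        /* CPUID.1:ECX bit 31 is the "hypervisor present" bit */
        if (__get_cpuid(1, &a, &b, &c, &d))
            printf("hypervisor bit: %s\n",
                   (c & (1u << 31)) ? "set (probably a VM)" : "clear");
        return 0;
    }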

A container, or namespace in Linux parlance, is a means to restrict the set of virtual resources available to a set of processes.
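A sketch of the mechanism on Linux, assuming root (or CAP_SYS_ADMIN): unshare() gives the calling process a private copy of one virtual resource, here the hostname, leaving the rest of the system none the wiser. The hostname string is, of course, made up.

    #define _GNU_SOURCE
    #include <stdio.h>
    #include <sched.h>
    #include <unistd.h>

    int main(void) {
        char name[64];

        if (unshare(CLONE_NEWUTS) != 0) {       /* private UTS namespace */
            perror("unshare");                  /* typically needs privileges */
            return 1;
        }
        sethostname("container", 9);    /* visible only in this namespace */
        gethostname(name, sizeof name);
        printf("hostname here: %s\n", name);
        return 0;
    }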

A variable is an abstraction of volatile data storage.

A file is an abstraction of persistent data storage.
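The two definitions above contrast neatly in a few lines of C; the path /tmp/scratch is merely illustrative. The variable dies with the process, while the file remembers how many times the program has run.

    #include <stdio.h>

    int main(void) {
        int counter = 0;                /* volatile: dies with the process */
        FILE *f = fopen("/tmp/scratch", "r");

        if (f != NULL) {                /* persistent: survives the process */
            fscanf(f, "%d", &counter);
            fclose(f);
        }
        counter++;
        printf("run number %d\n", counter);

        f = fopen("/tmp/scratch", "w");
        if (f != NULL) {
            fprintf(f, "%d\n", counter);
            fclose(f);
        }
        return 0;
    }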

A program is a specific type of file containing a recipe or a set of laws for an instance of a virtual machine, i.e. one that upon execution becomes a process.
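Sketched in POSIX C: /bin/echo sits on disk as a mere file until exec breathes a process into it.

    #include <stdio.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int main(void) {
        pid_t pid = fork();

        if (pid == 0) {
            /* the child replaces itself with the program stored on disk */
            execl("/bin/echo", "echo", "now I am a process", (char *)NULL);
            perror("execl");            /* reached only if exec failed */
            return 1;
        }
        wait(NULL);
        return 0;
    }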

A user interface is any part of the physical machine which on one hand exposes the aforementioned abstractions to the outside world, and on the other mediates their manipulation, in a digestible[3] way.


  1. The fact that it's software has a few interesting consequences too. Turing automata are stackable, that is, you can build a Lisp in a Java built in C++, yielding something such as Clojure; or you can build a Java written in C++ running atop a Linux along with a shitload of support services, yielding Android. The possibilities are endless, and endlessly lulzy. 

  2. When big pharma sold all those covid jabs to a government near you, what would you say that was? Did they install a live upgrade on the virtual machine? And did you experience any downtime in the process? 

  3. This is a hot topic of debate which gave birth to an entire field of science, that of human-computer interaction. There are two problems with this state of affairs.

    The first problem is indeed scientific: while the computing machine is a nearly-perfect form manipulator, the sane human mind (as studied by psychology) tends to disregard form in favour of substance. This has led practitioners in the field to spend an enormous amount of resources (from focus groups to research papers to what have you) attempting to find the perfect form suitable for interaction with even a monkey (or a three-year-old), this perfect form being the smartphone/tablet. The less technically-inclined will simply use the simplest functionalities while being entirely baffled by anything else coming their way, say, a system update -- while the more "technically-inclined" will jump with joy at all the new bells and whistles added in the latest upgrade and at the latest "entertainment" in store. Quite the perfect recipe, wouldn't you say? The problem is that all this scientific research still cannot address substance, at least not beyond "removing biases" or any other such nonsense.

    The second problem, as problems often tend to go, is socio-cultural (and let's not forget: political): historically speaking, computing seems to thrive in environments where human-computer interaction is more direct. Thus a washing machine with fewer configuration options will, to the average mind, always be preferable to one with more configuration options; while a computer with a simpler (as opposed to "simpler to use") interface will outperform, in terms of usability, one with a more complex interface. Still, in computing as well as in life and regardless of the average mind's inclinations, abundance of resources will yield complexity, while lack of resources will push towards centralization. The end result, in life first and foremost, is a complex set of tools which no one knows how to use anymore.

    These problems, as unsolvable as they find themselves today, will solve themselves through extinction. Hopefully the next generations will take this as a lesson... or who knows, maybe they'll invent the wise computer by then. 

Filed under: computing.

3 Responses to “A list of scattered definitions”

  1. #1:
    Cel Mihanie says:

    "A file is an abstraction of persistent data storage"

    I dunno about that; the Unix/POSIX conception of a "file" seems to be more like "an abstraction for accessing data external to the process", i.e. anything that's not a variable. You have sockets, FIFOs, procfs and sysfs entries that are not persistent, not finite, and that sometimes don't even store anything.

  2. #2:
    spyked says:

    While I agree that a uniform interface for manipulating I/O resources is useful, I didn't go the orthodox way because the standard definition performs a series of semantic abuses, beginning with the sheer fact that the interface defined by the POSIX folks is barely uniform. For example, you can't lseek a pipe or a socket, just as you can't recvmsg a regular file.
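    A sketch of the lseek case, for the record:

        #include <stdio.h>
        #include <errno.h>
        #include <string.h>
        #include <unistd.h>

        int main(void) {
            int fds[2];

            pipe(fds);
            /* the "uniform" interface isn't: this fails with ESPIPE */
            if (lseek(fds[0], 0, SEEK_SET) == -1)
                printf("lseek on a pipe: %s\n", strerror(errno));
            return 0;
        }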

    It really escapes me why they tried to fit all I/O within the single umbrella of "file", only then to provide ad-hoc implementations of each "file type", when they could have just called the "file descriptor" a "resource descriptor" (or whatever) and let a file be a file, a socket be a socket and so on and so forth. Anyway, no one using Linux calls a socket a "type of file", do they? Because each of these abstracts different resources: the socket a network connection, the file some disk space, the pipe a virtual support for consumer-producer design patterns; and so they're different things, no matter how elaborately the authors of the standard(s) try to lie to us.

    procfs and sysfs are indeed another type of perversion, in that they provide the illusion of persistent storage -- as opposed to device files and named pipes, which are indeed denoted as special types in the file system.

    I can only feel sorry for the POSIX folks: it's so fucking hard to design an interface that on one hand supports the complexity they required, while on the other maintains consistency -- this also apropos of our discussion regarding various types of disks. So I understand, they gave up consistency in favour of practicality, the end result being that the systems programmer will now also be doomed to suffering along with them.

  3. #3:
    Cel Mihanie says:

    I think it all makes a lot more sense if you just mentally substitute your/our "resource" for their unfortunately named "file". So in POSIX, you're actually opening "resources", having different "resource types", from the "resource system".

    Amusingly, even my beloved ZX Spectrum, a machine very far removed from any POSIX standard (or any standard at all), also features an I/O abstraction interface as one of the many overengineered things in its ROM. The PRINT and LIST commands seem like they are meant for the screen, but in fact they can be told to send data over any "channel" (think 'character device'). By default you have a tty-like "channel" for the main screen, one for the input area, one for the printer and one for the network, and I think you can also open channels for files on a floppy (but not tape). On the other hand, LOAD/SAVE commands have an entirely different I/O abstraction for tapes, disks, ramdisks, etc. At least they use "files" in the traditional sense.

    I/O abstraction is one HELLUVA drug.
