EWD 1036, annotated

June 9, 2022 by Lucian Mogosanu

Ante-scriptum: below lies a copy of Edsger W. Dijkstra's On the cruelty of really teaching computing science, annotated with my own comments in the form of footnotes. I am reviewing this (not the first time, either) because I firmly believe the subject remains painfully relevant thirty-four years after the initial essay/talk, more so given the bleak state of the so-called field of "education" as a whole.

On the cruelty of really teaching computing science

The second part of this talk1 pursues some of the scientific and educational consequences of the assumption that computers represent a radical novelty. In order to give this assumption clear contents, we have to be much more precise as to what we mean in this context by the adjective "radical". We shall do so in the first part of this talk, in which we shall furthermore supply evidence in support of our assumption.

The usual way in which we plan today for tomorrow is in yesterday's vocabulary. We do so, because we try to get away with the concepts we are familiar with and that have acquired their meanings in our past experience. Of course, the words and the concepts don't quite fit because our future differs from our past2, but then we stretch them a little bit. Linguists are quite familiar with the phenomenon that the meanings of words evolve over time, but also know that this is a slow and gradual process.

It is the most common way of trying to cope with novelty: by means of metaphors and analogies we try to link the new to the old, the novel to the familiar. Under sufficiently slow and gradual change, it works reasonably well; in the case of a sharp discontinuity, however, the method breaks down: though we may glorify it with the name "common sense", our past experience is no longer relevant, the analogies become too shallow, and the metaphors become more misleading than illuminating. This is the situation that is characteristic for the "radical" novelty.

Coping with radical novelty requires an orthogonal method. One must consider one's own past, the experiences collected, and the habits formed in it as an unfortunate accident of history, and one has to approach the radical novelty with a blank mind, consciously refusing to try to link it with what is already familiar, because the familiar is hopelessly inadequate. One has, with initially a kind of split personality, to come to grips with a radical novelty as a dissociated topic in its own right. Coming to grips with a radical novelty amounts to creating and learning a new foreign language that can not be translated into one's mother tongue. (Any one who has learned quantum mechanics knows what I am talking about3.) Needless to say, adjusting to radical novelties is not a very popular activity, for it requires hard work. For the same reason, the radical novelties themselves are unwelcome.

By now, you may well ask why I have paid so much attention to and have spent so much eloquence on such a simple and obvious notion as the radical novelty. My reason is very simple: radical novelties are so disturbing that they tend to be suppressed or ignored, to the extent that even the possibility of their existence in general is more often denied than admitted4.

On the historical evidence I shall be short. Carl Friedrich Gauss, the Prince of Mathematicians but also somewhat of a coward, was certainly aware of the fate of Galileo -- and could probably have predicted the calumniation of Einstein -- when he decided to suppress his discovery of non-Euclidean geometry, thus leaving it to Bolyai and Lobatchewsky to receive the flak. It is probably more illuminating to go a little bit further back, to the Middle Ages. One of its characteristics was that "reasoning by analogy" was rampant; another characteristic was almost total intellectual stagnation5, and we now see why the two go together. A reason for mentioning this is to point out that, by developing a keen ear for unwarranted analogies, one can detect a lot of medieval thinking today.

The other thing I can not stress enough is that the fraction of the population for which gradual change seems to be all but the only paradigm of history is very large, probably much larger than you would expect. Certainly when I started to observe it, their number turned out to be much larger than I had expected.

For instance, the vast majority6 of the mathematical community has never challenged its tacit assumption that doing mathematics will remain very much the same type of mental activity it has always been: new topics will come, flourish, and go as they have done in the past, but, the human brain being what it is, our ways of teaching, learning, and understanding mathematics, of problem solving, and of mathematical discovery will remain pretty much the same. Herbert Robbins clearly states why he rules out a quantum leap in mathematical ability:

"Nobody is going to run 100 meters in five seconds, no matter how much is invested in training and machines. The same can be said about using the brain. The human mind is no different now from what it was five thousand years ago. And when it comes to mathematics, you must realize that this is the human mind at an extreme limit of its capacity." My comment in the margin was "so reduce the use of the brain and calculate7!". Using Robbins's own analogy, one could remark that, for going from A to B fast, there could now exist alternatives to running that are orders of magnitude more effective. Robbins flatly refuses to honour any alternative to time-honoured brain usage with the name of "doing mathematics", thus exorcizing the danger of radical novelty by the simple device of adjusting his definitions to his needs: simply by definition, mathematics will continue to be what it used to be. So much for the mathematicians.

Let me give you just one more example of the widespread disbelief in the existence of radical novelties and, hence, in the need of learning how to cope with them. It is the prevailing educational practice, for which gradual, almost imperceptible, change seems to be the exclusive paradigm. How many educational texts are not recommended for their appeal to the student's intuition! They constantly try to present everything that could be an exciting novelty as something as familiar as possible. They consciously try to link the new material to what is supposed to be the student's familiar world. It already starts with the teaching of arithmetic. Instead of teaching 2 + 3 = 5, the hideous arithmetic operator "plus" is carefully disguised by calling it "and", and the little kids are given lots of familiar examples first, with clearly visible objects such as apples and pears, which are in, in contrast to equally countable objects such as percentages and electrons, which are out. The same silly tradition is reflected at university level in different introductory calculus courses for the future physicist, architect, or business major, each adorned with examples from the respective fields. The educational dogma seems to be that everything is fine as long as the student does not notice that he is learning something really new; more often than not, the student's impression is indeed correct. I consider the failure of an educational practice to prepare the next generation for the phenomenon of radical novelties a serious shortcoming. [When King Ferdinand visited the conservative university of Cervera, the Rector proudly reassured the monarch with the words: "Far be from us, Sire, the dangerous novelty of thinking.". Spain's problems in the century that followed justify my characterization of the shortcoming as "serious".] So much for education's adoption of the paradigm of gradual change8.

The concept of radical novelties is of contemporary significance because, while we are ill-prepared to cope with them, science and technology have now shown themselves expert at inflicting them upon us. Earlier scientific examples are the theory of relativity and quantum mechanics; later technological examples are the atom bomb and the pill. For decades, the former two gave rise to a torrent of religious, philosophical, or otherwise quasi-scientific tracts. We can daily observe the profound inadequacy with which the latter two are approached, be it by our statesmen and religious leaders or by the public at large. So much for the damage done to our peace of mind by radical novelties.

I raised all this because of my contention that automatic computers represent a radical novelty and that only by identifying them as such can we identify all the nonsense, the misconceptions and the mythology that surround them9. Closer inspection will reveal that it is even worse, viz. that automatic computers embody not only one radical novelty but two of them.

The first radical novelty is a direct consequence of the raw power of today's computing equipment. We all know how we cope with something big and complex; divide and rule, i.e. we view the whole as a compositum of parts and deal with the parts separately. And if a part is too big, we repeat the procedure. The town is made up from neighbourhoods, which are structured by streets, which contain buildings, which are made from walls and floors, that are built from bricks, etc. eventually down to the elementary particles. And we have all our specialists along the line, from the town planner, via the architect to the solid state physicist and further. Because, in a sense, the whole is "bigger" than its parts, the depth of a hierarchical decomposition is some sort of logarithm of the ratio of the "sizes" of the whole and the ultimate smallest parts. From a bit to a few hundred megabytes, from a microsecond to a half an hour of computing confronts us with the completely baffling ratio of 10^9! The programmer is in the unique position that his is the only discipline and profession in which such a gigantic ratio, which totally baffles our imagination, has to be bridged by a single technology. He has to be able to think in terms of conceptual hierarchies that are much deeper than a single mind ever needed to face before. Compared to that number of semantic levels, the average mathematical theory is almost flat. By evoking the need for deep conceptual hierarchies, the automatic computer confronts us with a radically new intellectual challenge that has no precedent in our history10.
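
To put a rough number on that claim (my arithmetic, not Dijkstra's): if each level of the hierarchy bundles roughly k parts into one, the depth needed to span the ratio is

$$\text{depth} \approx \log_k 10^{9}, \qquad \text{e.g.}\ \log_{10} 10^{9} = 9, \qquad \log_{2} 10^{9} \approx 30,$$

so even under generous assumptions about how much each level hides, the programmer is left with somewhere between ten and thirty genuinely distinct semantic levels to keep straight.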

Again, I have to stress this radical novelty because the true believer in gradual change and incremental improvements is unable to see it. For him, an automatic computer is something like the familiar cash register, only somewhat bigger, faster, and more flexible. But the analogy is ridiculously shallow: it is orders of magnitude worse than comparing, as a means of transportation, the supersonic jet plane with a crawling baby, for that speed ratio is only a thousand.

The second radical novelty is that the automatic computer is our first large-scale digital device. We had a few with a noticeable discrete component: I just mentioned the cash register and can add the typewriter with its individual keys: with a single stroke you can type either a Q or a W but, though their keys are next to each other, not a mixture of those two letters. But such mechanisms are the exception, and the vast majority of our mechanisms are viewed as analogue devices whose behaviour is over a large range a continuous function of all parameters involved: if we press the point of the pencil a little bit harder, we get a slightly thicker line, if the violinist slightly misplaces his finger, he plays slightly out of tune. To this I should add that, to the extent that we view ourselves as mechanisms, we view ourselves primarily as analogue devices: if we push a little harder we expect to do a little better11. Very often the behaviour is not only a continuous but even a monotonic function: to test whether a hammer suits us over a certain range of nails, we try it out on the smallest and largest nails of the range, and if the outcomes of those two experiments are positive, we are perfectly willing to believe that the hammer will suit us for all nails in between.

It is possible, and even tempting, to view a program as an abstract mechanism, as a device of some sort. To do so, however, is highly dangerous: the analogy is too shallow because a program is, as a mechanism, totally different from all the familiar analogue devices we grew up with. Like all digitally encoded information, it has unavoidably the uncomfortable property that the smallest possible perturbations -- i.e. changes of a single bit -- can have the most drastic consequences12. [For the sake of completeness I add that the picture is not essentially changed by the introduction of redundancy or error correction.] In the discrete world of computing, there is no meaningful metric in which "small" changes and "small" effects go hand in hand, and there never will be.
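
As a minimal illustration of that property (my sketch, in Python, not part of Dijkstra's text): flip a single bit in the 64-bit encoding of the number 1.0 and compare the damage done by a low-order bit with that done by an exponent bit.

    # A minimal sketch: the effect of one flipped bit in an IEEE-754 double.
    import struct

    def flip_bit(x: float, bit: int) -> float:
        """Return the float whose 64-bit encoding differs from x's in exactly one bit."""
        (as_int,) = struct.unpack("<Q", struct.pack("<d", x))
        (as_float,) = struct.unpack("<d", struct.pack("<Q", as_int ^ (1 << bit)))
        return as_float

    print(flip_bit(1.0, 0))    # low-order mantissa bit: a barely perceptible change
    print(flip_bit(1.0, 61))   # one exponent bit: off by roughly 150 orders of magnitude

There is no metric under which both of those outcomes count as "small" effects of equally "small" causes.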

This second radical novelty shares the usual fate of all radical novelties: it is denied, because its truth would be too discomforting. I have no idea what this specific denial and disbelief costs the United States, but a million dollars a day seems a modest guess.

Having described -- admittedly in the broadest possible terms -- the nature of computing's novelties, I shall now provide the evidence that these novelties are, indeed, radical. I shall do so by explaining a number of otherwise strange phenomena as frantic -- but, as we now know, doomed -- efforts at hiding or denying the frighteningly unfamiliar.

A number of these phenomena have been bundled under the name "Software Engineering". As economics is known as "The Miserable Science", software engineering should be known as "The Doomed Discipline", doomed because it cannot even approach its goal since its goal is self-contradictory. Software engineering, of course, presents itself as another worthy cause, but that is eyewash: if you carefully read its literature and analyse what its devotees actually do, you will discover that software engineering has accepted as its charter "How to program if you cannot.".

The popularity of its name is enough to make it suspect. In what we denote as "primitive societies", the superstition that knowing someone's true name gives you magic power over him is not unusual. We are hardly less primitive: why do we persist here in answering the telephone with the most unhelpful "hello" instead of our name?

Nor are we above the equally primitive superstition that we can gain some control over some unknown, malicious demon by calling it by a safe, familiar, and innocent name, such as "engineering". But it is totally symbolic, as one of the US computer manufacturers proved a few years ago when it hired, one night, hundreds of new "software engineers" by the simple device of elevating all its programmers to that exalting rank13. So much for that term.

The practice is pervaded by the reassuring illusion that programs are just devices like any others, the only difference admitted being that their manufacture might require a new type of craftsmen, viz. programmers. From there it is only a small step to measuring "programmer productivity" in terms of "number of lines of code produced per month". This is a very costly measuring unit because it encourages the writing of insipid code, but today I am less interested in how foolish a unit it is from even a pure business point of view. My point today is that, if we wish to count lines of code, we should not regard them as "lines produced" but as "lines spent": the current conventional wisdom is so foolish as to book that count on the wrong side of the ledger14.

Besides the notion of productivity, also that of quality control continues to be distorted by the reassuring illusion that what works with other devices works with programs as well. It is now two decades since it was pointed out that program testing may convincingly demonstrate the presence of bugs, but can never demonstrate their absence. After quoting this well-publicized remark devoutly, the software engineer returns to the order of the day and continues to refine his testing strategies, just like the alchemist of yore, who continued to refine his chrysocosmic purifications15.

Unfathomed misunderstanding is further revealed by the term "software maintenance", as a result of which many people continue to believe that programs -- and even programming languages themselves -- are subject to wear and tear16. Your car needs maintenance too, doesn't it? Famous is the story of the oil company that believed that its PASCAL programs did not last as long as its FORTRAN programs "because PASCAL was not maintained".

In the same vein I must draw attention to the astonishing readiness with which the suggestion has been accepted that the pains of software production are largely due to a lack of appropriate "programming tools". (The telling "programmer's workbench" was soon to follow.) Again, the shallowness of the underlying analogy is worthy of the Middle Ages. Confrontations with insipid "tools" of the "algorithm-animation" variety have not mellowed my judgement; on the contrary, they have confirmed my initial suspicion that we are primarily dealing with yet another dimension of the snake oil business.

Finally, to correct the possible impression that the inability to face radical novelty is confined to the industrial world, let me offer you an explanation of the -- at least American -- popularity of Artificial Intelligence. One would expect people to feel threatened by the "giant brains or machines that think". In fact, the frightening computer becomes less frightening if it is used only to simulate a familiar noncomputer. I am sure that this explanation will remain controversial for quite some time, for Artificial Intelligence as mimicking the human mind prefers to view itself as at the front line, whereas my explanation relegates it to the rearguard17. (The effort of using machines to mimic the human mind has always struck me as rather silly: I'd rather use them to mimic something better.)

So much for the evidence that the computer's novelties are, indeed, radical.

And now comes the second -- and hardest -- part of my talk: the scientific and educational consequences of the above. The educational consequences are, of course, the hairier ones, so let's postpone their discussion and stay for a while with computing science itself. What is computing? And what is a science of computing about?

Well, when all is said and done, the only thing computers can do for us is to manipulate symbols and produce results of such manipulations. From our previous observations we should recall that this is a discrete world and, moreover, that both the number of symbols involved and the amount of manipulation performed are many orders of magnitude larger than we can envisage: they totally baffle our imagination and we must therefore not try to imagine them18.

But before a computer is ready to perform a class of meaningful manipulations -- or calculations, if you prefer -- we must write a program. What is a program? Several answers are possible. We can view the program as what turns the general-purpose computer into a special-purpose symbol manipulator, and does so without the need to change a single wire. (This was an enormous improvement over machines with problem-dependent wiring panels.) I prefer to describe it the other way round: the program is an abstract symbol manipulator, which can be turned into a concrete one by supplying a computer to it. After all, it is no longer the purpose of programs to instruct our machines; these days, it is the purpose of machines to execute our programs19.

So, we have to design abstract symbol manipulators. We all know what they look like: they look like programs or -- to use somewhat more general terminology -- usually rather elaborate formulae from some formal system. It really helps to view a program as a formula20. Firstly, it puts the programmer's task in the proper perspective: he has to derive that formula. Secondly, it explains why the world of mathematics all but ignored the programming challenge: programs were so much longer formulae than it was used to that it did not even recognize them as such. Now back to the programmer's job: he has to derive that formula, he has to derive that program. We know of only one reliable way of doing that, viz. by means of symbol manipulation. And now the circle is closed: we construct our mechanical symbol manipulators by means of human symbol manipulation.

Hence, computing science is -- and will always be -- concerned with the interplay between mechanized and human symbol manipulation, usually referred to as "computing" and "programming" respectively. An immediate benefit of this insight is that it reveals "automatic programming" as a contradiction in terms. A further benefit is that it gives us a clear indication where to locate computing science on the world map of intellectual disciplines: in the direction of formal mathematics and applied logic, but ultimately far beyond where those are now, for computing science is interested in effective use of formal methods and on a much, much larger scale than we have witnessed so far. Because no endeavour is respectable these days without a TLA (= Three-Letter Acronym), I propose that we adopt for computing science FMI (= Formal Methods Initiative), and, to be on the safe side, we had better follow the shining examples of our leaders and make a Trade Mark of it21.

In the long run I expect computing science to transcend its parent disciplines, mathematics and logic, by effectively realizing a significant part of Leibniz's Dream of providing symbolic calculation as an alternative to human reasoning. (Please note the difference between "mimicking" and "providing an alternative to": alternatives are allowed to be better.)

Needless to say, this vision of what computing science is about is not universally applauded. On the contrary, it has met widespread -- and sometimes even violent -- opposition from all sorts of directions. I mention as examples

(0) the mathematical guild, which would rather continue to believe that the Dream of Leibniz is an unrealistic illusion

(1) the business community, which, having been sold to the idea that computers would make life easier, is mentally unprepared to accept that they only solve the easier problems at the price of creating much harder ones22

(2) the subculture of the compulsive programmer, whose ethics prescribe that one silly idea and a month of frantic coding should suffice to make him a life-long millionaire

(3) computer engineering, which would rather continue to act as if it is all only a matter of higher bit rates and more flops per second23

(4) the military, who are now totally absorbed in the business of using computers to mutate billion-dollar budgets into the illusion of automatic safety24

(5) all soft sciences for which computing now acts as some sort of interdisciplinary haven

(6) the educational business that feels that, if it has to teach formal mathematics to CS students, it may as well close its schools.

And with this sixth example I have reached, imperceptibly but also alas unavoidably, the most hairy part of this talk: educational consequences.

The problem with educational policy is that it is hardly influenced by scientific considerations derived from the topics taught, and almost entirely determined by extra-scientific circumstances such as the combined expectations of the students, their parents and their future employers, and the prevailing view of the role of the university: is the stress on training its graduates for today's entry-level jobs or to providing its alumni with the intellectual baggage and attitudes that will last them another 50 years25? Do we grudgingly grant the abstract sciences only a far-away corner on campus, or do we recognize them as the indispensable motor of the high-technology industry? Even if we do the latter, do we recognize a high-technology industry as such if its technology primarily belongs to formal mathematics? Do the universities provide for society the intellectual leadership it needs or only the training it asks for?

Traditional academic rhetoric is perfectly willing to give to these questions the reassuring answers, but I don't believe them. By way of illustration of my doubts, in a recent article on "Who Rules Canada?", David H. Flaherty bluntly states "Moreover, the business elite dismisses traditional academics and intellectuals as largely irrelevant and powerless.".

So, if I look into my foggy crystal ball at the future of computing science education, I overwhelmingly see the depressing picture of "Business as usual". The universities will continue to lack the courage to teach hard science, they will continue to misguide the students, and each next stage of infantilization of the curriculum will be hailed as educational progress26.

I now have had my foggy crystal ball for quite a long time. Its predictions are invariably gloomy and usually correct, but I am quite used to that and they won't keep me from giving you a few suggestions, even if it is merely an exercise in futility whose only effect is to make you feel guilty.

We could, for instance, begin with cleaning up our language by no longer calling a bug a bug but by calling it an error. It is much more honest because it squarely puts the blame where it belongs, viz. with the programmer who made the error. The animistic metaphor of the bug that maliciously sneaked in while the programmer was not looking is intellectually dishonest as it disguises that the error is the programmer's own creation. The nice thing of this simple change of vocabulary is that it has such a profound effect: while, before, a program with only one bug used to be "almost correct", afterwards a program with an error is just "wrong" (because in error)27.

My next linguistical suggestion is more rigorous. It is to fight the "if-this-guy-wants-to-talk-to-that-guy" syndrome: never refer to parts of programs or pieces of equipment in an anthropomorphic terminology, nor allow your students to do so. This linguistical improvement is much harder to implement than you might think, and your department might consider the introduction of fines for violations, say a quarter for undergraduates, two quarters for graduate students, and five dollars for faculty members: by the end of the first semester of the new regime, you will have collected enough money for two scholarships.

The reason for this last suggestion is that the anthropomorphic metaphor -- for whose introduction we can blame John von Neumann -- is an enormous handicap for every computing community that has adopted it. I have now encountered programs wanting things, knowing things, expecting things, believing things, etc., and each time that gave rise to avoidable confusions. The analogy that underlies this personification is so shallow that it is not only misleading but also paralyzing28.

It is misleading in the sense that it suggests that we can adequately cope with the unfamiliar discrete in terms of the familiar continuous, i.e. ourselves, quod non. It is paralyzing in the sense that, because persons exist and act in time, its adoption effectively prevents a departure from operational semantics and thus forces people to think about programs in terms of computational behaviours, based on an underlying computational model. This is bad, because operational reasoning is a tremendous waste of mental effort.

Let me explain to you the nature of that tremendous waste, and allow me to try to convince you that the term "tremendous waste of mental effort" is not an exaggeration. For a short while, I shall get highly technical, but don't get frightened: it is the type of mathematics that one can do with one's hands in one's pockets. The point to get across is that if we have to demonstrate something about all the elements of a large set, it is hopelessly inefficient to deal with all the elements of the set individually: the efficient argument does not refer to individual elements at all and is carried out in terms of the set's definition29.

Consider the plane figure Q, defined as the 8 by 8 square from which, at two opposite corners, two 1 by 1 squares have been removed. The area of Q is 62, which equals the combined area of 31 dominos of 1 by 2. The theorem is that the figure Q cannot be covered by 31 of such dominos.

Another way of stating the theorem is that if you start with squared paper and begin covering this by placing each next domino on two new adjacent squares, no placement of 31 dominos will yield the figure Q.

So, a possible way of proving the theorem is by generating all possible placements of dominos and verifying for each placement that it does not yield the figure Q: a tremendously laborious job.

The simple argument, however, is as follows. Colour the squares of the squared paper as on a chess board. Each domino, covering two adjacent squares, covers 1 white and 1 black square, and, hence, each placement covers as many white squares as it covers black squares. In the figure Q, however, the number of white squares and the number of black squares differ by 2 -- opposite corners lying on the same diagonal -- and hence no placement of dominos yields figure Q.
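
The counting at the heart of this argument can be spelled out in a few lines (my sketch, not Dijkstra's); note that it is the definition of the colouring, not an enumeration of placements, that does the work.

    # Colour square (r, c) black when r + c is even; remove two opposite corners,
    # which lie on the same diagonal and therefore carry the same colour.
    def colour_counts(rows: int = 8, cols: int = 8) -> tuple[int, int]:
        removed = {(0, 0), (rows - 1, cols - 1)}
        squares = [(r, c) for r in range(rows) for c in range(cols)
                   if (r, c) not in removed]
        black = sum((r + c) % 2 == 0 for r, c in squares)
        return black, len(squares) - black

    print(colour_counts())  # (30, 32): each domino covers one black and one white
                            # square, so no placement of 31 dominos can yield figure Q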

Not only is the above simple argument many orders of magnitude shorter than the exhaustive investigation of the possible placements of 31 dominos, it is also essentially more powerful, for it covers the generalization of Q by replacing the original 8 by 8 square by any rectangle with sides of even length30. The number of such rectangles being infinite, the former method of exhaustive exploration is essentially inadequate for proving our generalized theorem.

And this concludes my example. It has been presented because it illustrates in a nutshell the power of down-to-earth mathematics; needless to say, refusal to exploit this power of down-to-earth mathematics amounts to intellectual and technological suicide. The moral of the story is: deal with all elements of a set by ignoring them and working with the set's definition.

Back to programming. The statement that a given program meets a certain specification amounts to a statement about all computations that could take place under control of that given program. And since this set of computations is defined by the given program, our recent moral says: deal with all computations possible under control of a given program by ignoring them and working with the program. We must learn to work with program texts while (temporarily) ignoring that they admit the interpretation of executable code.

Another way of saying the same thing is the following one. A programming language, with its formal syntax and with the proof rules that define its semantics, is a formal system for which program execution provides only a model. It is well-known that formal systems should be dealt with in their own right, and not in terms of a specific model. And, again, the corollary is that we should reason about programs without even mentioning their possible "behaviours"31.
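
For concreteness (my example, in the weakest-precondition style Dijkstra himself favoured), the proof rule for assignment defines its semantics by pure formula manipulation, with no mention of execution:

$$wp(\texttt{x := E},\ R) \;=\; R[x := E], \qquad \text{e.g.}\quad wp(\texttt{x := x + 1},\ x > 0) \;=\; (x + 1 > 0) \;\equiv\; x \geq 0 \ \text{(over the integers)}.$$

Reasoning then proceeds entirely within this formal system; a machine running the assignment is merely one model of it.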

And this concludes my technical excursion into the reason why operational reasoning about programming is "a tremendous waste of mental effort" and why, therefore, in computing science the anthropomorphic metaphor should be banned.

Not everybody understands this sufficiently well. I was recently exposed to a demonstration of what was pretended to be educational software for an introductory programming course. With its "visualizations" on the screen it was such an obvious case of curriculum infantilization that its author should be cited for "contempt of the student body", but this was only a minor offense compared with what the visualizations were used for: they were used to display all sorts of features of computations evolving under control of the student's program! The system highlighted precisely what the student has to learn to ignore, it reinforced precisely what the student has to unlearn. Since breaking out of bad habits, rather than acquiring new ones, is the toughest part of learning, we must expect from that system permanent mental damage for most students exposed to it.

Needless to say, that system completely hid the fact that, all by itself, a program is no more than half a conjecture. The other half of the conjecture is the functional specification the program is supposed to satisfy. The programmer's task is to present such complete conjectures as proven theorems.

Before we part, I would like to invite you to consider the following way of doing justice to computing's radical novelty in an introductory programming course.

On the one hand, we teach what looks like the predicate calculus, but we do it very differently from the philosophers. In order to train the novice programmer in the manipulation of uninterpreted formulae, we teach it more as boolean algebra, familiarizing the student with all algebraic properties of the logical connectives. To further sever the links to intuition, we rename the values {true, false} of the boolean domain as {black, white}32.
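
For instance (my rendering of the flavour, with white standing in for true), the student manipulates identities such as

$$p \vee \mathrm{white} \equiv \mathrm{white}, \qquad p \wedge \mathrm{white} \equiv p, \qquad \neg(p \wedge q) \equiv \neg p \vee \neg q, \qquad p \wedge q \;\equiv\; p \;\equiv\; q \;\equiv\; p \vee q,$$

the last of which -- with the equivalence read as an associative operator -- is an algebraic law one rarely meets when logic is taught from truth tables and appeals to intuition.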

On the other hand, we teach a simple, clean, imperative programming language, with a skip and a multiple assignment as basic statements, with a block structure for local variables, the semicolon as operator for statement composition, a nice alternative construct, a nice repetition and, if so desired, a procedure call. To this we add a minimum of data types, say booleans, integers, characters and strings. The essential thing is that, for whatever we introduce, the corresponding semantics is defined by the proof rules that go with it33.
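
To make the proposal a little more tangible, here is a sketch of mine in a mainstream language -- emphatically not the unimplemented notation Dijkstra has in mind -- of a small program of that kind, with its specification and loop invariant written alongside the text; the assert lines stand for proof obligations to be discharged on paper, not for run-time checks.

    # A hedged sketch: integer division by repeated subtraction, with its
    # specification and loop invariant kept next to the program text.
    def divide(a: int, b: int) -> tuple[int, int]:
        # Precondition: a >= 0 and b > 0.
        # Postcondition: a == q * b + r and 0 <= r < b.
        assert a >= 0 and b > 0
        q, r = 0, a                # establishes the invariant: a == q*b + r and r >= 0
        while r >= b:              # the invariant holds; the variant r decreases
            q, r = q + 1, r - b    # preserves the invariant
        assert a == q * b + r and 0 <= r < b   # invariant plus negated guard
        return q, r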

Right from the beginning, and all through the course, we stress that the programmer's task is not just to write down a program, but that his main task is to give a formal proof that the program he proposes meets the equally formal functional specification34. While designing proofs and programs hand in hand, the student gets ample opportunity to perfect his manipulative agility with the predicate calculus. Finally, in order to drive home the message that this introductory programming course is primarily a course in formal mathematics, we see to it that the programming language in question has not been implemented on campus so that students are protected from the temptation to test their programs. And this concludes the sketch of my proposal for an introductory programming course for freshmen.

This is a serious proposal, and utterly sensible. Its only disadvantage is that it is too radical for many, who, being unable to accept it, are forced to invent a quick reason for dismissing it, no matter how invalid. I'll give you a few quick reasons.

You don't need to take my proposal seriously because it is so ridiculous that I am obviously completely out of touch with the real world. But that kite won't fly, for I know the real world only too well: the problems of the real world are primarily those you are left with when you refuse to apply their effective solutions. So, let us try again.

You don't need to take my proposal seriously because it is utterly unrealistic to try to teach such material to college freshmen. Wouldn't that be an easy way out? You just postulate that this would be far too difficult. But that kite won't fly either for the postulate has been proven wrong: since the early 80's, such an introductory programming course has successfully been given to hundreds of college freshmen each year. [Because, in my experience, saying this once does not suffice, the previous sentence should be repeated at least another two times.] So, let us try again.

Reluctantly admitting that it could perhaps be taught to sufficiently docile students, you yet reject my proposal because such a course would deviate so much from what 18-year old students are used to and expect that inflicting it upon them would be an act of educational irresponsibility: it would only frustrate the students. Needless to say, that kite won't fly either. It is true that the student that has never manipulated uninterpreted formulae quickly realizes that he is confronted with something totally unlike anything he has ever seen before. But fortunately, the rules of manipulation are in this case so few and simple that very soon thereafter he makes the exciting discovery that he is beginning to master the use of a tool that, in all its simplicity, gives him a power that far surpasses his wildest dreams35.

Teaching to unsuspecting youngsters the effective use of formal methods is one of the joys of life because it is so extremely rewarding. Within a few months, they find their way in a new world with a justified degree of confidence that is radically novel for them; within a few months, their concept of intellectual culture has acquired a radically novel dimension. To my taste and style, that is what education is about. Universities should not be afraid of teaching radical novelties; on the contrary, it is their calling to welcome the opportunity to do so. Their willingness to do so is our main safeguard against dictatorships, be they of the proletariat, of the scientific establishment, or of the corporate elite36.

Austin, 2 December 1988

prof. dr. Edsger W. Dijkstra
Department of Computer Sciences
The University of Texas at Austin
Austin, TX 78712-1188
USA


  1. I'm assuming, on no particular basis other than sheer chronology, that the first part is EWD 1035. I'm not reviewing that one, since almost all of the relevant issues are also raised in the current text.

    So maybe 1035 is not really "the first part of the talk", as much as a precursor to 1036? I've no idea, nor do I care all that much, to be honest. 

  2. Except when it doesn't, of course.

    At this point, I am suspicious of the author's logical discourse on two counts: first, he proposes contextualizing "radical", but continues to assume "novelty" as an implicit given of this particular universe. Then, he states that "the future differs from our past" without explaining in what sense this is so. Granted, objects similar to what we today call "computers" have seldom been observed throughout history, e.g. the one from Antikythera, which makes Dijkstra's premise not entirely without basis. Granted, Dijkstra did not set out to give a course in history, but otherwise I'm not sure how one can tie the future to the present and the past.

  3. Since we're here, let's see what he's talking about.

    The so-called "radical novelty" of quantum mechanics stems from Planck's experiments wherein he observed that in certain physical systems energy is not distributed continuously, but rather in discrete "quanta" -- hence the name "quantum physics". This observation was so baffling that it led to the development of new mathematics around it, and to further baffling and seemingly contradictory observations, such as those that a sub-atomic particle sometimes "acts like" an actual particle, while in other situations it exhibits wave-like properties. But, make no mistake! the newly-developed mathematical language was a -- let's not call it ad-hoc -- extension of the old one, such that today's physicists are still pursuing the grand goal of making every piece consistent with the other, even if said goal will have proven in the end to be entirely pointless.

    In other words, this process of "coping with radical novelty", as the author calls it, does not work through the act of forgetting the old language, but rather through the incorporation of new language into the old body -- in other words, through generalization. Yes, some old habits need to go away, some words might need to be redefined, but the question of which thing goes where in the grand scheme remains a matter of exploration.

    In yet other words, it's not enough to assert that something is a "radical novelty", not without at least attempting to assess how radical a novelty said thing is, and in what sense. 

  4. I'm not sure what he's referring to here, but I suspect it must have come from some underlying frustration regarding academic funding for his department. Otherwise, the 80386 was produced as early as 1985, when the IBM PC platform was already becoming quite popular, while the Apple II was already making millions of US dollars. So I very much doubt that in 1988, the year of Dijkstra's essay, computing was suppressed or otherwise ignored; quite the contrary. In fact this is where I suspect Dijkstra is contradicting himself: either computers aren't such a disturbingly radical novelty, or otherwise the evolution of language was so perverse that by 2006 Merriam-Webster had added the verb "google" to its dictionary. This is ultimately why I don't find his premise for this problem of "teaching computing science" very convincing, and it may also be the reason why his essay is not cited too often nowadays. 

  5. So does our future differ from our past, after all? On one hand, Dijkstra waves away, say, Aquinas, Dante and Occam, claiming "almost total intellectual stagnation" during medieval times, while on the other hand he recognizes that his own time's proliferation of intellectualisms resulted in bullshit for the most part. Rather, he reduces that hairy discussion about politics and science to one or two examples, while carefully avoiding any inspection of why exactly "one can detect a lot of medieval thinking today". 

  6. As if any enduring discoveries in the history of science can be attributed to "a vast majority" rather than particular individuals. From which I deduce that Dijkstra is quite a fervent democrat, in other words, a believer that "a majority" is able to birth anything other than... well, a bunch of consensuses, I guess. 

  7. I'm laughing my ass off over here -- listen to him, "reduce the use of the brain". The conflation of brain and mind, as well as the naïve belief that "reducing the use of the mind" can bring anything but sheer, unmitigated idiocy, is what brought us to this sad state of affairs. Amplifying the use of the mind, on the other hand, now that's quite something. 

  8. My experience with the Romanian (Soviet-inspired) practice of education was somewhat different: there, the teachers would very often implicitly introduce new terms without attaching as much as a hint of an example to them and then, especially in engineering courses, they would proceed to fill the blackboard with a shit-ton of mathematical formulae, resulting in a mental aerobics exercise in jumping from one Greek letter to another. Mind you, not all the professors did that; I still remember to this day my first year maths course on Borel spaces, for example. This is incidentally why Romanians consistently produced so many gold medalists in the mathematical olympiad throughout the years, and perhaps also why the engineering schools produced so many politicians.

    Anyway, I understand the fallacy of calling the plus sign "and", as my chemistry teacher in the seventh grade would pedantically insist on attaching the correct terms to objects (e.g. "there exists a sulphuric acid molecule", not "we have a so-and-so molecule"), but I fail to see how this applies to the larger picture of Western education, which kinda proves Dijkstra's point: simplistic examples are indeed not very helpful in the quest for understanding; actually, in some cases they're plain unhelpful. 

  9. On this much I agree with him, only I'm not sure we're referring to the same "we". 

  10. At some point I called this "abstraction hell". I wrote that article eleven years ago and I don't think it gets even close to the core problem, and neither do I think Dijkstra's statement of the issue does. I fully agree that computers amplify by orders of magnitude, but the relationship between the amplification and the complexity of the technology is scarcely understood. On one hand, thinking in orders of magnitude reminds me quite a bit of the Laplace transform from the time domain into the frequency domain, so it's not like peering into the orders is an unimaginable task; while on the other, I know that system complexity increases exponentially with the number of variables, and yet I believe that given enough time, any system, regardless of its complexity, may be exposed in the nude to the human eye. After all, the relationship between variables very often drives an exponential problem into linearity, which is why e.g. program compilation works in (most) particular cases.

    Anyway, I agree that Dijkstra identifies a Big Problem here, it only needs to be stated in more clearly nuanced terms, so that in being clearly understood, one may then approach it in a systematic manner.

    At this point what (yet again) nags me in Dijkstra's discourse is that he begins by calling computing "a radical novelty, therefore it is suppressed/ignored", only to then shift the pole to "computing encompasses these two radical novelties, which are suppressed/ignored". Then he places computing power and complexity under the same umbrella without attempting to explain how exactly one is a consequence of the other and what exactly in folks' psychology places them in the situation of believing that "piling up chairs will lead us to the moon". 

  11. This is at best a simplistic view of how "we view ourselves".

    For one, overcompensation in any activity (say, piano practice) may lead from gradual progress to clearly observable qualitative leaps. Take my walking exercise for example: in January I (an individual with severe anemia) walked about ten kilometers, in February about fifty, while in March I (a reasonably healthy person) walked more than a hundred. I didn't observe any improvement in either my walking speed or my stamina during the first fourteen days of walking; but on the fifteenth day, I saw a noticeable difference in both. The whole human organism works in qualitative thresholds, if one reflects upon them.

    As for the other: while the mechanisms behind genetics are modeled probabilistically, the code is discrete, i.e. driven by the various chains of nucleotides; furthermore, many other mechanisms within the human body occur in a more or less rhythmical fashion, such as, say, circadian processes, blood "flow" or neural activation. In what way are these continuous?

    I suspect what he's trying to say here is that in attempting to model certain processes, humans very often seek the more approachable linear regions and when they eventually find them sufficient, they settle for that, while the hidden variables build up, leading to an eventual disaster. Granted, maybe theories of complex systems were not much in fashion during Dijkstra's time, but then, should we say that Dijkstra was a man of fashions? 

  12. This, as the last paragraph in my previous footnote anticipates, is indeed a serious issue, but it has nothing to do with continuous/analogue versus discrete/digital behaviour, as the same phenomenon occurs just as well in many analogue systems -- or rather, in systems modeled using continuous functions, e.g. strange attractors. Control theory studies as much as it can of these nonlinear behaviours, and in particular the problems of controllability and observability deal with how a system behaves from the point of view of inputs, as well as outputs. I'm not sure why this problem is attributed specifically to computing, as geostationary satellites and ABS were known technologies in the late '80s. That "software" people refuse to employ the correct intellectual means to "view themselves" is their problem and theirs alone.

    Furthermore, I yet again fail to see why he conflates "discrete/continuous models" and nonlinearity into the same problem, while on the other hand he views the nonlinearity of orders of magnitude of amplification and the nonlinearity of input/output transfer functions as two entirely separate "novelties". Overall, I don't think Dijkstra does a very good job of peering into the "nature of computing's novelties", as he calls them; in fact, he leaves me wondering why he identified precisely two novelties and not three or four; in any case, said novelties are not described unequivocally, which casts doubt on his understanding of the problem he's describing.
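
    To make the analogue counter-example concrete, a minimal sketch (mine, with arbitrarily chosen numbers): the logistic map is a system modeled by a perfectly continuous function, and yet a perturbation of one part in a trillion in the initial condition grows until the two trajectories have nothing in common.

        # The logistic map at r = 4: continuous state, yet no metric in which a
        # "small" cause guarantees a "small" effect after enough iterations.
        def orbit(x0: float, r: float = 4.0, steps: int = 50) -> float:
            x = x0
            for _ in range(steps):
                x = r * x * (1.0 - x)
            return x

        print(orbit(0.3))          # one trajectory after 50 steps
        print(orbit(0.3 + 1e-12))  # a "tiny" perturbation, a completely different state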

  13. While I somewhat agree with his assessment, I very much doubt that it has anything at all to do with naming. Rather, the process of perversion of programming into "software engineering" was a process of co-option of otherwise natural intellectual ownership of the field.

    As far as I understand the history of computing, its birth lies in the electrical engineering, electronics or automatic control departments in various universities around the world. In other words, the usage -- no, at no point was there any real difference between "programming" and "using" computers, that delineation is purely Microsoftian -- so, the usage of computers in various engineering departments is what caused the conflation of "programmer" and "software engineering". Then all that the "computing industry" did was to co-opt this conflation by "raising demand for software engineers", thereby riding the wave of grade inflation that already ran rampant at the time. The process is ongoing too, and it's not particular to computing.

    There are multiple angles to this "programmer versus software engineer" distinction and, regardless of what the reader may expect, I won't bother attempting to validate any of them. This relates, among other things, to the preconceived notion that computers are to be either "programmed" or "used", at which point I completely agree with Dijkstra that no, a discussion of the subject is not possible, merely on the grounds that the priors (novel from a linguistic point of view or otherwise) simply aren't there in the year 2022, just as they weren't there in 1988. This is not by any means a tragedy: we simply have to define them first, and if not "us", then maybe my nephews or their nephews will carry the discussion forward after shedding all this preconceived bullshit -- which, by the way, has very little to do with "clinging to the language of the past". 

  14. Rather, I believe that the "programmer" oughta carefully look at all (minimally, both) sides of the ledger, by at the very least wondering: what is the cost of certain lines of code? (no, not all "N lines of code" are equal) and what does the system (as well as its users) gain by spending them? moreover, what is the opportunity cost of not introducing some N lines of code in the program at some particular time during the process?

    The list of questions, I suppose, can go on for a long time. I invite the reader to add further to it. 

  15. Today's QA engineer acts precisely like the "alchemist of yore"; this much we can agree on. But what keeps frustrating me about Dijkstra's discourse, carried from the top of his white horse, is that he doesn't even bother to question why this is the case. And as it happens, I, some nobody from the 2020s, hold the answer, while prof. Dijkstra, on his white horse, didn't. How the fuck is this even possible?!

    The essence of poor quality assurance (as with many other issues in the so-called "software engineering") doesn't lie in the fact that, as Dijkstra will no doubt tell us, "engineers prefer writing test cases instead of employing formal methods". Since we're here, fuck formal methods with a hot rod! The essence behind poor engineering lies with the poor language itself, with the unexamined practice of writing code before clearly specifying what it's supposed to do. I am not blaming engineers in particular for this either, as their customers tend to state their requirements at least as poorly, which only leads to further confusion, as does the moving of poles along the lines of "just add a button over there, what's the big deal?" Anything goes, at which point, sure, all sorts of aberrations become not merely possible, but inevitable. 

  16. The unexamined assumption that "programs are subject to wear and tear" is otherwise not entirely without merit either. Once the dude (kid, neckbeard etc.) who owned your program left the company, or died, or whatever, and is replaced by some other kid/neckbeard/etc., the program becomes "subject to wear and tear". This phenomenon is amply documented in Asimov's Foundation, such as in his example of the "Church of Scientism", which illustrates how this "wear and tear" replaces an otherwise rational process with magical thinking. 

  17. Yet he's the same dude who proposes "reducing the use of the brain" and delegating it to computers. Ha!

    Fortunately for him, Dijkstra didn't live to see the resurrection of this "AI will take over the world" ideology/fashion, although I believe the worms that consumed his carcass turned in his grave quite a few times over the last few decades. How could he know that this AI thing would not only not die already, but it would intensify in the times to come! 

  18. This paragraph once again gets at the core of what I don't like about this essay. First he proposes computing and computers as a product of systematic human labour; then he jumps on the same "medieval"/magical/scientistic bandwagon by proposing the opposite, that the manipulation done by what he calls "computers" is somehow "beyond the imagination". This is absolutely false, for a number of reasons.

    First, what he calls "computers", i.e. the electrical machines prototyped at e.g. the electrical engineering department at MIT, are only a particular type of computers. Had Turing-based computing machines been designed using hydraulic or fine mechanical pieces, then "software engineering" would have been a byproduct of those departments; or had it been designed by carpenters idem, I think this much is settled in footnote #13 above.

    Moreover, the advent of "fast computing" cannot be decoupled from the fact that, just to provide a quick example, humans were doing numerical methods by hand long before, say, UNIVAC was a thing. My mother, an engineer at the very same university that I graduated from, albeit in a different field, still has her numerical methods book in her personal library and she told me the stories of how she was doing Runge-Kutta with pen and paper back in the day (see the sketch at the end of this footnote). That today's monkeys cannot reduce the algorithms that they implement with their own hands in [insert programming language here] to a pen-and-paper procedure is a sin that they pay for most dearly and a sign of the same magical thinking that Dijkstra himself is decrying in his talk. What else are "formal proofs", if not a method of analysis using the very same pen and paper?

    That is not to say that being able to perform symbolic computation on the order of megahertz or gigahertz does not open the door to certain qualitative changes in the process of computing. It most definitely does, but this alone does not "baffle the imagination"; it just challenges it to represent the problem in different (temporal and spatial) terms. 
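
    Since I brought up Runge-Kutta, here is the whole of one classical RK4 step, as a reminder of just how pen-and-paper-sized the procedure is (my sketch; the test function and step size are chosen arbitrarily):

        # One step of the classical fourth-order Runge-Kutta method for dy/dt = f(t, y).
        def rk4_step(f, t: float, y: float, h: float) -> float:
            k1 = f(t, y)
            k2 = f(t + h / 2, y + h * k1 / 2)
            k3 = f(t + h / 2, y + h * k2 / 2)
            k4 = f(t + h, y + h * k3)
            return y + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6

        # e.g. dy/dt = y with y(0) = 1: one step of h = 0.1 gives about 1.10517,
        # which matches e**0.1 to five decimals -- entirely doable by hand.
        print(rk4_step(lambda t, y: y, 0.0, 1.0, 0.1))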

  19. I fail to see the distinction, or its relevance to the discussion. What if the "programs" are machines? then who instructs whom and who executes what? 

  20. It "really helps" to view a program in many ways: as a circuit, as a "formula" (maybe he meant: as a relation?), as a truth table, as an algebraic/geometric state space, as... whatever. The point is there are many methods of analysis and "derivation" available to the so-called "programmer", some more useful than others depending on the problem that he attempts to solve. 

  21. Just to be clear: I called this bullshit all the way in footnote #15.

    I suppose Dijkstra was naïve and he didn't see how this field of "formal methods" may be itself perverted by the proliferation of automation. He is right, in that computing science is concerned with "the interplay between mechanized and human symbol manipulation", only he didn't see how the "human" side of this interplay would be largely ignored, mainly because of intellectual laziness.

    Moreover, I am (yet again!) irked by the fact that he is unable to connect computing with the larger field of automation, itself part of the larger field of... I will call it consonantist psychology, for the love of Romanian pioneers in the field -- of which the reader has likely never heard, as by all signs, Dijkstra himself hasn't. In any case, he is proposing the reinvention of a shoddy wheel, under the impression that it will "transcend mathematics and logic" and human reasoning altogether. Nonsense. 

  22. I repeat myself once again, but... meanwhile some parts of "the business community" have adopted formal methods, and... what then? This is yet another reason for not taking his "Formal Methods Initiative" seriously. 

  23. Insofar as "computer engineering" is concerned, it needn't be more than that. What, you tout the radical novelty of "orders of magnitude", but then you completely ignore its economic implications? If anything, this particular side of computing lies precisely at the intersection between automation and economics. 

  24. He's spot on. If anything, the military should have devised a defense strategy before irresponsibly putting ad-hoc hosts for botnets into the hands of the common man.

  25. I am yet again laughing my ass off over here. Do you want "academic funding", or do you want to hammer into your students an education that will last them for the next fifty years? 'cause in a democratic world, you'd be fucking naïve to believe that you can have both.

    I don't know precisely how the story went at Eindhoven, Austin or wherever else he taught during his lifetime, but I know precisely how things went at my own university. In the summer of 2016, while I was packing my bags and preparing to leave the teaching profession, UPB -- encouraged by political pressure from my department, which in turn was encouraged by pressure from "the industry" -- was raising the number of admitted students, and implicitly the number of students per teacher, instead of lowering it and raising tuition fees, which are in any case less than a tenth of those at a US university. At no point did I see them strive for financial independence from either the state (which still provides most of the dough) or the "industry" (which pushes some financing now and then, either indirectly by supporting EU projects or directly through donations), and at no point did I see most of the people there concerned with the state of teaching over the next five decades. This, this, this is the result of "a vast majority" being in charge of making decisions, and this is ultimately why I packed my bags and left, never looking back.

    And in the end this is the result of the post-modern "market of ideas" that I suppose Dijkstra himself supported in some way or another. The "high-technology industry" simply perverted the field and extracted what value it could, as fast as it was able to; as soon as there's nothing left, they'll happily move on. And I do hope they do, as soon as possible.

  26. And he wasn't wrong, of course.

    I'm not particularly a believer in the revolutionary "let's drive everything to dust and rebuild from scratch" approach, but meanwhile everything is doing a fine job of driving itself to dust, so at that point... maybe the crystal ball won't look all that gloomy.

  27. This is one of the paragraphs that won me over to Dijkstra's discourse years ago. But see how he contradicts himself yet again: while in the past engineering errors brought down bridges, today it is the mindless disconnect from the past and the attempt to reinvent the wheel that brought everyone into the "it's okay to ship bad products" mindset. And this is a direct consequence of the "it's okay to systematically underestimate development efforts" mindset, which itself is a consequence of a systematic misunderstanding of that "human" side of the symbolic manipulation (i.e. the system specification, formal or otherwise). Which is in turn a consequence of an ad-hoc approach to design that in most cases partially specifies what a program oughta do, but rarely specifies completely what it oughtn't do -- which is, in fact, the reason why they can't "prove the absence of bugs"; see the sketch after this footnote.

    The reason for that isn't the "absence of formal this or the other", but sheer intellectual laziness, or as we say in Romanian, "lasă mă, că merge și-așa" -- roughly, "come on, it works well enough as it is" -- i.e. just give it a couple more hammers and your wagon is now an airplane. And when that becomes the norm, even those otherwise well-meaning folks who "try to do the right thing" will end up skipping steps to keep up with delivery.

    In other words, your "market of ideas" is broken beyond repair, what the fuck can you do. 
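
    As for the under-specification above, a minimal sketch with an example of my own choosing: "the output is sorted" says what a sort oughta do, not what it oughtn't do, namely lose or invent elements -- and the degenerate "sort" below satisfies the former clause perfectly.

        # "The output is sorted" is only half a specification: it says what
        # the program oughta do, not what it oughtn't do.
        from collections import Counter

        def bogus_sort(xs):
            return []                     # vacuously sorted; obviously not a sort

        def is_sorted(xs):
            return all(xs[i] <= xs[i + 1] for i in range(len(xs) - 1))

        def is_permutation(xs, ys):
            return Counter(xs) == Counter(ys)

        data = [3, 1, 2]
        out = bogus_sort(data)
        print(is_sorted(out))             # True  -- the partial spec holds
        print(is_permutation(data, out))  # False -- the missing half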

  28. While I (yet again) somewhat agree, one can only wonder if the "programming language" metaphor isn't due to be thrown out the window altogether.

    Other than that, I wonder what exactly one is supposed to call the act of moving data from one place to another, other than by employing the "this guy talks to the other guy" metaphor. There are literally billions of written words relying on this so-called "communication", words used to maintain, among other things, the vast network of computers called the Internet. What shall we do about that?

    In other words, what shall we do about standards? because that's what it boils down to. 

  29. On the flip side, computers themselves contain no notion of "meaning", yet they can be used to quickly find answers about all the elements of a set by looking at each and every one -- see the toy below. So which of the two approaches is better in general? Maybe neither?
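
    For a concrete (and admittedly toy) instance, of my own making: a human proves "n*(n+1) is always even" in one line by seeing its meaning (one of any two consecutive numbers is even), while the machine, which sees no meaning whatsoever, settles a million instances of it by sheer enumeration before I finish this sentence.

        # No notion of "meaning" here: just grind through every element of a
        # finite set and report whether any of them fails the claim.
        assert all((n * (n + 1)) % 2 == 0 for n in range(1, 1_000_001))
        print("holds for every n up to one million")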

  30. Since he mentions even length, why did he not provide a number-theoretic proof of this problem?

  31. Yet one shouldn't by any means ignore the power of (counter)examples. In fact this is all that "bugfinding" relies upon: finding an example which can then be generalized to a whole class of inputs demonstrating a certain (counter)property of the program -- as the sketch below illustrates. This is also powerful, especially given the relation between recursion and induction.
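
    A toy of my own making, not anything from Dijkstra's text: a brute search for a counterexample to the "obvious" leap-year rule finds one instantly, and that single example generalizes at once to a whole class -- the century years not divisible by 400 -- which is precisely the move from a bug to a (counter)property.

        # Bugfinding in miniature: find one counterexample, then notice that
        # it generalizes to a whole class of inputs.

        def is_leap_naive(year):
            return year % 4 == 0        # the "obvious", wrong rule

        def is_leap(year):
            return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

        counterexamples = [y for y in range(1, 2001)
                           if is_leap_naive(y) != is_leap(y)]
        print(counterexamples[:5])      # [100, 200, 300, 500, 600]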

  32. Why not {flubber, blobber}? 

  33. So he's basically describing Pascal here, isn't he?

    I'm well aware that Dijkstra did not swallow the goto statement, but I for one am also a firm believer that a language oughta teach the basic primitives of the physical machine, as they are. Add a macro system (such as that of Common Lisp) and one may easily implement whatever control structures one wishes on top of those basic primitives; I see no particular reason to write my loops using while {} or for (...) {} or whatever other diabetes-inducing syntactic sugar. Let the kids roll their own and learn something in the process, or otherwise let them bang their heads against it and never touch a computer again.

  34. I wonder: who's gonna teach the kids to write -- not a formal specification, no -- at the very least a proper informal specification of a program? It's one thing to exercise your mind with abstract nonsense to exhaustion, and a whole other thing to learn how to first clearly and unambiguously express a problem and its proposed solution. And while mathematics certainly helps (and is, I would add, even necessary), it has proven itself quite insufficient on its own.

  35. Or, to continue my previous footnote, it gives him the illusion thereof, in any case, if his mind gets filled with naught but abstract nonsense. 

  36. I find this last paragraph entirely gratuitous, basically because he reduces everything to "teach the kids formal methods and the field of computing science will spread its wings and fly". In other words he's just about as naïve as the Heiser dude, except that (fortunately!) he doesn't propose mechanizing said formalisms as well.

    Anyway, I don't blame Dijkstra for his naïveté. He starts from false premises, along the way he gets it right only occasionally and by accident, and then he stops at a single point and hails it as The Solution, despite his own experience (as other EWDs of his clearly illustrate) with this mind-numbingly costly endeavour of "doing computing".

    I went through this essay a couple of years ago and my conclusion remains the very same now as it was then: computing is a very young craft, merely in its puberty. It is young much as mathematics was young in the time of Pythagoras; and it is a craft much like carpentry is one, i.e. both are really good at solving a certain set of problems, and both fail horribly when you take their tools out of that comfort zone. The only notable difference between the two is that computers are currently out to eat the world, hence my point regarding puberty: it will be a lengthy, strenuous and frustrating road to the maturity of this field, at which point we may, who knows, be able to call it "engineering", or a "science", or maybe even "art".

    Signs of this art-to-be lie here and there even in our day. Now, whether that Art will be reached in all its fullness in five years, decades or centuries remains to be seen. In any case, I very much doubt that Formal Methods Are The Answer. The answer, if it even exists at all, is much more complex and goes beyond the mere paragraphs of an essay.


