Alan Kay, my favorite curmudgeon, spent decades trying to remind us we keep reinventing concepts that were worked out in the late 70s and he’s disappointed we’ve been running in circles ever since. He’s still disappointed because very few programmers are ever introduced to the history of computer science in the way that artists study the history of art or philosophers the history of philosophy.
When I was at RIT (2006ish?) there was an elective History of Computing course that started with the abacus and worked up to mainframes and networking. I think the professor retired years ago, but the course notes are still online.
To strengthen GP's point a bit: There are entire courses on conceptual art (1966-72) or on minimal art alone. One "History of Computing" course, while appreciated, is not doing the field's history enough justice.
To be fair, the history of computing is only ~200 years old even if you go back to Babbage and Lovelace. The history of art is literally as old as recorded history.
Hello fellow RIT alum! I don't think I knew about this class when I went there, though I started as a Computer Engineering student (eventually switched to Computing Security).
The effective history of computing spans a lifetime or three.
There's no sense comparing the two. In the year 2500 it might make sense to be disappointed that people don't compare current computational practices with things done in 2100 or even 1970, but right now, to call what we have "history" does a disservice to the broad meaning of that term.
Another issue: art and philosophy have very limited or zero dependence on a material substrate. Computation has overwhelming dependence on the performance of its physical substrate (by various metrics, including but not limited to: CPU speed, memory size, persistent storage size, persistent storage speed, network bandwidth, network scope, display size, display resolution, input device characteristics, sensory modalities accessible via digital-to-analog conversion, ...). To assert that the way problems were solved in 1970 obviously has dramatic lessons for how to solve them in 2025 seems to me to completely miss what we're actually doing with computers.
If Alan Kay doesn't respond directly to this comment, what is Hacker News even for? :)
You're not wrong about history, but that only strengthens Kay's case. E.g., our gazillion-times better physical substrate should have led an array of hotshot devs to write web apps that run circles around GraIL[1] by 2025. (Note the modeless GUI interaction.) Well, guess what? Such a thing definitely doesn't exist. And that can only point to programmers having a general lack of knowledge about their incredibly short history.
(In reality my hope is some slice of devs have achieved this and I've summoned links to their projects by claiming the opposite on the internet.)
Edit: just so I get the right incantation for summoning links-- I'm talking about the whole enchilada of a visual language that runs and rebuilds the user's flowchart program as the user modelessly edits it.
My gut says your main complaint is largely about the modern web ecosystem? Games, as an obvious example, can run circles around that application. But high-end architectural tools are probably more of what you have in mind.
The easy example I used to use to really blow people's minds on what was possible was Mathematica.
That is to say, it isn't so much lack of knowledge of history. It is lack of knowledge of the present. And a seeming unwillingness to want to pay for some things from a lot of folks.
> E.g., our gazillion-times better physical substrate should have led an array of hotshot devs to write web apps that run circles around GraIL[1] by 2025
Why? What problem did it solve that we're suffering from in 2025?
This is just "old person yelling at cloud" territory, though? People often don't know the actors, singers, authors, inventors, whatever from the last few generations. They know the current generation and maybe some originals.
But the rhyme and reason for who is known is not at all obvious. Outside of "who is getting marketed."
The man on the street may not know this history, but serious actors, singers, authors, and inventors themselves certainly know what came before them. If not, they are presumably not actually that interested in their own vocation (which is also normal, by the way).
Do you know this for fact? My gut is that most performers will know of the performers they watched for inspiration. Just like athletes. But few will know the history of their field.
I will agree that the "greats" seem to tend to know all of this. Such that I think I'm agreeing with your parenthetical there. But most practitioners?
I don't know it for fact, no. BUT... I would be very surprised if the average working film director hasn't heard of Ernst Lubitsch or Ringo Lam (here I'm deliberately picking names that aren't commonly known by the public at large, like Steven Spielberg). Obviously we could do this for lots of vocations, but really my statement above was about serious practitioners, people who are deliberately trying to improve their art, rather than just hammer a check (which, again, is normal and fine!).
I'll confirm (and then nerdily complicate) your thesis for the art-form I practiced professionally for the first half of my adult life: yes, every serious actor I've been privileged to work with knows of previous performers, and studies texts they leave behind.
I owned at one time a wonderful two-volume anthology called Actors on Acting, which collected analysis and memoir and advice going back... gosh, to Roman theatre, at least. (The Greeks were more quasi-religious, and therefore mysterious - or maybe the texts just haven't survived. I can't remember reading anything first-hand, but there has been a good deal of experimental "original practice" work done exploring "how would this have worked?"). My graduate scholarship delved into Commedia dell'Arte, and classical Indian theatre, as well as 20th century performers and directors like Grotowski, and Michael Chekhov, and Joan Littlewood. Others, of course, have divergent interests, but anyone I've met who cares can geek out for hours about this stuff.
However, acting (or, really, any performance discipline) is ephemeral. It involves a live experience, and even if you have a filmed version of a seminal performance (and mostly you don't, even for the 20th c.), it's barely anything like actually being there. Nor, until very recently, did anyone really write anything about rehearsal and training practice, which is where the real work gets done.
Even for film, which coincidentally covers kinda the same time-period as "tech" as you mean it, styles of performance - and the camera technology which enables different filming techniques - have changed so much, that what's demanded in one generation isn't much like what's wanted in the next. (I think your invocation of film directors is more apt: there are more "universal" principles in composition and framing than there are in acting styles.)
Acting is a personal, experiential craft, which can't be learned from academic study. You've got to put in hours of failure in the studio, the rehearsal room, and the stage or screen to figure out how to do it well.
Now, here's where I'll pull this back to tech: I think programming is like that, too. Code is ephemeral, and writing it can only be learned by doing. Architecture is ephemeral. Tooling is ephemeral. So, yes: there's a lot to be learned (and should be remembered) from the lessons left by previous generations, but everything about the craft pulls its practitioners in the opposite direction. So, like, I could struggle through a chapter of Knuth, or I could dive into a project of my own, and bump up against those obstacles and solve them for myself. Will it be as efficient? No, but it'll be more immediately satisfying.
Here's another thing I think arts and tech have in common: being a serious practitioner is seldom what gets the prize (if by that you mean $$$). Knuth's not a billionaire, nor are any of my favorite actors Stars. Most people in both disciplines who put in the work for the work's sake get out-shined by folks lucky enough to be in the right place at the right time, or who optimize for hustle or politics or fame. (I've got no problem with the first category, to be clear: god bless their good fortune, and more power to them; the others make me sad about human nature, or capitalism, or something.) In tech, at least, pursuing one's interest is likely to lead to a livable wage - but let's see where our AI masters leave us all in a decade, eh?
Anyway, I've gone on much too much, but you provoked an interesting discussion, and what's the internet for if not for that?
> Another issue: art and philosophy have very limited or zero dependence on a material substrate. Computation has overwhelming dependence on the performance of its physical substrate
That's absolutely false. Do you know why MCM furniture is characterized by bent plywood? It's because we developed the glues that enabled it during World War II. In fashion you had a lot more colors beginning in the mid-1800s because of the development of synthetic dyes. Funny how oil paints were perfected around Holland (a major place for flax and thus linseed oil), which is exactly what the Dutch masters _did_. Architectural McMansions began because of the development of pre-fab roof trusses in the 70s and 80s.
How about philosophy? Well, the Industrial Revolution and its consequences have been a disaster for the human race. I could go on.
The issue is that engineers think they're smart and can design things from first principles. The problem is that they're really not, and yet they design things from first principles anyway.
> To assert that the way problems were solved in 1970 obviously has dramatic lessons for how to solve them in 2025 seems to me to completely miss what we're actually doing with computers.
True they might not all be "dramatic lessons" for us, but to ignore them and assume that they hold no lessons for us is also a tragic waste of resources and hard-won knowledge.
It's because CS is not cared about as a true science for the most part. Nearly all of the field is focused on consolidating power and money dynamics. No one cares to make a comprehensive history since it might give your competitors an edge.
I have thought that's the common definition and doesn't need much thought...
My dictionary absolutely implies that; it even claims that all the sciences were split off from Philosophy and that a common modern topic of Philosophy is the theory of science. The point of Philosophy is to define truth in all aspects; how is that not science? It's even in the name: "love of wisdom". Philosophy is even more fundamental and formal than mathematics. Mathematics asks what sound systems are, what properties they have, and how they can be generalized. Philosophy asks what something truly is, what it means to know, what it means to have a system, and whether it's real. The common trope of going ever more fundamental/abstract runs: "biology -> chemistry -> physics -> mathematics -> philosophy"
You're confusing computer science with economics. The ahistorical nature of classical and neoclassical economics basically declares that history is irrelevant. Economists do not really concern themselves with economic history, like at all.
Your first sentences already suggest one comparison between the histories of computing and philosophy: the history of computing ought to be much easier. Most of it is still in living memory. Yet somehow the philosophy people manage it, while we computing people rarely bother.
I always think there is great value in having a whole range of history-of-X courses.
I once thought about a series of PHYS classes that focus on historical ideas and experiments. Students are supposed to replicate the experiments. They have to read book chapters and papers.
History of physics is another history where we have been extremely dependent on the "substrate". Better instruments and capacity to analyze results, obviously, but also advances in mathematics.
> Art and philosophy have very limited or zero dependence on a material substrate
Not true for either. For centuries it was very expensive to paint with blue due to the cost of blue pigments (which were essentially crushed gemstones).
Philosophy has advanced considerably since the time of Plato and much of what it studies today is dependent on science and technology. Good luck studying philosophy of quantum mechanics back in the Greek city state era!
Just because a period of history is short doesn't make it _not history_.
Studying history is not just, or even often, a way to rediscover old ways of doing things.
Learning about the people, places, decisions, discussions, and other related context is of intrinsic value.
Also, what does "material substrate" have to do with history? It sounds here like you're using it literally, in which case you're thinking like an engineer and not like a historian. If you're using it metaphorically, well, art and philosophy are absolutely built on layers of what came before.
The rate of change in computer technology has been orders of magnitude faster than in most other technologies.
Consider transport. Millennia ago, before the domestication of the horse, the fastest a human could travel was by running. That's a peak of about 45 km/h, but around 20 km/h sustained over a long distance for the fastest modern humans; it was probably a bit less then. Now that's about 900 km/h for commercial airplanes (45x faster) or 3500 km/h for the fastest military aircraft ever put in service (178x faster). Space travel is faster still, but so rarely used for practical transport I think we can ignore it here.
My current laptop, made in 2022 is thousands of times faster than my first laptop, made in 1992. It has about 8000 times as much memory. Its network bandwidth is over 4000 times as much. There are few fields where the magnitude of human technology has shifted by such large amounts in any amount of time, much less a fraction of a human lifespan.
That gives even more reason to study the history of CS. Even artists study contemporary art from the last few decades.
Given the pace of CS (like you mentioned) 50 years might as well be centuries and so early computing devices and solutions are worth studying to understand how the technology has evolved and what lessons we can learn and what we can discard.
> Computation has overwhelming dependence on the performance of its physical substrate (by various metrics, including but not limited to: cpu speed, memory size, persistent storage size, persistent storage speed, network bandwidth, network scope, display size, display resolution
This was clearly true in 01970, but it's mostly false today.
It's still true today for LLMs and, say, photorealistic VR. But what I'm doing right now is typing ASCII text into an HTML form that I will then submit, adding my comment to a persistent database where you and others can read it later. The main differences between this and a guestbook CGI 30 years ago or maybe even a dialup BBS 40 years ago have very little to do with the performance of the physical substrate. It has more in common with the People's Computer Company's Community Memory 55 years ago (?) using teletypes and an SDS 940 than with LLMs and GPU raytracing.
Sometime around 01990 the crucial limiting factor in computer usefulness went from being the performance of the physical substrate to being the programmer's imagination. This happened earlier for some applications than for others; livestreaming videogames probably requires a computer from 02010 or later, or special-purpose hardware to handle the video data.
Screensavers and demoscene prods used to be attempts to push the limits of what that physical substrate could do. When I saw Future Crew's "Unreal", on a 50MHz(?) 80486, around 01993, I had never seen a computer display anything like that before. I couldn't believe it was even possible. XScreensaver contains a museum of screensavers from this period, which displayed things normally beyond the computer's ability. But, in 01998, my office computer was a dual-processor 200MHz Pentium Pro, and it had a screensaver that displayed fullscreen high-resolution clips from a Star Trek movie.
From then on, a computer screen could display literally anything the human eye could see, as long as it was prerendered. The dependence on the physical substrate had been severed. As Zombocom says, the only limit was your imagination. The demoscene retreated into retrocomputing and sizecoding compos, replaced by Shockwave, Flash, and HTML, which freed nontechnical users to materialize their imaginings.
The same thing had happened with still 2-D monochrome graphics in the 01980s; that was the desktop publishing revolution. Before that, you had to learn to program to make graphics on a computer, and the graphics were strongly constrained by the physical substrate. But once the physical substrate was good enough, further improvements didn't open up any new possible expressions. You can print the same things on a LaserWriter from 01985 that you can print on the latest black-and-white laser printer. The dependence on the physical substrate has been severed.
For things you can do with ASCII text without an LLM, the cut happened even earlier. That's why we still format our mail with RFC-822, our equations with TeX, and in some cases our code with Emacs, all of whose original physical substrate was a PDP-10.
Most things people do with computers today, and in particular the most important things, are things fewer people have been doing with computers in nearly the same way for 30 years, when the physical substrate was very different: 300 times slower, 300 times smaller, a much smaller network.
Except, maybe, mass emotional manipulation, doomscrolling, LLMs, mass surveillance, and streaming video.
A different reason to study the history of computing, though, is the sense in which your claim is true.
Perceptrons were investigated in the 01950s and largely abandoned after Minsky & Papert's book, and experienced some revival as "neural networks" in the 80s. In the 90s the US Postal Service deployed them to recognize handwritten addresses on snailmail envelopes. (A friend of mine who worked on the project told me that they discovered by serendipity that decreasing the learning rate over time was critical.) Dr. Dobb's hosted a programming contest for handwriting recognition; one entry used a neural network, but was disqualified for running too slowly, though it did best on the test data they had the patience to run it on. But in the early 21st century connectionist theories of AI were far outside the mainstream; they were only a matter of the history of computation. Although a friend of mine in 02005 or so explained to me how ConvNets worked and that they were the state-of-the-art OCR algorithm at the time.
Then ImageNet changed everything, and now we're writing production code with agentic LLMs.
Many things that people have tried before that didn't work at the time, limited by the physical substrate, might work now.
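As an aside, for anyone curious about the learning-rate detail above: here's a minimal sketch in Python of what "decreasing the learning rate over time" can look like in a plain perceptron-style trainer. This is not the USPS system; the function name, the `decay` schedule, and the toy parameters are made up purely for illustration.

    import random

    def train_perceptron(data, epochs=50, lr0=0.5, decay=0.99):
        # data: list of (features, label) pairs, label in {-1, +1}
        n_features = len(data[0][0])
        w = [0.0] * n_features
        b = 0.0
        lr = lr0
        for _ in range(epochs):
            random.shuffle(data)
            for x, y in data:
                activation = sum(wi * xi for wi, xi in zip(w, x)) + b
                if y * activation <= 0:  # misclassified (or on the boundary): update
                    w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                    b += lr * y
            lr *= decay  # the "decreasing learning rate over time" trick
        return w, b

    # usage: learn a noisy AND-like function
    examples = [([0, 0], -1), ([0, 1], -1), ([1, 0], -1), ([1, 1], 1)]
    weights, bias = train_perceptron(list(examples))

Shrinking the step size each pass lets early updates move the weights quickly while later ones settle them down; modern training schedules are fancier versions of the same idea.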
Usually it's because of an initiative by the Long Now Foundation that is supposed, among other things, to raise awareness of their 10,000-year clock and what it stands for.
There are always a few trolls who complain about how other people spell their words, wear their hair, format their dates, choose their clothes, etc. Usually on HN these complaints get flagged into oblivion pretty quickly. But most people don't care, preferring to read about things like the interaction between technological development and digital art forms than complaints about whether "aluminum" should actually be spelled "alumium".
You do know people have imagination, and folks back in 1970 had already imagined pretty much everything we use now, and even posed problems that our computing power still isn't going to solve.
Dude, watch the original Star Trek from the 1960s; you will be surprised.
You might also be surprised that all the AI stuff that is so hyped nowadays was already invented in the 1960s; they just didn't have our hardware to run large models. Read up on neural networks.
I recall seeing a project on github with a comment:
Q: "Sooo... what does this do that Ansible doesn't?"
A: "I've never heard of Ansible until now."
Lots of people think they are the first to come across some concept or need. Like every generation when they listen to songs with references to drugs and sex.
I think software engineering has social problems at a level that other fields just don't. Dogmatism, superstition, toxicity... you name it.
When I was a graduate student at UCLA, I signed up for a CS course that turned out to secretly be "the Alan Kay show." He showed up every week and lectured about computer science history. Didn't learn much about programming language design that semester (what the course ostensibly was) but it was one of my most formative experiences.
I can't concur enough. We don't teach "how to design computers and better methods to interface with them"; we keep hashing over the same stuff over and over again. It gets worse over time, and the effect is that what Engelbart called "intelligence augmenters" become "super televisions that cause you political and social angst."
How far we have fallen, but so great the reward if we could "lift ourselves up again." I have hope in people like Bret Victor and Brenda Laurel.
I reflect on university, and one of the most interesting projects I did was an 'essay on the history of <operating system of your choice>' as part of an OS course. I chose OS X (Snow Leopard), and digging into the history gave me fantastic insights into software development, Unix, and software commercialisation. I echo Mr Kay's sentiments entirely.
Sadly this naturally happens in any field that ends up expanding due to its success. Suddenly the number of new practitioners outnumbers the number of competent educators. I think it is a fundamental human resources problem with no easy fix. Maybe LLMs will help with this, but they seem to reinforce convergence to the mean in many cases, as those being educated are not in a position to ask the deeper questions.
> Sadly this naturally happens in any field that ends up expanding due to its success. Suddenly the number of new practitioners outnumbers the number of competent educators. I think it is a fundamental human resources problem with no easy fix.
In my observation the problem rather is that many of the people who want to "learn" computer science actually just want to get a certification to get a cushy job at some MAGNA company, and then they complain about the "academic ivory tower" stuff that they learned at the university.
So, the big problem is not the lack of competent educators, but practitioners actively sabotaging the teaching of topics that they don't consider to be relevant for the job at a MAGNA company. The same holds for the bigwigs at such companies.
I sometimes even entertain the conspiracy theory that if a lot of graduates saw that what their work at these MAGNA companies involves is often decades old, drawn from the history of computer science and repeated multiple times over the decades, it might demotivate employees who are supposed to believe that they work on the "most important, soon to be world changing" thing.
Your experience with bad teachers seems more like an argument in favor of better education than against it. It's possible to develop better teaching material and methods if there is focus on a subject, even if it's time consuming.
Not really, you only need one really good teacher who can put their knowledge into written or video form so it's easily shared with others. It actually only takes one great mind.
At least for history of economics, I think it's harder to really grasp modern economic thinking without considering the layers it's built upon, the context ideas were developed within etc...
That's probably true for macro-economics. Alas that's also the part where people disagree about whether it made objective progress.
Micro-economics is much more approachable with experiments etc.
Btw, I didn't suggest to completely disregard history. Physics and civil engineering don't completely disregard their histories, either. But they also don't engage in constant navel gazing and re-hashing like a good chunk of the philosophers do.
I spent my teens explaining to my mum that main memory (which used to be 'core', she interjected) was now RAM, a record was now a row, a thin client was now a browser, PF keys were now just function keys. And then from this basis I watched Windows Forms and .NET and all the iterations of the JDK and the early churn of non-standardized JavaScript all float by, and thought, 'hmm.'
True, but these core concepts are very cyclical. Every 7ish years there’s a new coat of paint on an architecture that was invented at Xerox PARC or before.
He's right. I frequently work with founders who are reinventing and taking credit for stuff because they have no idea it was already created. The history of computers + computer science is really interesting. Studying past problems and solutions isn't a waste of time.
> He’s still disappointed because very few programmers are ever introduced to the history of computer science in the way that artists study the history of art or philosophers the history of philosophy.
Software development/programming is a field where the importance of planning and design lies somewhere between ignored and outright despised. The role of software architect is both ridiculed and vilified, whereas the role of the brave solo developer is elevated to the status of hero.
What you get from that cultural mix is a community that values ad-hoc solutions made up on the spot by inexperienced programmers who managed to get something up and running, and at the same time is hostile towards those who take the time to learn from history and evaluate tradeoffs.
See for example the cliche of clueless developers attacking even the most basic aspects of software architecture such as the existence of design patterns.
With that sort of community, how does anyone expect to build respect for prior work?
Maybe history teaches us that planning and design do not work very well....
I think one of the problems is that if someone uses a word, one still does not know what it means. A person can say 'design patterns' and what he is actually doing is a very good use of them that really helps to clarify the code. Another person can say 'design patterns' and is busy creating an overengineered mess that is not applicable to the actual situation where the program is supposed to work.
Maybe he managed to work them out and understand them in the '70s, if you believe him. But he has certainly never managed to convey that understanding to anyone else. Frankly I think if you fail to explain your discovery to even a fraction of the wider community, you haven't actually discovered it.