I've worked in tech and lived in SF for ~20 years and there's always been something I couldn't quite put my finger on.
Tech has always had a culture of aiming for "frictionless" experiences, but friction is necessary if we want to maneuver and get feedback from the environment. A car can't drive if there's no friction between the tires and the road, even though it's helped by having as little friction as possible between the chassis and the air.
Friction isn't fungible.
John Dewey described this rationale in Human Nature and Conduct as thinking that "Because a thirsty man gets satisfaction in drinking water, bliss consists in being drowned." He concludes:
“It is forgotten that success is success of a specific effort, and satisfaction the fulfillment of a specific demand, so that success and satisfaction become meaningless when severed from the wants and struggles whose consummations they are, or when taken universally.”
In "Mind and World", McDowell criticizes this sort of thinking, too, saying:
> We need to conceive this expansive spontaneity as subject to control from outside our thinking, on pain of representing the operations of spontaneity as a frictionless spinning in a void.
And that's really what this is about, I think. Friction-free is the goal but friction-free "thought" isn't thought at all. It's frictionless spinning in a void.
I teach and see this all the time in EdTech. Imagine if students could just ask the robot XYZ - think how much time it'd free up! That time could be spent on things like relationship-building with the teacher, new ways of motivating students, etc.
Except...those activities supply the "wants and struggles whose consummations" build the relationships! Maybe the robot could help the student, say, ask better questions to the teacher, or direct the student to peers who were similarly confused but figured it out.
But I think that strikes many tech-minded folks as "inefficient" and "friction-ful". If the robot knows the answer to my question, why slow me down by redirecting me to another person?
This is the same logic that says making dinner is a waste of time and we should all live off nutrient mush. The purpose of preparing dinner is to make something you can eat, and the purpose of eating is nutrient acquisition, right? Just beam those nutrients into my bloodstream and skip the rest.
Not sure how to put this all together into something pithy, but I see it all as symptoms of the same cultural impulse. One that's been around for decades and decades, I think.
People want the cookie, but they also want to be healthy. They want to never be bored, but they also want to have developed deep focus. They want instant answers, but they also want to feel competent and capable. Tech optimizes for revealed preference in the moment. Click-through rates, engagement metrics, conversion funnels: these measure immediate choices. But they don't measure regret, or what people wish they had become, or whether they feel their life is meaningful.
Nobody woke up in 2005 thinking "I wish I could outsource my spatial navigation to a device." They just wanted to not be lost. But now a generation has grown up without developing spatial awareness.
> Tech optimizes for revealed preference in the moment.
I appreciate the way you distinguish this from actual revealed preference, which I think is key to understanding why what tech is doing is so wrong (and, bluntly, evil) despite it being what "people want". I like the term "revealed impulse" for this distinction.
It's the difference between choosing not to buy a bag of chips or a box of cookies at the store, because you know it'll be a problem and your actual preference is not to eat those things, and having someone leave chips and cookies at your house without your asking, then giving in to the impulse to eat too many of them when you did not want them in the first place.
Example from social media: My "revealed preference" is that I sometimes look at and read comments from shit on my Instagram algo feed. My actual preference is that I have no algo feed, just posts on my "following" tab, or at least that I could default my view to that. But IG's gone out of their way (going so far as disabling deep link shortcuts to the following tab, which used to work) to make sure I don't get any version of my preference.
So I "revealed" that my preference is to look at those algo posts sometimes, but if you gave me the option to use the app to follow the few accounts I care about (local businesses, largely) but never see algo posts at all, ever, I'd hit that toggle and never turn it off. That's my actual preference, despite whatever was "revealed". That other preference isn't "revealed" because it's not even an option.
Just like the chips and cookies, the costs of social media are delayed and diffuse. Eating/scrolling feels good now. The cost (diminished attention span, shallow relationships, health problems) shows up gradually over years.
Yes, I agree with this. I think more people than not would benefit from actively cultivating space in their lives to be bored. Even something as basic as putting your phone in the internal zip pocket of your bag, so that when you're standing in line at the store/post office/whatever you can't be arsed to reach for it and instead are in your head or aware of your surroundings. Both can be such wonderful and interesting places, but we seem to forget that now.
Plants "want" nitrogen, but dump fertilizer onto soil and you get algal blooms, dead zones, plants growing leggy and weak.
A responsible farmer is a steward of the local ecology, and there's an "ecology of friction" here. The fertilizer company doesn't say "well, the plants absorbed it."
But tech companies do.
There's something puritanical about pointing to "revealed preference" as absolution, I think. If clicking is consent, then any downstream damage is a failure of self-control on the user's part. The ecological cost/responsibility is externalized to the organisms being disrupted.
Like Schopenhauer said: "Man kann tun, was er will, aber er kann nicht wollen, was er will." One can do what one wants, but one cannot will what one wants.
I wouldn't go as far as old Arthur, but I do think we should demand a level of "ecological stewardship". Our will is conditioned by our environment and tech companies overtly try to shape that environment.
I think that's partially true. The point is to have the freedom to pursue higher-level goals. And one thing tech doesn't do - and education in general doesn't do either - is give experience of that kind of goal setting.
I'm completely happy to hand over menial side-quest programming goals to an AI. Things like stupid little automation scripts that require a lot of learning from poor docs.
But there's a much bigger issue with tech products - like Facebook, Spotify, and AirBnB - that promise lower friction and more freedom but actually destroy collective and cultural value.
AI is a massive danger to that. It's not just about forgetting how to think, but how to desire - to make original plans and have original ideas that aren't pre-scripted and unconsciously enforced by algorithmic control over motivation, belief systems, and general conformity.
Tech has been immensely destructive to that impulse. Which is why we're in a kind of creative rut where too much of the culture is nostalgic and backward-looking, and there isn't that sense of a fresh and unimagined but inspiring future to work towards.
I don't think I could agree with you more. I think more people in tech and business should read and think about philosophy, the mind, social interactions, and society.
EdTech, for example, really seems to neglect the kind of bonds that people form when they go through difficult things together, and pushing through difficulties is how we improve. Asking a robot XYZ does not improve us. AI and LLMs do not know how to teach; they are not Socratic, pushing and prodding at our weaknesses and assessing us so we improve. They just tell us how smart we are.
This is perhaps one of the most articulate takes on this I have ever read - thank you!
And - for myself, it was friction that kickstarted my interest in "tech" - I bought a janky modem, and it had IRQ conflicts with my Windows 3 mouse - so, without internet (or BBSes at that time), I had to troubleshoot and test different settings with the 2-page technical manual that came with it.
It was friction that made me learn how to program and read manuals/syntax/language/framework/API references to accomplish things for hobby projects - which then led to paying work. It was friction not having my "own" TV and access to all the visual media I could consume "on-demand" as a child, therefore I had to entertain myself by reading books.
Friction is an element of the environment like any other. There's an "ecology of friction" we should respect. Deciding friction is bad and should be eradicated is like deciding mosquitoes or spiders or wolves are bad and should be eradicated.
Sometimes friction is noise. Sometimes friction is signal. Sometimes the two can't be separated.
I learned much the same way you did. I also started a coding bootcamp, so I've thought a lot about what counts as "wasted" time.
I think of it like building a road through wilderness. The road gets you there faster, but careless construction disturbs the ecosystem. If you're building the road, you should at least understand its ecological impact.
Much of tech treats friction as an undifferentiated problem to be minimized or eliminated—rather than as part of a living system that plays an ecological role in how we learn and work.
Take Codecademy, which uses a virtual file system with HTML, CSS, and JavaScript files. Even after mastering the lessons, many learners try the same tasks on their own computers and ask, "Why do I need to put this CSS file in that directory? What does that have to do with my hard drive?"
If they'd learned directly on their own machines, they would have picked up the hard-drive concepts along the way. Instead, they learned a simplified version that, while seemingly more efficient for "learning to code," creates its own kind of waste.
But is that to say the student "should" spend a week struggling? Could they spend a day, say, and still learn what the friction was there to teach? Yes, usually.
I tell everyone to introduce friction into their lives...especially if they have kids. Friction is good! Friction is part of the je ne sais quoi that makes humans create.
In my experience, part of the "frictionless" experience is also providing minimal information about any issues and no way to troubleshoot. Everything works until it doesn't, and when it doesn't you are at the mercy of the customer support queue and the luck of getting an agent with the ability to fix your problem.
> but friction is necessary if we want to maneuver and get feedback from the environment
You are positing that we are active learners whose goal is clarity of cognition, and that friction and cognitive struggle are part of that. Clarity is attempting to understand the "know-how" of things.
Tech, and dare I say the natural laziness inherent in us, instead wants us to be zombies fed the "know-that", as if that were sufficient: the dystopia portrayed in The Matrix, or the rote student regurgitating memes. But know-that is not the same as know-how, and know-how keeps evolving, requiring a continuously learning agent.
Looking at it from a slightly different angle, one I find most illuminating: removing "friction" is like removing "difficulty" from a game, and "friction free" as an ideal is like "cheat codes from the start" as an ideal. It's making a game where there's a single button that says "press here to win." The goal isn't to remove "friction"; it's to remove a specific type of valueless friction and replace it with valuable friction.
I don't know. You can be banging your head against the wall to demolish it or you can use manual/mechanical equipment to do so. If the wall is down, it is down. Either way you did it.
Thank you for expressing this. It might not be pithy, but it's something I've been thinking about a lot for a long time, and this is a well-articulated way of expressing it.
From John Dewey's Human Nature and Conduct, the fallacy that "Because a thirsty man gets satisfaction in drinking water, bliss consists in being drowned."
“The fallacy in these versions of the same idea is perhaps the most pervasive of all fallacies in philosophy. So common is it that one questions whether it might not be called the philosophical fallacy. It consists in the supposition that whatever is found true under certain conditions may forthwith be asserted universally or without limits and conditions. Because a thirsty man gets satisfaction in drinking water, bliss consists in being drowned. Because the success of any particular struggle is measured by reaching a point of frictionless action, therefore there is such a thing as an all-inclusive end of effortless smooth activity endlessly maintained.
It is forgotten that success is success of a specific effort, and satisfaction the fulfillment of a specific demand, so that success and satisfaction become meaningless when severed from the wants and struggles whose consummations they are, or when taken universally.”
1. One cannot not communicate
2. Every communication has a content and relationship aspect such that the latter classifies the former and is therefore a metacommunication
3. The nature of a relationship is dependent on the punctuation of the partners' communication procedures
4. Human communication involves both digital and analog modalities
5. Inter-human communication procedures are either symmetric or complementary
Re: (1), the "mere" act of using AI communicates something, just like some folks might register a text message as more (or less) intimate than a phone call, email, etc. The choice of modality is always part of what's communicated, part of the act of communication, and we can't stop that. Re: (2), that communication is then classified by each person's idea of what the relationship is.
This is a dramatic and expensive way to learn they had different ideas of their relationship!
Of course, in a teacher/student situation, it's the teacher's job to make it clear to the students what the relationship is. Otherwise you risk relationship-damaging "surprises" like this.
Even ignoring the normative question of what a teacher Should™ do in that situation, it was counterproductive. Whatever benefit the teacher thought AI would provide, they'd (hopefully) agree it was outweighed by the cost to their relationship w/ students. All future interactions w/ those students will now be X% harder.
There's a kind of technical rationale which says that if (1) the GOAL is to improve the student's output and (2) I would normally do that by giving one or more rounds of feedback and waiting for the student to incorporate it then (3) I should use AI because it will help us reach that goal faster and more efficiently.
John Dewey described this rationale in Human Nature and Conduct as thinking that "Because a thirsty man gets satisfaction in drinking water, bliss consists in being drowned." He concludes:
“It is forgotten that success is success of a specific effort, and satisfaction the fulfillment of a specific demand, so that success and satisfaction become meaningless when severed from the wants and struggles whose consummations they are, or when taken universally.”
The act of receiving and incorporating feedback is not "inefficient", especially not in a school setting. The consummation of that process is part of the goal. Maybe the most important part!
CS Peirce has a famous essay "The Fixation of Belief" where he describes various processes by which we form beliefs and what it takes to surprise/upset/unsettle them.
This blog post gestures at that idea while being an example of what Peirce calls the "a priori method". A certain framework is first settled upon for (largely) aesthetic reasons and then experience is analyzed in light of that framework. This yields comfortable conclusions (for those who buy the framework, anyhow).
For Peirce, all inquiry begins with surprise, sometimes because we've gone looking for it but usually not. About the a priori method, he says:
“[The a priori] method is far more intellectual and respectable from the point of view of reason than either of the others which we have noticed. But its failure has been the most manifest. It makes of inquiry something similar to the development of taste; but taste, unfortunately, is always more or less a matter of fashion, and accordingly metaphysicians have never come to any fixed agreement, but the pendulum has swung backward and forward between a more material and a more spiritual philosophy, from the earliest times to the latest. And so from this, which has been called the a priori method, we are driven, in Lord Bacon's phrase, to a true induction.”
Wow. I'm reminded of a great essay/blog I read years ago that I'll never find again which said a good, engaging talk/presentation has to have an element of surprise. More specifically, you start with an exposition of what your audience already knows/believes, then you introduce your thesis, which is SURPRISING in terms of what they already know. Not too far out of the realm of belief, but just enough.
The bigger/more thought-diverse the audience, the harder this is to do.
I had a grad school mentor, William Wells, who taught us something similar. A good research publication or presentation should aim for "just the right amount of surprise".
Too much surprise and the scientific audience will dismiss you out of hand. How could you be right while all the prior research is dead wrong?
Conversely, too little surprise and the reader / listener will yawn and say but of course we all know this. You are just repeating standard knowledge in the field.
Despite the impact on audience reception, we tend to believe that most fields would benefit from robust replication studies and that researchers shouldn't be penalized for confirming the well known.
And sometimes there really is paradigm-breaking research, and common knowledge is eventually demonstrated to be very wrong. But often the initial researchers face years or decades of rejection.
John Dewey on a similar theme, about the desire to make everything frictionless and the role of friction. The fallacy that because "a thirsty man gets satisfaction in drinking water, bliss consists in being drowned."
> The fallacy in these versions of the same idea is perhaps the most pervasive of all fallacies in philosophy. So common is it that one questions whether it might not be called the philosophical fallacy. It consists in the supposition that whatever is found true under certain conditions may forthwith be asserted universally or without limits and conditions.
> Because a thirsty man gets satisfaction in drinking water, bliss consists in being drowned. Because the success of any particular struggle is measured by reaching a point of frictionless action, therefore there is such a thing as an all-inclusive end of effortless smooth activity endlessly maintained.
> It is forgotten that success is success of a specific effort, and satisfaction the fulfilment of a specific demand, so that success and satisfaction become meaningless when severed from the wants and struggles whose consummations they are, or when taken universally.
I remember a few years back, here on HN everyone was obsessed with diets and supplements and optimizing their nutrients.
I remember telling someone that eating is also a cultural and pleasurable activity, that it's not just about nutrients, and that it's not always meant to be optimized.
It wasn't well received.
Thankfully, these days that kind of post is much less common here. That particular fad seems to have lost its appeal.
Oh yeah, it's both funny and understandable how we've swung from the mania of Huel-esque techbro nutrition to the current holistic-eating "beef tallow" and no-seed-oils movement. I think we realized guzzling slop alone is spiritually empty.
Always a pleasant surprise to see Peirce linked here!
He influenced many people, both directly and indirectly, but had a...difficult personality. He has many thousands of pages of written work, but resisted finalizing anything "publishable". His style is (IMO) an extreme example of that dense, meandering, 19th-century Victorian style, which can make for very hard reading.
But he made significant contributions to many fields, from mathematics to experimental physics to logic to (nascent) computer science to philosophy.
Umberto Eco said: "Charles Sanders Peirce is undoubtedly the greatest unpublished writer of our century." (cf. https://www.jstor.org/stable/2907146)
Cornel West said: "Charles Sanders Peirce is the most profound philosophical thinker produced in America."
For example, Peirce coined the term "fallibilism":
> For years in the course of this ripening process, I used for myself to collect my ideas under the designation fallibilism; and indeed the first step toward finding out is to acknowledge you do not satisfactorily know already; so that no blight can so surely arrest all intellectual growth as the blight of cocksureness; and ninety-nine out of every hundred good heads are reduced to impotence by that malady — of whose inroads they are most strangely unaware! Collected Papers, §1.13
He was the first to articulate the type/token distinction. He had truth tables ~40 years before Wittgenstein. He had diagrams of electrical circuits that could do basic logic. He was the first to show (in the 1880s) that NOR and NAND were each sufficient to reproduce the other logical connectives.
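To make that last claim concrete, here's a quick illustration in Ruby (my sketch, not Peirce's notation) of NAND generating the other connectives; the dual construction works for NOR:

    def nand(a, b)
      !(a && b)
    end

    def not_(a)
      nand(a, a)             # NOT a = a NAND a
    end

    def and_(a, b)
      not_(nand(a, b))       # a AND b = NOT (a NAND b)
    end

    def or_(a, b)
      nand(not_(a), not_(b)) # a OR b = (NOT a) NAND (NOT b)
    end

    # Exhaustive truth-table check:
    [true, false].product([true, false]).each do |a, b|
      raise unless and_(a, b) == (a && b)
      raise unless or_(a, b) == (a || b)
    end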
“It is a profoundly erroneous truism, repeated by all copy-books and by eminent people when they are making speeches, that we should cultivate the habit of thinking of what we are doing. The precise opposite is the case. Civilization advances by extending the number of important operations which we can perform without thinking about them. Operations of thought are like cavalry charges in a battle — they are strictly limited in number, they require fresh horses, and must only be made at decisive moments.”
— Alfred North Whitehead, "Introduction to Mathematics" (1911)
Part of what I take Whitehead to be saying is that the act of truly thinking is difficult to the point of being psychologically painful. And I believe that fear of this pain is at the root of procrastination in the realm of academic work.
The American Buddhist Cory Muscara has written: Procrastination is the refusal or inability to be with difficult emotions.
This view presupposes that things "we can perform without thinking about them" are done correctly. We don't get there without thinking about them first. Doing things without thinking about them is, at the level of each individual, a luxury we earn by thinking about them really hard at first. At the scale of society, well, this is supposedly what school is for. But for the society to "not think about" things, individuals have to continue thinking about them.
A lot of people don't get that far for a lot of tasks, so "think more" is not incorrect advice for them.
I've also had occasional success first convincing them they know how to add "mindlessly", by just manipulating symbols, and then explaining that we can have machines do it mindlessly, too. I don't use those words, ofc.
For example, you might ask them "Imagine you had a younger sibling who couldn't add. Maybe they didn't even know what numbers were. Could you teach them how to add by just telling them what symbols to write down as they looked through the symbols in the addition problem? Maybe there's an index card labeled '6+3' and on it is written '9'. You tell them to look for the correct index card and write down the corresponding symbol."
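In code, that index-card game is just a lookup table. A sketch of the idea (not anything I actually hand to students): someone who can already add prepares the cards once, and afterwards "adding" is pure symbol matching.

    # Prepared in advance, once, by someone who can add:
    CARDS = {}
    (0..9).each do |i|
      (0..9).each do |j|
        CARDS["#{i}+#{j}"] = (i + j).to_s
      end
    end

    # The sibling's entire job: find the matching card, copy the symbol.
    CARDS["6+3"] # => "9", found by matching symbols, not by adding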
You can also explain binary to any interested student who is 13+ and then the idea that a machine can do it becomes a lot easier.
He uses a metaphor I've gotten a lot of mileage out of. Imagine you have a clerk who can add and multiply like a regular person. Now imagine there's someone else who knows only how to add and count, but has no idea what multiplication is.
If they can add and count fast enough, it'll look to an outsider like they not only know what multiplication is, but they can do any multiplication problem almost instantly.
Computers are like that: dumb but fast.
If they do something fast enough they give the illusion of "understanding", kind of like movies give the illusion of motion by swapping out still images fast enough.
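The clerk fits in a few lines of Ruby (my sketch of the metaphor, anyway): multiplication built out of nothing but adding and counting.

    # The clerk only adds and counts; there's no * anywhere.
    def clerk_multiply(a, b)
      total = 0
      tally = 0
      while tally < b # counting up to b...
        total += a    # ...adding a each time
        tally += 1
      end
      total
    end

    clerk_multiply(7, 6) # => 42, with no idea what "multiplication" means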
I did this years ago to demonstrate to my students that the "exact solution" can still be written in code. There are implementations in Ruby and Python, with some benchmarking code: https://github.com/jfarmer/fib-bench/
Code winds up looking like:
    def fib_phi(n)
      ((PhiRational(0, 1)**n - PhiRational(1, -1)**n) / PhiRational(-1, 2)).a.to_i
    end
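For the curious, here's roughly what a PhiRational can look like under the hood. This is my sketch of the algebra, not the repo's actual code: values are a + b*phi with exact Rationals, multiplication uses phi^2 = phi + 1, and division goes through the conjugate (phi -> 1 - phi). That makes PhiRational(0, 1) phi itself, PhiRational(1, -1) the conjugate 1 - phi, and PhiRational(-1, 2) equal to 2*phi - 1 = sqrt(5).

    class PhiRational
      attr_reader :a, :b

      def initialize(a, b)
        @a = Rational(a)
        @b = Rational(b)
      end

      def +(other)
        PhiRational.new(a + other.a, b + other.b)
      end

      def -(other)
        PhiRational.new(a - other.a, b - other.b)
      end

      # (a + b*phi)(c + d*phi) = (ac + bd) + (ad + bc + bd)*phi
      def *(other)
        PhiRational.new(a * other.a + b * other.b,
                        a * other.b + b * other.a + b * other.b)
      end

      # Divide via the conjugate; the norm a^2 + ab - b^2 is an
      # ordinary Rational, so nothing is ever rounded.
      def /(other)
        norm = other.a**2 + other.a * other.b - other.b**2
        num = self * PhiRational.new(other.a + other.b, -other.b)
        PhiRational.new(num.a / norm, num.b / norm)
      end

      def **(n)
        result = PhiRational.new(1, 0) # the multiplicative identity
        n.times { result = result * self }
        result
      end
    end

With that, fib_phi(10) comes out to exactly 55: Binet's formula with no floating point anywhere.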