So long as you have a steady stream of cardboard (whether from packages shipped to your home or from your friendly neighbors' recycling), I anticipate that you could always make things, solo or with friends.
I've heard that one can use wheat paste as cardboard glue.
My hope is that the combination of a cardboard "saw" and cardboard glue results in something like (nail-free) carpentry that kids can perform nearly entirely unsupervised.
Even making simple shapes that can go into dioramas and be props for roleplaying would already be great, I hope!
Brookline was rated among the Best Places to Live for Quality of Life in the U.S. in 2025-2026 by US News, which "measures how satisfied residents are with their daily lives, and takes into account factors like crime, quality and availability of health care, quality of education and average commute time."
I suspect most engineers at most companies, working behind the scenes and seeing the sausage being made, grow reservations about recommending it afterwards.
The internet was about freedom (to most of its initial creators). It is now a cornerstone of the surveillance state. And yet, independent journalism is now possible mostly only through the existence of the internet.
AI & crypto may end up in a similar bucket. They are paradoxical enablers of both freedom and imprisonment. No one gets to build a technology and declare a problem solved. Alfred Nobel made nitroglycerin safe to handle and thus paradoxically ended up being called the Merchant of Death.
Freedom is not free. It is an eternal struggle.
I am grateful for the builders who try to build something for freedom. I do not spurn them when their invention gets corrupted. It is now our job and the job of future generations to fight the corruption.
Those in power (such as those in the corporate managerial class) are trying to consolidate power by replacing humans with AI. There is the nice story of trying to reduce overhead and increase efficiency. There is also the ugly story of those in power trying to leverage AI for expert opinions even in the face of dissenting human experts.
For now, the human dissenters have a lot of leverage because AI still makes very clear and obvious errors sometimes, and it would be a political nightmare for a decision-maker to be accused by recognized, dissenting human experts of erring on the side of AI. I wonder if there will come a time when AI opinion is on par with, or even favored over, the human expert's because the human would be considered more fallible. This doesn't even have to be true - it only has to be sufficiently perceived to be sufficiently true.
Power tries to centralize and consolidate. See the largely successful attempts to consolidate media outlets like local news stations as an example.
The abstract suggests that elites "shape" mass preference, but I think the degree to which this shaping occurs is overblown in many ways (and perhaps underestimated in other ways, such as through education).
AI, even if it is not powerfully "shaped" by the "elites", can push mass preference in predictable ways. If this is true, this phenomenon by itself allows the elites to tighten their grip on power. For example, Trump's rise to power upset (some of) the elites because they really didn't understand the silent, mass preference for Trump.
This could also slow social progress, since elites often cause stagnation rather than progress. AI could generate acceptable "expert" opinions for the issues where elites would usually rely on human experts today. I see some signs of that today, where those with authority try to prefer the AI answer in opposition to dissenting, human expert opinions. Human experts seem to be winning, for now.
> At the new place, perf doesn’t care if you use AI or not - just what you actually deliver
I work at Google, and I am of the overall opinion that it doesn't matter what you deliver from an engineering perspective. I've seen launches that changed some behavior from opt-in to opt-out get lauded as worth engineering-years of investment. I've seen demos that were 1-2 years ahead of our current product performance get buried under bureaucracy and nitpicking while the public product languishes with nearly no usage. The point being: what you objectively deliver doesn't matter; what matters is how the people in your orbit weave the narrative about what you built.
So if "leadership" wants something concretely done, they must mandate it in such a way that cuts through all the spin that each layer of bureaucracy adds before presenting it to the next layer of bureaucracy. And "leadership" isn't a single person, so you might describe leaders as individual vectors in a vector space, and a clear eigenvector in this space of leadership decisions in many companies is the vector of "increase employee usage of AI tools".
Spot on. I would suggest a slightly different framing where the antagonist isn't really the "approving" teams but "leaders" who all want a seat at the table and exercise their authority lest their authority muscles atrophy. Since they're not part of the development, would they really have any impact or exercise any leadership unless they object to something?
I always laugh-cry with whomever I'm sitting next to whenever launch announcements come out with more people in the "leadership" roles than the individual contributor roles. So many "leaders", but none with the awareness or the care to notice the farcical volumes such announcements speak.
Involving everyone who shows up to meetings is a great way to move forward and/or trim down the attendee list. Managers who enjoy getting their brains picked or taking on homework assignments are always welcome.
That's presuming a healthy culture. In an unhealthy culture, some people will feel pressure to uphold a comment that someone "senior" made offhand in a meeting several months ago, even if that leader is no longer attending project meetings. The people who report to this leader may receive blowback if the "decision" their leader made is not upheld, whether or not the leader recalls their several-month-old decision correctly, in the case that they recall it at all. I have found it frustratingly more common than I would like for people, including leaders, to retroactively adjust their past decisions so that they can claim "I told you so" and "you should have done what I said".
In response to your comment: yes, I would largely be in favor of moving forward only with whatever is said in the relevant meetings by the given attendees. That assumes a reasonably healthy culture where these meetings are scheduled in good faith at times reasonable for all relevant stakeholders.
As a Googler, I wish I were as optimistic as you. There is an internal sentiment that valuable roles are being removed when they aren't aligned with strategic initiatives, even roles that are widely believed to improve developer productivity. See the entire Python maintainers team being laid off: https://www.reddit.com/r/AskProgramming/comments/1cem1wk/goo...
Roles fixing FFmpeg bugs would be a hard sell in this environment, imho.
I am finding this to be the new "meta" for the software development lifecycle. Given this new reality, it's getting harder and harder to actually invest in ambitious, green-field projects. In this new meta, individual contributors and leadership don't trust each other, leading to a vicious feedback loop that makes jumpstarting ambitious projects harder still.
It's hard to commit to a green-field project predicated on a level of risk that leaders are hesitant to take on, since they would also take on the "counterparty risk" of the expert individual contributors having them over a barrel to finish the project. The expert individual contributors are likewise hesitant to devote themselves to the task, knowing that once leadership considers the "hard bits" done, leadership's aversion to risk will push to swap out the expert-individual-contributor roles for much more replaceable ones and ultimately replace the experts themselves.
P.S. The parallels between this cycle of mistrust and the modern dating crisis (in the US) do not elude me.