romaniv's comments | Hacker News

The fact that everyone is now constantly forced to use (oftentimes faulty) personal heuristics to determine whether or not what they're reading is slop is the real problem here.

AI companies and some of their product users relentlessly exploit the communication systems we've painstakingly built up since 1993. We (both readers and writers) shouldn't be required to individually adapt to this exploitation. We should simply stop it.

And yes, I believe the notion that this exploitation is unstoppable and inevitable is just crude propaganda. This isn't all that different from the emergence of email spam. One way or another, this will eventually be resolved. What I don't know is whether it will be resolved in a way that actually benefits our society as a whole.


> fact that everyone is now constantly forced to use (oftentimes faulty) personal heuristics to determine whether or not what they're reading is slop is the real problem here

It would be ironic and terrific if AI causes ordinary Americans to devote more time to evaluating their sources.


> Instead of asking “which language is best?” we need to ask “what is this language going to cost us?”

As long as engineering salaries depend on tribal identity markers (i.e. language and tooling preferences) rather than on the ability to save money, people will, entirely rationally, choose tools that look good on a résumé over tools that save their company money.


I liked Schneier much more when he was arguing against hyperbolic tech claims that were used as excuses for mass control and surveillance.

The notion that AI is reshaping American politics is a clear example of a made-up problem that is propped up to warrant a real "solution".


No, this is not a pre-existing problem.

In the past, the problem was about transferring a mental model from one developer to another. This applied even when people copy-pasted poorly understood chunks of example code from StackOverflow: there was specific intent and at least some idea of why a particular chunk of code should work.

With LLM-generated software there can be no underlying mental model of the code at all. None. There is nothing to transfer or infer.


It’s even worse, because with a solution an LLM produces it is not obvious whether the user deliberately chose it and favored it over a different approach for some reason, or whether it was just whatever happened to be output and “works”.

I’ve had to give feedback to some junior devs who used quite a bit of LLM-generated code in a PR but didn’t stop to question whether we really wanted that code to be “ours” versus using a library. It was apparent they didn’t consider alternatives and just went with what the model produced.


> We call these workers “pilots,” as opposed to “passengers.” Pilots use gen AI 75% more often at work than passengers, and 95% more often outside of work.

Identify a real issue with the technology, then shift the blame to a made-up group of people who (supposedly) aren't trying hard enough to embrace the technology.

> Embody a pilot mindset, with high agency and optimism

Thanks for the career advice.


Ridiculous, I have it on good authority that embracing the 'hacker ethos' by becoming a 'coding ninja' with a 'wizard' mindset will propel you to next-level synergisms within transformative paradigms like AI and blockchain.


To leverage that hacker ethos for maximum synergy, you'll need to empower a holistic and agile mindset. This allows you to pivot toward a disruptive paradigm and monetize your scalable core competencies across the entire ecosystem.


Yeah, the article was good until I reached that point, at which it became an ad for the BetterUp consultancy's services to transform passengers into pilots.


This isn't wrong, though. There are obviously two types of people using AI: one says "explain to me how X works", the other says "do X for me". The same pattern shows up with every technology.


> Embody a pilot mindset, with high agency and optimism

Fly away from here at high speed


A pilot has ultimate authority over how a plane is flown, because it's their ass on the line if the plane can't land.

If you're a low-level office drone, you are not a pilot.


Embody a slave mindset


Or we’ll import people who will.


Reminder: you can use indentation to encode s-expressions. The result is very easy to parse and easy to read, provided the underlying data structures are not insane.

https://srfi.schemers.org/srfi-49/srfi-49.html
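
For a sense of how little machinery the parsing takes, here is a minimal sketch (my own hypothetical code, not from the SRFI; it assumes whitespace-separated atoms, consistent space indentation, and none of the spec's quoting or grouping rules):

    # Parse indentation-encoded s-expressions into nested Python lists.
    # Each line becomes a list of its tokens; more deeply indented lines
    # below it are appended to it as child lists.
    def parse_indented(text):
        top = []
        stack = [(-1, top)]  # (indent depth, node receiving children)
        for raw in text.splitlines():
            if not raw.strip():
                continue
            depth = len(raw) - len(raw.lstrip(" "))
            node = raw.split()
            while depth <= stack[-1][0]:  # close deeper or equal levels
                stack.pop()
            stack[-1][1].append(node)
            stack.append((depth, node))
        return top

    doc = "html\n  head\n    title hello\n  body\n    p hello world"
    print(parse_indented(doc))
    # [['html', ['head', ['title', 'hello']], ['body', ['p', 'hello', 'world']]]]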


I've been a web developer for over two decades. I have specific well-tested solutions for avoiding external JS dependencies. Despite that, I have the exact same experience as the above security guy. Most developers love adding dependencies.


Not a single reference to P. H. Winston's work on story understanding and the Genesis system. Yes, sorry, I do expect people claiming to operate in some field to have knowledge of that field and to acknowledge prior work. This is absolutely fundamental to having an actual institution of science, rather than a bunch of people tinkering with random projects.

https://groups.csail.mit.edu/genesis/

https://www.youtube.com/watch?v=7XvgBI2KV28


Is this in any way relevant? Or are you just plugging your favorite project whether it fits or not?


Give the project an actual look.

It is a systematic approach to deconstructing and composing stories through standardized schemas, and it seems like exactly the kind of work that should inform something like the OP's visual representation system.


Do you consider the study of computer science and AI in relation to how humans tell and understand stories to be irrelevant to a tool that uses AI to understand human-written stories and provides a computer GUI for editing those stories by re-synthesizing the GUI edits as text?


Stories, AI, words, statements, images: all are arbitrary. The AI bubble is the same as the story bubble: the dilution of meaning. Automate dilution and you have hallucination. This is occurring across the board in politics and news.


Ooookay. Get well soon!


There's nothing accurate about stories, just as there is nothing accurate about words, which are all metaphors and all arbitrary. Comp sci's glaring mistake was confusing content with format. It's a fatal error.

"Narrative fallacies arise inevitably from our continuous attempt to make sense of the world. The explanatory stories that people find compelling are simple; are concrete rather than abstract; assign a larger role to talent, stupidity, and intentions than to luck; and focus on a few striking events that happened rather than on the countless events that failed to happen. Any recent salient event is a candidate to become the kernel of a causal narrative.” Daniel Kahnemann Thinking Fast and Slow

“The same science that reveals why we view the world through the lens of narrative also shows that the lens not only distorts what we see but is the source of illusions we can neither shake nor even correct for…all narratives are wrong, uncovering what bedevils all narrative is crucial for the future of humanity.” (Alex Rosenberg, How History Gets Things Wrong: The Neuroscience of Our Addiction to Stories, 2018)


The web today is a rotting carcass with various middlemen maggots crawling all over it and gorging themselves on the decay. The only real discussion to be had is what to replace it with and how to design the new protocols to avoid the same issues.


The reason the web is a rotting carcass is not the way the web is architected; it's that a lot of people's livelihoods depend on making it as rotten as possible without collapsing it entirely.

Advertising companies, search engines (OK, sometimes both), certificate peddlers and other 'service' providers (I use the term lightly): there are just too many of these maggots that we don't actually need. Mostly we need them to manage the other maggots! If they would all fuck off, the web would instantly be a better place.


Who do you propose needs to fuck off in order for the web to not need certificate authorities?


That's the neat thing: you can't really avoid the same issues. Security is not a destination, it's a process. Every time you find a way to make something more secure, someone finds a new way to attack it, and so the ecosystem evolves.


What do you think is better? The web is indeed questionable, but it is literally the best we have; it is still reasonably simple to deploy a web app.

Desktop app development gets increasingly hostile as OSes introduce more and more TCC modals; you pretty much need a certificate to code-sign an app if you sideload (and app stores have a lot of hassle involved). Mobile clients had it bad for a while too (and it was just announced that Android will require a dev certificate for sideloading as well).

edit: another comment is also correct: the reason the web is like this is that it has the most eyes on it. In the past the eyes were on desktop apps, which made them worse.


I don't know what a replacement for the web would look like.

But it seems apparent to me that it will have to work over HTTP/QUIC, and TCP port 443.

Which prompts the obvious question ...


As a friendly reminder, SRV records exist and are great at fixing that magic-port syndrome (unless you were hinting at the infinite corporate firewall appliances, for which I have no magic fix).
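
For illustration, a hypothetical zone entry (names and ports made up; the record format is RFC 2782):

    ; Clients look up the service name and learn the host AND the port
    ; from DNS, so the protocol needs no well-known "magic" port.
    ;                           TTL  class type prio weight port target
    _newproto._tcp.example.com. 3600 IN    SRV  10   60     8443 node1.example.com.
    _newproto._tcp.example.com. 3600 IN    SRV  10   40     9443 node2.example.com.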


Right. Egress on anything other than tcp/443 is probably a non-starter for any new protocol.

The question I was alluding to is: if it's HTTP-ish over tcp/443, wouldn't it still be the web anyway?

But thinking about it more, the server could easily select a protocol based on the first chunk of the client request. And the example of RTP suggests that maybe even TCP would be optional.
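
A rough sketch of that first-chunk dispatch (hypothetical code, not any particular project's implementation; real demultiplexers such as sslh are far more careful, and 8443 stands in for 443 to avoid needing root):

    import socket

    def sniff_protocol(head: bytes) -> str:
        """Guess the protocol from the first bytes of a connection."""
        if head[:1] == b"\x16":  # TLS handshake record type
            return "tls"
        if head.split(b" ")[0] in (b"GET", b"POST", b"HEAD", b"PUT", b"OPTIONS"):
            return "http"
        return "something-new"  # route to the hypothetical new protocol

    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("0.0.0.0", 8443))
    srv.listen()
    while True:
        conn, addr = srv.accept()
        head = conn.recv(16, socket.MSG_PEEK)  # peek; bytes stay buffered
        print(addr, "->", sniff_protocol(head))
        conn.close()  # a real server would dispatch here, not close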


Spot on.

At some point I tried to figure out where the term "alignment" came from. I didn't find any definitive source, but it seems to have originated on Paul Christiano's medium.com blog:

https://ai-alignment.com/ai-safety-vs-control-vs-alignment-2...

Basically, certain people are dismissing decades of deep thought on this subject from writers (like Asimov and Sheckley), scholars (like Postman) and technologists (like Wiener). Instead, they are creating a completely new set of terms, concepts and thought experiments. Interestingly, this new system seems to leave important parts of the question completely implicit, while simultaneously hyper-focusing public attention on meaningless conundrums (like the infamous paperclip maximizer).

In my view, the most important thing about the Three Laws of Robotics is that they made it obvious that there are several parties involved in AI ethics questions: the manufacturer/creator of the system, the user/owner of the system, and the rest of society. "Alignment" cleverly distracts everyone from noticing the distinctions between these groups.

