tombert's comments | Hacker News

My neighborhood recycling occurs on Thursday night, so I take all my empty cans, put them in a clear plastic bag, and put them next to my trash. I do not think that the garbage people have ever gotten the cans; there is always a homeless person who will walk around and pick up the bag of empties, presumably to redeem them somewhere.

I don’t have an issue with it, if they want to do what I am too lazy to do, more power to them.


What a world we live in; we have gotten to a point where computers are so small and cheap that they can literally be “disposable”.

It’s beautiful, I love it.


For my part, I hate anything explicitly labeled "disposable". As the author writes, you're supposed to recycle it, but how many people will do that if it has "disposable" written on it? Even worse, if it was truly disposable they could use a non-rechargeable battery, but because they have to keep up the pretense of it being reusable, they have to include a rechargeable battery with more dodgy chemistry that probably shouldn't end up in a landfill...

> As the author writes, you're supposed to recycle it, but how many people will do that if it has "disposable" written on it?

You need to offer an incentive (i.e. a discount on a new vape if you recycle) and then, in my experience, most people will recycle.


And you also need to refrain from breaking this scheme entirely by introducing silly restrictions like only exchanging for in-store vouchers instead of cash, or demanding a same-store receipt for the original purchase (or equivalent) - as has happened in some places (e.g. my country, Poland) with glass and aluminum recycling.

Such restrictions seem to purposefully target poor people, and I have rather strong ethical objections to them (something about making a problem invisible and hoping it'll go away - or starve out), but the effect goes beyond that. Getting $20 back on a $200 product would be a different story, but here, it's more like $2 on $20, or $0.2 on $2; most people aren't going to bother with that (and understandably so: it's not worth the logistics overhead). So at best, all this does is redirect the money stream from poor people to recycling companies. More typically, it just makes people recycle less.


I concur on this one.

Here in NY as a cannabis user, one of the brands available that offers vapes (Fernway) offers a recycling program at dispensaries. I get 10% back off my next vape/cart if I return the old one to the recycling dropbox. My dispensary also keeps how many I've returned on file if I return extras, so I keep a 'balance' of disposables returned for the discounts.


To make matters worse, recycling is a scam (with a small handful of exceptions).

It varies widely by country and by the type of thing you're recycling. People are so extreme about recycling; it's either "recycle everything!" or "it's a scam, just chuck it all in the garbage".

I’m relatively sure that electronics are not recycled properly anywhere. At best some of the metals are extracted (hopefully not by mixing the ashes with mercury).

What would properly recycling electronics look like, if not extracting the metals? Should the worthless base board be melted down and used for bottles?

Not burning all the ICs and all the other components that still work perfectly fine would be a good start imo.

That would fall under Reuse rather than Recycle. Reduce, Reuse and Recycle are in the order of best to worst. Recycling is the last ditch effort to not completely waste something. It's always going to feel like a half measure, because it is.

I would say reusing a "disposable" vape would be refilling it and recharging or exchanging the battery, not salvaging it for parts.

To do WHAT with? Catalog and categorize the millions of random penny-priced ICs that MIGHT be usable for something else?

That there is no way to recycle electronics economically is the reason that they are not recycled. I don't claim otherwise.

Isn't that the point of recycling? To reuse the reusable materials like plastic?

If salvaging 100% of the materials that make up something is the only way to "properly" recycle, we are not recycling anything properly. Some components are not recyclable.

I won't speculate about whether the plastic on the board is recyclable, or ecological to recycle. I don't know. This is what I'm asking.


What about Best Buy and Staples? That's where I take mine.

I can't tell if this is a tongue-in-cheek comment or not, but all of that is shipped off to 3rd party "recyclers" who pinky promise that they will dispose of it properly. Very often those 3rd parties rely on other 3rd parties until it ends up in a waste pile in a developing country, but with a long enough chain of deferred responsibility that nobody can be held accountable.

The fundamental problem with "recycling" is precisely the fact that we just hand it off and don't ask questions about where it ends up, all while feeling great about ourselves afterwards. Best Buy and Staples are offering accountability laundering so that you don't have to feel bad, and in exchange you are more likely to become a customer. The 3rd parties working for them do the same thing, but they usually want cash for it.


sounds like cynicism without any factual basis. I just checked and ERI says otherwise: https://eridirect.com/blog/2025/01/rare-earth-metal-recovery...

> "it's a scam, just chuck it all in the garbage"

This sentiment exists because very often that's where recycling ultimately ends up; we just pay someone to move it far away from us so we don't have to see it when it happens.

Until 2018, when they finally stopped accepting it, one of the US's largest exports to China was cardboard boxes sent over for "recycling". We burned tons of bunker fuel shipping back the boxes Chinese goods arrived in. The net environmental impact would likely have been less had we just kept the boxes at home.

It's strange to me how often people prefer a widely acknowledged lie to simply admitting the truth.

I always recycle though because the recycle bin in my city is larger than my trash bin, and I don't have enough room in my trash bin sometimes.


It varies very widely indeed. In some countries, like Denmark, it isn't a scam because the waste gets burned, but other than that, the majority of recycling just means shipping it to a landfill in a poor country that promises to recycle it.

Well, it depends a lot on material.

Metals, especially aluminum, get widely recycled because it actually makes financial sense.

Plastics, well, you are probably better off burning them for electricity.


In Hungary it gets sorted out locally. We also recently implemented a bottle return system that (although it's annoying) produces clean stacks of PET, aluminium and glass, all of which are recyclable.

Even with PET, arguably the most recyclable plastic, most of it doesn't go bottle-to-bottle but rather bottle-to-textile. Most PET "recycling" doesn't close the loop, so it's dubious to even call it recycling. That said, some bottle-to-bottle recycling of PET is done, and this has been getting better.

> because it gets burned

I wouldn’t really call that recycling.


As long as the heat is used for something (electricity, building heating, etc.) there is at least some reuse of parts of it. And if the exhaust is filtered, pollution is also limited. Better than just putting it in a garbage dump and forgetting about it.

But yes, not proper recycling.


Depends, it’s hard to make a blanket statement like that. Recycled steel and aluminum for example is absolutely not a scam. But for plastics, I agree that waste incineration is mostly a better solution than recycling (which produces low-quality plastics with some risk of unhealthy contaminants in the few cases that it’s not actually a scam).

Can you elaborate on that?

Edit: I'm actually curious, I don't know how recycling is supposed to work for electronics and how it can be a scam.


This YouTube video explains why plastic recycling exists, how it's mostly ineffective and why it is a scam created to normalize single-use plastic. This basically applies to electronics and others. "Why would I reuse or reduce? I can buy, consume and recycle".

https://www.youtube.com/watch?v=PJnJ8mK3Q3g


wildly country dependent, e.g. check the stats for the EU: https://ec.europa.eu/eurostat/web/products-eurostat-news/w/d...

Tax CEOs of vape companies the percentage of their vapes that their company doesn't physically retrieve from customers to be recycled ...

A completely ridiculous and nonsensical proposal I can only assume was said in jest.

Hey, if your entire business plan is to produce actual garbage, maybe you should be held responsible for making sure that garbage has a pathway to proper disposal.

Yes, a jest. But essentially you have to directly impact the take-home pay of CEOs, as that appears to be the only thing they will change their behaviour for.

It sounds like a description of most of a deposit system to me, and deposit systems are good at encouraging recycling.

See "core charges" for many automotive parts to incentivize the return of waste for refurbishing at the higher end and bottle deposits for cans/bottles at the lower end. It's weird how things so common in one part of our society can seem so foreign in others.

Why recycle things when you can make them cheaper, with fewer resources and in higher quality, from scratch?

(The above is not so much about processors, but about plastics. As long as we are still burning any fossil fuels at all, we are probably better off holding off on recycling and instead burning the plastic for electricity to use ever so slightly less new fossil fuels for power, and instead use the virgin fossil fuels to make new plastics.

Especially considering the extra logistics and quality degradation that recycling entails.

Directly re-using plastic bottles a few times might still be worth it, though.)


Is that a genuine question, or are you parodying an ignorant point of view?

The World has limited resources, we don't have a spare.

Do you need it spelling out more clearly?


We are sitting on a 5,970,000,000,000,000,000,000,000 kg ball of matter. We have a giant nuclear furnace in the centre of the solar system that's providing us with energy.

Some resources are still scarce. And a lot of those 6E24 kg is iron and nickel we can never get to. Another big fraction is basically molten stone. And we really should stop putting more carbon into the atmosphere.

Also, if you go for measures like mass processed, the weight of microchips, PCBs and parts is only a tiny fraction of what has to be processed and built in the supply chain.

Agreed that it is smarter to use oil for plastics than to burn it directly.


> Agreed that it is smarter to use oil for plastics than to burn it directly.

My argument is that as long as we are still burning oil and gas, we might as well burn old plastic instead of new oil and gas.

If/when we stop burning oil and gas, then we can think more seriously about recycling plastic.


Did you ever try to burn plastic?

1) Plastic is not liquid, so you can't pipe it to a gas or oil power plant. You may argue that coal isn't liquid either, but continue reading...

2) Burning plastic generates toxic fumes.

3) Plastic ash is sticky and very difficult to clean.


You might like to read about https://en.wikipedia.org/wiki/Incineration and https://en.wikipedia.org/wiki/Waste-to-energy_plant

It's a fascinating topic. There's even more problems than the ones you bring up, but engineers are also pretty smart.


That sounds like an almost Malthusian viewpoint.

The world has effectively infinite resources, getting more is usually just a matter of figuring out better extraction techniques or using better energy.


The world only has effectively infinite resources if growth slows down, because exponentials get out of hand surprisingly quickly.

For example, at 1% energy growth per year it would only take around 9-10k years to reach an annual consumption equal to all the energy in the Milky Way galaxy. By "all the energy" I don't just mean consuming all the solar energy from all the stars, and using all the fissionable material in reactors, and fusing everything that can fuse, and burning all the burnable stuff. No, I mean also using all the gravitational potential energy in the galaxy, and somehow turning everything that has mass into energy according to E=mc^2.

From there at 1% annual growth it is only another 2-3k years to using all the energy in the whole observable universe annually.
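
(A rough sanity check of the first figure, assuming ballpark values of about 6×10^20 J/yr for current world consumption and about 2×10^59 J for the total mass-energy of the Milky Way; both numbers are assumptions, not from the thread:)

    t \approx \frac{\ln(E_{\text{galaxy}} / E_{\text{now}})}{\ln(1.01)}
      \approx \frac{\ln(2 \times 10^{59} / 6 \times 10^{20})}{\ln(1.01)}
      \approx \frac{88.7}{0.00995}
      \approx 8.9 \times 10^{3} \ \text{years}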

Population at 1% growth also gets out of hand surprisingly quickly. If we don't get FTL travel then in about 12k years we run out of space. That's because in 12k years with no FTL we can only expand into a spherical region of space 12k lightyears in radius. At 1% annual growth from the current population, in 12k years the volume of humans would be more than fits in the sphere--and that's assuming we can pack humans so there is no wasted space.

We actually have population growth under 1% now, down to around 0.85%, but that only gets us another 2-3k years.


Eh, because the speed of light is finite, our growth in resource usage will have to 'slow down' to cubic after a while. Sure.

We are very far from that.


>effectively infinite resources

Sure, like an effectively infinite atmospheric carbon sink, effectively infinite helium, effectively infinite fresh water, effectively infinite trees ... we've treated these things as true because the world is big and the population of humans wasn't so big, and we got away with that for a time; now those presumptions are coming back to bite us, hard.

Yes, we can work our way out of some holes, maybe all of them. But we have to make things sustainable first, then spend those resources. We're not wizards, deus ex machina only reliably happens in movies.

A little Malthusian.


> Directly re-using plastic bottles a few times might still be worth it, though.

Directly reusing plastic bottles that were not meant to be reused is bad for your health though, isn't it?


The biggest risks are that single-use bottles are usually pretty difficult to clean (usually a narrow opening). The second biggest, which is related, is that those single-use bottles usually aren't very rigid and will tend to make small cracks in the surface as the material flexes which makes things even harder to clean. After that, all the cracks that will develop will mean it'll leach out the bad stuff in the plastics far faster than if you had some other kind of water bottle.

If you just opened it and drank the drink in it, there's probably no harm in filling it soon after and using it a few times like that. Using that same disposable bottle for a few months is probably not a good idea.


Let's start by pricing in the negative externalities.

>they could use a non-rechargeable battery

The problem here is the item lasts 'long enough' that they can't; a single battery, unless it were very large, would run out of charge first.

But that brings in the second issue of the device not being refillable, which may be the bigger sin.


It reminds me of how Sussman talked about how someday we'd have computers so small and cheap that we'd mix dozens into our concrete and put them throughout our spaces.

Russia started with mixing diodes into concrete a while ago- https://news.ycombinator.com/item?id=41933979

A Deepness in the Sky by Vinge has this as a minor plot point.

> It’s beautiful

Especially since both the waste created in the process of making the device and the e-waste created with its disposal are somebody else's problem!


> It’s beautiful, I love it.

When computers become disposable, their programmers soon become disposable as well. Maybe, you shouldn't love it.


That doesn't make sense.

Life lessons from anime and the WordPorn meme account.

I remember playing with Alpaca a few years ago, and it was fun, though I didn’t find the resulting code to be significantly less error-prone than when I wrote regular Erlang. It’s inelegant, but I find that Erlang’s quasi-runtime-typing with pattern matching gets you pretty far, and it falls into Erlang’s “let it crash” philosophy nicely.

Honestly, and I realize that this might get me a bit of flack here and that’s obviously fine, but I find type systems start losing utility with distributed applications. Ultimately everything being sent over the wire is just bits. The wire doesn’t care about monads or integers or characters or strings or functors, just 1’s and 0’s, and ultimately I feel like imposing a type system can often get in the way more than it helps. There’s so much weirdness and uncertainty associated with stuff going over the wire, and pretty types often don’t really capture that.

I haven’t tried Gleam yet, and I will give it a go, and it’s entirely possible it will change my opinion on this, so I am willing to have my mind changed.


I don’t understand this comment, yes everything going over the wire is bits, but both endpoints need to know how to interpret this data, right? Types are a great tool to do this. They can even drive the exact wire protocol, verification of both data and protocol version.

So it’s hard to see how types get in the way instead of being the ultimate toolset for shaping distributed communication protocols.


Bits get lost, if you don’t have protocol verification you get mismatched types.

Types naively used can fall apart pretty easily. Suppose you have some data being sent in three chunks. Suppose you get chunk 1 and chunk 3 but chunk 2 arrives corrupted for whatever reason. What do you do? Do you reject the entire object since it doesn’t conform to the type spec? Maybe you do, maybe you don’t, or maybe you structure the type around it to handle that.

But let’s dissect that last suggestion; suppose I do modify the type to encode that. Suddenly pretty much every field more or less just becomes Maybe/Optional. Once everything is Optional, you don’t really have a “type” anymore, you have a runtime check of the type everywhere. This isn’t radically different than regular dynamic typing.
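
(A minimal Haskell sketch of where that lands; every name here - Header, Body, Trailer, Payload, summarize - is invented purely for illustration:)

    module PayloadSketch where

    -- Illustrative stand-ins for the three separately transmitted chunks.
    newtype Header  = Header  String
    newtype Body    = Body    String
    newtype Trailer = Trailer String

    -- Because any chunk can arrive corrupted, every field degrades to a Maybe.
    data Payload = Payload
      { header  :: Maybe Header
      , body    :: Maybe Body
      , trailer :: Maybe Trailer
      }

    -- Every consumer now re-checks presence, much as dynamic code would.
    summarize :: Payload -> String
    summarize p = case (header p, body p, trailer p) of
      (Just (Header h), Just (Body b), Just (Trailer t)) -> unwords [h, b, t]
      _                                                  -> "incomplete payload"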

There are more elaborate type systems that do encode these things better like session types, and I should clarify that I don’t think that those get in the way. I just think that stuff like the C type system or HM type systems stop being useful, because these type systems don’t have the best way to encode the non-determinism of distributed stuff.

You can of course ameliorate this somewhat with higher level protocols like HTTP, and once you get to that level types do map pretty well and you should use them. I just have mixed feelings for low-level network stuff.


> But let’s dissect that last suggestion; suppose I do modify the type to encode that. Suddenly pretty much every field more or less just becomes Maybe/Optional. Once everything is Optional, you don’t really have a “type” anymore, you have a runtime check of the type everywhere. This isn’t radically different than regular dynamic typing.

Of course it’s different. You have a type that accurately reflects your domain/data model. Doing that helps to ensure you know to implement the necessary runtime checks, correctly. It can also help you avoid implementing a lot of superfluous runtime checks for conditions you don’t expect to handle (and to treat those conditions as invariant violations instead).


No, it really isn’t that different. If I had a dynamic type system I would have to null check everything. If I declare everything as a Maybe, I would have to null check everything.

For things that are invariants, that’s also trivial to check against with `if(!isValid(obj)) throw Error`.


Sure. The difference is that with a strong typing system, the compiler makes sure you write those checks. I know you know this, but that’s the confusion in this thread. For me too, I find static type systems give a lot more assurance in this way. Of course it breaks down if you assume the wrong type for the data coming in, but that’s unavoidable. At least you can contain the problem and ensure good error reports.

The point of a type system isn’t ever that you don’t have to check the things that make a value represent the type you intend to assign it. The point is to encode precisely the things that you need to be true for that assignment to succeed correctly. If everything is in fact modeled as an Option, then yes you have to check each thing for Some before accessing its value.

The type is a way to communicate (to the compiler, to other devs, to future you) that those are the expected invariants.

The check for invariants is trivial as you say. The value of types is in expressing what those invariants are in the first place.
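
(To make that concrete, a tiny sketch; Percent and mkPercent are invented here, not something from the thread:)

    module Invariant where

    -- An invented invariant: a percentage always sits in 0..100.
    newtype Percent = Percent Int
      deriving Show

    -- The check happens exactly once, at construction time. In a real module
    -- you would hide the Percent constructor so mkPercent is the only way in.
    mkPercent :: Int -> Maybe Percent
    mkPercent n
      | n >= 0 && n <= 100 = Just (Percent n)
      | otherwise          = Nothing

    -- Downstream code takes a Percent and never re-validates; the type itself
    -- states the invariant to the compiler, to other devs, and to future you.
    render :: Percent -> String
    render (Percent n) = show n ++ "%"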


You missed the entire point of the strong static typing.

I don’t think I did. I am one of the very few people who have had paying jobs doing Scala, Haskell, and F#. I have also had paying jobs doing Clojure and Erlang: dynamic languages commonly used for distributed apps.

I like HM type systems a lot. I’ve given talks on type systems, and I was working on trying to extend type systems to deal with these particular problems in grad school. This isn’t meant to be a statement on types entirely. I am arguing that most systems don’t encode for a lot of uncertainty that you find when going over the network.


With all due respect, you can use all of those languages and their type systems without recognizing their value.

For ensuring bits don't get lost, you use protocols like TCP. For ensuring they don't silently flip on you, you use ECC.

Complaining that static types don't guard you against lost packets and bit flips is missing the point.


With all due respect, you really do not understand these protocols if you think “just use TCP and ECC” addresses my complaints.

Again, it’s not that I have an issue with static types “not protecting you”, I am saying that you have to encode for this uncertainty regardless of the language you use. The way you typically encode for that uncertainty is to use an algebraic data type like Maybe or Optional. Checking against a Maybe for every field ends up being the same checks you would be doing with a dynamic language.

I don’t really feel the need to list out my full resume, but I do think it is very likely that I understand type systems better than you do.


Fair enough, though I feel so entirely differently that your position baffles me.

Gleam is still new to me, but my experience writing parsers in Haskell and handling error cases succinctly through functors was such a pleasant departure from my experiences in languages that lack typeclasses, higher-kinded types, and the abstractions they allow.

The program flowed happily through my Eithers until it encountered an error, at which point that was raised with a nice summary.

Part of that was GHC extensions though they could easily be translated into boilerplate, and that only had to be done once per class.

Gleam will likely never live up to that level of programmer joy; what excites me is that it’s trying to bring some of it to BEAM.

It’s more than likely your knowledge of type systems far exceeds mine—I’m frankly not the theory type. My love for them comes from having written code both ways, in C, Python, Lisp, and Haskell. Haskell’s types were such a boon, and it’s not the HM inference at all.
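
(Not my actual parser code, but a tiny base-only sketch of that "flows happily through my Eithers" style; ParseError and both helpers are invented:)

    module EitherFlow where

    import Text.Read (readMaybe)

    data ParseError = BadInt String | OutOfRange Int
      deriving Show

    parseInt :: String -> Either ParseError Int
    parseInt s = maybe (Left (BadInt s)) Right (readMaybe s)

    checkRange :: Int -> Either ParseError Int
    checkRange n
      | n >= 0 && n < 256 = Right n
      | otherwise         = Left (OutOfRange n)

    -- The Either monad stops at the first Left, so the happy path reads
    -- straight through and a failure surfaces with a summary, e.g.
    -- parseOctet "abc" gives Left (BadInt "abc") and
    -- parseOctet "300" gives Left (OutOfRange 300).
    parseOctet :: String -> Either ParseError Int
    parseOctet s = do
      n <- parseInt s
      checkRange n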


> ends up being the same checks you would be doing with a dynamic language

Sure thing. Unless dev forgets to do (some of) these checks, or some code downstream changes and upstream checks become gibberish or insufficient.


I know everyone says that this is a huge issue, and I am sure you can point to an example, but I haven’t found that types prevented a lot of issues like this any better than something like Erlang’s assertion-based system.

When you say "any better than" are you referring to the runtive vs comptime difference?

You're conflating types with the encoding/decoding problem. Maybe your paying jobs didn't provide you with enough room to distinguish between these two problems. Types can be encoded optimally with a minimally-required bits representation (for instance: https://hackage.haskell.org/package/flat), or they can be encoded redundantly with all default/recovery/omission information, and what you actually do with that encoding on the wire in a distributed system with or without versioning is up to you and it doesn't depend on the specific type system of your language, but the strong type system offers you unmatched precision both at program boundaries where encoding happens, and in business logic. Once you've got that `Maybe a` you can (<$>) in exactly one place at the program's boundary, and then proceed as if your data has always been provided without omission. And then you can combine (<$>) with `Alternative f` to deal with your distributed systems' silly payloads in a versioned manner. What's your dynamic language's null-checking equivalent for it?
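
(For concreteness, a minimal sketch of that Alternative-based version fallback; Msg and both decoders are invented stubs, not a real wire format:)

    module VersionedDecode where

    import Control.Applicative ((<|>))

    -- Invented message type: v2 added a retry count, v1 only carried the user.
    data Msg
      = MsgV2 String Int
      | MsgV1 String
      deriving Show

    -- Stub decoders standing in for real deserialisers (aeson, flat, ...).
    decodeV2 :: String -> Maybe Msg
    decodeV2 _raw = Nothing                 -- pretend the v2 schema didn't match

    decodeV1 :: String -> Maybe Msg
    decodeV1 raw = Just (MsgV1 raw)         -- pretend v1 accepts the payload

    -- (<|>) tries the newest schema first and falls back, so versioning is
    -- handled in exactly one place at the boundary; downstream code only
    -- ever pattern matches on Msg.
    decodeMsg :: String -> Maybe Msg
    decodeMsg raw = decodeV2 raw <|> decodeV1 raw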

While I don't agree with the OP about type systems, I understand what they mean about erlang. When an erlang node joins a cluster, it can't make any assumptions about the other nodes, because there is no guarantee that the other nodes are running the same code. That's perfectly fine in erlang, and the language is written in a way that makes that situation possible to deal with (using pattern matching).

Interesting! I don't share that view at all — I mean, everything running locally is just bits too, right? Your CPU doesn't care about monads or integers or characters or strings or functors either. But ultimately your higher level code does expect data to conform to some invariants, whether you explicitly model them or not.

IMO the right approach is just to parse everything into a known type at the point of ingress, and from there you can just deal with your language's native data structures.
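
(A rough sketch of that ingress boundary, with all names invented for illustration:)

    module Ingress where

    import Text.Read (readMaybe)

    -- A made-up domain type; its invariants are established once, at the edge.
    data Reading = Reading { sensorId :: Int, celsius :: Double }
      deriving Show

    -- Ingress point: a raw line from the wire either becomes a Reading or an
    -- error. Everything downstream only ever sees the parsed type.
    parseReading :: String -> Either String Reading
    parseReading raw = case words raw of
      [i, t] -> Reading
                  <$> maybe (Left ("bad id: " <> i)) Right (readMaybe i)
                  <*> maybe (Left ("bad temp: " <> t)) Right (readMaybe t)
      _      -> Left ("malformed line: " <> raw)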


I know everything reduces to bits eventually, but modern CPUs and memory aren’t as “lossy” as the network is, meaning you can make more assumptions about the data being and staying intact (especially if you have ECC).

Once you add distribution you have to encode for the fact that the network is terrible.

You absolutely can parse at ingress, but then there are issues with that. If the data you got is 3/4 good, but one field is corrupted, do you reject everything? Sometimes, but often probably not; network calls are too expensive, so you encode that into the type with a Maybe. But of course any field could be corrupt, so you have to encode lots of fields as Maybes. Suddenly you have reinvented dynamic typing, but it’s LARPing as a static type system.


I think you can avoid most issues by not doing what you're describing! Ensuring data arrives uncorrupted is usually not an application-level concern, and if you use something like TCP you get that functionality for free.

TCP helps but only to a certain extent; it only guarantees specific ordering of bits during its session. Suppose you have to construct an object out of three separate transmissions, like some kind of multipart style thing. If one of the transmissions gets corrupted or errors out from TCP, then you still fall into that Maybe trap.

so you need transactions?

I get what you're saying, but can't you have the same issue if instead you have 3 local threads that you need to get the objects from? One can throw an exception and you only receive 2; same problem.


Sometimes, but I am arguing that you need to encode for this uncertainty if you want to make distributed apps work correctly. If you can do transactions for what you’re doing then great, not every app can do that.

When you have to deal with large amounts of uncertainty, static types often reduce to a bunch of optionals, forcing you to null check every field. This is what you end up having to do with dynamic typing as well.

I don’t think types buy you much in cases with extreme uncertainty, and I think they create noise as a result.

It’s a potentially similar issue with threads as well, especially if you’re not sharing data between them, which has similar issues as a distributed app.

A difference is that it’s much cheaper to do retries within a single process compared to doing it over a network, so if something gets borked locally then a retry is (comparatively) free.


> static types often reduce to a bunch of optionals, forcing you to null check every field

On one end, you write / generate / assume a deserialiser that checks whether incoming data satisfies all required invariants, e.g. all fields are present. On the other end, you specify a type that has all the required fields in the required format.

If deserialisation fails to satisfy type requirements, it produces an error which you can handle by eg falling back to a different type, rejecting operation or re-requesting data.

If deserialisation doesn't fail – hooray, now you don't have to worry about uncertainty.

The important thing here is that uncertainty is contained in a very specific place. It's an uncertainty barrier, if you wish: before it there's raw data, after it it's either an error or valid data.

If you don't have a strict barrier like that – every place in the program has to deal with uncertainty.

So it's not necessarily about dynamic / static. It's about being able to set barriers that narrow down uncertainty and grow the number of assumptions you can safely make. The good thing about an ergonomic typing system is that it allows you to offload these assumptions from your mind by encoding them in the types and letting the compiler worry about them.

It's basically automation of assumptions bookkeeping.


Why couldn't you fix this by validating at the point of ingress? If one of the three transmissions fails, retry and/or alert the user.

But your program HAS to have some invariants. If those are not held, simply reject all the data!

What the hell is really the alternative here? Do you just pretend your process can accept any kind of data, and just never do anything with it??

If you need an integer and you get a string, you just don't work. This has nothing to do with types. There's no solution here, it's just no thank you, error, panic, 500.


You handle that in the validation layer, like millions of people have done with dynamic languages in the past.

> Honestly, and I realize that this might get me a bit of flack here and that’s obviously fine, but I find type systems start losing utility with distributed applications. Ultimately everything being sent over the wire is just bits.

Actually Gleam somewhat shares this view; it doesn't pretend that you can do typesafe distributed message passing (and it doesn't fall into the decades-running trap of trying to solve this). Distributed computing in Gleam would involve handling dynamic messages the same way you handle any other response from outside the system.

This is a bit more boilerplate-y but imo it's preferable to the other two options of pretending it's type safe or not existing.


Interesting. Them being honest about this stuff is a point in their favor.

I might give it a look this weekend.


> handling dynamic messages

the dynamic messages have to have static properties to be relevant for the receiving program, the properties are known upfront, and there's no "decades-running trap of trying to solve this".


> there's no "decades-running trap of trying to solve this".

I’m not as certain. The fact that we’ve gone from ASN.1 to CORBA/SOAP to protobuf to Cap’n’web and all the million other items I didn’t list says something. The fact that, even given a very popular alternative in that list, or super tightly integrated RPC like sending terms between BEAMs, basic questions like “should optionality/absence be encoded differently than unset default values?” and “how should we encode forward compatibility?” have so many different and unsatisfactory answers says something.

Not as an appeal to authority or a blanket endorsement, but I think Fowler put it best: https://martinfowler.com/articles/distributed-objects-micros...

It absolutely is a decades old set of problems that have never been solved to the satisfaction of most users.


> I’m not as certain. The fact that we’ve gone from ASN.1 to CORBA/SOAP to protobuf to Cap’n’web and all the million other items I didn’t list says something.

> It absolutely is a decades old set of problems that have never been solved to the satisfaction of most users.

ASN.1 wasn't in the same problem space as CORBA/DCOM; both CORBA and DCOM/OLE were heavily invested in a general-purpose non-domain-specific object model representation that would support arbitrary embeddings within an open-ended range of software. I suspect this is the unsolvable problem indeed, but I also believe that's not what you meant with your comment either, since all the known large-scale BEAM deployments (the comment I originally replied to implied BEAM deployments) operate within bounded domain spaces such as telecom and messaging, where the distributed properties of the systems are known upfront: there are existing formats, protocols of exchange, and a finite number of valid interactions between entities/actors of the network; the embeddings are either non-existent or limited to a finite set of media such as static images, videos, maps, contacts etc. All of these can be encoded by a compile-time specification that gets published for all parties upfront.

> basic questions like “should optionality/absence be encoded differently than unset default values?”

However you like, any monoid would work here. I would argue that [a] and [] always win over (Option a) and especially over (Option [a]).

> and “how should we encode forward compatibility?”

If you'd like to learn if there's a spec-driven typed way of achieving that, you can start your research from this sample implementation atop json: https://github.com/typeable/schematic?tab=readme-ov-file#mig...


You seem to have a fundamental misunderstanding about type systems. Most (the best?) type systems are erased. This means they only have meaning "at compile time", and make sure your code is sound and preferably without UB.

The "its only bits" thing makes no sense in the world of types. In the end its machine code, that humans never (in practice) write or read.


I know, but a type system works by encoding what you want the data to do. Types are a metaphor, and their utility is only as good as how well the metaphor holds.

Within a single computer that’s easy because a single computer is generally well behaved and you’re not going to lose data and so yeah your type assumptions hold.

When you add distribution you cannot make as many assumptions, and as such you encode that into the type with a bunch of optionals. Once you have gotten everything into optionals, you’re effectively doing the same checks you’d be doing with a dynamic language everywhere anyway.

I feel like at that point, the types stop buying you very much, and your code doesn’t end up looking or acting significantly different than the equivalent dynamic code, and at that point I feel like the types are just noise.

I like HM type systems, I have written many applications in Haskell and Rust and F#, so it’s not like I think type systems are bad in general or anything. I just don’t think HM type systems encode this kind of uncertainty nicely.


> When you add distribution you cannot make as many assumptions

You absolutely can make all assumptions relevant to the handling/dispatching logic expressed at type-level.

> and as such you encode that into the type with a bunch of optionals.

Not necessarily, it can be `Alternative f` of non-optional compound types that define the further actions downstream.

> Once you have gotten everything into optionals, you’re effectively doing the same checks you’d be doing with a dynamic language everywhere anyway.

Not true, your business logic can be dispatched based on a single pattern comparison of the result of the `Alternative f`.


Similar for me, though I used the one built into eMule, which I used for my LINUX ISOS.

I remember back when I would frequent That Guy With the Glasses, there was a thing with Phelous and The Cinema Snob reviewing "Troll 4", a movie that doesn't exist. [1] I remember it being a bit surreal, because I was pretty sure that the movie didn't exist but I wasn't sure.

This reminded me of that.

[1] https://www.thecinemasnob.com/crossovers/brad-and-pheloustro...


I find it a bit odd that people are acting like this stuff is an abject failure because it's not perfect yet.

Generative AI, as we know it, has only existed ~5-6 years, and it has improved substantially, and is likely to keep improving.

Yes, people have probably been deploying it in spots where it's not quite ready, but it's myopic to act like it's "not going all that well" when it's pretty clear that it actually is going pretty well, just that we need to work out the kinks. New technology is always buggy for a while, and eventually it becomes boring.


> Generative AI, as we know it, has only existed ~5-6 years, and it has improved substantially, and is likely to keep improving.

Every 2/3 months we're hearing there's a new model that just blows the last one out of the water for coding. Meanwhile, here I am with Opus and Sonnet for $20/mo and it's regularly failing at basic tasks, antigravity getting stuck in loops and burning credits. We're talking "copy basic examples and don't hallucinate APIs" here, not deep complicated system design topics.

It can one shot a web frontend, just like v0 could in 2023. But that's still about all I've seen it work on.


You’re doing exactly the thing that the parent commenter pointed out: Complaining that they’re not perfect yet as if that’s damning evidence of failure.

We all know LLMs get stuck. We know they hallucinate. We know they get things wrong. We know they get stuck in loops.

There are two types of people: The first group learns to work within these limits and adapt to using them where they’re helpful while writing the code when they’re not.

The second group gets frustrated every time it doesn’t one-shot their prompt and declares it all a big farce. Meanwhile the rest of us are out here having fun with these tools, however limited they are.


Someone else said this perfectly farther down:

> The whole discourse around LLMs is so utterly exhausting. If I say I don't like them for almost any reason, I'm a luddite. If I complain about their shortcomings, I'm just using it wrong. If I try and use it the "right" way and it still gets extremely basic things wrong, then my expectations are too high.

As I’ve said, I use LLMs, and I use tools that are assisted by LLMs. They help. But they don’t work anywhere near as reliably as people talk about them working. And that hasn’t changed in the 18 months since I first prompted v0 to make me a website.


Rather be a Luddite than contribute to these soul suckers like OpenAI and help them lay off workers.

All tech work has been in service of laying off workers. Phone operator, bank teller, longshoreman (outside the US) all used to be serviceable careers to earn a living.

How are they “soul suckers”?

Using LLMs has made it fun for me to make software again.


Shallow learning, overall laziness imprinted on the character over time. For kids and juniors starting out in the field it's much worse. None of the stuff I've learned over the past 20 years was handed over to me in this easy fashion.

Overconfident and over-positive shallow posts just hurt the overall discussion. Also some layer of arrogance - a typical 'if you struggle to get any significant value out of this new toy you must be doing something horribly wrong, look at us all being 100x productive!' which is never ever followed by some detailed explanation of their stack and other details.

Clearly the tools have serious issues since most users struggle to get any sustained reliable added value, and everybody keeps hoping things will improve later due to it being able to write lengthy prose on various topics or fill out government documents.


> None of the stuff I've learned over the past 20 years was handed over to me in this easy fashion.

Yeah, kids these days just include stdio.h and start printing stuff, no understanding of register allocation or hardware addressing modes. 20 years from now nobody will know how to write an operating system.

> Also some layer of arrogance

As compared to "if you claim AI is useful for you, you're either delusional or a shill"? The difference is that the pro-AI side can accept that any specific case it may not work well, while detractors have to make the increasingly untenable argument that it's never useful.


Sure, but think about what it's replacing.

If you hired a human, it will cost you thousands a week. Humans will also fail at basic tasks, get stuck in useless loops, and you still have to pay them for all that time.

For that matter, even if I'm not hiring anyone, I will still get stuck on projects and burn through the finite number of hours I have on this planet trying to figure stuff out and being wrong for a lot of it.

It's not perfect yet, but these coding models, in my mind, have gotten pretty good if you're specific about the requirements, and even if it misfires fairly often, they can still be useful, even if they're not perfect.

I've made this analogy before, but to me they're like really eager-to-please interns; not necessarily perfect, and there's even a fairly high risk you'll have to redo a lot of their work, but they can still be useful.


I am an AI-skeptic but I would agree this looks impressive from certain angles, especially if you're an early startup (maybe) or you are very high up the chain and just want to focus on cutting costs. On the other hand, if you are about to be unemployed, this is less impressive. Can it replace a human? I would say no, it still has a long way to go, but a good salesman can convince executives that it does, and that's all that matters.

> On the other hand, if you are about to be unemployed, this is less impressive

> salesman can convince executives that it does

I tend to think that reality will temper this trend as the results develop. Replacing 10 engineers with one engineer using Cursor will result in a vast velocity hit. Replacing 5 engineers with 5 "agents" assigned to autonomously implement features will result in a mess eventually. (With current technology -- I have no idea what even 2027 AI will do). At that point those unemployed engineers will find their phones ringing off the hook to come and clean up the mess.

Not that unlike what happens in many situations where they fire teams and offshore the whole thing to a team of average developers 180 degrees of longitude away who don't have any domain knowledge of the business or connections to the stakeholders. The pendulum swings back in the other direction.


I just think Jevons paradox [1]/Gustafson's Law [2] kind of applies here.

Maybe I shouldn't have used the word "replaced", as I don't really think it's actually going to "replace" people long term. I think it's likely to just lead to higher output as these get better and better.

[1] https://en.wikipedia.org/wiki/Jevons_paradox

[2] https://en.wikipedia.org/wiki/Gustafson%27s_law


Not you, but the word "replaced" is being used all the time. Even senior engineers are saying they are using it as a junior engineer, while we can easily hire junior engineers (but execs don't want to). Jevons paradox won't work in software because users' wallets and time are limited, and if software becomes too easy to build, it becomes harder to sell. Normal people can have 5 subscriptions, maybe 10, but they won't be going to 50 or 100. I would say we have already exhausted users, with all the bad practices.

You’ve missed my point here - I agree that gen AI has changed everything and is useful, _but_ I disagree that it’s improved substantially - which is what the comment I replied to claimed.

Anecdotally I’ve seen no difference in model changes in the last year, but going from LLM to Claude code (where we told the LLMs they can use tools on our machines) was a game changer. The improvement there was the agent loop and the support for tools.

In 2023 I asked v0.dev to one shot me a website for a business I was working on and it did it in about 3 minutes. I feel like we’re still stuck there with the models.


I've been coding with LLMs for less than a year. As I mentioned to someone in email a few days ago: In the first half, when an LLM solved a problem differently from me, I would probe why and more often than not overrule and instruct it to do it my way.

Now it's reversed. More often than not its method is better than mine (e.g. leveraging a better function/library than I would have).

In general, it's writing idiomatic code much more often. It's been many months since I had to correct it and tell it to be idiomatic.


My experience in 2024 with AI tools like Copilot was that if the code compiled the first time, it was an above-average result, and I'd still need a lot of manual tweaking.

There were definitely languages where it worked better (JS), but if I told people here I had to spend a lot of time tweaking after it, at least half of them assumed I was being really anal about spacing or variable names, which was simply not the case.

It’s still the case for cheaper models (GPT-mini remains a waste of my time), but there are mid-level models like Minimax M2 that can produce working code, and stuff like Sonnet can produce usable code.

I’m not sure the delta is enough for me that I’d pay for these tools on my own though…


In my experience it has gotten considerably better. When I get it to generate C, it often gets the pointer logic correct, which wasn't the case three years ago. Three years ago, ChatGPT would struggle with even fairly straightforward LaTeX, but now I can pretty easily get it to generate pretty elaborate LaTeX and I have even had good success generating LuaTeX. I've been able to fairly successfully have it generate TLA+ spec from existing code now, which didn't work even a year ago when I tried it.

Of course, sample size of one, so if you haven't gotten those results then fair enough, but I've at least observed it getting a lot better.


Ya but what do you do when there are no humans left?

Prompt for a human?

Making humans look ridiculously and unrealistically bad still doesn't invalidate criticism of AI, its overhyped marketing and all the astroturfing.

There’s a subtle point, a moment when you HAVE to take the driver’s wheel from the AI. All the issues I see are from people insisting on using it far beyond the point where it stops being useful.

It is a helper, a partner; it is still not ready to go the last mile.


It's funny how many people don't get that. It's like adding a pretty great senior or staff level engineer to sit on-call next to every developer and assist them, for basically free (I've never used any of the expensive stuff yet. Just things like Copilot, Grok Code in JetBrains, just asking Gemini to write bits of code for me).

If you hired a staff engineer to sit next to me, and I just had him/her write 100% of the code and never tried to understand it, that would be an unwise decision on my part and I'd have little room to complain about the times he made mistakes.


As someone else said in this thread:

> The whole discourse around LLMs is so utterly exhausting. If I say I don't like them for almost any reason, I'm a luddite. If I complain about their shortcomings, I'm just using it wrong. If I try and use it the "right" way and it still gets extremely basic things wrong, then my expectations are too high.

I’m perfectly happy to write code, to use these tools. I do use them, and sometimes they work (well). Other times they have catastrophic failures. But apparently it’s my failure for not understanding the tool or expecting too much of the tool, while others are screaming from the rooftops about how this new model changes everything (which happens every 3 months at this point)


There's no silver bullet. I’m not a researcher, but I’ve done my best to understand how these systems work—through books, video courses, and even taking underpaid hourly work at a company that creates datasets for RLHF. I spent my days fixing bugs step-by-step, writing notes like, “Hmm… this version of the library doesn’t support protocol Y version 4423123423. We need to update it, then refactor the code so we instantiate ‘blah’ and pass it to ‘foo’ before we can connect.”

That experience gave me a deep appreciation for how incredible LLMs are and the amazing software they can power—but it also completely demystified them. So by all means, let’s use them. But let’s also understand there are no miracles here. Go back to Shannon’s papers from the ’60s, and you'll understand that what seems to you like "emerging behaviors" are quite explainable from an information theory background. Learn how these models are built. Keep up with the latests research papers. If you do, you’ll recognize their limitations before those limitations catch you by surprise.

There is no silver bullet. And if you think you’ve found one, you’re in for a world of pain. Worse still, you’ll never realize the full potential of these tools, because you won’t understand their constraints, their limits, or their pitfalls.


> There is no silver bullet. And if you think you’ve found one, you’re in for a world of pain. Worse still, you’ll never realize the full potential of these tools, because you won’t understand their constraints, their limits, or their pitfalls.

See my previous comment (quoted below).

> If I complain about their shortcomings, I'm just using it wrong. If I try and use it the "right" way and it still gets extremely basic things wrong, then my expectations are too high.

Regarding "there are no miracles here"

Here are a few comments from this thread alone,

- https://news.ycombinator.com/item?id=46609559
- https://news.ycombinator.com/item?id=46610260
- https://news.ycombinator.com/item?id=46609800
- https://news.ycombinator.com/item?id=46611708

Here are a few from some older threads:

- https://news.ycombinator.com/item?id=46519851
- https://news.ycombinator.com/item?id=46485304

There is a very vocal group who are telling us that there _is_ a silver bullet.


> We're talking "copy basic examples and don't hallucinate APIs" here, not deep complicated system design topics.

If your metric is an LLM that can copy/paste without alterations, and never hallucinate APIs, then yeah, you'll always be disappointed with them.

The rest of us learn how to be productive with them despite these problems.


> If your metric is an LLM that can copy/paste without alterations, and never hallucinate APIs, then yeah, you'll always be disappointed with them.

I struggle to take comments like this seriously - yes, it is very reasonable to expect these magical tools to copy and paste something without alterations. How on earth is that an unreasonable ask?

The whole discourse around LLMs is so utterly exhausting. If I say I don't like them for almost any reason, I'm a luddite. If I complain about their shortcomings, I'm just using it wrong. If I try and use it the "right" way and it still gets extremely basic things wrong, then my expectations are too high.

What, precisely, are they good for?


I think what they're best at right now is the initial scaffolding work of projects. A lot of the annoying bootstrap shit that I hate doing is actually generally handled really well by Codex.

I agree that there's definitely some overhype to them right now. At least for the stuff I've done they have gotten considerably better though, to a point where the code it generates is often usable, if sub-optimal.

For example, about three years ago, I was trying to get ChatGPT to write me a C program to do a fairly basic ZeroMQ program. It generated something that looked correct, but it would crash pretty much immediately, because it kept trying to use a pointer after free.

I tried the same thing again with Codex about a week ago, and it worked out of the box, and I was even able to get it to do more stuff.


I think it USED to be true that you couldn't really use an LLM on a large, existing codebase. Our codebase is about 2 million LOC, and a year ago you couldn't use an LLM on it for anything but occasional small tasks. Now, probably 90% of the code I commit each week was written by Claude (and reviewed by me and other humans - and also by Copilot and ZeroPath).

It seems like just such a weird and rigid way to evaluate it? I am a somewhat reasonable human coder, but I can't copy and paste a bunch of code without alterations from memory either. Can someone still find a use for me?

For a long time, I've wanted to write a blog post on why programmers don't understand the utility of LLMs[1], whereas non-programmers easily see it. But I struggle to articulate it well.

The gist is this: Programmers view computers as deterministic. They can't tolerate a tool that behaves differently from run to run. They have a very binary view of the world: If it can't satisfy this "basic" requirement, it's crap.

Programmers have made their career (and possibly life) being experts at solving problems that greatly benefit from determinism. A problem that doesn't - well either that needs to be solved by sophisticated machine learning, or by a human. They're trained on essentially ignoring those problems - it's not their expertise.

And so they get really thrown off when people use computers in a nondeterministic way to solve a deterministic problem.

For everyone else, the world, and its solutions, are mostly non-deterministic. When they solve a problem, or when they pay people to solve a problem, the guarantees are much lower. They don't expect perfection every time.

When a normal human asks a programmer to make a change, they understand that communication is lossy, and even if it isn't, programmers make mistakes.

Using a tool like an LLM is like any other tool. Or like asking any other human to do something.

For programmers, it's a cardinal sin if the tool is unpredictable. So they dismiss it. For everyone else, it's just another tool. They embrace it.

[1] This, of course, is changing as they become better at coding.


My problem isn't lack of determinism, it's that its solution frequently has basic errors that prevent it from working. I asked ChatGPT for a program to remove the background of an image. The resulting image was blue. When I pointed this out to ChatGPT it identified this as a common error in RGB ordering in OpenCV and told me the code to change. The whole process did not take very long, but this is not a cycle I want to be part of. (That, and it does not help me much to be given a basic usage of OpenCV that does not work for the complex background I wanted to remove.)

Then there are the cases where I just cannot get it to do what I ask. Ask Gemini to remove the background of an image and you get a JPEG with a baked-in checkerboard background, even when you tell it to produce an RGBA PNG. Again, I don't have any use for that.

But it does know a lot of things, and sometimes it informs me of solutions I was not aware of. The code isn't great, but if I were non-technical (or not very good), this would be fantastic and better than I could do.


I’m perfectly happy for my tooling to not be deterministic. I’m not happy for it to make up solutions that don’t exist, and get stuck in loops because of that.

I use LLMs, I code with a mix of antigravity and Claude Code depending on the task, but I feel like I'm living in a different reality when the code I get out of these tools _regularly just doesn't work, at all_. And to the parent's point, I'm doing something wrong for noticing that?


If it were terrible, you wouldn't use them, right? Isn't the fact that you continue to use AI coding tools a sign that you find them a net positive? Or is it being imposed on you?

> And to the parent's point, I'm doing something wrong for noticing that?

There's nothing wrong pointing out your experience. What the OP was implying was he expects them to be able to copy/paste reliably almost 100% of the time, and not hallucinate. I was merely pointing out that he'll never get that with LLMs, and that their inability to do so isn't a barrier to getting productive use out of them.


I was the person who said it can't copy from examples without making up APIs, but:

> he'll never get that with LLMs, and that their inability to do so isn't a barrier to getting productive use out of them.

This is _exactly_ what the comment thread we're in said - and I agree with him.

> The whole discourse around LLMs is so utterly exhausting. If I say I don't like them for almost any reason, I'm a luddite. If I complain about their shortcomings, I'm just using it wrong. If I try and use it the "right" way and it still gets extremely basic things wrong, then my expectations are too high.

> If it were terrible, you wouldn't use them, right? Isn't the fact that you continue to use AI coding tools a sign that you find them a net positive? Or is it being imposed on you?

You're putting words in my mouth here - I'm not saying they're terrible, I'm saying they're way, way, way overhyped and their abilities are overblown (look at this post and the replies from people saying they're writing 90% of their code with Claude and using AI tools to review it), but when we challenge that, we're wrong.


Apologies. I confused you with drewbug up in the thread.

> And so they get really thrown off when people use computers in a nondeterministic way to solve a deterministic problem

Ah, no. This is wildly off the mark, but I think a lot of people don't understand what SWEs actually do.

We don't get paid to write code. We get paid to solve problems. We're knowledge workers like lawyers or doctors or other engineers, meaning we're the ones making the judgement calls and making the technical decisions.

In my current job, I tell my boss what I'm going to be working on, not the other way around. That's not always true, but it's mostly true for most SWEs.

The flip side of that is I'm also held responsible. If I write ass code and deploy it to prod, it's my ass that's gonna get paged for it. If I take prod down and cause a major incident, the blame comes to me. It's not hard to come up with scenarios where your bad choices end up costing the company enormous sums of money. Millions of dollars for large companies. Fines.

So no, it has nothing to do with non-determinism lol. We deal with that all the time. (Machine learning is decades old, after all.)

It's evaluating things, weighing the benefits against the risks and failure modes, and making a judgement call that it's ass.


> What, precisely, are they good for?

scamming people


Also good for manufacturing consent on Reddit and other places. Intelligence services are busy with a certain country right now, with bots using LLMs to pump out insane amounts of content to mold the information atmosphere.

It's strong enough to replace humans at their jobs and weak enough that it can't do basic things. It's a paradox. Just learn to be productive with them. Pay $200/month and work around its little quirks. /s

>Every 2/3 months we're hearing there's a new model that just blows the last one out of the water for coding

I haven't heard that at all. I hear about models that come out and are a bit better. And other people saying they suck.

>Meanwhile, here I am with Opus and Sonnet for $20/mo and it's regularly failing at basic tasks, antigravity getting stuck in loops and burning credits.

Is it bringing you any value? I find it speeds things up a LOT.


I have a hard time believing that this v0, from 2023, achieved comparable results to Gemini 3 in Web design.

Gemini now often produces output that looks significantly better than what I could produce manually, and I'm a web expert, although my expertise is more in tooling and package management.


Frankly I think the 'latest' generation of models from a lot of providers, which switch between 'fast' and 'thinking' modes, are really just the 'latest' because they encourage users to use cheaper inference by default. In chatgpt I still trust o3 the most. It gives me fewer flat-out wrong or nonsensical responses.

I'm suspecting that once these models hit 'good enough' for ~90% of users and use cases, the providers started optimizing for cost instead of quality, but still benchmark and advertise for quality.


We implement pretty cool workflows at work using "GenAI" and the users of our software are really appreciative. It's like saying a hammer sucks because it breaks most things you hit with it.

>Generative AI, as we know it, has only existed ~5-6 years

Probably less than that, practically speaking. ChatGPT's initial release date was November 2022. It's closer to 3 years, in terms of any significant amount of people using them.


> Generative AI, as we know it, has only existed ~5-6 years, and it has improved substantially, and is likely to keep improving.

I think the big problem is that the pace of improvement was UNBELIEVABLE for about 4 years, and it appears to have plateaued to almost nothing.

ChatGPT has barely improved in, what, 6 months or so.

They are driving costs down incredibly, which is not nothing.

But, here's the thing, they're not cutting costs because they have to. Google has deep enough pockets.

They're cutting costs because - at least with the current known paradigm - the cost is not worth it to make material improvements.

So unless there's a paradigm shift, we're not seeing MASSIVE improvements in output like we did in the previous years.

You could see costs go down to 1/100th over 3 years, seriously.

But they need to make money, so it's possible none of that will be passed on.


I think that even if it never improves, its current state is already pretty useful. I do think it's going to improve though I don't think AGI is going to happen any time soon.

I have no idea what this is called, but it feels like a lot of people assume that progress will continue at a linear pace forever, when I think that generally progress is closer to a "staircase" shape. A new invention or discovery will lead to a lot of really cool new inventions and discoveries in a very short period of time, eventually people will exhaust the low-to-middle-hanging fruit, and progress kind of levels out.

I suspect it will be the same way with AI; I don't know if we've reached the top of our current plateau, but if not, I think we're getting fairly close.


Yes, I've read about something like this before - like the jump from living in 1800 to 1900: you go from no electricity at home to having electricity at home, for example. The jump from 1900 to 2000 is much less groundbreaking for the electricity example - you have more appliances and more reliable electricity, but it's nothing like the jump from candle to light bulb.

Maybe you meant 1900s to 2000s but if you meant the year 1900 to the year 2000 then that century of difference saw a lot more innovation than just the "candle to lightbulb" change of 1800 to 1900.

I'll interpret it as meaning 1800s to 1900s to 2000s. I'd argue that we haven't yet seen the same step change as 1800s to 1900s this century because we're only just beginning the ramp up on the new technology that will drive progress this century similar to how in 1926 they were still ramping up on the use of electricity and internal combustion engines.

Let's take electricity as the primary example though since it's the one you mentioned and it's probably more similar to our current situation with AI. The similarities include the need for central generating stations to supply raw power to end users as well as the need for products designed to make use of that power and provide some utility to the consumer. Efficiency of generation is also a primary concern for both as it's a major cost driver. Both of those required significant investment and effort to solve in the early days of electrification.

We're now solving similar problems with AI, instead of power plants we're building datacenters, instead of lightbulbs and washing machines we're developing chat bot integrations and agents, instead of improving dynamos we're improving GPUs and TPUs. I fully expect we'll follow a similar curve for deployment as we find new uses, improve existing ones and integrate this new power source into an increasing number of domains.

We do have one major advantage though, we've already built The Grid for distribution which saves a massive amount of effort.

This article is a good read on the permeation of electricity through the economy

https://www.construction-physics.com/p/the-birth-of-the-grid


Arguably the jump around the space age is a bigger jump than everything else between ~1900 and now - whenever you want to define that small period.

We may be in a similar step-jump period now, where over the next 10-15 years we'll see some pretty big advancements in robotics due to AI, and then all of the low-hanging fruit will be picked until there's some other MAJOR breakthrough.


They are focused on reducing costs in order to survive. Pure and simple.

Alphabet / Google doesn’t have that issue. OAI and other money losing firms do.


I don't think LLMs are an abject failure, but I find it equally odd that so many people think that transformer-based LLMs can be incrementally improved to perfection. It seems pretty obvious to me now that we're not gonna RLHF our way out of hallucinations. We'll probably need a few more fundamental architecture breakthroughs to do that.

>and is likely to keep improving.

I'm not trying to be pedantic, but how did you arrive at 'keep improving' as a conclusion? Nobody is really sure how this stuff actually works. That's why AI safety was such a big deal a few years ago.


Totally reasonable question, and I'm only making an assumption based on observed progress. AI-generated code, at least in my personal experience, has gotten a lot better, and while I don't think that will go to infinity, I do think there's still more room for improvement.

I will acknowledge that I don't have any evidence for this claim, so maybe the word "likely" was unwise, as it suggests probability. Feel free to replace "is likely to" with "it feels like it will".


Because the likes of Altman have set short term expectations unrealistically high.

I mean that's every tech company.

I made a joke once after the first time I watched one of those Apple announcement shows in 2018, where I said "it's kind of sad, because there won't be any problems for us to solve because the iPhone XS Max is going to solve all of them".

The US economy is pretty much a big vibes-based Ponzi scheme now, so I don't think we can single-out AI, I think we have to blame the fact that the CEOs running these things face no negative consequences for lying or embellishing and they do get rewarded for it because it will often bump the stock price.

Is Tesla really worth more than every other car company combined in any kind of objective sense? I don't think so, I think people really like it when Elon lies to them about stuff that will come out "next year", and they feel no need to punish him economically.


"Ponzi" requires records fraud and is popularly misused, sort of like if people started describing every software bug as "a stack overflow."

I'd rather characterize it as extremes of Greater Fool Theory.

https://en.wikipedia.org/wiki/Greater_fool_theory


I would argue it’s fraud-adjacent. These tech CEOs know that they’re not going to be able to keep the promises that they’re making. It’s dishonest at the very least, if it doesn’t legally constitute “fraud”.

I maintain that most anti-AI sentiment is actually anti-lying-tech-CEO sentiment misattributed.

The technology is neat, the people selling it are ghouls.


Exactly: the technology is useful, but the executive class is hyping it as close to AGI because their buddies are slavering for layoffs. If that "when do you get fired?" tone weren't behind the conversation, I think a lot of people would be interested in applying LLMs to the smaller subset of things they actually perform well at.

For me it's mostly about the subset of things that LLMs suck at but still rammed in everywhere because someone wants to make a quick buck.

I know it's good tech for some stuff, just not for everything. It's the same with previous bubbles: VR is really great for some things, but we were never going to work with a headset on 8 hours a day. Bitcoin is pretty cool, but we were never going to do our shopping lists on the blockchain. I'm just so sick of hypes.

But I do think it's good tech, just like I enjoy VR daily I do have my local LLM servers (I'm pretty anti cloud so I avoid it unless I really need the power)

It's not really about the societal impacts for me, at least not yet, it's just not good enough for that yet. I do worry about that longer-term but not with the current generation of AI. At my work we've done extensive benchmarking (especially among enthusiastic early adopters) and while it can save a couple hours a week we're nowhere near the point where it can displace FTEs.


Yeah, I think those are coming from the same place: so many companies are trying to wedge LLMs into everything, especially contexts where you really need actual reasoning to accomplish a task, and it’s just such a “magic VC money fairy, pick us!” play that it distracts from the underlying tech opening up some text processing capabilities we would’ve thought were amazing a few years ago.

Maybe CEOs should face consequences for going on the stage and outwardly lying. Instead they're rewarded by a bump in stock price because people appear to have amnesia.

This is how I felt about Bitcoin.

I hate the Anthropic guy so much... when I see his face it just brings back all the nonsense lies and "predictions" he makes. Altman is kind of the same, but for some reason Dario kind of takes the cake.

You're saying the same thing cryptobros say about bitcoin right now, and that's 17 years later.

It's a business, but it won't be the thing the first movers thought it was.


It’s different in that Bitcoin was never useful in any capacity when it was new. AI is at least useful right now and it’s improved considerably in the last few years.

It was useful for doing illegal shit

> I generally like to enjoy a good book, movie, blog, or comic strip without letting politics get in the way.

It's certainly easier once they're dead. I can't speak for everyone, but part of the issue is that we don't want to financially support anyone who is doing bad stuff, so once they're dead I don't have to worry about funding them.

Hyperbolic example: suppose David Duke wrote a fantasy novel. Let's even assume that this fantasy novel had nothing to do with race or politics and was purely just about elves and gnomes and shit. Let's also assume that the novel is "good" by any objective measure you'd like to use.

I would still not want to buy it, because I would be afraid that my money is going to something I don't agree with. David Duke is a known racist, neo-Nazi, and former leader of the KKK, and if I were to give him cash, it's likely that some percentage of it would end up going toward a cause that I think is very actively harmful.

Now, if you go too deep with this, then of course you can't ever consume anything; virtually every piece of media involves multiple people, often dozens or even hundreds, many of whom are perfectly fine people and some of whom are assholes, so unless you want to go live in a Unabomber shack, everything devolves into my favorite Sonic quote [1].

So you draw a line somewhere, and I think people more or less have drawn the line at "authorship".

[1] https://www.reddit.com/media?url=https%3A%2F%2Fexternal-prev...


I feel similar.

Dilbert came out a bit before I was born, so from my perspective it always existed. Even before I had ever had any kind of office job, I was reading the Dilbert comics and watching the cartoon series, and had even read The Dilbert Principle.

It was upsetting that he ended up with such horrible viewpoints later in his life, and they aren’t really forgivable, but as you stated it’s sort of like a relative you grew up with dying.

I really hate my grandmother, because she has repeatedly said very racist stuff to my wife, so I haven't talked to her since 2018, and the only communication I have had with her was a series of increasingly nasty emails we exchanged after she called my mother a "terrible parent" because my sister is gay, where I eventually told her that she "will die sad and alone with her only friend being Fox News".

It is likely that I will never say anything to her ever again; she is in her 90s now, and not in the greatest health from my understanding. When she kicks the bucket in a few years, I think I am going to have similar conflicts.

Despite me hating her now, it’s not like all my memories with her were bad. There are plenty of happy memories too, and I am glad to have those, but it doesn’t automatically forgive the horrible shit she has said to my wife and mother and sister.

I have thought about reaching out, but I cannot apologize for anything I said because I am not sorry for anything I said, and I do not apologize for things unless I actually regret them.

Dunno, relationships and psychology are complex and I can’t pretend to say I understand a damn thing about how my brain works.


I've been getting into FUSE a bit lately; I stole an idea a friend had for adding CoW features to an existing non-CoW filesystem, so I've been hacking on and off on a FUSE driver for ext4 to do that.

To learn FUSE, however, I started just making everything into filesystems that I could mount. I wrote a FUSE driver for Cassandra, I wrote a FUSE driver for CouchDB, I wrote a FUSE driver for a thing that just wrote JSON files with Base64 encoding.

None of these performed very well, and I'm sort of embarrassed at how terrible the code is, which is why I haven't published them (they were also just learning projects), but I did find FUSE to be extremely fun and easy to write against. I encourage everyone to play with it.


Writing a fuse frontend for git is particularly rewarding: git is already more or less organised like a file system internally.
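
A rough illustration of why: a git tree object already is a directory listing, and a blob already is a file's contents, so readdir/read map almost one-to-one onto plumbing commands. Hypothetical helpers, shelling out to git for brevity:

    import subprocess

    def readdir_like(treeish="HEAD"):
        # "git ls-tree" prints: <mode> <type> <sha>\t<name>, one entry per line
        out = subprocess.run(["git", "ls-tree", treeish],
                             capture_output=True, text=True, check=True).stdout
        return [line.split("\t", 1)[1] for line in out.splitlines()]

    def read_like(treeish, path):
        # "git cat-file blob <rev>:<path>" dumps the file's bytes
        return subprocess.run(["git", "cat-file", "blob", f"{treeish}:{path}"],
                              capture_output=True, check=True).stdout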

I believe that.

FUSE makes me think that the Plan 9 people were on to something. Filesystems actually can be a really nice abstraction; sort of surreal that I could make an application so accessible that I could seriously have it directly linked with Vim or something.

I feel like building a FUSE driver would be a pretty interesting way to provide a "library" for a service I write. I have no idea how I'd pitch this to a boss to pay me to do it, but pretending that I could, I could see it being pretty interesting to do a message broker or something that worked entirely by "writing a file to a folder". That way you could easily use that broker from basically anything that has file IO, even something like bash.

I always have a dozen projects going on concurrently, so maybe I should add that one to the queue.
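
A minimal, hypothetical sketch of that "broker as a folder" idea using fusepy (everything lives in one in-memory dict - no locking, no persistence, just enough to show the shape):

    # Toy "message queue as a directory" - a sketch, not a real broker.
    import errno, stat, sys, time
    from fuse import FUSE, FuseOSError, Operations

    class QueueFS(Operations):
        def __init__(self):
            self.messages = {}  # path -> message bytes

        def getattr(self, path, fh=None):
            now = time.time()
            if path == "/":
                return dict(st_mode=stat.S_IFDIR | 0o755, st_nlink=2,
                            st_ctime=now, st_mtime=now, st_atime=now)
            if path in self.messages:
                return dict(st_mode=stat.S_IFREG | 0o644, st_nlink=1,
                            st_size=len(self.messages[path]),
                            st_ctime=now, st_mtime=now, st_atime=now)
            raise FuseOSError(errno.ENOENT)

        def readdir(self, path, fh):
            return [".", ".."] + [p.lstrip("/") for p in self.messages]

        def create(self, path, mode, fi=None):
            self.messages[path] = b""
            return 0

        def truncate(self, path, length, fh=None):
            self.messages[path] = self.messages.get(path, b"")[:length]

        def write(self, path, data, offset, fh):
            buf = self.messages.get(path, b"")
            self.messages[path] = buf[:offset] + data
            return len(data)

        def read(self, path, size, offset, fh):
            return self.messages.get(path, b"")[offset:offset + size]

        def unlink(self, path):
            # "Consuming" a message is just deleting the file.
            self.messages.pop(path, None)

    if __name__ == "__main__":
        # Usage: python queuefs.py <mountpoint>
        FUSE(QueueFS(), sys.argv[1], foreground=True)

Once mounted, something like `echo hi > <mountpoint>/msg1` from bash (or a plain file write from any language) would be the entire "client library", which is the appeal.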


See https://github.com/matthiasgoergens/git-snap-fs for my example of read-only access to git via fuse.

I built the original version in Python for a job years ago. But the version above is almost entirely vibe-coded in Rust in a lazy afternoon for fun.

However, I disagree that the filesystem is the right abstraction in general. It works for git, because git is essentially structured like a filesystem already.

More generally, filesystems are roughly equivalent to hierarchical databases, or at most graph databases. And while you can make that work, many collections of data are actually better organised and accessed by other means. See https://www.seas.upenn.edu/~zives/03f/cis550/codd.pdf for a particularly interesting and useful model that has found widespread application and success.


Yeah I'm not saying that they're necessarily great in general, just that there are certain applications that map pretty well, and for those it's a pretty cool abstraction because it allows virtually anything to interface with it.

Also, looks like my message queue idea has already been done: https://github.com/pehrs/kafkafs

No new ideas under the sun I suppose.


I have been running desktop Linux for a very long time, but I actually agree: there are a lot of rough edges. I do think a lot of these problems go away if you are a bit proactive in choosing compatible hardware. I bought my mother-in-law a laptop for Christmas and put Linux Mint on it [1]. There were no issues getting it working on Mint with Cinnamon, but that's in no small part because I double-checked that all the common hardware (Wi-Fi, GPU, trackpad, etc.) worked fine in Linux, and it did.

If you don't do your homework, it's definitely a crapshoot with hardware compatibility, and of course that sucks if you're telling people that they should "switch to Linux" on their existing hardware, since they might have a bad experience.

That said, it is weird that people seem to have total amnesia for the rough edges of Windows, and I'm not convinced that Windows has fewer rough edges than Linux. I've grown a pretty strong hatred for Windows Update, and the System Restore and Automatic Repair tools that never work. Oh, and I really think that NTFS is showing its age now and wish that Microsoft would either restart effort on ReFS or port over ZFS to run on root.

[1] Before you give me shit for this: if anything breaks, I agreed to be the one to fix it, and I find that I can generally solve these kinds of problems by just using tmate and logging into their command line, which AFAIK doesn't have a direct, easy analog in Windows.


How do you check the hardware is compatible in practice? Is there some reliable resource for doing this?

I've generally found that business-grade hardware has better Linux support. For laptops, for instance, Lenovo ThinkPads and Dell Latitudes work better than some bargain-basement consumer-grade laptop.

As a rule, AMD stuff is pretty safe, but to answer your question, I generally go look at kernel sources, or sometimes I see if I can find the model in the NixOS GitHub and check how many workarounds they have to do to get it working.


I bought a cheap laptop with Linux preinstalled; it happened to be compatible with Linux.
