AI models are like magic pixie dust: nobody knows in detail how they work. They are not explicitly created like GOFAI or arbitrary software. The machine learning algorithms are explicitly written by humans, but the model in turn is "written" by a machine learning algorithm, in the form of billions of neural network weights.
I think we do know how they work, no? We give a model some input, it travels through the big neural net of probabilities (obtained through training) and then arrives at a result.
Sure, you don't know what the exact constellation of a trained model will be up front. But similarly, you don't know what, e.g., the average age of some group of people is until you compute it.
If it solves a problem, we generally don't know how it did it. We can't just look at its billions of weights and read what they did. They are incomprehensible to us. This is very different from GOFAI, which is just a piece of software whose code can be read and understood.
The number of weights can be anything; is there a number at which "we don't know" starts?
The model's parameters are in your RAM; you insert the prompt, it runs through the model, and it gives you a result. I'm sure if you spent a bit of time, you could add some software scaffolding around the process to show you each step of the way. How is this different from a statistical model where you "do know"?
For just a few parameters, you can understand the model, because you can hold it in your mind. But for machine learning models that's not possible, as they are far more complex.
May I point out that we don't know in detail how most code runs? Not talking about assembly, I am talking about edge cases, instabilities, etc. We know the happy path and a bit around it. All complex systems based on code are unpredictable from static code alone.
We know at least reasonably well how it runs if we look at the code. But we know almost nothing about how a specific AI model works. Looking at the weights is pointless. It's like looking into Beethoven's brain to figure out how he came up with the Moonlight Sonata.
When we built nuclear power plants we had no idea what really mattered for safety or maintenance, or even what day-to-day operations would be like, and we discovered a lot of things as we ran them (which is why we have been able to keep extending their lifetimes far beyond what they were originally planned for).
Same for airplanes: there's tons of empirical knowledge about them, and people are still trying to build better models for why the things that work do work the way they do (a former roommate of mine did a PhD on modeling combustion in jet engines, and she told me how many of the details were still unknown, despite the technology having been in wide use for the past 70 years).
By the way, this is the fundamental reason why waterfall often fails: we generally don't understand enough about something before we build it and use it extensively.
Horses and cars had a clearly defined, tangible, measurable purpose: transport... they were 100% comparable as a market good, so predicting an inflection point was very reasonable. Same with chess, a clearly defined problem in a finite space with a binary, measurable outcome. Funny how chess AI replacing humans in general was never considered a serious possibility by most.
Now LLMs, what is their purpose? What is the purpose of a human?
I'm not denying some legitimate yet tedious human tasks are to regurgitate text... and a fuzzy text predictor can do a fairly good job of that at less cost. Some people also think and work in terms of text prediction more often than they should (that's called bullshitting - not a coincidence).
They really are _just_ text predictors, trained on such a humanly incomprehensible quantity of information as to appear superficially intelligent, as far as correlation will allow. It's been four years now; we already knew this. The idea that LLMs are a path to AGI and will replace all human jobs is so far off the mark.
> Road upkeep is from general taxation. Road tax was abolished in 1937
I was skeptical of this being true since fuel duty is notoriously high in the UK, so I did a quick fact check.
Based on the change in 1937 you are "technically" correct, in that none of the motoring taxes have been ring-fenced for road funds since 1937.
However, the opposite of what you are implying is true... income from fuel duty alone is generally around 3 times larger than all road maintenance spending (a fairly steady +25bn/yr [0] vs -8bn/yr [1] over the last decade).
In other words, although it's officially one big tax pot, motoring taxes pay for road network expenditure more than 3 times over.
This is why they are introducing the per-mile EV tax: fuel duty was effectively a tax proportional to road use, but EVs skip it, and electricity can't be so easily taxed for road use specifically.
TLDR, UK road users pay for far more than the road network.
> TLDR, UK road users pay for far more than the road network.
Right, but driving has far more externalities than just the cost of the roads. For example:
> Results suggest that each kilometer driven by car incurs an external cost of €0.11, while cycling and walking represent benefits of €0.18 and €0.37 per kilometer. Extrapolated to the total number of passenger kilometers driven, cycled or walked in the European Union, the cost of automobility is about €500 billion per year. Due to positive health effects, cycling is an external benefit worth €24 billion per year and walking €66 billion per year.
I really like your response and your approach to it; I would like to work with you. :P
I do not need a CLI tool. I can come up with a very simple script or even a one-liner (like you just did) to achieve what I want.
Worth noting that neovim shows some git status when editing a file inside a git repository, and there are ways to do the same from your shell.
FWIW, I think this project was vibe coded with an LLM, but if it works, it works, so it makes no difference to me. The only reason I mentioned it is that "vibe coding" is not inherently bad. I do not even like the term. If you "vibe code" without knowledge, then yeah, it is bad, just as bad as a shitty developer writing code is.
Thanks :D I like working with people who appreciate simple solutions.
This sort of response to complex solutions used to be more prevalent on HN. When I got downvoted I was like "...this is the end, isn't it" :P Maybe the Unix way is a dying strategy, IDK, but you give me hope.
> FWIW, I think this project was vibe coded with an LLM, but if it works, it works, so it makes no difference to me.
I did not realise that; I'd be far more worried about running it than most human-coded projects, out of fear of it doing something destructive. Not that humans don't make mistakes, but at least they have a mental model and intent. I suppose it depends on the definition of "vibe coded": I've heard some people talk about sending the LLM off into a loop and then trying to use the result, whereas if you are just using it as a more powerful autocomplete and playing captain, that's a lot better.
Yeah, I am surprised that you would get downvoted for this. Seriously though. A simple yet effective solution. What is wrong with that?! This is what programmers used to do. :(
As for the LLM part: I have written a couple of projects with the help of LLMs and it works perfectly! I knew what I wanted it to do and how, I did extensive testing, and I am familiar with the whole code, of course. It ended up writing the code I would have written anyway, because of me; it just wrote it quicker, that is all. It would simply have taken me a bit more time, because I would have had to read the documentation first (which I do not mind, I love doing it). The problem arises when people who "vibe code" do not have that knowledge to begin with.
It looks quite fancy, but I actually like it more for its functionality, particularly its tree view for navigating the process list. I'm not a big fan of full multicolour in these kinds of tools, so I appreciate how easy it is to flip to greyscale mode from the built-in colour schemes (even from the TUI settings menu).
> as they are the bill payer and entering into a credit agreement requires you to be over 18. If you wanted belt and braces the phone companies doing PAYG could set it to disabled unless you authenticate your age to avoid the "buy simcard for cash" loophole.
This has already been the case in the UK for years. The bill payer needs to prove their age with an ID to lift IP-level blocks from some default age blocklist.
It doesn't work well because obviously a lot of internet access is shared within a household, and the blocklist is broad enough to be annoying, so adults just remove it. Then of course you can always just use a VPN, same as with the current situation.
There are various international economic laws, treaties and agreements between cooperating countries; whether any of them cover this scenario for the US, and whether the US would honour any agreement in the current political climate, remains to be seen. But there are mechanisms in place that allow the UK to reach US companies through each other's legal systems to a degree, and vice versa, regardless of asset location.
> whether the US would honour any agreement in the current political climate remains to be seen
That this is even a question is bananas to me. Isn't that handled by the judicial system rather than by politics/the administration? It shouldn't be possible for the US to have a treaty and for there to be open questions about whether the treaty will actually be enforced; how could anyone trust the US as a whole for anything if treaties aren't enforced?
> Components are bad for web accessibility (aria- property fatigue).
I've been using web components as a vehicle to automate and auto-validate accessibility aspects as much as possible, because I think the only way to make things sustainably accessible is to unburden the developer: either infer as much as possible, or make validation a natural part of development rather than a separate testing cycle that will invariably cause accessibility support to fall out of sync.
It sounds like you might have similar concerns. Do you have any insights to share along these lines for Gooey?
The UI components that I wrote initially are just wrappers for the browser-provided input/form elements. As I'm relying on webview/webview to build desktop apps out of it, that also kind of implies WebKitGTK4 on Linux, WebKit on macOS, and WebView2 (Edge) on Windows.
These work quite nicely together with a screen reader because you don't have to intercept the focus event (or others) that people browsing in caret mode or similar would use to navigate the page.
Additionally, I decided to build single-page applications using a main element and section[data-view] elements, so that the HTML and CSS alone are enough to hint to screen readers what's visible, and so that no JavaScript is necessary to tween things around; the JS/WebASM side of things literally just sets a data-view property on the main element.
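Roughly, the WebASM side boils down to an attribute flip like this (a minimal sketch using syscall/js; the function name and the "settings" view are made-up placeholders for illustration, not gooey's actual API):

```go
//go:build js && wasm

package main

import "syscall/js"

// showView flips the active view by setting data-view on the <main>
// element; the CSS decides which section[data-view] becomes visible.
func showView(name string) {
	doc := js.Global().Get("document")
	doc.Call("querySelector", "main").
		Call("setAttribute", "data-view", name)
}

func main() {
	showView("settings") // "settings" is just an illustrative view name
	// Keep the WASM module alive so callbacks keep working.
	select {}
}
```

Since the active view lives in a DOM attribute, it also stays serializable along with the rest of the page state, which is the point of the whole setup.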
The whole idea behind gooey and the way it is structured is:
- all states must be serializable in HTML
- Static HTML and CSS make the page usable (apart from web forms and REST APIs, that's developer-provided code)
- Dynamic WebASM on top essentially translates the DOM to be interactive, so that things can be animated based on changing data or streams coming from the backend. All interactivity is rendered directly into the DOM, so that it can be serialized again at all times.
- Communication between client and server is JSON (or any other Go-implemented marshaller), using the Fetch API behind the scenes.
I decided on purpose not to provide XMLHttpRequest and other old APIs, because I'm relying on WebASM and "modern browser engines" anyway. This way I kinda force users of gooey to use modern JS from the WebASM context, and I save a whole lot of trouble with compatibility issues (and don't get the unsemantic div fatigue that React has, for example).
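To sketch what the Fetch-backed JSON communication can look like from the Go side (the endpoint URL and the Item struct below are made-up placeholders, not gooey's actual API): on GOOS=js the standard net/http transport already delegates to the browser's Fetch API, so the client code stays plain Go:

```go
//go:build js && wasm

package main

import (
	"bytes"
	"encoding/json"
	"net/http"
)

// Item is a placeholder payload type to illustrate the JSON round trip.
type Item struct {
	Label string `json:"label"`
	Done  bool   `json:"done"`
}

// postItem marshals the struct to JSON and POSTs it to a hypothetical
// REST endpoint; on js/wasm the request is executed via the Fetch API.
func postItem(it Item) error {
	body, err := json.Marshal(it)
	if err != nil {
		return err
	}
	resp, err := http.Post("http://localhost:8080/api/items",
		"application/json", bytes.NewReader(body))
	if err != nil {
		return err
	}
	return resp.Body.Close()
}

func main() {
	_ = postItem(Item{Label: "example"})
}
```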
> And proprietary browser plugins, really? So you're not looking to reduce complexity after all, then?
Maybe they haven't lived through the world of pain that was Silverlight, Flash, Java applets, et al. I suppose from a more innocent position, without any history, it might seem like a good idea to break complexity out into little modules, but the reality was poor integration, more platform lock-in, and a security nightmare.
Not really; it's called discovery, aka science.
This weird framing is just perpetuating the idea of LLMs being some kind of magic pixie dust. Stop it.