n42's comments | Hacker News

Don't engage with this guy; he shows up in every one of these threads to pattern-match back to his heyday without considering any of the nuance of what is actually different this time.

Look, an admirer!

It’s hard to see this article as being written in good faith. We’re at the point where we respond to low-quality LLM outputs with low-quality LLM retorts and vote them both to the front page because of feelings.


I'm at the point now where I simply stop reading the article once it has too many red flags, something that is happening increasingly often.

I don't enjoy reading AI slop, but it feels worse when users of AI tools choose not to disclose that the author of an article is Claude/ChatGPT/etc. Rather than being honest up front, they hide this fact.


I added some sentences at the top, so it won't waste people's time:

Some parts of this article were refined with help from LLMs to improve clarity and technical accuracy. These are just personal notes, but I would really appreciate feedback: feel free to share your thoughts, open an issue, or send a pull request!

If you prefer to read only fully human-written articles, feel free to skip this one.


It clearly wasn't "refined" using LLMs when it contained commands that plainly don't work. Don't lie.


We've flagged it. Please don't waste our time in the future.


> but I would really appreciate feedback

very well

> Some parts of this article were refined with help from LLMs to improve clarity and technical accuracy

Perhaps you should stick to writing about things you can write about with clarity and accuracy yourself, instead of relying on an LLM to do it for you. Alternatively, properly cite and highlight which portions you used AI on/for from the outset, as failure to do so reads at best as lazy slop and more often as intentional duplicity.


the entire GitHub organization looks to be AI slop books... why even do this?


As a fan and user of Zig I found the original post embarrassing, but chalked it up to the enthusiasm of a new user discovering the joy of something that clicked for them

Taking offense to that enthusiasm and generating this weirdly defensive and uninformed take is something else, though


Edit: Apologies, it looks like I misunderstood. Original response left below for posterity.

It's not "weirdly defensive and uninformed" to question the value of posting a bunch of inaccurate LLM slop, especially without any disclosures.

If you're pro-AI, you should be against this too, before these errors get used as training data.


I think you are misunderstanding: they are calling TFA a defensive and uninformed reply to the pro-Zig post from yesterday.


Ohhhh, my apologies, then.


I can see how you would have read it that way, now, but yes — I meant this article is defensive for no reason while being uninformed


I stopped using Rust because of this. I spent more time learning and cursing at other people’s abstractions versus thinking about what the computer is doing.

> the ones who’d use Zig if it weren’t allergic to syntactic sugar

You’re very close to understanding why some people prefer Zig. There is a correlation between language design and how things are built with it.


> There is a correlation

Precisely, same for Go. Incentives decide outcomes.


One that I know of is https://timetree.org/


this is a really, really good article with a lot of nuance and a deep understanding of the tradeoffs in syntax design. unfortunately, it is evoking a lot of knee-jerk reactions to the title and emotional responses to surface-level syntax aesthetics.

the thing that stands out to me about Zig's syntax that makes it "lovely" (and what I think matklad is getting at here) is that there is both minimalism and consistency to the design, while ruthlessly prioritizing readability. and it's not the kind of surface-level "aesthetically beautiful" readability that tickles the mind of an abstract thinker; it is brutalist in a way that leaves no room for surprise in an industrial application. it's really, really hard to balance syntax design like this, and Zig has done a lovely and respectable job of it.


My only complaint about the article is that it doesn't mention error handling. Lol

Zig's use of try/catch is incredible, and by far my favorite error handling of any language. I feel like it would have fit into this article.
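For anyone who hasn't tried it, a minimal toy sketch of what I mean (my own made-up example, not from the article): errors are just values in an error-union return type, `try` propagates them up, and `catch` handles them at the call site.

```
const std = @import("std");

// `!u32` is an error union: this function returns either a u32 or an error.
fn parsePort(s: []const u8) !u32 {
    // `try` propagates any parse error to our caller.
    const port = try std.fmt.parseInt(u32, s, 10);
    if (port > 65535) return error.PortOutOfRange;
    return port;
}

pub fn main() void {
    // `catch` handles the error inline; here we fall back to port 80.
    const port = parsePort("8080") catch 80;
    std.debug.print("port: {d}\n", .{port});
}
```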


> it's not the kind of surface level "aesthetically beautiful" readability that tickles the mind of an abstract thinker

Rather, the sort of beauty it's going for here is exactly the type of beauty that requires a bit of abstraction to appreciate: it's not that the concrete syntax is visually beautiful per se so much as that it's elegantly exposing the abstract syntax, which is inherently more regular and unambiguous than the concrete syntax. It's the same reason S-exprs won over M-exprs: consistently good often wins over special-case great because the latter imposes the mental burden of trying to fit into the special case, while the former allows you to forget that the problem ever existed. To see a language do the opposite of this, look at C++: the syntax has been designed with many, many special cases that make specific constructs nicer to write, but the cost of that is that now you have to remember all of them (and account for all of them, if templating — hence the ‘new’ uniform initialization syntax[1]).

[1]: https://xkcd.com/927/

This trade-off happens all the time in language design: you're looking for a language that makes all the special cases nice _as a consequence of_ the general case, because _just_ being simple and consistent leads you to the Turing tarpit: you simplify the language by pushing all the complexity onto the programmer.


I considered making the case for the parallels to Lisp, but it's not an easy case to make. Zig is profoundly not a Lisp. However, in my opinion it embodies a lot of Lisp's spirit: a singular syntax for programming and metaprogramming, built around an internally consistent mental model.

I don't really know how else to put it, but it's vaguely like a C derived spiritual cousin of Lisp with structs instead of lists.
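To make the "singular syntax" point concrete, a small made-up sketch: a generic type in Zig is just an ordinary function, written in the same syntax as runtime code, that takes a type and returns a type at compile time.

```
const std = @import("std");

// Metaprogramming with ordinary function syntax: this runs at compile
// time and returns a brand-new struct type.
fn Pair(comptime T: type) type {
    return struct {
        first: T,
        second: T,

        fn swapped(self: @This()) @This() {
            return .{ .first = self.second, .second = self.first };
        }
    };
}

pub fn main() void {
    const p = Pair(u8){ .first = 1, .second = 2 };
    const q = p.swapped();
    std.debug.print("{d} {d}\n", .{ q.first, q.second });
}
```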


I think that because of the forces I talked about above, we see a repeating progression in programming languages:

- we have a language with a particular philosophy of development

- we discover that some concept A is awkward to express in the language

- we add a special case to the language to make it nicer

- someone eventually invents a new base language that natively handles concept A nicely as part of its general model

Lisp in some sense skipped a couple of those progressions: it was a very regular language that didn't necessarily have a story for things that people at the time cared about (like static memory management, in the guise of latency). But it's still a paragon of consistency in a usable high-level language.

I agree that it's of course not correct to say that Zig is a descendant or modern equivalent of Lisp. It's more that the virtue that Lisp embodies over all else is a universal goal of language design, just one that has to be traded off against other things, and Zig has managed to do pretty well at it.


> I don't really know how else to put it, but it's vaguely like a C derived spiritual cousin of Lisp with structs instead of lists.

Zig comptime operates a lot like very old-school Lisp FEXPRs, before the Lisp intelligentsia booted them out because FEXPRs were theoretically messy and hard to compile.


As someone who loves Lisps, I still have to disagree on the value of the s-expression syntax. I think that sexps are very beautiful, easy to parse, and easy to remember, but I think that overall they're less useful than Algol-like syntaxes (a family I'd consider most modern languages, including C++, to belong to), for one reason:

Visually-heterogeneous syntaxes, for all of their flaws, are easier to read because it's easier for the human brain to pattern-match on distinct features than indistinct ones.


So Urbit’s Nock vs Forth?


Have any examples that stand out to you to share?


Zig does not really try to appeal to window shoppers. this is one of those controversial decisions that, once you become comfortable with the language by using it, you learn to appreciate.

spoken as someone who found the syntax offensive when first learning it.


> it's even worse that they landed on // for the syntax

.. it is using \\
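For reference, a minimal made-up example of what's being discussed: Zig's multiline string literals prefix each line with \\, so nothing inside the string needs escaping.

```
const std = @import("std");

// Each line of a multiline string literal starts with \\; the line
// breaks between them become part of the string.
const query =
    \\SELECT name, port
    \\FROM services
    \\WHERE active = true
;

pub fn main() void {
    std.debug.print("{s}\n", .{query});
}
```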


I've worked with browsers since before most people knew what a browser was, and it will never cease to amaze me how often people confuse slash and backslash, / and \

It’s some sort of mental glitch that a number of people fall into and I have absolutely no idea why.


I doubt those very people would confuse the two when presented with both next to each other: / \ / \. The issue is, they're not characters used day-to-day, so few people have made the association that the slash is the one going this way / and not the one going the other way \. They may not even be aware that both exist, and just pick the first slash-like symbol they see on their keyboards without looking further.


I wonder if it's dyslexia-adjacent. Dyslexic people famously have particular difficulty distinguishing rotated and reflected letterforms.


Could be. The frequency is such that it could be dyslexia. It's not all the time, but it's a steady rate of incidence.


I think in the 90's it was just people repeating a pattern they learned from Windows/DOS.

It used to grate on my nerves to hear people say, e.g. "H T T P colon backslash backslash yahoo dot com".

But I think they always typed forward slash, like they knew the correct slash to use based on the context, but somehow always spoke it in DOSish.


to me it seems like the market is breaking into an 80/20 split of B2C/B2B: the B2C use case being served by OSS models (as the market shifts to devices that can support them), and the B2B market being priced appropriately for businesses that require that last 20% of absolute cutting-edge performance as the cloud offering


here's a quick recording from the 20b model on my 128GB M4 Max MBP: https://asciinema.org/a/AiLDq7qPvgdAR1JuQhvZScMNr

and the 120b: https://asciinema.org/a/B0q8tBl7IcgUorZsphQbbZsMM

I am, um, floored


Generation is usually fast, but prompt processing is the main limitation with local agents. I also have a 128 GB M4 Max. How is the prompt processing on long prompts? Processing the system prompt for Goose always takes quite a while for me. I haven't been able to download the 120B yet, but I'm looking to switch to either that or GLM-4.5-Air as my main driver.


Here's a sample of running the 120b model on Ollama with my MBP:

```
total duration:       1m14.16469975s
load duration:        56.678959ms
prompt eval count:    3921 token(s)
prompt eval duration: 10.791402416s
prompt eval rate:     363.34 tokens/s
eval count:           2479 token(s)
eval duration:        1m3.284597459s
eval rate:            39.17 tokens/s
```


You mentioned "with local agents". I've noticed this too. How do ChatGPT and the others get around this and provide instant responses in long conversations?


Not getting around it, just benefiting from the parallel compute / huge FLOPS of GPUs. Fundamentally, prefill compute is itself highly parallel, and HBM is just that much faster than LPDDR. H100s and B100s can effectively chew through the prefill in under a second at ~50k token lengths, so the TTFT (time to first token) can feel amazingly fast.


They cache the intermediate data (KV cache).


it's odd that the result of this processing cannot be cached.


It can be, and it is by most good inference frameworks.


the active param count is low, so it should be fast.


my very early first impression of the 20b model on Ollama is that it is quite good, at least for the code I am working on; arguably good enough to drop a subscription or two

