Eh, I got a cheap degree from a public school (URI), albeit in Mathematics and not Comp Sci, and it hasn't stopped me from getting good tech jobs over the last decade or so. I'm currently working at a FAANG. Maybe I'm just extra hard-working, smart, or lucky? Or maybe your pedigree isn't as big a deal as it once was? Hard to say from my N=1 data point.
I guess the point I was trying to get at is that the flagship public research university in your state doesn't count; URI isn't even cheap compared to other flagship public universities. I went to the flagship public school in my state, and I don't consider it cheap. I'm thinking more like Lake Superior College in MI, or UW-Stout in Wisconsin. They're cheaper than the flagship public school, but because of their lack of name recognition and the assumptions employers make (like, did this person go to Superior because it was cheap, or because that's the only place they got in?), I've seen people struggle to get good jobs after attending, particularly during rough economies.
As a test, look in your IT department. I wouldn't be surprised if it's full of people from community colleges and lower-ranked "cheap" colleges with engineering degrees. I like the idea of just going to the cheapest school, but ranking unfortunately matters to a lot of employers, and ranking is usually correlated with cost.
If you live in a jurisdiction where there is a speed limit enforced by law, you likely have driven above it at some point. By definition, this is a violation of the law. Yet you have observed that you have never been arrested (perhaps never even ticketed?) as a result of this. Is this a logical contradiction? Obviously not. The law isn't always enforced, and not every violation of the law is punished.
I can't speak for where you live, but in America there are many, many traffic laws. They differ greatly by jurisdiction. Most of them are not enforced. Sometimes that's explicit -- for example, my city recently announced it would no longer detain people for certain minor traffic violations -- but usually it's implicit which violations go unpunished. It's also selective: by creating an unseen web of violations, the detaining officer is given all the tools needed to make each stop as painful or as peaceful as they'd like.
Sure, you can ask the agents to "identify and remove cruft," but I never have any confidence that they actually do that reliably. Sometimes it works. Mostly they just burn tokens, in my experience.
> And it's not like any of your criticisms don't apply to human teams.
Every time the limitations of AI are discussed, we see this unfair standard applied: ideal AI output is compared to the worst human output. We get it, people suck, and sometimes the AI is better.
At least the ways that humans screw up are predictable to me. And I rarely find myself in a gaslighting session with my coworkers where I repeatedly have to tell them that they're doing it wrong, only to be met with "oh my, you're so right!" and watch them re-write the same flawed code over and over again.
Yeah, I see quite a lot of misanthropy in the rhetoric people sometimes use to advance AI. I'll say something like "most people are able to learn from their mistakes, whereas an LLM won't" and then some smartass will reply "you think too highly of most people" -- as if this simple capability is just beyond a mere mortal's abilities.
I'll never doubt the ability of people like yourself to consistently mischaracterize human capabilities in order to make it seem like LLMs' flaws are just the same as (maybe even fewer than!) humans. There are still so many obvious errors (noticeable by just using Claude or ChatGPT to do some non-trivial task) that the average human would simply not make.
And no, just because you can imagine a human stupid enough to make the same mistake doesn't mean that LLMs are somehow human in their flaws.
> the gap is still shrinking though
I can tell this human is fond of extrapolation. If the gap is getting smaller, surely soon it will be zero, right?
> doesn't mean that LLMs are somehow human in their flaws.
I don't believe anyone is suggesting that LLMs' flaws are perfectly 1:1 aligned with human flaws, just that both do have flaws.
> If the gap is getting smaller, surely soon it will be zero, right?
The gap between y=x^2 and y=-x^2-1 shrinks for a while, never reaches zero (the vertical gap is 2x^2 + 1, which is always at least 1), then grows again.
The difference between any given human (or even all humans) and AI will never be zero: any future AI that can only do what one or all of us can do could be trivially glued to the stuff AI can already do better, like chess and Go (and the stuff simple computers can already do better, like arithmetic).
> I'll never doubt the ability of people like yourself to consistently mischaracterize human capabilities
Ditto for your mischaracterizations of LLMs.
> There are still so many obvious errors (noticeable by just using Claude or ChatGPT to do some non-trivial task) that the average human would simply not make.
Firstly, so what? LLMs also do things no human could do.
Secondly, they've learned from unimodal data sets that lack the rich semantic content humans are exposed to (not to mention born with, thanks to evolution). Questions that cross modal boundaries can be expected to go wrong.
IMO, Python should only be used for what it was intended for: as a scripting language. I tend to use it as a kind of middle ground between shell scripting and compiled languages like Rust or C. It's a truly phenomenal language for gluing together random libraries and data formats, and whenever I have some one-off task where I need to request some data from some REST API, build a mapping from the response, categorize it, write the results as JSON, then push some result to another API -- I reach for Python.
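For concreteness, here's a minimal sketch of the kind of one-off glue script I have in mind (the endpoints, field names, and the categorization rule are all invented for illustration):

    import json
    import urllib.request

    # Hypothetical endpoints, purely for illustration.
    SOURCE_API = "https://api.example.com/v1/items"
    SINK_API = "https://api.example.com/v1/summaries"

    # Request some data from one REST API.
    with urllib.request.urlopen(SOURCE_API) as resp:
        items = json.load(resp)  # assume a list of {"id": ..., "value": ...} records

    # Build a mapping from the response and categorize it with a throwaway rule.
    by_category = {}
    for item in items:
        category = "big" if item.get("value", 0) > 100 else "small"
        by_category.setdefault(category, []).append(item["id"])

    # Write the intermediate result as JSON.
    with open("categorized.json", "w") as f:
        json.dump(by_category, f, indent=2)

    # Push a summary to another API.
    payload = json.dumps({k: len(v) for k, v in by_category.items()}).encode()
    req = urllib.request.Request(SINK_API, data=payload,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        print(resp.status)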
But as soon as I have any suspicion that the task is going to perform any non-trivial computation, or when I notice the structure of the program starts to grow beyond a couple of files, that's when Python no longer feels suitable to the task.
Mathematical notation evolved to its modern state over centuries. It's optimized heavily for its purpose. Version numbers? You're being facetious, right?
Without version numbers, it has to stay backwards-compatible, which makes it difficult to remove cruft. What would programming be like if all the code you ever wrote had to keep running as IBM mainframe assembly?
Tau is a good case study. Everyone seems to agree tau is better than pi. How much adoption has it seen? Is this what "heavy optimization" looks like?
It took hundreds of years for Arabic numerals to replace Roman numerals in Europe. A medieval mathematician could have truthfully said: "We've been using Roman numerals for hundreds of years; they work fine." That would've been Stockholm syndrome. I get the same sense from your comment. Take a deep breath and watch this video: https://www.youtube.com/watch?v=KgzQuE1pR1w
>You're being facetious, right?
I'm being provocative. Not facetious. "Strong opinions, weakly held."
> Without version numbers, it has to be backwards-compatible
If there’s one thing that mathematical notation is NOT, it’s backwards compatible. Fields happily reuse symbols from other fields with slightly or even completely different meanings.
÷ (division sign)
Widely used for denoting division in Anglophone countries, it is no longer in common use in mathematics and its use is "not recommended". In some countries, it can indicate subtraction.
~ (tilde)
1. Between two numbers, either it is used instead of ≈ to mean "approximately equal", or it means "has the same order of magnitude as".
2. Denotes the asymptotic equivalence of two functions or sequences.
3. Often used for denoting other types of similarity, for example, matrix similarity or similarity of geometric shapes.
4. Standard notation for an equivalence relation.
5. In probability and statistics, may specify the probability distribution of a random variable. For example, X∼N(0,1) means that the distribution of the random variable X is standard normal.
6. Notation for proportionality. See also ∝ for a less ambiguous symbol.
> Fields happily reuse symbols from other fields with slightly or even completely different meanings.
Symbol reuse doesn't imply a break in backwards compatibility. As you suggest with "other fields", context allows determining how the symbols are used. It is quite common in all types of languages to reuse symbols for different purposes, relying on context to identify what purpose is in force.
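To put that in programming terms (a throwaway Python sketch, not something from the thread), the same symbol resolves to different operations purely from context:

    # The same symbol means different things depending on what surrounds it.
    print(2 + 3)            # integer addition        -> 5
    print("2" + "3")        # string concatenation    -> 23
    print([2] + [3])        # list concatenation      -> [2, 3]
    print(7 % 3)            # modulo on numbers       -> 1
    print("%d apples" % 3)  # old-style formatting    -> 3 apples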
Backwards incompatibility means that something from the past can no longer be used with modern methods. Mathematical notation from long ago doesn't look much like what we're familiar with today, but we can still make use of it. It wasn't rendered inoperable by modern notation.
> Mathematical notation from long ago doesn't much look like what we're familiar with today, but we can still make use of it.
But few modern mathematicians can understand it. Given enough data, they can figure out what it means, but that’s similar to (in this somewhat weak analogy) running code in an emulator.
What we can readily make use of are mathematical results from long ago.
> Given enough data, they can figure out what it means
Right, whereas something that isn't backwards compatible couldn't be figured out no matter how much data is given. Consider this line of Python:
print(5 / 2)
There is no way you can know what the output should be. That is, unless we introduce synthetic context (i.e. a version number). Absent synthetic context we can reasonably assume that natural context is sufficient, and where natural context is sufficient, backwards compatibility is present.
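For anyone who hasn't been bitten by it, the two readings that line hides:

    # Under Python 2, "/" between two ints is floor division: prints 2
    # Under Python 3, "/" is true division: prints 2.5
    print(5 / 2)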
> What we can readily make use of are mathematical results from long ago.
To some degree, but mostly we've translated the old notation into modern notation for the sake of familiarity. And certainly a lot of programming that gets done is exactly that: Rewriting the exact same functionality in something more familiar.
But like mathematics, while there may have been a lot of churn in the olden days, when nothing existed before it and everyone was trying to figure out what works, programming notation has largely settled on what is familiar, with reasonable stability, and will no doubt only find greater stability as it matures.
Mathematical notation isn't at all backwards compatible, and it certainly isn't consistent. It doesn't have to be, because the execution environment is the abstract machine of your mind, not some rigidly defined ISA or programming language.
> Everyone seems to agree tau is better than pi. How much adoption has it seen?
> It took hundreds of years for Arabic numerals to replace Roman numerals in Europe.
What on earth does this have to do with version numbers for math? I appreciate this is Hacker News and we're all just pissing into the wind, but this is extra nonsensical to me.
The reason math is slow to change has nothing to do with backwards compatibility. We don't need to institute Math 2.0 to change mathematical notation. If you want to use tau right now, the only barrier is other people's understanding. I personally like to use it, and if I anticipate its use will be confusing to a reader, I just write `tau = 2pi` at the top of the paper. Still, others have their preference, so I'm forced to understand papers (i.e. the vast majority) which still use pi.
Which points to the real reason math is slow to change: people are slow to change. If things seem to be working one way, we all have to be convinced to do something different, and that takes time. It also requires there to actually be a better way.
> Is this what "heavy optimization" looks like?
I look forward to your heavily-optimized Math 2.0 which will replace existing mathematical notation and prove me utterly wrong.