The danger of short-form video is that the format lets the algorithm designer artificially maximize the reward for minimal effort from the viewer. It doesn't matter whether you start out watching kitten videos. After a month of casual watching, chances are you'll end up watching some addictive stream for hours with little effort. It could be an endless feed of Buddhist monks talking about suffering, if someone likes that kind of thing. The format is simply designed to be addictive, with a crazy-high reward/effort ratio.
What you are obsessing over is the writer's style, not the substance. How sure are you that they outsourced the thinking to LLMs? Do you assume LLMs produce junk-level content that contributes to human brain rot? What if their output is of higher quality, the way machine play is in the game of Go? Wouldn't you rather study their writing then?
Writing is thinking, so they necessarily outsourced their thinking to an LLM. As far as the quality of the writing goes, that's a separate question, but we are nowhere close to LLMs being better, more creative, and more interesting writers than even just decent human writers. But even if we were, it wouldn't change the perversion inherent in using an LLM here.
Have you considered the case where English might not be the authors' first language? They may have written a draft in their mother tongue and merely translated it using an LLM. The style may not be to many people's liking, but this is a technical manuscript, and I would think the novelty of the ideas is what matters here, more than the novelty of the prose.
I agree with the "writing is thinking" part, but I think most would agree LLM output is at least "eloquent", and that even native speakers can benefit from reformulation.
This is _not_ to say that I'd suggest LLMs should be used to write papers.
> What you are obsessing over is the writer's style, not the substance
They aren't; they're boring stylistic tics that suggest the writer did not write the sentence.
Writing is both a process and an output. It's a way of processing your thoughts and forming an argument. When you don't do any of that and get an AI to create the output without the process, it's obvious.
Are you saying the human brain is similarly vulnerable to well-crafted fakes? Does that mean any intelligence (human or non-human) needs a large amount of generally factual data to discern facts from fakes, which would be an argument for AIs that can accumulate huge swaths of factual data?
I feel like you're trying to twist my words into something they don't resemble at all.
I'm not saying anything is vulnerable to anything. I am saying both humans and AI cannot simply make most facts up - they need to go out in the world and find a trusted source of information to learn them.
It is an argument neither for nor against the idea that something you want to call "AI" could accumulate huge swaths of factual data; it is merely an argument that you cannot "bootstrap" huge swaths of factual data from nothing, the same way you cannot literally pull yourself up by your bootstraps. If you want the information, you have to collect it from the environment.
Agreed in principle, but has anyone seen any practical difference between these DNS services? What, more concretely, would be the downside of using these in parallel, with the ISP default only as a fallback?
Some of them are so privacy-preserving that they won't forward your network's location (the EDNS Client Subnet) to the authoritative DNS server, so geo-aware CDNs pick servers near the resolver instead of near you, and you get slower connections to the site.
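A rough way to see this for yourself, assuming 'dig' is installed (the resolver and hostname below are only examples, not a recommendation): compare answers with and without an EDNS Client Subnet hint.

    # hint a client subnet explicitly, as an ECS-forwarding resolver would
    dig @8.8.8.8 cdn-hosted.example.com +subnet=203.0.113.0/24 +short

    # suppress ECS, which is roughly what a strict privacy resolver does
    dig @8.8.8.8 cdn-hosted.example.com +subnet=0.0.0.0/0 +short

If the two answers differ, that name is being geo-targeted, and hiding your subnet changes which servers you get sent to.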
The Wikipedia article on consciousness opens with an interesting line: "Defining consciousness is challenging; about forty meanings are attributed to the term."
Perhaps "consciousness" is just a poor term to use in a scientific discussion.
In Portuguese, the accent marks indicate that a syllable is stressed, and also mark alternate ways of pronouncing the vowels. E.g. "país" is stressed on the "i" and means "country", while "pais" is stressed on the "a" and means "parents". The tilde (~) indicates that the vowel is nasal, e.g. the "ã" in "São Paulo" sounds like the "u" in "sun"; the default sound of "a" in Portuguese is the same as in "car".
Because you know the stressed syllable just by looking at the word. Take "desert" and "dessert": do we say DES-ert or des-ERT? Also, in Portuguese at least, I can tell which "e" sound [1] each "e" in a word makes by knowing this (well, almost, not completely, but much better than in English).
We need to balance the benefits and the downsides of limited liability in corporations. If innovation is no longer beneficial to society as a whole and only benefits a small number of people, perhaps society needs to reconsider the concept.
The answer says "For rings in which division by 2 is permitted". Does the same constraint apply to AlphaEvolve's algorithm?
Edit 2: Z_2 has characteristic 2.
Edit: AlphaEvolve claims it works over any field with characteristic 0. It appears Waksman's algorithm could be existing prior work. From the AlphaEvolve paper: "For 56 years, designing an algorithm with fewer than 49 multiplications over any field with characteristic 0 was an open problem. AlphaEvolve is the first method to find an algorithm to multiply two 4 × 4 complex-valued matrices using 48 multiplications."
If you don't want to allow division by 2, there is Winograd's algorithm from 1967, which works over any commutative ring and uses 48 multiplications for 4 × 4.
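For context on those counts (my own back-of-the-envelope arithmetic, not from the paper):

    naive 4x4 multiplication:                   4^3 = 64 multiplications
    Strassen (1969), applied recursively
      to 2x2 blocks of 2x2 blocks:              7^2 = 49 multiplications
    Winograd (1967) / AlphaEvolve:                    48 multiplications

Strassen's 49 is presumably where the "fewer than 49 ... for 56 years" in the quote comes from.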
Immediately take them out of the oven and store them in the smallest airtight container you have. Obviously they'll absorb the humidity already in the container, plus whatever is introduced any time you open it. Ideally, keep them in containers with an excellent seal and minimal internal volume, like quality ESD bags.
I don't think I've ever seen an antistatic bag with a very good seal, and I'm not sure it's a good idea to drop something directly out of a hot oven into them either.
If they're not getting hydrated slowly, they're not serving any purpose. The whole point is that water goes into them instead of whatever you're trying to keep dry.
I think the grandparent comment meant keeping them unhydrated during storage (for future use in emergency-drying electronics), not while they are actively being used for their intended purpose.
Other people are suggesting the microwave rather than the oven. To my mind it seems very possible that you don't keep them from hydrating at all; you just dehydrate them on demand.
Wouldn't you say the same thing about most people? Most people suck at verifying truth and reasoning. Even "intelligent" people make mistakes based on their biases.
I think LLMs are at least more receptive to the idea that they may be wrong, and based on that, we could have N diverse LLMs argue more peacefully and build a more reliable consensus than N "intelligent" people would.
The difference between a person and a bot is that a person has a stake in the outcome. A bot is like a person who's already put in their two weeks' notice and doesn't have to be there to see the outcome of their work.
Even if it were a consensus opinion among all HN users, which hardly seems to be the case, it would have little impact on the other billion-plus potential customers…
The issue is that most people, especially when prompted, can provide their level of confidence in the answer or even refuse to provide an answer if they are not sure. LLMs, by default, seem to be extremely confident in their answers, and it's quite hard to get the "confidence" level out of them (if that metric is even applicable to LLMs). That's why they are so good at duping people into believing them after all.
> The issue is that most people, especially when prompted, can provide their level of confidence in the answer or even refuse to provide an answer if they are not sure.
People also pull this figure out of their ass, over- or under-trust themselves, and lie. I'm not sure self-reported confidence is that interesting compared to "showing your work".
How is this a counterargument to the point that LLMs are marketed as having intelligence when it's more accurate to think of them as predictive models? The fact that humans are also flawed isn't super relevant to a $200/month LLM purchasing decision.
> Wouldn't you say the same thing about most people? Most people suck at verifying truth and reasoning. Even "intelligent" people make mistakes based on their biases.
I think there's a huge difference because individuals can be reasoned with, convinced they're wrong, and have the ability to verify they're wrong and change their position. If I can convince one person they're wrong about something, they convince others. It has an exponential effect and it's a good way of eliminating common errors.
I don't understand how LLMs will do that. If everyone stops learning and starts relying on LLMs to tell them how to do everything, who will discover the mistakes?
Here's a specific example. I'll pick on LinuxServer since they're big [1], but almost every 'docker-compose.yml' stack you see online will have a database service defined like this:
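(Sketching the pattern from memory rather than quoting their actual file; the service and image names below are illustrative.)

    services:
      app:
        image: lscr.io/linuxserver/someapp   # illustrative app image
        ports:
          - "8080:8080"                      # web UI, reasonable to publish
      db:
        image: mariadb:10.11
        environment:
          MYSQL_ROOT_PASSWORD: changeme
        ports:
          - "3306:3306"                      # <-- the habit in question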
Assuming the database is dedicated to that app, and it typically is, publishing port 3306 for the database isn't necessary and is a bad practice because it unnecessarily exposes it to your entire local network. You don't need to publish it because it's already accessible to other containers in the same stack.
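Concretely, a minimal sketch of the safer shape (same illustrative names as above; DB_HOST/DB_PORT stand in for however the app is actually configured):

    services:
      app:
        image: lscr.io/linuxserver/someapp
        environment:
          DB_HOST: db        # reachable by service name on the compose network
          DB_PORT: "3306"
      db:
        image: mariadb:10.11
        environment:
          MYSQL_ROOT_PASSWORD: changeme
        # no 'ports:' entry: other containers in this stack can still reach it,
        # but the rest of your LAN cannot

If you need ad-hoc host access for debugging, 'docker compose exec db mysql ...' works without publishing the port.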
Another Docker-related example would be a Dockerfile using 'apt[-get]' without the '--error-on=any' switch. Pay attention to Docker build files and you'll realize almost no one uses that switch. Omitting it allows silent failures of the 'update' command, so it's possible to build containers with stale package versions if a transient error hits 'update' but the subsequent 'install' still succeeds.
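For instance, a hedged sketch of what that looks like in a Dockerfile (the installed package is just a placeholder):

    # Fail the build if 'apt-get update' hits any repository error, instead of
    # silently continuing with stale package lists.
    RUN apt-get update --error-on=any \
        && apt-get install -y --no-install-recommends curl \
        && rm -rf /var/lib/apt/lists/*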
There are tons of misunderstandings like that which end up being so common that no one realizes they're doing things wrong. With people, I can do something as simple as posting on HN, and others can see my suggestion, verify that it's correct, and repeat the solution. Eventually the misconception is corrected, and those paying attention know to ignore the mistakes in all the old internet posts that will never be updated.
How do you convince ChatGPT the above is correct and that it's a million posts on the internet that are wrong?
Wow. I can honestly say I'm surprised it makes that suggestion. That's great!
I don't understand how it gets there, though. How does it "know" that's the right thing to suggest when the majority of the online documentation gets it wrong?
I know how I do it. I read the Docker docs, I see that publishing that port doesn't look necessary, I spin up a test, and I verify my theory. AFAIK, ChatGPT isn't running tests to verify assumptions like that, so I wonder how it separates correct from incorrect.
I suspect there is a solid corpus of advice online that mentions the exposed-port risk, alongside the flawed examples you mentioned. A narrow request will trigger the right response. That's why LLMs still require a basic understanding of what exactly you plan to achieve.
Yeah, most people suck at verifying truth and reasoning. But most information technology employees, above intern level, are highly capable of reasoning and making decisions in their area of expertise.
Try asking an LLM complex questions in your area of expertise. Interview it as if you needed to be confident that it could do your job. You'll quickly find out that it can't do your job, and isn't actually capable of reasoning.