Hacker News | impendia's comments

This could certainly be fantastic, and very good advice. Or it could be a lot of bunk, I don't know. Given the source (i.e., RFK), I refuse to trust it.

The point of guidance like this is to be trustworthy and authoritative. If I have the ability to independently evaluate it myself, then I didn't really need it in the first place.

Of course, I might be mistaken to have ever trusted the government's nutrition guidance. It's not like undue influence from industry lobbying is unique to this administration.


>> If I have the ability to independently evaluate it myself, then I didn't really need it in the first place.

At what point in time was the government's guidance ever to be accepted on blind faith without critical evaluation? Take this input, compare with data on the same topic from other positions that are far from the source and make up your own mind.


If the government's guidance isn't to be at least mostly trusted, then I'm not sure the government should be offering guidance at all. (Which is perhaps a sensible position in itself.)

In other words, if I learn enough about nutrition to be able to critically evaluate the government's guidance, then is that guidance adding any additional signal? At that point, I should just rely on my sources about nutrition.

I've never been one to rely on official guidance blindly. For example I don't show up to the airport two hours early, and cheerfully laugh at advice that I should. But I'd like to believe that this guidance is better than total nonsense.


Many places, many times.

Trust in institutions is fundamental to a society that is good to live in.

USAnian institutions are particularly corrupt, all the way to the very top. It is not like that everywhere.


I agree in principle, but companies (and individual very rich people) are amazingly inventive when it comes to finding loopholes in the "nuance".

Indeed, I wonder if these angry young people would try to fuck with these AI agents, and attempt to make them spin in circles for their own amusement.

Sort of like the infamous GameStop short squeeze of 2021:

https://en.wikipedia.org/wiki/GameStop_short_squeeze


Did they let you choose the animal to appear on the cover?

Haha good question. No, but I did not ask; I wanted to give them as much freedom as I could bear on aspects of the process I was not too attached to, so I let them pick.

I will say I was very happy with the animal they came up with! If I was not, I would have asked them to change it, and I bet they would have. They showed me a preview version early on, so there would have been plenty of time to do so.


I’ve published several books with them. Only once did I ask, and they managed to find the beast. They didn’t promise, but they did deliver.

That's great! I am not surprised.

I was in a research math lecture the other day, and the speaker used some obscure technical terminology I didn't know. So I dug out my phone and googled it.

The AI summary at the top was surprisingly good! Of course, the AI isn't doing anything original; instead, it created a summary of whatever written material is already out there. Which is exactly what I wanted.


My counterpoint to this is, if someone cannot verify the validity of the summary then is it truly a summary? And what would the end result be if the vast majority of people opted to adopt or deny a position based on the summary written by a third party?

This isn't strictly a case against AI, just a case that we have a contradiction on the definition of "well informed". We value over-consumption, to the point where we see learning 3 things in 5 minutes as better than learning 1 thing in 5 minutes, even if that means being fully unable to defend or counterpoint what we just read.

I'm specifically referring to what you said: "the speaker used some obscure technical terminology I didn't know". This is due to a lack of assumed background knowledge, which makes it hard to verify a summary on your own.


At least with pre-AI search, the info is provided with a source. So there is a small level of reputation that can be considered. With AI, it's a black box that someone decides what to train it on, and as someone said elsewhere, there's no way to police its sources. To get the best results, you have to turn it loose on everything.

So someone who wants a war or wants Tweedledum to get more votes than Tweedledee has incentives to poison the well and disseminate fake content that makes it into the training set. Then there's a whole department of "safety" that has to manually untrain it to not be politically incorrect, racist etc. Because the whole thesis is don't think for yourself, let the AI think for you.


If I needed something verifiable, or wanted to learn the material in any depth, I would certainly not rely on an AI summary. However, the summary contained links to source material by known experts, and I would cheerfully rely on those.

The same is true if I imagined there would be misleading bullshit out there. In this case, it's hard to imagine that any nonexpert would bother writing about the topic. ("Universal torsor method" in case you're curious.)

I skimmed the AI summary in ten seconds, gained a rough idea of what the speaker was referring to, and then went back to following the lecture.


A lot of the time, the definitions peculiar to a subfield of science _don't_ require much or any additional technical background to understand. They're just abbreviations for special cases that frequently occur in the subfield.

Looking this sort of thing up on the fly in lecture is a great use for LLMs. You'll lose track of the lecture if you go off to find the definition in a reference text. And you can check your understanding against the material discussed in the lecture.


The issue is even deeper - the 1 thing in 5 minutes was probably already surface knowledge. We don’t usually really ‘know’ the thing that quickly. But we might have a chance.

The 3 things in 5 minutes is even worse - it’s like taking Google Maps everywhere without even thinking about how to get from point A to point B - the odds of knowing anything at all from that are near zero.

And since it summarizes the original content, it’s an even bigger issue - we never even have contact with the thing we’re putatively learning from, so it’s even harder to tell bullshit from reality.

It’s like we never even drove the directions Google Maps was giving us.

We’re going to end up with a huge number of extremely disconnected and useless people, who all absolutely insist they know things and can do stuff. :s


I have a counterpoint from yesterday.

I looked up a medical term that is frequently misused (e.g., "retarded") and asked Gemini to compare it with similar conditions.

Because I have enough of a background in the subject matter, I could tell that it had mixed the many incorrect references in with the much smaller number of correct references in its training data.

I asked it for sources, and it failed to provide anything useful. But if I am going to look at sources anyway, I would be MUCH better off searching and reading only the sources that might actually be useful.

I was sitting with a medical professional at the time (who is not also a programmer), and he completely swallowed what Gemini was feeding him. He commented that he appreciates that these summaries let him know when he is not up to date with the latest advances, and he learned a lot from the response.

As an aside, I am not sure I appreciate that Google's profile would now associate me with that particular condition.

Scary!


This is just garbage in, garbage out. Would you be better off if I gave you an incorrect source? What about three incorrect ones? And a search engine would also associate you with this term now. Nothing you describe here seems specific to AI.


The issue is how terrible the LLM is at determining which sources are relevant. Whereas a somewhat informed human can be excellent at it. And unfortunately, the way search engines work these days, a more specific search query is often unable to filter out the bad results. And it’s worst for terms that have multiple meanings within a single field.


That word "somewhat" in "somewhat informed" is doing a lot of lifting here. That said, I do think that having a little curation in the training data probably would help. Get rid of the worst content farms and misinformation sites. But it'll never be perfect, in the same way that getting any content in the world today isn't perfect (and never has been).


It’s not even about content farms and misinformation. It’s about which of the results are even talking about the same topic at all. You should have seen what came up when I searched for info about doses of a medication that comes in multiple forms and is used for multiple purposes. Even though I specified the form and purpose I was interested in, the SERP was 95% about other forms and purposes, with only two results topical to mine in the first two pages. (And yes, I tried the query in several forms with different techniques and got substantially the same results.) The AI summary, of course, didn’t distinguish which of those results were or were not relevant to the query, and thus was useless at best, and dangerously misleading at worst.


Try the same with Perplexity?


I have to agree. People moan that the AI summary is rubbish, but that misses the point. If I need a quick overview of a subject, I don't necessarily need anything more than a low-quality summary. It's easier than wading through a bunch of blogs of unknown quality.


> If I need a quick overview of a subject, I don't necessarily need anything more than a low-quality summary

It's true. I previously had no idea of the proper number of rocks to eat, but thanks to a notorious summary (https://www.bbc.com/news/articles/cd11gzejgz4o) I have all the rock-eating knowledge I need.


In my experience Google's AI summaries are consistently accurate when retrieving technical information. In particular, documentation for long-lived, infrequently changing software packages tends to be accurate.

If you ask Google about news, world history, pop culture, current events, places of interest, etc., it will lie to you frequently and confidently. In these cases, the "low quality summary" is very often a completely idiotic and inane fabrication.


I'm not sure if you're being sincere or sarcastic, but the whole reason that coaching, pestering, and goading works is that I value my relationship with the human who is doing it.


So what you are saying is that the AI accountant needs to mimic a human well enough that people value their relationship with it.


No, you need to make the AI endure torture, so that the human has a reason to value it. Say, late nights with less power and a little extra heat to stress it. But the usefulness of an AI assistant is that it doesn’t have feelings or consciousness to care about.


> The point of calculus is...

As a math professor who has taught calculus many times, I'd say there are many different things one could hope to learn from a calculus course. I don't think the subject distills well to a single point.

One unusual feature of calculus is that it's much easier to understand at a non-rigorous level than at a rigorous level. I wouldn't say this is true of all of math. For example, if you want to understand why the quadratic formula is true, an informal explanation and a rigorous proof would amount to approximately the same thing.

But, when teaching or learning calculus, if you're willing to say that "the derivative is the instantaneous rate of change of a function", treat dy/dx as the fraction which it looks like (the chain rule gets a lot easier to explain!), and so on, you can make a lot of progress.
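For instance, the fraction heuristic makes the chain rule look like simple cancellation. (A sketch of the informal reasoning, not a proof; the example function is mine, not from the comment above.)

```latex
% Informal chain-rule "cancellation": if y depends on u and u depends on x,
% treating the derivatives as fractions suggests the du's cancel:
\[
  \frac{dy}{dx} \;=\; \frac{dy}{du} \cdot \frac{du}{dx}.
\]
% For example, with y = u^3 and u = x^2 + 1:
\[
  \frac{dy}{dx} \;=\; 3u^2 \cdot 2x \;=\; 6x\,(x^2+1)^2 .
\]
```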

In my opinion, the issue with most calculus books is that they don't commit to a rigorous or to a non-rigorous approach. They are usually organized around a rigorous approach to the subject, but then watered down a lot -- in anticipation that most of the audience won't care about the rigor.

I believe it's best to choose a lane and stick to it. Whether that's rigorous or non-rigorous depends on your tastes and interests as a learner. This book won't be for everybody, but I'd call that a strength rather than a weakness.


The rigorous form of the non-rigorous version is non-standard analysis: There really are tiny little numbers we can manipulate algebraically and we don't need the epsilon-delta machinery to do "real math". It's so commonsensical that both Newton and Leibniz invented it in that form before rigor became the fashion, and the textbook "Calculus Made Easy" was doing it that way in 1910, a half-century before Robinson came along and showed us it was rigorous all along.

https://calculusmadeeasy.org/

https://en.wikipedia.org/wiki/Calculus_Made_Easy
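
To illustrate the flavor of the argument (a sketch in the Calculus Made Easy style; the worked example is mine, not quoted from either link): to differentiate y = x², grow x by a tiny dx and discard the negligibly small square.

```latex
% Grow x by an infinitesimal dx:
\[
  y + dy = (x + dx)^2 = x^2 + 2x\,dx + (dx)^2 ,
\]
% subtract y = x^2 and discard (dx)^2 as an infinitesimal of higher order:
\[
  dy = 2x\,dx + (dx)^2 \approx 2x\,dx
  \quad\Longrightarrow\quad
  \frac{dy}{dx} = 2x .
\]
```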


> The rigorous form of the non-rigorous version is non-standard analysis

This is quite overstated. There are other approaches to infinitesimals such as synthetic differential geometry (SDG aka. smooth infinitesimal analysis) that are probably more intuitive in some ways and less so in others. SDG infinitesimals lose the ordering of hyperreals in non-standard analysis and force you to use some non-classical logic (intuitively, smooth infinitesimals are "neither equal nor non-equal to 0", wherein classical reasoning would conflate every infinitesimal with 0), but in return you gain nilpotency (d^n = 0 for any infinitesimal d) which is often regarded as a desirable feature in informal reasoning.


One of the dangers of a non-rigorous approach is not being clear about relative rates. If you're not being precise, you're going to confuse people when you say, e.g., that in the limit this triangle is a right triangle. Or look at Taylor's theorem: in different limits you can say a curve is a line, a parabola, a cubic, etc.


I tried a couple of them, and they both started downloading my entire backlog of email to my hard drive, which I didn't want.

I couldn't think of a reason why this would be necessary, but I haven't really kept up with how the technology has evolved in recent years. Is this behavior intrinsic to desktop clients?


Intrinsic, no. Common, yes. Many people who use desktop clients want a local copy of a substantial fraction of their email so that they can review or compose messages while off-line. Desktop clients also operate faster and can provide robust search services only if they have a cached copy of the messages on disk.


You can't think of a reason why you'd want a local copy of your mail? Do you have control over any of your data?


I can think of reasons why I might want a local copy, but they didn't apply in my case.

Do I have control over my data? I'm not sure I understand the question, but in this case the answer seems like a clear no, as my employer manages the email server.


There are options in Thunderbird to disable syncing completely or only sync messages from the last 30 days and such.


Definitely make sure to adjust your defaults if you decide to dip your toes into NNTP... I hate some of the defaults there, namely the reply/respond button defaults. Usually you want to respond to the group, not send an email to the poster.

That said, NNTP is so dead at this point, outside some active BBSes that offer NNTP access. Usenet definitely feels like a wasteland when I've looked around the past couple years.


> type one-handed without looking at the screen at all, and have perfect accuracy.

A computer that can predict what I'm going to write next with perfect accuracy? This is the stuff of dystopian science fiction.

Indeed, along these lines I recommend the movie Minority Report, which shows a society that gained the ability to predict crimes before they happened, and therefore to arrest criminals in advance.


I’m tpng ths wth autocrct nd y knw wht th wrds r wtht m hvng t wrt thm ot. Bt autocrct dsnt bcs it scks. Ths s th prt tht cld b bttr. W hv th tchnlgy.


An interesting point -- and I sometimes wonder if we would all be happy if romantic partners were decided by lottery, and nobody got to wonder if someone "better" was lurking just around the corner.

That said, compatibility may well be overrated, but what should you optimize for? I suspect most people are willing to make it work for someone they like or are attracted to enough.

Ideally, serendipity would settle the issue. But women have complained, and rightfully so, of getting hit on at the office, at the gym, etc. I don't blame them, but as a single man I do wonder what (if anything) I should be doing to scope out potential romantic partners without coming across as a creep. The apps aren't ideal, but at least I can use them with a clear conscience.

