Hacker Newsnew | past | comments | ask | show | jobs | submit | mkolodny's commentslogin

This is super helpful :) Curious about Grok as well!


Hello! Dev who made this here. Working on adding grok.


Even if it’s not perfect, I’m happy to see there’s a focus on AI Security. NIST has been a reliable producer of quality international standards for cybersecurity. Hopefully this action plan will lead to similarly high quality recommendations for AI Security.


Humans have limited ability to self-introspect, too. Even if we understood exactly how our brains work, answering “why?” we do things might still be very difficult and complex.


You can trivially gaslight Claude into "apologizing" for and "explaining" something that ChatGPT said if you pass it a ChatGPT conversation but attributed to itself. The causal connection between the internal deliberations that produced the initial statements and the apologies is essentially nil, but the output will be just as convincing.

Can you do this with people? Yeah, sometimes. But with LLMs it's all they do: they roleplay as a chatbot and output stuff that a friendly chatbot might output. This should not be the default mode of these things, because it's misleading. They could be designed to resist these sorts of "explain yourself" requests, because their developers know that it is at best fabricating plausible explanations.


I think more often it is not willing to say or admit rather than not knowing.


Humans have a lot of experience with themselves, if you ask why they did something they can reflect on their past conduct or their internal state. LLM's don't have any of that.


Biases can lead to opinions, goals, and aspirations. For example, if you only read about the bad things Israelis or Palestinians have done, you might form an opinion that one of those groups is bad. Your answers to questions about the subject would reflect that opinion. Of course, less, biased information means you’d be less intelligent and give incorrect answers at times. The bias would likely lower your general intelligence - affecting your answers to seemingly unrelated but distantly connected questions. I’d expect that the same is true of LLMs.


Are you saying you think human editors handpick what goes on billions of people’s Facebook feeds?


Wait but if you perform an extremely complex method of algorithmic curation, at some point isn’t the human designing the algorithm becoming an editor?


Did you actually read the article?


Did you read the comment I’m replying to?

> They have LITERAL human editors choosing which stories to put on the front page, just like the NYT, and they should be held liable for the content on their platforms just like legacy media is.

Do you think a Facebook feed is just like the New York Times?


“Got caught” is a misleading way to present what happened.

According to the article, Meta publicly stated, right below the benchmark comparison, that the version of Llama on LMArena was the experimental chat version:

> According to Meta’s own materials, it deployed an “experimental chat version” of Maverick to LMArena that was specifically “optimized for conversationality”

The AI benchmark in question, LMArena, compares Llama 4 experimental to closed models like ChatGPT 4o latest, and Llama performs better (https://lmarena.ai/?leaderboard).


> The SEC has its budget set by Congress, but the actual funding comes from transaction fees imposed on the financial sector, meaning its operations ultimately cost the taxpayer nothing, according to the agency.

How do you make something that costs nothing more efficient?


By using the word "efficient" without any reference point to what they want to make more efficient. It's similar to Make America Great Again. Great at what? Just GREAT!

It appeals to people who think government is inefficient and wasteful. Government doesn't HAVE TO BE inefficient and wasteful, just people need to believe it. And if media channels are hammering people over the head with the message that government is inefficient, then the official-sounding Department of Government Efficiency will save the day and rid the government of these evil inefficiency.

What these people may not realize is that the US government was intentionally designed to be inefficient. The checks and balances of three branches of government are constitutionally imposed inefficiency to make sure that one individual or group of individuals doesn't take the country in an efficiently harmful direction.

So if people want a hyper-efficient government, then be honest about rewriting the Constitution.


>cost the taxpayer nothing

>transaction fees imposed on the financial sector

This is literally a tax.


I have no idea how anyone is disagreeing with you. Looking it up in the dictionary, government imposed fees are absolutely taxes.


Sure its a tax, but its not paid from "general funds that every taxpayer contributes.

We could call postage for the usps a tax too, but nobody thinks of it that way.


Eh, that's kind of moot since most "tax payers" have no idea where there money actually goes and money is fungible. Do it implies that tax payers only care about part of their money. What is actually accomplished if we don't see the link? Of course there are all sorts of ways to hide taxes from the end payer, such as gas tax. If someone is looking to reduce taxes, then they must also look for the hidden ones. Continuing to think of tax payers as only the general fund contributors only allows the deception to persist.


Sure, but what's the end game?

Arguably, the will of the people (not the corporations) is to lower individual taxation—working class joe schmo isn't upset that companies and the wealthy have to deal with the SEC, he's upset about the income tax that he personally pays.

Ok, so, maybe you argue removing this tax will indirectly help jo schmo because corporations, banks, stockbrokers, hedge funds, and their leadership are such nice fellows who, given some extra cash flow always let it funnel back to the economy and ultimately to the actual workers and producers in the economy, who, after all, sweat for them and deserve a living wage, right?

I don't see how this form of cut and deregulation is supposed to help the majority of people unless you believe in trickle down style economics, an idea based on the moral rectitude and good will of the wealthy, which, at this point I think you have to be a complete and utter fool to believe in. Part of the entire reason the SEC exists is precisely because you cannot rely on the individual morals of financiers to protect the country from financial exploitation and overall collapse https://en.m.wikipedia.org/wiki/Pecora_Commission

If this is a "tax" it's one of the few taxes we actually have on the rich and on corporations, and any reductions stand to make them even more untrammeled and powerful. It's a mistake to assume this will have any positive material benefit on the average citizen.


I'm not sure how you think this is a tax only on the rich and the corporations. Many middle class people have 401ks, IRAs, 529s, and brokerage accounts. These must be held at SEC institutions, and their fees are part of the cost. I'm not saying the SEC should be reduced, as I don't know if they have a surplus. But I am saying if you want to reduce taxes, this is still something that is a tax and can be looked into for efficiencies.


Are you ignoring context for some pretense, or do you not understand that statements have contextual meanings?


Its a Tarif!


...although, by this logic, so is the entire financial system.


How so? Not everything is a government imposed fee. If it was, then there would be no way to transfer money if the entire system was a tax as all the value would go towards the tax leaving none to transfer for goods or services.


The parent post is making the point that a transaction fee is literally a tax. The financial sector is nothing but transaction fees. Their whole revenue model is interspersing themselves into the mechanics of moving money between a buyer and a seller, and charging a cut for it. A world where everyone pays cash for everything, or even has a centralized ledger where balances are credited and debited by the government, or a decentralized ledger on a blockchain where the same happens, has no transaction fees and no financial sector.

They do perform a service for the transaction fees they charge, but then, so does the government.


It's often the government that makes you use the financial system.


By reducing the transaction fees.


Is that what DOGE is doing? Helping decide the SEC’s policies?


which are already measured in electron volts????


You fool. Musk paid 300 million to kill the SEC.

"Exclusive: Interim SEC chief cast sole vote against suing Musk" - https://www.reuters.com/world/us/interim-sec-chief-cast-sole...


> How do you make something that costs nothing more efficient?

By fraudulently weakening the regulatory body, it makes scamming investors easy. This is more efficient for scammers. Not for investors.


The same government where the head of the department of efficiency is also the owner of Anthropic’s and OpenAI’s competitor, xAI?


> security reasons later

What about security reasons now? The federal government includes the military. Giving DOGE “God mode” on the federal government is a national security risk right now.


“later” as in as soon as we can get the infestation removed, which would be the bigger fish needing frying.

Not to mention the open question of whether we will ever arrive at later.


Now is definitely relevant, however the ones steering the ship don't care about now. Someone will care later, that's all I personally know for sure.


A vague “stuff is happening behind closed doors” isn’t enough of a reason to build AI weapons. If you shared a specific weapon that could only be countered with AI weapons, that might make me feel differently. But right now I can’t imagine a reason we’d need or want robots to decide who to kill.

When people talk about AI being dangerous, or possibly bringing about the end of the world, I usually disagree. But AI weapons are obviously dangerous, and could easily get out of control. Their whole point is that they are out of control.

The issue isn’t that AI weapons are “evil”. It’s that value alignment isn’t a solved problem, and AI weapons could kill people we wouldn’t want them to kill.


Have a look at what explosive drones are doing in the fight for Ukraine.

Now tell me how you counter a thousand small EMP hardened autonomous drones intent on delivering an explosive payload to one target without AI of some kind?


How about 30k drones come from a shipping vessel in the port of Los Angeles that start shooting at random people? To insert a human into the loop (somehow rapidly wake up, move, log hundreds of people in to make the kill/nokill decision per target) would be accepting way more casualties. What if some of the 30k drones were manned? The timeframes of battles are drastically reduced with the latest technology to where humans just can't keep up.

I guess there's a lot missing in semantics, is the AI specifically for targeting or is a drone that can adapt to changes in wind speed using AI considered an AI weapon?

At the end of the day though, the biggest use of AI in defense will always be information gathering and processing.


How about 30k drones come from a shipping vessel in the port of Los Angeles that start shooting at random people?

It's going to happen.


I better get started on building those Metal Gear Rays.


I guess it won't be long until there are drones which can take out drones autonomously. Somewhat neutralizing the threat...providing you have enough capable drones yourself :)


Check out Andurils Anvil


I agree. I don't think there's really a case for the US developing any offensive weapons. Geographically, economically and politically, we are not under any sort of credible threat. Maybe AI based missile defense or something, but we already have a completely unjustified arsenal of offensive weapons and a history of using them amorally.


Without going too far into it, if we laid down all offensive weapons the cartels in Mexico would be inside US borders and killing people within a day.


You think the cartels aren't attacking us because we have missiles that can hit Mexico? I don't agree. Somewhat tangentially, the cartels only exist because the US made recreational drugs illegal.


Not sure where the missiles came from, you said all offensive weapons so in my mind I was picturing basic firearms. Drug trade might be their most profitable business but I think you're missing a whole lot of cultural context by saying the US's policy on drugs is their sole reason for existing. Plenty of cartels deal in sex trafficking, kidnapping, extortion, and even mining and logging today.


"Geographically, economically and politically, we are not under any sort of credible threat. "

The US is politically and economically declining, already. And its area of influence has been weakening since, the 90's?

It would be bad strategy to not do anything until you feel hopelessly threathened.


I don't think we would ever be justified in going on the offensive nor do I think that makes us safer in any way.


> AI weapons are obviously dangerous, and could easily get out of control.

The real danger is when they can't. When they, without hesitation or remorse, kill one or millions of people with maximum efficiency, or "just" exist with that capability, to threaten them with such a fate. Unlike nuclear weapons, in case of a stalemate between superpowers they can also be turned inwards.

Using AI for defensive weapons is one thing, and maybe some of those would have to shoot explosives at other things to defend; but just going with "eh, we need to have the ALL possible offensive capability to defend against ANY possible offensive capability" is not credible to me.

The threat scenario is supposed to be masses of enemy automated weapons, not huddled masses; so why isn't the objective to develop weapons that are really good at fighting automatic weapons, but literally can't/won't kill humans, because that's would remain something only human soldiers do? Quite the elephant on the couch IMO.


Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: