> Omnilert later admitted the incident was a “false positive” but claimed the system “functioned as intended,” saying its purpose is to “prioritize safety and awareness through rapid human verification.”
It just created a situation in which a bunch of people with guns were told that some teen had a gun. That's a very unsafe situation that the system created, out of nothing.
And some teen may be traumatized. Again, unsafe.
Incidentally, the article's quotes make this teen sound more adult than anyone who sold or purchased this technology product.
Another false positive by one of these leading content filters schools use - the kid said something stupid in a group chat and an AI reported it to the school, and the school contacted the police. The kid was arrested, strip searched, and held for 24 hours without access to their parents or counsel. They ultimately had to spend time on probation, undergo a full mental health evaluation, and attend an alternative school for a period of time. They are suing Gaggle, who claims they never intended their system to be used that way.
These kinds of false positives are incredibly common. I interviewed at one of their competitors (Lightspeed), and they actually provide a service where humans review all the alerts before they are forwarded to the school or authorities. This is a paid addon, though.
> Gaggle’s CEO, Jeff Patterson, said in an interview that the school system did not use Gaggle the way it is intended. The purpose is to find early warning signs and intervene before problems escalate to law enforcement, he said.
“I wish that was treated as a teachable moment, not a law enforcement moment,” said Patterson.
It's entirely predictable that schools will call law enforcement for many of these detections. You can't sell to schools that have "zero tolerance" policies and pretend that your product won't trigger those policies.
Exactly. In a saner world, we could use fallible AI to call attention to possible concerns that a human could then analyze and make an appropriate judgment call on.
But alas, we don't live in that world. We live in a world where there will be firings, civil liability, and even criminal liability for those who make wrong judgments. If the AI says "possible gun", the human running things sees all upside and no downside in alerting a SWAT team.
Hmm, maybe this generation's version of "nobody ever got fired for buying IBM" will become "nobody ever got fired for doing what the AI told them to do." Maybe humanity is doomed after all.
I can't say that I think it would be a saner world to have the equivalent of a teacher or hall monitor sitting in on every conversation, even if that computer chaperone isn't going to automatically involve the cops. I don't think you can build a better society where everyone is expected to speak and behave defensively in every circumstance as if their words could be taken out of context by a snitch - computer or otherwise.
There is still liability there, and it should be even higher when the decision is to implement such callously bad processes. Doubly so since this has demonstrably happened once.
>we could use fallible AI to call attention to possible concerns that a human could then analyze and make an appropriate judgment call on
That "we could" is doing some very heavy lifting. But even if we pretend it's remotely feasible, do we want to take an institution that already trains submission to authority and use it to normalize ubiquitous "for your own good" surveillance?
At least in the current moment, the increasing turn to using autonomous weaponry against one’s citizens - I don’t think it says so much about humanity as about the US. I think US foreign policy is a disaster, but turning the AI-powered military against the citizenry does look like it’s going to be quite successful, presumably because the US leadership is fighting an enemy incapable of defending itself. I think it’s unsustainable, though, economically speaking. AI won’t actually create value once it’s a commodity itself (since a true commodity has its value baked into its price). Rates of profit will continue to fall. The ruling class will become increasingly desperate in its search for growth. Eventually an economy that resorts to techno-fascism implodes. (Not before things turn quite ugly, of course.)
I do not, in any way, disagree with holding Gaggle accountable for this.
But can we at least talk about also holding the school accountable for the absolutely insane response?
You talk about not selling to schools that have "zero tolerance" policies as if those are an immutable fact of nature that can never be changed, but they are a human thing that has very obvious negative effects. There is no reason we actually have to have "zero tolerance" policies that traumatize children who genuinely did nothing wrong.
"Zero tolerance" for bringing deadly weapons to school, I can understand. So long as what's being checked for is actual deadly weapons, and not just "anything vaguely gun-shaped", or "anything that one could in theory use as a deadly weapon" (I mean, that would include things like "pens" and "textbooks", so...).
"Zero tolerance" for particular kinds of language is much less acceptable. And I say this as someone who is fully in favor of eliminating things like hate speech or threats of violence—you don't do it by coming down like the wrath of God on children for a single instance of such speech, whether it was actually hate speech or not. They are in school; that's the perfect place to be teaching them a) why such speech is not OK, b) who it hurts, and c) how to express themselves without it, rather than just treating them like terrorists.
Holy shitballs. In my experience such paid addons have very cheap labor attached to them, certainly not what you would expect based on the sales pitch.
> The kid was arrested, strip searched, and held for 24 hours without access to their parents or counsel. They ultimately had to spend time on probation, undergo a full mental health evaluation, and attend an alternative school for a period of time.
All he wanted was a Pepsi. Just one Pepsi. And they wouldn't give it to him.
They actually don't have all the rights of a person and they do have those same responsibilities.
If this company was a sole proprietorship, the only recourse this kid would have is to sue the owner, up to bankruptcy.
Since it's a corporation, his recourse is to sue the company, up to bankruptcy.
As for corporations having rights, I can explain it further if necessary but the key understanding is that the singular of "corporations are people" is "a corporation is people" not "a corporation is a person".
You can't put a corporation in prison. But a person you can. This is one of the big problems. The people making the decisions at corporations are shielded from personal consequences by the corporation. A corporation can be shut down but it rarely happens.
Even when Boeing knowingly caused the deaths of hundreds (especially the second crash, which was entirely preventable if they had been honest after the first one), all they got were some fines. Those just end up being charged back to their customers, a big one being the government who fined them in the first place.
> You can't put a corporation in prison. But a person you can. This is one of the big problems.
It really isn't -- we're talking about a category of activities that involves only financial liability or civil torts in the first place, regardless of whether the parties involved are organizations or individuals. You can't put people in prison for civil torts.
Prison is irrelevant to 98% of the discussion here. And the small fraction of cases in the status quo that do involve criminal liability -- even within organizations -- absolutely do assign that liability to specific individuals, and absolutely can involve criminal penalties including jail time. Actual criminal conduct is precisely where the courts "pierce the veil" and hold individuals accountable.
> Even when Boeing knowingly caused the deaths of hundreds (especially the second crash, which was entirely preventable if they had been honest after the first one), all they got were some fines.
All anyone would ever get in a lawsuit is some fines. The matter is inherently a civil one. And if there were any indications of criminal conduct, criminal liability can be applied -- as it often is -- to the individuals who engaged in it regardless of whether they are operating within an organization or on their own initiative.
The only real difference is that when you sue a large corporation, you're much more able to actually collect the damages you win than you would be if you were just suing one guy operating by himself. If the aim of justice is remunerative, not just punitive, then this is a much superior situation.
> Those just end up being charged back to their customers, a big one being the government who fined them in the first place.
Who would be paying to settle the matter in your preferred situation? It sounds like the most likely outcome is that the victims would just eat the costs they've already incurred, since there'd be little chance of collecting damages, and taxpayers would bear the burden of paying for the punishment of whoever ends up holding the hot potato after all the scapegoating and blame deflection plays out.
I'm sure the top leadership was well aware of what happened after the first crash yes. They should have immediately gone public and would have prevented the second crash.
Don't forget that hiding MCAS from pilots and the FAA was a conscious decision. It wasn't something that 'just happened'. The decision to not make it depend on redundant AoA sensors by default too.
My point is, I can imagine that the MCAS suicidal side-effect was something unexpected (it was a technical failure edge-case in a specific and rare scenario) and I get that not anticipating it could have been a mistake, not a conscious decision. But after the first crash they should have owned up to it and not waited for a second crash.
You need a judge and jury for prison sentences for criminal convictions.
If the government decides to prosecute the matter as a civil infraction, or doesn't even bother prosecuting but just has an executive agency hand out a fine, that's not a matter of the corporation shielding people, that's a matter of the government failing to prosecute or secure a conviction.
Unfortunately the company has a big war chest, and I have a small war chest, and I was priced out of court through legal shenanigans and delays the corporation's lawyers could afford.
Just bring back fucking pistol duels. I have a better chance of defending myself there.
If the company is a sole proprietorship, you can sue the person who controls it up to bankruptcy, which will affect their personal life significantly. If the company is a corporation/LLC, you can sue the corporate entity up to the bankruptcy of the corporate entity, while the people controlling the company remain unaffected.
This gets even more perverse. If you're an individual you actually can't just set up an LLC to limit your own liability. There's no manner for an individual to say "I'm putting on a hat and acting solely as the LLC" - rather as the owner you need to find and employ enough judgement-proof patsies that the whole thing becomes a "group project" and you can say you personally weren't aware of whatever problem gave rise to liability. In other words, the very design of corporations/LLCs encourages avoiding responsibility.
You're correct with the nitpick about the Supreme Court's justification, but that justification is still poor reasoning. Corporations are government-created liability shields. How they can direct their employees should be limited, to avoid trampling on those individuals' own natural rights. A person or group of people who want to exercise their personal natural rights through hired employees can always forgo the government-created liability shield and go sole proprietorship / gen partnership.
> If the company is a sole proprietorship, you can sue the person who controls it up to bankruptcy, which will affect their personal life significantly.
I'm sure it will. But how do you collect $30M in damages from a single individual whose entire net worth is e.g. $1M? What if the sole proprietor actually owns no assets whatsoever, because he's set up a bunch of arrangements where he leases everything from third parties, and contracts out his business operations to a different set of third parties, etc.?
I don't get why so many people are so intent on trying to attribute the motivations to maximize one's own take, deflect blame for harm away from themselves, and cover up their questionable activities to some specific organizational model. All of those motivations come from the human beings involved -- they were always present and always will be -- and those same human beings will manipulate whatever rules or institutions are involved to the greatest extent that they can.
Blaming a particular organizational model for the malicious intentions of the people who are just using that model as a tool is a deep, deep error.
> If you're an individual you actually can't just set up an LLC to limit your own liability.
What are you talking about? Of course you can. People do it all the time.
> rather as the owner you need to find and employ enough judgement-proof patsies that the whole thing becomes a "group project" and you can say you personally weren't aware of whatever problem gave rise to liability.
You're conflating entirely unrelated concepts of liability here. Limited liability as it relates to LLCs and corporations is for financial liability. It means that the organization's debts are not the shareholders' debts. It has nothing to do with legal liability for one's own purposeful conduct, whether tortious or criminal.
The kind of liability protection that you think corporations enjoy but single-member LLCs don't -- protection from the liability for individual criminal behavior -- does not exist for anyone at all.
> A person or group of people who want to exercise their personal natural rights through hired employees can always forgo the government-created liability shield and go sole proprietorship / gen partnership.
The ownership structure of a business has nothing at all to do with how it hires employees and directs their activities. The same law of agency and doctrine of vicarious liability applies to all agent-principal relationships regardless of whether the principal is a corporation or a sole proprietorship.
> how do you collect $30M in damages from a single individual
It's not about getting made whole from damages, it's about the incentives for the business owner. A sole proprietor has their own skin fully in the game, whereas an LLC owner does not (only modulo things customarily shielded from bankruptcy like retirement savings and primary dwelling, and asset protection strategies for the extremely rich, like charitable foundations)
> I don't get why so many people are so intent on trying to attribute the motivations to maximize one's own take, deflect blame for harm away from themselves, and cover up their questionable activities to some specific organizational model
Because this specific legal structure (not organizational model, that is orthogonal) is a powerful tool for deflecting blame.
> You're conflating entirely unrelated concepts of liability here... It has nothing to do with legal liability for one's own purposeful conduct, whether tortious or criminal
The point is that these concepts are quite intertwined for small businesses, and only become distinct when there are enough people involved to make a nobody's-fault "group project". Let's say I want to own a piece of rental property and think putting it in an LLC will protect my personal life from all the random things that might happen playing host to other people's lives. Managing one property doesn't take terribly much time so I do it myself. Now it snows, the tenant does a crappy job of shoveling, and someone slips on the sidewalk up front, gets hurt, and sues. Since I'm personally involved in supervising the condition of the property, there is now a theory of personal liability for me that I should have been aware of the poor conditions of the sidewalk. (This same liability applies to the tenant, or anyone that was hired to shovel, but they're usually judgement proof, sympathetic, etc).
Same thing with making repairs to the property, etc - any direct involvement (supplying anything but investment capital) opens up avenues for personal liability, negating the LLC protections.
> The same law of agency and doctrine of vicarious liability applies
The point is that LLC/corporate structures allow for much higher levels of scaling, allowing them to apply higher levels of coercion to their employees. Since these limited liability structures are purely creations of government (rather than something existing outside of government), it's straightforwardly justifiable to regulate what activities they may engage in to mitigate this coercion.
Jail is a great deterrent against criminal conduct. But natural persons are already risking jail when they engage in criminal conduct regardless of whether they're doing so within the scope of an organization or doing so on their own initiative.
Jail isn't on the table for financial liability or civil torts in the first place, and since pretty much all the forms of liability involving commercial conduct we're discussing here are financial liability or civil torts, it's not really relevant to the discussion.
> it is much harder to hold a corporation responsible
In some ways, yes. In most ways, no. In most cases, a massive fine aligns interests. Our problem is we've become weak-kneed at levying massive fines on corporations.
Unlike a person, you don't have to house a corporation to punish it. Your fine simply wipes out the owners. If the enterprise is a going concern, it's born under new ownership. If it's not, its assets are redistributed.
> Jail is a great deterrent for natural persons
Jail works for executives who defraud. We just, again, don't do it. This AI could have been sold by a billionaire sole proprietor, I doubt that would suddenly make the rules more enforceable.
Engineer: hey I made this cool thing that can help people in public safety roles process information and make decisions more efficiently! It gives false positives, but you save more time than it takes to weed through them.
Someone nearby: well what if they use it to replace human thinking instead of augment it?
Engineer: well they would be ridiculous. Nobody would ever think that’s a good idea.
Marketing Team: it seems like this lands best when positioning it as a decision-making tool. Let’s get some metrics on how much faster it is at making decisions than people are.
Sales Rep: ok, Captain, let’s dive into our flagship product, DecisionMaker Pro, the totally automated security monitoring agent…
::6 months later—some kid is being held at gunpoint over snacks.::
Lack of Accountability as-a-Service! Very attractive proposition to negligent and self-serving organizations. The people in charge don't even have to pay for it themselves; they can just funnel the organization's money to the vendor. Encouraging widespread adoption helps normalize the practice. If anyone objects, shut them down as not thinking-of-the-children and something-must-be-done (and every other option is surely too complicated/expensive).
Nice fantasy, but the reality is that the "people in public safety roles" love using flimsy pretenses to harass and abuse vulnerable populations. I wish it was just overeager sales and marketing, but your view of humanity is way too naive, especially as masked thugs are disappearing people in the street as we type.
What? A) The naïveté of the engineer’s perspective was literally the whole point of the story. B) Saying I’m somehow absolving law enforcement by acknowledging other factors is absurd. My childhood best friend was shot and killed by police during a mental health crisis. C) If you think that police malevolence somehow absolves the tech world’s role in making tools for them, that’s as naive as it gets.
Delegating the decision to AI, excluding the human from the "human in the loop", is kind of unexpected as a first step; in general it was expected that the exclusion would start from the other end. As an aside, I wonder how that is going to play out on the battlefield.
For this civilian use case, the next step is AR goggles worn by police, with the AI projecting onto the goggles where that teenager has his gun (kind of Black Mirror style), and the next step after that is obviously excluding the humans even from the execution step.
In any system, there are false positives and false negatives. In some situations (like a high recall disease detection) false negatives are much worse than false positives, because the cost of a false positive is a more rigorous screening.
But in this case both are bad. If it was a false negative students might need therapy for a more tragic reason.
Aside from improving the quality of the detection model, we should try to reduce the “cost” of both failure modes as much as possible. Putting a human in the loop or having secondary checks are ways to do that.
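To put rough numbers on it (every number below is a made-up illustrative assumption, not a measurement of any real product): when the thing you are scanning for is extremely rare, even a detector with a tiny false positive rate produces alerts that are almost all false, which is exactly why the human-in-the-loop step matters.

    # Toy base-rate arithmetic; every number here is an illustrative assumption.
    frames_per_day = 1_000_000   # frames scanned across a district's cameras
    p_real_gun = 1e-7            # fraction of frames that actually show a gun
    recall = 0.99                # detector catches 99% of real guns
    false_positive_rate = 1e-5   # wrongly flags 1 in 100,000 benign frames

    true_alerts = frames_per_day * p_real_gun * recall
    false_alerts = frames_per_day * (1 - p_real_gun) * false_positive_rate
    precision = true_alerts / (true_alerts + false_alerts)

    print(f"expected true alerts per day:  {true_alerts:.2f}")   # ~0.10
    print(f"expected false alerts per day: {false_alerts:.1f}")  # ~10.0
    print(f"chance a given alert is real:  {precision:.1%}")     # ~1.0%

Swap in whatever numbers you think are realistic; with rare events the conclusion barely moves until the false positive rate gets implausibly low.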
In this case false positives are far, far worse than false negatives. A false negative in this system does not mean a tragedy will occur, because there are many other preventative measures in place. And never mind the fact that this country refuses to even address the primary cause of gun violence in the first place: the ubiquity of guns in our society. Systems like this are what we end up with when we refuse to address the problem of guns and choose to deal with the downstream effects instead.
> the primary cause of gun violence in the first place: the ubiquity of guns in our society
I would have gone with “a normalized sense of hopelessness and indignity which causes people to feel like violence is the only way they can have any agency in life” considering “gun” is the adjective and “violence” is the actual thing you're talking about.
Both are true. The underlying oppressive, lonely, pro-bullying culture creates the tension. The proliferation of high lethality weapons makes it more likely that tension will eventually release in the form of a mass tragedy.
Improvement in either area would be a net positive for society. Improvement in both areas is ideal but solving proliferation seems a lot more straightforward than fixing the generally miserable society problem.
To be clear, the false negative here would be a student who has brought a gun to a school and the computer ignores it. That is a situation where potentially multiple people can be killed in a short amount of time. It is not far, far worse to send cops.
Depends on the false positive rate doesn't it. If police are being sent to storm a school every week due to a false positive, that is quite bad. And people will become conditioned to not care about reports of a gun at a school because of all the false positives.
For what I’m saying, no it doesn’t because I’m just comparing a single instance of false positive to a single instance of false negative. Neither is desirable.
> But in this case both are bad. If it was a false negative students might need therapy for a more tragic reason.
Given the probability of police officers in the USA interpreting any action as hostile and then shooting him, a false positive here is the same as swatting someone.
The system here sent the police off to kill someone.
Yep. Think of it as the new exciting version of swatting. Naturally, one will still need to figure out common ways to force a specific misattribution, but, sadly, I think there will be people working on it ( if there aren't already ).
Sure. But school shootings are also common in the US. A student who has brought a gun to a school is very likely not harmless. So false negatives aren’t free either.
Well guns aren’t allowed in schools at all. It’s a felony. So if your point is that the ratio is low, that’s only because the denominator is way too big.
I was swatted once. Girlfriend's house. Someone called 911 and said they'd seen me kill a neighbor, drag their body into the house, and was now holding my gf's family hostage.
We answered the screams at the door to guns pointed at our faces, and countless cops.
It was explained to us that this was the restrained version. We got a knock.
Unfortunately, I understand why these responses can't be neutered too much. You just never know.
In this case, though, you COULD know, COULD verify with a human before pointing guns at people, or COULD not deploy a half finished product in a way that prioritizes your profit over public safety.
Happened to a friend of mine, thanks to an ex GF who said he was on psych meds (true, though he is nonviolent with no history) and that he was threatening to kill his parents. NYPD SWAT no-knock kicked the door down to his apartment, which terrorized his elderly parents as they pointed guns at their son (in his words, "machine guns"). BUT because he has psych issues and is on meds, he was forced into a cop car in front of the whole neighborhood to get a psych evaluation. He only received an apology from the cops, who said they have no choice but to follow procedure.
Do the cops not ever get tired of being fooled like this? Or do they just enjoy the chance to go out in their army-surplus armored cars and pretend to be special forces?
I've had convos with cops about swatting. The good ones aren't happy to go kick down the door of someone who isn't about to harm anyone, but they feel they can't chance making a fatally wrong call when it isn't swatting. They also have procedures to follow, and if they don't, the outcome is on them personally and potentially legally.
As for bad cops, they look for any reason to go act like aggro billy badasses.
This is a really good question. Sadly, the answer is that they think it's how the system is meant to work. Well, that seems to be the answer I see coming from police spokespeople.
It's likely procedure that they have to follow (see my other post in this thread).
I hate to say this but I get it. Imagine a scenario happens where they decide "sounds phony. stand down." only for it to be real and people are hurt/killed because the "cops ignored our pleas for help and did nothing." which would be a horrible mistake they could be liable for, never mind the media circus and PR damage. So they treat all scenarios as real and figure it out after they knock/kick in the door.
To that end, we should all have a cop assigned to us. One cop per citizen, with a gun pointed at our head at all times. Imagine a scenario happens where someone does something and that cop wasn't there? Better to be safe.
I don't think you know how policing works in America. To cops, there are sheep, sheepdogs, and wolves; they are sheepdogs protecting us sheep from the criminals. Nobody needs to watch the sheepdogs!
But lets think about their analogy a little more: sheepdogs and wolves are both canines. Hmm.
Also "funny" how quickly they can reclassify any person as a "wolf", like this student. Hmm.
Maybe we should move beyond binary thinking here. Yeah, it's worth sending someone to investigate, but also making some effort to verify who the call is coming from - to get their identity, and to ask them something simple like to describe the house (in this example) so the arriving cops will know they're going to the right address. Now of course you can get a description of the house with Google Street View, but 911 dispatchers can solicit some information like what color car is currently parked outside or suchlike. They could also look up who occupies the house and make a phone call while cops are on the way.
Everyone knows swatting is a real thing that happens and that it's problematic, so why don't police departments have procedures in place which include that possibility? Who benefits from hyped-up police responses to false claims of criminal activity?
My daughter was swatted, but at the time she lived in a town where the cops weren't militarized goon squads. What happened was two uniformed cops politely knocked on her door, had a chat with her, and asked if they could come in and look around. She allowed them, they thanked her and the issue was resolved.
This is the way. Investigate, even a little, before deploying great force.
Cops don't have a duty to protect people, so "cops ignored our pleas for help and did nothing" is a-ok, no liability (thank you, qualified immunity). They very much do not treat all scenarios as real; they go gung-ho when they want to and hang back for a few hours "assessing the situation" when they don't.
I'm a paramedic who has personally attended a swatting call where every single detail was egregiously wrong, but police still went in, no-knock, causing thousands of dollars in damage (which, to be clear, they have absolutely zero liability for). Thankfully there were no injuries.
"I can see them in the upstairs window" - of a single story home.
"The house is red brick" - it was dark grey wood.
"No cars in the driveway" - there was two.
Cops still said "hmm, still could be legit" and battered down the front door, deployed flashbangs.
There are more options here than "do nothing" and "go in guns blazing".
Establishing the probable trustworthiness of the report isn't black magic. Ask the caller for details, question the neighbours, look in through the windows, just send two plainclothes officers pretending to be salesmen to knock on the door first? Continuously adjust the approach as new information comes in. This isn't rocket science, ffs.
It doesn't make sense. If you were holding people hostage, you'd have demands for their release. Windows could be peeked into. If you dragged a dead body into a house, there'd be evidence of that.
False positives can effectively lead to false negatives too. If too many alarms end in teens getting swatted (or worse) for eating chips, people might ignore the alarm if an actual school shooter triggers it. Might assume the AI is just screaming about a bag of chips again.
I think a “true positive” is an issue as well if the protocol to manage it isn’t appropriate. If the kid was armed with something other than nacho cheese, the provocative reaction could have easily set off a tragic chain of events.
Reality is there are guns in schools every day. “Solutions” like this aren’t making anyone safer. School shooters don’t fit this profile - they are planners, not impulsive people hanging out at the social event.
More disturbing is the meh attitude of both the company and the school administration. They almost engineered a tragedy through incompetence, and learned nothing.
The danger is that it's as clear as day that in the future someone is gonna be killed. That's not just a possibility. It's a certainty given the way the system is set up.
This tech is not supposed to be used in this fashion. It's not ready.
Did you want to emphasize or clarify the first danger I mentioned?
My read of the "Um" and the quoting, was that you thought I missed that first danger, and so were disagreeing in a dismissive way.
When actually we're largely in agreement about the first danger. But people trying to follow a flood of online dialogue might miss that.
I mentioned the second danger because it's also significant. Many people don't understand how safety works, and will think "nobody got shot, so the system must have worked, nothing to be concerned about". But it's harder for even those people to dismiss the situation entirely, when the second danger is pointed out.
I’d argue the second danger is worse, because shooting might be incidental (and up to human judgement) but being traumatized is guaranteed and likely to be much more frequent.
Americans are killed by police all the time, and by other Americans. We've already decided as a society that we don't care enough to take the problem seriously. Gun violence, both public and from the state, is accepted as unavoidable and defended as a necessary price to pay to live in a free society[0]. Having a computer call the shots wouldn't actually make much of a difference.
Hell, it wouldn't even move the needle on racial bias much because LLMs have already shown themselves to be prejudiced against minorities due to the stereotypes in their training data.
[0]Even though no other free society has to pay that price but whatever.
Guns are actually easier to control, and controlling them significantly reduces the ability to target multiple people at once. There are a lot of countries successfully controlling guns.
To the argument that then only criminals have guns - in India at least, criminals have very limited access to guns. They have to resort to unreliable handmade guns which are difficult to procure. Usually criminals use knives and swords due to that.
> The danger is that it's as clear as day that in the future someone is gonna be killed.
This argument can be made about almost every technology, including fire, electricity, and even the base "technology" of just letting a person watch a camera feed.
I'm not downplaying the risks. I'm saying that we should remember that almost everything has risks and benefits, and as a society we decide for or against using/doing them based mostly on the ratio between those two things.
So we need some data on the rates of false vs. true detections here. (A value judgment is also required.)
> This argument can be made about almost every technology, including fire, electricity, and even the base "technology" of just letting a person watch a camera feed.
huh, I can think of at least one recent example of a popular figure using this kind of argument to extreme self-detriment.
Is HN really this ready to dive into obvious logical fallacies?
My original comment, currently sitting at -4, has so far attracted one guilt-by-association plus implied-threat combo, and no other replies. To remind readers: My horrifying proposal was to measure both the risks and the benefits of things.
If anyone genuinely thinks measuring the risks and benefits of things is a bad idea, or that it is in general a good idea but not in this specific case, please come forward.
I'm not claiming that a continuous range exists, and that one end cannot be distinguished from the other because the slope between those points is gradual. I'm claiming that there is a category, called technology, and everything in that category is subject to that argument.
If you want to dispute that, it's incumbent on you to provide evidence for why some technology subcategories should not be subject to that argument.
Specifically: You need to present a case for why AI devices like the one discussed in TFA should not be evaluated in terms of their risks and benefits to society.
sorry for being glib; it was low hanging fruit. my actual point should have been more clearly stated: measuring risk/benefit is really complicated because there's almost never a direct comparison to be made when balancing profit, operational excellence and safety.
Stuff like this feels like some company has managed to monetize an open source object detection model like YOLO [1], creating something that could be cobbled together relatively easily, and then sold it as advanced AI capabilities. (You'd hope they'd have at least fine-tuned it / have a good training dataset.)
We've got a model out there now that we've just seen has put someone's life at risk... Does anyone apart from that company actually know how accurate it is? What it's been trained on? Its false positive rate? If we are going to start rolling out stuff like this, should it not be mandatory for stats / figures to be published? For us to know more about the model, and what it was trained on?
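For a sense of scale, here is a minimal sketch of what such a pipeline can look like when cobbled together from off-the-shelf parts. Everything here is an assumption for illustration (the ultralytics package, a generic pretrained model, the class list, the threshold); it is not how Omnilert's product actually works.

    # Hypothetical sketch of a camera-alert pipeline built on an open source detector.
    # All names and numbers are illustrative assumptions, not any vendor's implementation.
    from ultralytics import YOLO

    model = YOLO("yolov8n.pt")     # small pretrained model (COCO classes only)
    ALERT_CLASSES = {"knife"}      # COCO has no "gun" class; a vendor would fine-tune on weapons data
    CONF_THRESHOLD = 0.5           # alert threshold; ambiguous objects can still clear it

    def scan_frame(image_path):
        """Return a list of detections that would trigger an alert."""
        alerts = []
        for result in model(image_path):
            for box in result.boxes:
                label = result.names[int(box.cls)]
                confidence = float(box.conf)
                if label in ALERT_CLASSES and confidence >= CONF_THRESHOLD:
                    alerts.append({"label": label, "confidence": confidence})
        return alerts

Wrapping something like that in a dashboard and a notification hook is most of the product, which is exactly why published accuracy figures, false positive rates, and training data details ought to be table stakes before anything like this is pointed at children.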
"“We understand how upsetting this was for the individual that was searched as well as the other students who witnessed the incident,” the principal wrote. “Our counselors will provide direct support to the students who were involved.”"
Make them pay money for false positives instead of direct support and counselling. This technology is not ready for production, it should be in a lab not in public buildings such as schools.
This assumes no human verification of the flagged video. Maybe the bag DID look like a gun. We'll never know, because modern journalism has no interest in such things. They obtained the required emotional quotes and moved on.
Human verified the video -> human was the decision-maker. No human verified the video -> Human who gave a blank check to the AI system was the decision-maker. It's not really about the quality of journalism, here.
We're talking about who should be charged with a crime. I sincerely hope we're going to do more discovery than "ask Dexerto to summarize what WBAL-TV 11 News said".
Superintendent approved a system that they 100% knew would hallucinate guns on students. You assert that if the superintendent required human-in-the-loop before calling the police that the superintendent is absolved from deploying that system on students.
You are wrong. The superintendent is the person who decided to deploy a system that would lead to swatting kids and they knew it before they spent taxpayer dollars on that system.
The superintendent also knew that there is no way a school administrator is going to reliably NOT dial SWAT when the AI hallucinates a gun. No administrator is going to err on the side of "I did not see an actual firearm so everything is great even though the AI warned me that it exists." Human-in-the-loop is completely useless in this scenario. And the superintendent knows that too.
In your lifetime, there will be more dead kids from bad AI/police combinations than <some cause of death we all think is too high>. We are not close to safely betting lives on it, but people will do it immediately anyway.
So, are you implying that if humans surveil kids at random and call the SWAT team if a frame in a video seems to imply one kid has a gun, that then it's all OK?
Those journalists, just trying to get (unjustified, dude, unjustified!!) emotes from kids being mistakenly held at gun point, boy they are terrible.... They're just covering up how necessary those mistakes are in our pursuit of teh crime...
If security sees someone carrying a gun in surveillance video, on a gun free campus, and police verify it, then yes, that's justified, by all aspects of the law. There are countless examples of surveillance of illegal activity resulting in police action.
Nobody saw a gun in a video. Nobody even saw something that looked like a gun. A chip bag, at most, is going to produce a bulge. No reasonable human is going to look at a kid with a random bulge in their pocket and assume gun. Otherwise we might as well start sending our kids to school naked; this is the kind of paranoia that brought us the McMartin Preschool nonsense.
They didn't see that, though. They saw a kid with a bulge over their pants pocket, suggesting that something was in the pocket. The idea that any kind of algorithm can accurately predict that an amorphous pocket bulge is a gun is just bonkers stupid.
(Ok, ok, with thin, skin-tight, light-colored pants, maybe -- maybe -- it could work. But if it mistook a crumpled-up Doritos bag as a gun, clearly that was not the case here.)
The taxpayers collectively pay the money, the officers involved don't (except for that small fraction of their income they pay in taxes that increase as a result).
The question is whether that Doritos-carrying kid is still alive, instead of being shot at by the violent cops (who typically do nothing when an actual shooter is roaming a school on a killing spree; apropos the Uvalde school shooting, when hundreds of cops milled around the school in full body armor, refusing to engage the shooter on a killing spree inside the school, and even prevented the parents from going inside to rescue their kids) based on a false positive about a gun (and the cops must have figured that it's likely a false positive, because it is info from AI surveillance), only because he is white?
Before clicking on the article, I kinda assumed the student was black. I wouldn't be surprised if the AI model they're using has race-related biases. On the contrary, I would be surprised if it didn't.
> Make them pay money for false positives instead of direct support and counselling.
Agreed.
> This technology is not ready for production
No one wants stuff like this to happen, but nearly all technologies have risks. I don't consider that a single false positive outweighs all of its benefits; it would depend on the rates of false and true positives, and the (subjective) value of each (both high in this case, though I'd say preventing 1 shooting is unequivocally of more value than preventing 1 innocent person being unnecessarily held at gunpoint).
I think I’ve said this too many times already, but the core problem here, and with the “AI craze”, is that nobody really wants to solve problems; what they want is a marketable product. AI seems to be the magic wrench that fits all the nuts, and since most people don’t really know how it works or what its limitations are, they happily buy the “magic dust”.
Sure, but this school is in the county, outside city limits. In my experience, what passes for "sketchy" in Essex MD is roughly "random dude selling pit beef out of a barrel in front of his house", i.e. fairly benign. But it's admittedly been a long while since I lived in Baltimore.
> nobody really wants to solve problems, what they want is a marketable product
I agree, but this isn't specific to AI -- it's what makes capitalism work to the extent that it does. Nobody wants to clean a toilet or harvest wheat or do most things that society needs someone to do, they want to get paid.
Absolutely, but I don’t believe the responsibility falls in the hands of those looking to make a profit, but rather in the hands of those in charge of regulating how those profits should be made. After all, thieves want to make a profit too, but we don’t allow them to, at least not unless it’s a couple of millions.
I get that people are uncomfortable with explicit quantification of stuff like this, but removing the explicitness doesn't remove the quantification, it just makes it implicit. If, say, we allow people to drive cars even though car accidents kill n people each year, then we are implicitly quantifying that the value of the extra productivity society gets by being able to get places quickly in a car is worth the deaths of those people.
In your example, if terrorists were the only people killing people in the US, and police (a) were the only means of stopping them, and (b) did not benefit society in any other way, the equation would be simple: get rid of the police. There wouldn't need to be any value judgements, because everything cancels out. But in practice it's not that easy, since the vast majority of killings in the US occur at the hands of people who are neither police nor terrorists, and police play a role in reducing those killings too.
I expect that John Bryan -- who produces content as The Civil Rights Lawyer https://thecivilrightslawyer.com -- will have something to say about it.
He frequently criticizes the legality of police holding people at gunpoint based on flimsy information, because the law considers it a use of force, which needs to be justified. For instance, he's done several videos about "high risk" vehicle stops, where a car is mistakenly flagged as stolen, and the occupants are forced to the ground at gunpoint.
My take is that if both the AI automated system and a "human in the loop" police officer looked at the picture and reasonably believed that the object in the image was a gun, then the stop might have been justified.
But if the automated system just sent the officers out without having them review the image beforehand, that's much less reasonable justification.
The article says the police later showed the student the photo that triggered the alert. He had a crumpled-up Doritos bag in his pocket. So there was no gun in the photo, just a pocket bulge that the AI thought was a gun... which sounds like a hallucination, not any actual reasonable pattern-matching going on.
But the fact that the police showed the photo does suggest that maybe they did manually review the photo before going out. If that's the case, I do wonder how much the AI influenced their own judgment, though. That is, if there was no AI involved, and police were just looking at real-time surveillance footage, would they have made the same call on their own? Possibly not: it feels reasonable to assume that they let the fact of the AI flagging it override their own judgment to some degree.
Or they'll tell us police have started shooting because an acorn falls, so they shouldn't be expected to be held to higher standards and are possibly an improvement.
In marketing, that's called "the bandwagon effect" and is one of the more powerful techniques for influencing people's thoughts and behaviors. Sadly, we are social animals and "social proof" is far more powerful than it should be.
Ah, the coming age of Palantir's all seeing platform; and Peter Thiel becoming the shadow Emperor. Too bad non-deterministic ml systems are prone to errors that risk lives when applied wrongly to crucial parts of society. But in an authoritarian state, those will be hidden away anyway, so there's nothing to see here: move along folks. Yes, surveillance and authoritarianism go hand in hand, ask China. It's important to protest these methods and push lawmakers to act against them; now, before it's too late.
It's still not helpful to wander into threads to talk about your favorite topic without making an effort to provide some context on why your comments are relevant. When random crazy people come up to you spouting their theories in public places, the problem is not that their concerns are necessarily incoherent or invalid; the problem is that they're broadcasting their thoughts randomly with no context, and their audience has no way of telling whether they just need to verbalize what's bothering them or have mistaken a passer-by for one of the villains in their psychodrama.
tl;dr if you want to make a broad point, make the effort to put it in context so people can appreciate it properly.
That may be the case, but only one of them is actually responsible for armed police swarming this student and it wasn't Palantir. It seems very strange that you're so eager to give a free pass to the firm who actually was at fault here.
American, please, wake up. The masked border police are on the streets arresting citizens, the military is being paid as a client of the president, corruption is legal, and a mass surveillance machine unfathomable to prior dictatorships is being/has been established. You're fucked. Listen to the soapbox. It is very, very relevant. Wake up.
I'm pretty sure that some people will continue to apply the term "soapbox ranting" to all opposition against the technofascism even when victims of its false positives will be in need of coroners, not psychologists.
So you just live a reactionary life? Nothing matters until it affects you personally? Should we get rid of free speech if jason-phillips doesn't have anything to say?
> Omnilert later admitted the incident was a “false positive” but claimed the system “functioned as intended,” saying its purpose is to “prioritize safety and awareness through rapid human verification.”
Prioritize your own safety by not attending any location fitted with such a system, or deemed to be such a dangerous environment that such a system is desired.
Calling it today. This company is going to get innocent kids killed.
How many packs of Twizzlers, or Doritos, or Snickers bars are out there in our schools?
First time it happens, there will be an explosion of protests. Especially now that the public knows that the system isn't working but the authorities kept using it anyway.
This is a really bad idea right now. The technology is just not there yet.
And then there's plenty of bullies who might put a sticker of a picture of a gun on someone's back, knowing it will set off the image recognition. It's only a matter of time until they figure that out.
That's a great and terrifying idea. When that inevitably happens, you'll then have a couple of 13-year-olds: one dead, and one shell-shocked kid in disbelief that a stupid prank idea he cooked up in 60 seconds is now claimed as the root cause why someone was killed. That one may be charged with a crime or sued, though the district who installed this idiotic thing is really to blame.
The technology literally can NEVER be there. It is completely impossible to positively identify a bulge in clothing as a handgun. But that doesn’t stop irresponsible salesmen from making the claim anyway.
>First time it happens, there will be an explosion of protests.
Why do you believe this? In the US, cops will cower outside of a school with an armed gunman actively murdering children, forcibly detain parents who wish to go in if the cops won't, and then voters will re-elect everyone involved.
In the US, an entire segment of the population will send you death threats claiming you are part of some grand (democrat of course) conspiracy for the crime of being a victim of a school shooting. After every school shooting, republican lawmakers wear an AR-15 pin to work the next day to ensure you know who they care about.
Over 50% of the country blamed the protesting students at Kent State for daring to be murdered by the national guard.
Cops can shoot people in broad daylight, in the back, with no justification, or reasonable cause, or can even barge into entirely the wrong house and open fire on homeowners exercising their basic right to protect themselves from strangers invading their homes, and as long as the people who die are mostly black, half the country will spout crap like "They died from drugs" or "they once sold a cigarette" or "he stole skittles" or "they looked at my wife wrong" while the cops take selfies reenacting the murder for laughs and talk about how terrified they are by BS "training" that makes them treat every stranger as a wanted murderer armed to the teeth. The leading cause of death for cops is still like heart disease of course.
Trump sent unmarked forces to Portland and abducted people off the street into unmarked vans and I think we still don't really know what happened there? He was re-elected.
Why did they waste time verifying? The police should have eliminated the threat before any harm could be done. Seconds count when you're keeping people safe.
I get that you're being sarcastic and find the police response appalling, but the sad reality of Poe's Law is that there are a lot of people who would unironically say this and would have cheered if the cops had shot this kid, either because they hate black people or because they get off on violence and police shootings are a socially sanctioned way to indulge that taste.
* Even hundreds of cops in full body armor and armed with automatic guns will not dare to engage a single "lone wolf" shooter doing a killing spree in a school; the heartless cowards may even prevent the parents from going inside to rescue their kids: Uvalde school shooting incident
* Cop on a ego trip, will shoot down a clearly harmless kid calmly eating a burger in his own car (not a stolen car): Erik Cantu incident
All of your examples are well known not because it's normal and accepted but because they are exceptions. For every one bad example there are a thousand good ones; that's humans for you.
Doesn't mean they are perfect or shouldn't be criticised, but claiming that's all they are doing isn't reasonable either.
If you look at actual per capita statistics you will easily see this.
In the United States, law enforcement officers shoot and kill more than 1,100 civilians each year, with a significant number of these incidents involving unarmed individuals, particularly among Black Americans who are disproportionately affected. The FBI has begun collecting data on these use-of-force incidents to provide better insights into the circumstances surrounding police shootings.
Police killed more than 1,300 people in the U.S. last year, an estimated 0.3% increase in police killings per million people. The increase makes 2024 the deadliest year for police violence by a slim margin since Mapping Police Violence began tracking civilian deaths more than a decade ago.
There is no national database that documents police killings in the U.S., and the report comes days after the Justice Department removed a database tracking misconduct by federal law enforcement. Researchers spent thousands of hours analyzing more than 100,000 media reports to compile the Mapping Police Violence database.
In 2025, the U.S. has experienced significant gun violence, with 11,197 shooting deaths reported through September 30, along with 20,425 nonfatal injuries. The year has seen a total of 341 mass shootings, resulting in 331 fatalities and 1,499 injuries.
Shootings have happened in all 50 states, at all times of day, and in locations as varied as schools, gas stations, gyms, Walmarts, and homes. Some involved handguns, others rifles or shotguns.
10.3 million guns have been sold across the U.S. in 2025 through September 30.
Mass shootings in the United States are incidents where one or more individuals use firearms to kill or injure multiple people, typically in public settings. The frequency and definitions of these events can vary, but they have been a significant concern in recent years, with the U.S. experiencing more mass shootings than any other country.
GVA has recorded 325 mass shootings in the U.S. this year through three quarters. Those have resulted in 309 deaths and 1,490 injuries.
Mass shootings in the last quarter included the high-profile shooting at a New York skyscraper, as well as the shooting of 29 people, 26 of them children, at a church in Minneapolis. Two children, aged 8 and 10, were killed in that incident.
The dispatch relayer and responding officers should at least have ready access to a screen where they can see a video/image of the raw footage that triggered the AI alert. If it is a false alarm, they will better see it and react accordingly, and if it is a real threat they will better understand the initial context and who may have been involved.
According to a news article[1], a human did review the video/image and flagged it as a false positive. It was the principal who told the school cop, who then called other cops:
> The Department of School Safety and Security quickly reviewed and canceled the initial alert after confirming there was no weapon. I contacted our school resource officer (SRO) and reported the matter to him, and he contacted the local precinct for additional support. Police officers responded to the school, searched the individual and quickly confirmed that they were not in possession of any weapons.
What's unclear to me is the information flow. If the Department of School Safety and Security recognized it as a false positive, why did the principal alert the school resource officer? And what sort of telephone game happened to cause local police to believe the student was likely armed?
Good lord, what an idiot principal. If the principal saw how un-gun-like it looked, he could have been brave enough to walk his lazy ass down to where the student was and said "Hey (Name), check this out. (show AI detection picture) The AI camera thought this was a gun in your pocket. I think it's wrong, but they like to have a staff member sign off on these since keeping everyone safe from violence is a huge deal. Can I take a picture of what it actually is in your pocket?"
Sounds like a "better safe than sorry" approach. If you ignore the alert on the basis that it's a false positive, then it turns out it really was a gun and the person shoots somebody, you're going to get sued into the ground, fired, name plastered all over the media, etc. On the other hand, if you call in the cops and there wasn't a gun, you're fine.
> "On the other hand, if you call in the cops and there wasn't a gun, you're fine."
Yeah, cause cops have never shot somebody unarmed. And you can bet your ass that the possible follow-up lawsuit to such a debacle's got "your" name on it.
It might be, depending on the integrity of "the system".
I can make a system that flags stuff, too. That doesn't mean it's any good. If they can show there was no reasonable cause then they've got a leg to stand on.
It’s the literal truth. How can that be a false report? A false report means you reported something you know to be untrue, not that you relayed bad information.
It would only be negligence if the police were considered like some sort of dangerous wild animal that people need to avoid provoking, and can't be held responsible on its own.
Which may very well be accurate, but I can't imagine the law ever punishing someone on that basis.
For reports on child welfare, it is often illegal to release the name of the tipster. This is commonly taken advantage of by disgruntled exes or in custody disputes.
Walking through TSA scanners, I always get that unnerving feeling I will get pulled aside. 50% of the time they flag my cargo pants because of the zipper pockets - there is nothing in them, but the scanner doesn't like them.
Now we get the privilege of walking by AI security cameras placed in random locations, hoping they don't flag us.
There's a ton of money to be made with this kind of global frisking, so lots of pressure to roll out more and more systems.
To be fair, at least you can choose not to wear the cargo pants.
A friend of mine once got pulled aside for extra checks and questioning after he had already gone through the scanners, because he was waiting for me on the other side to walk to the gates together and the agent didn't like that he was "loitering" – guess his ethnicity...
I have shoes that I know always beep on the airport scanners, so if I choose to wear them I know it might take longer or I might have to embarrassingly take them off and put them on the tray. Or I can not wear them.
Yes, in an ideal world we should all travel without all the security theatre, but that's not the world we live in, I can't change the way airports work, but I can wear clothes that make it faster, I can put my liquids in a clear bag, I can have bags that make it easy to take out electronics, etc. Those things I can control.
But people can't change their skin color, name, or passport (well, not easily), and those are also all things you can get held up in airports for.
Not sure, but I bet they were looking at their feet kinda dusting off the bottoms, making awkward eye contact with the security guard on the other side of the one way door.
I don't often fly, but back when I went to Germany on a school trip, on the return flight I got pulled aside into a small room by whatever the German equivalent of the TSA is, and they swabbed the skin of my belly and the inside of my bag. I'm guessing it was a drugs check and I must have just looked shifty, because I get nervous in situations like that, but I do find it funny that they pulled me aside instead of the guys with me who almost certainly had something on them.
Also, my partner has told me that apparently my armpits sometimes smell of weed or beer, despite me not coming in contact with either of those for a very long time, and now I definitely don't want to get taken into a small room by a TSA person. (After some googling, apparently those smells can be associated with high stress.)
Get PreCheck or Global Entry. I only do a scanner every 5 years or so when I get pulled at random for it. Otherwise it's metal detector only. Unless your zippers have such chunky metal that they set that off, you'll be fine. My belt and watch don't.
Note: PreCheck is incredibly quick and easy to get; GE is time consuming and annoying, but has its benefits if you travel internationally. Both give the same benefits at TSA.
Second note: let's pretend someone replied "I shouldn't have to do that just to be treated...blah blah" and that I replied, "maybe not, but a few bucks could still solve this problem, if it bothers you enough that's worth it to you."
> ...maybe not, but a few bucks could still solve this problem
Sure, can't argue with that. But doesn't it bug you just a little that (paying a fee to avoid harassment) doesn't look all that dissimilar from a protection racket? As to whether it's a few bucks or many, now you're just a mark negotiating the price.
I already adjust my clothing choices when flying to account for TSA's security theater make-work bullshit. Wonder how long before I'm doing that when preparing to go to other public places.
(I suppose if I attended pro sports games or large concerts, I'd be doing it for those, too)
I was getting pulled out of line in the 90’s for having long hair. I don’t dress in shitty clothes or fancy ones, I didn’t look funny, just the hair, which got regular compliments from women.
I started looking at people trying to decide who looked juicy to the security folks and getting in line behind them. They can’t harass two people in rapid succession. Or at least not back then.
The one I felt most guilty about, much later, was a Filipino woman with a Philippine passport. Traveling alone. Flying to Asia (super suspect!). I don't know why I thought they would tag her, but they did. I don't fly well and more stress just escalates things, so anything that makes my day a tiny bit less shitty and isn't rude, I'm going to do. But probably her day would have been better for not getting searched than mine was.
Email the state congressman and tell them what you think.
Since (pretty much) nobody does this, if a few hundred people do it, they will sit up and take notice. It takes fewer people than you might think.
Since coordinating this with a bunch of strangers (i.e. the public) is difficult, the most effective way is to normalise speaking up in our culture. Of course, normalising it will increase the incoming volume, which will slowly decrease the effectiveness, but even past that point it's better than where we are now, which is silent public apathy.
If that's the case, why do people in Congress keep voting for things their constituents don't like? When they get booed at town halls they just dismiss it as being the work of paid activists.
I got pulled aside because I absentmindedly showed them my concealed carry permit, not my driver's license. I told them I was a consultant working for their local government and was going back to Austin. No harm no foul.
If the system used any kind of logic whatsoever a CCW permit would not only allow you to bypass airport security but also carry in the airport (Speaking as both a pilot and a permit holder)
Would probably eliminate the need for the TSA security theater so that will probably never happen.
You can carry in the airport in AZ without a permit, in the unsecured areas. I think there was only one brouhaha, because some particularly bold guy did it openly with a rifle (can't remember if there's more to the story).
The point of the security theater is to assuage the 95th percentile scared-of-everything crowd, they're the same people who want no guns signs in public parks.
You're right, not a lot of people objected to the TSA ending the no-shoes safety rule, and it's a shame. I certainly objected and tried to make my objections known, but apparently 23 or 24 years of the iconic custom of taking shoes off went to waste because the TSA decided to slack off.
Right from the beginning it was a handout to groups who built the scanning equipment, who were basically personal friends with people in the admin. We paid absurd prices for niche equipment, a lot of which was never even deployed and just sat in storage.
Several of the hijackers were literally given extended searches by security that day.
A reminder that what actually stopped hijackings (like, nearly entirely) was locking the cockpit door, which was always doable and has never been breached. Not only did this stop terrorist hijackings, it stopped the more casual hijackings that used to be normal, and it could also stop "inside man" style hijackings like the one with a disgruntled FedEx pilot. It was nearly free to implement, always available, harms no one's rights, doesn't turn airport security into a juicy bombing target, doesn't slow down an important part of the economy, and doesn't invent a massive bureaucracy and LEO arm of a new American agency that has the goal of suppressing domestic problems and has never done anything useful. Keep in mind, shutting the cockpit door is literally how the terrorists themselves protected themselves from being stopped and is the reason Flight 93 couldn't be recovered.
TSA is utterly ineffective. They have never stopped an attack, regularly fail their internal audits, the jobs suck, and they pay poorly and provide minimal training.
Getting pulled aside by TSA for secondary screening is nowhere in the ball park of being rushed at gunpoint as a teenager and told to lay down on the ground where one false move will get you shot by a trigger happy cop that probably won’t face any consequences - especially if the innocent victim is a Black male.
In fact, they will probably demonize the victim to find an excuse for why he deserved to get shot.
I wasn't implying TSA-cargo-pant-groping is comparable. My point is to show escalation in public facing systems. We have been dealing with TSA. Now we get AI Scanners. What's next?
You have no evidence to suggest this, just bias. Unless you are aware of the AI algorithm, then it's a pointless discussion that only causes strife and conjecturing.
How many audit the police videos have you seen on Youtube? There are an insufferable amount of "white" people getting destroyed by the cops. If you replace the "white" people in these videos with "black" then 99% of viewers would assume the cops are hardcore racist, when in fact, they are just bad cops - very bad cops, that have some deep psychological issues - probably rooted from a traumatic childhood.
I'm sure CLEAR is already having giddy discussions about how they can charge you for pre-verified access to walk around in public. We can all wear CLEAR-certified dog tags so the cops can hassle the non-dog-tagged people.
This may be mean, but we should really be careful about just handing AI over to technically illiterate people. They're far more likely to blindly trust the LLM/AI output than someone who is more experienced and takes a beat. AI in an agentic-state society (what we have in America, at least) is an absolute ticking time bomb. Honestly, this is what AI safety teams should be concentrating on: making sure people who think the computer is infallible understand that, no, it isn't, and you shouldn't just assume what it tells you is correct.
It's basically a failure of setting up the proper response playbook.
Instead of:
1. AI detects gun on surveillance
2. Dispatch armed police to location
It should be (rough code sketch below):
1. AI detects gun on surveillance
2. Human reviews the pictures and verifies the threat
3. Dispatch armed police to location
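A minimal sketch of what that second flow could look like in code. Every name here (detect_weapon, request_human_review, dispatch_police, and so on) is an illustrative placeholder, not any vendor's actual API:

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Detection:
        label: str
        confidence: float

    @dataclass
    class Review:
        confirmed_threat: bool
        notes: str

    def detect_weapon(frame) -> Optional[Detection]:
        # Placeholder for the vendor's model; returns None when nothing is flagged.
        return None

    def request_human_review(camera_id, flagged_frame, context_clip_seconds, ai_confidence) -> Review:
        # Placeholder: a trained reviewer looks at the flagged frame *and* the
        # surrounding footage before anyone is told "armed suspect".
        return Review(confirmed_threat=False, notes="reviewed surrounding clip, no weapon")

    def dispatch_police(camera_id, notes):
        print(f"dispatching to camera {camera_id}: {notes}")

    def log_false_positive(detection, review):
        print(f"false positive logged: {detection.label} ({detection.confidence:.2f}) - {review.notes}")

    def handle_frame(frame, camera_id):
        detection = detect_weapon(frame)          # step 1: AI flags something
        if detection is None:
            return
        review = request_human_review(camera_id, frame,
                                      context_clip_seconds=30,
                                      ai_confidence=detection.confidence)
        if review.confirmed_threat:
            dispatch_police(camera_id, review.notes)   # step 3 only after verification
        else:
            log_false_positive(detection, review)      # feed back into audits

The point is simply that nothing downstream of step 1 should ever carry the word "gun" until step 2 has happened.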
I think the latter version is likely what already took place in this incident, and it was actually a human that also mistook a bag of Doritos for a gun.
But that version of the story is not as interesting, I guess.
He could easily have been murdered. It's far from the first time that a bunch of overzealous cops murder a kid. I would never, ever in my life set foot in a place that so easily sends armed cops after me. That school is extremely dangerous.
I think the reason the school bought this silly software is because it's a dangerous school, and they're grasping at straws to try and fix the problem. The day after this false positive, a student was robbed.[1] Last month, a softball coach was charged with rape and possession of child pornography.[2] Last summer, one student was stabbed while getting off the bus.[3] Last year, there were two incidents where classmates stabbed each other.[4][5]
That certainly sounds bad, but it's all relative; keep in mind this school is in Baltimore County, which is distinct from the City of Baltimore and has a much different crime profile. This school is in the exact same town as Eastern Tech, literally the top high school in Maryland.
I skimmed through all the articles linked in the GP and found them pretty relevant to whatever decision might have been made to utilize the AI system (not at all to comment on how badly the bad tip was acted on).
Hailing from and still living in N. California, you could tell me that this school is located in Beverly Hills or Melrose Place, and it would still strike me as a piece of trivia. If anything, it'd just be ironic?
For context, Baltimore (City) is one of the most dangerous large cities in the US. Between the article calling the school "Kenwood High School in Baltimore" and the GP's crime links, a casual reader could mistakenly picture a dangerous inner-city school. But in reality it's located in a low-rise suburb in the County. Granted, it's an inner-ring blue collar suburb, but it's still a night-and-day difference from the worst neighborhoods in the city. And the schools in those bad neighborhoods tend to have far worse crimes than what was listed above.
So my point was that while the list of incidents is definitely not great, it's still way less severe than many inner-city schools in Baltimore. And honestly these same types of incidents happen at many "safe" large suburban high schools in "nice" areas throughout the US... generally less often than at this particular school, but not an order-of-magnitude difference.
Basically, I'm saying that GP's assertion of it being a "dangerous school" is entirely relative to what you're comparing to. There are much worse schools in that metro area.
I doubt that. I moved around a lot as a kid, so I went to at least eight different public schools from Alabama to Washington. One school was structurally condemned while I attended it. Some places had bullying, and sometimes a couple of people fought, but never with weapons, and there was never an injury severe enough to require medical attention.
I also know several high school teachers and the worst things they've complained about are disruptive/stupid students, not violence. And my friends who are parents would never send their kids to a school that had incidents like the ones I linked to. I think this sort of violence is limited to a small fraction of schools/districts.
> I think this sort of violence is limited to a small fraction of schools/districts.
No, definitely not. I went to a decently-well-ranked suburban school district, and still witnessed violent incidents... no weapon used, but still multiple cases where the victim got a concussion. And there were arrests, a gun found in a kid's locker, etc. This stuff was unfortunately relatively normal, at least in the 90s. Not quite as often as at the school in the article, but still.
Based on your reporting, that's one violent crime per year, and one alleged child rapist. [0]
The crime stats seem fine to me. In a city like Baltimore, the numbers you've presented are shockingly low. When I was going through school, it was quite common for bullies to rob kids... even on campus. Teachers pretty much never did anything about it.
[0] Maybe the guy is a rapist, and maybe he isn't. If he is, that's godawful and I hope he goes to jail and gets his shit straight.
Having worked extensively with computer vision models for our interview analysis system, this incident highlights a critical challenge in AI deployment: the trade-off between false positive rates and detection confidence thresholds. We initially set our confidence threshold at 0.85 for detecting inappropriate objects during remote interviews, but found this led to ~3% false positives (mostly mundane objects like water bottles being flagged as concerning).
We solved this by implementing a two-stage verification system: initial detection runs at 0.7 threshold for recall, but any flagged objects trigger a secondary model with different architecture (EfficientNet vs ResNet) and viewpoint analysis. This reduced false positives to 0.1% while maintaining 98% true positive detection rate. For high-stakes deployments like security systems, I'm curious if others have found success with ensemble approaches or if they're using human-in-the-loop verification? The latency impact of multi-stage detection could be problematic for real-time scenarios.
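For anyone curious, here is a rough sketch of that kind of two-stage cascade. The model objects are passed in as plain callables and the thresholds are illustrative; this is the general pattern, not the parent's actual pipeline:

    # Two-stage cascade: a high-recall first pass, then a stricter second model
    # (ideally a different architecture) confirming each flagged region.
    # Assumed interfaces: stage1_model(image) returns a list of dicts with
    # "label", "score", and "box"; stage2_model(crop) returns one dict with
    # "label" and "score".

    def cascade_detect(image, stage1_model, stage2_model,
                       stage1_threshold=0.7, stage2_threshold=0.9):
        confirmed = []
        for det in stage1_model(image):
            if det["score"] < stage1_threshold:
                continue                              # keep recall high at stage one
            crop = image.crop(det["box"])             # re-examine only the flagged region
            second = stage2_model(crop)               # e.g. a different architecture
            if second["label"] == det["label"] and second["score"] >= stage2_threshold:
                confirmed.append({**det, "stage2_score": second["score"]})
        return confirmed                              # only these should trigger an alert

The design choice being that the two stages should fail in different ways, so a shadow that fools one architecture is less likely to fool both.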
If false positives are known to happen, then you design a system where the image is vetted before telling the cops the perpetrator is armed. The company is basically swatting, but I'm sure they'll never be held liable.
Actually, if a system has too many false positives or false negatives, it's basically useless. There will eventually be doubts amongst the operators of it and the whole thing will implode, which is the best possible outcome.
We already went through this years ago with all those terrorism databases and we (humanity) have learned nothing: any database will have a percentage of erroneous data, and it is impossible to eliminate erroneous data completely. Therefore, any database used to identify <fill in the blank> will produce erroneous conclusions. It's been observed over and over again, and governments can't help telling themselves "this time it will be different because <fill in the blank>", e.g. AI.
> Omnilert later admitted the incident was a “false positive” but claimed the system “functioned as intended,” saying its purpose is to “prioritize safety and awareness through rapid human verification.”
> Baltimore County Public Schools echoed the company’s statement in a letter to parents, offering counseling services to students impacted by the incident.
Not sure nothing will happen. Some trial lawyers would love to sue a city, a school system, and an AI surveillance company over "trauma, anxiety, and mental pain and suffering" caused by the incident. There will probably be a settlement that nobody ever hears about.
The world is doing fairly OK, thank you. The US, however, I'm not so sure about, as people here are apparently more concerned by the AI malfunction than by the idea that it's somehow sensible to live-monitor high schools for gun threats.
It's not just the US. China runs the same level of surveillance, and it's being implemented throughout Europe, Africa, and Asia. This is becoming the norm.
Because if the "gun threat" system isn't accurate, then it's a system for false positives and false negatives and it's actually worse than having no such system. Maybe that's what you meant?
No, I think it’s crazy that people somehow think it’s rational to video monitor kids and be worried they have actual fire arms.
I think that’s a level of f-ed up which is so far removed from questioning AI that I wonder why people even tolerate it and somehow seem to view the premise as normal.
It's a system that was sold to a legally risk-averse school district or city or whatever. It's a sales job, and the non-technical people buy it because they aren't equipped to even ask the right questions about it. They created even more problems for themselves than the problems they purportedly attempted to solve! This is modern life in a nutshell.
A bunch of companies and people invested unimaginable amounts of money in these technologies in the hope they will multiply that money. They will shove it down our throats no matter what. This isn't about security and making the world a better place, saving lives, or preventing bad things from happening; this is strictly about those people and companies making as much money as possible, or at least, for now, not losing the money they invested.
Law enforcement officers, judicial officials, social workers, and similar generally maintain qualified immunity from liability in the course of their work. Take this case, for example, in which judges and social workers allegedly failed to properly assess a mother's fitness for child custody despite repeated indicators suggesting otherwise. The child was ultimately placed in the mother's care and was later killed execution-style (not due to negligence).
This case happened in the county I reside in and my sister-in-law is an attorney for the county in CP, although this was not her case directly. I can tell you what led to this: The COVID lockdowns! They stopped doing all the usual home visits and follow ups because everyone was too scared to do their jobs.
This case was a horrifying failure of the entire system that up until that point had fairly decent results for children who end up having to be taken away from their parents and later returned once the Mom/Dad clean up their act.
Not applicable - as a society we've countless times chosen to favour the right of the mother to keep children above the rights of other humans. Most children are killed in the home of the mother (i.e. either by the mother, or where a different partner choice would have avoided it while the father was available), or even worse, as in the Anders Breivik situation (father available, with a stable job and perspectives in life, but custody refused, and the child grew up to be a mass murderer).
The school admin has no understanding of the tech and only the dimmest comprehension of what happened. Asking them to do anything besides what the tech company told them to do is asking wayyy too much.
We blame AI here, but what's up with law enforcement that comes in with loaded guns drawn and sends someone to the ground and cuffs him before actually doing any check?
That is the real issue.
A police force anywhere else in the world that knows how to behave would have approached the student, had a small chat with him, found out that all he had in his hands was a bag of Doritos, maybe politely asked to see the contents of his bag, explained that the search was triggered by an auto-detection system that occasionally makes errors, and wished him a good day.
That was my first thought as well. A worry is that police officers make mistakes, which leads to hapless people getting terrorized, harmed, or killed. The bad thing about AI is that it'll allow police to escape responsibility. Perhaps also, where a human would realize they made a mistake, they can admit it and everything is okay. But if the AI says you had a gun, it won't walk that back. The AI said he had a gun; but when we checked, he didn't have it anymore.
In the Menezes case the cops were playing a game of telephone that ended up with him being shot in the head.
The guidance counselor does not have the training or time to "fix" the trauma you just gave this kid and his friends. Insane to put minors through this.
An alert by one of these AI tools, which from what I understand have a terrible track record, should not be reasonable suspicion or probable cause to swarm a teenager with guns drawn. I wish more people in local communities would understand how much harm this type of surveillance and response causes. Our communities should not be using these tools.
I wonder if the AI correctly identified it as a bag of Doritos, but was also trained on the commercial[0] where the bag appears to beat up a human (his fault for holding on too tight) and then it destroys an alien spacecraft.
Looks like, per their website, it did function as intended... It surfaces potential threats for the school to look at and make a human decision on. The principal decided to send the police after the school safety team had dismissed it, as part of the correct process. I mean, fire alarms go off for lots of things that are not fires... This was an alert meant to be validated by a human, and the human messed up.
>> Omnilert later admitted the incident was a “false positive” but claimed the system “functioned as intended,” saying its purpose is to “prioritize safety and awareness through rapid human verification.”
No. If you're investigating someone and have existing reason to believe they are armed then this kind of false positive might be prioritizing safety. But in a general surveillance of a public place, IMHO you need to prioritize accuracy since false positives are very bad. This kid was one itchy trigger-pull away from death over nothing - that's not erring on the side of safety. You don't have to catch every criminal by putting everyone under a microscope, you should be catching the blatantly obvious ones at scale though.
The perceived threat of government forces assaulting and potentially killing me for reasons I have no control over - this is the kind of stuff that terminates the social contract. I'd want a new state that protects me from such stuff.
> Omnilert later admitted the incident was a “false positive” but claimed the system “functioned as intended,” saying its purpose is to “prioritize safety and awareness through rapid human verification.”
This exact scenario is discussed in [1]. The "human in the loop" failed, but we're supposed to blame the human, not the AI (or the way it was implemented). The humans serve as "moral crumple zones".
"""
The emphasis on human oversight as a protective mechanism allows governments and vendors to have it both ways: they can promote an algorithm by proclaiming how its capabilities exceed those of humans, while simultaneously defending the algorithm and those responsible for it from scrutiny by pointing to the security (supposedly) provided by human oversight.
"""
The article doesn't confirm that there was definitely a human in the loop, but it sorta suggests that police got a chance to manually verify the photo before going out to harass this poor kid.
I suspect, though, that the AI flagging that image heavily influenced the cop doing the manual review. Without the AI, I'd expect that a cop manually watching a surveillance feed would have found nothing out of the ordinary, and this wouldn't have happened.
So I agree that it's weird to just blame the human in the loop here. Certainly they share blame, but the fact of an AI model flagging this sort of thing (and doing an objectively terrible job of it) in the first place should take most of the blame here.
"I am invoking my 4th and 5th amendment rights afforded to me by the Constitution of the United States of America. I have no further comment until I have consulted with and am in the presence of my legal council."
Then, just sit back and enjoy as the lawsuit unfolds.
Q: What was the name of the Google AI ethicist who was fired by Google for raising the concern that AI overwhelmingly and negatively framed non-white humans as threats? Timnit Gebru.
We, as technologists, ARE NOT DOING BETTER. We must do better, and we are not on the "DOING BETTER" trajectory.
We talk about these "incidents" with breathless, "Wwwwellll if we just train our AI better ..." and the tragedies keep rolling.
Q2: Which of you has had a half dozen Squad Cars with Armed Police roll up on you, and treat you like you were a School Shooter? Not me, and I may reasonably assume it's because I am white, however I do eat Doritos.
I don't have kids yet, but I may someday. I went to public school myself, and would prefer to send any kid of mine to public school as well. (I'm not hard against private schools, but I'd prefer my kid gets to make friends from all walks of life, not just people who have parents who can afford private school.)
But I really wouldn't want to send my kid to a school that surveils students all the time, and uses garbage software like this that directly puts kids into dangerous situations. I feel like with a private school, I'd have more choice and ability to influence that sort of thing.
Even better, share the frame(s) that the guess was drawn from with a human for verification before triggering ANYTHING. How much trouble could that possibly be? How many "guns" is this thing detecting in a day across all sites? I doubt more than a couple or we'd have heard about tons of incidents, false positives or not.
I don't find that especially good as a sole remedy, because lots of people are stupid. If they see a green outline box overlaid on a video image with the label 'gun', many many people will just respond to the label instead of looking at the underlying image and trying to make a decision. Probability and validation history need to be built into the product so that there are audit logs that can be pored over and challenged. Bad human decision-making, which is rampant, is always smoothed over with justifications like 'I was concerned for everyone's safety', and usually treated in isolation rather than assessed longitudinally.
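As a sketch of what "probability and validation history" baked into the product might look like, here is one possible audit record; the field names are my own guesses, not anything from Omnilert or any other vendor:

    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    @dataclass
    class AlertAuditRecord:
        # One row per alert, so decisions can be audited longitudinally later.
        camera_id: str
        model_version: str
        label: str                   # what the model claimed it saw
        confidence: float            # raw probability, not just a green "gun" box
        frame_reference: str         # pointer to the stored footage segment
        reviewer: str = ""
        reviewer_decision: str = ""  # "confirmed", "dismissed", "escalated"
        ground_truth: str = ""       # what it actually turned out to be
        created_at: str = field(
            default_factory=lambda: datetime.now(timezone.utc).isoformat())

With something like this retained for every alert, "I was concerned for everyone's safety" can be checked against the reviewer's actual track record instead of being accepted case by case.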
> “They showed me the picture, said that looks like a gun, I said, ‘no, it’s chips.’”
So AI did the initial detection, but police looked at it and agreed. We don't see the image, but it probably did look like a gun because of a weird shadow or something.
Fundamentally, this isn't really any different from a person seeing someone with what looks like a gun and calling the cops, only it turns out the person didn't see it clearly.
The main issue is just that with increased numbers of images, there will be an increase in false positives. Can this be fixed by including multiple images, e.g. from motion of the object, so police (and the AI) can better eliminate false positives before traumatizing some poor teen?
Picture? Images? But those are just frames of footage the cameras have captured! Why would one purposefully use less information to make a decision rather than more?
Just put the full footage in front of an unbiased third party for a multi-stage verification first. The problem space isn't "is that weird shadow in the picture a gun or not?" it's "does the kid in the video have a gun?". It's not hard to figure out the difference between a bag of chips and a gun based on body language. Presumably the kid ate chips out of the bag? Using certain motions that one makes when doing that? Presumably the kids around him all saw the object in his hands and somehow did not react as if it was a gun? Jeez.
Those are a lot of "presumablies". Maybe you're right. Or maybe it was mostly obscured so you really couldn't tell. How do you know it was open and he was eating? How do you know there were other kids around and he wasn't solo? Why do you think the body language would be so different? Nobody is claiming he was using a gun or threatening anyone with it. If you're just carrying something in your hand, I don't know how you could tell what the object is or isn't from body language.
It wasn't open and he wasn't eating. The AI flagged a bulge in his pants pocket, which was the empty, crumpled up bag that he put in his pocket after finishing eating all the chips.
This is quite frankly absurd. The fact that the AI flagged it is bonkers, and the fact that a human doing manual review still believed it was a gun... I mean, just, wow. The level of dangerous incompetence here is staggering.
And I wouldn't be surprised if, minutes (or even seconds) before the video frame the AI flagged, the full video showed the kid finishing the bag and stuffing it in his pocket. AIs suck at context; a human watching the full video would not have made the same mistake. But in mostly taking the human out of the loop, all they had for verification was a single frame of video, captured as a context-free still image.
It is frankly mind-boggling that you or anyone else can defend this crap.
It's not totally clear -- we haven't seen the picture. The point is, it seemed to look like a gun. Shadows and reflections do funny things. For you to say with such confidence that this is absurd and bonkers, is itself absurd without us seeing the image(s) in question.
> It is frankly mind-boggling that you or anyone else can defend this crap.
> So AI did the initial detection, but police looked at it and agreed. We don't see the image, but it probably did look like a gun because of a weird shadow or something.
Not sure I agree. The AI flagging it certainly biased the person doing the manual review toward agreeing with the AI's assessment. I can imagine a scenario where there was no AI involved, just a human watching that same surveillance feed, and (correctly) not seeing anything alarming in it.
Also I expect the AI completely failed at context. I wouldn't be surprised if the full video feed, a few minutes (or even seconds) before the flagged frame, shows the kid crumpling up the empty Doritos bag and stuffing it in his pocket. The AI probably doesn't keep all that context around to use when making a later decision, and giving just the flagged frame of video to the human may have caused them to miss out on important context.
It’s unsurprising, since this kind of classification is only as good as the training data.
And police do this kind of stuff all the time (or in the very least you hear about it a lot if you grew up in a major city).
So if you’re gonna automate broken systems, you’re going to see a lot more of the same.
I’m not sure what the answer is but I definitely feel that “security” system like this that are purchased and rolled out need to be highly regulated and be coupled with extreme accountability and consequences for false positives.
When people wonder how AI can mistake a bag of snacks for a weapon, simply answer "42".
It is about the question; the answer will become very clear once you understand what question was presented to the inference model, and of course what data and context it was fed.
I can understand the outrage in this thread, but literally none of what you are all calling for will be done. No one from justice or law reads HN to see what should be done. I wish folks here would keep a cooler head rather than posting lengthy rants and vents that call for punishing school staff. It's really unprofessional and immature for a community that prides itself on level-headed discussion to constantly fall into a cycle of vitriol.
Can someone outline a more pragmatic, if not likely, course of what happens next after this? Is it swept under the rug and we move on?
At least there is a check done by humans, in a human way. What if this human check is removed in the future, once AI decisions are deemed to no longer require human inspection?
The model seems pretty shitty. Does it only look on a frame-by-frame basis? Literally one second of video context and it would never make that mistake.
Before I clicked the article, I said to myself "The victim's gotta be Black", and lo and behold.
AI has inherited police's (shitty, racist, and dangerous) idea that any Black person is a dangerous monster for whom anything is a weapon.
With hallucination levels this high, cops need tranquilizers more than ever. If the student had reached for his bag just before the cops arrived, BLM 2.0 would have started.
The core of the issue is that many Americans do carry weapons which means that whatever the security system, it needs to keep in mind that the suspect might be armed and about to start shooting. This makes the police biased towards escalation because the only way against a shooter is to shoot first.
This problem doesn't exist in Europe or Japan because guns aren't that ubiquitous, which means that the police have the time to think before they act, which makes them less likely to escalate and start shooting. Obviously, for Americans, the only solution is to get rid of the gun culture, but this will never happen, so suck it up that AI gets you swatted.
Sounds like this high school is doing a great job preparing students for the real world, where they can be swarmed by jackbooted thugs at any moment for any reason.
Companies, particularly in the US, very much want to go with (2) and part of the reason they can is because there are zero consequences for incidents like this.
A couple of examples spring to mind:
1. the UK Post Office (Horizon) scandal, where a bad system accused sub-postmasters of theft, some of whom committed suicide over the allegations. Those allegations were later proven false and it was the system's fault. IMHO the people who signed off on and deployed this should be charged with negligent homicide; and
2. the Hertz case, where people who had returned cars were erroneously flagged as car thieves and a report was made to police. This created hell for people who would often end up with warrants they had no idea about and would be detained on random traffic stops over a car that was never stolen.
Now these aren't AI but just like the Doritos case here, the principle is the same: companies are trying to replace people with computers. In all cases, a human should be responsible for reviewing any such complaint. In the Hertz case, a human should check to see if the car is actually stolen.
In the Post Office situation, the system needs to show its work. Deployment should be run against the existing accounting system, and discrepancies between the two need to be investigated for bugs until the system is proven correct. Particularly in the early stages, a forensic accountant (if necessary) should verify that funds were actually stolen before filing a criminal complaint.
And if "false positive" criminal complaints are filed, the people who allowed that to happen, if negligent (and we all know they are), should themslves be criminally charged.
We are way too tolerant of black box systems that can result in significant harm or even death to people. Show your work. And make a human put their name and reputation to any output of such systems.
Exactly. I wonder if this a purpose-built image-recognition system, or is it a lowest-possible effort generic image model trained on the internet? Classifying a Black high school student holding Doritos as an imminent shooting threat certainly suggests the latter.
You're free to (attempt to) amend the Second Amendment, but the Supreme Court of the United States has already affirmed and reaffirmed that individual ownership of firearms in common use is a right.
What do you propose that is "reasonable" given the frameworks established by Heller, McDonald, Caetano, and Bruen?
I can 3D print or mill basically every item you can imagine prohibiting at home: what exactly do laws do in this case?
> the Supreme Court of the United States has already affirmed and reaffirmed that individual ownership of firearms in common use is a right.
The current interpretation of 2A is actually a fairly recent invention; in the past, it's been interpreted much more narrowly. And if SCOTUS can overturn Roe v. Wade's precedent, they can do the same with their interpretation of 2A. They won't of course, at least not until some of its members age out and get -- hopefully -- replaced with people who aren't idiots.
But I'd be fine if 2A was amended away. Let the states make whatever gun laws they want, and we can see whether blue or red states end up with lower levels of gun violence as a result.
All right, they've gotta have a plain-clothes bro go up there and make sure the kid is chill. You know, the difference between a murder and not can be as little as somebody being nice.
If it's taking 30 images every second, it's getting 86,400 x 30 = roughly 2.6 million images per day per camera. So when it causes enormous, unnecessary trauma to one student per week, the company can rightfully claim it has less than a 1 in 10 million false positive rate.
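Spelling out that arithmetic (30 frames per second per camera is the assumption here, and one false positive per week):

    frames_per_day = 86_400 * 30          # seconds per day * frames per second
    frames_per_week = frames_per_day * 7  # ~18.1 million images per camera per week
    rate = 1 / frames_per_week            # one false positive per week
    print(frames_per_day)                 # 2,592,000
    print(f"{rate:.1e}")                  # ~5.5e-08, i.e. "better than 1 in 10 million"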
Everything around us: political tumult and weaponization of the justice system, ICE and other capricious projections of federal authority, the failure of drug prohibition, and on and on and on, points to a very simple solution:
Abolish SWAT teams. Do away with the idea that the state employees can be permitted to be more armed than anyone else.
Blaming the so-called 'swatter' (whether it's a human or AI) is really not getting at the root of the problem.
Inflicting trauma on a harmless human in the name of the "safety of others" is never ok. The victim here was not unharmed, but is likely to end up with PTSD and all the mental health issues that come with it.
the best part of the technocracy is that they're not actually all that good at anything. the second best part is that when their mistakes end in someone dead there will be some way that they're not responsible.
Imagine the head scratching that's going on with execs who are surprised that things don't work as expected when probabilistic software is used for deterministic purposes, without realizing there is, by its very nature, a gap between the two.
I'm sure there will be no head scratching. They already know that this can happen, and don't care, because they know that if someone gets killed because of it, they won't be held responsible. And may not even lose any customers.
It wasn't sour cream and onion and didn't contain cash, so it's super sus.
But really this is typical of cop overreaction, with escalation and ego rather than calm, legal, and reasonable investigation. Karens may SWAT people they don't like, but it's police officers who must use reasonableness and restraint to defend the vestiges of their impartiality and community confidence, by asking questions and gathering evidence in a legal and appropriate manner rather than rushing to conclusions. Case in point: the NYC rough false arrest of a father in front of his kid over a mis-delivered package, where the egomaniacal bully cop aggressively lectures the guy for the cop's own mistake, covering his ego while blaming the victim: https://youtu.be/LXd-4HueHYE
If these AI video based gun detectors are not a massive fraud I will eat one.
How on Earth does a person walk with a concealed gun? What does a woman in a skirt with one taped to her thigh walk like?
What does a man in a bulky sweatshirt with a pistol on his back walk like?
What does a teenager in wide-legged cargo jeans with two pistols and extra magazines walk like?
The brochure linked from TFA has a screenshot of a combination of segmentation and object recognition models, which are fairly standard in NVRs. A quick skim of the vendor website seems to confirm this[1] and states that they are not analyzing gait.
The whole idea, even accepting that the core premise is OK to begin with, needs to have the same analysis applied to it that medical tests do: will there be enough false positives, with enough harm caused by them, that this is actually worse than doing nothing? Compared with the likelihood of improving an outcome and how bad a failure to intervene is on average, of course.
Given that there's no relevant screening step here and it's just being applied to everyone who happens to be at a place, it's truly incredible that such an analysis would shake out in favor of this tech. The false positive rate would have to be vanishingly tiny, and it's simply not plausible that's true. And that would have to be coupled with a pretty low false negative rate, or you'd need an even lower false positive rate to make up for how little good it's doing even when it's not false-positiving.
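To make the medical-test analogy concrete, here is the shape of the base-rate math with made-up numbers; none of these rates are measured from any real product:

    prevalence = 1e-7           # fraction of frames that actually show a gun
    sensitivity = 0.99          # true positive rate
    false_positive_rate = 1e-4  # even a seemingly tiny false positive rate...

    p_alert = sensitivity * prevalence + false_positive_rate * (1 - prevalence)
    p_gun_given_alert = sensitivity * prevalence / p_alert
    print(f"{p_gun_given_alert:.4%}")   # ~0.1%: nearly every alert is a false alarm

When the thing you are screening for is vanishingly rare, even an impressively accurate detector produces alerts that are almost always wrong.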
So I'm sure that analysis was either deliberately never performed, or was and then was ignored and not publicized. So, yes, it's a fraud.
(There's also the fact that as soon as these are known to be present, they'll have little or no effect on the very worst events involving firearms at schools—shooters would just avoid any scheme that involved loitering around with a firearm where the cameras can see them, and count on starting things very soon after arriving—like, once you factor in second order effects, too, there's just no hope for these standing up to real scrutiny)
To be fair, most commercials for Doritos, Skittles, Mentos, etc., if occurring in real life, would result in a strong police response just after they cut away.
AI is a false (political) wish; it can't and never will work. It is the desperation of an over-extended power structure trying to hold on and permanently consolidate control of all of the world's population, and nothing else.
The proofs are there. Philosophers mulled this over long ago and made clear statements as to why AI can't work, though not for a second do I misunderstand that we are "all in" on AI, and we all get to go along for the 100 trillion dollar ride to hell.
Can we have truly awesome automation for manufacturing and mundane bureaucratic tasks? Fuck ya we can!
But anything that requires understanding is forever out of reach, which unfortunately is also lacking in the people pushing this thing now.
I was unduly surprised and disappointed when I saw the photo of the kid and he turned out to be black. I would love to believe that this had no impact on how the whole thing played out, but I don't.
>Omnilert later admitted the incident was a “false positive” but claimed the system “functioned as intended,” saying its purpose is to “prioritize safety and awareness through rapid human verification.”
It's ok everyone, you're safer now that police are pointing a gun at you, because of a bag of chips ... just to be safe.
/s
Absolutely ridiculous. We're living "computer said you did it, prove otherwise, at gunpoint".
I’ve got great news for you: there are more girls with colored hair than ever before, and we got the Synthwave revival, just try to find the right crowd and put on Timecop1983 in your headphones
Cyberpunk always sucked for all but the corrupt corporate, political and military elites, so it’s all tracking
Well the "hackers" jacking in to the "Hacker News" discussion board, where we talk about the oppression brought in by the corrupt AI-peddling corporations employed by the even more corrupt government, probably aren't all looking like Zero Cool, Snake Plissken, Officer K, or the like, though a bunch may be.
I have seen and enjoyed the first two movies but had somehow never even heard of the third one. It’s now high up my to-watch list, thanks for bringing it to my attention.
The AI singularity will happen, but with the motherbrain as a complete moron. It will extinguish humans not as part of a grand plan for machines to take over, but by making horrible mistakes while trying to make things better.
If any of you had actually paid attention to the source media, you would have noticed that they were explicitly dystopias. They were always clearly and explicitly hell for normal people trying to live life.
Meanwhile, tons of you watched star trek and apparently learned(?) that the "bright future" it promised us was.... talking computers? And not, you know, post scarcity and enlightenment that allowed people to focus on things that brought them joy or they were good at, and an entire elimination of the concept of "capitalism" or personal profit or resource disparity that could allow people to not be able to afford something while some asshole in the right place at the right time gets to take a percentage cut of the entire economy for their personal use.
The primary "technology" of star trek was socialism lol.
Oh of course they were dystopias. But at least they were cool and there was a fair amount of competence floating around.
My point is exactly that we got the dystopia but also it's not even a little cool, and it's very stupid. We could have at least gotten the cool dystopia where bad things happen but at least they're part of some kind of sensible longer-term plan. What we got practically breaks suspension of disbelief, it's so damn goofy.
> The primary "technology" of star trek was socialism lol.
Yep. Socialism, and automatic brainwashing chairs. And sending all the oddball non-conformists off to probably die on alien planets, I guess. (The "Original Series" is pretty weird)
The safest thing to do is to pull all Frito Lay products off shelves until the packaging can be redesigned to ensure that AI never confuses them for guns. It's a liability issue. THINK OF THE CHILDREN.
I think it's almost guaranteed that this model has race-related biases, so no, I don't think you're kidding at all. I think it's entirely likely that an Asian (or white) kid of the same build, wearing the same clothes, with a crumpled-up bag of Doritos in his pocket, would not get flagged as having a gun.