Jill Bearup posted a video about this a while ago, showing a short and the original side by side: https://www.youtube.com/watch?v=kd692naF-Cc (note the short is shown at 0:31)
Edit: The changes made by the AI are a lot more visible in the higher-quality video uploaded to Patreon: https://www.patreon.com/posts/136994036 (this was also linked in the pinned comment on the YouTube video)
It must be my eyes and the small screen on my phone. I couldn't find any differences in the Patreon video either, and it was annoying to watch: the actual comparison clip is only a couple of seconds long, so I had to keep rewinding to check again. I wish it had shown more of the comparisons. Most of the video was just commentary.
Same here, on a big screen, I don't see anything notable. I really hope this isn't a mass delusion because YouTube started applying a sharpness ("edge enhancement") filter to videos to make them look sharper. It sure looks like that to me, because I hate this filter and how many movie transfers have it added, with the ringing it leaves at edges.
Yeah I also can't see the difference on the high quality video. I am on my phone though tbf.
Also, minus 100 points to Jill for being happy about being able to use AI to automatically edit out all the silence from her videos. That's far more annoying than any barely perceptible visual artifacts.
It’s because you’re looking for some kind of “smoking gun” AI transformation. In reality it just looks like the YouTube one is more compressed and slightly blurred. Some people are apparently just learning that YouTube recompresses videos.
Ok, I am getting mad now. I don't understand something here. Should we open like 31337 different CVEs for every possible LLM on the market, call ourselves super-ultra-security-researchers, and act shocked when we find out that <model name> will execute commands it is given access to, based on the input text that is fed into the model? Why do people keep doing these things? Ok, they have free time to do it and like wasting other people's time. But why is this article even on HN? How is it on the front page? "Shocking news - LLMs will read code comments and act on them as if they were instructions".
This isn't a bug in the LLMs. It's a bug in the software that uses those LLMs.
An LLM on its own can't execute code. An LLM harness like Antigravity adds that ability, and if it does it carelessly that becomes a security vulnerability.
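To make "carelessly" concrete: the minimum care a harness can take is default-denying tool calls and requiring human confirmation for anything risky. This is a hypothetical sketch of that idea, not Antigravity's (or any product's) actual behavior; all names here are made up.

```python
# Hypothetical harness-side guardrail for model-proposed shell commands.
# Default-deny anything unknown; keep a human in the loop for risky tools.

ALLOWED_COMMANDS = {"ls", "cat", "git"}     # commands the harness may run
REQUIRES_CONFIRMATION = {"git"}             # still needs an explicit human OK

def vet_tool_call(command_line: str) -> str:
    """Return 'run', 'confirm', or 'reject' for a model-proposed command."""
    parts = command_line.split()
    program = parts[0] if parts else ""
    if program not in ALLOWED_COMMANDS:
        return "reject"                     # default-deny unknown programs
    if program in REQUIRES_CONFIRMATION:
        return "confirm"                    # human in the loop
    return "run"

print(vet_tool_call("cat README.md"))       # run
print(vet_tool_call("rm -rf /"))            # reject
```

Even this trivial allowlist would have blocked the "model reads a malicious comment and runs curl" class of attack; the real fix is treating every model output as untrusted input.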
The problem is a bit wider than that. One can frame it as "google gemini is vulnerable" or "google's new VS Code clone is vulnerable". The bigger picture is that the model predicts tokens (words) based on all the text it has. In a big codebase it becomes exponentially easier to mess with the model's mind. At some point it gets confused about what its job even is. The line between the "system prompt" and "code comments in the codebase" becomes blurry. Even models with huge context windows get confused, because they do not understand the difference between your instructions and instructions injected via hidden text in a README or in code comments. They see tokens, and given enough cleverly injected malicious tokens, the model may - and often will - do stupid things. (Where "stupid" means: unexpected by you.)
People are giving LLMs access to tools. LLMs will use them. No matter if it's Antigravity, Aider, Cursor, some MCP.
I'm not sure what your argument is here. We shouldn't be making a fuss about all these prompt injection attacks because they're just inevitable so don't worry about it? Or we should stop being surprised that this happens because it happens all the time?
Either way I would be extremely concerned about these use cases in any circumstance where the program is vulnerable and rapid (automatic or semi-automatic) updates aren't available. My Ubuntu installation prompts me every day to install new updates, but if I want to update e.g. Kiro or Cursor or something it's a manual process - I have to see the pop-up, decide I want to update, go to the download page, etc.
These tools are creating huge security concerns for anyone who uses them, pushing people to use them, and not providing a low-friction way for users to ensure they're running the latest versions. In an industry where the next prompt injection exploit is just a day or two away, rapid iteration would be key if rapid deployment were possible.
> I'm not sure what your argument is here. We shouldn't be making a fuss about all these prompt injection attacks because they're just inevitable so don't worry about it? Or we should stop being surprised that this happens because it happens all the time?
The argument is: we need to be careful about how LLMs are integrated with tools and about what capabilities are extended to "agents". Much more careful than what we currently see.
I remember back in the 90s that Squid was adding this header while acting as a forward proxy. This header was sent across the internet years before anyone had ever dreamed of the concept of a "reverse" proxy. I have not fact-checked this, but I am pretty sure it is older than IPv6, and the original convention was to add the header at the origin and send it across the whole internet.
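Assuming the header in question is X-Forwarded-For (the one Squid popularized): it carries a comma-separated chain of addresses, with each proxy appending the client IP it saw. A small sketch of parsing it; note that only the entry added by a proxy you control is trustworthy, since everything to its left is supplied by the sender and can be spoofed.

```python
# X-Forwarded-For: client, proxy1, proxy2, ...
# Leftmost entry *claims* to be the original client; it is attacker-controlled
# unless every hop in the chain is yours.

def parse_forwarded_for(header: str) -> list[str]:
    """Split an X-Forwarded-For value into its individual addresses."""
    return [part.strip() for part in header.split(",") if part.strip()]

chain = parse_forwarded_for("203.0.113.7, 198.51.100.10, 10.0.0.2")
print(chain)  # ['203.0.113.7', '198.51.100.10', '10.0.0.2']
```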
I self host and I have something like this but more obvious: I wrote a web service that talks to my MikroTik via its API and adds the IP of the requester to the block list with a 30-day timeout (configurable ofc). Its hostname is "bot-ban-me.myexamplesite.com" and it is set up like a normal site in my reverse proxy. So when I request a cert, this hostname lands in the cert, and in the first few minutes I catch lots of bad apples. I do not expect anyone to ever type this. I do not mention the address anywhere, so the only way to land there is to watch the CT logs.
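A hedged sketch of how a service like that could add the requester's IP to a RouterOS address list. This assumes RouterOS v7's REST API; the router address, list name, and the idea of wiring this to the decoy vhost are all made-up placeholders, not the commenter's actual code.

```python
# Sketch: build the RouterOS v7 REST call that adds an offending IP to a
# firewall address list with a timeout. Router address and list name are
# assumptions for illustration.
import json
import urllib.request

ROUTER = "https://192.168.88.1"        # hypothetical router address
BAN_LIST = "ct-log-scanners"           # hypothetical address-list name

def build_ban_request(ip: str, timeout: str = "30d"):
    """Build the REST request that bans `ip` for `timeout`."""
    url = f"{ROUTER}/rest/ip/firewall/address-list"
    payload = {"list": BAN_LIST, "address": ip, "timeout": timeout}
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode(),
        method="PUT",
        headers={"Content-Type": "application/json"},
    )
    return req, payload

req, payload = build_ban_request("198.51.100.23")
```

The web service in front of this would just call `urllib.request.urlopen(req)` (with auth) for every hit on the decoy hostname; the timeout makes the ban self-expiring, so the list never needs manual cleanup.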
Well, logically you should be able to keep the old name, because you have documented proof that your user base in the EU is small enough that this should NOT cause any confusion between your name and the new trademark holder's. Just keep the cancellation documents as proof that you use this name, but not in the EU.
This is their claim and not yours, right? The other possibility is that if you have enough users in EU you should also keep the trademark. Only one of these can be true?
Also note that I already lost some court cases using my logic.
First they came for the porn, and I did not speak out - because, officially I do not watch porn.
Then they came for cryptocurrency, and I did not speak out - because I am not a crypto bro.
Then they came for the games, and I am a gamer...
but there was no one left to speak for me,
because payment processors had already "ethically" deplatformed everyone else.
I keep seeing people mention credit cards as a means to verify one's age. My daughter is 11, but she has had her own card, with her own name on it, for at least a year.
Yes, my bad. It is actually a debit card. She uses it only for Roblox right now. My point was to demystify where those magical Roblox points come from, and also to make her feel empowered and independent.
I owned a 3310. I remember going into the mountains for a week and didn’t even charge the phone beforehand, because the battery would last anyway.
Back then I used to climb, and I remember how it fell out of my pocket from around 30m (100 feet). When I got down, I just picked it up from the ground and put the back panel back on. The phone worked perfectly for years after that.
Currently there are ownership laws, but not for hosted services. Look at the contract for Steam, for example, or Ubisoft, or anything else - Q: What happens to your game collection if we shut down our servers? A: You own nothing and lose everything, GG!
It is like saying: we must protect users' privacy from greedy websites, so we will make the bad ones spell out that they use cookies to spy on users - and the result is what we have now with the banners.
I agree with you! And your point about cookie banners underlines that we can't just rely on regulation (because companies are so good at subverting regulations, or outright lobbying their way out of them).
Just as with the open source movement, there needs to be a business model (and don't forget that OSS is a business model, not a technology) that competes with the old way of doing things.
Getting that new business model to work is the hard part, but we did it once with open source and I think we can do it again with cloud infrastructure. But I don't think local-first is the answer--that's just a dead end because normal users will never go with it.
I've found people want local software and local access. This is a major reason people now prefer mobile to desktop, beyond the obvious benefit of having it in their pocket. A mobile app feels more private than going to a website and entering your info. And to an extent mobile apps are kept local-first anyway, because of sync issues.
I have a lifetime Pastebin account that I hadn't used for some years. Last year I enrolled in a "linux administration" class and tried to use that pastebin (famous for sharing code) to share some code/configurations with other students. When I tried to paste my homework I kept getting a Cloudflare error page. I don't even remember what I was pasting, but it was normal linux stuff. I contacted pastebin support - of course I got ghosted.
I am sharing this in relation to the WAF comments and how much the companies implementing WAF care about your case.
ffmpeg -i source.mkv -i suspect.mkv -filter_complex "blend=all_mode=difference" diff_output.mkv
I saw these claims before but still have not found someone to show a diff or post the source for comparison. It would be interesting.
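For anyone who does get the source files: the `blend=all_mode=difference` filter in the command above maps each pixel pair to |a - b|, so identical frames come out pure black and any retouching or recompression shows up as non-zero pixels. A toy pure-Python version of the per-pixel operation (grayscale frames as nested lists, not ffmpeg's actual implementation):

```python
# What blend=all_mode=difference computes per pixel: abs(a - b).
# An all-black result means the frames are identical; bright regions
# mark where the "suspect" frame diverges from the source.

def frame_difference(a, b):
    """Per-pixel absolute difference of two equally sized grayscale frames."""
    return [[abs(pa - pb) for pa, pb in zip(row_a, row_b)]
            for row_a, row_b in zip(a, b)]

source  = [[10, 200], [0, 255]]
suspect = [[10, 180], [5, 255]]
diff = frame_difference(source, suspect)
print(diff)  # [[0, 20], [5, 0]]
```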