> If you don't want your LG TV quietly snooping on what you watch and using it to serve you ads, here's how to turn Live Plus off.
If LG makes money from snooping on you, what makes you think the “off” button actually turns it off? People have no way of verifying this.
To me the worst part is that TVs (and cars, and fridges, and so on) are even allowed to have these features[1]: non-technical customers have no understanding that “smart” hardware is capable of doing whatever it wants - and hiding it from them. You have no way of knowing what your “smart” thing is doing behind the scenes.
[1]: any feature that’s sending data back to company servers, meaning you lose control of your data. Features that are 100% on-device are not what I’m talking about.
The construction of some of these wind farms started years ago. Before that, permits and legal work had been underway for a long time. This surely included security clearances.
The orange shrimp pulling the “national security” card now, on the same day he also creates a new Greenland debacle, is very clearly just an attempt to strong-arm the Danish government into Greenland concessions (in turn simply to please his fragile little ego).
They were approved before the invasion of Ukraine and before our politicians could see how devastating drones can be. Just because the orange dictator did something does not mean it was necessarily wrong. Even a broken clock is right two times per day.
>"Even a broken clock is right two times per day."
That is incorrect. There are any number of ways in which a clock might be broken such that its hands are not in the correct position even once per day.
Can we stop overusing this term? It has already lost its significance. Every political leader you don't agree with is a dictator nowadays. What kind of shitty dictator is he anyway if he is being shut down by courts left and right, and has to shut down the government while waiting for Congress to approve a budget? You do know that dictators don't give a fuck about courts and parliaments?
When these wind farms were permitted many years ago, shipborne drones were not part of the threat matrix. It was considered purely hypothetical even a decade ago because it was not an imminent capability for any country even though e.g. the US DoD had studied it. In the last few years shipborne drones have emerged very quickly as a substantial practical threat, largely due to the Russia/Ukraine war. Governments around the world are struggling to adapt to this new reality because none of their naval systems are designed under this assumption.
Whether or not this is convenient for Trump doesn't take away from the reality of the security implications.
First of all: Occam's razor. Political theatrics seems simpler than the US defence/intelligence forces suddenly realizing that drones can be launched from ships, especially with the timing involved.
Second: established/traditional radar systems cannot spot drones. Take it from someone living in a country that recently had its airspace violated by (presumably) Russian drones, affecting national infrastructure. It was considered an attack at the time. I don’t think that’s the word we use any more, for political reasons.
Third: Trump already shut down one of these wind farms once this year, until the Danish company building the park sued, got a court ruling that the shutdown was illegal, and resumed construction. The current shutdown has a much larger impact on many multinational companies. Usually a political process is expected between allied countries before such a drastic move. We haven’t seen that here: no attempt to solve a concrete (security) issue before punching the red button, probably because there was no motivation for a solution, i.e. the security issue was probably not an actual issue.
Fourth: earlier this week the Danish intelligence services released a new security assessment of the USA (one that takes Trump’s behaviour on the international scene into account). That probably hurt the little man’s ego, and now we see retaliation. This provides yet another motivation for Trump’s action, besides factual, real security concerns.
Looking at this purely from the security aspect is naive, and fails to consider the context of the real world.
Before Ukraine, everyone thought drones were easy to counter. Now that has proven false.
Granted, Trump probably isn't thinking that, but the concern should be real. We need better drone defense before someone (Russia, Iran...) starts anonymously shooting down airplanes.
Yes.
GDPR covers all handling of PII that a company does. And it’s sort of default-deny, meaning that a company is not allowed to handle (process and/or store) your data UNLESS it has a reason that makes it legal. This is where it becomes more blurry: figuring out whether the company has a valid reason. Some are simple, e.g. if required by law => valid reason.
GDPR does not care how the data got “in the hands of” the company; the same rules apply.
Another important thing is the principles of GDPR. They sort of underpin everything. One principle to consider here is data minimization. This basically means that IF you have a valid reason to handle an individual’s PII, you must limit the data points you handle to exactly what you need and no more.
So - company proxy breaking TLS and logging everything? Well, the company obviously has a valid reason to handle some employee data. But if I use my work laptop to access private health records, then that is very much outside the scope of what my company is allowed to handle. And logging (storing) my health data without a valid reason is not GDPR compliant.
Could the company fire me for doing private stuff on a work laptop? Yes probably. Does it matter in terms of GDPR? Nope.
Edit: Also, “automatic” or “implicit” consent is not valid. So the company cannot say something like “if you access private info on your work PC then you automatically consent to $company handling your data”. All consent must be specific, explicit and revocable.
What if your employer says “don’t access your health records on our machine”? If you put private health information in your Twitter bio, Twitter is not obligated to suddenly treat it as if they were collecting private health information. Otherwise every single user-provided field would be maximally radioactive under GDPR.
Many programmers tend to treat the legal system as if it was a computer program: if(form.is_public && form.contains(private_health_records)) move(form.owner, get_nearest_jail()); - but this is not how the legal system actually works. Not even in excessively-bureaucratic-and-wording-of-rules-based Germany.
Yeah, that’s my point. I don’t understand why the fact that you could access a bunch of personal data via your work laptop in express violation of the laptop owner’s wishes would mean that your company has the same responsibilities to protect it that your doctor’s office does. That’s definitely not how it works in general.
The legal default assumption seems to be that you can use your work laptop for personal things that don't interfere with your work. Because that's a normal thing people do.
I suspect they should say "this machine is not confidential" and have good reasons for that - you can't just impose extra restrictions on your employees just because you want to.
The law (as executed) will weigh the normal interest in employee privacy, versus your legitimate interest in doing whatever you want to do on their computers. Antivirus is probably okay, even if it involves TLS interception. Having a human watch all the traffic is probably not, even if you didn't have to intercept TLS. Unless you work for the BND (German Mossad) maybe? They'd have a good reason to watch traffic like a hawk. It's all about balancing and the law is never as clear-cut as programmers want, so we might as well get used to it being this way.
If the employer says so and I do so anyway then that’s an employment issue. I still have to follow company rules. But the point is that the company needs to delete the collected data as soon as possible. They are still not allowed to store it.
I’ll give an example I’m more familiar with. In the US, HIPAA has a bunch of rules about how private health information can be handled by everyone in the supply chain, from doctors’ offices to medical record SaaS systems. But if I’m running a SaaS note-taking app and some doctor’s office puts PHI in there without an express contract with me saying they could, I’m not suddenly subject to enforcement. It all falls on them.
I’m trying to understand the GDPR equivalent of this, which seems to exist, since every text field in a database does not appear to require the full PII treatment in practice (and that would be kind of insane).
My takeaway from the article was not that the report was a problem, but a change in approach from Google that they’d disclose publicly after X days, regardless of whether the project had a chance to fix it.
To me it’s okay to “demand” that a for-profit company (e.g. Google) fix an issue fast, because they have the resources. But to “demand” that an OSS project fix something within a certain (possibly tight) timeframe... well, I’m sure you know better than me that that’s not how volunteering works.
On the other hand as an ffmpeg user do you care? Are you okay not being told a tool you're using has a vulnerability in it because the devs don't have time to fix it? I mean someone could already be using the vulnerability regardless of what Google does.
>Are you okay not being told a tool you're using has a vulnerability in it because the devs don't have time to fix it?
Yes? It's in the license:
>NO WARRANTY
>15. BECAUSE THE LIBRARY IS LICENSED FREE OF CHARGE, THERE IS NO
WARRANTY FOR THE LIBRARY, TO THE EXTENT PERMITTED BY APPLICABLE LAW.
EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR
OTHER PARTIES PROVIDE THE LIBRARY "AS IS" WITHOUT WARRANTY OF ANY
KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
PURPOSE.
If I really care, I can submit a patch or pay someone to. The ffmpeg devs don't owe me anything.
Not being told about the existence of bugs is different from having a warranty on software. How would you submit a patch for a bug you were not aware of?
Google should provide a fix but it's been standard to disclose a bug after a fixed time because the lack of disclosure doesn't remove the existence of the bug. This might have to be rethought in the context of OSS bugs but an MIT license shouldn't mean other people can't disclose bugs in my project.
Google publicly disclosing the bug doesn't only let affected users know. It also lets attackers know how they can exploit the software.
Holding public disclosure over the heads of maintainers if they don't act fast enough is damaging not only to the project but to end users as well. There was no pressing need to publicly disclose this 25-year-old bug.
How is having a disclosure policy so that you balance the tradeoffs between informing people and leaving a bug unreported "holding" anything over the heads of the maintainers? They could just file public bug reports from the beginning. There's no requirement that they file non-public reports first, and certainly not everyone who does file a bug report is going to do so privately. If this is such a minuscule bug, then whether it's public or not doesn't matter. And if it's not a minuscule bug, then certainly giving some private period, but then also making a public disclosure is the only responsible thing to do.
That license also doesn't give the ffmpeg devs the right to dictate which bugs you're allowed to find, disclose privately, or disclose publicly. The software is provided as-is, without warranty, and I can do what I want with it, including reporting bugs. The ffmpeg devs can simply not read the bug reports, if they hate bug reports so much.
Sorry to put it this bluntly, but you are not going to get what you want unless you do it yourself or you can convince, pay, browbeat, or threaten somebody to provide it for you.
Have you ever used a piece of software that DID make guarantees about being safe?
Every software I've ever used had a "NO WARRANTY" clause of some kind in the license. Whether an open-source license or a EULA. Every single one. Except, perhaps, for public-domain software that explicitly had no license, but even "licenses" like CC0 explicitly include "Affirmer offers the Work as-is and makes no representations or warranties of any kind concerning the Work ..."
I don't know what our contract terms were for security issues, but I've certainly worked on a product where we had 5 figure penalties for any processing errors or any failures of our system to perform its actions by certain times of day. You can absolutely have these things in a contract if you pay for it, and mass market software that you pay for likely also has some implied merchantability depending on jurisdiction.
But yes, things you get for free have no guarantees, and there should be no expectations put on the gift giver beyond not being actively, intentionally malicious.
Point. As part of a negotiated contract, some companies might indeed put in guarantees of software quality; I've never worked in the nuclear industry or any other industries where that would be required, so my perspective was a little skewed. But all mass-distributed software I've ever seen or heard of, free or not, has that "no warranty" clause, and only individual contracts are exceptions.
Also, "depending on jurisdiction" is a good point as well. I'd forgotten how often I've seen things like "Offer not valid in the state of Delaware/California/wherever" or "If you live in Tennessee, this part of the contract is preempted by state law". (All states here are pulled out of a hat and used for examples only, I'm not thinking of any real laws).
This program discloses security issues to the projects and only discloses them after they have had a "reasonable" chance to fix it though, and projects can request extensions before disclosure if projects plan to fix it but need more time.
Google runs this security program even on libraries they do not use at all, where it's not a demand, it's just whitehat security auditing. I don't see the meaningful difference between Google doing it and some guy with a blog doing it here.
Great, so Google is actively spending money on making open source projects better and more secure. And for some reason everyone is now mad at them for it because they didn't also spend additional money making patches themselves. We can absolutely wish and ask that they spend some money and resources on making those patches, but this whole thing feels like the message most corporations are going to take is "don't do anything to contribute to open source projects at all, because if you don't do it just right, they're going to drag you through the mud for it" rather than "submit more patches"
Why should Google not be expected to also contribute fixes to a core dependency of their browser, or to help funding the developers? Just publishing bug reports by themselves does not make open source projects secure!
It doesn't if you report lots of "security" issues (like this 25-year-old bug) and give too little time to fix them.
Nobody is against Google reporting bugs, but they use automated AI tooling to spam them and then expect a prompt fix. If you can't expect the maintainers to fix the bug before disclosure, then it is a balancing act: is the bug serious enough that users must be warned and avoid using the software? Will disclosing the bug now allow attackers to exploit it because no fix has been made?
In this case, this bug (imo) is not serious enough to warrant a short disclosure time, especially if you consider *other* security notices that may have a bigger impact. The chances of an attacker finding this on their own and exploiting it are low, but now everybody is aware and you have to rush to update.
> This is a bug in the default config that is likely to result in RCE, it doesn’t get that much worse than this.
Likely to result in RCE? No. Not every UAF results in an RCE. Also, someone would have to find this, and it's clearly not something you can easily spot from the code.
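For anyone unfamiliar with the failure mode, here's a contrived sketch (plain C, nothing to do with the actual ffmpeg code) of what a use-after-free in a decoder-like path looks like; whether something like this is exploitable depends entirely on heap layout and what the attacker controls:

    /* Contrived use-after-free example; NOT the actual ffmpeg bug. */
    #include <stdlib.h>
    #include <string.h>

    typedef struct {
        unsigned char *pixels;
        size_t size;
    } Frame;

    int main(void) {
        Frame *f = malloc(sizeof(*f));
        if (!f) return 1;
        f->size = 64;
        f->pixels = malloc(f->size);
        if (!f->pixels) { free(f); return 1; }

        free(f->pixels);                 /* buffer released, e.g. on a resize path */
        /* BUG: the stale pointer is still used afterwards */
        memset(f->pixels, 0, f->size);   /* use-after-free: may crash, corrupt the
                                            heap, or do nothing visible at all */
        free(f);
        return 0;
    }

Under ASan this aborts immediately; in an un-instrumented release build it might crash, corrupt memory, or do nothing observable, which is why "UAF" on its own doesn't tell you much about real-world impact.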
Google did extensive fuzzing to discover it.
The trade-off is that FFmpeg had to divert resources to fix this, when the chance it would have been discovered independently is tiny, and the chance it would be exploited even tinier.
The bug exists whether or not google publishes a public bug report. They are no more making the project less secure than if some retro-game enthusiast had found the same bug and made a blog post about it.
Publishing bugs that the project has so that they can be fixed is actively making the project more secure. How is someone going to do anything about it if Google didn’t do the research?
Did you see how the FFMPEG project patched a bug for a 1995 console? That's not a good use for the limited amount of volunteers on the project. It actively makes it less secure by taking away from more pertinent bugs.
The codec can be triggered to run automatically by adversarial input. The irrelevance of the format is itself irrelevant when ffmpeg has it on by default.
Publicizing vulnerabilities is the problem though. Google is ensuring obscure or unknown vulnerabilities will now be very well known and very public.
This is significant when they represent one of the few entities on the planet likely able to find bugs at that scale due to their wealth.
So funding a swarm of bug reports, for software they benefit from, using a scale of resources not commonly available, while not contributing fixes and instead demanding timelines for disclosure, seems a lot more like they'd just like to drive people out of open source.
I think most people learned about this bug from FFmpeg's actions, not Google's. Also, you are underestimating adversaries: Google spends quite a bit of money on this, but not a lot given their revenue, because their primary purpose is not finding security bugs. There are entities that are smaller than Google but derive almost all their money from finding exploits. Their results are broadly comparable but they are only publicized when they mess up.
> so Google is actively spending money on making open source projects better and more secure
It looks like they are now starting to flood OSS with issues because "our AI tools are great", but don't want to spend a dime helping to fix those issues.
According to the ffmpeg maintainer's own website (fflabs.eu) Google is spending plenty of dimes helping to fix issues in ffmpeg. Certainly they're spending enough dimes for the maintainers to proudly display Google's logo on their site as a customer of theirs.
Yes and if you look on ffmpeg’s site you’ll find a link where they promote hiring their devs independently as consultants for ffmpeg work. Note the names of those maintainers. Now go to fflabs.eu, observe that they are an ffmpeg consulting firm, scroll down on the main page and observe the Google logo among their promoted list of customers. Now click on the “team” link and check out the names of the people that run fflabs. Notice that they are some of the very same people listed in the ffmpeg main site. Ergo Google pays ffmpeg developers to work on ffmpeg.
> Note the names of those maintainers. Now go to fflabs.eu
> Now click on the “team” link and check out the names
Quite the investigative work you've done there: some maintainers may do some work that surely... means something?
Meanwhile the actual maintainer is actually patching thousands of vulnerabilities in ffmpeg, including the recent ones reported by Google:
--- start quote ---
so far i got 7560€ before taxes for my security work in the last 7 months. And thats why i would appreciate that google, facebook, amazon and others would pay me directly. Also that 7560 i only got after the twitter noise.
--- end quote ---
The user is vulnerable while the problem is unfixed. Google publishing a vulnerability doesn't change the existence of the vulnerability. If Google can find it, so can others.
Making the vulnerability public makes it easy to find to exploit, but it also makes it easy to find to fix.
If it is so easy to fix, then why doesn't Google fix it? So far they've spent more effort in spreading knowledge about the vulnerability than fixing it, so I don't agree with your assessment that Google is not actively making the world worse here.
I didn't say it was easy to fix. I said a publication made it easy to find it, if someone wanted to fix something.
If you want to fix up old codecs in ffmpeg for fun, would you rather have a list of known broken codecs and what they're doing wrong, or would you rather have to find a broken codec first?
What a strange sentence. Google can do a lot of things that nobody can do. The list of things that only Google, a handful of nation states, and a handful of Google-peers can do is probably even longer.
Sure, but running a fuzzer on ancient codecs isn't that special. I can't do it, but if I wanted to learn how, codecs would be a great place to start. (in fact, Google did some of their early fuzzing work in 2012-2014 on ffmpeg [1]) Media decoders have been the vector for how many zero interaction, high profile attacks lately? Media decoders were how many of the Macromedia Flash vulnerabilities? Codecs that haven't gotten any new media in decades but are enabled in default builds are a very good place to go looking for issues.
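To make that concrete, a decoder fuzz harness is only a few dozen lines. Here's a minimal libFuzzer-style sketch - simplified and hypothetical, not ffmpeg's actual fuzz target; the decoder name ("sanm") and the build line are assumptions for illustration:

    /* Minimal libFuzzer-style decoder harness sketch (illustrative only).
     * Build roughly with:
     *   clang -g -fsanitize=fuzzer,address harness.c -lavcodec -lavutil
     */
    #include <stdint.h>
    #include <stddef.h>
    #include <string.h>
    #include <libavcodec/avcodec.h>

    int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size) {
        if (size == 0 || size > (1 << 20))
            return 0;

        /* "sanm" is just an example decoder name; real targets also set up
         * extradata, dimensions, etc. as needed by the codec. */
        const AVCodec *codec = avcodec_find_decoder_by_name("sanm");
        if (!codec)
            return 0;

        AVCodecContext *ctx = avcodec_alloc_context3(codec);
        if (!ctx)
            return 0;
        if (avcodec_open2(ctx, codec, NULL) < 0) {
            avcodec_free_context(&ctx);
            return 0;
        }

        AVPacket *pkt = av_packet_alloc();
        AVFrame *frame = av_frame_alloc();
        if (pkt && frame && av_new_packet(pkt, (int)size) == 0) {
            /* Feed the fuzzer-generated bytes to the decoder as one packet. */
            memcpy(pkt->data, data, size);
            if (avcodec_send_packet(ctx, pkt) == 0) {
                /* Drain whatever frames the decoder produces. */
                while (avcodec_receive_frame(ctx, frame) == 0)
                    ;
            }
        }

        av_frame_free(&frame);
        av_packet_free(&pkt);
        avcodec_free_context(&ctx);
        return 0;
    }

Run something like this under ASan/UBSan with a small corpus of sample files and the sanitizers flag exactly the kinds of out-of-bounds reads and use-after-frees being discussed here.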
Google does have immense scale that makes some things easier. They can test and develop congestion control algorithms with world wide (ex-China) coverage. Only a handful of companies can do that; nation states probably can't. Google isn't all powerful either, they can't make Android updates really work even though it might be useful for them.
you'd assume that a bad actor would have found the exploit and kept it hidden for their own use. To assume otherwise is fundamentally flawed security practice.
which bad actors would have more of, as they'd have a financial incentive to make use of the found vulnerabilities. White hats don't get anything in return (financially) - it's essentially charity work.
In this world and the alternate universe both, attackers can also use _un_published vulnerabilities because they have high incentive to do research. Keeping a bug secret does not prevent it from existing or from being exploited.
As clearly stated, most users of ffmpeg are unaware that they are using it. Even if they knew about a vulnerability in ffmpeg, they wouldn't know they are affected.
Really, the burden is on those shipping products that depend on ffmpeg: they are the ones who have to fix the security issues for their customers. If Google is one of those companies, they should provide the fix in the given time.
But how are those companies supposed to know they need to do anything unless someone finds and publicly reports the issue in the first place? Surely we're not advocating for a world where every vendor downstream of the ffmpeg project independently discovers and patches security vulnerabilities without ever reporting the issues upstream right?
They could fund both vulnerability scanning and vulnerability fixing (if they don't want to do the fixing in-house, they can sponsor the upstream team). That is, to me, the obvious "how"; I am not sure why you believe there is only one way to do it.
It's about accountability! Who actually ends up doing the work, once those who ship it to customers care, is theirs to figure out (though note that maintainers will have some burden to review, integrate and maintain the change anyway).
They regularly submit code and they buy consulting from the ffmpeg maintainers according to the maintainer's own website. It seems to me like they're already funding fixes in ffmpeg, and really everyone is just mad that this particular issue didn't come with a fix. Which is honestly not a great look for convincing corporations to invest resources into contributing to upstream. If regular patches and buying dev time from the maintainers isn't enough to avoid getting grief for "not contributing" then why bother spending that time and money in the first place?
I have about 100x as much sympathy for an open source project getting time to fix a security bug than I do a multibillion dollar company with nearly infinite resources essentially blackmailing a small team of developers like this. They could -easily- pay a dev to fix the bug and send the fix to ffmpeg.
Since when are bug reports blackmail? If some retro game enthusiast discovered this bug and made a blog post about it that went to the front page of HN, is that blackmail? If someone running a fuzzer found this bug and dumped a public bug report into github is that blackmail? What if google made this report privately, but didn't say anything about when they would make it public and then just went public at some arbitrary time in the future? How is "heads up, here's a bug we found, here's the reproduction steps for it, we'll file a public bug report on it soon" blackmail?
In my case, yes, but my pipeline is closed. Processes run on isolated instances that are terminated without delay as soon as the workflow ends. Even if uncaught fatal errors occur, janitor scripts run to ensure instances are terminated on a fast schedule. This isn't something running on my personal device with random content provided by some unknown someone on the interwebs.
So while this might be a high security risk because it possibly could allow RCE, the real-world risk is very low.
> On the other hand as an ffmpeg user do you care? Are you okay not being told a tool you're using has a vulnerability in it because the devs don't have time to fix it?
Yes, because publicly disclosing the vulnerability means someone will have enough information to exploit it. Without public disclosure, the chance of that is much lower.
Public disclosure also means users will know about it and distros can turn off said codec downstream. It's not that hard lol. Information is always better. You may also get third-party contributors who will then be motivated to fix the issue. If no one signs up to do so, maybe this codec should just be permanently shelved.
Note that ffmpeg doesn't want to remove the codec because their goal is to play every format known to man, but that's their goal. No one forces them to keep all codecs working.
Let's say that FFMPEG has a severity-10 CVE where an easily crafted stream can cause an RCE. So what?
We are talking about software commonly deployed by end users to encode their own media. Something that rarely comes in untrusted forms. For an exploit to happen, you need a situation where an attacker gets a crafted media file out to people who commonly transcode it via FFMPEG. Not an easy task.
This sure does matter to the likes of google assuming they are using ffmpeg for their backend processing. It doesn't matter at all for just about anyone else.
You might as well tell me that `tar` has a CVE. That's great, but I don't generally go around tarring or untarring files I don't trust.
AIUI, (lib)ffmpeg is used by practically everything that does anything with video, including such definitely-security-sensitive things as Chrome, which people use to play untrusted content all the time.
Ffmpeg is a versatile toolkit used in a lot of different places.
I would be shocked if any company working with user-generated video, from the likes of Zoom or TikTok or YouTube down to small apps all over, did not have it in their pipeline somewhere.
There are alternatives such as gstreamer and proprietary options. I can’t give names, but can confirm at least two moderately sized startups that use gstreamer in their media pipeline instead of ffmpeg (and no, they don’t use gst-libav).
One because they are a rust shop and gstreamer is slightly better supported in that realm (due to an official binding), the other because they do complex transformations with the source streams at a basal level vs high-level batch transformations/transcoding.
There are certainly features and use cases where gstreamer is a better fit than ffmpeg.
My point was it would be hard to imagine eschewing ffmpeg completely, not that there is no value for other tools and ffmpeg is better at everything. It is so versatile and ubiquitous it is hard to not use it somewhere.
In my experience there are almost always some scenarios in the stack where throwing in ffmpeg for a step is simpler and easier, even if there is no proper language binding etc., for some non-core step or other.
From a security standpoint that wouldn't matter: as long as it touches data, security vulnerabilities would be a concern.
It would be surprising, but not impossible, to forgo ffmpeg completely. It would be just like how this site is written in Lisp: not something you would typically expect, but not impossible.
I wasn’t countering your point, I just wanted to add that there are alternatives (well, an alternative in the OSS sphere) that are viable and well used outside of ffmpeg despite its ubiquity.
If you use a trillion dollar AI to probe open source code in ways that no hacker could, you're kind of unearthing the vulnerabilities yourself if you disclose them.
It's standard practice for commercially-sponsored software, and it doesn't necessarily fit volunteer maintained software. You can't have the same expectations.
Vulnerabilities should be publicly disclosed. Both closed and open source software are scrutinized by the good and the bad people; sitting on vulnerabilities isn't good.
Consumers of closed source software have a pretty reasonable expectation that the creator will fix it in a timely manner. They paid money, and (generally) the creator shouldn't put the customer in a nasty place because of errors.
Consumers of open source software should have zero expectation that someone else will fix security issues. Individuals should understand this; it's part of the deal for us using software for free. Organizations that are making money off of the work of others should have the opposite of an expectation that any vulns are fixed. If they have or should have any concern about vulnerabilities in open source software, then they need to contribute to fixing the issue somehow. Could be submitting patches, paying a contractor or vendor to submit patches, paying a maintainer to submit patches, or contributing in some other way that betters the project. The contribution they pick needs to work well with the volunteers, because some of the ones I listed would absolutely be rejected by some projects -- but not by others.
The issue is that an org like Google, with its absolute mass of technical and financial resources, went looking for security vulnerabilities in open source software with the pretense of helping. But if Google (or whoever) doesn't finish the job, then they're being a piece of shit to volunteers. The rest of the job is reviewing the vulns by hand and figuring out patches that can be accepted with absolutely minimal friction.
To your point, the beginning of the expectation should be that vulns are disclosed, since otherwise we have known insecure software. The rest of the expectation is that you don't get to pretend to do a nice thing while _knowing_ that you're dumping more work on volunteers that you profit from.
In general, wasting the time of volunteers that you're benefiting from is rude.
Specifically, organizations profiting off of volunteer work and wasting their time makes them an extractive piece of shit.
Why are the standards and expectations different for Google vs an independent researcher? Just because they are richer doesn't mean they should be held to a standard that isn't applied to an independent researcher.
The OSS maintainer has the responsibility to either fix, or declare they won't fix - both are appropriate actions, and they are free to make this choice. The consumer of OSS should have the right to know what vulns/issues exist in the package, so that they make as informed a decision as they can (such as adding defense in depth for vulns that the maintainers chooses not to fix).
Google makes money off ffmpeg in general but not this part of the code. They're not getting someone else to write a patch that helps them make money, because google will just disable this codec if it wasn't already disabled in their builds.
Also in general Google does investigate software they don't make money off.
> independent researchers don't make money off the projects that they investigate
But they make money off the reputational increase they earn for having their name attached to the investigation. Unless the investigation and report are anonymous and their name not attached (which could be true for some researchers), I can say that they're not doing charity.
That's a one-time bonus they get for discovering a bug, not from using the project in production. Google also gets this reward, by the way. Therefore it's still imbalanced.
I'm an open source maintainer and I have never been in a situation where someone filing a security issue would withhold it indefinitely, nor would I ever think of asking them to withhold it forever. If there are complications maybe we can discuss a delayed disclosure, but ffmpeg is just complaining about the whole concept of delayed disclosures, which seems really immature to me.
As a user of ffmpeg I would definitely want to know this kind of information. The responsibility the issue filer has is not to the project, but to the public.
You disclose so that users can decide what mitigations to take. If there's a way to mitigate the issue without a fix from the developers the users deserve to know. Whether the developers have any obligation to fix the problem is up to the license of the software, the 90 day concession is to allow those developers who are obligated or just want to issue fixes to do so before details are released.
The entire conflict here is that norms about what's considered responsible were developed in a different context, where vulnerability reports were generated at a much lower rate and dedicated CVE-searching teams were much less common. FFmpeg says this was "AI generated bug reports on an obscure 1990s hobby codec"; if that's accurate (I have no reason to doubt it, just no time to go check), I tend to agree that it doesn't make sense to apply the standards that were developed for vulnerabilities like "malicious PNG file crashes the computer when loaded".
The codec is compiled in, enabled by default, and auto detected through file magic, so the fact that it is an obscure 1990s hobby codec does not in any way make the vulnerability less exploitable. At this point I think FFmpeg is being intentionally deceptive by constantly mentioning only the ancient obscure hobby status and not the fact that it’s on by default and autodetected. They have also rejected suggestions to turn obscure hobby codecs off by default, giving more priority to their goal of playing every media format ever than to security.
Yeah, ffmpeg's responses are really giving me a disingenuous vibe, as their argument is completely misleading (and it seems to be working on a decent number of people who don't try to read further into it). IMO it really damages their reputation in my eyes. If they had handled it maturely I think I would have had a bit more respect for them.
As a user this is making me wary of running it tbh.
I think the discussion on what standard practice should be does need to be had. This seems to be throwing blame at people following the current standard.
If the obscure codec is not included by default or cannot be triggered by any means other than being explicitly asked for, then it would be reasonable to tag it Won't Fix. If it can be triggered by other means, such as auto file-type detection on a renamed file, then it doesn't matter how obscure the feature is; the exploit would affect all users.
What is the alternative to a time limited embargo. I don't particularly like the idea of groups of people having exploits that they have known about for ages that haven't been publicly disclosed. That is the kind of information that finds itself in the wrong hands.
Of course companies should financially support the developers of the software they depend upon. Many do this for OSS in the form of having a paid employee that works on the project.
Specifically, FFMPEG seems to have a problem that much of their limitation of resources comes from them alienating contributors. This isn't isolated to just this bug report.
FFMPEG does autodetection of what is inside a file, the extension doesn't really matter. So it's trivial to construct a video file that's labelled .mp4 but is really using the vulnerable codec and triggers its payload upon playing it. (Given ffmpeg is also used to generate thumbnails in Windows if installed, IIRC, just having a trapped video file in a directory could be dangerous.)
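You can see the content-based detection directly with the libavformat API; a minimal sketch, assuming the ffmpeg development headers are installed and with error handling trimmed:

    /* Print the container and codecs libavformat detects for a file.
     * Build roughly with: cc probe.c -lavformat -lavcodec -lavutil */
    #include <stdio.h>
    #include <libavformat/avformat.h>
    #include <libavcodec/avcodec.h>

    int main(int argc, char **argv) {
        if (argc < 2) {
            fprintf(stderr, "usage: %s <file>\n", argv[0]);
            return 1;
        }

        AVFormatContext *fmt = NULL;
        if (avformat_open_input(&fmt, argv[1], NULL, NULL) < 0) {
            fprintf(stderr, "could not open %s\n", argv[1]);
            return 1;
        }
        if (avformat_find_stream_info(fmt, NULL) < 0) {
            avformat_close_input(&fmt);
            return 1;
        }

        /* The demuxer was chosen by probing the file's contents,
         * whatever the file name or extension claims. */
        printf("container: %s\n", fmt->iformat->name);
        for (unsigned i = 0; i < fmt->nb_streams; i++)
            printf("stream %u: %s\n", i,
                   avcodec_get_name(fmt->streams[i]->codecpar->codec_id));

        avformat_close_input(&fmt);
        return 0;
    }

Rename a sample of the obscure format to .mp4 and the reported container/codec stay the same, because the demuxer is chosen by probing the bytes; the extension is at most a weak hint.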
Silly nitpick, but you search for vulnerabilities not CVEs. CVE is something that may or may not be assigned to track a vulnerability after it has been discovered.
Most security issues probably get patched without a CVE ever being issued.
It is accurate. This is a codec that was added for archival and digital preservation purposes. It’s like adding a Unicode block for some obscure 4000 year old dead language that we have a scant half dozen examples of writing.
Why is Google deliberately running an AI process to find these bugs if they're just going to dump them all on the FFmpeg team to fix?
They have the option to pay someone to fix them.
They also have the option to not spend resources finding the bugs in the first place.
If they think these are so damn important to find that it's worth devoting those resources to, then they can damn well pay for fixing them too.
Or they can shut the hell up and let FFmpeg do its thing in the way that has kept it one of the https://xkcd.com/2347/ pieces of everyone's infrastructure for over 2 decades.
Google is a significant contributor to ffmpeg by way of VP9/AV1/AV2. It's not like it's a gaping maw of open-source abuse, the company generally provides real value to the OSS ecosystem at an even lower level than ffmpeg (which is saying a lot, ffmpeg is pretty in-the-weeds already).
As to why they bother finding these bugs... it's because that's how Google does things. You don't wait for something to break or be exploited, you load your compiler up with sanitizers and go hunting for bugs.
Yeah this one is kind of trivial, but if the bug-finding infrastructure is already set up it would be even more stupid if Google just sat on it.
So to be clear, if Google doesn't include patches, you would rather they don't make bugs they find in software public so other people can fix them?
That is, you'd rather a world where Google either does know about a vulnerability and refuses to tell anyone, or just doesn't look for them at all, over a world where google looks for them and lets people know they exist, but doesn't submit their own fix for it.
Why do you want that world? Why do you want corporations to reduce the already meager amounts of work and resources they put into open source software even further?
Many people are already developing and fixing FFmpeg.
How many people are actively looking for bugs? Google, and then the other guys that don't share their findings, but perhaps sell them to the highest bidder. Seems like Google is doing some good work by just picking big, popular open source projects and seeing if they have bugs, even if they don't intend to fix them. And I doubt Google was actually using the Lucas Arts video format their latest findings were about.
However, in my mind the discussion whether Google should be developing FFmpeg (beyond the codec support mentioned elsewhere in the thread) or other OSS projects is completely separate from whether they should be finding bugs in them. I believe most everyone would agree they should. They are helping OSS in other ways though, e.g. https://itsfoss.gitlab.io/post/google-sponsors-1-million-to-... .
I would love to see Google contribute here, but I think that's a different issue.
Are the bug reports accurate? If so, then they are contributing just as if I found them and sent a bug report, I'd be contributing. Of course a PR that fixes the bug is much better than just a report, but reports have value, too.
The alternative is to leave it unfound, which is not a better alternative in my opinion. It's still there and potentially exploitable even when unreported.
But FFmpeg does not have the resources to fix these at the speed Google is finding them.
It's just not possible.
So Google is dedicating resources to finding these bugs
and feeding them to bad actors.
Bad actors who might, hypothetically, have had the information before, but definitely do once Google publicizes them.
You are talking about an ideal situation; we are talking about a real situation that is happening in the real world right now, wherein the option of Google reports bug > FFmpeg fixes bug simply does not exist at the scale Google is doing it at.
A solution definitely ought to be found. Google putting up a few millionths of a percent of their revenue or so towards fixing the bugs they find in ffmpeg would be the ideal solution here, certainly. Yet it seems unlikely to actually occur.
I think the far more likely result of all the complaints is that Google simply completely disengages from ffmpeg and stops doing any security work on it. I think that would be quite bad for the security of the project - if Google can trivially find bugs at a high speed such that it overwhelms the ffmpeg developers, I would imagine bad actors can also search for them and find those same vulnerabilities Google is constantly finding, and if they know that those vulnerabilities very much exist, but that Google has simply stopped searching for them upon demand of the ffmpeg project, this would likely give them extremely high motivation to go looking in a place they can be almost certain they'll find unreported/unknown vulnerabilities in. The result would likely be a lot more 0-day attacks involving ffmpeg, which I do not think anyone regards as a good outcome (I would consider "Google publishes a bunch of vulnerabilities ffmpeg hasn't fixed so that everyone knows about them" to be a much preferable outcome, personally)
Now, you might consider that possibility fine - after all, the ffmpeg developers have no obligation to work on the project, and thus to e.g. fix any vulnerabilities in it. But if that's fine, then simply ignoring the reports Google currently makes is presumably also fine, no ?
I really don’t understand the whole us-vs-them discourse. Why should it be only Google fixing the bugs? If volunteers are not enough, maybe more volunteers can step up and help FFmpeg, via direct patches or via directly lobbying companies to fund the project.
In my opinion, if the problem is money and they cannot raise enough, then somebody should help them with that, shouldn't they?
If widely deployed infrastructure software is so full of vulnerabilities that its maintainers can't fix them as fast as they're found, maybe it shouldn't be widely deployed, or they shouldn't be its maintainers. Disabling codecs in the default build that haven't been used in 30 years might be a good move, for example.
Either way, users need to know about the vulnerabilities. That way, they can make an informed tradeoff between, for example, disabling the LucasArts Smush codec in their copy of ffmpeg, and being vulnerable to this hole (and probably many others like it).
I mean, yes, the ffmpeg maintainers are very likely to decide this on their own, abandoning the project entirely. This is already happening for quite a few core open source projects that are used by multiple billion-dollar companies and deployed to billions of users.
A lot of the projects probably should be retired and rewritten in safer system languages. But rewriting all of the widely-used projects suffering from these issues would likely cost hundreds of millions of dollars.
The alternative is that maybe some of the billion-dollar companies start making lists of all the software they ship to billions of users, and hire some paid maintainers through the Linux or Apache Foundations.
> But FFmpeg does not have the resources to fix these at the speed Google is finding them.
Google submitting a patch does not address this issue. The main work for maintainers here is making the decision whether or not they want to disable this codec, whether or not Google submits a patch to do that is completely immaterial.
What makes you think the bad actors aren't already finding these bugs? From the looks of it, there isn't really any rocket science going on here. There are equally well-funded bad actors who will and do find these issues.
With Google finding these bugs, at least the user can be informed. In this instance, for example, the core problem is that the codec is in *active use*. Ffmpeg makes the disingenuous argument that it's old and obscure, but omits the fact that it's still compiled in, meaning an attacker can craft a file, send it to you, and the attack still works.
A user (it could be a distro who packages ffmpeg) can use this information to turn off the codec that virtually no one uses today and make their distribution of ffmpeg more secure. Not having this information means they can't do that.
If ffmpeg doesn't have the resources to fix these bugs, at least let the public know so we can deal with it.
Also, just maybe, they wouldn't have that many vulnerabilities filed against them if the project took security more seriously to begin with? It's not a good sign for the software when you get so many valid security reports and just ask them to withhold them.
The actual real alternative is that the ffmpeg maintainers quit, just like the libxml2 maintainer did.
A lot of these core pieces of infrastructure are maintained by one to three middle-aged engineers in their free time, for nothing. Meanwhile, billion dollar companies use the software everywhere, and often give nothing back except bug reports and occasional license violations.
I mean, I love "responsible disclosure." But the only result of billion dollar corporations drowning a couple of unpaid engineers in bug reports is that the engineers will walk away and leave the code 100% unmaintained.
And yeah, part of the problem here is that C-based data parsers and codecs are almost always horrendously insecure. We could rewrite it all in Rust (and I have in fact rewritten one obscure codec in Rust) or WUFFS. But again, who's going to pay for that?
The other alternative if the ffmpeg developers change the text on their "about" screen from "Security is a high priority and code review is always done with security in mind. Though due to the very large amounts of code touching untrusted data security issues are unavoidable and thus we provide as quick as possible updates to our last stable releases when new security issues are found." to something like "Security is a best-effort priority. Code review is always done with security in mind. Due to the very large amounts of code touching untrusted data security issues are unavoidable. We attempt to provide updates to our last stable releases when new security issues are found, but make no guarantees as to how long this may take. Priority will be given to reports including a proof-of-concept exploit and a patch that fixes the security bug."
Then point to the "PoC + Patch or GTFO" sign when reports come in. If you use a library with a "NO WARRANTY" license clause in an application where you're responsible for failures, it's on you to fix or mitigate the issues, not on the library authors.
> Why is Google deliberately running an AI process to find these bugs if they're just going to dump them all on the FFmpeg team to fix?
This is called fuzzing and it has been standard practice for over a decade. Nobody has had any problem with it until FFmpeg decided they didn’t like that AI filed a report against them and applied the (again, mostly standard at this point) disclosure deadline. FWIW, nobody would have likely cared except they went on their Twitter to complain, so now everyone has an opinion on it.
> My takeaway from the article was not that the report was a problem, but a change in approach from Google that they’d disclose publicly after X days, regardless of whether the project had a chance to fix it.
That is not an accurate description? Project Zero was using a 90 day disclosure policy from the start, so for over a decade.
What changed[0] in 2025 is that they disclose earlier than 90 days that there is an issue, but not what the issue is. And actually, from [1] it does not look like that trial policy was applied to ffmpeg.
> To me it’s okay to “demand” that a for-profit company (e.g. Google) fix an issue fast, because they have the resources. But to “demand” that an OSS project fix something within a certain (possibly tight) timeframe... well, I’m sure you know better than me that that’s not how volunteering works.
You clearly know that no actual demands or even requests for a fix were made, hence the scare quotes. But given you know it, why call it a "demand"?
When you publicize a vulnerability you know someone doesn't have the capacity to fix according to the requested timeline, you are simultaneously increasing the visibility of the vulnerability and name-calling the maintainers. All of this increases the pressure on the maintainers, and it's fair to call that a "demand" (quotes-included). Note that we are talking about humans who will only have their motivation dwindle: it's easy to say that they should be thick-skinned and ignore issues they can't objectively fix in a timely manner, but it's demoralizing to be called out like that when everyone knows you can't do it, and you are generally doing your best.
It's similar to someone cooking a meal for you, and you go on and complain about every little thing that could have been better instead of at least saying "thank you"!
Here, Google is doing the responsible work of reporting vulnerabilities. But any company productizing ffmpeg usage (Google included) should sponsor a security team to resolve issues in high profile projects like these too.
Sure, the problem is that Google is a behemoth and their internal org structure does not cater to this scenario, but this is what the complaint is about: make your internal teams do the right thing by both reporting, but also helping fix the issue with hands-on work. Who'd argue against halving their vulnerability finding budget and using the other half to fund a security team that fixes highest priority vulnerabilities instead?
> When you publicize a vulnerability you know someone doesn't have the capacity to fix according to the requested timeline
My understanding is that the bug in question was fixed about 100 times faster than Project Zero's standard disclosure timeline. I don't know what vulnerability report your scenario is referring to, but it certainly is not this one.
> and name-calling the maintainers
Except Google did not "name-call the maintainers" or anything even remotely resembling that. You just made it up, just like GP made up the "demands". It's pretty telling that all these supposed misdeeds are just total fabrications.
"When you publicize... you are ... name-calling": you are taking partial quotes out of context, where I claimed that publicizing is effectively doing something else.
> When you publicize a vulnerability you know someone doesn't have the capacity to fix according to the requested timeline, you are simultaneously increasing the visibility of the vulnerability and name-calling the maintainers.
So how long should all bug reporters wait before filing public bugs against open source projects? What about closed source projects? Anyone who works in software knows to ship software is to always have way more things to do than time to do it in. By this logic, we should never make bug reports public until the software maintainers (whether OSS, Apple or Microsoft) has a fix ready. Instead of "with enough eyeballs, all bugs are shallow" the new policy going forward I guess will be "with enough blindfolds, all bugs are low priority".
It's funny you come up with that suggestion when I clearly offer a different solution: "make your internal teams do the right thing by both reporting, but also helping fix the issue with hands-on work".
It's a call not to stop reporting, but to equally invest in fixing these.
Hands on work like filing a detailed bug report with suspected line numbers, reproduction code and likely causes? Look, I get it. It would be nice if Google had filed a patch with the bug. But also not every bug report is going to get a patch with it, nor should that be the sort of expectation we have. It's hard enough getting corporations to contribute time and resources to open source projects as it is, to set an expectation that the only acceptable corporate contribution to open source is full patches for any bug reports is just going to make it that much harder to get anything out of them.
In the end, Google does submit patches and code to ffmpeg, they also buy consulting from the ffmpeg maintainers. And here they did some security testing and filed a detailed and useful bug report. But because they didn't file a patch with the bug report, we're dragging them through the mud. And for what? When another corporation looks at what Google does do, and what the response this bug report has gotten them, which do you think is the most likely lesson learned?
1) "We should invest equally in reporting and patching bugs in our open source dependencies"
2) "We should shut the hell up and shouldn't tell anyone else about bugs and vulnerabilities we discover, because even if you regularly contribute patches and money to the project, that won't be good enough. Our name and reputation will get dragged for having the audacity to file a detailed bug report without also filing a patch."
> would they stop to consider what happens if everybody does that?
It’s almost like bitching about the “free labor” open source projects are getting from their users, especially when that labor is of good quality and comes from a user that is actively contributing both code and money to the project, is a losing strategy for open source fans and maintainers.
> All I am saying is that you should be as mindful to open source maintainers as you are to the people at companies.
And all I’m saying is there is nothing “un-mindful” about reporting real bugs to an open source project, whether that report is public or not. And especially when that report is well crafted and actionable. If this report were for something that wasn’t a bug, if it were a low-quality “foo is broke, plz to fix” report with no actionable information, or if it actually came with demands for responses and commitment timelines, then it would be a different matter. But ffmpeg runs a public bug tracker. To say then that making public bug reports is somehow disrespectful of the maintainers is ridiculous.
But the vulnerability exists already. You are making it sound like Google invented a problem for the project. Maybe the project should be name-called if it has hundreds of vulnerabilities? Whether it's run by volunteers or not does not matter to a user of the software.
No one is forcing anyone to do anything. Ffmpeg does not have to fix this bug, btw. If they don't have time, just let the disclosure happen.
Also, in this case, the simple fix is to turn off the codec. They just didn't want to do that because they want to have all codecs enabled. This is a conscious choice and no one is forcing them to do that. If the CVE was allowed to disclose without ffmpeg fixing the issue, at least the downstream users can turn off the codec themselves.
Just to be clear here: Googles' responsibility here is to the public (aka the users of ffmpeg), not the project.
Also, let's go back to your "cooked a meal" analogy. If I cook a meal for you, for free, that's nice. But that doesn't entitle me to be careless about hygiene and give you salmonella poisoning because I didn't wash my hands. Doing things for free doesn't absolve me of all responsibility.
No, publishing the vulnerability is the right thing to do for a secure world because anyone can find this stuff including nation states that weaponize it. This is a public service. Giving the dev a 90 day pre warn is a courtesy.
Expecting a reporter to fix your security vulnerabilities for you is entitlement.
If your reputation is harmed by your vulnerable software, then fix the bugs. They didn’t create the hazard, they discovered it. You created it, and acting like you’re entitled to the free labor of those who gave you the heads up is insane, and trying to extort them for their labor is even worse.
You don't get to decide that lmao. Telling everyone this project doesn't care about security if they ignore my CVE is obviously a demand, and your traditions cannot change that.
> Telling everyone this project doesn't care about security
Google did nothing like this.
If people infer that a hypothetical project doesn't care about security because they didn't fix anything, then they're right. It's not Google's fault they're factually bad at security. Making someone look bad is not always a bad action.
Drawing attention to that decision by publicly reporting a bug is not a demand for what the decision will be. I could imagine malicious attention-getting but a bug report isn't it.
If merely publishing a bug they found, and doing nothing else, would qualify by your definition as "telling everyone this project doesn't care about security", then there is absolutely nothing wrong with doing that "telling".
If the FFmpeg team does not want people to file bug reports, then they should close their public issue tracker. This is not something that I decided but a choice that they made.
The fact that details of the issue _will_ be disclosed publicly is an implicit threat. Sure it's not an explicit threat, but it's definitely an implicit threat. So the demand, too, is implicit: fix this before we disclose publicly, or else your vulnerability will be public knowledge.
You should not be threatened by the fact that your software has security holes in it being made public knowledge. If you are, then your goals are fundamentally misaligned with making secure software.
I don't think that you understand the point of the delayed public disclosure. If it wasn't a threat, then there'd be no need to delay -- it would be publicly disclosed immediately.
Nobody is demanding anything. Google is just disclosing issues.
This opens up transparency about ffmpeg's security posture, giving others the chance to fix it themselves, isolate where it's run, or build on entirely new foundations.
All this assumes the reports do in fact point to real security issues; I'm not talking about AI-slop reports.
In addition to your point, it seems obvious that the disclosure policy for FOSS should be "when a patch is available" and not a static X days. The security issue should certainly be disclosed - when it's responsible to do so.
Now, if Google or whoever really feels like fixing fast is so important, then they could very well contribute by submitting a patch along with their issue report.
> ...then they could very well contribute by submitting a patch along with their issue report.
I don't want to discourage anyone from submitting patches, but that does not necessarily remove all (or even the bulk of) the work from the maintainers. Speaking as someone who has received numerous patches to multimedia libraries from security researchers: they still need review, they often have to be rewritten, and most importantly, the issue must be understood by someone with the appropriate domain knowledge and context to know whether the patch merely papers over the symptoms or resolves the underlying issue, whether the solution breaks anything else, and whether there might be more, similar issues lurking. It is hard for someone not deeply involved in the project to do all of those things.
> it seems obvious that disclosure policy for FOSS should be “when patch available” and not static X days
This is very far from obvious. If Google doesn't feel like prioritising a critical issue, it remains irresponsible not to warn other users of the same library.
If that’s the case why give the OSS project any time to fix at all before public disclosure? They should just publish immediately, no? Warn other users asap.
Why do you think it has to be all or nothing? They are both reasonable concerns. That's why reasonable disclosure windows are usually short but not zero.
Because it gives maintainers a chance to fix the issue, which they’ll do if they feel it is a priority. Google does not decide your priorities for you, they just give you an option to make their report a priority if you so choose.
Timed disclosure is just a compromise between giving the project time and the public interest. People have been doing this for years now. Why are people acting like this is new just because ffmpeg is whining?
And occasionally you do see immediate disclosures (see below). This usually happens for vulnerabilities that are time-sensitive or actively being exploited, where users need to know ASAP. It's very context dependent. Here I don't think that applies, so there's a standard delayed disclosure as a courtesy, giving the project a chance to fix it first.
Note the word "courtesy". The public interest always overrides considerations for the project's fragile ego after some time.
(Some examples of shortened disclosures include Cloudbleed and the aCropalypse cropping bug, where in each case there were immediate reasons to notify the public / users)
Full (immediate) disclosure, where no time is given to anyone to do anything before the vulnerability is publicly disclosed, was historically the default, yes. Coordinated vulnerability disclosure (or "responsible disclosure" as many call it) only exists because the security researchers that practice it believe it is a more effective way of minimizing how much the vulnerability might be exploited before it is fixed.
Unless the maintainers are incompetent or uncooperative, this does not feel like a good strategy. It is a good strategy on Google's side because it is easier for them to manage.
> In addition to your point, it seems obvious that disclosure policy for FOSS should be “when patch available” and not static X days.
So when the xz backdoor was discovered, you think it would have been better to sit on that quietly, try to wrest control of upstream away from the upstream maintainers, and wait until all the downstream projects had reverted the changes in their copies before making it public? Personally I'm glad that went public early. Yes, there is a tradeoff between the speed of public disclosure and the attention an unfixed vulnerability gets, but ultimately a vulnerability is a vulnerability, and people are better off knowing there's a problem than hoping that only the good guys know about it. If a Debian bug starts tee-ing all my network traffic to the CCP and the NSA, I'd rather know about it before a patch is available; at least that way I can decide to shut down my Debian boxes.
The XZ backdoor is not a bug but a malicious payload inserted by malicious actors. That vulnerability would have been used immediately, as it was created by attackers.
This bug is almost certainly too obscure to be found and exploited in the time it takes ffmpeg to produce a fix. On the other hand, this vuln being public so soon means any attacker is now free to develop their exploit before a fix is available.
If Google's goal is security, this vulnerability should only be disclosed after it's fixed, or after a reasonable time (which, according to the ffmpeg devs, 90 days is not, because they receive too many reports from Google).
A bug is a bug, regardless of the intent behind its insertion. You have no idea whether this bug was or wasn't intentionally inserted. It's of course very likely that it wasn't, but you don't and can't know that, especially given that malicious bug insertion is going to be designed to look innocent and have plausible deniability. Likewise, you don't know that use of the XZ backdoor was imminent. For all you know, the intent was to let it sit for a release or two, maybe with an eye towards waiting for it to appear in a particular downstream target, or just to make it harder to identify the source. Yes, just as it is unlikely that the ffmpeg bug was intentional, it's also unlikely the xz backdoor was intended to be a sleeper vulnerability.
But ultimately that's my point. You as an individual do not know who else has access or information about the bug/vulnerability you have found, nor do you have any insight into how quickly they intend to exploit that if they do know about it. So the right thing to do when you find a vulnerability is to make it public so that people can begin mitigating it. Private disclosure periods exist because they recognize there is an inherent tradeoff and asymmetry in making the information public and having effective remediations. So the disclosure period attempts to strike a balance, taking the risk that the bug is known and being actively exploited for the benefit of closing the gap between public knowledge and remediation. But inherently it is a risk that the bug reporter and the project maintainers are forcing on other people, which is why the end goal must ALWAYS be public disclosure sooner rather than later.
A 25-year-old bug in software is not the same as a backdoor (not a bug, a full-on backdoor). The bug is so old that if someone put it there intentionally, well, congrats on the 25-year-old 0day.
Meanwhile the XZ backdoor was 100% meant to be used. I didn't say when, and that doesn't matter; there is a malicious actor with the knowledge to exploit it. We can't say the same about a bug in a 1998 codec that was found by extensive fuzzing (roughly the kind of harness sketched below) and has no obvious exploitation path.
Now, should it be patched? Absolutely, but should the patch be done asap at the cost of other maybe more important security patches? Maybe, maybe not. Not all bugs are security vulns, and not all security vulns are exploitable.
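For anyone wondering what "found by extensive fuzzing" looks like in practice: the usual shape is a tiny harness that feeds attacker-controlled bytes straight into one decoder and lets the sanitizers flag memory errors. The sketch below is a simplified, hypothetical libFuzzer-style harness, not ffmpeg's own (the project maintains far more thorough ones); AV_CODEC_ID_SANM is used purely as a placeholder codec ID:

    /* Hypothetical, simplified decoder fuzz target. Build with
     * clang -fsanitize=fuzzer,address plus the libavcodec/libavutil libs. */
    #include <stdint.h>
    #include <stddef.h>
    #include <string.h>
    #include <libavcodec/avcodec.h>

    int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size)
    {
        if (size == 0 || size > (1 << 20))
            return 0;  /* keep inputs small so the fuzzer explores quickly */

        const AVCodec *codec = avcodec_find_decoder(AV_CODEC_ID_SANM);
        if (!codec)
            return 0;

        AVCodecContext *ctx = avcodec_alloc_context3(codec);
        AVPacket *pkt = av_packet_alloc();
        AVFrame *frame = av_frame_alloc();

        if (ctx && pkt && frame && avcodec_open2(ctx, codec, NULL) >= 0 &&
            av_new_packet(pkt, (int)size) >= 0) {
            memcpy(pkt->data, data, size);
            if (avcodec_send_packet(ctx, pkt) >= 0) {
                /* Drain decoded frames; ASan/UBSan report out-of-bounds accesses. */
                while (avcodec_receive_frame(ctx, frame) >= 0)
                    ;
            }
        }

        av_frame_free(&frame);
        av_packet_free(&pkt);
        avcodec_free_context(&ctx);
        return 0;
    }

Running harnesses like this for a long time is how decades-old parsing bugs surface; by itself that says nothing about whether a given crash is practically exploitable, which is the distinction being argued over here.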
> Absolutely, but should the patch be done asap at the cost of other maybe more important security patches? Maybe, maybe not. Not all bugs are security vulns, and not all security vulns are exploitable
I fully agree, which is why I really don't understand why everyone is all up in arms here. Google didn't demand that this bug get fixed immediately. They didn't demand that everything be dropped to fix a 25-year-old bug. They filed a (very good and detailed) bug report against an open source project. They gave a private report out of courtesy and in acknowledgment of the tradeoffs inherent in public bug disclosure, but ultimately a bug is a bug, and it's effectively already out there because the source code is public. If the ffmpeg devs didn't feel it was important to fix right away, nothing about filing a bug report, privately or publicly, changes any of that.
In the end, a report saying "fix this within 90 days or this gets public" for small-ish bugs like this is a kind of demand. Do this or this gets out and you'll have to make an express release to fix it anyway.
I can understand that stance for serious bugs and security vulnerabilities. I can understand such deadlines as a way to put pressure on a company with a big market cap. But these deadlines are exactly a demand put on the company: fix it asap or it goes public. We wouldn't have to do this if companies in general didn't need to be publicly pressured into fixing their stuff. Making it public has two objectives: warn users they may be at risk, and force the publisher to produce a fix asap or else risk a reputation hit.
> If the ffmpeg devs didn’t feel it was important to fix right away, nothing about filing a bug report, privately or publicly changes any of that.
It does matter how they report it, though. Had they given more time or staggered their reports over time, ffmpeg wouldn't have felt pressure to publish fixes asap.
Even if the devs could just say they won't fix it, any public project will want to maintain a certain quality level and not let security vulnerabilities become public.
In the end, had these reports been made by random security researchers, no drama would have happened. But if I see Google digging up 25-year-old bugs, is it that much to expect them to provide a patch as well?
But this isn't a "small-ish bug". What gave you that impression? It's a vulnerability in code that is both compiled in by default and reachable when ffmpeg is run with its default settings on a file crafted to trigger the bug.
And if you believe this is a "small-ish" bug just because of the ffmpeg Twitter account's gaslighting about "20 frames of a single video in Rebel Assault", then surely disclosing it would be irrelevant? The only way the disclosure timeline makes a difference is if ffmpeg also thinks the bug is serious.
> In the end, a report saying "fix this within 90 days or this gets public" for small-ish bugs like this is a kind of demand. Do this or this gets out and you'll have to make an express release to fix it anyway.
I think this is where the disconnect is. To my mind there is no "do this or else" message here, because there is no "or else". The report is a courtesy advance notice of a bug report that WILL be filed, no matter what the ffmpeg developers do. It's not like this is some awful secret that Google is promising not to disclose if ffmpeg jumps to their tune.
Further, the reality is most bug reports are never going to be given a 90-day window. Their site requests that if you find a security vulnerability you email their security team, but it doesn't tell you not to also file a bug report, and their bug report page doesn't tell you to keep anything you think might be a security or vulnerability bug off the tracker. And a search through the bug tracker shows more than a few open issues (sometimes years old) reporting segfault crashes, memory leaks, uninitialized variable access, heap corruption, divide-by-zero crashes, buffer overflows, null pointer dereferences and other such potential safety issues. It seems the ffmpeg team generally has no problem carrying a backlog of these issues, so surely one more in a (as we've been repeatedly reminded) 25-year-old obscure codec parser is hardly going to tank their reputation, right?
> In the end, had these reports been made by random security researchers, no drama would have happened.
And now we get to what is really the heart of the matter. If anyone else had reported this bug in this way, no one would care. It's not that Google did anything wrong; it's that Google has money, so everyone is mad that they didn't do even more than they already do. And frankly that attitude stinks. It's hard enough getting corporations to actually contribute back to open source projects, especially when the license doesn't obligate them to at all. I'm not advocating holding corporations to some lesser standard: if the complaint were that Google was shoving unvalidated, and un-validatable, low-effort reports into the bug tracker, or that they actually were harassing the ffmpeg developers with constant followups on their tickets and demands for status updates, then that would be poor behavior we would be equally upset about if it came from anyone. But like you said, any other security researcher behaving the same way would be just fine. Shitting on Google for behaving according to the same standards outlined on ffmpeg's own website, because of who they are and not what they've done, just tells other corporations that it doesn't matter if you contribute code and money in addition to bug reports: if you don't do something to someone's arbitrary standard based on WHO you are, rather than WHAT you do, you'll get shit on for it. And that's not going to encourage more cooperation and contributions from the corporations that benefit from these projects.
The OP/article is very clear and very direct about what the problems are. The response is the typical American conflict-shy "let's talk so we can slowly diminish your critique, and let's do it by talking instead of in writing so we can't really be held accountable for specifics". And, to me, it comes across as lazy: the OP/article is very specific about the problems, so just get to work already, no need for "further clarifications" (obviously disable that stupid bot for the Japanese community; then get to work restoring the original KBs from backups; then reach out to talk about next steps).
It's a tone-deaf response from the staff person. Zero respect for what's clearly many, many hours of contributed work.