It's not entirely settled law, but it seems the US courts would probably disagree with you. These issues were near the center of the Authors Guild v. Google case that ran from 2005 to 2015, where the Second Circuit found Google's book scanning and indexing to be fair use (the Supreme Court declined to hear the appeal in 2016). There's a good relevant summary of it here https://towardsdatascience.com/the-most-important-supreme-co...
But broadly the courts have upheld the rights of companies to use copyrighted works as inputs to commercial algorithmic derivative works like neural networks.
Now you might argue this doesn't apply here. A key aspect of the decision rested on the fact that the original copyright holders (book authors & publishers) were not directly harmed by Google's indexing of their works, since it probably drove more sales of those books. In this case it's not so clear. Is somebody using a diffusion model doing so instead of buying a piece of commercial art? If they're generating a new piece of art, I'd say probably not. If they're generating something deliberately similar to a specific existing piece, perhaps, though even then, if the result is deliberately different, it's still a tough argument to make. If the ML model is being used to deliberately replicate a specific artist's style, then I think you can make that case pretty strongly. But if you're building something that's an aggregate of a bunch of styles (almost always the case unless you specifically prompt it otherwise), then I don't think the courts would find that any damage has been done, and thus nobody taking this to court would have standing.
I think it's likely we will see this end up in the courts somehow. But being able to prove actual harm is critical in the US court system, and it's difficult to see how the courts would rule against the kinds of broad, general use that are most common for this sort of generative art.
Thank you — that's at least an argument I haven't heard yet, and one that isn't the usual "the AI is thinking" trope.
> Now you might argue this doesn't apply here.
Indeed, I would. In particular,
> and the revelations do not provide a significant market substitute for the protected aspects of the originals
I'm not sure that holds here. In Google's case, the product (a search engine) was completely different from the input (a book). Here … we're replacing art with art, or code with code. Admittedly different art. And … uh, maybe? different code. I'm also less certain given the extreme views the courts have taken on what constitutes de minimis copying.
> I think it's likely we will see this end up in the courts somehow.
I agree.
> But being able to prove actual harm is critical to the US court system. And it's difficult to see how the courts would rule against the kinds of broad general use that is most common for this kind of generative art.
This is a good argument too, though I'd still like to see it tried in court.
> If the ML model is being used to deliberately replicate a specific artist's style, then I think you can make that case pretty strongly.
I'll link the same example I linked in another comment [1]. Search for "On the left is a piece by award-winning Hollywood artist Michael Kutsche, while on the right is a piece of AI art that’s claimed to have copied his iconic style, including a blurred, incomplete signature"