
So this "Stable Diffusion model", it was training on a bunch of copyrighted data, and anything that comes of it must hope to launder the copyright sufficiently to not constitute a derivative work, somehow?

> these models were trained on image-text pairs from a broad internet scrape

… yep.
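
To make that concrete, each entry in such a scrape is roughly an (image, caption) record. A toy sketch; the field names here are purely illustrative, not the actual schema of LAION or any real dataset:

    from dataclasses import dataclass

    @dataclass
    class ImageTextPair:
        image_url: str  # where the (possibly copyrighted) image was scraped from
        caption: str    # alt text or nearby page text, used as the training label

    sample = ImageTextPair(
        image_url="https://example.com/artist-portfolio/piece.jpg",
        caption="oil painting of a knight, dramatic lighting",
    )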

I have the same issue with this as with GitHub Copilot.

I will admit, it is technically impressive, and something I would love to use, as someone who cannot draw worth a darn. But it is precisely because I cannot draw that I do not feel morally comfortable with this: I would be using a — complicated, admittedly — tool to derive art from the unwilling talents of others. (Admittedly, my skill in prompting & editing might matter, but that's true of "normal" derivative works, too!)



All art is derivative and there's no such thing as originality. Every human artist draws inspiration from their visual and emotional experiences, copyrighted or otherwise; how is this different? If I watch Star Wars and then make a space opera film that's aesthetically similar to Star Wars, that's not copyright laundering, it's inspiration! The same principle applies here.


Because the AI doesn't have "experience", it has training data that it's deriving the work from.

People have shown fairly convincing examples of this in the more general sense: e.g., well-known stock-image watermarks (iStockPhoto's, for instance) have appeared in model output without being prompted. An artist with "experience" would not reproduce a watermark. Or see this article[1], in which an AI was asked to mimic another artist's style, and the output attempted to reproduce the artist's signature.

(IANAL.) If you make a film that directly incorporates elements of Star Wars (which I believe is the more accurate description of what these models do), then yes, I would expect you to be handed a C&D. "Glowing space swords" aren't copyrighted, but if you include something indistinguishable from a lightsaber & call it a lightsaber? I bet Disney would have something to say about that.

[1]: https://kotaku.com/ai-art-dall-e-midjourney-stable-diffusion...


I don't personally see much difference between how I trained myself to be a portrait artist and how diffusion models are trained. In order to learn to draw stylized portraits, I looped over:

1. Find a photo of a person as a reference.

2. Create the portrait.

3. See how well the portrait compares to the reference and to the stylized art I was drawing inspiration from.

The work I was doing was original in the colloquial sense, but I also see zero reason why the AI's process is fundamentally inferior to mine.
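
For comparison, here's a heavily simplified sketch of the kind of loop a diffusion model trains with. This is illustrative only, not Stable Diffusion's actual code: it omits the timestep and text conditioning real models use, and all names are made up.

    import torch
    import torch.nn as nn

    # Stand-in for the U-Net a real diffusion model uses.
    model = nn.Sequential(
        nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
        nn.Conv2d(16, 3, 3, padding=1),
    )
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)

    for step in range(100):
        # 1. "Find a photo as reference": random stand-in images here.
        reference = torch.rand(8, 3, 32, 32)
        # 2. "Create": predict the noise that was mixed into the reference.
        noise = torch.randn_like(reference)
        noised = reference + 0.5 * noise
        predicted = model(noised)
        # 3. "Compare to the reference": the loss plays the role of self-critique.
        loss = nn.functional.mse_loss(predicted, noise)
        opt.zero_grad()
        loss.backward()
        opt.step()

The parallel isn't exact, but the loop structure (reference, attempt, compare, adjust) is the same one I described above.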


I am pretty sure that children learning to draw will in fact include some of those “copyright markers,” when not explicitly admonished by adults. What humans do is not some magical “experience,” they just have worse memories and better self-censorship.


This Kotaku article is spreading misinformation about this kind of model. The image shown in the article was not trying to imitate anyone, as the author of the image stated: https://twitter.com/illustrata_ai/status/1558559036575911936 (the artist's name was not in the prompt). It was only RJ Palmer who, for no reason, assumed this was the case. The signature does not even come close to the original, either, because the model is not actually trying to copy anything; the signature, like the rest of the image, is completely made up. The article also claims there are programs to explicitly remove signatures, which is likewise not true.

Articles like the one you posted are usually full of nonsense, written by people who don't really understand this kind of technology, and I wouldn't use them as a source of any kind. RJ Palmer's reaction to the image in the article was: "This literally tried to recreate the signature of Michael Kutsche in the corner. This is extremely fucked up." These people are good at creating controversy, even when it is based on claims that are not true.


It's not entirely settled law, but it seems US courts would probably disagree with you. These issues were near the center of Authors Guild v. Google, which ran from 2005 until 2016, when the Supreme Court declined to review the Second Circuit's ruling in Google's favor. There's a good relevant summary of it here: https://towardsdatascience.com/the-most-important-supreme-co...

But broadly the courts have upheld the rights of companies to use copyrighted works as inputs to commercial algorithmic derivative works like neural networks.

Now you might argue this doesn't apply here. A key aspect of the decision rested on the fact that the original copyright holders (book authors and publishers) were not directly harmed by Google's indexing of their works, since it probably drove more sales of those books. In this case it's not so clear. Is somebody using a diffusion model doing so instead of buying a piece of commercial art? If they're generating a new piece of art, I'd say probably not. If they're generating something deliberately similar to an existing piece, perhaps; but if it's deliberately different, harm is still a tough argument to make. If the ML model is being used to deliberately replicate a specific artist's style, then I think you can make that case pretty strongly. But if you're building something that's an aggregate of many styles (almost always the case, unless you specifically prompt it otherwise), then I don't think the courts would find that any damage has been done, and thus nobody taking this to court would have standing.

I think it's likely we will see this end up in the courts somehow. But being able to prove actual harm is critical to the US court system. And it's difficult to see how the courts would rule against the kinds of broad general use that is most common for this kind of generative art.


Thank you — that's at least an argument I've not heard before, and one that isn't the "the AI is thinking" trope.

> Now you might argue this doesn't apply here.

Indeed, I would. In particular,

> and the revelations do not provide a significant market substitute for the protected aspects of the originals

I'm not sure that holds here. In Google's case, the product (a search engine) was completely different from the input (a book). Here … we're replacing art with art (admittedly, different art), or code with code (and … uh, maybe? different code). I'm also less certain given the extreme views the courts have taken on what constitutes de minimis copying.

> I think it's likely we will see this end up in the courts somehow.

I agree.

> But being able to prove actual harm is critical to the US court system. And it's difficult to see how the courts would rule against the kinds of broad general use that is most common for this kind of generative art.

This is a good argument, too, though I'd like to see it tried in court, I think.

> If the ML model is being used to deliberately replicate a specific artist's style, then I think you can make that case pretty strongly.

I'll link the same example I linked in another comment [1]. Search for "On the left is a piece by award-winning Hollywood artist Michael Kutsche, while on the right is a piece of AI art that’s claimed to have copied his iconic style, including a blurred, incomplete signature"

[1]: https://kotaku.com/ai-art-dall-e-midjourney-stable-diffusion...



