October 23, 2025

Pressed a button – an artist? How AI is changing copyright law

A quick note: Hi there! I translated this article from Russian to English with the help of AI, as my English isn't very good. I did my best to edit it and fix the errors and awkward wording that I could find and understand (I trust I didn't make it worse :D).
Hopefully, the main ideas are still clear, haha! If you notice any mistakes or unclear phrasing, please don't hesitate to point them out ♥

Huge thanks to my friend BeaverXXX, translator of the novel "The Earth is Online", for proofreading and helping correct mistakes! 😘

In 2022, the "neural network boom" happened — generative models like Midjourney left the labs and became available to the general public. For several years now, generative models have been one of the most popular topics of discussion in the art community, causing a wave of debates, fears, and myths.

In this article, I want to tackle all of this and break it down — to separate myths from reality, discuss popular arguments from AI proponents (like the comparison to photography), explore whether a prompt can be copyrighted, and analyze the situation with model training and "Fair use".

I've tried to dig deep into the problem and examine the positions of all sides to make this article as objective as possible ^^

Table of Contents

  1. Why do neural networks cause so much controversy?
  2. The Legal Status of Neural Networks
    • What qualifies as a copyrightable work?
    • Is working with neural networks considered creative work?
    • The photography analogy: where is the line for creative contribution?
    • Is a prompt a copyrightable work?
    • Conclusions and a memo for AI users
  3. The Ethics of Training
    • Do neural networks learn like humans?
    • Similarities and differences in AI vs. human learning
    • What is "Fair use"?
    • "Fair use" and neural networks: what's the problem?
  4. Court Cases Now: The Current Status of AI
  5. The Future of the Technology and Authors

Why do neural networks cause so much controversy?

After studying opinions in the art community and conducting a poll on this topic in my group, I've tried to highlight several main reasons why this new technology evokes negative emotions:

  • Generative models are unethically trained.
    Many believe that using others' images for model training without permission is theft.
  • Neural networks can reproduce others' work.
    This isn't about the img2img feature (where an AI user uploads an image to change its style), but an undesirable side effect — "memorization". This is when the model accidentally memorizes an original from its training data. This can happen if the original was repeated multiple times or, conversely, if it's very rare and specific. Developers use various methods to minimize memorization, so unless the AI user is specifically trying to get a copy of an original from the dataset, the probability of this happening is very, very low. But it's not zero :D
  • Employers are replacing specialists with neural networks.
    There's a fear of job loss; companies might prefer fast and cheap AI over artists. As far as I know, neural networks are already being actively implemented in game development, especially mobile game development.
  • AI users pass themselves off as artists.
    People generate images and claim they drew them from scratch.
  • AI-art often looks cheap and bad.
    Many generated images seem low-quality or lacking any message.

Added to this list is a popular myth:

  • Neural networks are just "copy machines" or "collagists"; they assemble new pictures from pieces of other artists' work.
    This is not true. Modern diffusion models (Stable Diffusion, Midjourney) do not store or "slice up" images. Instead, during training, they review millions of pictures and "internalize" concepts: "what is a cat?", "what does anime style look like?", "how is a sunset drawn?". So, what's stored inside the generative model isn't a collection of pictures, but something like an internal set of knowledge — parameters (weights). Therefore, when an AI user gives it a request, it doesn't search for a ready-made picture but draws its own from scratch, based on these internalized concepts.
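A rough back-of-the-envelope calculation makes the same point numerically. The figures below are approximate public numbers used purely for illustration, not exact values:

```python
# Back-of-the-envelope check: could a diffusion model "store" its training set?
# Approximate public figures (assumptions for illustration): Stable Diffusion
# v1's U-Net has about 860 million parameters, and the LAION-2B-en dataset
# used for its training contains about 2.3 billion captioned images.
params = 860_000_000           # model weights
images = 2_300_000_000         # training images
bytes_per_param = 4            # float32 weight size

model_size_bytes = params * bytes_per_param
bytes_per_image = model_size_bytes / images

print(f"{bytes_per_image:.2f} bytes of model capacity per training image")
# Roughly 1.5 bytes per image -- less than a single RGB pixel, let alone
# the picture itself. The weights can only encode shared patterns, not copies.
```

Even if these rough numbers were off by an order of magnitude, the conclusion would hold: generation works from learned parameters, not from a stored archive of pictures.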

The Legal Status of Neural Networks

What qualifies as a copyrightable work?

I often encounter statements that images created with a generative model are not protected by copyright, meaning anyone can use others' generated images without legal consequences.

But are they really unprotected?

The law (Art. 1259 of the Civil Code of the Russian Federation and Resolution No. 10 of the Plenum of the Supreme Court of the RF dated 23.04.2019) states that for a work to be protected by copyright, it must meet the following criteria:

  1. Be the result of a human's independent creative contribution.
  2. Be expressed in an objective form (i.e., exist not just in the author's head).

While AI-generated images have no problem with objective form, the concept of "human creative contribution" is more complicated.

Is working with neural networks considered creative work?

Based on these criteria for a copyrightable work, I assume it all depends on the degree of the AI user's intervention.

  • Minimal involvement. If a person entered a simple prompt like "cat on a sofa" and picked the first result, their creative contribution is minimal. The algorithm did the main work. Such an image will most likely not be considered a copyrightable work belonging to the AI user.
  • Deep involvement. If a person intervenes more deeply in the creation process — for example, by painting over parts, combining several generated images, or making complex adjustments to the neural network to convey their idea, rather than just taking the first satisfactory result — then the probability that such a work will be recognized as the AI user's copyrightable work increases.

In the latter case, it's hard to argue that the picture appeared by itself, that the result was unpredictable, and that nothing depended on the human. This is especially true if the AI user uses the img2img feature or ControlNet and, for example, guides the generation with their own sketches.

The photography analogy: where is the line for creative contribution?

In disputes about copyright for generated images, the comparison to photography often comes up. And I want to discuss this topic a bit, too.

Critics of AI art say: "The AI user just pushes a button; the algorithm does the main work". Supporters of AI reply: "A photographer also just pushes a button, but their works are protected by law".

I understand why this analogy is frequently mentioned; photography went through a similar path of rejection in its time. Artists of the past claimed that simply copying reality couldn't be considered art. The arguments were similar to those now made against AI: the camera is a machine that only captures reality, photos have no soul (they steal it 😅), and no creative intent.

But over time, people came to the opinion that the value of a picture lies in the photographer's vision.

Here's an example: A photograph of a random cloud in the sky will be protected by copyright, even if the photographer didn't create the cloud or control the weather. Why? Because they made a series of creative choices:

  • Choice of composition: They decided exactly how the cloud would be positioned in the frame.
  • Choice of moment: They pressed the shutter at a specific second, capturing a unique state of light and form.
  • Technical settings: They chose the lens, aperture, and shutter speed, which affected the sharpness, depth, and atmosphere of the shot.
  • Post-processing: They might have applied filters or adjusted colors to enhance their vision.

Even in this simple example, there are enough actions that count as creative contribution.

Now let's turn to the AI user. Their work follows a similar principle — through a series of creative choices:

  • Formulating the idea and prompt: They think through the description of the scene, characters, mood, and style.
  • Choosing the model and settings: The AI user selects a specific neural network and model and sets technical parameters that influence the result.
  • Supervision and iterations: They generate dozens of options, select the best ones, refine the prompt, combine successful outputs, or use additional tools like img2img and ControlNet for more control.
  • Final processing: Some authors refine the images in graphics editors.

So, what's the difference?

The main difference is the degree of control and predictability.

  • A photographer works with a reality they can see and has a high level of control over the final image. They fully control their tools and know what result they'll get with certain settings.
  • An AI user often acts as a "co-author" or "commissioner" who sets the direction, but the result always contains an element of randomness generated by the machine.
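This difference is easy to illustrate with a toy sketch (illustrative Python, not a real generator): the user chooses the prompt and can reproduce a result by fixing the seed, but cannot predict in advance what any given seed will produce.

```python
import random

# Toy "generator": deterministic for a given (prompt, seed) pair, but the
# user cannot know in advance which combination a new seed will yield --
# they can only generate, look, and select afterwards.
def toy_generate(prompt, seed):
    rng = random.Random(f"{prompt}|{seed}")
    palette = ["warm", "cold", "pastel", "neon"]
    layout = ["centered", "rule-of-thirds", "close-up", "wide shot"]
    return rng.choice(palette), rng.choice(layout)

print(toy_generate("cat on a sofa", seed=1))  # some (palette, layout) pair
print(toy_generate("cat on a sofa", seed=2))  # same prompt, likely a different pair
print(toy_generate("cat on a sofa", seed=1))  # same seed -> same result again
```

Real generators behave analogously at a vastly larger scale: the seed fixes the initial noise, so a recorded seed makes a generation repeatable, even though the outcome of a new seed remains unpredictable.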

Conclusion: Copyright protects not the amount of effort expended, but the presence of human creative contribution that uses technology as a tool to realize a vision.

For example, a popular opinion now is that a prompt, even a very detailed one, is much like a creative brief (or technical specification) for an artist. The author of the brief does intellectual work, but they don't become the author of the work created by the artist based on that brief.

I think the question of protecting AI images should shift from "Did a machine make this?" to "Was the human's creative contribution sufficient for them to be considered the author?". The more deeply a person is involved in the generation process and the more they add of themselves, the more grounds there are to recognize their authorship.

Is a prompt a copyrightable work?

This will depend on its complexity and type.

As mentioned earlier, in Russian legislation, one of the key criteria for protecting works is the presence of human creative effort. When considering prompts, the following position is common among lawyers:

  • Simple prompts that are just a set of words, like "cat, sofa, high quality, 4k", are not protected. They are more comparable to a list of tags under a photo or a list of ingredients in a recipe.
  • A complex, detailed prompt written like a literary text describing a scene, emotions, style, and technical parameters can be protected as an independent literary work.

I'm not strong in literature, so I asked Grok to generate an example of such a prompt:

"In the twilight of a Victorian sitting room, where dusty curtains whisper secrets of forgotten love letters, stands a lone woman in a scarlet dress woven from shadows and silk. Her eyes — two shards of an autumn sky, full of longing for lost time — are fixed on the window, beyond which a storm rages, tearing the sky into shreds of lightning. Style: Impressionistic, like Monet in his misty gardens, with brushstrokes that melt into the air — high resolution, 8K, soft candlelight, emphasis on fabric textures and raindrops on the glass."

I remember that earlier you had to give neural networks commands as comma-separated keywords (cat, sofa, high quality, etc.), so I was curious whether a model could understand a literary prompt like this at all, and how accurate the result would be.

Well — it turned out pretty well. And on the first try, too! :D

Generation by Gemini from Grok's prompt

It's hard to call such a text just a set of commands. It has an author's style, originality — it's already the result of creative activity, and it would likely be protected by law from the moment it was written (provided, of course, that it was written by a human; texts generated entirely by a machine are not protected).

But!

Rights to the prompt do not equal rights to the picture

It's important to understand: protecting the prompt's text as a literary work does not mean automatically getting copyrights for the image generated from it.

I think a comparison to a film screenplay is appropriate here. The screenwriter owns the rights to their text, but this doesn't make them the rights holder of the visual product — the finished shots and scenes. The rights to the film belong to the director, cinematographer, and other creators who creatively transformed the text into an audiovisual work.

Returning to the topic of neural networks, it currently appears that a prompt alone is unlikely to confer copyright in the resulting image, even if the prompt were itself copyrighted.

The U.S. Copyright Office, in its report "Copyright and Artificial Intelligence, Part 2: Copyrightability Report" published in January 2025, came to the following conclusion: even complex prompts do not provide sufficient human control over the AI, as there is always an element of unpredictability when working with generative models. For generated content to be copyrighted, additional human contribution (editing, revisions) is needed.

However, there is no judicial practice on this issue in Russia yet, so future decisions may vary. Or maybe, we'll soon see neural networks that give us this "sufficient control" through prompts :D

Conclusions and a memo for AI users

You shouldn't thoughtlessly use others' generative AI outputs on the assumption that they belong to no one. No one knows how much creative effort the author put into a given image; it's quite possible it is protected by copyright just like traditional art.

Furthermore, the rights to generated images are always governed by the service's user agreement. Before using generated images, you must read it. The absence of court cases does not mean all use is permitted.

A brief guide for AI users: how to increase the chances of protecting your AI-generated images

  1. Always read the Terms of Service.
    • Find out who owns the rights to the images: you, the platform, or is it public property?
    • Check if there are restrictions on commercial use.
  2. Increase your creative contribution.
    • Don't limit yourself to just a prompt.
    • Use tools for greater control, like img2img, ControlNet, and Inpainting.
    • Refine in graphics editors. Final post-processing in Photoshop, etc., is creative work protected by copyright.
  3. Document the entire creation process.
    • Save the history of prompts, seeds, and technical settings.
    • Save intermediate versions and screenshots of your workflow in the neural network and graphics editor.
  4. Check content for legal risks.
    • Avoid others' intellectual property: recognizable brands, trademarks, characters, and portraits of real people without their consent.
    • Avoid prompts that directly reference the style of a specific contemporary artist. (Although by law, copying a style isn't infringement — this applies to humans. It's currently unclear what will be decided regarding generative AI.)
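Point 3 of the memo is easy to automate. Here is a minimal sketch (pure-stdlib Python; the field names are my own illustration, not any tool's official format) that saves a JSON "sidecar" record for each generated image:

```python
import datetime
import json
import pathlib

def save_generation_record(out_dir, prompt, seed, model, settings):
    """Write a JSON 'sidecar' file documenting how an image was produced."""
    record = {
        "timestamp": datetime.datetime.now().isoformat(timespec="seconds"),
        "prompt": prompt,
        "seed": seed,            # a fixed seed makes the run repeatable
        "model": model,
        "settings": settings,    # steps, CFG scale, sampler, etc.
    }
    out = pathlib.Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    path = out / f"generation_{seed}.json"
    path.write_text(json.dumps(record, ensure_ascii=False, indent=2))
    return path

# Example call (hypothetical values):
# save_generation_record("outputs", "cat on a sofa, oil painting",
#                        seed=123456, model="sd-v1-5",
#                        settings={"steps": 30, "cfg_scale": 7})
```

Such records cost nothing to keep, and in a dispute they could serve as evidence of a step-by-step creative process.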

The Ethics of Training

Perhaps the main legal and ethical problem is that models are trained on billions of images from the internet whose authors did not consent to this. So far, no country in the world has a clear legislative solution, but the positions of the parties have more or less taken shape.

Position of AI Developers

  • Analogy with human learning: A neural network learns like an art student who looks at thousands of works in museums and books to understand principles of composition, color, and style. The neural network doesn't store copies of images but extracts statistical patterns from them.
  • Creating new, not copying: Developers argue that training a model isn't theft, but a transformation of information to create entirely new works, which falls under "Fair use". The model doesn't copy others' paintings; it draws its own.
  • Impossibility of getting permissions: It's practically impossible to get permission from every author to use their work in a training set. There are billions of images in datasets. If consent were required for every picture, the technology's development would simply stop.

Position of Artists

  • Scale and competition: The analogy with the art student is incorrect, as a neural network processes information at an inhuman speed and scale. In seconds, an AI "studies" more than an artist does in a lifetime and can mass-produce content in similar styles, becoming a direct economic competitor to those on whose works it was trained.
  • Use without consent: This violates the basic ethical and legal principle of "don't take what isn't yours without asking". Commercial companies profit from the labor and talent of artists without asking their permission or offering compensation.
  • Absence of "Fair use": According to the "Fair use" principle, using works without the rights holder's consent is possible for certain purposes, for example, for criticism, education, or parody. The artists' position is that training commercial AI models does not fall under these criteria.

This leads to a complicated situation — on one hand, generative models create new, unique content that didn't exist before, and this content itself doesn't infringe on anyone's rights. But the training of the models themselves was potentially illegal.

Do neural networks learn like humans?

First, I want to mention why everyone is arguing about whether neural networks learn like humans or not. The answer to this question could help determine whether to consider model training legal or a violation of copyright.

  • AI learns like a human (developers' argument). If a court accepts this view, model training might be recognized as "Fair use," not requiring licenses or infringing copyright.
  • AI-generated art is a product of thoughtless combination (artists' argument). Under this approach, training models on protected content is a direct copyright infringement, as its output could be interpreted as the creation of derivative works without the rights holders' permission.

Similarities and differences in AI vs. human learning

Neural networks, whose creation was inspired by the work and structure of real biological neural networks in the brain, do indeed imitate the learning process by analyzing patterns, similar to how humans do it. Both humans and neural networks use existing data to generate something new.

Similarities:

  • An artist looks at thousands of paintings to internalize concepts of composition, color theory, anatomy, and style.
  • A diffusion model does the same; it reviews millions of image-text pairs and "internalizes" statistical connections: "what is a cat?", "what does anime style look like?", etc.

Honestly, every time I hear the phrase "a human creates new things, while a neural network just combines what it was fed," I remember this cheat sheet for artists on how to use references 😅

Creating something new is always a combination of existing elements. Neither human consciousness nor a generative model can generate ideas from an absolute vacuum — both are limited by the scope of their experience and data.

I want to give the example of the mantis shrimp, whose vision is more complex than human vision, allowing it to see colors humans can't even imagine. A human cannot create such a color in their imagination because they simply lack the sensory experience for it. A neural network is in a similar position: it cannot "understand" the laws of a 3D physical world when its input is only 2D images and text descriptions.

That is probably where our similarities with neural networks end.

Differences:

  • Scale, speed, and memory. A human cannot study hundreds of millions of images in a week, as Stable Diffusion did. AI possesses inhuman speed and scale. Furthermore, human memory is imperfect, whereas a neural network's is almost absolute, which leads to the risk of memorization.
  • Experience and context. A human's "database" contains more information (we have a "multimodal dataset," haha) — we live in a real 3D world, we have hearing, touch, smell, and an understanding of physical laws. An artist won't draw 6 fingers on a hand or objects flowing into each other unless it was intentional, because they have the experience of existing in reality, where they learn its laws. A neural network lacks this experience; its "database" is limited to flat pictures and text. This is why human creativity often seems deeper and more interesting — it's based on life experience, emotions, and narratives.
  • Qualia (subjective experience). A human has self-awareness, intentions, and subjective experience. An artist may want to say something with their work, express emotions, and other people can read and feel this, too. A generative model is just a very complex tool. It doesn't know what "warmth" or "pain" is; it only knows which pixels are most often associated with those words. It doesn't "understand" what it's drawing, doesn't feel emotions, and has no intent.

In short, the analogy "AI learns like a human" has some merit, but it is an oversimplification. The neural network imitates one aspect of human learning — pattern recognition — but does so in a different, machine-like way, devoid of consciousness, purpose, context, and semantic understanding.

This technology is not a "copy machine" but it's not an "art student" either. It is a fundamentally different tool that requires new rules.

What is "Fair use"?

"Fair use" is an American legal doctrine which states that in some cases, copyrighted material may be used without the owner's permission.

The purpose of "Fair use" is to find a balance between protecting the author's rights and the public good (e.g., freedom of speech, development of science and art).

To decide if a use was "fair," a court analyzes four factors:

  1. The purpose and character of the use. Is the use commercial? But most importantly, how "transformative" is it — i.e., does it create something new from the original with a different meaning or purpose (parody, criticism, research)?
  2. The nature of the copyrighted work. The law looks not only at how you used the work, but also at what kind of work it was. Since the goal of copyright is to encourage and protect creative expression, using factual works (scientific articles, reference books) is more often found to be "Fair use" than using creative works (paintings, novels).
  3. The amount and substantiality of the portion used. Was a small fragment used, or the entire work? Was the "heart" of the work taken, its most significant part?
  4. The effect of the use upon the potential market for the original. Does the new work cause economic harm to the original's rights holder? Could it replace the original in the market?

"Fair use" and neural networks: what's the problem?

The problem is that for each of the four factors, the parties' arguments are diametrically opposed 😅

1. Purpose and character of the use (transformative)

  • AI Companies' Position: Using data for training is transformative. The images are used not for their original purpose (viewing) but for a completely new task — training an algorithm.
  • Artists' Position: The final product (generated images) is not transformative in the traditional sense. It directly competes with the original works in the very same market, which contradicts one of the main principles of "Fair use".

2. The nature of the copyrighted work

  • Positions generally align here: Generative models are trained on creative, artistic works (art, photos, books) which have the strongest degree of legal protection. This factor is mostly interpreted against the AI companies.

3. The amount and substantiality of the portion used

  • Artists' Position: Generative models use the entire work, 100% of it, and not just one, but billions. Moreover, they extract its most substantial features — style, composition, manner — which constitute the main value of an artistic work.
  • AI Companies' Position: Although 100% of the data is used, it is not stored or copied. The originals are transformed into abstract mathematical "weights" (model parameters). Such use is technically closer to analysis or reading than to making a copy.

4. The effect upon the potential market

  • Artists' Position: The market impact is assessed as extremely negative. An AI trained on a specific artist's style allows any user to mass-generate works "in their style," which deprives the artist of commissions and devalues their labor. Thus, the AI becomes a direct economic competitor to artists, created from their own work.
  • AI Companies' Position: The technology creates a new market and new opportunities (e.g., for non-artists) rather than just replacing the existing one.

In the end, it has become clear that the "Fair use" doctrine was not designed with machine learning in mind. It's unclear how to apply old laws to this new technology. So all we can do is wait — the final answer to all these questions must come from the courts, which are already considering cases related to neural networks.

Court Cases Now: The Current Status of AI

Here are a few cases currently underway (or concluded) that could shape future regulation.

  1. Getty Images v. Stability AI
    Country: USA, UK
    The Case: Getty Images accuses Stability AI of using 12 million of its images to train Stable Diffusion without a license.
    Key Questions: Is training AI on protected content copyright infringement, or does it fall under "Fair use"?
    Current Status/Significance: The case is moving slowly; the outcome could set a major precedent for the industry.
  2. Sarah Andersen, Kelly McKernan, and Karla Ortiz v. Stability AI, Midjourney, and DeviantArt
    Country: USA
    The Case: Artists filed a class-action lawsuit for the use of their work to train generative models without permission.
    Key Questions: Besides infringement at the training stage, the issue of unfair competition and style appropriation is raised.
    Current Status/Significance: The court refused to dismiss the case but also did not accept the artists' arguments in their initial, very broad form — for example, rejecting the claim that any AI-generated output is a copyright infringement. The artists were allowed to rephrase and file an amended, narrower lawsuit.
  3. Bartz v. Anthropic and Kadrey v. Meta
    Country: USA
    The Case: Authors accuse companies of training AI on their texts, including those downloaded from pirate websites.
    Key Questions: Does "Fair use" extend to training large language models (LLMs)?
    Current Status/Significance: The courts partially sided with the AI companies, recognizing the training as "Fair use," but questions about pirated content remain. The process continues.
  4. First Case of Infringement on an AI Image
    Country: China
    The Case: A court recognized a generated image as a copyrightable work because the human selected prompts and parameters, demonstrating creative contribution.
    Key Questions: Where is the line for human creative contribution when using AI?
    Current Status/Significance: One of the first precedents. China became one of the first countries to establish that copyrights belong to the person using AI as a tool.
  5. GEMA v. OpenAI
    Country: Germany
    The Case: The German society for musical performing and mechanical reproduction rights accuses OpenAI of using song lyrics to train ChatGPT without a license.
    Key Questions: Is using song lyrics to train AI without a license copyright infringement? Is the AI's verbatim reproduction of lyrics illegal copying?
    Current Status/Significance: The court's decision is expected on November 11, 2025. The outcome could set a precedent in Europe regarding the need to license protected content for AI training and form the basis for future regulation.
  6. LLC "Reface" v. LLC "Business-Analytics"
    Country: Russia
    The Case: The company "Reface" accused "Business-Analytics" of using its deepfake video without permission. The defendant argued that such a video could not be a copyrightable work, as it was created by an algorithm, not human creative contribution.
    Key Questions: Can content created with AI be protected by copyright? Is human creative contribution preserved when using AI as a tool?
    Current Status/Significance: The Arbitration Court of Moscow (No. A40-200471/23) and the appellate court sided with the plaintiff: it was established that the deepfake technology, in this case, acted as an additional tool for processing (technical editing) video materials. Compensation of 500,000 RUB was awarded. The precedent confirms the protection of AI content when human input is present.

The Future of the Technology and Authors

Generative AI has posed questions to society and the law that have no simple answers.

  • On one hand, we cannot ignore the rights of artists and other creators whose work became the foundation for this technology.
  • On the other hand, it would be unreasonable to halt technological progress that opens up new possibilities for creativity and science.

Personally, I want both authors' rights to be protected and for technology to be allowed to develop.

We need some kind of reasonable balance so that we don't end up in a future where, for example, a company hires an artist, then trains a generative model on their work, essentially getting a free, digital replica of that artist, after which the real person is no longer needed.

New rules must be created. Perhaps it will be mandatory licensing, compensation, new forms of collaboration between developers and authors, or something else.

In any case, I'm really looking forward to updates on the Getty Images and the artists' class-action lawsuits; it seems they are expected in 2026 🔥

That's all! Thanks for reading the article, share your opinions in the comments! ♥

My social media: Tg: t.me/catrinenice Vk: vk.com/catrinenice X: x.com/CatrineNice

Sources:

  1. Давайте запретим нейронные сети
  2. Как работают нейронные генераторы картинок (в формате ELI5)
  3. Extracting Training Data from Diffusion Models
  4. ИИ-художник без прав: почему промпт — еще не авторство
  5. Промпты для ИИ и авторское право
  6. Copyright Office Releases Part 2 of Artificial Intelligence Report
  7. ГК РФ Статья 1259. Объекты авторских прав
  8. Постановление Пленума Верховного Суда РФ от 23.04.2019 N 10. Авторское право
  9. ИИскусство: кому принадлежат авторские права на творчество нейросетей
  10. Нейросети и закон: как можно использовать сгенерированный контент
  11. Дружинина Александра Александровна. ХУДОЖНИК И НЕЙРОСЕТЬ: КОНФЛИКТ, ДИАЛОГ, СОТРУДНИЧЕСТВО?
  12. Early influences of photography on art - Part 1
  13. Early influences of photography on art - Part 2
  14. Воронцова Екатерина Александровна. Становление фотографии как искусства сквозь призму трансформации мироощущения человека
  15. Авторские права на контент нейросетей: вопросы, которые требуют решений
  16. Защищены ли ИИ-картинки авторским правом?
  17. Animation Jobs In The Age Of AI: Revelations From Luminate Intelligence’s 2025 Special Report
  18. Getty Images v Stability AI: a landmark trial for generative AI in UK?
  19. В США подали первый иск о нарушении авторских прав нейросетью
  20. Status of all 51 copyright lawsuits v. AI (Oct. 8, 2025): no more decisions on fair use in 2025
  21. Mid-Year Review: AI Copyright Case Developments in 2025
  22. Munich hears first ever case over licences for generative AI in GEMA vs OpenAI
  23. Художники подали в суд на создателей нейросетей Midjourney и Stable Diffusion
  24. Federal Judge Largely Dismissive of AI Complaint: Anderson v. Stability A
  25. Апелляция согласилась с признанием дипфейка объектом авторского права
  26. ИИ-контент в бизнесе: как избежать нарушений авторских прав
  27. AI art: The end of creativity or the start of a new movement?
  28. AI, Copyright, and the Law: The Ongoing Battle Over Intellectual Property Rights
  29. Copyright vs fair use in AI: key 2025 court case insights
  30. Copyright and Artificial Intelligence Part 3: Generative AI Training pre-publication version
  31. U.S. Copyright Office Releases Part 3 of AI Report: What Authors Should Know
  32. U.S. Copyright Office Fair Use Index
  33. Andersen v. Stability AI Ltd.