ScarboSheep's Thoughts on Creative Commons and AI

January 21, 2026

NOTE: These opinions are my own and do not necessarily reflect those of any institution.

One of my current hyperfixations is the Creative Commons licenses and the open culture they foster. They offer a quick way to share works freely, with varying degrees of freedom. A lot of things out there benefit from being shared more freely, so applying these licenses can be helpful.

But given the general disapproval of AI in the furry community, my support of Creative Commons licenses, and my more nuanced thoughts about AI, I wanted to explain my positions.

Creative Commons Origins

The Creative Commons licenses were inspired by free software licenses. One of the most popular of these is the GNU General Public License, created as a way to keep derivatives of software free--a mechanism called copyleft. I have enjoyed using many programs released under the GPL. However, I personally do not want to use the GNU General Public License, largely because I strongly disagree with GNU's stance against proprietary software, and I think a strong copyleft is too extreme for the works I create. I believe that artists deserve to be paid for their work, unless they prefer to give it away for free, and video games are both software and art. Yet I think a lot of software benefits from being free, especially operating systems, accessibility tools, word processing and typesetting software, browsers, and other commonplace digital products. When it comes to free licenses, I'm more favorable to the Mozilla Public License 2.0 or the BSD 2-clause license, which do not put burdens on small businesses the way the GPL might.

My Feelings Towards Creative Commons

As I've seen that Creative Commons has platformed voices defending generative AI, and has even published a few AI-generated images of its own, I want to take the opportunity to share my feelings. I'm still going to support the use of Creative Commons licenses if an artist wishes to apply one to their work. They create a nice consent culture, similar to the badges sometimes handed out at furry conventions regarding physical touch. But largely, I don't think highly of generative AI. When it comes to generative AI and consent, I think training should require the consent of the author, or be limited to public domain works. If someone has applied a CC license that would signal AI training consent, then consent has been given. But quite frankly, I think a fair and just copyright law should not automatically consider AI training to be fair use.

I am not canceling Creative Commons. I don't even have a problem with them helping creators generate easy ways to give consent to AI--some are fine with this! On regulating AI, I am more in group B of "What Does the CC Community Think About Regulating Generative AI?", the group that wants maximum oversight. I might even lean further against generative AI than many in group B.

Copyright and Plagiarism

There is a concept in copyright law called moral rights, which according to the Berne Convention, is "the right to claim authorship of the work and the right to object to any mutilation, deformation or other modification of, or other derogatory action in relation to, the work that would be prejudicial to the author's honor or reputation." Creative Commons even acknowledges the existence of moral rights, and a CC license does not necessarily give away moral rights. For example, in section 2(b)(1) of the CC-BY 4.0 license, it states that "Moral rights, such as the right of integrity, are not licensed under this Public License."

I think many artists feel that AI use of their work treats it with a lack of integrity, and many would also consider AI generation a form of "mutilation" of their artwork. Hayao Miyazaki, of Studio Ghibli fame, is even quoted as calling an AI-generated animation demo "an insult to life itself." The word "slop" is frequently used for AI output, especially as it gets things wrong in ways humans are unlikely to. And I personally feel the term "uncanny valley" applies to a lot of AI-generated content.

But not everyone feels that way. "Moral rights" is not an argument I ever hear in the AI discourse. Usually I hear arguments more along the lines of AI taking jobs away from artists, and I have heard multiple stories of artists losing money and opportunities to AI. But I do not take the extreme position that no more artists will be needed because of AI. I don't think AI image generators can innovate the way a human can, and I do not think AI will phase out artists. I believe images, sounds, and stories generated by AI are of either lesser or counterfeit quality--counterfeit because another common argument against AI-generated media is plagiarism, and I agree. There will always be a need for real artists and truly good, life-changing art. We can have a leg up on AI.

Also, authorship is an issue in general. Who made the AI image? The program, the prompter, or the people who created the training data? The prompter is taking on the role of commissioner at this point, not artist. Now if this were a form of automation created by a human-written algorithm--which is not AI; no training data was used--the answer might be different, especially if the algorithm is merely a tool rather than generating the full product. But learning prompt engineering does not build your skill as an artist. One might argue it builds your skill as a commissioner, but we tend to speak differently to human beings than to computers.

Other Problems with Generative AI

Copyright and plagiarism aren't the only issues. There are also environmental concerns, and these can include non-generative AI. If generative AI and chatbots were not as popular as they are now, this would be far less of a concern, and if the ethical concerns mentioned earlier were not a factor, small-scale use would be a potential solution. But as it stands, AI technology at its current scale of deployment creates real environmental issues. To put into perspective how much computation AI requires, I found a Commodore 64 project porting Llama 2 intriguing. The documentation says it takes 8 minutes just to produce one token. Even accounting for how slow the Commodore 64 is, that still feels ridiculously inefficient--inefficient in an era when I would prefer game developers consciously conserve processing power and storage space, as was done in the '90s.

But that's not to shift all data center responsibility to AI. Cryptocurrency is, in my opinion, more useless and wasteful. To mine Bitcoin, a nonce has to be found. It's literal garbage added to a Bitcoin block header, hashed with the SHA-256 algorithm over and over again until the result falls within a certain target range. The SHA-256 algorithm does not run in an instant, and that's evident when running it on a large file. When I use SHA-256, it's for a completely different reason: to check whether a file has been altered or corrupted. Some people distribute their files along with their SHA-256 checksums (the results of the algorithm on the file), so that when you download one, you know you have the correct copy.
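To make the contrast concrete, here is a minimal Python sketch of both uses of SHA-256: verifying a file checksum, and a toy proof-of-work loop. The header bytes, difficulty value, and function names are my own illustrative choices, and this is a deliberate simplification--real Bitcoin mining double-hashes an 80-byte block header against a far harder target.

```python
import hashlib

def verify_checksum(path: str, expected_hex: str) -> bool:
    """Check a file against a published SHA-256 checksum."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so large files don't need to fit in memory.
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest() == expected_hex

def find_nonce(header: bytes, difficulty_bits: int) -> int:
    """Toy proof-of-work: try nonces until the hash of
    (header + nonce) starts with `difficulty_bits` zero bits."""
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(header + nonce.to_bytes(8, "little")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

# Even at a trivial 16-bit difficulty, this takes ~65,000 hashes
# on average; Bitcoin's real difficulty is astronomically higher.
nonce = find_nonce(b"example block header", 16)
print(nonce)
```

The checksum function does a fixed amount of work proportional to the file size; the mining loop, by design, does an enormous amount of work to produce a single throwaway number. That asymmetry is the point of my complaint.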

Safety is another concern with generative AI. Chatbots sometimes give truly horrendous advice, such as suggesting a man replace his table salt with sodium bromide. Another problem I've heard about while watching Dr. K's videos is AI inducing psychosis in some individuals. And of course, there was the huge Grok controversy in January of this year, where Twitter users--sorry, but the true X in technology is the X Window System--generated nonconsensual images of other people.

Is There Any Good Generative AI?

I think there can be, in certain circumstances. An AI tool used by a single person and trained only on their own output automates things in a way that may be considered theirs. For example, AI that clones one's own voice, with consent, can be useful for dealing with diseases that eventually take away the ability to speak, such as ALS. Also, an AI tool built specifically to find an optimal tool-assisted speedrun route might be okay--its exploration might uncover routes gamers haven't seen before.

Someone asking AI for help with a basic programming task may get code that is too simple to be meaningfully copyrightable on its own, yet still helpful for a project. (Personally, I would rather look up how to do the thing and why each component works.) If someone commissioning an artist cannot explain in words, or even in a rough sketch, what they want, and miscommunicating on the first attempt is not an option, then perhaps AI could bridge that gap--but only if the AI was ethically trained. A chatbot hyper-tailored for a specific purpose, if ethically trained, may also be useful.

But what about AI that doesn't fall into the "generative" camp? This I feel better about. Stockfish, one of the leading chess engines, now incorporates a neural network (NNUE) in its evaluation. AI that merely translates between languages, with a human verifying the output, is okay in my opinion. AI that can predict diseases or other disasters could be useful, provided that reliance on it does not erode human skills. AI that can search for potential genealogical links could be useful. AI that can search for sources on a certain topic could be useful. AI that can predict protein structures could be useful--protein folding was already a computationally expensive area of research anyway. And AI that can describe an image is often used by blind people to bridge the gaps left by inaccessible infrastructure.

But even then, ethical AI still carries costs. Regarding accessibility, it is often better to make things naturally accessible in the first place, so that a phone helps but is not strictly necessary; phones run out of battery, lose Internet connection, or otherwise stop working. Regarding skills that humans can do, I think experts can often do them better, not to mention that experts in different fields can collaborate to create innovative solutions neither would have reached independently. And too much AI usage can cause skill atrophy as well.

What Solutions Are There?

Going back to Creative Commons, I hope that CC Signals includes options not only for allowing works to be used for AI, but for disallowing it as well. We need to acknowledge the existence of ethical AI, but we do not have to use it ourselves. We also need to acknowledge that many, many artists do not consent to the use of their art as AI training data, and I believe regulations need to be able to enforce this.

We need to be careful in our discourse about this, too. Fighting and judging others is not the way. I am absolutely willing to have friends who use generative AI. I can also set boundaries so that they do not share these things with me.

For those of us who feel forced to either use it or ignore it, consider Linux or other open-source tools! While AI on Linux exists, I'm not being bombarded with, or expected to use, AI tools I'd rather avoid. Not to mention that open-source licensing, similar to Creative Commons, can bring the tech industry to a more reasonable level under capitalism. You can also change your default search engine and disable automatic AI prompts in your search engines.

I also think we should have more braille documentation, signage, and braille literacy to make AI less necessary. After all, as Tamara from Unsightly Opinions explains in her video "AI, Eye Captain! BLIND AI Has Changed Everything!", she appreciates the many AI accessibility features afforded to blind people, but she still does not trust AI with sensitive information.

All in all, despite my love for Creative Commons, I don't fully share their opinions on AI. I think it's good for us to be aware of both the good and the bad use cases of AI, and to act accordingly.

Final Note

This article was not generated with AI in any way.
This article is released under a CC-BY-NC-ND 4.0 International License. See https://creativecommons.org/licenses/by-nc-nd/4.0/deed.en for more details.