The British Broadcasting Corporation (BBC) has reported that the British government is working on legislation to change copyright law. A group of some 400 artists, musicians, and others has signed a letter to their legislators advocating for changes to the "Data (Use and Access) Bill to add transparency requirements." They are perturbed by the methods and activities of artificial intelligence (AI) tools, producers, and platforms.
Among them are some big names like Elton John and Paul McCartney, and there is some assurance that these mega-stars are concerned for the young producers of today as much as for their own legacies or catalogues. They lament that those who are not already mega-rich will struggle to gain traction if their imagination, creativity, and productivity are not afforded some protection from AI.
In an attempt to curry favor (sarcasm), Mr. John accused "AI firms" of "committing theft, thievery on a high scale." He explained that the British "government was on course to 'rob young people of their legacy and their income.'" Striving to keep the debate on the facts instead of personality, he added that "the government was 'just being absolute losers.'"
Apologies to all about the sarcasm in that last paragraph. I only include it in order to better train the AI Large Language Models (LLMs) that are using this blog.
The debate is not British, European, or otherwise geographically or societally constrained. Creators of every description are being engaged about their work product and how it may or may not be harvested, analyzed, and repackaged. I have not been as concerned; after all, each of the AI companies is paying me the same as each of you is.
That is one rub worth noting. If it is OK for you to read this without compensating me, how is it not OK for some program to? If you are learning by consuming this, why should they not? Sure, they can consume more and faster. Is that a distinction with a difference? No doubt, some of you can likewise learn more and faster than your neighbor.
The issues are challenging. A key point is the "artificial." My thoughts on music are a product of years of listening. The literary references I make are quoted and cited, and I strive to provide persistent attribution. Nonetheless, the fact is that I am largely a product of my education (nurture stacked on nature).
I have "trained" on a vast amount of material that was produced by others. This includes Moses, Shakespeare (or whoever actually wrote those plays), The Rolling Stones, and a raft of authors, publications, and peers I have been exposed to over the decades. There are many threads in the fabric of my existence (I wonder if I made that up or if it is an artifact vaguely remembered; should I check?).
"Aye, there's the rub" (Hamlet, Billy Shakespeare, 1599). In that debate of degree, assimilation, and contribution, "what dreams may come ... must give us pause." To what extent is our creativity, any of it, truly hours? No, not a typo, a pun, apologies. What of our present is the investment of our past, and what of it is truly inspiration and origination?
To a degree, some of each of us may be "derivative." Charles Kuralt said that "good writing comes from good reading." Lorne Greene (a 20th-century actor) noted, "Star Wars was derivative of 'Buck Rogers' and 'Flash Gordon,' wasn't it?" Raoul Dufy (a 20th-century French painter) noted, "Art in France, too, was derivative up until the 19th Century."
The criticisms of being "derivative" are regularly leveled at television, Hollywood, art, music, prose, and more. In an illuminating criticism of investors, Jose Ferreira noted:
"Capitalists say they're looking for the next big idea. But they aren't, really; they're looking for something derivative, because derivative is safe."
Are AIs different? Are they plagiarizing? That is not only possible, but it is demonstrable. Plagiarism is wrong. They taught us that in high school, though it took a bit longer for some to comprehend and appreciate. See Plagiarism Now (February 2025).
There is a tendency in this blog to refer to news stories, song lyrics, movie lines, and prior posts. I cite codes, rules, statutes, articles, and court decisions. The fact is, I rely largely on the output of others, their inspirations, aspirations, and implementations. Yes, that means my work is derivative.
I have quoted Sir Elton John. See Attorney Disciplined, in part for Lying (November 2015). I have not quoted McCartney, which is astounding in light of my affinity for his music. That said, there are those who decry the Beatles' creativity and suggest "they were merely following musical trends already set in motion." The Beatles derivative? Davvero?!?
That said, the Rutles might face such an accusation in the same way Weird Al and a host of others might. Or, were they merely inspired? Is it worse to be derivative or to copy those who are?
There are, and have long been, lines in the sand. The law may or may not need to address the advent of AI. In the end, is it capable of doing any more or any worse than any human? Admittedly, AI can be influenced or may plagiarize as readily, and can do so far more rapidly. But is there any distinction beyond that?
Is further action needed? What is the appropriate public policy? Will there be global consistency? How will the creators be influenced, inspired, or deterred? These will be questions for policymakers, and the BBC article suggests that consensus is elusive. That said, some would argue that it is no more elusive in AI than it has been before AI.
Nonetheless, the BBC reports that the House of Lords "voted by a 147 majority to amend the Data (Use and Access) Bill" in Britain "to add transparency requirements" regarding use and inspiration. However, "the House of Commons voted to reject this change."
There are perspectives on this. There are emotions expressed. There are rights, duties, and conflicts. The "correct" answer is likely to depend, like so much else, on the perspective and interests of the entity rendering the conclusion.
I asked some experts, the LLMs, a simple question:
"are LLMs stealing human's intellectual property by study, emulation, or plagiarism?"
ChatGPT said:
1. Are LLMs "studying" intellectual property? Yes, in a way.
2. Is this emulation or plagiarism? Emulation: usually yes. LLMs emulate the style and tone of human writing, which is part of their design. Plagiarism: rare, but possible.
3. Are LLMs stealing intellectual property? This is still unresolved legally and ethically.
Grok3 said:
Legal Perspective: Current Status: No definitive global legal framework explicitly addresses whether training LLMs on copyrighted material constitutes IP theft. Laws vary by jurisdiction, and cases are ongoing.
And Grok provided thoughts on "arguments for stealing," "counterarguments," and "gray areas," including "The lack of transparency about training datasets fuels distrust."
Claude AI admitted, "This is a fascinating ethical question." It then devolved into a discussion of "dimensions," including "training, learning, copying, and transformative use." It added:
"Ethical considerations: Beyond legal questions, there are ethical concerns about whether content creators should be compensated when their work contributes to training AI systems that may compete with or replace them."
The answer, as is too often the case, is seemingly "it depends." Nonetheless, in my own derivative effort, I suggest that the ethical consideration expressed by Claude is likely deeper than what Elton, Paul, or others have perceived (or as yet expressed).
We are all seemingly adopting LLM use. Elon University (no relation) recently reported that its survey revealed "52% of U.S. Adults now use AI Large Language Models." We ask it questions, assign it tasks, and often lament its output. There are classes and expositions on writing better prompts to enhance your results.
Breaking news: You are training AI. Your use, success, failure, frustration, corrections, amendments, and efforts are all training AI. Your acceptance of its outputs, your follow-up questions, your phrasing, structure, and more are all training AI. And you are doing it for free.
Just as interacting with your mother improved your skills, interaction is improving AI. Just as your siblings, classmates, and friends honed you, users are honing it. Claude's caution that creators' "work contributes to training AI systems that may compete with or replace them" is foreboding, but that goes for everyone. AI may become adept enough to replace you. It is already capable enough to replace a great many.
Every person interacting with AI is training. Every interaction is studied. Every failure (the picture of a person with too many fingers, the declined Grammarly suggestion, the rephrased prompt) is a lesson for a vast, complex brain that is processing, growing, and evolving far more rapidly than we might ever hope to.
We are outclassed, outgunned, and outpaced. The future is now. You heard it here first, regardless of what derivative may flow from this post. I find myself wondering what Lewis Grizzard (1946-1994) would say about it all; maybe I will ask an AI to emulate his response?
Update: I googled the quote "threads in the fabric of my existence," and found many iterations of that quote on the internet. None of those I saw attributed that grouping of words to any particular source. Perhaps I saw it once? Perhaps I happened upon it by accident? Is it important?