If you ask GPT-4 to do a passage in the style of Carmen Machado or Margaret Atwood or Alexander Chee, it will do a fair job at it, and for good reason: it likely ingested all their works in the training process, and now uses their ingenuity for its own. But these authors, and thousands more, are not happy with this fact.
In an open letter signed by more than 8,500 authors of fiction, non-fiction, and poetry, the tech companies behind large language models like ChatGPT, Bard, LLaMa and more are taken to task for using their writing without permission or compensation.
“These technologies mimic and regurgitate our language, stories, style, and ideas. Millions of copyrighted books, articles, essays, and poetry provide the ‘food’ for AI systems, endless meals for which there has been no bill,” the letter reads.
Despite their systems proving capable of quoting and imitating the authors in question, AI developers have not substantially addressed the provenance of these works. Are they trained on samples scraped from bookstores and reviews? Did they borrow every book from the library? Or perhaps they simply downloaded one of the many illegal archives, like Libgen?
One thing is certain: they did not go to publishers and license them — no doubt the preferred method, and arguably the only legal and ethical one. As the authors write:
Not only does the recent Supreme Court decision in Warhol v. Goldsmith make clear that the high commerciality of your use argues against fair use, but no court would excuse copying illegally sourced works as fair use. As a result of embedding our writings in your systems, generative AI threatens to damage our profession by flooding the market with mediocre, machine-written books, stories, and journalism based on our work.
Indeed, we have already seen this occurring. Recently, a number of AI-generated works of very low quality were climbing the YA best-seller lists at Amazon; publishers are inundated with generated works; and every day this very website (and shortly, this post) is scraped for content to be repurposed into chum for SEO.
These malicious actors are using the tools, APIs, and agents developed by the likes of OpenAI and Meta, which can themselves be called malicious actors in this context. After all, who else would knowingly steal millions of works to power a new commercial product? (Well, Google, of course — but search indexing is meaningfully different from AI ingestion, and Google Books at least had the excuse that it was meant to be a dedicated index.)
With fewer authors able to make a living from writing due to the complexities and narrow margins of large-scale publishing, the open letter warns that this is an untenable situation, particularly for newer authors, “especially young writers and voices from under-represented communities.”
The letter asks the companies to do the following:
1. Obtain permission for use of our copyrighted material in your generative AI programs.
2. Compensate writers fairly for the past and ongoing use of our works in your generative AI programs.
3. Compensate writers fairly for the use of our works in AI output, whether or not the outputs are infringing under current law.
No legal threat is made — as Mary Rasenberger, CEO of the Authors Guild (and a signatory), told NPR, “Lawsuits are a tremendous amount of money. They take a really long time.” And AI is harming authors now.
Which company will be the first to say “yes, we built our AI on stolen works and we’re sorry, and we’re going to pay for it”? It’s anyone’s guess, but there seems to be little incentive to do so. Most people are not aware or concerned that LLMs are created through what amount to illicit means, and that they may in fact contain and regurgitate copyrighted works. It’s easier to see the (very similar) problem when it’s a generated image reproducing an artist’s distinctive style, and there is some pushback there.
But the subtler harm of using all of George Saunders or Diana Gabaldon’s books as “food” for one’s AI may not spur as many to action — though plenty of authors are ready to fight.