Reprinted from the Winter/Spring 2020 issue of the Authors Guild Bulletin.

During the last decade, technology has grown at breakneck speed, making tech companies some of the wealthiest and most powerful enterprises in the U.S. At the start of the decade, Amazon had 30,000 employees and a market value of $80 billion. Today, the company is worth an estimated $1 trillion and has a 750,000-person workforce. Uber, then a fledgling start-up, had launched in beta and hired its first employee. Facebook had 700 million users, roughly a third of the 2.5 billion people who actively use the platform today. In just ten years, technology has transformed our lives, reshaping our routines, our habits, and even our political process. Authors have been on the front lines of this change from the beginning, contesting questions of privacy, freedom of speech, and copyright. It has not been an easy fight. As discussed in our recent white paper, “The Profession of the Author in the 21st Century,” technological factors have substantially contributed to the decline in author incomes over the past ten years, among them the growth of digital reading and the monopoly power of companies such as Facebook, Google, and Amazon, which first drive down the price of content, then capture its market and suppress competition.

As we round out the second decade of the millennium, it is important not only to take stock of how technology has changed authorship but also to consider how it is poised to change it in the coming years. One of the biggest changes on the horizon is the growth of artificial intelligence technologies.
A Brookings Institution report last year estimated that AI technologies could wipe out 25 percent of jobs in the United States, with workers in transportation, administration, and customer service facing the greatest risk. Jobs involving repetitive tasks may be the first in line, but every occupation — including writing — will have to contend with intelligent machines that can perform tasks that until now were the exclusive domain of humans.

The technology underlying AI tools capable of creative work is already in use — for example, autocomplete in email composers and grammar checkers — and more sophisticated tools capable of writing prose, composing music, and painting are not far off. Some AI systems can already pass versions of the Turing test, in which a machine's responses are judged indistinguishable from a human's. In 2016, a Japanese AI-authored novella beat out human-authored submissions in the first round of a literary prize competition. The same year, Microsoft unveiled “The Next Rembrandt,” an AI-generated painting in the style of the Dutch master. John Seabrook reported in a recent New Yorker piece that the nonprofit AI developer OpenAI has created an AI writer called GPT-2, which it has withheld from the public because the program is “too good at writing.”

AI technologies rely on ingesting data and learning patterns from it. Programmers “feed” this data to the AI system and, with varying degrees of supervision, guide its learning. For example, if the AI is tasked with writing a short story, the programmers will feed it a set of short stories. The AI will identify patterns and similarities in that content and build a model from which to assemble an entirely new output. This raises two obvious questions: where does the input data come from, and have its creators given permission for their works to be used to train AI?
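The ingest-patterns-generate loop described above can be made concrete with a deliberately tiny sketch (modern systems such as GPT-2 use neural networks rather than word tables, but the pipeline is the same in outline): an order-one Markov chain that records which word follows which in a training corpus, then walks those recorded transitions to assemble new text. The corpus and function names here are illustrative, not drawn from any real system.

```python
import random
from collections import defaultdict

def build_model(corpus):
    """Record, for each word, every word observed to follow it."""
    model = defaultdict(list)
    for text in corpus:
        words = text.split()
        for prev, nxt in zip(words, words[1:]):
            model[prev].append(nxt)
    return model

def generate(model, start, length=8, seed=0):
    """Walk the model from a start word, picking a recorded successor each step."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        successors = model.get(out[-1])
        if not successors:
            break  # dead end: no word was ever seen after this one
        out.append(rng.choice(successors))
    return " ".join(out)
```

Even this toy makes the article's point visible: every word the generator can emit comes verbatim from the ingested corpus, so the quality and provenance of the training data are inseparable from the output.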

Tech companies have spent most of the past decade amassing data through various means and channels. Some data, like the pictures used to train AI to recognize human faces, is submitted voluntarily when users upload content online. Other data, like books, comes from digital copies made without the permission of their rights-holders. If this sounds familiar, it is because the practice harkens back to the Google Books case. Although Google publicly claimed that the copies were being made to preserve and expand access to books, the company’s ultimate purpose was to use the massive trove of books it had digitized to build AI systems, devices, and products capable of understanding natural language. Using copyrighted works without permission to train AI to create new works — books, music, visual art, and so on — adds insult to injury. So far, we have not seen many AI-authored creative works actually competing in the market with human works. But unless we preemptively create robust rules around such uses, content channels will no doubt be flooded with AI-generated works intended to replace human-authored ones, such as book summaries and works in convention-driven genres like romance and how-to, whose formulas are easily emulated.

In January, the Authors Guild submitted comments to the United States Patent and Trademark Office in which we argued for the creation of a collective licensing regime to regulate the mass ingestion of copyrighted works for machine-learning purposes, especially when the AI is used to create new works. At the same time, we argued for stricter liability for third-party beneficiaries of copyright infringement. Under current case law, liability for copyright infringement requires “volition,” which is difficult to prove when the infringement is carried out by automated technologies. Unless the rules change, AI developers, users, and the companies that own and profit from AI technologies will not be held liable for infringement incidental to the use of AI. I raised these and other issues related to the use of AI at the Copyright Office symposium “Copyright in the Age of Artificial Intelligence,” which took place in February, and in my input on the American Bar Association Intellectual Property Section’s comments to the World Intellectual Property Organization.

Copyright law and publishing are undergoing profound changes, forcing the institutions that oversee copyright policy to adopt new regulations and methods to continue administering the IP sector. The Guild has had an opportunity to participate in the drafting of a bill sponsored by Senator Thom Tillis that would give the U.S. Copyright Office much-needed independence from the Library of Congress, allowing it to modernize its operations and implement new registration processes to protect a wider range of content. We are also hoping that by the time you read this bulletin, Senator Ron Wyden will have lifted his hold on the CASE Act. You may recall that this important bill passed the House of Representatives in October by a resounding vote. Yet despite major support in the Senate, Senator Wyden has blocked it from the floor.

The future we have long imagined — of self-driving cars and intelligent assistants — is finally here. It is up to us to make sure that technology is used to improve lives instead of depriving hardworking authors of their livelihoods.

Mary Rasenberger
Executive Director