AccelPro | Intellectual Property Law

On Artificial Intelligence and Authorship

With Ryan Abbott, Professor of Law & Health Sciences at University of Surrey School of Law & Adjunct Assistant Professor of Medicine at the David Geffen School of Medicine at UCLA | Interviewed by Neal Ungerleider

Welcome to the first AccelPro IP Law interview. If this was forwarded to you, please sign up here.

And if your colleagues in any sector of the IP law field might be interested, please let them know about AccelPro. As our community grows, it grows more useful for its members. Send comments and questions to: questions@joinaccelpro.com

Listen on Apple Podcasts and Spotify.

Welcome to AccelPro IP Law, where we provide expert interviews and coaching to accelerate your professional development. Today we’re going to talk with Ryan Abbott about emerging legal perspectives around artificial intelligence and authorship.

Ryan is a Professor of Law & Health Sciences at University of Surrey School of Law & Adjunct Assistant Professor of Medicine at David Geffen School of Medicine at UCLA. And he is the author of The Reasonable Robot: Artificial Intelligence and the Law. 

The rise of generative artificial intelligence tools is raising new copyright and IP questions for creators, rights-holders and judges. Drawing the line between human authorship and machine authorship can be difficult in many cases, and a range of questions surround copyright and the training data used by AI tools.

In this interview, Abbott discusses AI and authorship, the Copyright Office and AI, and his own career from medicine to the intersection of technology and copyright law. The supplemental materials and episode transcript are available below.




TRANSCRIPT

I. AI AND AUTHORSHIP

Neal Ungerleider, Host: You've written extensively on AI authorship and inventorship and the need for a better legal framework around them. Can you tell us more about this and what your thoughts are on how the law should treat ownership of AI-generated creations?

Ryan Abbott: This is something I've been thinking about for a while. A lot of people have, although most of these people have been law professors. It's been very interesting that we would write these papers saying that this is going to be important, that people should pay attention to it, that there are all these unresolved questions, and everyone else would more or less ignore that. Just this past year, breakthroughs on generative AI systems have suddenly thrust these issues into the public consciousness with at least a little bit of anxiety, and into boardroom management decisions at large corporations.

So it’s been exciting to see something that academics have been talking about for a long time take on some real-world importance. There are a lot of issues associated with AI and IP, and with AI and law generally. All sorts of things, from generative AI systems coming up with defamatory texts, to what sort of First Amendment protections exist around machine-generated speech, to people using AI systems to make deepfake videos of events that never actually took place in the physical world. There has been a democratization of that, and it has become very easy and accessible to people. There are also questions about whether you can train sophisticated machine learning algorithms on people’s protected content without their permission, and whether you can copy someone's likeness or style.

There are a huge host of issues. Some of the issues that I've been most involved with have been whether and to what extent you can protect AI-generated output with intellectual property rights. In particular, if you asked GPT-4, “Can you invent a COVID vaccine for me?” and it does, whether that's the sort of thing that could get patent protection. Or if you asked GPT-4, “Can you write my next book for me?” and it does, whether that's the sort of thing that could get copyright protection.

II. COPYRIGHT OFFICE AND AI

NU: The US Copyright Office recently issued copyright registration guidance addressing human authorship requirements for applications registering AI-generated work. The Office will also hold a series of discussions this spring addressing copyrightability and registration issues raised by AI works. Do you expect the rules or guidance to evolve significantly in the near term? What role should applicants and registrants play in pushing for changes?

RA: I do think things are going to change; they haven't really changed since 1973. The Copyright Office has always taken the position that without human authorship a work is not protectable, and that's the end of the story.

I disagree with that view. There's nothing in the Copyright Act that says an author needs to be a human being. In fact, for over a hundred years in the United States (although we're somewhat of an international outlier in this), you've had corporate authorship. A company can be an author and when making a registration application, the company does not have to disclose any natural person involved in the making of a work.

Who knows how many people have actually registered AI-generated works without disclosing that. There’s also nowhere on the registration form that says to disclose that a work is AI-generated. If I had Midjourney make a very valuable image, I could just say that I was the author of it and no one is likely to notice or challenge that. It would likely only be challenged in litigation alleging copyright infringement, if discovery revealed that I had not actually done anything to qualify as a traditional author and if the Copyright Office's position is correct and validated. The Copyright Office also does not grant copyright to someone; it only registers copyright. In the United States, though, you pretty much need a registration before you can sue.

I led a pro bono legal test case where we submitted a piece of AI-generated art for registration. The application acknowledged the work was AI-generated, the Copyright Office refused registration, and now we are suing in district court in the District of Columbia, alleging that the human authorship requirement the Copyright Office asserts does not actually exist.

Again, it's not in the Copyright Act. They cite a couple of cases from the 19th century in support of it, the Trade-Mark Cases and Burrow-Giles v. Sarony. Burrow-Giles v. Sarony involved a very famous picture of Oscar Wilde, and it was the Supreme Court case in the United States that decided whether photographs were copyrightable. They now are by statute, but back then the claim was that you can’t copyright a photograph; it’s not the writing of an author. The Supreme Court interpreted that term non-literally and purposively, saying this law was designed, and the Constitution’s purpose is, to allow any tangible expression of an idea in the mind of an author to be protected.

The Copyright Office interpreted that to mean, "Well machines don't have minds and therefore can't be creative, and this isn't the sort of thing we want to protect." Courts have certainly talked about human authorship in many cases, but only because that's traditionally what authors were, human beings. There’s really never been a case until my case on it. There were a few cases where people tried to copyright a garden or said, “I wrote this book, but really my dead grandmother channeled it through me.” And there was the famous Monkey Selfies case in 2014 involving a series of pictures that a monkey took of itself.

That case actually got tossed out on the basis of standing. The Ninth Circuit said that unless Congress very plainly states that animals can bring lawsuits, animals can't bring lawsuits. This is, of course, funny because human beings are animals and bring an awful lot of copyright infringement suits—just to give some sense of how tricky literal interpretation is in this space. The Copyright Office has been saying publicly that they've been thinking about these things, in particular these generative AI systems, which are now very commonly prompt-based image generators. Someone says something like “I'd like a picture of three people in a podcast,” and your image comes out.

There is a real art to prompt engineering; I am not very good at it and my images suck. Someone like Kris Kashtanova, who famously uses generative AI systems to make artwork, has the prompts down to a science and really gets some great stuff out of these systems. This is not a machine fully autonomously making an image without any user input. There are always people involved, and there are really tricky questions of how much human input is going into this and at what point. To what extent is it the sort of thing we traditionally associate with authorship? And there's a long line of cases in the US and other jurisdictions about whether or not you are a commissioner, editor, or producer of a work.

Let's say that I own Cosmo Magazine and I tell a human artist working for me, “I want you to make a magazine cover with an astronaut striding toward a camera.” She comes back, and I say, “No, maybe more like this,” or “Keep trying,” and at some point we get an image. Who’s the author of that? The person who gives instructions or the person who does it? It’s probably pretty fact-specific. The Copyright Office is basically saying this is analogous. If I tell DALL-E to go make me a picture of three people doing a podcast, it will do that. This, by itself, is probably not enough to make me an artist, but at some point it might be, if there's enough iteration or I'm giving very detailed prompts.

This artist, Kris Kashtanova, submitted a few copyright registrations. One was for a comic book they did, called Zarya of the Dawn, that initially got a registration without disclosing that it was AI-generated. They went on social media and said, “Hey, this isn't an issue at all. I got it registered.” Then the Copyright Office canceled it, and now they're in discussion with the Copyright Office. The Copyright Office most recently upheld the denial. You can copyright a prompt, changes you make to images, and the arrangement and selection of images, since those are all done by humans; but for the image itself, the Copyright Office said, “No, the machine really did that.”

And it was interesting to me because if you were trying to find an example where a human artist had given a very detailed prompt and done a lot of iterative work with the AI, that would've been a good choice for you. It was a little surprising to me how firmly the Copyright Office came down on this, but their policy now is that they want applicants to disclose what pieces of a registration are AI-generated and disclaim those. [Note: The U.S. Copyright Office has again rejected copyright protection for a piece of art. In this case the application was filed by artist Jason Allen.]

I think that’s going to be an exceedingly challenging policy to enforce. They’re expecting people to know and understand these complicated rules and to be completely transparent about this sort of thing. There is a tremendously gray line in this ‘person did something versus machine did something’ space. I predict a lot of challenges to this policy.

NU: The Copyright Office's guidance has not addressed issues surrounding the use of copyrighted content as training data for AI. How should we be thinking about that? What are your thoughts on attribution or compensation for the use of copyrighted works in training data?

RA: I think issues associated with AI-generated works and copyright protection get a little more attention because they're sexier. But of more commercial interest to many parties are these training data issues. Basically, many of the AI systems being used to generate works are based on machine learning models, which are themselves trained on data. If you're going to train a machine to make images, the training set is made up of images—sometimes billions of them. If you want billions of images, the way a lot of people get them is just going on the internet and scraping them off.

If those images have copyright protection, there's a question of whether you are allowed to do that: whether it’s copyright infringement. It usually involves making a large number of copies of those images, at least, which means that in the United States, if it isn't copyright infringement, that is by virtue of the fair use doctrine. That’s basically a doctrine that says certain sorts of activities that would literally be copying aren't considered copyright infringement, because we think they are things people should be allowed to do anyway. It's a factor test that looks at whether a use is transformative, whether it is for commercial or non-commercial purposes, how much of the work is being used, what impact the use has on the rightsholder's market, and so forth.

Some of the people making these systems basically say, “Look, the only way we could make a system like this is by using billions of images. There's no possible way we could get permission from every person. People wouldn’t give us permission; the cost of doing this would be prohibitive; we’d lose out on building these systems; it's a matter of AI competitiveness; all of the public benefits from this...etc.”

Rights-holders on the other side will say, “First, many of us are in the business of licensing these images.” Getty Images, for example, licenses images for purposes of machine learning. They may say, “There are benefits to licensing because while there will be fewer images, they may be higher quality. You may get better quality AI content because the images are curated, and there are better descriptions of them. We have databases that may help avoid some of the problems with AI in inappropriate bias and unfairness. For example, we have representative databases of physicians of all races and genders, so you're not just getting white males when you type in, ‘Give me a picture of three doctors.’ People can and do license these, so it's not impossible to license them.” That's basically the two camps and where they separate.

In other jurisdictions around the world, England being an example, they don't have an open-ended fair use doctrine. They have specific closed statutory exceptions to infringement, and so it is a big policy issue over there. In their last copyright amendment, the European Union adopted a text and data mining exception that was fairly narrow and non-commercial. Text and data mining largely refers to drawing insights from databases, but can more broadly capture training algorithms based on data. 

The UK Intellectual Property Office did two consultations on AI and IP, and their only recommendation was to propose enacting a very broad commercial exception for text and data mining, which would've made it not a copyright infringement to train these systems in this manner. That recommendation was not ultimately adopted at the ministerial level, and so now they're doing additional consultations. Whether as a matter of statute or fair use, whether this sort of activity is permitted is a big issue.

In the absence of definitive guidance, there is potentially a lot of risk for anyone using these systems. There are now lawsuits making their way through the courts in the US and in the UK on this. Getty Images, which is a major rightsholder of copyrighted content, is suing Stability AI, which operates a text-to-image generator, alleging that they trained on their data without permission. They claim that this is copyright infringement, that their works are infringing derivatives, and that there's trademark infringement and some other issues associated with that.

There are some other lawsuits and class actions going on now, for example, against GitHub, OpenAI, and Microsoft over the Copilot tool, which is an AI that generates code. One of the things AI is surprisingly good at is writing software (making other AI, essentially), and the plaintiffs allege that the defendants trained the AI on open-source software in the GitHub repository without giving proper attribution.

That's perhaps more of a contractual terms-of-service dispute, but a lot of these things are moving along in parallel. Right now, creatives and companies are faced with these choices: Do we use these systems? Do we allow our employees to use these systems? Under what circumstances do we use these systems? And people who aren't thinking about it as much are just plunging ahead full steam.

III. CAREERS, HEALTHCARE & AI

NU: Can you tell our listeners a little bit about you and your background?

RA: I had an unusual path into legal academia and legal practice. I did an undergraduate degree at UCLA in integrative medical theory combined with a four year master’s degree in traditional Oriental Medicine, which is largely acupuncture and herbal medicine. I then went on and did a dual degree in medicine and law between UC San Diego School of Medicine and Yale Law School—did a medical internship to get my license. Someone told me I had too many degrees to do anything other than academia, so I went straight into the academy.

NU: I want to ask you a couple of questions about your career path, and also advice for our listeners. First, what are some of the ways that your medical background has influenced your legal career?

RA: The medical background has been useful in my legal career for a couple of reasons. I would say that firstly, a lot of my work has been in the life sciences and in healthcare. Also, as a patent attorney and a patent litigator, having a technical background has been very helpful for dealing with some of those issues. It's also something that clients who are deeply in that space tend to appreciate. The scientists or the clinicians feel that they're speaking to someone who understands things from their perspectives. It has really helped me to work in certain subject matter areas. 

As it happens, my other main area of focus is in tech, and that is largely something I just picked up on the job. My initial interest in all this kind of stemmed from seeing how AI was being used in drug discovery and broadening out from there to today where we have AI copyright issues, so it's never too late to learn something like that. But my medical background comes in handy in all sorts of unexpected ways, including on business trips. When you're on a flight and someone calls for a doctor, it's good to have a trained person responding. Although I feel a little bad at this point in my career for the person I then go over to. I do try and see if there's a real practicing doctor on board, first.

NU: Have you ever helped anyone on the flight? 

RA: I did practice as a clinician for a number of years, so I saw any number of very high intensity medical crises. On planes, yes I have helped people, but no one whose life I've saved. More people having indigestion or nose bleeds or so forth. 

NU: And Ryan, what advice do you have for practitioners who are just getting started in their careers?

RA: I think AI is going to have a very significant impact on the legal profession. I don't think you need to learn how to code, especially now that AI can code, but I think that for younger attorneys, being familiar with these systems and how they work is going to give you a significant competitive advantage.

We are not quite replacing first- to third-year associate brief writing with GPT-4, but having a proficiency with these tools may improve your own writing, and you’ll also understand some of the dynamics that more senior attorneys might not. There are already horror stories, recently from Samsung, about employees putting confidential information into ChatGPT and learning the hard way that its confidential protections are gone. Lawyers are doing this too with privileged information from clients, so don't do that. Doctors are doing it with personal health information from patients, so don't do that either. We need a generation of people who have grown up using AI and understand how to use it to augment their own performance.

Listen on Apple Podcasts and Spotify.

This AccelPro audio transcript has been edited and organized for clarity. This interview was recorded on April 13, 2023.

AccelPro’s interviews and products accelerate your professional development. Our mission is to improve your day-to-day job performance and make your career goals achievable.

JOIN NOW

Please send your comments and career questions to questions@joinaccelpro.com. You can also call us at 614-642-2235.

If your colleagues in any sector of the IP law field might be interested, please let them know about AccelPro. As our community grows, it grows more useful for its members.

AccelPro’s expert interviews and coaching accelerate your professional development. Our mission is to improve your everyday job performance and make your career goals achievable. How? By connecting with a group of experienced IP Law professionals.
You’ll get knowledge and advice to help you navigate the changing field. You’ll hear deep dives with experts on the most important IP Law topics. You’ll give and receive advice on how to make difficult job decisions. Join now to accelerate your career: https://joinaccelpro.com/ip-law/