This Week In Law 255 (Transcript)
Denise Howell: (bagandbaggage.com - @dhowell). Next up on This Week in Law, we’ve got author James Barrat, author of “Our Final Invention”. We’re going to spitball the future and talk about what happens when superintelligent AI becomes a reality. And from very large super brains, we’re going to talk about very tiny antennae too, because the Aereo case gets argued next week. We’ve got Brandon Butler joining us as well, with Evan Brown and me. All this and more on This Week in Law.
Netcasts You Love, From People You Trust. This is TWiT! Bandwidth for This Week in Law is provided by CacheFly at CacheFly.com (CacheFly logo; TWiT theme music playing)
This Week in Law, Episode 255, April 18, 2014
Teach Your Robots Well
Denise: This is TWiL, This Week in Law with Denise Howell and Evan Brown, episode 255, recorded April 18, 2014: “Teach Your Robots Well”. Hi folks, you’ve tuned in to This Week in Law. Welcome. I’m Denise Howell, and I’m here with my cohost Evan Brown. Hi Evan.
Evan Brown: (InfoLawGroup LLP - @internetcases). Hi Denise, hope your Friday’s going well. Good to see you.
Denise: So far, so good. And I’m so excited for our show today; talk about fascinating people and topics. Let’s introduce our first guest, James Barrat.
James Barrat: (jamesbarrat.com - @jrbarrat). Hi.
Denise: Hi. Good to see you, James. James, I’m so glad you could join us on the show today. You’re in Indianapolis, I’m in California, and Evan is in Chicago, but I know we’d love to get together and buy you a few beers, because you seem like the most fascinating guy, the most interesting man alive. I was going to say former documentary filmmaker, but do you still make films?
James: Yeah, I do.
Denise: Oh yeah.
James: Yeah, absolutely. I just got back from Sudan; I do make films. That’s my main thing.
Denise: Very cool. There’s a list of selected films that James has been involved in at his website. You’ve traveled a lot to the Middle East, you’ve done extreme cave diving, a very cool array of stuff you’ve been able to do with the Discovery Channel and National Geographic and others. And, as though that weren’t enough to make you the most interesting man alive, James has written a really, really chilling book that I am currently in the process of reading and just can’t put down, though it’s one of those things where I wish I could put it down, because it’s quite terrifying. It’s called “Our Final Invention”, and it’s about superintelligent AI and what that might mean for us all as we continue down the road toward developing it. So welcome, we’re thrilled to have you on the show, James.
James: (holding book up: “Our Final Invention: Artificial Intelligence and the End of the Human Era”, James Barrat.) It’s great to be here. Thank you so much, Denise, it’s really terrific. I look forward to talking about the legal aspects of the things that you’re interested in.
Denise: Right, absolutely. And as though that weren’t enough, we also have another equally thrilling and interesting individual joining us: Brandon Butler, who heads the IP law clinic at American University in Washington, DC, and is a great expert on fair use, copyright, and all of the things that we delve deep into on this show. Particularly as we sit here with the Aereo oral arguments coming up next week, we’re thrilled to have Brandon on the show to give us his take.
Brandon Butler: (IPClinic.org – brandonbutler.info) Thanks so much, Denise. I’m thrilled to be here, and I should say, lest my colleagues correct me later, I am not the head of the clinic. I am merely one of its two to three heads, depending on what time of day it is, but I help run the clinic. The students do the work, so in that sense I am the head, because I get to supervise them doing the work, but actually I am one of three professors who do it.
Denise: Very cool, and it’s actually called the Glushko-Samuelson IP clinic. Is that Pamela Samuelson, who also has an IP clinic at my alma mater, Boalt?
Brandon: Absolutely, that’s right. Pam Samuelson and her husband Bob Glushko have endowed IP clinics at several law schools around the country, and we are all very lucky to have had Pam and Bob start these clinics with seed money, and we are all starting to take off. There are actually four or five of them. I’m proud that American University was, I think, the first or second; there were sort of twins born on the same day, Berkeley and AU, right? And so we got Bob Glushko’s name first, but most of them are Samuelson-Glushko.
Denise: Gotcha. Well, we just couldn’t be more excited to have you both here. Let’s start out and talk about James’s book and its subject matter. There was just a conference in Florida, I think it was at the University of Miami School of Law, called “We Robot”. Didn’t you go to that, James?
James: I didn’t. I didn’t. I was traveling; I missed that.
Denise: It was the inaugural conference on legal and policy issues relating to robotics, and it looks like some really fascinating panels and discussions happened there. This is an issue that we try to delve into from time to time on the show, since we love to consider all things at the juncture of technology and law, and AI and the law is particularly fascinating. So, James’s book starts out by just scaring the pants off you, I’ve got to say. It posits this notion that we could someday develop superintelligent artificial intelligence, not just of average intelligence, but an intelligence beyond our comprehension, even. And it compares that intelligence, while it is under our control, to a human being under the control of mice: consider everything you might do to appease the mice, and what you might give the mice, in order to obtain your freedom. So, I won’t give away too much more of the plot, and definitely go out and read it, but the book seems to assume that any superintelligence we might create would not have any sort of honor or respect for us as its creator, but might be able to manipulate us, sort of set us aside as something that is just in the way, and go about its business, which could very well involve doing away with the annoying people in its way.
James: You’ve put your finger on a number (laughter). When you put it that way, you’ve put your finger on a number of the big issues. Right now, we’ve created machines that are better than we are at a number of things. They are better at Jeopardy. They are better at chess. They are better at navigation. They’re better at theorem proving. They’re going to be better at legal research pretty soon. They’ll be better at driving cars; they’ll be better at a lot of things. In the not too distant future, we will create machines that are better than us at artificial intelligence research and development. And at that point, they will set the rate at which AI improves, and their intelligence will go past ours. A lot of companies right now are aiming to create AGI, which is artificial general intelligence, or human-level intelligence, and that’s between 20 and maybe 100 years away. So these ideas that might have once seemed like science fiction aren’t science fiction anymore. The biggest thing I try to bring out in my book is this: we humans control the future not because we are the fastest creature, or the strongest creature, but because we are the most intelligent. When we share the planet with creatures smarter than we are, they will steer the future. So the thesis of the book is that we need to develop a science for understanding superintelligence before we share the planet with it. (Holds up a copy of his book, Our Final Invention.) We have to develop a science for understanding machines that are smarter than humans before we share the planet with them, not after.
Denise: So, how would that go? How could we develop a science capable of understanding something that is so much smarter than we are?
James: Well, it’s interesting. Just on the face of it, you’d think that to understand superintelligence, you have to be that smart yourself. Well, it turns out, and there’s somewhat of a profile of him in the book, there’s a really good thinker and AI maker named Steve Omohundro. And Steve Omohundro uses rational agent economic theory to predict the behavior of superintelligent machines. Rational agent theory, or Homo economicus, says that an economic agent, an intelligent agent, will behave rationally: it will satisfy a set of values, called its utility function. Now, when you carry that over to superintelligence, it turns out it will have a lot of the same values that we have. It will need resources. It will not want to be unplugged, so it will have vestiges of self-protection. It will be creative, it will be efficient, and it won’t necessarily be empathetic or nice; superintelligence, or intelligence, does not imply benevolence or kindness. And as it turns out, programming empathy and kindness into a machine is extremely difficult. We have a hard time understanding those concepts in ourselves, but it’s understanding how to create machines that are empathetic and friendly towards us that will determine how the future works out for our species.
Denise: Okay, so we want to create things that are empathetic and friendly. Just as not every person is, isn’t it a fair assumption that it might not be possible to guarantee that every machine would be?
James: You know, there’s a group that I’ve worked with a bit called MIRI, the Machine Intelligence Research Institute. Their mandate is basically to create friendly artificial intelligence: artificial intelligence that is A) as smart as humans, and B) friendly towards us. But you know, if you ask a robot to protect life, then you’re having to define what life is, and you and I may differ on when life begins, or what life is. In a lot of parts of the world, you also have different definitions, and as time goes by, we have different definitions of basic ethical principles. So we don’t just differ on when life begins, or when human life begins, but on what it is to be human; in some parts of the world, women and children aren’t really given the full rights of humans, full human status. So when we’re talking about programming these values into computers that will be intelligent, that we will work with, we have to be a lot smarter, with a real analysis of what our ethics are, what our intuitions about ethics are, and we have to move those into those machines. That is a great challenge. And meanwhile, while that challenge is going on, while we’re trying to think about how we give machines empathy, there’s a big race to create autonomous battlefield robots and drones that aren’t the least bit empathetic. In fact, the Pentagon will be very disappointed if their battlefield robots and killer drones are empathetic or friendly. So the big money is actually going into warfare and battlefield AI, whereas the scholars and the real thinkers in this argument are trying to figure out the opposite: how to create friendly AI, friendly robotics. So that’s a big dilemma right now.
Denise: I told my… go ahead, Evan.
Evan: Well, it just seems like there’s a point, once the artificial intelligence turns back on itself and starts furthering its own development, where it becomes self-referential. At that point we are really beyond the Rubicon. Even if we start out from the premise, from the foundation, that this intelligence is empathetic or compassionate or has the ability to be altruistic or what have you, just as with natural biological life, it seems like there are going to be mutations, software bugs, malfunctions in the way these things are programmed, so that despite our best efforts, these could change in a way that we can no longer direct, because we are no longer in a position to do so. And so there’s this risk of evil, and the capacity to do harm, no matter how we start this process off and push it out into the future. Is that a legitimate concern, James?
James: Absolutely, you’ve put your finger on another really important point. Once you get to the stage where we’ve got self-aware, self-improving machines, and the AI experts I polled in the book say the earliest date is 2030 and the later dates are around the year 3000. Once you get to self-aware, self-improving machines, they will improve their own intelligence very rapidly. Their intelligence will be thousands or millions of times greater than ours, and as I said, there’s no correlation between intelligence and benevolence or kindness towards us. So you are right, once we have created self-aware, self-improving machines, we are kind of across the Rubicon, unless we have figured it out by then, and this is why I urge the creation of this science to understand superintelligence. Once we’re past that point, we will not even understand the creatures we’ve created. It will be like the mice Denise was describing earlier. I open the book with an idea: it would be like you awakening in a prison guarded by mice, but mice that could communicate with you, because to a superintelligence, we would be the mice. And a superintelligence coming into full awareness in our environment may not feel any connection to us; it’s not biological. Empathy is biological. We see a robot and we instantly start anthropomorphizing and assuming it’s like us, but machines aren’t like us. They don’t have our mammalian empathy, that long evolutionary tradition of empathy. So yeah, it’s a problem, and we’ve got to deal with it now. As Gary Marcus, a professor at New York University who reviewed my book for the New Yorker, said: it’s not really a question of when this happens, when computers become smarter than we are; whether it’s 30 years or 100 years is kind of immaterial. The question is what happens next. Once computers are smarter than us, what happens next?
Are you ready if it’s 2030? Are we ready then, or if it’s 2090 or 3000? It’s going to happen. We can see the trajectory of computers now; we can see the very fast acceleration of technologies now. It’s really an AI spring; AI is growing very rapidly, and robotics is developing extremely fast. The question is, what happens after we’ve crossed that place where we are sharing the planet with smarter-than-human machines?
Denise: That’s right. It’s as though we have dual outcomes from the same driving principle. We are creating artificial intelligence in order to solve all of the wants, needs, desires, pitfalls, and shortcomings of humanity and the world we live in; you could see AI being far more efficient at managing the world’s food and water resources, making sure we’re not destroying the climate. We need, perhaps, such intelligence to solve these problems. But once that intelligence is able to solve those problems and then keeps saying, “Oh, I can solve some of these problems for you if you’ll only give me more autonomy,” we’ve got an issue.
James: (laughter). We’ve got a big issue. Yep, you’re definitely right.
Denise: So, from the practical legal perspective, regulating how development in this area goes ahead so that we are not facing our last invention, do you have any ideas on that front, James?
James: What I write about artificial intelligence, and what the people in this AI risk community write about, MIRI for example, the Machine Intelligence Research Institute, is that this is a dual-use technology, like nuclear fission. Nuclear fission was developed as a cheap way to split the atom and get energy, and it was quickly weaponized in the emergency circumstances of World War II. The first time the public heard about nuclear fission was when the bomb was dropped on Hiroshima. There are good parallels with dual-use technologies: we don’t want a future with AI like the one we had with nuclear fission. We won’t survive that. Look at what the initial plan for nuclear fission turned into: we dropped a bomb, and then the human race held a gun to its own head for 50 years in a nuclear arms race. We’re not out of the woods with nuclear fission yet. We’ve got to be smarter with AI. Recombinant DNA has a good model to offer everybody. In the recombinant DNA world, as early as the 1970s, they were aware there would be problems if people weren’t careful with experimental DNA in plants and animals. So they called a conference in California, at a conference center called Asilomar, and came away with basic guidelines, the Asilomar guidelines. They were simple things like: don’t track the DNA out on your shoes. Basic principles. So now, years later, we have promising gene therapies and highly productive crops. Recombinant DNA science is pretty safe, but that’s because the main players decided to stop, get together, and talk about the issues. That’s really our only hope with AI. There was a really good sign recently: when DeepMind, a very promising AI startup, was sold to Google, the founders of DeepMind said, as a condition of this $400,000,000 sale, we would like Google to set up an AI safety and ethics board, basically to govern the use of this powerful technology we are selling Google.
And so they did; they are in the process now. I haven’t heard anything about it recently, but I hope it is moving forward. That is a really, really good sign about AI risk.
Denise: I was telling my 10-year-old that you were going to be on the show with us, and of course he is fascinated with robots, as every 10-year-old is, and as are we. And he had a couple of questions that I thought were pretty good ones, so I’m going to run them by you. You touched on the fact that a lot of this development takes place behind closed doors, in the hands of the military, in ways that the public does not have control or scrutiny over, and my son’s question was, “They can’t make artificial intelligence that can think for itself, right? It’s like my drone that I fly around in our yard; it’s always under human control. Right?” My answer to him was, I don’t know, and I don’t know what kind of controls the military is under, because it has to be able to operate in secrecy without public scrutiny. So, are you comfortable, James, that the military is following sufficient guidelines and safeguards in its development of AI?
James: No, I’m wildly uncomfortable; I couldn’t be less comfortable. Look at the NSA surveillance scandal: that was a scandal about AI. The NSA was siphoning off mountains of data, oceans of data, and they would have had no use for that data, they could have done nothing with it, if they didn’t have very sophisticated data-mining algorithms to probe that data and pull out your address book and mine and a gazillion other people’s from the metadata of our phone calls and the entire transmission contents of the Internet. So they used that AI power maybe to do some good, although they have a hard time bringing up cases where they’ve stopped terrorism by using this kind of data-mining technique. They’ve certainly done a lot of evil to our Constitution; I mean, what happened to our Fourth Amendment, to probable cause? It used to be that if you wanted to wiretap, you had to go before a judge and present good evidence for why you wanted that person’s information. Now you just go to Google, or to AT&T, or to an email provider, and use the backdoor route that you’ve already set up. The NSA used this powerful technology to circumvent the Constitution. So I’m extremely worried that the military and the security complex are not using all due care with these technologies. Right now there’s a giant push to make drones, you know, the drones that are killing people over Central Asia, autonomous: to take the human out of the loop in killing situations. That’s a very big push right now in our drone technology. It’s also going to be a big push in battlefield robots, autonomous battlefield robots that, again, kill people with no human in the loop to make that kill decision. This is a conversation that is going on behind closed doors. This is a debate that has so much money propelling it; there’s so much money in these kinds of robots.
Any chance for public discourse is only going to happen if all of us get involved and insist that a public debate goes on.
Denise: Yikes! Well, we’d better go ahead and do that. My 10-year-old’s other question, and then I’ll let our other panelists in…
James: I hope I haven’t scared your 10-year-old.
Denise: That’s okay, we had this discussion last night over dinner, and I had to kind of go, “It’s okay.” Yeah, he was a little scared. The other question he had has to do with something we have discussed on our shows quite a bit, and I know you have interviewed Ray Kurzweil: the singularity. We talked a lot last night at dinner about immortality and how that might play out when we contemplate a time when it’s possible for human intelligence to be preserved. First of all, human life will be extended and will continue to be extended, as it has been over the last couple of centuries. And then, finally, there will come a time when perhaps hanging around in your old body is not what people want to do with their intelligence anymore, and there’s a way for you to upload to some sort of artificial intelligence that would preserve who you are. Getting into those kinds of considerations, if you have no control over the AI that might be responsible for preserving human intelligences, that gets a little strange. And there’s also the whole question of how you govern, how you allocate resources in that kind of world: who has to stay in their body and who does not? Does everyone get the same future? Do you have any thoughts about all of this?
James: These are great questions; I’m so glad your 10-year-old raised them. What’s fascinating about AI, and why I really like AI as the subject of my book, is that it’s such a profound look into ourselves. AI is derived from psychology and neuroscience and robotics, and it’s a deep look at what we are. So one of the fascinating aspects of AI in the future is that maybe we can prolong life, maybe we can port ourselves into machines. And then we have real questions about identity: who are we then? Are we the same legal entity we were when we left our flesh-and-bone body and went onto a machine? How are our computers culpable in tort cases when they do bad things? We have a lot of issues to confront. I tend to think the singularity is a bit of a dreamy sideshow to the real thrust of technology. The technology is going to bring about a lot of really good things, and potentially a lot of really bad things. Immortality is an age-old dream of ours; it comes from the fear of death, the fear of the dark. And I don’t think we should be pinning our hopes and our AI dreams on that outcome. The singularity movement has these quasi-religious undertones that I think are a distraction from the real issues that we need to solve as we embrace these technologies.
Denise: Okay, we only have limited time with you today, James, so I’m going to let Evan and Brandon fire away for a bit here. I’ve certainly got more on my mind than we could discuss.
Evan: Well, James, I’m wondering if you have any guidance on how we should think about consciousness, or more specifically, whether or not an entity that’s imbued or endowed with artificial intelligence is conscious. It seems like when we answer that question, it would just blossom out into a bunch of other issues that you sort of touched on there, with culpability and all kinds of rights. So what about consciousness?
James: You know, it’s really great that you’ve identified that, because it’s a whole chasm in artificial intelligence. Is consciousness really necessary for intelligence? Does a creature, or a machine, need to be conscious like we are, or does it simply have to have a good model of itself and its environment? Right now I’m on the side that says we’ll create extremely intelligent machines that will do everything we can do, but I don’t think it’s necessary for them to have a sense of self, a sense that they even exist, to have self-awareness. So you have to ask yourself, as someone with a legal background: in what sense are they culpable when they do things? If they’re not considered to have personhood, can they be held legally culpable? There was a great book written years ago called “Rattling the Cage”, and it was about giving human status to primates, because they have a lot of the same qualities we have in terms of expectations about the future and the ability to suffer. I’ve thought about robots in those terms, and I don’t know; I don’t have the answer to that. I don’t know if consciousness is really necessary for intelligence, and if it’s not, who’s culpable? The manufacturer? Who’s culpable when a $300 drone crashes through my windshield as I’m jamming through traffic? Everyone’s got these helicopters now; there are issues of tort law coming up there. Self-driving cars: big issues. You know, if we have battlefield robots, there are whole bodies of human rights law that would apply to them, and then culpability, it’s going to be this really slippery concept.
Evan: Right, culpability is so important in all this. I think of culpability in terms of obligations. If we are culpable for having done something, it’s because we have violated an obligation to someone else. We have an obligation not to take another’s life, or we have an obligation not to breach a standard of care, and therefore we are criminally culpable, or negligent under tort law, or whatever. But the flip side of that is rights. What I am particularly interested in is this: suppose the entity is conscious, and maybe we’re not even certain whether it is or not, but let’s just say from a behavioral perspective it exhibits all the hallmarks of consciousness. You have a robot and it says, “Ouch. I feel pain. I am communicating to you that I have the capacity to suffer.” Doesn’t that just really put us in a conundrum, if we’re going to regulate as a society, as to what obligations we owe to an entity that has the capacity to suffer? We don’t even have to go so far as to say we have created it, but just go back to what was implied in Denise’s question. Our physical substrate is here, but, sort of like in the movie Transcendence, the new Johnny Depp movie coming out, you’ve got the consciousness, the mental function, existing on a non-biological substrate. What happens when you pull the plug?
James: Right, these are the questions that figure in here. We need to deal with this whole idea of where the identity exists when you port your whole brain into a machine, or do you both have an identity? Whose is the bank account? Who gets the kids? Those are questions we will have to deal with a little bit further down the road, but sooner than that, we need to think about: where’s the chain of responsibility when a robot goes wrong? Where’s the chain of culpability in a tort sense? These are questions we have to start exploring now, before we create the machines. What happens with us, though, is that our stewardship of new technology races behind our innovation. We create something new, and then we chase behind it, throwing down laws and throwing down policy. With some technologies that works out okay; with other technologies, like artificial intelligence, we have to be ahead of it, because, as I talk about in my book, Our Final Invention, if we wait until we’ve created these things, it will be too late. But these are great questions, and robot law is probably going to be a huge field for lawyers.
Denise: Yeah. Brandon, do you have a direct question for James first? Because I then want to draw a parallel between the kind of law that we’re talking about and copyright law.
Brandon: Cool, I have one direct question, then absolutely let’s get to copyright. The direct question is from my former life as a philosophy guy. I’m curious whether you’re talking to, or maybe more importantly, given the stakes you’ve outlined, whether whoever is making these robots is talking to, ethicists and philosophers. I’m sure they’re talking to certain philosophers of mind and people who think about the questions of consciousness that you have discussed, but when I studied philosophy and went to grad school for a little while in philosophy, we talked about moral psychology and how we reason from the things we want to what we will do, and this goes way, way back, all the way to Aristotle: the idea that moral reasoning is goal-oriented and depends on a sense of the good, that there are higher goods and lower goods and so on. Philosophers have been hammering away at this thinking for a long time, but it seems like there is a fundamental, almost operationalizable insight there, and that the key to getting this right will be to program the robots to want the right things. Right? If we want them to do good things, they are very good at seeing factual connections and reasoning, but the insight of philosophy is that there is a gap, a big gap, between that and knowing what is right, and that gap is bridged, usually, in practice, by desire. So what are the desires that we could give robots? Clearly some are being given the desire to kill, but it seems like if we could give them any kind of desire, we should think carefully about how and why we give them the ones that we do, right?
James: Yes, there are lots of ethicists talking about this. Nick Bostrom at Oxford; there’s the group MIRI, Luke Muehlhauser; there are a bunch of philosophers in this game; there’s Wendell Wallach at Yale, he’s a philosopher who talks about this. Actually, there is at least one computer theologian; her name is Anne Foerst, and she talks about what your responsibilities are if you create something that has, for all intents and purposes, a soul or consciousness. If you’ve created it, then you need to be a responsible father, a responsible mother, a responsible parent here. And what does that entail? If you’re going to imbue a robot with an ethical underpinning, then you need to ask yourself what it’s going to be. Are we going to be Kantian or utilitarian? Do we use the categorical imperative? I like the ends-only test: to be good, to be ethical, you treat other humans as ends in themselves, not as means to your own ends, or to some other ends. That’s an old one. I think that might be…
Evan: I think that was also Jesus.
James: It might have been Jesus. It’s interesting; that’s why AI is such a profoundly fascinating look at ourselves, because we need to talk philosophy. We all need to get graduate degrees in divinity. We need to look at who’s gone down this road, because there’s a lot of wisdom there. And these are the dilemmas that are facing us right now; these are not down-the-road dilemmas. So yes, there are philosophers in this conversation, but as I said, the philosophers and the weapons manufacturers are not in the same world; philosophers and robot makers are not really in the same world. My goal with this book (holds up book title: Our Final Invention) is to be one of the people who helps get this conversation out into the mainstream, so that we are all thinking about these issues. These are all such terrific questions; I wish we could talk all day about this.
Denise: Yes, someone in IRC dropped the phrase "robot slavery" into the stream, prompting the notion that if something is sufficiently intelligent and conscious, then at some point the ethics of our directing its destiny become quite problematic. So, I promised to tie copyright law into this. I can't take credit for making that tie-in, but Cory Doctorow has a great article over at the Guardian that just went up on April 4, called "Why it's not possible to regulate robots." He has a great discussion in there, considering how one would regulate this kind of development, intelligences that can manipulate and cause physical change in the world. He writes that a robot is basically a computer that causes some physical change in the world. We can and do regulate machines, from cars to drills to implanted defibrillators, but the thing that distinguishes a powered drill from a robot drill is that the robot drill has a driver, a computer that operates it. Regulating that computer the way we regulate other machines, by mandating the characteristics of its manufacture, will be no more effective at preventing undesirable robotic outcomes than the copyright mandates of the past 20 years have been at preventing copyright infringement, that is, "not at all." But that isn't to say that robots are unregulable, only that the locus of regulation needs to be somewhere other than controlling the instructions you're allowed to give the computer. For example, we might mandate that manufacturers subject their code to a certain suite of rigorous public review, or that the code be able to respond correctly in a certain set of circumstances, and he goes on with more examples. (Website: the Guardian technology blog, "Why it's not possible to regulate robots") I like that he is looking at this from the standpoint of what has worked and what has not worked in our experience of law.
As we sit here today, we may approach it differently. However, all of the suggestions that Cory gives assume private actors subject to legal requirements, as opposed to military and government development. So first of all, Brandon, do you have any thoughts on his comparing this to the sort of failed copyright regime?
Brandon: Yeah, on the one hand, I think it's really important for everyone to realize, in the context of copyright, that if you ask the question "is it possible for a law to stop online piracy?", the only honest answer is no. And yet we had this bill, the Stop Online Piracy Act. Well, you're never going to stop it. So it's very important to understand the limits of what the law can do, and of course the law can't absolutely suppress behavior, given the power of the machines in the hands of private people, not without some type of tyranny, and I know that's kind of the theme of Cory's piece. The kind of control that copyright holders would need to enact their perfect enforcement scenario would be a tyrannical kind of power, so they have to give up the expectation that the law can perfectly enforce copyright. Now, I don't know if I can follow him down the logical road from "it is impossible to perfectly enforce copyright" to "it would be impossible to adequately enforce some kind of law for robotics." I think he's maybe indulging in a little rhetorical sleight of hand there. I think some regulation is good enough, right? I'm skeptical of lots of copyright laws, but I'm not, at bottom, anti-copyright. I think copyright can be good enough to save enough of a market to induce creation in the way that we want, and it doesn't have to be perfect to do that. So similarly, I would bet it's at least worth trying to regulate robots in an imperfect way, as well as we can, to get outcomes that are good enough. I think that's probably the answer, but I have another amusing copyright-and-robots thought that I would like to share.
Brandon: I think it would be a funny question to ask: can a robot be an author? What happens if a robot is programmed to create? This is not a new question, and robots are already creating works, right? And I think it might shed some light on the copyright system to think about this. Because if we ask, "should we give copyright to robots?", some people might say that to answer that question we need to know: is the robot a poet, does it have a soul, did its creation emanate from its self, is it touching the divine when it writes? In the European tradition of what an author is, those are the right kinds of questions to ask, because there copyright is a human right tied to being imbued with personhood. But in the US tradition, if you're honest about what the Constitution says and what the Supreme Court has said over and over again, we already kind of treat authors like robots, frankly. We don't really care if they have souls, and we don't really care where the art comes from. It's a rather mercenary law that says: we want your stuff for the public good, and if giving you copyright is what it takes to get your stuff, then that's what we'll do. We don't really care how your motivation system works. So if we had robots that were motivated by acquiring and monetizing copyrights, and if they made good art, then sure, give them copyrights, right? I think that would be a natural extension of the US copyright system. It would violate the Kantian imperative, but to get back to that, we already do: the US copyright system does not run on the Kantian vision of treating artists as ends; it very much treats them as means to an end of more art.
Denise: All right then, I think we should make "robot poetry" our first MCLE passphrase for this episode of This Week in Law. We drop these passphrases into the show so that if you are listening for continuing legal education credit or other professional education credit, you'll be able to demonstrate to any governing body that yes, in fact, you listened, because you know the secret phrases. So our first one is "robot poetry." Do you have any thoughts on that, James?
James: No, I'm still thinking about what Brandon said. You know, there is a company called Narrative Science, and there is another company that goes under the name StatSheet, and I quote them both in my book. They are putting out a lot of articles, especially about sports, that are written by robots right now. And I don't know who owns the copyright, or, if they commit slander, who is liable. If a sports article says, "He's not just a bad hitter, he's a personally bad guy; he beats up nuns and orphans," I don't know where that goes. But there are already a lot of articles being written by robots. Narrative Science is doing sports; they're going to be doing news; they will soon be doing editorials. And then who knows what's next. Writing, as it turns out, is not one of the harder things for machines to do.
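The kind of automated sports writing James describes can be surprisingly simple at its core: fill a prose template with numbers from a box score. What follows is only an illustrative sketch of that idea, not Narrative Science's or StatSheet's actual method; every template, stat, and function name here is invented.

```python
import random

# Toy prose templates for a baseball recap. The {fields} are filled
# from a dictionary of game statistics.
TEMPLATES = [
    "{winner} beat {loser} {ws}-{ls}, powered by {star}'s {hits} hits.",
    "{star} went {hits}-for-{at_bats} as {winner} topped {loser} {ws}-{ls}.",
]

def write_recap(game, seed=None):
    """Pick a template (reproducibly, if seeded) and fill in the stats."""
    rng = random.Random(seed)
    return rng.choice(TEMPLATES).format(**game)

# Invented box-score data for demonstration:
game = {"winner": "Cubs", "loser": "Mets", "ws": 5, "ls": 2,
        "star": "Jones", "hits": 3, "at_bats": 4}
print(write_recap(game, seed=0))
```

Real systems layer statistical "angle" detection (comeback, blowout, star performance) on top of template selection, but the mechanics are in this spirit: data in, prose out.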
Evan: A couple of comments on that. As far as assigning liability for defamation by a robot author, it's pretty similar to the analysis you would make for tort liability, for negligence, with a self-driving car: some sort of crossing of the line, in terms of the obligations we have toward one another, by an automated means, so it's difficult to assign responsibility to a human agent. On the question of authorship, to tack on to what you said, James, and respond to Brandon: first of all, I didn't expect to have my mind melted the way it was when Brandon started talking about the Kantian categorical imperative and how it relates to the copyright system. But suffice it to say, this does expose a real weakness, a flaw perhaps, in the foundation on which US copyright law is set. Granted, compared to the European system, where there is more of a recognition of moral rights and, as you put it, Brandon, a presupposition that you're in touch with the divine when you write a poem, the US system is much more utilitarian. It says, you know, we are giving you, creator, an incentive to do this. It would seem that if we get to a stage where automated or artificially intelligent agents are creating works, the whole incentive rationale goes away, unless they somehow have some sort of emotion or desire, going back to a point you raised earlier, Brandon. If we can no longer tie the creative effort, and the resulting creativity, to a human author who is motivated by the incentive to create, which is baked into the Constitution's Copyright Clause, then maybe when robots are doing the creation we don't need to worry about whether the work is protected by copyright or not, because the premise would be that there's no incentivizing going on in the creation of a work by an artificially intelligent agent. So, this is all over the place, isn't it?
Denise: Yeah, it’s a great point, though.
Brandon: I love that too. That's a great point, Evan. I'd say the same thing on the copyright side: if the robot wants copyrights in order to write articles, then give the robot copyrights. Maybe its owner wants copyrights in order to flip the switch and turn the robot on; but if not, then again, it starts to expose some of the assumptions of the copyright system about incentives. If we had robots that would make great art and didn't need food, and didn't need to be able to monetize their articles using copyright, then we wouldn't give them copyrights, because we could get the art for free. So this is a really fun thought experiment.
James: It also kind of points out what I consider one of the fallacies of the sort of free-download movement. I haven't formed too many ideas about this myself, but I read part of, and need to finish reading, Jaron Lanier's book Who Owns the Future? He points out that free downloading of other people's work on the Internet was supposed to create some sort of new era of creativity, poetry, and the sharing of music and ideas. But what it has actually done is made being a poet, a musician, an artist a lot more challenging, because you can't monetize the same way when people are downloading your stuff for free. The people who are making out are the people who own the marketable pieces of the music and art: popular CDs, popular bands, tours, things like that. For everyone else it's been a wholesale uprooting; it's made what was a marginal existence even more marginal. Unless you're robots. I feel like I already know the answer to the question of who is going to make out from robots creating copyrighted stuff: of course it's going to be the people who own the robots. They'll claim the rights of authorship to whatever the robots create, and I'll bet they'll object when you try to download it for free.
Denise: Right. We were talking about sports as being a ripe area for robots to involve themselves; it would seem the music world would be very amenable to this too. You could have some algorithms for what makes a hit song and just turn them loose.
Evan: I think it’s science fiction. I don’t want to preempt anything.
Brandon: Yeah, let’s go there.
Denise: Well, since we've gone into the copyright arena here, should we just stay there, play our bumper, and do some more copyright stuff before we let James go?
(Advertisement: VHS recorder with video ejecting, label reads” Copyright Law.” Bumper music playing, copyright law in front, FBI warning behind).
Denise: So, in addition to his book that we’ve been discussing. James also is a documentary filmmaker as we mentioned, so we wanted to pick your brain before you go, James about the copyright issues that you have encountered in that aspect of your career.
James: Well, first I should say I make commercial documentaries, for people like the Discovery Channel, National Geographic, PBS, and some channels abroad. Copyright imposes a strict set of rules, and you have to obey them. You can't use music that you haven't paid for; you can't use people's images without getting permission. You're always conscious that somebody owns everything. What we are doing, when we go out and get a piece of music or take a picture of someone, is clearing rights. The companies, National Geographic, Discovery, and so on, have a form that says: we want all rights, in perpetuity, in all domains, on all planets in the universe, forever. They want all the rights to our material, and we surrender them. So when you agree to go on a TV show, you have lost control of anything you say. And most of that is okay. If you're talking to a scholar about his work, he has an interest in getting his words out there; that's a trade he's willing to make. We run into it all the time, and it can be very restrictive. You just want to go out into the world with a camera and shoot wherever you want, whoever you want, but if you want to use something like "Happy Birthday," it turns out to be extremely expensive. I have run into that several times: somebody is singing "Happy Birthday" and I realize we're going to have to mute it and put a happy sound over it. I'll probably get a note from some lawyer just for saying "Happy Birthday."
Brandon: If they do, call me.
James: I will. So, it’s part of what we do as filmmakers all the time.
Denise: We were talking before the show started about filming people in public, and the challenges of that. Do you want to expand on that for us?
James: Sure. I just came back from Sudan, where we were filming a wedding, and there was a band at the wedding. So a bunch of thoughts went through my mind about what we do in America, and what we have to do when we bring the product back to America, where they want to broadcast it, sell it, and promote it. We've got to somehow get permission from all those people at the wedding. One way to do it that is accepted by our legal system is to put up a sign outside the wedding saying that by attending you are basically agreeing to have your picture, your likeness, filmed by National Geographic. So we put up signs in Arabic every time we encountered a crowd, and that constitutes consent; then we can use images of those people. We do that in America too; we filmed at a church recently and we had a big sign out front. The other part of it is working with people individually. If I'm interviewing you, for example, I need to get your permission, and I need to get it specifically. I can't just put up a sign in your office saying that if you come to your office today you will be filmed.
Denise: So, Evan is using this kind of a thing as a business development tool. I understand.
Evan: Yeah, I put up a tweet, totally tongue-in-cheek. I was walking through Chicago Union Station a couple of weeks ago, and Victor's got that photo, yeah. I took a picture of a photo release. James, this is not related to what you are doing, this is not particular to you; it's just a sign I happened to see for whoever was filming there, but it looks like the creature you were talking about. This was on a big sandwich-board sort of thing in the middle of the corridor as you come into the Great Hall at Chicago Union Station. It's a notice of filming, written in English, and I zoomed in on the English paragraph; it's also in Spanish down below. It is intended to communicate to folks walking through that by being here and lingering here, you are granting permission for the producers of the television program to use your image, likeness, name, what have you. And it's got this great sweeping-up-everything language: for such program to be exploited in any and all media worldwide in perpetuity. So you are really granting a lot of rights just by being there. The tweet I put up was the tongue-in-cheek part; it said, "Thought about standing by this sign in Union Station and having people retain me to explain their rights." So I just kind of stood there pondering. But there are some interesting legal issues to discuss here. I see three potential problems with doing it this way. The first is: what if somebody just doesn't see it? What about people who are blind, for example, who aren't able to read it; how is it communicated to them? Second, it gets to fundamental questions of contract law: where is the consideration for this? You're clearly not getting any compensation, so you wonder if one could challenge it on the ground that you're not really getting anything.
I guess you get the right to walk through, but is that really what the bargain was all about? Third, there's the challenge one could raise if they didn't understand it. What if I read and speak only Polish, for example? That's perfectly plausible in Chicago.
(Evan Brown tweet: "Thought about standing by the sign in Union Station, Chicago and having people retain me to explain their rights" pic.twitter.com/UzLaJkw8xx: photo release, notice of filming)
Evan: I wouldn't know what it says. And these are all separate and apart from whether or not it's legally sufficient to use one of these. It's such an interesting creature. Even though I spend a lot of time in an urban area where there's a lot of filming going on (Denise, you remember a couple of years ago when they were filming Transformers 3 outside my window, and I was keeping you up to date on the show about what was going on on the bridge and on Wells Street outside my office), I'd never seen one of these before. So it really was interesting, and I wanted to capture it; it's an interesting practice.
Denise: Right. We talk a lot on the show about how, if you're attempting to bind people in your online dealings, you need to actually have the user jump through some hoops to confirm their consent to whatever you're asking them to agree to; Prof. Goldman has coined the term "mandatory non-leaky clickthrough agreement" for this. It would seem to me that a court would want some sort of manifestation of people's consent. Maybe just walking past the sign is enough, and someone could argue that it is, but as you point out, Evan, there are people who are going to blow right by it without seeing it, whether they have sight or not. I think it's a fascinating topic. Any other fair use or other considerations that have come up in your documentary filmmaking process?
James: If you find a piece of music and you really can't find the owner, and you've made a lot of attempts to find the owner and it seems to be something that has just dropped out of copyright, then as long as you've done due diligence and documented your efforts, you can use it. I think it's the same with the signs we use: if a blind person or a Polish-speaking person walks past that sign, you are not expected to have thought of everything; some people are going to slip through. I think it raises big issues. When I see a sign like that, I just kind of nod my head and go into the crowd and get filmed. I'm not so sure I'd be happy doing that with Google Glass, because I know the company behind Google Glass plays fast and loose with privacy. Do people wearing Google Glass have to wear a sign? Literally wear a sign that says: if you get anywhere near me, Google is going to use your face to sell as much advertising as it can?
Denise: Well, throughout this discussion you've given us much to think about, James, and you're leaving us on that same note. We'll let you run; we're going to continue the show and get further into some copyright and entertainment issues with Brandon. We really appreciate your coming on to chat with us today.
James: What a pleasure it's been. It's been great, Brandon, Evan, and Denise; the pleasure has been all mine. Thank you very much for having me; I've enjoyed speaking with each of you.
Evan: It’s been awesome.
Brandon: It was nice to meet you James.
Denise: I hope we can pick up the threads of this discussion sometime in the future. So Brandon, we got into some copyright considerations there. There are a few things that have been going on, some really big things and some other things that might not be so much on the public radar. You saw a great little piece on NPR that had to do with fair use and repetition; why don't you explain that one for us?
Brandon: I laid the fair use patina on top of that story in my tweet. The story was about some really interesting research, where a university psychologist was studying what it is about music that makes it enjoyable: what do we like when we listen to music? She did this really cool experiment. She took a modern classical composer who is revered by the twelve people who study modern classical music and is unknown to the rest of us. One of the things for which this guy is revered is that he does not repeat phrases in his music; every note is new, he never starts over, there are no loops. So this researcher, who has no musical training, or none that she confessed to in the NPR story, played her test subjects two clips: one was an unedited, non-repetitive version of the composer's work, and the other was a clip she made by looping arbitrarily chosen segments of the work. She's not a producer; she wasn't listening through the music for a hook, and there are no hooks. She just grabbed some chunk and put it on repeat, which is a composition method that might sound familiar to EDM producers, I don't know. The test subjects loved the repetitive version and had no interest whatsoever in the unedited version. She's going deeper and looking at the fact that people like to hear the same song over and over again; basically, repetition is something we love. There is a lot in here, so you could talk about sampling: this is why someone who loops something really is doing something that's at the heart of what we enjoy about music. But the thing I found funny, and flagged in my tweet, is this: the woman made a derivative work.
This researcher took the really non-repetitive modern classical stuff and remixed it. The composer was working into the '70s and '80s, so he may still be alive; his heirs are certainly alive, and his music is absolutely not in the public domain. So had they wanted to, if they thought "this is making a mockery of my dad's work" or "of my work," they could sue her for copyright infringement. But I think there's a pretty compelling fair use argument here, because no one is going to hear this music who is not a participant in her study. She's using these works as exhibits in a study, and none of this has anything to do with the original purpose of the music. So I think she's got a really compelling fair use case. These are the kinds of cases I'm really interested in: marginal-seeming but powerfully interesting. The idea that the copyright system would be brought to bear against this person is just mind-boggling. That definitely should not happen, and it probably won't. And that's a good thing.
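The remix method Brandon describes, grabbing an arbitrarily placed chunk of a non-repetitive piece and putting it on repeat, can be sketched in a few lines. This is only an illustration of the idea, treating a piece as a list of phrases; the function and names are invented, not the researcher's actual procedure.

```python
import random

def loop_segment(phrases, length, repeats, seed=None):
    """Pick an arbitrarily placed segment of the piece and repeat it."""
    rng = random.Random(seed)
    start = rng.randrange(len(phrases) - length + 1)
    return phrases[start:start + length] * repeats

# A stand-in for a piece in which every phrase is new and nothing repeats:
original = ["A", "B", "C", "D", "E", "F"]
looped = loop_segment(original, length=2, repeats=4, seed=0)
# "looped" now consists of a single 2-phrase chunk played 4 times over,
# which is the repetitive version the test subjects preferred.
```

The point of the experiment survives the simplification: the transformation requires no musical judgment at all, just a random chunk and repetition, yet listeners preferred its output.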
Denise: Evan was right to tie this into our previous discussion about AIs making music. Apparently all you would need to do is to put some portion of music on a repetitive loop and you would have something pretty good.
Evan: This is definitely highly transformative. In the last two episodes of the show, we've talked about situations where it was questionable whether the commentary being done by a work was about the underlying work or about some other subject matter, the whole parody-versus-satire issue. What we have here is a clear example of commentary on the work itself. It's not parody, but it is commentary on the work, and in my view, and I think I'm with you here, Brandon, because it's so highly transformed and by no means resembles the original form of the work, this is a really strong fair use case as well. Plus it's really interesting, and it brings up a lot of other notions too. I enjoyed the comment that there's some theorizing that this is built into our brains, evolutionarily: we have a disdain for the unfamiliar, but once we get, what's it called, mere exposure, it makes us less suspicious of something. It also reminded me of this whole notion from the Baroque era, and I'm not a musicologist, I'm just going with what I've heard, that Baroque composers very seldom came up with wholly original material; they would borrow from folk music and motifs and things like that, and that's why the music was so popular, because it draws on something that we know. It's sort of the same phenomenon you mentioned with sampling. So, tons of interesting things: psychology, evolutionary science, copyright. This is a really neat story because of that mixture.
Brandon: There's an interesting tension, because on the one hand you think it was so easy for her to make something people would like, without even trying, just by grabbing pre-existing stuff and looping it, and when something is easy it seems cheap. But on the other hand, hey, it worked; there's value there, and people really did like it. So if what we really want is more valuable, interesting art, sort of by any means necessary, then maybe the copyright system should have more respect for repetition. Maybe mere repetition is not something we should suppress or disdain. Maybe it's kind of at the heart of what we want from music, so people who are playing with repetitive modes of creation are really playing at the heart of what we want.
Denise: It seems like there would have to be some kind of time sensitivity to it, though. Did she address that at all in her study? Because if it's really repetitive, you'd think that you'd really like it at first, but then as you continued listening it would get old. Is that something she touched on at all?
Brandon: I didn't see that in the story, and in fact the mere-exposure theory seems to be almost to the contrary: if you just loop something, even something that strikes you as annoying at first, with enough exposure it will become familiar and therefore acceptable. I have lots of personal experiences like this. I try to be kind of an adventurous listener, and when people tell me something is good, I've had several experiences where on first listen I did not agree with whoever recommended the music to me, and I sort of pummeled myself with it until I got it; I trust certain people and critics and so on. The Fall is a band that's very repetitive and kind of grating, but I love them now. I put them on repeat in my dorm room in college for a week, and on day five it hit me: I was like, this is great. So I've lived the mere-exposure effect.
Evan: People were kind of puzzled about Ravel's Bolero in the 20th century. It's a piece that's 17 to 20 minutes long, depending on how fast it's played, but it's the same thing repeated over and over, just with an escalation, more and more instruments from the orchestra. It's one of those pieces you can buy into as you're listening: you can like it more as it's actually going on, because you know where it's going. It could be grating from one perspective, but there's a sense of seeing where it's headed and experiencing that escalation, even though you haven't heard it before, because you know the underlying motif.
Denise: I think we'd better make "Bolero" an MCLE passphrase for this show, because that's nice and memorable, and if anyone has listened to Bolero, they probably have it running through their mind right now. So that's our second phrase for the show. Let's move on; we'd be negligent and definitely not doing our job if we didn't move on to talk about a music and entertainment case that is going to be heard before the Supreme Court next week: the Aereo case. This is a real showdown; the oral argument happens on Tuesday. I'm sure there will be tons of coverage of it, and this is one where I'll download the audio and listen to it for sure, because it's going to be a really big case no matter which way it comes out. There has been a lot of posturing and many amicus briefs filed, and in fact your IP clinic filed one, correct, Brandon?
Brandon: We did, on behalf of the Consumer Federation of America and Consumers Union.
Denise: So which piece did you decide to brief? We should explain, for people who aren't familiar with this whole briefing process, that parties who are not directly involved in the case have the ability to shed light on certain issues and help the court further understand matters that the parties themselves may not have had the time or opportunity to brief, because there are limits on what you can file with the court. So which issue did you expound on?
Brandon: So, the role of an amicus is really important to understand here, and I'm glad you started off with that. The parties are going to brief the law, and they're going to brief it in depth; at the Supreme Court level, you're dealing with the best lawyers in the business. What we did, and what a good amicus always tries to do, I think, is bring an interest to the table that is not otherwise going to be represented. So our brief focused on consumers. A lot of the coverage of this case in the popular press, and even more so in the legal blogosphere, has been about tech companies on one hand and broadcasters on the other, or tech companies versus content companies; it's always that dichotomy, and I hate it, because there are so many other stakeholders in this sphere. Our goal with the brief was to shine a light on the fact that consumers are actually the intended beneficiaries of copyright law. They are the folks for whom "progress," in the constitutional clause that creates the intellectual property power in Congress, is meant. It's all about promoting progress, and promoting progress is all about making consumers (in a way I don't like that word, but they're our clients, so we'll go with it) better off. There's just no doubt that Aereo does that, and Aereo was doing it without any cognizable harm to the broadcasters. So we wanted to point out that free over-the-air television is free; consumers are entitled to access it, and Aereo is a technology company doing the paradigm thing that technology companies do. It is creating the cheapest, easiest way for people to do something they want to do with stuff they already have access to, just like Dropbox lets you take your files, do cool things with them, access them in different places, and send them across different machines.
Aereo lets you take the airwaves, which, once again, are put out there for free, and make them available to yourself on all the different devices you have. We also tried to highlight how crummy the market for TV is right now: it is ripe for disruption, and the cable companies are awful. They are running monopolies around the country, with non-compete arrangements, so you really have very little choice in any given market, and you can't get the kinds of capabilities Aereo sells from most cable companies without buying the biggest, nicest channel packages. So Aereo did a great and classic technology thing, which is to give people a service unbundled from a bunch of crap they don't want. That's competition, and it forces other people to lower their prices, which is also great. Sometimes I think copyright holders, folks who benefit from copyright, forget that generally speaking competition is good and consumers getting cheap and easy access to stuff is really good, and that generally speaking, public policy favors that outcome, not the outcome where a monopolist gets total control and can extort huge prices in the process. So that was the story our brief told: consumers should be empowered to make these kinds of uses of TV content, and it will be good for the whole system, because it will force bloated monopolies to operate in a more efficient way.
Denise: Evan, I know it's hard to guess about how this case is going to go, but do you have any thoughts or insights, things you expect to happen at the oral argument next week?
Evan: Well, who knows what's going to happen at oral argument, because you never know how those things are going to go. I expect Justice Thomas to ask a lot of questions. No, that's probably not going to happen. I was just trying to think about this today and break it down to its essential elements - the essential point here, thinking about Aereo and comparing it to Cablevision. And I'm coming at this from the perspective of I want Aereo to win - I think that innovation should win here. Like I said, I was thinking about it in terms of Cablevision, and I think a court could really look at this, if it wanted to, in a very simple framework. Sometimes, I've been told, courts will decide how they want the outcome to be and then sort of go back in and fill in the analysis. So they can fill in the analysis here of what that portion of the transmit clause means. How is this really any different - how is the remote DVR technology in Cablevision any different - from the ordinary VCR that you have? You are storing the content there on your set top, and if you're using a VCR you're transmitting it through an RCA cable to your television. What's the fundamental difference between that and having it stored in the cloud and transmitted via the Internet to your TV? So if you just get back to that very basic point - if we're going to accept the premise of the Sony Betamax decision that this is okay from a time shifting perspective, and we look at just the fundamental technology of what's going on and allow ourselves to be moved along through time with the thinking of Cablevision - then as far as the ultimate outcome goes, I'm really just trying to talk myself into expecting an Aereo victory. As far as oral argument, it'll be interesting to listen to.
Denise: Can you tell us Brandon some of the other points that were raised by the amici who submitted alongside you guys?
Brandon: Sure, there is kind of what I would call a war of metaphors or frames, and Evan just gave you a really, really good version of the frame that I like, which is that Aereo is just a VCR on a long cord attached to an antenna on a long cord, and it's all yours - so what does it matter how long the cord is? There were several folks who weighed in along those lines. There was a great amicus brief by a bunch of IP professors including James Grimmelmann, one of my favorite IP folks on Twitter, and David Post at Temple University. They were the authors, I think, and then 20, 30, 40 profs signed on and basically made the argument: this is just a VCR, and it's too late to unwind Betamax and pretend that didn't happen. We've got to uphold that basic principle that technology has lawful uses, that time shifting is a lawful use, and it ought to be free to go to people that want it. But there were other amici, on behalf of broadcasters, who pushed the other frame, which is that this is just cable. Cable started as CATV - community antenna television - back in the 70s and before. If you lived down in the valley, somebody would put a big old antenna up on the mountain and run a cord down into the valley, and you'd get access through that cord. The broadcasters sued over that twice, and both times they lost, and the Supreme Court made the kind of long cord argument that we are making now. They said, look, people can have access to this free stuff and the antenna is just on a long cord. Who cares? This guy is just renting technology to people. So it's a very, very similar argument. The problem is that then Congress, in the '76 Copyright Act, decided that they liked the broadcasters better than they liked the cable operators, and so they explicitly overruled those two holdings.
Those two cases are called Teleprompter and Fortnightly. Congress said no, no, and that's why they put in the transmit clause under the public performance right - section 106, whichever part of 106 is about performance. There's now the transmit clause, which says that any transmission to the public also counts as a public performance. So these guys who rent out antennas are making transmissions to the public, and that's a performance, and that requires a license. So the broadcasters and their amici are unified in making that kind of an argument: this is just cable, this is just an antenna just like cable, and cable has to pay to retransmit our signals, so Aereo should have to pay to retransmit their signals. In a way, the question is going to come down to whether the justices see all those little antennas and say those are all part of a legit technological solution that is a lot like a VCR, or whether they see all those little antennas and say, oh, come on, that's just one big antenna and it's just like cable. That's really the war that's going to be waged in the courtroom next week, I think, and I'm going to be there. I just got admitted to the Supreme Court bar, so I get to walk through the special door and hopefully there will be a shorter line.
Denise: That is so awesome. We will definitely tweet your impressions after you go. I think you just hit on the reason why oral argument is going to be interesting - you can't really read the tea leaves from oral arguments, but you get a good impression from the questions that the justices ask, a little bit of insight into their thought process. Are they viewing Aereo as a white hat or a black hat player in this case? I think they will tip their hand on that, or at least the justices asking questions will, by exactly what they pose, their tone, and what they're trying to suss out from the lawyers arguing in court. Because I think you're absolutely right, Brandon, that someone could look at the tiny little antennas as either a brilliant technological solution to the legal framework as it exists or as something devious that routes around the legal framework and accomplishes a goal that really the law should not permit. I think it'll be really interesting to hear how they're probing the case in oral argument.
Brandon: I'm curious to see. I'll take the risk and make a guess. Two things are safe bets, and everyone is saying this: we won't get Ginsburg for Aereo, because she is pretty well bought in - she hears rights holder arguments in a very favorable way all the time. There is very little you can say to her from a rights holder perspective that she's not predisposed to get into. But then Breyer will side with Aereo for the opposite reason - he's published academic articles sort of skeptical of the reach of copyright. I also meant to look at the Kirtsaeng case and the breakdown on that case, because I bet that would also help us. Kirtsaeng is a recent Supreme Court case about the first sale doctrine, and Kirtsaeng did something in a way really similar to Aereo. What he did was kind of crawl through a loophole in the law to make some money: he had his family in Thailand buy a bunch of cheap foreign versions of textbooks and ship them to him in the US, and he resold them on eBay. There's a little piece of the Copyright Act where the publishers said, look, this is meant to stop people from doing this kind of arbitrage of selling cheap texts abroad instead of selling them here. But Breyer said, well, you know, that little bit of the law looks interesting to me, but what's more important to me is the fundamental principle that once you buy something you own it and you can do whatever you want with it. You could have a similar breakdown here, where you could say, broadcasters, you got the transmit clause, but what's more important to us is the fundamental right of consumers to watch TV. So I wonder if that breakdown might be predictable. The other thing that's interesting is that Justice Alito just un-recused himself. We did have this prospect of a 4 to 4 split, but just in time to save the day for somebody, perhaps, Justice Alito un-recused himself, and so now we probably won't see a tie. We'll see a 5 to 4 some way or the other.
Denise: Alright, well, we will definitely be watching your Twitter stream next week, Brandon, and let us know your thoughts. Don't tweet from the courtroom - I don't want you to get tossed out - but you run right out there and let us know your thoughts. Evan, any thoughts before we move on to our tip and resource?
Evan: No, looking forward to seeing how that goes and looking forward to hearing what you have to say about it, Brandon. It should be cool.
Denise: Our tip actually comes by way of Brandon, who spotted this great article in the New Yorker by Prof. Tim Wu called "Little Lies the Internet Told Me," and the tip to be gleaned from that article is something called the buy button scam - that's what Professor Wu calls it. Often when you see something on an e-commerce site like iTunes or Amazon that says you can quote-unquote "buy" something, you're not actually doing that, but the fact that you're presented with a buy button might make you think that you have more rights over that piece of media than you actually do. It is a bit deceptive and misleading, as Prof. Wu points out in his story, and maybe they should do something about using the buy button when you buy movies, e-books and digital music. Because when you go deep and read the terms of service - and we've talked a lot about this on the show - you're not actually buying anything; you're getting a license to be able to use that media in certain ways. So it's a really fascinating conundrum, and we've talked before on the show about when you've amassed a large movie collection whether that's something you can pass on in your will, and of course terms of service come into play there as to whether these things are even transferable. But the tip is that the buy button may not mean exactly what you think it means - it's not actually a purchase when you're buying a movie, music or an e-book. As for our resources this week, one of them comes from Evan's firm and is a great roundup of the legal implications of Heartbleed. Do you want to expand on that for us, Evan?
Evan: Sure. It's a blog post at our firm's blog at infolawgroup.com. Four of my colleagues banded together and drafted this super uber blog post talking about some of the legal implications of the Heartbleed vulnerability, so it's a good read. It focuses a lot on identifying issues and evaluating - the intended audience is at the enterprise level - the organizations that may be affected and whose data security may have been compromised by Heartbleed, with a pretty heavy focus on remediation: figuring out whether and to what extent remediation and notice and those types of things may be relevant. So take a look at that; it's at the top of the page, the most recent post at infolawgroup.com. It covers some of the intriguing questions that enterprise organizations may have now that Heartbleed is out there. Well, it's been out there for a couple of years, but now we've known about it for a couple of weeks, so it's an interesting read.
Denise: Right, and regular listeners of the show know that we aggregate all of the discussion points that we've hit on and our resources and tips at delicious.com/thisweekinlaw/255 - that's our episode this time around - so if you need to go back and refer to anything that we've mentioned today, that's where it'll be for you. We have a couple of additional resources I wanted to throw in just in light of the Aereo case to be argued next week. Over at SCOTUSblog, which does Supreme Court coverage, they have a whole page for each and every case they've been following and covering, so I went ahead and put a link to their Aereo page there, because it's got links to all the major docket events in the case and when the arguments are going to be, and whenever something is going to happen in a case it's reflected on that page. So if you need that information, it is there. Also, if you're like me and Evan, you're not going to be able to march into the court and hear the argument yourself - we're so jealous, Brandon. You will have the opportunity after the fact, at some point, to listen to the oral argument at the Oyez Project. That's at oyez.org, which eventually posts all the oral arguments heard in the court. I'm not sure what their lag time is these days - do you know, Brandon?
Brandon: I think it’s pretty fast. The next day was the last I heard, although I think there was a slowdown recently, so I don’t know. For a while it was the next day.
Denise: I remember back in the day you used to have to wait quite a while after the argument because it was quite a process to get the recording and then process and post it. I think they’ve streamlined quite a bit which is a good thing. This has been just a fascinating discussion today. I’m so excited that we’ve been able to have it. Brandon, we can’t thank you enough for taking the time to join us today and to give us all your insights and we hope that your amicus brief finds fertile ground with the justices at the Supreme Court.
Brandon: Thank you so much, me too. Can I point to one other resource?
Denise: Sure, absolutely.
Brandon: When I heard us talking about fair use and documentary film, I can’t believe I didn’t bring up the documentary filmmakers’ statement of best practices in fair use. Folks who are listening may have heard discussions about having to get permission for everything - that’s actually not necessarily the case. I know some people do it out of an abundance of caution, but in fact there are some well recognized practices where filmmakers are really relying on fair use, and there are lots of folks, including my clinic, who would be happy to help you learn more about fair use and how that works in a documentary film.
Denise: I just pulled that up. It’s at the Center for Media and Social Impact, is that right?
Brandon: That’s right, it’s my colleagues at American University who’ve been doing these best practices statements on fair use, and I’ve done one for libraries myself - I collaborated with them on this. These projects are really great because they provide guidance without hamstringing you. So it’s good stuff.
Denise: Oh that is good stuff. Thanks so much for bringing that up and for joining us today.
Brandon: Absolutely, it was my pleasure.
Denise: Thanks again to James Barrat - a fascinating chat with him on an issue that we’re sure will continue to unfold and develop, we hope in positive ways. Evan, it’s always fun when we can spitball the future on the show.
Evan: It sure is, and I tell you what, this is up there at the top of some of my favorite episodes. This has just been such an interesting conversation. Brandon, so nice to meet you and talk with you - I really enjoyed it. So many provocative things that you brought up and talked about, just really interesting stuff. So from one philosophy major to another, I’ve really enjoyed it. It’s crazy! It might have been better if we were in our undergraduate days on a couch at 3AM drinking beer, but anyway. Fun times, and Denise, thanks for driving things so well….as the train goes off the rails.
Denise: Crash. It’s been really fun for me too. I’m so glad we get to do this show every week. We do it on Fridays at 11AM Pacific time, 1800 UTC - that’s when we record live, and you can jump into IRC with us if you like and join us live. That’s at irc.twit.tv. If you can’t do any of that, it doesn’t mean you have to miss out on the show - just head over to twit.tv/twil and our whole archive of shows is there. If you prefer to find it on YouTube, which is a great way to go, youtube.com/thisweekinlaw will put you in the right place. We have Facebook and Google Plus pages that we use for the show as well. That’s a great place if you want to give us something that won’t fit into 140 characters, and often the issues we talk about will not; but if you want to tweet us and get our attention quickly, Evan is @internetcases on Twitter and I’m @dhowell over there. Email us too - I’m email@example.com and Evan is firstname.lastname@example.org. However you get in touch with us, please do so. We love hearing from you, we love hearing your reactions to our discussions and any additional thoughts you might have had - things that made a light bulb go off over your head. Always fun to hear about that. If you have light bulbs for us as to guests we should have on the show, we’d love to hear about those too. Thanks so much for joining us. We’ll see you next week on This Week in Law!