by Y Combinator, 4/25/2018
Miles Brundage is an AI Policy Research Fellow with the Strategic AI Research Center at the Future of Humanity Institute. He is also a PhD candidate in Human and Social Dimensions of Science and Technology at Arizona State University.
Miles recently co-authored The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation.
Tim Hwang is the Director of the Harvard-MIT Ethics and Governance of AI Initiative. He is also a Visiting Associate at the Oxford Internet Institute, and a Fellow at the Knight-Stanford Project on Democracy and the Internet. This is Tim’s second time on the podcast; he was also on episode 11.
Craig Cannon [00:00] – Hey, how’s it going? This is Craig Cannon, and you’re listening to Y Combinator’s podcast. Today’s episode is with Miles Brundage and Tim Hwang. Miles is an AI policy research fellow with The Strategic AI Research Center at the Future of Humanity Institute. He’s also a PhD candidate in Human and Social Dimensions of Science and Technology at Arizona State University. Tim is the director of the Harvard-MIT Ethics and Governance of AI Initiative. He’s also a visiting associate at the Oxford Internet Institute, and a fellow of the Knight-Stanford Project on Democracy and the Internet. This is Tim’s second time on the podcast. He was also on episode 11, and I’ll link that one up in the description. Alright, here we go. Alright guys, the most important and pressing question is now that cryptocurrency gets all the attention, and AI is no longer the hottest thing in technology, how are you dealing with it?
Miles Brundage [00:51] – Yeah, Ben Hamner of Kaggle had a good line on this. He said something like, "The great thing about cryptocurrency is that people no longer ask me whether there's an AI bubble." It's hard to compete with the crypto bubble, or phenomenon, whatever you want to call it.
Tim Hwang [01:08] – It's actually a good development. The history of AI is full of these winters, and having another hype cycle to kind of balance it out might actually be a good thing.
Craig Cannon [01:18] – Yeah, absolutely. Let’s talk about your paper to start off, Miles.
Miles Brundage [01:21] – Sure.
Craig Cannon [01:21] – What is it called and where do you go from there?
Miles Brundage [01:24] – It's called The Malicious Use of Artificial Intelligence, and there's a subtitle, Forecasting, Prevention, and Mitigation. It's attempting to be the most comprehensive analysis to date of the various ways in which AI could be deliberately misused. These aren't just things like bias and lack of fairness in an algorithm, which are not necessarily intentional, but deliberately using it for things like fake news generation, combining AI with drones to carry out terrorist attacks, or offensive cybersecurity applications. The essential argument that we make is that this needs to be taken seriously: the fact that AI is a dual-use, or even omni-use, technology, and that, similar to other fields like biotechnology and computer security, we need to think about whether there are norms that account for that. Things like responsible disclosure when you find a new vulnerability are pervasive in the computer security community, but haven't yet been seriously discussed for things like adversarial examples, where you might want to say, "Hey, there's this new misuse opportunity or way in which you could fool this commercial system that is currently running driverless cars," or whatever. There should be some more discussion about those sorts of issues.
Craig Cannon [02:36] – Is it going into the technical details or is it a survey of where you think things stand now?
Miles Brundage [02:41] – Most of it’s a general survey, but then there’s an appendix on different areas, like how to deal with the privacy issues. How to deal with the robustness issues, and different places to look for lessons.
Craig Cannon [02:53] – Okay. Tim, have you been focusing on any of this stuff while you’ve been here at Oxford, or is your work totally unrelated?
Tim Hwang [03:00] – It's somewhat related, actually. I would say that I've mostly been focusing on what you might think of as a subset of the problems that Miles is working on. Where he's sort of saying, "Look, AI isn't going to be inherently used for good. In fact, there are lots of intentional ways to use it for bad," right? One of the things I've been thinking about is the interface between these techniques and the problems with disinformation. Whether or not you think these techniques will be used to make ever more believable fakes in the future, and what that does to the media ecosystem. I would say it's a very particular kind of bad actor use that Miles was talking about.
Craig Cannon [03:32] – When you’re doing this research for both of these topics, are you digging into actual code? How are you spotting this in the wild?
Tim Hwang [03:41] – My methodology is really kind of focused on looking at what is the research that's coming out right now, and trying to extrapolate what the uses might be. Because one of the really interesting things we're seeing in the AI space is that it is becoming more available for people to do. You've got these cloud services. The tools are widely available now. What's really missing is the ability to kind of figure out how you do it, right? What is the methodology that you use? The question is, do you see papers coming out saying, "Hey, we could actually use it for this somewhat disturbing purpose," and then extrapolating from there to say, "Okay, well what would it mean for it to get used more widely?"
Miles Brundage [04:20] – Reading papers, seeing what the hot areas are, and cases in which some sort of potentially negative or positive application is on the cusp of getting just efficient enough to be used by a wide array of people, or the hyperparameter optimization problem is close to being solved, or whatever sort of trend you might see that might be a sign that certain technologies are going to be more widely usable. Not just by experts, but potentially in a huge range of applications. For the purpose of this report that I recently wrote, we got a ton of people together, including Tim, at a workshop, and we talked about technical trends, and had people in cybersecurity, and AI, and other areas give their best guesses of what's possible, and then prioritize what the risks are, and what to do about them. Often pulling together different disciplines is a good way to think about what's possible. One other thing that I'll point out is that you don't necessarily even have to look into the technical literature to find discussion of these sorts of misuse applications today, because it's a hot topic already. Things like deep fakes for face swapping and pornography are a huge media issue right now. That actually happened while we were writing this report, and then we added something later about it, because we characterize the general issue of fake videos, misinformation, and AI as making it more scalable, because it potentially requires less expertise. And while we're writing that, this deep fakes thing happens,
Miles Brundage [05:52] – and it’s democratizing in some sense, the ability to create fake videos. It’s quite a live issue.
Tim Hwang [06:01] – Right, there's a really interesting question here, particularly when you think about prediction: there's the realm of what can be done, and then trying to understand what's likely to actually happen in practice seems to be the really challenging thing. Because there are lots of terrible uses for almost every technology.
Miles Brundage [06:17] – Yeah.
Tim Hwang [06:18] – But we see certain uses more prominently than others, right? That's actually where the rub on this sort of stuff is, and it's part of this prediction problem, right?
Miles Brundage [06:25] – Yeah, so that's why you kind of have to, first of all, have some humility about what you can predict. If it's a fully general purpose, or fairly general purpose technology that could be steered in a bunch of different directions, or applied to a bunch of different data sets, then you should expect that if it's super widely available, a bunch of people are going to find new uses for it. That's a reason to look upstream at the papers and see where the technical trends are. Because then you could say, "Well, maybe this is not yet ready for primetime for any application," or, "This is starting to be fairly general purpose."
Tim Hwang [06:59] – Yeah, a good question for you Miles, is whether or not you think that we’ll see the virtual uses be the ones that happen first, versus the physical ones. Some people have said, oh okay, well you could use AI to really make hacking much easier. Or you might be able to use it to create these fakes, which we’re already seeing. But I’m wondering if those threats kind of evolve in a way that’s different or maybe even earlier than threats of people who have talked about, “Oh, what happens if someone built a drone that goes out and uses algorithms to go hurt people?”
Miles Brundage [07:30] – It's hard to say. One heuristic that I've used is that stuff in the physical world is often harder. It's both more expensive and less scalable because you have to buy actual robots, and then there are often hardware issues that you run into, and the general problem of perception. Perception is much harder in the real world than in static data sets. But yeah, we're seeing progress. Just a few days ago there were a bunch of cool videos from Skydio of their autonomous drone for tracking people doing sports, and flying around, and it seems to be pretty good at navigating in forests, and things like that. Maybe technologies like that are a sign that there will be many more both positive and negative uses in the real world. In terms of nearer term impact, those sorts of things that have those autonomous features still aren't super easy to use for end users outside of a particular domain. I'm not sure that anyone could just easily repurpose it to track a particular person or whatever. For that domain application, I don't know how expensive it is, but it's probably more expensive than a $20 drone.
Tim Hwang [08:39] – Right. What's the first harm that comes out of the gate in a really big way? Because I've debated this often: okay, say there's a horrible self-driving car incident that occurs, right? Maybe that turns society off in general to the whole technology, and there's a big categorical outlawing of it. I'm like, well okay, that's kind of not so good, right? At the same time, I'm kind of like, okay, well what if hacking becomes a lot more prominent in a way that's powered by machine learning? We know that, I don't know, the response to huge data disclosures or huge data compromises is actually a quite limited public response.
Tim Hwang [09:15] – That seems not so good either. Basically, people either overestimate the risk or underestimate the risk, depending on what happens first.
Miles Brundage [09:20] – People are starting to get kind of desensitized to these mega-disclosures, and so maybe they won't even care if there's some adaptive malware thing that we might be like, "Whoa, that's kind of scary" about. But it could be that something truly catastrophic could happen, if you combine the scalability of AI and digital technology in general with the adaptability of human intelligence for finding vulnerabilities. If you put those together, you might have a really bad cyber incident that will actually make people be like, whoa, this AI thing. Yeah, so that's something that worries me a lot.
Craig Cannon [09:58] – But it’s sort of a moving goalpost on the positive and negative side, right? Newsfeed, for instance. You could call that AI, to a certain extent. As it’s feeding you information. People get mad at newsfeed, they don’t get mad at AI. The notion that the public would generally turn on something like that seems almost unrealistic. Because you want to just point at one thing.
Tim Hwang [10:18] – Right. I mean, that basically says that what the public thinks of as AI isn't AI.
Craig Cannon [10:23] – Right.
Tim Hwang [10:24] – What we're actually talking about is this weird amalgam of popular culture, some research explanations that make it to the public, all these sorts of things. There's a lot to the question of what the public actually thinks AI even is, which is really relevant to the discussion. Because the newsfeed assuredly is AI. It uses machine learning. It uses the latest machine learning to do what it does. We just don't really think about it as AI. Whereas the car, I mean, I think a lot of robots kind of fall into this category, where even robots that don't involve any machine learning are thought of as AI, and actually impact the discussion about AI despite not being related to it at all in some absolute sense.
Craig Cannon [11:01] – But then it sort of becomes a design challenge. It's why these self-driving cars are shaped like little bubbly toys. They're so much less intimidating when you see one just bump into a little bollard down the street here. Whatever. But yeah, the robot, like the factory robot, for instance, those are terrifying to people, but they've always been terrifying to people. There's no difference here. Surely there are positive things that you guys notice. You're going around to these conferences. What questions are people asking you about AI? What is the public concerned about, positively and negatively?
Tim Hwang [11:33] – There’s two things that are really at top of mind that I think keep coming up both in the popular discussion around AI right now, and also among researcher circles.
Craig Cannon [11:41] – Yeah.
Tim Hwang [11:42] – Right, so the first one is the question of international competition, and what it looks like in this space. This is the question of, it seems like China's making a lot of moves to really invest in AI in a big way. What does that mean for these research fields? Will the US, and Canada, and Europe sort of stay ahead in this game? Will they fall behind? What does that mean if you think governments are going to see this as a national security thing? That's one issue that I hear a lot about. The second one is around the issues of interpretability, right? Which I think are a really big concern, which is: these systems make decisions. Can we render some kind of satisfying explanation for why they do what they do? I use the word satisfying specifically there because there are lots of ways of trying to tell how they do what they do, but this question of how you communicate it is a whole 'nother issue, and those seem to be two really big challenges. I'm sure Miles has seen other things too.
Miles Brundage [12:28] – Yeah, I mean, there's a lot going on. The whole FAT ML community, fairness, accountability and transparency in machine learning. Now there's the FAT* conference series, which is more general than just machine learning, and the broader community has been doing a ton of awesome work on those sorts of issues. In addition to the transparency thing that Tim mentioned, I would also mention robustness. That's a huge concern. If you look at the offense and defense in competitions on adversarial examples, the offense generally wins. We don't really know how to make neural nets robust against deliberate, or even unintentional, things that could mess them up. They do really well according to one single number of human versus AI performance. But then if something is slightly outside the distribution, they might fail, or if someone's deliberately tampering with them. That's a huge problem for actually applying these systems in the real world, and we'll continue to see progress on that, but we'll also see setbacks where people say, "Well, this proposal you had for defending neural nets actually doesn't work." Then there are all sorts of other things besides just adversarial examples. There's a recent paper called BadNets that talked about backdoors in neural networks. Essentially, someone can put a trained neural network on GitHub, or wherever, and it seems to work fine, but then you show it some special image and it goes wrong. There are issues around that. In terms of positive applications, one area that is super exciting,
Miles Brundage [13:59] – and there’s so much work on it that I’ve had to sort of take a step back, and not even try to tweet all the interesting stuff that I see on it, is health. There’s pretty much every day on arXiv, there’s a new paper that’s super human performance on this dermatology task, or this esophageal cancer task. There’s a ton of activity in that space–
Craig Cannon [14:23] – Is that specific to for instance, image recognition? CT scan type stuff.
Miles Brundage [14:27] – There’s a lot of image recognition. That’s kind of the low-hanging fruit because there’s all this progress in image recognition, and things like adversarial examples aren’t necessarily a problem in that domain. You’re hoping that a patient isn’t fiddling with their image or putting a little turtle on their chest when they’re getting scanned and then it gives the wrong answer. There’s tons of applications there, but there’s also just more general machine learning stuff, like predicting people relapsing and having to come back to the hospital. When’s the optimal time to send people home? Given this huge data set of people’s medical histories, what’s the best diagnosis? There’s a lot of other applications.
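For context on the adversarial examples that come up a few times above, here is a minimal sketch of one classic attack, the fast gradient sign method (FGSM), written in PyTorch. It is purely illustrative and not anything from the episode; the `classifier`, `x`, and `y` names in the usage comment are hypothetical placeholders.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.03):
    """Nudge each pixel in the direction that increases the model's loss
    (fast gradient sign method), producing an adversarial example."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step in the sign of the gradient, then keep pixels in a valid range.
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0, 1).detach()

# Hypothetical usage, assuming `classifier`, inputs `x`, and labels `y` exist:
# adv_x = fgsm_attack(classifier, x, y)
# print(classifier(x).argmax(1), classifier(adv_x).argmax(1))  # labels may now differ
```

The perturbation is tiny per pixel, which is part of why defenses are hard and "the offense generally wins" in the competitions Miles mentions.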
Tim Hwang [15:08] – There was a workshop at NIPS a few years back. Was it two years ago? AI in the Wild was basically the name of it. That's a really good way of framing up a lot of the issues that we're seeing right now. We're moving out of the lab in some sense, where the old task used to be just, "Could we optimize this algorithm to do this thing better?" Now there's a bunch of research trying to figure out what we do when we confront the practical problems of deploying these things in the world. That links to a lot of the interpretability stuff. It links to a lot of the safety stuff. It links to these questions that are specific to health. All of these come out of the fact that the technology's really finally becoming practical, and so you have to solve some of these really practical questions.
Craig Cannon [15:47] – As far as deploying this stuff in the wild in the health use case, who is using it right now? Where are we seeing it?
Miles Brundage [15:54] – A lot of it’s pilot stuff. There’d be a hospital here, a medical center there. I am not sure of any super widely deployed ones, except for apps for very specific things like looking at skin lesions, and stuff. As I said, it’s something that’s so active that I’m not the best person to ask because it’s just I haven’t even tried to assess what’s the hottest thing in this area. Basically, it’s just every day there’s a new pilot on this. But a lot of it, as Tim said, it’s like at the stage where it might get rolled out, but it hasn’t yet been rolled out. There are pilots on the one hand, but then there’s also a lot of stuff that’s just training on offline data. They’re like, well if we had implemented this, it would’ve been good, but there are issues around interpretability, and fairness, and stuff like that, that would have to be resolved before it was actually widely deployed.
Tim Hwang [16:46] – One of the interpretability debates that I'm loving right now is, so Zach Lipton, this machine learning researcher, did this great paper called The Doctor Just Won't Accept That, right? It's basically a reference to that trope in a lot of the discussions where it's like, "Well, the doctor won't accept that it's not interpretable." What do you mean it's not interpretable? He's challenging what is a really big question, which is, will they care in the end? Will interpretability actually matter in the end? Is the field actually, in some ways, over-indexing on that, or at the very least, not thinking as nuanced as it should be about what kinds of interpretability are actually needed or expected in the space? That's one big question: will these things become the norm for the technology, or will the market kind of adopt it even without those things? People are worried about the safety of these technologies. That ends up being a question not just of can we develop the methods, but can they be something that's just expected that you use when you deploy the technology? Because it's possible that if you just sort of leave it to the market, we'll just kind of rush ahead
Tim Hwang [17:48] – without actually working on these problems.
Craig Cannon [17:50] – Think about anything. Do you know how to build a microphone? Yet you're totally fine using it. There are all of these things, and you probably see it with anti-vaxxers. They're like, no. They're maybe the old school, home grown version, where they don't want to accept it, but the rest of the world seems totally fine with it.
Miles Brundage [18:06] – Yeah, and just another point. There are likely to be differences cross-nationally, in terms of who's going to be willing to accept what, because countries in the European Union might be much more concerned, and at the EU level, there might be a lot more regulation of these sorts of things. There's this whole discussion around the right to an explanation and the general data protection regime. In China, I haven't seen as much concern about interpretability, though there are some good papers coming out of China. In terms of governance, I haven't gotten a sense that they're going to hold back the deployment of these technologies for those reasons. Then in the US, maybe it's somewhere between the two.
Tim Hwang [18:46] – I mean, it’s a real battle of like, I was reflecting on this ’cause I saw a debate on interpretability recently where some researchers were like, no one cares. Let’s just roll ahead with this stuff.
Craig Cannon [18:55] – Just to pause you really quickly.
Tim Hwang [18:56] – Yeah, sure.
Craig Cannon [18:57] – Let’s define that just in case someone who’s listening who’s not an AI nerd–
Tim Hwang [18:59] – Yeah, sure. The most colloquial way of talking about it is interpretability is kind of the study of the methods that let you understand why a machine learning system makes the decisions that it does.
Craig Cannon [19:11] – Another way is kind of like an audit to understand how you got this output.
Tim Hwang [19:14] – That’s right, exactly. There’s two sets of problems there. One of them is, can you actually extract a meaningful explanation to technicians? Then there’s the other question of just from a user point of view. Just a doctor or someone who’s not a domain expert on machine learning, being able to understand what’s going on.
Craig Cannon [19:29] – Right, okay.
Tim Hwang [19:30] – Right, and the debate I think, focused on just, does it matter? Right, because I think there’s some machine learning folks who are like, look, if it works, it works. That’s ultimately going to be the way we’re going to move ahead on this stuff. And some people say no, we actually want to have some level of explanation. And I actually got the feeling that in some ways, this is sort of like machine learning fighting with the rest of the computer science fields, right? Because when you’re learning CS, it’s very much about can you figure out every step of the process, right?
Tim Hwang [19:58] – Whereas machine learning has always been empirical in some sense, right? In the sense that like, we just let what the data tells us train the system, right? Those are actually two ways of knowing the world that are actually debating on this question of interpretability at the moment.
Craig Cannon [20:12] – It’s sort of like statistical significance in bio. Where you’re just like, I don’t know. It worked five out of 500 times, therefore it works. This is fine. It’s not a computer. What are people pushing for? For instance, we’re in the UK now. In the US, how are the conversations different?
Tim Hwang [20:31] – There are certainly very different regimes around what is expected from explanation. This actually stems from some really interesting things about how the US thinks about privacy, and how Europe thinks about privacy. I would say in general, the US moves on a very case-by-case basis. The regulatory mode is basically going to say, look, medical seems to be a situation where there are particularly high risks, and we want to create a bunch of regimes that are specific to medical. Whereas in Europe, there are broader regimes, where the frame is, for example, automated decision making.
Craig Cannon [21:08] – Okay.
Tim Hwang [21:09] – Right? The GDPR applies to automated decision making systems, which is very broad, and the actual interpretation will narrow that considerably, but you start from a big kind of category and you narrow it down, versus the approach I think we're taking, which is much more just starting from the domain that we think is significant. It's more patchworky, I guess, in that sense.
Craig Cannon [21:29] – You would agree?
Miles Brundage [21:30] – Yup, I agree.
Craig Cannon [21:31] – Yeah, fantastic. Okay, cool. I am curious about your PhD. What are you working on? You’re almost done…
Miles Brundage [21:39] – I'm studying science policy, and the work of my dissertation is on what sorts of methods are useful for AI policy. The problem that I pose is that there's so much uncertainty. There's uncertainty, as we were just talking about, about where AI will be applied. But then there's also deep expert disagreement about how long it will take to get to certain capabilities like human-level AI, or even whether that's well-defined, let alone what happens after. I'm taking more of a scenario planning approach. Let's think about multiple possible scenarios, and I've done some workshops. I'm trying to understand, is that a useful tool, and also, can we do models that express this uncertainty in some sort of formal way?
Tim Hwang [22:24] – There’s a lot of history you’ve looked into there too, right?
Miles Brundage [22:26] – Yeah.
Tim Hwang [22:26] – Yeah.
Miles Brundage [22:27] – People have been talking about AI, AI ethics, and AI governance for a long time, but there hasn't been much dialogue between this world and the other worlds of science policy and public policy. One way to think about it is that AI is sort of less mature in terms of its methodological rigor. The best we've come up with is, let's do a survey of some experts, whereas if you look at something like climate change, they not only do surveys of experts, but also synthesize that expertise into an IPCC report that's supposed to be super authoritative, and has error bars for everything, and levels of confidence in different statements. They have this whole process. They have models of different possible futures given different assumptions. Everything's much better spelled out in terms of the links between assumptions, and policies, and scenarios. I'm trying to take one small step in that direction of more rigor and more clarity about what the actual disagreements are.
Craig Cannon [23:30] – Are you familiar with the history of policy? I was driving over here with my girlfriend, and she asked me, "Has this policy ecosystem around AI always existed around CS?" For instance, when writing started, were people out questioning the policy of what it meant? Is this a new phenomenon, given that you can establish, for lack of a better word, a personal brand, and disseminate it out to the world? Or have there always been policy advisors, in as great numbers as you guys, working directly with governments, and companies, and stuff like that?
Miles Brundage [24:09] – I don’t know about writing. But definitely, or at least there’s no record–
Craig Cannon [24:13] – I heard it as a joke on Joe Rogan, actually.
Miles Brundage [24:17] – But certainly with things like nuclear weapons, and nuclear energy, and solar energy, and coal, and cars, there were people debating the social implications, and there were calls for regulation, and there were conflicts between the incumbent interests and the startup innovators. Those sorts of issues are not new. What's newer is, as you said, the ability to spread views more quickly, and to have sort of global conversations about these things.
Tim Hwang [24:47] – It's linked to the notion of having specialists develop policy at all. That's kind of the history of this, right? Which is, when do certain situations become considered so complex as to require someone to be able to say, "Okay, I can become an expert on it, and be the person who's consulted on this topic." It's a little bit about what the supply of policy is, and then also what the demand for policy is, right? In the nuclear war case, governments have a lot of interest in trying to figure out how we avoid chucking nuclear bombs at one another, right?
Craig Cannon [25:20] – I think so, yeah.
Tim Hwang [25:21] – Suddenly, there’s a really strong demand. There’s also funding.
Craig Cannon [25:24] – Right.
Tim Hwang [25:25] – There are all these reasons for policy people to enter the space. I think AI is sort of interesting in that it kind of floats in this median zone right now, right? Where you see this happen a lot: people are like, "AI, it seems like a really big deal," but then they get into the room and they're like, "So what are we doing here exactly? What is policy and AI?" I think that is part of the challenge right now, trying to figure out what are the things that are really valuable to work on if you think this is going to continue to become a big issue. Because right now, the technology's nascent enough that we can argue about the relative impact of it at all.
Craig Cannon [25:58] – Right.
Tim Hwang [25:59] – And then we can argue about like, does it make sense to actually have kind of like, policy people working on–
Craig Cannon [26:03] – Well, that's the thing. Obviously, there are a lot of machine learning papers coming out all the time, but you're very much at the forefront. Oftentimes, I feel like you're ahead of the curve a little bit, anticipating the needs and demands of a company or of a government. And so, planning ahead for the future, are you just waiting for data to come? Are you getting inside companies to see what they're working on? Are you learning about the hardware? How are you spending your time to figure out what's coming next?
Miles Brundage [26:32] – A lot of it's just talking to people. Talking to people working on hardware, in industry and academia, about what they're working on. And I find it personally helpful to have some sort of predictions, or an explicit model of the future. I've written some blog posts about this, my forecasts for the short term. In 2017, I made a bunch of predictions. I found that to be a super useful exercise because then I could say, okay, what was I wrong about, and were there systematic ways in which I can be better about anticipating the future next time?
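One simple way to do the review exercise Miles describes, scoring explicit probabilistic predictions after the fact, is a Brier score. A toy sketch, with made-up forecasts rather than Miles's actual ones:

```python
# Toy calibration check for probabilistic forecasts (made-up numbers, not Miles's).
# Each entry: (stated probability the event would happen, whether it happened).
forecasts = [(0.9, True), (0.7, True), (0.5, False), (0.8, False), (0.3, False)]

# Brier score: mean squared error between probability and outcome.
# Lower is better; always guessing 0.5 scores 0.25, a perfect forecaster scores 0.0.
brier = sum((p - float(outcome)) ** 2 for p, outcome in forecasts) / len(forecasts)
print(round(brier, 3))
```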
Tim Hwang [27:06] – We had asked an interesting question about what policy expertise is. Because it's different in different situations.
Craig Cannon [27:12] – Yeah.
Tim Hwang [27:13] – Imagine the nuclear case. Actually, the nuclear case is pretty interesting, right? Because early on, the experts from a policy perspective were also the physicists, right? You could imagine that existing in a field, or in a technical field, where society is like, "Okay, what do we do with this technology?" and the response is, "Well, the scientists working on it will tell you about that." But AI is sort of interesting in that there has been the development of a community of people that I think is fairly nascent, which suggests to me at least two options. One of them is that the technical field could be doing more policy stuff, but isn't right now.
Craig Cannon [27:52] – Okay, so it’s an arbitrage?
Tim Hwang [27:54] – That’s maybe one way of thinking about it. There’s also this other question of just like, what are other things that might help to inform the technical research?
Craig Cannon [28:02] – Okay.
Tim Hwang [28:03] – I think a lot of my policy work really is translation work, right? Where you talk to policy people who are like, "Well, I understand liability." And I'm like, "Well, it's mixed up by AI for A, B, C reasons," right? It's bringing the technical research to an existing policy discussion. There's also the reverse that happens, right? Which is basically researchers being like, "What is this fairness thing?" You're like, well, it turns out that you can't just create a score for fairness. There are these really interesting things that people have written about how you think about translating that into the machine learning space, as well. Which is the kind of work you can see FAT ML doing. I think that translation role is by no means certain, but in the AI space, it seems to have been a useful role for people to play. Again, thinking about what is supply, like policy supply and policy demand.
Craig Cannon [28:45] – Yeah, absolutely.
Miles Brundage [28:46] – Collaboration is super important between people interested in the societal questions and the technical questions. It's rare, not just in AI but in other cases, to have the answer readily available. With the IPCC for climate change, they have to go back to the lab sometimes and do new studies because they're trying to answer policy-relevant questions. AI might be the sort of case where there's this feedback loop between people saying, "Okay, here are the questions that AI people need to answer. Here are the assumptions we need to flesh out, in terms of how quickly will we have this capability," and so forth, that you can't just find existing on arXiv. The answers aren't just lying out there ready to be taken by policy people. There needs to be this sort of collaboration.
Tim Hwang [29:28] – Yeah, I’d love to actually look into the history of how this evolved in the climate science space, right? Because you can imagine a situation where like, you hear this from some machine learning people sometimes, where they’re just like, I just program the algorithms, man. Other people have to deal with, I don’t know, the implications of that, right?
Craig Cannon [29:42] – Yeah.
Tim Hwang [29:43] – Presumably, you could actually have that in the climate space as well, where researchers could be like, "All I do is really measure the climate, man. You decide if you want to change emissions. That's not my deal." But clearly, that field has taken the choice to basically say, in addition to our research work, we have this other obligation, which is to engage in this policy debate.
Craig Cannon [29:59] – Right.
Tim Hwang [30:00] – That is really interesting: what does the field actually think its responsibilities even are? And then, how do other kinds of skills or talents arrange themselves around that?
Craig Cannon [30:10] – Then the question ends up being like Tim, you were at Google before. Now we’re at the Future of Humanity Institute, and how do you guys deal with policy both within an institute, and within a company? What are the differences, and how do those relationships work?
Tim Hwang [30:23] – Yeah, definitely. I’ve got kind of a weird set of experience I think, just because I was doing public policy for Google, so that was very much on the company side of things. And then now, I’m doing a little bit of work with Harvard and MIT on this Ethics and Governance of AI initiative, and doing work with the Oxford Internet Institute, as well. It is interesting, the degree to which you actually find that people in both spaces are often concerned about the same things. The constraints that they operate under are very different, right? Both sides I think, like I talk to a bunch of researchers within Google who are very concerned about fairness. I talk to researchers outside of Google who are in civil society who are very concerned about fairness.
Craig Cannon [31:05] – Have you found the same to be true?
Miles Brundage [31:08] – There are people worried about the same issues in a bunch of different domains, but they differ in terms of how much time they're able to focus on them, and what sorts of concrete issues they have to answer. If you're in industry, you have to think about the actual applications that you're rolling out, or fairness as it relates to this product, assuming that you're working on the application side. There are also researchers who are interested in the more fundamental questions. But in terms of different institutions, if you're in government, you might have a broader mandate, but you don't have the time to drill down into every single issue. You need to rely, to some extent, on experts outside the government who are writing reports and things like that. Then if you're in academia, you might be able to take a super broad perspective, but you're not necessarily as close to the cutting edge research, and you have to rely on having connections with industry. For example, at the Future of Humanity Institute, we have a lot of relationships with organizations like DeepMind, and OpenAI, and others, but we don't have a ton of GPUs or TPUs here running the latest experiments, outside of some specific domains like safety. Having those different sectors in dialogue is super important in order to have a synthesis of: What are the actual pressing practical problems? What are the governance issues we need to address across this whole thing? And then, what are the issues
Miles Brundage [32:30] – we need people to drill down on and focus, and do sort of wide-ranging exploration of that are further down the road?
Craig Cannon [32:40] – What does the population look like here of researchers? I’m curious in the sense of who’s around influencing your ideas? What are their backgrounds? What are they working on?
Miles Brundage [32:49] – At the Future of Humanity Institute, it's a mix of people. There are some philosophers, an ethicist, some political scientists, some mathematicians. Not everyone's working on AI; AI and biotechnology are the two technical areas of focus, but there are also more general issues related to the future of humanity, as the name suggests. It's pretty interdisciplinary. People aren't necessarily working just in the domain that they're coming from. The mathematicians aren't necessarily trying to prove math theorems, but rather are bringing that mindset of rigor to their work, and trying to break down the concepts that we're thinking about.
Tim Hwang [33:34] – I’m curious about this too, because I’ve never really understood this about FHI. Is sort of the argument that thinking about existential risk, there’s practices that apply across all these different domains, or do they kind of operate as sort of separate research–
Craig Cannon [33:46] – We should pause there too.
Tim Hwang [33:47] – Oh sorry.
Craig Cannon [33:48] – Is the existential risk at the crux of the FHI being founded?
Miles Brundage [33:53] – Yeah, so it’s a major motivation for a lot of our work.
Craig Cannon [33:56] – Okay.
Miles Brundage [33:57] – The book Superintelligence, by our founder, Nick Bostrom, talked a lot about existential risks associated with AI. But it's not the entirety of our focus. We're also interested in long-term issues that aren't necessarily existential, and in making sure that we get to the upsides. I'm ultimately pretty optimistic about the positive applications of AI. We do a range of issues, but to Tim's question, there are a lot of people who come at this from a very conceptual, utility-maximizing, philosophical perspective of: whoa, if we were to lose all the possible value in the future because humanity just stopped, that would be one of the worst things that could possibly happen. And so reducing the probability of existential risk is super important, even if AI is decades or centuries away, and even if we can only decrease the probability of that happening by 0.1%, or whatever. In expectation, that's a huge amount of value that you're protecting.
Craig Cannon [34:55] – Before we wrap things up, I’m curious about your broad thoughts. What should we be concerned about in the short-term around AI, and in the long-term, and then how do the two mix together?
Tim Hwang [35:05] – Yeah, definitely. This is one of the really interesting things: at least within the community of policy people and researchers, there has been this kind of beef, if you will. Maybe beef is a little dramatic, but a small beef between what we might call the long-term, like you're talking about, which is people who are concerned about AGI, and existential risk, and all of these sorts of things, and then the short-term, people saying, "Well, why do we focus on that when there are all these problems with how these systems are being implemented right now?" That is one of the kind of enduring features of the landscape right now, but it's an interesting question as to whether or not that will be the case forever. I don't know. I know Miles, you've had some thoughts on that.
Miles Brundage [35:47] – There are common topical issues over different time frames. Both in the near term and in the long term, we would want to worry about systems being fair, and accountable, and transparent. Maybe the methods will be the same, or maybe they'll be different over those different time horizons. There are also going to be issues around security over different time horizons. There's probably more common cause between the people working on the immediate issues and the long-term issues than is often perceived by some people who see it as a big trade-off between who's going to get funding, or this is getting too much attention in the media. Actually, the goal of most of the people working in this area is to maximize the benefits of AI and minimize the risks. It might turn out that some of the same governance approaches are applicable. It might turn out that actually solving some of these nearer term issues will set a positive precedent for solving the longer-term ones, and start building up a community of practice, and links with policy makers, and expertise in governments. There's a lot of opportunity for fusion.
Tim Hwang [36:51] – Yeah, well I’m interested, Miles, you’re in kind of the safety community. Do you hear people talking about like, I mean I use the phrase FAT AGI, which I think is just fascinating as a term, just because it marries together these two concepts so well.
Miles Brundage [37:04] – Yeah.
Tim Hwang [37:05] – But I don’t know, is that being talked about at all?
Miles Brundage [37:07] – There's common cause in the sense that, so take a step back. One term that people often throw around in the AI safety world, particularly looking at long-term AI safety, is value alignment. How do you actually learn the values of humans and not go crazy, to put it colloquially?
Tim Hwang [37:29] – That’s a technical term–
Miles Brundage [37:31] – Yeah, go crazy, yeah.
Craig Cannon [37:32] – Computers go crazy all the time.
Miles Brundage [37:33] – Yeah. But I think you could frame a lot of current issues as value alignment problems. Things around bias and fairness. Ultimately, there's a question of, how do you extract human preferences, and how do you deal with the fact that humans might not have consistent preferences, and some of them are biased? Ultimately, those are issues that we'll have to deal with in the nearer term, and they might take a different form in the future, if AI systems are operating with a much larger action space. They're not just classifying data, but taking very long-term decisions, and thinking abstractly. Ultimately, the goal is the same. It's to get the right behavior out of these systems.
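As a toy illustration of "extracting human preferences," here is a minimal Bradley-Terry-style sketch that fits a scalar score to pairwise human judgments, including a deliberately inconsistent (cyclic) preference. It is a hedged sketch of the general idea, not the method of any particular paper or anything described in the episode.

```python
import numpy as np

# Toy data: (a, b) means a human judged outcome `a` better than outcome `b`.
# Note the cycle 0>1, 1>2, 2>0: human preferences need not be consistent.
comparisons = [(0, 1), (0, 1), (1, 2), (2, 0), (0, 2), (1, 2)]
scores = np.zeros(3)  # one scalar "reward" per outcome

# Fit scores so that sigmoid(scores[a] - scores[b]) matches the observed
# choices (a basic Bradley-Terry model, trained by gradient ascent).
for _ in range(2000):
    grad = np.zeros_like(scores)
    for a, b in comparisons:
        p = 1.0 / (1.0 + np.exp(-(scores[a] - scores[b])))  # P(a beats b)
        grad[a] += 1.0 - p
        grad[b] -= 1.0 - p
    scores += 0.05 * (grad - 0.1 * scores)  # small L2 term keeps scores bounded
print("learned scores:", np.round(scores, 2))
```

With contradictory data the learned scores end up muted, which is a small, concrete version of the "humans might not have consistent preferences" problem.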
Tim Hwang [38:17] – That was really interesting because the example that you just gave was saying a lot of the fairness problems that we’re dealing with right now are actually value alignment problems. Which is the problem there is basically, the system doesn’t behave in a way that’s consistent with human values.
Miles Brundage [38:31] – That’s a fairness case. That’s the F in the FAT acronym. To take accountability and transparency, there’s also a common cause. One of the issues I’ve been toying with recently is that transparency might be a way of avoiding certain international conflicts, or it might be part of the toolbox. Historically, in arms control agreements, like around nuclear weapons and chemical weapons, there have been things like on-site inspections, and satellite monitoring, and all these tools that are sort of bespoke for the purpose of the domain. But the general concept is we would be better off cooperating, and we will verify that that behavior is actually happening. So that if we detect defection by the Soviet Union, or the Soviet Union detects defection from us, then they can respond appropriately. We can build trust, but verify, in Reagan’s terminology. If you actually had the full development of the FAT methods, and you had accountability and transparency for even general AI systems, or super intelligent systems, then that would open up the door for a lot more collaboration. If you could sort of credibly commit to saying, okay, we are developing this general AI system, but these are its goals, or this is how it learns its goals, and we’re sort of putting these hard constraints on the system so that it’s not going to attack your country, or whatever.
Tim Hwang [39:56] – One of the things that’s so intriguing about it though, is the reason why FAT AGI for me is like, huh, wow. It’s kind of a crazy idea, is because I know typically in the literature around AGI, it’s very much the idea that it would be accountable, and that it could be transparent is usually considered impossible, right? Because AGI is so complex, and so powerful that nothing could– The move you’re making is to say, actually, we might be able to do it.
Miles Brundage [40:24] – Yeah, well there are differences of opinion on how sort of interactive the development of an AGI would be, and the extent to which humans will be in the loop over the long run.
Tim Hwang [40:34] – Right.
Miles Brundage [40:36] – I mean, Paul Christiano at OpenAI, for example, has a lot of really good blog posts, and some of these ideas are in the paper Concrete Problems in AI Safety, about the idea that what he and others have called corrigibility might actually be a stable basin of attraction. In the sense that if a system is designed in such a way that it's able to take critical feedback, and it's able to say, "Okay, yeah, what I was doing was wrong," that might stabilize in a way where it's continuously asking for human feedback. It's possible that accountability is an easier problem, even for very powerful systems, than we realize. Maybe Trump aside, there are powerful people in the world who actually seek out critical feedback, and are aware, and want to–
Tim Hwang [41:21] – That was very topical, yeah.
Miles Brundage [41:23] – Want to hear diverse inputs, and want to make sure that they’re doing the right thing.
Tim Hwang [41:27] – Right. This is actually really interesting because it’s both short-term and long-term again.
Miles Brundage [41:31] – Exactly.
Tim Hwang [41:31] – Which is if we could get the research community to have certain norms around ensuring that we are seeking to build corrigible systems– That might set the precedent that the AGI that eventually arrives will be one which is actually consistent with FAT. Versus not, right? We actually have control over the design of the eventual thing, right?
Craig Cannon [41:49] – I've always had such trouble understanding the people who think there are these AI engineers trying to take over the world with their AGI. It's like, "No, they're going to die too." The incentives are aligned. You just imagine this apocalyptic scenario. Do you have strong opinions on people working in public versus working in private? I know there's somewhat of a debate around development.
Miles Brundage [42:11] – So you mean working in the US Government, versus–
Craig Cannon [42:14] – No, sorry. Do you have an opinion on trying to build an AGI and holding some amount of your data, or training data– Publicly versus privately.
Miles Brundage [42:24] – Yeah, so that's a super interesting question. We sort of broached the topic in this report on the malicious use of AI, because there might be specific domains, maybe not in the world we're in today, but in a world in which there are millions of driverless cars, and they're all using the same convolutional neural net that is vulnerable to this new adversarial example that you just came up with, where you might want to give those companies a heads up before you just post it on arXiv, and then someone can cause tens of thousands of car crashes, or whatever. We might want to think about norms around openness in those specific domains, where the idea isn't to never publish, but to have some sort of process. As far as AI research in general goes, right now the community's pretty open, and it's both in the broad interest and in the individual interest of companies to be fairly open, because they want to recruit researchers, and researchers want to publish. I think there's a pretty strong norm around openness, but if we were in a world where there was more widely perceived great-power competition between countries, or where the safety issues were a lot more salient, or there were some catastrophic misuses of AI in the cyber arena, then I think people might think twice. It might be appropriate to think twice if your concern is that the first people to press the button, if they're not conscious of all the safety issues, could cause a huge problem.
Tim Hwang [43:52] – Yeah, I'm very pro open publishing. It should be the default. I'm still disputing situations where people say, "You shouldn't publish on this stuff." Just because it is actually to the benefit of everybody to know what the current state of the field is, because it allows us to make a realistic assessment. Regardless of whether or not you believe in AGI, or you believe in superintelligence, it's useful just to know what can be done. Because even if you're thinking about the more prosaic bad actor uses, it's useful to know what the risks are. We can't do that in an environment where lots of people are holding back. It's important to know the state of the field at any given time so we can actually make realistic public policy. Otherwise, we're really operating in the dark.
Craig Cannon [44:35] – Yeah, that’s a great point. Okay, so Miles, last year you wrote about predictions for 2017 or 2018?
Miles Brundage [44:41] – Yeah, yeah. I made the predictions early 2017, and then I reviewed them a month ago.
Craig Cannon [44:46] – Okay. This year, 2018. You can get a full year.
Miles Brundage [44:51] – I was not prepared for this. I was not prepared for this.
Craig Cannon [44:54] – You can have a three-year time frame then. Even more, if you want.
Miles Brundage [44:59] – Three years. Sure. Yeah, I think there'll be superhuman StarCraft, and probably Dota 2, in that time horizon. I think in early 2017 I gave a 50% chance of it by the end of 2018, so this gives me more runway. I'll say 70% confident that there'll be superhuman StarCraft. I'm actually less familiar with Dota 2, so I'll just say StarCraft.
Craig Cannon [45:27] – Alright, okay. Tim?
Tim Hwang [45:29] – Meta-learning will improve significantly. This is basically treating the design of machine learning architectures as if it were its own machine learning problem. It's something that's basically done by machine learning specialists right now. The question is, how far will machine learning researchers go in replacing themselves, essentially? That will get really good in ways that we don't expect.
Craig Cannon [45:50] – Your insight into why that will happen is what?
Tim Hwang [45:53] – It's some of the results that we're seeing from the research right now. It just seems like these networks are able to tune their parameters in a way that, at least, I wouldn't have expected, and so it's cool seeing that adapt and advance.
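As a very rough sketch of the framing Tim describes, treating the choice of model configuration as its own outer search problem, here is a toy random-search loop in Python using scikit-learn. Real meta-learning and neural architecture search systems use learned controllers or gradient-based methods rather than plain random search, so treat this only as an illustration of the idea.

```python
import random
from sklearn.datasets import load_digits
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)

def sample_config():
    # The "outer" search space: architecture and training choices, sampled at random.
    return {
        "hidden_layer_sizes": random.choice([(32,), (64,), (64, 32), (128, 64)]),
        "alpha": 10 ** random.uniform(-5, -2),
        "learning_rate_init": 10 ** random.uniform(-4, -2),
    }

best_score, best_cfg = -1.0, None
for _ in range(10):  # the outer loop searches over configs by trial and error
    cfg = sample_config()
    score = cross_val_score(MLPClassifier(max_iter=300, **cfg), X, y, cv=3).mean()
    if score > best_score:
        best_score, best_cfg = score, cfg
print(best_cfg, round(best_score, 3))
```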
Craig Cannon [46:08] – These are all positive things. Alright guys, well thanks for your time.
Miles Brundage [46:10] – Yeah.
Tim Hwang [46:11] – Thanks for having us.
Craig Cannon [46:12] – Alright, thanks for listening. As always, you can find the transcript and the video at blog.ycombinator.com. If you have a second, it would be awesome to give us a rating and review wherever you find your podcast. See you next time.