Miles Brundage researches the societal impacts of artificial intelligence. His research focuses on developing methods for rigorous analysis of AI development scenarios and appropriate policy responses. In 2018, he joined OpenAI as a Research Scientist on the Policy team. Previously, he was a Research Fellow at the University of Oxford's Future of Humanity Institute and served as a member of Axon's AI and Policing Technology Ethics Board.
Lukas: So let's look at your background. You had a sort of an interesting path into AI policy, right? Originally working on energy and climate issues. Is that right?
Miles: Yeah, exactly. So in undergrad in D.C., and then for a little while after I graduated, I was working at the Department of Energy and at think tanks in the broad area of energy and climate policy. I ended up wanting to go to grad school because I wanted to do more research and less administrative work, which was boss-support type stuff; my role was special assistant. I learned a ton from being in government and working on something that was super hot at the time; energy and climate policy was a big, big deal. People were talking about it way more than AI or anything like that back in the day, and it was super enlightening. But I also eventually concluded that while energy was important, there wasn't as much work to be done in terms of research to move things forward, and AI was this fresher, greener field in terms of research. It wasn't just a political problem of getting the right thing to happen, which I think is arguably the case with energy and climate change now, where we have some good understanding of the policy issues - not everything - but we have a decent understanding of what is to be done, and it's largely a matter of political will. Whereas in AI, it's not even clear what we want and what should be done, let alone how to get the political will. So that felt like a more exciting opportunity for impact on the research side.
Lukas: OK. So having thought deeply about both, which do you think is more of a threat to the future of humanity, climate change or AI? Which worries you more?
Miles: Definitely in the near term, I don't think AI is going to end the human race or anything like that. It's still relatively early stage, but we do need to think about the long-term risks of technologies as they're developed, whereas energy is something that's a clear and present danger right now; and I'm hopeful that AI will actually help with addressing it. But, yeah, I think energy is a known long-term issue where we know there's a bunch of things that need to change to get on a good trajectory. In the case of AI, we know that we're not in imminent danger of all jobs being automated or killer AI taking over the world. But we see these trends, and they're more uncertain, especially in how quickly they develop. So despite all the uncertainty we have in climate change and energy technology innovation, where we don't really know what's going to happen in a few decades, we actually have more reasonable error bars there than we do for AI, where experts are all over the place - some expect things to be solved in the near future, others expect them to take centuries. I think it's actually harder to be very confident about AI than about energy and climate.
Lukas: When you say long-term, near-term, medium-term, do you feel like there is a serious possible issue with AI in the next, say, 50 years?
Miles: It's really hard to say. I think we should try not to be too confident in where the trends are going to go. For example, just in the past few years, there's been a lot of technological progress that people have found surprising. Very few people were expecting as much progress in things like machine translation and natural language generation as has ended up happening, and similarly, things like image recognition have gone from, "OK, it's in the human performance ballpark" to, "OK, now it's superhuman" on some metrics. It's not superhuman across the board, but on well-defined tasks, we've made a ton of progress. Will that lead at some point to systems that are harder to control, or that might cause unintended side effects in the increasingly large contexts where they're deployed? I think it's hard to say. I think we should think of this as something where the stakes are going to rise, both as the capabilities increase and as the use cases expand. So even if AI doesn't progress much in terms of capabilities, it seems like there's economic pressure and other sorts of reasons why it's going to be deployed in more contexts, and as those contexts get more consequential, the stakes are going to rise. We're already seeing AI applied to things like predictive policing and face recognition, so things that are highly sensitive. Also, things like search engines and recommendation systems that are materially affecting what information people get and what products and services they get - that's definitely the sort of thing where the stakes are high and getting higher. But it's hard to say what the endpoint of that evolution will be. And I think the key is to think ahead and be at least a few years ahead in terms of what sorts of problems we are anticipating, so that we're not caught off guard.
So for things like misuse of AI systems, I think we should not wait until the risks have materialized but instead think about what we can do at the design phase to make these problems less severe. Can we develop detection systems? Those sorts of things. I don't think there's any inevitable risk. I think it's more like: how prepared are we relative to the capabilities at the point they arise?
Lukas: I feel like for a lot of these things that I hear about AI, you could almost substitute the word "technology" for "AI" and kind of come to the same conclusions. Do you feel like there's something special about AI that requires more vigilance or different kinds of ethical concerns?
Miles: Yeah, I think it starts to get a bit risky when there's more uncertainty and more variation in the kinds of actions that systems take, as opposed to deterministic code that does the same thing over and over - when you're dealing with uncertain inputs and, you know, essentially shifts in the distribution of inputs. I think the risks grow when you're moving from software to AI, but definitely there are all sorts of systems in society that are very complex already, even before AI. An aircraft, for example, already runs on a huge number of lines of code; these are super complex systems. So it's not like AI is suddenly going to lead to this new complexity, but rather there are elements of AI, such as the processing of a wide range of inputs, and potentially making decisions that previous software systems were not entrusted to make because the capability wasn't there. Things like robots, for example - there are applications of AI that were just not possible without that technology. So I wouldn't draw a clean line between them. I see AI as an extension of information technology and software, and going back even further, things like electricity. And it's unclear whether that cognitive element - the sense in which AI is about "thinking" as opposed to just following rules blindly - is actually a significant element that introduces a lot of risk. I don't know whether we should be AI exceptionalists, treating AI like an exception, or just treat it as part of the spectrum. I think there are pros and cons of both, because you don't want to ignore the growing capabilities in cases where AI can do things previous technologies couldn't do. I think we're seeing lots of cases of products that would never have existed prior to the current wave of AI. But we also don't want to be hysterical and overemphasize the novelty when there are lots of technologies that have big impacts on people.
Lukas: So do you have an agenda of some types of change that you want to enact in the world at this point, or are you still trying to figure out what you're trying to get folks to do?
Miles: I'm still trying to figure out the overall agenda, but one of the key components is cooperation between AI developers, which I think is super essential for figuring out what the right practices should be and for holding one another accountable. So, for example, I did some work with Amanda Askell and Gillian Hadfield on a paper last year that talks about the need for cooperation in order to avoid a race to the bottom between AI developers. So insofar as I have an agenda on that issue, it's avoiding that race-to-the-bottom scenario.
Lukas: What would a race to the bottom mean in this context?
Miles: So driverless cars are perhaps the most clear-cut case, where there's an incentive to be safe, but there's also an incentive to get products to market. And there are cases where systems are deployed in a way that's premature in some respect - not taking into account that there can be a jaywalker, for example. I think that's a case where there was a design element of the system that was not thought through, and if you have people who are overlooking these things, cutting corners in a rush to get products to market, I think that could cause harm for the whole ecosystem, not just individual crashes. There are cases where people following their own individual self-interest to get things out there faster can lead them to cause harm and ultimately make the whole sector worse off. So that's why we need regulations, as well as informal norms in the AI community, that put things like safety and security and privacy first, so that there are some guardrails. There's competition, but within certain guardrails. Unbridled competition, I think, could lead to a lowering of standards in an effort to one-up one another.
Lukas: You mean unbridled competition in terms of researchers? Is that what you mean in this context or ...?
Miles: So I specifically mean in terms of products going to market. So if there were no standards for, say, driverless car safety - fortunately, I don't think there are any countries with no standards at all, since policymakers are aware of the need to impose some guardrails. But, you know, if there's a world in which the standards are insufficient and there's insufficient vigilance about how safe the systems are, then there's not going to be sufficient pressure to make sure that the sector as a whole moves in the right direction long term. So there's a big gap between individual interest and collective interest, as well as between the short term and the long term, and that's where I think things could go awry. And that's why things like ethical principles, which keep you thinking about the long term and clarify what's expected of different actors, and also regulations, which impose some floor on what the level of safety or security should be for a given system, are necessary in order to prevent it from just being the wild west. So when I say unbridled competition, I mean on the actual deployment side: there needs to be some process for making sure of how safe systems are.
Lukas: So comparing autonomous vehicles to the rest of the car industry - the car industry has all kinds of safety regulations that make sense. Do you feel that AI needs some different kind of cooperation than other industries?
Miles: I'm not sure it needs anything different. I think it just needs to catch up to where other sectors are. You know, there's a whole history of nuclear power plants responsibly disclosing incidents so that others can learn from them. And similarly, there's a whole history of airplane crash investigations and regulations and so forth. So it's not that driverless cars, or AI generally, is this new scary thing. It's more that we need to apply the same approaches we have used for other technologies. So going back to your question earlier about AI versus other technologies, I think this is a case where I see them as very similar. It's not that AI is uniquely dangerous in this respect or uniquely likely to lead to harmful, unbridled competition. It's more that we have already gone through these lessons with other technologies. In the case of cars, it took decades of gradually ratcheting up standards around fuel efficiency, seatbelts, airbags and so forth. And we're in the early stages of that process for AI.
Lukas: Interesting. Is there an industry that you think is doing particularly well on this or particularly badly?
Miles: Good question. I think driverless cars are not, to be clear, an unambiguously bad case. I think there's a ton of great work on figuring out how to make driverless cars safe, and there is some informal cooperation going on among those developers, though it's kind of tricky due to things like antitrust laws that prevent too much coordination, so it's not clear exactly how much is happening and how much can happen. For other technologies, I think there's actually been a ton of really good work on things like image recognition and trying to characterize robustness in that context. So, you know, with regard to things like interpretability, we're much further along in terms of interpreting image classifiers than we are with language systems, for example. Classifiers generally, I'd say, are at the more mature end of the spectrum, in that we have clear metrics for performance and there are a lot of open source models to compare things against. And there's a lot of documentation of things like biases and potential issues with robustness. That's not to say that all classifiers are good or anything like that. But I'd say that if you look across AI, there are areas that are more and less mature in the rigor that people are applying.
Lukas: I guess it seems to me that more of the AI industry comes out of a research background, so maybe there is a little bit more of a culture of cooperation. It does seem notable that so many models that people use in companies come out of open source projects or research papers. So it does seem like there's a fair amount of cooperation, at least on the technology side - maybe more than you'd actually expect.
Miles: There is. Yeah. And there's an interesting question of how long that will last and what the underlying drivers of that openness are. One argument for why it might be more open than you might otherwise expect is just that individual AI researchers want to publish stuff, and that puts pressure on the companies that hire those people to allow them to publish. I think that's a very strong incentive. And the fact that AI researchers are a scarce resource in the talent market gives them some influence over what the policies are at these companies. Another argument for why it might be so open is that it can benefit companies to be open, by getting people to use their frameworks and by making it easier to hire people in the future. But I wouldn't want us to rest on our laurels here. And that's why, for example, in a report that my colleagues and I put out recently, we talked about the need for more generously supporting academics with computing resources, because we don't necessarily want to just assume that those in industry will continue to release things, and that forever there will be that pressure from below to get these releases. There could also be cases where suddenly there's huge commercial potential identified and a company sort of closes up. Or, at an international level, there's pressure due to competition with China to clamp down on publishing. There are all sorts of things that could happen longer term. So I would say I'm glad that there is so much openness and collaboration today, but given how much has changed in the last five years, I think we need to start thinking about what policies can keep that openness going.
Lukas: You were actually talking about, I think, a paper you just recently published that talks about AI development. I had a couple of questions about that. You were actually the first author of that paper, so I can imagine there was a fair amount of heavy lifting?
Miles: Yeah, I was one of the first five authors. So it was super collaborative.
Lukas: Got it. Cool. One thing I thought was that it was a provocative title - Mechanisms for Supporting Verifiable Claims. I thought it was interesting, and I think I agree with the importance of it, but maybe you could share the thinking behind it.
Miles: Yeah. So the basic idea in that report is that AI developers are making all sorts of systems, and they want to sell them or they want people to use them. And they make various claims about these systems, like "we did a bias analysis" or "they're robust in X, Y and Z ways." And there are some claims like that that are relatively easy to verify. If it's an open source system, you might be able, if you have some technical expertise, to replicate the results, reproduce a certain error rate, and verify that the AI developers are telling the truth. But there are other cases where it's not as obvious how to do that. For example, if it's a very computing-intensive system, it might be harder for academics to scrutinize it. Or claims might be made about privacy preservation, but using some new fancy approach to privacy-preserving machine learning for which there isn't one standard way of evaluating it, or a new library that hasn't been subjected to a lot of scrutiny for bugs. Those are cases where it's harder to verify the claims made by AI developers. So what we did in this report was try to take a very broad, 30,000-foot view of, "OK, what is the problem here?" And we broke it down into institutions, software and hardware as the three elements of an AI project that can contribute to allowing one's claims to be verified, and in each of those cases we zoomed into the specific interventions that can be made.
In software, for example, we talk about privacy-preserving machine learning and the need for more standardization and open source libraries in that area, so as to reduce the skill requirements and increase the clarity of how different systems should be compared. Interpretability, audit trails for safety-critical systems, etcetera, are some of the other things on the software side, as well as things at the level of hardware, and these can be tackled incrementally, since none of this is going to be solved overnight. We don't want to overclaim what we're accomplishing here. But what we tried to do is survey the ways that people can show that they're actually being responsible, as opposed to just saying, "Hey, we're being responsible, we care about safety." How do you actually provide evidence of that? And are there barriers to providing that evidence that we think are important?
Lukas: I thought it was interesting that you recommended governments give compute credits to academic institutions. It struck me because I actually remember leaving academia to go into industry, and one of the real impetuses for me, I guess, was the fact that tech companies just had so much more data, which led to more interesting problems. And I do feel like when I talk to my friends at Facebook or Google, they feel more sophisticated in some specific ways, having dealt with such enormous datasets, in a way that I don't think typically gets published. And I feel like OpenAI is one of the few places that clearly does incredibly compute-intensive stuff, but I wonder if you actually deal with the same scale of datasets. I feel like there might be a case that a couple of big companies are getting a skillset that doesn't exist anywhere else and doesn't really get published. I'm not sure.
Miles: Yeah, I mean, there's definitely a sense in which some companies have this infrastructure in place for generating huge amounts of labelled and unlabelled data, and that puts them in a strong position to do work in that area. I think it's also possible to do cutting-edge work with open source data, through existing datasets or by scraping and building your own datasets. So I wouldn't draw a hard distinction, but, yeah, I think there are lots of ways in which industry provides opportunities that aren't necessarily available elsewhere. And that's part of what's driven academics into industry. And part of where we're coming from in this report is asking, "Is that a good thing?" "Are there ways that you can balance that?" It's slightly easier to balance out the compute side of the equation than the data side, in part because a lot of the data is private by nature, and it's really hard to get that out of these companies in an ethical way. But I think we should also be thinking about data as a differentiator. I would like to see governments, in addition to providing compute credits or other means of supporting academics, also building a data center or something like that. Generating labelled datasets is another thing that governments can do, because it's not clear that whatever is easiest to collect at Google or Facebook or Twitter is inherently the data that we need to solve all the problems in the world. In fact, those datasets by default have lots of biases. So I think one potentially exciting area is government support for datasets that could be used by large numbers of people and that are specifically designed to be less biased and able to help a wide range of actors. And I think the fact that you can cheaply copy data is a strong argument for this being something that governments should do.
It's like building a highway or something that benefits large numbers of people. And, yeah, you can exclude people from a highway and do tolls and so on, but generally it's public infrastructure. Similarly, producing datasets that can be widely used is another form of public infrastructure.
Lukas: That's kind of a cool idea. I mean, it does seem to me that datasets have pushed a lot of innovation in ML. I also remember when I was a grad student, there was this frustration that the tasks we worked on were based on the datasets that happened to be available. Although I feel like one of the issues we had was that when collectives of people came together to create datasets, it would create a huge amount of bureaucracy in the data collection process, because no one person really owned it. And then I think it would end up being a much bigger undertaking than it necessarily needed to be. So I could definitely see governments having trouble making decisions like, "OK, what are we actually going to collect here?"
Miles: Yeah, and it's not clear what's to be done. You can also imagine, in addition to compute credits, giving academics data labelling credits that just allow them to go to some third-party service and pay for some amount of labelled data. I think there's probably a role for that, in the same way that there's also a role for big public datasets that a bunch of people use for some general class of problems. So ideally you want to reduce the barriers to entry both on generating these big useful datasets and on making sure that people have access to more tailored data for their own needs.
Lukas: I'm curious - it's a little bit of a jump - but it seems OpenAI has actually been a real driver towards using more and more compute on these big models, and that kind of makes things hard to reproduce and potentially comes with some environmental impact. Do you have any thoughts on that?
Miles: It's a really good question, and I've liked a lot of the publications on this topic - like the Green AI paper from some folks at the Allen Institute, which was a good treatment of this, and lots of other people have been calling attention to it. I think in general, my view is that, all things being equal, we'd like to not use more compute than is needed for solving a given problem. But there are some ways in which it's not as urgent or as bad a problem as it might at first appear, such as the fact that the training step of a large model only needs to be done once, and the model can then be fine-tuned relatively cheaply or even used in a zero-shot fashion by millions or billions of people. So I think it would be strange to look only at the training side and not also the inference side. I mean, your question wasn't about one or the other specifically, but I would just flag the difference between the two. Say Google is using BERT for search result ranking, for example. Presumably almost all of the compute cost there is on the inference side rather than the training side; the training might have cost a few hundred thousand dollars or something, and the model is then used across billions of queries. That's not to say that it's not an issue in terms of environmental impacts, but you have to look at the whole product and think about inference, whereas I think a lot of the attention so far has been on the training side.
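The train-once, serve-many point can be made concrete with a rough back-of-the-envelope sketch. All the numbers below (the training cost, the per-query inference cost) are hypothetical placeholders, not figures from the conversation; the point is only the shape of the comparison, where the one-time training cost becomes a vanishing fraction of total compute as usage grows.

```python
# Back-of-the-envelope sketch: one-time training cost vs. cumulative
# inference cost. All dollar figures are made up for illustration.

TRAIN_COST_USD = 300_000           # hypothetical one-time training cost
INFERENCE_COST_PER_QUERY = 0.0001  # hypothetical compute cost per query

def total_cost(queries_served: int) -> float:
    """Total compute cost after serving a given number of queries."""
    return TRAIN_COST_USD + INFERENCE_COST_PER_QUERY * queries_served

def training_share(queries_served: int) -> float:
    """Fraction of total compute cost attributable to training."""
    return TRAIN_COST_USD / total_cost(queries_served)

# At small scale training dominates; at search-engine scale (billions
# of queries) inference dominates.
for queries in (10**6, 10**9, 10**12):
    print(f"{queries:>15,} queries: training is "
          f"{training_share(queries):.1%} of total compute")
```

With these assumed numbers, training is over 99% of the cost after a million queries but under 1% after a trillion, which is the asymmetry Miles is pointing at.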
Lukas: Yeah, I guess that's a good point. I wonder why that is. When I did my back-of-the-envelope calculations on the whole thing, it seemed to me that even if you took all the graphics cards that are made and ran them flat out, it wouldn't be near the environmental impact of regular data centers. But I guess the trend line is certainly scary, right? Because it's like this exponential growth in the volume of usage. And maybe it feels like there's a more natural barrier on the inference side because... I don't know, why does it feel like that? Maybe because companies are the ones doing it. I guess it seems that there are some limits on the inference, whereas the training seems to be skyrocketing. At least that's my impression.
Miles: Yeah, I think that's right, in that we haven't seen that many huge models being served in production, and generally there's a lot of optimization pressure to keep inference costs down. Whereas in training and research, it's a bit more anything-goes, trying out things to see what works. It'll be interesting to see how that evolves over time. Another thing I'd add is that I would try not to paint all of AI and ML with a broad brush, in that depending on the use case, you could actually be saving energy. The deep RL that DeepMind and Google used for controlling data center energy consumption, for example, is a case where you're actually able to reduce the net amount of energy used by applying some AI up front.
Lukas: One question I asked Jack, and I thought his answers were interesting and I'm pretty curious about yours, is the whole thing about not releasing the biggest GPT-2 model. Honestly, here's what I thought about it - and I didn't even tell Jack this; this is just my impression, and I don't view myself as the expert on this stuff. First, openness seems really, really important to me; it's a core value, and if people are going to do stuff and call themselves OpenAI, they really should be erring on the side of making their work public. But then I thought, well, it's kind of interesting that they've chosen to think about the impact of releasing this and take a controversial stance here. And also I thought, I wonder, maybe they're right. It certainly seemed to me at the time that a really powerful language model could be used in bad ways. So I didn't feel so sure of myself about what I thought. And then it seemed like it didn't really matter, because other models came out a month or two later, and it almost seemed like maybe the most surprising thing is that there weren't more applications of such impressive-looking technology. I don't know where I'm going with this, but I'm kind of curious, from the inside, how it felt for you and whether there were any lessons you learned from it.
Miles: Yeah. So from the inside, we also felt very unsure what to think, and we tried to, at each stage, say clearly, "Well, we don't know how dangerous this is, here's the information we have," and to shrink the error bars over time in terms of both beneficial and malicious uses. That's not to say that we eventually converged on an overall conclusion of, "OK, this is definitely good for society," but we started with a default of openness, and then these concerns arose - people building proofs of concept for generating fake reviews for Amazon, for example, and that seemed pretty scary. Writing fake news stories seemed pretty scary. And ultimately what we ended up doing was taking an incremental approach, releasing progressively larger versions of the model. Obviously, if we could go back in time, we would take all the insights that we have now and feed them into an earlier stage in the process. But what we ended up doing, I think, was a reasonable approach, in the sense that if there's a potentially irreversible decision, like releasing a model, it makes sense to be a little bit cautious before you do it if there are ways you can gather more information. I think you can get some information by doing things like human studies - we worked with outside researchers who had people rate outputs and statistically studied the differences across the model sizes, and that informed some of our decision making. But ultimately, it's really hard to answer questions about the economics of misuse and the motivations of bad actors. So I think it's an ongoing issue that you can't really fully resolve.
Lukas: Do you feel like you really got new information that informed decisions along the way? What kind of information did you collect and what different information could you have gotten that would have made you make a different choice?
Miles: As a concrete example, we were very unsure what the scaling relationship was between model size and output quality in this GPT-2 regime of 125 million to 1.5 billion parameters. We weren't sure what the relationship was between model size and convincingness, or the ability to be coherent and clear, and so forth. We had a rough sense that there was this smoothish relationship - as you grow the model size, it takes fewer and fewer tries to get a given result, so less cherry-picking is required for a given fixed level of performance. That seemed to be true, but we weren't sure, for a given level of cherry-picking, how big the quality differences were. And what we ultimately found was that there actually wasn't a huge difference between the two largest model sizes, and that was one of the factors that pushed us towards releasing the 1.5 billion parameter model. If there had been more of a gap between the two, it would have felt like there was more risk in doing that release. There were also other things happening, like other people releasing models, and then we were able to do some comparisons between them. So we were trying to absorb as much information, and generate as much information, as possible. But overall, that's probably one of the most clear-cut cases: the diminishing marginal risk as you increase model size was a reason why we felt that, for scientific reproducibility and other reasons, the benefits outweighed the costs. There wasn't much increase in risk, but we were seeing significantly improved performance on the standard NLP benchmarks. So it was a non-trivial increase in utility from a research perspective, and it would also allow people to start grappling with some of the issues involved in training these large models. And it didn't seem to pose a huge marginal risk.
Lukas: As someone who worked on natural language processing a while ago, in my view the GPT-2 results were incredibly impressive. And I thought at the time - this must have been about a year ago - that the applications would be enormous. But I think actually the applications have been subtle. I've noticed translation systems working a little better than they used to. There's that crazy adventure game that I've played, and, you know, it's kind of fun, and I've seen people suggesting plausible domain names for your business. On our website, we see a lot of models come through, so we do see people using the technology. But I don't think that my mother has noticed a difference in her world. Maybe that's not surprising in retrospect, but it's funny, because it seemed like this huge leap in my head. And I feel like the vision stuff we may be feeling a little more. Face recognition feels a little more ubiquitous to me - it's scary. At least I notice my camera somehow finds people's faces and things like that, which it definitely couldn't do a few years ago. What do you think about this?
Miles: I tend to think in terms of general purpose technologies that could be misused or could be used for beneficial things. So I'm basically saying the same thing that I was saying about the risks. We have some information about the fact that you can produce coherent text in some contexts, and that seems like it could be used for lots of commercial and creative applications, and also some malicious applications. But we might just not be at the level of performance, for either of those domains, where it is a straightforward replacement for what people were already doing. I think we'll get there eventually, and language models will continue to proliferate into a bunch of different applications, including on the edge, in the cloud, and in all sorts of contexts. But a few things need to happen. There needs to be more reliability and higher performance compared to humans on some tasks; it's just not going to make sense to replace a human, or augment a human, if the system hasn't yet reached that level of performance. And generally, I think we also need to figure out what the right workflows and ways of leveraging these systems look like. Because I don't think it's just "replace a human with a language model"; I think that's one of the more naive uses, and it depends on a very high level of performance and the right kind of setting, where you don't need online monitoring. But there are also other cases, like writing assistance, where the fact that it's not 100% reliable is not a deal breaker. If you're able to get humans in the loop to provide feedback on these systems and choose among a few initial outputs, for beneficial or malicious purposes, I think that could be a game changer. So I think we'll see further progress both on the technology and on people finding better ways to use it.
Lukas: Interesting. Does OpenAI continue to push things forward, or are you like, "OK, we made this model, we're good"? How do you guys think about that?
Miles: We're still continuing to push things forward, both in terms of trying to understand the models we've already built and in terms of improving performance.
Lukas: So this is the question that I've always had about OpenAI. If you don't mind, I'm kind of curious. OpenAI's mission is something along the lines of ethical AI, am I right? Remind me.
Miles: Yeah. The shorthand version is build artificial general intelligence and use it to help everyone.
Lukas: It seems like a funny mix of policy and building. Do you think it's important that those two things are together? How would you argue against it? Because I think that they should go together, but I'm trying to think from the other point of view. Do policy and building really need to be combined? Is it even combined in most cases? Because it feels like policymakers, in general, aren't always engineers, right? Why have a thing that combines both?
Miles: I think by default, a lot of people who are building powerful technologies are de facto policymakers, in that they're setting the defaults, shaping how people think of things, influencing what the early applications are, and so forth. So I think you can't totally separate them. I think anyone who's involved in technology should be thinking to some extent about its social impact. And, you know, there's also value to division of labor, and that's why not every single person in the organization is on the policy team; we have various different teams, so it makes sense to have some specialization. But the reason we think it's such a high priority is that we don't see the impact of AI as reducible exclusively to the design of the technology. It's also about what sorts of policies are there to constrain the use of it, and whether there are ways of distributing the economic upsides so that it's not only benefiting a small number of people. So we think it's not just a matter of building the technology, but also making sure that it happens at the right time, in the right environment, with the right infrastructure. That's why we invest a lot in advocating for policy changes that we think will enable a more cooperative, more responsible AI ecosystem.
Lukas: I'm sure you have a group of AI ethicists or policymakers that you hang out with, and I'm curious about those circles. The things I hear from people interested in AI don't seem very controversial to me: people want fairness, openness, and transparency, and these all seem like sensible, wonderful things. But where are the disagreements? Are there different factions, do you think, of people thinking deeply about this topic?
Miles: There definitely are. I'm not sure I'd say factions, because I try to build bridges and stuff like that, but there are definitely differences in emphasis. On one end of the spectrum (or one end of one of multiple spectra) are people who are focused on immediate ethical concerns around things like bias and surveillance. On the other end are people who think the thing you should focus on, and devote 100% of your attention to, is existential risk from AI systems that are too powerful to control; unless you've thought really hard about it in advance, you should by default expect things to go badly. Personally, I find myself somewhere in the middle, and OpenAI as an organization finds itself somewhere in the middle, in that we are thinking about both bias and long-term safety, about both present systems and AGI. And in fact, both sides are just trying to figure out how to make sure this technology helps people and doesn't hurt people. Often the conclusions, in terms of the actual policy you recommend, aren't that different depending on whether you're focused on the immediate term or the long term. I see it more as a spectrum, but there are definitely people with different emphases.
Lukas: I feel like if the ethics is interesting, there must be controversy. There must have been decisions internal to OpenAI, even philosophical discussions, where people really take strong and different stances. Because when you put it in this general sense, it's like, who would argue that we shouldn't make AI safe, and who would argue that we should make AI biased? But I have a feeling that when you zoom in on what that really means, there must be points where these goals come into conflict.
Miles: Yeah. Oh yeah. And to be clear, you had asked me about factions, so I tried to give you a map of the factions, but within the factions there are all sorts of debates, and there's certainly not consensus. So, like I said, let's take the short-term issues. Among the people who are focused on short-term issues, there's another spectrum. On one end are people thinking, "We have to figure out how to do this right and in each case figure out what the right norms are," who see it as both intrinsically and symbolically important to get issues like bias sorted out as soon as possible. They're hardcore about "let's make sure we're not causing harm"; call it the Hippocratic Oath end of the spectrum: first, do no harm, and take things like bias super seriously. And there are also lots of people building systems who are focused on getting products to market, or on releasing systems that can inform research. In the case of GPT-2, there was potentially a tension between avoiding causing harm on the one hand, and enabling people to understand our research, verify our claims, and build new systems on the other. So you could maybe call those the Hippocratic Oath end of the spectrum and the Move Fast and Break Things end of the spectrum. I think there's an element of truth in both: avoiding harming people on the one hand, and, on the other, the fact that this technology needs to be iterated on, that there need to be publications and models getting out into the world, in order for there to be learning by doing and, ultimately, figuring out how to solve those problems. So yes, you could definitely see conflicts there. In the case of GPT-2 there were definitely a bunch of different perspectives internally at OpenAI, and ultimately we tried to wrangle that into a consensus view.
But you can see the ambiguity in the fact that we were hedging a lot of our claims, like "We're not sure how dangerous this is" and "We're going one step at a time," because there were actual competing values at stake. I wouldn't say it was totally zero-sum, but there was some zero-sumness between avoiding harm, if it turned out that GPT-2 was very dangerous, and allowing people to verify that type of claim.
Lukas: Yeah, that totally makes sense. I think you mentioned that you don't really have a good sense of what all the policies would need to be, unlike climate change, where it's maybe more clear what the sensible policies are. Are there some policies that you would enact if you had control, or a few things you could recommend to, say, the U.S. government? What would those be?
Miles: Yeah, some of the stuff that we flagged in the Toward Trustworthy AI Development report I would definitely consider here, like compute for academia. Generally, a super high-level policy change that I'd like to see is more robust support of academic research: not just compute, but also things like data, and just funding for academics so that they're not constantly writing grant applications. I think there are lots of inefficiencies in the way the American academic system currently works. So more long-term funding of work in areas like security, surveillance, and privacy would be good, and more support for work on things like bias. We're currently in a regime where there's a little bit of funding from the Department of Defense, a little bit from the National Science Foundation, and then there's everything that industry is doing. What I'd like to see is a world in which an academic or a postdoc doesn't see a big tradeoff between working in industry versus academia in terms of the resources and the freedom that they'll have, because I think we'll see both faster progress in AI and more creativity if people are able to think long term in different sectors and not be constantly fighting over money. There are also areas that by necessity need to be worked on over the long term. You would want it to always be an option for ambitious grad students to work on those, but that's not really the case today. A lot of the time, one could go to grad school and not find an easy way to work on AI safety, because a lot of the grants available are defense-oriented grants for X, Y, and Z, or something like that. They do fund some safety work, but I would like to see more balance between the civilian and defense sides, and more robust long-term funding for academia.
Lukas: That makes sense. What about regulation? Besides more funding, would you want our government to enact some laws now, putting guardrails around AI research or deployments? Or would you want them to wait and see until there's more information?
Miles: Yeah, it's a good question. I think my answer is that in some areas they should do stuff now, and in other areas they should wait and see. In areas like driverless cars, for example, there's already a clear reason to act quickly and develop sector-specific regulations. Another area where I'd like to see more progress in developing clear guidance for entrepreneurs and others is health applications of AI. There's some effort to figure out how AI systems should flow through the FDA, for example, and what that review process should look like. That's an area where it would be beneficial to see more investment in capacity and expertise in government, so that it has a better ability to process these applications, and clearer structures for getting AI systems through a strict regulatory process; that, I think, could be very valuable. I don't know exactly what the details should look like, but generally, we do not want people putting out health applications that cause harm, and we also don't want zero health applications of AI. So that's an area where some guardrails could give people long-term clarity, so it's not just a black box of "will my system be able to be deployed in a year or two?" Having those long-term signals, I think, would be very valuable.
Lukas: Yeah, that makes sense. Cool. Well, we always end with two questions. I'm curious as to what you'll say about these. The first one is kind of broad, but off the top of your head when you think about all these topics in machine learning, what's the one that you think people don't talk about enough? What's the underrated topic that you'd like to push people to go maybe learn a little more about?
Miles: I think one super interesting thing is detection of language model outputs. This is me personally being biased from working on GPT-2 a lot, but there's actually a ton of super interesting research there: things like how the sampling strategy relates to detectability, how model size relates to detectability, and how fine-tuning relates to all of that. You can imagine a world in which GPT-2 or other systems are used to generate homework; someone was just giving me an example of this on Twitter earlier today that I find funny. So people are using it to cheat, or to generate phishing emails, or something like that. I think it's a really interesting question what the limiting case of detecting language model outputs is. Like, will it just always be...?
Lukas: So like who wins that arms race? [Chuckles]
Miles: Yeah, who wins that arms race. And also, are there steps that you can take to make it more winnable from the defender's perspective, like watermarking and things like that? Hopefully it's not a super urgent issue, and hopefully there aren't that many people actively trying to weaponize these systems, but there's been a ton of work by Google, the Allen Institute for AI, the University of Washington, OpenAI, and others on trying to push the state of the art forward. But we still don't have a general theory of how all these things fit together.
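To make the watermarking idea Miles mentions concrete, here is a toy sketch of my own construction, not any deployed or proposed scheme: a pseudorandom "green list" of allowed next tokens is derived from the previous token, a watermarked generator only picks green tokens, and a detector measures the fraction of green transitions. The vocabulary and function names are invented for illustration; real proposals bias sampling probabilities rather than hard-restricting them.

```python
import hashlib
import random

# Tiny made-up vocabulary for illustration only.
VOCAB = ["the", "a", "model", "text", "was", "is", "good", "bad",
         "very", "quite", "output", "human", "wrote", "reads"]

def green_list(prev_token: str, fraction: float = 0.5) -> set:
    """Pseudorandomly partition the vocabulary, seeded by the previous token."""
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    shuffled = VOCAB[:]
    rng.shuffle(shuffled)
    return set(shuffled[: int(len(shuffled) * fraction)])

def watermarked_sample(prev_token: str, rng: random.Random) -> str:
    """Generator that only ever picks from the green list (a 'hard' watermark)."""
    return rng.choice(sorted(green_list(prev_token)))

def green_fraction(tokens: list) -> float:
    """Detector: fraction of transitions whose token lies in its green list."""
    hits = sum(tok in green_list(prev) for prev, tok in zip(tokens, tokens[1:]))
    return hits / max(len(tokens) - 1, 1)

# Watermarked text scores 1.0 by construction; unrelated text hovers near the
# green-list fraction (0.5 here), which is what makes detection statistical.
rng = random.Random(0)
marked = ["the"]
for _ in range(20):
    marked.append(watermarked_sample(marked[-1], rng))
print(f"watermarked green fraction: {green_fraction(marked):.2f}")

unmarked = [random.Random(1).choice(VOCAB) for _ in range(21)]
print(f"random green fraction: {green_fraction(unmarked):.2f}")
```

The defender's advantage here is that the detector needs only the seeding scheme, not the model; the attacker's counter-move (paraphrasing the output) is exactly the kind of arms-race dynamic being discussed.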
Lukas: Interesting idea. What is the state of the art? Can you generally detect these models?
Miles: Yes. The state of the art is around 95%-ish detection accuracy. We actually use a RoBERTa model to detect GPT-2, so when we released our newest and best system, we were actually using a smaller model to detect the outputs of the larger model. One of the early findings, from folks at the University of Washington and the Allen Institute for AI, was that models can detect themselves, and this was an argument for releasing. We later found that taking a different model and using it to detect the original one's outputs works even better. One of the things we found is that it's easier to detect our smaller models; maybe that's not surprising, because they're worse in some respects and might be making more errors that are catchable, but that's what our initial findings were. Then other people found other really interesting things, like what humans pick up on versus what AI systems detect. AI systems can detect weird stuff, like the distribution of adverbs versus nouns, and conclude, "therefore it's fake." But humans are not looking for those kinds of things; they're looking for, "Is it saying something incoherent, or is it repeating itself?" So that's another interesting finding: humans and machines are complementary in terms of how they detect things.
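The complementarity Miles describes, machines keying on distributional quirks while humans notice repetition and incoherence, can be illustrated with a toy pair of features. These crude heuristics are my own stand-ins, not anything from a released detector; a real classifier like the RoBERTa one learns far richer features.

```python
import re
from collections import Counter

def adverb_ratio(text: str) -> float:
    """Crude distributional feature: share of words ending in '-ly'.
    A stand-in for the statistical quirks a trained classifier picks up on."""
    words = re.findall(r"[a-z']+", text.lower())
    if not words:
        return 0.0
    return sum(w.endswith("ly") for w in words) / len(words)

def repetition_score(text: str) -> float:
    """Human-style cue: how dominant the most frequent content word is."""
    words = [w for w in re.findall(r"[a-z']+", text.lower()) if len(w) > 3]
    if not words:
        return 0.0
    return Counter(words).most_common(1)[0][1] / len(words)

# Two invented snippets: one stilted but varied, one coherent but repetitive.
machine_ish = "the system reliably and consistently produced highly fluent text"
human_ish = "the model kept repeating itself and the model kept saying the same thing"

print(f"adverb ratio: {adverb_ratio(machine_ish):.2f} vs {adverb_ratio(human_ish):.2f}")
print(f"repetition:   {repetition_score(machine_ish):.2f} vs {repetition_score(human_ish):.2f}")
```

The point is only that the two feature families fire on different texts, which is why combining human judgment with statistical detectors helps.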
Lukas: I guess that makes sense. Maybe I'm overconfident or out of date, but I feel like I can still detect these models pretty reliably by noticing that they make no sense.
Miles: Yes. There's a good game you can play that contrasts fake versus real Trump tweets, and there are a few other quizzes like that which I think are worth trying; they're sometimes harder than you might think, at least in the context of fine-tuning.
Lukas: I guess the Twitter genre is really kind of pushing us to... Yeah. I can see Twitter being a tough medium to detect human versus machine. I feel like I can handle a few paragraphs, I think I could do it. But now I really want to try.
Miles: Yeah, the "over a few paragraphs" part is critical, because that's one of the other robust findings. It's actually kind of good in that respect that Twitter recently went from 140 to 280 characters.
Miles: That's a sweet spot in terms of detectability. It makes it a bit easier.
Lukas: Well, maybe our popular culture is nudging us towards the language of machines even faster than machines are learning ours.
(Both laugh)
Lukas: Okay last question. I guess this is kinda more for practitioners but I think actually I'd be curious about your take on it. When you look at things at OpenAI or elsewhere and you look from the conception to creation and deployment of a complete model, where do you see the bottlenecks happening? What's the hardest piece of getting something done and out the door?
Miles: Good question. It seems like a lot of it is finding the right hyperparameters, the kind of stuff that y'all are trying to help out with. Obviously compute is a bottleneck if you don't have enough of it, but I'm thinking of the context where you have a decent amount of compute. Data is definitely always an uphill battle: even if you have good engineers who are good at gathering and cleaning data, there's always room for improvement. So I'm not sure I'd call it a bottleneck, but something to push on is the quality of data. Also, related to the hyperparameter thing, is ML weirdness. It's hard to debug ML systems; there are weird silent failures, and weird silent issues in the data as well, and all of those lead to various strange dynamics.
Lukas: I do want to say for the record that I hope our product helps people with the kind of collaboration you're talking about, not just specifically tuning the hyperparameters; I think both are important. But I'm really curious, actually. We watched from the sidelines, from far away, as OpenAI tried to build the robot hand that manipulated the Rubik's Cube, and just from casually talking to folks on the team, it seemed like people felt they were really close, and then it took about two years. It did get done, but over a long period of time. What happens in that year or two? From your perspective, what's going on? It can't just be tweaking hyperparameters, can it?
Miles: I should emphasize I'm not a technical person, and it's not just slowly tweaking the hyperparameters...
Lukas: I know. In a way, I'm just in need of a clearer perspective, because I don't know what they're doing; I'm just watching from the outside. I'm curious what you'll say.
Miles: In the case of robotics, from my perspective, it felt like a fairly smooth trajectory: every few months there would be some demo that seemed a bit more impressive. It wasn't that nothing happened for years; it was that we didn't solve the original problem for a while, but there always seemed to be some area to push on. And you mentioned collaboration; I would say knowing what sorts of techniques to apply is another part of it. It's not the hyperparameters per se, but knowing how to get transformers to work in some new context is non-obvious, and the fact that there's not always sufficient information in papers to reproduce things requires you to do a lot of trial and error. That's just how ML research seems to work: trying lots of things, and it just takes time.
Lukas: Thank you so much for taking the time to talk. We'll put some of the papers you mentioned in the show notes. That was really fun. I really appreciate it.