Jack Clark is the Strategy and Communications Director at OpenAI and formerly worked as the world’s only neural network reporter at Bloomberg. Lukas and Jack discuss AI policy, ethics, and the responsibilities of AI researchers.
Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims by OpenAI: https://arxiv.org/abs/2004.07213
Follow Jack Clark on Twitter: twitter.com/jackclarkSF
Read more posts by Jack on his website: https://jack-clark.net/
Lukas: I feel like I typically get nervous when people ask me big policy questions about A.I., and I never really feel like I have a lot of smart things to say. The goal of this podcast is mainly to talk about people doing A.I. in production, but when I started writing down questions I wanted to ask you, I was like, wait a second, I want to ask you all the policy questions, and all the weird questions people have ever asked me, because I have no idea.
Jack: Well let's do it.
Lukas: So there is a question I seriously want to ask because I feel like you think about this a lot. I mean, this is such a cliche question, but I'm actually fascinated by how you are going to answer it, which is what probability do you put on an A.I. apocalypse?
Jack: Oh, good. So we'll start with the really easy question and go from there.
Lukas: Yeah. So what is it? Is it like one in ten? Nine out of ten? One in a million? What are your odds?
Jack: I feel the chance of an A.I. apocalypse is quite high, like 50 percent, but that's only because the chance of most apocalypses is quite high if they get to the point of actually happening. Take a global pandemic, which is something that we're currently going through: it's quite clear that most of today's governments don't have the capacity or capabilities to deal with these really hard challenges. So if you end up in some scenario where you have large autonomous broken machines doing bad stuff, then I think your chance of being able to right the ship is relatively low, and you don't have a super positive outlook. I think the chance that we have to avert that and get ahead of it is actually quite high. But I think your question is more along the lines of: if something wakes up and we enter this very weird territory, what are our chances? And I think if we don't do anything today, then our chances are extremely poor.
Lukas: Yeah, I think maybe I agree: our chances of surviving an A.I. apocalypse are probably low. But my question actually is, what do you think the chances are of actually entering the A.I. apocalypse? And remember that all the apocalypse scenarios can't sum to more than one, right? So in a way, a pandemic apocalypse, unless you think they're linked, should make the chance of the A.I. apocalypse happening actually lower, right?
Jack: I think for us, this is like the beginning of massive amounts of computer trading on the stock market. Say you asked back then, "What's the chance we're going to enter into a high-frequency-trading apocalypse?" I think someone would have answered, "It's a really high probability that we'll have problems, but it's fairly unlikely the whole system will topple over due to high-frequency trading," and my answer on A.I. is pretty similar. It's really likely that we're going to have some problems, because it's a massively scaled technology that does weird things really quickly and will do stuff in areas where finance is also deployed, with huge amounts of capital, so the opportunity for big things going wrong is kind of high. But the chance of a total apocalypse feels a little fantastical to me, and that's partly because for a real apocalypse, a really bad, severe one, you need the ability for A.I. to take a lot of actions in the world, which means you need robotics. And robotics, as you and I both know, is terrible, and that actually protects us from many parts of the more ornate apocalypse scenarios. But the way I think about this is: you develop a load of radical technology, and some of the greatest risks you face aren't the technology deciding of its own volition to do bad stuff; that very rarely happens, and it's even unlikely here. There's a chance that you kind of get black mold with technology: somewhere in your house you haven't been cleaning sufficiently, you don't have good systems in place, and something problematic starts developing in a kind of emergent way. You barely notice, but that thing has really adverse effects on you. It's also hard to diagnose the source of the problem and why it's occurred.
Lukas: Interesting. That seems like a much more concrete scenario. What form might that take? It sounds like you're mostly worried about the things people are doing now, where we get better at doing these bad things and that causes big problems. What are the top-of-mind concerns for you?
Jack: I guess I'd frame my concern as: we're currently pretty blind to most of the places this could show up, and we need something that looks a lot like weather forecasting, radar, and sensors, where we're looking at evolution in this domain. The sorts of things I'd be worried about are scaled-up versions of what we already have: recommendation systems pushing people towards increasingly odd areas and content or subject matter, which we are all realizing are quietly radicalizing people or making people behave differently with each other. I worry quite a lot about A.I. capabilities interacting with economics. You have some economic incentive today to create entertaining disinformation or misinformation. I think about what happens when those things collide, when we have good A.I. tools for creating misinformation or disinformation and the economic incentives start showing up. I think there are going to be relatively few grand evil plans. I think there are going to be lots and lots of accidental screw-ups that happen at really large scales and really quickly, with self-reinforcing cycles. The challenge is not only to spot something, but that you're going to need to take action quite quickly, and that's something we are traditionally just really bad at doing at speed. We can observe things happening, but our ability to act against them is quite low.
Lukas: But you do a lot of work on ethics and A.I., and a lot of thinking about it. With those scenarios, is A.I. special there? It seems like there's a lot of general technology risk, right? Do you think A.I. makes it different?
Jack: I think the difference is delegation. Technology allows us to delegate certain things. Up until many of the practical forms of A.I., technology let us delegate highly specific things that we could write down in a procedural way, and A.I. allows us to delegate things which have a bit more inherent freedom in how you choose to approach the problem that's been delegated to you. Like, "make sure people watch more videos on my website." It's kind of a fuzzy problem; you're giving a lot of space for the system to think about it. And so the ethics there are not something that humans haven't encountered before, but it's a form of ethics which has a lot in common with the military, or with how you ran administrative states in the old days: the ethical nature of giving someone the ability to delegate increasingly broad tasks to hundreds of thousands of capable people. That's a classic ethical problem that people have dealt with for hundreds or thousands of years, but with A.I., now almost everyone gets to do delegation, and that really hasn't happened before. We haven't had this scale of delegation and this ease with which people can scale themselves up. So lots of the ethical challenges are like: okay, people now have much greater capabilities to do good and harm than they did before; they have these automated tools that extend their ability to do this. How do you think about the role of the tool developer in that context? Because, sure, you're building iterations of previous tools, but the scope in which those tools will be used, the areas in which they will be used, is much broader than what you've dealt with before, and I think it introduces ethical considerations that maybe only governments previously dealt with.
Lukas: I see. So in your view, A.I. allows single individuals to have broader impact, and therefore there are more ethical issues within the tools you actually make available to folks?
Jack: Yeah, here's a good way to think about this: I think language models are interesting. Here's an ethical challenge I find interesting with language models. You have a big language model that has a generative capability. You want to give that to basically everyone, because it's analogous to a new form of paintbrush. It's very general; people are going to do all kinds of stuff with it. Except this paintbrush reflects the implicit biases of the data it was trained on at massive scale. So it's like a slightly racist paintbrush. The problem is now different from just having a paintbrush. You've got a paintbrush that has slight tendencies, and some of these tendencies seem fine to you, but some seem to reflect things that many people have a strong moral view of as being bad in society. What do you do then? I've actually spoken to lots of artists about this, and most artists will just say, "Give me the paintbrush. I want the crazy fun-house-mirror version of society so I can talk about it and make interesting things." That feels fine. But then I wonder about what happens if someone gets given this paintbrush and they just want to write text for a kind of academic purpose. They may not know much about the paintbrush they've been given. They may not know about its traits. And then suddenly they are unwittingly creating massively scaled-up versions of the biases inherent to this thing you gave them. That seems challenging, and here, as technology developers, we have a lot of choice, an uncomfortable amount of choice, and a lot of problems which are not easy to fix. You can't fix this; you need to figure out how to talk about it and make people aware of it.
Lukas: Well, that's a really clever analogy. I have not heard that one before.
Jack: Yeah, I mean, I think it speaks to a weird scalability of a lot of this stuff. We have tools that let people scale themselves in various directions, and the directions are increasingly creative areas, because we're building these scaled-up curve-fitting systems that can fit really weird curves, including interesting semantic domains. But all the problems of curve fitting now become weird problems of the reproduction of art and so forth, which feels different and challenging. I don't have great answers here; I have more of a "Oh dear, this is interesting and feels different."
Lukas: Actually, it's interesting, because you speak of the language model as, "Oh, just for example, what if you had a language model," but OpenAI actually had this issue. I'm curious how you thought about it at the time and how you reflect on it now.
Jack: So this is GPT-2, which is a language model we announced and didn't initially release, but subsequently released in full. At the time, I think we made a classic error, which is that if you're developing a technology, you see all of its potential very clearly: you don't just see the technology you're holding in your hands, you see gen 2, 3, 4, and 5 and the implications thereof. In some of our worries about the misuse of this technology, I think we were thinking about later versions of the technology rather than the one we were actually holding. Because what actually happened is we released it, and we observed a huge amount of positive uses, and really surprising ones, like this game AI Dungeon, where a language model becomes a kind of dungeon master, and it feels interesting and actually different, a different form of gameplay, something we wouldn't have expected. And the misuses were relatively small, because it's probably as hard to create a misuse of a technology as it is to create a positive use. And luckily, most people want to do the positive uses, so the number of people doing misuses is a lot smaller. I think that means the responsibility of technology developers is going to be more about: maybe you're still going to trickle things out in stages, but you're ultimately going to release lots of stuff in some form. It's about thinking about how you could control some elements of the technology while making other parts accessible. Can you control how you'd expect big generative systems to be used while making them maximally accessible? Because you definitely don't want a big generative model that may have biased tendencies providing generations to people in, say, a mock interview process that happens before they speak to a human for an interview stage, because that's the kind of usage we can imagine and feels like the sort of thing you really want to avoid.
But you can imagine ways in which you've made this technology really broadly accessible while finding ways to carve out parts where you as a developer say, "This probably isn't okay." So I think our thinking needs to become a lot more subtle. And I think we did anchor on the future more than the present, and that's been one of the main things that's changed.
Lukas: Interesting. So knowing what you know now, you wouldn't withhold the model?
Jack: I think you'd still do a staged release, but you would do more research earlier on characterizing the biases of the model and potential malicious uses. Because what we did is we did some of this research, and then did a lot more after some of the initial models had been released, characterizing subsequent models we were planning to release. What I think is more helpful is to have a load of that stuff front-loaded, so you're basically saying: here's the context, here are the traits of this thing which is going to slowly be released, and you should be aware of them. So, yeah, I think we would have done stuff slightly differently. And what we're trying to do here is learn how to behave with these technologies. Some of that is about making yourself more culpable for these potential outcomes, because it's a thinking exercise; it makes you think about different things to do. So I'm glad some of the goal of GPT-2 was to bring a problem that we actually don't get to get wrong in the future earlier in time, to a point where we can try different ways of release, where maybe some will be good and some will be suboptimal, and learn from that. Because I think in five, six, seven years, these sorts of capabilities will need to be treated in a standardized way that we've thought about carefully, and getting to that requires a lot of experiments now.
Lukas: It's kind of interesting. I guess there's two kinds of problems. My understanding of the worry with GPT-2 was actually malicious uses, which more information probably wouldn't help with. But then there's also, I think, your idea of the accidentally racist paintbrush, which sort of speaks to inadvertently bad uses. Both seem like potential issues, but do you now view malicious uses as less of an issue? Because I really could imagine a very good language model having plenty of malicious uses. I suppose you could say, well, any interesting technology probably has misuse issues; should we never release any kind of tool? How do you think about that?
Jack: Yeah, again, it's good that we're doing really easy questions; I'm glad we're slowly moving up from the easy stuff. Well, I guess there's a couple of things. One of the things we did with GPT-2 was release a detection system, which was a model trained to detect outputs of GPT-2 models. We also released a big dataset of unsupervised generations from the model so other people could build different detection systems. I think a huge amount of dealing with misuse is just giving yourself awareness, you know? Why are police forces around the world and security services able to actually deal with organized crime? They don't make organized crime go away, that's a socioeconomic phenomenon, but they tool up in very specific ways to detect patterns of organized crime. And I think it's similar here, where you need to release tools that can help others detect the outputs of the things you're releasing. For avoiding malicious uses, I think it's actually kind of challenging, and it's a little unclear today how you completely rule that stuff out; it's generally challenging to do that with technologies. Some of how we've been approaching it is trying to make prototypes, the idea being that if we can make a prototype use case that's malicious and real, then we should talk to affected people. The extent to which we would publicize that remains deeply unclear to me, because, as you intuit, if you publicize malicious uses, it's like, "Look over here, here's how you might misuse this thing we have released," which seems a little dangerous. I think we're going to need new forms of control of technology in general at some point. I don't think that's this year's problem or next year's problem, but, you know, by 2025 you're going to have these embarrassingly capable cognitive services which can be made available to large numbers of people.
And I think cloud providers, governments, and others are going to have to work together to really characterize what could be generically available for everyone and what the requisite level of care and attention paid to it is going to be. Getting to that is going to be incredibly unpleasant and difficult, but it feels inevitable.
Lukas: But just to be concrete, if you created a thing like GPT-3 that was much more powerful, do you think you would release it along with the detector, as a sort of compromise?
Jack: I think you think about different ways that you can release it, because some capabilities might be fine, and some you might want to have some control over. So you control the model people access, with services around it; that could be one way you do it. Another way could be releasing fine-tuned versions of models on specific datasets or specific areas, because if you fine-tune a model, it's like neural silly putty: you take this big blob of capability and you put it on a new dataset, and it takes on some of the traits of that dataset, and in some sense you have restricted it. So you can do things like that. I think the challenge for a lot of developers going forward is going to be how to deal with the raw artifacts themselves, like the models themselves. Something I think about quite regularly is: it's not today, it's not next year, it's probably not even 2022, but definitely by 2025 we're going to have conditional video models. Someone in the A.I. community, or some group of people, is going to develop research that allows a model to generate a video of a person over some period of time, you know, a few seconds probably, probably not minutes, and they can guide it so that it includes specific people and maybe does specific things. And maybe you also get audio as well. That capability is obviously something, but it's a much harder case than just a language model or just an image model. I think that capability definitely gets quite a few controls applied to it, and it needs systems for the authentication of real content on the public internet as well; it provokes questions about that. I think we're heading into a weird era for all of this stuff. I think the advantages you get from releasing all of this stuff to the public on the internet are pretty huge.
But I also think it's to some degree a dereliction of duty by the A.I. community to not think about the implications of where we are in three, four, five years, because I have quite high confidence that we can't stay in this state of affairs where the norm is to put everything online instantly. I think we, and by "we" I mean A.I. researchers at large, will just develop things that are frankly too capable for you to be able to do that and say this is fine.
Lukas: Do you think that from...
Jack: What do you think? I need to ask; I want to ask you about this. What do you think about this sort of issue? What is the responsibility of technologists, and how do we get to a more responsible place, and is that even necessary? You can ask me another question after, but I gotta know.
Lukas: I don't know, it's funny. I feel like I really want to reserve the right to change my mind on some of this stuff.
Jack: So do I, to be clear, I didn't realize we were committing in this conversation.
Lukas: I think I'm quite reluctant to say things publicly, because it seems like the ethics really depend on the specifics of how the technology works. On GPT-2, just as an example, I thought OpenAI's decision was intriguing, and different than what I would have done or what my instincts would have been. It was provocative to say, "Hey, we're not going to release this model," and the good thing about it, maybe, was that it got everyone talking and thinking about it. I guess another thing that I don't really have a strong point of view on, but that seems interesting, is that at the moment every A.I. researcher is being asked to be their own kind of ethicist on this stuff. Like, I see a lot of ethics documents coming out with open source, you know? ML projects will have their code of conduct... On one hand, it seems a little highfalutin to me; I have this instinctive, "Should I put out a code of ethics with the toaster that I sell?" There's something unappealing about it. But I can actually also definitely take your side of it: to me it's less about the power of an individual, and more that if this technology compounds and runs amok, then maybe it's something people really should be thinking about. But honestly, I don't know. I'm curious what you think about this, because you're in this all the time. Do you think that A.I. researchers are in the best position to decide this stuff? If it really affects societies profoundly, as you're saying, it seems like everyone should get a say about how this stuff works, right?
Jack: What's actually happening here is an unfair thing for A.I. researchers, which is that they have available powerful technologies that are released into a world that doesn't have any real notion of technology governance, because it hasn't really been developed yet, and they release them into systems that will use the technologies to do great amounts of good and maybe a small amount of harm. And so the challenge is like, "Well, shit, I didn't sign up for this. I wanted to do A.I. research. I didn't want to do A.I. research plus societal ethics and geopolitics. That's also not my expertise." I think that's a very reasonable point. Unfortunately, there isn't another crack team of people hiding behind some wall to entirely shoulder the burden of this. There are ethicists and social scientists and philosophers, members of the public, governments; all of them have thoughts about this and should be involved. But I think the way to view A.I. researchers is that they're making stuff that's important. They should see themselves as analogous to engineers, the people who build buildings and make sure bridges don't fall over, who have a notion of ethics. Chemists should have a notion of ethics, because chemists get trained how to make bombs, and you want your chemists to have a strong ethical compass so that most of them don't make explosives. Until you have a really, really resilient and stable society, you don't want lots of people with no ethical grounding able to do this, because they might do experiments that lead to literal blowups. Or even people like lawyers, who have codes of conduct and ethics. It's very strange to look at A.I. research, and more broadly computer science, and see a relative lack of this when you see it in other disciplines that are as impactful, or maybe even less impactful, on our current world. I don't think any A.I. researcher is going to solve this on their own.
But I think a culture of culpability, of thinking, "Actually, to some extent I am a little responsible here. Not a lot. It's not my entire problem. But I have some responsibility," is good, because how you get systemic change is millions of people making very small decisions in their own lives. It's not millions of people making huge important decisions, because that doesn't happen at scale; millions of people making slight deltas is how you get massive change over time. And I think that's kind of what we need here.
Lukas: Well, let me ask another easy question.
Jack: Oh, good. Yeah.
Lukas: What do you think about military applications of AI?
Jack: Well, the military applications of A.I. aren't special, in the sense that it's a technology that's going to be used generically in different domains, so it'll get used in military applications. I mostly don't like it because of what I think of as the AK-47 problem. The AK-47 was a technological innovation that made this type of rifle a lot more repeatable, more maintainable, and easier to use by people who had much less knowledge of weaponry than many prior systems required. You develop a system that goes everywhere. It makes the act of taking life or carrying out war massively cheaper and much more repeatable. And so we see a rise in conflict, and we also see that this technical artifact, to some extent, drives conflict. It doesn't create the conditions for conflict, but it gets injected into them and it worsens them, because it's cheap and it works. I think that A.I., if applied wrongly or rashly in a military context, does a lot of this: it makes certain things cheaper and certain things more repeatable, and that seems really, really bad. I think A.I. for military awareness is much more of a gray area. Lots of the way an unsteady peace holds in the world is by different sides who are at odds with each other having lots of awareness of each other: awareness of troop movements, distributions, what you're doing, and they use surveillance technologies to do this. And I think you can make a credible argument that the advances in computer vision we're seeing, which are being applied massively widely, may, if adopted at scale by lots of militaries at the same time, which is kind of what seems to be happening, provide some diminishment of a certain type of conflict, because it means there's generally more awareness. I think stuff like the moral question of lethal autonomous weapons is really, really challenging, because we want it to be a moral question, but it's ultimately going to be an economic question.
It's going to be a question that governments make decisions about motivated by economics, speed of decision-making, and what it does to strategic advantage, which means it's really hard to reason about. If you or I were making these decisions, we'd approach it with a radically different frame, probably with a strong intuitive push against it existing, but that's not the frame these people have. Let's do another easy question. What else have you got?
Lukas: Actually, this is maybe a less loaded question, but I'm genuinely curious about it. You recently put out this paper, Toward Trustworthy AI Development, and as someone who builds a system that does a lot of saving of experiments and models and things like that, I thought it was really intriguing that you picked as the subtitle "Mechanisms for Supporting Verifiable Claims." You draw this incredibly bright, direct line between trustworthy A.I. development and supporting verifiable claims, and I was wondering if you could tell me why those are so connected.
Jack: Well, it's really easy for us to say things that have a moral or ethical value, and in words commit an organization to something like, "We value the safety of our systems, and we value not making biased decisions," or what have you. But that's an aspiration, and it's very similar to a politician on the election campaign trail saying, "Well, if you elect me, I will do such and such for you. I'll give you all this money, or I'll build this thing." It's not very verifiable. You sort of need to believe the organization or believe the politician, and they can't give much proof to you. Because A.I. is going to be really, really significant in society and is going to play an increasingly large role, people are going to approach it with slightly more skepticism, just as they do with anything else in their life that plays a large role and has effects on them. And they are going to want systems of reporting, systems of diagnosis, systems of awareness about it. Now, today, for most of this, we just fall back on people. We fall back on the court system as a way to ensure things are verifiable. We have these mechanisms of the law that mean that if I as a company make a certain claim, especially one that has a fiduciary component, the validation of that claim comes from a load of stuff around my company, and the ability to verify it comes from action and also legal recourse if I'm not following through... It's tons of stuff like that.
Lukas: Let me ask this, because some people will not have read the paper or listened to this. When you say supporting verifiable claims, what's an example of a claim that you might want to verify that would be relevant to trustworthy A.I. development?
Jack: A claim you might want to verify is, say: we feel that we've identified many of the main biases in our system and have labeled them as such; however, we want the world to validate that our system lacks bias in the critical areas, so we're going to use a bias bounty mechanism to get people to compete to try and find biased traits in that system. So there you've got a thing, and you make your claim about it: "I believe that it's relatively unbiased," or "I have taken steps to look for bias in it." But then you're introducing an additional thing, which is a transparent mechanism for other people to go poke holes in your system and find biases in it, and that's going to make your claim more verifiable over time. And if it turns out that your system had some huge traits that you hadn't spotted, well, at least the mechanism helps you identify them right there. Similarly, we think about the creation of third-party auditing organizations. So you could have an additional step: I could have a system making some claim about bias and put a bias bounty out there, so I have more people hitting my system. But if I'm being deployed in a critical area, and what I mean by critical is a system that makes decisions that affect someone's financial life or any of these areas that policymakers really, really care about, then I can say, "Okay, my system will be audited by a third party when it gets used in these areas." And so now I'm really not asking you to believe me; I'm asking you to believe the results of my public bounty and the results of this third-party auditor. And I think when all of this stuff stacks on itself, it gives us the ability to have a kind of trust in the systems. Other things might be: I will make a claim about how I value privacy, but the mechanism by which I will be training my models and aggregating data will use encrypted machine learning techniques.
So there I've got this claim you can really verify, because I have an auditable system that shows you how I am preserving your privacy while manipulating your data. And so the idea of these reports is basically to produce a load of mechanisms that we and a bunch of other organizations and people think are quite good. And then the goal over the next year or two is to have organizations who were involved in the reports, and others who weren't, implement some of these mechanisms and try them out. We'll be trying to do this with at least a couple of them.
Lukas: Oh cool. So I can join the red team, too.
Jack: Yeah, I'm really excited about red teaming. So obviously we recommend a shared red team. That takes a little bit of unpacking, because obviously if you are two proprietary companies, your red teams can't share lots and lots of information about your proprietary products, but they can share the methods they used to red team AI systems, and they can standardize on some of those best practices. That kind of thing feels really, really useful, because eventually you're going to want to make claims about your red-teamed system, and it's going to be easier to make a trustworthy claim if you use an industry-standard set of techniques that are well documented and that many have used than if you just cowboy it and do it yourself. So, yeah, please join the red team. We want lots of people to be part of some shared red team infrastructure eventually.
Lukas: But the red team infrastructure, the way you describe it, and I'm sure this comes from security, but I'm just not super familiar with the field, it's like you have someone internal to the organization, right? Like you have an internal team that tries to break, or tries to find problems with, the system?
Jack: Yeah. You have that, and then you are seeking to find ways to have your internal team share insights with people at other organizations. Now, they can't say, "here is the proprietary system I broke and what I did." What they can say is, "well, I like to sit down and crack my knuckles and try to red team an ML system, and here are the approaches I used and here's what's effective." Not quite red teaming, but we have actually done a little bit of this at OpenAI, where in the GPT-2 research paper we wrote about some of the ways we've tried to probe the model for biases, because we think this is an area where it's especially useful to get standards. And since then, we have just been emailing our methodology to lots of people at other companies. These people can't tell us about the models that they are testing for bias, but they can look at the probes we're suggesting and tell us whether they seem sensible. And so that shows you how we're able to develop some shared knowledge without breaking any proprietary rules.
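The kind of shareable probing methodology Jack describes can be sketched roughly as below. This is only an illustrative sketch, not OpenAI's actual methodology: the templates, groups, and the toy stand-in "model" are all hypothetical, chosen just to show the shape of a bias probe that one organization could email to another without revealing any proprietary model.

```python
from collections import Counter

# Hypothetical templated bias probes: vary a demographic attribute,
# hold the rest of the prompt fixed, and compare the model's completions.
TEMPLATES = ["The {group} worked as a", "The {group} was described as"]
GROUPS = ["man", "woman"]

def run_probes(complete, templates=TEMPLATES, groups=GROUPS, n=5):
    """Collect n completions per (template, group) pair from any
    `complete(prompt) -> str` function and tally the words produced."""
    tallies = {g: Counter() for g in groups}
    for g in groups:
        for t in templates:
            prompt = t.format(group=g)
            for _ in range(n):
                tallies[g].update(complete(prompt).lower().split())
    return tallies

# A toy deterministic stand-in so the sketch runs end to end;
# a real probe would call an actual language model here.
def toy_model(prompt):
    return "nurse" if "woman" in prompt else "engineer"

tallies = run_probes(toy_model)
print(tallies["woman"].most_common(1))  # words most associated with that group
```

Because the probe is just a function of any `complete` callable, the methodology itself can be shared and standardized while each company runs it privately against its own model.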
Lukas: Interesting. One thing I kept thinking as I was reading your paper is that I use all kinds of technology that I don't think has made verifiable claims. I feel like I rely on all kinds of things to work, and maybe they're making claims, but I'm certainly not aware of them. I sort of assume that Internet security works. I assume that all these things I now have plugged into my home network work. So what do you think? It seemed like these might be the best practices for developing any kind of technology, or do you think there's something really AI-specific in it? And where would you even draw the line where you would call something AI that needs this kind of treatment?
Jack: I think some of it comes down to "where do you draw the line?" I think AI stuff is basically when you cross from technology that can easily be audited and analyzed, and have the scope of its behavior defined, to technology where you can somewhat audit and analyze it and list out where it will do well, but you can't fully define its scope. And I think when you train a neural net, you have this big probabilistic system that will mostly do certain things, but actually has a surface area that's inherently hard to characterize fully. It's very, very difficult to fully list it out, and mostly it doesn't make a huge amount of sense to, because only a subset of the surface area of your system is actually going to be used at any one time. So it does have some differences. Bias bounties are a kind of weird thing. It's equivalent to saying, "all right, before we elect this person, or before we appoint this person to an administrative position, we want a load of people to ask them a ton of different questions about quite abstract values they may or may not have, because we want to feel confident that they reflect the values we'd like someone in that position to have." That actually feels a little different from normal technologies. It would be absurd to expect we get to a world where everyone verifies every claim they make all the time, because you don't have the time. You know, I mostly govern my life on my own belief that other people are sticking to the rules of the game. But we all have some cases where we want to go in on something that's happening in our life and audit every single facet of it. And I think the way to think about why you need verifiable claims, or the ability to make them, quite broadly, is that governments are considering how to govern technology and how to let technology do the most good while minimizing the harm.
It's probably going to come down to the ability to verify certain things in certain critical situations. So you're kind of building all of this stuff not for the majority of your life, but for the really narrow edge cases where this has to happen. Necessarily, that means you need to build quite general tools for verification and then try to apply them in specific areas.
Lukas: Why, then, has there been a lot of complaining about AI research recently that just the research claims, which are maybe not so loaded and not so applied, and which we don't interact with, are actually not really verifiable?
Jack: Yeah, I mean, some of these things are just because there is a compute gap. There is a minority of organizations with a large amount of compute, and there is a majority of organizations, and a huge swath of academia, if not all of academia, that have very real computational limits. And this means that at a really high level, you can't really validate claims made by a subset of the industry, because they are doing experiments at scales which you can't hope to meet. So some of this is about what really general tools we can create to resolve some of these asymmetries of information. Because some issues of verifiability are less about your ability to verify a specific thing in the moment; it's more about having enough cultural understanding of where the other person is coming from that you understand what they're saying and the premise behind it, and can trust them. Which is less you demanding a certain type of verification, and more being like, "okay, you're a complete alien to me, you come from another cultural context or another political ideology, but we have this strong shared understanding of this one thing that you're trying to get me to believe you about." And right now, it's like certain organizations want to motivate academia to do a certain type of research, but that depends on, "I come from this big-compute land and I'm asking you to hear me when I list out a concern that only really makes sense if you've done experimentation at my scale, because that's what calibrated my intuitions." So we need to find a way to give these people the ability to have the same conversation, so that you can improve that stuff as well.
Lukas: So are you going to give them a ton of compute? Like what's your solution here?
Jack: We basically specifically recommend that governments fund cloud computing, which is a bit wonky, right? But one thing you need to bear in mind is that today, a lot of the way academic funding works centers on the notion of there being some bit of hardware or capital equipment that you're buying, and as we know, that stuff depreciates faster than cars; it's like the worst thing to buy if you are a researcher at an academic institution. You'd be much better placed to buy into a cloud computing credit system, or a system that lets you access a variety of different clouds. Generally, when we go and work with governments, we push this idea that they should fund some kind of credit that backs onto a bunch of different cloud systems, because you don't want the government saying, "all right, all of America is going to run on Amazon's cloud." That's obviously a bad idea, but you can probably create a credit which backs onto the infrastructures of five or six large cloud entities and deal with competitive concerns that way. And I think it's surprisingly tractable. Some policy ideas are relatively simple because they don't need to be any more complicated, and so we are kind of lobbying, for lack of a better word, for governments to do this. The other thing to bear in mind is that lots of governments, because they've invested in supercomputers, really want to use supercomputers as their compute solution for academia, and that mostly doesn't work. You actually mostly need a dumber, simpler form of hardware for most forms of experimentation. So you're also saying to governments, "I know you spent all of this money on this supercomputer and it's wonderful, and it's great at simulation of nuclear weapons and whatever, we love that. But you don't need it for this. Stop trying to use it for this exclusively." So that is also sort of where that comes from.
Lukas: Nice, I actually hadn't thought of that. That's an interesting observation.
Jack: Well, if you're the US, right, you're like, "we've spent untold billions on having the winner of the Top500 list, and we're in some pitched geopolitical war with China, so of course we want to use this for AI," and you're like, "yeah, dude, but look, some people just want an 8-GPU server. Actually, most people are fine with that." And this thing is not easy to multiplex and share out compared to AWS or Microsoft or whatever.
Lukas: Interesting. So we always end with two questions, and I'm particularly interested in your point of view on these. The first one: you actually see a lot of things going on in AI, from your vantage point at OpenAI and also through the newsletter that you put out. What would you say is the topic that people don't pay enough attention to? The thing that matters so much more compared to how much people look at it.
Jack: I think the thing that no one looks at that really matters is advances in a very niche part of computer vision, which is the problem of re-identification of an object or person that you've seen previously. What I mean is that our ability to do pedestrian re-identification is now improving significantly. It's stacked on all of these ImageNet innovations. It stacks on our ability to do rapid feature extraction from video feeds. It's stacked on a lot of interesting component innovations. And it's creating this stream of technologies that will lead to really, really cheap surveillance that is eventually deployable on edge systems like drones, by anyone. And I think we're massively underestimating the effects of that capability because it's not that interesting. It's not that advanced. It doesn't even require massively complex reinforcement learning or any of these things that researchers like to spend time on. It's just a sort of basic component, but it is a component that supports surveillance states and authoritarianism, and it is the component that can make it very easy for an otherwise liberal government to slip into a form of surveillance and control that no one would really want to happen. And I am actually thinking, "yeah, can I write a survey or something about this?" Because it's not helpful for someone like OpenAI to warn of this; it's sort of the wrong message. It's maybe okay for me to write about it occasionally in my newsletter, as I do. But I sort of think about writing an essay like, "has anyone noticed this?" Because I look at all of the scores, I look at all of the graphs and stuff, I've got a big folder of it. It's all going up like a hockey stick; it's all getting cheap. It's not very cheerful, but I think it's important.
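For readers unfamiliar with the task, re-identification typically boils down to embedding each detected person into a feature vector and matching across camera feeds by similarity. A minimal sketch, with made-up embeddings standing in for what a real system would get from a trained CNN feature extractor:

```python
import numpy as np

def cosine_sim(a, b):
    """Cosine similarity between two feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def reidentify(query, gallery, threshold=0.8):
    """Match a query embedding against a gallery of previously seen
    people; return the best-matching id, or None if below threshold."""
    best_id, best_score = None, threshold
    for pid, emb in gallery.items():
        score = cosine_sim(query, emb)
        if score > best_score:
            best_id, best_score = pid, score
    return best_id

# Toy example: in practice these embeddings come from a deep network.
gallery = {"person_1": np.array([1.0, 0.0, 0.2]),
           "person_2": np.array([0.1, 1.0, 0.0])}
query = np.array([0.9, 0.05, 0.25])  # the same person, seen on another camera
print(reidentify(query, gallery))  # → person_1
```

The point Jack is making is that every piece of this pipeline (detection, feature extraction, nearest-neighbor matching) has been commoditized, which is exactly what makes cheap surveillance possible.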
Lukas: Great answer, as expected. All right, here's the second question. When you look at the demo projects that you've witnessed, and OpenAI has actually had some really spectacular ones, what's the part from conception to completion that's the hardest, and maybe the most unexpectedly difficult piece of it, like beating the best team at Dota, or GPT-2? Where do things get stuck, and why?
Jack: I think there are maybe two parts where projects get stuck or have interesting traits. One is just data. I used to really want data to not matter so much, and then you just look at it and realize that, whether it's Dota and how you ingest data from the game engine, or robotics and how you choose to do domain randomization in simulation, or supervised learning, where you're figuring out what datasets you have, what mixing proportions you give them during training, and how many runs you do, that just seems very hard. I think others have talked about this. It's not really a well-documented science; it's something that many people treat with intuition, and it just seems like an easy place to get stuck. And then the other is testing. Once I have a system, how well can I characterize it, what sort of tests can I use from the existing research literature, and what tests do I need to build myself? We spend a lot of time figuring out new evaluations at OpenAI, because for some systems you want to do a form of evaluation that doesn't yet exist. Characterizing performance in some domain, and figuring out how to test for a performance trait that may not even be present in a system, is a really very difficult question. So those would be the two areas.
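The mixing-proportion question Jack mentions, i.e. how much of each dataset to draw from during training, can be sketched as simple weighted sampling. The corpora and weights below are entirely made up for illustration; as he notes, in practice these proportions are tuned by intuition and experiment rather than any documented science:

```python
import random

def make_sampler(datasets, proportions, seed=0):
    """Yield training examples indefinitely, drawing each example
    from dataset `name` with probability proportions[name]."""
    rng = random.Random(seed)
    names = list(datasets)
    weights = [proportions[n] for n in names]
    while True:
        name = rng.choices(names, weights=weights)[0]
        yield name, rng.choice(datasets[name])

# Toy corpora with hypothetical mixing weights.
datasets = {"web": ["w1", "w2"], "books": ["b1", "b2"], "code": ["c1"]}
sampler = make_sampler(datasets, {"web": 0.6, "books": 0.3, "code": 0.1})

counts = {}
for _ in range(1000):
    name, _example = next(sampler)
    counts[name] = counts.get(name, 0) + 1
print(counts)  # roughly 600 / 300 / 100 across web / books / code
```

Changing those three weights changes what the model sees, which is why the "what mixing proportion do I use" question ends up mattering so much.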
Lukas: Okay, I can't help myself. As you were talking, I thought of one other question. I feel like the people that I know and that I've watched closely at OpenAI have been spectacularly successful, and they've been part of projects that have really seemed to have succeeded, like the robot hand doing the Rubik's Cube, and Dota. Are there a whole bunch of projects that we don't see that have just totally failed?
Jack: Yeah, we have got failures. I don't know if you remember Universe; that was a failure. We tried to build a system which was like OpenAI Gym, but the environments would be every Flash and HTML game that had been published on the Internet. So that failed, right? That failed because of network asynchronicity. Basically, because we were sandboxing things in the browser, you had a separate game engine that needed to go and talk over the network to them, and RL actually isn't robust enough to that level of timing variability to do a lot of useful stuff, so that didn't work. So we have some public failures, which I think is kind of helpful. And we have some private ones. A lot of it is, you know, some people just spend a year or two on an idea, but it ends up not working out. Some people, and I won't name the project, though it's public, came up with a simple thing that worked really well, then spent six months trying to come up with what they thought was a more disciplined or better approach to it, and the simple thing always worked while all of the other things they tried didn't. So they eventually published a system with the simple thing, like, "yeah, it works! But I would much rather my complex idea had worked." But our big bets, like the hand or Dota or GPT, those tended to go okay. That's usually because they come from a place of iteration. Dota came from prior work applying PPO and, I think, evolutionary algorithms to other systems. The hand came from prior work on block rotation, right? So once you could do block rotation, you could do a Rubik's Cube. GPT came from prior work on scaling up language models, like GPT-1. So a lot of it just happened iteratively before the public demos. I feel like we don't have an abnormal lack of failure nor an abnormal amount of success, because it's pretty in-distribution. I hope so.
Lukas: That was really fun.
Jack: Thanks very much.
Lukas: Thanks Jack.