AI effect: We’ll get way wealthier and witness a productivity boom, says Sam Altman

Sam Altman, the 38-year-old chief executive of OpenAI, spoke to Times Internet vice chairman Satyan Gajwani
in a chat that covered a vast range of topics, from their days at Stanford, through the birth of the company that
gave the world ChatGPT, to the future of artificial intelligence and the impact of the technology. Edited excerpts:

Satyan Gajwani: Sam, help us understand how you came to where you are right now.

Sam Altman: I started at Stanford, where we met… I was already in love with computer science, but I really fell in love with it once I got there. I actually went to study AI, but at the time AI was really not working at all. In fact, very memorably, one of my professors said the only sure way to have a bad career in AI is to work on neural networks–we’ve decided those don’t work. And so I got kind of discouraged and I started a company. The company didn’t work out that well, but I learnt about startups and thought they were a very powerful force, something I was very excited about. I then ran YC for a while, and while I was doing that, I got really excited about the idea of startups that take on hard technical challenges. And I thought it was curious that more people weren’t doing that. It seemed like a really valuable opportunity. With some other people, I started OpenAI as one of those examples, along with many other things which have gone on to be pretty exciting, but I really fell in love with OpenAI. Once it seemed clear that we were really going to have a chance at making true general-purpose AI–a system that could do what a human can do, and contribute new knowledge to society–I got really excited and wanted to go work on that. And so I stopped being an investor and now I do that.


Gajwani: What is OpenAI? Is it just ChatGPT?


Altman: We are a company doing research and deployment to try to figure out how to build AGI (artificial general intelligence) and how to responsibly deploy it into the world for maximum benefit. Unlike other technologies, this is a strong case of a technology that, on the one hand, is the most exciting, most promising, coolest thing I think humanity will have yet built. We can cure all diseases, we can get everybody a great education, better health care, massively increased productivity, huge scientific discovery–all of these wonderful things–and we want to make sure that people get that benefit, and that the benefit is distributed equitably. On the other hand, there are obvious concerns about the power of this technology being used in a negative direction. And so we want to be a force to help manage those risks, so we all get to enjoy the benefits. ChatGPT is definitely what we’re best known for, so I guess they’re sort of synonymous at this point. But OpenAI is really about this quest for AGI.

Gajwani: What’s the most surprising use case of some of the technologies that you guys have built, that you’ve seen recently?

Altman: The main thing I would say is interesting about it is its generality. There are a lot of other systems that can do this or that thing well; some can’t. But the fact that this one system is truly general purpose, and can do so many things, means that people are integrating it into their workflows as a very powerful tool. One of the areas where we’ve seen the biggest impact is what coders are using it for, doubling and tripling their productivity. You know, there was a paper that just came out showing that when Italy temporarily banned ChatGPT, developer productivity fell by half. It can also help you find information. So, it can help you write computer code, a poem, summarise documents, translate things. People are using this, which we hoped would happen, as a sort of super assistant that just makes them more and more productive.

Gajwani: Can you give us an example of something you’re seeing that’s really inspired you, that you’ve seen come out of the Indian market?

Altman: India has been a country that has really, truly embraced ChatGPT. Maybe you can tell me why. I’m hoping to learn while I’m here. There has been a lot of early adoption and real enthusiasm from the users. One of the very earliest things–in the first weeks of launching ChatGPT–we heard about a farmer in India who wasn’t able to access government services, hooked ChatGPT up to WhatsApp in some sort of complicated way, and was then able to. That was one of the early things; we were like, we did not think that was going to happen.


Gajwani: So what I’ve understood about OpenAI is that ChatGPT is one implementation of the things you’ve built. But you have capabilities to translate in real time, to transcribe audio into text. Are you seeing people use these in combination in ways that are surprising?

Altman: Well, we recently launched an iPhone app that has speech recognition. That’s hooking two of our models up together, and people love that. But the main point that I would like to get across is: none of the current systems really matter. The curve here is going to be much, much steeper, and what the systems are going to be capable of in the not-too-distant future is going to be very dramatically different. This is like the old, first grayscale Nokia phone that looked like a little candy bar, and the iPhone 14 is coming. What I would say is it’s a mistake to get too focused on the current systems, their limitations, their capabilities, the impact they’re having. The thing that matters here is that we are truly on an exponential curve. There are two big miracles in the field, I think: number one, we have an algorithm that can genuinely, truly–no tricks–learn. And number two, it gets predictably better with scale. We’re going to look back on those two realisations, put together, as a turning point in human history. What it means is that the rate of progress in the coming years, the capabilities, are going to be significant. It’s totally cool that ChatGPT can write that poem, when a future system can cure all disease, or help us address climate change, or radically improve education, or make us all 10 or a hundred times more productive at what we do. That’s quite impactful.


Gajwani: Now, let’s flip to the other side of this, because there’s no doubt there’s incredible power in this technology. And with that comes challenges–fundamental questions around authenticity, what’s real and what’s fake. How do we handle that?

Altman: There’s a lot of fear right now about the impact this is going to have on elections and on our society, and how we can ever trust media that we see. I have some fear there, but I think as a society we’re going to rise to the occasion. We’re going to learn very quickly that we don’t trust videos unless we trust their provenance. We’ll have techniques like watermarking and detectors. More than that, I suspect at some point, if people are saying something really important, they’ll cryptographically sign it, and web browsers or phones or whatever will build in some ability to say, okay, this is authentic. But there’s a related thing that I think is getting discussed less, which is not the ability to generate mass media like that, but customised one-on-one interactive persuasion. I think people are going to be able to create AIs that are very good at this. So it won’t just be like I’m watching a video of you; it will be like I’m chatting with you. And it’s the most interesting, compelling conversation I’ve ever had, and it’s affecting me in ways I don’t know about. That’s a new thing, different from just generating media. Again, I think we’ll find a way to build societal antibodies to it, but I don’t think it’s discussed as much, and it’s going to be a challenge.
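The sign-then-verify flow Altman describes can be sketched in a few lines. This is an illustrative toy only: real provenance systems would use public-key signatures (such as Ed25519) so anyone can verify without holding the signing key; HMAC with a shared key stands in here so the example runs with just the Python standard library, and the key name is hypothetical.

```python
import hashlib
import hmac

# Hypothetical signing key held by the publisher (a real scheme would use a
# private key for signing and publish only the corresponding public key).
PUBLISHER_KEY = b"publisher-secret-key"

def sign(content: bytes) -> str:
    """Return a signature the publisher attaches to a piece of media."""
    return hmac.new(PUBLISHER_KEY, content, hashlib.sha256).hexdigest()

def verify(content: bytes, signature: str) -> bool:
    """The check a browser or phone would run before marking content authentic."""
    expected = sign(content)
    # compare_digest avoids leaking information through timing differences
    return hmac.compare_digest(expected, signature)

original = b"Video statement from the publisher"
sig = sign(original)
print(verify(original, sig))                 # untampered content verifies
print(verify(b"Altered statement", sig))     # any modification fails the check
```

The point of the sketch is the division of labour: the publisher signs once, and every downstream viewer can mechanically check that what they received is byte-for-byte what was signed.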


Gajwani: I also want to talk about jobs because the natural fear is AI is going to make us redundant, particularly in markets like India, where we have so much of a workforce and a lot of it is oftentimes doing somewhat rote work. Should we be worried about this?

Altman: I mean, to some extent, yes. Every technological revolution leads to job change, and this will be no exception. Job change by itself is fine. If you look at the history of this, in two generations we can adapt to any amount of labour market change, and there are new jobs, and the new jobs are usually better. That’s going to happen here too. The thing that might be different about this one is the speed with which it can happen. And I think it will require a change to the socioeconomic contract, and to the way governments think about this, if it happens at a very fast pace. The second thing is that it’s not going the way people predicted so far, and I don’t think it will in the future. The current systems are actually not very good at doing whole jobs. The nature of the job, if you’re say a computer programmer, shifts to something like managing a team of extremely junior developers who can only do one-minute tasks at a time. Then sometimes they’ll do 10-minute tasks, and then they’ll do an hour-long task. But you’ll still have to think about how it is all going to fit together, whatever they build–and, you know, maybe it eventually learns that too. So instead of replacing jobs, it’s making people dramatically more efficient. And there’s such a demand overhang in most places. We could overnight make the world create three times more software, because we make every software developer three times more efficient–and that would still not nearly fulfil the demand the world has for software. I think we’ll see that in many other places. I think the world will get way wealthier, we’ll have a productivity boom, and we will find a lot of new things to do.

Gajwani: We also have to talk about that 1% extinction risk, that robots are going to come and take over our lives. What do you think about that?

Altman: For sure. I want to be super clear: I don’t think current systems are dangerous. I don’t think there’s any way that GPT-4 causes an existential risk to the world. But people are very bad at thinking about exponential curves, and GPT-10 may be an extremely different thing. Given the importance of getting this right–you know, it’s a 1% chance–I think putting a lot of effort into studying how we align an AGI, and how we design safe systems at this kind of scale, is super important. And starting that early is really good. I think we can totally manage through it. I think we’re developing techniques to mitigate it. This is really why we started the company. This was our initial focus, and it is still our most important focus. But yeah, we need to address this.

Gajwani: Is there like a power switch in the back of your office that nobody knows about where you can just pull this thing, it shuts down all the systems?

Altman: Exactly like that.

Gajwani: Okay, good. I’m glad. I feel better now. And it works even if you are travelling right? Okay.

So let’s talk about regulation. Because, again, I think what’s really unusual is this company is a few years old, but really for the consumer, it’s like less than a year old because of ChatGPT. And yet here you are, traveling the world, meeting leaders globally, to talk about the importance of regulation.

Altman: Well, again, we started the company because we were nervous about AGI risk before people even talked about AGI, and now I think part of the reason we deploy systems is so that people confront the technology, feel it, understand the risks and the benefits. And now a lot of other people are also very excited, but share the concern. I think this is a special moment where the globe can come together and get this right. And we certainly would like to try to do that.


Gajwani: Let’s talk a little bit more about AI in India. One of the obvious questions we think a lot about is languages. Now, AI is by and large trained on what’s publicly available, what’s available on most of the internet, which is inevitably going to be mostly English. What do you think about biases, inclusivity, multilingual countries like India, and making a product that’s relevant?

Altman: It’s super, super important to us. We had a big step forward from GPT-3.5 to 4 in non-English languages. So GPT-4 is pretty good at, say, the top 20 languages, and okay at maybe the top 100. We will be able to push this much further. It’s challenging for us for very small languages spoken by, you know, only a few tens of thousands or hundreds of thousands of people. But the systems are fundamentally going to be very good at this. Now, as you were saying, it’s not just the language; it’s also the history, the culture, the values, and we want the entire world represented here. Whether you use it in the US or in India, that can be under a different legal framework, and with the different cultures in each place, it’ll be very different.

Gajwani: India has been particularly unique and successful globally, at building a lot of the underlying technology stacks to support new innovation in digital with IndiaStack, UPI, Aadhaar. Should we think of this a little bit like nuclear technology where every country should be building its own capabilities?

Altman: First of all, it’s super impressive to see what India has done, in a way that really no other country has. In terms of AI strategy, I think there are a lot of things that can work. This question of, sort of, AI sovereignty–none of us have an answer to it yet. It feels like it’s going to be at least somewhat important. But the main thing that I think is important is figuring out how to integrate these technologies into other services. And that is an area where I think governments are behind, and don’t have the answers yet. But hopefully we all start to use LLMs (large language models) to make government services way better, on everything from how do I enrol in this program to how do I get better health care.

Gajwani: But if you’re in the Indian government, should we set up a team of crack engineers to build our own OpenAI? Is there a concern for us, to say: are we depending, for fundamental infrastructure, on something that’s not owned by our country?

Altman: Yeah, I think it is good to have some sort of AI research effort. What exactly that should do–should that be training ground-up LLMs? Should that be pursuing new research directions, or fine-tuning open-source projects?–I think there are a lot of options there… a nationally funded AI effort feels like a good idea.

Gajwani: You don’t own equity in OpenAI. Why?

Altman: It started with just this quirk of our structure, where we needed a certain number of non-conflicted people on the board who didn’t have equity, and then I kind of never got into it. I forget about it until it comes up in something like this… I don’t think it’s a particularly noteworthy thing. I made a ton of money early in my career. I actively invest, so I expect to make a ton more. Selfishly speaking, I get far more value from all of the other sorts of benefits that come from running OpenAI. I just believe that this is going to be the most important project of our time, and I’m super grateful to work on it.

Gajwani: Now, not just totally theoretically, let’s say you’re running a large news media company in a market like India. What would you do?

Altman: There’s been a lot of controversy about whether this is going to be good or bad for the publishing industry and news in particular. One of the things that we’ve heard from journalists and reporters who are actually using the product is that it helps them do the boring parts of their jobs better, and they get to spend more time reporting, talking to sources, thinking of ideas. And so I think I would just encourage everyone to just start using it.

Gajwani: Now let’s say you are the ministry in India responsible for overseeing technology, AI, etc. What would you do in that situation?

Altman: I would say: we have the G20 coming up, and India can play a huge role here in the global conversation about what this sort of international regulatory framework might look like. We are going to really focus on that between now and September and make sure we prioritise it.

Gajwani: Can you tell us something that you haven’t told other people about what’s coming from OpenAI?

Altman: We tell people what we’re working on, like, it’s going to get smarter, multimodal, we’re going to try to teach it to generate new ideas, help us discover more new science. We’re going to reduce hallucinations, give users more control. We don’t have a lot of secret plans here.

Gajwani: You are running what is going to be one of the most impactful companies in history.

Altman: Whenever you say ‘impactful’ and leave out whether it’s going to be good or bad…

Gajwani: That is a very purposeful leave-out, because we don’t know right? Can you tell us what else you’re doing that’s exciting you or motivating you outside of OpenAI?

Altman: Other than AI, if you could do one thing that would really help the world get richer, increase the quality of life, it’s very cheap energy. If we can get nuclear fusion to work, and if we can make enough of it for the world, and if it can cut the energy cost 10x plus, that’s pretty great. I’ll pick that one.

Gajwani: So your side gig is nuclear fusion?

Altman: I don’t… I am an investor in, and sort of a helper of, that one.

Gajwani: What is the most exciting thing that you are seeing globally in your own company?

Altman: It’s this generation of new scientific progress. If these systems can really contribute additional understanding of the world–better technology, better science–then that is the sustainable way the world actually gets better and the quality of life increases. We’re not there yet. It might be sooner, it might take a while, but I believe we are going to get there.
