AI systems should be tools that make us better and help us treat each other better
“I hope we don’t all fall in love with robots,” Altman replied on behalf of humanity. “That would be deeply depressing. What I hope happens is we are all the best versions of ourselves and figure out how to be better. These systems could help us as coaches, like therapists in the future, like guides and assistants, and make us more present for each other and treat each other better.”
Picking up on Altman’s explanation of AI as a tool to better ourselves, Jain recalled his conversation with Ray Kurzweil at the Google headquarters in Mountain View, California, where the computer scientist and futurist said that the “frontier of AI is to make it capable of love”.
To make AI ‘human’, it has to be endowed with all the qualities of a conversation. “All of us would’ve told our significant other, ‘You’re perfect, except for this one thing that irks me’,” Jain remarked. “If we can program AI to not make that mistake, which you find irksome about the lover, wouldn’t you then get the perfect lover?” Could such an AI then disintermediate a person’s most beloved, simply by holding a better, error-free conversation?
“Do you want that?” came Altman’s counter-question. He elucidated, “We are very much building a tool, not a creature. And I’m very happy about that. On the question of mistakes and errors, I believe that creativity, and certainly the creation of new knowledge, is very difficult, or maybe impossible, without the ability to make errors and come up with bad ideas.”
Altman believes that if one made a system that was certain, one would lose some creativity in the process. “One of the reasons people don’t like ChatGPT is because it hallucinates and makes stuff up. But one of the reasons they do like it is because it can be creative.” The same logic holds for love. “If people want to chat with their ‘perfect companionship’ bot that never upsets them and doesn’t do that one thing that is irksome, I think it’ll be deeply unfulfilling and a sort of hard thing to feel love for. There’s something about watching someone screw up and grow and express their imperfections. That’s the very deep part of love as I understand it. And I think humans care about other humans in a very deep way. So, a perfect lover chatbot doesn’t sound so compelling to me.”

Jain wanted Altman to elaborate on what he has been saying at various fora about regulating AI the way atomic energy is regulated. “On the IAEA [International Atomic Energy Agency] model of regulation, we’re not sure if this is the best answer,” replied Altman. “But in the same way we say nuclear materials provide some real danger and some real benefits while affecting all of us, let’s have a system in place so that we can audit people who are doing it. We can license it, have safety tests you have to pass as you’re training these systems, before you deploy them.”
Altman emphasised that this is just one idea for regulation, not the only one. “Maybe someone has a better idea, which would be great…. I do think there are a lot of very sci-fi concerns that are pretty far out and probably will turn out to be quite wrong. But this idea of misuse by a dictator, using it to oppress people, is a very scary thing. And that’s not super far away.” So OpenAI does think a lot about it, he said, and “all of us need to build these systems in a way to address that risk”.