Vint Cerf, Google’s Chief Internet Evangelist and a “Father of the Internet,” has a message for business leaders looking to rush into deals around conversational artificial intelligence: “Don’t.”
Cerf urged attendees at a Mountain View conference Monday not to invest in conversational AI just because “it’s a hot topic.” The warning comes amid a surge in the popularity of ChatGPT.
“There is an ethical issue here that I hope some of you will consider,” Cerf told the conference audience Monday. “Everyone’s talking about ChatGPT or Google’s version of it and we know it doesn’t always work the way we’d like it to,” he said, referring to Google’s Bard conversational AI that was announced last week.
His warning comes as big tech companies including Google, Meta and Microsoft grapple with how to remain competitive in the conversational AI space while rapidly improving a technology that is still prone to errors.
Alphabet chairman John Hennessy said earlier in the day that the systems are still a long way from being universally useful, and that they have many issues of imprecision and “toxicity” yet to be solved before they can even go live to be tested by the public.
Cerf has served as Vice President and Chief Internet Evangelist for Google since 2005. He is known as one of the “Fathers of the Internet” for having helped design some of the architectures that laid the foundation of the Internet.
Cerf cautioned against the temptation to invest just because the technology is “really cool, even if it doesn’t always work that well.”
“If you think, man, I can sell this to investors because it’s a hot topic and everybody’s going to throw money at me, don’t do that,” Cerf said, drawing a few laughs from the crowd. “Be thoughtful. We were right that we can’t always predict what’s going to happen with these technologies, and to be honest the biggest problem is people – that’s why we humans haven’t changed in the last 400 years, let alone the last 4,000.”
“They will try to do what benefits them and not you,” Cerf continued, appearing to refer to general human greed. “So we have to remember that and think carefully about how we use these technologies.”
Cerf said he tried asking one of the systems to append an emoji to the end of each sentence. It didn’t, and when he told the system he’d noticed, it apologized but didn’t change its behavior. “We’re a long way from awareness or self-awareness,” he said of the chatbots.
There is a gap between what the technology promises and what it actually does, he said. “That’s the problem… You can’t tell the difference between an eloquently expressed” answer and an accurate one.
Cerf gave an example in which he asked a chatbot to provide a biography of him. The bot presented its answer as factual even though it contained inaccuracies.
“On the technical side, I think engineers like me should be responsible for finding a way to tame some of these technologies so they’re less likely to do damage. And of course, depending on the application, a not-so-good fictional story is one thing. Giving advice to someone… can have medical consequences. Figuring out how to minimize the worst-case potential is very important.”