‘Father of Internet’ Warns Sinking Money Into Cool AI May Be Uncool

Vint Cerf, known as the father of the internet, raised a few eyebrows Monday when he urged investors to be cautious about backing businesses built around conversational chatbots.

The bots still make too many mistakes, asserted Cerf, a vice president at Google, which has an AI chatbot called Bard in development.

When he asked ChatGPT, a bot developed by OpenAI, to write a bio of him, it got a bunch of things wrong, he told an audience at the TechSurge Deep Tech summit, hosted by venture capital firm Celesta and held at the Computer History Museum in Mountain View, Calif.

“It’s like a salad shooter. It mixes [facts] together because it doesn’t know better,” Cerf said, according to Silicon Angle.

He advised investors not to back a technology simply because it seems cool or is generating “buzz.”

Cerf also recommended that they take ethical considerations into account when investing in AI.

He said, “Engineers like me should be responsible for trying to find a way to tame some of these technologies, so they’re less likely to cause trouble,” Silicon Angle reported.

Human Oversight Needed

As Cerf points out, some pitfalls exist for businesses champing at the bit to get into the AI race.

Inaccurate information, bias, and offensive results are all potential risks businesses face when using AI, noted Greg Sterling, co-founder of Near Media, a news, commentary, and analysis website.

“The risks depend on the use cases,” Sterling told TechNewsWorld. “Digital agencies overly relying upon ChatGPT or other AI tools to create content or complete work for clients could produce results that are sub-optimal or damaging to the client in some way.”

However, he asserted that checks and balances and strong human oversight could mitigate those risks.
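
To make Sterling’s point concrete, here is a minimal sketch, not drawn from the article, of what “checks and balances and strong human oversight” might look like in practice: AI-generated drafts pass automated screens and then require an explicit human sign-off before publication. Everything in it, from the Draft class to the specific checks, is a hypothetical illustration rather than any vendor’s actual workflow; the chatbot call itself is omitted and stands in for whatever tool an agency uses.

    from dataclasses import dataclass, field

    @dataclass
    class Draft:
        text: str
        flags: list[str] = field(default_factory=list)  # issues raised by automated checks
        approved: bool = False                          # set only by a human reviewer

    def automated_checks(draft: Draft) -> Draft:
        """First-pass screens; cheap, but they catch only some problems."""
        if not draft.text.strip():
            draft.flags.append("empty output")
        if "as an ai language model" in draft.text.lower():
            draft.flags.append("chatbot boilerplate leaked into copy")
        return draft

    def human_review(draft: Draft, reviewer_ok: bool) -> Draft:
        """The human gate: nothing ships without explicit sign-off and a clean flag list."""
        draft.approved = reviewer_ok and not draft.flags
        return draft

    def publish(draft: Draft) -> None:
        if not draft.approved:
            raise RuntimeError(f"blocked: {draft.flags or 'awaiting human approval'}")
        print("published:", draft.text[:60])

    # Hypothetical usage: screen a draft, get a human sign-off, then publish.
    draft = automated_checks(Draft(text="Quarterly market summary for the client..."))
    draft = human_review(draft, reviewer_ok=True)
    publish(draft)

The design choice worth noting is that publish() fails closed: a draft no human has approved cannot ship, which is exactly the mitigation Sterling describes.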

Small businesses that don’t have expertise in the technology need to be careful before taking the AI plunge, cautioned Mark N. Vena, president and principal analyst with SmartTech Research in San Jose, Calif.

“At the very least, any company that incorporates AI into their way of doing business needs to understand the implications of that,” Vena told TechNewsWorld.

“Privacy — especially at the customer level — is obviously a huge area of concern,” he continued. “Terms and conditions for use need to be extremely explicit, as well as liability should the AI capability produce content or take actions that open up the business to potential liability.”

Ethics Need Exploration

While Cerf would like users and developers of AI to take ethics into account when bringing AI products to market, that could be a challenging task.


“Most businesses utilizing AI are focused on efficiency and time or cost savings,” Sterling observed. “For most of them, ethics will be a secondary concern or even a non-consideration.”

There are ethical issues that need to be addressed before AI is widely embraced, added Vena. He pointed to the education sector as an example.

“Is it ethical for a student to submit a paper completely extracted from an AI tool?” he asked. “Even if the content is not plagiarism in the strictest sense because it could be ‘original,’ I believe most schools — especially at the high school and college levels — would push back on that.”

“I’m not sure news media outlets would be thrilled about the use of ChatGPT by journalists reporting on real-time events that often rely on abstract judgments that an AI tool might struggle with,” he said.

“Ethics must play a strong role,” he continued, “which is why there needs to be an AI code of conduct that businesses and even the media should be compelled to agree to, as well as making those compliance terms part of the terms and conditions when using AI tools.”

Unintended Consequences

It’s important for anyone involved in AI to ensure they’re doing what they’re doing responsibly, maintained Ben Kobren, head of communications and public policy at Neeva, an AI-powered search engine based in Washington, D.C.

“A lot of the unintended consequences of previous technologies were the result of an economic model that was not aligning business incentives with the end user,” Kobren told TechNewsWorld. “Companies have to choose between serving an advertiser or the end user. The vast majority of the time, the advertiser would win out.”

“The free internet allowed for unbelievable innovation, but it came at a cost,” he continued. “That cost was an individual’s privacy, an individual’s time, an individual’s attention.”

“The same is going to happen with AI,” he said. “Will AI be applied in a business model that aligns with users or with advertisers?”

Cerf’s pleas for caution appear aimed at slowing the entry of AI products into the market, but that seems unlikely to happen.

“ChatGPT pushed the industry forward much faster than anyone was anticipating,” observed Kobren.

“The race is on, and there’s no going back,” Sterling added.

“There are risks and benefits to quickly bringing these products to market,” he said. “But the market pressure and financial incentives to act now will outweigh ethical restraint. The largest companies talk about ‘responsible AI,’ but they’re forging ahead regardless.”

Transformational Technology

In his remarks at the TechSurge summit, Cerf also reminded investors that not everyone using AI technologies will use them for their intended purposes. They “will seek to do that which is their benefit and not yours,” he reportedly said.

“Governments, NGOs, and industry need to work together to formulate rules and standards, which should be built into these products to prevent abuse,” Sterling observed.

“The challenge and the problem are that the market and competitive dynamics move faster and are much more powerful than policy and governmental processes,” he continued. “But regulation is coming. It’s just a question of when and what it looks like.”

Policymakers have been grappling with AI accountability for a while now, commented Hodan Omaar, a senior AI policy analyst for the Center for Data Innovation, a Washington, D.C., think tank that studies the intersection of data, technology, and public policy.

“Developers should be responsible when they create AI systems,” Omaar told TechNewsWorld. “They should ensure such systems are trained on representative datasets.”

However, she added that it will be the operators of the AI systems who will make the most important decisions about how AI systems impact society.

“It’s clear that AI is here to stay,” Kobren added. “It’s going to transform many facets of our lives, in particular how we access, consume, and interact with information on the internet.”

“It’s the most transformational and exciting technology we’ve seen since the iPhone,” he concluded.

John P. Mello Jr.

John P. Mello Jr. has been an ECT News Network reporter since 2003. His areas of focus include cybersecurity, IT issues, privacy, e-commerce, social media, artificial intelligence, big data and consumer electronics. He has written and edited for numerous publications, including the Boston Business Journal, the Boston Phoenix, Megapixel.Net and Government Security News. Email John.
