OpenAI Exec Admits AI Needs Regulation

OpenAI CTO Mira Murati stoked the controversy over government oversight of artificial intelligence Sunday when she conceded in an interview with Time magazine that the technology needed to be regulated.

“It’s important for OpenAI and companies like ours to bring this into the public consciousness in a way that’s controlled and responsible,” Murati told Time. “But we’re a small group of people, and we need a ton more input in this system and a lot more input that goes beyond the technologies — definitely regulators and governments and everyone else.”

Asked if government involvement at this stage of AI’s development might hamper innovation, she responded: “It’s not too early. It’s very important for everyone to start getting involved, given the impact these technologies are going to have.”

Since the market provides incentives for abuse, some regulation is probably necessary, agreed Greg Sterling, co-founder of Near Media, a news, commentary, and analysis website.

“Thoughtfully constructed disincentives against unethical behavior can minimize the potential abuse of AI,” Sterling told TechNewsWorld, “but regulation can also be poorly constructed and fail to stop any of that.”

He acknowledged that too early or too heavy-handed regulation could harm innovation and limit the benefits of AI.

“Governments should convene AI experts and industry leaders to jointly lay out a framework for potential future regulation. It should also probably be international in scope,” Sterling said.

Consider Existing Laws

Artificial intelligence, like many technologies and tools, can be used for a wide variety of purposes, explained Jennifer Huddleston, a technology policy research fellow at the Cato Institute, a Washington, D.C. think tank.

Many of these uses are positive, and consumers are already encountering beneficial uses of AI, such as real-time translation and better traffic navigation, she continued. “Before calling for new regulations, policymakers should consider how existing laws around issues such as discrimination may already address concerns,” Huddleston told TechNewsWorld.

Artificial intelligence should be regulated, but how it’s already regulated needs to be considered, too, added Mason Kortz, a clinical instructor at the Cyberlaw Clinic at Harvard Law School in Cambridge, Mass.

“We have a lot of general regulations that make things legal or illegal, regardless of whether they’re done by a human or an AI,” Kortz told TechNewsWorld.

“We need to look at the ways existing laws already suffice to regulate AI, and what are the ways in which they don’t and need to do something new and be creative,” he said.

For example, he noted that there isn’t a general regulation about autonomous vehicle liability. However, if an autonomous vehicle causes a crash, there are still plenty of areas of law to fall back on, such as negligence law and product liability law. Those are potential ways of regulating that use of AI, he explained.

Light Touch Needed

Kortz conceded, however, that many existing rules come into play after the fact. “So, in a way, they’re kind of second best,” he said. “But they’re an important measure to have in place while we develop regulations.”

“We should try to be proactive in regulation where we can,” he added. “Recourse through the legal system occurs after a harm has occurred. It would be better if the harm never occurred.”

However, Mark N. Vena, president and principal analyst at SmartTech Research in San Jose, Calif., argues that heavy regulation could suppress the burgeoning AI industry.

“At this early stage, I’m not a big fan of government regulation of AI,” Vena told TechNewsWorld. “AI can have lots of benefits, and government intervention could end up stifling them.”

That kind of stifling effect on the internet was averted in the 1990s, he maintained, through “light touch” regulation like Section 230 of the Communications Decency Act, which gave online platforms immunity from liability for third-party content appearing on their websites.

Kortz believes, though, that government can put reasonable brakes on something without shutting down an industry.

“People have criticisms of the FDA, that it’s prone to regulatory capture, that it’s run by pharmaceutical companies, but we’re still in a better world than pre-FDA when anyone could sell anything and put anything on a label,” he said.

“Is there a good solution that captures only the good aspects of AI and stops all the bad ones? Probably not,” Vena continued, “but some structure is better than no structure.”

“Letting good AI and bad AI duke it out isn’t going to be good for anyone,” he added. “We can’t guarantee the good AIs are going to win that fight, and the collateral damage could be pretty significant.”

Regulation Without Strangulation

There are a few things policymakers can do to regulate AI without hampering innovation, observed Daniel Castro, vice president of the Information Technology & Innovation Foundation, a research and public policy organization in Washington, D.C.

“One is to focus on specific use cases,” Castro told TechNewsWorld. “For example, regulating self-driving cars should look different than regulating AI used to generate music.”

“Another is to focus on behaviors,” he continued. “For example, it is illegal to discriminate when hiring employees or leasing apartments — whether a human or an AI system makes the decision should be irrelevant.”

“But policymakers should be careful not to hold AI to a different standard unfairly or to put in place regulations that do not make sense for AI,” he added. “For example, some of the safety requirements in today’s vehicles, like steering wheels and rearview mirrors, do not make sense for autonomous vehicles with no passengers or drivers.”

Vena would like to see a “transparent” approach to regulation.

“I’d prefer regulation requiring AI developers and content producers to be entirely transparent around the algorithms they are utilizing,” he said. “They could be reviewed by a third-party entity composed of academics and some business entities.”

“Being transparent around algorithms and the sources of content AI tools are derived from should encourage balance and mitigate abuses,” he asserted.

Plan for Worst-Case Scenarios

Kortz noted that many people believe technology is neutral.

“I don’t think technology is neutral,” he said. “We have to think about bad actors. But we have to also think of poor decisions by the people who create these things and put them out in the world.”

“I would encourage anyone developing AI for a particular use case to think not only about what their intended use is, but also what is the worst possible use for their technology,” he concluded.

John P. Mello Jr.

John P. Mello Jr. has been an ECT News Network reporter since 2003. His areas of focus include cybersecurity, IT issues, privacy, e-commerce, social media, artificial intelligence, big data and consumer electronics. He has written and edited for numerous publications, including the Boston Business Journal, the Boston Phoenix, Megapixel.Net and Government Security News. Email John.
