
Experts Say Workplace AI Bans Won’t Work


A recent study released by enterprise security software and services provider BlackBerry revealed that 75% of organizations worldwide are implementing or considering implementing workplace bans on ChatGPT and other generative AI applications. However, experts questioned by TechNewsWorld were skeptical of the effectiveness of such bans.

The study, based on a survey of 2,000 IT decision makers in North America, Europe, and Asia by OnePoll, also found that 61% of organizations deploying or considering bans intend the measures as long-term or permanent, with risks to data security, privacy, and corporate reputation driving decisions to take action.

“Such bans are essentially unenforceable and do little more than make risk managers feel better that liability is being limited,” declared John Bambenek, a principal threat hunter at Netenrich, an IT and digital security operations company in San Jose, Calif.

“What history shows us is that when there are tools available that improve worker productivity or quality of life, workers find a way to use them anyway,” he told TechNewsWorld. “If that usage is outside the controls or visibility of the organization, security teams simply cannot protect the data.”

“Every employee has a smartphone, so bans don’t necessarily work very well,” added J. P. Gownder, a vice president and principal analyst at Forrester Research, a market research company headquartered in Cambridge, Mass.

“The reason employees use these tools is to be more productive, to speed up their efficiency, and to find answers to questions they can’t answer easily,” he told TechNewsWorld.

Gownder recommended that employers provide corporate-approved tools that meet their employees’ needs. “By doing so, they can architect generative AI solutions for the workforce that are secure, that use techniques to minimize hallucination, and that can be audited and traced after use,” he said.
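
To illustrate the kind of traceability Gownder describes, the sketch below logs each prompt and response with a user ID and timestamp so usage can be reviewed after the fact. It is a minimal sketch only: the call_model function is a hypothetical stand-in for whatever corporate-approved backend an organization actually deploys, not any specific vendor API.

```python
# Minimal sketch of an audit-logged wrapper around a generative AI call.
# call_model() is a hypothetical placeholder, not a real vendor API.
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(filename="genai_audit.log", level=logging.INFO,
                    format="%(message)s")


def call_model(prompt: str) -> str:
    # Placeholder for the approved model endpoint (assumption for this sketch).
    return f"[model response to: {prompt[:40]}...]"


def audited_generate(user_id: str, prompt: str) -> str:
    """Call the approved model and write an auditable record of the exchange."""
    response = call_model(prompt)
    logging.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user_id,
        "prompt": prompt,
        "response": response,
    }))
    return response


if __name__ == "__main__":
    print(audited_generate("jdoe", "Summarize Q3 sales figures."))
```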

Blanket Bans on AI Perilous

Greg Sterling, co-founder of Near Media, a news, commentary, and analysis website, pointed out that companies that impose blanket bans on AI do so at their own peril. “They risk losing out on the efficiency and productivity benefits of generative AI,” he told TechNewsWorld.

“They won’t be able to fully ban AI tools,” he said. “AI will be a component of virtually all SaaS tools within a very short period of time.”

“As a practical matter, companies cannot fully control their employees’ device usage,” Sterling added. “They need to better educate employees about the risks associated with the usage of certain apps, rather than simply implement bans.”

Nate MacLeitch, founder and CEO of communication solutions provider QuickBlox, questioned whether companies that told surveyors they planned to impose bans would actually follow through.

“I think 75% is higher than it will be in reality,” he told TechNewsWorld. “What will happen is a lot of the generative AI stuff will be woven into applications and services that organizations will use, although there will definitely be controls someplace.”

“Ultimately, a total ban on a new, growing, beloved technology isn’t going to work completely,” added Roger Grimes, a defense evangelist with KnowBe4, a security awareness training provider in Clearwater, Fla.

“It might actually work in preventing the leak of confidential information, but the technology itself is going to thrive and grow around any bans,” he told TechNewsWorld.

Bans can create a competitive risk to an organization, he contended. “Once competitors start seeing competitive advantages from AI, and they will, the bans will have to come down, or else the organization won’t be surviving or thriving,” he said.

Unworkable Approach

John Gallagher, vice president of Viakoo, a provider of automated IoT cyber hygiene in Mountain View, Calif., maintained that bans on using generative AI in the workplace are unworkable, especially at this stage of the technology’s development when its uses are rapidly changing.

“Should an organization ban use of Bing because its search results incorporate generative AI?” he asked. “Can employees still use Zoom, even though new features incorporate generative AI, or will they be limited to specific versions of the app that do not have those features?”

“Such bans are nice in theory but practically cannot be enforced,” Gallagher told TechNewsWorld.

He maintained that bans could do more harm than good to an organization. “Controls that cannot be enforced or tightly defined are eventually going to be ignored by workers and discredit future efforts to enforce such controls,” he said. “Loosely defined bans should be avoided because of the reputational damage they can result in.”

Why Ban AI?

Barbara J. Evans, a professor of law and engineering at the University of Florida, explained that organizations might impose workplace AI bans for a number of reasons.

“Generative AI software tools — at least at present — have the potential to provide low-quality or untrue information,” she explained to TechNewsWorld. “For consultants, law firms, and other businesses that provide information services to their customers, selling wrong information can lead to lawsuits and reputational harms.”

Evans noted that another significant concern is the privacy and security of proprietary and confidential business information. “When posing questions to a generative AI tool, employees might reveal business secrets or confidential information about their customers,” she said.

“When you read the privacy policies for these tools,” Evans added, “you may find that by using the tool, you are agreeing that the tool developer can use whatever you reveal to them to further refine their model or for other uses.”

She contended that companies may also ban AI as a matter of employee relations. “People are concerned about being replaced by AI, and banning the use of AI in the workplace might be a good way to boost employee morale and send a signal that ‘we aren’t looking to replace you with a robot — at least, not this generation of robots,'” Evans explained.

Making AI Safe

Organizations are banning AI in the workplace out of an abundance of caution, but the technology’s benefits should be weighed alongside its risks, maintained Jennifer Huddleston, a technology policy research fellow at the Cato Institute, a Washington, D.C. think tank.

“New technologies like AI can help employees improve their efficiency and productivity, but in at least some cases still require humans to check the accuracy of their results or outputs,” she told TechNewsWorld.

“Rather than a flat-out ban on the use of a technology, organizations may want to consider if there are other ways that they can address their specific concerns while still empowering employees to use the technology for beneficial purposes,” Huddleston said.
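
One such alternative control, offered here only as an illustrative sketch, is a gateway that scrubs obviously sensitive material from prompts before they leave the organization. The redaction patterns below are hypothetical examples, not a complete data-loss-prevention policy.

```python
# Sketch of a prompt-redaction gateway: strip assumed-sensitive patterns
# (emails, API-key-like strings, SSNs) before a prompt is sent to any
# external generative AI service. Patterns are illustrative only.
import re

REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "API_KEY": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}


def redact(prompt: str) -> str:
    """Replace matches of each sensitive pattern with a labeled placeholder."""
    for label, pattern in REDACTION_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED {label}]", prompt)
    return prompt


if __name__ == "__main__":
    raw = "Reply to jane.doe@example.com; our key is sk-abcdefghijklmnop1234."
    print(redact(raw))
```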

Evans added that, ultimately, humans may have to harness AI to help them regulate AI. “At some point, we humans may not be fast enough or smart enough to catch the AI’s errors,” she said. “Perhaps the future lies in developing AI tools that can help us quickly fact-check the outputs from other AI tools — an AI peer-review system that harnesses AI tools to peer-review each other.”

“But if 10 generative AI tools all agree that something is true, will that give us confidence that it is true?” she asked. “What if they are all hallucinating?”
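
As a rough illustration of the cross-checking Evans envisions, the sketch below asks several models the same question and accepts an answer only when a clear majority agree. The models here are hypothetical stubs, and, as she cautions, agreement alone cannot guarantee that the consensus answer is true.

```python
# Illustrative sketch of majority-vote cross-checking across several
# hypothetical models. Agreement is a heuristic, not proof of truth.
from collections import Counter


def model_a(question): return "Paris"
def model_b(question): return "Paris"
def model_c(question): return "Lyon"   # a dissenting (or hallucinating) model

MODELS = {"a": model_a, "b": model_b, "c": model_c}


def cross_check(question: str, threshold: float = 0.66):
    """Return (answer, agreement ratio) if consensus meets threshold, else (None, ratio)."""
    answers = [fn(question) for fn in MODELS.values()]
    answer, count = Counter(answers).most_common(1)[0]
    ratio = count / len(answers)
    return (answer, ratio) if ratio >= threshold else (None, ratio)


if __name__ == "__main__":
    print(cross_check("What is the capital of France?"))
```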

John P. Mello Jr.

John P. Mello Jr. has been an ECT News Network reporter since 2003. His areas of focus include cybersecurity, IT issues, privacy, e-commerce, social media, artificial intelligence, big data and consumer electronics. He has written and edited for numerous publications, including the Boston Business Journal, the Boston Phoenix, Megapixel.Net and Government Security News.
