
Release AI From the Shadows, Argues Wharton Prof

The undisclosed use of generative AI can raise questions about employee integrity and productivity, potentially impacting organizational trust and performance.

Workers are using AI tools to boost their individual productivity but keeping that activity quiet, which could be hurting the overall performance of their organizations, a professor at the Wharton School of the University of Pennsylvania contends in a blog post published Sunday.

“Today, billions of people have access to large language models (LLMs) and the productivity benefits that they bring,” Ethan Mollick wrote in his One Useful Thing blog. “And, from decades of research in innovation studying everyone from plumbers to librarians to surgeons, we know that, when given access to general-purpose tools, people figure out ways to use them to make their jobs easier and better.”

“The results are often breakthrough inventions, ways of using AI that could transform a business entirely,” he continued. “People are streamlining tasks, taking new approaches to coding, and automating time-consuming and tedious parts of their jobs. But the inventors aren’t telling their companies about their discoveries; they are the secret cyborgs, machine-augmented humans who keep themselves hidden.”

Mollick maintained that the traditional ways organizations respond to new technologies don’t work well for AI and that the only way for an organization to benefit from AI is to enlist the help of its “cyborgs” while encouraging more workers to use AI.

That will require a major change in how organizations operate, Mollick contended. Those changes include corralling as much of the organization as possible into the AI agenda, decreasing the fears associated with AI use, and providing incentives for AI users to come forward and encourage others to use AI.

Companies also need to act quickly on some basic questions, Mollick added. What do you do with the productivity gains you might achieve? How do you reorganize work and kill processes made hollow or useless by AI? How do you manage and control work that might include risks of AI-driven hallucination and potential IP concerns?

Disrupting Business

As beneficial as bringing AI out of the shadows may be, it could be very disruptive to an organization.

“AI can have a 30% to 80% positive impact on performance. Suddenly, a marginal employee with generative AI becomes a superstar,” observed Rob Enderle, president and principal analyst of the Enderle Group, an advisory services firm in Bend, Ore.

“If generative AI isn’t disclosed, it can raise questions about whether an employee is cheating or whether they were slacking off earlier,” he told TechNewsWorld.

“The secrecy part isn’t as disruptive as it is potentially problematic for both the manager and the employee, particularly if the company hasn’t yet set policy on AI use and disclosure,” Enderle added.

AI use could generate an unrealistic view of an employee’s knowledge or capability that could lead to dangerous expectations down the road, said Shawn Surber, senior director of technical account management at Tanium, a provider of converged endpoint management in Kirkland, Wash.

He cited the example of an employee who uses an AI to write an extensive report on a subject for which they have no deep expertise. “The organization may see them as an expert, but really, they just used an AI to write a single report,” he told TechNewsWorld.

Problems can also arise if an employee is using AI to produce code or process documentation that feeds directly into an organization’s systems, Surber added. “Large language model AIs are great at generating voluminous amounts of information, but if it’s not carefully checked, it could create system problems or even legal problems for the organization,” he explained.
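Surber’s warning points to an obvious guardrail: gate model output before it touches anything downstream. The Python sketch below is a minimal illustration, not anything described in the article; the vet_ai_output helper, the schema keys, and the limits are all hypothetical stand-ins for an organization’s actual config format and review process.

```python
import json

# Hypothetical schema for illustration; a real gate would match the
# organization's actual configuration format and review workflow.
REQUIRED_KEYS = {"service", "timeout_seconds"}

def vet_ai_output(raw_text: str) -> dict:
    """Reject AI-generated configuration that fails basic structural
    checks before it can reach downstream systems."""
    try:
        data = json.loads(raw_text)
    except json.JSONDecodeError as err:
        raise ValueError(f"AI output is not valid JSON: {err}")

    missing = REQUIRED_KEYS - data.keys()
    if missing:
        raise ValueError(f"AI output is missing required keys: {sorted(missing)}")

    if not 1 <= data["timeout_seconds"] <= 300:
        raise ValueError("timeout_seconds outside the allowed 1-300 range")

    return data

# Anything that passes still goes to a human reviewer; the gate only
# filters out obviously broken output before it wastes anyone's time.
config = vet_ai_output('{"service": "billing", "timeout_seconds": 30}')
print(config)
```

A gate like this catches only structural breakage; it is a complement to, not a substitute for, the careful human checking Surber describes.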

Mindless AI Usage

“AI, when used well, will give workers a productivity boost, which isn’t inherently disruptive,” maintained John Bambenek, principal threat hunter at Netenrich, an IT and digital security operations company in San Jose, Calif.

“It is the mindless use of AI by workers that can be disruptive, simply not reviewing the output of these tools and filtering out nonsensical responses,” he told TechNewsWorld.

Understanding the logic behind generative AI results often requires specialized knowledge, added Craig Jones, vice president of security operations at Ontinue, a managed detection and response provider in Redwood City, Calif.

“If decisions are blindly driven by these results, it can lead to misguided strategies, biases, or ineffective initiatives,” he told TechNewsWorld.

Jones asserted that the clandestine use of AI could cultivate an environment of inconsistency and unpredictability within an organization. “For instance,” he said, “if an individual or a team harnesses AI to streamline tasks or augment data analysis, their performance could significantly overshadow those not employing similar resources, creating unequal performance outcomes.”

Additionally, he continued, AI utilized without managerial awareness can raise serious ethical and legal quandaries, particularly in sectors like human resources or finance. “Unregulated AI applications can inadvertently perpetuate biases or infringe on regulatory requirements.”

Banning AI Not a Solution

As disruptive as AI might be, banning its use by workers is probably not the best course of action. Because “AI provides a 30% to 80% increase in productivity,” Enderle reiterated, “banning the tool would, in effect, make the company unable to compete with peers that are embracing and using the technology properly.”

“It is a potent tool,” he added. “Ignore it at your peril.”

An outright ban might not be the right way to go, but setting guidelines for what can and can’t be done with public AI is appropriate, noted Jack E. Gold, founder and principal analyst at J. Gold Associates, an IT advisory company in Northborough, Mass.

“We did a survey of business users asking if their companies had a policy on the use of public AI, and 75% of the companies said no,” he told TechNewsWorld.

“So the first thing you want to do if you’re worried about your information leaking out is set a policy,” he said. “You can’t yell at people for not following policy if there isn’t one.”

Data leakage can be a considerable security risk when using generative AI applications. “A lot of the security risks from AI come from the information people put into it,” explained Erich Kron, security awareness advocate at KnowBe4, a security awareness training provider in Clearwater, Fla.

“It’s important to understand that information is essentially being uploaded to these third parties and processed through the AI,” he told TechNewsWorld. “This could be a significant issue if people aren’t thinking about sensitive information, PII, or intellectual property they’re providing to the AI.”
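One way to act on Kron’s point is to screen outbound prompts before they ever leave the network for a third-party service. The Python sketch below is purely illustrative; the screen_prompt helper and its regex patterns are hypothetical, and a real deployment would rely on vetted data-loss-prevention tooling that is far more thorough.

```python
import re

# Purely illustrative patterns; production systems would use a proper
# data-loss-prevention tool rather than hand-rolled regexes.
PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def screen_prompt(prompt: str) -> str:
    """Block a prompt from reaching a third-party AI service if it
    appears to contain sensitive identifiers."""
    hits = [name for name, pattern in PII_PATTERNS.items() if pattern.search(prompt)]
    if hits:
        raise ValueError(f"Prompt blocked, possible PII found: {', '.join(hits)}")
    return prompt

print(screen_prompt("Summarize the themes in our Q3 support tickets."))  # passes
# screen_prompt("Customer SSN is 123-45-6789")  # would raise ValueError
```

Even a crude filter like this underscores Kron’s larger point: once data is in the prompt, it has already been handed to a third party, so the checking has to happen before the request is sent.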

In his blog, Mollick noted that AI is here and already having an impact in many industries and fields. “So, prepare to meet your cyborgs,” he wrote, “and start to work with them to create a new and better organization for our AI-haunted age.”

John P. Mello Jr.

John P. Mello Jr. has been an ECT News Network reporter since 2003. His areas of focus include cybersecurity, IT issues, privacy, e-commerce, social media, artificial intelligence, big data and consumer electronics. He has written and edited for numerous publications, including the Boston Business Journal, the Boston Phoenix, Megapixel.Net and Government Security News.

