IT and Security Leaders Baffled by AI, Unsure About Security Risks: Study

Employees in nearly three out of four organizations worldwide are using generative AI tools frequently or occasionally, but despite the security threats posed by unchecked use of the apps, employers don’t seem to know what to do about it.

That was one of the main takeaways from a survey of 1,200 IT and security leaders around the world, released Tuesday by ExtraHop, a Seattle-based provider of cloud-native network detection and response solutions.

While 73% of the IT and security leaders surveyed acknowledged that their workers use generative AI tools with some regularity, the ExtraHop researchers reported that fewer than half of their organizations have policies governing AI use (46%) or training programs on the safe use of the apps (42%).

Most organizations are taking the benefits and risks of AI technology seriously; only 2% say they’re doing nothing to oversee employees’ use of generative AI tools. Still, the researchers argued, those efforts are not keeping pace with adoption rates, and the effectiveness of some measures, such as bans, may be questionable.

According to the survey results, nearly a third of respondents (32%) indicate that their organization has banned generative AI. Yet, only 5% say employees never use AI or large language models at work.

“Prohibition rarely has the desired effect, and that seems to hold true for AI,” the researchers wrote.

Limit Without Banning

“While it’s understandable why some organizations are banning the use of generative AI, the reality is generative AI is accelerating so fast that, very soon, banning it in the workplace will be like blocking employee access to their web browser,” said Randy Lariar, practice director of big data, AI and analytics at Optiv, a cybersecurity solutions provider, headquartered in Denver.

“Organizations need to embrace the new technology and shift their focus from preventing it in the workplace to adopting it safely and securely,” he told TechNewsWorld.

Patrick Harr, CEO of SlashNext, a network security company in Pleasanton, Calif., agreed. “Limiting the use of open-source generative AI applications in an organization is a prudent step, which would allow for the use of important tools without instituting a full ban,” he told TechNewsWorld.

“As the tools continue to offer enhanced productivity,” he continued, “executives know it is imperative to have the right privacy guardrails in place to make sure users are not sharing personally identifying information and that private data stays private.”
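
Harr’s point about privacy guardrails can be made concrete with a small illustration. The sketch below shows one simple form such a guardrail could take: scrubbing obvious personally identifiable information from a prompt before it is sent to an external generative AI service. The patterns and function names are illustrative assumptions rather than any vendor’s actual product; production guardrails typically pair much broader detection with policy enforcement and logging.

```python
import re

# Minimal, illustrative sketch of a prompt guardrail: redact obvious PII
# before a prompt leaves the organization for an external gen AI service.
# Patterns and names here are assumptions for illustration only; real
# deployments rely on dedicated DLP tooling with far broader coverage.

PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b(?:\+?1[\s.-]?)?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}\b"),
}

def redact_pii(prompt: str) -> str:
    """Replace detected PII with labeled placeholders."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED {label}]", prompt)
    return prompt

if __name__ == "__main__":
    raw = "Summarize this complaint from jane.doe@example.com, SSN 123-45-6789."
    print(redact_pii(raw))
    # -> Summarize this complaint from [REDACTED EMAIL], SSN [REDACTED SSN].
```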


CISOs and CIOs must balance the need to restrict sensitive data from generative AI tools with the need for businesses to use these tools to improve their processes and increase productivity, added John Allen, vice president of cyber risk and compliance at Darktrace, a global cybersecurity AI company.

“Many of the new generative AI tools have subscription levels that have enhanced privacy protection so that the data submitted is kept private and not used in tuning or further developing the AI models,” he told TechNewsWorld.

“This can open the door for covered organizations to leverage generative AI tools in a more privacy-conscious way,” he continued, “however, they still need to ensure that the use of protected data meets the relevant compliance and notification requirements specific to their business.”

Steps To Protect Data

In addition to the generative AI usage policies that businesses are putting in place, Allen noted, AI companies are also protecting sensitive data with security controls such as encryption and are obtaining certifications such as SOC 2, an auditing standard for verifying that service providers securely manage customer data.

However, he pointed out that there remains a question about what happens when sensitive data finds its way into a model — either through a malicious breach or the unfortunate missteps of a well-intentioned employee.

“Most of the AI companies provide a mechanism for users to request the deletion of their data,” he said, “but questions remain about issues like if or how data deletion would impact any learning that was done on the data prior to deletion.”

ExtraHop researchers also found that an overwhelming majority of respondents (nearly 82%) said they were confident their organization’s current security stack could protect them against threats from generative AI tools. Yet, the researchers pointed out, 74% plan to invest in gen AI security measures this year.

“Hopefully, those investments don’t come too late,” the researchers quipped.

Needed Insight Lacking

“Organizations are overconfident when it comes to protecting against generative AI security threats,” ExtraHop Senior Sales Engineer Jamie Moles told TechNewsWorld.

He explained that the business sector has had less than a year to fully weigh the risks against the rewards of using generative AI.

“With less than half of respondents making direct investments in technology that helps monitor the use of generative AI, it’s clear a majority may not have the needed insight into how these tools are being used across an organization,” he observed.

Moles added that with only 42% of organizations training users on the safe use of these tools, additional security risks are created, since misuse can expose sensitive information.

“That survey result is likely a manifestation of the respondents’ preoccupation with the many other, less sexy, battlefield-proven techniques bad actors have been using for years that the cybersecurity community has not been able to stop,” said Mike Starr, CEO and founder of trackd, a provider of vulnerability management solutions in Reston, Va.

“If that same question were asked of them with respect to other attack vectors, the answer would imply much less confidence,” he asserted.

Government Intervention Wanted

Starr also pointed out that there have been very few — if any — documented episodes of security compromises that can be traced directly to the use of generative AI tools.

“Security leaders have enough on their plates combating the time-worn techniques that threat actors continue to use successfully,” he said.

“The corollary to this reality is that the bad guys aren’t exactly being forced to abandon their primary attack vectors in favor of more innovative methods,” he continued. “When you can run the ball up the middle for 10 yards a clip, there’s no motivation to work on a double-reverse flea flicker.”

A sign that IT and security leaders may be desperate for guidance in the AI domain is the survey finding that 90% of the respondents said they wanted the government involved in some way, with 60% in favor of mandatory regulations and 30% in support of government standards that businesses can adopt at their discretion.

“The call for government regulation speaks to the uncharted territory we’re in with generative AI,” Moles explained. “With generative AI still so new, businesses aren’t quite sure how to govern employee use, and with clear guidelines, business leaders may feel more confident when implementing governance and policies for using these tools.”

John P. Mello Jr.

John P. Mello Jr. has been an ECT News Network reporter since 2003. His areas of focus include cybersecurity, IT issues, privacy, e-commerce, social media, artificial intelligence, big data and consumer electronics. He has written and edited for numerous publications, including the Boston Business Journal, the Boston Phoenix, Megapixel.Net and Government Security News.
