
Think Tank Calls for Policymakers To Grow AI, Not Choke It

Policymakers should be fostering the use of artificial intelligence in making workforce decisions, not inhibiting it, according to the Center for Data Innovation.

In a report released Monday, the global think tank called on governments to encourage AI adoption and establish guardrails to limit harm.

“The dominant narrative around AI is one of fear, so policymakers need to actively support the technology’s growth,” the report’s author, policy analyst Hodan Omaar, said in a statement. “It is critical for lawmakers to avoid intervening in ways that are ineffective, counterproductive, or harmful to innovation.”

The report explained that AI-enabled tools can support workforce decisions by helping businesses manage their existing employees, as well as recruit and hire new ones.

They can also boost employer productivity, such as by reducing the time needed to hire new employees, increasing retention rates, and improving communication and team dynamics among workers.

In addition, the report continued, these tools may help employers reduce human biases when hiring, deciding on compensation, and making other employment-related decisions.

AI Concerns To Address

The report maintained that to deploy AI for workforce decisions successfully, employers will need to address potential concerns.

Some of those concerns include ensuring that:

  • the increased use of AI does not exacerbate existing biases and inequalities;
  • the metrics AI tools produce are fair and accurate;
  • increased monitoring of employees is not unduly invasive; and
  • the processing of biometrics does not reveal sensitive personal information about employees that they may wish to keep private, such as data about their emotions, health, or disabilities.

To address these concerns, the report continued, several policymakers and advocacy groups have called for new public policies that apply the “precautionary principle” to AI, which says that government should limit the use of new technology until it is proven safe.

“In short, they favor restricting the use of AI because they believe it is better to be safe than sorry,” Omaar wrote. “But these policies do more harm than good because they make it more expensive to develop AI, limit the testing and use of AI, and even ban some applications, thereby reducing productivity, competitiveness, and innovation.”

“Instead,” she noted, “policymakers should pave the way for widespread adoption of AI in the workplace while building guardrails, where necessary, to limit harms.”

Employer and Employee Benefits

Artificial intelligence can benefit both employers and employees, added Julian Sanchez, a senior fellow at the Cato Institute, a public policy think tank in Washington, D.C.

“Ideally, AI can help businesses make both more efficient decisions — by synthesizing and analyzing much more granular data than human managers are able to process — and also more fair decisions, by providing a uniform mechanism for evaluating employees that can help filter out the biases of individual managers,” he told TechNewsWorld.

“Plenty of real-world applications of workplace AI are beneficial for employees as well — finding ways to reduce on-the-job injuries or burnout, not just ramp up productivity,” he added.

AI systems can become a problem when people grow too dependent on them, noted Craig Le Clair, a vice president and principal analyst at Forrester Research.

“The system becomes a black box to humans,” he told TechNewsWorld. “They can’t explain how a decision was made, so they don’t know if it was biased or not.”
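To make the black-box concern concrete, here is a minimal sketch, using scikit-learn and entirely synthetic data, of one common technique for probing an opaque model: permutation importance, which estimates how much each input feature drives the model’s predictions. The feature names here are invented for illustration, not drawn from any real HR system.

```python
# A hedged sketch: probing a "black box" model with permutation importance.
# All data is synthetic; the feature names are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(1)
n = 2000

# Synthetic employee features: tenure, performance score, and pure noise.
X = rng.normal(0, 1, (n, 3))
# The "true" outcome depends only on the first two features.
y = (X[:, 0] + 2 * X[:, 1] + rng.normal(0, 0.5, n)) > 0

model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle each feature in turn and measure how much accuracy drops;
# a large drop means the model leans heavily on that feature.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in zip(["tenure", "performance", "noise"], result.importances_mean):
    print(f"{name}: {score:.3f}")
```

Checks like this do not explain any single decision, but they give humans at least a first look at what an otherwise opaque system is actually weighing.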

Algorithm Bias

Sanchez explained that algorithms can acquire biases in a number of ways: they can replicate biases in the data they’re trained on, and they can be insensitive to circumstances a human would be aware of.

“When that’s the case, the bias gets scaled across the entire firm or even a whole sector if a particular tool is widespread — though when biases are identified, they’re usually easier to correct than their human counterparts,” he said.
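As a minimal sketch of the mechanism Sanchez describes, again with synthetic data, the following trains a simple scikit-learn model on historical hiring decisions that systematically penalized one group; the trained model then reproduces that penalty for otherwise identical candidates. The features and the size of the penalty are invented for illustration.

```python
# A hedged sketch: a model trained on biased historical hiring data
# replicates that bias. All data is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Two inputs: a job-relevant skill score and a group label (0 or 1).
skill = rng.normal(0, 1, n)
group = rng.integers(0, 2, n)

# Historical decisions rewarded skill but also penalized group 1 --
# the bias baked into the training data.
hired = (skill - 0.8 * group + rng.normal(0, 0.5, n)) > 0

X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, hired)

# Two candidates with identical skill but different groups:
# the model has learned the historical penalty.
print(model.predict_proba([[0.5, 0], [0.5, 1]])[:, 1])
```

On the other side of Sanchez’s point, a bias that lives in a single model like this can at least be measured and corrected in one place, whereas the biases of many individual managers cannot.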

“The ability to process granular data can also be a double-edged sword because it enables a level of minute monitoring that can feel dehumanizing,” Sanchez continued.

“It can feel like important decisions about your career depend on an opaque algorithm that may not be intelligible to the employee in the way we expect supervisors’ decisions to be,” he explained.

John Carey, managing director in the technology practice at AArete, a global management consulting firm, added that AI can’t easily match human experience or instinct in dealing with employees, or ensure that they are treated with empathy.

“We, as humans, can detect far more about a behavioral issue from a conversation rather than relying on just data,” he told TechNewsWorld. “So it’s important that AI is used as a support tool rather than be relied on exclusively.”

Data Quality Important

Jim McGregor, founder and principal analyst of Tirias Research, a high-tech research and advisory firm in Phoenix, Ariz., explained that how an AI tool performs depends on the quality of the data it’s given and on any bias in that data.

“A lot of the information going into AI systems will be coming from employees,” he told TechNewsWorld. “Everyone, no matter who you are, has biases. It’s hard to break those biases.”

“AI is a tool,” he said. “It should not be the only tool that any employer uses for hiring, firing, or advancing people.”

“AI has the potential to improve workforce decisions,” he added, “but you have to be conscious of its upside and downside when using it as a tool.”

Advice for Policymakers

In her report, Omaar proposed eight principles to guide policymakers when making decisions about AI:

  • Make government an early adopter of AI for workforce decisions and share best practices;
  • Ensure data protection laws support the adoption of AI for workforce decisions;
  • Ensure employment nondiscrimination laws apply regardless of whether an organization uses AI;
  • Create rules to safeguard against new privacy risks in workforce data;
  • Address concerns about AI systems for workforce decisions at the national level;
  • Enable the global free flow of employee data;
  • Do not regulate the input of AI systems used for workforce decisions; and
  • Focus regulation on employers, not AI vendors.

Light Touch

Sanchez endorsed the light government touch advocated in Omaar’s recommendations.

“I’m inclined to agree with the CDI report that we probably don’t need AI-specific rules in most cases, though it may take some time to figure out how to apply existing rules to decisions made with AI assistance,” he said.

“If there are things we want to require or forbid employers to do, then at some level it shouldn’t matter whether they do those things with microprocessors or human brains — trying to directly regulate software design is usually a mistake,” he observed.

“Anyone who thinks they can regulate AI is foolish,” added McGregor.

“If you start slapping regulations on it, you’re going to make it ineffective and limit innovation,” he said. “You’re going to have more downside than upside.”

John P. Mello Jr.

John P. Mello Jr. has been an ECT News Network reporter since 2003. His areas of focus include cybersecurity, IT issues, privacy, e-commerce, social media, artificial intelligence, big data and consumer electronics. He has written and edited for numerous publications, including the Boston Business Journal, the Boston Phoenix, Megapixel.Net and Government Security News. Email John.
