
EXCLUSIVE INTERVIEW

The AI Revolution Is at a Tipping Point

The AI revolution has arrived with both potentially negative implications and the promise of a better world.

Some technology insiders want to pause the development of artificial intelligence systems before machine learning models stray beyond their human creators’ intended uses. Other computer experts argue that missteps are inevitable and that development must continue.

More than 1,000 technologists and AI luminaries recently signed a petition calling on the computing industry to observe a six-month moratorium on the training of AI systems more powerful than GPT-4. Proponents want AI developers to create safety standards and mitigate the potential risks posed by the most powerful AI technologies.

The nonprofit Future of Life Institute organized the petition, which calls for a near-immediate, public, and verifiable cessation by all key developers. Failing that, it says, governments should step in and institute a moratorium. As of this week, the Future of Life Institute says it has collected more than 50,000 signatures, which are going through its vetting process.

The letter is not an attempt to halt all AI development. Rather, its supporters want developers to step back from a dangerous race “to ever-larger unpredictable black-box models with emergent capabilities.” During the pause, AI labs and independent experts should jointly develop and implement a set of shared safety protocols for advanced AI design and development.

“AI research and development should be refocused on making today’s powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal,” states the letter.

Support Not Universal

It is doubtful that anyone will pause anything, suggested John Bambenek, principal threat hunter at security and operations analytics SaaS company Netenrich. Still, he sees a growing awareness that consideration of the ethical implications of AI projects lags far behind the speed of development.

“I think it is good to reassess what we are doing and the profound impacts it will have, as we have already seen some spectacular fails when it comes to thoughtless AI/ML deployments,” Bambenek told TechNewsWorld.

Anything we do to stop things in the AI space is probably just noise, added Andrew Barratt, vice president at cybersecurity advisory services firm Coalfire. Nor is it possible to impose a pause globally in a coordinated fashion.

“AI will be the productivity enabler of the next couple of generations. The danger will be watching it replace search engines and then become monetized by advertisers who ‘intelligently’ place their products into the answers. What is interesting is that the ‘spike’ in fear seems to have been triggered by the recent amount of attention applied to ChatGPT,” Barratt told TechNewsWorld.

Rather than pause, Barratt recommends encouraging knowledge workers worldwide to look at how they can best use the increasingly consumer-friendly AI tools to improve their productivity. Those who do not will be left behind.

According to Dave Gerry, CEO of crowdsourced cybersecurity company Bugcrowd, safety and privacy should remain top concerns for any tech company, whether or not it is focused on AI. When it comes to AI, ensuring that models have the necessary safeguards, feedback loops, and mechanisms for raising safety concerns is critical.

“As organizations rapidly adopt AI for all of the efficiency, productivity, and democratization of data benefits, it is important to ensure that as concerns are identified, there is a reporting mechanism to surface those, in the same way a security vulnerability would be identified and reported,” Gerry told TechNewsWorld.

Highlighting Legitimate Concerns

In what could be an increasingly typical response to the need for regulating AI, machine learning expert Anthony Figueroa, co-founder and CTO of outcome-driven software development company Rootstrap, supports the regulation of artificial intelligence but doubts that a pause in its development will lead to meaningful change.

Figueroa uses big data and machine learning to help companies create innovative solutions to monetize their services. But he is skeptical that regulators will move at the right speed and understand the implications of what they ought to regulate. He sees the challenge as similar to the one posed by social media two decades ago.

“I think the letter they wrote is important. We are at a tipping point, and we have to start thinking about the progress we did not have before. I just do not think that pausing anything for six months, one year, two years or a decade is feasible,” Figueroa told TechNewsWorld.

Suddenly, AI-powered everything is the universal next big thing. The overnight success of OpenAI’s ChatGPT has made the world sit up and notice the immense power and potential of AI and ML technologies.

“We do not know the implications of that technology yet. What are the dangers of that? We know a few things that can go wrong with this double-edged sword,” he warned.

Does AI Need Regulation?

TechNewsWorld discussed with Anthony Figueroa the issues surrounding the need for developer controls of machine learning and the potential need for government regulation of artificial intelligence.

TechNewsWorld: Within the computing industry, what guidelines and ethics exist for keeping safely on track?

Anthony Figueroa: You need your own set of personal ethics in your head. But even with that, you can have a lot of undesired consequences. What we are doing with this new technology, ChatGPT, for example, is exposing AI to a large amount of data.

That data comes from a variety of public and private sources. We are using a technique called deep learning, which has its foundations in studying how our brain works.

How does that impact the use of ethics and guidelines?

Figueroa: Sometimes, we do not even understand how AI solves a problem in a certain way. We do not understand the thinking process within the AI ecosystem. Add to this a concept called explainability: you must be able to determine how a decision has been made. But with AI, decisions are not always explainable, and they can yield different results.
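
To make that contrast concrete, here is a minimal sketch, assuming scikit-learn as the toolkit (an illustration, not a tool Figueroa cited). A small decision tree is one kind of model whose full decision logic can be printed and audited, unlike the black-box models the open letter warns about:

```python
# Minimal sketch of an "explainable" model. Assumes scikit-learn is
# installed; the dataset and tree depth are arbitrary choices.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()
model = DecisionTreeClassifier(max_depth=2, random_state=0)
model.fit(data.data, data.target)

# Unlike a large neural network, the fitted tree's entire decision
# process can be printed as human-readable if/else rules and audited.
print(export_text(model, feature_names=list(data.feature_names)))
```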

How are those factors different with AI?

Figueroa: Explainable AI is a bit less powerful because you have more restrictions, but then again, you have the ethics question.

For example, consider doctors addressing a cancer case. They have several treatments available. One medication is totally explainable and will give the patient a 60% chance of cure. Then they have a non-explainable treatment that, based on historical data, will have an 80% cure probability, but they do not really know why.

That combination of drugs, together with the patient’s DNA and other factors, affects the outcome. So what should the patient take? You know, it is a tough decision.

How do you define “intelligence” in terms of AI development?

Figueroa: We can define intelligence as the ability to solve problems. Computers solve problems in a totally different way from people. We solve them by combining consciousness and intelligence, which gives us the ability to feel things and solve problems together.

AI is going to solve problems by focusing on the outcomes. A typical example is self-driving cars. What if all the outcomes are bad?

A self-driving car will choose the least bad of all possible outcomes. If the AI has to choose a navigational maneuver that will either kill the “passenger-driver” or kill two people who crossed the road against a red light, you can make the case either way.

You can reason that the pedestrians made a mistake, so the AI will make a moral judgment and say let’s kill the pedestrians. Or the AI can say let’s try to kill the fewest people possible. There is no correct answer.
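
Mechanically, that “least bad” choice is just a minimization over estimated harms. A minimal sketch, with entirely hypothetical harm scores, shows the shape of such a policy and why picking the metric is itself the moral judgment Figueroa describes:

```python
# Sketch of "least bad outcome" selection. The harm scores are
# hypothetical; a real system would estimate them from noisy, uncertain
# signals, and choosing this metric already embeds a moral judgment.
outcomes = {
    "swerve_into_barrier": 1,  # expected fatalities: the passenger-driver
    "continue_straight": 2,    # expected fatalities: the two pedestrians
}

# Pick the maneuver with the lowest expected harm.
choice = min(outcomes, key=outcomes.get)
print(choice)  # -> swerve_into_barrier
```

Swap the metric (fewest deaths, protect the passenger, weight by fault) and the choice changes, which is exactly the ethics question at issue.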

What about the issues surrounding regulation?

Figueroa: I think that AI has to be regulated. But it is not feasible to stop development or innovation until we have a clear assessment of regulation, and we are not going to have that. We do not know exactly what we are regulating or how to apply regulation. So we have to create a new way to regulate.

One of the things that OpenAI’s developers do well is build their technology in plain sight. They could have kept working on it for two more years and come up with much more sophisticated technology. Instead, they decided to expose the current breakthrough to the world so that people can start thinking about regulation and what kind of regulation can be applied to it.

How do you start the assessment process?

Figueroa: It all starts with two questions. One is, what is regulation? It is a directive made and maintained by an authority. Then the second question is, who is the authority — an entity with the power to give orders, make decisions, and enforce those decisions?

Related to those first two questions is a third: who or what are the candidates? We can have a government localized in one country, or supranational entities like the UN that might be powerless in these situations.

Where you have industry self-regulation, you can make the case that it is the best way to go. But you will have a lot of bad actors. You could have professional organizations, but then you get into more bureaucracy. In the meantime, AI is moving at an astonishing speed.

What do you consider the best approach?

Figueroa: It has to be a combination of government, industry, professional organizations, and maybe NGOs working together. But I am not very optimistic, and I do not think they will find a solution good enough for what is coming.

Is there a way to put stopgap safety measures in place if an AI system oversteps guidelines?

Figueroa: You can always do that. But one challenge is not being able to predict all the potential outcomes of these technologies.

Right now, we have all the big guys in the industry — OpenAI, Microsoft, Google — working on more foundational technology. Also, many AI companies are working at another level of abstraction, using the technology being created. But they are the oldest entities.

So you have a generic brain to do whatever you want with. If you have the proper ethics and procedures, you can reduce adverse effects, increase safety, and reduce bias. But you cannot eliminate them entirely. We have to live with that and create some accountability and regulations. If an undesired outcome happens, we must be clear about whose responsibility it is. I think that is key.

What needs to be done now to chart the course for the safe use of AI and ML?

Figueroa: First, accept that we do not know everything and that negative consequences are going to happen. In the long run, the goal is for the positive outcomes to far outweigh the negatives.

Consider that the AI revolution is unpredictable but unavoidable right now. You can make the case that regulations can be put in place, and it could be good to slow down the pace and ensure that we are as safe as possible. Accept that we are going to suffer some negative consequences with the hope that the long-term effects are far better and will give us a much better society.

Jack M. Germain

Jack M. Germain has been an ECT News Network reporter since 2003. His main areas of focus are enterprise IT, Linux, and open-source technologies. He is an esteemed reviewer of Linux distros and other open-source software. In addition, Jack extensively covers business technology and privacy issues, as well as developments in e-commerce and consumer electronics.
