ANALYSIS

Are Gen AI Benefits Worth the Risk?

Generative AI is revolutionizing fields from content creation to medical research. While promising, it raises ethical concerns, including the spread of misinformation, the creation of deep fakes, and job displacement.

The business and content production worlds quickly embraced tools like ChatGPT and DALL-E from OpenAI. But what exactly is generative AI, how does it operate, and why is it such a hot and controversial topic?

Simply put, gen AI is a branch of artificial intelligence that uses computer algorithms to produce outputs that mimic human-created material, including text, photos, graphics, music, computer code, and other types of media.

With gen AI, algorithms learn from training data that contains examples of the intended output. By examining the patterns and structures in the training data, gen-AI models can create new material that shares traits with the original input data. In this way, gen AI can produce content that seems genuine and human-like.

How Gen AI Is Implemented

Gen AI is built on machine learning techniques based on neural networks, which are loosely modeled on the workings of the human brain. During training, large volumes of data are fed to the model’s algorithms, serving as the model’s learning base. The training data can include any content pertinent to the task, including text, code, images, and other media.

After ingesting the training data, the AI model examines the correlations and patterns in the data to comprehend the fundamental principles guiding the content. As it learns, the AI model continually adjusts its parameters, enhancing its capacity to mimic human-generated material. As training progresses, the model’s outputs become more sophisticated and convincing.
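
To make that loop concrete, here is a minimal, illustrative sketch in Python using the PyTorch library. It is a toy bigram model invented purely for illustration (the corpus, model size, and training settings are my own assumptions, not any vendor’s actual system), but the cycle of measuring error and adjusting parameters is the same one production models follow at vastly larger scale.

```python
# Toy sketch: a tiny character-level model that adjusts its parameters
# to predict the next character in a small training corpus.
import torch
import torch.nn as nn

corpus = "generative ai learns patterns from training data "
chars = sorted(set(corpus))
stoi = {c: i for i, c in enumerate(chars)}   # char -> integer id
ids = torch.tensor([stoi[c] for c in corpus])

# Each row of this table holds the model's current guess at the
# next-character distribution following a given character (a bigram model).
model = nn.Embedding(len(chars), len(chars))
optimizer = torch.optim.Adam(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

x, y = ids[:-1], ids[1:]                     # (current char, next char) pairs
for step in range(200):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)              # how badly we predict the next char
    loss.backward()                          # compute parameter adjustments
    optimizer.step()                         # "continually adjusts its parameters"

# Sample new text that shares traits with the training data.
c, out = stoi["t"], "t"
for _ in range(20):
    probs = torch.softmax(model(torch.tensor(c)), dim=-1)
    c = torch.multinomial(probs, 1).item()
    out += chars[c]
print(out)
```

Even this toy, run long enough, starts reproducing letter patterns from its training text, which is the essence of the behavior described above.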

With various technologies catching the public’s eye and causing a stir among content makers, gen AI has advanced significantly in recent years. Large IT companies, including Google, Microsoft, and Amazon, have lined up their own gen-AI tools.

Consider ChatGPT and DALL-E 2 as examples of gen-AI tools that rely on an input prompt to direct them toward creating a desired result, depending on the application.
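
Here is a minimal sketch of that prompt-driven workflow using OpenAI’s Python library. The model name and prompt are illustrative, exact method names vary by library version, and an API key is assumed to be configured in the environment:

```python
# Minimal prompt-driven text generation via OpenAI's Python SDK (v1.x style).
# Assumes `pip install openai` and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # model choice is illustrative
    messages=[{"role": "user", "content": "Write a two-line poem about the sea."}],
)
print(response.choices[0].message.content)
```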

The following are some of the most noteworthy instances of gen-AI tools:

  • ChatGPT: Created by OpenAI, ChatGPT is an AI language model that can produce human-like text in response to prompts.
  • DALL-E 2: A second gen-AI model from OpenAI that generates images from text prompts.
  • Google Bard: Launched as a rival to ChatGPT, Google Bard is a gen-AI chatbot trained on the PaLM large language model.
  • GitHub Copilot: Developed by GitHub and OpenAI, GitHub Copilot is an AI-powered coding tool that proposes code completions in development environments like Visual Studio and JetBrains IDEs.
  • Midjourney: Created by a San Francisco-based independent research lab, Midjourney is similar to DALL-E 2: it interprets text prompts and context to produce strikingly photorealistic imagery.

Examples of Gen AI in Use

Although gen AI is still in its infancy, it has already established itself in several applications and sectors.

For example, gen AI can create text, graphics, and even music during the content production process, helping marketers, journalists, and artists with their creative work. AI-driven chatbots and virtual assistants can offer more individualized help, speed up response times, and lighten the workload of customer care representatives.

Gen AI is also used in the following:

  • Medical Research: Gen AI is used in medicine to speed up the development of new medications and reduce research costs.
  • Marketing: Advertisers employ gen AI to create targeted campaigns and tailor content to customers’ interests.
  • Environment: Climate scientists use gen-AI models to forecast weather patterns and simulate the impacts of climate change.
  • Finance: Financial experts employ gen AI to analyze market patterns and forecast stock market developments.
  • Education: Some instructors utilize gen AI models to create learning materials and evaluations tailored to each student’s learning preferences.

Limitations and Risks of Gen AI

Gen AI raises several problems that we need to address. One significant concern is its potential to disseminate false, harmful, or sensitive information that could cause serious harm to individuals and companies — and perhaps endanger national security.

Policymakers have taken notice of these threats. The European Union proposed new copyright regulations for gen AI in April, mandating that businesses declare any copyrighted materials used to create these technologies.

These laws aim to curb the misuse or infringement of intellectual property while fostering ethical practices and transparency in AI development. Moreover, they offer a measure of protection to content creators, safeguarding their work from inadvertent imitation or replication by gen-AI systems.

The proliferation of automation through generative AI could significantly affect the workforce, potentially leading to job displacement. Additionally, gen-AI models can inadvertently amplify biases present in their training data, producing undesirable results that reinforce harmful ideas and prejudices. This amplification often flies under the radar, going unnoticed by many users.

Since their debuts, ChatGPT, Bing AI, and Google Bard have all drawn criticism for wrong or damaging outputs. These concerns must be addressed as gen AI develops, especially given the difficulty of carefully vetting the sources used to train AI models.

Apathy Among Some AI Firms Is Scary

Some tech companies exhibit indifference toward the threats of gen AI for various reasons.

First, they may prioritize short-term profits and competitive advantage over long-term ethical concerns.

Second, they might lack awareness or understanding of the potential risks associated with gen AI.

Third, certain companies may view government regulations as insufficient or delayed, leading them to overlook the threats.

Lastly, an overly optimistic outlook on AI’s capabilities may downplay the potential dangers, disregarding the need to address and mitigate the risks of gen AI.

As I’ve written previously, I’ve witnessed an almost shockingly dismissive attitude among senior leadership at several tech companies toward the misinformation risks of AI, particularly with deep fake images and (especially) videos.

What’s more, there have been reports of AI mimicking the voices of loved ones to extort money. Many companies that provide the silicon ingredients appear satisfied with placing the AI-labeling burden on the device or app provider, knowing that these AI-generated content disclosures will be minimized or ignored.

A few of these companies have indicated concern about these risks but have punted the issue by claiming they have “internal committees” still contemplating their precise policy positions. However, that hasn’t stopped many of these companies from going to market with their silicon solutions without explicit policies in place to help detect deep fakes.

7 AI Leaders Agree to Voluntary Standards

On the brighter side, the White House said last week that seven major artificial intelligence players have agreed to a set of voluntary standards for responsible and open research.

As he welcomed representatives from Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI, President Biden spoke about the responsibility these firms have to capitalize on the enormous potential of AI while doing all in their power to reduce the considerable dangers.

The seven companies pledged to test their AI systems’ security internally and externally before making them public. They will share information, prioritize security investments, and create tools to help people recognize AI-generated content. They also aim to develop plans that could address society’s most pressing issues.

While this is a step in the right direction, the most prominent global silicon companies were conspicuously absent from this list.

Closing Thoughts

A multi-faceted approach is essential to safeguard people from the dangers of deep fake images and videos:

  • Technological advancements must focus on developing robust detection tools capable of identifying sophisticated manipulations.
  • Widespread public awareness campaigns should educate individuals about the existence and risks of deep fakes.
  • Collaboration between tech companies, governments, and researchers is vital in establishing standards and regulations for responsible AI use.
  • Fostering media literacy and critical thinking skills can empower individuals to discern between authentic and fabricated content.

By combining these efforts, we can strive to protect society from the harmful impact of deep fakes.

Finally, a public confidence-building step would be to require all silicon companies to create and offer the digital watermarking technology needed to let consumers scan an image or video with a smartphone app and detect whether it was AI-generated. American silicon companies need to step up and take a leadership role, not shrug this off as a burden for the device or app developer to shoulder.

Conventional watermarking is insufficient as it can be easily removed or cropped out. While not foolproof, a digital watermarking approach could alert people with a reasonable level of confidence that, for example, there is an 80% probability that an image was created with AI. This step would be an important move in the right direction.
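
To illustrate the concept (and its fragility), here is a deliberately naive sketch in Python using NumPy, not any company’s actual scheme: it hides a known bit signature in an image’s least-significant bits, then reports the match rate as a confidence score. This is precisely the kind of mark that cropping or re-compression destroys, which is why a robust, production-grade scheme would need to embed the signature far more deeply.

```python
# Toy watermark demo: embed a known bit signature in an image's
# least-significant bits, then report a detection confidence score.
# Purely illustrative; real schemes must survive cropping/re-encoding.
import numpy as np

SIGNATURE = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)  # hypothetical "AI-made" tag

def embed(image: np.ndarray) -> np.ndarray:
    """Write the signature, repeated, into each pixel's lowest bit."""
    flat = image.flatten()
    bits = np.resize(SIGNATURE, flat.size)
    return ((flat & 0xFE) | bits).reshape(image.shape)

def detect(image: np.ndarray) -> float:
    """Return the fraction of low bits matching the signature (a confidence score)."""
    flat = image.flatten()
    bits = np.resize(SIGNATURE, flat.size)
    return float(np.mean((flat & 1) == bits))

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)  # stand-in image
marked = embed(img)
print(f"marked image:   {detect(marked):.0%} match")   # ~100%: confidently flagged
print(f"unmarked image: {detect(img):.0%} match")      # ~50%: chance level
```

A consumer scanning app would work the same way in spirit: compare what it finds against a known signature and translate the match rate into a plain-language probability for the user.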

Sadly, the public’s demands for this type of common-sense safeguard, either government-ordered or self-regulated, will be brushed aside until something egregious happens as a consequence of gen AI, like individuals getting physically injured or killed. I hope I’m wrong, but I suspect this will be the case, given the competing dynamics and “gold rush” mentality in play.

Mark N. Vena

Mark N. Vena has been an ECT News Network columnist since 2022. A technology industry veteran of more than 25 years, Mark covers numerous tech topics, including PCs, smartphones, smart homes, connected health, security, PC and console gaming, and streaming entertainment solutions. Vena is the CEO and Principal Analyst at SmartTech Research, based in Silicon Valley.
