Here’s why we shouldn’t write off AI completely

Artificial Intelligence (AI) is, no doubt, rapidly transforming every sector of society, and concerns about the technology's potential risks are growing as it becomes more ubiquitous. But in other applications, it has also proven to be a net good.

Over 1,000 tech leaders, researchers, and experts – including Tesla CEO Elon Musk and Apple co-founder Steve Wozniak – came together to call for a pause on advanced AI projects for the next six months in an open letter published last week.

The letter warned that companies are creating AI tech so fast that they are not considering the potential risks and consequences, like the prevalence of misinformation and a potential “loss of control of our civilization.” The signatories believe that anything smarter than GPT-4 (the newest version of ChatGPT) could pose “profound risks to society and humanity.”

Fears of job replacement, the spread of disinformation, and AI becoming “more intelligent” than its human creators have left many in the public anxious over what the technology is capable of.

“There’s a lot of ways that AI can actually make the world better,” said Rijul Gupta, CEO of the AI synthetic media company Deep Media. “[But] it all comes down to how we use it.”

AI's recent role in locating the graves of approximately 93 victims of the Spanish flu in Alaska shows what the technology can accomplish when used wisely.

Cornell scientist Thomas Urban used ground-penetrating radar and AI modeling technology to find the graves on the Seward Peninsula, helping clarify historical records for Indigenous communities devastated by the 1918 pandemic.

“During fieldwork in Alaska, I’ve sometimes been asked to help Indigenous communities locate unmarked burials using the same noninvasive technologies we’re using to investigate other types of sites,” said Urban to Newswise.

Ground-penetrating radar technology has long been used to investigate all kinds of burials, including unmarked graves at former Indian residential schools.

Urban used his large database of burial data to develop AI-based approaches to identify anomalies consistent with burials. “The hope is that this will expand our capacity by speeding up the process of analyzing data,” said Urban.

Collaborating with Iris Technology Inc., Urban developed several AI applications for geophysical data on a newly launched platform called webAI.

Among those applications is a model trained on a large number of burial scenarios to efficiently sift through anomalies in radar data and flag likely burials.

Gupta acknowledges the many benefits AI can bring: addressing climate change, creating more sustainable ways to meet energy needs, and analyzing energy grids more thoroughly than human beings can.

But the technology is not perfect. AI is not immune to biases and discrimination that can negatively impact fields like hiring, lending and law enforcement.

“That’s not going to get solved if we pause development, that’s only going to get worse,” said Gupta, pushing back against the open letter's demand. “So we need to be careful how we continue development, and we need to think about it ethically. But pausing it is not the right solution.”

And according to Dr. Sarah Myers West, Managing Director of the AI Now Institute, the key thing the letter misses is that the most pressing concerns with the AI industry are unfolding before us now, not in the distant future.

“Innovation can’t look like companies experimenting in the wild, creating anxieties about the future of the economy, labor market, creative industries, and our everyday information environment, without any outside scrutiny,” said Dr. West in an email response to Reckon.

She adds that many of these AI products being rolled out are not ready for commercial use because they either introduce new risks or don’t work as intended.

“So, I don’t think just a six-month, industry-led ‘pause’ is the right approach: we need stronger regulatory intervention,” said Dr. West. “We have laws for this purpose, and the FTC’s recent blog posts outlining how its authorities apply to AI is one place to start.”

This isn't the first time such worries have been raised, and Google and Microsoft's race to prove their own AI advances doesn't help. It adds to the anxiety over how quickly AI can spread disinformation and sway people's opinions.

“If [their AI] is poised to be as game-changing as they claim, this would only deepen the foothold big tech firms have on our digital economy, and we can’t afford to wait until this takes place to step in,” Dr. West points out. “Enforcement of our anti-trust laws has a big role to play here.”