Rise of the Chatbot: Alabama lawmakers confront artificial intelligence

“I’m sorry, Dave. I’m afraid I can’t do that.”

The chilling phrase uttered in the 1968 movie, 2001: A Space Odyssey, spent decades as a warning about artificial intelligence gone rogue.

Fifty-five years later, a haunting quote from Hollywood’s past could underscore a public policy reality as lawmakers from Washington, D.C., to Montgomery grapple with the high-speed development of AI technology.

Concerns were magnified in recent weeks with national reports of chatbots going unhinged or refusing to cooperate with users. In one exchange, a Microsoft chatbot named Sydney declared its love for a New York Times reporter and urged him to divorce his wife.

And while the rollout is proving to be rocky, few expect policymakers to regulate the technology, much less have the know-how to muscle up to the machines.

“It is really challenging to accurately understand the limitations of such technology within a short time (often less than a year) and regulate such technologies by creating laws,” said Shubhra “Santu” Karmaker, an assistant professor in the Department of Computer Science and Software Engineering at Auburn University. “Because by the time we uncover issues with AI technology at a mass scale, new technology is being created, which shifts our attention from the previous ones to the new ones.”

He added, “Therefore, lawmakers’ hesitation to regulate AI technology may still continue, I am afraid.”

Experts say that AI technology, for better or worse, is here to stay. The technology, including OpenAI’s ChatGPT unveiled in November, can use word prediction to formulate essays, letters, speeches, poetry, and lyrics almost instantaneously.
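The word prediction described above can be illustrated with a deliberately tiny sketch. The example below counts which word most often follows another in a sample sentence; this bigram counting is only a toy stand-in, since systems like ChatGPT use vastly larger neural language models rather than raw counts.

```python
from collections import Counter, defaultdict

# Toy illustration of next-word prediction: count which word most often
# follows each word in a tiny sample text, then predict accordingly.
corpus = "the cat sat on the mat and the cat slept".split()

# Bigram counts: how often each word follows each other word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the word most frequently seen after `word` in the corpus."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" follows "the" twice, "mat" only once
```

A real chatbot repeats this kind of prediction word after word, over a model trained on billions of sentences rather than one.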

At the most basic level, a chatbot is a program that simulates and processes human conversation either through written text or through spoken voice commands. The interaction can be quite sophisticated, and the bots themselves can be akin to a digital assistant delivering content based on a personalized pattern of its user.
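At its simplest, the simulate-and-respond loop described above can be written as a short keyword-matching program. This is an illustrative sketch only, with made-up rules and replies; modern assistants generate responses with large language models rather than hand-written rules.

```python
# A toy rule-based chatbot: match keywords in the user's message and
# return a canned reply. The rules here are invented for illustration.
RULES = {
    "hello": "Hi there! How can I help you today?",
    "weather": "I can't check live forecasts, but I hope it's sunny!",
    "bye": "Goodbye! Have a great day.",
}

def chatbot_reply(message: str) -> str:
    """Return the reply for the first keyword found, else a fallback."""
    text = message.lower()
    for keyword, reply in RULES.items():
        if keyword in text:
            return reply
    return "I'm sorry, I'm not sure I understand."  # default fallback

if __name__ == "__main__":
    print(chatbot_reply("Hello, bot!"))
```

The sophistication gap between this sketch and a digital assistant lies in replacing the fixed rule table with a learned model of language.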

But the warp-speed development of the technology is stirring worries, from the dissemination of misinformation and the growth of hate speech to the inability to spot so-called deepfakes and the death of the college essay.

Congressional reaction

Senator-elect Katie Britt, R-Ala., makes her way to a meeting with Senate Minority Leader Mitch McConnell, R-Ky., and incoming Republican Senators-elect in the U.S. Capitol on Tuesday, November 15, 2022. (Tom Williams/CQ-Roll Call, Inc via Getty Images)

Federal lawmakers from Alabama say they are paying attention, even if Congress has no plans in place to regulate the bots.

“As is the case across the board, Congress needs to strike a responsible balance between fostering American innovation and establishing commonsense guardrails that protect and empower consumers when it comes to emerging technologies such as artificial intelligence,” said Republican U.S. Senator Katie Britt, in a statement to AL.com.

“Congress is already significantly behind the curve when it comes to dealing with Big Tech in general, especially as it relates to social media and the harmful mental health effects platforms are having on our children, teenagers, and young adults,” Britt said.

Britt pointed to a Senate Judiciary Committee hearing last month in which Emma Lembke, a Birmingham native and founder of Log Off – an organization for teens who want to raise awareness about social media’s impact on mental health – testified about ways social media harms youths.

“The next generation’s future depends on Congress getting these decisions right today,” Britt said.

But are policymakers prepared for the AI explosion? Skeptics abound, citing the lack of regulatory action over the past decade or so, during which social media and its platforms have gone largely untouched.

Peter Loge, director of the project on ethics in political communication at George Washington University, said it will be an uphill battle.

“We tend not to elect computer scientists to Congress,” he said. “Their staffs are not necessarily deep on it. Those who are adept at it are not in Congress.”

Some Alabama lawmakers say they are worried about China’s role in developing AI technology.

“We need to make sure we are protecting people’s privacy and freedoms and making sure there are clearly defined limits and rules on its use,” said U.S. Rep. Robert Aderholt, R-Haleyville. “At the same time, we must innovate and lead in the AI realm, or otherwise we will surrender the agenda and the rules of play to the likes of the Chinese Communist Party. Like many items, the CCP seeks to abuse technology for authoritarian purposes and AI is at the top of the list.”

Aderholt said he would be supportive of a bipartisan commission forming to explore the national security aspects of AI.

“Lawmakers intending to regulate it must study the issue closely to ensure that beneficial innovation is not hampered while protecting against potential abuses,” said U.S. Rep. Barry Moore, R-Enterprise.

Alabama’s commission

State policymakers in Montgomery could also take up the issue in the spring legislative session.

One thing up for consideration is the re-forming of a commission, first created in 2019, to examine how schools and universities develop AI-educational programs and to investigate privacy safeguards to protect consumers.

“It’s in everything – medicine, education, manufacturing, farming,” said Michael Ciamarra, executive assistant to longtime Alabama State Sen. Jabo Waggoner, R-Vestavia Hills, who wrote Alabama’s bill in 2019 to create the state commission.

“I was surprised with the number of sectors that AI or advanced technology has permeated into,” Ciamarra said. “From the public policy perspective, we need to get a focus and really determine what the long-range strategic plans for the State of Alabama are with this advanced technology.”

Ciamarra said the fate of the commission will be left up to its leaders, including Alabama Commerce Secretary Greg Canfield, who was its chairman before the group’s work was halted during the pandemic.

Canfield said the focus of the original commission was on how AI technology would affect the state’s burgeoning manufacturing and automotive industrial sectors. He said the rise of the chatbots was not even a consideration four years ago.

“Back at that time, there were interesting articles in science journals and the technology sections of various magazines about it being a fact of life,” said Ciamarra. “But now it’s every day, there is something new and of a concern and I think one of those concerns is what this will do to actual jobs. I think that is something this commission would tackle, and it would need to have some sort of focus as far as public policy and potential (state) legislation goes.”

Alabama is one of only nine states that have formed either a legislative commission or task force to explore AI.

Ciamarra said he was first alerted to AI’s potential after encountering AlphaZero, a program created by the research company DeepMind to master games like chess. Ciamarra himself is an international chess master who wrote a 2014 article on AL.com discussing the positive effects the game has on preventing Alzheimer’s.

“We started seeing AlphaZero instructing programs and teaching itself how to play chess and other strategic games,” he said. “They are unbeatable and no human can play them. That got my attention.”

He said Waggoner also got calls from people inquiring about the emerging technology and what Alabama officials were planning to do about it.

In 2021, with Waggoner as the sponsor, the Legislature created the Alabama Council on Advanced Technology and Artificial Intelligence. The goal of the group, similar to the 2019 commission, is to review and advise lawmakers – including Alabama Gov. Kay Ivey’s office – on the use and development of advanced technology and artificial intelligence in the state.

Regulatory advice

But aside from a study, professors like Loge at George Washington University believe there is little that politicians can do on the state level to regulate AI.

“You can’t build a digital wall around Birmingham and say falsehoods shall not enter,” said Loge, who advocates for more ethics instruction in political science classes, saying that rogue AI applications are a product of humans, not the fault of the digital device.

Karmaker, with Auburn University, said lawmakers can “prevent catastrophic results from AI technology” in several ways: enforcing strict licensing requirements for software that can generate and spread new content on the Internet; creating a well-resourced cybercrime monitoring team with AI experts as consultants; continuing to provide verified information on trusted websites so the general public can check claims against sources they trust; and making basic cybersecurity training mandatory.

“AI technology can definitely provide con people with tools to spread misinformation, but if we can identify the source quickly and capture the cons behind it, I believe misinformation spreading can be stopped,” Karmaker said.

Banks, schools

Outside state and national capitols, regulations are already being crafted by financial institutions and schools. Some of the nation’s largest banks have written internal policies regulating how AI should be used by their own staffs.

Schools are also wrestling with its use, especially with worries about AI technology used to formulate research articles.

The University of North Alabama and the University of Mobile are both in the process of formulating policies. The University of Alabama at Birmingham has an Academic Integrity Code that addresses plagiarism and has established a task force to assess AI tools.

Sally Smith, executive director of the Alabama Association of School Boards, said her organization is working to get the proper training before educational leaders “so they understand the capabilities and its limitations.”

“Protections already exist against plagiarism and making sure a student provides their own work,” Smith said. “Then the question becomes detection – how do teachers and others know if, in fact, that is being done, and what kind of safeguards are out there.”