Be afraid. Be very, very afraid.

Or choose not to be. We each have to make up our minds: Is FEAR just another four-letter word, or the worst one?

First it was the fear of terrorism, rooted in the horror of Sept. 11, 2001. Goodbye, Fortress America.

Next came climate change. Al Gore’s 2006 Oscar-winning documentary, “An Inconvenient Truth,” sounded the alarm on our warming Earth, and all the data since then has only confirmed it.

Even so, it wasn’t hard to put all that out of your mind in the course of your daily life. But then came the COVID-19 pandemic three years ago. Suddenly it was scary just going to the grocery store.

Many quarantines, masks and vaccinations later, it looks like we may finally be moving past that. Take a deep breath and move on with your life, right?

Wrong. We just passed the first anniversaries of the mass slaughters at Tops Grocery in Buffalo, N.Y., and Robb Elementary School in Uvalde, Texas. The epidemic of gun violence rages on, worse than ever, and no public place is truly, completely safe anymore.

I personally am still trying to process these threats and at least stay informed about them. But now another joins the list: artificial intelligence (AI), or more exactly, generative AI, in which powerful computer tools create text and images, as well as audio and video content.

The technology titans – Google, Microsoft, Meta (Facebook) and others – are locked in a furious, unregulated competition to dominate this greatest advance in our digital age since the advent of search engines and social media.

In March, more than 1,000 technology leaders issued a 590-word appeal for a six-month pause on AI development, warning of “ever more powerful digital minds that no one – not even their creators – can understand, predict or reliably control.” Citing the dangers of mass disinformation, human-labor obsolescence and computers that are smarter than we are, the letter urges that “powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.”

Last week, the Center for AI Safety, a nonprofit watchdog group in California, issued a much more succinct warning undersigned by a similar group of tech leaders, including one of the most important, OpenAI chief executive officer Sam Altman, a native St. Louisan: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

You know it’s serious when they start talking about the end of humanity.

AI has huge implications for education at all levels, so I wondered what a local academic leader would have to say on this topic.

Turns out Chris DeGeare, vice president for instruction at Jefferson College, had a lot to say.

“I wouldn’t say we should be afraid of it,” said DeGeare, who holds a master’s degree in information science and learning technologies from Mizzou and formerly taught computer repair and networking. “We just need to be prepared for it and planning around it.

“I think one of the causes for fear comes from the potential to displace the value of human workers. And in some cases that might happen. But it also creates opportunities for new types of work – to prepare our students, from my perspective, on how to interact with it in this emerging field.”

He pointed out that AI already is embedded in how we use our smartphones, for example, finishing words and sentences when we text message. With AI, he said, we can do a lot more – if we remain the ones doing it.

“It’s a question of whether we leverage the artificial intelligence to improve our work, or if we just turn over and let artificial intelligence do the work for us,” he said.

Those of us old enough to remember an old-fashioned circus can picture the wild-animal trainer cracking the whip to keep lions and tigers on their perches. We all need that animal-trainer mindset when it comes to generative AI.

“On one hand, as humans, our capacity to communicate is something that sets us apart from all other animals in this world,” DeGeare said. “When you give that power to a piece of technology it feels like it takes away some of our humanity. But at the same point, we will have the power over that technology, to determine how we use it. It could go either way.”

DeGeare noted that while Jefferson College doesn’t offer a degree or even a course on AI, the college does emphasize digital literacy – “the critical thinking to help students determine what is real,” he said. “You need to be able to distinguish what is a credible source.”

We could all use that education.

I recently re-watched a movie from 1970, “Colossus: The Forbin Project,” about a computer genius who designs a massive computer to control U.S. nuclear defenses. Colossus discovers the Soviets have a similar system and immediately starts collaborating with it. Through “machine learning” – a very real element of AI today – the twin mega-computers grow more powerful until – well, there’s no “happily ever after.”

In the final scene, the digital dictator tells his creator that in time he will learn to love his new master. Forbin’s answer: “Never!”

It was too late for him, but it’s not for us.

Time to put fear aside and train the tiger.

(0 Ratings)