
Tom Still: Will AI improve our world, unravel it … or a little of both?


You and millions of other people likely harbor feelings about artificial intelligence that range from awe to anxiety, and from hope to fear.

Both ends of that spectrum are visible at this point in the evolution of generative AI, which can create text, images, videos and more through user prompts. The genie cannot be squeezed back in the bottle, so the challenge is how to harness AI for societal good — and how to build guardrails that prevent AI from being hijacked by bad actors or taking on a life of its own.

That challenge is front and center around the world, including in Wisconsin.

Stern warnings about AI were issued Sept. 16 by a seemingly unlikely source: an international group of AI scientists who were central to its development. Scientists from the United States, China, Britain, Singapore and Canada were among those who met in Venice, Italy, and concluded "loss of human control or malicious use of these AI systems could lead to catastrophic outcomes for all of humanity."

These are pioneers of the science, yet they believe some AI models may begin to autonomously self-improve and spin out of human command unless safeguards are put in place, starting within nations that are major developers and progressing to a system of red flags shared worldwide.

Others also predict AI will forever change the nature of warfare. Might the exploding pagers and walkie-talkies in Lebanon have been equipped with an AI trigger? The losers in warfare throughout history have often been those who fought with the technology and tactics of the previous war.

With climate change already a concern, others worry the data centers that power AI functions will consume ever-growing amounts of energy and water for cooling. Still others fear AI will destroy jobs now done by real people, deepfakes will manipulate politics, autonomous vehicles will run amok, and human interaction itself will end like a bad sci-fi movie. The list goes on.

The flip side is more heartening. Experts believe AI will create more jobs than it will end, especially higher-end jobs. It can make manufacturing and other businesses more efficient, improve health care delivery, make natural resource discovery more precise and eco-friendly, uncover energy inefficiencies and even extend life itself by opening new doors to cellular research.

One prominent example of AI hitting home in Wisconsin is the emerging Microsoft data center in Racine County, which will be built in phases and cost $3.3 billion through 2026. It will support Microsoft's Copilot, a generative AI chatbot, and is already working with the UW-Milwaukee Connected Systems Institute to reach manufacturers. In fact, it's the only Microsoft data center focused on manufacturing research, development and education.

As noted during a Sept. 12 Wisconsin Technology Council forum in Wauwatosa, about a dozen companies are engaged in a "soft launch" with Microsoft and the CSI lab with scores more to come. Work is also underway to create an AI-ready workforce through partners such as UW-Milwaukee, the Wisconsin Technical College System, gener8tor and others.

"What we are seeing is that companies are adopting (AI) extremely faster than what we thought," said Balamurugan Balakreshnan, chief AI officer and chief architect for Microsoft.

He also described the company's sustainability targets, which include being "carbon negative," "water positive," and "zero waste" by 2030. Microsoft also aims to protect more land than it uses by 2025 — and by 2050 remove "all historical carbon" since its founding in 1975. A nearby resource for advancing water technology on the Racine County campus is The Water Council, which is based in Milwaukee and works with major firms in Wisconsin and beyond.

Skeptics may note that even if Microsoft meets its ambitious goals and turns Wisconsin into an AI showcase, other companies could be less transparent; some off-the-radar players could even be dangerous. That's why the federal government and most states — Wisconsin included — are considering and enacting ways to regulate AI without stomping on its potential.

The first mathematical modeling of a neural network to create algorithms that mimic human thought processes occurred in 1943, so there's no chance the world is going to "unlearn" what it already knows about AI and machine learning. The better bet is to carefully confront the fears while embracing the potential.

Tom Still is the president of the Wisconsin Technology Council.
