Edweek

Students Are 'Digital Natives,' But Here’s Where They Struggle (Opinion)

A. Wilson

Civic Online Reasoning · Reading Like a Historian · Beyond the Bubble · Verified: How to Think Straight, Get Duped Less, and Make Wise Decisions about What to Believe Online

Rick: You've become a leading authority on digital literacy and misinformation. Can you talk a bit about how you got into these issues?

Sam: Fortuitously. Back in 2015, I got an email from a program officer at Chicago's McCormick Foundation. This person had seen our innovative history assessments, in which students analyze primary sources from the collection of the Library of Congress. This person wanted to know if we could create an instrument that directly measured students' ability to assess online sources. We accepted the challenge. The next year, Trump was elected, and "fake news" became part of the public discourse. During this time, the conventional wisdom preached by people like Marc Prensky and others was that adults were the digital knuckleheads but that young people—also known as "digital natives"—had game. But we weren't so sure, so we set out to measure students' abilities to sift fact from fiction, in many cases by having them analyze actual material from the web. After combing through nearly 8,000 responses from students in middle school through college, we found them to be just as confused as the rest of us. A Wall Street Journal reporter featured our study, which led to appearances on NPR, BBC, ABC, and countless other outlets. From that point on, there was no turning back.

Rick: Can you tell me more about that study? When you say you found the students were "just as confused as the rest of us," what did you see?

Sam: One of the findings that the Wall Street Journal highlighted was that 82 percent of middle school students couldn't tell the difference between an ad and a news story. What the Journal didn't say was that in a study conducted by Edelman-Berland, a global communications firm, 59 percent of adults couldn't tell the difference, either. Findings like these made us realize that we were all in the same boat—and that boat was rapidly taking on water.

Rick: Is there an appetite for schools taking this on?

Sam: There's increased attention at the legislative level to issues of information literacy. States like Illinois, California, and New Jersey have passed curriculum mandates, and there's legislative action in something like 15 other states. What's heartening is that this concern spans the red state/blue state divide. Teaching students to be wise consumers of digital information can't be a partisan issue. Without the ability to tell the difference between information backed by solid evidence and a sham, democracy doesn't stand a chance.

Rick: I love the goal. But, as you know, we live in a time of sometimes intense disagreement about what's fact and what's "misinformation." I mean, we've seen credible authorities vehemently denounce some statements as falsehoods, on topics like the origins of COVID or Hunter Biden's laptop—only to later learn the statements were actually true. How do you navigate those tensions?

Sam: Listen, there are topics where authorities rushed to pronounce judgment—case in point, the COVID lab-leak hypothesis. To broach the idea in 2020 branded you a racist; today, the origin of the virus is an open question. But to generalize from this instance—to go from "authorities sometimes err" to "you can't trust them at all"—leads to a crippling nihilism. Let's stick with medical issues for a second: The rage on TikTok is a procedure called "mewing," the idea that by doing repetitive jaw exercises, you can change your jawline and achieve a sleeker profile. There are hundreds of videos attesting to the procedure, including endorsements from supermodels. But if you know how to separate signal from noise on the internet, you quickly learn that there are no medical studies attesting to the efficacy of the procedure and that the dentist who promoted it had his dental license stripped. You won't die from mewing, but there's a lot of scary medical advice floating around that can lead to serious illness or even death. When in doubt, it's wise to go with authorities like the Mayo Clinic over sketchier places such as the [fictional] Dave and Tom's Homeopathic Supplements.

Rick: How has the emergence of AI affected your work?

Sam: AI magnifies the challenge. We have a wondrous tool that's been programmed to offer persuasive responses—accurate or not. In too many cases, the responses of large language models—LLMs—are the linguistic equivalents of a green smoothie—a phrase from a Facebook post combined with text drawn from a RAND report, abutting content from Wikipedia, and a sprinkling of text from The Onion. In fact, the now-famous "Elmer's glue keeps cheese on pizza" LLM response originally came from a satirical Reddit post. AI weakens the most important bond we need to consider when evaluating information: the nexus between claim and evidence. In the words of cognitive scientist Gary Marcus, generative AI is "frequently wrong, but never in doubt." Rather than rendering traditional search skills obsolete, AI has made the ability to verify information even more imperative. Letting kids loose on AI without establishing that they have search skills in place is like framing a house without first pouring a foundation.

Rick: Your book, published last year, is a resource for helping readers sort fact from fiction on the internet. What are a few key takeaways?
