Detecting Digital Lies with Media Literacy
by Lindsey Canny
Edtech Thought Leader
Factual and historical integrity is at once a lofty responsibility and something of a paradox. The emotional, social, and cultural perspective on world events changes with each person who touches it, but facts remain facts. Knowing that, it seems like having AI’s metaphorical mitts on information management would be the perfect solution—how could a robot do anything but organize and convey the facts, and nothing but the facts?
Well, who runs the robots? That makes all the difference.
AI’s education is iffy
AI generators are now well documented to produce “hallucinations”: outputs that convey false or misleading information based on inconsistencies in their training data, most of which was produced before 2023. New training data for AI generators is the content they receive from users, and internet content is notoriously a cesspool of bias, racism, uninformed opinions, and misinformation. AI has no capacity of its own for filtering out skewed data, nor the ability to reject intellectually or morally questionable materials from its learning set; hence the catchphrase of AI: garbage in, garbage out. Since there is no overarching AI legislation or regulation in the United States, our collective trust in AI systems rests on the integrity of the humans running them behind the scenes.
Layers of media literacy
As AI-generated content grows, attempts to analyze material for digital fakery grow with it. One thing researchers are finding, however, is that AI detection is patchy at best, but increased media literacy improves detection rates. Media literacy is gaining ground in school curricula, but this literacy goes much further than being able to spot a false claim or a dodgy website. To bolster comprehensive media literacy, students must dive into the nuances of how and why media is produced, and what it may be lacking.
Historical context & perspective
Moments in history don’t exist in a vacuum; they are surrounded by a web of connected events and perspectives. That’s why the “just the facts” nature of AI generators works against them: AI generators haven’t reached a level of sophistication that lets them build a sphere of context around the content they make. Good media literacy involves using a breadth of sources to build a clear picture of the varying points of view and corroborating perspectives behind an event. If the AI-generated results don’t match established research materials or firsthand sources (read: scholarly publications and print media), it’s best to remain skeptical and leave the AI results out.
Purpose and bias
There is no such thing as pure, unadulterated information—everyone has an agenda, whether they acknowledge it or not. Taking a source at face value is quick and easy, but it’s also an easy way to spread misinformation even further. As for AI, detecting humans’ implied meanings or ulterior agendas is not within its skill set, so it will absorb whatever skewed viewpoints and prejudices its source materials carry.
Algorithmic cherry-picking
YouTube, TikTok, Spotify, Facebook ads—everything runs on an algorithm that tailors each user’s digital experience to their personal tastes. When you’re looking for cat videos, that’s not much of a problem. When 51% of Gen Z report that they get their news from social media, however, these hyper-curated for-you pages breed the cherry-picking and confirmation bias fallacies right off the bat. Watch one video claiming there is no feasible way Helen Keller could have flown a plane, and suddenly there are dozens more conspiracy-laden videos denying historical fact. Even scarier, terrorist and hate groups rely on these same algorithms to amplify malicious content to young and vulnerable users who are more likely to embrace extremist views.
A stewardship to the future
Leaders in education are stewards of students’ futures, with a responsibility to provide a fact-based, unbiased, agenda-free education for these members of the world community. For district leaders, this means prioritizing professional development, implementing well-outlined AI-use policies, and incorporating intensive media literacy throughout the curriculum (remembering that this literacy encompasses images, videos, print, algorithms, ads, digital socialization, filters, image manipulation, and more).
Since AI tools are being added to nearly every tech platform, sweeping bans on AI are becoming less feasible. Instead of trying to get back to the way things were, the way forward means focusing on critical analysis and best practices of tech use. Anything else may be the educational equivalent of patching a dam with chewing gum.
The question districts need to answer for themselves in this equation is: what are the skills and knowledge our students will need when AI is no longer a pesky problem, but an ordinary part of daily life?
Educators aren’t strangers to introducing and guiding students through historical fact and knowledge, but they are facing increasing competition from all over cyberspace in a way they haven’t before. The more informed students are about informational integrity, and the earlier they learn it, the less likely it is that they will be the ones to spread falsehoods and attempt to manipulate the narratives of society now and in the future.
Follow-up resource: The Top 10 Literacies in Education Today
A perennial favorite illustrates the different types of literacy students encounter.