Deepfakes and The Deep Trouble We're In
How can we catch up with technology changing so rapidly?
Before I begin today, I want to alert you to a troubling incident in Kansas that should concern everyone who believes in the Freedom of the Press…which should be everyone.
This past Friday, police in the small town of Marion, KS, raided a newspaper, the Marion County Record, seizing computers, cellphones, and the newspaper’s file server. One reporter, Deb Gruver, had her cellphone ripped out of her hand, injuring her. The raid was reportedly prompted by an investigative report the newspaper chose not to publish, but the paper’s publisher told the Kansas Reflector that the raid was linked to sensitive documents leaked under confidentiality, as well as to a restaurant owner who dismissed reporters from a meeting last week “with U.S. Rep. Jake LaTurner, and revelations about the restaurant owner’s lack of a driver’s license and conviction for drunken driving.”
The shock of the raid, however, upset publisher Joan Meyer so much that she died the following afternoon at the age of 98. Before her death, she told The Wichita Eagle about the events, “These are Hitler tactics, and something has to be done.” She also asked her son, Eric, who ran the newspaper with her, “Where are all the good people who are supposed to stop this from happening?”
This is all certainly nerve-wracking. Not only was a small newspaper raided, but the alleged reasons should raise eyebrows. The raid was almost certainly illegal and unconstitutional, and charges must be brought against whoever ordered and carried it out; I hope Kansas journalists are working hard to find out how this all happened.
Even worse, if this can happen to a newspaper in a small town, what else can happen to larger publications serving larger populations?
Let’s keep our eyes on this story and hope this is not the beginning of something that should not be.
Speaking of something that should not be the beginning of anything…
I learned the hard way today that I have yet to talk about Deepfakes, and it’s about damn time I did. After all, this AI-generated technology is growing in use, and it can and will cause endless problems for just about everyone. And those problems are only beginning as Deepfakes become more common.
What are Deepfakes, you ask? They are AI-generated videos that take real people’s images and voices and make them appear to say and do things they never did. This can lead not only to misinformation but to public deception with serious consequences. Deepfake videos of President Biden, Hillary Clinton, and Russian president Vladimir Putin have already been made, sparking fears in many about how Deepfakes could play a role in politics.
And not just politics. Imagine how Deepfakes could be used to influence society, and how easy it could be to create a Deepfake video that drives the masses to do troubling and dangerous things. Remember the chaos and mayhem in New York’s Union Square over popular Twitch streamer Kai Cenat’s giveaway? That was sparked by a real person who made real comments. But imagine if that weren’t the case; imagine if a Deepfake video had led to all that. The NYPD said the chaos “speaks to the power of social media and the danger of social media.” How much more dangerous and powerful could a Deepfake, or any AI tool, be? Worse, or the same?
Remember the QAnon gathering in Dallas in late 2021, held in the hope that JFK, or his son, JFK Jr., would be revealed to be alive and declare Donald Trump the true winner of the 2020 presidential election? Again, that was all driven by a person. But if a Deepfake or any other AI tool were used instead, what would happen? And which is scarier: a person manipulating people, or AI?
It really is only a matter of time before someone’s voice is cloned against their will, a convincing AI video is made, and all hell breaks loose.
I may sound really paranoid and dramatic, but I also don’t think we should ever underestimate anything involving AI. Not to mention, not enough policy or legislation is being made to slow or curb AI’s use. On Monday, the Federal Election Commission took very tiny steps toward creating guidelines to limit the use of Deepfake videos in political ads. One step at a time matters, but after all this time, I think massive steps are needed. Granted, some states like Texas and New York are introducing or passing laws to halt Deepfakes, but the federal government can’t just sit around and let the states do the work.
And Deepfakes are so dangerous. Not only can they take jobs and mislead many people, but they are also used to commit fraud, a problem that grows by the day. They can be used to blackmail people or simply ruin their lives in every way possible. Imagine a Deepfake being made of you committing murder, robbery, animal cruelty, and God knows what else.
But getting back to the media literacy aspect of Deepfakes (I suppose we can create a new media category: AI media). How can we tell what is AI and what isn’t? Now, you may have seen some AI videos on TikTok, or even some photos of idyllic vacation spots. Some of those videos are clearly AI; they almost look cartoonish. But some of those photos can leave people awestruck until they (hopefully) learn those vacation sites are entirely fake. According to NPR, Irene Solaiman, a safety and policy expert at the AI company Hugging Face, had this to say about AI’s development: "I look at these generations multiple times a day and I have a very hard time telling them apart. It's going to be a tough road ahead.”
Some AI content is obviously fake. But some is not. And who’s to say AI won’t keep evolving until we can’t tell the difference between a real picture and a fake one?
According to Infosecurity Magazine, here’s the best way to do so: “To combat deepfakes, it’s crucial to sharpen your media literacy skills. Start by questioning the source of any image or video clip you encounter, especially if it looks suspicious. For example, if you come across an extraordinary photo that seems too good to be true, it may be a Deepfake.”
The same goes for political videos, just over a year before the 2024 election. Next year’s election will be far more tense than the 2020 election, and Deepfakes won’t help. If videos of Hillary Clinton endorsing Florida Governor Ron DeSantis can be made, anything is possible, I’m afraid. Why do I get the impression that the January 6th insurrection will be dwarfed by something else, thanks to Deepfakes?
Anyway, detecting Deepfakes is being added to media literacy programs and courses. Last year, the MIT Center for Advanced Virtuality developed an online course, Media Literacy in the Age of Deepfakes, where users learn how to figure out whether an image or video is fake or the real thing (not sure about Photoshop, though). Chances are, the course will evolve as AI evolves.
Most importantly, programs like these will need to grow and develop at as quick a pace as Deepfakes themselves. It almost feels like we are all in a race against AI, particularly those of us who work in media literacy. Making sure just about everyone understands how powerful, and even dangerous, AI can be to our media and our understanding of the world around us means following AI tools’ development day by day, it seems. Not enough policies or laws are being implemented, so we need to take matters into our own hands.
We owe it to ourselves and our future to outsmart Deepfakes and all the other AI tools. Hopefully, those machines won’t outsmart us along the way.
What do you think? What are you most concerned about when it comes to Deepfakes and other AI tools, especially with misinformation and disinformation at our fingertips every day? How will you educate yourself about them? Share with us in the comment section:
The Media & Us is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber. Or if you’d like to support me in writing this newsletter, Buy Me a Coffee!