Teaching about Deep Fakes in the Communications Classroom
From retouching images to digital deep fakes, things aren't always what they seem online.
This October, Macmillan Learning hosted a webinar with Bettina Fabos and Christopher Martin, co-authors of Media & Culture, 13th Edition, on what students need to know to think critically about and better understand manipulated videos and images.
Students are becoming more aware of manipulated images in general. Images that promote impossible beauty standards and funny #Photoshopfails have brought image manipulation into the mainstream, and the fails often teach students how to spot the more egregious manipulations. Students also use photo manipulation tools themselves, such as Facetune, Canva, Instagram, Snapseed, and TouchRetouch, to erase imperfections.
Deep fakes are going mainstream, too; it’s not just images we should be concerned about. Synthetic video (also known as a “deep fake”) is a product of artificial intelligence, and it’s becoming more realistic and more widespread every day. Some experts even argue that deep fake-related AI developments are as important as the internet itself. Easily available apps like Deep Nostalgia make creating a deep fake accessible to anyone with a smartphone. There are also lip-syncing deep fakes, which place new lip movements over real video and pair them with synthetically generated audio. One example is the fake Tom Cruise TikTok that recently went viral. According to Fabos, most tech companies have been involved with this kind of AI since 2014. “We’re delegating power and creativity to machines,” she noted.
Deep fakes have come a long way in a few years. It used to be painstaking and costly to manipulate videos. For example, Carrie Fisher, known in the Star Wars movies as Princess Leia, was able to appear in The Rise of Skywalker even after her untimely death, but the technology that made that happen cost the studio a great deal of money. Now you don’t need complex graphic models of faces to make this kind of manipulation happen, because AI is doing all of that work.
If you create or use manipulated images, it’s possible to lose your job: An LA Times photojournalist was fired for combining two photos into one that changed its meaning. In that case, the photographer admitted that it was a "complete breakdown in judgment," but the offense doesn’t have to be that egregious for there to be consequences: an AP photographer was fired for photoshopping his own shadow out of an image.
There are some “harmless” uses for AI and video manipulation. These include breaking language barriers with better translations, news delivery, turning back the clock so that aging actors look young again, and the ability to have “conversations” with deceased loved ones. It can also help students with vision disabilities learn through compelling audio, and it can be used to create automated transcriptions.
If you’re thinking that there should be some laws, you’re not alone. Deep fake usage carries all kinds of risks. Some of the more common ones include: being used for extortion or coercion against women, political manipulation and deception, and the threat of society perceiving a real video as being fake. According to Martin, laws are needed for commercial uses of synthetic video, disinformation campaigns, and nonconsensual deep fakes.
What this all means is that media literacy is critical. The prominence of image and media manipulation will only increase over time as AI becomes simultaneously more sophisticated and more readily available. Students need to understand the current state and where things are headed, because these issues will impact them.
“The relationship between media and truth has always been tenuous,” Fabos said. “In light of these deep fakes, and these deep fake developments, we will need media literacy and the core work of fact checking.”
To learn more about what students need to know about media literacy, check out the free on-demand webinar from the co-authors of Media & Culture, 13th Edition.