{"id":9197,"date":"2026-03-16T13:39:48","date_gmt":"2026-03-16T17:39:48","guid":{"rendered":"https:\/\/cento.centre.edu\/?p=9197"},"modified":"2026-03-16T13:39:48","modified_gmt":"2026-03-16T17:39:48","slug":"the-ai-video-onslaught-age-of-misinformation","status":"publish","type":"post","link":"https:\/\/cento.centre.edu\/index.php\/2026\/03\/16\/the-ai-video-onslaught-age-of-misinformation\/","title":{"rendered":"The AI Video Onslaught: Age of Misinformation"},"content":{"rendered":"\n<p>by Soren Ryan-Jensen &amp; Daniel Covington<\/p>\n\n\n\n<p>Generative AI has seen increasing use in the creation of video content, and as these programs become more sophisticated many now find it hard to distinguish an AI generated video from real footage. AI videos have been used to run scams, spread misinformation, and fill social media with generated videos either depicting real or fictional events. A recent and controversial example would be the White House\u2019s use of these generative technologies to create a video depicting Donald Trump piloting a fighter jet and dumping sewage on \u201cNo Kings\u201d protestors.&nbsp;<\/p>\n\n\n\n<p>The Trump White House\u2019s use of generative AI has come under scrutiny, with many raising concerns over its use by government officials to spread demeaning or outright incorrect information. One such example would be the AI-edited photo posted on the White House\u2019s official X account of Nekima Levy Armstrong, a woman who was arrested during a protest against ICE, which made her have darker skin and appear far more distressed.<\/p>\n\n\n\n<p>But the use of AI in video generation is not limited to the government. Private social media accounts have posted generated videos depicting violence between ICE agents and protestors, often with one group violently assaulting the other. 
This manipulation of reality, often termed \u201cslopaganda\u201d, is especially concerning because independent actors can create seemingly real evidence for any narrative they choose.&nbsp;<\/p>\n\n\n\n<p>It is not hard to imagine the consequences of accessible generative AI, especially when we know that countries such as Russia have operated \u201ctroll farms\u201d on social media to manipulate the outcomes of elections. These \u201cfarms\u201d have generated vast numbers of social media posts aimed at polarizing citizens against each other in an effort to sow unrest. Large actors such as Russia will now be able to launch disinformation campaigns directly, using generated footage that is often difficult to distinguish from reality. But on a wider scale, generative AI democratizes misinformation campaigns. Lone individuals with limited technical skill can create as many bot social media accounts as they wish, all posting generated footage that furthers their own narrative. This is uncharted territory; never before has so great a power to manipulate reality been so easy to access.<\/p>\n\n\n\n<p>Above all, the creation of so much \u201cslopaganda\u201d devalues footage itself. Video is often the most accessible way of documenting reality, and when generative AIs can produce lifelike footage, the ability of ordinary citizens to document what is happening around them falls into question. The AI videos of ICE officers and protestors clashing may fuel ideological narratives, but far worse, they make it more difficult to prove when either group acts inappropriately. 
In this light, the White House\u2019s use of AI to generate official posts normalizes its use in widespread disinformation.<\/p>\n\n\n\n<p>Sources:&nbsp;<\/p>\n\n\n\n<p><strong>White House using AI, fabricating reality or \u201cmemes\u201d: <\/strong><a href=\"https:\/\/www.theguardian.com\/us-news\/2026\/jan\/29\/the-slopaganda-era-10-ai-images-posted-by-the-white-house-and-what-they-teach-us\">https:\/\/www.theguardian.com\/us-news\/2026\/jan\/29\/the-slopaganda-era-10-ai-images-posted-by-the-white-house-and-what-they-teach-us<\/a><\/p>\n\n\n\n<p><strong>Use of AI erodes public trust:<\/strong><\/p>\n\n\n\n<p><a href=\"https:\/\/www.pbs.org\/newshour\/politics\/trumps-use-of-ai-images-further-erodes-public-trust-experts-say\">https:\/\/www.pbs.org\/newshour\/politics\/trumps-use-of-ai-images-further-erodes-public-trust-experts-say<\/a><\/p>\n\n\n\n<p>\u201cAn influx of AI-generated videos related to Immigration and Customs Enforcement action, protests and interactions with citizens has already been proliferating on social media. After Renee Good was shot by an ICE officer while she was in her car, several AI-generated videos began circulating of women driving away from ICE officers who told them to stop. 
There are also many fabricated videos circulating of immigration raids and of people confronting ICE officers, often yelling at them or throwing food in their faces.\u201d<\/p>\n\n\n\n<p><strong>Specifically focusing on the use of AI videos in relation to ICE:<\/strong><\/p>\n\n\n\n<p><a href=\"https:\/\/www.wired.com\/story\/anti-ice-videos-are-getting-the-ai-fanfic-treatment-online\">https:\/\/www.wired.com\/story\/anti-ice-videos-are-getting-the-ai-fanfic-treatment-online<\/a><\/p>\n\n\n\n<p>\u201cTucker says there is concern that the increasing flood of anti-ICE AI content could potentially backfire by contributing to \u201ca general perception that you just can\u2019t trust videos when you see them anymore,\u201d making it \u201charder to convince people of the fact that things which are actually real are, in fact, real.\u201d This played out on Wednesday, with <a href=\"https:\/\/www.youtube.com\/watch?app=desktop&amp;v=CRWR13BAIEs\">new footage<\/a> of Pretti confronting ICE officers on January 13, more than a week before he was killed, posted by media outlet The News Movement; on Instagram and YouTube, many commenters accused the video of being AI generated. (Pretti\u2019s family has <a href=\"https:\/\/www.nytimes.com\/2026\/01\/28\/us\/alex-pretti-kicking-ice-vehicle-video.html?smtyp=cur&amp;smid=bsky-nytimes\">confirmed<\/a> to the New York Times that it was him.)\u201d<\/p>\n\n\n\n<p><strong>Use of AI in writing regulations:<\/strong><\/p>\n\n\n\n<p><a href=\"https:\/\/www.propublica.org\/article\/trump-artificial-intelligence-google-gemini-transportation-regulations\">https:\/\/www.propublica.org\/article\/trump-artificial-intelligence-google-gemini-transportation-regulations<\/a><\/p>\n\n\n\n<p>\u201cThe answer from the plan\u2019s boosters is simple: speed. Writing and revising complex federal regulations can take months, sometimes years. 
But, with DOT\u2019s version of Google Gemini, employees could generate a proposed rule in a matter of minutes or even seconds, two DOT staffers who attended the December demonstration remembered the presenter saying. In any case, most of what goes into the preambles of DOT regulatory documents is just \u201cword salad,\u201d one staffer recalled the presenter saying. Google Gemini can do word salad.\u201d<\/p>\n","protected":false},"excerpt":{"rendered":"<p>by Soren Ryan-Jensen &amp; Daniel Covington Generative AI has seen increasing use in the creation of video content, and as these programs become more sophisticated many now find it hard to distinguish an AI generated video from real footage. AI videos have been used to run scams, spread misinformation, and fill social media with generated [&hellip;]<\/p>\n","protected":false},"author":4,"featured_media":9198,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[7,10],"tags":[],"class_list":["post-9197","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-features","category-opinions"],"_links":{"self":[{"href":"https:\/\/cento.centre.edu\/index.php\/wp-json\/wp\/v2\/posts\/9197","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/cento.centre.edu\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/cento.centre.edu\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/cento.centre.edu\/index.php\/wp-json\/wp\/v2\/users\/4"}],"replies":[{"embeddable":true,"href":"https:\/\/cento.centre.edu\/index.php\/wp-json\/wp\/v2\/comments?post=9197"}],"version-history":[{"count":1,"href":"https:\/\/cento.centre.edu\/index.php\/wp-json\/wp\/v2\/posts\/9197\/revisions"}],"predecessor-version":[{"id":9199,"href":"https:\/\/cento.centre.edu\/index.php\/wp-json\/wp\/v2\/posts\/9197\/revisions\/9199"}],"wp:featuredmedia":[{"embeddable":true,"href":"h
ttps:\/\/cento.centre.edu\/index.php\/wp-json\/wp\/v2\/media\/9198"}],"wp:attachment":[{"href":"https:\/\/cento.centre.edu\/index.php\/wp-json\/wp\/v2\/media?parent=9197"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/cento.centre.edu\/index.php\/wp-json\/wp\/v2\/categories?post=9197"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/cento.centre.edu\/index.php\/wp-json\/wp\/v2\/tags?post=9197"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}