
AI Upscales Misrepresent FBI Person-of-Interest Photos

Users posted AI-upscaled versions of the FBI’s blurry photos of a person of interest in the fatal shooting of Charlie Kirk; generative tools fabricate plausible but unreliable detail, so the upscales can’t be trusted as evidence.



On September 11, the FBI posted two blurry photos on X of a person of interest in the fatal shooting of Charlie Kirk. Almost immediately, users replied to the post with AI-“enhanced” upscales, some generated by X’s Grok bot, others made with tools like ChatGPT, that turned the pixelated surveillance shots into sharp, high-resolution portraits. The variations ranged from plausible reconstructions to obvious inventions (one rendering gave the subject a different shirt and a pronounced, “Gigachad-level” chin), and many were shared more as attention-grabbing content than as serious leads.



The Verge notes that generative upscalers don’t reveal new facts: they infer plausible features to fill in missing pixels, and they have a documented history of fabricating detail. Past examples include a depixelizing model that turned a low-resolution photo of Barack Obama into a different-looking person and an upscaler that added a nonexistent lump to another public figure’s head. Because these tools extrapolate rather than verify, the story warns, their output shouldn’t be treated as hard evidence during an active investigation or manhunt.
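
The underlying problem is that downscaling is many-to-one: countless distinct high-resolution images collapse to exactly the same low-resolution capture, so no upscaler, however sophisticated, can compute its way back to the one true original; it can only pick a plausible candidate. The sketch below illustrates this with plain NumPy; the 64×64 images and 8×8 average pooling are illustrative assumptions, not details from the story.

```python
import numpy as np

rng = np.random.default_rng(0)

def downsample(img, factor=8):
    """Average-pool `img` by `factor`: a crude stand-in for how a
    low-resolution surveillance frame discards fine detail."""
    h, w = img.shape
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

# Two distinct hypothetical "high-res" images: `b` is `a` plus noise,
# then nudged so every 8x8 block keeps the same mean as in `a`.
a = rng.random((64, 64))
b = a + (rng.random((64, 64)) - 0.5) * 0.2
for i in range(0, 64, 8):
    for j in range(0, 64, 8):
        b[i:i+8, j:j+8] += a[i:i+8, j:j+8].mean() - b[i:i+8, j:j+8].mean()

# Both collapse to the *same* low-res image, so nothing in that image
# can say which original produced it; an upscaler must guess.
print(np.allclose(downsample(a), downsample(b)))  # True
print(np.abs(a - b).max())                        # nonzero: the originals differ
```

A generative upscaler resolves that ambiguity by sampling one of the many high-resolution candidates consistent with the pixels it sees, which is exactly how a different shirt or a sharper jawline can appear out of nowhere.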



