A video recently went viral on social media. Framed as a breaking news broadcast, it opens with a shocking report: "Lava is erupting in downtown Seoul."
Throughout the video, various anchors, reporters, students, and celebrities appear — but at the end, each reveals:
"I'm AI. Don't fall for it."
The video highlights how incredibly difficult it has become to distinguish between real and AI-generated content.
This campaign was created with Google’s video-generating AI model Veo as part of a public awareness initiative showcasing the risks of advanced AI content. Many viewers reacted with alarm: "It’s scary how real it looks" and "I don’t know what’s real anymore."
AI is not a “creator” in the way we typically imagine. Instead, after learning from vast amounts of data, it predicts plausible outcomes by combining the most probable words, pixels, and sound fragments.
That’s why AI writing models follow the natural flow of language, AI image generators combine facial proportions and lighting, and AI video models add realistic facial expressions and voices.
The results are so natural that they can sometimes appear even more authentic than reality itself, leading people to mistake them for the real thing.
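To make the idea concrete, here is a deliberately tiny sketch in Python of the "predict the next most probable piece" loop described above. The probability table is invented purely for illustration; a production model learns billions of such statistics from data, but the generation loop is conceptually the same.

```python
import random

# Toy stand-in for a language model's learned statistics. A real model
# learns billions of these probabilities from data; the numbers below
# are invented purely for illustration.
NEXT_WORD_PROBS = {
    "breaking": {"news": 0.9, "point": 0.1},
    "news":     {"from": 0.5, "today": 0.3, "update": 0.2},
    "from":     {"downtown": 0.6, "central": 0.4},
    "downtown": {"seoul": 1.0},
    "seoul":    {"<end>": 1.0},
}

def generate(start: str, max_words: int = 10) -> str:
    """Repeatedly sample a plausible next word: the core loop behind
    AI text generation, at a vastly smaller scale."""
    words = [start]
    for _ in range(max_words):
        options = NEXT_WORD_PROBS.get(words[-1])
        if not options:           # no learned continuation: stop
            break
        candidates, weights = zip(*options.items())
        nxt = random.choices(candidates, weights=weights)[0]
        if nxt == "<end>":        # model predicts the sentence is over
            break
        words.append(nxt)
    return " ".join(words)

print(generate("breaking"))  # e.g. "breaking news from downtown seoul"
```

Scaled up, this same sampling loop is what makes AI text read fluently and AI video look seamless: every fragment is simply the statistically plausible continuation of what came before.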
AI-generated content, once seen as merely fascinating technology, is now being used as a tool that can cause real harm and social disruption.
In today’s environment where fakes can circulate as if they were real, trusting content simply because it "looks convincing" is becoming increasingly dangerous.
As AI-generated content becomes so sophisticated that it is nearly indistinguishable from reality, the issue has evolved into a social challenge that can no longer be managed by individual discernment alone.
Governments and global platforms are now establishing institutional frameworks to address this disruption.
AI-generated content is now so sophisticated that the human eye alone can scarcely tell real from fake.
In the past, people could sometimes spot clues such as unnatural fingers, blurry text, or distorted proportions, but even these traces are now increasingly hard to find.
That’s why the first thing we need is the healthy habit of asking, "Is this real?" When necessary, we can also use the following tools to check whether content was AI-generated:
✅ 1. AI or Not: a web service that estimates whether an uploaded image or audio clip was AI-generated
✅ 2. Hive AI: API-based detection of AI-generated images and text, aimed mainly at platforms and businesses
✅ 3. FakeCatcher (Intel): a deepfake video detector that looks for subtle physiological signals, such as blood-flow patterns in facial pixels
✅ 4. GPTZero / Originality.ai: tools focused on detecting AI-generated text, widely used in education and publishing
Note: AI detection technology is still immature. Many of these tools target business customers, and none of them can guarantee accuracy.
In other words, these tools serve only as reference points; ultimately, the most important thing is to develop the habit of questioning and verifying.
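For readers who want to automate such checks, here is a minimal sketch of how detection services of this kind are typically called. To be clear, the endpoint URL, request format, and response fields below are hypothetical placeholders, not the real API of any tool listed above; consult each vendor’s documentation for the actual interface.

```python
import json
import urllib.request

# HYPOTHETICAL endpoint and response schema, shown only to illustrate
# the general shape of such services. Consult each tool's own
# documentation for its real API, authentication, and pricing.
DETECTOR_URL = "https://api.example-detector.com/v1/classify"
API_KEY = "YOUR_API_KEY"  # placeholder credential

def check_image(image_path: str) -> dict:
    """Upload an image to a (hypothetical) detection service and return
    its verdict, e.g. {"label": "ai_generated", "confidence": 0.87}."""
    with open(image_path, "rb") as f:
        payload = f.read()
    req = urllib.request.Request(
        DETECTOR_URL,
        data=payload,
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/octet-stream",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

if __name__ == "__main__":
    verdict = check_image("suspicious_broadcast_frame.png")
    # Treat the score as one signal among several, never as proof.
    print(f"{verdict['label']} (confidence {verdict['confidence']:.0%})")
```

Whatever the tool, treat the returned score as one more data point, in line with the caveat above, rather than as proof.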
AI does not provide us with the truth.
That’s why we must avoid accepting all content at face value. Instead, we need to check sources, consider context, and maintain a habit of healthy skepticism.
This is our most fundamental defense and essential survival skill in the age of AI.
📌 Coming Up Next
So, who legally owns AI-generated content?
In the next installment, we’ll explore copyright issues and the potential uses of AI-generated works.