AI Made Misinformation Cheaper—So I Only Consume Primary Sources

TL;DR: AI has driven the cost of producing misinformation to near zero. My response is returning to primary sources: watching the original podcast, video, or report directly, verifying claims myself, and not trusting second-hand processed content.

Writing Last Year’s Events as This Year’s

Recently, while scrolling Facebook, I saw a post about tech news. Halfway through, I noticed the date was wrong: it described last year's events as this year's.

Not a vague time reference—explicitly wrong.

My first reaction: that’s way off. My second reaction: did this person even read their own output?

Then I scrolled away.


These Posts Are Everywhere Now

This wasn’t the first time. Scrolling Facebook and Twitter lately, I increasingly react with: “This is AI-generated again.”

Not a guess—almost certain. Fixed formats, sentence structures like “This isn’t X, it’s Y,” and obvious errors like the one above. The person probably took a YouTube transcript or English report, fed it to an LLM, and generated a post in seconds.

With no source linked, how do I know this matches the original material?


The Problem: Misinformation Got Cheaper

This isn't a brand-new problem. Before AI, irresponsible media outlets would distort foreign reports, twisting the original meaning into their own version and spreading misinformation. Readers couldn't tell, because they never saw the original.

Now the same problem has gotten worse.

These content producers don't care about accuracy; they care about producing and spreading quickly. AI has driven that cost to near zero. Cheap information is one thing, but worse: misinformation has become cheaper too.


I’m Not Against Using AI

Actually, I’m not against using AI to assist content production. This article was written with AI assistance—polishing text, smoothing out meaning, reorganizing paragraphs.

What I oppose is using AI to mass-produce content without the author actually thinking. Those posts show no authorial perspective: just someone's source material fed into an LLM that spits out something "article-shaped," lightly formatted, then published.

When I use AI, I ensure the content has been digested by me, with my own perspective.


My Response: Return to Primary Sources

I’ve noticed my behavior changing.

“Primary sources” here doesn’t mean things I personally experienced, but content as close to the original source as possible—the original podcast, YouTube video, blog, or report text, not someone else’s reprocessed version.

When I judge content might interest me, I don’t read the reprocessed second-hand article. I find the primary source behind it.

This habit actually didn’t start because of AI. In the pre-AI era, when reporting seemed off, I’d try to find the original. It’s just more frequent and necessary now.

Another change: I’ve started blocking accounts that mass-produce or repost AI content. I barely look at Taiwan Facebook anymore.


Fed Up, So I Built My Own

There are so many posts converting YouTube transcripts to articles. One day I thought: instead of constantly consuming others’ processed content, why not do it myself?

I built a tool that grabs a YouTube video's transcript and has an AI summarize the key points. The key requirement: the AI must note the minute and second in the transcript that each point comes from. When I doubt a summary's accuracy, I can jump directly to that timestamp and verify.
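The timestamp plumbing behind a tool like this is simple. Here's a minimal sketch in Python, assuming the transcript has already been fetched as (start-seconds, text) segments; the actual transcript download and the LLM call are out of scope, and all function names here are illustrative, not the author's actual code.

```python
# Sketch of the timestamped-transcript idea.
# Assumes segments are already fetched as (start_seconds, text) pairs;
# downloading the transcript and calling the LLM are not shown.

def fmt_ts(seconds: float) -> str:
    """Format seconds as mm:ss, or h:mm:ss for long videos."""
    s = int(seconds)
    h, rem = divmod(s, 3600)
    m, sec = divmod(rem, 60)
    return f"{h}:{m:02d}:{sec:02d}" if h else f"{m:02d}:{sec:02d}"

def transcript_for_prompt(segments: list[tuple[float, str]]) -> str:
    """Render segments as '[mm:ss] text' lines, so the LLM can cite
    a timestamp next to every summarized point."""
    return "\n".join(f"[{fmt_ts(start)}] {text}" for start, text in segments)

def timestamp_link(video_id: str, seconds: float) -> str:
    """Deep link that jumps straight to a moment, for manual verification.
    YouTube watch URLs accept a t=<seconds>s parameter."""
    return f"https://www.youtube.com/watch?v={video_id}&t={int(seconds)}s"

segments = [(0.0, "Welcome to the show."), (75.5, "Today we discuss AI news.")]
print(transcript_for_prompt(segments))
# [00:00] Welcome to the show.
# [01:15] Today we discuss AI news.
print(timestamp_link("VIDEO_ID", 75.5))
# https://www.youtube.com/watch?v=VIDEO_ID&t=75s
```

Feeding the LLM lines already prefixed with `[mm:ss]` is what makes the "cite your timestamp" requirement enforceable: any point in the summary can be traced back to a line, and the line back to a moment in the video.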

After using it for a while, I found it changed how I absorb information. With the transcript in hand, I can ask questions of the content instead of passively reading someone else's summary. Want to verify a detail? Jump to the corresponding timestamp and see what the original said. When necessary, I can watch the video or listen to the original podcast.

I built a tool, but the barrier isn't high: I started by fetching transcripts manually before automating anything. Getting a YouTube transcript is simple. Google "youtube transcript" and you'll find plenty of online services, or install a Chrome extension.

When necessary, I also have AI do web searches to cross-verify information.

Two core things:

  1. Start from primary sources—don’t trust second-hand processed content
  2. Do your own verification—don’t outsource judgment to AI

You Don’t Need to Do This for Everything

I’m not saying dig up original sources for every article you see. That’s exhausting.

My principle is more like: when a topic really matters to me, I won’t just read second-hand summaries. Or when an article feels off, I try to find its source.

Not everything is worth this effort, but important things are.


Next Step

If you also feel anxious about today’s information environment, here’s my advice:

Don’t consume second-hand content. Find primary sources, process them yourself.

This sounds like more work, but it's not. You were already spending time reading potentially wrong second-hand articles; now you spend that time reading primary sources. The difference: you know what you're reading.

Next time you see AI-generated content, take a moment to find the original source and try digesting the original text yourself. You’ll be surprised how many errors these garbage content farms contain.