
How I built a fact-checking skill with Gemini, OpenCode, and DeepSeek

It's generally good advice never to trust anything you read on the Internet (including this article, I suppose).

Misleading content is ubiquitous on the web, whether intentional or not. Facts change over time. LLMs hallucinate and make up statistics. Websites repeat claims they haven’t verified.

Today, it's trivial to launch an AI-powered, content-rich website before lunch. This acceleration of web content has also accelerated the spread of content that is poorly researched or outright false. Many content creators unknowingly repost LLM output straight to the web with little or no critique of its claims.

So as someone who also wants to scale content production, how do I avoid contributing to this onslaught of misinformation? My solution for producing quality, accurate content at scale was to hire a fact-checker.

The foundation

I start every custom skill by using Gemini's Deep Research tool. The tool produces well-researched reports on a topic of your choosing by sending AI agents to consume, interpret, and collate authoritative web content. In this case it used over 60 sources to compile the report, with each claim meticulously cited. Here are a few examples that were used for the fact-checking skill:

Gemini Deep Research sources

These reports provide an incredible foundation on which to build an effective skill.

How does it work?

I imagined this fact-checker as an extremely skeptical AI agent that verifies each claim made in a piece of content, and that meant teaching the agent a variety of truth-finding tactics. The frameworks used in this process include SIFT, the CREDIBLE framework, the IFCN Code of Principles, the KSJ Editorial Verification Workflow, Lateral Reading, a Tiered Evidence Hierarchy, and OSINT Visual Forensics/Geolocation.
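
To make one of these tactics concrete, here is a minimal Python sketch of a tiered evidence hierarchy, where primary sources outrank secondary and tertiary ones. This is my own illustration, not the skill's actual implementation (the skill is written as natural-language instructions); the example URLs and tier scores are hypothetical.

```python
# Illustrative tiered evidence hierarchy. Tier names follow the
# primary/secondary/tertiary convention; scores are hypothetical.
EVIDENCE_TIERS = {
    "primary": 3,    # original data, official records, direct statements
    "secondary": 2,  # reporting and analysis that cites primary sources
    "tertiary": 1,   # aggregators, encyclopedias, reposts
}

def rank_sources(sources):
    """Sort sources so the strongest evidence is consulted first."""
    return sorted(sources, key=lambda s: EVIDENCE_TIERS[s["tier"]], reverse=True)

sources = [
    {"url": "https://example.com/blog-summary", "tier": "tertiary"},
    {"url": "https://example.gov/official-report", "tier": "primary"},
    {"url": "https://example.org/news-analysis", "tier": "secondary"},
]

for s in rank_sources(sources):
    print(s["tier"], s["url"])
```

The point of the hierarchy is simply that when two sources disagree, the agent defers to the one higher in the tier ordering.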

Having one person apply each of these processes to an article would require an enormous amount of work: cross-referencing, navigating to and consuming content from dozens of sites, and meticulously comparing subtle differences between sources. The skill takes less than 5 minutes to perform all these tasks.

Here is the 7-step process that the fact-checking skill iterates through on each use:

  1. Identify verifiable claims: Extract factual assertions, classify by type (numerical, attribution, temporal, categorical, causal)
  2. Apply SIFT to each claim: Stop, Investigate the source, Find better coverage, Trace to origin
  3. Assess source credibility: Classify sources by tier (primary/secondary/tertiary) and score with the CREDIBLE framework
  4. Run quantitative/statistical checks: Correlation vs. causation, absolute vs. relative risk, single study syndrome, cherry-picked timescales, misleading denominators, and others
  5. Verify visual/media evidence: Reverse image search, metadata check, context integrity, geolocation clues. I typically don’t need this step but figured maybe someday I would.
  6. Produce the verification report: This is an executive summary of claim-by-claim verdicts with evidence and SIFT trail, source credibility summary, overall assessment table, and a list of unverifiable claims. Unverified claims are removed from the text. Better to omit any questionable claims.
  7. Link every verification source: Include a source link for each verified claim. I like to use these as external links sometimes.
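
The seven steps above can be sketched as a single pipeline. This is a hypothetical outline in Python (the function names and the `Claim` structure are my own; the skill itself is a set of natural-language instructions, not code) showing how claims flow from extraction to the final report:

```python
# Hypothetical skeleton of the 7-step fact-checking process;
# a structural illustration only, not the skill's implementation.
from dataclasses import dataclass, field

# Claim types from step 1
CLAIM_TYPES = {"numerical", "attribution", "temporal", "categorical", "causal"}

@dataclass
class Claim:
    text: str
    claim_type: str                               # classified in step 1
    verdict: str = "unverified"
    evidence: list = field(default_factory=list)  # source links (step 7)

def fact_check(claims):
    """Steps 2-6: verify each claim and drop anything unverifiable."""
    report, kept = [], []
    for claim in claims:
        # Steps 2-5 would run SIFT, credibility scoring, statistical
        # checks, and media forensics here; we stub the verdict as
        # "verified only if supporting evidence was found".
        verified = bool(claim.evidence)
        claim.verdict = "verified" if verified else "unverified"
        report.append((claim.text, claim.verdict, claim.evidence))
        if verified:
            kept.append(claim)  # step 6: unverified claims are removed
    return report, kept
```

The key design choice is in the last step: a claim without evidence is not merely flagged, it is cut from the published text entirely.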

How I’m using it now

Now I’m using this skill as a quality check on every piece of content I produce. The result is content that not only passes the sniff test but also holds up to careful scrutiny. The quality of my content has skyrocketed, and I feel much more comfortable publishing copy that has been examined so carefully.

Use it yourself

Want your AI-generated content to be more accurate? Try out the skill for yourself and flag any inaccurate or unverified claims.

Link to GitHub repository: https://github.com/nealkindschi/skills/tree/main/fact-checker
Gemini Deep Research report: Fact-Checking and Accuracy Report