But It Works on My Machine!

FactCheck AI – From an Old Idea to an MVP

TL;DR

I revisited a fact-checking idea from my university days and built a free, open-source MVP: a Python/FastAPI backend orchestrated with LangChain and a Llama 3.1 model from Hugging Face, plus a Next.js frontend, all deployed at zero cost on Render and Vercel.


Motivation

Back in 2016, while studying for my Analyst Programmer degree at the Universidad Nacional de La Plata, I often thought about creating a tool to prevent the spread of misinformation, fake news, and conspiracy theories—in other words, a fact-checking system.

At the time, AI technology still felt quite basic. There were no GPT-style language models yet, and one of the biggest AI headlines was when DeepMind’s AlphaGo unexpectedly defeated Go champion Lee Sedol.

Despite the early state of AI, I believed a fact-checker was still possible. The internet was entering a worrying period, with political elections flooded by fake news campaigns and an overwhelming amount of online noise and misinformation.

Fast forward to today. AI has become the cutting-edge feature every product seems to need (at this rate, even our microwaves will have AI-powered features).

I remembered my old idea and decided to finally build a quick MVP. Using the tools available online today, I wanted to see how far I could go without spending a single dollar on infrastructure, deployment, or AI models.

Homer at Computer


Orchestrating the Backend with Python, FastAPI, and LangChain

I started by looking into the Hugging Face Inference API. It offers plenty of free models, and the quota is more than enough for a hobby project. I chose the Llama 3.1 8B Instruct model and began building the backend.
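
To give a sense of how little code this takes, here is a minimal sketch of calling a hosted model through the huggingface_hub client. This isn't the project's actual code, and the exact repo id and parameters are my assumptions:

```python
import os

from huggingface_hub import InferenceClient

# Free-tier calls authenticate with a personal access token.
client = InferenceClient(
    model="meta-llama/Llama-3.1-8B-Instruct",  # repo id may differ slightly
    token=os.environ["HF_TOKEN"],
)

response = client.chat_completion(
    messages=[{"role": "user", "content": "Is the Great Wall of China visible from the Moon?"}],
    max_tokens=128,
    temperature=0.2,  # keep answers fairly deterministic for fact-checking
)
print(response.choices[0].message.content)
```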

For the backend, I went with a Python FastAPI application. I’m not super comfortable with Python since I’m mainly a Ruby on Rails developer, but I gave it a shot. To be honest, I leaned on Cursor to help build some features. I’m not a big fan of relying entirely on it, so for this MVP I reviewed every step, refined the structure into something I could understand and maintain, and added tests to keep things safer and more reliable.
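
The skeleton of an app like this is pleasantly small, which is part of FastAPI's appeal. Here's a hedged sketch of what the entry point could look like (the route name and payload shape are illustrative, not the real API):

```python
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="FactCheck AI")

class ClaimRequest(BaseModel):
    claim: str

@app.post("/api/check")
async def check_claim(payload: ClaimRequest) -> dict:
    # Moderation, search, and verdict generation would be chained from here
    # (sketched in the sections below).
    return {"claim": payload.claim, "verdict": "pending"}
```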

As a senior developer, I see tools like Cursor as a helpful partner: they let you get something running quickly so you can learn from it while building. But they're far from perfect. Sometimes Cursor completely misses the mark or uses a bazooka to kill a mosquito. That's when you need to roll up your sleeves, debug, and refine the solution yourself. The same goes for deployment, applying good practices, and writing proper tests; Cursor won't save you from understanding the scope and doing things the right way.

I really enjoyed building the backend and definitely learned a lot in the process. I used LangChain to orchestrate the workflow: combining web search results with vector database lookups and managing the prompt logic for the AI model. On top of that, I implemented a hybrid search that compares each incoming claim against both fresh web results and previously checked claims stored in the vector database.
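
To illustrate the idea, here is a simplified LangChain sketch. The langchain_huggingface wrapper, the stub search helpers, and the prompt wording are all my assumptions rather than the project's actual chain:

```python
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_huggingface import ChatHuggingFace, HuggingFaceEndpoint

llm = ChatHuggingFace(
    llm=HuggingFaceEndpoint(repo_id="meta-llama/Llama-3.1-8B-Instruct", max_new_tokens=512)
)

def web_search(claim: str) -> str:
    """Stand-in for the real web-search step (a search API call in the MVP)."""
    return "(no results in this stub)"

def similar_past_claims(claim: str) -> str:
    """Stand-in for the vector-database lookup of previously checked claims."""
    return "(no matches in this stub)"

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a careful fact-checker. Rely only on the evidence provided."),
    ("human", "Claim: {claim}\n\nWeb evidence:\n{web}\n\nSimilar past claims:\n{history}"),
])

chain = prompt | llm | StrOutputParser()

claim = "The Great Wall of China is visible from the Moon."
answer = chain.invoke({
    "claim": claim,
    "web": web_search(claim),
    "history": similar_past_claims(claim),
})
```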

For the vector database, I experimented with NeonDB’s free tier, which worked really well in local tests. It allowed me to store and retrieve embeddings efficiently while keeping costs at zero—perfect for an MVP setup. However, I disabled the vector search on Render because of memory constraints (the free tier only provides 512 MB, which wasn’t enough to keep it running reliably).
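
Since Neon is plain Postgres under the hood, the natural way to do this is the pgvector extension. Here's a minimal sketch of the store-and-retrieve idea, assuming pgvector and its Python helper; the table name, connection string, and embedding dimension are made up:

```python
import numpy as np
import psycopg  # pip install "psycopg[binary]" pgvector numpy
from pgvector.psycopg import register_vector

conn = psycopg.connect("postgresql://user:password@<neon-host>/factcheck")
conn.execute("CREATE EXTENSION IF NOT EXISTS vector")
register_vector(conn)  # lets psycopg pass numpy arrays as vectors

conn.execute("""
    CREATE TABLE IF NOT EXISTS claims (
        id bigserial PRIMARY KEY,
        text text NOT NULL,
        embedding vector(384)  -- dimension depends on the embedding model
    )
""")

# Store a checked claim alongside its embedding (stubbed with random values here).
embedding = np.random.rand(384).astype(np.float32)
conn.execute(
    "INSERT INTO claims (text, embedding) VALUES (%s, %s)",
    ("The Eiffel Tower is in Berlin.", embedding),
)
conn.commit()

# Retrieve the five most similar past claims by cosine distance (the <=> operator).
rows = conn.execute(
    "SELECT text FROM claims ORDER BY embedding <=> %s LIMIT 5",
    (embedding,),
).fetchall()
```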

Render heroically trying to handle everything with just 512 MB of RAM.

I designed the backend infrastructure and deployed it on Render. Of course, Render has its limits—it puts your app to sleep after a while on the free tier. To work around that, I set up an UptimeRobot monitor. Problem solved: the app stays awake.
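
All the monitor needs is a cheap endpoint to ping. Something like this health check (the route name is my assumption) is enough:

```python
from fastapi import FastAPI

app = FastAPI()

@app.get("/health")
async def health() -> dict[str, str]:
    # UptimeRobot pings this every few minutes; the steady traffic keeps
    # the free Render instance from being put to sleep for inactivity.
    return {"status": "ok"}
```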

Here’s how the backend works: it receives a claim, and an AI model first screens it to filter out illegal or inappropriate content (because that’s not the goal of the project). I actually enjoyed adding this AI moderation step to the MVP.
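
One simple way to build such a gate is to ask the model itself for an allow/block decision before doing any real work. The prompt and labels below are my own sketch, not the project's actual moderation logic:

```python
from huggingface_hub import InferenceClient

client = InferenceClient(model="meta-llama/Llama-3.1-8B-Instruct")  # token from `huggingface-cli login`

MODERATION_PROMPT = (
    "You are a strict content moderator. Answer with exactly one word: "
    "ALLOW if the text is a claim suitable for fact-checking, or BLOCK if it "
    "contains or requests illegal or inappropriate content.\n\nText: {claim}"
)

def claim_is_allowed(claim: str) -> bool:
    reply = client.chat_completion(
        messages=[{"role": "user", "content": MODERATION_PROMPT.format(claim=claim)}],
        max_tokens=5,
        temperature=0.0,  # we want a deterministic yes/no, not creativity
    )
    return reply.choices[0].message.content.strip().upper().startswith("ALLOW")
```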

If the claim passes moderation, FactCheck AI uses LangChain to perform a web search, look up similar claims in the vector database, detect contradictions, and finally produce a conclusion with linked sources and references.
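
To make that conclusion easy for the frontend to render, the verdict can be modeled as structured output. Here's a hedged sketch using LangChain's PydanticOutputParser; the field names mirror what the UI displays, but the actual schema is my guess:

```python
from langchain_core.output_parsers import PydanticOutputParser
from langchain_core.prompts import ChatPromptTemplate
from pydantic import BaseModel, Field

class Verdict(BaseModel):
    verdict: str = Field(description='"True", "False", or "Mixed"')
    confidence: float = Field(ge=0.0, le=1.0)
    explanation: str = Field(description="Reasoning, noting contradictions in the evidence")
    sources: list[str] = Field(default_factory=list, description="Supporting URLs")

parser = PydanticOutputParser(pydantic_object=Verdict)

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a fact-checker. Weigh the evidence, flag contradictions, "
               "and reply in this format:\n{format_instructions}"),
    ("human", "Claim: {claim}\n\nEvidence:\n{evidence}"),
]).partial(format_instructions=parser.get_format_instructions())

# `llm` is the ChatHuggingFace model from the earlier sketch; the parser
# validates the model's JSON reply into a typed Verdict object.
chain = prompt | llm | parser
```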

Of course, it’s not perfect. Some features, like the vector database search, are disabled in the live demo due to Render’s free tier memory limits. The prompts could also be refined further for better responses. But overall, it’s a solid start—and fully open source for anyone who wants to explore it, improve it, or just see how it works.


Next.js + Vercel: The Frontend to the Rescue

While building the backend, I also started working on the frontend. I’ve always felt comfortable with Ruby on Rails handling the frontend (thanks to Hotwire), but for this MVP, I wanted to step outside my comfort zone. I’ve used Next.js before for proof-of-concept projects involving ShadCN and chart tools, and this felt like the right time to give it another shot. Plus, deploying the frontend on Vercel would be quick and easy.

So I started building a lightweight app with Next.js. The main goal was a UI that works well on both mobile and desktop, following clean code principles to keep it easy to understand and maintain.

Connectivity between the frontend and backend is handled through Next.js API routes, which add a secure layer for managing the API key and keep sensitive configuration server-side.

The main app page features a clean, responsive two-column layout. On the left, there’s a sticky input form where users can enter claims they want to fact-check. It includes a large text area for entering claims and a prominent “Check Claim” button that shows a loading state with an animated icon during analysis.

On the right, results are displayed in a scrollable feed. Each result shows the original claim, a verdict badge (True, False, or Mixed), a confidence score with a visual progress bar, and a detailed explanation with source citations. There’s also a TL;DR section for quick summaries and a share button for each result.

To improve the user experience, I implemented localStorage caching to persist the latest fact-checking results. This way, users can quickly access their recent claims without losing their history when refreshing the page or returning later. The cache keeps only the most recent results and provides a seamless experience across browser sessions.

After polishing the About page and adding disclaimers (reminding everyone this app is just experimental and built for learning), I deployed the frontend on Vercel.

demo


Wrapping Up

Building this FactCheck AI MVP has been a rewarding learning experience. From revisiting an old idea born during my university days to deploying a fully functional application, the journey reinforced my belief in the power of accessible AI tools and modern development practices.

The project shows how today’s AI landscape has evolved from the basic systems of 2016 to something capable of tackling real-world problems. While the tool has its limitations (as any experimental project should), it demonstrates the potential for AI-powered fact-checking systems that could one day help combat misinformation at scale.

What I enjoyed most was stepping outside my comfort zone—learning Python and FastAPI while building the backend, exploring Next.js for the frontend, and experimenting with LangChain and vector databases. It reminded me that even as a senior developer, there’s always something new to learn, and staying curious about different technologies keeps the work interesting and challenging.

The fact that this entire project was built using free tiers and open-source tools proves that you don’t need a massive budget to experiment with AI. This MVP showed me that with proper funding, building a production-ready fact-checking system is totally doable—with dedicated source providers, better prompts, and AI models specifically trained for fact-checking.

As for the future, I’m excited to see how this space evolves. I hope social media platforms will one day provide quick fact-checking tools to improve the quality of online spaces. For now, I’m happy to have turned an old university idea into a working prototype and learned a ton in the process.

Thanks for reading! 🚀 You can try out the demo at https://ai-factcheck.vercel.app/.

#ai #fact-checking #fastapi #langchain #nextjs #open-source #python