System Design Nuggets

Designing for AI Failures: Hallucinations, Safety, and Reliability Patterns

Learn how to design AI systems that handle hallucinations, failures, and uncertainty using RAG, confidence scoring, retries, and human-in-the-loop patterns.

Arslan Ahmad
Apr 01, 2026

AI systems fail.

In fact, they fail quietly, confidently, and sometimes catastrophically.

The worst part?

They fail while sounding completely sure of themselves.

A large language model can generate a response that looks perfect, reads well, and is entirely wrong. And if your system blindly trusts that output, you have just shipped a bug that no test suite caught.

This is not a rare edge case. It is the default behavior of non-deterministic systems.

If you are building anything that relies on AI, you need to design for failure from day one. Not as an afterthought. Not as a “nice to have.”

As a core architectural requirement.

In this post, we will explore:

  • Non-deterministic behavior in AI systems

  • Grounding AI outputs using RAG

  • Confidence scoring and output verification

  • Retry patterns with verification layers

  • Human-in-the-loop as a design pattern



Why AI Systems Are Different from Traditional Software

Traditional software is deterministic.

You give it the same input, you get the same output. Every single time.

A function that adds two numbers will always return the same result. You can test it, trust it, and move on.

AI models do not work this way.

Non-deterministic behavior means that the same input can produce different outputs on different runs.

Large language models generate text by sampling the next token (think of it as the next word, or a piece of one) from a probability distribution.

There is randomness baked into this process.

A parameter called temperature controls how much randomness the model uses. Higher temperature means more creative but less predictable outputs.

This means you cannot write a simple unit test that says “given input X, expect output Y.”

The output might be Y today and Z tomorrow.

Both might be reasonable.

Or one might be completely fabricated.

This single characteristic changes everything about how you design systems around AI.
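To make the temperature mechanic concrete, here is a minimal sketch of temperature-scaled softmax sampling over next-token scores. The logits and tokens are made-up illustrative numbers, not from any real model:

```python
import math
import random

def sample_token(logits: dict[str, float], temperature: float, rng: random.Random) -> str:
    """Sample the next token from temperature-scaled softmax probabilities."""
    # Dividing logits by temperature: low T sharpens the distribution
    # toward the top token, high T flattens it toward uniform.
    scaled = {tok: l / temperature for tok, l in logits.items()}
    max_l = max(scaled.values())  # subtract the max for numerical stability
    exps = {tok: math.exp(l - max_l) for tok, l in scaled.items()}
    total = sum(exps.values())
    r = rng.random()
    cum = 0.0
    for tok, e in exps.items():
        cum += e / total
        if r < cum:
            return tok
    return tok  # guard against floating-point rounding at the tail

# Hypothetical next-token logits after the prompt "The sky is"
logits = {"blue": 4.0, "clear": 2.0, "falling": 0.5}

rng = random.Random(0)
low = [sample_token(logits, 0.1, rng) for _ in range(50)]   # near-deterministic
high = [sample_token(logits, 2.0, rng) for _ in range(50)]  # visibly random
```

At low temperature the samples are essentially always "blue"; at high temperature the less likely tokens show up too. This is exactly why "given input X, expect output Y" unit tests break down for generative models.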


The Hallucination Problem

Let’s talk about the elephant in the room.

Hallucination is when an AI model generates information that sounds correct but is factually wrong.

The model does not “know” things the way a database does. It has learned statistical patterns from training data and it generates responses based on those patterns.

Sometimes those patterns produce outputs that have no basis in reality.

The model might cite a research paper that does not exist. It might invent statistics. It might confidently describe a feature of your product that was never built.

Here is why this matters for system design.

If your system takes the AI output and feeds it directly into a downstream process without verification, you now have a cascading failure.

The hallucinated data flows through your pipeline, corrupts other systems, triggers wrong decisions, and by the time someone notices, the damage is spread across multiple services.

Think about it this way.

If a single AI response generates a wrong customer recommendation, that is a contained failure.

But if that recommendation feeds into an inventory system, which triggers an order, which updates a financial report, you now have a chain reaction that started with one bad output.

This is why guardrails are not optional. They are load-bearing walls in your architecture.
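As a sketch of the "load-bearing wall" idea, the snippet below shows a verification gate that refuses to pass an AI output downstream unless it survives cheap, deterministic checks. The field names, bounds, and the `KNOWN_SKUS` catalog are hypothetical stand-ins for your own validation logic:

```python
from dataclasses import dataclass

# Hypothetical trusted catalog the gate checks against.
KNOWN_SKUS = {"SKU-001", "SKU-002"}

@dataclass
class GateResult:
    ok: bool
    reasons: list[str]

def verify_recommendation(output: dict) -> GateResult:
    """Run deterministic checks before the output touches other systems."""
    reasons = []
    # Structural check: the model must return the fields we expect.
    for field in ("sku", "quantity"):
        if field not in output:
            reasons.append(f"missing field: {field}")
    # Range check: hallucinated outputs often fail simple sanity bounds.
    qty = output.get("quantity")
    if not isinstance(qty, int) or not (1 <= qty <= 100):
        reasons.append(f"quantity out of bounds: {qty!r}")
    # Grounding check: the SKU must exist in a trusted catalog,
    # not merely sound plausible.
    if output.get("sku") not in KNOWN_SKUS:
        reasons.append(f"unknown sku: {output.get('sku')!r}")
    return GateResult(ok=not reasons, reasons=reasons)

good = verify_recommendation({"sku": "SKU-001", "quantity": 3})
bad = verify_recommendation({"sku": "SKU-999", "quantity": 5000})
```

Here `good.ok` is true, while `bad` is rejected with explicit reasons. The failure stays contained at the gate instead of cascading into inventory, orders, and reports.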


Pattern 1: Grounding with RAG (Retrieval-Augmented Generation)

The first and most important reliability pattern is RAG, which stands for Retrieval-Augmented Generation.

Here is the core idea.

Instead of asking the AI model to answer a question purely from what it “remembers” from training, you first retrieve relevant, verified information from a trusted data source.

Then you pass that information to the model along with the question.

The model’s job is now to synthesize and present the retrieved information, not to recall facts from memory.

How It Works Behind the Scenes

  1. The user sends a query. For example, “What is our refund policy?”

  2. The retrieval step kicks in. Your system searches a vector database or a knowledge base for documents related to “refund policy.” This search uses embeddings, which are numerical vector representations of text that capture meaning. Texts with similar meanings map to nearby vectors, so the search surfaces relevant documents.
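The retrieval and grounding steps can be sketched end to end. This toy version uses bag-of-words vectors and cosine similarity in place of learned embeddings and a real vector database; the document store and prompt template are illustrative:

```python
import math
import re
from collections import Counter

# Stand-in for a knowledge base of verified documents.
DOCS = [
    "Refund policy: customers may request a full refund within 30 days of purchase.",
    "Shipping policy: orders ship within 2 business days.",
    "Warranty: hardware is covered for one year from delivery.",
]

def embed(text: str) -> Counter:
    """Toy embedding: a bag-of-words count vector (real systems use learned embeddings)."""
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity: nearby vectors mean similar meaning."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, k: int = 1) -> list[str]:
    """Return the k documents whose vectors are closest to the query vector."""
    q = embed(query)
    ranked = sorted(DOCS, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def build_prompt(query: str) -> str:
    """Ground the model: instruct it to answer only from retrieved context."""
    context = "\n".join(retrieve(query))
    return (
        "Answer using ONLY the context below. If the answer is not in the "
        f"context, say you don't know.\n\nContext:\n{context}\n\nQuestion: {query}"
    )

prompt = build_prompt("What is our refund policy?")
```

The prompt handed to the model now contains the verified refund-policy document, so the model's job is synthesis, not recall from memory.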
