
Tamper‑Proof Audit Trails for LLM Applications

Paul Waweru · August 23, 2025

The Problem

When AI applications handle sensitive data in healthcare, finance, or government, regulations like HIPAA, SOC 2, and the EU AI Act require organizations to prove exactly what happened with each piece of data. Traditional application logs capture API calls, but they don’t record the actual conversations between your application and AI models from providers like OpenAI or Anthropic.

Without tamper-proof records of these AI interactions, organizations face compliance gaps and potential penalties. This is where Traceprompt comes in.

What Traceprompt Does

Traceprompt is an open-source SDK that automatically creates tamper-proof audit trails for every AI conversation. With minimal code changes, it wraps your existing OpenAI, Anthropic, or other LLM calls to create encrypted, verifiable logs that meet audit requirements.

Key Features

  • Client-side encryption - Your data is encrypted before it leaves your server
  • Tamper-proof logs - Uses cryptographic hashing to detect any changes
  • PII detection - Automatically identifies sensitive information
  • One-click audit reports - Generate compliance reports instantly
  • Minimal performance impact - Under 2ms overhead

How It Works

See Traceprompt in Action

Watch the demo on YouTube to see how Traceprompt creates tamper-proof audit trails for AI applications in real time.

Understanding how Traceprompt works requires examining the complete journey of an AI interaction - from the moment you call an LLM to when that interaction becomes part of an immutable audit trail. The process involves several coordinated steps that happen at different times to ensure both performance and security.

Client-Side Encryption: Your Data Stays Yours

The foundation of Traceprompt’s security is that your sensitive data never leaves your infrastructure in plaintext. When you wrap an LLM call, the SDK captures both the prompt you send and the response you receive. Before transmitting anything to Traceprompt’s servers, it encrypts this data using AES-256-GCM encryption with a key that only you control.

This encryption happens using your own AWS KMS key - a “Bring Your Own Key” (BYOK) approach. For each interaction, the SDK generates a fresh data key, encrypts your data with it, then wraps that data key under your AWS KMS key - a standard envelope-encryption pattern. This ensures that even if someone gained access to Traceprompt’s systems, they would only see encrypted data that cannot be decrypted without your permission.
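
To make the BYOK flow concrete, here is a minimal sketch of that envelope-encryption pattern using the AWS SDK and Node’s built-in crypto module. It illustrates the idea only - the function name and return shape are assumptions, not Traceprompt’s actual code:

import { KMSClient, GenerateDataKeyCommand } from "@aws-sdk/client-kms";
import { createCipheriv, randomBytes } from "node:crypto";

const kms = new KMSClient({});

// Illustrative envelope encryption: a fresh data key per interaction,
// wrapped under your own KMS key (hypothetical helper, not the SDK's API).
async function encryptInteraction(plaintext: string, kmsKeyId: string) {
  // 1. KMS returns a fresh 256-bit data key in plaintext, plus a copy
  //    encrypted ("wrapped") under your KMS key.
  const { Plaintext, CiphertextBlob } = await kms.send(
    new GenerateDataKeyCommand({ KeyId: kmsKeyId, KeySpec: "AES_256" })
  );

  // 2. Encrypt the interaction locally with AES-256-GCM.
  const iv = randomBytes(12);
  const cipher = createCipheriv("aes-256-gcm", Buffer.from(Plaintext!), iv);
  const ciphertext = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);

  // 3. Ship only the ciphertext and the wrapped key; the plaintext data
  //    key is discarded and never leaves your infrastructure.
  return {
    ciphertext: ciphertext.toString("base64"),
    iv: iv.toString("base64"),
    authTag: cipher.getAuthTag().toString("base64"),
    encryptedDataKey: Buffer.from(CiphertextBlob!).toString("base64"),
  };
}

Only a caller who is authorized to use your KMS key can unwrap the data key, which is why Traceprompt’s stored ciphertext alone is useless to an attacker.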

Timing: This encryption process happens immediately when your LLM call completes, adding less than 2 milliseconds of overhead to your application.

Creating Cryptographic Fingerprints

Once your data is encrypted, Traceprompt creates a unique cryptographic “fingerprint” of the entire interaction using the BLAKE3 hashing algorithm. This fingerprint, called a leaf hash, represents exactly what happened in that specific AI interaction. Any change to the original data - even changing a single character - would produce a completely different fingerprint.

These fingerprints are then linked together in a hash chain, where each new entry includes a reference to the previous entry’s fingerprint. This makes the sequence tamper-evident: if someone alters or deletes an entry, every later fingerprint stops matching and the tampering becomes immediately detectable.
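
The chaining idea is simple enough to sketch directly. The following illustration uses the blake3 npm package; the field and function names are hypothetical, not Traceprompt’s internals:

import { hash } from "blake3";

interface ChainEntry {
  ciphertext: string; // the encrypted interaction payload
  prevHash: string;   // leaf hash of the previous entry
  leafHash: string;   // BLAKE3 fingerprint of this entry
}

// Append an entry whose fingerprint commits to the previous one, so
// altering any earlier entry invalidates every later fingerprint.
function appendEntry(chain: ChainEntry[], ciphertext: string): ChainEntry {
  const prevHash =
    chain.length > 0 ? chain[chain.length - 1].leafHash : "0".repeat(64);
  const leafHash = hash(prevHash + ciphertext).toString("hex");
  const entry = { ciphertext, prevHash, leafHash };
  chain.push(entry);
  return entry;
}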

The Anchor System: Publishing Proof to the World

Periodically, Traceprompt’s anchor service collects recent interactions and organizes them into a mathematical structure called a Merkle tree. This structure provides a single “root” hash that represents all the interactions in that batch, along with cryptographic proofs that any specific interaction was included.

Timing: The batching process runs automatically based on volume and time intervals, typically processing groups of interactions every few minutes to balance efficiency with timely anchoring.

These Merkle roots are then published to a public GitHub repository with GPG-signed commits, creating a public, timestamped record that anyone can verify. When you need to prove that a specific AI interaction happened at a particular time and wasn’t tampered with, you can use the Merkle proof and the public GitHub record to demonstrate this mathematically.
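
To see what “demonstrate this mathematically” means, here is a minimal sketch of computing a Merkle root and checking an inclusion proof. It is illustrative only - Traceprompt’s tree layout and proof format may differ:

import { hash } from "blake3";

const h = (data: string): string => hash(data).toString("hex");

// Reduce a batch of leaf hashes to a single Merkle root.
function merkleRoot(leaves: string[]): string {
  if (leaves.length === 0) throw new Error("empty batch");
  let level = leaves;
  while (level.length > 1) {
    const next: string[] = [];
    for (let i = 0; i < level.length; i += 2) {
      // Duplicate the last node when a level has an odd count.
      next.push(h(level[i] + (level[i + 1] ?? level[i])));
    }
    level = next;
  }
  return level[0];
}

// Verify that a leaf is included under a published root, given the
// sibling hashes along the path from the leaf up to the root.
function verifyInclusion(
  leaf: string,
  proof: { sibling: string; siblingOnLeft: boolean }[],
  root: string
): boolean {
  let acc = leaf;
  for (const step of proof) {
    acc = step.siblingOnLeft ? h(step.sibling + acc) : h(acc + step.sibling);
  }
  return acc === root;
}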

Smart Privacy Protection

While your actual conversations remain encrypted, Traceprompt extracts and analyzes metadata that’s safe to store in plaintext. The SDK automatically scans for sensitive information like social security numbers, credit card numbers, and personal health information, categorizing the risk level as general, sensitive, or critical.

This metadata includes information like token counts, response times, which AI model was used, and what types of sensitive data were detected - but never the actual content. This allows compliance teams to generate reports and filter interactions (like “show me all conversations that touched protected health information”) without ever exposing the sensitive data itself.
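
As a rough illustration, that plaintext-safe metadata might look like the sketch below. The schema and the naive regex-based detection are assumptions for the example, not Traceprompt’s actual detectors:

type RiskLevel = "general" | "sensitive" | "critical";

interface InteractionMetadata {
  modelVendor: string;     // e.g. "openai"
  modelName: string;       // e.g. "gpt-4o"
  promptTokens: number;
  completionTokens: number;
  latencyMs: number;
  piiTypes: string[];      // e.g. ["ssn", "credit_card"] - never the content
  riskLevel: RiskLevel;
}

// Naive pattern-based scan, for illustration only.
function classify(text: string): { piiTypes: string[]; riskLevel: RiskLevel } {
  const piiTypes: string[] = [];
  if (/\b\d{3}-\d{2}-\d{4}\b/.test(text)) piiTypes.push("ssn");
  if (/\b(?:\d[ -]?){13,16}\b/.test(text)) piiTypes.push("credit_card");
  const riskLevel: RiskLevel = piiTypes.includes("ssn")
    ? "critical"
    : piiTypes.length > 0
      ? "sensitive"
      : "general";
  return { piiTypes, riskLevel };
}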

Integration That Just Works

This entire process is designed to be transparent to your application. Adding Traceprompt requires minimal code changes:

import { init, wrap } from "@traceprompt-node";
import OpenAI from "openai";
import { config } from "dotenv";

// Load environment variables
config();

// Initialize Traceprompt once
await init();
const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

// Wrap your existing LLM calls
const trackedChat = wrap(
  (prompt: string) => openai.chat.completions.create({
    messages: [{ role: "user", content: prompt }],
    model: "gpt-4o",
  }),
  {
    modelVendor: "openai",
    modelName: "gpt-4o",
    userId: "alice",
  }
);

// Use exactly as before - your app doesn’t change!
const response = await trackedChat("Hello, world!");

Your wrapped function returns exactly the same data as the original function would, so your application logic doesn’t need to change. The encryption, hashing, and transmission all happen asynchronously in the background with minimal performance impact - typically under 2 milliseconds of overhead.
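
Conceptually, the wrapper awaits the model call, hands the result straight back, and fires off the audit work without awaiting it. A simplified sketch of that pattern (not the SDK’s actual implementation):

// Fire-and-forget wrapper sketch: the caller gets the original result
// right away while recording happens asynchronously.
function wrapSketch<A extends unknown[], R>(
  fn: (...args: A) => Promise<R>,
  record: (args: A, result: R) => Promise<void>
): (...args: A) => Promise<R> {
  return async (...args: A) => {
    const result = await fn(...args);
    // Encryption, hashing, and transmission run in the background;
    // failures are logged but never block or break the caller.
    void record(args, result).catch((err) =>
      console.error("audit log failed:", err)
    );
    return result;
  };
}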

Now that we’ve seen how individual interactions are processed, let’s examine how these encrypted logs become comprehensive audit packages that organizations can use for compliance reporting.

The Complete Traceprompt Workflow

Achieving audit compliance with Traceprompt follows a structured 4-step process that takes you from SDK installation to verifiable audit trails. Each step builds on the previous one to create a complete audit system:

Step 1: Install the SDK

Add Traceprompt to your application with minimal code changes. The SDK automatically encrypts and logs every AI interaction as it happens, with immediate client-side encryption.

Step 2: Monitor Your Integrations

View real-time logs of your AI interactions through the Traceprompt dashboard, including PII detection and risk assessment. This provides ongoing visibility into your AI usage patterns.

Step 3: Generate Audit Packs

Create comprehensive audit packages containing encrypted data, cryptographic proofs, and GitHub anchor references for any time period. These packages are only available after all interactions in the time range have been anchored to ensure completeness.

Step 4: Verify Audit Packs

Use independent verification tools to mathematically prove the integrity of your audit data using Merkle proofs and public GitHub commits. This verification can be performed by third-party auditors without requiring access to Traceprompt systems.

Dashboard Overview

Once your SDK is installed, you’ll have access to a comprehensive dashboard that provides real-time visibility into your AI interactions. The dashboard automatically categorizes PII detected in your conversations, tracks audit activity, and provides tools for generating compliance reports.

[Screenshot: Traceprompt dashboard showing LLM interactions, PII detection, and audit activity]

The Traceprompt dashboard provides real-time monitoring of AI interactions, PII detection, and audit trail generation.

From Logs to Audit Packs

When it’s time for an audit, Traceprompt automatically generates comprehensive audit packs - cryptographically sealed ZIP files containing everything an auditor needs to verify your data’s integrity. Each audit pack includes:

  • Encrypted interaction data - Your actual AI conversations, encrypted with your own AWS KMS keys
  • Merkle proofs - Mathematical evidence that each interaction is part of the anchored batch
  • GitHub anchor references - Links to public commits containing the Merkle root hashes
  • Digital signatures - GPG-signed manifests proving the pack’s authenticity
  • Verification tools - Scripts and metadata to independently verify all cryptographic claims
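
For a sense of what an auditor receives, a pack manifest might be shaped like this - a hypothetical structure for illustration, not Traceprompt’s actual format:

// Hypothetical audit pack manifest shape (illustrative only).
interface AuditPackManifest {
  rangeStart: string;        // ISO timestamp of the first interaction covered
  rangeEnd: string;          // ISO timestamp of the last interaction covered
  entries: {
    leafHash: string;        // BLAKE3 fingerprint of the interaction
    ciphertextFile: string;  // path to the KMS-encrypted payload in the ZIP
    merkleProof: string[];   // sibling hashes up to the batch root
    anchorCommit: string;    // public GitHub commit containing that root
  }[];
  signature: string;         // Ed25519 signature over the manifest contents
}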

The Anchor System: Your Public Proof

Behind the scenes, Traceprompt’s anchor service continuously processes your encrypted logs through a sophisticated batching system. Every few minutes, it groups recent interactions into Merkle trees and commits the root hashes to a public GitHub repository with GPG-signed commits.

This creates an immutable, publicly verifiable timestamp for when your data was recorded. The anchor system processes thousands of interactions efficiently while maintaining cryptographic integrity through hash chains and Merkle proofs.
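
The anchoring step itself can be pictured as writing a root hash into the repository and publishing it with a signed commit. A rough sketch, assuming git is installed and a GPG signing key is configured (not Traceprompt’s actual service code):

import { execFileSync } from "node:child_process";
import { writeFileSync } from "node:fs";

// Write a batch's Merkle root to the anchor repo and publish it with a
// GPG-signed commit, creating a public, timestamped record.
function anchorRoot(repoDir: string, batchId: string, root: string): void {
  writeFileSync(`${repoDir}/anchors/${batchId}.txt`, root + "\n");
  execFileSync("git", ["-C", repoDir, "add", `anchors/${batchId}.txt`]);
  execFileSync("git", ["-C", repoDir, "commit", "-S", "-m", `anchor ${batchId}: ${root}`]);
  execFileSync("git", ["-C", repoDir, "push"]);
}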

Independent Verification

The verification process provides auditors with tools to mathematically prove data integrity without requiring trust in Traceprompt. This independent verification capability includes:

  • Digital signature verification - Validate Ed25519 signatures on audit pack bundles using published public keys
  • Merkle proof validation - Verify that specific interactions are included in anchored batches using cryptographic proofs
  • GitHub commit verification - Cross-reference Merkle roots against public, GPG-signed commits in the anchor repository
  • Data decryption - Decrypt audit data using your own AWS KMS keys, ensuring you maintain control over access

This verification can be performed entirely offline using the tools and proofs included in each audit pack, creating a level of trust that traditional logging systems cannot provide. Auditors can verify the integrity and authenticity of your AI audit trails without needing access to Traceprompt’s infrastructure.
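
As one example of those offline checks, verifying the Ed25519 signature on a manifest needs nothing beyond Node’s built-in crypto module and the publisher’s public key. A minimal sketch, with hypothetical file names:

import { verify } from "node:crypto";
import { readFileSync } from "node:fs";

// Verify the Ed25519 signature on an audit pack manifest, fully offline.
function verifyManifestSignature(
  manifestPath: string,
  signaturePath: string,
  publicKeyPem: string
): boolean {
  const manifest = readFileSync(manifestPath);
  const signature = readFileSync(signaturePath);
  // For Ed25519 keys, Node's verify() takes null as the digest algorithm.
  return verify(null, manifest, publicKeyPem, signature);
}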

Real-World Use Cases

Healthcare

Medical AI assistants need to prove they properly handled patient data and followed HIPAA requirements for every diagnosis or recommendation.

Finance

Banking chatbots making loan decisions must maintain tamper-proof records for regulatory audits and fair lending compliance.

Legal

AI legal assistants need verifiable logs showing exactly what information was used for each legal recommendation or document review.

Government

Public sector AI systems require transparent, auditable records to maintain public trust and meet accountability standards.

Why This Matters

As AI becomes more integrated into critical business processes, regulators are requiring organizations to prove their AI systems are operating correctly and handling data responsibly. Traditional logging isn’t sufficient because:

  • Regular logs can be easily modified or deleted
  • They often don’t capture the actual AI conversations
  • Sensitive data isn’t properly protected during storage
  • There’s no way to verify the logs haven’t been tampered with

Traceprompt addresses these challenges by creating immutable, encrypted, and verifiable records of every AI interaction, transforming compliance from a manual burden into an automated, systematic process that organizations can rely on for regulatory requirements.