api-testing · postman · docker · python · fastapi · rag · documentation

Building a Complete API Testing Suite: From Postman Collections to Production Readiness

Oli


Testing APIs can feel like a chore—until you realize it's actually the foundation that lets you ship with confidence. Today I want to share how I built a comprehensive testing suite for MiniRAG, a multi-tenant RAG (Retrieval-Augmented Generation) API, using Postman collections with automated assertions.

The Mission: 28 Requests, 7 Workflows, Zero Manual Testing

The goal was ambitious but clear: create a Postman collection that could test every critical path of the MiniRAG API without requiring manual intervention. This meant building a collection that could:

  • Bootstrap tenants and manage API tokens
  • Create and configure bot profiles
  • Handle document ingestion and retrieval
  • Test the full chat RAG workflow
  • Clean up resources automatically
  • Run end-to-end in CI/CD pipelines

The Architecture: Building Smart Test Flows

Instead of creating isolated requests that require manual setup, I designed the collection as 7 interconnected workflows that build upon each other:

0. Health Check (The Smoke Test)

A simple ping to verify the API is alive. Every good test suite starts with "is the thing even running?"
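In Postman, that smoke test is just a couple of assertions attached to the ping request. The sketch below includes a minimal stand-in for Postman's `pm` object so it also runs under plain Node; inside Postman the sandbox injects the real `pm` and the stub goes away (the latency budget and response shape here are illustrative, not MiniRAG's actual contract):

```javascript
// Minimal stand-in for the slice of Postman's pm API this script touches.
// Inside Postman, delete this stub; the sandbox provides pm for you.
const pm = {
  response: {
    code: 200,        // pretend the API answered 200 OK
    responseTime: 42, // pretend latency in ms
    json: () => ({ status: "ok" }),
  },
  test(name, fn) { fn(); console.log(`PASS: ${name}`); },
  expect(actual) {
    return {
      to: {
        equal(expected) {
          if (actual !== expected) throw new Error(`${actual} !== ${expected}`);
        },
        be: {
          below(limit) {
            if (!(actual < limit)) throw new Error(`${actual} not < ${limit}`);
          },
        },
      },
    };
  },
};

// The actual test script attached to the health-check request:
pm.test("API is alive", function () {
  pm.expect(pm.response.code).to.equal(200);
});

pm.test("Responds within 500ms", function () {
  pm.expect(pm.response.responseTime).to.be.below(500);
});
```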

1. Tenant Bootstrap

  • Create tenant
  • Handle duplicate creation (409 expected)
  • Retrieve tenant details
  • Test unauthorized access (401 expected)
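The negative paths are plain status-code assertions. Here's a sketch of the duplicate-tenant test script, again with a tiny `pm` stub so it runs outside Postman (the request name in the comment is illustrative):

```javascript
// Stub emulating just enough of Postman's pm API to run under Node;
// inside Postman the sandbox injects the real pm.
const pm = {
  response: { code: 409 }, // pretend the API rejected the duplicate
  test(name, fn) { fn(); console.log(`PASS: ${name}`); },
  expect(actual) {
    return {
      to: {
        equal(expected) {
          if (actual !== expected) throw new Error(`${actual} !== ${expected}`);
        },
      },
    };
  },
};

// Test script on the duplicate "Create Tenant" request: a second create
// with the same identifier should be rejected, not silently succeed.
pm.test("Duplicate tenant returns 409", function () {
  pm.expect(pm.response.code).to.equal(409);
});
```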

2. API Token Management

  • Generate tokens with proper scopes
  • List active tokens
  • Revoke tokens when needed

3. Bot Profile Lifecycle

  • Create profiles with and without credentials
  • List, retrieve, and update profiles
  • Test 404 handling for non-existent profiles

4. Source Document Management

  • Upload and ingest documents
  • Filter sources by tenant
  • Poll ingestion status
  • Test cross-tenant isolation (422 expected)
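Polling ingestion status is the one workflow that needs a loop, and Postman's `postman.setNextRequest()` handles it: the test script re-queues the same request until the document is done. A sketch, with stubs so it runs under plain Node (the request name and the `status` field values are assumptions, not MiniRAG's real schema):

```javascript
// Stubs so this runs outside Postman; the sandbox provides both objects.
const pm = {
  response: { json: () => ({ status: "ready" }) },
  collectionVariables: {
    _store: {},
    set(k, v) { this._store[k] = v; },
    get(k) { return this._store[k]; },
  },
};
const postman = {
  setNextRequest(name) { console.log(`next request -> ${name}`); },
};

// Test script on the "Get Ingestion Status" request: while the document
// is still pending, re-run this same request; once it's done, record the
// final status and fall through to the next request in the folder.
const body = pm.response.json();

if (body.status === "pending") {
  postman.setNextRequest("Get Ingestion Status");
} else {
  pm.collectionVariables.set("ingestion_status", body.status);
}
```

In a real collection you'd also cap the retries (e.g. count attempts in a collection variable) so a stuck ingestion job fails the run instead of looping forever.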

5. Chat RAG (The Money Shot)

  • Start new conversations
  • Continue existing chats
  • Retrieve conversation metadata and history
  • Handle invalid bot/chat scenarios

6. Cleanup

  • Soft-delete sources and profiles
  • Leave the system clean for the next run

The Secret Sauce: Auto-Managed Variables

Here's where it gets interesting. Instead of hardcoding IDs or requiring manual variable updates, every request includes Postman test scripts that automatically extract and store important values:

// Example: Auto-extract tenant_id from response
pm.test("Store tenant_id for future requests", function () {
    const response = pm.response.json();
    pm.collectionVariables.set("tenant_id", response.id);
});

The collection manages these variables automatically:

  • api_token - Authentication for all requests
  • tenant_id - Multi-tenant isolation
  • profile_id - Bot profile reference
  • source_id - Document reference
  • chat_id - Conversation thread
  • token_id - Token management
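Once stored, later requests pick these values up through Postman's `{{variable}}` templating in URLs and headers. A hypothetical example (the paths and the `base_url` variable are illustrative, not necessarily MiniRAG's exact routes):

```
GET {{base_url}}/tenants/{{tenant_id}}/profiles/{{profile_id}}
Authorization: Bearer {{api_token}}
```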

Lessons Learned: The Pain Points That Taught Me

Environment Configuration Hell

The Problem: I tried running `docker compose up -d` without a `.env` file. Docker Compose failed silently, leaving me wondering why the services weren't starting properly.

The Solution: Always create `.env` from `.env.example` first. More importantly, I learned that development environments need different hostnames: use `localhost` instead of container names when running the API server locally.

Python Environment Gotchas

The Problem: I needed to generate a Fernet encryption key using Python's `cryptography` module. The system Python didn't have the module installed, causing a `ModuleNotFoundError`.

The Solution: Always use the project's virtual environment (`.venv/bin/python`) instead of the system Python. This seems obvious in retrospect, but in the heat of development it's easy to forget.
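For reference, a Fernet key is just 32 random bytes, URL-safe base64 encoded with padding; `Fernet.generate_key()` from the `cryptography` package produces exactly that shape. A stdlib-only sketch of the same thing, handy when the package isn't installed in whatever interpreter you reached for:

```python
import base64
import os

# A Fernet key: 32 random bytes, URL-safe base64 encoded (44 characters
# including the trailing "=" padding) — the same shape that
# cryptography's Fernet.generate_key() returns.
key = base64.urlsafe_b64encode(os.urandom(32))
print(key.decode())
```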

The Payoff: Documentation That Actually Helps

A Postman collection is only as good as its documentation. I updated the README with everything a new developer needs:

## Testing with Postman

### Import Collection
1. Open Postman
2. Import `postman/MiniRAG.postman_collection.json`
3. Collection includes 28 requests across 7 folders

### Running Tests
- **Folders 0-4**: Work without LLM API key
- **Folders 5-6**: Require `OPENAI_API_KEY` or similar
- **Full Suite**: Run all folders in sequence

### Command Line Testing
```bash
newman run postman/MiniRAG.postman_collection.json \
  --environment your-env.json
```

What's Next: Production-Ready Testing

This collection is already paying dividends, but there's more to build:

  1. CI/CD Integration: Add Newman (Postman's CLI) to GitHub Actions
  2. Database Migrations: Implement Alembic for schema management
  3. Load Testing: Extend collection for performance validation
  4. Environment Management: Create staging/production environment configs
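The Newman step sketches out naturally as a GitHub Actions job. This is a skeleton under assumptions (the workflow name and Node version are mine; the collection path and `your-env.json` come from the README above):

```yaml
# Sketch of a GitHub Actions job running the collection with Newman.
name: api-tests
on: [push]
jobs:
  postman:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm install -g newman
      - run: newman run postman/MiniRAG.postman_collection.json --environment your-env.json
```

In a real pipeline the API and its Docker services would need to be up first (e.g. `docker compose up -d` with a populated `.env`), so treat this as the skeleton of the job, not the whole thing.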

The Bigger Picture

Building this testing suite taught me that good API testing isn't just about catching bugs—it's about creating confidence. When you can run 28 requests with automated assertions and know your entire API surface is working correctly, you ship faster and sleep better.

The time investment upfront (about half a day) has already saved hours of manual testing and caught integration issues that would have been painful to debug in production.

Try It Yourself

If you're building APIs, I highly recommend this approach:

  1. Start with workflows, not individual requests
  2. Use test scripts to manage state automatically
  3. Document everything for your future self
  4. Build incrementally—start with health checks

Your future self (and your team) will thank you when that critical hotfix needs to ship and you can validate the entire system with a single click.


The MiniRAG API is running locally on port 8000 with Docker services (PostgreSQL, Qdrant, Redis) providing the backend infrastructure. All code is committed and ready for the next development session.