
Optimizing PostgreSQL Queries with Prisma ORM: A Deep Dive

Learn techniques to improve database performance by 40%. Real-world examples from production systems at Asynq.ai and Modelia.ai handling millions of requests daily.

Harsh Rastogi
Jan 8, 2025 · Updated Feb 15, 2026 · 10 min
PostgreSQL · Prisma · Performance · Database · Backend

The Database Challenge at Scale

When I was building the backend at Asynq.ai, we hit a wall. Our Agentic AI hiring platform was growing — more candidates, more assessments, more data flowing through the system every day. PostgreSQL queries that took 20ms with 10,000 rows were taking 800ms with 500,000 rows. Dashboard pages were timing out. Background jobs were backing up.

This deep dive shares the exact techniques I used to improve database performance by 40% across two production systems. These same patterns now power the data infrastructure at Modelia.ai, where we process Generative AI model outputs and manage product catalogs for Shopify merchants at enterprise scale.

Understanding Prisma's Query Engine

Prisma ORM is a powerful abstraction over SQL, but that abstraction can hide performance problems if you're not careful. The key is understanding what queries Prisma generates under the hood and optimizing at both the Prisma and PostgreSQL levels.

The first step I always take is enabling Prisma's query logging:

typescript
const prisma = new PrismaClient({
  log: [
    { level: 'query', emit: 'event' },
    { level: 'warn', emit: 'stdout' },
    { level: 'error', emit: 'stdout' },
  ],
});

prisma.$on('query', (e) => {
  if (e.duration > 100) {
    console.warn(`Slow query (${e.duration}ms): ${e.query}`);
  }
});

This immediately reveals the biggest offenders. At Asynq.ai, I discovered that a single dashboard endpoint was making 47 database queries to render a list of 20 candidates. The culprit? The N+1 problem.

Solving the N+1 Problem

The N+1 problem is the most common performance killer in any ORM. You fetch a list of records (1 query), then for each record, you fetch related data (N queries). With Prisma, it looks innocent:

typescript
// BAD: N+1 queries — this generates 41 queries (1 + 20 × 2) for 20 candidates
const candidates = await prisma.candidate.findMany({ take: 20 });
for (const candidate of candidates) {
  const scores = await prisma.assessmentScore.findMany({
    where: { candidateId: candidate.id },
  });
  const interviews = await prisma.interview.findMany({
    where: { candidateId: candidate.id },
  });
  candidate.scores = scores;
  candidate.interviews = interviews;
}

// GOOD: a single findMany with include — Prisma batches each relation,
// so the query count stays constant no matter how many rows come back
const candidates = await prisma.candidate.findMany({
  take: 20,
  include: {
    assessmentScores: true,
    interviews: {
      include: { interviewer: { select: { name: true, avatar: true } } },
    },
  },
});

After this fix alone, that dashboard endpoint at Asynq.ai went from 47 queries (820ms) to a constant handful of batched queries (45ms).
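The arithmetic makes clear why the loop fails at scale — a quick illustrative sketch (not code from the Asynq.ai codebase):

```typescript
// Queries issued for a page of `rows` records with `relations` related tables.
// The loop issues one query per record per relation; `include` batches each
// relation into a single query, so the count stays constant as rows grow.
function queryCount(rows: number, relations: number, batched: boolean): number {
  return batched ? 1 + relations : 1 + rows * relations;
}

console.log(queryCount(20, 2, false)); // 41 — the loop above
console.log(queryCount(20, 2, true)); // 3 — with include
console.log(queryCount(500, 2, false)); // 1001 — the loop at scale
```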

Strategic Indexing

Indexes are the most impactful database optimization, but they're also the most commonly neglected. At Modelia.ai, we developed an indexing strategy based on actual query patterns extracted from our slow query log.

Composite Indexes

When your WHERE clause filters on multiple columns, a composite index is dramatically faster than individual indexes:

sql
-- For queries like: WHERE merchant_id = ? AND status = ? ORDER BY created_at DESC
CREATE INDEX idx_products_merchant_status_created
ON products (merchant_id, status, created_at DESC);

-- Prisma schema equivalent:
// @@index([merchantId, status, createdAt(sort: Desc)])

The column order matters. PostgreSQL uses indexes left-to-right, so put the most selective column first (the one that eliminates the most rows). At Modelia.ai, merchant_id eliminates 99% of rows immediately, making the subsequent status filter trivial.
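To verify an index is actually being used, I check the query plan. A quick sketch — the commented Prisma call is illustrative (only `EXPLAIN` trusted, hard-coded SQL this way), and the plan check itself is just a string test on Postgres's text-format output:

```typescript
// A Postgres text-format plan mentions "Index Scan" (or "Index Only Scan" /
// "Bitmap Index Scan") when an index is used, and "Seq Scan" when it is not.
function usesIndex(planText: string): boolean {
  return /Index (Only )?Scan|Bitmap Index Scan/.test(planText);
}

// Illustrative usage (assumes a `prisma` client is in scope):
// const plan = await prisma.$queryRawUnsafe<{ 'QUERY PLAN': string }[]>(
//   `EXPLAIN SELECT * FROM products WHERE merchant_id = $1 AND status = $2`,
//   merchantId, 'active'
// );
// console.log(usesIndex(plan.map((r) => r['QUERY PLAN']).join('\n')));

console.log(usesIndex('Index Scan using idx_products_merchant_status_created on products')); // true
console.log(usesIndex('Seq Scan on products')); // false
```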

Partial Indexes

For queries that always filter on a condition, partial indexes save space and improve performance:

sql
-- Only index active products (90% of our queries filter on active = true)
CREATE INDEX idx_active_products ON products (merchant_id, created_at)
WHERE active = true;

-- This index is 60% smaller than a full index and faster to scan

GIN Indexes for JSONB

At Modelia.ai, we store AI model metadata as JSONB columns — style tags, color palettes, size recommendations. GIN indexes make querying into these columns fast:

sql
CREATE INDEX idx_product_metadata ON products USING GIN (ai_metadata jsonb_path_ops);

-- Now this query uses the index:
SELECT * FROM products WHERE ai_metadata @> '{"style": "casual", "season": "summer"}';
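On the Prisma side, we go through raw SQL for this: as far as I can tell, Prisma's native JSON filters (`path` / `equals`) don't emit the `@>` containment operator, so they may bypass a `jsonb_path_ops` GIN index. A sketch (the commented `prisma` call is illustrative):

```typescript
// Build the containment filter as a plain object, then pass it through a
// parameterized raw query — `@>` is what the jsonb_path_ops index supports.
const filter = { style: 'casual', season: 'summer' };
const filterJson = JSON.stringify(filter);

// Illustrative usage (assumes a `prisma` client is in scope):
// const products = await prisma.$queryRaw`
//   SELECT * FROM products
//   WHERE ai_metadata @> ${filterJson}::jsonb
// `;

console.log(filterJson); // {"style":"casual","season":"summer"}
```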

Connection Pooling with PgBouncer

At Asynq.ai, we ran 6 microservices, each with 3 replicas, each opening 10 Prisma connections. That's 180 connections — well beyond PostgreSQL's default max_connections of 100, so we had already raised the limit. Even then, every deploy spiked connections and occasionally exhausted the pool.

PgBouncer sits between your application and PostgreSQL, maintaining a pool of reusable connections:

ini
# pgbouncer.ini
[databases]
mydb = host=localhost port=5432 dbname=mydb

[pgbouncer]
listen_port = 6432
pool_mode = transaction
max_client_conn = 1000
default_pool_size = 20

The key insight is pool_mode = transaction. This means a connection is assigned to a client only for the duration of a transaction, then returned to the pool. Our effective connection count dropped from 180 to 20 while handling more throughput than before.
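One caveat when pairing Prisma with transaction pooling: Prisma relies on prepared statements, which don't survive transaction-mode pooling, so the pooled connection string needs the `pgbouncer=true` flag, and migrations should hit the database directly. A sketch with placeholder hosts and credentials:

```typescript
// Pooled URL for the application — `pgbouncer=true` tells Prisma to skip
// prepared statements, and `connection_limit` caps Prisma's own internal
// pool per instance.
const poolerUrl =
  'postgresql://app:secret@pgbouncer-host:6432/mydb?pgbouncer=true&connection_limit=5';

// Direct URL for `prisma migrate` — schema changes should bypass the pooler.
const directUrl = 'postgresql://app:secret@db-host:5432/mydb';

console.log(poolerUrl.includes('pgbouncer=true')); // true
```

In schema.prisma, the direct URL goes in the datasource's `directUrl` field so `prisma migrate` talks to Postgres directly while the client goes through PgBouncer.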

At Modelia.ai, we took this further by running PgBouncer as a sidecar container alongside each service in our Docker deployment.

Advanced Prisma Patterns

Raw Queries for Complex Analytics

Sometimes Prisma's query builder isn't enough. For the analytics dashboard at Modelia.ai, we need window functions, CTEs, and complex aggregations that Prisma doesn't support natively:

typescript
const merchantAnalytics = await prisma.$queryRaw`
  WITH daily_stats AS (
    SELECT
      date_trunc('day', created_at) as day,
      COUNT(*) as total_requests,
      AVG(response_time_ms) as avg_response,
      COUNT(CASE WHEN status = 'completed' THEN 1 END) as successful,
      COUNT(DISTINCT customer_id) as unique_customers
    FROM ai_requests
    WHERE merchant_id = ${merchantId}
      AND created_at > NOW() - INTERVAL '30 days'
    GROUP BY day
  )
  SELECT
    day,
    total_requests,
    avg_response,
    successful,
    unique_customers,
    successful::float / NULLIF(total_requests, 0) as success_rate,
    SUM(total_requests) OVER (ORDER BY day) as cumulative_requests
  FROM daily_stats
  ORDER BY day DESC
`;
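One gotcha with raw aggregations like this: Postgres returns `COUNT(*)` as `bigint`, which Prisma surfaces as a JavaScript `BigInt` — and `JSON.stringify` throws on `BigInt`. You can cast in SQL (`COUNT(*)::int`) or serialize with a replacer; a sketch of the latter:

```typescript
// JSON.stringify replacer that downgrades BigInt to Number so raw
// aggregation rows can be sent to the client. (Safe as long as counts
// stay below Number.MAX_SAFE_INTEGER.)
function toJson(value: unknown): string {
  return JSON.stringify(value, (_key, v) =>
    typeof v === 'bigint' ? Number(v) : v
  );
}

console.log(toJson({ day: '2025-01-08', total_requests: BigInt(1234) }));
// {"day":"2025-01-08","total_requests":1234}
```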

Prisma Middleware for Soft Deletes

At Asynq.ai, we never hard-delete candidate records — compliance requires we keep data for 2 years. Prisma middleware handles this transparently:

typescript
prisma.$use(async (params, next) => {
  // Intercept delete operations and convert to soft delete
  if (params.action === 'delete') {
    params.action = 'update';
    params.args.data = { deletedAt: new Date() };
  }
  if (params.action === 'deleteMany') {
    params.action = 'updateMany';
    params.args.data = { deletedAt: new Date() };
  }

  // Automatically filter out soft-deleted records
  if (params.action === 'findMany' || params.action === 'findFirst') {
    if (!params.args) params.args = {};
    if (!params.args.where) params.args.where = {};
    params.args.where.deletedAt = null;
  }

  return next(params);
});
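One gap in the middleware above: `findUnique` isn't intercepted, and it can't simply take a `deletedAt` filter because its `where` must be a unique selector. The usual fix (also the approach in Prisma's soft-delete example) is to rewrite it to `findFirst`. A sketch as a pure transform with a simplified `params` shape — note that newer Prisma versions deprecate `$use` in favor of client extensions, but the same rewrite applies:

```typescript
type MiddlewareParams = {
  action: string;
  args: { where?: Record<string, unknown> } & Record<string, unknown>;
};

// Rewrite findUnique to findFirst so the soft-delete filter can be applied:
// findFirst accepts non-unique where clauses, findUnique does not.
function applySoftDeleteFilter(params: MiddlewareParams): MiddlewareParams {
  if (params.action === 'findUnique' || params.action === 'findFirst') {
    params.action = 'findFirst';
    params.args.where = { ...params.args.where, deletedAt: null };
  }
  return params;
}

const p = applySoftDeleteFilter({ action: 'findUnique', args: { where: { id: 'c1' } } });
console.log(p.action, p.args.where); // findFirst { id: 'c1', deletedAt: null }
```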

Query Performance Monitoring

You can't optimize what you don't measure. We built a Prisma middleware that tracks query performance and alerts on regressions:

typescript
prisma.$use(async (params, next) => {
  const before = Date.now();
  const result = await next(params);
  const duration = Date.now() - before;

  if (duration > 200) {
    console.error({
      level: 'SLOW_QUERY',
      model: params.model,
      action: params.action,
      duration,
      args: JSON.stringify(params.args).slice(0, 500),
    });
  }

  return result;
});
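A fixed threshold misses gradual regressions, so we also track latency percentiles per model and action. A minimal in-memory sketch (in production this feeds a metrics backend rather than process memory):

```typescript
// Minimal per-key latency recorder: keeps a sliding window of samples
// and reports the p95 latency for each model+action key.
class LatencyTracker {
  private samples = new Map<string, number[]>();

  record(key: string, durationMs: number, maxSamples = 1000): void {
    const arr = this.samples.get(key) ?? [];
    arr.push(durationMs);
    if (arr.length > maxSamples) arr.shift(); // drop the oldest sample
    this.samples.set(key, arr);
  }

  p95(key: string): number {
    const arr = [...(this.samples.get(key) ?? [])].sort((a, b) => a - b);
    if (arr.length === 0) return 0;
    return arr[Math.min(arr.length - 1, Math.floor(arr.length * 0.95))];
  }
}

const tracker = new LatencyTracker();
[12, 15, 11, 200, 14, 13, 16, 12, 18, 15].forEach((d) =>
  tracker.record('Candidate.findMany', d)
);
console.log(tracker.p95('Candidate.findMany')); // 200
```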

Measuring Impact

After implementing these optimizations across both companies:

At Asynq.ai:

  • Dashboard query time: 820ms to 45ms (N+1 fix)
  • Candidate search: 1.2s to 180ms (composite indexes)
  • Connection pool usage: 180 to 20 (PgBouncer)

At Modelia.ai:

  • Product catalog queries: 350ms to 12ms (GIN indexes on JSONB)
  • AI request analytics: 4.5s to 280ms (raw SQL with CTEs)
  • Overall database CPU: reduced by 30%

Migration Strategy

One thing I learned at Bharat Electronics Limited (BEL) is that changes to production systems must be treated with extreme care. At BEL, every deployment for Air Force projects required committee review. I apply a lighter version of this discipline to database migrations:

  • Always use CREATE INDEX CONCURRENTLY — Regular CREATE INDEX locks the table
  • Test migrations against a production-size dataset — A migration that takes 10ms on dev can take 10 minutes on prod
  • Have a rollback plan — Every migration should have a corresponding down migration
  • Monitor after deployment — Watch query latency for 30 minutes after any schema change

Key Takeaways

  • Enable Prisma query logging in development — it reveals N+1 problems immediately
  • Use include and select strategically — never fetch data you don't need
  • Composite indexes should match your most common WHERE + ORDER BY patterns
  • Use PgBouncer for connection pooling — it's the easiest infrastructure win
  • Raw SQL is your friend for complex analytics — Prisma doesn't need to do everything
  • GIN indexes on JSONB columns are essential for AI metadata queries
  • Soft delete middleware saves you from compliance headaches later
  • Profile queries against production-size data, not dev fixtures
  • The discipline from mission-critical systems at BEL applies directly to database migrations

Written by Harsh Rastogi — Full Stack Engineer building production Generative AI systems at Modelia. Connect with me on LinkedIn for more on Shopify, Generative AI, agentic systems, and production engineering.
