When integrating Pgpool-II with PostgreSQL RDS, many administrators notice an interesting behavior:

Queries executed directly from the database server benefit from Pgpool’s memory cache, but the same logical queries executed through Amazon QuickSight do not.

This article explains why this happens, why it is expected behavior, and how to properly architect Pgpool for BI workloads. It also includes a complete step-by-step Pgpool-II setup guide for PostgreSQL RDS.

Why Pgpool Memory Cache Works for Direct Queries but Not for Amazon QuickSight

Understanding Pgpool Memory Query Cache (memqcache)

Pgpool-II includes a memory query cache feature (memqcache) that stores result sets of SELECT queries in memory.

A cache hit occurs only when:

  • The SQL text is identical
  • Bind parameters are identical
  • Session context matches
  • No table invalidation has occurred

If even a single character in the SQL text changes, Pgpool treats it as a new query.
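A minimal pgpool.conf fragment to turn the feature on (illustrative values, not tuned recommendations):

```
# Enable the in-memory query result cache (memqcache)
memory_cache_enabled = on
# Cache storage backend: 'shmem' (shared memory) or 'memcached'
memqcache_method = 'shmem'
# Total cache size and per-result expiry in seconds (example values)
memqcache_total_size = 64MB
memqcache_expire = 60
```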

Direct Queries from psql or Application Servers

When queries are executed directly from:

  • psql
  • Application servers
  • Backend services

the SQL text is usually:

  • Simple
  • Deterministic
  • Textually identical across executions

Because the query text remains unchanged, Pgpool successfully returns cached results.

Result:

  • Memory cache works as expected.

What Changes with Amazon QuickSight

Amazon QuickSight does not send static SQL.

Instead, it dynamically generates SQL to support:

  • Dashboard filters
  • Pagination
  • Visual-level aggregation
  • Calculated fields
  • Row limits
  • Performance optimization

Typical QuickSight SQL behavior:

  • Wraps queries inside subqueries
  • Assigns dynamic aliases
  • Adds parameters
  • Changes LIMIT and OFFSET clauses
  • Generates new SQL text on each refresh

Even if the business logic is identical, the SQL text differs — which causes Pgpool to miss the cache every time.
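As a sketch (hypothetical table and column names), both of the queries below return the same data, but their texts differ, so memqcache stores and looks them up under different keys:

```sql
-- Direct query: stable text, so a repeat execution is a cache hit
SELECT region, SUM(sales) FROM orders GROUP BY region;

-- QuickSight-style generated SQL: subquery wrapper, generated alias, row limit.
-- Logically the same aggregation, textually different, so it misses the cache.
SELECT "t0"."region", "t0"."total_sales"
FROM (SELECT region, SUM(sales) AS total_sales
      FROM orders
      GROUP BY region) AS "t0"
LIMIT 10000;
```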

This is expected behavior.

Key Takeaway

If Pgpool cache works for direct queries but not for QuickSight:

  • Pgpool is working correctly
  • QuickSight dynamic SQL prevents cache reuse
  • Caching should be handled at BI or database layer

Pgpool should primarily be used for:

  • Connection pooling
  • Load balancing
  • Routing
  • Failover

 

Pgpool-II with PostgreSQL RDS: Complete Setup Guide

This section outlines a production-ready Pgpool-II setup for PostgreSQL RDS.

Architecture

Client / Application
        ↓
Pgpool-II (Port 9999)
        ↓
PostgreSQL RDS (Port 5432)

Prerequisites

  • Ubuntu Server (22.04 / 24.04)
  • PostgreSQL client installed
  • Access to PostgreSQL RDS
  • Root or sudo access
  • RDS endpoint available
Assumptions

Component      Value
Pgpool Port    9999
RDS Port       5432
Database       sampledb
Pgpool User    pgpool_user

Step 1: Install Pgpool-II

sudo apt update
sudo apt install -y pgpool2 postgresql-client

Verify installation:

pgpool --version

Step 2: Configure Pgpool

Edit:

sudo nano /etc/pgpool2/pgpool.conf

Update the following parameters:

listen_addresses='*'
port=9999
backend_hostname0='RDS_ENDPOINT'
backend_port0=5432
backend_weight0=1
backend_flag0='ALLOW_TO_FAILOVER'
enable_pool_hba=on
connection_cache=on
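
Since connection pooling is Pgpool's main job in this architecture, it is worth sizing the pool explicitly in the same file. An illustrative fragment (these are Pgpool-II's defaults; tune num_init_children × max_pool against your RDS max_connections):

```
# Max concurrent client connections (preforked Pgpool child processes)
num_init_children = 32
# Cached backend connections kept per child process
max_pool = 4
# Lifetime of idle cached backend connections, seconds (0 = no timeout)
connection_life_time = 0
# Recycle idle child processes after this many seconds
child_life_time = 300
```
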

Step 3: Configure Authentication

Edit:

sudo nano /etc/pgpool2/pool_hba.conf

Add the following line:

host all all 0.0.0.0/0 scram-sha-256

Restart Pgpool:

sudo systemctl restart pgpool2

Step 4: Create Pgpool User in RDS

Connect to RDS:

psql -U postgres_owner -h RDS_ENDPOINT -p 5432 -d sampledb

Create user:

CREATE USER pgpool_user WITH LOGIN PASSWORD 'password';
GRANT CONNECT ON DATABASE sampledb TO pgpool_user;

Exit:

\q

Step 5: Create Pgpool Encryption Key

sudo -u postgres sh -c 'echo "PgPoolScramKey2025!" > /etc/pgpool2/.pgpoolkey'
sudo chown postgres:postgres /etc/pgpool2/.pgpoolkey
sudo chmod 600 /etc/pgpool2/.pgpoolkey

Step 6: Add User to pool_passwd

Back up existing file:

sudo mv /etc/pgpool2/pool_passwd /etc/pgpool2/pool_passwd.bak

Add encrypted user:

sudo -u postgres pg_enc -k /etc/pgpool2/.pgpoolkey -m -u pgpool_user -p

Fix permissions:

sudo chown postgres:postgres /etc/pgpool2/pool_passwd
sudo chmod 600 /etc/pgpool2/pool_passwd

Restart Pgpool:

sudo systemctl restart pgpool2

Step 7: Connect Through Pgpool

psql -U pgpool_user -h 127.0.0.1 -p 9999 -d sampledb

Step 8: Verify Backend Status

Inside psql:

\x
show pool_nodes;

Expected output:

  • Status = up
  • Role = primary
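
The same check can be run non-interactively, assuming the password is supplied via PGPASSWORD or ~/.pgpass:

```shell
psql -U pgpool_user -h 127.0.0.1 -p 9999 -d sampledb -x -c "show pool_nodes;"
```
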

Summary of Changes

On RDS:

  • Created pgpool_user
  • Granted CONNECT privilege

On Pgpool:

  • Configured backend
  • Enabled authentication
  • Added encrypted credentials
  • Restarted service

Successful connection through port 9999 confirms setup.

Final Thoughts

Pgpool-II works excellently for:

  • Connection pooling
  • Routing
  • High availability
  • Failover management

However, its memory cache is not suitable for dynamic BI workloads like Amazon QuickSight.

For BI optimization:

  • Use SPICE
  • Use materialized views
  • Tune database performance
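
For instance, a materialized view can precompute an expensive aggregation once, so that QuickSight's generated SQL reads from a small, stable relation (table and column names are illustrative):

```sql
-- Precompute the heavy aggregation once
CREATE MATERIALIZED VIEW sales_by_region AS
SELECT region, SUM(sales) AS total_sales
FROM orders
GROUP BY region;

-- Point the QuickSight dataset at the view and refresh on a schedule
REFRESH MATERIALIZED VIEW sales_by_region;
```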

Understanding this architectural distinction prevents unnecessary troubleshooting and leads to a cleaner production design.