CIUF — Incremental PostgreSQL Query Cache for Python

Sub-millisecond repeated SQL queries. No Redis. No separate service. No manual cache invalidation.

```
pip install ciuf[postgres]
```

The problem

Your Python app runs the same JOIN-heavy queries thousands of times a day. The data barely changes. You're paying full PostgreSQL roundtrip cost every single time.

Redis helps, but it adds a service to deploy, serialization to manage, and invalidation logic to maintain. And when a new order arrives, you typically invalidate the entire cached result and recompute it from scratch.
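
To make the pain concrete, here is the cache-aside pattern CIUF targets. A plain dict stands in for Redis so the snippet is self-contained, and `get_pro_orders` / `on_new_order` are illustrative names, not part of any API:

```python
# Classic cache-aside with full invalidation: any write throws away
# the whole cached result, and the next read recomputes from scratch.
cache = {}

def get_pro_orders(run_query):
    key = "orders:pro"
    if key not in cache:
        cache[key] = run_query()      # cache miss: full recompute
    return cache[key]

def on_new_order(order):
    # One new row invalidates the entire cached result.
    cache.pop("orders:pro", None)

calls = []
def fake_query():
    calls.append(1)                   # count how often the DB is hit
    return [("order-1", 99.0)]

get_pro_orders(fake_query)            # miss: query runs
get_pro_orders(fake_query)            # hit: served from cache
on_new_order({"id": 2})               # write invalidates everything
get_pro_orders(fake_query)            # miss again: full recompute
```

Two of the four reads hit the database, even though the second recompute differs from the first by a single row. CIUF's pitch is to replace the `pop` with incremental propagation.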

How CIUF works

CIUF builds an in-memory DAG that mirrors your query's JOIN structure:

```
TableNode(orders) ──────┐
                        ├── JoinNode ── FilterNode ── SelectNode → result
TableNode(customers) ───┘
```

- **First read:** loads from PostgreSQL once
- **Repeated reads:** pure in-memory lookup — no network, no serialization
- **On write:** only the affected rows propagate through the DAG — no full recompute
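
As an illustration of that write path, here is a heavily simplified sketch of delta propagation through a join node. The class names echo the diagram above, but this is not CIUF's actual implementation; column-name collisions and deletes are glossed over:

```python
# Sketch of incremental join maintenance: an insert pushes only the
# new row (the "delta") downstream, which is joined against the other
# side's already-materialized state instead of recomputing everything.

class TableNode:
    def __init__(self):
        self.rows = []
        self.downstream = []

    def insert(self, row):
        self.rows.append(row)
        for node in self.downstream:
            node.on_delta(self, [row])   # push only the new row

class JoinNode:
    def __init__(self, left, right, left_key, right_key):
        self.left, self.right = left, right
        self.left_key, self.right_key = left_key, right_key
        self.result = []
        left.downstream.append(self)
        right.downstream.append(self)

    def on_delta(self, source, rows):
        # Join the delta against the opposite side's full state.
        if source is self.left:
            self.result += [{**r, **o} for r in rows
                            for o in self.right.rows
                            if r[self.left_key] == o[self.right_key]]
        else:
            self.result += [{**o, **r} for r in rows
                            for o in self.left.rows
                            if r[self.right_key] == o[self.left_key]]

orders, customers = TableNode(), TableNode()
join = JoinNode(orders, customers, "customer_id", "id")

customers.insert({"id": 42, "name": "Acme"})
orders.insert({"id": 1, "amount": 99.0, "customer_id": 42})
# join.result now holds the one matched row; no full recompute happened
```

The key property: the cost of a write scales with the size of the delta and the matching rows, not with the size of the cached result.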

Quickstart

```python
from ciuf import Engine

engine = Engine("postgresql://user:pass@localhost/mydb")

# First call hits PostgreSQL; subsequent calls are in-memory
result = engine.query("""
    SELECT orders.id, orders.amount, customers.name
    FROM orders
    JOIN customers ON orders.customer_id = customers.id
    WHERE customers.plan = 'pro'
""")

# Keep the cache in sync with writes
engine.on_insert("orders", {"id": 12345, "amount": 99.0, "customer_id": 42})
engine.on_update("customers", old={"id": 42, "plan": "free"}, new={"id": 42, "plan": "pro"})
```
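
One way to picture what an update like the `on_update` call above has to do at the filter node: treat it as a delete of the old row plus an insert of the new one, and touch the cached result only when the row's predicate verdict changes. This is an illustrative sketch under that assumption, not CIUF's internals; `apply_update` is a hypothetical helper:

```python
# An UPDATE as (delete old, insert new) against a cached, filtered
# result. Only rows whose predicate verdict flips change the cache.

def apply_update(result, predicate, old, new):
    if predicate(old) and not predicate(new):
        result.remove(old)                  # row left the result set
    elif not predicate(old) and predicate(new):
        result.append(new)                  # row entered the result set
    elif predicate(old) and predicate(new):
        result[result.index(old)] = new     # row changed in place
    return result

def is_pro(row):
    return row["plan"] == "pro"

cached = []
apply_update(cached, is_pro,
             old={"id": 42, "plan": "free"},
             new={"id": 42, "plan": "pro"})
# cached now contains the id-42 row, without re-running the query
```

Because the customer moved from `free` to `pro`, the row enters the cached result set directly; no other cached rows are touched.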

Full quickstart → · GitHub →


When to use CIUF

| Criterion                        | CIUF | Redis   |
| -------------------------------- | ---- | ------- |
| High read-to-write ratio         | ✓    | ✓       |
| Complex SQL (JOINs, GROUP BY)    | ✓    | Painful |
| Single-process app               | ✓    | ✓       |
| Multi-process / multi-instance   | ✗    | ✓       |
| Incremental write propagation    | ✓    | Manual  |
| Zero infra overhead              | ✓    | ✗       |

Status

Alpha — MVP in active development. Follow progress on GitHub →

Inspired by Noria.