Open Source · Python · PostgreSQL · Apache 2.0

Sub-millisecond PostgreSQL queries.
Without Redis.

CIUF builds a DAG that mirrors your SQL query. Hot reads return from memory in <1ms. Writes propagate incrementally — no full recomputation, no manual invalidation.

$ pip install ciuf[postgres]
Star on GitHub
Cold (first query)   baseline
Redis (hot)          5–15ms
CIUF (hot)           <1ms

Redis is the wrong tool
for SQL query caching

Redis is excellent for key-value access. For complex SQL results, it creates a maintenance burden that grows with your schema.

With Redis
Serialize DataFrame → JSON/pickle on every write (+5–20ms roundtrip)
Write invalidation logic for every INSERT, UPDATE, DELETE on every table
Miss one write path → stale data silently served in production
1 row changed out of 500k → invalidate entire result, rebuild from scratch
Separate service to deploy, monitor, and scale alongside your app
With CIUF
In-process DataFrame — zero serialization, zero network hop
One call per write event: on_insert, on_update, on_delete
DAG propagates delta automatically — correctness guaranteed by the library
Incremental update: only the changed rows move through the DAG
Pure Python library — pip install ciuf, nothing else to run
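The "miss one write path" failure mode above is easy to reproduce. This is a generic hand-rolled cache sketch, not the CIUF API and not Redis itself: one write path remembers to invalidate, another forgets, and the cache silently serves stale data.

```python
# A hand-rolled query cache with manual invalidation (the pattern CIUF replaces).
cache = {}

def get_pro_customers(db):
    # Serve from cache if present, otherwise "query" the database.
    if "pro_customers" not in cache:
        cache["pro_customers"] = [c for c in db if c["plan"] == "pro"]
    return cache["pro_customers"]

def update_plan_with_invalidation(db, cid, plan):
    # Correct write path: remembers to drop the cached result.
    db[cid]["plan"] = plan
    cache.pop("pro_customers", None)

def update_plan_forgotten(db, cid, plan):
    # Buggy write path: the invalidation call was forgotten.
    db[cid]["plan"] = plan

db = [{"id": 0, "plan": "free"}, {"id": 1, "plan": "pro"}]
assert len(get_pro_customers(db)) == 1  # cache warmed with 1 pro customer
update_plan_with_invalidation(db, 0, "pro")
assert len(get_pro_customers(db)) == 2  # correct after invalidation
update_plan_forgotten(db, 1, "free")
assert len(get_pro_customers(db)) == 2  # STALE: customer 1 is no longer 'pro'
```

The last assertion passing is the bug: the database says one pro customer, the cache says two, and nothing ever flags the mismatch.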

Three steps.
No infrastructure.

CIUF mirrors your SQL query structure in memory as a DAG. Reads are pure in-memory lookups. Writes propagate only the changed delta.

1

Connect

Point CIUF at your database. It discovers the schema automatically — no configuration needed.

Python
from ciuf import Engine

engine = Engine(
    "postgresql://user:pass@localhost/mydb"
)
2

Query

Pass any SQL SELECT. CIUF parses it, builds the DAG, and returns a pandas DataFrame — cached after the first call.

Python
from ciuf import from_sql

result = from_sql(engine, """
  SELECT orders.id, orders.amount,
         customers.name
  FROM orders
  JOIN customers ON orders.customer_id
                  = customers.id
  WHERE customers.plan = 'pro'
""")
df = result.query()  # <1ms hot
3

Stay in sync

Notify CIUF of writes as they happen. The delta propagates incrementally — no full recomputation.

Python
# After inserting a new order
engine.on_insert("orders", {
    "id": 12345,
    "amount": 99.0,
    "customer_id": 42,
})

# After updating a customer
engine.on_update("customers",
    new={"id": 42, "plan": "pro"})

# Next read is <1ms, already up-to-date
df = result.query()
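The incremental model behind step 3 can be illustrated with a toy two-node pipeline. This is an illustrative sketch of the general technique, not CIUF internals: a filter node feeds a running aggregate, and each insert pushes exactly one row through instead of rescanning the table.

```python
# Toy incremental pipeline: filter -> sum, maintained per-delta.
class FilterNode:
    def __init__(self, predicate, downstream):
        self.predicate = predicate
        self.downstream = downstream

    def on_insert(self, row):
        # Forward only matching rows; everything else stops here.
        if self.predicate(row):
            self.downstream.on_insert(row)

class SumNode:
    def __init__(self, column):
        self.column = column
        self.total = 0.0

    def on_insert(self, row):
        # Incremental update: add the delta, never rescan old rows.
        self.total += row[self.column]

# Mirrors: SELECT SUM(amount) FROM orders WHERE plan = 'pro'
agg = SumNode("amount")
root = FilterNode(lambda r: r["plan"] == "pro", agg)

root.on_insert({"plan": "pro", "amount": 100.0})
root.on_insert({"plan": "free", "amount": 999.0})  # filtered out
root.on_insert({"plan": "pro", "amount": 50.0})
print(agg.total)  # 150.0
```

Each insert costs O(1) work regardless of how many rows already flowed through, which is where the "1 row changed out of 500k → 1 row recomputed" claim comes from.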

Built for read-heavy
Python services

Everything you need to cut query latency to under a millisecond — nothing you don't.

Sub-millisecond hot reads
After the first query, results return from in-process memory. Zero network roundtrip, zero deserialization.
Incremental delta propagation
When a row changes, only the delta moves through the DAG. 1 row updated out of 500k → 1 row recomputed.
Pure Python library
No sidecar, no daemon, no container. pip install ciuf is all you need. Works in any Python environment.
SQLAlchemy 2.x compatible
Hook into Session.after_flush to automatically sync CIUF with every ORM write. Works with scoped sessions and FastAPI.
Thread-safe by design
Every node in the DAG is guarded by an RLock. Safe for concurrent reads and writes in multi-threaded WSGI/ASGI apps.
SQL → DAG automatically
Pass any SELECT statement with JOINs, WHERE clauses, GROUP BY, and aggregates. CIUF builds the DAG for you via from_sql().
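The per-node RLock pattern described under "Thread-safe by design" looks roughly like this (a sketch, not CIUF source): each node wraps delta application and reads in its own reentrant lock, so a reader can never observe a half-applied update.

```python
import threading

class CachedNode:
    """Sketch of an RLock-guarded cache node (illustrative, not CIUF source)."""

    def __init__(self):
        self._lock = threading.RLock()
        self._rows = {}

    def apply_delta(self, row_id, row):
        # Writers serialize on the node's lock.
        with self._lock:
            self._rows[row_id] = row

    def read(self):
        # Copy under the lock so callers never see a half-applied delta.
        with self._lock:
            return dict(self._rows)

node = CachedNode()
threads = [
    threading.Thread(target=node.apply_delta, args=(i, {"amount": i}))
    for i in range(100)
]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(len(node.read()))  # 100
```

An RLock (rather than a plain Lock) matters when a node's update re-enters its own methods, e.g. a read triggered while the same thread is mid-propagation.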

Drop-in, not a rewrite

CIUF sits alongside your existing database code. Add it to one endpoint, measure the difference.

Python
from ciuf import Engine, from_sql

# 1. Connect (discovers schema automatically)
engine = Engine("postgresql://user:pass@localhost/mydb")

# 2. Register a cached query
result = from_sql(engine, """
    SELECT orders.id, orders.amount, customers.name, customers.plan
    FROM orders
    JOIN customers ON orders.customer_id = customers.id
    WHERE customers.plan = 'pro'
""")

# 3. First call hits the database; all subsequent calls are in-memory
df = result.query()
print(df.shape)  # (n_rows, 4)

# 4. Notify CIUF about writes
engine.on_insert("orders", {"id": 9001, "amount": 49.0, "customer_id": 42})
engine.on_update("customers", new={"id": 42, "plan": "pro", "name": "Acme"})
engine.on_delete("orders", {"id": 123})

# 5. Next read reflects changes, still <1ms
df = result.query()
Python
from sqlalchemy import event
from sqlalchemy.orm import Session
from ciuf import Engine

ciuf_engine = Engine("postgresql://user:pass@localhost/mydb")

def _row_to_dict(obj):
    return {c.key: getattr(obj, c.key) for c in obj.__mapper__.column_attrs}

# Hook into SQLAlchemy session flush
@event.listens_for(Session, "after_flush")
def sync_ciuf_on_flush(session, flush_context):
    for obj in session.new:
        ciuf_engine.on_insert(obj.__tablename__, _row_to_dict(obj))
    for obj in session.dirty:
        ciuf_engine.on_update(obj.__tablename__, new=_row_to_dict(obj))
    for obj in session.deleted:
        ciuf_engine.on_delete(obj.__tablename__, _row_to_dict(obj))

# From this point on, every ORM write automatically updates the CIUF cache
Python
from fastapi import FastAPI
from ciuf import Engine, from_sql

app = FastAPI()

# Initialize once at startup
ciuf_engine = Engine("postgresql://user:pass@localhost/mydb")

# Pre-register the query on startup
dashboard_result = from_sql(ciuf_engine, """
    SELECT orders.id, orders.amount, products.category
    FROM orders
    JOIN products ON orders.product_id = products.id
    WHERE orders.status = 'completed'
""")

@app.get("/dashboard")
async def get_dashboard():
    # Sub-millisecond from the second request onward
    df = dashboard_result.query()
    return df.to_dict(orient="records")

@app.post("/orders")
async def create_order(order: dict):
    # ... write to DB ...
    ciuf_engine.on_insert("orders", order)
    return order

CIUF vs Redis

CIUF and Redis solve different problems. Use this table to decide quickly.

Criterion                               CIUF    Redis
Complex SQL (JOINs, GROUP BY, WHERE)    ✓       Manual
Incremental write propagation           ✓       ✗
Zero infra overhead                     ✓       ✗
Zero serialization cost                 ✓       ✗
Single-process application              ✓       ✓
Multi-process / multi-instance          ✗       ✓
Key-value / session data                ✗       ✓
Persistence across restarts             ✗       ✓
SQLAlchemy integration                  ✓       Manual

In active development

The core engine is complete and tested. The public release is gated on completion of milestones M1–M4 — once every component is production-ready, the repo goes public.

M1 — SQL Parser
SELECT, JOIN, WHERE, GROUP BY → DAG via sqlglot
M2 — Incremental Engine
DAG propagation for insert, update, delete deltas
M3 — LRU/TTL Eviction
Memory-bounded cache with configurable TTL
M4 — Thread Safety
RLock guards on all DAG nodes for concurrent access
v0.1.0 — Public Release
GitHub public · PyPI · verified benchmarks · docs live at ciuf.io

Star the repo to be notified when v0.1.0 ships. Questions? Open a GitHub Discussion.

Ready to cut your query latency by 99%?

One command. No new services. Works with your existing SQLAlchemy setup.

$ pip install ciuf[postgres]