CIUF builds a DAG that mirrors your SQL query. Hot reads return from memory in <1ms. Writes propagate incrementally — no full recomputation, no manual invalidation.
Redis is excellent for key-value access. For complex SQL results, it creates a maintenance burden that grows with your schema.
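To illustrate that burden, here is a minimal cache-aside sketch in pure Python (a plain dict stands in for Redis, and all names and keys are hypothetical): every write path must know which cached query results it invalidates, and every invalidation forces a full recomputation.

```python
import json

cache = {}  # stands in for Redis in this sketch

ORDERS = [{"id": 1, "amount": 99.0, "customer_id": 42}]

def get_pro_orders():
    """Cache-aside read: serialize on miss, deserialize on hit."""
    key = "dashboard:pro_orders"
    if key in cache:
        return json.loads(cache[key])  # deserialization cost on every hit
    # Stand-in for the real SQL JOIN + WHERE
    rows = [o for o in ORDERS if o["customer_id"] == 42]
    cache[key] = json.dumps(rows)
    return rows

def create_order(order):
    ORDERS.append(order)
    # The write path must enumerate every cached result it touches:
    cache.pop("dashboard:pro_orders", None)  # manual invalidation
    # ...and every other key derived from the orders table.

first = get_pro_orders()
create_order({"id": 2, "amount": 49.0, "customer_id": 42})
second = get_pro_orders()  # cache miss: full recomputation
```

Each new query over the same tables adds another key the write paths must remember, which is the maintenance burden that grows with the schema.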
`on_insert`, `on_update`, `on_delete`
`pip install ciuf`; nothing else to run
CIUF mirrors your SQL query structure in memory as a DAG. Reads are pure in-memory lookups. Writes propagate only the changed delta.
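As a rough mental model (a toy sketch of incremental view maintenance, not CIUF internals), each clause of the query becomes a node that holds its output, and a write sends only the changed row through the graph:

```python
# Toy model: a node materializes
#   orders JOIN customers WHERE customers.plan = 'pro'
# and updates its cached result from a delta, never from scratch.
customers = {42: {"id": 42, "plan": "pro", "name": "Acme"}}

class FilterJoinNode:
    def __init__(self):
        self.rows = []  # cached query result; reads are O(1) lookups

    def on_insert_order(self, order):
        # Delta propagation: only the new row is examined.
        cust = customers.get(order["customer_id"])
        if cust and cust["plan"] == "pro":
            self.rows.append({**order, "name": cust["name"]})

view = FilterJoinNode()
view.on_insert_order({"id": 1, "amount": 99.0, "customer_id": 42})
view.on_insert_order({"id": 2, "amount": 10.0, "customer_id": 7})  # no match
```

After both inserts the cached view holds exactly one row, because the second order fails the join/filter and never touches the materialized result.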
Point CIUF at your database. It discovers the schema automatically — no configuration needed.
```python
from ciuf import Engine

engine = Engine("postgresql://user:pass@localhost/mydb")
```
Pass any SQL SELECT. CIUF parses it, builds the DAG, and returns a pandas DataFrame — cached after the first call.
```python
from ciuf import from_sql

result = from_sql(engine, """
    SELECT orders.id, orders.amount, customers.name
    FROM orders
    JOIN customers ON orders.customer_id = customers.id
    WHERE customers.plan = 'pro'
""")

df = result.query()  # <1ms hot
```
Notify CIUF of writes as they happen. The delta propagates incrementally — no full recomputation.
```python
# After inserting a new order
engine.on_insert("orders", {
    "id": 12345,
    "amount": 99.0,
    "customer_id": 42,
})

# After updating a customer
engine.on_update("customers", new={"id": 42, "plan": "pro"})

# Next read is <1ms, already up-to-date
df = result.query()
```
Everything you need to cut query latency to under a millisecond — nothing you don't.
`pip install ciuf` is all you need. Works in any Python environment.
Hook `Session.after_flush` to automatically sync CIUF with every ORM write. Works with scoped sessions and FastAPI.
`from_sql()`.
CIUF sits alongside your existing database code. Add it to one endpoint, measure the difference.
```python
from ciuf import Engine, from_sql

# 1. Connect (discovers schema automatically)
engine = Engine("postgresql://user:pass@localhost/mydb")

# 2. Register a cached query
result = from_sql(engine, """
    SELECT orders.id, orders.amount, customers.name, customers.plan
    FROM orders
    JOIN customers ON orders.customer_id = customers.id
    WHERE customers.plan = 'pro'
""")

# 3. First call hits the database; all subsequent calls are in-memory
df = result.query()
print(df.shape)  # (n_rows, 4)

# 4. Notify CIUF about writes
engine.on_insert("orders", {"id": 9001, "amount": 49.0, "customer_id": 42})
engine.on_update("customers", new={"id": 42, "plan": "pro", "name": "Acme"})
engine.on_delete("orders", {"id": 123})

# 5. Next read reflects changes, still <1ms
df = result.query()
```
```python
from sqlalchemy import event
from sqlalchemy.orm import Session

from ciuf import Engine

ciuf_engine = Engine("postgresql://user:pass@localhost/mydb")

def _row_to_dict(obj):
    return {c.key: getattr(obj, c.key) for c in obj.__mapper__.column_attrs}

# Hook into SQLAlchemy session flush
@event.listens_for(Session, "after_flush")
def sync_ciuf_on_flush(session, flush_context):
    for obj in session.new:
        ciuf_engine.on_insert(obj.__tablename__, _row_to_dict(obj))
    for obj in session.dirty:
        ciuf_engine.on_update(obj.__tablename__, new=_row_to_dict(obj))
    for obj in session.deleted:
        ciuf_engine.on_delete(obj.__tablename__, _row_to_dict(obj))

# From this point on, every ORM write automatically updates the CIUF cache
```
```python
from fastapi import FastAPI

from ciuf import Engine, from_sql

app = FastAPI()

# Initialize once at startup
ciuf_engine = Engine("postgresql://user:pass@localhost/mydb")

# Pre-register the query on startup
dashboard_result = from_sql(ciuf_engine, """
    SELECT orders.id, orders.amount, products.category
    FROM orders
    JOIN products ON orders.product_id = products.id
    WHERE orders.status = 'completed'
""")

@app.get("/dashboard")
async def get_dashboard():
    # Sub-millisecond from the second request onward
    df = dashboard_result.query()
    return df.to_dict(orient="records")

@app.post("/orders")
async def create_order(order: dict):
    # ... write to DB ...
    ciuf_engine.on_insert("orders", order)
    return order
```
CIUF and Redis solve different problems. Use this table to decide quickly.
| Criterion | CIUF | Redis |
|---|---|---|
| Complex SQL (JOINs, GROUP BY, WHERE) | ✓ | Manual |
| Incremental write propagation | ✓ | ✕ |
| Zero infra overhead | ✓ | ✕ |
| Zero serialization cost | ✓ | ✕ |
| Single-process application | ✓ | ✓ |
| Multi-process / multi-instance | ✕ | ✓ |
| Key-value / session data | ✕ | ✓ |
| Persistence across restarts | ✕ | ✓ |
| SQLAlchemy integration | ✓ | Manual |
The core engine is complete and tested. The public release is gated on completing milestones M1–M4; when every component is production-ready, the repo goes public.
Star the repo to be notified when v0.1.0 ships. Questions? Open a GitHub Discussion.
One command. No new services. Works with your existing SQLAlchemy setup.