Operant Solutions

Hosted DuckDB

A managed DuckDB warehouse that ingests your largest CSV and Excel datasets into PostgreSQL and runs retrieval-augmented generation on top of every row. Fully hosted, starting at $120/month.

Hosted DuckDB query interface preview

Spreadsheets in, grounded answers out

Massive Ingestion

Stream multi-gigabyte CSV and Excel files straight into Postgres without writing a single ETL script.

Native RAG

Ask questions in plain English and get answers grounded in your rows, with citations to the exact records.

Zero Ops

We run DuckDB, Postgres, and the embedding pipeline. You upload — we handle scaling, backups, and tuning.

Plug in your data

Drop in a CSV or Excel file and our pipeline ingests, normalizes, and indexes every row into PostgreSQL. Query it through chat, raw SQL, or your own application — all from a single managed workspace.

Grounded in your data

Every answer cites the exact rows and tables it pulled from.

Query results with cited source rows

Built on Postgres

Your spreadsheets land in a real database you can query or export.

Postgres tables generated from ingested files

Any shape, any size

Multi-gigabyte CSV and Excel files with messy schemas — we normalize them all.

Mixed CSV and Excel datasets being ingested

FAQ

What file formats do you support?

CSV, TSV, and Excel (.xlsx and .xls). We auto-detect delimiters, encodings, and column types, and we stream multi-gigabyte files without timing out.
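As a rough illustration of what delimiter and type detection involves, here is a minimal sketch using Python's standard-library `csv.Sniffer`. It is a simplified stand-in, not the actual Hosted DuckDB pipeline; the encodings tried and the type ladder are assumptions.

```python
import csv
import io

def sniff_csv(sample_bytes: bytes) -> dict:
    """Guess encoding, delimiter, and per-column SQL types from a file sample.
    Simplified stand-in for a real detection pipeline."""
    # Try common encodings until one decodes cleanly.
    for encoding in ("utf-8", "utf-16", "latin-1"):
        try:
            text = sample_bytes.decode(encoding)
            break
        except UnicodeDecodeError:
            continue
    dialect = csv.Sniffer().sniff(text, delimiters=",;\t|")
    rows = list(csv.reader(io.StringIO(text), dialect))
    header, body = rows[0], rows[1:]

    def infer(values):
        # Widen the type until every sampled value fits.
        for cast, name in ((int, "BIGINT"), (float, "DOUBLE PRECISION")):
            try:
                for v in values:
                    cast(v)
                return name
            except ValueError:
                continue
        return "TEXT"

    types = {col: infer([r[i] for r in body]) for i, col in enumerate(header)}
    return {"encoding": encoding, "delimiter": dialect.delimiter, "types": types}

sample = b"id\tamount\tnote\n1\t9.5\thello\n2\t3\tworld\n"
print(sniff_csv(sample))
# Detects a tab delimiter, BIGINT for id, DOUBLE PRECISION for amount, TEXT for note.
```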
How does ingestion into Postgres work?

We spin up a dedicated Postgres database for your workspace, infer a schema from your headers and row samples, and stream rows in with COPY. You always see and own the underlying tables.
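A rough sketch of how that DDL and COPY step could look. The table and column names are illustrative, and the hosted pipeline's internals are not public; this only shows the general shape of schema-driven ingestion.

```python
def build_ddl(table: str, columns: dict) -> str:
    """Render a CREATE TABLE statement from inferred column types.
    (Illustrative; real DDL generation would also handle constraints.)"""
    cols = ",\n  ".join(f'"{name}" {pgtype}' for name, pgtype in columns.items())
    return f'CREATE TABLE "{table}" (\n  {cols}\n);'

def build_copy(table: str, delimiter: str) -> str:
    """COPY ... FROM STDIN streams rows into Postgres without staging files."""
    return f"""COPY "{table}" FROM STDIN WITH (FORMAT csv, DELIMITER '{delimiter}', HEADER true)"""

ddl = build_ddl("orders", {"id": "BIGINT", "amount": "DOUBLE PRECISION", "note": "TEXT"})
print(ddl)
# With a Postgres driver such as psycopg2, the streaming step would be roughly:
#   cur.copy_expert(build_copy("orders", ","), open("orders.csv"))
```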
Where does DuckDB fit in?

DuckDB is the analytical engine — we use it to slice, profile, and pre-aggregate your raw files before they land in Postgres. Postgres then becomes the durable system of record for your data and embeddings.
How does the RAG layer work?

We embed your rows, build a hybrid vector + SQL index, and route natural-language questions to the right tables. Every answer comes back with the exact rows used so you can audit it.
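The retrieval half of that flow can be sketched as a toy nearest-neighbor lookup keyed by (table, primary key), which is what makes row-level citations possible. The two-dimensional "embeddings" here are fabricated for illustration; in production the vectors come from a real embedding model and would live in Postgres (for example via an extension such as pgvector).

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Toy embeddings keyed by (table, primary key); values are (vector, row text).
rows = {
    ("orders", 1): ([0.9, 0.1], "id=1 amount=9.5 note=refund"),
    ("orders", 2): ([0.1, 0.9], "id=2 amount=3.0 note=invoice"),
}

def retrieve(query_vec, k=1):
    """Return the k most similar rows, keyed by (table, primary key) so the
    final answer can cite the exact records it used."""
    ranked = sorted(rows, key=lambda key: cosine(query_vec, rows[key][0]),
                    reverse=True)
    return [(key, rows[key][1]) for key in ranked[:k]]

print(retrieve([1.0, 0.0]))
# The nearest row, with its table and primary key attached as the citation.
```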
How much data can I bring?

The $120/month plan handles up to 50 GB of stored data and individual files up to 10 GB. Larger workloads get a custom-sized cluster with parallel ingestion.
Is my data isolated from other customers?

Yes. Every workspace gets its own Postgres database, its own embeddings, and dedicated DuckDB workers. Nothing is shared at the data layer.
Can I connect my own tools?

Absolutely. You get a standard Postgres connection string plus a REST API for the RAG endpoints. Tableau, Metabase, Superset, and your own apps work out of the box.
How long does ingestion take?

Most files under 1 GB are queryable within five minutes. Multi-gigabyte files typically finish in 15–45 minutes depending on row width and column count.
What does the $120/month plan include?

One workspace, up to 50 GB of stored data, unlimited ingestions, the RAG API, and a managed Postgres database. Larger plans add more storage, dedicated compute, and SSO.
How do I keep my data up to date?

Drop new files in through the UI, the API, or a scheduled S3 sync, and we re-ingest only what changed. The Postgres tables and embeddings update in place.
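One common way a "re-ingest only what changed" step can work is content hashing: fingerprint each row, compare against the previous ingest, and touch only the rows whose fingerprints moved. This is a sketch under that assumption, not a description of the hosted diff logic.

```python
import hashlib

def row_hash(row: tuple) -> str:
    """Stable fingerprint of a row's contents."""
    return hashlib.sha256("\x1f".join(map(str, row)).encode()).hexdigest()

def changed_rows(previous: dict, incoming: list) -> list:
    """Given {primary_key: hash} from the last ingest, return only the rows
    that are new or whose contents changed; unchanged rows are skipped, so
    re-embedding touches a fraction of the table."""
    return [row for row in incoming if previous.get(row[0]) != row_hash(row)]

previous = {1: row_hash((1, "alice", 10)), 2: row_hash((2, "bob", 20))}
incoming = [(1, "alice", 10), (2, "bob", 25), (3, "carol", 30)]
print(changed_rows(previous, incoming))  # row 2 (updated) and row 3 (new)
```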

Book an Intro

Speak with our team to explore the use cases and benefits of our AI-native platform.