QuestDB

Technology, Information and Internet

The open-source database for the most demanding workloads. SQL-native, LLM-ready and built on top of open formats.

About us

QuestDB is the open-source time-series database for demanding workloads, from trading floors to mission control. It delivers ultra-low latency, high ingestion throughput, and a multi-tier storage engine. Native support for Parquet and SQL keeps your data portable and AI-ready, with no vendor lock-in.

Website
http://questdb.com
Industry
Technology, Information and Internet
Company size
51-200 employees
Headquarters
New York
Type
Privately Held
Founded
2019
Specialties
big data, databases, time-series, and capital markets


Updates

  • We ship a lot of features, examples, and documentation updates in #QuestDB, and it is not always easy to keep track of everything just from release notes or commits. So we added a new docs changelog page. It gives you a simpler way to follow what is new across the documentation, including feature additions, updated examples, guides, and other changes as they land. Useful if you want to keep up with QuestDB developments without having to dig through release history or source code. Link in first comment 👇

    • A dark social banner for QuestDB. On the left, the eyebrow "Docs · changelog" sits above the headline "Every doc update, in one place." in white and pink, followed by the URL questdb.com/docs/changelog. On the right, a stylized changelog feed shows three dated groups along a vertical timeline rail: May 2026 (tagged "This Month") with a NEW entry for the Upgrade QuestDB guide and an UPDATED entry for the Configuration reference across 17 pages; April 2026 with a NEW entry for UNNEST, LATERAL JOIN and Storage policy and an UPDATED entry covering 65 refreshed SQL pages; and March 2026 with a NEW entry for Log returns and gamma scalping recipes. NEW chips are pink, UPDATED chips are cyan.
  • WINDOW JOIN in #QuestDB is not just syntactic sugar. Without a dedicated operator, this class of query often turns into a fairly ugly combination of ASOF JOIN, range joins, UNION ALL, and GROUP BY, making efficient execution difficult. We published a deep dive into how QuestDB executes WINDOW JOIN internally:
    * partitioning work across workers
    * vectorized/SIMD aggregation
    * reusing aggregation kernels from SAMPLE BY
    * optimizing contiguous window slices for low-cardinality joins
    The article also compares the resulting execution path with equivalent approaches in Timescale, DuckDB, and ClickHouse. On the benchmarked workload, the new execution path was 5x faster than the previous QuestDB implementation. Link in first comment 👇

    • Dark navy thumbnail with the headline "How we made WINDOW JOIN parallel & vectorized" — WINDOW JOIN in pink, the ampersand in cyan. Below it, a two-stream timeline: a row of bright cyan anchor events on top, and a dense pink data cloud beneath. Eight pink window frames cut vertically from each anchor down through the cloud, with the cloud noticeably denser inside each frame. A small pink pill at the top of every frame labels it T1 through T8, suggesting one thread per window. A line of monospace type along the bottom right reads "8 windows · parallel · SIMD".
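To make the semantics concrete, here is a minimal, single-threaded Python sketch of what a window join computes: for each anchor event, aggregate the data points whose timestamps fall within a window around it. The function name and the average aggregate are illustrative assumptions; this is not how QuestDB's parallel, vectorized operator works internally.

```python
from bisect import bisect_left, bisect_right

def window_join(anchors, points, before, after):
    """For each anchor timestamp, aggregate the data points whose
    timestamps fall in [anchor - before, anchor + after].
    Naive single-threaded illustration; `points` is a time-ordered
    list of (timestamp, value) pairs."""
    ts = [p[0] for p in points]
    out = []
    for a in anchors:
        lo = bisect_left(ts, a - before)      # first point in window
        hi = bisect_right(ts, a + after)      # one past the last point
        window = [v for _, v in points[lo:hi]]
        out.append((a, sum(window) / len(window) if window else None))
    return out

trades = [(1, 10.0), (2, 10.5), (4, 11.0), (7, 12.0), (8, 12.5)]
anchors = [2, 7]  # e.g. quote events to join trades onto
print(window_join(anchors, trades, before=1, after=1))
# → [(2, 10.25), (7, 12.25)]
```

A real engine avoids materializing each window; since both streams are time-ordered, contiguous window slices can be handed to worker threads and aggregated with SIMD, which is the optimization the article describes.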
  • One of the interesting aspects of designing your own ingestion protocol is that you get to decide exactly what matters most. In this example from the upcoming QWP protocol in #QuestDB, you can see several optimizations working together in the same message format:
    • Multi-table ingestion
    • Schema evolution support
    • Gorilla delta-of-delta encoding for timestamps
    • Symbol dictionary compression
    • Compact binary columnar layout
    The goal is not just reducing payload size, but minimizing overhead across the full ingestion path while keeping decoding efficient on the server side.

    • QWP example as seen at https://github.com/questdb/questdb/blob/vi_sf/docs/qwp/wire-ingress.md#example-4-multi-table-with-gorilla--delta-symbol-dictionary
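The timestamp trick is easy to see in miniature. Below is a hedged Python sketch of delta-of-delta encoding, the idea behind Gorilla timestamp compression, not the actual QWP wire format: regularly spaced timestamps collapse to runs of zeros, which a bit-level encoder can then store in roughly one bit each.

```python
def delta_of_delta(timestamps):
    """Encode timestamps as the change in successive deltas.
    Regular intervals produce zeros, which compress extremely well."""
    out = [timestamps[0]]            # first timestamp stored verbatim
    prev_delta = 0
    for prev, cur in zip(timestamps, timestamps[1:]):
        delta = cur - prev
        out.append(delta - prev_delta)
        prev_delta = delta
    return out

def decode(encoded):
    """Invert delta_of_delta back to the original timestamps."""
    ts, delta = [encoded[0]], 0
    for dod in encoded[1:]:
        delta += dod
        ts.append(ts[-1] + delta)
    return ts

regular = [1000, 2000, 3000, 4000, 4005]   # one late arrival at the end
enc = delta_of_delta(regular)
print(enc)                                  # → [1000, 1000, 0, 0, -995]
assert decode(enc) == regular
```

Only irregular gaps cost bits; a steady 1-second feed encodes almost for free.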
  • We are adding more SQL window functions to #QuestDB:
    • ntile(n) distributes rows of an ordered partition into approximately equal buckets
    • cume_dist() returns the cumulative distribution of the current row within the partition
    • nth_value(value, n) returns the n-th value within the current window frame
    These functions are commonly used for ranking, bucketing, percentile-style analysis, and distribution analysis directly in SQL. Typical use cases include:
    * splitting instruments or users into quantiles
    * identifying top and bottom segments
    * cumulative ranking analysis
    * retrieving specific values within analytical windows
    They fit naturally alongside existing time-series and analytical windowing workflows in QuestDB.

    • Dark thumbnail on a charade-grey background reading "Three new window functions." Below the headline, three side-by-side panels illustrate each function: on the left, ntile(4) shown as twelve dots split into four bucket bands numbered 1–4, each band a different shade of QuestDB pink; in the middle, cume_dist() shown as a 0.0–1.0 track with a pink fill up to a white knob marked 0.62; on the right, nth_value(v, 3) shown as a window-frame bracket over seven numbered cells with the third cell highlighted pink and a small downward arrow pointing at it. Each panel is labelled with its function signature in monospaced type.
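The semantics of all three can be sketched in a few lines of Python. These are plain reimplementations of the standard SQL definitions over an already-ordered partition, not QuestDB internals:

```python
def ntile(rows, n):
    """Assign bucket numbers 1..n to ordered rows; when len(rows) is
    not divisible by n, earlier buckets receive the extra row."""
    size, rem = divmod(len(rows), n)
    out = []
    for b in range(1, n + 1):
        out.extend([b] * (size + (1 if b <= rem else 0)))
    return out

def cume_dist(rows):
    """Fraction of partition rows with a value <= the current value."""
    return [sum(1 for r in rows if r <= v) / len(rows) for v in rows]

def nth_value(frame, n):
    """n-th value (1-based) of the window frame, or None if absent."""
    return frame[n - 1] if n <= len(frame) else None

vals = [3, 5, 5, 8]
print(ntile(vals, 2))       # → [1, 1, 2, 2]
print(cume_dist(vals))      # → [0.25, 0.75, 0.75, 1.0]
print(nth_value(vals, 3))   # → 5
```

Note how cume_dist treats the tied 5s identically: both rows get 0.75, since three of four values are less than or equal to 5.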
  • For years, we’ve said that most #QuestDB workloads do not need indexes. That is still true for the majority of time-series queries, where sequential scans over ordered columnar data are often faster and simpler than maintaining large indexing structures. But there are also workloads that combine time filters with highly selective predicates, for example filtering a narrow set of symbols while reading only a few additional columns. We are now adding covering indexes in QuestDB for these query shapes. If all required columns are already present in the index, the engine can answer the query directly from it without reading the underlying table data. Stay tuned!

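Why a covering index skips the table entirely is easy to picture with a toy example. In this hypothetical Python sketch, a dict of lists stands in for the index structure: because the index stores every column the query selects, the base table rows are never touched.

```python
# Toy "table" of (symbol, timestamp, price, size) rows.
table = [
    ("AAPL", 1, 189.2, 1000), ("MSFT", 1, 402.1, 500),
    ("AAPL", 2, 189.5, 1200), ("MSFT", 2, 401.8, 700),
]

# Covering index on symbol that also stores (timestamp, price).
index = {}
for symbol, ts, price, _size in table:
    index.setdefault(symbol, []).append((ts, price))

# "SELECT timestamp, price FROM table WHERE symbol = 'AAPL'"
# answered from the index alone: every selected column lives in it.
print(index["AAPL"])  # → [(1, 189.2), (2, 189.5)]
```

Had the query also selected size, the index would no longer cover it, and the engine would have to go back to the table: the benefit depends entirely on the query shape, which is why the post frames this as a feature for highly selective, narrow-column queries.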
  • We are extending SAMPLE BY FILL in #QuestDB with cross-column PREV() support. Previously, a filled column could only carry forward its own previous value. Now it can inherit the previous value from another aggregate. That turns out to be very useful for OHLC generation. On empty buckets, the candle open can inherit the previous close directly in SQL, instead of carrying forward the previous open. We are also moving supported SAMPLE BY FILL queries from the sequential cursor path onto the parallel GROUP BY execution path.

    • Diagram: "QuestDB SAMPLE BY FILL: Cross-Column Support". A previous candle (T-1) with O/H/L/C values feeds the current empty bucket (T); PREV() cross-column inheritance lets a filled column directly reference another column's previous value, e.g. the new open inheriting the previous close, for seamless gap filling in aggregated time-series data.
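A minimal Python sketch of the cross-column idea for OHLC: an empty bucket's synthetic candle inherits the previous close as its open (and, in this sketch, as high/low/close too, rendering gaps as flat candles at the last traded price). The function and data shapes are illustrative assumptions, not QuestDB's FILL syntax.

```python
def fill_ohlc(buckets):
    """Forward-fill empty OHLC buckets (None entries) so that the
    synthetic candle's open comes from the previous candle's close,
    i.e. cross-column inheritance rather than open <- previous open."""
    out, prev = [], None
    for candle in buckets:
        if candle is None and prev is not None:
            c = prev["c"]
            candle = {"o": c, "h": c, "l": c, "c": c}
        out.append(candle)
        prev = candle if candle is not None else prev
    return out

bars = [
    {"o": 10, "h": 12, "l": 9, "c": 11},
    None,                                  # empty bucket: no trades
    {"o": 11.5, "h": 13, "l": 11, "c": 12},
]
print(fill_ohlc(bars)[1])  # → {'o': 11, 'h': 11, 'l': 11, 'c': 11}
```

Carrying forward the previous open instead would draw the gap candle at a stale price, which is exactly the artifact the cross-column PREV() support avoids.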
  • Last week we mentioned that OHLC bars are coming to #QuestDB. We’ve now written more about the broader idea: rendering small charts directly in SQL results as text. That includes sparklines, candlesticks, and also market depth charts, which are particularly useful when working with order book data and liquidity analysis. The goal is simple: when you are already querying data, you can quickly get a visual sense of what is going on without switching tools. Link in first comment 👇

  • What if your query could return OHLC bars directly? We’re working on rendering OHLC bars as part of SQL results in #QuestDB, returned as a VARCHAR alongside your data. Handy when you are already in a query and want to quickly see how price moved or compare trends, without jumping to a chart. Coming soon.

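The idea of returning a chart as a string is simple to prototype. Here is a hedged sketch of a text sparkline in Python; QuestDB's actual rendering, including candlesticks and depth charts, will differ:

```python
def sparkline(values):
    """Render a numeric series as a compact Unicode sparkline,
    similar in spirit to returning a tiny chart as a VARCHAR column."""
    blocks = "▁▂▃▄▅▆▇█"                      # 8 height levels
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1                    # avoid division by zero
    return "".join(blocks[int((v - lo) / span * 7)] for v in values)

print(sparkline([1, 2, 4, 8, 6, 3]))  # → ▁▂▄█▆▃
```

Even this crude version conveys direction and volatility at a glance, which is the whole point: a visual read on the data without leaving the query result.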
  • One nice aspect of storage policies in #QuestDB Enterprise is that storage tiers are not mutually exclusive. Data can live in the native columnar format for fast ingestion and low-latency queries, while also being available in Parquet for access from external tools. This makes it possible to:
    • Keep a fast query tier in native format
    • Expose the same data in Parquet for interoperability
    • Gradually, and automatically, move away from native storage when it is no longer needed
    Instead of forcing a transition between formats, you can choose to keep both when it makes sense, and optimise for both performance and openness at the same time.

    • Diagram showing hot storage in native format, a transition zone where native and Parquet coexist, then cold storage with only Parquet
  • We’re excited to be nominated for the DBTA Readers’ Choice Awards this year. These awards are driven by the community, and it’s great to see QuestDB included among the technologies people are using and recommending. If #QuestDB has been useful to you, we’d really appreciate your support. Voting is open until May 22. Thanks to everyone building with us and pushing the project forward. Link in first comment 👇

    • Banner: QuestDB shortlisted for the DBTA Readers' Choice Awards 2026, Best Time-Series Database. "Vote now: cast your vote & support the future of time-series data."

Funding

QuestDB: 3 total rounds
Last round: Non-equity assistance
Investors: Intel Ignite
See more info on Crunchbase