lance-format/natural-questions-val-lance: source dataset card and downloadable files, readable at `hf://datasets/lance-format/natural-questions-val-lance/data`. Sourced from google-research-datasets/natural_questions.
The NQ train split is 143 GB (307,373 rows); it is intentionally not bundled here. Add it via `natural_questions/dataprep.py --splits train` once disk and bandwidth allow.
Key features
- Real Google search queries with the full Wikipedia article that answers each one — `document_html` carries the inline UTF-8 HTML, so no sidecar files or external lookups are needed at query time.
- Annotator answer summaries — `short_answers` aggregates and dedupes spans across all annotators, `yes_no_answer` carries the majority vote, and the `has_short_answer`/`has_long_answer` flags make annotation-coverage filters a single predicate.
- Pre-computed 384-dim question embeddings (`question_emb`, `sentence-transformers/all-MiniLM-L6-v2`, cosine-normalized) with a bundled `IVF_PQ` index for semantic question lookup.
- One columnar dataset — scan question metadata cheaply, then read the heavy `document_html` only for the rows you actually want.
Splits
| Split | Rows |
|---|---|
| `validation.lance` | 7,830 |
Schema
| Column | Type | Notes |
|---|---|---|
| `id` | string | NQ example id |
| `question` | string | Original Google search query |
| `document_title` | string | Wikipedia article title |
| `document_url` | string | Wikipedia article URL |
| `document_html` | large_binary | Full HTML of the article (inline; UTF-8 bytes) |
| `short_answers` | list<string> | Deduped short-answer spans across all annotators |
| `num_short_answers` | int32 | Total annotator spans (incl. duplicates) |
| `has_short_answer` | bool | At least one annotator provided a short-answer span |
| `has_long_answer` | bool | At least one annotator selected a long-answer candidate |
| `yes_no_answer` | string | YES / NO / NONE — majority vote across annotators |
| `question_emb` | fixed_size_list<float32, 384> | MiniLM question embedding |
Pre-built indices
- `IVF_PQ` on `question_emb` — semantic question lookup (cosine)
- `INVERTED` (FTS) on `question` — keyword and hybrid search
- `BTREE` on `id`, `document_title` — stable lookup by identifier
- `BITMAP` on `yes_no_answer`, `has_short_answer`, `has_long_answer` — cheap predicate evaluation for annotation coverage
Why Lance?
- Blazing Fast Random Access: Optimized for fetching scattered rows, making it ideal for random sampling, real-time ML serving, and interactive applications without performance degradation.
- Native Multimodal Support: Store text, embeddings, and other data types together in a single file. Large binary objects are loaded lazily, and vectors are optimized for fast similarity search.
- Native Index Support: Lance comes with fast, on-disk, scalable vector and FTS indexes that sit right alongside the dataset on the Hub, so you can share not only your data but also your embeddings and indexes without your users needing to recompute them.
- Efficient Data Evolution: Add new columns and backfill data without rewriting the entire dataset. This is perfect for evolving ML features, adding new embeddings, or introducing moderation tags over time.
- Versatile Querying: Supports combining vector similarity search, full-text search, and SQL-style filtering in a single query, accelerated by on-disk indexes.
- Data Versioning: Every mutation commits a new version; previous versions remain intact on disk. Tags pin a snapshot by name, so retrieval systems and training runs can reproduce against an exact slice of history.
Load with datasets.load_dataset
You can load Lance datasets via the standard Hugging Face `datasets` interface, suitable when your pipeline already speaks `Dataset` / `IterableDataset` or you want a quick streaming sample.
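A minimal streaming sketch, assuming an installed `datasets` version that recognizes the Lance format on the Hub:

```python
from datasets import load_dataset

# Stream a quick sample without downloading the full split.
ds = load_dataset(
    "lance-format/natural-questions-val-lance",
    split="validation",
    streaming=True,
)
for row in ds.take(3):
    print(row["id"], row["question"])
```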
Load with LanceDB
LanceDB is the embedded retrieval library built on top of the Lance format (docs), and is the interface most users interact with. It wraps the dataset as a queryable table with search and filter builders, and is the entry point used by the Search, Curate, Evolve, Versioning, and Materialize-a-subset sections below.
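A connection sketch; it assumes your `lancedb` build resolves `hf://` URIs and that the split is exposed as a table named `validation` (backed by `validation.lance`):

```python
import lancedb

db = lancedb.connect("hf://datasets/lance-format/natural-questions-val-lance/data")
tbl = db.open_table("validation")  # backed by validation.lance
print(tbl.count_rows())            # 7,830 rows per the Splits table
```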
Load with Lance
pylance is the Python binding for the Lance format and works directly with the format’s lower-level APIs. Reach for it when you want to inspect dataset internals — schema, scanner, fragments, the list of pre-built indices.
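For example, a read-only inspection pass (same `hf://` assumption as above; swap in a local path after downloading):

```python
import lance

ds = lance.dataset(
    "hf://datasets/lance-format/natural-questions-val-lance/data/validation.lance"
)
print(ds.schema)                # columns from the Schema table above
print(ds.count_rows())          # 7830
print(ds.list_indices())        # the pre-built IVF_PQ / INVERTED / BTREE / BITMAP indices
print(len(ds.get_fragments()))  # physical fragment layout
```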
Tip — for production use, download locally first. Streaming from the Hub works for exploration, but heavy random access, ANN search, and HTML decoding are far faster against a local copy. A sketch of the download step:
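```bash
# `hf` is the current Hugging Face CLI entry point; older installs use
# `huggingface-cli download` with the same arguments.
hf download lance-format/natural-questions-val-lance \
  --repo-type dataset --local-dir ./natural-questions-val-lance
```

Then point Lance or LanceDB at `./natural-questions-val-lance/data`.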
Search
The bundled `IVF_PQ` index on `question_emb` makes nearest-neighbour question lookup a single call. In production you would encode an incoming user query through the same 384-dim MiniLM encoder used at ingest and pass the resulting vector to `tbl.search(...)`. The example below uses the embedding from row 42 as a runnable stand-in so the snippet works without loading a model.
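A sketch of that lookup, reusing the `tbl` handle from the LanceDB loading snippet above:

```python
# Fetch row 42's stored embedding as a stand-in for a freshly encoded query.
row42 = tbl.to_lance().take([42], columns=["question", "question_emb"]).to_pylist()[0]

hits = (
    tbl.search(row42["question_emb"])   # ANN lookup via the bundled IVF_PQ index
       .metric("cosine")
       .select(["id", "question", "document_title", "short_answers"])
       .limit(5)
       .to_list()
)
for h in hits:
    print(f'{h["_distance"]:.4f}  {h["question"]}')
```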
`question_emb` is never read on the result side, and the heavy `document_html` is left untouched, keeping the working set small even though each row carries a full Wikipedia article inline.
Because the dataset also ships an `INVERTED` index on `question`, the same query can be issued as a hybrid search that combines the dense vector with a keyword query against the question text. LanceDB merges the two result lists and reranks them in a single call, which is useful when a named entity must literally appear in the query but the dense side still does most of the ranking.
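A hybrid sketch under the same assumptions; recent `lancedb` versions expose this via `query_type="hybrid"` with a default reciprocal-rank-fusion reranker:

```python
hits = (
    tbl.search(query_type="hybrid")
       .vector(row42["question_emb"])   # dense side: IVF_PQ on question_emb
       .text(row42["question"])         # keyword side: INVERTED index on question
       .select(["id", "question", "document_title"])
       .limit(5)
       .to_list()
)
```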
Tune `metric`, `nprobes`, and `refine_factor` on the vector side to trade recall against latency for your workload.
Curate
A typical curation pass over NQ starts with annotation-coverage filters before any HTML gets read. Lance evaluates the filter inside a single scan, so the candidate set comes back already filtered, and the bounded `.limit(500)` keeps the output small enough to inspect. The example below assembles a set of factoid questions with at least one short-answer span and a non-yes/no resolution.
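A sketch of that pass as a filter-only LanceDB query (table handle as above; the predicate mirrors the flags in the Schema table):

```python
factoid = (
    tbl.search()  # no vector argument: a plain filtered scan
       .where("has_short_answer = true AND yes_no_answer = 'NONE'")
       .select(["id", "question", "short_answers", "document_title"])
       .limit(500)
       .to_list()
)
print(len(factoid), factoid[0]["question"])
```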
The `document_html` column is not read by this scan, so a 500-row curation pass against the Hub moves only kilobytes of metadata even though each row holds an entire Wikipedia article.
Evolve
Lance stores each column independently, so a new column can be appended without rewriting the existing data. The lightest form is a SQL expression: derive the new column from columns that already exist, and Lance computes it once and persists it. The example below adds a `question_length` column, a `first_short_answer_length` derived from the deduped span list, and an `is_factoid` flag that combines the annotation flags, any of which can then be used directly in `where` clauses without recomputing the predicate on every query.
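A sketch with LanceDB's SQL-expression `add_columns` (assumes a local writable copy, per the note below, and that your version exposes DataFusion's `length` and `array_element` functions):

```python
import lancedb

db = lancedb.connect("./natural-questions-val-lance/data")
tbl = db.open_table("validation")

tbl.add_columns({
    "question_length": "length(question)",
    # array_element is 1-indexed in DataFusion SQL.
    "first_short_answer_length": "length(array_element(short_answers, 1))",
    "is_factoid": "has_short_answer AND yes_no_answer = 'NONE'",
})
```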
Note: Mutations require a local copy of the dataset, since the Hub mount is read-only. See the Materialize-a-subset section at the end of this card for a streaming pattern that downloads only the rows and columns you need, or use hf download to pull the full corpus.
Labels computed offline can be merged back onto the table by the key column `id`. For derived columns that need arbitrary Python rather than a SQL expression (for example, features computed from the raw `document_html`), Lance provides a batch-UDF API — see the Lance data evolution docs.
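A minimal sketch of that batch-UDF pattern, assuming pylance's `batch_udf` decorator; `html_num_bytes` is a hypothetical derived column:

```python
import lance
import pyarrow as pa
import pyarrow.compute as pc

ds = lance.dataset("./natural-questions-val-lance/data/validation.lance")

@lance.batch_udf()
def html_bytes(batch: pa.RecordBatch) -> pa.RecordBatch:
    # Hypothetical feature: raw article size in bytes.
    return pa.RecordBatch.from_arrays(
        [pc.binary_length(batch["document_html"])],
        ["html_num_bytes"],
    )

# Only document_html is read; every other column stays untouched on disk.
ds.add_columns(html_bytes, read_columns=["document_html"])
```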
Train
Projection lets a training loop read only the columns each step actually needs. LanceDB tables expose this through `Permutation.identity(tbl).select_columns([...])`, which plugs straight into the standard `torch.utils.data.DataLoader`, so prefetching, shuffling, and batching behave as in any PyTorch pipeline. For an open-domain QA reader the natural projection is the question plus the full document HTML and the answer spans; for a question-encoder retraining loop the precomputed embedding is enough on its own, and skipping `document_html` keeps each batch small.
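A sketch of the reader-side loop, reusing the local `tbl` from the Evolve snippet; the `Permutation` import path is an assumption, so check where your `lancedb` version exposes it:

```python
from torch.utils.data import DataLoader
from lancedb.permutation import Permutation  # ASSUMED import path

# Reader projection: question text, full article HTML, and supervision spans.
reader = Permutation.identity(tbl).select_columns(
    ["question", "document_html", "short_answers"]
)
loader = DataLoader(reader, batch_size=4, num_workers=2)

for batch in loader:
    # Tokenize question/document pairs and supervise on short_answers here.
    break
```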
["question_emb", "short_answers"] to select_columns(...) on the next run reads only the 384-d vectors and the answer spans, which is the right shape for fine-tuning a retrieval head on cached embeddings without paying for the multi-megabyte document_html per row. Columns added in Evolve cost nothing per batch until they are explicitly projected.
Versioning
Every mutation to a Lance dataset, whether it adds a column, merges labels, or builds an index, commits a new version. Previous versions remain intact on disk. You can list versions and inspect the history directly from the Hub copy; creating new tags requires a local copy since tags are writes.

A tag such as `factoid-v1` keeps returning stable answer spans while the dataset evolves in parallel: newly added retriever scores or labels do not change what the tag resolves to. An evaluation experiment pinned to the same tag can be rerun later against the exact same questions and articles, so changes in metrics reflect model changes rather than data drift. Neither workflow needs shadow copies or external manifest tracking.
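A sketch with pylance against a local copy (version listing also works read-only against the Hub):

```python
import lance

ds = lance.dataset("./natural-questions-val-lance/data/validation.lance")

# Read-only: enumerate committed versions.
for v in ds.versions():
    print(v["version"], v["timestamp"])

# Write: pin the current version under a name.
ds.tags.create("factoid-v1", ds.version)

# Reproduce later against the exact pinned snapshot.
pinned = lance.dataset(
    "./natural-questions-val-lance/data/validation.lance", version="factoid-v1"
)
```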
Materialize a subset
Reads from the Hub are lazy, so exploratory queries only transfer the columns and row groups they touch. Mutating operations (Evolve, tag creation) need a writable backing store, and a training loop benefits from a local copy with fast random access. Both can be served by a subset of the dataset rather than the full corpus. The pattern is to stream a filtered query through `.to_batches()` into a new local table; only the projected columns and matching row groups cross the wire, and the bytes never fully materialize in Python memory.
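A sketch of that pattern, under the same `hf://` and table-name assumptions as above; depending on the `lancedb` version you may need to pass an explicit `schema=` when creating a table from a batch iterator:

```python
import lancedb

src = lancedb.connect("hf://datasets/lance-format/natural-questions-val-lance/data")
tbl = src.open_table("validation")

# Stream only the light columns for factoid rows; nothing fully materializes.
batches = (
    tbl.search()
       .where("has_short_answer = true AND yes_no_answer = 'NONE'")
       .select(["id", "question", "short_answers", "question_emb"])
       .to_batches()
)

dst = lancedb.connect("./nq-factoid")
dst.create_table("validation", data=batches)
```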
The resulting `./nq-factoid` directory is a first-class LanceDB database. Every snippet in the Search, Evolve, Train, and Versioning sections above works against it by swapping `hf://datasets/lance-format/natural-questions-val-lance/data` for `./nq-factoid`. Note that this projection deliberately omits `document_html`; include it in the `.select(...)` list when the downstream task needs the article body.
Source & license
Converted from google-research-datasets/natural_questions. NQ is released under CC BY-SA 3.0 (matching the Wikipedia source).