Source dataset card and downloadable files for lance-format/hotpotqa-distractor-lance.
A Lance-formatted version of HotpotQA using the distractor config — multi-hop reading-comprehension questions where each answer requires combining facts from two Wikipedia paragraphs, with 10 candidate paragraphs per question (gold + 8 distractors). The dataset ships with MiniLM question embeddings, flattened context text for full-text search, and pre-built ANN/FTS indices, available directly from the Hub at hf://datasets/lance-format/hotpotqa-distractor-lance/data.

Key features

  • Multi-hop questions with gold supporting facts — each row carries the question, the canonical short answer, and the (title, sent_id) pointers into the paragraphs that justify it.
  • Ten candidate paragraphs per question in the parallel context_titles / context_sentences columns, plus a flattened context_text field that feeds the FTS index.
  • Pre-computed 384-dim question embeddings (question_emb, sentence-transformers/all-MiniLM-L6-v2, cosine-normalized) with a bundled IVF_PQ index for semantic question lookup.
  • One columnar dataset — scan metadata cheaply, then read the heavy context text only for the rows you actually want (see the sketch after this list).
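A minimal sketch of that two-pass pattern, using the pylance handle introduced in the Load with Lance section below — project the light metadata columns first, then fetch the heavy context only for specific rows:
import lance

ds = lance.dataset("hf://datasets/lance-format/hotpotqa-distractor-lance/data/validation.lance")

# Cheap pass: decode only the small metadata columns.
meta = ds.to_table(columns=["id", "type", "level"])

# Heavy pass: pull context_text for just a few row indices.
rows = ds.take([0, 1, 2], columns=["question", "context_text"])
print(rows.column("question")[0])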

Splits

Split               Rows
train.lance         90,447
validation.lance     7,405

Schema

Column                  Type                             Notes
id                      string                           HotpotQA question id
question                string                           The question
answer                  string                           Reference short answer (yes / no / span)
type                    string?                          bridge or comparison
level                   string?                          easy / medium / hard
supporting_titles       list<string>                     Wikipedia titles that contain the gold facts
supporting_sent_ids     list<int32>                      Sentence indices into those titles
context_titles          list<string>                     All 10 paragraph titles (gold + distractors)
context_sentences       list<list<string>>               Sentences per paragraph
context_text            string                           Flattened paragraphs — feeds the FTS index
num_supporting_facts    int32                            Number of gold supporting facts
question_emb            fixed_size_list<float32, 384>    MiniLM question embedding

Pre-built indices

  • IVF_PQ on question_emb — semantic question lookup (cosine)
  • INVERTED (FTS) on question and context_text — keyword and hybrid search
  • BTREE on id, answer — stable lookup by identifier
  • BITMAP on type, level — cheap predicate evaluation for question class (see the example after this list)
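The scalar indices back ordinary where clauses. A brief sketch, using the connection pattern from the LanceDB section below (the id value is illustrative):
import lancedb

db = lancedb.connect("hf://datasets/lance-format/hotpotqa-distractor-lance/data")
tbl = db.open_table("validation")

# BTREE: point lookup by question id.
row = tbl.search().where("id = '5a8b57f25542995d1e6f1371'").limit(1).to_list()

# BITMAP: cheap predicates over the low-cardinality class columns.
hard_bridge = (
    tbl.search()
    .where("type = 'bridge' AND level = 'hard'")
    .select(["id", "question"])
    .limit(5)
    .to_list()
)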

Why Lance?

  1. Blazing Fast Random Access: Optimized for fetching scattered rows, making it ideal for random sampling, real-time ML serving, and interactive applications without performance degradation.
  2. Native Multimodal Support: Store text, embeddings, and other data types together in a single file. Large binary objects are loaded lazily, and vectors are optimized for fast similarity search.
  3. Native Index Support: Lance comes with fast, on-disk, scalable vector and FTS indexes that sit right alongside the dataset on the Hub, so you can share not only your data but also your embeddings and indexes without your users needing to recompute them.
  4. Efficient Data Evolution: Add new columns and backfill data without rewriting the entire dataset. This is perfect for evolving ML features, adding new embeddings, or introducing moderation tags over time.
  5. Versatile Querying: Supports combining vector similarity search, full-text search, and SQL-style filtering in a single query, accelerated by on-disk indexes.
  6. Data Versioning: Every mutation commits a new version; previous versions remain intact on disk. Tags pin a snapshot by name, so retrieval systems and training runs can reproduce against an exact slice of history.

Load with datasets.load_dataset

You can load Lance datasets via the standard HuggingFace datasets interface, suitable when your pipeline already speaks Dataset / IterableDataset or you want a quick streaming sample.
import datasets

hf_ds = datasets.load_dataset("lance-format/hotpotqa-distractor-lance", split="validation", streaming=True)
for row in hf_ds.take(3):
    print(row["question"], "->", row["answer"])

Load with LanceDB

LanceDB is the embedded retrieval library built on top of the Lance format (docs), and is the interface most users interact with. Each .lance file in data/ is a table — open by name (train, validation). The same handle is used by the Search, Curate, Evolve, Versioning, and Materialize-a-subset sections below.
import lancedb

db = lancedb.connect("hf://datasets/lance-format/hotpotqa-distractor-lance/data")
tbl = db.open_table("validation")
print(len(tbl))

Load with Lance

pylance is the Python binding for the Lance format and works directly with the format’s lower-level APIs. Reach for it when you want to inspect dataset internals — schema, scanner, fragments, the list of pre-built indices.
import lance

ds = lance.dataset("hf://datasets/lance-format/hotpotqa-distractor-lance/data/validation.lance")
print(ds.count_rows(), ds.schema.names)
print(ds.list_indices())
Tip — for production use, download locally first. Streaming from the Hub works for exploration, but heavy random access and ANN search are far faster against a local copy:
hf download lance-format/hotpotqa-distractor-lance --repo-type dataset --local-dir ./hotpotqa-distractor-lance
Then point Lance or LanceDB at ./hotpotqa-distractor-lance/data.
Search

The bundled IVF_PQ index on question_emb makes nearest-neighbor question lookup a single call. In production you would encode an incoming user question through the same 384-dim MiniLM encoder used at ingest and pass the resulting vector to tbl.search(...). The example below uses the embedding from row 42 as a runnable stand-in so the snippet works without loading a model.
import lancedb

db = lancedb.connect("hf://datasets/lance-format/hotpotqa-distractor-lance/data")
tbl = db.open_table("train")

seed = (
    tbl.search()
    .select(["question_emb", "question"])
    .limit(1)
    .offset(42)
    .to_list()[0]
)

hits = (
    tbl.search(seed["question_emb"], vector_column_name="question_emb")
    .metric("cosine")
    .where("level = 'hard'", prefilter=True)
    .select(["question", "answer", "supporting_titles", "type"])
    .limit(10)
    .to_list()
)
for r in hits:
    print(f"[{r['type']}] {r['question']}  ->  {r['answer']}")
The result set carries only the projected columns; the 384-dim question_emb is never read on the result side, and the long context_text body is left untouched, keeping the working set small even when the underlying scan touches every row of the train split.

Because the dataset also ships an INVERTED index on both question and context_text, the same query can be issued as a hybrid search that combines the dense vector with a keyword query against the full paragraph text. LanceDB merges the two result lists and reranks them in a single call, which is useful when a named entity must literally appear in one of the supporting paragraphs but the dense side still does most of the ranking.
hybrid_hits = (
    tbl.search(query_type="hybrid")
    .vector(seed["question_emb"])
    .text("inception dunkirk")
    .select(["question", "answer", "supporting_titles"])
    .limit(10)
    .to_list()
)
for r in hybrid_hits:
    print(r["question"], "->", r["answer"])
Tune metric, nprobes, and refine_factor on the vector side to trade recall against latency for your workload.
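As a sketch, the knobs attach to the same query builder; the values here are illustrative starting points rather than tuned settings:
hits_tuned = (
    tbl.search(seed["question_emb"], vector_column_name="question_emb")
    .metric("cosine")
    .nprobes(32)        # scan more IVF partitions: higher recall, higher latency
    .refine_factor(5)   # re-rank 5x the requested k with exact distances
    .limit(10)
    .to_list()
)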

Curate

Building a focused evaluation slice usually means stacking predicates over the question metadata before any context text gets read. Lance evaluates the filter inside a single scan, so the candidate set comes back already filtered, and the bounded .limit(2000) keeps the output small enough to inspect. The example below assembles a set of hard, multi-hop comparison questions for which the gold answer is a real span rather than yes/no.
import lancedb

db = lancedb.connect("hf://datasets/lance-format/hotpotqa-distractor-lance/data")
tbl = db.open_table("train")

candidates = (
    tbl.search()
    .where(
        "type = 'comparison' "
        "AND level = 'hard' "
        "AND num_supporting_facts >= 2 "
        "AND answer NOT IN ('yes', 'no') "
        "AND length(question) >= 40",
        prefilter=True,
    )
    .select(["id", "question", "answer", "supporting_titles"])
    .limit(2000)
    .to_list()
)
print(f"{len(candidates)} candidates; first: {candidates[0]['question']}")
The result is a plain list of dictionaries, ready to inspect, persist as a manifest of question ids, or hand to the Evolve and Train sections below. Neither context_text nor context_sentences is read by this scan, so a 2000-row curation pass against the Hub moves only kilobytes of metadata.
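For example, persisting the slice as a manifest of question ids is a few lines (the filename is arbitrary):
import json

with open("hard-comparison-manifest.json", "w") as f:
    json.dump([row["id"] for row in candidates], f)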

Evolve

Lance stores each column independently, so a new column can be appended without rewriting the existing data. The lightest form is a SQL expression: derive the new column from columns that already exist, and Lance computes it once and persists it. The example below adds a question_length column and an is_multi_hop flag, either of which can then be used directly in where clauses without recomputing the predicate on every query.
Note: Mutations require a local copy of the dataset, since the Hub mount is read-only. See the Materialize-a-subset section at the end of this card for a streaming pattern that downloads only the rows and columns you need, or use hf download to pull the full corpus.
import lancedb

db = lancedb.connect("./hotpotqa-distractor-lance/data")  # local copy required for writes
tbl = db.open_table("train")

tbl.add_columns({
    "question_length": "length(question)",
    "is_multi_hop": "num_supporting_facts >= 2",
})
If the values you want to attach already live in another table (offline retriever scores, reranker logits, alternate embeddings from a stronger model), merge them in by joining on the question id:
import pyarrow as pa

retriever_scores = pa.table({
    "id": pa.array(["5a8b57f25542995d1e6f1371", "5a8c7595554299585d9e36b6"]),
    "bm25_top1_score": pa.array([12.7, 9.4]),
})
tbl.merge(retriever_scores, left_on="id")
The original columns and indices are untouched, so existing code that does not reference the new columns continues to work unchanged. New columns become visible to every reader as soon as the operation commits. For column values that require a Python computation (e.g., running a different encoder over the question text), Lance provides a batch-UDF API — see the Lance data evolution docs.
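A rough sketch of that batch-UDF path, assuming pylance's batch_udf decorator as described in those docs (the column name and the whitespace-token heuristic are illustrative, not part of this dataset):
import lance
import pyarrow as pa

ds = lance.dataset("./hotpotqa-distractor-lance/data/train.lance")  # local copy, as above

@lance.batch_udf()
def question_tokens(batch: pa.RecordBatch) -> pa.RecordBatch:
    # Illustrative Python-side computation: whitespace token count per question.
    counts = [len(q.split()) for q in batch["question"].to_pylist()]
    return pa.RecordBatch.from_pydict({"question_token_count": pa.array(counts, pa.int32())})

ds.add_columns(question_tokens, read_columns=["question"])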

Train

Projection lets a training loop read only the columns each step actually needs. LanceDB tables expose this through Permutation.identity(tbl).select_columns([...]), which plugs straight into the standard torch.utils.data.DataLoader so prefetching, shuffling, and batching behave as in any PyTorch pipeline. For a multi-hop QA model the natural projection is the question plus the flattened context and the gold answer; for a question-encoder retraining loop the precomputed embedding is enough on its own.
import lancedb
from lancedb.permutation import Permutation
from torch.utils.data import DataLoader

db = lancedb.connect("hf://datasets/lance-format/hotpotqa-distractor-lance/data")
tbl = db.open_table("train")

train_ds = Permutation.identity(tbl).select_columns(["question", "context_text", "answer"])
loader = DataLoader(train_ds, batch_size=16, shuffle=True, num_workers=4)

for batch in loader:
    # batch carries only the projected columns; tokenize, forward, backward...
    ...
Switching feature sets is a configuration change: passing ["question_emb", "answer"] to select_columns(...) on the next run reads only the 384-dim vectors and the short answer string, which is the right shape for fine-tuning a retrieval head on cached embeddings. Columns added in Evolve cost nothing per batch until they are explicitly projected.
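For instance, the cached-embedding run is the same loop with a different projection (batch size is illustrative):
emb_ds = Permutation.identity(tbl).select_columns(["question_emb", "answer"])
emb_loader = DataLoader(emb_ds, batch_size=256, shuffle=True, num_workers=4)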

Versioning

Every mutation to a Lance dataset, whether it adds a column, merges labels, or builds an index, commits a new version. Previous versions remain intact on disk. You can list versions and inspect the history directly from the Hub copy; creating new tags requires a local copy since tags are writes.
import lancedb

db = lancedb.connect("hf://datasets/lance-format/hotpotqa-distractor-lance/data")
tbl = db.open_table("train")

print("Current version:", tbl.version)
print("History:", tbl.list_versions())
print("Tags:", tbl.tags.list())
Once you have a local copy, tag a version for reproducibility:
local_db = lancedb.connect("./hotpotqa-distractor-lance/data")
local_tbl = local_db.open_table("train")
local_tbl.tags.create("hard-multihop-v1", local_tbl.version)
A tagged version can be opened by name, or any version reopened by its number. A tag resolves only in the copy where it was created, so the named open below runs against the local copy, while numeric versions work against either:
tbl_v1 = local_db.open_table("train", version="hard-multihop-v1")
tbl_v5 = db.open_table("train", version=5)
Pinning supports two workflows. A QA system locked to hard-multihop-v1 keeps returning stable supporting facts while the dataset evolves in parallel — newly added retriever scores or labels do not change what the tag resolves to. A training experiment pinned to the same tag can be rerun later against the exact same questions and contexts, so changes in metrics reflect model changes rather than data drift. Neither workflow needs shadow copies or external manifest tracking.

Materialize a subset

Reads from the Hub are lazy, so exploratory queries only transfer the columns and row groups they touch. Mutating operations (Evolve, tag creation) need a writable backing store, and a training loop benefits from a local copy with fast random access. Both can be served by a subset of the dataset rather than the full corpus. The pattern is to stream a filtered query through .to_batches() into a new local table; only the projected columns and matching row groups cross the wire, and the bytes never fully materialize in Python memory.
import lancedb

remote_db = lancedb.connect("hf://datasets/lance-format/hotpotqa-distractor-lance/data")
remote_tbl = remote_db.open_table("train")

batches = (
    remote_tbl.search()
    .where(
        "type = 'comparison' "
        "AND level = 'hard' "
        "AND num_supporting_facts >= 2"
    )
    .select(["id", "question", "answer", "supporting_titles", "context_text", "question_emb"])
    .to_batches()
)

local_db = lancedb.connect("./hotpotqa-hard-comparison")
local_db.create_table("train", batches)
The resulting ./hotpotqa-hard-comparison is a first-class LanceDB database. Every snippet in the Search, Evolve, Train, and Versioning sections above works against it by swapping hf://datasets/lance-format/hotpotqa-distractor-lance/data for ./hotpotqa-hard-comparison.

Source & license

Converted from hotpot_qa (distractor config). HotpotQA is released under CC BY-SA 4.0.

Citation

@inproceedings{yang2018hotpotqa,
  title={HotpotQA: A Dataset for Diverse, Explainable Multi-hop Question Answering},
  author={Yang, Zhilin and Qi, Peng and Zhang, Saizheng and Bengio, Yoshua and Cohen, William W. and Salakhutdinov, Ruslan and Manning, Christopher D.},
  booktitle={Empirical Methods in Natural Language Processing (EMNLP)},
  year={2018}
}