lance-format/imagenet-1k-val-lance
A Lance-format conversion of the ImageNet-1k validation split from benjamin-paine/imagenet-1k. Each row is one image with its integer class id, a string class name, and a cosine-normalized OpenCLIP image embedding — all stored inline and available directly from the Hub at `hf://datasets/lance-format/imagenet-1k-val-lance/data`. The 1.28 M-image ImageNet-1k train split (~155 GB) is intentionally out of scope for this redistribution; the val split is the canonical evaluation slice for classification benchmarks and is small enough (~7 GB in Lance) to ride entirely in inline storage alongside its embeddings.
Key features
- Inline JPEG bytes in the `image` column — no per-class folders, no sidecar files.
- Pre-computed OpenCLIP image embeddings (`image_emb`, ViT-B/32 trained on `laion2b_s34b_b79k`, 512-dim, cosine-normalized) with a bundled `IVF_PQ` index for similarity search.
- Both label representations — integer `label` (0-999) and string `label_name` (first synonym of the WordNet synset, e.g. `golden_retriever`) — with scalar indices on both for fast class filters.
- One columnar dataset — scan labels and embeddings cheaply, fetch image bytes only for the rows you actually need.
Splits
A single split, shipped as `validation.lance` (50,000 rows).
Schema
| Column | Type | Notes |
|---|---|---|
| `id` | int64 | Row index within the split, 0-49,999 (natural join key) |
| `image` | large_binary | Inline JPEG bytes |
| `label` | int32 | Class id (0-999) |
| `label_name` | string | First synonym of the synset, underscore-spaced (e.g. `golden_retriever`) |
| `image_emb` | fixed_size_list<float32, 512> | OpenCLIP ViT-B-32 / laion2b_s34b_b79k image embedding (cosine-normalized) |
The full list of class names is also carried in the dataset's schema metadata under the key `lance:class_names`.
Pre-built indices
- `IVF_PQ` on `image_emb` — vector similarity search (cosine, `num_partitions=64`)
- `BTREE` on `label` — fast equality / range filters by class id
- `BITMAP` on `label_name` — fast set-membership filters by class name
Why Lance?
- Blazing Fast Random Access: Optimized for fetching scattered rows, making it ideal for random sampling, real-time ML serving, and interactive applications without performance degradation.
- Native Multimodal Support: Store text, embeddings, and other data types together in a single file. Large binary objects are loaded lazily, and vectors are optimized for fast similarity search.
- Native Index Support: Lance comes with fast, on-disk, scalable vector and FTS indexes that sit right alongside the dataset on the Hub, so you can share not only your data but also your embeddings and indexes without your users needing to recompute them.
- Efficient Data Evolution: Add new columns and backfill data without rewriting the entire dataset. This is perfect for evolving ML features, adding new embeddings, or introducing moderation tags over time.
- Versatile Querying: Supports combining vector similarity search, full-text search, and SQL-style filtering in a single query, accelerated by on-disk indexes.
- Data Versioning: Every mutation commits a new version; previous versions remain intact on disk. Tags pin a snapshot by name, so retrieval systems and training runs can reproduce against an exact slice of history.
Load with datasets.load_dataset
You can load Lance datasets via the standard HuggingFace datasets interface. This is suitable when your pipeline already speaks Dataset / IterableDataset, or when you want a quick streaming sample without installing anything Lance-specific.
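A minimal sketch, assuming the Hub repo resolves through the datasets library's Lance support; column names follow the schema above:

```python
from datasets import load_dataset

# Stream the validation split without downloading the full ~7 GB.
ds = load_dataset(
    "lance-format/imagenet-1k-val-lance",
    split="validation",
    streaming=True,
)

for row in ds.take(2):
    print(row["id"], row["label"], row["label_name"], len(row["image"]))
```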
Load with LanceDB
LanceDB is the embedded retrieval library built on top of the Lance format (docs), and is the interface most users interact with. It wraps the dataset as a queryable table with search and filter builders, and is the entry point used by the Search, Curate, Evolve, Train, Versioning, and Materialize-a-subset sections below.
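A minimal sketch of opening the Hub copy as a table (the `hf://` URI is the one quoted above, and the table name matches the single `validation` split):

```python
import lancedb

# Connect straight to the Hub copy; reads are lazy.
db = lancedb.connect("hf://datasets/lance-format/imagenet-1k-val-lance/data")
tbl = db.open_table("validation")

print(tbl.count_rows())  # 50,000
print(tbl.schema)        # id, image, label, label_name, image_emb
```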
Load with Lance
pylance is the Python binding for the Lance format and works directly with the format’s lower-level APIs. Reach for it when you want to inspect or operate on dataset internals — schema, scanner, fragments, and the list of pre-built indices.
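A sketch of that lower-level view, assuming `pylance` is installed:

```python
import lance

ds = lance.dataset(
    "hf://datasets/lance-format/imagenet-1k-val-lance/data/validation.lance"
)

print(ds.schema)          # Arrow schema, including the fixed_size_list embedding
print(ds.count_rows())    # 50,000
print(ds.list_indices())  # the pre-built IVF_PQ / BTREE / BITMAP indices
```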
Tip — for production use, download locally first. Streaming from the Hub works for exploration, but heavy random access and ANN search are far faster against a local copy; a sketch of the download command follows. Afterwards, point Lance or LanceDB at `./imagenet-1k-val-lance/data`.
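One way to pull the split, assuming the `hf` CLI from `huggingface_hub` (the flags shown are standard CLI options, not taken from this card):

```bash
hf download lance-format/imagenet-1k-val-lance \
  --repo-type dataset \
  --local-dir ./imagenet-1k-val-lance
```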
Search
The bundled `IVF_PQ` index on `image_emb` makes nearest-neighbor retrieval over the validation set a single call. In production you would encode a query image through the same OpenCLIP ViT-B-32 / laion2b_s34b_b79k model used at ingest (cosine-normalized) and pass the resulting 512-d vector to `tbl.search(...)`. The example below uses the embedding already stored in row 42 as a runnable stand-in, so the snippet works without any model loaded.
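A sketch of that call; the builder methods (`metric`, `nprobes`, `select`, `limit`) follow the LanceDB Python query API:

```python
import lancedb

db = lancedb.connect("./imagenet-1k-val-lance/data")  # or the hf:// URI
tbl = db.open_table("validation")

# Fetch the stored embedding of row 42 to use as the query vector.
seed = tbl.search().where("id = 42").select(["image_emb"]).limit(1).to_list()[0]

hits = (
    tbl.search(seed["image_emb"])
    .metric("cosine")
    .nprobes(16)  # recall/latency knob for the IVF_PQ index
    .select(["id", "label", "label_name"])
    .limit(5)
    .to_list()
)
for h in hits:
    print(h["id"], h["label_name"], h["_distance"])
```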
metric="cosine" is the right choice and the first hit will typically be the seed image itself — a useful sanity check. Tune nprobes and refine_factor to trade recall against latency for your workload.
Curate
A typical curation pass for an ImageNet-style classification or robustness study narrows the validation set to a single class (or a synset prefix) and then materializes a small candidate set for inspection. Stacking the filter and the projection inside a single scan keeps the result small and explicit, and the bounded `.limit(200)` makes it cheap to inspect before committing the subset to anything downstream.
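A sketch of that pass (the class choice is illustrative):

```python
import lancedb

db = lancedb.connect("./imagenet-1k-val-lance/data")
tbl = db.open_table("validation")

candidates = (
    tbl.search()
    .where("label_name = 'golden_retriever'")
    .select(["id", "label", "label_name"])  # the image column is never read
    .limit(200)
    .to_list()
)
print(len(candidates), candidates[0])
```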
The `BITMAP` index on `label_name` resolves the predicate without scanning, and the `image` column is never read, so the network traffic for the candidate scan is dominated by the small metadata payload rather than JPEG bytes. The result is a plain list of dictionaries, ready to inspect, persist as a manifest of row ids, or feed into the Evolve and Train workflows below. To grab a family of related classes, replace the equality with a `LIKE` predicate such as `label_name LIKE 'tabby%'` or an `IN` set over a curated synset list.
Evolve
Lance stores each column independently, so a new column can be appended without rewriting the existing data. The lightest form is a SQL expression: derive the new column from columns that already exist, and Lance computes it once and persists it. The example below adds a coarse `is_dog` flag over a curated set of canine synsets, which can then be used directly in later `where` clauses without re-listing the class set on every query.
Note: Mutations require a local copy of the dataset, since the Hub mount is read-only. See the Materialize-a-subset section at the end of this card for a streaming pattern that downloads only the rows and columns you need, or use `hf download` to pull the full split first.
The SQL expression derives the flag from `label_name`:
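A sketch using LanceDB's `add_columns` with a SQL expression; the synset list here is a small illustrative subset, not a complete canine set:

```python
import lancedb

db = lancedb.connect("./imagenet-1k-val-lance/data")  # local, writable copy
tbl = db.open_table("validation")

# Derive and persist the flag once; Lance appends the column without
# rewriting the existing data.
tbl.add_columns({
    "is_dog": (
        "label_name IN ('golden_retriever', 'Labrador_retriever', "
        "'beagle', 'German_shepherd', 'pug')"
    )
})

# Later queries can filter on it directly:
dogs = (
    tbl.search()
    .where("is_dog = true")
    .select(["id", "label_name"])
    .limit(5)
    .to_list()
)
```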
Train
Projection lets a training loop — or, more commonly for this split, an evaluation loop — read only the columns each step actually needs. LanceDB tables expose this through `Permutation.identity(tbl).select_columns([...])`, which plugs straight into the standard `torch.utils.data.DataLoader` so prefetching, shuffling, and batching behave as in any PyTorch pipeline. Columns added in the Evolve section above cost nothing per batch until they are explicitly projected.
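A sketch of an image-plus-label loop (the `Permutation` import path is an assumption; check where your lancedb version exposes it):

```python
import lancedb
from lancedb.torch import Permutation  # import path assumed
from torch.utils.data import DataLoader

db = lancedb.connect("./imagenet-1k-val-lance/data")
tbl = db.open_table("validation")

# Project only what the step needs; other columns are never read.
dataset = Permutation.identity(tbl).select_columns(["image", "label"])
loader = DataLoader(dataset, batch_size=64, num_workers=4)

for batch in loader:
    # batch["image"] holds raw JPEG bytes; batch["label"] holds int32 class ids
    break
```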
["image_emb", "label"] to select_columns(...) on the next run skips JPEG decoding entirely and reads only the cached 512-d vectors, which is the right shape for training a linear probe or a lightweight classifier head on top of frozen CLIP features.
Versioning
Every mutation to a Lance dataset, whether it adds a column, merges labels, or builds an index, commits a new version, and previous versions remain intact on disk. You can list versions and inspect the history directly from the Hub copy; creating new tags requires a local copy, since tags are writes. A benchmark pinned to a tag such as `clip-vitb32-laion2b-v1` keeps reporting numbers against a fixed snapshot of labels and embeddings even as the dataset evolves in parallel; newly added columns or relabelings do not change what the tag resolves to. A research experiment pinned to the same tag can be rerun later against the exact same images, so changes in metrics reflect model changes rather than data drift. Neither workflow needs shadow copies or external manifest tracking.
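A sketch using pylance's version and tag APIs (the tag name echoes the example above):

```python
import lance

# History inspection is read-only and works straight from the Hub:
hub = lance.dataset(
    "hf://datasets/lance-format/imagenet-1k-val-lance/data/validation.lance"
)
for v in hub.versions():
    print(v["version"], v["timestamp"])

# Tag creation is a write, so it needs the local copy:
local = lance.dataset("./imagenet-1k-val-lance/data/validation.lance")
local.tags.create("clip-vitb32-laion2b-v1", local.version)

# Reopen pinned to the tag for reproducible runs:
pinned = lance.dataset(
    "./imagenet-1k-val-lance/data/validation.lance",
    version="clip-vitb32-laion2b-v1",
)
```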
Materialize a subset
Reads from the Hub are lazy, so exploratory queries only transfer the columns and row groups they touch. Mutating operations (Evolve, tag creation) need a writable backing store, and a training or evaluation loop benefits from a local copy with fast random access. Both can be served by a subset of the dataset rather than the full split. The pattern is to stream a filtered query through `.to_batches()` into a new local table; only the projected columns and matching row groups cross the wire, and the bytes never fully materialize in Python memory.
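A sketch of that stream (the `LIKE` filter and subset path are illustrative):

```python
import lancedb

src = lancedb.connect("hf://datasets/lance-format/imagenet-1k-val-lance/data")
tbl = src.open_table("validation")

# Filtered, projected scan streamed as Arrow batches from the Hub.
batches = (
    tbl.search()
    .where("label_name LIKE '%retriever%'")
    .select(["id", "image", "label", "label_name", "image_emb"])
    .to_batches()
)

# Write the batches into a fresh local database as they arrive.
dst = lancedb.connect("./imagenet-dogs-subset")
dst.create_table("validation", data=batches)
```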
The resulting `./imagenet-dogs-subset` is a first-class LanceDB database. Every snippet in the Evolve, Train, and Versioning sections above works against it by swapping `hf://datasets/lance-format/imagenet-1k-val-lance/data` for `./imagenet-dogs-subset`.
Source & license
Converted from benjamin-paine/imagenet-1k, itself a redistribution of the ILSVRC2012 ImageNet-1k validation split. All use is subject to the ImageNet terms of access — for research use only.