Embedding-based retrieval, also known as dense retrieval, has become the go-to method for modern search and retrieval-augmented generation systems. Neural models map queries and documents to high-dimensional vectors (embeddings) and retrieve documents by nearest-neighbor similarity. However, recent research shows a surprising weakness: single-vector embeddings have a fundamental capacity limit. In short, an embedding of a given dimension can only represent a limited number of distinct relevant-document combinations. When queries require multiple documents as answers, dense retrievers start to fail, even on very simple tasks. In this post, we will explore why this happens and examine the alternatives that can overcome these limitations.
Single-Vector Embeddings and Their Use in Retrieval
In dense retrieval, both documents and queries are fed through a neural encoder, often a transformer-based language model, and each one is mapped to a single vector that captures the meaning of the text. Texts with similar meanings end up near each other: documents about sports cluster together, and a query like “best running shoes” lands close to shoe-related documents. At search time, the system encodes the user’s query into its embedding and finds the nearest document vectors.
Similarity is typically measured with the dot product or cosine similarity, and the top-k most similar documents are returned. This differs from older sparse methods like BM25 that match keywords. Embedding models are known for handling paraphrases and semantic similarity: searching “dog pictures” can find “puppy photographs” even though the words differ. They also generalize well to new data because they build on pre-trained language models.
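To make this concrete, here is a minimal sketch of dense retrieval using the sentence-transformers library; the model name and the toy corpus are illustrative assumptions rather than anything prescribed above.

```python
# pip install sentence-transformers numpy
import numpy as np
from sentence_transformers import SentenceTransformer

# Illustrative model choice; any bi-encoder that returns one vector per text works.
model = SentenceTransformer("all-MiniLM-L6-v2")

docs = [
    "Top puppy photographs from this year's dog show",
    "A buyer's guide to the best running shoes",
    "Highlights from last night's basketball game",
]

# Encode documents once, offline. normalize_embeddings=True makes the
# dot product below equivalent to cosine similarity.
doc_vecs = model.encode(docs, normalize_embeddings=True)

def search(query: str, k: int = 2):
    """Encode the query and return the k nearest documents by cosine similarity."""
    q_vec = model.encode([query], normalize_embeddings=True)[0]
    scores = doc_vecs @ q_vec                # one similarity score per document
    top_k = np.argsort(-scores)[:k]          # indices of the k highest scores
    return [(docs[i], float(scores[i])) for i in top_k]

print(search("dog pictures"))  # matches "puppy photographs" despite different wording
```

The key point is that documents are encoded once, offline, and only the query needs to be embedded at search time, which is what makes dense retrieval fast.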
These dense retrievers power many applications like web search engines, question answering systems, recommendation engines, and more. They also extend beyond plain text; multimodal embeddings map images or code to vectors, enabling cross-modal search.
However, retrieval tasks have become more complex, especially tasks that combine multiple concepts or require returning multiple documents, and a single vector embedding cannot always handle such queries. This brings us to a fundamental mathematical constraint that limits what single-vector systems can achieve.
Theoretical Limits of Single-Vector Embeddings
The issue is a simple geometric fact: a fixed-size vector space can only realize a limited number of distinct ranking outcomes. Imagine you have n documents and you want to specify, for every query, which subset of k documents should be the top results. Each query can be thought of as picking some set of relevant documents. The embedding model maps each document to a point in ℝ^d, and each query to a point in the same space; the dot products between them determine relevance.
It can be shown that the minimum dimension d required to represent a given pattern of query-document relevance perfectly is governed by the rank (more precisely, the sign-rank) of the binary “relevance matrix” that records which documents are relevant to which queries.
The bottom line is that, for any particular dimension d, there are some query-document relevance patterns that a d-dimensional embedding cannot represent. In other words, no matter how you train or tune the model, if you ask for a sufficiently large number of distinct combinations of documents to be relevant together, a small vector cannot discriminate all those cases. In technical terms, the number of distinct top-k subsets of documents that can be produced by any query is upper-bounded by a function of d. Once the combinations a workload demands exceed what the embedding dimension can express, some of them simply can never be retrieved correctly.
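The effect is easy to see empirically. The toy experiment below (a sketch, not the formal argument) fixes a dozen random document embeddings in a small dimension d, probes the space with many random query vectors, and counts how many distinct top-k subsets ever appear; for small d the count stays well below the number of combinatorially possible subsets.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)

n_docs, k, d = 12, 2, 3                  # 12 documents, top-2 retrieval, 3-dim embeddings
doc_vecs = rng.normal(size=(n_docs, d))

# Probe the space with many random query directions and record which
# top-k subsets are actually reachable as a retrieval result.
queries = rng.normal(size=(200_000, d))
scores = queries @ doc_vecs.T                      # (num_queries, n_docs)
top_k = np.argsort(-scores, axis=1)[:, :k]         # indices of the k best docs per query
reachable = {tuple(sorted(row)) for row in top_k}

total = len(list(itertools.combinations(range(n_docs), k)))  # 66 possible pairs
print(f"reachable top-{k} subsets: {len(reachable)} of {total}")
# With d=3, only a fraction of the 66 pairs ever appears as a top-2 result;
# increasing d makes more of the subsets reachable.
```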
This mathematical limitation explains why dense retrieval systems struggle with complex, multi-faceted queries that require understanding multiple independent concepts simultaneously. Fortunately, researchers have developed several architectural alternatives that can overcome these constraints.
Alternative Architectures: Beyond Single-Vector
Given these fundamental limitations of single-vector embeddings, several alternative approaches have emerged to address more complex retrieval scenarios:
Cross-Encoders (Re-Rankers): These models take the query and each document together and jointly score them, usually by feeding them as one sequence into a transformer. Because cross-encoders directly model interactions between query and document, they are not limited by a fixed embedding dimension. The trade-off is cost: every candidate must be scored with the query at search time, so cross-encoders are typically used to re-rank a small shortlist rather than to search the whole corpus.
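As a sketch of how a re-ranker slots in after first-stage retrieval, here is the CrossEncoder interface from the sentence-transformers library; the specific checkpoint name is an illustrative assumption.

```python
from sentence_transformers import CrossEncoder

# Illustrative public re-ranking checkpoint; any cross-encoder model works the same way.
reranker = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")

query = "best running shoes for flat feet"
candidates = [
    "A buyer's guide to the best running shoes",
    "How to choose arch support if you have flat feet",
    "Highlights from last night's basketball game",
]

# Each (query, document) pair is passed through the transformer together,
# so query-document interactions are modeled directly at scoring time.
scores = reranker.predict([(query, doc) for doc in candidates])
for doc, score in sorted(zip(candidates, scores), key=lambda x: x[1], reverse=True):
    print(f"{score:.3f}  {doc}")
```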
Multi-Vector Models: These expand each document into multiple vectors. For example, ColBERT-style models index every token of a document separately, so a query can match on any combination of those vectors. This massively increases the effective representational capacity: since each document is now a set of embeddings, the system can cover many more combination patterns. The trade-offs are index size and design complexity; multi-vector models need a specialized late-interaction scoring step (maximum similarity, or MaxSim) and a purpose-built index, and they can use a lot more storage.
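The heart of ColBERT-style scoring is that MaxSim operation: every query token vector is matched against its most similar document token vector, and the maxima are summed. A minimal NumPy sketch with random stand-in token embeddings (a real system would produce them with a ColBERT-style encoder):

```python
import numpy as np

rng = np.random.default_rng(0)

def maxsim_score(query_tokens: np.ndarray, doc_tokens: np.ndarray) -> float:
    """Late interaction: for each query token, take the similarity of its
    best-matching document token, then sum those maxima over the query."""
    sims = query_tokens @ doc_tokens.T      # (num_query_tokens, num_doc_tokens)
    return float(sims.max(axis=1).sum())

# Stand-in token embeddings; in practice one (usually normalized) vector per token.
query_tokens = rng.normal(size=(8, 128))    # 8 query tokens, 128-dim
doc_a = rng.normal(size=(120, 128))         # a 120-token document
doc_b = rng.normal(size=(90, 128))          # a 90-token document

print(maxsim_score(query_tokens, doc_a), maxsim_score(query_tokens, doc_b))
```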
Sparse Models: Sparse methods like BM25 represent text in very high-dimensional spaces, giving them strong capacity to capture diverse relevance patterns. They excel when queries and documents share terms, but their trade-off is heavy reliance on lexical overlap, making them weaker for semantic matching or reasoning beyond exact words.
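For comparison, here is a minimal BM25 baseline with the rank_bm25 package; the toy corpus and whitespace tokenization are simplifying assumptions.

```python
from rank_bm25 import BM25Okapi  # pip install rank-bm25

corpus = [
    "top puppy photographs from the dog show",
    "a buyer's guide to the best running shoes",
    "highlights from last night's basketball game",
]
tokenized_corpus = [doc.split() for doc in corpus]  # naive whitespace tokenization
bm25 = BM25Okapi(tokenized_corpus)

print(bm25.get_scores("dog pictures".split()))
# Only "dog" overlaps lexically, so the first document scores highest, while
# "pictures" vs. "photographs" earns no credit; that is the gap dense models close.
```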
Each alternative has trade-offs, so many systems use hybrids: embeddings for fast retrieval, cross-encoders for re-ranking, or sparse models for lexical coverage. For complex queries, single-vector embeddings alone often fall short, making multi-vector or reasoning-based methods necessary.
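One simple way to combine retrievers is reciprocal rank fusion (RRF), which merges ranked lists without requiring their scores to be comparable. A sketch, assuming each retriever returns an ordered list of document IDs:

```python
from collections import defaultdict

def reciprocal_rank_fusion(ranked_lists, k: int = 60):
    """Merge ranked lists of document IDs; k = 60 is the commonly used constant."""
    fused = defaultdict(float)
    for ranking in ranked_lists:
        for rank, doc_id in enumerate(ranking):
            fused[doc_id] += 1.0 / (k + rank + 1)
    return sorted(fused, key=fused.get, reverse=True)

dense_hits = ["doc3", "doc1", "doc7"]    # e.g., from a bi-encoder
sparse_hits = ["doc1", "doc9", "doc3"]   # e.g., from BM25
print(reciprocal_rank_fusion([dense_hits, sparse_hits]))
# doc1 and doc3 rise to the top because both retrievers agree on them.
```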
Conclusion
While dense embeddings have revolutionized information retrieval with their semantic understanding capabilities, they are not a universal solution. The fundamental geometric constraints of single-vector representations create real limitations when dealing with complex, multi-faceted queries that require retrieving diverse combinations of documents. Understanding these limitations is crucial for building effective retrieval systems. Rather than viewing this as a failure of embedding-based methods, we should see it as an opportunity to design hybrid architectures that leverage the strengths of different approaches.
The future of retrieval lies not in any single method, but in intelligent combinations of dense embeddings, sparse representations, multi-vector models, and cross-encoders that can handle the full spectrum of information needs as AI systems become more sophisticated and user queries more complex.