1. Document Knowledge Base
Documents are pre-processed and converted into vector embeddings. In the visualization, each sphere represents one document in high-dimensional semantic space.
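The embedding step can be sketched as follows. The `embed` function here is a hypothetical word-hash stand-in for a real learned embedding model; real systems use trained encoders, but the shape of the pipeline is the same: every document becomes one fixed-length vector.

```python
import hashlib
import numpy as np

def embed(text: str, dim: int = 8) -> np.ndarray:
    # Toy word-hash embedding -- a hypothetical stand-in for a real
    # embedding model. Each word increments one slot of a fixed-size vector,
    # which is then normalized to unit length.
    vec = np.zeros(dim)
    for word in text.lower().split():
        vec[int(hashlib.md5(word.encode()).hexdigest(), 16) % dim] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

documents = [
    "deep neural networks for image recognition",
    "statistical methods in data analysis",
    "vector databases for semantic search",
]
# One unit-length vector per document -- each "sphere" in the scene.
doc_vectors = np.stack([embed(d) for d in documents])
print(doc_vectors.shape)  # (3, 8)
```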
2. Query Embedding
Your query is converted into the same vector space using the same embedding model. The golden octahedron marks your query's position in that space.
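The crucial detail in this step is that the query passes through the identical embedding function as the documents, so both live in the same space. A minimal sketch, reusing the same hypothetical word-hash embedding:

```python
import hashlib
import numpy as np

def embed(text: str, dim: int = 8) -> np.ndarray:
    # Same toy word-hash embedding used for the documents (a hypothetical
    # stand-in for the real model). Queries MUST go through the identical
    # function, or the distances between vectors are meaningless.
    vec = np.zeros(dim)
    for word in text.lower().split():
        vec[int(hashlib.md5(word.encode()).hexdigest(), 16) % dim] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

query_vector = embed("deep neural networks")   # the golden octahedron
doc_vector = embed("neural networks for image recognition")
print(query_vector.shape == doc_vector.shape)  # same space: True
```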
3. Similarity Search
The system computes the cosine similarity between your query vector and every document vector to find the closest matches.
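Cosine similarity measures the angle between two vectors, ignoring their magnitudes. A self-contained sketch with hand-picked toy vectors:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # cos(theta) = (a . b) / (||a|| * ||b||), ranging from -1 to 1;
    # 1 means same direction, 0 means orthogonal (unrelated).
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

query = np.array([1.0, 0.0, 1.0])
docs = np.array([
    [1.0, 0.0, 1.0],   # identical direction -> similarity 1.0
    [0.0, 1.0, 0.0],   # orthogonal        -> similarity 0.0
    [1.0, 1.0, 0.0],   # partial overlap   -> similarity 0.5
])
scores = [cosine_similarity(query, d) for d in docs]
print(scores)
```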
4. Document Retrieval
The top-K most similar documents are retrieved based on their similarity scores. Watch as the most relevant documents light up and scale.
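Top-K selection is a sort over the similarity scores. A minimal sketch with hypothetical scores:

```python
import numpy as np

scores = np.array([0.12, 0.87, 0.55, 0.91, 0.30])  # hypothetical similarities
k = 3

# argsort sorts ascending; reverse it and take the first k entries
# to get the indices of the k highest-scoring documents.
top_k_indices = np.argsort(scores)[::-1][:k]
print(top_k_indices.tolist())  # [3, 1, 2]
```

In the visualization, these are the documents that light up and scale.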
5. Context Assembly
Retrieved documents are connected to your query with visual lines showing relationship strength, and their text is assembled into context for the LLM.
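Behind the visual lines, context assembly is string construction: the retrieved snippets are numbered and concatenated into a single prompt. The snippets and scores below are hypothetical, for illustration only:

```python
# Hypothetical retrieved snippets with their similarity scores.
retrieved = [
    ("Deep neural networks learn hierarchical features.", 0.91),
    ("Convolutional layers excel at image recognition.", 0.87),
]
query = "deep neural networks"

# Number each snippet and join them into one context block for the LLM.
context = "\n\n".join(
    f"[{i}] (score {score:.2f}) {text}"
    for i, (text, score) in enumerate(retrieved, start=1)
)
prompt = f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
print(prompt)
```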
6. LLM Generation
The language model uses the retrieved documents as context to generate an accurate, grounded response to your query.
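The final step hands the assembled prompt to a language model. The `generate` function below is a hypothetical placeholder, not a real model API; in practice this would be an HTTP call to a hosted LLM, but the grounding pattern (answer ONLY from the provided context) is the same:

```python
def generate(prompt: str) -> str:
    # Hypothetical stand-in for a real LLM call; it only echoes here so the
    # sketch stays self-contained. A real system would query a hosted model.
    return f"[grounded answer based on {prompt.count('[')} context passage(s)]"

context = "[1] Neural networks learn hierarchical feature representations."
query = "How do neural networks learn?"
prompt = (
    "Answer using ONLY the context below.\n\n"
    f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
)
answer = generate(prompt)
print(answer)  # [grounded answer based on 1 context passage(s)]
```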
💡 Pro Tip: Try different queries like "deep neural networks", "image recognition", or "data analysis" to see how the vector space responds differently!