# Concepts
LlamaIndex.TS helps you build LLM-powered applications (e.g. Q&A, chatbot) over custom data.
In this high-level concepts guide, you will learn:
- how an LLM can answer questions using your own data.
- key concepts and modules in LlamaIndex.TS for composing your own query pipeline.
## Answering Questions Across Your Data
LlamaIndex uses a two-stage method when using an LLM with your data:
- indexing stage: preparing a knowledge base, and
- querying stage: retrieving relevant context from the knowledge base to help the LLM answer a question.
This process is also known as Retrieval-Augmented Generation (RAG).
LlamaIndex.TS provides the essential toolkit to make both stages straightforward.
Let's explore each stage in detail.
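Before diving in, the two stages can be sketched in plain TypeScript. This is a toy illustration of the RAG flow, not the LlamaIndex.TS API: the term-overlap scoring stands in for a real embedding model, and `buildIndex`, `retrieve`, and `buildPrompt` are hypothetical names chosen for this sketch.

```typescript
// Toy sketch of the two RAG stages. Term overlap stands in for a real
// embedding model; a real LlamaIndex.TS app would use LLM embeddings.

type IndexedChunk = { text: string; terms: Set<string> };

// Indexing stage: split documents into chunks and store a searchable
// representation of each chunk (the knowledge base).
function buildIndex(documents: string[]): IndexedChunk[] {
  return documents.flatMap((doc) =>
    doc.split("\n").map((text) => ({
      text,
      terms: new Set(text.toLowerCase().split(/\W+/).filter(Boolean)),
    })),
  );
}

// Querying stage: retrieve the chunks most relevant to the question
// (here scored by term overlap), to be handed to the LLM as context.
function retrieve(index: IndexedChunk[], question: string, topK = 2): string[] {
  const qTerms = question.toLowerCase().split(/\W+/).filter(Boolean);
  return index
    .map((chunk) => ({
      chunk,
      score: qTerms.filter((t) => chunk.terms.has(t)).length,
    }))
    .filter(({ score }) => score > 0)
    .sort((a, b) => b.score - a.score)
    .slice(0, topK)
    .map(({ chunk }) => chunk.text);
}

// "Augmented generation": the retrieved context is prepended to the
// question so the LLM can ground its answer in your data.
function buildPrompt(question: string, context: string[]): string {
  return `Context:\n${context.join("\n")}\n\nQuestion: ${question}`;
}

const index = buildIndex(["Alice joined in 2020.\nBob leads the data team."]);
const question = "Who leads the data team?";
console.log(buildPrompt(question, retrieve(index, question)));
```

In a real application, the sketch's `buildPrompt` output would be sent to an LLM; LlamaIndex.TS wraps this whole flow behind its index and query-engine abstractions.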