LlamaIndex is the leading framework for building context-augmented applications: apps that connect your data to LLMs. These applications range from retrieval-augmented generation ("RAG") systems, through structured data extraction, to complex, semi-autonomous agent systems that retrieve data and take actions. LlamaIndex provides simple, flexible abstractions for ingesting, structuring, and accessing private or domain-specific data, so that it can be injected safely and reliably into LLMs for more accurate text generation. It's available in Python and TypeScript. You can use LlamaIndex with Elastic in six ways:
- As a data source: the Elasticsearch Reader lets you source documents from your Elasticsearch database for use in your app (a reader sketch follows this list)
- As an embedding model: Elasticsearch embeddings can encode your data as vectors for semantic search (see the embedding sketch below)
- As a vector store: using Elasticsearch as a vector store lets you run semantic searches over your embedded documents (see the vector store sketch below)
- As an index store, a KV store, and a document store, to build more advanced retrieval structures such as a document summary index or a knowledge graph
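
To make the data source direction concrete, here is a minimal sketch in Python. It assumes the `llama-index-readers-elasticsearch` package is installed; the endpoint, index name `my_docs`, and field name `content` are placeholders for your own setup:

```python
from llama_index.readers.elasticsearch import ElasticsearchReader

# Placeholder endpoint and index -- replace with your own cluster and data.
reader = ElasticsearchReader(
    endpoint="http://localhost:9200",
    index="my_docs",
)

# Load each hit's "content" field as a LlamaIndex Document;
# the optional query narrows which documents are fetched.
documents = reader.load_data(
    field="content",
    query={"query": {"match_all": {}}},
)
print(f"Loaded {len(documents)} documents")
```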
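
The embedding integration can be sketched along similarly hedged lines. This assumes the `llama-index-embeddings-elasticsearch` package and a text-embedding model already deployed in your Elasticsearch cluster; the model ID and credentials below are placeholders:

```python
from llama_index.embeddings.elasticsearch import ElasticsearchEmbedding

# Placeholder credentials and model ID -- the model must already be
# deployed in your Elasticsearch cluster (for example, via Eland).
embed_model = ElasticsearchEmbedding.from_credentials(
    model_id="sentence-transformers__all-minilm-l6-v2",
    es_url="http://localhost:9200",
    es_username="elastic",
    es_password="changeme",
)

# Encode a piece of text as a vector suitable for semantic search.
vector = embed_model.get_text_embedding("Hello, Elastic!")
print(len(vector))
```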
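
Finally, a minimal vector store sketch, assuming the `llama-index-vector-stores-elasticsearch` package, a local cluster at `http://localhost:9200`, a placeholder index name `my_docs`, and an embedding model configured (LlamaIndex defaults to OpenAI embeddings, which require an API key):

```python
from llama_index.core import Document, StorageContext, VectorStoreIndex
from llama_index.vector_stores.elasticsearch import ElasticsearchStore

# Placeholder connection details -- point these at your own cluster.
vector_store = ElasticsearchStore(
    index_name="my_docs",
    es_url="http://localhost:9200",
)

# Route vector storage to Elasticsearch instead of the default in-memory store.
storage_context = StorageContext.from_defaults(vector_store=vector_store)

# Embed and index a document, then run a semantic query against it.
index = VectorStoreIndex.from_documents(
    [Document(text="LlamaIndex connects your data to LLMs.")],
    storage_context=storage_context,
)
query_engine = index.as_query_engine()
print(query_engine.query("What does LlamaIndex do?"))
```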