Inference APIs

This functionality is in technical preview and may be changed or removed in a future release. Elastic will work to fix any issues, but features in technical preview are not subject to the support SLA of official GA features.

The inference APIs enable you to use certain services, such as built-in machine learning models (ELSER, E5), models uploaded through Eland, and models provided by Cohere, OpenAI, Azure, Google AI Studio, or Hugging Face. For built-in models and models uploaded through Eland, the inference APIs offer an alternative way to use and manage trained models. However, if you do not plan to use these models through the inference APIs, or if you want to use non-NLP models, use the Machine learning trained model APIs instead.

The inference APIs enable you to create inference endpoints and use machine learning models from different providers (such as Amazon Bedrock, Anthropic, Azure AI Studio, Cohere, Google AI, Mistral, OpenAI, or Hugging Face) as a service. Use these APIs to create, manage, and call inference endpoints, as in the sketch below.
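For example, a minimal sketch of that flow using the Cohere service. The endpoint name cohere-embeddings and the placeholder API key are illustrative, not part of this page:

# Create an inference endpoint backed by a third-party service
PUT _inference/text_embedding/cohere-embeddings
{
  "service": "cohere",
  "service_settings": {
    "api_key": "<your-cohere-api-key>",
    "model_id": "embed-english-v3.0"
  }
}

# Call the endpoint directly to embed a piece of text
POST _inference/text_embedding/cohere-embeddings
{
  "input": "The inference APIs turn hosted models into a service."
}

The first request registers the endpoint; the second performs inference against it and returns the resulting embedding.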

Figure 13. A representation of the Elastic inference landscape

An inference endpoint enables you to use the corresponding machine learning model without deploying it manually, and to apply it to your data at ingestion time through semantic_text.
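As a sketch, assuming an inference endpoint named my-elser-endpoint already exists, a mapping that routes a field through that endpoint at ingest time could look like this (the index and field names are illustrative):

# Map a field as semantic_text, tied to an inference endpoint
PUT my-index
{
  "mappings": {
    "properties": {
      "content": {
        "type": "semantic_text",
        "inference_id": "my-elser-endpoint"
      }
    }
  }
}

Documents indexed into the content field are then chunked and embedded by the endpoint automatically, with no manual model deployment.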

Choose a model from your provider or use ELSER (a retrieval model trained by Elastic), then create an inference endpoint with the Create inference API. You can then use semantic_text to perform semantic search on your data.
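Putting those steps together, a hedged end-to-end sketch (names carried over from the examples above): first create an ELSER endpoint, then run a semantic query against the semantic_text field.

# Create a sparse embedding endpoint backed by ELSER
PUT _inference/sparse_embedding/my-elser-endpoint
{
  "service": "elser",
  "service_settings": {
    "num_allocations": 1,
    "num_threads": 1
  }
}

# Semantic search over the semantic_text field mapped earlier
GET my-index/_search
{
  "query": {
    "semantic": {
      "field": "content",
      "query": "How do inference endpoints work?"
    }
  }
}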