Series
Use various models with the watsonx.ai flows engine
Create a custom flow for text completion with several different prompting techniques
New models are released almost every day, making it hard for developers to choose among them based on their capabilities and pricing. That's why having the flexibility to experiment with various models can significantly enhance the development and deployment of generative AI applications. The watsonx.ai flows engine offers this flexibility by providing a unified API that works seamlessly with all models available on the IBM watsonx platform.
Whether you're using one of the foundation models from the IBM Granite series or Meta's Llama 3, the way you send a request to flows engine using the API or SDK remains consistent. The only difference is in how you structure your prompts and set parameters, such as temperature and decoding method. This unified approach allows developers to effortlessly switch between different models to determine which one best suits their specific generative AI use case.
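To make this concrete, here is a minimal sketch of what a unified request shape can look like. Note that the payload fields and the `build_completion_request` helper below are illustrative assumptions, not the exact flows engine schema; the point is that only the model identifier and parameters change between models.

```python
# Illustrative sketch of a unified text-generation request payload.
# The field names and model IDs below are assumptions for demonstration,
# not the exact flows engine API schema.

def build_completion_request(model_id: str, prompt: str,
                             temperature: float = 0.7,
                             decoding_method: str = "greedy") -> dict:
    """Build a request body; only the model ID and parameters vary."""
    return {
        "model_id": model_id,
        "input": prompt,
        "parameters": {
            "temperature": temperature,
            "decoding_method": decoding_method,
        },
    }

# Switching models is just a change of identifier; the request
# structure stays the same:
granite_request = build_completion_request(
    "ibm/granite-13b-instruct-v2", "Summarize the following text: ...")
llama_request = build_completion_request(
    "meta-llama/llama-3-8b-instruct", "Summarize the following text: ...")
```

Because the request structure is identical, comparing models for a given use case becomes a one-line change rather than a rewrite of your integration code.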
In this series, I'll delve into some of the most popular models on watsonx.ai and demonstrate how to use them with the flows engine to create a custom AI flow for text completion with several different prompting techniques.
Learn more about these LLM families:
| Model | Tutorial |
|---|---|
| Using different LLMs in watsonx.ai flows engine | Set up watsonx.ai flows engine to work with some of the most popular models that are available in IBM watsonx.ai |
| IBM Granite | Use LLMs from the IBM Granite series |
| Meta's Llama | Implement and leverage the capabilities of Llama 3.1 |
| Mistral | Learn about the Mistral AI LLMs, and more specifically, Mistral Large 2 |
Next steps
After reading about the various models and their use with watsonx.ai flows engine, explore more articles and tutorials about watsonx on IBM Developer. You can also try watsonx.ai for yourself!
Want to learn more about this topic? Join our Discord community, and let us know what other types of tutorials you'd like to see in the future.