Generative AI refers to deep learning models that can generate high-quality text, images, and other content based on the data they were trained on. These models take raw data and "learn" to generate statistically probable outputs when prompted.
A hands-on journey to implement open standards for AI agents using MCP, A2A, ContextForge MCP Gateway, and external integrations with the OpenAI API on watsonx Orchestrate.
A hands-on journey to design, extend, and integrate AI agents by using Langflow, IBM Granite models, Astra DB, the AI Gateway, and a custom user interface on watsonx Orchestrate.
Learn to test, benchmark, and optimize AI agents with a structured evaluation framework, automated workflows, and performance metrics for reliable results.
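As a rough illustration of the kind of structured evaluation the entry above describes, the sketch below runs an agent function over labeled test cases and collects pass/fail counts, accuracy, and latency. The harness shape, the substring-match check, and the `echo_agent` stand-in are all illustrative assumptions, not the framework from the tutorial.

```python
import time

def evaluate_agent(agent_fn, test_cases):
    """Run an agent over (prompt, expected) pairs and collect simple metrics."""
    results = {"passed": 0, "failed": 0, "latencies_ms": []}
    for prompt, expected in test_cases:
        start = time.perf_counter()
        answer = agent_fn(prompt)
        results["latencies_ms"].append((time.perf_counter() - start) * 1000)
        # Naive correctness check: expected answer appears in the response
        if expected.lower() in answer.lower():
            results["passed"] += 1
        else:
            results["failed"] += 1
    total = results["passed"] + results["failed"]
    results["accuracy"] = results["passed"] / total if total else 0.0
    return results

# Toy stand-in for a real agent, used only to exercise the harness
def echo_agent(prompt):
    return "The capital of France is Paris."

metrics = evaluate_agent(echo_agent, [
    ("What is the capital of France?", "Paris"),
    ("What is the capital of Spain?", "Madrid"),
])
```

A real evaluation would swap in semantic similarity or an LLM-as-judge check for the substring match, but the aggregation pattern stays the same.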
Build a scheduled agentic workflow in watsonx Orchestrate using Python tools, custom agents, and automated email notifications for efficient daily task automation.
Discover the watsonx Orchestrate Agent Catalog, which includes hundreds of prebuilt agents and hundreds more prebuilt tools across domains such as HR, Sales, Finance, and IT. Learn the steps to build and deploy a prebuilt domain agent: configuring secure connections, customizing the agent as a template, deploying it, and testing it.
Get started with LangChain4j, a Java library designed to simplify the integration of large language models (LLMs), and learn how to leverage its features for vector embeddings, semantic search, and generative AI applications.
Explore the A2A standard for agent‑to‑agent communication, from deploying HR agents on Code Engine to integrating them with watsonx Orchestrate for robust workflows.
Discover universal JSON prompt templates for extraction, generation, and analysis, plus best‑practice automation and error‑handling techniques for production‑grade AI pipelines.
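To make the idea of a universal JSON prompt template concrete, here is a minimal sketch: a single template filled in per task, serialized with Python's standard `json` module. The field names (`task`, `instructions`, `output_schema`, `input`) are assumptions for illustration, not the templates from the article.

```python
import json

def build_prompt(task, schema, text):
    """Fill a generic JSON prompt template for extraction, generation, or analysis."""
    template = {
        "task": task,
        "instructions": "Respond only with valid JSON matching output_schema.",
        "output_schema": schema,
        "input": text,
    }
    return json.dumps(template, indent=2)

prompt = build_prompt(
    task="extraction",
    schema={"name": "string", "email": "string"},
    text="Contact Jane Doe at jane@example.com",
)
parsed = json.loads(prompt)  # round-trips as valid JSON
```

Because the template itself is valid JSON, the same `json.loads` call doubles as a cheap error-handling gate on model responses: anything that fails to parse can be retried before it reaches the rest of the pipeline.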
In this tutorial, learn how to set up a local AI co-pilot in Visual Studio Code using IBM Granite 4, Ollama, and Continue, overcoming common enterprise challenges such as data privacy, licensing, and cost. The setup includes open-source LLMs, Ollama for model serving, and Continue for in-editor AI assistance.
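For orientation, a setup like the one described typically ends with Continue's `config.json` pointing at a model served locally by Ollama. The fragment below is a minimal sketch; the model tag `granite4` is an assumption, so use whatever tag you actually pulled with `ollama pull`.

```json
{
  "models": [
    {
      "title": "Granite (local)",
      "provider": "ollama",
      "model": "granite4"
    }
  ]
}
```

With this in place, Continue routes in-editor completions and chat to the local model, so no code leaves the machine.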