This is a cache of https://developer.ibm.com/technologies/large-language-models. It is a snapshot of the page as it appeared on 2025-11-15T03:17:06.468+0000.
Models that can understand and generate natural language
LLMs are a class of foundation models designed to understand and generate text (and other forms of content) like a human, based on the vast amounts of data used to train them. LLMs can infer from context, generate coherent and contextually relevant responses, translate between languages, summarize text, answer questions (general conversation and FAQs), and even assist in creative writing or code generation tasks.
Discover universal JSON prompt templates for extraction, generation, and analysis, plus best‑practice automation and error‑handling techniques for production‑grade AI pipelines.
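A "universal" extraction template of the kind described can be sketched as a function that wraps the task, the fields to pull out, and the input text in one JSON object; the field names and instruction wording below are illustrative assumptions, not the article's exact templates.

```python
import json

def build_extraction_prompt(text, fields):
    """Build a JSON-formatted extraction prompt (illustrative sketch,
    not the article's exact template)."""
    template = {
        "task": "extraction",
        "instructions": (
            "Extract the requested fields from the input text. "
            "Respond with a single JSON object containing only these fields."
        ),
        "fields": fields,
        "input": text,
    }
    return json.dumps(template, indent=2)

# Example usage: the same template works for any field list.
prompt = build_extraction_prompt(
    "Acme Corp was founded in 1999 in Berlin.",
    ["company", "year_founded", "city"],
)
print(prompt)
```

Because the prompt itself is valid JSON, a production pipeline can validate it (and the model's JSON reply) with `json.loads` before and after the LLM call, which is where the error-handling techniques the article mentions come in.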
In this tutorial, learn how to set up a local AI co-pilot in Visual Studio Code using IBM Granite 4, Ollama, and Continue, overcoming common enterprise challenges such as data privacy, licensing, and cost. The setup includes open-source LLMs, Ollama for model serving, and Continue for in-editor AI assistance.
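Once Ollama is serving a Granite model locally, Continue only needs to be pointed at it. A minimal configuration might look like the following; the file location (`~/.continue/config.json`), the model tag, and the field names are assumptions based on common Continue-plus-Ollama setups, not taken from the tutorial itself.

```json
{
  "models": [
    {
      "title": "Granite (local)",
      "provider": "ollama",
      "model": "granite4"
    }
  ]
}
```

Because the model runs entirely on the local machine, no source code leaves the editor, which addresses the data-privacy concern the tutorial highlights.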
In this tutorial, you learn how to create a simple RESTful Java AI application that asks a large language model (LLM) to write a short poem based on a topic provided by the user.
In this article, we demonstrate using different prompts with different contexts to analyze how LLMs behave when asked questions about generic statements. We show that the Granite 3.3-2b model is good at identifying the quantifiers associated with generic phrases and that its responses are context-driven.
InstructLab empowers developers to unleash the full potential of LLMs, offering a streamlined training process, cost-efficiency, community collaboration, and stability in model performance.
In the rapidly evolving landscape of AI development, large language models (LLMs) have become powerful tools for creating intelligent applications. However, a persistent challenge has been connecting these models to the diverse data sources they need to provide context-aware responses. Today, this integration becomes dramatically simpler by combining Model Context Protocol (MCP) and GraphQL.
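The GraphQL half of such an integration is straightforward to sketch: a standard GraphQL HTTP request is a POST whose JSON body carries a `query` string and a `variables` object, which an MCP tool can construct on a model's behalf. The schema and field names below are hypothetical; wiring the payload into an actual MCP server is what the combined approach described above handles.

```python
import json

def graphql_request_body(query, variables=None):
    """Build the JSON body for a standard GraphQL HTTP POST.
    The query/variables shape follows the common GraphQL-over-HTTP
    convention; the schema used in the example is hypothetical."""
    return json.dumps({"query": query, "variables": variables or {}})

# Example usage: a parameterized query an MCP tool might issue.
body = graphql_request_body(
    "query($id: ID!) { user(id: $id) { name email } }",
    {"id": "42"},
)
print(body)
```

Keeping the query parameterized via `variables` (rather than interpolating model output into the query string) also avoids injection problems when the LLM supplies the arguments.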
Boost mainframe DevOps with AI tools such as watsonx Code Assistant for Z and IBM Test Accelerator for Z for COBOL refactoring, Java transformation, automated testing, and z/OS modernization.