Series
IBM's generative AI tech stack
An article series that details the layers of IBM's generative AI tech stack
Generative AI models are powerful AI models specialized in generating content. These models are built on top of a type of model called a foundation model, which leverages deep learning and vast quantities of training data to learn how to generate text, code, images, audio, video, and even combinations of these and other mediums. Foundation models represent a paradigm shift in AI because one model is pre-trained for a variety of use cases, instead of a different model being trained for each new task.
ChatGPT, developed by OpenAI, is probably the best-known example of generative AI (or gen AI). ChatGPT is built using a large language model (LLM), which is a type of foundation model specialized in generating natural language. GPT stands for Generative Pre-trained Transformer, indicating that the model is pre-trained for generative uses (not all foundation models are engineered for generation). Transformer refers to the model’s transformer architecture, a type of neural network that has become the most popular architecture for natural language processing (NLP) use cases and LLM design.
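The heart of the transformer architecture is the attention mechanism, which lets the model weigh every token in the input when producing each output. As a rough illustration (not IBM's or OpenAI's implementation), here is a minimal, dependency-free sketch of scaled dot-product attention, the core operation inside a transformer layer; the toy 2-dimensional vectors are purely hypothetical:

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of scores.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def scaled_dot_product_attention(Q, K, V):
    # Q, K, V: lists of vectors, one per token. d_k is the key dimension,
    # used to scale the dot products before the softmax.
    d_k = len(K[0])
    outputs = []
    for q in Q:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d_k)
                  for k in K]
        weights = softmax(scores)
        # Each output vector is a weighted average of the value vectors.
        outputs.append([sum(w * v[j] for w, v in zip(weights, V))
                        for j in range(len(V[0]))])
    return outputs

# Two toy tokens with 2-dimensional embeddings (illustrative values only).
Q = K = V = [[1.0, 0.0], [0.0, 1.0]]
result = scaled_dot_product_attention(Q, K, V)
```

Each row of `result` sums to 1 here because the value vectors happen to form an identity: the attention weights themselves become the output, showing how each token attends most strongly to itself. Real LLMs apply this same operation across many heads and layers, over learned projections of high-dimensional embeddings.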
However, generative AI is evolving fast and rapidly moving well beyond ChatGPT. Some of the most exciting use cases include code generation and application modernization, translation, project management and operations, customer support and customer service, and fraud detection and risk management.
IBM watsonx is an end-to-end, all-purpose generative AI platform. The platform’s unique value proposition is to enable enterprises to move away from AI science experiments, and to realize immediate value by building and deploying AI right now. Our technology is simple for developers to use, and helps businesses ensure that their AI is trustworthy, stable and in compliance with regulations.
The following image shows the layers of IBM’s generative AI tech stack.

In this video, IBM Master Inventor Martin King explains these layers using the analogy of a layered cake, showing how its unique nature can help you operationalize your AI.
Articles in this series
Learn more about these layers in IBM’s generative AI tech stack in this article series:
| Layer | Article |
|---|---|
| Hybrid cloud AI tools | The open source ecosystem of watsonx, which describes the key AI open source tools and technologies combined with IBM research innovations that underpin watsonx. |
| Data services | The data services underlying IBM watsonx, which describes a data fabric approach for defining, organizing, managing, and delivering trusted data to train and tune AI models. |
| AI portfolio of products | Scale enterprise AI with IBM watsonx, which describes the features and use cases of the IBM watsonx product suite to show how users can manage the full AI lifecycle. |
| AI assistants | Enterprise generative AI virtual assistants: IBM watsonx, which provides an overview of the watsonx virtual assistant offerings. |
Next steps
After reading all about IBM’s generative AI tech stack, explore some of the workshops, courses, and more that IBM offers on generative AI and watsonx. Also explore more articles and tutorials about watsonx on IBM Developer.
Then, try watsonx.ai for yourself!