Alphabet Q3 earnings call: CEO Sundar Pichai's remarks

Editor’s Note: Our Q3 results were led by great performance in Search, Cloud, and YouTube. On today’s 2024 Q3 earnings call Google and Alphabet CEO Sundar Pichai shared more about the company’s momentum and innovation, as well as our long-term focus and investment in AI. Below is our transcript of his remarks.

Hello everyone.

Q3 was another great quarter. The momentum across the company is extraordinary, as you have seen in recent product launches, and as you will hear on the call today. Our commitment to innovation, as well as our long-term focus and investment in AI, are paying off and driving success for the company and for our customers.

We are uniquely positioned to lead in the era of AI because of our differentiated full stack approach to AI innovation, and we’re now seeing this operate at scale. It has three components:

  • First, a robust AI infrastructure that includes data centers, chips, and a global fiber network.
  • Second, world-class research teams who are advancing our work with deep technical AI research, and who are also building the models that power our efforts.
  • And third, a broad global reach through products and platforms that touch billions of people and customers around the world, creating a virtuous cycle.

Let me quickly touch on each of these.

Full stack approach to AI innovation

We continue to invest in state-of-the-art infrastructure to support our AI efforts, from the U.S. to Thailand to Uruguay. We’re also making bold clean energy investments, including the world’s first corporate agreement to purchase nuclear energy from multiple small modular reactors, which will enable up to 500 megawatts of new 24/7 carbon-free power.

We’re also doing important work inside our data centers to drive efficiencies, while making significant hardware and model improvements.

For example, we shared that since we first began testing AI Overviews, we've lowered machine costs per query significantly. In 18 months, we reduced costs by more than 90% for these queries through hardware, engineering, and technical breakthroughs, while doubling the size of our custom Gemini model.

And of course, we use — and offer our customers — a range of AI accelerator options, including multiple classes of NVIDIA GPUs and our own custom-built TPUs. We're now on the sixth generation of TPUs — known as Trillium — and continue to drive efficiencies and better performance with them.

Turning to research, our team at Google DeepMind continues to drive our leadership.

Let me take a moment to congratulate Demis Hassabis and John Jumper on winning the Nobel Prize in Chemistry for their work on AlphaFold. This is an extraordinary achievement that underscores the incredible talent we have, and how critical our world-leading research is to the modern AI revolution and to our future progress. Congratulations as well to Geoff Hinton, who spent over a decade here, on winning the Nobel Prize in Physics.

Our research teams also drive our industry-leading Gemini model capabilities, including long context understanding, multimodality, and agentic capabilities. By any measure — token volume, API calls, consumer usage, business adoption — usage of the Gemini models is in a period of dramatic growth. And our teams are actively working on performance improvements and new capabilities for our range of models. Stay tuned!

And they’re building out experiences where AI can see and reason about the world around you. Project Astra is a glimpse of that future. We’re working to ship experiences like this as early as 2025.

We then work to bring those advances to consumers and businesses: Today, all seven of our products and platforms with more than 2 billion monthly users use Gemini models. That includes the latest product to surpass the 2 billion user milestone, Google Maps. Beyond Google’s own platforms, following strong demand, we’re making Gemini even more broadly available to developers. Today we shared that Gemini is now available on GitHub Copilot, with more to come.

To support our investments across these three pillars, we are organizing the company to operate with speed and agility.

We recently moved the Gemini app team to Google DeepMind to speed up deployment of new models and streamline post-training work. This follows other structural changes that have unified our research, machine learning infrastructure, and developer teams, as well as our security efforts and our Platforms and Devices team. This is all helping us move faster. For instance, it was a small, dedicated team that built NotebookLM, an incredibly popular product that has so much promise.
We're also using AI internally to improve our coding processes, which is boosting productivity and efficiency. Today, more than a quarter of all new code at Google is generated by AI, then reviewed and accepted by engineers. This helps our engineers do more and move faster.

I am energized by our progress, and the opportunities ahead. And we continue to be laser focused on building great products.

Search advancements

In Search, recent advancements, including AI Overviews, Circle to Search, and new features in Lens, are transforming the user experience, expanding what people can search for and how they search for it. This leads to users coming to Search more often for more of their information needs, driving additional search queries.

Just this week, AI Overviews started rolling out to more than a hundred new countries and territories. It will now reach more than 1 billion users on a monthly basis.

We're seeing strong engagement, which is increasing overall search usage and user satisfaction. People are asking longer and more complex questions, and exploring a wider range of websites. What’s particularly exciting is that this growth actually increases over time, as people learn that Google can answer more of their questions.

The integration of ads within AI Overviews is also performing well, helping people connect with businesses as they search.

Circle to Search is now available on over 150 million Android devices, with people using it to shop, translate text, and learn more about the world around them. A third of the people who have tried Circle to Search now use it weekly, a testament to its helpfulness and potential.

Meanwhile Lens is now used for over 20 billion visual searches per month. Lens is one of the fastest-growing query types we see on Search, because of its ability to answer complex, multimodal questions, and help in product discovery and shopping.

For all these AI features, this is just the beginning, and you'll see a rapid pace of innovation and progress here.

Google Cloud growth

Next, Google Cloud.

I’m very pleased with our growth. This business has real momentum, and the overall opportunity is increasing as customers embrace gen AI.

We generated Q3 revenues of $11.4 billion, up 35% over last year, with operating margins of 17%.

Our technology leadership and AI portfolio are helping us attract new customers, win larger deals, and drive 30% deeper product adoption with existing customers.

Customers are using our products in five different ways.

First, our AI infrastructure, which we differentiate with leading performance, driven by storage, compute, and software advances, as well as leading reliability and a leading number of accelerators. Using a combination of our TPUs and GPUs, LG AI Research reduced inference processing time for its multimodal model by more than 50% and operating costs by 72%.

Second, our enterprise AI platform — Vertex — is used to build and customize the best foundation models from Google and the industry. Gemini API calls have grown nearly 14x in a six-month period. When Snap was looking to power more innovative experiences within their “My AI” chatbot, they chose Gemini for its strong multimodal capabilities. Since then, Snap has seen over 2.5 times as much engagement with My AI in the United States.

Third, customers use our AI platform together with our data platform — BigQuery — because we analyze multimodal data, no matter where it is stored, with ultra-low-latency access to Gemini. This enables accurate, real-time decision making for customers like Hiscox, one of the flagship syndicates in Lloyd's of London, which reduced the time it took to quote complex risks from days to minutes. These types of customer outcomes, which combine AI with data science, have led to 80% growth in BigQuery ML operations over a six-month period.

Fourth, our AI-powered cybersecurity solutions — Google Threat Intelligence and Security Operations — are helping customers, like BBVA and Deloitte, prevent, detect, and respond to cybersecurity threats much faster. We have seen customer adoption of our Mandiant-powered threat detection increase 4x over the last six quarters.

Fifth: In Q3, we broadened our applications portfolio with the introduction of our new Customer Engagement Suite. It’s designed to improve the customer experience online and in mobile apps, as well as in call centers, retail stores, and more. A great example is Volkswagen of America, which is using this technology to power its new myVW Virtual Assistant.

In addition, the employee agents we deliver through Gemini for Google Workspace are getting superb reviews. 75% of daily users say it improves the quality of their work.

YouTube

Moving now to YouTube: For the first time ever, YouTube's combined ad and subscription revenue over the past four quarters has surpassed $50 billion.

Together, YouTube TV, NFL Sunday Ticket, and YouTube Music Premium are driving subscription growth for the platform. And we’re leaning into the living room experience with multiview, and a new option for creators to organize content into episodes and seasons, similar to traditional TV.

At Made On YouTube, we announced that Google DeepMind's most capable model for video generation, Veo, is coming to YouTube Shorts to help creators later this year.

Platforms and Devices

Next, Platforms and Devices. Gemini's deep integration is improving Android. For example, Gemini Live lets you have free-flowing conversations with Gemini; people love it. It’s available on Android, including Samsung Galaxy devices. We continue to work closely with Samsung to deliver innovations across their newest devices, with much more to come.

At Made by Google, we unveiled our latest Pixel 9 series of devices, featuring advanced AI models, including Gemini Nano. We've seen strong demand for these devices, and they have already received multiple awards.

Other Bets

Turning to Other Bets, I want to highlight Waymo, the biggest part of our portfolio.

Waymo is now a clear technical leader within the autonomous vehicle industry and creating a growing commercial opportunity.

Over the years, Waymo has been infusing cutting-edge AI into its work. Now, each week, Waymo is driving more than 1 million fully autonomous miles and serving over 150,000 paid rides — the first time any AV company has reached this kind of mainstream use.

Through its expanded network and operations partnership with Uber in Austin and Atlanta, plus a new multi-year partnership with Hyundai, Waymo will bring fully autonomous driving to more people and places.

By developing a universal Driver, Waymo has multiple paths to market. And with its sixth-generation system, Waymo has significantly reduced unit costs without compromising safety.

Before I close, I'm delighted to welcome our new CFO, Anat. We're thrilled to have her on board.

And as always, I want to express my gratitude to our employees worldwide. Your dedication and hard work have made this another exceptional quarter for Alphabet.
