For years, Apple avoided using the acronym AI when discussing its products. Not anymore.
The boom in generative artificial intelligence, which began in late 2022 with OpenAI’s release of ChatGPT, has become the biggest story in the tech industry. This trend has propelled Nvidia to a $3 trillion market cap and shifted priorities at Microsoft, Google, and Amazon as they race to integrate the technology into their core services.
Investors and customers are now eager to see what AI features Apple will unveil.
New AI features are expected to be announced at Apple’s Worldwide Developers Conference (WWDC), which takes place on Monday at Apple’s campus in Cupertino, California. Apple CEO Tim Cook has teased "big plans," a departure from the company’s usual secrecy about future products.
WWDC typically isn't a major draw for investors. On the first day, Apple usually announces annual updates to its iOS, iPadOS, watchOS, and macOS software during a two-hour prerecorded keynote hosted by Cook. This year, the presentation will be screened at Apple’s headquarters. App developers then enjoy a week of parties and virtual workshops to learn about the new software.
Apple fans get a preview of upcoming software for iPhones, while developers can start updating their apps. New hardware products, if any, are not the focus.
But this year, all eyes will be on the most hyped acronym in tech.
With over a billion iPhones in use, Wall Street is keen to hear about the AI features that will make the iPhone more competitive against Android rivals and justify Apple's investment in developing its own chips.
Investors have favored companies with clear AI strategies. Nvidia, the leading AI processor maker, has seen its stock price triple over the past year. Microsoft, which is aggressively incorporating OpenAI’s technology into its products, is up 28% over the same period. Apple has gained only 9%, trailing behind these companies in market cap.
“This is the most important event for Cook and Cupertino in over a decade,” said Dan Ives, an analyst at Wedbush. “The AI strategy is the missing piece in Apple’s growth puzzle, and this event needs to be a showstopper, not just a shrug-the-shoulders event.”
Executives, including software chief Craig Federighi, are expected to discuss the practical applications of Apple’s AI, whether it should run locally or in large cloud clusters, and what should be integrated into the operating system versus distributed through an app.
Privacy is also a significant concern. Attendees will want to know how Apple plans to deploy data-intensive technology without compromising user privacy, a key element of the company’s marketing for over half a decade.
“At WWDC, we expect Apple to unveil its long-term vision for implementing generative AI throughout its diverse ecosystem of personal devices,” wrote Gil Luria, an analyst at D.A. Davidson. “We believe the impact of generative AI on Apple’s business is profound, with a unique opportunity to reach billions of consumer devices with generative AI functionality.”
Upgrading Siri

Last month, OpenAI revealed a voice mode for its chatbot, ChatGPT, powered by its new GPT-4o model.
In a short demo, OpenAI researchers spoke directly to the bot inside the ChatGPT app on an iPhone. The conversation was quick and fluid, and the bot gave advice in a human-like voice. Further demos showed it doing impressions, singing, teaching trigonometry, translating, and telling jokes.
Apple users and analysts saw this as a glimpse of what Siri could become. Since its debut in 2011, Siri has struggled to be broadly useful, able to answer only a limited set of queries because of its older machine learning foundation.
Apple might collaborate with OpenAI to upgrade Siri next week. It has also explored licensing chatbot technology from companies like Google and Cohere.
Apple could choose not to compete directly with fully featured chatbots, instead improving Siri’s current features and referring complex queries to a partner. This approach mirrors how Apple’s Spotlight search and Siri work now, turning to Google when they can't answer a question.
However, Apple might avoid a full embrace of an OpenAI partnership due to potential risks, such as malfunctioning chatbots generating embarrassing headlines or compromising user privacy.
“Data security will be a key advantage for Apple, and we expect them to emphasize their privacy efforts during WWDC,” said Citi analyst Atif Malik.
OpenAI’s technology is built on data scraped from the web, and user interactions with ChatGPT are used to improve its models, practices that could conflict with Apple’s privacy principles.
Large language models like OpenAI’s still face issues with inaccuracies or "hallucinations." For instance, Google’s search AI incorrectly stated that President Barack Obama was the first Muslim president. OpenAI CEO Sam Altman recently faced controversy over accusations from actress Scarlett Johansson that OpenAI’s voice mode mimicked her voice without permission. These are the kinds of conflicts Apple prefers to avoid.
Efficient vs. large

Outside of Apple, AI relies on massive server farms using powerful Nvidia processors paired with terabytes of memory. Apple, in contrast, aims for its AI features to run on iPhones, iPads, and Macs, which operate on battery power. Cook has highlighted Apple’s own chips as superior for running AI models.
“We believe in the transformative power and promise of AI, and we believe we have advantages that will differentiate us in this new era, including Apple’s unique combination of seamless hardware, software, and services integration, groundbreaking Apple Silicon with our industry-leading neural engines, and our unwavering focus on privacy,” Cook told investors in May.
Samik Chatterjee, an analyst at JPMorgan, wrote that Apple’s WWDC keynote will likely focus on on-device AI capabilities and the use of GenAI models.
In April, Apple published research on “efficient language models” that can run on a phone. Microsoft is also researching the concept. One of Apple’s "OpenELM" models has 1.1 billion parameters, a fraction of the 175 billion in OpenAI’s 2020 GPT-3 model and far fewer than the 70 billion in Meta’s Llama model.
Apple’s researchers demonstrated the model on a MacBook Pro with Apple’s M2 Max chip, showing that these efficient models can run without connecting to the cloud, which improves response speed and privacy because data is processed on the device.
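For a sense of what running such a model on-device can look like, here is a minimal sketch that loads Apple’s publicly released OpenELM research checkpoint with the Hugging Face transformers library and generates text locally. The checkpoint name, the Llama-style tokenizer, and the use of the Apple Silicon "mps" backend are assumptions drawn from the public research release, not a description of how Apple will ship AI features in its operating systems.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Prefer the Apple Silicon GPU ("mps") when available; fall back to CPU.
device = "mps" if torch.backends.mps.is_available() else "cpu"

# OpenELM reuses a Llama-style tokenizer; this specific checkpoint is an assumption.
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")

# 1.1-billion-parameter model from Apple's April research release.
# trust_remote_code is required because OpenELM ships custom model code on the Hub.
model = AutoModelForCausalLM.from_pretrained(
    "apple/OpenELM-1_1B",
    trust_remote_code=True,
).to(device)

prompt = "Summarize these missed messages in one sentence:"
inputs = tokenizer(prompt, return_tensors="pt").to(device)

# Generation runs entirely on the local machine; no cloud round trip is needed.
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Once the weights are downloaded, nothing in this sketch touches the network, which is the property Apple’s research highlights for speed and privacy.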
Possible new features in Apple’s software might include summarizing missed text messages, generating images for new emojis, code completion in Xcode, or drafting email responses.
Apple could also use its M2 Ultra chips in data centers to handle AI queries requiring more processing power.
Green bubbles and Vision Pro

WWDC won’t be all about AI.
Apple has over 2.2 billion devices in use, and customers expect improved software and new apps.
One potential upgrade is Apple’s adoption of RCS (Rich Communication Services), an enhanced text messaging standard. Currently, texts between iPhones use iMessage and appear in blue bubbles, while texts to Android phones appear in green bubbles and lack features such as typing indicators. Apple has confirmed it will add RCS support alongside iMessage, likely debuting with iOS 18.
The conference also marks the first anniversary of Apple’s announcement of the Vision Pro, its virtual and augmented reality headset, which was released in the U.S. in February. Apple may announce its expansion to more countries.
The Vision Pro will feature prominently at WWDC. Currently, the headset is on its first version of the operating system, with core features like Persona videoconferencing still in beta.
For Vision Pro users, Apple will offer some of the event’s sessions virtually in a 3D environment.