Apple’s new Vision Pro virtual reality headset is shown during Apple’s Worldwide Developers Conference (WWDC) at the Apple Park campus in Cupertino, California on June 5, 2023.
Josh Edelson | Afp | Getty Images
Investors and customers now want to see what the iPhone maker has in store.
Apple is expected to bring new AI features to its Worldwide Developers Conference (WWDC), which kicks off Monday at Apple’s campus in Cupertino, California. CEO Tim Cook has teased “big plans,” a shift in approach for a company that doesn’t like to talk about products before they’re released.
WWDC is usually not a major attraction for investors. On the first day, the company announces annual updates to its iOS, iPadOS, watchOS and macOS software in what is typically a two-hour prerecorded keynote hosted by Cook. This year, the presentation will be screened at Apple headquarters. App developers then get a week of parties and virtual workshops where they learn about Apple’s new software.
Apple fans get a preview of the software coming to the iPhone, and developers can get down to updating their apps. New hardware products, if they appear at all, are not the main event.
But this year, everyone will be listening for the buzziest acronym in technology: AI.
With more than 1 billion iPhones in use, Wall Street wants to hear what AI features will make the iPhone more competitive against Android rivals and how the company can justify its investment in developing its own chips.
Investors have rewarded companies that demonstrate a clear strategy and vision for AI. Nvidia, the leading maker of AI processors, has seen its stock price triple over the past year. Microsoft, which has been aggressively incorporating OpenAI’s technology into its products, is up 28% over the past year. Apple has gained just 9% over the same period, and both companies now outpace it in market capitalization.
“This is the most important event for Cook and Cupertino in more than a decade,” Dan Ives, an analyst at Wedbush, told CNBC. “The AI strategy is the missing piece in Apple’s growth puzzle, and this event should be a showcase, not a shrug event.”
Executives including software chief Craig Federighi will take the stage. Federighi is likely to address real-world uses of Apple’s AI, whether it will run locally or in massive cloud clusters, and what will be built into the operating system rather than distributed through apps.
Privacy is also a key issue, and attendees will likely want to know how Apple can implement technology that requires data without compromising user privacy, a central part of the company’s marketing for more than half a decade.
“At WWDC, we expect Apple to unveil its long-term vision for deploying generative AI across its diverse ecosystem of personal devices,” Gil Luria, an analyst at DA Davidson, wrote in a note this week. “We believe the impact of generative AI on Apple’s business is one of the most profound in all of technology, and unlike much AI innovation that impacts the developer or the enterprise, Apple has a clear opportunity to reach billions of consumer devices with generative AI functionality.”
Last month, OpenAI unveiled a voice mode for ChatGPT powered by its new GPT-4o model.
In a short demo, OpenAI researchers held an iPhone and spoke directly to the bot in the ChatGPT app, which could do impressions, speak fluidly and even sing. The conversation flowed naturally, the bot gave advice, and the voice sounded human. Additional demonstrations at the live event showed the bot singing, teaching trigonometry, translating and telling jokes.
Apple users and experts immediately recognized that OpenAI had demonstrated a preview of what Apple’s Siri could be in the future. Apple’s voice assistant debuted in 2011 and has since gained a reputation for being unhelpful. It is rigid, able to answer only a small fraction of well-defined queries, in part because it is based on older machine learning techniques.
Apple may announce a partnership with OpenAI to upgrade Siri next week. The company has also discussed licensing chatbot technology from other firms, including Google and Cohere, according to a report in The New York Times.
Apple declined to comment on a possible OpenAI partnership.
One possibility is that Apple’s new Siri won’t compete directly with full-featured chatbots, but will instead enhance its current capabilities and hand off requests that only a chatbot can answer to a partner. That would be similar to how Apple’s Spotlight search and Siri work today: Apple’s system tries to answer the question, and if it can’t, it turns to Google, an arrangement that is part of a deal reportedly worth $18 billion a year to Apple.
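The try-local-first, hand-off-on-failure pattern described above can be sketched in a few lines. This is a hypothetical illustration, not Apple's actual architecture; the function names and the toy intent table are invented for the example.

```python
from dataclasses import dataclass
from typing import Callable, Optional


@dataclass
class Answer:
    text: str
    source: str  # "on_device" or "partner"


def route_query(query: str,
                local_answer: Callable[[str], Optional[str]],
                partner_answer: Callable[[str], str]) -> Answer:
    """Try the built-in assistant first; hand the request off to a
    partner chatbot only when the local system cannot answer."""
    local = local_answer(query)
    if local is not None:
        return Answer(local, "on_device")
    return Answer(partner_answer(query), "partner")


# Toy stand-ins: a tiny fixed-intent assistant and a mock partner chatbot.
KNOWN = {"set a timer": "Timer set.", "what time is it": "It is 9:41."}

result = route_query("tell me a joke",
                     local_answer=lambda q: KNOWN.get(q),
                     partner_answer=lambda q: f"[chatbot reply to: {q}]")
print(result.source)  # the open-ended request falls through to the partner
```

The design keeps well-defined queries on the device and forwards only open-ended ones, mirroring how Spotlight falls back to Google when it has no answer of its own.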
Apple may also stop short of fully embracing OpenAI or any chatbot partnership. One reason: a malfunctioning chatbot can generate embarrassing headlines and undermine the company’s emphasis on user privacy and personal control of data.
“Data security will be a key asset for the company, and we expect them to spend time talking about their privacy efforts during WWDC as well,” Citi analyst Atif Malik said in a recent note.
OpenAI’s technology is based on web scraping, and ChatGPT’s user interactions are used to improve the model itself, a technique that may violate some of Apple’s privacy principles.
Large language models like OpenAI’s still struggle with inaccuracies, or “hallucinations,” as when Google’s search AI said last month that President Barack Obama was the first Muslim president. OpenAI CEO Sam Altman recently found himself in the middle of a thorny public debate about deepfakes and fraud when he denied accusations by actress Scarlett Johansson that OpenAI’s voice mode ripped off her voice. It’s the kind of conflict Apple executives prefer to avoid.
Apple Senior Vice President of Software Engineering Craig Federighi speaks before the start of the Apple Worldwide Developers Conference at Apple headquarters on June 05, 2023 in Cupertino, California. Apple CEO Tim Cook kicked off the annual WWDC23 developer conference.
Justin Sullivan | Getty Images News | Getty Images
Outside of Apple, AI has come to rely on large server farms using powerful Nvidia processors coupled with terabytes of memory to crunch the numbers.
Apple, by contrast, wants its AI features to work on battery-powered iPhones, iPads and Macs. Cook touted Apple’s own chips as better at handling AI models.
“We believe in the transformative power and promise of AI, and we believe we have strengths that will set us apart in this new era, including Apple’s unique combination of seamless integration of hardware, software and services, innovative Apple Silicon with our industry-leading neural engines and our unwavering focus on privacy,” Cook told investors in May on an earnings call.
Samik Chatterjee, an analyst at JPMorgan, wrote in a note this month that “We expect Apple’s WWDC keynote presentation to focus on the device’s features and capabilities, as well as the GenAI models running on the device to enable these features.”
In April, Apple published research on AI models it calls “efficient language models” that can run on a phone; Microsoft has published research on the same concept. One of Apple’s “OpenELM” models has 1.1 billion parameters, or weights, far smaller than OpenAI’s 2020 GPT-3 model, which has 175 billion parameters, and smaller even than the 70 billion parameters in one version of Meta’s Llama, one of the most widely used language models.
In the paper, Apple researchers benchmarked the models on a MacBook Pro powered by Apple’s M2 Max chip, showing that these efficient models don’t need to connect to the cloud. That can improve response speed and provide a measure of privacy, since sensitive questions can be answered on the device itself instead of being sent to Apple’s servers.
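Back-of-the-envelope arithmetic shows why parameter count matters for running on a phone: just storing a model's weights takes parameters × bytes per weight of memory. The sketch below uses the parameter counts cited above; the assumption of 2 bytes per weight (16-bit precision) is illustrative and not from Apple's paper.

```python
def model_bytes(parameters: float, bytes_per_weight: int) -> float:
    """Approximate memory needed just to hold a model's weights."""
    return parameters * bytes_per_weight


GB = 1024 ** 3
# Parameter counts from the article; 2 bytes/weight assumes 16-bit precision.
for name, params in [("OpenELM-1.1B", 1.1e9),
                     ("Llama-70B", 70e9),
                     ("GPT-3 (175B)", 175e9)]:
    print(f"{name}: ~{model_bytes(params, 2) / GB:.1f} GB of weights")
```

At roughly 2 GB, a 1.1-billion-parameter model can plausibly sit in a phone's memory, while the 70-billion and 175-billion-parameter models need server-class hardware, which is the gap the "efficient language model" research is aimed at.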
Some of the features built into Apple’s software could include giving users a summary of their missed text messages, generating images for custom emoji, completing code in the company’s Xcode development software, or drafting email responses, according to Bloomberg.
Apple may also decide to load its M2 Ultra chips into its data centers to handle AI requests that need more horsepower, Bloomberg reported.
A customer uses Apple’s Vision Pro headset at Apple’s Fifth Avenue store in Manhattan in New York, U.S., February 2, 2024.
Brendan McDermid | Reuters
WWDC won’t be strictly about AI.
The company has more than 2.2 billion devices in use, and customers want improved software and new features.
One potential upgrade is Apple’s adoption of RCS, an improvement on the older text messaging standard known as SMS. Apple’s Messages app routes texts between iPhones through its own iMessage system, which displays conversations as blue bubbles. When an iPhone texts an Android phone, the bubble is green, and many features, such as typing indicators, are not available.
Google led the development of RCS, adding encryption and other features to text messages. Late last year, Apple confirmed that it would add support for RCS alongside iMessage. The debut of iOS 18 would be the logical time to show off its work.
The conference will also mark the first anniversary of the unveiling of Apple’s Vision Pro, its virtual and augmented reality headset that launched in the U.S. in February. Apple may announce an expansion into more countries, including China and the United Kingdom.
Apple said in its WWDC announcement that Vision Pro will be in the spotlight. Vision Pro is currently on the first release of its operating system, and core features such as the Persona video conferencing simulation are still in beta.
For Vision Pro owners, Apple will offer some of the event’s virtual sessions in a 3D environment.