iOS 18 Project Greymatter will use AI to aggregate notifications, articles and more

Apple’s next-generation operating systems will include Project Greymatter, bringing a host of AI-related improvements. We have new details on the AI features planned for Siri, Notes, and Messages.

AI will enhance several core applications with summarization and transcription functions

After widespread claims and reports of AI-related improvements in iOS 18, AppleInsider has obtained more details about Apple’s AI plans.

People familiar with the matter revealed that the company is internally testing various new AI-related features ahead of its annual WWDC. Known by the project codename “Greymatter,” the company’s AI improvements will focus on practical benefits for the end user.

In pre-release versions of Apple’s operating systems, the company is working on a notification aggregation feature known as “Greymatter Catch Up.” The feature is linked to Siri, meaning users will be able to request and receive an overview of their recent notifications through the virtual assistant.

Siri is expected to get significantly updated response generation capabilities via a new Smart Response framework, as well as Apple’s on-device LLM. When generating answers and summaries, Siri will be able to consider entities such as people and companies, calendar events, locations, dates, and more.
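
Apple hasn’t published the Smart Response framework, but the kind of entity recognition described above is already possible with the existing NaturalLanguage framework. The following Swift sketch illustrates that building block only, not Apple’s actual implementation:

```swift
import NaturalLanguage

// Minimal sketch: extract the people, companies, and places mentioned
// in a piece of text using Apple's existing NaturalLanguage framework.
// Whatever the Smart Response framework actually does is not public.
let text = "Meet Lisa from Acme Corp in Cupertino on Friday."
let tagger = NLTagger(tagSchemes: [.nameType])
tagger.string = text

tagger.enumerateTags(in: text.startIndex..<text.endIndex,
                     unit: .word,
                     scheme: .nameType,
                     options: [.omitWhitespace, .omitPunctuation, .joinNames]) { tag, range in
    if let tag, [.personalName, .organizationName, .placeName].contains(tag) {
        print("\(tag.rawValue): \(text[range])")   // e.g. "PersonalName: Lisa"
    }
    return true
}
```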

In our earlier reports on Safari 18, Ajax LLM and the updated Voice Memos app, AppleInsider revealed that Apple plans to introduce AI-based text summarization and transcription into its built-in apps. We’ve since learned that the company intends to bring these features to Siri as well.

This ultimately means Siri will be able to answer queries on the device, create summaries of long articles, or transcribe audio, just as in the updated Notes and Voice Memos apps. All of this will be done with the on-device Ajax LLM, with cloud-based processing reserved for more complex tasks.
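
Apple hasn’t said how it decides which requests stay on the device, so the routing sketch below is an assumption for illustration only; the token threshold and type names are invented:

```swift
// Hypothetical routing logic: the threshold and names are made up,
// since Apple hasn't described how on-device vs. cloud is chosen.
enum InferenceTarget {
    case onDevice      // handled locally by the Ajax LLM
    case cloud         // escalated for more complex tasks
}

func route(tokenCount: Int, needsLongContext: Bool) -> InferenceTarget {
    // Short summarization or transcription jobs stay local;
    // anything larger or more complex falls back to the server.
    (tokenCount < 2_048 && !needsLongContext) ? .onDevice : .cloud
}
```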

We’re also told that Apple is testing enhanced and “more natural” voices, along with text-to-speech improvements, which should ultimately lead to a significantly better user experience.

Apple is also working on media and TV controls for Siri across devices. This would allow someone, for example, to use Siri on their Apple Watch to play music on another device, though the feature isn’t expected until later in 2024.

The company decided to embed artificial intelligence into several of its core system applications, with different use cases and tasks in mind. One notable area of improvement has to do with photo editing.

Apple has developed generative AI software for improved image editing

iOS 18 and macOS 15 are expected to bring AI-powered photo editing options to apps like Photos. Internally, Apple has developed a new Clean Up feature that will allow users to remove objects from images through the use of generative AI software.

The Clean Up tool will replace the current Apple Retouch tool

Also related to Project Greymatter, the company created an app for internal use known as “Generative Playground.” People familiar with the app exclusively revealed to AppleInsider that it can use Apple’s generative AI software to create and edit images, and that it has iMessage integration in the form of a dedicated app extension.

In Apple’s test environments, it is possible to generate an image using artificial intelligence and then send it via iMessage. There are indications that the company is planning a similar feature for end users of its operating systems.
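
Generative Playground itself isn’t public, but the iMessage integration described here maps onto Apple’s existing Messages framework. The sketch below, in which the class name and imageURL are hypothetical, shows how an app extension could insert a generated image into a conversation:

```swift
import Messages

// Sketch of an iMessage app extension inserting a generated image.
// The image-generation step is the unpublished part; here we assume
// the generated image has already been written to `imageURL`.
final class PlaygroundMessagesViewController: MSMessagesAppViewController {
    func send(generatedImageAt imageURL: URL) {
        activeConversation?.insertAttachment(imageURL,
                                             withAlternateFilename: "generated.png",
                                             completionHandler: nil)
    }
}
```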

This information is in line with another report claiming that users will be able to use AI to generate unique emoji, though additional image generation features are also possible.

According to people familiar with the matter, preview versions of Apple’s Notes app also contain references to a generative tool, though it’s unclear whether that tool will generate text or images — as is the case with the Generative Playground app.

Notes will get AI-powered transcription and summarization along with Math Notes

Apple has prepared significant improvements for its built-in Notes app, set to make their debut with iOS 18 and macOS 15. The updated Notes will gain support for in-app audio recording, audio transcription, and LLM-powered summarization.

The Notes app on iOS 18 will support audio recording, transcription, and in-app summarization

Audio recordings, transcriptions, and text summaries will be available within a note, along with any other material users choose to add. This means that a note can, for example, contain a recording of an entire lecture or meeting, along with photos of the whiteboard and typed text.
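
The Notes implementation isn’t public, but on-device transcription of an audio file is already possible with Apple’s Speech framework. A minimal sketch, assuming speech-recognition permission has already been granted:

```swift
import Speech

// Transcribe a recorded audio file entirely on device using the
// existing Speech framework (authorization via
// SFSpeechRecognizer.requestAuthorization is assumed to be granted).
func transcribe(fileAt url: URL) {
    guard let recognizer = SFSpeechRecognizer(locale: Locale(identifier: "en_US")),
          recognizer.supportsOnDeviceRecognition else { return }

    let request = SFSpeechURLRecognitionRequest(url: url)
    request.requiresOnDeviceRecognition = true   // keep audio off the network

    recognizer.recognitionTask(with: request) { result, _ in
        if let result, result.isFinal {
            print(result.bestTranscription.formattedString)
        }
    }
}
```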

These features would make Notes a real powerhouse and the go-to app for students and business professionals alike. Adding audio transcription and summarization will also let Apple’s Notes app better position itself against competing offerings like Microsoft’s OneNote or Otter.ai.

While app-level audio recording, AI-powered transcription, and summarization will greatly improve the Notes app, they’re not the only things Apple is working on.

Math Notes – Create graphs and solve equations using AI

The Notes app will get a whole new addition in the form of Math Notes, which will introduce support for proper math notation and integrate with Apple’s new calculator app, codenamed GreyParrot. We now have additional details on what Math Notes will include.

iOS 18’s Notes app will introduce support for AI-assisted audio transcription and math notes

People familiar with the new feature revealed that Math Notes will allow the app to recognize text in the form of math equations and offer solutions to them. Graphing support is also in the works, which means we could see something similar to the Grapher app on macOS, but in Notes.

Apple is also working on math-focused input enhancements in the form of a feature known as “Keyboard Math Predictions.” AppleInsider has learned that the feature will allow math expressions to be completed as soon as they’re recognized within text input.

This means that within Notes, users will get the option to automatically complete their math equations, much like Apple currently offers predictive text and inline completions on iOS, features that are also expected to make their way to visionOS later this year.
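
Keyboard Math Predictions is unreleased, but Foundation’s NSExpression can already evaluate the plain arithmetic such a feature would need to complete. This sketch, in which the trailing “=” trigger and the function name are assumptions, shows the idea:

```swift
import Foundation

// Illustrative only: complete a typed arithmetic expression when the
// user ends it with "=". Note that NSExpression raises an Objective-C
// exception on malformed input, so real code would validate first.
func mathCompletion(for input: String) -> String? {
    guard input.hasSuffix("=") else { return nil }
    let expression = NSExpression(format: String(input.dropLast()))
    guard let value = expression.expressionValue(with: nil, context: nil) as? NSNumber else {
        return nil
    }
    return value.stringValue
}

// mathCompletion(for: "12*(3+4)=")  // "84"
```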

visionOS will also see improved integration with Apple’s Transformer LM, the predictive text model that offers suggestions as you type. The operating system is also expected to get a redesigned user interface with voice commands, an indicator of how much Apple values input-related improvements.

The company is also looking to improve user input through so-called “smart replies,” which will be available in Messages, Mail, and Siri. These will let users reply to messages or emails with basic text responses generated instantly on device by the Ajax LLM.
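
No API for these smart replies has been published; the stub below only illustrates the input/output contract the report describes, with invented names and canned strings standing in for model output:

```swift
// Hypothetical stub: a shipped version would call the on-device
// Ajax LLM; this just shows the expected shape of a smart-reply call.
func smartReplies(for incomingMessage: String) -> [String] {
    // Placeholder suggestions; real ones would be generated per message.
    ["Sounds good!", "Can we talk later?", "On my way."]
}
```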

Apple’s AI vs. Google Gemini and other third-party products

AI has made its way into almost every app and device, and AI-focused products such as OpenAI’s ChatGPT and Google Gemini have seen a significant rise in overall popularity.

Google Gemini is a popular AI tool

Although Apple has developed its own AI software to better position itself against the competition, the company’s AI isn’t as impressive as something like Google Gemini Advanced, AppleInsider has learned.

During its annual Google I/O developer conference on May 14, Google showed off an interesting AI use case, in which users can ask a question in video form and get an AI-generated answer or suggestion.

As part of the event, Google’s AI was shown a video of a broken turntable and asked why it wasn’t working. The software identified the turntable’s model and suggested that it might be malfunctioning because it wasn’t properly balanced.

The company also announced Google Veo, software that can generate video using artificial intelligence. OpenAI also has its own video generation model known as Sora.

Apple’s Greymatter project and Ajax LLM can’t generate or process video, meaning the company’s software can’t answer complex video questions about consumer products. This is probably why Apple is looking to partner with companies like Google and OpenAI to reach a licensing agreement and make more features available to its user base.

Apple will compete with products like the Rabbit R1 by offering vertically integrated AI software on established hardware

Compared to physical AI-themed products like the Humane AI Pin or the Rabbit R1, Apple’s AI projects have a significant advantage in that they work on devices that users already own. This means that users will not need to purchase a dedicated AI device to enjoy the benefits of artificial intelligence.

Humane’s AI Pin and Rabbit R1 are also generally considered unfinished or partially functional products, and the latter was even revealed to be little more than a custom Android app.

Apple’s AI-related projects are expected to make their debut at the company’s annual WWDC on June 10, as part of iOS 18 and macOS 15. Updates to the Calendar, Freeform, and System Settings apps are also in the works.
