Apple Intelligence With Privacy, New Siri, ChatGPT Coming to iPhone, Mac

Published on 10 Jun, 2024, 8:25 PM IST
Updated on 10 Jun, 2024, 8:25 PM IST

Jamshed Avari
4 min read


iPhones, iPads and Macs will be able to offer on-device and cloud-connected AI experiences starting later this year.

In a nearly two-hour presentation that deliberately used the term “AI” as little as possible, Apple CEO Tim Cook, SVP of Software Engineering Craig Federighi, and several other executives laid out the future of Apple’s major software platforms and its strategy for staying current as competitors race to offer AI-backed features and experiences. “Apple Intelligence” is a new umbrella term for the company’s efforts to integrate AI across its operating systems and apps, starting with iPhones, iPads and Macs, and eventually coming to Apple Watches, HomePod speakers, AirPods earphones, Apple TV streaming devices, and the Vision Pro headset.

The major differentiating factor for Apple will be user privacy, which it has put at the forefront of its marketing and brand identity for several years now. The company promises that data will be protected whether it is processed on-device or sent to a server. By ensuring privacy, Apple says it can use your data in ways that make Apple Intelligence extremely personalised and contextually aware of your activities, interests and social circle. The company also confirmed that it has developed its own servers powered by Apple Silicon to run Private Cloud Compute instances, which allow for larger-scale AI processing using more powerful models when needed.

Apple Intelligence will come to the iPhone 15 Pro, and to all iPads and Macs with M-series processors, in beta as part of iOS 18, iPadOS 18 and macOS Sequoia, which will be released this fall. Not all features will be available at launch. US English will be supported first, with additional languages coming later. There will be no additional cost or subscription fee. All models in the iPhone 16 series, anticipated to launch this September, are expected to be capable of running these experiences.

Siri, the original voice assistant, gets a complete overhaul, with a new visual feedback indicator on Apple devices and the ability to carry context across a conversation as well as handle users pausing or correcting themselves mid-prompt. Conversations will be more natural and personal, and users can now type to Siri rather than using voice prompts. The assistant will be able to take action based on what a user has on screen at any point, including the contents of messages. It can understand intentions such as “Make this photo pop” and link contextual cues to perform actions such as pulling up all photos of a tagged person that match a specific situation, place, group of other people, or item of clothing.

Generative text and image capabilities, as well as contextual surfacing of important information, were demonstrated across macOS, iOS and iPadOS.

Apple has also integrated its device and service user guides into Siri’s knowledge base so it can answer questions and guide users’ actions. It can draw from information across Apple’s own apps and third-party ones. 

New writing tools will allow users to quickly have their text proofread or adjusted for tone and concision. Generative AI models can even transform text into poems or create summaries and identify highlights across emails, Web pages, and messages.

Notifications can be filtered, with high-priority ones shown at the top. A new Focus mode will let through only the messages deemed important, while reducing distractions from others. The Mail app will also be able to sort messages by purpose and show AI-generated summaries that are more likely to be useful than the first few lines of each message.

Apple Intelligence also includes image generation tools. Users will be able to create custom emojis, or “Genmojis”, by describing what they want. They can also generate stickers to share in messages as memes, including Memoji-style representations of actual people.

A new Image Playground app will let users create images by picking from suggested concepts, so there’s no need to think up a prompt. Image Playground is also integrated into apps such as Notes, and users will be able to transform rough sketches into more professional-looking images.

Within the photo library, Apple Intelligence can perform tasks such as erasing subjects and filling in the blank space, creating specific memory movies out of captured photos and videos, and surfacing specific results with natural language searches based on photo or video content.

Audio recordings can be transcribed on-device, with summaries generated automatically. The same capabilities are coming to phone calls, with participants notified for privacy protection. 

For more extensive generative AI requirements, Apple has integrated OpenAI’s ChatGPT. Users will be prompted before any text, image or other content is shared with ChatGPT to generate results for a question or prompt, and no content or even the user’s IP address is logged or retained. All users with eligible devices will have free access to ChatGPT powered by GPT-4o, without needing an account, though they can link their own accounts to use paid features. Other third-party models will be offered in the future.

Tags
Apple Intelligence
AI
Apple
WWDC
WWDC 2024
Tim Cook
Craig Federighi
Siri
ChatGPT
OpenAI