Apple is gearing up to host its Worldwide Developers Conference on June 10, where it will unveil new features for the iPhone and other Apple devices. According to a report from Bloomberg, Apple is also developing an AI-enabled version of its Siri voice assistant. This new iteration of Siri will be powered by large language models (LLMs), allowing users to perform tasks such as opening documents and sending emails using voice commands.
The enhanced Siri will initially work only with Apple’s own apps. It is not expected to debut with iOS 18, but is likely to arrive in an update early next year.
The upcoming assistant will be able to analyze activity on your phone and automatically surface Siri-controlled features. At launch it will support “hundreds” of commands but process only one at a time; later versions will be able to handle multiple tasks within a single request. Early supported commands will include actions like sending or deleting emails, opening specific sites in Apple News, emailing web links, and requesting article summaries.
Once multi-command support arrives, users will be able to perform tasks like summarizing a recorded meeting and then texting it to a colleague, all in one request. According to Bloomberg’s Mark Gurman, “an iPhone could theoretically be asked to crop a picture and then email it to a friend.”
It is not yet clear which LLM will power the next version of Siri. Reports suggest Apple has recently struck a deal with OpenAI to incorporate ChatGPT into iOS 18, and the company may also be negotiating with Google to integrate Gemini into iPhone search. Apple is rumored to handle many AI requests on the device itself, reserving the cloud for more complex commands.