Apple avoided talking about Artificial Intelligence at the event, but did include it in several products

At its recent WWDC 2023 event, Apple presented significant advances in integrating machine learning into its products and services. While avoiding the term “artificial intelligence” (AI), Apple repeatedly highlighted the use of “machine learning” (ML) techniques to improve the user experience. This contrasts with the strategy of competitors such as Microsoft and Google, which have placed great emphasis on generative AI. Through its focus on ML, Apple signals a commitment to user-centric technology innovation.

Transforming autocorrect and dictation in iOS 17

In iOS 17, Craig Federighi, Apple’s Senior Vice President of Software Engineering, introduced significant improvements to autocorrect and dictation, powered by on-device machine learning. Federighi mentioned the use of a “transformer language model” that makes autocorrect more accurate than ever. This model is based on the transformer architecture, which has driven innovations in generative AI. Autocorrect can now complete words or entire phrases just by pressing the space bar. In addition, the model learns from the user’s writing style, which further improves the suggestions offered.
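
For a sense of what word completion looks like in code, here is a minimal sketch using UITextChecker, a long-standing system API. Apple’s new transformer model is private to the keyboard, so this only illustrates the completion idea, not the model described in the keynote:

```swift
import UIKit

// A minimal sketch of system word completion with UITextChecker.
// This is NOT the transformer model from the keynote, which has no
// public API; it only demonstrates the completion concept.
let checker = UITextChecker()
let text = "The weather today is beau"
let partial = NSRange(location: 21, length: 4)  // the fragment "beau"

if let completions = checker.completions(forPartialWordRange: partial,
                                         in: text,
                                         language: "en_US") {
    print(completions.prefix(5))  // first few suggested completions
}
```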

The power of Apple Silicon and the Neural Engine

Apple has integrated machine learning into its devices thanks to the Neural Engine, a specialized part of Apple Silicon chips. This unit is designed to accelerate machine learning workloads and has been present since the A11 chip in 2017. At the event, it was highlighted that dictation in iOS 17 uses a “transformer-based speech recognition model” that takes advantage of the Neural Engine for even more precise dictation.
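
As a rough illustration of how developers steer work toward the Neural Engine, the following sketch loads a hypothetical compiled Core ML model (the name SpeechModel is invented here) and asks Core ML to prefer the Neural Engine over the GPU:

```swift
import CoreML

// A minimal sketch, assuming a hypothetical compiled model "SpeechModel.mlmodelc"
// bundled with the app. MLModelConfiguration.computeUnits lets Core ML schedule
// supported operations on the Neural Engine rather than the CPU or GPU.
let config = MLModelConfiguration()
config.computeUnits = .cpuAndNeuralEngine  // prefer the ANE where possible

if let url = Bundle.main.url(forResource: "SpeechModel", withExtension: "mlmodelc") {
    do {
        let model = try MLModel(contentsOf: url, configuration: config)
        print(model.modelDescription)
    } catch {
        print("Failed to load model: \(error)")
    }
}
```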

Improvements on iPad, AirPods and Apple Watch

During the event, Apple referenced machine learning on multiple occasions. In connection with the iPad, a new feature was introduced on the lock screen that uses an “advanced machine learning model” to synthesize additional frames in selected Live Photos. iPadOS can also identify fields in PDF files, thanks to new machine learning models, allowing them to be quickly filled with contact information using auto-completion.
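
To give an idea of the plumbing involved, this sketch uses PDFKit to list the form fields a PDF already contains; the file path is hypothetical, and Apple’s new models go a step further by detecting fields in flat documents that carry no annotations at all:

```swift
import PDFKit

// A minimal sketch of reading existing form fields with PDFKit.
// The path below is a hypothetical placeholder.
let pdfURL = URL(fileURLWithPath: "/path/to/form.pdf")

if let document = PDFDocument(url: pdfURL) {
    for index in 0..<document.pageCount {
        guard let page = document.page(at: index) else { continue }
        // Widget annotations of type .text are fillable text fields.
        for field in page.annotations where field.widgetFieldType == .text {
            print(field.fieldName ?? "unnamed", "=", field.widgetStringValue ?? "")
        }
    }
}
```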

AirPods now offer “Adaptive Audio”, which uses machine learning to understand the user’s listening preferences over time, allowing volume to be adjusted in a personalized way. Meanwhile, the Apple Watch adds the Smart Stack, which uses machine learning to surface relevant widgets at the right time.
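
To make the “learns your preferences over time” idea concrete, here is a toy sketch in plain Swift, not any Apple API, that nudges a learned volume preference toward each manual adjustment:

```swift
// A toy illustration: an exponential moving average is the simplest
// version of the personalization Adaptive Audio describes.
struct VolumePreference {
    private(set) var preferred = 0.5   // normalized volume, 0.0...1.0
    private let learningRate = 0.1

    // Each manual adjustment nudges the learned preference toward it.
    mutating func record(userVolume: Double) {
        preferred += learningRate * (userVolume - preferred)
    }
}

var preference = VolumePreference()
for volume in [0.7, 0.65, 0.8] { preference.record(userVolume: volume) }
print(preference.preferred)  // drifts toward the user's typical choice
```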

Journal: a new application that takes advantage of machine learning

Apple introduced a new app called Journal, which allows users to write and keep a personal journal with encrypted text and images on the iPhone. Although Journal’s suggestions clearly lean on AI techniques, Apple once again avoided the term itself. Using on-device machine learning, the app can offer personalized suggestions to inspire writing, based on information stored on the iPhone, such as photos, location, music, and fitness data. Users retain full control over what information to include and save in their journal.
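
Apple has not published how Journal ranks these signals, but the general shape of the idea can be sketched with hypothetical types: score each on-device signal and turn the strongest into a writing prompt:

```swift
// A purely illustrative sketch; every type here is hypothetical and
// has nothing to do with Journal's real, unpublished internals.
struct Signal {
    let description: String   // e.g. a photo's place name or a workout type
    let recency: Double       // 0...1, newer scores higher
    let novelty: Double       // 0...1, out-of-routine scores higher
}

func suggestedPrompt(from signals: [Signal]) -> String? {
    let best = signals.max { ($0.recency + $0.novelty) < ($1.recency + $1.novelty) }
    return best.map { "Write about: \($0.description)" }
}

let prompt = suggestedPrompt(from: [
    Signal(description: "Morning run along the river", recency: 0.9, novelty: 0.3),
    Signal(description: "Concert at a new venue", recency: 0.6, novelty: 0.9),
])
print(prompt ?? "No suggestion")  // picks the concert: highest combined score
```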

Vision Pro: an immersive experience created with machine learning

The flagship product presented at the event was the Apple Vision Pro, a new headset that provides an immersive augmented reality experience. During the demo, Apple revealed that the moving image of the wearer’s eyes shown on the headset’s outer display is generated using advanced machine learning techniques. By scanning the user’s face, a digital representation called a “Persona” is created using an encoder-decoder neural network. This network compresses the facial information captured during the scanning process and uses it to generate a 3D model of the user’s face.
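
The encoder-decoder pattern itself is easy to illustrate. This runnable toy, which has nothing to do with Apple’s actual network, compresses a signal into a shorter latent vector and then expands it back into an approximation:

```swift
// A toy encoder-decoder: the encoder halves the signal by averaging
// pairs of samples; the decoder reconstructs an approximation. A real
// neural network learns these mappings instead of hard-coding them.
func encode(_ signal: [Double]) -> [Double] {
    stride(from: 0, to: signal.count - 1, by: 2).map { (signal[$0] + signal[$0 + 1]) / 2 }
}

func decode(_ latent: [Double]) -> [Double] {
    latent.flatMap { [$0, $0] }  // each latent value reconstructs two samples
}

let scan: [Double] = [1.0, 1.2, 3.1, 2.9, 0.4, 0.6]   // stand-in for face data
let latent = encode(scan)            // [1.1, 3.0, 0.5], half the size
let reconstruction = decode(latent)  // approximate version of the original
print(latent, reconstruction)
```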

The powerful M2 Ultra chip and the future of machine learning at Apple

At the event, Apple introduced its most powerful Apple Silicon chip to date, the M2 Ultra. This chip has up to 24 CPU cores, 76 GPU cores and a 32-core Neural Engine, capable of performing 31.6 trillion operations per second. Apple stressed that this power will be especially useful for training “large transformer models”, thus demonstrating its interest in AI applications. AI experts have been enthusiastic about the M2 Ultra’s capabilities, as its unified memory architecture allows it to run larger AI models.
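
A back-of-the-envelope calculation shows why that matters: a hypothetical 30-billion-parameter model stored in 16-bit precision needs roughly 60 GB, which fits in the M2 Ultra’s top configuration of 192 GB of unified memory without shuttling data over a bus:

```swift
// Illustrative arithmetic only; the model size is hypothetical, and
// 192 GB is the maximum M2 Ultra unified memory configuration, shared
// by the CPU, GPU and Neural Engine alike.
func gigabytesNeeded(parameters: Double, bytesPerParameter: Double) -> Double {
    parameters * bytesPerParameter / 1e9
}

let needed = gigabytesNeeded(parameters: 30e9, bytesPerParameter: 2)  // 16-bit weights
print("A 30B-parameter model needs about \(needed) GB")  // 60 GB, fits in 192 GB
```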

As you can see, AI is everywhere, even if it’s not specifically mentioned.
