Apple uses user-centric stories to sell its products. We propose AI as a new unique selling point as video becomes more and more a commodity.
Apple has a history of selling its products on a clear, user-centered value proposition. A milestone was the famous "Think different" campaign. For a long time the story was about enjoying and creating content: early on writing and desktop publishing, later music and photos, and currently video. Since the iPod, Apple has also taken content consumption seriously, today on iPhone, iPad, HomePod and Apple TV. Since iCloud, Siri and the Apple Car, they also have to ensure they are top in their own data centers. With today's video creation, a data-center offering is relevant for their customers as well.
Apple does a lot to optimize the respective user experiences throughout its technology stack, from chip to developer framework to user interface, and even applications where needed.
With writing came the mouse; with music, the iPod and GarageBand; with video came iMovie and an H.264 video encoder and decoder right in the chip. Currently Apple's main selling story is video. How amazing the iPhone is at capturing video is well established. On the desktop, much is about how many video streams their systems can handle in parallel (18 streams of 8K). Soon everyone will have so many that it is no longer worth talking about. It will be a commodity.
So what is next for Apple as a selling story? The big topics in computing right now are AI, crypto, augmented reality, cybersecurity and quantum computing.
Quantum computing is too far away, and cybersecurity is a hygiene topic that Apple already addresses through its privacy efforts. Crypto is mainly used as money, and here Apple has its Apple Pay franchise.
We see an Apple AR framework and hear about work on the Apple Glasses. We also see a Neural Engine core in the chips and Core ML in the programming frameworks. Augmented reality is an I/O topic comparable to the mouse or the Apple Watch. So what?
Apple AI as one of the two next USPs
This article argues that there is a new addition to Apple's USP: the time is ripe for Apple to tell an AI story underpinning their content-creation and information platforms. As Apple usually does, this should come with a business model for creators.
Current AI features are about getting to information through a voice user interface that gives clear answers. Examples are Apple's Siri, Google's "Hey Google" and Amazon's Alexa.
Further features are text, face and object recognition in pictures, with corresponding added information, post-processing options or automated improvements. Apple's Portrait mode is the most visible example here.
Other typical AI functionality is translation of text, and transcription and translation of audio in communication or in audio playback, as seen in Telegram Premium or YouTube.
Cars that learn to drive themselves and humanoid robots that learn to walk and behave in a human world need very strong AI. Apple is working on the car and will certainly not be able to neglect the humanoid.
Things move faster once users need steady access to a next-generation, ChatGPT-like interface to general knowledge from Wikipedia, books, scientific papers and the internet. We expect this to become a standard feature of our phones very soon.
More AI on device: From Phone to AI-Cortex
We will need more and more of this processing power on the device. The iPhone 14 delivers about 2 Tflops and already learns faces and the "Hey Siri" command on the edge. If the assumed rate of development (about 50% per year) holds until 2029, the iPhone may reach on the order of 30 Tflops by then.
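The projection above is simple compound growth. A minimal sketch in Python, assuming (as the article does) a 2 Tflops baseline for the iPhone 14 in 2022 and a hypothetical 50% improvement per year:

```python
# Back-of-envelope projection of on-device compute.
# Assumptions (not Apple figures): 2 Tflops baseline in 2022,
# 50% improvement per year, compounded until 2029.
def projected_tflops(baseline_tflops, annual_growth, start_year, target_year):
    """Compound the baseline throughput over the intervening years."""
    years = target_year - start_year
    return baseline_tflops * (1 + annual_growth) ** years

estimate = projected_tflops(2.0, 0.5, 2022, 2029)
print(f"~{estimate:.0f} Tflops by 2029")  # ~34 Tflops, the same order as the ~30 Tflops above
```

With these assumptions the math actually lands slightly above 30 Tflops; the figure is an order-of-magnitude estimate, not a roadmap.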
Maybe we will see a slow transformation from something we currently still call a phone into a full-blown AI device whose usage becomes more and more invisible. As soon as the usage is no longer visible, I suggest calling it the "AI-Cortex".
The AI training machine on the desk and in the cabinet
Next to such AI inference machines on mobile devices, there is a need for the AI training machine.
While our brains learn internally, such machines today are exaflop computers the size of 10 to 50 racks or cabinets. Apple's current Mac Pro is rack-mountable. Apple needs AI training capability, specifically for object recognition for AR and Photos, for Siri and for the Apple Car. Apple likes vertical integration, so it is fair to think that at some point they will work on a data-center-capable device optimized for AI.
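To get a feel for the gap between device and data center, a hedged back-of-envelope in Python: assuming, hypothetically, around 30 Tflops per AI-optimized device (the on-device projection above), aggregating one exaflop of training compute would take tens of thousands of such devices:

```python
# Scale comparison between one device and an exaflop training machine.
# All numbers are assumptions for illustration, not Apple specifications.
EXAFLOP = 1e18           # flops per second in one exaflop
DEVICE_FLOPS = 30e12     # hypothetical per-device throughput (~30 Tflops)

devices_needed = EXAFLOP / DEVICE_FLOPS
print(f"~{devices_needed:,.0f} devices per exaflop")  # ~33,333 devices
```

However the per-device number shifts, the conclusion stands: training at this scale is a data-center problem, which is why a rack-mountable, AI-optimized machine is a plausible next step.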
What can you do? Get your hands dirty.
What we should expect from Apple is a fast-growing set of AI-based capabilities that make a difference for information, text, music, photos and video across their entire lineup.
An AI-optimized, data-center-enabled computer would primarily be interesting for developers. Such products are naturally introduced at the Apple developer conference. We may expect such a device, based on the M2 Ultra, as early as July 2023. Find more details about such a Mac Pro AI here.