TOPLINE
Apple finally joined the great AI arms race this week after months of lagging behind big tech rivals like Google, Microsoft and Meta, unveiling a series of artificial intelligence-powered products it has pitched as a more secure, privacy-focused alternative to competitors that process users' requests in the cloud.
KEY FACTS
Apple, which for years has sought to position itself as one of Silicon Valley’s most privacy- and security-focused firms, previewed an array of AI features at its annual WWDC event on Monday, including voice transcription, Siri upgrades, improved photo editing and the integration of OpenAI’s popular chatbot ChatGPT into its devices.
Apple said its new features — part of the company’s broader vision for AI it calls “Apple Intelligence” — have been designed with privacy “at the core,” which it said will be achieved by keeping as much of the computer processing needed to fulfill an AI task as possible on users’ devices, rather than shipping it off to the cloud.
Many firms rely on cloud computing, a system that delivers computing services like data processing, software and storage remotely over the internet, to power their AI offerings, given the heavy processing and power demands of many requests. And because providers must use that data to answer a request, it cannot be hidden from them the way encrypted messages can be.
By nature, data processed in the cloud is more vulnerable to access and exploitation than data processed locally on the device itself, and Apple said its AI features will prioritize keeping data on users’ devices by drawing on whatever battery and processing power those devices have at their disposal.
Inevitably, some AI tasks will require more power than an iPhone, Mac or other Apple device can muster, and for these Apple said it has developed a new privacy-preserving method of sending data to cloud servers.
Apple said it will send only the data necessary to complete an AI task to its cloud servers, will encrypt it in transit and will not store or use it for anything else, adding that it will allow independent researchers to inspect and verify the system’s security.
KEY BACKGROUND
Apple’s privacy-focused debut in the AI arms race has reignited a long-simmering debate over data privacy in Silicon Valley. Unlike many of its marquee competitors such as Alphabet’s Google, Instagram parent Meta and Amazon, Apple is less reliant on collecting user data, as more of its profits come from hardware like iPads, iPhones and MacBooks. As a result, Apple can afford to position itself as a privacy-focused firm, though it has still at times come under fire from activists. Apple is now trying to capitalize on that privacy reputation as it rolls out AI, going as far as to promise not to use customer data for quality control or maintenance, which is standard practice elsewhere in the industry. Apple clearly plans to use privacy as a key differentiator between itself and its primarily cloud-reliant, data-collecting rivals. Many of those rivals have already been criticized by digital rights groups and consumers over their use of user data in AI features, including field leaders like Microsoft, ChatGPT maker OpenAI (which has received billions in investment from Microsoft), Meta, Adobe and Google.
CRUCIAL QUOTE
“You shouldn’t have to hand over all the details of your life to be warehoused and analyzed in someone’s AI cloud,” Apple software engineering chief Craig Federighi said at the WWDC event on Monday, highlighting what Apple casts as a crucial difference between its approach and those of its competitors.
CHIEF CRITIC
Elon Musk was so incensed at Apple’s decision to partner with OpenAI for its “Apple Intelligence” vision that he threatened to bar Apple devices from his companies’ campuses. He branded Apple’s AI tools as “creepy spyware” and claimed the connection to OpenAI represented an “unacceptable security violation.” In posts that were later corrected by crowdsourced fact-checkers on his own social media platform, X, formerly Twitter, Musk said it was “patently absurd” to think Apple could ensure OpenAI protected user data and alleged the company had “no clue what’s actually going on once they hand your data over.” Musk has a history of feuding with OpenAI, which he cofounded and later left amid alleged conflicts of interest, and particularly with fellow cofounder Sam Altman. He also founded and runs what could become a major competitor to OpenAI, xAI, and styled its only product, chatbot Grok, as a direct rival to ChatGPT.