Breakthrough Unveils Exciting Tech Advancements for iPhone 16
During the past few months, there have been reports that Apple is working on Apple GPT, a large language model (LLM). Apple is reportedly investing millions of dollars daily to train its LLM, with an initial focus on AppleCare customers. Additionally, the Siri team plans to incorporate these language models to make complex shortcut integrations more accessible. Apple has also been building AI servers and plans to combine cloud-based AI with on-device processing, aiming to bring its generative AI to iPhone and iPad users by late 2024.
While other companies rely on cloud-based processing, Apple plans to rely mainly on on-device processing due to its commitment to privacy. However, large language models have enormous memory footprints, so an iPhone would normally be unable to run a future Apple GPT locally without offloading work to a server. Apple researchers have published a paper showing how LLMs can run efficiently on devices with limited memory. They propose a method that reduces the volume of data transferred from flash memory and reads that data in larger, more contiguous chunks. This can deliver a 4-5x speedup on CPUs and a 20-25x speedup on GPUs, allowing models up to twice the size of an iPhone's available memory to run on the device.
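One of the techniques the paper describes is "row-column bundling": the weights an active neuron needs from two adjacent projection matrices are stored next to each other, so a single contiguous flash read fetches both. The sketch below is only an illustration of that storage idea in Python with NumPy; the array shapes, names, and the `load_neuron` helper are assumptions for demonstration, not Apple's actual implementation.

```python
import numpy as np

# Toy dimensions for a feed-forward layer (assumed, for illustration only).
d_model, d_ff = 8, 16
rng = np.random.default_rng(0)
W_up = rng.standard_normal((d_ff, d_model)).astype(np.float32)    # row i belongs to neuron i
W_down = rng.standard_normal((d_model, d_ff)).astype(np.float32)  # column i belongs to neuron i

# Row-column bundling: store row i of W_up and column i of W_down
# contiguously, so loading one neuron's weights is a single sequential read
# instead of two scattered ones.
bundled = np.concatenate([W_up, W_down.T], axis=1)  # shape (d_ff, 2 * d_model)

def load_neuron(i: int):
    """Fetch both weight vectors for neuron i with one contiguous slice."""
    record = bundled[i]
    return record[:d_model], record[d_model:]

# The bundled read recovers exactly the same weights as two separate reads.
up_3, down_3 = load_neuron(3)
assert np.array_equal(up_3, W_up[3])
assert np.array_equal(down_3, W_down[:, 3])
```

On real flash storage the win comes from issuing fewer, larger sequential reads, which flash controllers handle far faster than many small random reads; the in-memory arrays here only mirror that layout.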
This technology has the potential to improve Siri's capabilities, enable real-time translation, and power other AI features for photos and videos, as well as a deeper understanding of how customers use their iPhones.