Apple’s launch of iOS 26 brought Foundation Models, a framework that gives developers direct access to local AI on iPhone and iPad. Unlike traditional cloud-based AI, these models run entirely on-device, lowering costs for developers and preserving user privacy. The framework is not designed to compete with the largest cloud models but instead powers practical, focused features within everyday apps.
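As a rough sketch of what that direct access looks like, assuming the `LanguageModelSession` API Apple introduced with the FoundationModels framework (the prompt here is invented for illustration):

```swift
import FoundationModels

// Create a session with the on-device system language model.
// No network call is made; inference runs locally.
let session = LanguageModelSession()

// Ask for a short, focused result -- the kind of targeted
// feature the framework is designed for.
let response = try await session.respond(
    to: "Summarize this note in one sentence: Grocery run, then pick up dry cleaning before 6pm."
)
print(response.content)
```

Because the model runs locally, each call costs the developer nothing beyond the user's battery, which is what makes the per-feature pattern described below economical.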
Early adoption shows how developers are weaving local AI into their products in ways that feel natural. Rather than building full AI chatbots, apps are adding subtle enhancements to make daily workflows smoother. This aligns with Apple’s privacy-first strategy, where AI features enhance user experience without requiring data to leave the device.
Several apps already showcase how the Foundation Models framework is being used. TechCrunch is maintaining a list of such apps:
- Lil Artist: AI-powered story creation for kids.
- Daylish: Emoji suggestions when scheduling calendar events.
- MoneyCoach: Spending insights and automated expense categorization.
- LookUp: Contextual examples and etymology maps for word learning.
- Tasks and Day One: Smart tag suggestions, text summaries, and prompt creation.
- Crouton, Dark Noise, and Lights Out: Recipe simplification, AI-generated soundscapes, and real-time commentary summaries.
- Signeasy, Capture, Lumy, and CardPointers: Document summaries, note suggestions, contextual weather insights, and credit card reward tracking.
- Carrot Weather: Unlimited conversations with its chatbot, powered by Apple Intelligence.

The common thread is efficiency and cost. Developers are turning Apple’s smaller on-device models into targeted features that would otherwise require server calls. This makes AI seamless, fast, and private, while also reducing dependency on third-party services and their fees.
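Targeted features like smart tagging lean on the framework's guided generation: a `@Generable` type asks the model to produce typed output the app can use directly, instead of free text it must parse. A minimal sketch, with the struct and prompt invented for illustration:

```swift
import FoundationModels

// Hypothetical output type for a smart-tagging feature.
// @Generable asks the model to fill this struct directly.
@Generable
struct TagSuggestions {
    @Guide(description: "Up to three short, lowercase tags for the entry")
    var tags: [String]
}

let session = LanguageModelSession()
let result = try await session.respond(
    to: "Suggest tags for: 'Booked flights to Lisbon, hotel still pending.'",
    generating: TagSuggestions.self
)
// result.content is a TagSuggestions value the app can bind to UI
// without any string parsing.
```

This is the shape a feature like Tasks' tag suggestions or MoneyCoach's expense categorization could plausibly take: one small, structured request per user action rather than an open-ended chat.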
Android also supports local AI, but its implementation is less unified. Google’s Gemini Nano in Android 15 enables features like summarization and smart replies, while Samsung’s Galaxy AI provides tools such as translation and text assistance. Developers on Android, however, often combine these with larger cloud-based models from Google or other providers. This gives them more flexibility and power but comes with variable costs and potential privacy trade-offs.
Apple’s Foundation Models are not meant to compete directly with the likes of OpenAI’s GPT-5, Anthropic’s Claude, or Google’s Gemini Ultra. Those are frontier models optimized for large-scale reasoning, multi-turn conversations, and complex generation tasks, typically running in the cloud. Apple’s models, by comparison, are lightweight and tuned for on-device use cases such as summarization, tagging, auto-complete, and content suggestions.
Apple has created a consistent framework that developers can rely on across its hardware lineup. While the models are smaller than cloud AI systems, their tight integration ensures predictable performance and user trust. The adoption in iOS 26 apps demonstrates that developers are willing to embrace this local-first approach, and Apple’s strategy may influence how app ecosystems balance privacy, cost, and capability in the years ahead.
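That predictability still depends on the model actually being present: Apple Intelligence can be disabled, and older hardware lacks it entirely. A sketch of the availability check a shipping app would do first, assuming the framework's `SystemLanguageModel` availability API (the `enableSmartSuggestions` hook is hypothetical):

```swift
import FoundationModels

func enableSmartSuggestions() { /* hypothetical app hook */ }

// Gate the AI feature on model availability: unsupported hardware
// or disabled Apple Intelligence means the model may not be present.
switch SystemLanguageModel.default.availability {
case .available:
    // Safe to create a LanguageModelSession and enable the feature.
    enableSmartSuggestions()
case .unavailable(let reason):
    // Fall back gracefully -- hide the feature or use a static path.
    print("On-device model unavailable: \(reason)")
}
```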