I'm Not Impressed With AI on Phones Yet. What It Will Take to Change My Mind
While these updates arguably bring additional convenience to your phone, they don't feel as groundbreaking as tech giants would have you believe. The first phase of AI-centric phone features is designed for very specific use cases - so specific, in fact, that I often forget to use them. The new capabilities that feel the most promising, like Google's Circle to Search and Apple's Visual Intelligence, require users to think about navigating their phones in a different way, which presents its own set of challenges.
To be sure, tech companies have made it clear that this is the start of a multiyear evolution of mobile software. Getting it right is critical, because there's a belief that generative AI will define the internet's future and how we access information. Generative AI adoption in the US is said to be moving faster than adoption of the PC and the internet, according to a September economic research paper from the Federal Reserve Bank of St. Louis. By not incorporating generative AI into their devices, tech companies run the risk of being left behind - like those that missed out on the shift to smartphones in the early 2000s.
So far, we've seen hints at where the future of smartphone software will go, with novel ideas emerging such as a phone interface that doesn't rely heavily on apps and AI agents that can act on your behalf. For now, those ideas are just that, but I'm hoping to see steps that push mobile phones in those directions in 2025.
AI features in 2024 are trivial, not essential
Generative AI, or AI that creates content or responses based on prompts, captured the world's attention in 2023 largely thanks to ChatGPT. But 2024 was the year phone makers began seriously embracing the tech. Samsung kicked things off in January with the introduction of Galaxy AI, while Apple unveiled Apple Intelligence in June, months ahead of its October rollout. Google sporadically announced AI advancements throughout 2024, from Gemini Live and Gemini's ability to understand what's on your phone's screen at Google I/O in May to the new image generation tools on the Pixel 9 family in August.
Many of these early features aim to solve problems I'm not sure need to be fixed. For example, I rarely find myself in a situation that calls for rewriting a text message to make it sound more professional or friendly. Most of the people I text are close friends or family members, so I don't usually think too carefully about the phrasing or tone. In the rare instances in which I'm texting a work-related contact, the conversation is usually just a brief reminder about an upcoming meeting or event.
Other new AI features are amusing and impressive, but fail to prove their long-term usefulness. Samsung's Portrait Studio, which launched on the Galaxy Z Fold 6 and Z Flip 6, comes to mind. It uses AI to recast photos of people in different art styles, like watercolor or cartoon.
When I got my hands on the Galaxy Z Fold 6 back in July, I had so much fun playing with different selfies and images of friends to see how Samsung would reimagine us with a new look. But the novelty quickly wore off. I haven't touched that feature since, even when revisiting the Z Fold 6 three months later.
I feel the same way about other image generation apps and features like the Pixel 9's Pixel Studio, which makes it possible to create an image based on a prompt, and Samsung's Sketch-to-image tool for turning rough sketches into detailed images. I will admit, there is a degree of delight that comes from playing around with these creative tools and seeing what they'll do. But months later, these features have yet to find a place in my everyday life.
Apple also just launched a preview of its own image creation app called Image Playground as part of the iOS 18.2 developer beta . I haven't spent enough time with it yet to form an impression, but I can't imagine I'd feel very differently.
Of course, my experience doesn't reflect everyone's opinion. Some may find great value in these tools, such as people who struggle in social situations and need extra help figuring out how to frame a text message, or creatives who need to quickly generate images on the fly for a personal project. But that's exactly my point: these features feel designed for specific circumstances rather than wholesale changes that push the mobile experience forward.
The most promising features so far set the stage for the future
While the vast majority of new AI features feel inconsequential, there are a few that show real potential. Google's Circle to Search, which lets you launch a Google search for almost anything on your phone screen by circling it, is one such example. As is Apple's Visual Intelligence mode for the iPhone 16, and the message and notification summaries in Apple Intelligence.
What separates these features from the others mentioned above is that they feel more integrated at the system level rather than being buried in specific apps. But more importantly, they were designed to solve bigger-picture pain points with how we use our phones, even if they don't fully live up to that ambition just yet.
Circle to Search and Visual Intelligence are two of the strongest examples of this. They may seem very different on the surface - Circle to Search leverages what's on your phone's screen, while Visual Intelligence requires you to use the iPhone 16's camera to scan the world around you. But they both aim to eliminate the middle step of having to open an app, launch a Google search or type a prompt into ChatGPT to retrieve information. They're both an indication that tech giants think there's a better way to get things done on our phones.
Apple Intelligence's message and notification summaries also stand out as an example of an AI feature that feels genuinely useful at times without requiring additional user effort. Like Circle to Search and Visual Intelligence, it feels like a sweeping change aimed at a very common problem: managing the influx of information on our mobile devices.
But even features like these are far from perfect and still have a long way to go. Apple's summaries are sometimes sufficient to give me the gist of a text thread, but more often than not, they're missing crucial context. Visual Intelligence is still in preview as part of the iOS 18.2 developer beta, so I'm still getting a feel for its usefulness.
Beyond that, Visual Intelligence and Circle to Search suffer from the same conundrum: After years of being conditioned to tap, swipe and scroll, adopting a new way of doing things on our phones doesn't come naturally. The reason these features are so interesting and promising is the same factor that could be holding them back. Drawing a circle around something onscreen, or launching the iPhone's camera instead of opening Google, just isn't instinctual yet, and who knows if and when it will be.
What's become clear in 2024 is that AI still needs to prove its purpose on our smartphones. AI's potential is starting to take shape, especially when you consider more dramatic ideas about how our phones could change, such as Google's Project Astra demo from Google I/O, Qualcomm's concept for apps that can take actions for you and Brain.ai's vision for a phone that can generate its interface as needed. There are already plenty of efforts underway to make phones more intuitive right now, such as Google's Gemini extensions, which let the digital helper work with other apps, and Apple's upgraded Siri that has the ability to understand personal context.
It's impossible to say whether we'll ever abandon apps in favor of AI, or rely on virtual agents to accomplish everyday tasks. But that's not what I'm looking for in 2025. For now, I just want features that feel practical, useful and innovative in a bigger way than what we've seen so far.