Today Google announced its latest family of smartphones, the Pixel 9, at its Made by Google event, and they looked pretty good to me. (Skip the first 15 minutes of the leader.)
Google seemed to follow Apple Intelligence’s lead by implementing some AI locally on the device.
Here’s a comparison of specifications for the iPhone 15 Pro vs. the Pixel 9 Pro:
I guess we’ll have to wait another month before we know how the Pixel 9 compares to the iPhone 16.
Similar to Apple’s WWDC 2024 announcement of iOS 18, yesterday’s Made by Google 2024 announcement of the Pixel 9 also touted that its smartphone AI, Gemini, runs locally:
Hopefully Apple won’t release Apple Intelligence features for iOS 18 until they are more polished than what Google shipped on the Pixel 9.
The software is … promising, some of it seems like a party trick, and some of it is downright reckless. Google’s been rolling out generative AI features here and there over the past year, but this feels like the company’s first big swing at an AI phone. It’s kind of all over the place.
There’s a little sparkly AI icon in so many different corners of the UI, and these various assistants and systems don’t work well together yet. Do you want to have a conversation with AI? Or use AI to write an email? Or organize and refer to your screenshots with AI? Those features all exist on the Pixel 9 series, but they’re all in separate apps and interfaces. It’s starting to feel like I need AI to sort out all of the AI, and that’s not a great place to be. What’s worse is that they all work inconsistently, making it hard to rely on any of them.