I get that it’s hot right now, but man, the user experience of LLMs being this bot you type text to seems like a huge step backwards compared to just integrating these AI features natively into products.
@jcrabapple yeah, looks like it. They’ve been doing this sort of AI integration and it’s much more interesting to me.
This is why I was annoyed when everyone said Bing was way ahead because they licensed ChatGPT. These things are much more useful when deeply integrated.
I don’t use Google search anymore, but man, they really are taking over the entire search page with their own AI model’s answers instead of showing results. I feel like web publishers are gonna be pissed.
I do enjoy split screen on my Pixel for events like this. I can have the video up top with Mastodon at the bottom. Better than picture-in-picture since it doesn’t cover up the app.
(The top corners not being rounded is driving me nuts though!)
So a huge focus of previous Google I/O events has been reaching people who don’t have Internet. How do all of these LLM/AI models work for people who are disconnected?