Music playback on the frames’ speakers sounds fine, but I still prefer wireless earbuds for anything I actually want to listen to. Podcasts and phone calls sound great, though; voices seem to shine on the glasses. The touch controls along the sides of the frames work well enough, which is nice, since oversensitive touch controls on wearables can trigger unintended actions, like accidentally pausing a track or changing the volume.
The glasses are lightweight compared to the face computers of the past, but they’re still heavy enough to leave deep red divots on either side of the bridge of my nose after a couple of hours of wear. This is probably a skill issue: I don’t wear prescription lenses, so aside from the occasional pair of sunglasses, I’m not used to having frames on my face for hours at a time.
It’s a petty peeve, but I much prefer polarized lenses, which Meta doesn’t offer. At least you can get the glasses with Transitions lenses, which is what I tested. Those do a good job of reacting to the light, but they never feel quite dark enough to keep me from squinting on a sunny day.
As I wore them on one of my walks through San Francisco, along the shore of Ocean Beach, I came upon a dolphin-like creature that had washed up on the sand. Though I got my camera glasses close enough that I could smell the thing, Meta’s AI assistant could not tell me what kind of animal it was. It did correctly determine that it was very dead and that I should not touch it, and it was then able to direct me to a number to call for city animal control services.
Beyond instances like that, I tend to avoid the AI voice interactions because they still don’t feel natural to me. Getting the assistant to search for something is usually quick, but doing so requires you to stop dead in your tracks, stare directly at another person’s purse or something, and say out loud, “Hey Meta. HEY META. Is this bag Gucci?”