I tried the new Meta AI app: 3 unexpected features
In April 2025, Meta launched a standalone Meta AI app built with Llama 4. We tried it out, and these are our biggest takeaways.


Meta has spent the better part of a year integrating Meta AI with Facebook, Instagram, WhatsApp, and its other existing services, but hadn't yet launched a standalone experience for Meta AI fans. That all changed yesterday at LlamaCon, Meta's inaugural AI developer conference, when the company finally launched the Meta AI app.
The new app is built with Meta's Llama 4 model, and it's a full-fledged competitor to ChatGPT, which famously became the fastest-growing consumer app in history after its 2022 launch.
Already, AI enthusiasts are digging into the app to see what sets it apart from the competition. Per Meta’s press release, the big takeaway is personalization. Not only does the app integrate with your Facebook and Instagram accounts to give you more personalized responses, but it also has a memory feature, so it can reference past discussions and add more context to future ones.
This isn't necessarily unique to the Meta AI app. Grok does something similar with people's X accounts, and most other major AI chatbots already offer memory. However, we would argue the ability to draw on both Facebook and Instagram is fairly significant, considering their widespread popularity; it also gives Meta AI a much deeper well of data from which to personalize its answers.
With that said, there's more to the Meta AI app than its personalization and memory capabilities, and some of its features are unique to the Meta AI experience. So, after downloading and experimenting with the new app, here are three big features to check out.
A social Discover feed to give you ideas

Let's start with the most obvious one: the Discover feed. Upon opening the app, tap the compass icon to reach it. It works almost exactly as you would expect: people use Meta AI to generate answers to questions, images, and the like, and those posts are shared to the feed for you to engage with.
You can like, comment on, or share anything you see there, and a fourth button loads the same prompt into your own Meta AI conversation so you can see what you get when you ask the same question. During my testing, I saw someone post an image with the prompt "imagine me Miley Cyrus at Beyoncé's Cowboy Carter Tour." The image it generated for me was different from the one in my Discover feed. Now, whether Meta's AI is supposed to be generating quasi-photorealistic images of public figures is another question entirely.
As near as I can tell, the Discover feed has two important uses. The first is showing off what Meta AI can do while giving you yet another thing to doomscroll. The other is giving users fresh ideas for what to ask Meta AI. During my brief time on Discover, I found people asking about Mars colonization, what colors would work for their wardrobe, and loads of questions about the Catholic Church in the wake of Pope Francis' passing. In short, it serves not only as entertainment, but also as an idea generator, especially when the next wave of AI trends hits.
Hardware support
The Meta AI app also supports the Ray-Ban Meta Smart Glasses. In fact, Meta is replacing the existing Meta View companion app with the Meta AI app, so this is the app you'll need for your smart glasses moving forward. It's easy enough to set up: just open the app, tap the glasses icon to add your smart glasses, and continue using them as normal from there.
Per Meta, once everything is synced up, you'll be able to start a conversation on the glasses and continue it in the app. Chat history will also be accessible through the Meta AI app, integrated alongside the conversations you have in the app natively. Meta does note that it doesn't work the other way around: you can't start a chat in the app and then continue it on the glasses.
I don't personally own a pair of the smart glasses, so there are likely some extra little touches that Meta didn't put in the press release. Even so, this kind of first-party hardware integration is something ChatGPT, Grok, and Gemini don't currently boast.

Full-duplex voice mode
This one isn't particularly new or unique, but it's the first such implementation for Meta's AI. For the uninitiated, full-duplex voice mode lets you chat with the AI in real time the way you would with another person: both sides can speak and listen at the same time, so you can jump in or interrupt rather than waiting for your turn. A few AI chatbots already have this feature.
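If "full duplex" is hard to picture, here's a minimal, purely illustrative sketch in Python of the idea: one task keeps listening for user speech even while another task is playing the AI's reply, so speaking up mid-response cuts playback short. None of this is Meta's actual implementation; the queues simply stand in for real microphone and speaker streams.

```python
import asyncio

async def listen(mic: asyncio.Queue, interrupts: asyncio.Queue):
    """Keep consuming user speech, even while the AI is talking."""
    while True:
        utterance = await mic.get()
        if utterance is None:  # session over
            break
        # In full duplex, speech arriving mid-response becomes an interrupt.
        await interrupts.put(utterance)

async def speak(response: str, interrupts: asyncio.Queue):
    """'Play back' a reply word by word, yielding to interruptions."""
    for word in response.split():
        if not interrupts.empty():
            print("\n[user interrupted; stopping playback]")
            return
        print(word, end=" ", flush=True)
        await asyncio.sleep(0.2)  # stand-in for audio playback time
    print()

async def main():
    mic, interrupts = asyncio.Queue(), asyncio.Queue()
    listener = asyncio.create_task(listen(mic, interrupts))

    speaking = asyncio.create_task(
        speak("Sure, umm, here is a fairly long answer to that question", interrupts)
    )
    await asyncio.sleep(0.5)   # the user talks over the AI mid-response...
    await mic.put("wait, stop")
    await speaking             # ...and playback stops early

    await mic.put(None)        # end the session
    await listener

asyncio.run(main())
```

In a half-duplex setup, by contrast, the app would only start listening once playback had finished, which is why older voice assistants made you wait them out.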
Meta uses it a bit differently, though. While you can still talk with Meta's AI in both directions, full-duplex voice mode also changes how the AI speaks back: it weaves in natural speech patterns, like pauses and filler words such as "umm." That's demonstrably different from how the AI typically talks to you, which should appeal to people who want a more conversational feel.
The app notes that the feature is in beta and doesn't use the most up-to-date knowledge base that the regular AI voice does, so you'll likely get worse answers if you use it. Once it hits primetime, though, it should be a neat little addition. In the meantime, the regular AI voice has options voiced by John Cena, Awkwafina, Judi Dench, Keegan-Michael Key, and Kristen Bell.
