The Ray-Ban Meta goes multimodal and confirms that it is on the opposite path to the Vision Pro: from less to more.

  • Ray-Ban Meta has been updated to be multimodal, combining visual and audio information…

  • It is an approach that starts from a natural form factor and will improve technologically over time.

  • Vision Pro takes the opposite path: it launches with every possible technology and will gradually miniaturize.

A few days ago we were able to try out the Ray-Ban Meta, the connected glasses born of the agreement between both manufacturers, and we found that not only are they a significant improvement over their Stories predecessors, but they are also very comfortable to use: a compelling product, or at least an attractive concept for envisioning a future in which wearing connected glasses no longer looks like excess.

These glasses, on the market for several months now, have been updated and become much more functional thanks to the built-in Meta AI assistant, which arrives on them in multimodal form. That is, it can combine audio, text processing and image recognition.

And in the process, they reinforce our theory about the path Meta is taking.

Aiming for the maximum, aiming for the minimum

This update lets you get much more out of these glasses, especially in the US, since outside it the assistant is quite limited. The assistant now, according to Edge, allows you to do things like simply saying “Hey Meta” and asking, “What plant is this?”


The camera is responsible for sending the image to Meta AI, which recognizes it and returns a response directly to our ear.

The same goes for something like “What does this sign say?” when it is written in a language we do not understand, so it can be translated aloud, or “Tell me what this monument is and tell me about it.” And all of this is integrated into glasses that are not much different from any others and, moreover, have a recognizable look, like the classic Wayfarer.

Meta has two physical products that define its ambitions well: Meta Quest 3 for virtual and augmented reality, and Ray-Ban Meta as a wearable based on artificial intelligence and augmented reality, operated by voice. They live and grow in parallel.

The expected evolution is that Ray-Ban Meta ramps up capabilities, sensors and miniaturization to gradually get closer to Meta Quest 3.

They will never be the same, nor do they need to be: traditional glasses let in light, which prevents a satisfactory immersive environment from being created, assuming that is even achievable. But we can imagine that at some point many of the Quest’s current capabilities will be replicated in the Ray-Ban.

There is continuous learning between the two: the Ray-Ban gains the Quest’s capabilities, and the Quest miniaturizes until it approaches the naturalness of the Ray-Ban.

And now the comparison with Apple’s proposal. The Vision Pro is an example of the best Apple could create around augmented reality (and also virtual reality, although that is not what the company is keen to promote). It represents an enormous deployment of resources and technical development, pushing the bill up to $4,000.

In other words, the Vision Pro is an approach from the maximum, which must be made natural as far as miniaturization allows, while the Ray-Ban Meta is an approach from the minimum, to which enhancements are added as technology allows, without distorting its natural appearance.

Now we just have to wait for time and technological development to make the Vision Pro somewhat smaller, less intrusive and less bulky. But Apple has no parallel product taking the opposite approach.

At some point, the Vision Pro will look like ski or running goggles: very wide, with a single panel covering the entire field of view, and much more natural than it is now.

Meanwhile, the Ray-Ban Meta is capable of much less, but what it does, it does very well, and it has plenty of room to take advantage of technological developments (displays, circuitry, batteries, power efficiency, processor miniaturization) until much more is integrated into a form factor similar to the current one.

The Vision Pro gets the attention right now, but the Ray-Ban Meta, even more so now that it is multimodal, costs ten times less and is closer to what we can do with it on a daily basis without losing touch with the real world around us.

In Xataka | LLaMA 3: what it is and what’s new in the new version of the AI that will be integrated into Facebook, Instagram and WhatsApp via Meta AI

Featured Image | Xataka
