Burgos captures the future of communications
Our relationship with the digital world is changing. Behind the screen, everything grows by the day. New technologies become universal under the forces imposed by the market, a continuous transformation that multiplies opportunities but demands giant strides to stay at the forefront of an increasingly global world. For this reason it is important to speak the language of the future: to understand that things are changing and to position oneself to keep moving in that direction.
This is not an easy task, because the way we communicate, socialize, learn, flirt, do business… has nothing in common with what it was just a few years ago. Technology has become a tool for social change. However, a perception of dehumanization often leads many people to set it aside, despite the benefits it can bring. Giving it that human warmth is the aim of the DINper Research Group at the University of Burgos (UBU).
This team is part of a group of universities that, together with Meta, are promoting research in computer vision and its applications in virtual reality. Professor Pedro Luis Sánchez Ortega explains that the work consists of collecting egocentric and exocentric data sets of various human activities. In his opinion, this is a contribution that can change the way society interacts with the digital world and, he adds, it will allow future artificial intelligence to learn from real data and from a human perspective.
Aria, as the project is called, is made possible by a research device in the form of ordinary-looking glasses that capture real-world data from a first-person perspective. “Its goal is to provide recordings so that future virtual reality devices can know the location, context and intentions of their wearers. The device’s sensors capture the user’s video and audio and also track their gaze, where they are constantly looking,” he says, adding that in this way it gathers all the information about the situation and its surroundings. “On-device computing helps researchers understand how augmented reality can work in the real world.”
The initiative already comprises more than 3,670 hours of video and images, the most diverse set of egocentric data in the world. “These images capture people performing tasks from their own point of view and, in some cases, from the outside at the same time,” says Pedro Luis Sánchez Ortega, who is clear that this information will help give artificial intelligence models a true understanding of how people perform various actions.
In his opinion, this work is innovative because it uses advanced technologies to collect information always from the user’s point of view, contributing to the development of egocentric research in the fields of machine perception, augmented reality and virtual reality. “Egocentric research is a little-known area, although the devices available have changed: previously, research equipment was very expensive and bulky. Now, as a research device, we have lightweight glasses with far-from-ordinary technology; in addition to the built-in components, they carry five cameras, seven microphones, inertial sensors, magnetometers, a barometer, GPS, and WiFi and Bluetooth signals,” emphasizes the professor at the University of Burgos.
As for the benefits, he states that the project rests on its repository of everyday activities, although DINper wants to contribute the perspective of people with disabilities. For now they are still at the recognition stage and have not yet reached the point of application; however, Sánchez Ortega points out that they have previous experience in implementing and developing disability support tools and will try to make this one of the differentiators their team brings to Aria.
How did they get here? He says that after working with virtual and augmented reality headsets, they became convinced that the advent of artificial intelligence would bring major changes to these fields. For that, he continues, they had to think about the future of self-learning machines, as has already happened with language models and the generative artificial intelligence that creates new images from text.
In the project “A More Inclusive Virtual Reality”, the team proposed virtual reality headsets with simplified interactions for users with limited mobility who have difficulty using conventional controllers. “The system already generated the virtual environment automatically by recognizing walls. In a short time it became possible to interact using hand movements alone.”
In this regard, they acknowledge that they are already recording and collecting data, but the tests are not easy, because the devices must be perfectly tuned and the sensors calibrated to produce standardized results. Looking to the future, the professor anticipates international collaboration with partners including universities from every continent and technology institutes such as MIT. They would also like to share their interests with the Latin American community.