Written by
Jerônimo do Valle
"Perceptual inconsistency" and the challenges of network infrastructure will require new technologies and large sums of money for the implementation of what we call "Metaverse".
If we ask Meta, or its peers, about the feasibility of the metaverse, the answer is confident: "Yes, it's just a matter of time. The challenges are vast, but technology will soon overcome them."
This may be true for many of the metaverse's problems: better displays, faster sensors, and stronger hardware will be fundamental. Not every obstacle, however, can be overcome by improving existing gadgets. The metaverse may run into technical barriers that cannot simply be buried under stacks of dollars, starting with the workings of the human body itself.
The vision of the metaverse promoted by Meta is a fully simulated "embodied internet," experienced through avatars. That implies a realistic experience in which users can move through space at will and pick up objects with ease. The metaverse as it exists today, though, is much less than that: movement is restricted, and objects rarely react as expected, if they react at all. Augmented reality experts say the reason is simple: you are not really there and you are not really moving; your brain knows this and rejects the illusion.
Meta often presents a demo of "friends around a virtual table." The company's marketing materials show avatars moving fluidly around the furniture, standing up and sitting down at will, interacting with the table and chairs as if they were real physical surfaces. The problem is that this cannot happen.
The table is not in your environment; it is only digital and appears only on the screen. With no physical reference, if a user pretends to lean on the table so that their avatar repeats the action, their "virtual hand" will pass straight through the furniture.
Developers might then fix the problem with "collision detection," which stops the hand from passing through the table. The trouble is that the object does not actually exist: even if the hand "stops" in the metaverse, the movement in the real world continues, disorienting the user. The sensation, accurately described, is a bit like that nasty prank of pulling away someone's chair a moment before they sit down.
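To make the mismatch concrete, here is a minimal sketch, in Python, of plane-based collision detection for a tracked hand. Every name in it (Vec3, TABLE_TOP_Y, clamp_to_table) is hypothetical and belongs to no real SDK; the point is only that the rendered hand gets clamped at the surface while the tracked real hand keeps sinking below it.

```python
from dataclasses import dataclass

@dataclass
class Vec3:
    x: float
    y: float
    z: float

TABLE_TOP_Y = 0.75  # assumed height of the virtual table surface, in meters

def clamp_to_table(tracked_hand):
    """Stop the avatar's hand at the virtual table top.

    The headset keeps reporting the real hand's position, which can sink
    below TABLE_TOP_Y; only the rendered hand is clamped, so the two
    diverge. That divergence is the 'perceptual inconsistency' above.
    """
    if tracked_hand.y < TABLE_TOP_Y:
        return Vec3(tracked_hand.x, TABLE_TOP_Y, tracked_hand.z)
    return tracked_hand

# The real hand has leaned 10 cm below the surface...
real_hand = Vec3(0.10, 0.65, 0.30)
virtual_hand = clamp_to_table(real_hand)
# ...but the avatar's hand stays pinned at the table top.
print(f"virtual y = {virtual_hand.y:.2f} m, real y = {real_hand.y:.2f} m")
print(f"felt mismatch = {TABLE_TOP_Y - real_hand.y:.2f} m")
```

The simulation can only ever correct its half of the loop: the rendered hand stops, but nothing stops the real one.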
Yes, Meta is working on EEG and ECG biosensors that could allow users to move through the metaverse by thought alone. That could widen the range of motion and prevent unwanted contact with real-world objects while the avatar moves through virtual space, but even that would not deliver total immersion: the table still does not exist, and the user still cannot "feel" its surface when their hand approaches it.
Experts believe this will limit the potential of a VR metaverse to short-lived activities, such as playing a game or shopping, which is little beyond what is already available. Augmented reality, on the other hand, strikes them as a more comfortable long-term solution. AR, unlike VR, "augments" the real world rather than creating a simulation, which avoids the problem of "perceptual inconsistency": with AR, the individual is interacting with a table that is actually there.
This discomfort, however, is not the only concern. There is also the nausea and dizziness caused by simply wearing a virtual reality headset, or rather, by the "mismatch between the visual and vestibular systems (the set of inner-ear organs responsible for detecting body movement and balance)."
These side effects happen because the brain gets confused about what it is trying to process. When the eyes see a moving scene, they tell the brain that the body is moving when, in fact, it is not. The brain interprets these conflicting signals as a malfunction of the organism and responds with adverse reactions, such as the sickness and dizziness that affect most people who have tried one of these devices.
As if all this were not enough, the metaverse also faces the challenge of moving data between users thousands of miles apart with near-zero latency. To be truly immersive, the "round trip" between a user's action and the simulation's reaction must be imperceptible, and in some cases "imperceptible" means less than 15 milliseconds.
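A back-of-the-envelope calculation shows how tight that budget is. Assuming signals in optical fiber travel at roughly two thirds of the speed of light (a common rule of thumb), a 15 ms round trip caps the one-way distance at about 1,500 km, before counting routing, queuing, rendering, or display time:

```python
# Rough check of the 15 ms budget: how far apart can users be before
# propagation in fiber alone consumes it? The 2/3-of-c factor is a
# standard approximation for light in optical fiber.

SPEED_OF_LIGHT_KM_S = 299_792   # speed of light in vacuum, km/s
FIBER_FACTOR = 0.67             # signals in fiber travel at about 2/3 c
BUDGET_S = 0.015                # the 15 ms round-trip budget from the text

one_way_km = SPEED_OF_LIGHT_KM_S * FIBER_FACTOR * BUDGET_S / 2
print(f"max one-way fiber distance within 15 ms RTT: ~{one_way_km:,.0f} km")
# ~1,506 km; real networks add routing, queuing, rendering, and display
# time, so the usable radius around a server is far smaller in practice.
```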
Users can be anywhere in the world, and the path their data travels may not be under the platform's control. To address this, metaverse platforms would need more than "scale": an entire network infrastructure would have to be built, spanning many server clusters working together across multiple data centers. To reduce latency further, service providers might need to place rendering and computing power at the network edge while backhauling data to central servers, exponentially increasing the cost of the system as a whole.
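As an illustration of that edge-first design, a platform could route each session to the closest cluster that fits the latency budget. The sketch below is hypothetical (the function, cluster names, and RTT figures are invented), but it shows why only a nearby edge node can stay under 15 ms:

```python
# Route each user to the cluster with the lowest measured round-trip time.

def pick_edge(rtts_ms, budget_ms=15.0):
    """Return the lowest-latency edge cluster within the budget, or None."""
    best = min(rtts_ms, key=rtts_ms.get)
    return best if rtts_ms[best] <= budget_ms else None

# Hypothetical measurements from one user in South America:
measured = {"edge-saopaulo": 9.5, "edge-miami": 38.0, "central-virginia": 120.0}
print(pick_edge(measured))  # -> edge-saopaulo: only the nearby edge fits 15 ms
```

The central cluster never qualifies, which is exactly why providers end up paying for compute in many regions rather than one big data center.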
The problems of "perceptual inconsistency" and network infrastructure can be solved, but it will clearly take many years of work, scientific research, and large, or rather "huge," sums of money. Meta's Reality Labs lost more than $20 billion over the last three years. That, it seems, is just the tip of the iceberg.