Wednesday, February 5, 2025

New app performs real-time full-body motion tracking with a mobile device


Northwestern engineers have developed a new system for recording whole-body movements – and it doesn’t require specialized chambers, expensive equipment, bulky cameras or an array of sensors.

Instead, it requires nothing more than a simple mobile device.

The new system, called MobilePoser, uses sensors already embedded in consumer mobile devices, including smartphones, smartwatches and wireless earbuds. Using a combination of sensor data, machine learning and physics, MobilePoser accurately tracks a person’s body posture and its global translation in space in real time.

“MobilePoser runs in real-time on mobile devices and achieves state-of-the-art accuracy through advanced machine learning and physics-based optimization, unlocking new possibilities in gaming, fitness and indoor navigation without the need for specialized equipment,” said Karan Ahuja from Northwestern, who led the study. “This technology marks a significant leap towards mobile motion capture, making immersive experiences more accessible and opening doors for innovative applications across industries.”

An expert in human-computer interaction, Ahuja is the Lisa Wissner-Slivka and Benjamin Slivka Assistant Professor of Computer Science at Northwestern’s McCormick School of Engineering, where he directs the Sensing, Perception, Interactive Computing and Experience (SPICE) Lab.

Ahuja’s team unveiled MobilePoser at the 2024 ACM Symposium on User Interface Software and Technology in Pittsburgh.

Limitations of current systems

Most moviegoers are familiar with motion capture techniques, which are often revealed in behind-the-scenes footage. To create CGI characters — like Gollum in “The Lord of the Rings” or the Na’vi in “Avatar” — actors wear skin-tight suits covered in sensors as they prowl through specialized rooms. A computer captures the sensor data and then displays the actor’s movements and subtle expressions.

“This is the gold standard for motion capture, but it costs more than $100,000 to implement this setup,” Ahuja said. “We wanted to develop an accessible, democratized version that basically anyone can use with the equipment they already have.”

Other motion-detection systems, such as Microsoft Kinect, rely on stationary cameras that sense body movements. These systems work well as long as a person stays within the camera’s field of view, but they are impractical for mobile or on-the-go applications.

Predicting poses, no camera needed

To overcome these limitations, Ahuja’s team turned to inertial measurement units (IMUs), which use a combination of sensors — accelerometers, gyroscopes and magnetometers — to measure a body’s motion and orientation. These sensors are already built into smartphones and other devices, but on their own they are too inaccurate for motion tracking applications. To improve their performance, Ahuja’s team added a custom, multi-stage artificial intelligence (AI) algorithm, which they trained on a publicly available, large dataset of synthesized IMU measurements generated from high-quality motion capture data.
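A standard way to see why raw IMU signals need help is the classic complementary filter: the gyroscope tracks orientation well over short spans but drifts, while the accelerometer’s gravity reading is noisy but drift-free, so the two are blended. This is a minimal, generic sketch of that idea — it is not MobilePoser’s algorithm, and the sample values are invented:

```python
def complementary_filter(gyro_rates, accel_angles, dt=0.01, alpha=0.98):
    """Fuse two noisy estimates of a single tilt angle.

    gyro_rates:   angular-velocity samples (rad/s) from the gyroscope
    accel_angles: tilt angles (rad) derived from the accelerometer's
                  gravity vector

    Each step integrates the gyro (accurate short-term, drifts long-term)
    and nudges the result toward the accelerometer's drift-free but noisy
    reading, weighted by alpha.
    """
    angle = accel_angles[0]  # initialize from the drift-free sensor
    for rate, accel_angle in zip(gyro_rates, accel_angles):
        angle = alpha * (angle + rate * dt) + (1 - alpha) * accel_angle
    return angle


# With a silent gyro and a steady accelerometer reading, the estimate
# simply holds at the accelerometer's angle.
steady = complementary_filter([0.0] * 100, [0.5] * 100)
```

MobilePoser’s learned model replaces this kind of hand-tuned fusion, but the drift-versus-noise trade-off it illustrates is exactly the weakness of consumer IMUs that the article describes.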

MobilePoser uses the sensor data to obtain information about acceleration and body orientation. This data is then fed through an AI algorithm, which estimates joint positions and rotations, walking speed and direction, and contact between the user’s feet and the ground.
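The quantities the pipeline estimates can be pictured with a toy container and one stand-in heuristic. Everything here is illustrative — the field names are assumptions, and the variance-based foot-contact test is a common simple heuristic, not MobilePoser’s learned estimator:

```python
import statistics
from dataclasses import dataclass


@dataclass
class PoseEstimate:
    """Illustrative container for the quantities the article lists."""
    joint_positions: list   # per-joint 3-D positions
    joint_rotations: list   # per-joint rotation estimates
    walking_speed: float    # metres per second
    heading: float          # walking direction, radians
    left_foot_down: bool    # foot-ground contact flags
    right_foot_down: bool


def foot_in_contact(accel_magnitudes, threshold=0.3):
    """Toy stand-in for a learned foot-contact estimator: a foot planted
    on the ground barely accelerates, so low variance in its acceleration
    signal suggests contact."""
    return statistics.pvariance(accel_magnitudes) < threshold
```

A planted foot reports near-constant acceleration (roughly gravity), while a swinging foot produces large swings in magnitude, so the variance test separates the two cases.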

Finally, MobilePoser uses a physics-based optimization to refine the predicted movements so they match real body movements. In real life, for example, joints cannot bend backward and a head cannot rotate 360 degrees. The physics optimization ensures that the captured motion never does anything physically impossible.

The resulting system has a tracking error of only 8 to 10 centimeters. By comparison, the Microsoft Kinect has a tracking error of 4 to 5 centimeters, assuming the user stays within the camera’s field of view. With MobilePoser the user has the freedom to roam around.

“Accuracy is better when a person wears more than one device, such as a smartwatch on their wrist and a smartphone in their pocket,” says Ahuja. “But an important part of the system is that it is adaptive. Even if one day you don’t have your watch and only have your phone, it can still estimate your whole-body pose.”

Possible applications: A proactive assistant

While MobilePoser could offer gamers immersive experiences, the new app also opens up new possibilities for health and fitness. It goes beyond simply counting steps: it lets users view their entire body posture, so they can check that their form is correct while exercising. The app can also help doctors analyze patients’ mobility, activity level and gait. Ahuja also envisions the technology being used for indoor navigation — a current weak point for GPS, which only works outdoors.

“Currently, doctors track patients’ mobility with a pedometer,” says Ahuja. “That’s a bit sad, right? Our phones can tell us the temperature in Rome. They know more about the outside world than about our own bodies. We want phones to become more than just intelligent pedometers. A phone should be able to detect different activities, determine your posture and be a more proactive assistant.”

To encourage other researchers to build on this work, Ahuja’s team has released its pre-trained models, data pre-processing scripts, and model training code as open source software. Ahuja also says that the app will soon be available for iPhone, AirPods and Apple Watch.


