"A mocap session often involves multiple characters, complex choreography, stunts, or fights. It's a high-pressure environment, and the last thing you want is to spend a single minute troubleshooting equipment while your actors wait."
Han Yang is an industry veteran and now a self-employed Unreal Filmmaker and Content Creator. Previously he worked at Method Studios on productions such as Detective Pikachu, Logan and Aquaman.
On his YouTube channel, he published an in-depth review and stress test directly comparing AI motion capture (Move.ai) with the Xsens MVN Link motion capture system.
In the video, he runs both simple and more complex scenarios to see how the data behaves, helping you make a more informed decision when weighing different motion capture options for your project.
Han's main question:
Is AI Motion Capture powered by iPhones production ready?
An overview of Han Yang's findings:
Motion capture volume
- To get a good capture volume and decent data quality from AI mocap, you will need multiple good-quality phones or cameras.
- To set up an AI motion capture volume with cameras, you need a well-lit environment in a decent-sized space. It can be a challenge to have such a space available every time you need to capture data.
- You need an internet connection to upload and synchronize the camera data, which can be risky in outdoor environments.
- The Xsens suit gives you full flexibility when it comes to space: you can use whatever space you have available and capture without the hassle of setting up a volume. There is also no essential need for an internet connection.
Data quality
- Basic motions processed with AI mocap look good, such as running, walking, turning, breathing, and limb movement.
- When occlusion starts to dominate, such as rolling on the ground or acting close to the edge of the volume, the accuracy of the AI data drops significantly. The Xsens system delivers consistently accurate data in such conditions.
- Fast, high-frequency motions like rapid punches and jumps are no challenge for the Xsens system, but the computer-vision pipeline has trouble processing them correctly.
- AI mocap can track props, such as a ball; with the Xsens data, you will need to add props in post.
- Multiple characters can be challenging for computer vision because of occlusion.
Shooting experience
- The Move.ai app does not give you a live preview, so you don't know what you have captured until you have processed the footage.
- The Xsens software gives you a live preview of the data, which makes it easier to curate shots live on set, so you don't have to redo them later if the data doesn't turn out as expected.
- The shooting experience with cameras can be challenging; you will probably need a few attempts to find the workflow that works best for you.
- The Xsens system gives a consistent and reliable user experience for both indoor and outdoor shoots.
Data processing
- To process the iPhone motion capture data, you need to upload it. You get 30 minutes of processed data, and there are additional costs beyond that point. The 30-minute quota can be a constraint if you do regular mocap sessions.
- The investment in Xsens hardware, software, and processing is higher than for AI motion capture; depending on your budget and goals, there are different options.
Go check out his full video here:
His specific situation involves multiple characters, complex choreography, stunts, and fights, so it's important to have a reliable system that lets him spend every minute capturing data instead of troubleshooting.
But, in the end, your own budget and type of project will define your motion capture needs.
If you would like to learn more about the Xsens motion capture system, visit the MVN Animate page, or request more information via this link.