I’m happy to keep bouncing ideas, but ultimately figuring that out is exactly what will turn this idea into a project.
That said:
I see your reasoning there. It would integrate things better and make the gloves less bulky.
Unfortunately that would not work with rotary-encoder-based gloves without a major overhaul of how they work.
A rotary encoder setup like this is nothing more than a spring-loaded spool of wire (or something similar) with a sensor that detects how much of the wire has been pulled out of the spool.
With this you have one numeric value which you can correlate to something…
In the case of the LucidVR gloves, each spool gives you a number that translates to the state of one finger, like this:
Measure how much wire is out of the spool when you stretch your finger. This is your baseline.
Then measure how much wire is out of the spool when you close your finger. This is your maximum value.
Then the software maps “current length out” minus the baseline onto a 0–1 range, and folds the finger in the 3D model accordingly.
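The steps above can be sketched roughly like this (a hypothetical illustration, not LucidVR’s actual code; `read_spool` is an assumed function that returns how much wire is out of the spool, in mm):

```python
def calibrate(read_spool):
    """Record the wire length at full stretch and full curl."""
    input("Stretch your finger, then press Enter")
    baseline = read_spool()   # wire out with the finger extended
    input("Close your finger, then press Enter")
    maximum = read_spool()    # wire out with the finger curled
    return baseline, maximum

def curl_value(length, baseline, maximum):
    """Map the current wire length to a 0-1 finger-curl value."""
    span = maximum - baseline
    if span == 0:
        return 0.0
    value = (length - baseline) / span
    # Clamp, since the wire can drift slightly past either calibration point.
    return max(0.0, min(1.0, value))
```

So halfway between baseline and maximum reads as a half-closed finger, and the 3D model is posed from that single number.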
There are numerous issues with that approach. One of them is that you only track one value for each finger, so your hardware cannot tell when you are spreading your fingers sideways.
Worse, if you spread your fingers sideways, it will think you are folding them slightly instead.
Why am I explaining all that?
Because once you move the spools up to the arms, the wires will pass over other joints, at the very least the wrists.
When that happens, bending your wrist even slightly will make the system think you are bending all your fingers.
So… for every joint the wires pass over, you would need to add extra spools and wires to track that joint, just to compensate for the joints further down the line…
Not only would that make for a system with a huge margin of error, but you would also end up with about 20 spools attached to your shoulders and wires running along the outside of your arms… forcing you to stay standing, and easily snagging on anything that touches your arm…
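To see why the error piles up, here is a rough sketch of the compensation idea (all names and numbers are hypothetical): the finger wire also lengthens when the wrist bends, so a dedicated wrist spool has to be subtracted out before the finger reading means anything.

```python
def corrected_finger_length(finger_spool_mm, wrist_spool_mm, wrist_factor=1.0):
    """Subtract the wire length the wrist bend added to the finger reading.

    wrist_factor is how many mm of apparent finger wire each mm of wrist
    wire represents. It would have to be calibrated per joint, and any
    error in it shows up directly as finger-tracking error.
    """
    return finger_spool_mm - wrist_factor * wrist_spool_mm
```

And that is just one extra joint. Add the elbow and shoulder and you are chaining several calibrated correction factors per finger, each contributing its own error to the final value.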
Not to mention that the haptic feedback from that system would be a huge mess: you might feel your elbow being pulled back because your finger touched a 3D object.
Also, scaling up like that would put extra strain on the Bluetooth module and the batteries, among other technical issues…
In short… LucidVR is a system that barely works for finger tracking, and you definitely don’t want that style of sensor on a suit.
There are many other methods for tracking motion, some of which are even mentioned in the video you linked.
You just need to read into a bunch of them, try to figure out what could work and what would cause issues, and then try to come up with a theory to fix those issues.
For this stage you need neither money nor CAD skills nor parts; it’s purely reading and thinking.
If it serves as an incentive: if you can crack how to make cheap and efficient full-body haptic suits, you’re set for life!
That’s an easy grab for investment money, and currently full haptic suits sold to military personnel go for a few hundred thousand a pop… and those are still at the experimental stage.
So I can see that suit of yours, with a revamped look, being a goldmine.
If you can crack it!