Device for general purpose augmentation - "ACUMA"? (GPT-3 invented the name)

That is a very cool project!
Love it!!

That is a good example of a great focused project!
They know exactly what they want from it, what it will do and they set boundaries for what it will not do.

But… have you read the section “The Somatic will not”?
It’s right on the first page of that link, and the first line in that section seems to me like a dealbreaker for your project.

Also,
A good example of how they designed it in a doable way can be seen here…

Yes. and No.
Yes, it is designed to control a wearable computer through gestures.
That is not the same thing as “designed to control any computer you plug it on”.

In fact they even make it very clear that it won’t work to type on a virtual keyboard.

Your project, on the other hand, feels like it needs to “do everything” and “plug into a lot of use cases”… and that is a huge constraint.

I hope you see the following as constructive criticism.

I love your enthusiasm, but you lost me very early on. You got zero buy-in from me, and I had to force myself to read through the remainder of this thread, which I found very hard work.

I was curious and I guessed 17-18.

They actually sound sensible to me, and you sound like you may need reining in.
A spontaneous spirit rather than a considered one. Don’t get me wrong, this is not necessarily a bad thing, but unless you have disposable income, for a project like yours you may need a little more control.
Remember:
If you move out, you may have more freedom of choice, but that will likely come with LESS financial freedom, aka bills.

For me, I think you need to get people onboard, and maybe take a step back.
Throwing all this stuff out there and hoping it sticks is probably not the most efficient strategy.

Personally this is what I think you should do.

Watch some YouTube videos on project management (I would suggest books, but I don’t think that would fit your style). There are a whole variety, but most follow a pretty similar structure:

  1. Initiation
  2. Planning
  3. Execution
  4. Monitoring and control
  5. Closure

Don’t get ahead of yourself. Take the time, especially in the planning phase.
You want to build on great foundations; you’re trying to build on sand at the moment.

I think you should re-read all the great feedback you have received above, and use that advice to structure a plan.

You have a great ally in @Eyeux , but from the outside it looks, to me at least, like you are wasting his time, and it may wear thin rather quickly.
I’m surprised he has stayed engaged for as long as he has, but that says more about him than it does about your project.

Go and do some research. That doesn’t mean finding something on the surface that is close to what you want; you actually need to get into the weeds and see how it will fit into your plan.
You will find key decision points along the way, and these may shape the direction you take, but you need to stop putting the cart before the horse.

Then watch some successful Dragons’ Den episodes, and see the approach pitchers take to get buy-in from the Dragons.
This is YOUR project, but you need help, so you have to sell your passion; right now your enthusiasm is not infectious, it is more distracting.

Then try coming back with a methodical, logical and feasible plan.
Present it in a way that makes people want to help and makes it easy for them to do so, rather than wasting their time curbing your enthusiasm.

Then you can move on to the fun stuff…
…the execution.

Anyway
That’s my advice. Take it or leave it, but either way I hope at least some of it helps, and I wish you luck with your project.


That’s some hella great advice!

I want to see where this one ends.
Quite invested in the growing-by-the-minute possibility this is an AI chatbot experiment. :sweat_smile:

I mean… Jokes aside, this whole shebang fits one of three possibilities:

  • an AI chatbot attempting a Turing test
  • someone with quite strong ADHD and an interest in a field that is not forgiving of it
  • an overenthusiastic headline kid

The first one intrigues me :unicorn_magic:
The second one deserves my time :unicorn_good_luck:
The third one will tire itself out :unicorn_i_dont_know:


Well, technically I do have ADHD. Also ASD, which is why I’m not always great with words, tbh.

I probably do as well.
I mean, apparently my trademarked caffeine naps are a symptom of ADHD. :sweat_smile:

either way…

Are you in a good direction now?

Like, what do you mean as far as direction? Tbh this project kinda started out of something else, but I figured I could try to expand the idea to see if that would be better.

It looked like you were on a spray-and-pray approach.

While that’s great for brainstorming, it’s also highly counterproductive when it comes to actually bringing a project to fruition.

Now it seems like you got to a good stage to choose where to direct your energy here.

Mostly because anyone willing to help would need a clear direction of what needs to be done next.

Well mainly this is what I’d want (I’ll describe the sorts of usage modes of the device)

  1. AR mode - you can see the world, but with information overlaid from sensors attached to the suit, etc. You can also run AR apps.
  2. VR remote presence - allows you to see the world from the point of view of something you’re controlling.
  3. VR apps - VRChat-type apps can be used.
  4. Direct control mode - controlling something directly connected to the suit.

Now, I believe full-body tracking could be done by adding hinges with position sensors mounted exactly over where the joints in someone’s body would be in the suit. These would allow normal flexibility but would also track movement (using rotary encoders and things like that). After I thought for a bit, I realized that the LucidVR data gloves should work reasonably well for normal tasks, since you shouldn’t need to put much inside the gloves (palm area) where you’d normally hold objects.
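Just to sketch what the encoder-at-each-joint idea might look like in code (a minimal, hypothetical sketch: the counts-per-revolution value and the two-joint 2D arm model are assumptions for illustration, not anything from a real encoder or suit):

```python
import math

# Assumption: one rotary encoder per suit joint, reporting raw counts.
# 2048 counts/rev is an invented figure; use your encoder's datasheet value.
COUNTS_PER_REV = 2048

def encoder_to_degrees(counts: int) -> float:
    """Convert a raw encoder count to a joint angle in degrees."""
    return (counts % COUNTS_PER_REV) / COUNTS_PER_REV * 360.0

def forward_kinematics_2d(shoulder_deg: float, elbow_deg: float,
                          upper_len: float = 0.30, fore_len: float = 0.25):
    """Estimate wrist position from two joint angles (toy 2D arm model).

    Lengths are in metres; elbow angle is relative to the upper arm.
    """
    s = math.radians(shoulder_deg)
    e = math.radians(shoulder_deg + elbow_deg)
    x = upper_len * math.cos(s) + fore_len * math.cos(e)
    y = upper_len * math.sin(s) + fore_len * math.sin(e)
    return x, y
```

Chaining angles like this per limb is roughly how the suit could turn a handful of cheap rotary encoders into a full-body pose, without cameras.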

Also, a sort of AR app control system could possibly work too, allowing you to control drones and such without having to use VR if you don’t want to, while still showing controls in an overlay, possibly controlled by just sticking your hand into the video feed and tilting it, or something like that.

What about something like this ?

SlimeVR Full-Body Tracker | Crowd Supply


Well, the general idea was a singular device which could be styled however the wearer wanted, used a single power supply (like a large battery and/or possibly something else), and could have stuff added to it (like a tail or something). Those would probably not be the best fit for that. I mean, possibly they could work, but I’m not 100% sure. If you just add rotation sensors on someone’s joints, that’d probably be more accurate.

So kind of a “diving suit” with sensors attached to the various hinge locations and a VR headset ?

Well, also the LucidVR gloves and stereoscopic cameras for AR, but also probably places where things can be mounted to add stuff (sensors, etc.), as well as ports to connect things to.

I’ll try to be succinct and direct. So if I sound rude, that is not my intent.
(Just trying to keep things short to help you)

That is a great idea. It’s also a good objective.

But you might be thinking backwards.

i.e. most of what you describe still needs an application to be used for.

for example:

First you need to develop something that can be controlled remotely.
And the last step is to build your suit.

As I see it:

  • Step 1. Idea: Done. :white_check_mark:
  • Step 2. Build a proof-of-concept controller: To-do :negative_squared_cross_mark:
  • Step 3. Build a proof-of-concept controllable: To-do :negative_squared_cross_mark:
  • Step 4. Develop an interface (the software that connects the controller to the controllable): To-do :negative_squared_cross_mark:
  • Step 5. Build a prototype: To-do :negative_squared_cross_mark:
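To make the interface step concrete, here’s a rough Python sketch of how the controller side (gestures) could stay decoupled from whatever you end up controlling. Every name here (`ToyDrone`, the gesture map, the commands) is invented for illustration:

```python
from abc import ABC, abstractmethod

class Controllable(ABC):
    """Anything the suit can drive: a drone, a rover, an app window…"""
    @abstractmethod
    def handle(self, command: str) -> str: ...

class ToyDrone(Controllable):
    """Stand-in 'controllable' that just records what it was told."""
    def __init__(self):
        self.log = []
    def handle(self, command: str) -> str:
        self.log.append(command)
        return f"drone: ok ({command})"

# The controller maps recognized gestures to abstract commands,
# so it never needs to know which device is on the other end.
GESTURE_MAP = {"palm_up": "ascend", "palm_down": "descend", "fist": "hover"}

def dispatch(gesture: str, target: Controllable) -> str:
    command = GESTURE_MAP.get(gesture, "noop")
    return target.handle(command)
```

The point of the split is that the proof-of-concept controller (Step 2) and controllable (Step 3) can be built and tested separately, meeting only at this thin interface.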

For the Proof of concept controller, all you need is:

  • something like a Meta Quest, or even Android VR.
  • LucidVR or Leap Motion.
  • Unity 3D, to develop a “game” that takes your gestures and shows them back to you in VR.

Once you’ve done that, you need the Proof of Concept controllable.
So…
What would you want to control with it?

Well, that’s sorta like something it can do, but it’s more like a new sort of computer, tbh, because it can do a lot more than simply control stuff.

I had an idea for a UI involving a sort of window in midair. When you put your hand through it, it adds small labels over gestures you can use to interact with it, and to use whatever app is controlling that window you just make hand gestures with your hand stuck through the window. Since the window only responds when your hand is in it, it wouldn’t get in the way of anything.
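That “only responds when your hand is in it” rule is basically a 3D hit test. A minimal Python sketch, assuming hand positions arrive from your tracking system in the same coordinate space as the window (the box layout and names are my assumptions):

```python
from dataclasses import dataclass

@dataclass
class MidairWindow:
    # Axis-aligned box in tracking space (metres).
    x: float; y: float; z: float      # corner nearest the origin
    w: float; h: float; d: float      # width, height, depth

    def contains(self, px: float, py: float, pz: float) -> bool:
        """True if the point (e.g. a tracked hand) is inside the window volume."""
        return (self.x <= px <= self.x + self.w and
                self.y <= py <= self.y + self.h and
                self.z <= pz <= self.z + self.d)

def route_gesture(window: MidairWindow, hand_pos, gesture: str):
    """Forward a gesture to the window's app only while the hand is inside it."""
    if window.contains(*hand_pos):
        return gesture          # the app behind the window gets the gesture
    return None                 # hand outside: the window ignores everything
```

So gestures made anywhere else in the room stay free for other uses, which is exactly the “doesn’t get in the way” property.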

Yes. but in order to build it, you need to do it part by part.
Start small, then grow it item by item.

That interface you described is something you can start on!

If you like Lucid, build one hand of it.

Then build that interface you just described in Unity.
You can export it to work on Google Cardboard VR…

So all you need is $11 for one hand of Lucid and your phone…
And you can begin the project.

Actually, I don’t have a PC rn, tbh. However, another benefit of this over a conventional system is that it works anywhere and is portable but has full-body tracking. Since it uses a VR headset instead of conventional AR, it would likely have a better FOV than, say, the Microsoft HoloLens.

So it’d be like having a computer that uses gestures, like the one in Minority Report, except it’s AR, has haptic feedback, and is in a suit, so you’d have it 24/7.

Plus, imagine what you could do by adding sensors. You could, say, add sensors to detect the electromagnetic fields emitted by wires with power flowing through them, and use the haptic gloves to feel that. Imagine how many jobs could be improved by something like this that can do a ton of stuff just by connecting a cheap sensor.

Plus, it’d have a browser. Think how cool it’d be to tell someone “Hold on, let me look that up”, then, without pulling out a phone or anything, just throw open an AR browser window and look something up.

Speaking of phones, many Android phones support video output over the USB-C port. You could dock your phone in a pocket or something and have the screen show up in a resizable AR window that would feel, and possibly even look, like a real screen (or it could be sci-fi-ish and slightly transparent; either way, haptic feedback helps).

Also, think what you could do with a VNC client. You could have a PC available 24/7 with a massive hi-res monitor (plus a keyboard and mouse) available with just a gesture. It’s basically a new form of computing.

The whole concept is really cool!
Although I personally would rather achieve that without the suit…
(hello, BCI!)

There’s just one thing you seem to keep forgetting:

Just like a new brand of smartphone, or a new operating system…
You need apps to use on it, or it will have no use.

Take “piloting a drone” for example…

even right now, I can technically pilot a drone using a keyboard.
But… I need to plug my keyboard to the drone first, right?

But drones don’t have keyboard connectors.
So…
You need to take a drone and hack it… then make a hardware connector, and then write a software interface for it.

Then you can control it.

But even then, that’s not gonna be universal.
Each different drone/make/model will require a whole new connector and a whole new interface.
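In code terms, that means one adapter per make/model, all hiding behind the same call. A hedged Python sketch — the drone names and wire formats below are completely invented, just to show the shape of the problem:

```python
class QuadXAdapter:
    """Hypothetical drone that speaks newline-terminated ASCII."""
    def send(self, command: str) -> bytes:
        return f"{command}\n".encode("ascii")

class HexaProAdapter:
    """Hypothetical drone that wants a length-prefixed binary frame."""
    def send(self, command: str) -> bytes:
        payload = command.encode("ascii")
        return bytes([len(payload)]) + payload

# One adapter per make/model; the suit only ever sees the registry.
ADAPTERS = {"quadx": QuadXAdapter(), "hexapro": HexaProAdapter()}

def send_to(model: str, command: str) -> bytes:
    # The suit calls .send() uniformly; the adapter hides the wire format.
    return ADAPTERS[model].send(command)
```

Every new drone means writing (and maintaining) another one of these adapters, plus the physical connector, which is why “universal” is the expensive part.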

It’s sort of a chicken-and-egg situation.

Well, still, could it be possible to somehow program the thing to display something like a desktop environment (using Linux or something)? Like, modify a normal desktop environment to run normal Linux apps on it?