Device for general purpose augmentation - "ACUMA"? (GPT-3 invented the name)

Hi! So I had an idea for something I could build. I'm not the greatest writer in the world, so I took the general ideas for the project and had GPT-3 come up with a name and write a short intro, which I'm including below. I don't really have the funds or resources to build it, but I did come up with most of the tech behind this, and it should actually work. If you need more info, I can provide it. I just figured an intro would be done better by AI than by me, to be honest.

FYI, I did edit the intro a bit. It's still not perfect, but it gives a taste of what the project is, and I could use more ideas to add to it. I had more things I wanted to include, but since this is just an intro I'm keeping it short. Some things I left out: using something like data gloves to track hand motions so devices can be controlled with gestures, and possible power sources, among others. Anyway, here's the introduction.

The ACUMA suit is a revolutionary device that offers cutting-edge technology to enhance human abilities. It's designed to be highly customizable, using the latest AI and powerful hardware to augment physical and mental capabilities. The suit utilizes Vantablack (a material that absorbs up to 99.965% of visible light, making it appear almost like a black hole; it sounds hard to get hold of, but apparently there are paints designed to look very similar) and RGB LEDs to create a unique look, while AR and VR headsets allow the wearer to be immersed in a digital world. The AI within the suit will learn about the wearer and assist them through various tasks, as well as build a digital model of them that would contain many elements of their personality, allowing a sort of "mind upload". The suit also comes with hardware designed to augment motion (such as running long distances) using electrical muscle stimulation.

The ACUMA suit is the perfect device for anyone from athletes to scientists, who want to take their abilities to the next level. With the ACUMA suit, you can enhance your physical and mental capabilities, as well as build a digital model of your personality and explore a digital world. The ACUMA suit is the future of personal augmentation, and with your help, we can make it a reality.


I don't think this is a home run

Comes off as chatbot spam with no actual content

I still don’t even understand what it is you want to make, just buzzwords


Well, basically, think of it as a suit that could act sort of like an exoskeleton (like the motorized ones), except without motors. Instead it uses electricity to stimulate your muscles, so rather than replacing them, it's actually strengthening them. I've heard of this tech being used for people who were paralyzed, letting them walk on their own legs by controlling the muscles with technology. Here, though, it would be controlled by your actual attempt to move. Since the tech is doing most of the work, you should theoretically use less energy, letting you run longer distances without tiring as quickly.

Another possibility is some kind of headset that reads signals in the wearer's brain and interprets them with AI algorithms trained on the signals read while the wearer is, say, walking. (I've seen articles where researchers got admittedly very blurry images of what someone was thinking of this way, so it is possible.)

But in general, the idea is a sort of wearable computer system that can tie into other things. Imagine, for example, if it could tie into a car. Because it has AR tech, you could literally control the car using just the suit. Basically it would be like walking around in VR while actually being in the real world. It's kinda hard to explain, but the general idea is that it's modular enough to tie into just about anything (drones, cars, etc.) and let you control anything you can make work with it, mostly using gestures, or even making it look like you are the thing you're controlling (using video feeds from cameras). And since it's modular, you could design and build things to put on it (say, sensors and tools) and have them integrate into it and be controlled by it.
Also, I was considering the possibility of having it (optionally) train an AI on everything you do in the suit. That should be useful for building something that responds to situations and input the way you would: sort of an AI that acts like you.

I've been thinking about the general idea for a while. It's supposed to be a way to completely integrate people with technology in the most intuitive way possible. Pulling a flat piece of plastic, glass, and metal from your pocket and poking it repeatedly isn't the most efficient way to use a computer. Instead of holding a plastic remote with a screen on it to control a drone, you could actually "be" the drone, controlling it with gestures and seeing through its cameras. And the ability to add functionality and customize it to look like anything is definitely an interesting prospect.

I believe we do have all the tech to do it; it's just a big challenge to put it together. But it's definitely possible, and it should be done (at least I think it should be), because to be honest, I'd love having something like this, and I think a lot of other people would too.

I know I’m the last to be grammar police, but that is a wall of text

Break things up and keep it succinct


Ok. The simplest way to put it is a suit that allows you to control everything: cars, drones, heck, maybe even a starship. It's not controlled by a remote or a touchscreen, but by your hand and body motions, maybe even thought itself. And you don't look at a screen to see what's happening; you see it as you would normally see the world.

Oh, and it's modular: functionality can be added through devices you attach to it.

Still feels like this… I guess from my point of view there's no specific tech listed, no explanation of how it works, and no reference to other things that work similarly; just grandiose ideas based on, presumably, a black mech suit. I get that the message is ML-generated, I'm just saying it's very apparent.

That being said sounds cool just not a bucket I’d toss a penny in personally.

Much better, I actually got what you were going for lol

I don’t want to be the wet blanket, but that’s probably how it’s going to come across… I’ll try to put a nice spin on it lol

what you’re proposing requires developing several different tech milestones that we are not close to yet

It’s a fun design concept, sure…. But break it all down to constituent components

We are in our infancy of interacting with the brain; there are very few subcranial devices… most of them require months of thinking a specific thing really hard, and then teaching a computer to look for that activity.

It may never be possible to have anything plug and play due to each brain working and talking to itself differently

I'm no biologist, but I'm pretty sure electrical stimulation of muscles doesn't increase endurance… muscles aren't limited by how you control them, but rather by their construction and fuel requirements (ATP, if I remember correctly, unless anaerobic, then it gets weird).


My advice that’s worthless and you didn’t pay for is this

Pick a specific thing, and try to evolve THAT

The more “undeveloped” tech a thing needs to exist, the less people will give it a chance or be willing to entertain it


Actually, I have been thinking about how to accomplish 90% of this and have figured out ways to do a lot of it

Most of it should be possible

All the VR/AR stuff is here, for one. A lot of this tech actually exists already; it's just not used like this.

So for a specific example: if you had a computer controlling balance and, in essence, running for you (using some sort of model of how you normally run, possibly some kind of neural network, to drive the electrical muscle stimulation while running), you could theoretically run a long distance because you're not really using any energy; the electronic system is. So it's not really for endurance, but to let you avoid doing as much of the actual work, if that makes sense.

And as far as reading the brain with AI, it'd be trained on you and the actions you do while wearing the system. Obviously training it would take time, but there isn't really a concrete reason it can't work, at least to some degree. All you're trying to read is muscle movement or something simple, not complete control of everything from thought.

There are some interesting concepts there, but also some things that got me concerned…

so, first things first…
The muscle movement issue for me is the major red flag here… and not only because of the aspects @Eriequiet already raised.

So… if we ignore the muscular stuff (which is the red flag), then a carefully curated selection of software and hardware, running from a good computer, would already achieve that… right?
As in, what you described sounded more like a "computer that you dress up" than a suit that does things…
So, given that you would be immersed in VR (or wearing an AR headset), wearing it versus having a computer connected to it would only make an aesthetic difference…?

Ok, Now for the musculature part…

This is where I think you stumbled

You would actually be using all the energy and feeling super tired.
Most likely it would feel far worse than running by yourself!

Be it electrical muscular stimulation, or mechanical muscular manipulation, it doesn’t prevent you from feeling your muscle being moved.

Actually, not only would you still feel it, but because it's an external stimulus, your body would reflexively fight against it, which would cause your muscles to build up fatigue much faster than if you were running by yourself.
It would also have a very high chance of harming your muscles.

Beyond that, working muscles require "food", which comes from the bloodflow.
So running, no matter where the muscular effort comes from, would make you feel tired, get out of breath, and increase your cardiac response…

In fact, I remember a particular training exercise where I would hold a rope and be pulled by a car (nothing too fast, just enough that I had to keep up the pace).
It's funny to notice how much you stop controlling your legs in that circumstance…
You just let your reflex arcs kick in, and the legs feel like they're moving on their own. Zero mental effort on my side.
Yet I got crazy tired! Even more than when I ran the same distance by myself.

With that said, I feel like what you want to achieve is an emulation of a mind upload: Some suit that would give you an “out-of-body” experience…
Is that it?


Well, it's more like this. Look at the world: everyone has approximately the same appearance and (ignoring mental abilities, disabilities, strength, and everything else for a minute) approximately the same capabilities. No one can fly, most people walk on two legs, etc. That's great for most things, but what if you could change that through a sort of interface? That's what this is. You could literally plug yourself into a starship and control it without using screens or control panels or anything like that. You could basically use the interface to become just about anything. So it's basically a data suit sort of thing, with a few augmentation abilities as well.

Also, the electrical muscle thing doesn't necessarily need to be on 24/7. It'd be sort of like a sci-fi story I read once about a guy named Gulliver Foyle. In part of the story he has an electronic system embedded in his body that lets him "accelerate" (move and think faster than anyone else, temporarily). Obviously that's not within the reach of present-day tech, but something like having the computer figure out how to do what you want faster than you could is.
Another example is being able to see through what you're controlling as if you were it, like using a VR headset. Normally it would display a video feed from stereoscopic cameras, but with info overlaid, possibly from sensors, giving you more senses and things like that.
Also fyi it says I’m out of replies so I’ll just post here.
But anyway, the problem with brain-machine interfaces is that you can't buy them yet. This is more of a body-machine interface, if that makes sense. The basic idea is that you don't have to have a human form, since you can literally be absolutely anything that can be controlled by this.
Basically, an easy way to explain it: instead of thinking of yourself as the platform, think of yourself as the control system. Instead of simply adding functionality, you can control another thing. Kind of like a car, except more intuitive and more integrated. Does that kinda make sense?

Well, it's more like this.
AI would be used, when necessary, to predict movements and execute them faster than you can, using tech similar to what the Replika app was originally based on (someone training an AI on text messages from a friend who'd recently died, producing a chatbot that sounded and talked like them), except using your actions as recorded by the suit. Not on 24/7.
Control system: basically the suit is an interface for anything. You could have almost any physical form. You can also add any sensor that'll work with it and display its info using AR overlays, sound, or lots of other ways. It would also track movement and possibly have some sort of touch feedback system, like data gloves. The suit is more of a controller device; the AI could be added in, possibly using the same tech as AlterEgo, to provide advice and such, but it doesn't control anything except when it's trying to perform actions (on request) faster than a human could. Honestly, the suit is a controller with additional features thrown in.
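The "predict movements from your recorded actions" idea can be illustrated with a toy next-action model. This is just a frequency count over recorded action pairs, standing in for whatever sequence model a real suit would actually need; the action names are made up.

```python
from collections import Counter, defaultdict

# Toy sketch of the "AI that acts like you" idea: count which action the
# wearer historically performs after each action, then predict the most
# frequent follow-up. A real system would use a proper sequence model,
# but the train-on-your-own-recorded-actions principle is the same.

class NextActionModel:
    def __init__(self):
        self.transitions = defaultdict(Counter)

    def train(self, action_log):
        # Record every consecutive pair of actions from the suit's log.
        for prev, nxt in zip(action_log, action_log[1:]):
            self.transitions[prev][nxt] += 1

    def predict(self, current_action):
        # Return the most frequent follow-up, or None if unseen.
        follow_ups = self.transitions.get(current_action)
        if not follow_ups:
            return None
        return follow_ups.most_common(1)[0][0]

model = NextActionModel()
model.train(["reach", "grip", "lift", "reach", "grip", "release"])
print(model.predict("reach"))  # the wearer usually grips after reaching
```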

That is literally a “BCI”. (Brain Computer Interface)

If Neuralink (just mentioning the most famous one) becomes commercially available, that is exactly what it will achieve.



That is the defining concept behind BCIs.

I feel like you are describing 2 things:

  1. multiple uses for a BCI.
  2. an AI assistant that takes over your body.

For 1, the suit becomes redundant and unnecessary. If you have a good BCI system in place, you could use it to control anything from a suit all the way to a spaceship, as you said.
So trying to use the suit as a means to achieve a brain-computer interface would lead to bulky, limited systems with more overhead than necessary.

A single chipset installed on the skull, or a lightweight headset would be all that’s necessary here.

I recommend you do a good read on BCIs, but to get you started, here's some intro:

BCIs can be unilateral or bilateral, and gestaltic or partial.

Unilateral examples range from a simple electrode put on your temples all the way to a full suite of readers…

A good example of a partial BCI is the NextMind system. It's a bunch of electrodes aimed specifically at the visual processing center of the brain. They can quickly (in seconds) be trained to detect what you are focusing on, and with intermediary software, can be used to control any electronic system by "pressing/clicking whatever you are looking at". Currently there are a few games for it, and it's still available as a developer kit, going for $500 USD.

A good gestaltic example is the Galea system, which currently sells only to researchers, for $50,000 USD a piece. It's a suite of electrodes that can be quickly trained and would give you a wide array of input methods into any computer system set up to receive it: a computer, a game, even a spaceship, allegedly.
That is currently the most encompassing non-invasive system we have available, and even it is not 100% gestaltic.

There are other systems around, but they all tend to revolve around the same concept:
You use some electrodes to detect your brain activity, then let an AI decipher it and issue "input commands" which another piece of software will read.
Just like a keyboard issues input commands to whatever software is connected to it.
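That electrodes-to-input-commands pipeline can be sketched in a few lines. The per-electrode "templates" and command names below are invented; a real decoder would be a trained classifier, but the shape of the pipeline (readings in, a command out, dispatched like a keypress) is the same.

```python
# Minimal illustration of the pipeline described above: electrode
# readings -> decoder -> generic "input command" that downstream software
# treats exactly like a keypress. The decoder here is a trivial
# nearest-template matcher standing in for the AI stage.

TEMPLATES = {
    "select": [0.9, 0.1, 0.1],   # made-up per-electrode activity patterns
    "back":   [0.1, 0.9, 0.1],
    "menu":   [0.1, 0.1, 0.9],
}

def decode(reading):
    """Return the input command whose template is closest to the reading."""
    def distance(template):
        return sum((a - b) ** 2 for a, b in zip(reading, template))
    return min(TEMPLATES, key=lambda cmd: distance(TEMPLATES[cmd]))

def dispatch(command, handlers):
    """Deliver the decoded command to software, like a keyboard event."""
    handlers.get(command, lambda: None)()
```

Everything downstream of `dispatch` is ordinary software: it neither knows nor cares that the "keypress" came from electrodes.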

Now if you dive into Bilateral… that’s where it crosses into Neuralink territory… Still far from consumer range.

Or you can get something in between, which is a haptic feedback suit + VR + AR, like what was built by "The Void", nicknamed "Rapture Technology".

Issue there is that it requires the world around it to be built for it, instead of having the modular effect you want.
Also, the haptic feedback there is very limited and far from "feeling like what you're controlling". In fact, a lot of the feedback comes from your normal senses, such as water being misted around you…

There are also some military-developed haptic suits/helmets/etc. Not commercially available, but they achieve a (very) limited version of what you are describing. Even the top-notch military designs are still years away from your concept, though.

That said, the examples you use make me feel like you want a bilateral BCI, but mechanical…
Yet here you seem to fall into a loop which leaves the mechanical bit irrelevant.

So please correct me if I misunderstood you:

  1. I put on said suit.
  2. when I make movements with my body, that serves as a controller for a spaceship (or whatever, following on the example)
  3. before I make my movements, there’s an AI that will know what I want to move, and then the suit will make said movement.

If I train an AI by moving myself inside the suit, there's no way the AI will be able to predict what I want to do, since moving my arm to point at something can be the exact same movement as moving it to grab/click/gesture/etc…

i.e. any system in which I need to enact a movement in order to generate an input will never be able to become predictive, because there is no stimulus for the system to capture before said movement exists.

The best you can aim for there is to reduce the movement needed.
So now all I need is the initial shoulder movement from "pointing" for the suit to know I want to "point". And even that isn't "predictive"; it's just responding to the first micro-movement of a bigger movement which is already set into motion.
But this brings the drawback of reducing the range of inputs I can program the suit to read.
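That "respond to the first micro-movement" fallback can be illustrated like this. The onset threshold, window length, and gesture signatures are all made up; the sketch just shows that classification can only begin once the opening samples of a movement already exist, which is exactly why it is reactive rather than predictive.

```python
# Sketch of the micro-movement fallback: the suit cannot predict intent,
# but it can commit to a gesture as soon as the opening samples of the
# movement match a known signature. Signatures and window length are
# invented for illustration.

ONSET_THRESHOLD = 0.2   # sensor level that counts as "movement has started"
WINDOW = 3              # how many samples after onset we classify from

SIGNATURES = {
    "point": [0.3, 0.6, 0.9],   # fast-rising shoulder angle
    "wave":  [0.3, 0.1, 0.3],   # oscillating
}

def classify_onset(samples):
    """Find movement onset, then match the first WINDOW samples."""
    for i, s in enumerate(samples):
        if abs(s) >= ONSET_THRESHOLD:
            window = samples[i:i + WINDOW]
            if len(window) < WINDOW:
                return None  # movement still unfolding; wait for more data
            return min(
                SIGNATURES,
                key=lambda g: sum((a - b) ** 2 for a, b in zip(window, SIGNATURES[g])),
            )
    return None  # no movement yet: nothing to classify from
```

Note that until onset the function can only return `None`: there is no signal to work with before the movement exists, matching the argument above.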

Which means that to achieve what you’re pledging, you would need to find some way for your AI assistant to predict what I want to do.
I can only picture a BCI (whichever system) to achieve that, since in such scenario you have electrical activity from the brain prior to the consolidation of the movement, or even prior to the consolidation of the thought of moving. (yet here we are talking about an AI able to do processing far superior than any current computer is)

Regardless, be it a BCI or any other system, as soon as you achieve letting your AI predict your movements, then you no longer need the suit to exist, since said AI would know of your movement before you move inside the suit…

Which led me to split it into two items:

The BCI, and then a suit to move your body.


Ok, I think I can post now, so we're good.

But anyway, it's more of a controller-type thing: a way to control other stuff in the most intuitive way possible.

That is interesting.
If the goal is to “control a humanoid robot”. it makes perfect sense and I can’t think of anything better (apart from a bilateral BCI)

But the concept here is to "serve as a 'universal' controller for multiple distinct devices"… so let's ignore the "predictive" aspect for now.

So… let’s assume BCI’s would be the pinnacle of “intuitive” control.
Therefore, if we’re deliberately dodging a BCI and going for the “analog” counterpart (haptic feedback, audio, VR headsets), then we do have a few possible options.

One of which is a “suit” which uses bodily movements and more (buttons on the wrists, etc).
Another would be a cockpit…
Another, a home Desktop computer… etc, etc…

In order to better analyze what we're getting at, let's split this device into input (what you do that generates an input into the controlled device) and feedback (the audio/video/haptic feedback you receive).

About Feedback


A VR headset is nice, but having used both VR and desktops extensively (my desktop has between 4 and 6 monitors), I'll tell you that VR is by far more limited than an array of monitors.

This is due to how current VR headsets work: they create an illusion of immersion, but the actual field of view is narrowed a lot, and you can only comfortably read what is placed right in front of you.

Hence an array of monitors would provide you with far more information within your natural view than anything within VR would.

You would also face a depth-of-field issue. In VR you have problems automatically adjusting your view to heads-up displays that are separated mainly by depth; this occurs because of the way the lenses are assembled in the headset.
With a cockpit or array of monitors you can overlay information in a much more natural and intuitive manner.

Add some eye tracking and servo motors on your monitor arms, and we could even have a cool system where the monitors come and go from your field of view based on what you're looking at.
That could provide much more natural and intuitive feedback for the majority of devices you would want to control…
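That gaze-driven monitor idea reduces to a tiny selection rule. The monitor layout and angles below are invented for illustration; a real rig would also need the servo control and smoothing on the eye tracker's output.

```python
# Toy version of the eye-tracking + motorized-monitor idea: pick which
# monitor should swing into view based on horizontal gaze angle.
# Monitor names and resting angles are made up.

MONITORS = {
    "left":   -30.0,   # each monitor's resting angle, in degrees
    "center":   0.0,
    "right":   30.0,
}

def monitor_for_gaze(gaze_angle_deg):
    """Return the monitor whose resting angle is nearest the gaze."""
    return min(MONITORS, key=lambda m: abs(MONITORS[m] - gaze_angle_deg))
```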


Binaural audio is brilliant and all you need for that is a headset.
Not much more to say here.


I think I get your premise for “extending your perception and being able to feel like what you are controlling”.

Let’s keep up on the “piloting a spacecraft” example:
Even with a Bilateral BCI it’s still only in theory that you could “feel the wind rushing through your wings”…

With a haptic suit at most you could get to “feel some pressure on your arm, which correlates to the pressure of the wind on the aircraft”.
And even then, you would still need a special aircraft with sensors on the wing, and which could convey such data back to the controller… not “any device” anymore.

Let’s take another example: driving an armored car. The car gets shot multiple times on the right rear door…
How should that translate as information on a haptic suit?
I mean… what would be “intuitive” about any haptic feedback there?

The way I see it, a haptic suit would mostly be a constant reminder of our human form instead of a means to transcend it. As soon as you let yourself get immersed in piloting something nonhuman, any haptic feedback would just make you painfully aware of your physicality.

About Input

The best and most intuitive input for any device is usually custom tailored for said device.

Take a plane-type drone for example: you can technically control it using a keyboard, but that would be far clunkier than controlling it with a handle…
On the other hand, try writing a message to someone with a handle.

But again, we are talking about a universal controller, so the best we have at the moment is a computer keyboard.
If we want to push it further, imagine a setup with a full keyboard, 2 dedicated smaller keysets, a mouse, a joystick, some motion tracking for gesture commands, and touchscreens on your monitors…

Now let’s compare that to what we could put into a suit.

We can work with buttons/servos activated by moving the fingers…
That is perfect for controlling a humanoid hand, or for triggering a few simple commands…
But you could achieve the same with motion capture, without the need for a clunky set of gloves.
Even more, such a set of controllers would serve as an interface similar to a keyboard, only much more limited.

With the suit we could also use full body movements… But at the same time using large movements is limiting. i.e. if you need to stretch an arm forward to accelerate a car, then you can’t use that hand for much else…
Imagine trying to use your legs to control something… you would end up walking around the house and hit the wall… XD
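The universal-controller framing implies a per-device translation layer: the same small gesture vocabulary gets remapped for whatever you are currently "plugged into". A toy version, with invented device profiles and command names:

```python
# Sketch of the translation layer a "universal controller" implies: the
# same captured gesture means different things depending on the active
# device profile. Devices, gestures, and commands are all illustrative.

PROFILES = {
    "car":   {"arm_forward": "accelerate", "arm_back": "brake"},
    "drone": {"arm_forward": "pitch_down", "arm_back": "pitch_up"},
}

def translate(device, gesture):
    """Map a captured gesture to the active device's command (or None)."""
    return PROFILES.get(device, {}).get(gesture)
```

This also makes the thread's objection concrete: the mapping exists, but nothing about `arm_forward -> pitch_down` is inherently intuitive; intuitiveness lives in the mapping design, not the suit.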

Then let’s bring “Intuitive” to the mix…
What’s more intuitive when controlling a car?

  1. pressing forward on a keyboard?
  2. touching your pinky with your thumb while pointing up with the index finger?
  3. stretching your arm forward?

With a bilateral BCI we could have a translation which, in theory, would make us control the car directly in a purely intuitive manner.

But with haptics and servos, all you can capture are movements and gestures… which you can also capture with motion tracking. But that doesn’t make controlling an inhuman shape any particularly intuitive endeavour…

That's where I see the split in your project as a focal point you would need to make a decision on:

Is this meant to be something to control a humanoid shape? or is this meant to be a universal control module to allow the user to “plug in” into other devices (cars, spacecrafts, etc…) ??

Even further: is this meant to be something to help you control other devices as an extension of yourself (which, arguably, is what we use our phones for)?
And if so, is the focus on allowing you to control said devices better than anything else would, or is the purpose to make you feel like you're being something else, even if you're not controlling anything?

Because for the latter, a VR videogame with a good, psychologically backed design would achieve this far better than any "universal" or modular device could… (exactly because part of what causes the immersion effect while playing VR is the fantastic setting you're in and the lack of physical feedback; VR feedback must be either perfectly precise, or absent, so that we can let our imagination run free)


Look, it feels like your GPT has spat out a bunch of buzzwords. Really you're describing Michael Reeves' dabbing machine but full-body. And the signals go both ways? To control and be controlled?

So if you put Elon Musk's brain chip in and strapped up into a TENS machine?

It feels like a lot is going on and not much substance: parts that are already their own concepts for other use cases, with a coat of sci-fi on top.