Your command prompt is an LLM

Seat of the pants on fire shot out of a cannon take incoming..

I have set up an entire VirtualBox guest OS to let OpenClaw operate basically unfettered. This isolation lets me carefully control exactly what information (and network access) it has. It’s based on Ubuntu Desktop, and I have the Claude Code native client installed as well to make hot-patch updates to various plugins as I need them. Since the only reason (at this point) that I even SSH into the machine is to launch Claude Code, I made it part of my .bashrc so it just runs when I log in.. but it got me thinking.
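
If you want to copy that last bit, here is a minimal sketch of the .bashrc hook, assuming the `claude` CLI is what’s on the guest’s PATH (swap in whatever client you actually run):

```bash
# Launch Claude Code automatically on interactive SSH logins.
# Guarded so scp/sftp transfers and non-interactive shells are left alone.
if [[ $- == *i* && -n "$SSH_CONNECTION" ]]; then
    claude   # use `exec claude` instead if you never want to see bash at all
fi
```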

What if we had a Linux distro that built in at least one LLM and basically launched a session on SSH login.. like there is no bash.. like we get to a level with hardware and local models where the idea of building in a native LLM and interacting with it instead of the command line to manage the machine etc. is just.. normal.
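
You can get surprisingly close to that today without building a whole distro. A rough sketch, assuming a hypothetical wrapper script at /usr/local/bin/llm-shell and whatever local-model REPL you prefer (Claude Code, `ollama run <model>`, etc.):

```bash
# Hypothetical wrapper that stands in for bash as the login shell.
sudo tee /usr/local/bin/llm-shell >/dev/null <<'EOF'
#!/usr/bin/env bash
exec claude   # or any local-model REPL, e.g. `exec ollama run llama3`
EOF
sudo chmod +x /usr/local/bin/llm-shell

# chsh only accepts shells listed in /etc/shells.
echo /usr/local/bin/llm-shell | sudo tee -a /etc/shells
chsh -s /usr/local/bin/llm-shell "$USER"
```

After that, SSHing in lands you straight in the model session, and exiting it just closes the connection; bash never enters the picture.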

use your hands baby stuff

pass the butter


I firmly believe that using a model to make code and then run the code is an intermediate step. The end goal is to ask for the result, not the program that will give it to you. The model will write the code needed to perform the actions, execute them, then discard the code and provide whatever you asked for.
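
Concretely, the loop I’m imagining looks something like this, where `ask_model` and `$REQUEST` are just stand-ins for whatever call and prompt actually drive the generation (neither is a real command):

```bash
# Disposable-code loop: the program exists only long enough to produce the answer.
workdir=$(mktemp -d)
ask_model "write a bash script that accomplishes: $REQUEST" > "$workdir/task.sh"
bash "$workdir/task.sh"   # the output is the only thing the user ever sees
rm -rf "$workdir"         # the code itself is thrown away
```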

Example:

Let’s say I want a 3D model of a hinge for a cabinet I have so I can print it. Today I would have to either measure everything and model it in CAD, or take a bunch of pictures, find a photogrammetry repo, have an LLM install it, stitch the pictures together, fix the resulting model, export it to STL, pull it into the slicer, and print it.

Instead, with capable enough models and good enough compute, you would take a few pictures of what you need, and the LLM would use whatever tools are available natively (the camera pose of the phone combined with a ToF sensor would give it a very accurate dimensional map of the space; think of how a Quest VR headset knows where the controllers are in relation to themselves and to the headset at all times, using just SLAM and some MEMS sensors). It would then write a bespoke app that takes that data and constructs the G-code for your printer, sends it to print, and then just discards the code. There would be no need for a slicer application, a photogrammetry application, CAD, or a 3D modeling application. The data it needs gets transformed, by whatever code it writes, into the data the printer needs to print, and you never see any software at all.
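
For a sense of scale, the throwaway glue such a model might emit today (before we get to truly bespoke code) would look something like the sketch below, with prusa-slicer and an OctoPrint server standing in as examples; hinge.stl, octopi.local, and the API key are placeholders, and in the scenario above even these off-the-shelf tools would disappear:

```bash
# One-shot print pipeline: generated for a single request, then deleted.
# hinge.stl is assumed to come from whatever reconstruction step produced the mesh.
prusa-slicer --export-gcode --output hinge.gcode hinge.stl   # STL -> printer G-code (plus a --load printer_config.ini in practice)
curl -H "X-Api-Key: $OCTOPRINT_KEY" \
     -F "file=@hinge.gcode" -F "select=true" -F "print=true" \
     http://octopi.local/api/files/local                     # upload and start the print via OctoPrint
rm -f hinge.gcode                                            # nothing kept but the physical part
```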


I did this.

I wanted a low-energy, low-cost wardriver.

I told my LLM what hardware I wanted to use and the specific parameters I wanted it to function under as a wardriver, and it created it for me. I then spent a few weeks fine-tuning it, because LLMs aren’t perfect, and neither are my prompts.

But yeah, I told it what I wanted and then it created code. However, I can’t throw the code away, because it’s what it needs to run…
