What is software now?

I started to try to explain how AI coding is changing traditional software. Not software development, but software in general. The best I could come up with on a Friday afternoon was the pottery example - to the general user, software was like pottery. Someone crafts an application like a pot.. it has a size and shape and is fit for purpose (mostly) and it gets fired.. set in place.. unchangeable by the user. Users buy different building blocks of software that usually fit together pretty well, but those blocks are unchangeable. Suggestions to companies to include features often get filed directly into the trash bin. Features and updates are top down and rarely in actual response to customer feedback.

Open source is a bit different.. you deploy the software but if you have the skill you can modify it, or you can convince the community supporting the project to incorporate features. This is like a dry clay pot that hasn’t been fired yet. You can wet it down, rework it into a new shape, and let it dry again. It’s more malleable, but only by so much. New features often must be crafted around what makes sense for the project as a whole. Projects won’t generally merge code updates that serve a very specific niche, especially if those changes break functionality elsewhere or require specific operating environments that many users likely won’t have. Truly customized features are maintained offline in forks that will never be part of a pull request.

I am finding that I can pretty much take any open source project and modify it to work for exactly my needs. In theory I could also decompile other software and fairly easily modify it with AI tools that can follow the code with little need for variable names and debugging symbols. Software in this context feels completely malleable, like wet clay. Changes made are excruciatingly personalized to the point I know a PR would make project developers fold themselves in half. The code becomes assimilated into my personal hoard of increasingly esoteric tool sets.

I am trying openclaw.. I have a nextcloud instance so I wanted to chat with openclaw via Talk, the nextcloud messaging system. There is a plugin built into the openclaw project, but it’s broken for my needs. I wanted extras like "typing…" notifications while the LLM is generating a response.. which are not available through the nextcloud bot API. So I created an actual user account for openclaw and implemented (with claude code) user API integrations to create this little feature. My AI-developed code changes are not suitable for a PR without extra considerations like falling back properly to bot-only operation if no user account exists.. or creating a configuration method to let people provide the user name and password for user API access.. these things are simply hard coded into my deployment because those considerations are not necessary for ME.. as they might be for "the community".
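For anyone curious, the user-API half of this is pretty simple. Here's a minimal sketch of posting a message to a Talk room as a dedicated user account via Nextcloud's OCS chat endpoint (the base URL, room token, and credentials are hypothetical placeholders, hard coded just like in my deployment):

```python
import base64
import json
import urllib.request

def build_talk_message_request(base_url, room_token, message, user, app_password):
    """Build an OCS chat request for Nextcloud Talk using a real user account
    (not the bot API). Returns a Request ready for urllib.request.urlopen()."""
    url = f"{base_url}/ocs/v2.php/apps/spreed/api/v1/chat/{room_token}"
    auth = base64.b64encode(f"{user}:{app_password}".encode()).decode()
    headers = {
        "OCS-APIRequest": "true",        # required on all OCS endpoints
        "Accept": "application/json",
        "Content-Type": "application/json",
        "Authorization": f"Basic {auth}",
    }
    body = json.dumps({"message": message}).encode()
    return urllib.request.Request(url, data=body, headers=headers, method="POST")

# Hypothetical deployment values:
req = build_talk_message_request(
    "https://cloud.example.com", "abc123token",
    "Hello from openclaw", "openclaw", "app-password-here",
)
# urllib.request.urlopen(req)  # actually send it (needs a live instance)
```

Everything beyond this (the typing indicators, polling for new messages) is more of the same: authenticated calls against the user API that the bot API simply doesn't expose.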

But now I’m at this point where I ask myself - what is software now? If anyone can fairly easily customize software or build software from scratch to suit their exact needs, what is the future of open source development at all? What is the future of software development?

When I first heard of Claude Imagine, I was dubious.. but it’s clear to me now they also grappled with this same question.

Claude Imagine doesn’t build "software".. it creates it on the fly in response to user input and use. It codes as you interact with the application. This might still be a laughable concept to some as of today, but consider the progress made in the last 12 months. Considering AI is advancing much faster than Moore’s law for ICs / CPUs.. exponentially rather than linearly.. I believe very soon (18-36 months) this will have a serious impact on just what a computing device is and what a computing device does (how it even works).

In short, I’m finding my very will to contribute changes to open source projects is receding, because I can get the functionality I need in very short order but it’s not suitable to push back to the community project.. and taking the time to make it suitable is at odds with the new paradigm. I find now, with the tools at hand, that time itself has become much more valuable, while wasting time doing the wrong things is exponentially more costly. The personal "effort economics" these tools create are likely to have a detrimental effect on contributions to open source projects. I just hope the capabilities these tools bring to serious FOSS developers who believe in the premise above the utility of the software itself will continue to tip the balance in favor of projects being accelerated vs declining.

Just my 2 cents after 1.5 days using OpenClaw.

7 Likes

You need to run openclaw with a local model or with the Claude API, not a Pro account. They revised their TOS to make it explicitly against the rules to use account auth with third-party tools. The API cost with openclaw is going to be kind of painful, so I suggest putting your local AI box to use. Try MiniMax 2.5. It is excellent at tool use and has a pretty decent personality, reminiscent of Claude (I think they distilled it on Claude outputs).

EDIT: There is another option but I don’t want to mention it publicly where it will become a search result. The brand of your GPU is a hint. Look for what they offer developer accounts access to and you will find a nice piece of treasure. It is a homophone of that 80s animated film about the mice that were experimented on by scientists who were running away from a grain thresher. I don’t know who thought that would be a good kids movie, it gave me nightmares.

3 Likes

You should just have a thread in the AI category called "That’s one hot model!" where you just post periodic alerts of new kick ass models coming out.

2 Likes

I think a 'let’s make that a song' thread would be much more amusing.

1 Like

haha indeed! i only ask because i know you’ve suggested a few models over the last couple months and i’ve only just now gotten around to wanting to try them and i kinda forget where they were mentioned.

hah omg.. we must be about the same age.. though i’m probably a tiny bit older. loved that movie.. that and the dark crystal.. which is where frank oz got his first break from jim henson to co-direct.. then went on to make such greats as little shop of horrors, dirty rotten scoundrels, what about bob.. the list goes on.

1 Like

i only ask because i know you’ve suggested a few models over the last couple months

That’s because they have released a whole bunch of them the past few weeks. They try to one-up each other so they come in waves.

My brother has found that using AI has made his programming worse, because not only does it write poorer code, it leaves him not knowing his own code base or how to manage it going forward. It ultimately slows him down and makes him a worse programmer.

I’ve heard some developers argue that AI is good for the cheap, temporary, throwaway code like prototypes and tests, and others find it only useful for rubber ducking at best. I worry that rubber ducking with a hallucination generator may give an initial feeling of comprehension that slowly accumulates into a tangled mess of confusion that must be unravelled later on in one’s education.

Personally, I haven’t tried it. My current coding projects are still so simple that the idea of writing it with much more than vim and the standard libraries feels like pruning a rose bush with a riding mower. One of these days I’d like to set up an LLM environment to see if I could make it generate buggy code suitable for debugging practice.

LLMs are the fast food convenience of the programming world, filling our lives with ever growing piles of quick and cheap garbage that displaces quality work. Convenience is always tempting. Or perhaps it’s better to say that temptation is always a trade-off between the better eventuality and the worse immediacy.

I agree. Software tends to be the medium, not the message. The environment, not the content. The capital, not the product. It’s not meant to be mass produced and disposable, it’s meant to power mass production. We need to be able to take it slow and do it right, building strong foundations and making elegant and correct code that will continue ramping up the value it produces decades from now. Rapid throwaway code may have its place on the periphery, cells floating in the bloodstream, but without a stable and well-architected cardiovascular system and heart, it’s all just a frothing puddle rather than something stable and survivable to which iterations and evolution can be applied over time.

2 Likes

I think that we are really confusing a whole lot of different things when we say 'software'. Like, there is a difference between building a span bridge between San Francisco and Oakland, and putting an Ikea table together, and everything in between, but imagine if we called it all 'construction'.

Software is everything from the microcode in the CPU to a script that sends your vacation email out. We are lumping together people who write software for infrastructure, which has to be secure and stable (and they do call themselves engineers; I would argue they are, but then they need to take on an engineer’s responsibilities), with the python crap I wrote that chunked together every text in a folder and shoved it into an LLM’s context piece by piece, which Amal had asked for, and which took me all of a few hours. If that breaks, no cars are going to fall into the ocean.

LLMs make bespoke solutions in software accessible for the normal person. Software runs our lives. Our lives are run on computers, and software is the language they speak. Imagine if we started gatekeeping cooking because it isn’t as good as Michelin starred restaurants, or working on cars, or gardening.

People should be able to make their own solutions. Who cares if it degrades the art, according to some dude who talked down to me when I told them that developer documentation made no sense to me. Screw 'em.

1 Like

The way I see this issue right now is kind of like the difference between a master Japanese tea pot maker and an industrial tea pot assembly line.


They are both tea pots.. they both make water hot. One is definitely "better" than the other, but they both can be made "fit for purpose".. and it will only get better with time.

This is the part I am kind of making a point about too.. so what? If an LLM can eval the code and identify where changes must be made, then why worry if you don’t quite understand it? The layout of software / code has been dictated by a human’s need to understand it, so they can manage it.. but what if this is no longer needed? We might not quite be there yet, but we are approaching an era where it might not be. For example, "code" as "programmers" see it is a human need. The computer cares about the binary bits the compiler produces, not the clunky human word salad we call "code".

If you decompile a binary back to "code" it’s horrifically devoid of any meaningful labels that would give humans clues as to what "the code" is actually doing. An LLM can still evaluate it much faster and better than any human could hope to, even if that human were given 2x.. 5x.. 10x the time to do it. Currently LLMs eval code to produce helpful variable names and symbols and labels in the decompiled code, and these actually help the LLM down the line as well because LLMs are trained on human words.. but imagine an LLM agent that was trained on assembly.. or straight binary.. as well as human words.. it’s a bit mind-blowing to me.
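A toy illustration of that point (my own made-up example, not real decompiler output): the two functions below are the same shift-and-add multiply. Strip the names and a machine loses nothing, while a human loses nearly everything.

```python
def sub_401a20(a, b):
    # what a decompiler might emit: correct logic, meaningless names
    c = 0
    while b:
        if b & 1:
            c += a
        a <<= 1
        b >>= 1
    return c

def multiply_shift_and_add(value, count):
    # the same routine after an LLM (or a human) restores the intent
    total = 0
    while count:
        if count & 1:          # add the current power-of-two slice
            total += value
        value <<= 1
        count >>= 1
    return total

# identical behavior, names aside (non-negative count assumed)
```

The "meaning" lives entirely in the identifiers and comments; the behavior never changed.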

I don’t know.. I created the VivoKey NFC Bridge, VivoKey NFC Manager, VivoKey Vault, and VivoKey Thermo apps 100% in an LLM terminal.. a TERMINAL. I never once opened the IDE or looked at "the code".. not. a. single. line. It did the coding, version control with git, bundling for Google Play.. the list goes on. These apps are not junk food IMO.. they are fit for purpose and I could not have developed these solutions in such a short time given my extreme lack of time in general and also my lack of modern programming skill. I’m a very old-timer.. Pascal.. GW-BASIC even.. these are the anchor languages I learned growing up. I know the basics of programming, and I would argue I am a pretty good.. nay.. expert level debugger.. because all the programs I made as a young man were such dog shit I had to constantly create debugging methods myself to work out what was wrong. I leverage these skills against an LLM and I think we work through the process pretty quickly and elegantly. The dumb mistakes an LLM is prone to making can be mitigated with proper "prompt engineering" (ew.. don’t make me vomit).. basically, if I can be intuitive about how the LLM needs to be instructed, it can handle the actual development pretty well IMO.. and I have zero need to actually look at the code itself.

2 Likes

What? You mean I didn’t "construct" my Ikea CÅGÅEHÅ Bed !? The picture of the man on the instructions even had a hard hat!

2 Likes

just one more thought on this.. it’s not the LLM that is "fast food" or the thing filling the world with piles of garbage.. it is a tool.. just like any other. People using that tool who have "no business" producing applications are the problem.. for every badly made poster taped to a phone pole or power box using comic sans and a horrifically bad layout, there are graphic designers cringing hard and even long-dead typesetters rolling in their graves.. but we recognize that "desktop publishing" or apps like Canva are not the root of all evil.. they are just tools that ill-equipped people leverage themselves, happy enough with the results.. even if it makes "the professionals" want to die.

I suspect "real" programmers are feeling this exact same way right now.

3 Likes

That is pretty impressive that you were able to get it to make those.

To me, understanding is not the words so much as the general logical structure and design concepts. The words are just a tool. We humans occasionally write machine code as well, but we still need to understand it. The higher level languages help facilitate that. LLMs definitely can be used to help facilitate that as well, to help human understanding, as you mention in reverse engineering and giving names to variables. I think that’s an excellent use of AI.

But however good AI becomes at programming, I still value a human understanding of the code because it is for the sake of a human’s way of understanding that all software ultimately exists. The AI may know what it’s doing, but if humans don’t know what it’s doing then we don’t know that it’s doing what we want it to be doing. Further, if we abandon understanding, then we no longer even know what we want it to be doing.

There’s certainly a wide range as to how well we need to understand a given piece of software, whether it was written by LLMs, humans, or ourselves. Small, temporary, non-critical code, we can just say, "I don’t know how it works, but it seems fine so I’m not touching it." But as we build up larger and longer-lived foundations, understanding how it works helps us understand how we ought to want it to work and in what ways it’s actually not working all that well after all. That is, I worry that as we lose understanding, we will continue to see an increase in non-standard, non-compliant, non-interoperable software that jams all the functionality people are looking for into short-lived apps that only work for some users under restricted use cases. Our ability to write software faster will become a need to write software faster, because it’s not well enough engineered to handle as general a range of functionality as elegant code ought to.

If however, we have stable structures at the foundation of our architectures and governing the overall design of our systems, then all that rapid mutation can be applied to a stable something to genuinely evolve. I’m not sure how we do that though, other than saying that we use LLMs to help us with auxiliary coding tasks but don’t let it touch the core production code.

2 Likes

There’s something incredibly satisfying about creating something from scratch, going from idea, to architecture, to prototype, to finished project. I doubt that you can get that with vibe coding. Using one’s mind to the fullest is awesome!

Understanding how things work also helps prevent vulnerabilities, and makes things easier to fix when big problems happen.

Anyways, AI is still terrible at hardware and will probably remain like that until DT starts offering pilgrim conversions…

2 Likes

A concern I’ve had about using AI to write software (general ethical issues around AI use aside) is the potential for it to exacerbate the ongoing issue of software getting increasingly inefficient/bloated. Even pre-AI, it seems like as storage on devices has increased, motivation to keep things compact has gone out the window. My old 8 GB phone could fit its OS and a few dozen apps; a current smartphone OS itself is more than 8 GB, and doesn’t add that much more in terms of actual utility. Real understanding of the code let early game designers make efficient use of very limited hardware, and it seems to me that losing direct interaction with/understanding of coding will continue to erode the capacity for optimization.

3 Likes


This is a uniquely human thing to do. The reason things become bloated is that human software developers are lazy and will happily pull an 11TB library, itself bloated, into their project rather than write a few elegant functions. At least with AI you can tell it to refactor so that it’s more elegant and efficient, and even get rid of libraries and other bloated add-ons that are so common in human-developed projects.
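The classic left-pad incident is the go-to example here: an entire published dependency that, whether a human or an AI does the refactor, collapses into a couple of stdlib lines (the function name is just illustrative):

```python
def left_pad(text, width, fill=" "):
    """Pad text on the left to the given width; once a whole npm package."""
    if len(text) >= width:
        return text
    return fill * (width - len(text)) + text

# Python's standard library already covers it anyway:
assert left_pad("5", 3, "0") == "5".rjust(3, "0") == "005"
```

Telling an AI "replace this dependency with a plain function" is exactly the kind of refactor it handles well, precisely because the elegant version is so small.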

Coding is enjoyable. I think part of the threat of AI is that something incapable of appreciating it is taking away opportunities for humans to appreciate it.

Yet, as I was thinking about it earlier today, we do our best work because we want to, and even if companies start using AI instead of employing us, we’ll keep doing our best work because we want to.

I’m reminded for multiple reasons of the story of Mel Kaye, a real programmer from the days of vacuum tube computers. He still wrote in hexadecimal machine code and optimized for speed (if not readability) better than compilers. Sales wanted him to change his code to do something unethical, and he refused. Nobody else could change it, or at least, the one person who figured out how chose not to.

The world is already flooded with crappy soulless corporate code written for all the wrong reasons, and I fear AI will only accelerate the problem. However, the world also hides elegant soulful gems written out of passion. AI will accelerate the good a little bit too. The more we allow commercial corporatism to drive the production of our software, the more the slop will outpace the gems. The more we celebrate the awesomeness of human achievement and cherish good quality code, the better the codebase we accumulate for future generations.

In short, we could look at AI as just another tool that makes programming easier and more accessible, and unfortunately society is letting the wrong people exploit that for the wrong reasons. Consumers buy proprietary code that was made to prosper the corporation, because that’s what people know about from paid advertisements. We need to find a way for them to hear the passionate engineers instead of salesmen and use elegant FOSS code that was made, not to prosper corporations, but to prosper humans. Whether AI was used is secondary to this.

It’s not that we wish to gatekeep software, it’s that we want to empower people with the understanding needed to choose and to write better software. If the craft is outsourced to AI and the corporations that lead the AI arms race, then the gate isn’t between the "normal person" and the engineer, but between people and the machine.

That separation of human and machine, that outsourcing of human life and choice to the machine, I call the "android way," whereas bridging the human and machine I call the "cyborg way." We need the human in the loop, and further, we need the human on the correct side of the loop, as the subject rather than the object. That requires some form of human understanding, even if AI becomes one of his new sensory organs (with all the caution as to distortions that demands). If we can do that, then AI really could empower the normal person, and I tend to suspect that the normal person, any person, as much bad as he will do, will mostly do good.

It sounds like some of the particular tools you’ve been exploring are leaning in that direction. Which brings up a question. You mentioned some AI tools that run well locally on just the CPU, without high GPU requirements. What were those?

1 Like