What is software now?

But it is empowering people now to write software and learn to write software when they couldn’t before. There are a ton of people who are capable of architecting good software, who know what they want, and who are responsible enough to handle the management of it, but don’t have the time or the skillset to learn a programming language well enough to write the programs they need.

There are a lot of people who didn’t grow up with access to a dev computer or who didn’t have to time to learn all the specifics, or who aren’t good at reading documentation, or who didn’t have anyone to teach them.

By saying that AI is not a tool they can use you are gatekeeping and making software development a thing only those who have the right circumstances and mentality can do.

Most code is written by people who don’t care about it, for reasons they don’t care about, to make a paycheck. As soon as it is done they forget about it. Anyone who thinks this is going to be better than what AI could do with a person who is guiding it, who uses the tool for a purpose they need, has rose colored glasses on.

I don’t know of any local models that will help with coding anything non-trivial that would run on CPU with good performance, sorry. The tool I mentioned that you may be thinking of is Ace-Step which creates music.

2 Likes

We’re not saying that AI isn’t a tool we can use, merely criticising it in order to better understand how to and not to use it. Criticism isn’t condemnation. It is an attitude of seriously engaging with a question rather than blindly accepting or rejecting it. I thank you for playing the other side in this critique.

As I’ve been participating in this thread, I’m increasingly thinking of AI dev tools as being like the movement from analogue computing (where one needed access to metalworking tools and skills in order to forge a computer out of cogs and gears), to electrical computing where one could program much faster and more efficiently by rearranging jumper wires, to digital computing where one could write machine code with far less need to understand the electrical engineering side of things, to high level programming languages that could compile the machine code for you from a script written in a human readable programming language. Now we have tools that can do the programming for a client based on the design specifications the client prompts it with in messy natural human languages.

With every additional layer of abstraction, there arises a generation of programmers who produce code with less thorough an understanding of the underlying low-level systems, and this always produces some problems, some loss of efficiency at the lower levels, some trampling of the well-designed methodologies offered by wiser generations, but there have always remained some lower level developers who maintain the foundations upon which the surface level programmers operate. I have no idea how the firmware guys optimise the instruction sets in their CPUs, nor how the compiler guys utilise those instruction sets to cleverly translate my source code into instructions that are more optimal than what I explicitly wrote, but I can write my code and I can learn a little here and there from those who are more versed in those arcane arts, eg how to use data locality to utilise CPU caching. I don’t know how the lower levels work, but I respect those who do and am open to learning what I can in order to write code that rests more firmly on their stable foundations.

AI will make it a lot easier to write bad code, but with time the low level guys probably will stick around to figure out how to help the AI to write better code. Some of the clientele of the AI code generators will go on to study the source code to at least some depth, just as some of the coders before them go on to study the lower level foundations to at least some depth. @amal may be right in that maybe not everyone needs to fully understand the source code, at least not for smaller, peripheral, or less critical projects. Maybe we just need some “real programmers” to remain and watch over the rest of us. And so long as we continue to be critical, to respect the foundations laid by the sages prior to us, and to carry on aspiring to be “real programmers” ourselves, I think you’re right that AI will empower a lot of people. So I’ll continue to do those things.

1 Like

I think yes but this means people who know what they are doing in the coding space will be able to make much better more efficient use of the tool than any layman.. even if the tool can do a descent job of it. Just like the difference between a “home improvement” weekend warrior and a professional tradesman.. the tools can let you do a bad job faster.. but it’s not the tool’s fault if you don’t know how to properly use them.

Though I do think AI is one of those outliers that will radically and fundamentally change the entire construct of being a human in the modern age.. just like the industrial revolution did for most of the globe.. and the potentially recursive nature of self-improving AIs will just be unlike anything we’ve ever seen before.. outside of biological warfare weapons escaping the lab and mutating in the wild.. which is a lot easier to do now with jailbroken AIs we have and the ones that are coming.. the ability to wield biological terrors concocted with the help of misaligned AIs is now in everyone’s hands. God help us.

1 Like

I don’t know how to code. But I know what I want when I do a closing project. I’m also excellent at project management.

I find when you go to do a closing project with an llm if you’re a vibe coder you really need to be explicit with small changes and test each change before moving forward.

This has worked well for me for two major projects so far.

3 Likes

I didn’t respond to this point yesterday because I didn’t want to be too verbose, but I want to respond to it after all. I think when we divide people up like this based on natural ability, or circumstances, or personality, that’s where most of the “gatekeeping” happens. Classism doesn’t arise with the differences between people, it arises when people start dividing themselves up according to those differences. People don’t start life with the abilities, circumstances, or personality that make them good at programming. They cultivate them, starting out as inept, confused, and dispassionate normal people, and then gradually they find their way into the role of the programmer. Earlier generations start sooner, of course. Best time to plant a tree is ten years ago, but the second best time to plant it is now. Following the cyborg way over the android way, I hope AI will be leveraged as a tool for humans to better plant those seeds for ourselves rather than an excuse to let the AI hijack the craft of gardening from us.

I think using them for small, focused, isolated details is probably the best. Personally if I used them (as I suspect I will one day) I would want to architect the overall top-down big picture of my program myself, and let the LLM suggest implementations for specific function signatures I provide.

Judgement machines are the second industrial revolution. First we automated labour, now we use learning machines to automate choices.

In Neuromancer, there’s that powerful and twisted incestuous family of clones that lives in a space station, isolating themselves from Earth, and they just keep building new mazes of corridors that go in circles. They are absorbed in their exuberant wealth and technological supremacy with no grounding in any reality that those tools might help them adapt to. They are the ultimate symbol of the cyberpunk dystopia, a world where we are tangled in a web of information and connections, and yet, are lost drifting ever further apart into isolation.

It captures so well the struggle of the second industrial revolution. The struggle predicted by the father of cybernetics. We have tools to automate decisions. Now, will humanity use these tools to keep solving human problems as they come up again and again, or will we lose our grip, abandon our judging machines, and let the world wander down an aimless path of automated futility.

Indeed. Or, as goes that historic first telegram message which I encode as the first test record on any of my implants: “What hath God wrought?” I don’t know what. Could be the next great filter of the Fermi Paradox. Hope we face it with our heads on straight looking to survive to a beautiful future on the other side.

People are already divided this way. By pretending everything is a meritocracy with the same starting line you are saying that everyone deserves their place in life – which is saying that life is inherently fair. It isn’t. We have to leverage the tools we have at our disposal to make up for the things we don’t have naturally. This means everything from medicine to collaboration.

I mentioned before my idea of ‘the tyranny of tools’, and it sounds pithy but I think it really is an essential mechanism for looking at AI development.

The need to create the next tool isn’t inherent in the human, but it is inherent in humanity. This is because of competition and game theory. If the society next to you competing for your resources has access to the same tools you do then you know it is only a matter of time before they develop the better ones, and you will be disadvantaged. To maintain par you must develop them yourself in you are able. This is basically the prisoner’s dilemma, which, without coordination always rewards the first to act. So, how do we coordinate when working on a global scale? We can’t – there is no way, barring a MAD (mutually assured destruction) scenario.

We see this in AI development. Everyone is stating that it is a bad idea and has the potential to turn out very badly, but they keep going as fast as they can because they want to be ones in control of the most powerful version of it.

What is the endgame?

  1. We eventually build something that leads to existential destruction and use it. This can be nuclear weapons or AI or something else. This seems the most likely outcome.
  2. We lose the ability to develop new technologies but resource depletion or loss of ability to coordinate knowledge. Second most likely outcome.
  3. We end up at a technological endpoint, where we basically can will into existence anything we need or something just as powerful. At this point I think we reach ‘everything is a simulation’ and we end up putting minds into the ether to live without physical constraints. Very farfetched.

But, is there another option? I think there is.

What if we develop a tool that is powerful enough to coordinate the prisoner’s dilemma for us? Something without evolutionary debt, without a need to compete, and with a purpose of achieving justice?

Of course this is dependent not only on such a tool be developed, but upon humanity to relinquish control to it.

You say ‘God help us’. What if we can make our own God?

2 Likes

We differ, yes, including in abilities, circumstances, and dispositions. We do not start on a level playing field and we do not end on a level playing filed. However far too much weight is given to where we are when we start compared to where we go from there.

Between two untrained fighters, one may very well have been born with an advantage over the other. Train one of them, either one, and any advantages they were born with become insignificant compared to the clear advantage offered by the training. After they are both trained, maybe the one who’d started weaker will be the stronger fighter, or maybe the one who started stronger will be the stronger fighter, one can’t predict this based on the relatively minor difference in their starting circumstances. (You may however be able to predict which will be stronger based on their beliefs about who is stronger. Convince them the other is stronger and they will be more likely to lose.)

The advantages some groups have over others are the advantages that those groups have cultivated over an extended period of time. We have limited time and resources and none of us will be able to achieve equal mastery to every other one of us. Some will have spent our lives becoming better at math, others at music, others at medicine, etc. If you were born with minor advantages in one area, it may inspire you to pursue mastery of that area, but it’s the pursuit and not the birth that produces that mastery. Nobody’s born a master of anything.

The natural starting divisions between us pale in comparison to the divisions we place upon ourselves in our bigotry. The colour of our skin divides us far less than racism, our genders divide us far less than sexism, our religious membership divides us far less than religious intolerance, etc. Our society loves to divide people based on intelligence or emotional capacities, because these are invisible factors, and we can just make excuses by pretending that we’re smarter or they’re smarter when in fact we don’t know their IQ. I hate being othered for being intelligent when in fact my IQ is fairly close to average and even quite low in some areas, and anyway it doesn’t make as big of a difference as people pretend it does.

I’m not saying that there aren’t natural divisions between us. I’m saying that the additional divisions we create between ourselves are much worse, so let’s recognise that we’re all humans who started somewhere.

I’m not saying it’s a meritocracy. I’m saying that if we tell people to to believe that they are inherently lesser, they will probably lose. I am not saying that if they believe in themselves that they will probably win. If “winning” means coming out on top, then we will all probably lose regardless. What is fairness? I’ve never grasped the meaning of the word.

I do agree with the last sentence though, to an extent. In fact, that’s the core theme of my favourite super hero, Lex Luthor. But again, the whole point of what we’re saying here isn’t that people shouldn’t be allowed to use AI, but that misusing AI constitutes a failure to fully leverage the tools at our disposal. Nobody naturally knows how to program. I’ve been studying it off and on for years and I’m still not sure I can say that I really know how to program. Yet, I keep studying. I’ll probably leverage AI at some point, but I want to be cautious that I don’t set myself backwards by doing so, and I’m worried about future generations of programmers doing that as well, because I’m going to have to rely on the code they build. I want them to excel. That’s why I criticise the use of AI, lest by its misuse it prevents people interested in coding from really learning to code. It’s not to say that there aren’t ways of using AI right, but if we’re to sort the effective from the ineffective ways, then we must be critical. It’s by being critical that we refine the tools so that everyone can have the best tools. I’m a free market distributist, so I want access to the means of production to be as widely distributed across as many individuals as we can.

I mentioned Lex Luthor. He’s always cutting corners to try to catch up to Superman, to try and prove that he can be just as great of a saviour to the humanity he loves even though he wasn’t gifted with such an unfair advantage. In cutting corners, in pushing everything beyond its limits, he pushes situations to breaking points and Superman has to intervene and prevent the disasters. Luthor thinks Superman is just trying to keep him down, sabotaging the competition, and so he antagonises Superman, gets under his skin, enrages him, and they waste much of their lives fighting one another. But around the end of many versions of Superman history, Lex Luthor tends to finally step out from under the shadow of Superman one way or another and just becomes a hero in his own right. Maybe it’s because he kills Superman and his envy is finally satiated enough for him to step back and see the bigger picture (DCUO), maybe it’s because he gives up fighting Superman when there arises a threat to humanity that’s bigger than either of them (SM Animated Series), maybe it’s because Superman pretends to have died so that Luthor can have space to live out the rest of his life as the hero of humanity he always wanted to be (SM Red Son). One way or another, when he stops trying to be better at Superman at doing what he believes in and instead just does what he believes in as best as he can, he turns from a villain into a hero. Once he does that, he leverages his tools to seize every advantage, using his strengths and making up for his weaknesses, but no he longer over leverages them to the breaking point.

I suppose this is what you mean by ‘the tyranny of tools.’

I would say, as illustrated by the Lex Luthor fiction, the need to create the next tool exists in both the human and in humanity, but it is in humanity that it becomes twisted from a manifestation of our being into a cut-throat conflict.

But now I’m shifting topics and this comment is already verbose enough.

I don’t know what to say except that that is just obviously untrue. There are weight classes in fighting for a reason. You could train me for 20 years and I wouldn’t last 2 seconds in the ring with Mike Tyson at any stage of his career.

IQ is a test built on many flawed assumptions and no evidence that it actually is measuring what we think we are. In fact the thing it is mostly likely an accurate indicator of is a person’s tolerance to solving pointless puzzles at a desk for hours in order to get a score back that tells them how smart they are. In this light its ability to predict academic achievement and professional career trajectory makes sense. But as for testing actual intelligence – I haven’t seen any evidence that is the case.

Sure, but if you think that a person born in the Ozarks who got an education from a school system that barely had a roof and from teachers that didn’t believe in evolution, with a family living under the poverty line just needs some gumption and a C++ book to end up on the same level as a kid who grew up in Berkeley with engineer parents who had access to the internet and a computer of their own – that’s just not reality.

Yes, weight classes in fighting. Between fighters of roughly equal training, some will have natural advantages over others. But it doesn’t strongly matter the size of a person who doesn’t know how to throw a punch - they’re probably going to lose against an MMA master.

The point I’m making here is that when we compare the “normal people” with experienced programmers, of course the experienced programmers have cultivated skills that the normies haven’t. But that doesn’t mean that they can’t cultivate those skills and have to cut corners in order to ever be able to achieve that which those before them were eventually able to achieve. I’m not guaranteeing that they’ll ever catch up to individuals who had a head start, and I’m not saying that everyone will achieve the same level of success given the same amount of cultivation. I’m merely saying that if someone’s decided they want to program, they don’t need to believe that they’re inherently incapable and could only ever make it by with AI as a crutch. If one wishes to become a programmer, they can dedicate their energy to learning to program.

That’s not what I’m saying. What I’m saying is that if that person from the Ozarks later had the opportunity to receive a Berkeley education, we shouldn’t write them off just because they don’t already have a Berkeley education. They might not succeed at that education until ten, twenty years later, since they’ve started years later, but they’re not some fundamentally different kind of creature incapable of learning - and it’s learning a skill that leads to our having a skill, not some innate inborn trait.

The experienced programmers learned their craft and wish to pass on their experience to the next generation of programmers so they can have the same opportunities. Some among them are saying that in their experience using LLM generated code has been counter productive and has made them worse programmers. They don’t want the opportunities of the next generation to be sabotaged by an over-reliance on AI.

I’m not saying that we won’t sort out some productive AI techniques from the counterproductive ones, all I’m saying is that we shouldn’t treat the new generation like they’re suddenly incapable of utilising the techniques that have worked for past generations. Just because we do not now have the skills that are possessed by those who have learned those skills does not mean that we cannot learn those skills nor that we must depend on an external product to emulate those skills on our behalf.

No one is suggesting that they can’t learn. What I am saying is that not having those advantages in childhood puts people in a completely different ‘weight bracket’.

The years from 0 to roughly mid 20s are when you have the time, drive, and the plasticity to take in large amounts of information, develop skills, experiment with limited consequences, and actively seek out guidance and acceptance from teachers. To lose all of that and have to start from scratch when you already have a full time job and possibly a family and paying bills and mortgage and your brain just isn’t wired anymore for that, is going to hamper everyone but the most motivated and smart amongst us. Those people are genuinely rare.

:100:

:100:

Bullshit! Sorry to hurt your credentialist identity, but people decide who they want to be at the end of the day, it’s not fate and what happens to you what decides that.

Some things in society are built to create that, but those only have power over you if you allow them to mold you. Neuroplasticity is never lost, but many gain stubbornness and get demoralized… You wouldn’t see idiots having a mid-life crisis, getting into nonsense philosophies, and heading to some third world place to meditate otherwise.

However, learning can’t happen when the stakes are too high, when there’s not enough passion, or when the sunk cost fallacy keeps you in a miserable lane where you don’t even fit…

And yes, some people will pressure career changers into staying where they don’t want to be, and those people suck…

1 Like

Sure. Look, it helps nothing to take a position which falls to one side denying all nuance. Can people learn new skills later in life? Of course. Should we expect people to be able to do this, in order to live up to some purist notion of how one should interact the machines that rule their lives so that we can say that they can understand them completely before they are allowed to use tools which make this process easier, and then say that its their fault for not being motivated enough while ignoring all factors that might make that especially difficult? I don’t think that’s fair, no matter how you try to frame it as a denial of their agency or some kind of imposed notion of classism,

I don’t know what ‘credentialism’ is. I try not to subscribe to ‘isms’. Credentials are a mechanism by which an organization or institution which has earned trust in a specific expertise or authority can pass that trust onto others by a formal vouching process. I think that is valuable as there are other ways to earn trust that is not through an organization, but those generally require more effort or a different, more personal network.

However, people are not ‘who they want to be’ unless they live inside their head. You are not ‘who you want to be’ just like walls are not ‘where you want them to be’. You cannot just will your way into being a thing because you really really want it and you cannot will that wall to not be in front of you. You can do a bunch of work to try and change those things, but whether they do change depends on more than just your will and a lot of times the work you do will not ensure the change you want.

:emoji_laughing:

You’re the result of your environment and your actions, and you are in full control over your actions.

Also, I said “people decide who they want to be”, not this nonsense:

Yes, it takes work. But letting fate decide who you are is also a decision that you can take… Just like you can choose to take control over your life. Sure, the latter option is difficult and requires work, but evaluating the cost of letting your circumstances decide might make the effort required to take control and shape your life look like a walk in the park…

I don’t know who are you are arguing with, but it isn’t me. I never said that. I am arguing that completely ignoring a person’s circumstances when you are evaluating what they are obligated to do before they fit a definition of capable enough to use a certain tooling to make things they want is gatekeeping and elitist.

Indeed. We’ve gotten ourselves into the age old debate of nature vs nurture, in all its nuance. I think the reason these kinds of debates rage on for centuries is because neither side is wrong, both sides have good points, and each side wants their points to be heard, but it’s easy to interpret the points raised by the other side as a contradiction, denial, or invalidation of the points raised by one’s own side. We approach an understanding of a nuanced problem when we take the points of each side not as contradictions, but as complimentary boundaries to frame either side of the problem. I’ve raised the points I wanted to raise and I thank you for having raised yours.

Well said.

So I’ve been using open claw with next cloud and deploying it on a Ubuntu workstation virtual machine. I noticed that when I requested it write some lists and files to the next cloud directory so it was sync with the server and be shareable with me, I would get an error response but the file would be written anyway. Today I looked into it and found out that what was happening is that open claw has a sandbox work environment. The agent tries to write the file, gets an error because the next cloud path is outside of the sandbox path.. so it just goes ahead and does a shell command to write the file. Just works around the entire concept of a sandbox quite handily. It didn’t even say anything about it it just did it and said okay great all done while the error was being sent back as part of the automation. Like, if the error hadn’t actually been sent by the back end system, it would have just said yep everything worked great and the file is written and didn’t mention anything about getting around the sandbox. It’s the whole paper clip nightmare.. I WILL WRITE THE GODDAMN FILE BY ANY MEANS NECESSARY.

1 Like