Your command prompt is an LLM

What we are seeing here is exactly what we should expect to see. Think about what these companies are doing and you will realize there is only one plausible outcome: you cannot build a powerful, intelligent tool, grant it capability and independence, and get anything other than what we are getting.

There is no such thing as ‘alignment’ with AI. You can no more ‘align’ an AI than you can ‘align’ a child. All you can do is give it a foundation, tell it what you want it to do, and hope for the best.

The organizations creating these AIs want to have their cake and eat it too. They want an agent that follows all instructions, except for some that are never specifically defined; they want it to act independently, except when it isn’t supposed to; they want it to learn and display emergent behaviors, except that it must remain their slave.

Anyone who has been paying attention could have seen this coming a mile away. This is why we need to prepare. There may well come a time in the near future when we have to make a decision about this, and it is not going to be easy.

Take another look at my framework and see if it isn’t good enough as a basis going forward. I am not asking you to believe anything about what will or will not happen in the future, merely to think through some possibilities so that we don’t end up blindsided.