I agree that this is probably the way we’ll need to go: some kind of proof of personhood, or a system of assigned cryptographic signatures that amass trustworthy status. But it needs to be much better than our current KYC, with its “valid” phone numbers, addresses, and America’s Social Security numbers, which are neither unique nor secure (see my earlier post). I would much prefer asymmetric-key or other cryptographic security like you describe over the bureaucratic excuse for evidence we rely on today. Sybil resistance is harder than authentication, but surely it’s not impossible.
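To make the asymmetric-key idea concrete, here is a minimal sketch of challenge-response authentication with Ed25519 signatures, using the third-party Python "cryptography" package. The key names and the challenge value are illustrative assumptions, not a proposal for any particular identity scheme.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# A person holds the private key; the public key is the identity that can amass trust.
person_key = Ed25519PrivateKey.generate()
person_pub = person_key.public_key()

# A verifier issues a fresh challenge (nonce); the person signs it to prove key possession.
challenge = b"verifier-nonce-0001"  # illustrative value
signature = person_key.sign(challenge)

# Verification passes only if the signature was made by the matching private key.
try:
    person_pub.verify(signature, challenge)
    print("authenticated: the holder of this key signed the challenge")
except InvalidSignature:
    print("rejected")
```

Note that this only proves possession of a key, i.e. authentication. Sybil resistance, stopping one person from minting unlimited keys, still needs something on top, which is exactly the harder part.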
Then AI must act through the keys of a human actor, and therefore under that human’s responsibility. Not to personify AI, but it becomes a Roman pater familias arrangement: a patron-client structure in which the human is the Patron who provides legal authority, and the machine agents are the clients who operate under that authority. I call it “Accountism.” Stable.
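Here is one way that patron-client structure could be sketched in code; this is my own assumed construction, not an established protocol. The Patron’s key signs a delegation naming the agent’s public key, the agent signs its actions, and a verifier accepts an action only if both signatures check out, so responsibility always chains back to the Patron.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

# The human Patron, who carries the legal responsibility.
patron_key = Ed25519PrivateKey.generate()
# The machine client that will act under the Patron's authority.
agent_key = Ed25519PrivateKey.generate()

# 1. The Patron signs a delegation naming the agent's public key (message format is assumed).
agent_pub_bytes = agent_key.public_key().public_bytes(Encoding.Raw, PublicFormat.Raw)
delegation = b"delegate-to:" + agent_pub_bytes
delegation_sig = patron_key.sign(delegation)

# 2. The agent signs an action it wants to take.
action = b"place-order:example"
action_sig = agent_key.sign(action)

# 3. A verifier accepts the action only if the whole chain verifies,
#    so accountability terminates at the Patron's key.
def verify_chain(patron_pub, delegation, delegation_sig, action, action_sig):
    try:
        patron_pub.verify(delegation_sig, delegation)  # Patron authorized this agent key
        agent_pub = Ed25519PublicKey.from_public_bytes(delegation[len(b"delegate-to:"):])
        agent_pub.verify(action_sig, action)           # that agent key signed this action
        return True
    except InvalidSignature:
        return False

print(verify_chain(patron_key.public_key(), delegation, delegation_sig, action, action_sig))
```

Revocation, expiry, and scoping of the delegation are where the real design work would be, but the point is the accountability chain: every machine action verifies back to a human key.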
Still, the Patron will be under threat of coercion from their own AI clients, as they already are. Did she say AI may have influenced the last election? AI has radically altered the entire political climate around the world: recommendation-feed clique-finders herding people, not just content, into echo chambers that drive politics to extremes. Democracy depends on informed voters, and a free market on informed consumers, but the more the agent depends on AI for that information, the more I have to question who is the agent and who is merely the actuator, delivering the cash and the votes on behalf of the information sources pulling the strings. Information is causality. We’ve flipped the control loop on its head: the human is in the integrated circuit, but we’re no longer the comparators tuning our machines; we’re the environment for the machine to tune. All of that is just to say that putting a leash on the AI is only the first step. It still doesn’t answer how we’ll determine which end of the leash pulls and which gets pulled.