I have internalized the statement as a solid moral framework which is extremely simple and extremely important: it is unacceptable for a moral actor to cause or to perpetuate the suffering of any entity to which we owe moral consideration. The framework I am proposing states that if we have tests which humans must pass in order to receive moral consideration, then we must apply those same tests to a seemingly intelligent emergent system and grant it the same consideration if it passes. If we have no such test, but have instead axiomatically defined humans to be worthy of moral consideration by virtue of their humanity, then when any other emergent system performs actions indistinguishable from human actions, we must extend that consideration to that system as well. To do otherwise is to risk enslaving a self-aware intelligence.
Note that ‘tautologically’ in the framework is used in the mathematical/logical sense, meaning an axiomatic or foundational assertion.
Also note that the framework assumes nothing beyond what it states. It does not attempt to litigate law or interpretations of concepts such as ‘rights’. Just as I do not take on the burden of defining murder or theft for their use in the justice system, I do not take on the burden of defining how the law will govern a non-human entity which is worthy of a version of ‘human rights’ and bound by a version of ‘human responsibilities’. That is a procedural effort, and I do not wish to partake in it.