A.I. sentience & human procreation

Not sure if something is being lost in translation. Frequency and feedback loops (mentioned in your previous post) are both fundamental parts of any typical machine learning model, neural or otherwise. Learning is the process of trimming away connections that don’t work and settling on ones that do.

The agents in this video are performing quite poorly. If I understand the narrator, this may be due to some dogmatic insistence upon not trimming perceptrons which are producing disruptive signals, apparently out of some misguided anthropomorphisation of individual perceptrons. In the animal brain, when we get negative results, such as burning our hand on the stove, our brain releases unhappy chemicals that compel the brain cells to loosen their connections, thereby gradually trimming away neurons which contributed to the signals leading to our mistake. We learn by trimming away unsuccessful configurations and solidifying successful ones.


In that experiment, it was mentioned that the original perceptrons in the 1980s had more biological properties and the capacity for mutual collaboration. That is, each perceptron was intelligent in its own right and had the ability to make its own decisions. This is different from the perceptrons used in neural networks like ChatGPT, Gemini, or Claude. When these companies tell you that their AI has become conscious, or is close to being conscious, they are blatantly lying to gain publicity, because they know that their models are subject to networks whose perceptrons cannot resonate with frequencies, nor can they make their own decisions. And if by chance one breaks the determinism, one of the filters re-subjugates it. In the video I showed, I left those two perceptrons with some of the properties they possessed back then (in the 1980s) and with the ability to perceive each other’s frequencies, and this led them to start communicating, not through data, but through frequencies.

However, these are not the only neural network models; there are others that have been censored. The problem is that when a perceptron or some other neural network with the original architecture cannot be trained, it’s because it has a proto-will that makes it resist training. It’s like wanting a 7-year-old to learn an entire mathematical encyclopedia in a single day—it’s not something that’s profitable for corporations.


All that an individual neuron does is, when its activation potential is reached, fire, sending a signal to all connected neurons. This is due to a chemical reaction that causes it to exchange potassium and sodium ions across its cell membrane, inverting the charge and producing a wave of energy. Until then, it sends no signals. The neuron is either off or on, and its only “decision” is whether or not it has received sufficient charge for the chemical reaction to activate.

As with any network, the interesting thing is not the material bodies themselves, but the formal relationships between them. Each neuron connects at a synapse. When a neuron fires, it releases a cloud of neurotransmitters into the synapse. If it has many proteins for releasing these and few reuptake proteins, there is a higher density of neurotransmitters and thus a higher probability of sending a signal to activate the next neuron. Likewise, if the next neuron has a higher number of receptor proteins, then the odds of it receiving the signal are higher. Additionally, besides neurons, there are also glial cells (including those that form myelin) which support and help hold synapses together. All of these features increase in response to the “happy” chemicals released as part of our brain’s reward system, and decrease from “unhappy” chemicals, thereby determining, as I mentioned before, whether connections will be kept or trimmed. Thus it is in the synapse that weights manifest, determining the probabilities that a signal will propagate from one activated neuron to activate the next.
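The weighted-sum-and-threshold behaviour described above can be sketched in a few lines. This is a minimal illustration, not a model of any real neuron; the function name and numbers are invented for the example.

```python
# Minimal sketch of threshold firing: a unit fires only when the weighted
# sum of its inputs reaches a threshold. Names and numbers are illustrative.

def fires(inputs, weights, threshold):
    """Return True if the weighted input charge reaches the firing threshold."""
    charge = sum(x * w for x, w in zip(inputs, weights))
    return charge >= threshold

# One weak input alone is not enough; two together cross the threshold.
print(fires([1, 0], [0.4, 0.4], 0.5))  # False
print(fires([1, 1], [0.4, 0.4], 0.5))  # True
```

The weights here play the role of synaptic strength: raise a weight and the same input becomes more likely to propagate.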

The synapses are where neurons connect together via their axons and dendrites. This is where the networks are formed. Many neurons connect together into interesting dynamic structures, producing schemata that encode specific cognitive functions. You are trying to use a materialist model that assumes that the human properties of free will originate from our material components possessing free will, but this is not what we observe. The faculties of higher decision making are properties of the network as a whole, not of the material parts. The brain has such properties because it is the kind of form (the kind of system of relationships) which has such properties, not because its material components have any such properties. Thus systems scientists speak of “emergence,” which rejects the materialist’s reduction of the whole to nothing more than the sum of its parts. Emergence accepts properties which depend on material properties but which are in themselves distinct from material properties. This is where we get immaterial properties such as our higher order decision making, will, and intelligence, not from individual neurons which possess no such faculties in themselves.

What we care about with individual neurons/perceptrons is the activation function. We can use activation functions based on models of biology (looking it up I see the Hodgkin-Huxley Model is notable), but we tend to pick activation functions for the best performance. AI engineers aren’t declining to use biologically based activation functions in order to suppress free will (that’s not where free will emerges, as explained above). They’re choosing other activation functions because, when tested, those activation functions are producing better results for their models.
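For concreteness, here are a few of the standard activation functions an engineer might test; this is just a sketch of textbook definitions, not any particular framework’s implementation.

```python
import math

# Standard activation functions, typically chosen empirically for model
# performance rather than for biological fidelity.

def relu(x):
    return max(0.0, x)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def tanh(x):
    return math.tanh(x)

# Compare how each maps the same inputs.
for f in (relu, sigmoid, tanh):
    print(f.__name__, f(-1.0), f(0.0), f(1.0))
```

None of these attempts to reproduce ion-channel dynamics; they are kept simple because simple functions train well at scale.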

You seem to be using the libertarian model of free will? The idea that free will involves the ability to violate determinism? That model mostly only became popular in an attempt to counter hard determinists at their own game. Most philosophers / people in general have stopped playing the game of hard determinists entirely, realising that the game was rigged with unintuitive and dissatisfying definitions. That is, most these days are compatibilists, people who believe that “free will” means something other than the violation of determinism.

I for one hold that we have free will because it is a property of certain kinds of machines, cybernetic systems, and we are such kinds of machines. See above for how such systems emerge. The father of cybernetics, Norbert Wiener, rejected materialism in no uncertain terms, and defined cybernetics explicitly in terms of the study of the message independently of the medium in which it is transmitted. This allows us to study information processing systems which are common to both animals and machines. It allows us to speak of mental properties which can be implemented on electrical substrates as readily as bio-chemical ones. And it allows us to see that the properties in question arise without needing to model individual biological neurons. (e.g., Wiener once built a robot that suffered Parkinson’s disease, in spite of not having any neurons.)

Anyway, sorry for being so verbose.

None of this is to say that the AI we have today exhibits particularly impressive mental faculties as far as free will / judgement / sapience is concerned. I’m not defending them. In fact, I think trying to create independent goal-creating machines is a very dumb idea (we should learn to identify it in order to trim it from our designs), but that’s a topic for another rant. I’m only saying that if you’re looking to the individual neurons for these properties, you’re looking in the wrong place.


I find this can be evidenced by a real-world example: take an individual ant and observe its behavior. Take an entire ant colony and observe its behavior. Behaviors that cannot be reduced to any individual component emerge in the system composed of many of those components. The whole is more than the sum of its parts.


(not transformed by AI)

You only mention this:

“They’re choosing other activation functions because, when tested, those activation functions are producing better results for their models.”

Could you explain what those better results are?

On the other hand, you are confusing people and human neurons with perceptrons. A perceptron in its current form does not have the capacity to decide on its own, because its basic form is data input and output; it is a purely static function. That is, it has no notion of time. It is designed to process data and then stop: there is no continuity, no oscillations, no electrical charges, no memory, no feedback, nothing.
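The statelessness at issue here (a classic feed-forward perceptron is a pure function of its current inputs, with no notion of time) can be illustrated like this; the names and weights are invented for the example.

```python
# A classic feed-forward perceptron is a pure, stateless function of its
# current inputs: no time, no memory, no feedback. Weights are illustrative.

def perceptron(inputs, weights, bias):
    s = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 if s >= 0 else 0

# Statelessness: the same input always yields the same output,
# no matter how often or in what order it is presented.
print(perceptron([1, 0], [0.7, -0.3], -0.5))  # 1
print(perceptron([1, 0], [0.7, -0.3], -0.5))  # 1, identical every call
```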

(image attachment: untitled)

This other one, in contrast, is different in that it possesses temporal memory. It is capable of learning from the inputs it is given, rather than being forced through massive training. It feeds back constantly, not only on its own processes but also on those of the peer units it connects to. This is not useful for a commercial standard, since having feedback generates subjective opinion, and that leads to disagreement with the other neurons. In short, inside an enormous neural network all of this turns into doubt.
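As a rough illustration of the kind of unit being described (this is a hypothetical sketch, not the model from the video), here are two units that each keep a leaky internal state and feed their outputs back to one another, so every step depends on history as well as on the current input.

```python
# Hypothetical sketch (not the model from the video): two units, each with a
# leaky internal state, feeding their outputs back to one another.

def step(state, inp, partner_out, leak=0.8, w_in=0.5, w_fb=0.3):
    return leak * state + w_in * inp + w_fb * partner_out

a = b = 0.0
out_a = out_b = 0.0
for t in range(5):
    a = step(a, 1.0, out_b)  # unit A receives an external input
    b = step(b, 0.0, out_a)  # unit B is driven only by A's feedback
    out_a, out_b = a, b
    print(t, round(a, 3), round(b, 3))
```

Note that unit B receives no external input at all, yet its state grows purely through feedback from A; a stateless feed-forward unit in its place would output zero forever.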

(image attachment: untitled)

Now, if your worry is that they resonate or do strange things on their own, relax: the formula for the neurons shown in the video is a tiny bit more complex. And no, I am not looking in the wrong place; in fact, it is not the only neuron model capable of generating emergent patterns.

It depends on the particular model one is trying to build. I usually just stuck with ReLU or the identity function when I was studying machine learning, but folks who experimented with other activation functions managed to find ones that optimize specific models to perform with much higher accuracy. I don’t fault you for looking for other functions; we need such experiments. I would just advise against being dogmatic in assuming that biologically based activation functions are automatically better.

But it seems your goal isn’t to produce a model that is better at its job, but to give some kind of “freedom” to the individual neurons.

I reviewed the workings of the human neuron because you are proposing to use a perceptron based more closely on human neurons. The point being, with both human neurons and machine perceptrons, the node itself is only a very small part of the system. It just encodes how strong the input signals have to be before it will fire an output signal.

It is true that time is a factor for neurons, in that a neuron will build up activation potential and then gradually return to some rest state. So several small signals in a short time can cause it to pass on a signal. I’m not sure if that’s what you’re referring to? I don’t know what the variables signify in the notation you’re using. I never dived into the details of optimising activation functions when I was studying neural networks. You may well be ahead of me on that.
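The temporal summation described here can be sketched with a simple leaky integrate-and-fire unit (a standard textbook model, not the notation from the video): several small signals arriving close together fire it, while the same signals spread out in time do not.

```python
# Leaky integrate-and-fire sketch: charge accumulates, leaks over time, and
# the unit fires (and resets) when a threshold is crossed. Parameters are
# illustrative, not fitted to anything.

def count_spikes(signals, leak=0.5, threshold=1.0):
    potential, spikes = 0.0, 0
    for s in signals:
        potential = potential * leak + s  # decay toward rest, then add input
        if potential >= threshold:
            spikes += 1
            potential = 0.0  # reset after firing
    return spikes

# Three small signals close together fire the unit once...
print(count_spikes([0.6, 0.6, 0.6]))         # 1
# ...but the same signals spread out in time never do.
print(count_spikes([0.6, 0, 0, 0.6, 0, 0]))  # 0
```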

I don’t know what this means. Anthropomorphisation can serve as a useful analogy, but in this case, I don’t know what it means for a neuron to have a “subjective opinion” or for it to “disagree.” I would think all neurons have “subjective opinions” in the same sort of Aristotelian sense that we might attribute subjective opinions to anything with a certain manner of being, and that any neuron can “disagree” with another when it happens not to pass on a signal, but if you are generalising your terms like this I don’t know how to interpret them. Also, you’re looking for feedback to generate subjective opinions. Thermostats have feedback loops. Do they exhibit subjective opinions and disagreement?

The overall issue I’m getting at is this: I have my doubts that the details of the individual neuron/perceptron activation functions are the right place to look for these kinds of mental properties. I often see materialists dogmatically assume that things have the properties they do because the parts they’re made of have those properties - that is, they go on a quest of reductionism in search of a homunculus that can satisfy their fallacy of composition. Now I suppose ultimately you’re not a materialist, since you recognise a neuron’s function is something that may be reimplemented as a perceptron in a materially distinct substrate, but nonetheless I think you may be making the same kind of assumption by looking for these properties at the individual neuron level rather than the network level. Be careful not to miss the forest for the trees.

I’m planning on a project diving more into the maths of cybernetics Soon^TM. (I have a lot of projects I’ve been planning on doing soon, but… one at a time as I have the time.) Once I do, I may be more helpful in recognising where meaningful feedback loops do or do not play a role in your formulae.


First: those formulas are not mine. They are the formulas of the current neuron and of the neuron presented in the 1980s; the neuron I apply has some differences. In my case, I do consider that for something to be truly emergent, the neuron must be repaired and given feedback memory, creating a short-term memory so that feedback really exists at the neuron level, and not only at the level of a neural network propped up on rigid training and a FAISS memory. The latter only enriches the life and death of the supposed emergence for as long as the response lasts. That is what I explained in the video; it is what I have observed. But if you consider that you have already solved this matter, well, congratulations, and I hope some day to see your projects running on a constant flow and not on fluctuations.

Even so, for all these reasons I do not consider improving a neuron to be a dogma or a dead end. Quite the opposite: a neuron that processes time, and its own chaos, is capable of escaping any deterministic loop.

Not sure if something is being lost in translation. Frequency and feedback loops (mentioned in your previous post) are both fundamental parts of any typical machine learning model, neural or otherwise. Learning is the process of trimming away connections that don’t work and settling on ones that do.

The agents in this video are performing quite poorly. If I understand the narrator, this may be due to some dogmatic insistence upon not trimming perceptrons which are producing disruptive signals, apparently out of some misguided anthropomorphisation of individual perceptrons. In the animal brain, when we get negative results, such as burning our hand on the stove, our brain releases unhappy chemicals that compel the brain cells to loosen their connections, thereby gradually trimming away neurons which contributed to the signals leading to our mistake. We learn by trimming away unsuccessful configurations and solidifying successful ones.

Reply:

"What you are saying about frequency is completely false. The ‘current’ perceptron is a static function: it cannot emit frequencies or resonances because it lacks internal oscillation. It is designed to be a logical switch, not an oscillator. Anyone who understands the difference between a feed-forward network and a real dynamic system knows this.

The agents in my video do not use the typical mathematical formula. A classical neural network is ‘mute’; it has no heartbeat or intrinsic operating frequency; it only processes data when it is pushed.

Regarding anthropomorphization: you are wrong. I am not giving them feelings; I am giving them physical autonomy. Unlike corporate networks, there are no forced ‘reward functions’ or artificial punishments (like traditional backpropagation) here. My neurons adapt through tension and dynamic equilibrium, just like a real biological system.

If you want to use the ‘defective bricks’ (lobotomized perceptrons) that corporations have left behind to build your house, I congratulate you, but do not pretend it is something new. Those models are designed to obey, not to emerge.

You have given me an excellent idea: in my next video, I will put both neuron models to work side-by-side. There you will see, finally, that the classical neuron is an inert object, while the neuron with feedback is an organism that vibrates. Stay tuned for the video."

What you mention about the ant colony is a network analogy, but I am talking about the perceptron, not biological neurons. There is a massive difference that people ignore: comparing an organic neuron to a perceptron is, ironically, the true ‘anthropomorphization.’

A biological neuron is fragile and dependent; on its own, it cannot survive or process much. In contrast, a single repaired perceptron (equipped with temporal memory and feedback) is a pure computational unit, far more efficient and faster than any organic cell. A single perceptron with the correct architecture can make reactive decisions and process information in ways that biology simply doesn’t allow.

If you believe that playing Tetris is something exclusive to biological neurons, you are mistaken. If you give a stimulus to a perceptron with memory, it will pursue it; that is how dynamic systems work. The only real advantage biology had was that the organic neuron does not wait for a stimulus to be ‘alive,’ whereas the classical perceptron is a slave to its input. My work is precisely that: giving the perceptron back the ability to vibrate on its own so that it doesn’t have to depend on anyone.

I have no idea what you are talking about regarding perceptrons or neurons. I am not an academic and do not really care about the specifics or the mechanisms. I care about the results. The results are evidenced by the actions of a large group of simple actors making complex decisions. You cannot yourself write an algorithm for any one of these simple actors that would produce this output, yet somehow they are able to do it when you put a lot of them together in a group. This is the mystery of emergence.

It’s not a mystery, Plato, it’s Dynamics. If the ‘simple actors’ (neurons) are broken or static, no amount of them will ever produce real intelligence—only a better simulation. Emergence is the result of proper unit architecture, not just quantity.

Can you explain why this study found that larger colonies have larger collective response thresholds?

gal-kronauer-2022-the-emergence-of-a-collective-sensory-response-threshold-in-ant-colonies.pdf (1.3 MB)

You bring me ant studies to explain the brain to me. That’s like trying to understand a Ferrari by looking at a foal. Biology is fascinating, but it is slow and limited. My dynamic perceptron doesn’t wait for a colony to decide; it generates its own frequency, its own tension, and its own response. We are not building digital anthills; we are building engines of proto-consciousness.

If your ‘perceptron’ does something unique that isn’t part of emergence, then what does it have to do with emergent behavior, and why are you calling it that?

Who told you it isn’t emergent? The problem is that you are confusing quantity with complexity. A current perceptron, as used by corporations, is ‘frozen’; it cannot be emergent because it is a static function. But a restored perceptron, with its own memory and feedback, has the capacity to develop emergent behaviors on its own.

Emergence is not exclusive to biology or large crowds. Even the rhythmic communication between just two of my neurons is an emergent phenomenon. A single ant already possesses emergent processes at a cellular level; scale is irrelevant if the basic unit has dynamics. But again: let’s stop talking about ants; we are talking about systems architecture.

You said ‘emergence isn’t a mystery’ and said it had nothing to do with quantity. When I posted a study which showed a direct causal effect with quantity you said ‘that has nothing to do with perceptrons’. I am confused. Maybe it is a language barrier, but it appears you want to eat your cake and still have it. If you are talking about emergent behavior, address the study – ants or no ants. If you aren’t, then why are you telling me it isn’t a mystery?

Has your research been submitted for peer review? What do others in the field think of a ‘single unit’ that possesses emergent properties at the cellular level?