An individual neuron does only one thing: when its activation potential is reached, it fires, sending a signal to all connected neurons. This is driven by a chemical reaction that exchanges potassium and sodium ions across its cell membrane, inverting the charge and producing a wave of depolarization. Until then, it sends no signals. The neuron is either off or on, and its only “decision” is whether or not it has received sufficient charge for the chemical reaction to activate.
As with any network, the interesting thing is not the material bodies themselves but the formal relationships between them. Neurons connect to each other at synapses. When a neuron fires, it releases a cloud of neurotransmitters into the synapse. If it has many proteins for releasing these and few reuptake proteins, there is a higher density of neurotransmitters and thus a higher probability of activating the next neuron. Likewise, if the next neuron has a higher number of receptor proteins, the odds of it receiving the signal are higher. Besides neurons, there are also glial cells, including the myelin-producing cells that sheathe axons and speed signal conduction. All of these features strengthen in response to the “happy” chemicals released as part of our brain’s reward system, and weaken in response to “unhappy” chemicals, thereby determining, as I mentioned before, whether connections will be kept or trimmed. Thus it is in the synapse that weights manifest, determining the probability that a signal will propagate from one activated neuron to the next.
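To make the analogy concrete, here is a minimal sketch (illustrative names, not any standard library) of how synaptic strength corresponds to a weight in an artificial neuron, and the all-or-nothing firing to a threshold:

```python
def fires(inputs, weights, threshold):
    """Return True if the weighted input charge reaches the threshold."""
    charge = sum(x * w for x, w in zip(inputs, weights))
    return charge >= threshold

# Three upstream neurons. The second synapse is strong (dense neurotransmitter
# release, many receptors); the others are weak.
weights = [0.2, 0.9, 0.4]

print(fires([1, 1, 0], weights, threshold=1.0))  # True: 0.2 + 0.9 = 1.1
print(fires([1, 0, 0], weights, threshold=1.0))  # False: 0.2 alone is not enough
```

Learning, in both the brain and the artificial model, amounts to adjusting those weights, not to changing what an individual unit does when it fires.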
The synapses are where neurons connect via their axons and dendrites. This is where the networks are formed. Many neurons connect together into interesting dynamic structures, producing schemata that encode specific cognitive functions. You are assuming a materialist model in which the human property of free will must originate from material components that themselves possess free will, but this is not what we observe. The faculties of higher decision making are properties of the network as a whole, not of the material parts. The brain has such properties because it is the kind of form (the kind of system of relationships) which has such properties, not because its material components have any such properties. Thus systems scientists speak of “emergence,” which rejects the materialist’s reduction of the whole to nothing more than the sum of its parts. Emergence accepts properties which depend on material properties but which are in themselves distinct from material properties. This is where we get immaterial properties such as our higher-order decision making, will, and intelligence: not from individual neurons, which possess no such faculties in themselves.
What we care about with individual neurons/perceptrons is the activation function. We can use activation functions based on models of biology (looking it up, I see the Hodgkin-Huxley model is notable), but we tend to pick activation functions for the best performance. AI engineers aren’t declining to use biologically based activation functions in order to suppress free will (that’s not where free will emerges, as explained above). They’re choosing other activation functions because, when tested, they produce better results for their models.
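For illustration, here are three common activation functions, sketched from their standard definitions, showing the range of choices: the biological-style all-or-nothing step, the smooth sigmoid, and the ReLU that dominates today simply because it trains well:

```python
import math

def step(x):
    """All-or-nothing, like the biological neuron described above."""
    return 1.0 if x >= 0 else 0.0

def sigmoid(x):
    """Smooth and differentiable; historically popular for training."""
    return 1.0 / (1.0 + math.exp(-x))

def relu(x):
    """Rectified linear unit; chosen in practice because it performs well."""
    return max(0.0, x)
```

None of these is more or less conducive to free will than the others; they are engineering trade-offs about gradient flow and training speed.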
You seem to be using the libertarian model of free will? The idea that free will involves the ability to violate determinism? That model mostly became popular in an attempt to counter hard determinists at their own game. Most philosophers, and most people in general, have stopped playing the hard determinists’ game entirely, realising that it was rigged with unintuitive and dissatisfying definitions. That is, most people these days are compatibilists: people who believe that “free will” means something other than the violation of determinism.
I for one hold that we have free will because it is a property of certain kinds of machines, cybernetic systems, and we are such kinds of machines. See above for how such systems emerge. The father of cybernetics, Norbert Wiener, rejected materialism in no uncertain terms, and defined cybernetics explicitly as the study of the message independently of the medium in which it is transmitted. This allows us to study information-processing systems common to both animals and machines. It allows us to speak of mental properties which can be implemented on electrical substrates as readily as biochemical ones. And it allows us to see that the properties in question arise without needing to model individual biological neurons. (E.g., Wiener once built a robot that suffered Parkinson’s-like tremors, despite not having any neurons.)
Anyway, sorry for being so verbose.
None of this is to say that the AI we have today exhibits particularly impressive mental faculties as far as free will, judgement, or sapience are concerned. I’m not defending them. In fact, I think trying to create independent goal-creating machines is a very dumb idea (we should learn to identify such behaviour in order to trim it from our designs), but that’s a topic for another rant. I’m only saying that if you’re looking to the individual neurons for these properties, you’re looking in the wrong place.