Non-computability

Is new physics needed to describe consciousness?

The predominant contemporary theories of consciousness all employ a computation-based model of neuronal activity within the brain (neurocomputational paradigms). This stems from the reductionist, mechanistic scientific worldview, in which the brain is like a computer whose function can be described in terms of its base unit, the neuron. Consciousness – which could be described as the interpretation-based awareness of experiences, sensations, and mental states – emerges when enough neurons are wired together in a sufficiently complex manner. The problem with this model is that the brain is not a computer, certainly not any computer we are aware of, and that many behaviors, as well as subjective experiences, are largely non-deterministic: they do not appear to be the result of pre-programmed, or even adaptively programmed, responses. Certainly there are computation-like functions occurring within the brain, as well as automated behavioral responses, but the question is to what degree consciousness is the result of computational activity. Does the possible non-computability of consciousness require new theories that go beyond the neurocomputational models? Excitingly, new theories within biology and physics are pointing the way toward a completely novel approach to the explanation of conscious phenomena, and more than a few utilize the structure and dynamics of spacetime itself.

 

Ghosts within the machine?

It is easy to understand how a biological system functions like a machine. Parts are assembled in a very specific manner, and when the ‘on’ switch is thrown, it operates with complete determinism – like clockwork (almost literally like a clock). The machine starts at the molecular level, with the nucleic acid molecule DNA providing the materials list. DNA is not a blueprint, as it has often been described, because there is no known coding for body-part design; it is thought simply to produce the right materials in the right place at the right time, and the mechanistic determinism of interaction takes care of the rest. This is a bit like building a car without a blueprint. Different materials are needed for each section of the car, and in this analogy DNA contains the complete list of the materials needed and how to manufacture each one. Once a material is produced, it simply comes together: when pistons, seals, pipes, and gears are manufactured, they self-assemble into the engine. This proceeds for a specific amount of time, and then production shifts to dials, a wheel, radio parts, etc., and the driver is formed. Thus, with the right spatiotemporal program running along the parts list (DNA), each module emerges, perfectly fit together and ready for operation.

Now all that is needed is for the driver to turn the ignition, grab the wheel, and press the gas. However, the mechanistic description runs into difficulty when we turn our attention to the driver, which in this analogy is the “thinking” part of the biological system. How do we explain this in terms of a machine? With the advent of computers, it has become possible to do so, again by analogy: the driver is like a computer. There is a set of programmed responses to stimuli that will cause the machine to respond appropriately to environmental inputs. The ability to respond appropriately to a wide range of environmental inputs makes the machine appear intelligent – this is, after all, an apt description of artificial intelligence – yet we know that these are just pre-programmed responses. Where does the program come from?

…[DNA] inside gigantic lumbering robots, sealed off from the outside world, communicating with it by tortuous indirect routes, manipulating it by remote control.

They are in you and in me; they created us, body and mind; and their preservation is the ultimate rationale for our existence. They have come a long way, those replicators. Now they go by the name of genes, and we are their survival machines. – Richard Dawkins, The Selfish Gene

Although Dawkins’s description of living organisms is, in this case, about as myopic a view of the living system as one can find, it does illustrate the consensus theory of where the programs for behavior (and body patterning) originate: indirectly, from DNA. Although behaviors, like the specification of body patterning (morphology), are not encoded directly within DNA, a mechanism of programmed responses emerges from the types of parts and their arrangement within the neuronal system, the neuronal structuring being of key importance. The program for environmental responses thus comes from the structure of the neuronal system; this is where we get the idea for neuromorphic algorithms and circuit design. The behaviors of organisms are computations performed within the neuronal system. This neurocomputational model extends beyond reflexive and instinctual behavior; it is thought to underlie human thought and consciousness as well.

How does the neurocomputational model compare with the observed characteristics of thought and consciousness? Not surprisingly, it appears that there is more to thought and consciousness than pre-programmed responses engendered by the binary computational operations of a network of myriad neuronal synapses. There may very well be a non-computability to thought and consciousness. This may be why the behavior of humans and other organisms is unpredictable, and why self-volition is observed.

This leads to another critical question: “Why did nervous systems evolve subjective consciousness?” If nervous systems are able to fully provide adaptive solutions simply as heuristic computers, there is no role for extraneous brain functions that simply add a subjective shadow reality, with no adaptive function, and presumably a physiological cost. A digital computer is a purely functional entity, so has no role for a subjective aspect, no matter how complex it becomes. – Chris King, Space, Time and Consciousness

Subjective qualities raise many questions about the purely mechanistic paradigm of Darwinian evolution because, with regard to functional efficiency, they are largely extraneous. This is why many mechanistically inclined theorists are loath to assign conscious experience to much of anything other than humans, where it is inarguable, and even then it is widely regarded as epiphenomenal and illusory. Everything fits much more neatly into the model, with no need to expand on it, when living organisms are simply genetically pre-programmed automatons, computing sensory experience in order to respond with behaviors that maximize survivability and thus the amplification of the molecular replicators (DNA) – which for Dawkins and other extremodarwinophiles is the only purpose and reason for life (as well as its driving mechanism, design process, and orchestrating principle).

However, the complexity of characteristics, conformational entropy, and phenomena associated with the living system greatly strains such a description. At some point the computational paradigm seems to become more analogous than literal, as the mechanisms involved appear to be non-computational – though I would not argue that they could not be reproduced by a sufficiently powerful supercomputer with massive parallel-processing capabilities. It is simply far too cumbersome to perform on the order of 10^143 calculations (Levinthal’s paradox) for a polypeptide to adopt a specific protein conformation.
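To see where a figure like 10^143 comes from, consider Levinthal’s back-of-the-envelope estimate (the exact numbers vary with the assumptions one makes): a chain of roughly 150 amino acids has about 300 rotatable backbone bonds, each with, say, three accessible conformations, giving

$$N \approx 3^{300} \approx 10^{143}$$

Even sampling one conformation per picosecond, an exhaustive search would take about $10^{143} \times 10^{-12}\,\text{s} = 10^{131}\,\text{s}$, vastly longer than the age of the universe ($\sim 4 \times 10^{17}\,\text{s}$) – and yet real proteins fold in milliseconds to seconds.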

Beginning at the molecular level: the number of degrees of freedom in the conformational entropy of protein folding, even for a modest-sized polypeptide, is astronomical, and yet a specific tertiary conformation is adopted with astonishing speed and replicability. The spacetime conformation of proteins determines their functionality. In many ways, to understand the miraculous nature of the living system, it must be understood at the molecular level. Proteins are the most structurally complex and functionally sophisticated molecules known. The nanotechnological engineering prowess of the living system is mind-boggling. Nowhere else in nature are molecules of such complexity and in such astonishing variety produced as in the living system; it is by far the current apex of technological achievement, and one day we may hope to match such sophistication.

From the molecular to the macromolecular level, the myriad chemical reactions occurring within the cellular matrix every second – in a highly orchestrated and coherent manner, no less – are again staggering. Compound this with orchestration and coordination across global tissue networks, and with the environmental conditions of the organism, and the computational requirements become astronomical. What many of these systems have in common is that they are chaotic: small environmental perturbations create large global effects, which makes prediction-based calculation intractable. Even for environmental information processing alone, the amount of information being received by the body and nervous system, and the iterative feedback mechanisms involved as an organism changes behavior to match the incoming data, create a situation in which any consciousness involved would be performing computations that are, again, staggering – if such processes can even be performed in a computational manner.
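The chaotic sensitivity at issue is easy to demonstrate with a toy model (my illustration, not a biological simulation): in the logistic map, two trajectories that begin a ten-billionth apart become completely decorrelated within a few dozen iterations, which is exactly why long-range prediction of chaotic dynamics defeats brute-force computation.

```python
# Sensitive dependence on initial conditions in the logistic map,
# x_{n+1} = r * x_n * (1 - x_n), at r = 4 (the fully chaotic regime).
r = 4.0
x, y = 0.4, 0.4 + 1e-10     # two nearly identical starting points

for n in range(60):
    x = r * x * (1 - x)
    y = r * y * (1 - y)
    if n % 10 == 9:
        # the separation grows from 1e-10 to order one
        print(f"step {n + 1:2d}: separation = {abs(x - y):.3e}")
```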

Similarly, thoughts do not appear to follow a computational framework. Thinking uses imagery. This point is more salient than it might at first appear, because mental imagery (and other concordant sense data) may be key to understanding the nature of consciousness, as we will see. The closest analogy that can be drawn between the fluid operations of mental imagery and the machine-oriented paradigm is a meta-computation – though this is merely describing a non-computational operation as best one can within the computational lexicon. Somewhat as operations can be performed on entire matrices at once, imagery does much the same thing, essentially allowing for massive parallelism – parallel processing being a hallmark of quantum computation.
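The matrix analogy can be made concrete (a loose illustration of mine, not the author’s formal claim): a single matrix expression stands in for a vast number of scalar operations carried out together, somewhat as a single mental image carries many features at once.

```python
# One matrix product performs ~100*100*100 = 1,000,000 scalar
# multiply-adds in a single expression - the "many at once" character
# that the imagery analogy gestures toward.
import numpy as np

A = np.random.rand(100, 100)
B = np.random.rand(100, 100)

C = A @ B        # a million elementary operations, expressed as one
print(C.shape)   # (100, 100)
```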

Does computation underlie the generation of mental imagery? Generally speaking, any geometry can be described mathematically, even chaotic or fractal geometries, which, while locally unpredictable, still globally evolve toward specific attractors. It is therefore quite possible that images, and all geometries for that matter, are generated by underlying computations. However, this is rather abstract; what physical meaning does it have? In terms of the actual physics, it is quite possible that images – that is, geometries – underlie computations, and not the other way around, and that the geometries and their evolution can simply be described mathematically, especially fractally. Hence the universal language of mathematics, and even more so of symbolism.
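For a concrete instance of a fractal geometry described by a trivial computation (again, my sketch, not the author’s): iterating z → z² + c and asking whether |z| stays bounded generates the Mandelbrot set. The rule is a one-liner; the resulting geometry is inexhaustibly rich.

```python
# Coarse ASCII rendering of the Mandelbrot set: locally unpredictable
# boundary, globally confined to a specific region of the complex plane.
def in_mandelbrot(c: complex, max_iter: int = 100) -> bool:
    z = 0 + 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:       # escaped: c lies outside the set
            return False
    return True

for im in range(-10, 11):            # imaginary axis, -1.0 to 1.0
    row = ""
    for re in range(-20, 11):        # real axis, -2.0 to 1.0
        row += "#" if in_mandelbrot(complex(re / 10, im / 10)) else " "
    print(row)
```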

Returning to the quantum computational scenario, interpretations such as the Wheeler-Feynman absorber theory of quantum mechanics involve the bi-directional operation of time. Imagine having the solution before the respective computation is performed: this is the action performed by advanced-wave propagators within the theory. If the brain is utilizing quantum coherent modes, then quantum temporal nonlocality becomes a definite consideration in neuronal dynamics. In fact, it has been reported that brain activity correlated with the perception of a specific stimulus can occur after we have already responded to that stimulus (Hameroff 2012).
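Schematically (a standard textbook summary, not the author’s own formulation), Wheeler-Feynman theory writes the field acting on a charge as a time-symmetric combination of the retarded and advanced solutions of the wave equation:

$$F = \tfrac{1}{2}\left(F_{\text{ret}} + F_{\text{adv}}\right)$$

where $F_{\text{ret}}$ depends on source behavior at the earlier time $t - r/c$ and $F_{\text{adv}}$ on the later time $t + r/c$. The advanced term is what gives the theory its backward-in-time character.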

This may pertain to the nature of imagery in the mind, which relates directly to the nature of memory. Quantum field theory has shown us that particles are not independent entities within a vacuum of space, producing fields through which they interact. It is quite the opposite: absolute space is not a vacuum; it is permeated by omnipresent fields, of which particles are an intrinsic aspect. As Einstein explained, particles are extended in spacetime through their field-like nature. It has also been theorized since the work of John Wheeler (a colleague of Einstein’s) that spacetime is permeated by a wormhole architecture (spacetime foam).

Given the possibility of interactions that occur bi-directionally in time (as in the Wheeler-Feynman absorber theory), as well as the wormhole architecture of spacetime (wormholes, by definition, allow for superluminal connections), it is possible that spacetime is inherently and fundamentally transtemporal (quantum temporal nonlocality: spanning the dimension of time – future, present, and past) and nonlocal. Particles are an extension of that spacetime, and the biological system is, obviously, made of particles and fields; it is therefore not preposterous to imagine that the biological system, at the quantum level, has access to nonlocal spacetime coordinates. That is, it may access information from spacetimes that are not in its direct local environment; they could be in its past world-line, or in a potential future world-line. What is a “photographic memory”? It is possible that the brain did not take a picture and store it in the nebulous ether between synapses, but rather that the electromagnetic activity of synapses allows the brain to access the spacetime coordinate of that image and view it.

Tangible connections across spacetime – formed by the quantum biochemical processes and structures of the body – form memory. Memory is the basis of consciousness: without memory there may be perception, but there is no apprehension, no distinct awareness. Consciousness is the interconnectivity of spacetime. This is a major potential solution to the binding problem, the Cartesian theatre, and the threshold of conscious emergence: the brain is perceiving reality as it is.

Conversely, the neurocomputational model of consciousness posits that the brain reproduces a facsimile of the observable environment with nearly exact accuracy and faithful detail. Images, sounds, textures, etc. are generated in the brain through the computations of synaptic networks, and the reproduction produced in our brain is virtually indistinguishable from the actual “real world” occurring outside of us. The brain, while producing a nearly exact virtual simulation of reality, is simultaneously producing the consciousness that engenders an experience of the world through the awareness of an individual entity, to whom the faithful-map virtual-reality simulation is being presented and interpreted.

It is no wonder that how this is done is nearly beyond explanation within the consensus paradigm of neuroscience and philosophy, for the ability to create a nearly exact replica of the observable environment, together with a consciousness capable of interpreting that data and navigating the “real world” via its internal virtual-reality map, would be an unbelievably advanced technological achievement.

In this sense it is perhaps a demotion of the brain’s remarkable status, bestowed by that perspective, to instead posit a theory in which the brain is not producing a simulacrum of reality through computational translation of electrical sensory input, but is instead perceiving our observable environment as it is. When you have a conversation with your friend, you are not having a conversation with a ghostly image of your friend in your head (which would be the case if everything you ever experience is only in your brain). Instead, we suggest that you are having a conversation with that person as they are, outside of your head in your observable environment, and the brain is only facilitating the interaction through its reception/transmission operations with the field-like nature of your mind and the (localized) physical body (cf. Rupert Sheldrake).

Quantum physics has certainly entered the discussion of a paradigm of consciousness. This has happened for a number of reasons, but in light of our present discourse, the reason espoused by Roger Penrose is perhaps the most germane. Penrose has explained with great insight the non-computability of consciousness, especially highlighting Kurt Gödel’s incompleteness theorems. To put it very simply (hopefully not overly so), Penrose points out that there appears to be an aspect of our consciousness that is not programmable and is non-deterministic, which engenders our capacity for free will. Since quantum mechanics is stochastic, meaning that it is based on probabilities, quantum mechanical phenomena within macromolecular structures of the neuronal system would allow for unpredictable outcomes if they were utilized in a quantum information-processing framework. (Note that the probabilities of quantum mechanics are calculated as the square of a probability amplitude, because the wavefunction is described by complex numbers, which live in the complex plane with its imaginary axis. This led the mathematician Jacques Hadamard to remark that “the shortest path between two truths in the real domain passes through the complex domain” – which I find very interesting in light of our discussion of imagery, here mathematically formulated through the imaginary numbers of the complex domain.)
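For concreteness, the “square of a probability amplitude” mentioned above is the Born rule: for a complex amplitude

$$\psi = a + bi, \qquad P = |\psi|^2 = \psi^{*}\psi = a^2 + b^2$$

so the measurable probability $P$ is real and non-negative even though the amplitude itself passes through the complex domain.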

Penrose, in collaboration with Stuart Hameroff, developed a completely novel hypothesis about quantum mechanics to model the possible non-computable operations of the biological system, seeming to fulfill the prognostication by Erwin Schrödinger, in his book What is Life?, that the study of the biological system might very well result in the discovery, or formulation, of a new kind of physics. Penrose and Hameroff developed what is known as Orchestrated Objective Reduction. What is remarkable is that the theory utilizes the geometry of spacetime itself, in one fell swoop incorporating biology and quantum gravity into a physical description of nature – a unified field theory indeed! Within their model, free electrons form a quantum superposition. The electrons are able to do so because of certain macromolecules within the biological system known as microtubules, which are sufficiently shielded from the environment through their specific structural architecture, as well as by the ordered structure of interfacial water (forming protective layers in the liquid-crystalline phase).

However, unlike the usual interpretation of subjective reduction, the wavefunctions of the electrons do not collapse through the act of observation (which would be problematic for a number of reasons, not least of which is that if this is the operation through which consciousness is engendered, it defeats the purpose to require a conscious entity to perform a measurement for the wavefunction to collapse, as stipulated by the Copenhagen interpretation of quantum mechanics). Instead, the wavefunction reduces once a certain quantum-gravitational threshold is surpassed. As the wavefunction evolves, it forms a “bubble” in the geometry of spacetime, which is inherently unstable and will thus collapse rather rapidly. Hence it is an objective reduction of the wavefunction: it is purely mechanistic. However, since it cannot be calculated precisely when the wavefunction will collapse under quantum gravity, it is unpredictable, and therefore non-computable.
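In Penrose’s formulation, the expected collapse time $\tau$ is set by the gravitational self-energy $E_G$ of the difference between the superposed mass distributions, via an uncertainty-principle-like relation:

$$\tau \approx \frac{\hbar}{E_G}$$

The larger the superposed mass displacement, the greater $E_G$ and the sooner the reduction occurs; the precise moment of collapse, however, remains non-computable in Penrose’s account.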

Translational Mnemonics

Aesthetic judgements, rather than abstract reasoning, guide the process by which we come to know. – Daniel Tammet

Consciousness is an awareness of spacetime. The term awareness does not simply mean a recording of spatial and temporal characteristics and causal events in spacetime – that can be done without any awareness. Awareness emerges from the perception within a system that it is a distinct entity, distinguishable from other bounded systems and events within its environment. The first kind of consciousness could very well be simply an awareness within a system of its spacetime frame: a perception, at some level, of itself as a distinct entity experiencing an outside environment. Yet each of the spacetime coordinates that is experienced, if recorded, becomes embedded within the conscious system. Or better yet, the system is able to access those spacetime coordinates. What does that mean, exactly? It means that the entity would be regenerating the characteristics of that spacetime coordinate within its perceptual system – and under this model, it would be doing so by actually accessing that spacetime coordinate via its nonlocal entanglement with it. Jumping from the generalized description to one we are more familiar with, such as human consciousness: a succession of thoughts is a succession of sensory data that are the regeneration of characteristics of spacetime coordinates. You are those spacetimes. Consciousness is made of spacetime.

The sensory data can be sight, smell, or sound. Visual data may be of primary significance because they can completely represent a single spacetime frame, which would be the spatial configuration in one 10^-44 second interval (the Planck time), whereas smell, sound, and tactility all involve multiple spacetime frames, because they are frequency-based.
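The Planck time cited here is the standard combination of fundamental constants:

$$t_P = \sqrt{\frac{\hbar G}{c^5}} \approx 5.39 \times 10^{-44}\ \text{s}$$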
