We saw in Part 9 that evolutionary explanations have a rarely discussed downside. They only work well when they are applied to recent episodes where we have a very secure sense of what the furniture of the past was like. Once our “mental picture” of the past begins to cloud over, they cease to “work” as sure-fire explanations: the feeling that something has been thoroughly explained begins to wane. In recent times evolutionary explanations have been widely fielded in biology and cosmology, but whether they are really convincing is a different matter. There are many signs that the kind of explanatory light shed by science has faded overall in public estimation in the last fifty years. This suggests that the emphasis placed on evolutionary explanations may have been overdone.
In this Part we turn to look at a more central, credible, convincing form of scientific explanation: that which works by deconstructing material objects (stars, planets, comets, plants, animals, humans, etc.) into their constituent parts, and eventually into tiny invisible parts. The number of such approved “explanatory objects”, or “ex-objects”, has increased markedly in recent times. Among scientists this kind of explanation has been the dominant variety for more than a hundred years, roughly since the naive counter-arguments mounted by Ostwald and Mach were seen off in the 1900s.
Among the public the situation is different. There is probably more scepticism here than in the case of evolutionary explanations. The raw “empiricism” expressed in the saying <<Seeing is Believing>> is evidently still alive and well — out there — in places in the public consciousness.
So the theme of this Instalment 10 is still the question of whether official explanations really explain, but we are looking now at deconstructive explanations. Why? Because explaining is absolutely central to the practice of reasoning: and it is the feeling that today’s science is not really explaining very much which tells us that there is a crisis in reasoning.
When the classical Greeks began to reflect on the nature of physical reality, they settled on the idea that it consisted of four elements: Earth, Water, Fire and Air. These were all visible, as it were, “ground level” (macro) realities. Democritus and Leucippus, though, had the brilliant insight that, at a deeper level, there must be “atoms”, and it was differences at this deep level which, they said, must explain why different kinds of solids and liquids, not to mention combustions and gases, had different potencies.
Today this move can be seen as the first stage of the thinking which has given us the Ladder of Scientific Explanation (LSE). This is a system of ten progressively deeper levels of tinier and tinier “entities” which have been recognised by the scientific consensus. People who are cross about science sometimes question this consensus, but most sensible people accept that it is sound as far as it goes. The standard official way of explaining the properties of material things has become that of postulating tiny invisible entities with simple modes of behaviour. When these tiny systems interact and combine together, they have the effect of necessarily producing the macro phenomena we observe. Such a narrative is pretty convincing, provided you believe in the tiny invisible entities. The injunction <<explore what is happening at tinier levels!>> has become the generally agreed advice given to anyone looking for scientific explanations. Level after level of postulated invisible entities has been found, contrary to an earlier intuitive empiricism which supposed that “only seeing can justify believing” and that the invisible is for ever unknown. The invisible is not for ever unknown. We can learn about it, using coherence-evidence gathered by collating scores of different interventions and measurements.
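The claim that simple local rules, acting together, necessarily produce recognisable macro phenomena can be made concrete with a toy sketch (my own illustration, not drawn from the text). In Conway’s Game of Life every cell obeys one trivial rule about its immediate neighbours, yet the rules jointly generate a coherent macro object, the “glider”, which travels diagonally across the grid:

```python
from itertools import product

def step(live: set[tuple[int, int]]) -> set[tuple[int, int]]:
    """One generation of Life: a cell is alive next step iff it has
    exactly 3 live neighbours, or exactly 2 and was already alive."""
    counts: dict[tuple[int, int], int] = {}
    for (x, y) in live:
        for dx, dy in product((-1, 0, 1), repeat=2):
            if (dx, dy) != (0, 0):
                cell = (x + dx, y + dy)
                counts[cell] = counts.get(cell, 0) + 1
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

# The standard "glider": five live cells.
glider = {(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)}

state = glider
for _ in range(4):
    state = step(state)

# After 4 generations the same shape reappears, shifted one cell diagonally:
# a macro-level "object" with motion, produced necessarily by micro rules.
shifted = {(x + 1, y + 1) for (x, y) in glider}
print(state == shifted)  # → True
```

The point of the sketch is only structural: nothing in the rule mentions gliders or motion, yet both are necessary consequences of it, just as the macro properties of matter are presented as necessary consequences of the behaviour of tiny entities.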
But the invisibility of the entities on the lower rungs of the Explanatory Ladder, and their tiny stature, are not the only problems. The way these entities behave and interact is often unexpected.
What has become abundantly clear since the arrival of quantum theory is that deep physical reality is strange, and that the deeper you go, the stranger it becomes. So how can we represent this “strangeness” in our modelling? There was a conviction in the 20th century that “degrees of mathematical abstraction” were the ideal way to represent and “bring out” this strangeness. This was part of the mindset which encouraged research mathematicians to explore more and more hyper-abstract theory. Then the awful realisation dawned in the 1970s, plunging us into a dangerously postmodern state, that all this work in higher maths had led to no discernible progress in physics whatever. Of course it hadn’t, because the researchers who crafted these aesthetic, gnomic hyper-abstractions had no connection with physics, and they were not aware of (still less immersed in) the strangeness the physicists were encountering. A so-called “dash for abstraction” had happened; a great deal of new, aesthetic abstract structure had been identified and studied, but it was all in vain.
So the problem posed by the Explanatory Ladder is the huge, central, $64,000 question of modern science. As we probe deeper and deeper into physical reality we find various characteristics changing:
[Incidentally, we do not need to tie the Ladder to smaller and smaller spatial dimensions. That would be to treat space as a given, which it certainly is not. Instead we can conceptualise the “tinier” objects as “parts of parts” of the macroscopic things they explain. They are like successive Russian Dolls: as we go deeper we find new items contained in the items we have already recognised.]
So how does it all end; how could it all end? Well, the final, tiniest ex-objects will be found when we successfully identify the pixel-like items which underlie the graininess of space. Next, rules cannot go on getting simpler and simpler ad infinitum. At some point a “structured rule-based milieu” must disappear altogether, and we will then be left with wholly random, rule-free behaviour. The inherent strangeness of which we become increasingly aware must be a consequence of some unguessed overall architecture in physical reality. The probabilistic effect will increase towards its only possible conclusion: total chaos. And the progressive decline in knowability points towards a final level of representation in which the symbols employed simply signal difference, but carry no “property” or “character” implications at all.
These are all reasonable extrapolations drawn from the preceding discussion of the Explanatory Ladder problem. So, of what can we be sure? What is the way forward? The silence from orthodox physics at this point is deafening. There appears to be no one thinking about this, the father and mother of all physical conundrums. Most theorists simply shrug, and if they say anything at all it is <<We don’t know. In all probability we can’t know.>>
This is hardly satisfactory. This is physical reality, cornered, at last, like a wild beast which has evaded us for so long. We must surely take physical reality seriously. It isn’t a non-problem, it isn’t an ego-trip, it isn’t part of a free-for-all aesthetic game … this is physical reality on which we all rely, all the time.
The theme of this blog is that we need Actimatics, a revolutionary new abstract discipline which begins with totally random building bricks and gradually builds quasi-physical objects, using the same principles as mathematics: rigorous definition enforced by willpower, and imagination in the choice of new objects. The objects it can build in this way will, we shall show, exemplify reliable behaviour; and in the future we may hope for a kind of evolution towards replication, constructive power, synoptic representation… and finally consciousness, including freewill and imagination. At which point a massive realisation dawns: if Actimatic objects finally reach this level, they can, in effect, take over the willpower and imagination needed to create their own, and our, universe!
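The text gives no formal machinery for Actimatics, so the following is only a toy analogy of my own for one of its claims: that objects built out of wholly random “building bricks” can nonetheless exhibit reliable behaviour. Here each hypothetical brick emits a random bit with a slight bias, and a composite “object” answers by majority vote; at scale, the object’s answer becomes effectively deterministic even though no single brick is dependable.

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible

def brick(bias: float = 0.6) -> int:
    """A single random building brick: outputs 1 with probability `bias`,
    otherwise 0. Individually it is unreliable."""
    return 1 if random.random() < bias else 0

def composite_object(n_bricks: int, bias: float = 0.6) -> int:
    """A quasi-physical 'object': the majority vote of n random bricks.
    Rigorous definition on top of randomness, in the spirit of the text."""
    ones = sum(brick(bias) for _ in range(n_bricks))
    return 1 if ones * 2 > n_bricks else 0

# Query the same large object 20 times: its behaviour is reliable,
# although every one of its constituent bricks is random.
answers = [composite_object(10_001) for _ in range(20)]
print(answers)
```

The design point is the law of large numbers: with 10,001 bricks at bias 0.6, the chance of the majority coming out 0 is vanishingly small, so randomness at the bottom level coexists with dependable behaviour at the composite level. Whether anything like this is what Actimatics intends is, of course, for the author to say.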