Paul P. Mealing


Sunday 11 April 2010

To have or not to have free will

In some respects this post is a continuation of the last one. The following week’s issue of New Scientist (3 April 2010) had a cover story, ‘Frontiers of the Mind’, covering what it called Nine Big Brain Questions. One of these addressed the question of free will, which happened to be where my last post ended. In the commentary on question 8 (How Powerful is the Subconscious?), New Scientist refers to well-known studies demonstrating that neuron activity precedes conscious decision-making by 50 milliseconds. In fact, John-Dylan Haynes of the Bernstein Centre for Computational Neuroscience, Berlin, has ‘found brain activity up to 10 seconds before a conscious decision to move [a finger].’ To quote Haynes: “The conscious mind is not free. What we think of as ‘free will’ is actually found in the subconscious.”

New Scientist actually reported Haynes' work in this field back in their 19 April 2008 issue. Curiously, in the same issue, they carried an interview with Jill Bolte Taylor, who was recovering from a stroke, and claimed that she "was consciously choosing and rebuilding my brain to be what I wanted it to be". I wrote to New Scientist at the time, and the letter can still be found on the Net:

You report John-Dylan Haynes finding it possible to detect a decision to press a button up to 7 seconds before subjects are aware of deciding to do so (19 April, p 14). Haynes then concludes: "I think it says there is no free will."

In the same issue Michael Reilly interviews Jill Bolte Taylor, who says she "was consciously choosing and rebuilding my brain to be what I wanted it to be" while recovering from a stroke affecting her cerebral cortex (p 42). Taylor obviously believes she was executing free will.

If free will is an illusion, Taylor's experience suggests that the brain can subconsciously rewire itself while giving us the illusion that it was our decision to make it do so. There comes a point where the illusion makes less sense than the reality.

To add more confusion, during the last week, I heard an interview with Norman Doidge MD, a research psychiatrist at the Columbia University Psychoanalytic Centre and the University of Toronto, who wrote the book, The Brain That Changes Itself. I haven’t read the book, but the interview was all about brain plasticity, and Doidge specifically asserts that we can physically change our brains just through thought.

What Haynes' experimentation demonstrates is that consciousness is dependent on neuronal brain activity, which is exactly the point I made in my last post. Our subconscious becomes conscious when it goes ‘global’, so one would expect a time-lapse between ‘local’ brain activity (that is subconscious) and the more global brain activity (that is conscious). But the weird part, suggested by Taylor’s experience and Doidge’s assertions, is that our conscious thoughts can also affect our brains at the neuronal level. This reminds me of Douglas Hofstadter’s thesis that we are all a ‘strange loop’, which he introduced in his book, Godel, Escher, Bach, and then elaborated on in a book called I am a Strange Loop. I’ve read the former tome but not the latter one (refer my post on AI & Consciousness, Feb.2009).

We will learn more and more about consciousness, I’m sure, but I’m not at all sure that we will ever truly understand it. As John Searle points out in his book, Mind, at the end of the day, it is an experience, and a totally subjective experience at that. In regard to studying it and analysing it, we can only ever treat it as an objective phenomenon. The Dalai Lama makes the same point in his book, The Universe in a Single Atom.

People tend to think about this from a purely reductionist viewpoint: once we understand the correlation between neuron activity and conscious experience, the mystery stops being a mystery. But I disagree: I expect the more we understand, the bigger the mystery will become. If consciousness turns out to be any less weird than quantum mechanics, I’ll be very surprised. And we are already seeing quite a lot of weirdness, when consciousness is clearly dependent on neuronal activity, and yet the brain’s plasticity can be affected by conscious thought.

So where does this leave free will? Well, I don’t think that we are automatons, and I admit I would find it very depressing if that were the case. The last of the Nine Questions in last week’s New Scientist asks: will AI ever become sentient? In its response, New Scientist reports on some of the latest developments in AI, where they talk about ‘subconscious’ and ‘conscious’ layers of activity (read software). Raul Arrabales of the Carlos III University of Madrid has developed ‘software agents’ called IDA (Intelligent Distribution Agent) and is currently working on LIDA (Learning IDA). By ‘subconscious’ and ‘conscious’ levels, the scientists are really talking about tiers of ‘decision-making’, or a hierarchic learning structure, which is an idea I’ve explored in my own fiction. At the top level, the AI has goals, which are effectively criteria of success or failure. At the lower level it explores various avenues until something is ‘found’ that can be passed on to the higher level. In effect, the higher level chooses the best option from the lower level. The scientists working on this two-level arrangement have even given their AI ‘emotions’, which are built-in biases that direct them in certain directions. I also explored this in my fiction, with the notion of artificial attachment to a human subject that would simulate loyalty.
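To make the two-tier idea concrete, here is a minimal sketch of what such an arrangement might look like in code. Everything in it – the function names, the ‘emotional bias’ bonus, the numbers – is my own invention for illustration; it is not taken from IDA/LIDA or any published system.

```python
import random

# A minimal sketch of a two-tier agent. The names and structure here are
# invented for illustration, not taken from IDA/LIDA or any published code.

def subconscious_layer(n_candidates=10):
    """Lower tier: blindly explore and return candidate actions."""
    # Each candidate is just a number standing in for some course of action.
    return [random.uniform(-1.0, 1.0) for _ in range(n_candidates)]

def conscious_layer(candidates, goal, emotional_bias=0.2):
    """Upper tier: score candidates against the goal and pick the best.

    The 'emotional bias' is a built-in preference (here, a bonus for
    positive-valued actions) that nudges the choice in a fixed direction.
    """
    def score(action):
        closeness = -abs(action - goal)          # how well it serves the goal
        bias = emotional_bias if action > 0 else 0.0
        return closeness + bias
    return max(candidates, key=score)

if __name__ == "__main__":
    goal = 0.5                                   # success criterion set at the top level
    options = subconscious_layer()               # lower level explores
    chosen = conscious_layer(options, goal)      # higher level chooses
    print(f"explored {len(options)} options, chose {chosen:.3f}")
```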

But, even in my fiction, I tend to agree with Searle, that these are all simulations, which might conceivably convince a human that an AI entity really thinks like us. But I don’t believe the brain is a computer, so I think it will only ever be an analogy or a very good simulation.

Both this development in AI and the conscious/subconscious loop we seem to have in our own brains remind me of the ‘Bayesian’ model of the brain developed by Karl Friston, also reported in New Scientist (31 May 2008). They mention it again in an unrelated article in last week’s issue – one of the little unremarkable reports they do – this time on how the brain predicts the future. Friston effectively argues that the brain, and therefore the mind, makes predictions and then modifies them based on feedback. It’s effectively how the scientific method works as well, but we do it all the time in everyday encounters, without even thinking about it. But Friston argues that it works at the neuronal level as well as the cognitive level. Neuron pathways are reinforced through use, which is a point that Norman Doidge makes in his interview. We now know that the brain literally rewires itself, based on repeated neuron firings.
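Here is a toy predict-and-correct loop of the kind I have in mind. It is only a crude stand-in for the Bayesian-brain idea – Friston’s actual formulation involves minimising ‘free energy’ – and all the values are invented for illustration.

```python
import random

# Toy predict-and-correct loop: a crude stand-in for the Bayesian-brain idea,
# not Friston's actual free-energy formulation. All numbers are illustrative.

def predict_and_correct(true_value=7.0, steps=20, learning_rate=0.3, noise=0.5):
    estimate = 0.0                                        # the brain's initial prediction
    for step in range(steps):
        observation = true_value + random.gauss(0, noise) # noisy feedback from the world
        error = observation - estimate                    # prediction error
        estimate += learning_rate * error                 # correct the prediction
        print(f"step {step:2d}: prediction = {estimate:.2f}")
    return estimate

if __name__ == "__main__":
    predict_and_correct()
```

Each pass through the loop plays the role of another round of feedback: the repeated corrections are the analogue of repeated neuron firings reinforcing a pathway.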

Because we think in a language, which has become a default ‘software’ for ourselves, we tend to think that we really are just ‘wetware’ computers, yet we don’t share this ability with other species. We are the only species that ‘downloads’ a language to our progeny, independently of our genetic material. And our genetic material (DNA) really is software, as it is for every life form on the planet. We have a 4-letter code that provides the instructions to create an entire organism, materially and functionally – nature’s greatest magical trick.

One of the most important aspects of consciousness, not only in humans, but for most of the animal kingdom (one suspects) is that we all ‘feel’. I don’t expect an AI ever to feel anything, even if we programme it to have emotions.

But it is because we can all ‘feel’, that our lives mean so much to us. So, whether we have free will or not, what really matters is what we feel. And without feeling, I would argue that we would not only be not human, but not sentient.


Footnote: If you're interested in neuroscience at all, the interview linked above is well worth listening to, even though it's 40 mins long.

Saturday 3 April 2010

Consciousness explained (well, almost, sort of)

As anyone who has followed this blog for any length of time knows, I’ve touched on this subject a number of times. It deals with so many issues, including the possibilities inherent in AI and the subject of free will (the latter being one of my earliest posts).

Just to clarify one point: I haven’t read Daniel C. Dennett’s book of the same name. Paul Davies once gave it the very generous accolade of citing it as one of the four most influential books he’s read (in company with Douglas Hofstadter’s Godel, Escher, Bach). He said: “[It] may not live up to its claim… it definitely set the agenda for how we should think about thinking.” Then, in parenthesis, he quipped: “Some people say Dennett explained consciousness away.”

In an interview in Philosophy Now (early last year) Dennett echoed David Chalmers’ famous quote that “a thermostat thinks: it thinks it’s too hot, or it thinks it’s too cold, or it thinks the temperature is just right.” And I don’t think Dennett was talking metaphorically. This, by itself, doesn’t imbue a thermostat with consciousness, if one argues that most of our ‘thinking’ happens subconsciously.

I recently had a discussion with Larry Niven on his blog, on this very topic, where we to-and-fro’d over the merits of John Searle’s book, Mind. Needless to say, Larry and I have different, though mutually respectful, views on this subject.

In reference to Mind, Searle addresses that very quote by Chalmers by saying: “Consciousness is not spread out like jam on a piece of bread…” However, if one believes that consciousness is an ‘emergent’ property, it may very well be ‘spread out like jam on a piece of bread’, and evidence suggests, in fact, that this may well be the case.

This brings me to the reason for writing this post: New Scientist, 20 March 2010, pp.39-41; an article entitled Brain Chat by Anil Ananthaswamy (consulting editor). The article refers to a theory proposed originally by Bernard Baars of The Neuroscience Institute in San Diego, California. In essence, Baars differentiated between ‘local’ brain activity and ‘global’ brain activity, since dubbed the ‘global workspace’ theory of consciousness.

According to the article, this has now been demonstrated by experiment, the details of which, I won’t go into. Essentially, it has been demonstrated that when a person thinks of something subconsciously, it is local in the brain, but when it becomes conscious it becomes more global: ‘…signals are broadcast to an assembly of neurons distributed across many different regions of the brain.’

One of the benefits of this mechanism is that it effectively filters out anything that’s irrelevant. What becomes conscious is what the brain considers important. What criterion the brain uses to determine this is not discussed. So this is not the explanation that people really want – it’s merely postulating a neuronal mechanism that correlates with consciousness as we experience it. Another benefit of this theory is that it explains why we can’t consider two conflicting images at once. Everyone has seen the duck/rabbit combination, and there are numerous other examples. Try listening to a Bach contrapuntal fugue so that you hear both melodies at once – you can’t. The brain mechanism (as proposed above) says that only one of these can go global, not both. It doesn’t explain, of course, how we manage to consciously ‘switch’ from one to the other.

However, both the experimental evidence and the theory are consistent with something that we’ve known for a long time: a lot of our thinking happens subconsciously. Everyone has come across a puzzle that they can’t solve, then they walk away from it, or sleep on it overnight, and the next time they look at it, the solution just jumps out at them. Professor Robert Winston demonstrated this once on TV, with himself as the guinea pig. He was trying to solve a visual puzzle (find an animal in a camouflaged background) and when he had that ‘Ah-ha’ experience, it showed up as a spike on his brain waves. Possibly the very signal of it going global, although I’m only speculating based on my new-found knowledge.

Mathematicians have this experience a lot, but so do artists. No artist knows where their art comes from. Writing a story, for me, is a lot like trying to solve a puzzle. Quite often, I have no better idea what’s going to happen than the reader does. As Woody Allen once said, it’s like you get to read it first. (Actually, he said it’s like you hear the joke first.) But his point is that all artists feel the creative act is like receiving something rather than creating it. So we all know that something is happening in the subconscious – a lot of our thinking happens where we’re unaware of it.

As I alluded to in my introduction, there are 2 issues that are closely related to consciousness, which are AI and free will. I’ve said enough about AI in previous posts, so I won’t digress, except to restate my position that I think AI will never exhibit consciousness. I also concede that one day someone may prove me wrong. It’s one aspect of consciousness that I believe will be resolved one day, one way or the other.

One rarely sees a discussion on consciousness that includes free will (Searle’s aforementioned book, Mind, is an exception, and he devotes an entire chapter to it). Science seems to have an aversion to free will (refer my post, Sep.07) which is perfectly understandable. Behaviours can only be explained by genes or environment or the interaction of the two – free will is a loose cannon and explains nothing. So for many scientists, and philosophers, free will is seen as a nice-to-have illusion.

Consciousness evolved, but if most of our thinking is subconscious, it raises the question: why? As I expounded on Larry’s blog, I believe that one day we will have AI that will ‘learn’; what Penrose calls ‘bottom-up’ AI. Some people might argue that we require consciousness for learning, but insects demonstrate learning capabilities, albeit rudimentary compared to what we achieve. Insects may have consciousness, by the way, but learning can be achieved by reinforcement and punishment – we’ve seen it demonstrated in animals at all levels – they don’t have to be conscious of what they’re doing in order to learn.

So the only evolutionary reason I can see for consciousness is free will, and I’m not confining this to the human species. If, as science likes to claim, we don’t need, or indeed don’t have, free will, then arguably, we don’t need consciousness either.

To demonstrate what I mean, I will relate 2 stories of people reacting in an aggressive manner in a hostile situation, even though they were unconscious.

One case, from the last 10 years or so, was in Sydney, Australia (from memory), when a female security guard was knocked unconscious and her bag (of money) was taken from her. In front of witnesses, she got up, walked over to the guy (who was now in his car), pulled out her gun and shot him dead. She had no recollection of doing it. Now, you may say that’s a good defence, but I know of at least one other similar incident.

My father was a boxer, and when he first told me this story, I didn’t believe him, until I heard of other cases. He was knocked out, and when he came to, he was standing and the other guy was on the deck. He had to ask his second what happened. He gave up boxing after that, by the way.

The point is that both of those cases illustrate that humans can perform complicated acts of self-defence without being consciously cognisant of it. The question is: why is this the exception and not the norm?


Addendum: Nicholas Humphrey, whom I have possibly incorrectly criticised in the past, has an interesting evolutionary explanation: consciousness allows us to read others’ minds. Previously, I thought he authored an article in SEED magazine (2008) that argued that consciousness is an illusion, but I can only conclude that it must have been someone else. Humphrey discovered ‘blindsight’ in a monkey (called Helen) with a surgically removed visual cortex, which is an example of a subconscious phenomenon (sight) with no conscious correlate. (This specific phenomenon has since been found in humans with a damaged visual cortex as well.)


Addendum 2: I have since written a post called Consciousness unexplained in Dec. 2011 for those interested.

Sunday 28 March 2010

Karl Popper’s criterion

Over the last week, I’ve been involved in an argument with another blogger, Justin Martyr, after Larry Niven linked us both to one of his posts. I challenged Justin (on his own blog) over his comments on ID (Intelligent Design), contending that his version was effectively a ‘God-of-the-gaps’ argument. Don’t read the thread – it becomes tiresome.

Justin tended to take the argument in all sorts of directions, and I tended to follow, but it ultimately became focused on Popper’s criterion of falsifiability for a scientific theory. First of all, notice that I use the word falsifiability (not even in the dictionary), whereas Justin used the word falsification. It’s a subtle difference, but it highlights a difference in interpretation. It also highlighted to me that some people don’t understand what Popper’s criterion really means or why it’s so significant in scientific epistemology.

I know that, for some of you who read this blog, this will be boring, but, for others, it may be enlightening. Popper originally proposed his criterion to eliminate pseudo-scientific theories (he was targeting Freud at the time) whereby the theory is always true for all answers and all circumstances, no matter what the evidence. The best contemporary example is creationism and ID, because God can explain everything no matter what it entails. There is no test or experiment or observation one can do that will eliminate God as a hypothesis. On the other hand, there are lots of tests and observations (that have been done) that could eliminate evolutionary theory.

As an aside, bringing God into science stops science, which is an argument I once had with William Lane Craig and posted as The God hypothesis (Dec.08).

When scientists and philosophers first cited Popper’s criterion as a reason for rejecting creationism as ‘science’, many creationists (like Duane T. Gish, for example) claimed that evolution can’t be a valid scientific theory either, as no one has ever observed evolution taking place: it’s pure conjecture. So this was the first hurdle of misunderstanding. Firstly, evolutionary theory can generate hypotheses that can be tested. If those hypotheses weren’t falsifiable, then Gish would have had a case. The point is that all the discoveries that have been made, since Darwin and Wallace postulated their theory of natural selection, have only confirmed the theory.

Now, this is where some people, like Justin, for example, think Popper’s specific criterion of ‘falsification’ should really be ‘verification’. They would argue that all scientific theories are verified not falsified, so Popper’s criterion has it backwards. But the truth is you can’t have one without the other. The important point is that the evidence is not neutral. In the case of evolution, the same palaeontological and genetic evidence that has proved evolutionary theory correct could just as readily have proven it wrong – which is what would have happened if the theory were wrong.

Justin made a big deal about me using the word testable (for a theory) in lieu of the word, falsification, as if they referred to different criteria. But a test is not a test if it can’t be failed. So Popper was saying that a theory has to be put at risk to be a valid theory. If you can’t, in principle, prove the theory wrong, then it has no validity in science.

Another example of a theory that can’t be tested is string theory, but for different reasons. String theory is not considered pseudo-science because it has a very sound mathematical basis, but it has effectively been stagnant for the last 20 years, despite some of the best brains in the world working on it. In principle, it does meet Popper’s criterion, because it makes specific predictions, but in practice those predictions are beyond our current technological abilities to either confirm or reject.

As I’ve said in previous posts, science is a dialectic between theory and experimentation or observation. String theory is an example where half the dialectic is missing (refer my post on Layers of nature, May.09). This means science is epistemologically dynamic, which leads to another misinterpretation of Popper’s criterion. In effect, any theory is contingent on being proved incorrect, and we find that, after years of confirmation, some theories are proved incorrect depending on circumstances. The best known example would be Newton’s theories of mechanics and gravity being overtaken by Einstein’s special and general theories of relativity. Actually, Einstein didn’t prove Newton’s theories wrong so much as demonstrate their epistemological limitation. In fact, if Einstein’s equations couldn’t be reduced to Newton’s equations (in the limit where speeds are small compared with the speed of light, c) then he would have had to reject them.
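To illustrate what ‘reducing to Newton’s equations’ means, here is the textbook low-speed limit of relativistic kinetic energy, which recovers the Newtonian formula when v is much smaller than c (my illustration, not part of the original discussion):

```latex
% Relativistic kinetic energy and its low-speed (v << c) limit
E_k = (\gamma - 1)\,m c^2, \qquad
\gamma = \frac{1}{\sqrt{1 - v^2/c^2}} \;\approx\; 1 + \frac{1}{2}\frac{v^2}{c^2}
\quad\Longrightarrow\quad
E_k \;\approx\; \frac{1}{2} m v^2
```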

Thomas Kuhn had a philosophical position that science proceeds by revolutions, and Einstein’s theories are often cited as an example of Kuhn’s thesis in action. Some science philosophers (Steve Fuller) have argued that Kuhn’s and Popper’s positions are at odds, but I disagree. Both Newton’s and Einstein’s theories fulfill Popper’s criterion of falsifiability, and have been verified by empirical evidence. It’s just that Einstein’s theories take over from Newton’s when certain parameters become dominant. We also have quantum mechanics, which effectively puts them both in the shade, but no one uses a quantum mechanical equation, or even a relativistic one, when a Newtonian one will suffice.

Kuhn effectively said that scientific revolutions come about when the evidence for a theory becomes inexplicable to the extent that a new theory is required. This is part of the dialectic that I referred to, but the theory part of the dialectic always has to make predictions that the evidence part can verify or reject.

Justin also got caught up in believing that the methodology determines whether a theory is falsifiable or not, claiming that some analyses, like Bayesian probabilities for example, are impossible to falsify. I’m not overly familiar with Bayesian probabilities, but I know that they involve an iterative process, whereby a result is fed back into the equation, which hones the result. Justin was probably under the impression that this homing in on a more accurate result made it an unfalsifiable technique. But, actually, it’s all dependent on the input data. Bruce Bueno de Mesquita, whom New Scientist claims is the most successful predictor in the world, uses Bayesian techniques along with game theory to make predictions. But a prediction is falsifiable by definition, otherwise it’s not a prediction. It’s the evidence that determines if the prediction is true or false, not the method one uses to make the prediction.
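For what it’s worth, here is a minimal sketch of iterative Bayesian updating, with invented numbers: a prior probability for a hypothesis is revised each time a new observation arrives, and the evidence can drive that probability down just as readily as up – which is the sense in which the resulting predictions remain falsifiable.

```python
# Minimal Bayesian updating sketch (illustrative numbers only).
# Hypothesis H : "the coin is biased towards heads (P(heads) = 0.8)".
# Alternative  : "the coin is fair (P(heads) = 0.5)".

def update(prior_h, likelihood_h, likelihood_alt):
    """One application of Bayes' rule for a two-hypothesis problem."""
    numerator = likelihood_h * prior_h
    evidence = numerator + likelihood_alt * (1.0 - prior_h)
    return numerator / evidence

def run(observations, prior_h=0.5):
    p = prior_h
    for obs in observations:            # obs is 'H' (heads) or 'T' (tails)
        if obs == 'H':
            p = update(p, 0.8, 0.5)     # heads is more likely under H
        else:
            p = update(p, 0.2, 0.5)     # tails is less likely under H
        print(f"after {obs}: P(biased) = {p:.3f}")
    return p

if __name__ == "__main__":
    # A run of mostly tails drives the probability of the biased-coin
    # hypothesis down: the evidence, not the method, decides.
    run("TTHTTTHT")
```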

In summary: a theory makes predictions, which could be right or wrong. It’s the evidence that should decide whether the theory is right or wrong; not the method by which one makes the prediction (a mathematical formula, for example); nor the method by which one gains the evidence (the experimental design). And it’s the right or wrong part that defines falsifiability as the criterion.

To give Justin due credit, he allowed me the last word on his blog.

Footnote: for a more esoteric discussion on Steve Fuller’s book, Kuhn vs. Popper: The Struggle for the Soul of Science, in a political context, I suggest the following. My discussion is far more prosaic and pragmatic in approach, not to mention, un-academic.

Addendum: (29 March 2010) Please read April's comment below, which points out the errors in this post concerning Popper's own point of view.

Addendum 2: This is one post where the dialogue in the comments (below) is probably more informative than the post, owing to contributors knowing more about Popper than I do, which I readily acknowledge.

Addendum 3: (18 Feb. 2012) Here is an excellent biography of Popper in Philosophy Now, with particular emphasis on his contribution to the philosophy of science.

Tuesday 16 March 2010

Speciation: still one of nature’s great mysteries

First of all a disclaimer: I’m a self-confessed dilettante, not a real philosopher, and, even though I read widely and take an interest in all sorts of things scientific, I’m not a scientist either. I know a little bit more about physics and mathematics than I do biology, but I can say with some confidence that evolution, like consciousness and quantum mechanics, is one of nature’s great mysteries. But, like consciousness and quantum mechanics, just because it’s a mystery doesn’t make it any less real. Back in Nov.07, I wrote a post titled: Is evolution fact? Is creationism myth?

First, I defined what I meant by ‘fact’: it’s either true or false, not something in between. So it has to be one or the other: like does the earth go round the sun or does the sun go round the earth? One of those is right and one is wrong, and the one that is right is a fact.

Well, I put evolution into that category: it makes no sense to say that evolution only worked for some species and not others; or that it occurred millions of years ago but doesn’t occur now; or the converse, that it occurs now but not in the distant past. Either it occurs or it never occurred, and all the evidence, and I mean all of the evidence, in every area of science – genetics, zoology, palaeontology, virology – suggests it does. There are so many ways that evolution could have been proven false in the 150 years since Darwin and Wallace proposed their theory of natural selection that it’s just as unassailable as quantum mechanics. Natural selection, by the way, is not a theory; it’s a law of nature.

Now, both proponents and opponents of evolutionary theory often make the mistake of assuming that natural selection is the whole story of evolution and there’s nothing else to explain. So I can confidently say that natural selection is a natural law because we see evidence of it everywhere in the natural world, but it doesn’t explain speciation, and that is another part of the story that is rarely discussed. But it’s also why it’s one of nature’s great mysteries. To quote from this week’s New Scientist (13 March, 2010, p.31): ‘Speciation still remains one of the biggest mysteries in evolutionary biology.’

This is a rare admission in a science magazine, because many people on both sides of the ideological divide (which evolution has created in some parts of the world, like the US) believe that it opens up a crack in the scientific edifice for creationists and intelligent design advocates to pull it down.

But again, let’s compare this to quantum mechanics. In a recent post on Quantum Entanglement (Jan.10), where I reviewed Louisa Gilder’s outstanding and very accessible book on the subject, I explain just how big a mystery it remains, even after more than a century of experimentation, verification and speculation. Yet, no one, whether a religious fundamentalist or not, wants to replace it with a religious text or any other so-called paradigm or theory. This is because quantum mechanics doesn’t challenge anything in the Bible, because the Bible, unsurprisingly, doesn’t include anything about physics or mathematics.

Now, the Bible doesn’t include anything about biology either, but the story of Genesis, which is still a story after all the analysis, has been substantially overtaken by scientific discoveries, especially in the last 2 centuries.

But it’s because of this ridiculous debate, that has taken on a political force in the most powerful and wealthy nation in the world, that no one ever mentions that we really don’t know how speciation works. People are sure to counter this with one word, mutation, but mutations and genetic drift don’t explain how genetic anomalies amongst individuals lead to new species. It is assumed that they accumulate to the point that, in combination with natural selection, a new species branches off. But the New Scientist cover story, reporting on work done by Mark Pagel (an evolutionary biologist at the University of Reading, UK) challenges this conventionally held view.

To quote Pagel: “I think the unexamined view that most people have of speciation is this gradual accumulation by natural selection of a whole lot of changes, until you get a group of individuals that can no longer mate with their old population.”

Before I’m misconstrued, I’m not saying that mutation doesn’t play a fundamental role, as it obviously does, which I elaborate on below. But mutations within individuals don’t axiomatically lead to new species. This is a point that Erwin Schrodinger attempted to address in his book, What is Life? (see my review posted Nov.09).

Years ago, I wrote a letter to science journalist, John Horgan, after reading his excellent book The End of Science (a collection of interviews and reflections by some of the world’s greatest minds in the late 20th Century). I suggested to him an analogy between genes and environment interacting to create a human personality, and the interaction between speciation and natural selection creating biological evolution. I postulated back then that we had the environment part, which was natural selection, but not the gene part of the analogy, which is speciation. In other words, I suggested that there is still more to learn, just like there is still more to learn about quantum mechanics. We always assume that we know everything that there is to know, when clearly we don’t. The mystery inherent in quantum mechanics indicates that there is something that we don’t know, and the same is true for evolution.

Mark Pagel’s research is paradigm-challenging, because he’s demonstrated statistically that genetic drift by mutation doesn’t give the right answers. I need to explain this without getting too esoteric. Pagel looked at the branches of 101 various (evolutionary) trees, including ‘cats, bumblebees, hawks, roses and the like’. By doing a statistical analysis of the times between speciation events (the lengths of the branches), he expected to get a bell curve distribution, which would accord with the conventional view, but instead he got an exponential curve.
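To see why an exponential pattern is the signature of waiting for a single rare chance event (as the New Scientist quote below spells out), here is a generic simulation sketch – an illustration of the statistics only, not a reconstruction of Pagel’s analysis.

```python
import random

# Simulate rare, independent events (probability p per time step) and
# collect the waiting times between them. A generic illustration of why
# such waiting times are exponentially distributed rather than bell-shaped;
# not a reconstruction of Pagel's analysis.

def waiting_times(p=0.01, steps=200_000):
    gaps, last = [], 0
    for t in range(steps):
        if random.random() < p:          # the rare event fires
            gaps.append(t - last)
            last = t
    return gaps

if __name__ == "__main__":
    gaps = waiting_times()
    mean = sum(gaps) / len(gaps)
    # For an exponential distribution the median sits at about 69% of the
    # mean (ln 2); for a bell curve it would sit at the mean itself.
    median = sorted(gaps)[len(gaps) // 2]
    print(f"mean gap = {mean:.1f}, median gap = {median} (~{median/mean:.2f} of mean)")
```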

To quote New Scientist: ‘The exponential is the pattern you get when you are waiting for some single, infrequent event to happen… the length of time it takes a radioactive atom to decay, and the distance between roadkills on a highway.’

In other words, as the New Scientist article expounds in some detail, new species happen purely by accident. What I found curious about the above quote is the reference to ‘radioactive decay’ which was the starting point for Erwin Schrodinger’s explanation of mutation events, which is why mutation is still a critical factor in the whole process.

Schrodinger went to great lengths, very early in his exposition, to explain that nearly all of physics is statistical, and gave examples from magnetism to thermal activity to radioactive decay. He explained how this same statistical process works in creating mutations. Schrodinger coined a term, ‘statistico-deterministic’, but in regard to quantum mechanics rather than physics in general. Nevertheless, chaos and complexity theory reinforce the view that the universe is far from deterministic at almost every level that one cares to examine it. As the New Scientist article argues, Pagel’s revelation supports Stephen Jay Gould’s assertion: ‘that if you were able to rewind history and replay the evolution on Earth, it would turn out differently every time.’

I’ve left a lot out in this brief exposition, including those who challenge Pagel’s analysis, and how his new paradigm interacts with natural selection and geographical separation, which are also part of the overall picture. Pagel describes his own epiphany when he was in Tanzania: ‘watching two species of colobus monkeys frolic in the canopy 40 metres overhead. “Apart from the fact that one is black and white and one is red, they do all the same things... I can remember thinking that speciation was very arbitrary. And here we are – that’s what our models are telling us.”’ In other words, natural selection and niche-filling are not enough to explain diversification and speciation.

What I find interesting is that wherever we look in science, chance plays a far greater role than we credit. It’s not just the cosmos at one end of the scale, and quantum mechanics at the other end, that rides on chance, but evolution, like earthquakes and other unpredictable events, also seems to be totally dependent on the metaphorical roll of the dice.

Addendum 1: (18 March 2010)

Comments posted on New Scientist challenge the idea that a ‘bell curve’ distribution should have been expected at all. I won’t go into that, because it doesn’t change the outcome: 78% of ‘branches’ statistically analysed (from 110) were exponential and 0% followed a normal distribution (bell curve). Whatever the causal factors, in which mutation plays a definitive role, speciation is as unpredictable as earthquakes, weather events and radioactive decay (for an individual isotope).

Addendum 2: (18 March 2010)

Writing this post reminded me of Einstein’s famous quote that ‘God does not play dice’. Well, I couldn’t disagree more. If there is a creator-God (in the Einstein mould) then first and foremost, he or she is a mathematician. Secondly, he or she is a gambler who loves to play the odds. The role of chance in the natural world is more fundamental and universally manifest than we realise. In nature, small variances can have large consequences: we see that with quantum theory, chaos theory and evolutionary theory. There appears to be little room for determinism in the overall play of the universe.

Sunday 7 March 2010

The world badly needs a radical idea

Over the last week, a few items, in the limited media that I access, have increased my awareness that the world needs a new radical idea, and I don’t have it. At the start of the 21st Century we are like a species on steroids, from the planet’s point of view, and that’s not healthy for the planet. And if it’s not healthy for the planet, it’s not healthy for us. Why do so few of us even seem to be aware of this?

It started with last week’s New Scientist cover story, Earth’s Nine Lives, in which environmental journalist Fred Pearce looks at nine natural parameters that give an indication of the health of the planet from a human perspective. By this, I mean he looks at limits set by scientists and how close we are to them. He calls them boundaries, and they are all being approached or already passed, with the possible exception of one. They are: ocean acidity; ozone depletion; fresh water; biodiversity; nitrogen and phosphorus cycles; land use; climate change; atmospheric aerosol loading and chemical pollution.

Out of these, ozone depletion seems to be the only one going in the right direction, and, according to Pearce, three of them, including climate change, have actually crossed their specified boundaries already. But, arguably, the most disturbing is fresh water where he believes the boundary will be crossed mid-century. It’s worth quoting the conclusion in its entirety.

However you cut it, our life-support systems are not in good shape. Three of nine boundaries - climate change, biodiversity and nitrogen fixation - have been exceeded. We are fast approaching boundaries for the use of fresh water and land, and the ocean acidification boundary seems to be looming in some oceans. For two of the remaining three, we do not yet have the science to even guess where the boundaries are.

That leaves one piece of good news. Having come close to destroying the ozone layer, exposing both ourselves and ecosystems to dangerous ultraviolet radiation, we have successfully stepped back from the brink. The ozone hole is gradually healing. That lifeline has been grabbed. At least it shows action is possible - and can be successful.


The obvious common denominator here is human population, which I’ve talked about before (Living in the 21st Century, Sep.07 and more recently, Utopia or dystopia, Sep.09; and my review of Tim Flannery’s book, The Weathermakers, Dec. 09).

In the same week (Friday), I heard an interview with Clive Hamilton, who is Charles Sturt Professor of Public Ethics at the Centre for Applied Philosophy and Public Ethics at the Australian National University, Canberra. He’s just written a book on climate change and despairs at the ideological versus scientific struggle that is taking place globally on this issue. He believes that the Copenhagen summit was actually a backward step compared to the Kyoto protocol.

Then, today (Saturday) Paul Carlin sent me a transcript of an interview with A.C. Grayling, who is currently visiting Australia. The topic of the interview is ‘Religion in its death throes’, but he’s talking about religion in politics rather than religion in a genuinely secularised society.

He’s looking forward to a time when religion is a personal thing rather than a political weapon, that effectively divides people and creates the ‘us and them’ environment we seem to be in at the moment. Australia is relatively free from this, but the internet and other global media means we are not immune. In fact, people have been radicalised in this country, and some of them are now serving jail sentences as a consequence.

To quote Grayling, predicting a more tolerant future:

‘And people who didn't have a religious commitment wouldn't mind if other people did privately and they wouldn't attack or criticise them.

So there was an unwritten agreement that the matter was going to be left quiet. So in a future where the religious organisations and religious individuals had returned to something much more private, much more inward looking, we might have that kind of public domain where people were able to rub along with one another with much less friction than we're seeing at the moment.’


To a large extent, I feel we already have that in Australia, and it’s certainly a position I’ve been arguing for, ever since I started this blog.

But Grayling also mentions climate change, when asked by his interviewer, Leigh Sales, but hints, rather than specifies, that a debate between a science expert on climatology and a so-called climate-change-sceptic would not be very helpful, because they are arguing from completely different places. One is arguing from scientific data and accepted peer-reviewed knowledge and the other is arguing from an ideological position because he or she sees economic woe, job losses and political hardship. It’s as if climate change is a political position and not a scientific-based reality. It certainly comes across that way in this country. As Clive Hamilton argues: people look out their windows and everything looks much the same, so why should I believe these guys in their ivory towers, who create dramas for us because it’s how they make their living. I’m not being cynical – many people do actually think like that.

But this is all related to the original topic formulated by the New Scientist article – it goes beyond climate change – there are a range of issues where we are impacting the planet, and in every case it’s the scientists’ faint but portentously reliable voices that are largely ignored by the politicians and the decision-makers of the world who set our economic course. And that’s why the world badly needs a radical idea. Politicians, the world over, worship the god of economic growth – it’s the mantra heard everywhere from China to Africa to Europe to America to Australia. And economic growth propels population growth and all the boundary-pushing ills that the planet is currently facing.

The radical idea we so badly need is an alternative to economic growth and the consumer-driven society. I really, badly wish two things: I wish I were wrong, and I wish I knew what the radical idea was.

Saturday 20 February 2010

On the subject of good and evil and God

I wrote a lengthy dissertation on the subject of evil, very early on in this blog (Evil, Oct.07) and I don’t intend to repeat myself here.

This post has arisen as the result of something I wrote on Stephen Law’s blog, in response to a paper that Stephen has written (an academic paper, not just a blog post). To put it into context, Stephen’s paper addresses what is known in classical philosophy as the ‘problem of evil’: how can one reconcile an omniscient, ultimately beneficent and inherently good god with the evil and suffering we witness every day in the physical world? It therefore specifically addresses the biblical god represented by the three main monotheistic religions.

Stephen’s thesis, in a nutshell, is that, based on the evidence, an evil god makes more sense than a good god. I’m not going to address Stephen’s argument directly, and I’m not an academic. My response is quite literally my own take on the subject that has been evoked by reading Stephen’s paper, and I neither critique nor analyse his arguments.

My argument, in a nutshell, is that God can represent either good or evil, because it’s dependent on the person who perceives God. As I’ve said previously (many times, in fact) the only evidence we have of God is in someone’s mind, not out there. The point is that people project their own beliefs and prejudices onto this God. So God with capital ‘G’, as opposed to god with small ‘g’, is the God that an individual finds within themselves. God with small ‘g’ is an intellectual construct that could represent the ‘Creator’ or a reference to one of many religious texts – I make this distinction, because one is experienced internally and the other is simply contemplated intellectually. Obviously, I think the Creator-God is an intellectual construct, not to be confused with the ‘feeling’ people express of their God. Not everyone makes this distinction.

Below is an edited version of my comment on Stephen’s blog.

I feel this all comes back to consciousness or sentient beings. Without consciousness there would be no good and evil, and no God either. God is a projection who can represent good or evil, depending on the beholder. Evil is invariably a perversion, because the person who commits evil (like genocide, for example) can always justify it as being for the ‘greater good’.

People who attribute natural disasters to God or Divine forces are especially prone to a perverted view of God. They perversely attribute natural disasters to human activity because God is ‘not happy’. We live in a lottery universe, and whilst we owe our very existence to supernovae, another supernova could just as easily wipe us all out in a blink, depending on its proximity.

God, at his or her best, represents the sense of connection we have to all of humanity, and, by extension, nature. Whether that sense be judgmental and exclusive, or compassionate and inclusive, depends on the individual, and the God one believes in simply reflects that. Even atheists sense this connection, though they don’t personify it or conceptualise it like theists do. At the end of the day, what matters is how one perceives and treats their fellows, not whether they are theists or atheists; the latter being a consequence of the former (for theists), not the other way round.

Evil is an intrinsic attribute of human nature, but its origins are evolutionary, not mythical or Divine (I expound upon this in detail in my post on Evil). God is a projection of the ideal self, and therefore encompasses all that is good and bad in all of us.

That is the end of my (edited) comment on Stephen’s blog. My point is that the ‘problem of evil’, as it is formulated in classical philosophy, leads to a very narrow discussion concerning a hypothetical entity, when the problem really exists within the human mind.