Musings of the ASA Director Emeritus
This blog compiles the occasional musings of Randy Isaac who was ASA Executive Director from 2005 to 2016 and is now ASA Director Emeritus.



2016 ASA Annual Meeting

Posted By Randall D. Isaac, Monday, August 29, 2016

It’s been a while since I’ve posted, so let me catch up on a few things until I get back to Ellis chapter 4.

First and foremost, I was delighted to attend the ASA 2016 annual meeting, enjoying it without the responsibility of being executive director for the first time since 2004. Azusa Pacific University was a great site for the meeting: the lodging, meals, and meeting rooms were all in close proximity, making it a convenient locale. Above all, the plenary talks were excellent. All recordings are available here. My favorite was Bill Newsome’s talk on neuroscience. He presented ideas very similar to those George Ellis discusses, and he credits Nancey Murphy and her colleagues just as Ellis does. It’s worth a listen.

There’s also a great summary of the meeting by Scott Buchanan in his blog. It’s well worth reading.

I’ve also had to interrupt my reading of Ellis to review and endorse Denis Lamoureux’s forthcoming book. Hope to discuss it in the future on this blog.

And a second interruption was to read Kenton Sparks’s “Sacred Word, Broken Word” for our Grace Chapel book discussion group. Also an excellent read.

So all this explains my hiatus on Ellis. I hope to be back at it soon.

If any of you readers attended the meeting, how about leaving a comment to tell us how you liked it and which talk was your favorite?

Tags:  2016  annual meeting  ASA 


Ellis--Chapter Three: The Basis of Complexity

Posted By Randall D. Isaac, Tuesday, August 2, 2016

This chapter delves more specifically into the essence of emergence. Ellis points out that emergence requires three aspects: modularity, hierarchy, and structure. He explains that “hierarchical structures have different kinds of order, phenomenological behavior, and descriptive languages that characterize each level of the hierarchy.” As a simple example, consider the hierarchy of elementary particle physics, nuclear physics, atomic physics, and chemistry. Each layer of the hierarchy has a different kind of order and behavior. Elementary particle physics combines quarks into protons, neutrons, etc. Nuclear physics combines protons and neutrons into nuclei, while atomic physics looks at nuclei coupled with electrons. Chemistry emphasizes the combination of atoms into molecular structures. And so on. A different descriptive language is required at each level, and the concepts at one level would not be appropriate for a different level. Describing water molecules in terms of quarks would be useless.

Ellis provides a diagram of how one might describe the hierarchical levels in both inanimate and biological entities:

Ellis notes that it is unclear what the topmost and bottommost levels of the hierarchy are. To find the bottom, one might need to go to quantum gravity or beyond, and even that may not be the lowest level. The top appears to be a metaphysical level, and who knows if there is more to the story. But the concepts are nevertheless effective without knowing the limits of the levels.

At each level, the dominant entities form modular structures that comprise the entities of the next higher level in the hierarchy. The elementary physics level consists of quarks that combine into the modules of protons and neutrons, which in turn comprise the entities of the nuclear physics level, and so on. Working at the nuclear physics level, one can ignore the structure of quarks and study the nucleons as a module. Ellis describes this as “information-hiding” in the sense that a higher level can hide the detailed information in the next lower level. The high levels of macroscopic materials need little or no specific information about the individual atoms and molecules. That information is conveniently hidden.

Abstraction occurs whenever a module can be treated as a single unit and referred to by some appropriate label. In the simplest case above, the label “proton” refers to the module of three specific quarks. Encapsulation occurs whenever the internal workings of an abstract module are completely hidden: no characteristic of the module depends on the details of the components.
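These two ideas map directly onto ordinary software practice. Here is a minimal Python sketch (the class design is my own illustration, not Ellis’s): the label Proton abstracts a module of three quarks, and encapsulation hides the quark-level composition behind a single aggregate property.

```python
from fractions import Fraction

class Quark:
    """Low-level entity: carries a fractional electric charge."""
    def __init__(self, flavor, charge):
        self.flavor = flavor
        self.charge = charge  # in units of the elementary charge e

class Proton:
    """Higher-level module: the label 'proton' abstracts three quarks.

    Encapsulation: callers see only aggregate properties; the
    internal uud quark composition is a hidden implementation detail.
    """
    def __init__(self):
        self._quarks = [Quark("up", Fraction(2, 3)),
                        Quark("up", Fraction(2, 3)),
                        Quark("down", Fraction(-1, 3))]

    @property
    def charge(self):
        # The only thing the next level up (nuclear physics) needs to know
        return sum(q.charge for q in self._quarks)

p = Proton()
print(p.charge)  # 1: nuclear physics can treat the proton as a single unit
```

Nuclear physics, in this analogy, programs against the `charge` interface alone; the quark details are information-hidden.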

Bottom-up causation is well-known and is generally considered unique and sufficient in a reductionist perspective. The atoms and molecules behave according to well-known laws of nature and that action comprises the characteristic of the cell and so on up the hierarchy. Strong reductionism would claim that all behavior could be explained by such bottom-up causation. Such explanations are limited only by the complexity of expressing the equations.

Ellis claims “the phenomenon of emergent order is when higher levels display new properties not evident at the lower levels…higher level structures are created out of lower level entities and then exist as entities in their own right. They are described by suitable higher level variables.”

Variables that describe traits are “structural if they are basically of a static nature—they give the higher level its identity; and they are dynamic if they are essential to its behavior—they are time dependent in crucial ways.”


Bottom-up causation “is the ability of lower levels of reality to have a causal effect on the higher levels which emerge from them, sometimes uniquely determining what happens at the higher levels.”

Top-down causation “is the ability of higher levels of reality to have a causal power over lower levels.”

“Emergence of complexity takes place where quite different laws of behavior hold at the higher levels than at the lower levels.” In my field of condensed matter physics, I understand this to mean that phenomena like semiconductivity or superconductivity in macroscopic solids represent fundamentally different laws of behavior than those at the level of individual atoms. Yet the higher level is entirely determined by the behavior of the lower level. The key is that there is also top-down causation in the sense that the higher-level behavior is context-dependent: external constraints limit the action of the elements of the lower level.

Multiple representation occurs when many different microstates of one level all correspond to the same higher level state. For example, many different combinations of individual molecular positions and velocities correspond to the same volume and pressure of a gas. The number of different lower level representations determines the entropy of that state.
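A toy model makes this concrete (my own illustration, not from the book): treat four coins as the lower level, and the number of heads as the higher-level macrostate. Many microstates realize each macrostate, and the entropy of a macrostate, in units of Boltzmann’s constant, is the log of that multiplicity.

```python
import math
from itertools import product
from collections import Counter

# Lower level: every microstate of 4 two-state elements (coins)
microstates = list(product("HT", repeat=4))  # 2**4 = 16 microstates

# Higher level: the macrostate is just the number of heads
multiplicity = Counter(s.count("H") for s in microstates)

for heads, W in sorted(multiplicity.items()):
    # Boltzmann: S = k ln W; reported here in units of k
    print(f"macrostate {heads} heads: W = {W:2d}, S/k = {math.log(W):.3f}")
```

The two-heads macrostate has the most lower-level representations (W = 6) and hence the highest entropy, just as many molecular configurations of a gas correspond to one pressure and volume.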

“…emergence is when phenomena arise from and depends on more basic phenomena yet are simultaneously autonomous from that base…A phenomenon is emergent if it cannot be reduced to, explained, or predicted from its constituent parts.”

Ellis argues that top-down causation can occur if and only if there are equivalence classes of lower-level states. Consider a state in some level that can result from many different lower-level states, as in the case of multiple representation. This state “must lead to the same top level outcome, independent of which lower level states instantiates the high level state.” He calls this the Principle of Equivalence Classes. In other words, changing a high level variable will lead to a change in the lower-level equivalence class for that variable but this change cannot depend on which member of the equivalence class exists at the time of the change. Only in this way can there be consistency and reproducibility of the effect of changing a high level variable. Only then can top-down causation work and in that case, top-down causation does happen.
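A small Python sketch of this principle (my own toy example, not Ellis’s): the lower level is a string of four coin states, the higher-level variable is the head count, and a high-level operation must yield the same new macrostate no matter which member of the equivalence class happens to be instantiated.

```python
from itertools import product

# Lower level: all microstates of 4 two-state elements ('H'/'T')
microstates = [''.join(s) for s in product("HT", repeat=4)]

def macro(state):
    """Higher-level variable: the equivalence class is the head count."""
    return state.count("H")

def add_one_head(state):
    """A high-level operation ('raise the head count by one'), realized
    on the lower level by flipping the first tail to a head."""
    return state.replace("T", "H", 1)

# Principle of Equivalence Classes: the high-level outcome must not
# depend on WHICH member of the class instantiates the macrostate.
two_head_class = [s for s in microstates if macro(s) == 2]
outcomes = {macro(add_one_head(s)) for s in two_head_class}
print(outcomes)  # {3}: every member of the class lands in the same new class
```

All six members of the two-heads class map to the three-heads class, so the high-level operation is consistent and reproducible, which is exactly the condition Ellis requires for top-down causation.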

Ellis discusses four ways in which top-down causation can be demonstrated.

1. Altering context. Changing a high-level variable reliably induces change in the system.

2. Identifying equivalence classes. One can show directly that equivalence classes exist.

3. Identifying dynamics. One can show specific mechanisms that effect top-down control, or specific feedback control systems.

4. Computer modelling. One can model hierarchies and demonstrate the top-down causation.

Finally, he points out that the lower levels set constraints on the properties that higher levels can realize. For example, matter and energy conservation laws at the low levels put limits on the higher level possibilities.

This chapter is more theoretical than chapter two. He introduces and defines many core concepts of emergence. There seems to be a lot of overlap with chapter one, largely due to the independence of each chapter. Here he expands in detail the concepts of emergence and complexity mentioned in chapter one. It’s not easy reading but I’ve almost deluded myself into thinking that I might actually be starting to understand it.

Tags:  Ellis  Emergence 


Ellis--Chapter Two: Digital Computer Systems

Posted By Randall D. Isaac, Tuesday, July 12, 2016

This is the chapter I would have loved to have been able to write. My career was in digital computer design and fabrication, and I am very interested in emergence, so a chapter on computers as examples of emergence is right up my alley. Of course, Ellis does a far better job than I ever could.

Anyone who has written software, designed circuits for chips, or done microprocessor architecture work will find this chapter easy to understand. Anyone who hasn’t worked in this field is well advised to simply skip it. Ellis shows how a digital computer system exhibits top-down causation by emergent entities. Using these well-known systems, he clarifies the concepts of emergence.

He describes computer systems as having two primary hierarchies: an implementational (vertical) hierarchy and a logical (horizontal) hierarchy. Each involves many levels of modular structures. In the implementational hierarchy, for example, one of the lowest levels is that of atoms and molecules. A higher level is the transistor and wiring level that forms the physical layout. Above that, transistors are combined into logic and memory circuits, and above that, the circuits are combined into modules such as the arithmetic logic unit, the central processing unit, etc. The highest level is the physical interface with humans. On the software side, the lowest level is the binary code that runs on the hardware. Next is assembly language, then compilers for successively higher-level and more sophisticated programming languages. Modular units like subroutines are closed entities comprised of lower-level commands.
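One can peek one step down this software hierarchy from Python itself: the standard `dis` module reveals the bytecode (a lower level) into which a single high-level statement is compiled, detail that is normally information-hidden from the programmer. The little `area` function is my own example.

```python
import dis

def area(r):
    # One high-level statement...
    return 3.14159 * r * r

# ...is realized as a sequence of lower-level bytecode instructions,
# which are in turn realized by the interpreter, machine code, logic
# gates, transistors, and ultimately electrons.
ops = [ins.opname for ins in dis.get_instructions(area)]
print(ops)
```

The exact instruction names vary by interpreter version, which itself illustrates multiple realizability: the same high-level program can be instantiated by different lower-level sequences.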

Ellis explains “Each higher level behavior emerges from the lower level ones. But what ultimately determines what happens? The higher levels drive the lower levels.” This, he claims, is the essence of top-down causation. It is intrinsically coupled with bottom-up causation since the behavior of the lower levels causes the desired action at the higher levels. The higher levels are characterized by what can be considered abstract entities that interact and cause the appropriate desired action at the lower levels.

Then Ellis uses a term which I believe is potentially confusing and needs some discussion. He emphasizes that the entities in the higher level logical hierarchy are non-physical. He states that “Software is not a physical thing, neither is data. They are realized, or instantiated, as energetic states in computer memory. The essence of software does not reside in their physical nature: it is the patterns of states, instantiated by electrons being in particular places at a particular time, that matters.” What does it mean to be “non-physical?” As Christians we are very comfortable with non-physical beings and we quickly associate non-physical entities with spiritual ones. But I do not believe Ellis is using the term this way. By non-physical, I believe he means an entity that is not uniquely associated with a physical state. It may be the result of any number of physical states but its existence does not depend on any specific one.

While Rolf Landauer reported to me at IBM Research in the late ’90s, he told me in our personal discussions that “Information is physical, but independent of its embodiment.” I’m still struggling to understand this more fully. Information can be transferred from one physical embodiment to another without any necessary energy dissipation, but deleting one bit of information necessarily dissipates at least kT ln 2 of energy. I’m also trying to figure out whether Landauer would agree with Ellis about software being non-physical because it is independent of its physical embodiment. I suppose being independent of physical embodiment may not be sufficient to warrant the term “non-physical.” In any case, we cannot assume that either Landauer or Ellis is thinking of spiritual-like entities when they talk about “non-physical” entities.
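For scale, the Landauer bound at room temperature works out to a few zeptojoules per bit. A quick back-of-envelope calculation, using the standard SI value of Boltzmann’s constant (my numbers, not from the book):

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K (exact by the 2019 SI definition)
T = 300.0           # roughly room temperature, K

# Landauer limit: minimum dissipation to erase one bit of information
E_bit = k_B * T * math.log(2)
print(f"kT ln 2 at {T:.0f} K = {E_bit:.3e} J")  # ~2.871e-21 J per bit
```

That is many orders of magnitude below what any practical memory dissipates per bit, which is why the bound is a conceptual landmark rather than an engineering constraint.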

Finally, Ellis closes this chapter with this gem: “At a higher level, the existence of computers is an outcome of the human drive for meaning and purpose: it is an expression of the possibility space of meanings, the higher levels whereby we guide what actions take place.” As is true of so many sentences in this book, I have to read it many times to think it through. And maybe it wouldn’t make sense without reading the whole chapter. But I was in the business of designing and building computers all my career and never thought of it this way.

What he seems to be saying is that as we go up the physical and logical hierarchies, entities emerge from the lower levels with increasing abstraction. Finally, we get to the top levels where meaning and purpose reside. The closely linked relationships of bottom-up and top-down causation lead to the possibility of meaning and purpose, aspects that do not exist at the lower levels. We often say that science cannot address meaning and purpose but in Ellis’ view, emergence allows us to recognize meaning and purpose at the top levels even though it is absent at the lower levels. This I must contemplate for a while.

Tags:  computers  Ellis  Emergence 


Comments on "How Can Physics Underlie the Mind?"

Posted By Randall D. Isaac, Monday, July 4, 2016

When I learned that George Ellis had just published a major work, I jumped at the chance to write a book review for PSCF. Springer only provides an online reader version for reviewers, with a free book after the review is published. But I’ve finally figured out the technical aspects of their reader and I’ve begun to read “How Can Physics Underlie the Mind? Top-Down Causation in the Human Context.” I would like to use this blog as a means of writing notes to myself as I go through the book, and hopefully that will help me write the review. Your comments and questions would be of great help if you are interested in the topic in any way. If not, simply ignore this post and its comments.

Ellis is in the Department of Mathematics and Applied Mathematics at the University of Cape Town, South Africa. He co-authored The Large Scale Structure of Space-Time with Cambridge physicist Stephen Hawking. He has long interacted with scholars such as ASA Fellow Robert Russell, Nancey Murphy, Tim O’Connor as well as Phil Clayton and other advocates of emergence. I have been interested in and persuaded by the ideas of emergence for many years and am eager to learn about some of the more detailed issues connected with it.

He writes that his aim in the book is to “…support the view that, even though physical laws underlie all material entities, there exist higher level causal relations that allow the brain to act as a means of creating theories, searching for meaning, expressing tenderness, and doing all the other myriad things that make us human, without contradicting or overwriting those lower level physical laws. Consequently, physics does not control the mind, it enables the mind. The same is true for genetics and neurobiology: they both to some degree shape what the mind does, but neither by itself determines the outcome, because the mind has a logic of its own…We are genuinely fully human, even though we emerge through the interactions of fundamental particles.”

The book has eight chapters:

1. Complexity and Emergence

2. Digital Computer Systems

3. The Basis of Complexity

4. Kinds of Top-Down Causation

5. Room at the Bottom?

6. The Foundation: Physics and Top-Down Causation

7. The Mind and the Brain

8. The Broader View

Springer asked Ellis to write the book in such a way that each chapter could stand alone and be sold separately, as well as together as a complete book. This results in a significant amount of repetition, especially of references, but that repetition is very helpful in gaining familiarity with complex ideas.

So with that as a background, I’ll start to dig in and occasionally share with you his ideas.

Tags:  Ellis  Emergence  Mind  Physics 


The Mystery of the "Hobbits"

Posted By Randall D. Isaac, Wednesday, June 8, 2016

Most people think of J.R.R. Tolkien when they hear the term hobbits. The very personable figures in his popular fantasy series bring to mind small, talented, gentle beings. A little more than a decade ago, the term also became the nickname for the fossils found on the island of Flores in Indonesia. The bones were very similar to those of humans but much smaller. Originally dated to 18,000 years ago, their age was recently updated to about 50,000 years. Their Latin name is Homo floresiensis.

Their ancestry remains a mystery. For many years, there was a viable proposal that the bones were simply those of a human with some condition like microcephaly. The latest news is the discovery of more human fossils dated to about 700,000 years ago. These were also smallish humans, about the size of the hobbits, and are highly likely to be their ancestors. But while this solves some mysteries, it generates others.

A couple of possible explanations remain to be examined. One is that a little over a million years ago, a community of Homo erectus migrated to Indonesia. Members of this species ranged up to six feet tall. In this case, the population could have undergone what is known as “island dwarfism,” in which a species isolated on a small island evolves into a smaller version of itself, sometimes as much as six times smaller. While many species have been found to exhibit such dwarfism, it had never been seen in humans. If this happened, then H. floresiensis is likely a descendant of a branch of H. erectus that underwent dwarfism rather rapidly. The 700,000-year-old fossils are about the same size as the 50,000-year-old ones, but much smaller than H. erectus.

An alternative theory is that a much earlier migration out of Africa, rather than H. erectus, led to a population in Indonesia at that time. These individuals would have been smaller to begin with, so less rapid dwarfism would have been needed. More data will be required to distinguish between these two ideas. In either case, it seems the diseased-human explanation for the “Hobbit” is vanishingly weak.

What do we make of all this? Paleoanthropology continues to be a fascinating field. New technology is providing more specialized tools to tease more data from both old and new fossil discoveries. What we do know is that human ancestry is rich with diversity and far from a simple story of a single species.

Tags:  fossils  human ancestry  human evolution 


What can we learn about God from nature?

Posted By Randall D. Isaac, Friday, June 3, 2016

While the two-book model helps us understand how the two very different types of revelation of God are related to each other, there remains the long-standing question of whether a study of nature can reveal something to us about God that is not known through his written revelation. About 35 years ago, while teaching a Sunday School class about science and faith, I asked what we could learn about God from nature. I expected answers that would support my perspective of the consistency and faithfulness of God through the trustworthiness of the laws of nature. Instead, the first hand that went up was from an eager young man who observed that we learn that God is unpredictable. I stammered a bit; of course, a teacher is always supposed to say something positive about a student’s comment. It threw me off, but I recovered, though I never forgot that response. It was burned into my memory when, a few months later, this young man committed suicide, jumping from the top of a cathedral in Waterbury, CT. It turns out he had schizophrenia. I was stunned. It suggested to me that we see in nature those characteristics of God that reflect what we feel inside or have learned elsewhere.

Is there anything we can learn about God from the book of God's works that we do not first learn from his book of words? Should we be able to?

What do you make of Neil deGrasse Tyson’s comments in the attached 88-second video clip? If I didn’t attach it correctly, you can find it here:

Essentially, he says he is agnostic about the existence of God but seems to say that he does not see benevolence in nature, implying that if there is a Creator, that Creator might not be benevolent either. What do you make of it? I would think that we can see any attribute in nature that we want, simply by choosing what phenomena we wish to study. In other words, we only find what we want to find. Maybe we can't learn anything about God from nature unless we already know it.

Tags:  natural theology 


Genius by Stephen Hawking

Posted By Randall D. Isaac, Sunday, May 22, 2016

PBS has just started a new six-part series called "Genius by Stephen Hawking" that seems to have some interesting topics.

I recorded the first two episodes but haven't seen either one. Though I suspect I'll have some disagreements with Hawking, the topics look very interesting and perhaps we can have a good discussion through comments to this post.


Tags:  genius  hawking 


The Two-Book Model

Posted By Randall D. Isaac, Friday, May 13, 2016

The two-book model is a well-known and oft-used construct for articulating the basis for harmony between science and the Bible. In the model, God is the author of two books, the book of God’s works and the book of God’s words. Having the same omniscient author, who cannot err, means that the two books cannot conflict. Hence, any perceived conflict between the two is the result of an inaccurate interpretation, either by science of the book of God’s works or by theology of the book of God’s words. It seems to me that there is little or no controversy in the Christian community about this model. The secular community doesn’t necessarily accept the premise that God authored either book, but Christians invariably accept both. I have read only one article that objects to the model, and I haven’t been able to find it again. This model is therefore an excellent starting point for bringing harmony to the debates on science and faith.

The difficulties soon begin when it becomes obvious that the model doesn’t provide any insight on how to resolve perceived conflicts. Which interpretation needs to be changed if there is a difference of opinion? In several of my talks, I have taken the time to explore different variations of the two-book model which lead to different schools of thought.

The first chart in the attached file (downloadable from the link below this post) shows my personal version of the two-book model. I like to refer to the author as “Logos” in light of my favorite passage of creation in the first chapter of the Gospel of John. Science and theology are the study of the books of God’s works and words, respectively. The remaining pages reflect a somewhat playful modification of the model in ways that lead to various types of errors.

Consider the one-book model which reflects the view that the material world is evil or at least irrelevant and not worthy of consideration. Where is theology, however, when it cannot relate to the world in which we live? This version has little survival value.

Alternatively, a one-book variation might consider only the book of God’s works and not admit to God’s words. In that scenario, theology would seek to find God solely through the study of nature. This was a popular form of study a few centuries ago, and we refer to it as natural theology. However, there is no calibration of the conclusions to be drawn about God from nature. Is he full of beauty and grace, like the sunset over a calm ocean? Or is he full of rage and fury, like the wind and waves in a hurricane?

Without the book of God’s words, would scientific study of God’s works lead us to God? This is essentially the basis of much of scientific apologetics. Many people feel that science, by itself, reveals the existence of God, another variation of natural theology. But who is this God? How is he related to the one revealed in the book of God’s words?

Another approach is to study the book of God’s words through scientific methodology. We refer to this as higher criticism. It presupposes primarily a human authorship of the Bible. While it can bring great insight into understanding the Bible, it can also be taken to extreme to minimize divine inspiration.

It is also possible to study the book of God’s works through the book of God’s words. In the early 19th century this approach was sometimes called “Scripture geology.” This leads us to the concept of concordism. The two-book model seems to me to be inherently non-concordistic in the sense that it makes no claims about the content and teachings of God’s word, particularly whether or not God’s word describes nature inerrantly. Concordism assumes that the Bible does teach science accurately and presages modern science rather than reflecting ancient cosmology. This sets up a potentially inherent conflict model in which the study of God’s works through science is pitted against its study through God’s word. A non-concordistic approach assumes that the teachings in the two books are not of the same kind and therefore not inherently in conflict.

A true two-book model can therefore help us avoid many variations that lead to conflict and disagreement. Perhaps by seeing these conflicts as originating in a deviation from the two-book model, we can help defuse the disagreements.

Download File (PDF)

Tags:  concordism  Two-Book Model 



Posted By Randall D. Isaac, Wednesday, May 4, 2016

I’m still thinking about the link posted on the ASA website a few days ago (the Twitter feed on the homepage features four links each weekday) about Genesis 1-11. It is a good start to a discussion I wanted to have here on concordism. This relates to the first meta-question I posed in my comments, referenced in an earlier post and discussed in my remarks on April 8.

Every study of the Bible needs to address at some point the question of what the Bible teaches about history and science and how that relates to our modern science. The natural assumption of Christians seems to begin with a direct correlation. Countless questions about the Bible deal with how to understand a biblical passage in light of current science. Examples abound, often as legends that refuse to die.

One well-known example is that of the missing day. Joshua commanded the sun to stand still, and it did until the Israelites won the battle. Bewildered at the implications of such a miracle, concordists have offered all sorts of explanations. On one side, skeptics note that the inertial forces due to the earth halting its rotation would have ripped apart the entire globe. On the other side, a myth continues to propagate that NASA astronomers have determined that there is a missing day in the history of the paths of the stars. The legend goes that when computer calculations of the stars’ paths over history were done, the observations could not be understood unless a missing day was assumed in approximately the year of Joshua’s battle. No such observation or calculation has ever been made, nor could it be: no stellar observations in those days were anywhere near precise enough for such a determination. Yet the story persists.

The stakes are high. If there isn’t a missing day, concordists have a hard time rationalizing the sun standing still. Fortunately, theories abound with alternative interpretations and ancient ideas. But it doesn’t stop skeptics from scoffing at a Bible with errors in it and mythical stories. What all of these have in common is the assumption of concordism, the basic idea that there is a correlation of biblical teaching and modern science. So a biblical teaching of the sun standing still must correlate with some scientific observations.

Some theologians will draw the line after Genesis 11, asserting that concordance is important only after Genesis 11 while the earlier passages have no claim to being historical. Others feel that no concordance is necessary in the entire Bible.

The sheer volume of articles and books on attempts to correlate biblical teaching with scientific observations indicates the strength of the assumption of concordism. But is it really true? Must it be true? If it isn’t all true and it isn’t all false, how do we decide what is and what isn’t historical? Let’s ponder these questions in the next few posts.

Tags:  concordism 


Evolution in Motion

Posted By Randall D. Isaac, Friday, April 29, 2016

Last night I attended a lecture at the Harvard Museum of Natural History on “From Supercontinents to Islands—Evolution in Motion.” It amazes me that a supercontinent like Gondwana existed as recently as about 200 million years ago. That’s a small fraction of earth’s history—less than 5%—and it means that plate tectonic motion is awfully fast, at least on geological timescales.

The speaker, Gonzalo Giribet, showed a slide of the age of the ocean floor. Very little was between 200 and 400 million years old, and virtually nothing was older than that; almost all of it is younger than 180 million years. In contrast, continental surfaces feature rocks ranging in age all the way up to near the 4.5-billion-year age of the earth. That’s amazing.

The islands he discussed were those that had formed by breaking away from the big continents during the breakup of Gondwana into the current continents. For example, New Zealand, Madagascar, Sri Lanka, and many parts of Southeast Asia have been identified as having once been part of Gondwana before being isolated. By tracing the evolutionary history of species on these islands, information can be inferred about the time and history of that isolation.

One controversy, I learned, is whether or not New Zealand was entirely submerged under water after its isolation. Indications are that it may have been submerged about 25 million years ago. The speaker showed evidence, which he believes has been growing in the last few years, that the island was not entirely submerged after all. This stems from the diversity of species of invertebrates. He really loves those daddy longlegs.

Giribet has also spent a good deal of time in Antarctica, deep-sea diving in those icy waters to retrieve countless specimens of invertebrates. I learned that part of what shifted Antarctica from a relatively warm climate to a very cold one was that its separation from Gondwana opened up a circumpolar ocean current that in effect isolated the continent and acted as a refrigerator. What an amazing history.

Tags:  evolution  Gondwana  islands  supercontinents 
