Sunday, June 25, 2006

Specification or Likelihood by Mark Frank

I have just written an essay on this subject, simply because it interested me so much, and have put it here.

My main point is that the Explanatory Filter relies on rejecting chance hypotheses when the observed outcome is both complex and specified. Dembski has now defined "specified" in terms of conforming to a simple pattern. He goes to considerable lengths to try to define simplicity and specification rigorously, but never explains why conforming to a simple pattern should cause us to reject a hypothesis. Meanwhile there is a perfectly good basis for rejecting or accepting hypotheses based on the comparison of likelihoods, which has a justification and is conceptually straightforward. The problem for ID is that this requires explaining not just why an outcome is improbable under a chance hypothesis but also showing that it is more probable under a design hypothesis. This of course implies getting into a level of detail about the design hypothesis which the ID community find unacceptable.
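To illustrate the likelihood comparison, here is a minimal sketch in Python. The probability values are invented purely for illustration; a real design hypothesis would have to supply its own, which is exactly the detail the essay says ID avoids.

```python
# Toy illustration of likelihood-based hypothesis comparison.
# All numbers are invented for illustration only.

def likelihood_ratio(p_outcome_given_chance, p_outcome_given_design):
    """Ratio of how well the design hypothesis predicts the observed
    outcome relative to the chance hypothesis."""
    return p_outcome_given_design / p_outcome_given_chance

# Suppose an outcome has probability 1e-9 under the chance hypothesis.
p_chance = 1e-9

# The design hypothesis only enters the comparison once it is specified
# in enough detail to assign the outcome its own probability.
p_design = 1e-4   # hypothetical value for a fully specified design hypothesis

ratio = likelihood_ratio(p_chance, p_design)
print(f"Likelihood ratio (design vs. chance): {ratio:.3g}")
# A ratio much greater than 1 favours design over chance; a tiny
# probability under chance alone tells us nothing until we compare it
# with something.
```

The point of the sketch is only that both probabilities are needed: the improbability under chance is meaningless for inference until it is set against the probability of the same outcome under a rival hypothesis.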

15 comments:

Alan Fox said...

Not having paid enough attention in my statistics classes, I can't usefully comment on your paper, except to say as I read it I almost felt I understood statistics ;).

My contention has always been that mathematics is a powerful modelling tool, but the initial assumptions need to be right or the model, no matter how elegant, will bear no relation to the system being modelled.

Take the explanatory filter, for example. No-one can explain to me how it can be applied to a real biological system. Is it applied to the genome, the phenotype, a population, or a whole species? There is a long thread at ARN, "The Explanatory Filter", which I don't invite anyone to study (maybe just flick through), illustrating that no-one could supply any example of how the explanatory filter might be used on a real biological system.

Mark Frank said...

Alan, thanks for trying. It will be interesting to see if anyone responds.

JohnADavison said...

Chance and its handmaiden, statistics, never had anything whatsoever to do with either ontogeny or phylogeny. Both were driven entirely from within through the controlled release of front-loaded "prescribed" information. The only possible role for the environment was as a releaser. The entire Darwinian model is now and always was a myth generated by an overactive human imagination. Referring to ontogeny and phylogeny:

"Neither in the one nor in the other is there room for chance."
Leo Berg, Nomogenesis, page 134

Get used to it as I have!

Mark Frank said...

John

Did the front-loaded prescribed information anticipate all the changes in environment over the following 4 billion years?

(My essay was nothing to do with biology. It is just about Dembski's definition of specification. But maybe you knew that).

JohnADavison said...

Nonsense.

Everything in Dembski's forum and books has to do with evolutionary origins. His fundamentalist mysticism is publicly displayed in the title of his forum - "Uncommon Descent" - and in the clientele he attracts to his personally developed cult. Imagine if you can (I can't) that Intelligent Design must be considered to be an "inference" to be supported by mathematical "proofs." He has contributed absolutely nothing to our understanding of the great mystery of organic evolution and he has summarily rejected one who has in order to protect his little self-generated world. He is the antithesis of Richard Dawkins and they are both dead wrong.

It is hard to believe isn't it?

R0b said...

Mark, I think you've done a great job of exposing Dembski's lack of theoretical foundation. As you point out, Dembski is stuck in the unenviable position of having to advocate purely eliminative testing in order to avoid analyzing design hypotheses. Anyone with a basic understanding of stats can see the fallacies in his argument.

Regarding Dembski's concept of specificity, it has never been anything more than an informal, muddled account of algorithmic compressibility.

Long before Dembski came on the scene, algorithmic information theory taught us that compressible data does not come from random sources. For example, if I find a paper that contains Hamlet's soliloquy, the textual pattern on the paper matches a pattern in my memory and I conclude that it wasn't produced randomly by a monkey at a keyboard. Algorithmic information theory formalizes this intuition by showing that the pattern on the paper and the pattern in my head together form a data set that is too compressible to have occurred randomly. (It's compressible because the two patterns are redundant.)
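To make that intuition concrete, here is a minimal sketch using Python's zlib as a rough stand-in for an ideal compressor, with an invented snippet of text; a genuine Kolmogorov-complexity argument would be more careful, but the redundancy effect shows up even here.

```python
import random
import string
import zlib

# The soliloquy pattern I already hold "in my head".
soliloquy = ("To be, or not to be, that is the question: "
             "Whether 'tis nobler in the mind to suffer "
             "The slings and arrows of outrageous fortune")

# A paper that matches that pattern, versus a monkey-at-a-keyboard paper.
matching_paper = soliloquy
random_paper = "".join(random.choice(string.ascii_lowercase + " ")
                       for _ in range(len(soliloquy)))

def joint_compressed_size(pattern_in_head, pattern_on_paper):
    """Compressed size of the two patterns taken together."""
    return len(zlib.compress((pattern_in_head + pattern_on_paper).encode()))

print("head + matching paper:", joint_compressed_size(soliloquy, matching_paper))
print("head + random paper:  ", joint_compressed_size(soliloquy, random_paper))
# The matching pair compresses noticeably better because the two copies
# are redundant -- more redundancy than we would expect from a random
# source.
```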

Elsberry and Shallit picked up on this fact, and reframed specified complexity in terms of algorithmic information. In response, Dembski wrote the paper that you critiqued, which emphasizes specificity in terms of description length, ironically making it even more obvious that Elsberry and Shallit's criticism was right on track.

Unfortunately for Dembski, compressibility implies a causal story that is not completely random, but it does not imply an intelligent source. Not only is specificity redundant with more established approaches, it doesn't even imply what Dembski wants it to.

Mark Frank said...

secondclass - thanks for the link. I had not read that paper/essay. I feel slightly embarrassed that there are people with deeper knowledge who have already pretty much demolished Dembski's writing. But I hope there is merit in multiple approaches at different levels.

Chris - also thanks - I am so impressed by your ability to survive on UD unscathed.

Alan Fox said...

You're being naughty, Jean.

NO BANS, NO DELETIONS*

*Obscenity excepted

R0b said...

Mark, I'm not aware of anyone who has pointed out in detail the flaws in Dembski's testing philosophy, as you have. Regardless of Dembski's stance on ID, his statistical approach is logically bankrupt.

There are so many aspects of Dembski's work that are just plain wrong. It will take an awful lot of critics to cover them all.

JohnADavison said...
This comment has been removed by a blog administrator.
Alan Fox said...

Post 15 deleted (obscene content)

JohnADavison said...

Was it I you deleted? Just testing your neutrality Falan.

JohnADavison said...

jujuquisp

Thank you for exposing yourself as a fool, a liar and a blight upon the face of the internet. My science is now for all time and I am not even through yet much to the bitter disappointment of those who wish it were. I am reasonably certain that you have never contributed anything of substance to this world which is why you must denigrate all those who have. You are a textbook intellectual bigot. You are undoubtedly a delight to Falan Ox which is why he allows you to continue. Don't stop as with every outburst you elevate your target in direct proportion as you lower yourself just as David Springer does. That goes for all others who practice the same tactics, especially Springer who serves as the personal roving ambassador at large for William Dembski. As for my responses to the very many like you and like him, they are strictly Old Testament and "when in Rome" in nature. I welcome such denigration and thrive on it. Such attacks constitute irrefutable proof that I have "reached out and touched someone" as they say in the military.

"War, God help me, I love it so!"
General George S. Patton, like Albert Einstein and myself a strict determinist and the source of my signature comment.

Thanks again for exposing yourself.

I love it so!

JohnADavison said...

Dilliam Wembski still regards Intelligent Design as an "inference."

It is hard to believe isn't it?

I love it so!

Mark Frank said...

My paper has become the subject of a small debate on Panda's Thumb and Uncommon Descent. As I am banned from UD and some ID people are banned from PT, I will point the discussion to this neutral blog.

Here is a copy of Dave Scott's post and my reply on PT.

Dave:


We keep getting told that the Dover (Kitzmiller) decision was the end of Intelligent Design. Judge Jones ruled that ID is just creationism in a cheap tuxedo. Yet physicist and regular contributor to Panda’s Thumb, Mark Perakh, is still struggling to dispute Dembski’s design detection math. I don’t get it. Is Mark in the business of arguing with cheap tuxedos or have rumors of ID’s death been highly exaggerated?

And just for kicks, the paper itself begins with a hugely flawed example and continues to use the flawed example through the end. The author begins by using for an example of specified complexity a poker program which is observed to deal a royal flush on the very first hand. It is then put forward that most people would reasonably presume the program was flawed. No problem with that presumption - a betting man would bet that the program is flawed. The problem is in equating this with Dembski’s specified complexity. A royal flush happens on average in one of every 2.5 million hands. That seems like long odds but in Dembski’s reasoning it’s not even close to long odds. Dembski says that the odds against something must be one in 10 to the 150th power before a design inference can be made. If the author changes his example to getting dealt 25 royal flushes in a row he’ll have an example of specified complexity aligned with Bill Dembski’s definition. One royal flush ain’t nearly enough except to make the paper specious to the casual observer who doesn’t know about Dembski’s Universal Probability Bound.



Me:

When I said Dave's comment was silly I was referring only to the first paragraph (for some reason I didn't see the second when I was browsing UD and I apologise for missing it). In this paragraph he claims that it is in some sense hypocritical or contradictory or wrong (?) to criticise Dembski's work after the Dover trial has decided against ID. I do not believe it is worth discussing this accusation.

In the second paragraph Dave takes issue with the use of a Royal Flush as an example of specification, claiming that it is not nearly improbable enough. I think he needs to read both Dembski's paper and mine. I used that example because Dembski himself uses it (on page 19) and poker is familiar to many readers. Dembski makes it clear that specification is a matter of degree, ranging from patterns that can easily be met through to the highly improbable. If Dave preferred I could have written the paper with an example of two or three consecutive Royal Flushes. That is not the point. Even three Royal Flushes in a row is just as probable as any other three defined hands in a row, and we need to ask why it is surprising.
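For anyone who wants to check the arithmetic behind Dave's figures, here is a minimal sketch. I am assuming his "one of every 2.5 million hands" refers to one particular royal flush in five dealt cards (1 in 2,598,960); the code and numbers are purely illustrative.

```python
from math import comb

# Probability of being dealt one *specific* royal flush (e.g. in spades)
# in five cards from a 52-card deck.
p_one_flush = 1 / comb(52, 5)          # roughly 1 in 2.6 million
print(f"P(one specific royal flush) = {p_one_flush:.3g}")

# Dembski's universal probability bound.
upb = 10 ** -150

# How many consecutive royal flushes before the chance probability
# drops below the bound?
n = 1
while p_one_flush ** n > upb:
    n += 1
print(f"{n} royal flushes in a row: P = {p_one_flush ** n:.3g} (below {upb:.0g})")
# Somewhere in the low twenties of consecutive royal flushes the chance
# probability falls below 10^-150.  But the same arithmetic applies to
# *any* pre-specified sequence of hands of that length, which is why
# improbability alone does not explain what makes the outcome surprising.
```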