Saturday, February 03, 2007

"How Carefully Do Dembski's Advocates Read His Work?" by Secondclass

It seems to me that anyone who finds Dembski's work convincing hasn't read it very carefully or thought about it very much. I have yet to find an exception, other than possibly Dembski himself.

The most extreme case I've discovered is Joe G., who claims that he has read The Design Inference, No Free Lunch, and other works by Dembski, and that he has discussed complexity and specification with Dembski himself. But after all that study, Joe still hasn't grasped even the basics of Dembski's approach and terminology.

1. Joe doesn't know that the "complexity" part of "specified complexity" is synonymous with improbability.

2. Joe doesn't know that specificity is positively correlated with simplicity of description.

3. Joe doesn't know that Dembski's most oft-used example, the Caputo incident, is an instance of specified complexity, according to Dembski.

4. Joe doesn't know that knowledge of designers' capabilities plays no role in Dembski's approach. He has failed to realize that Dembski's approach is eliminative, with design exempt from consideration for elimination.

And until he was corrected, Joe thought that detachability was a sign of fabrication rather than a requirement for specification.

Those of you familiar with Dembski's work can decide whether Joe has actually read what he claims to have read. Either way, I thank Joe for adding a strong data point to my theory regarding the level of understanding of Dembski advocates.

66 comments:

Alan Fox said...

I admire your tenacity, Secondclass, though as you might see from this thread, Joe Gallien does seem to be rather intractable. In case Joe wishes to comment, I assure him I will not moderate any comment (other than for obscenity and spam, a rule which applies to everyone).

Alan Fox said...

I have notified Joe Gallien thusly:

Joe

Secondclass has posted a thread on my blog

(Personally I think he is wasting his time with you, for two reasons, neither of which would, I am sure, remotely interest you. But, hey, the thread is there, unmoderated except for spam and obscenity, so please yourself.)

Joe G said...

1. You didn't know that the "complexity" part of "specified complexity" means improbability.

Umm that is only how Wm Dembski characterizes it. People understood complexity well before he was born. He was looking for mathematical "proof".

2. You didn't know that specificity increases with descriptive simplicity.

Again I understand how Dembski states it and I also understand that he is not the final authority.

For example "royal flush" holds no significance to someone without knowledge of poker. To that person we could be talking about a King's toilet.

IOW you can only simplify a description if you have pre-existing knowledge. Otherwise it doesn't specify a thing.

3. You thought that detachability was an indicator of fabrication, when in fact it's a requirement for specification.

Yeah, yeah, yeah. I misread Dembski in my haste to figure out what you were talking about.

4. You thought that the Caputo sequence, which is analyzed at length in both TDI and NFL and discussed in several other papers, was a fabrication, when in fact Dembski presents it as specified.

see above. And actually Caputo did fabricate the sequence. That was the charge anyway.

6. You think that "knowing and understanding what designing agencies are capable of" plays a role in the EF, when in fact it doesn't.

Yes it does for reasons already provided.

secondclass:
To think that the EF considers designers' capabilities is to completely miss the point of taking an eliminative approach.

Umm the EF is a process and as such it cannot consider anything.

And the last node is where we would consider what designing agencies are capable of. sp/SP? An event is only "specified" because we know what designing agencies are capable of- duh.

Ya see secondclass I read those books YEARS ago. And I have read many other books since then. I have also included other IDists' ideas with Dembski's.

As I told you already, when one starts an investigation one does so with ALL available tools. However, to listen to you, people can only choose one tool. And actually it is the tool YOU chose for them.

In order to get to the third node it "takes considerable background knowledge" (Dembski NFL page 111). He goes on to say "What's more it takes considerable background knowledge to come up with the right pattern (ie, specification) for eliminating all those chance hypotheses and thus inferring design."

IOW we do have to have some knowledge about the capabilities of designing agencies coupled with the knowledge of what nature, operating freely, is capable of.

One more thing-I am not a Dembski advocate but I am an ID advocate. Wm is not and never has been the do all say all for ID for me or anyone else. I am more than capable of meshing all ideas.

And I know that all Dembski's math does is to give us a way to mathematically verify our design inference.

secondclass, like Zachriel before him, is just upset because he started an argument that I finished by demonstrating his argument was incorrect.

So this is his only recourse. Typical but I understand...

Joe G said...

It should also be noted that secondclass didn't know how to use the Explanatory Filter. He doesn't realize that every decision node is dealt with sequentially- that is, you do everything required at node 1 before moving on, and you do NOT apply an equation used in node 3 in any other node.

(secondclass wanted to use Dembski's SC equation right from the first node all the while SC isn't even considered until the third node)

And he is just upset that I showed that pulsars would NOT afford a design inference while he said they should- but he mis-applied the EF. Go figure.

Does anyone else not understand the importance of following a procedure? Does anyone else not understand that in a 3-step process you start at the first step and do not consider any other steps until you are supposed to?

Also I am fully aware that the EF is eliminative. ALL filters eliminate. That is their purpose. I have blogged on that very fact. The EF eliminates via consideration. And one has to have knowledge to do so.

R0b said...

Joe, you're simply repeating the same mistakes that I've already corrected on your blog, many with direct quotes by Dembski. I'll repeat some of my rebuttals and correct a few more of your errors when I get time tomorrow.

You were advocating Dembski's EF for analyzing the pulsar signal, and Dembski is the "end all be all" when it comes to defining his method.

Here's what it boils down to: You don't understand Dembski's concepts, so you're trying to bluff your way through the discussion, but the only one fooled by your bluffing is yourself. Here's a tip for you: Admit to yourself and to the world that you don't understand things. It's good for the soul.

Joe G said...

You were advocating Dembski's EF for analyzing the pulsar signal, and Dembski is the "end all be all" when it comes to defining his method.

Not really seeing that HE said the EF was SOP- standard operating procedure- see page 36 of TDI.

And even with that Dembski has NEVER said I was using it incorrectly. However it is obvious that you are:

Only someone totally ignorant of processes and procedures would apply an equation at step 1 that isn't specified until step 3.

It was nice of you to run away to Alan's blog after that was pointed out to you.

Also any alleged "mistakes" are in your mind and your mind alone. And the bluffing is yours and yours alone:

see here and here

Joe G said...

My first blog on the EF:

The Design Explanatory Filter has been getting bad press. However it is obvious the bad press is due to either misunderstanding or misrepresentation. Some anti-IDists argue that it is an eliminative filter. Well, yeah! All filters eliminate. The EF eliminates via consideration. Would they prefer we started at the design inference and stay there until it is falsified? Crick’s statement would have changed to “We must remind ourselves that what we are observing was designed.” (as opposed to “…wasn’t designed, rather evolved.”)

Getting to the final decision block, where we separate that which has a small probability of occurring with intentional design (an event/object that has a small probability of occurring by chance and fits a specified pattern), means we have looked into the possibility of X having occurred by other means. May we have dismissed/eliminated some too soon? In the realm of anything is possible, possibly. That is what comes next.

Also it pertains to a design INFERENCE. That inference is still subject to falsification. It is also subject to confirmation. Counterflow would be such evidence and/ or confirmation for the design inference: Del Ratzsch in his book Nature, Design and Science discusses “counterflow as referring to things running contrary to what, in the relevant sense, would (or might) have resulted or occurred had nature operated freely.”

IOW it took our current understanding in order to make it to that decision node and it takes our current understanding to make the inference. Future knowledge will either confirm or falsify the inference. The research does not and was never meant to stop at the last node. The DEF is for detecting design only and only when agent activity is questioned.

Look at it this way: How do forensic scientists approach a crime scene? Do they run in guns blazing, kicking stuff around? No. They pick the place clean looking for clues- macro and micro. The clues lead them to an accidental or natural death or a homicide. Somewhere along the line there may be a key indicator of agent activity, IOW something that was determined couldn’t have occurred by chance.

If the evidence points to the lava flow causing the fire then they don’t look any further. We know when lava flows make contact with buildings a fire will ensue. In the absence of lava or other natural causes (unintelligent, undirected), they look for other clues. Only after collecting and examining ALL the evidence can arson be inferred. Arson and homicide imply intent and that adds to the existing pile of evidence to nab the culprit(s).

Dembski admits that an intelligent agency may work to mimic regularity or chance. That is another reason the research doesn’t stop after the initial inference.


Finally, as Wm. Dembski states:
"The principal advantage of characterizing design as a complement of regularity and chance is that it avoids committing itself to a doctrine of intelligent agency.
Defining design as the negation of regularity and chance avoids prejudicing the causal stories we associate with the design inference."


Can anyone propose a better way to look at evidence/ phenomenon? How about a better way to make a design inference?

And one more word from Wm. Dembski:

"The prospect that further knowledge will upset a design inference poses a risk for the Explanatory Filter. But it is a risk endemic to all of scientific inquiry. Indeed, it merely restates the problem of induction, namely, that we may be wrong about the regularities (be they probabilistic or necessitarian) which operated in the past and apply in the present.

Joe G said...

One more thought for today-

Just a little background knowledge for secondclass:

I have been writing processes and procedures as part of my job for over 30 years. And I will put my knowledge of processes and procedures up against ANYONE on this planet- especially those who think they can take an equation from step 3 and apply it at step 1.

Zachriel said...

joe g: "secondclass, like Zachrielbefore him, is just upset because he started an argument that I finished by demonstrating his argument was incorrect."

I wasn't upset. What happened is that you insisted that I support a strawman position of your own devising, one that I specifically repudiated, in order for me to post on your blog. As you had already suppressed many of my comments, and as our readers are aware of this, you have only damaged any credibility you had hoped to achieve.

Zachriel said...

On the topic, please start by providing the method by which we assign a specific numerical value to "complex specified information". You might try a few simple examples, such as a granite stone, a constellation, and a hurricane.

Don't be afraid to show your math. Thanks!

blipey said...

I'm sure Joe will show you his numbers IFF you show your face at his door. He doesn't have the time to educate you losers in simple maths that he has already provided in clear detail...uh, somewhere, if we just look hard enough.

Joe G said...

For the record:

I have always maintained that it is true the greater the complexity the smaller the probability (of occurring by chance or any combo of chance & necessity).

To Zachriel:

I NEVER insisted you support a strawman. That is just a downright lie. (The alleged strawman was my position from the beginning and one that Zach supported until I provided data to the contrary- I have it archived.)

As for credibility you never had any so what you say about me in that regard is meaningless.

As for the topic- it is whether or not, by employing the EF, pulsars would come out as a designed signal.

Joe G said...

To blipey- Wm Dembski has provided the math. I don't see any need for me to repeat what he has already done.

Joe G said...

Zachriel:
As you had already suppressed many of my comments,

Anyone familiar with the scenario understands what happened was of YOUR own doing. To suggest otherwise is being nothing but a whining baby.

Joe G said...

The following is the alleged "strawman" that Zachriel said I insisted he support:

First sentence first post


Zachriel:
If life descended from a common ancestor, it would form a nested hierarchy pattern.

Only later once that was exposed as being bogus did Zachriel attempt to wiggle out of that statement.

Joe G said...

This comment has been removed by the author.

Zachriel said...

Zachriel: "On the topic, please start by providing the method by which we assign a specific numerical value to "complex specified information". You might try a few simple examples, such as a granite stone, a constellation, and a hurricane."

joe g: "Wm Dembski has provided the math. I don't see any need for me to repeat what he has already done."

In other words, and for the benefit of our readers, you can't or won't show us how to calculate CSI even for simple examples. Could it be because the definition is not sufficiently rigorous to yield a quantitative solution?

Zachriel said...

joe g: The following is the alleged "strawman" that Zachriel said I insisted he support:

First sentence first post

Zachriel: "If life descended from a common ancestor, it would form a nested hierarchy pattern."

Please note the word "if" that precedes the conditional statement. Here is your strawman.

joe g: "Your response must demonstrate that a nested hierarchy is an expected result of Common Descent*.

*Common Descent refers to the premise that all of the extant living organisms owe their collective common ancestry to some unknown popuklation(s) of single-celled organisms.
"

As I had spent months pointing out that Common Descent does not necessarily apply to the evolutionary origins of early cellular life, this was a strawman statement of my position. I suggested a more limited claim for discussion of common descent of vertebrates, but you apparently prefer to declare victory over straw.

joe g: "Nothing else from you will be posted on this blog, in any thread, until you comply."

And this statement represents a ban. You insist I support a strawman or not be allowed to comment. That is your choice. You had already delayed and suppressed some of my previous comments. This behavior reflects poorly on your credibility.

R0b said...

Joe, I hardly know where to begin. The only thing I can do is ask you to support your claims and accusations one by one. I'll number my requests so that none fall through the cracks.

Let's start with this one:

Joe: No, that is jumping to the third node. IOW you didn't apply the EF, you played hopscotch on it.

Joe: The nodes are sequential. The process is step-by-step.

Do you understand anything about processes and procedures?


Joe: Only someone totally ignorant of processes and procedures would apply an equation at step 1 that isn't specified until step 3.


You're referring to the depiction of the EF in TDI, which goes like this:

1. Is it high probability? If so, then ascribe it to regularity.

2. Is it intermediate probability? If so, then ascribe it to chance.

3. Is it small probability and specified? If so, then ascribe it to design. Otherwise ascribe it to chance.

Since you have 30 years of experience with procedures, you'll realize that the decision nodes in the above chain are all mutually exclusive, so if you answer yes to any of them, you are simultaneously answering no to the rest.

Therefore, if all you care about is whether E is designed or not, there is no point in traversing nodes 1 and 2, since a yes or no answer to node 3 is sufficient to settle the design question. If E is SP, it is obviously not HP or IP. If, on the other hand, E is not SP, it is not designed, and our question is answered.

Dembski has consistently stated that improbable complexity (he calls it specified complexity, but since you think the two terms are not synonymous, I'll choose my terms carefully) is sufficient for a design inference. If we determine that E exhibits specified complexity, then we can infer design, period. We do not need to first ask ourselves if E is high probability and then ask ourselves if it's intermediate probability. Those questions are redundant if the probability is small.
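To make the mutual exclusivity concrete, here is a minimal sketch of the TDI chain in Python. The numeric cutoffs are placeholders of my own (Dembski never fixes thresholds for HP and IP), but the structure is the point:

def explanatory_filter(prob, specified, HP_CUTOFF=0.5, IP_CUTOFF=1e-10):
    # Node 1: high probability -> attribute to regularity
    if prob >= HP_CUTOFF:
        return "regularity"
    # Node 2: intermediate probability -> attribute to chance
    if prob >= IP_CUTOFF:
        return "chance"
    # Node 3: small probability and specified -> design; otherwise chance
    return "design" if specified else "chance"

def design_only(prob, specified, IP_CUTOFF=1e-10):
    # Jump straight to node 3: small probability plus specification
    return prob < IP_CUTOFF and specified

For every input, design_only returns True exactly when explanatory_filter returns "design", because the three probability bands partition the possibilities. That is all that skipping nodes amounts to.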

Request #1: Please explain the following: If the probability of an event is small, why do we have to ask ourselves whether the probability is high and then ask ourselves whether the probability is intermediate before we start determining whether the event is specified?

Joe G said...

Request #1: Please explain the following: If the probability of an event is small, why do we have to ask ourselves whether the probability is high and then ask ourselves whether the probability is intermediate before we start determining whether the event is specified?

Because you don't know if the probability is small until due diligence is first applied.

IOW the first two steps must be followed. That is the purpose of a three-step process- to follow the first step first; then and only then do you proceed. If you get to the second node you do what it asks. Then you proceed. If you get to the third node you do what it asks.

And you do so for the reasons explained in my posts above.

Joe G said...

I am still waiting for secondclass to support his accusations and claims...

Joe G said...

joe g: "Nothing else from you will be posted on this blog, in any thread, until you comply."

Zachriel:
And this statement represents a ban.

Only in your bitty little mind.

Ya see Zachriel, as I have explained many times now, if we don't observe and don't expect NH in single-celled organisms then we shouldn't expect it from the populations that evolved from them.

Also I am only here to substantiate my claim that the EF, if properly applied, would have eliminated pulsars from the design inference (the investigators would have eliminated it).

I am not here to answer anyone's questions about anything else.

R0b said...

Joe: Because you don't know if the probability is small until due diligence is first applied.

This makes no sense. We have to calculate the probability before we can evaluate any of the nodes, and once we calculate it we can immediately tell whether it's high, intermediate, or small. We don't have to ask ourselves the questions in any particular order.

That is the purpose of a three-step process- to follow the first step first; then and only then do you proceed.

You seem to think that order is always important in procedures, but that's obviously false. Sometimes order doesn't matter, specifically when you have a chain of mutually exclusive decision nodes.

For instance, consider the following:

1. If the temperature is greater than 80 degrees, then it's hot.

2. If the temperature is between 60 and 80, then it's moderate.

3. If the temperature is below 60, then it's cold.

Does the evaluation order matter? Of course not. We don't have to decide whether it's hot before we decide whether it's cold.
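If a toy demonstration helps, here is the same point in Python: shuffle the order in which the three checks run, and the verdict never changes.

import random

def classify(temp, order):
    # Three mutually exclusive, exhaustive temperature bands
    checks = {
        "hot":      lambda t: t > 80,
        "moderate": lambda t: 60 <= t <= 80,
        "cold":     lambda t: t < 60,
    }
    for label in order:
        if checks[label](temp):
            return label

order = ["hot", "moderate", "cold"]
for temp in (0, 70, 95):
    verdicts = set()
    for _ in range(10):
        random.shuffle(order)              # evaluate the nodes in a random order
        verdicts.add(classify(temp, order))
    assert len(verdicts) == 1              # same answer every time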


Request #2: Given a chain of mutually exclusive decision nodes, explain why the order of evaluation is important.

R0b said...

Joe: Also I am only here to substantiate my claim that the EF, if properly applied, would have eliminated pulsars from the design inference (the investigators would have eliminated it).

Since we're talking about procedures, I'll point out that in order to calculate or estimate the probability of an event under a chance hypothesis, you first have to identify the chance hypothesis. You're claiming that pulsar signals have high or intermediate probability, but there's no way that you could have determined this because you haven't identified the chance hypothesis. You haven't even taken the preparatory steps toward applying the EF.

#3: Under what chance hypothesis (identifiable when pulsar signals were first discovered) is a pulsar signal high or intermediate probability?

Joe G said...

Joe: Because you don't know if the probability is small until due diligence is first applied.

secondclass:
This makes no sense.

Of course not.

secondclass:
We have to calculate the probability before we can evaluate any of the nodes, and once we calculate it we can immediately tell whether it's high, intermediate, or small.

That's false. We calculate the probability only based on the knowledge gained via due diligence, ie research.

That is the purpose of a three-step process- to follow the first step first; then and only then do you proceed.

secondclass:
You seem to think that order is always important in procedures, but that's obviously false.

Order is important to the EF. Order is important to most procedures. If order wasn't important all you would need is a checklist.

You made a list of accusations against me. I have answered that list.

Now all you can do is to say that procedures don't always follow in order. I bet you went to school in a small yellow school bus.

Finally, as Wm. Dembski states:

"The principal advantage of characterizing design as a complement of regularity and chance is that it avoids committing itself to a doctrine of intelligent agency.
Defining design as the negation of regularity and chance avoids prejudicing the causal stories we associate with the design inference."


You can't do that if you jump to the third node. You would be basing anything from the third node on ignorance.

It looks like I am finished here. Anyone who thinks one can skip steps is not worth wasting any more time on.

You know where I will be...

Joe G said...

OK one more:

Joe: Also I am only here to substantiate my claim that the EF, if properly applied, would have eliminated pulsars from the design inference (the investigators would have eliminated it).

secondclass:
Since we're talking about procedures, I'll point out that in order to calculate or estimate the probability of an event under a chance hypothesis, you first have to identify the chance hypothesis. You're claiming that pulsar signals have high or intermediate probability, but there's no way that you could have determined this because you haven't identified the chance hypothesis. You haven't even taken the preparatory steps toward applying the EF.

Don't tell me what I have or haven't done. You are the one skipping around like Mary chasing her little lamb.

Research- that is how I would make any determination. First I would check the equipment. Once I found I could "hear" the signal on all channels I would be suspicious.

That would lead me to heightened research to find what could cause such a thing.

The design inference is NOT a rush to judgement. Maybe you like to rush but most scientists would be wary of such an approach.

c-ya

R0b said...

Joe: Everything that is complex has a small probability- just like all widgets are gadgets. However everything that has a small probability does not have to be complex- just like all gadgets don't have to be widgets.

In contrast, I have shown that Dembski characterizes improbable events as complex, regardless of whether they're complex in the conventional sense. For instance, I quoted Dembski in regards to simple SETI signals:

So we have simplicity of description combined with complexity in the sense of improbability of the outcome. That’s specified complexity and that’s my criterion for detecting design.

He also ascribes specified complexity to the Caputo sequence of ballot headliners, even though that sequence is very simple.

Also, I have provided several quotes showing that Dembski uses the terms "specified complexity" and "specified improbability" synonymously.

#4: Please provide a quote from Dembski showing that not everything that has a small probability is complex according to Dembski's usage of the term.

R0b said...

Joe: You made a list of accusations against me. I have answered that list.

You have attempted to answer only one. Here are the others so far:

#2: Given a chain of mutually exclusive decision nodes, explain why the order of evaluation is important.

#3: Under what chance hypothesis (identifiable when pulsar signals were first discovered) is a pulsar signal high or intermediate probability?

#4: Please provide a quote from Dembski showing that not everything that has a small probability is complex according to Dembski's usage of the term.

Joe G said...

The EF is a flowchart. Anyone familiar with flowcharts understands that one node must be completed BEFORE going on to the next.

That is more than enough to expose secondclass's dishonesty and lack of integrity.

Thanks for the thread Alan. It was good to expose secondclass for what he is.

R0b said...

Joe: Don't tell me what I have or haven't done.

So are you saying that you have identified a chance hypothesis? I must have missed it. Please point out where you did this.

Joe G said...

My job is finished here secondclass. I have no interest in discussing this any further with someone as dishonest as you are.

R0b said...

Joe: Anyone familiar with flowcharts understands that one node must be completed BEFORE going on to the next.

So Joe, who has "been writing processes and procedures as part of my job for over 30 years" and will put his "knowledge of processes and procedures up against ANYONE on this planet", thinks that there is no case in which the order of flowchart nodes can ever be switched.

Given my example from above:

1. If the temperature is greater than 80 degrees, then it's hot.

2. If the temperature is between 60 and 80, then it's moderate.

3. If the temperature is below 60, then it's cold.

Joe thinks that these decision nodes cannot be swapped around. We must check to see if it's hot before we check to see if it's cold.

R0b said...

Joe: My job is finished here secondclass. I have no interest in discussing this any further with someone as dishonest as you are.

Oh, but I'm just getting started. If asking you to support your claims is dishonest, then call me Enron.

Zachriel said...

joe g: "I am not here to answer anyone's questions about anything else."

Just as a reminder, YOU brought me into this discussion with your off-topic comment.

joe g: "A properly applied EF and the researchers who initially inferred design wouldn't have."

The researchers involved (Bell & Hewish) in the discovery of pulsars never made an inaccurate scientific inference of design (though they certainly did consider the possibility).

R0b said...

Joe: The design inference depends on us knowing and understanding what designing agencies are capable of coupled with us knowing and understanding what nature, operating freely, is capable of.

And yet Dembski says the opposite:

Our ability to recognize design must therefore arise independently of induction and therefore independently of any independent knowledge requirement about the capacities of designers.

Joe has completely missed the point of Dembski's eliminative* approach. At no point in Dembski's method do we ever consider designers or their capabilities. Only chance hypotheses are considered, and they're considered in isolation. That is, we consider only nature's capabilities, not designers'.

Obviously, we need to have some background knowledge. Specifically, we need to have a vocabulary and/or a store of patterns from which to draw our specification. But this has nothing to do with designers' capabilities.

#5: Please provide a quote from Dembski indicating that a knowledge of designers' capabilities is necessary in order to infer design.



* Obviously all statistical decision approaches are eliminative in the sense that some hypotheses get eliminated. The question is whether the hypotheses get eliminated in isolation, or in comparison to other hypotheses. When Dembski refers to his approach as eliminative, he specifically means that designers and design are never considered, but only accepted by default.

I'm using Dembski's terminology the way that Dembski uses it. That goes for the words "complexity" and "fabrication" too, both of which Joe has conflated with non-Dembskian meanings.

R0b said...

Joe: Secondclass doesn't know that simplicity of description requires pre-existing knowledge.

#6: Please provide evidence that I don't know that simplicity of description requires pre-existing knowledge.

R0b said...

I said: Joe doesn't know that Dembski's most oft-used example, the Caputo incident, is an instance of specified complexity, according to Dembski.

Joe responded: That is just a lie.

But Joe denied that the Caputo sequence exhibits complexity. Referring to the Caputo sequence, he said: It isn't a complex sequence.

Then later: However it is specified with a small probability of occurring by chance.

So Joe concedes that it is a case of specified improbability, but denies that it's a case of specified complexity.

#8: I said that the Caputo case exhibits specified complexity according to Dembski, and that you don't know that. Please provide evidence that this statement is a lie.

R0b said...

Oops, that last one should be #7.

Joe G said...

Understanding flowcharts

I know all about the Caputo case. I already explained my position on it. That you keep harping on it just shows your desperation.

Then there is the following which you completely ignore:

In order to get to the third node it "takes considerable background knowledge" (Dembski NFL page 111). He goes on to say "What's more it takes considerable background knowledge to come up with the right pattern (ie, specification) for eliminating all those chance hypotheses and thus inferring design."

Background knowledge is the knowledge of what designing agencies are capable of coupled with what nature, operating freely, is capable of.

Now we have Zachriel chiming in with:

Just as a reminder, YOU brought me into this discussion with your off-topic comment.

joe g: "A properly applied EF and the researchers who initially inferred design wouldn't have."

Zachriel:
The researchers involved (Bell & Hewish) in the discovery of pulsars never made an inaccurate scientific inference of design (though they certainly did consider the possibility).

Thanks for the support. However someone did dub the signal LGM for little green men.

So THERE you have it secondclass. The case is closed. The design inference was never made by the scientists doing the research- according to Zachriel.

And I have to change my statement to: "A properly applied EF and the chuckleheads who initially inferred design wouldn't have."

But then again the EF is only as good as the people using it. That must be why it goes to sh!+ when secondclass uses it.

R0b said...

Joe: IOW the Court did NOT share Dembski's inference.

According to Dembski, they did.

NFL page 58: T is the rejection region implicitly used by the New Jersey Supreme Court to defeat chance as the explanation of Caputo's ballot line selection.

Dembski steps through the court's reasoning in detail in NFL, and he shows exactly how the court inferred design.

NFL page 82: This rational reconstruction of the Caputo case is in my view not only faithful to the reasoning employed by the New Jersey Supreme Court in its deliberations but also normative for how we should conduct chance elimination arguments generally.

#8: Provide a quote from Dembski demonstrating that the Caputo court did not share Dembski's inference.

R0b said...

Joe: Then there is the following which you completely ignore:

In order to get to the third node it "takes considerable background knowledge" (Dembski NFL page 111). He goes on to say "What's more it takes considerable background knowledge to come up with the right pattern (ie, specification) for eliminating all those chance hypotheses and thus inferring design."


I did not ignore it. I addressed it specifically above when I said:

Secondclass: Obviously, we need to have some background knowledge. Specifically, we need to have a vocabulary and/or a store of patterns from which to draw our specification. But this has nothing to do with designers' capabilities.

But you say:

Joe: Background knowledge is the knowledge of what designing agencies are capable of coupled with what nature, operating freely, is capable of.

I'm still waiting for you to provide a quote demonstrating that. In contrast, I provided a quote from Dembski that specifically excludes designers' capacities from the knowledge necessary for a design inference.

I repeat:
#5: Please provide a quote from Dembski indicating that a knowledge of designers' capabilities is necessary in order to infer design.

Doppelganger said...

I wouldn't mind seeing something indicating that Dembski has sufficient background knowledge regarding the bacterial flagellum such that he can even apply his 'filter' to it.

R0b said...

Joe: So THERE you have it secondclass. The case is closed. The design inference was never made by the scientists doing the research- according to Zachriel.

I never said it was. You are the one who referred to "the researchers who initially inferred design". My claim all along has been that if they had applied Dembski's method, they would have come up with a false positive.

R0b said...

I said: And until he was corrected, Joe thought that detachability was a sign of fabrication rather than a requirement for specification.

Joe responded: Yes, in my haste to figure out what secondclass was talking about I re-read TDI- a book I had read some 7 years ago- and misread Dembski.

How long ago did you read NFL? Detachability is discussed throughout the book, as it is in TDI. If you don't understand detachability, you don't understand Dembski's approach.

Joe G said...

secondclass:
My claim all along has been that if they had applied Dembski's method, they would have come up with a false positive.

And that is false for the reasons already provided.

Just because you don't understand flowcharts does not mean that no one understands them.

And now it is obvious they wouldn't have inferred design if they applied due diligence.

Case closed c-ya

R0b said...

I'll respond to your answer to the first challenge while you work on answering the other 7.

You claim my pulsar analysis is void because I didn't traverse the nodes of the EF in order. This claim is specious for so many reasons that it will probably take several posts to cover them.

Here is the significant portion of my analysis:

Nobody knows anything about pulsars yet, so our chance hypothesis is random noise, giving us a P(T|H) of 5*10^-27092. To be generous with SpecRes, we assume that all signals that repeat every 34 bits are equally simple.

Apparently, I should have inserted the following:

Nobody knows anything about pulsars yet, so our chance hypothesis is random noise, giving us a P(T|H) of 5*10^-27092. This probability is not high. Nor is it intermediate. To be generous with SpecRes, we assume that all signals that repeat every 34 bits are equally simple.

Later, I calculate that the probability is small enough to infer design, and the fact that it's small means that it's neither high nor intermediate, making those two inserted statements redundant. But according to you, the argument is not valid until I insert them, even though they're superfluous.

Zachriel said...

Zachriel: The researchers involved (Bell & Hewish) in the discovery of pulsars never made an inaccurate scientific inference of design (though they certainly did consider the possibility).

joe g: "Thanks for the support. However someone did dub the signal LGM for little green men. So THERE you have it secondclass. The case is closed. The design inference was never made by the scientists doing the research- according to Zachriel."

The case is closed. This statement is based on a faulty premise.

joe g: "A properly applied EF and the researchers who initially inferred design wouldn't have."

The researchers "never made an inaccurate scientific inference of design". They included design with many other explanations, sought additional evidence and ruled out design without ever referencing Dembski or his Explanatory Filter.

R0b said...

Joe's response to my first challenge was: Because you don't know if the probability is small until due diligence is first applied.

To which I said: We have to calculate the probability before we can evaluate any of the nodes, and once we calculate it we can immediately tell whether it's high, intermediate, or small.

To which you responded: That's false. We calculate the probability only based on the knowledge gained via due diligence, ie research.

I agree that "We calculate the probability only based on the knowledge gained via due diligence, ie research." And since we have to calculate the probability before we can determine whether the event is high probability, it follows that the due diligence and research must be done before applying the filter.

So what do you mean by "That's false"?

R0b said...

With regards to your claims that the EF must be followed step-by-step because it's a flowchart, I respond that the ordering of nodes in flowcharts isn't always important. The example I gave was this:

1. If the temperature is greater than 80 degrees, then it's hot.

2. If the temperature is between 60 and 80, then it's moderate.

3. If the temperature is below 60, then it's cold.

Question: If the temperature is 0, can I determine that it's cold without passing through all nodes?

R0b said...

And the final refutation of your you-can't-skip-nodes argument is that Dembski does it all the time, and he advocates doing it.

See this paper where Dembski tells us how to detect design using the specified complexity criterion. Does he say anywhere in the paper that we should check for HP and IP before checking for specified complexity? No. Dembski, genius that he is, knows that if something is improbable, it necessarily follows that it's not highly probable or intermediately probable.

That paper is Dembski's most current description of his design detection method, and that's what I used in my analysis. If I had wanted to know whether pulsars could be attributed to regularity or chance, I would have stepped through the first two nodes of the EF. But since I only cared about whether it was designed, I immediately calculated the specified complexity (which maps to the sp/SP node), which is what Dembski tells us to do in that paper.

R0b said...

The funny thing about the you-can't-skip-nodes objection is that it doesn't matter because it's trivially reparable.

Here is the repaired version of my pulsar analysis:

START

Imagine that it's 1967 and the first regular periodic radio signal from space has just been discovered. Its pulse width is 0.04 seconds, so we can express the signal in binary at a rate of 25 bits per second. We observe the signal for an hour, gathering 90000 bits of information.

To determine the amount of specified complexity in this information, we use Dembski's definition given here: SC = -log2( 10^120 * SpecRes(T) * P(T|H) ), where T is the pattern, H is the chance hypothesis, and SpecRes is the number of patterns that are as simple and as improbable as T. Nobody knows anything about pulsars yet, so our chance hypothesis is random noise, giving us a P(T|H) of 5*10^-27092. This probability isn't high. Nor is it intermediate. To be generous with SpecRes, we assume that all signals that repeat every 34 bits are equally simple. (This particular signal repeats every 1.337 seconds, which is about 33.4 bits.) This gives us a SpecRes of 1.7*10^10.

Putting it all together we get -log2(10^120 * 1.7*10^10 * 5*10^-27092) = about 90000 bits of specified complexity. Since this value is far greater than 1, we have a very solid design inference. False positive.

Given that SpecRes is inversely correlated with simplicity, a simple periodic signal generates more specified complexity than a signal that answers our questions in morse code, all else being equal. Since we've found specified complexity to be unreliable for the former signal, by what logic should we consider it reliable for the latter signal?

[Edit: I miscalculated P(T|H) by a factor of 25. The final answer is still about 90000 bits.]

END
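For anyone who wants to check the arithmetic: the raw probabilities underflow ordinary floating point around 10^-27092, so the product has to be evaluated in log space. Here is a sketch, taking P(T|H) = 2^-90000 for 90000 bits of random noise (that's the stated 5*10^-27092 corrected by the factor of 25 from my edit note):

import math

BITS = 90000                          # one hour of signal at 25 bits/second
log2_P = -BITS                        # P(T|H) = 2^-90000 under the random-noise hypothesis
log2_specres = math.log2(1.7e10)      # SpecRes(T) = 1.7*10^10, about 34 bits
log2_repl = 120 * math.log2(10)       # Dembski's 10^120 factor, about 399 bits

# SC = -log2(10^120 * SpecRes(T) * P(T|H)), computed as a sum of logs
SC = -(log2_repl + log2_specres + log2_P)
print(round(SC))                      # about 89567, i.e. roughly 90000 bits

Note that the factor-of-25 correction only shifts the total by log2(25), about 4.6 bits, so the design verdict, and hence the false positive, doesn't budge.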


Now your objection is moot, Joe, as are my first two challenges to you, so you're welcome to skip them.

#9: Please tell me what's wrong with my repaired pulsar analysis.

R0b said...

To recap, Joe, here are your challenges so far:

#3: Under what chance hypothesis (identifiable when pulsar signals were first discovered) is a pulsar signal high or intermediate probability?

#4: Please provide a quote from Dembski showing that not everything that has a small probability is complex according to Dembski's usage of the term.

#5: Please provide a quote from Dembski indicating that a knowledge of designers' capabilities is necessary in order to infer design.

#6: Please provide evidence that I don't know that simplicity of description requires pre-existing knowledge.

#7: I said that the Caputo case exhibits specified complexity according to Dembski, and that you don't know that. Please provide evidence that this statement is a lie.

#8: Provide a quote from Dembski demonstrating that the Caputo court did not share Dembski's inference.

#9: Please tell me what's wrong with my repaired pulsar analysis.

Joe G said...

joe g: "A properly applied EF and the researchers who initially inferred design wouldn't have."

Zachriel:
The researchers "never made an inaccurate scientific inference of design". They included design with many other explanations, sought additional evidence and ruled out design without ever referencing Dembski or his Explanatory Filter.

LoL! Is Dembski even 40? IOW in 1967 there wouldn't be any reason to reference Dembski or the EF.

Joe G said...

secondclass:
You claim my pulsar analysis is void because I didn't traverse the nodes of the EF in order. This claim is specious for so many reasons that it will probably take several posts to cover them.

It is only specious to those who do not understand flow charts.

And if you don't understand flow charts there is no reason to discuss the EF with you- for obvious reasons-> It is a flow chart.

I am sure the researchers, had it been available to them, would have understood it and applied it properly. And by doing so they would have come to the same "conclusion" that they finally arrived at.

R0b said...

Joe, believe it or not, I understand flowcharts. Sometimes order is crucial; sometimes it isn't. Is it crucial to put the baking soda in before the baking powder?

I've given several reasons why we can start with the third node if we're only interested in whether the event was designed or not. You've never explained why order matters with a chain of mutually exclusive decision nodes. The fact is that it doesn't. I've shown that Dembski skips right to the third node in his latest design detection method.

But if it will further the discussion, I'll say that I was a very bad boy in skipping the first two nodes. To make amends, I added two redundant sentences to my analysis, so now I'm traversing all of the nodes.

Now please tell me what's wrong with my analysis.

Zachriel said...

Joe, you never corrected your misstatement.

joe g: "A properly applied EF and the researchers who initially inferred design wouldn't have."

The researchers never made a scientific inference of design. You need to correct this misstatement.

Alan Fox said...

Ye Gods Zachriel, you and Pixie have incredible stamina. I especially liked Joe's "A branching tree is not an example of a nested hierarchy".

Anonymous said...

I want to know exactly what the fuck all of this is. I have been on the trail of underground internet fascism. Blogs are a main piece of the puzzle and this site has something to do with it. Email me at XronartestX@aim.com....Keep free speech alive!

Anonymous said...

Hey Al Fox, the forums where you're a moderator are killing free speech.....You are working with a fascist underground...I WANT TO KNOW WHAT ALL THIS SHIT on this website IS ABOUT!!!

Anonymous said...

Sorry for cursing but I have been unwillingly part of some pseudo-experiments and this stuff is clearly a main factor. I want answers to what I have been going through. I am very angry and will not be a sheep. I will not be a victim whether you give me answers or not, because someone is going to be held accountable.

Anonymous said...

My posts have not been going through....

Anonymous said...

OK... they work here. I have gotten involved in some strange manipulation study. I have been gathering bits of information until I found my way here. All the research surrounding your field is enough for me to conclude that you must somehow be involved or have some knowledge of what I am talking about...My story is too long to go through the entire thing, but it was a semi-traumatic experience at times.

Anonymous said...

Me getting answers today is in your best interest. I promise after today I will not be the victim.

Alan Fox said...

Jordan,

I really have no idea what you are talking about. Also I commented on the subsequent thread where you also posted.

Unknown said...

Alan Fox created this, I say, to discredit me, because I threatened him that I would tell people about the racial slurs he threw at me in an email.

R0b said...

One more post to note another case of Joe getting Dembski completely and hilariously wrong. In sections 3.4 and 3.5 of NFL, Dembski describes CSI as a coincidence of conceptual and physical information, with conceptual information referring to a specification, and physical information referring to an event that meets the specification. Dembski is very clear about this, and even depicts it in figures.

But here is how Joe explains it:

CSI can be understood as the convergence of physical information, for example the hardware of a computer and conceptual information, for example the software that allows the computer to perform a function, such as an operating system with application programs. In biology the physical information would be the components that make up an organism (arms, legs, body, head, internal organs and systems) as well as the organism itself. The conceptual information is what allows that organism to use its components and to be alive. After all a dead organism still has the same components. However it can no longer control them.

Apparently Joe thinks that a computer sans software has no conceptual information, and therefore no CSI. And that living things lose all their CSI when they die. Good one, Joe.