
Re: Gaia theropod follow-up: a "new" phylogeny



<But the point is that the "calculator" in a cladistic analysis NEVER gets
exactly the right numbers.>
<To put it another way, you might say that tweaking is trying to make the
analysis MORE objective by removing homoplasy biases.>
<Tweaking is a way of looking for clues.  It's not going to produce
definitive results any more than an analysis with all the characters.>

So the analyst intervenes with the data, trying to make them more 'right'.
This is worthwhile; the analyst is trying to improve the result, and the
cladistic program is treated as a tool for doing calculations too complex
for the analyst to do in her/his head, not as a means of discovery in
itself (except see below).  The results have to be verified and
manipulated, and the source data are assumed to be potentially misleading
to the program.

<To answer your question another way, I think that if the programmer could
recognize every reversal and convergent trait in the fossil record, the
analysis could theoretically be coded in enough detail to perfectly
duplicate evolution.  The algorithm could do it if the programmer knew what
to feed it.  The problem is that we CAN'T recognize all the homoplasy, and
so we can't see the fine details in such characters that would enable us to
code them separately.>

This conclusion is surprising to me.  Given that evolution works in
various ways (including neoteny and new features developed by adaptation),
and sometimes over a comparatively brief period of time, the idea that
applying an algorithm to a set of samples would always conclusively show
relationships seems ambitious.  You are, presumably, assuming that the
number of samples would be sufficient to allow the program to identify the
progression of taxa.
At any rate, this is not the current situation.  The fact that an analysis
is the result of using a cladistic computer program does not ipso facto
give it extra credibility.
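The coding point in the quote above can be made concrete with a toy
example.  The sketch below is mine, not the original poster's: the taxa
(A-D), the trees, and the character states are all invented for
illustration.  It scores characters on fixed trees with Fitch parsimony
and shows why unrecognized convergence misleads: a convergent trait lumped
into one character actively favors the wrong tree, whereas the same
observations coded as two separate characters (as they could be if the
homoplasy were recognized) cost the same on either tree and so support
neither.

```python
# Toy Fitch small-parsimony scorer on a fixed rooted binary tree.
# All taxon names, trees, and character states below are invented.

def fitch_length(tree, states):
    """Minimum state changes for one character on a rooted binary tree.
    tree: nested 2-tuples of taxon names, e.g. (("A","B"),("C","D"))
    states: dict mapping taxon name -> state for this character
    Returns (possible root states, number of changes)."""
    if isinstance(tree, str):                       # leaf node
        return {states[tree]}, 0
    (ls, lc), (rs, rc) = (fitch_length(t, states) for t in tree)
    inter = ls & rs
    if inter:
        return inter, lc + rc                       # children agree: no change
    return ls | rs, lc + rc + 1                     # children disagree: one change

trees = {"((A,B),(C,D))": (("A", "B"), ("C", "D")),
         "((A,C),(B,D))": (("A", "C"), ("B", "D"))}

# State 1 arose convergently in A and C, but is lumped as one character:
lumped = [{"A": 1, "B": 0, "C": 1, "D": 0}]
# Same observations coded as two independent characters:
split = [{"A": 1, "B": 0, "C": 0, "D": 0},
         {"A": 0, "B": 0, "C": 1, "D": 0}]

for name, t in trees.items():
    for label, chars in (("lumped", lumped), ("split", split)):
        total = sum(fitch_length(t, c)[1] for c in chars)
        print(name, label, total)
# The lumped coding costs 1 on ((A,C),(B,D)) but 2 on ((A,B),(C,D)),
# so it pulls the analysis toward the homoplasy-driven grouping;
# the split coding costs 2 on both trees and favors neither.
```

The example is, of course, exactly the poster's point in miniature: the
recoding is only possible if the analyst already knows the trait is
convergent, which is precisely what we usually cannot know.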

<If anything, tweaking complicates the big picture, because it doesn't
involve just stopping with the comprehensive tree and saying "well, we
have such low resolution here, but maybe we'll sort it out someday".  It
carries the analysis further by experimenting with alternatives.>

The problem I see with these experiments, which are feedback-loop
interactions with the program, is that the algorithm is being made the
judge of accuracy.  If I continuously rearrange my data and the taxa
included in order to see whether I can reach what the program sees as a
'good' result, then I have incorporated all and only the program's
assumptions about the way evolution works.  The problem is thus not the
tweaking, but the replacement of the judgment of the person doing the
analysis by the judgment of the computer.  This will produce substantial
consistency among analysts using the same algorithm, but does this
approach really produce more 'accurate' (in quotes because we are dealing
with temporary hypotheses) outcomes?
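One way to see the feedback-loop worry concretely: parsimony programs
report fit statistics such as the ensemble consistency index (CI), and
deleting "badly behaved" characters mechanically raises that number
whether or not the deletions reflect any real knowledge of homoplasy.
The sketch below is my own illustration, not anything from this exchange;
the per-character change counts are invented, and I assume binary
characters so each variable character needs at least one change.

```python
# Hypothetical sketch: "tweaking" by dropping homoplastic characters
# improves the program's own fit statistic by construction.
# Change counts per character are invented for illustration.

def consistency_index(observed_changes):
    """Ensemble CI = (minimum possible changes) / (observed changes),
    assuming binary characters, so each needs at least one change."""
    minimum = len(observed_changes)     # one change per variable character
    return minimum / sum(observed_changes)

matrix = [1, 1, 3, 1, 2, 4]             # changes per character on the preferred tree
print(round(consistency_index(matrix), 3))      # 0.5

tweaked = [c for c in matrix if c <= 2] # "tweak": discard high-homoplasy characters
print(round(consistency_index(tweaked), 3))     # 0.8
```

The score improves, but only because the criterion being optimized is the
program's; nothing about the organisms has been learned in the process.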
I am, by the way, not being critical of what you say.  The fact that the
models produced by someone like HP Holtz would be better than my own using
the same computer program is intuitively very satisfying.  The existence
of a computer program does not turn the analyst into a servant of his/her
PC.