One
of the
major end-the-world-in-2012 theories is Timewave Zero, which purports
to
produce mathematical evidence that there is a singularity at the end of
2012. A
singularity is a point beyond which something cannot be known. It may
foretell
a physical catastrophe, or it may foretell a Rapture of
the Nerds – or
anything in between. Is it likely that this theory, which occupies pages of formulae and can be bought as a computer program, is predicting the end of the world as we know it?
Timewave Zero
was proposed by Terence McKenna and Dennis McKenna in their book, "The
Invisible Landscape". "Timewave Zero" is the
McKenna
brothers' way of representing the flow of "novelty" over time. The
graphic representation is derived from the standard sequence of
hexagrams of
the I Ching in what is known as the King Wen sequence. At one point,
the graph
dives through the axis – this is presumed to be infinite
'novelty' – and ends.
This is the end of history. It is not necessarily a violent or final
end of
humanity, but a point after which a discussion of changes in 'novelty'
becomes
useless.
The book is about more than a mathematical theory. The first part is
about the
relationship between the nervous system and the physical universe via
the
linking mechanism of quantum mechanics (or at least electron spin
resonance), a
discussion which appears to be intended to show that consciousness is
more than
an accidental spark of the human nervous system. Quantum mechanics, which on some interpretations depends on a conscious observer, leads the most rigorous and mathematical minds into hair-tearing frenzies, since they then need fundamental definitions of consciousness and observerdom (cf. Penrose, pp. 1031–1033). In the hands of the less rigorous, there can be even more to say on the subject, as in this book.
The book begins with the relationship between Shamanism and
mind-altering
substances. The authors claim that drug molecules in the neurons which
cause
the hallucinations of the shaman slip inside the spiral of DNA,
altering its
electron spin resonance. If you take these drugs and sing, they go on, the electromagnetic field produced by your singing could affect these drug/DNA complexes, and this could produce a hologram of an idea, which is a sort of flying saucer. They call this the "apparent functioning of the audilely
induced intercalation of harmine into the genetic matrix." (p. 121)
(Harmine is a drug molecule.)
Accordingly, the brothers took drugs, which made them believe an insect was guarding them. They felt schizophrenic and met bizarre life-forms.
This apparently
proved their point. "In the wake of our experiment there have been,
besides enormous disruptions of statistical norms in the form of
accumulations
of meaningful coincidences, instances of physical phenomena absolutely
beyond
current understanding. Telepathic phenomena especially, in our
subjective
judgment, were manifest several times during this
period…[some] may violate the
usual laws of conventional physics."
They then have a "sharp shift of focus" for part II, and discuss the I Ching in detail. The I
Ching is
an oracle, but the brothers do not use it as an oracle. They discuss
the layout
of the pieces of the oracle in their standard sequence. (Think of it as
describing the pieces on a chess board before the game begins.) The I
Ching
"pieces" are 64 hexagrams or designs of six lines in a vertical
column. Each line can be broken or unbroken. Because there are two
states
(broken or unbroken) and six lines, there are 2^6 combinations, or 64
different
possible hexagrams. In this arrangement, the King Wen
sequence, the 64 are
arranged in 32 pairs and the differences between the pairs can be
described in
shorthand as the number of lines which change between the first and the
second
member of the pair. To stress the point again, the King Wen is the
standard or
book sequence of the hexagrams, not part of a divinatory activity.
To the McKennas, the I Ching is related in a fundamental way to a
calendar they
postulate for the Ancient Chinese. A lunar year of 13 months of 29.53 days is about 384 days, and 384 is 64 multiplied by 6; the I Ching has sixty-four hexagrams of six lines each. Furthermore:
64 hexagrams X 6 lines is 384 days (one lunar year);
384 days X 64 is 67 solar years and about 104 days, or six sunspot cycles of 11.2 years each (McKenna says the Ancient Chinese were aware of the sunspot cycle);
67 solar years and 104 days X 64 is about 4,306 years, roughly two Zodiacal ages;
4,306 years X 6 is about 25,836 solar years, or about one precession of the equinoxes.
McKenna knows that there is little direct evidence that this proposed
calendar
was actually used in Ancient China, and so he says things like, "It is possible to suggest that in Neolithic China the sequence of 64 hexagrams X 6 was actually known and used as a calendar" (p. 132), and that this "suggests a possible historical speculation". McKenna goes to great
lengths to explain that this calendar, if it were ever used, would be a
very
accurate one. "The relative clumsiness of the traditional Chinese
calendar
seems to lend support to this idea that it conceals an older, reformed
lunar
calendar. [It] had an average length of 360 days into which two
intercalary
months were inserted every five years. This means that a lunar year of
12
months would have 354.4 days per year, which is short of the average
duration
of 360 days. The solar year, on the other hand, is 5.25 days longer than the 360-day year. When 10.85, the sum of 5.25 and 5.6, is taken times 5 (the
five
years into which the two intercalary months are inserted), the result
is 54.26
days, which is approximately five days less than exactly two lunar
months.
These calculations show that, in spite of the insertion of the two
extra lunar
months every five years, this calendar…would still gain 4.8
days every five
years; compare this to the loss of one day every 454.5 lunar years for a thirteen-month lunar year of 384 days, with only one intercalary day every ten
years. This latter represents a level of accuracy improved by a factor
of many
thousands. Even in comparison with [our calendar] this calendar is accurate by a factor well over one hundred. [Even without intercalation, it is twice as accurate as ours.]" (p. 133)
McKenna is taken with the concept of 64. He points out that there are
64 codons
in nucleic acid. This is true, and it's true for the same reason as the
number
of hexagrams in the I Ching – very simple arithmetic. There
are four 'letters'
in DNA and each 'word' is three letters long. So there are 4^3, or 64
possible
words in DNA. He does not mention that most of these 'words' are
'redundant' –
variant spellings for the same amino acids. There are approximately 20 amino acids, plus stop and start codons. The other 'words' call up the
same
amino acids. For instance, Tryptophan only has one spelling: UGG. But
UUA, UUG,
CUU, CUC, CUA and CUG all spell "Leucine". The genetic code behaves
exactly as you would expect something to behave that was parsimonious
(uses as
few letters as possible). 4^2 (16) would be too few, and 4^4 would be
far too
many; 4^3 is just right. More bizarrely, he also says that the number of folk taxonomies or 'conceptual categories' equals 2^6, or 64. This is like reading one of those books that say there are seven basic plots, or five, or three: the number of folk taxonomies in the world probably exceeds the number of categories in this particular folk taxonomy. Are there really 64 ways, no more and no fewer, to divide up reality? I think Jorge Luis Borges probably came up with more than that all by himself, for fun!
The rationale
seems to be that you can't think about more than six orthogonally
related
binary dimensions. Maybe so, but I don't think it is an actual limit,
like the
number of codons in DNA or the number of hexagrams in the I Ching.
The King Wen sequence is artificial, in the sense that the hexagrams
are not
arranged in random pairs. There was thought behind the sequence, just
as there
was thought behind the arrangement of chessmen on a chess board.
There is a vast number of "logical orders" a series of hexagrams can be arranged in: for instance, alternate the first line between complete and broken, then the second line, and so on. Or each hexagram can be followed by the same hexagram "upside down". The reasoning behind the King Wen sequence isn't known, but it has clearly been thought out.
For
instance, a type of difference called a "five" (five lines are
different between pairs) never appears; the type of difference called a
"two" appears twenty times and the type of difference called a
"six" appears nine times. "Even" to "odd"
transitions appear three times more often than odd to even. The
McKennas found
that there was a logical beginning and end to the sequence –
and this occurs at
the standard King Wen beginning and end – which is not
random. Millions of
tries with a random number generator only found a few sequences with
this type
of logic. It clearly isn't due to chance. If the first order of
difference
(number of line changes between pairs) is graphed, McKenna found a
"singularity" – the first three and last three differences
are
identical, so the graph can be joined up at that point.
So far so good. The sequence is graphed, from beginning to end, and
then
reversed and the reversal laid over the first graph. Then the McKennas
start
saying things like, "We will consider the space-time continuum as a
modular wave-hierarchy. This hierarchy is composed of waves that have,
on the
most simple level, the configuration that is generated by figure 18b
[the two graphs
put together running backward and forward from the singularity]. The
energy map
of changes plotted backward upon itself forms pairs that, when added together, always equal 64. The graph was seemingly constructed to be superimposed
backward upon itself, suggesting to us that time was understood to
function
with the same holographic properties that have long been an accepted
part of
the phenomenon of the perception of three-dimensional space." (p. 146)
This isn't meaningful – the changes do add to 64, obviously,
but "energy
pairs" doesn't mean anything here – they are drawings. The
graph wasn't
"seemingly constructed" at all, since the McKennas themselves
constructed it for a reason and have no need to assume anything
– the Ancient Chinese
probably didn't construct it at all. (The McKennas even say on p. 143
"whether they graphed the first order of difference, as we have
done…is
moot".) And "holographic properties" are not really "an
accepted part of the phenomenon of the perception of three-dimensional
space"
– despite a chapter earlier on holograms, this statement is a
stretch and has
no mathematical basis. Further on they say, "We have called the
quantized
wave-particle, whatever its level of occurrence within the hierarchy or
its
duration, eschaton". By now, they are simply manipulating words. The
"eschaton" is a religious term, not a physics or biological one.
There is an appendix to the book detailing the mathematics that turn this simple line into "Timewave Zero"; the appendix has also been published online.
What isn't discussed in McKenna's book at all is what is called the Half Twist, or the Watkins Objection. Nowhere in the book, but buried in the user manual for the computer program, is an unexplained mathematical operation.
I'll copy Watkins' objection here at length.
"The formula is really quite inelegant, and I personally found
it hard
to believe that if a map of temporal resonance was encoded into the
King Wen
sequence, it would look like this. In any case, my main concern was
with the
powers of -1. These constitute the "missing step" which isn't
mentioned in The Invisible Landscape, but which turns up as a footnote
of the
TimeExplorer software manual. On p. 79 we find "Now we must change the sign of half of the 64 numbers in angle_lin[] as follows" (snip). When reading this, I immediately thought "WHY?", as did several friends and colleagues who
I guided through the construction. There is no good reason I could see
for this
sudden manipulation of the data. Without this step, the powers of -1
disappear
from the formula, and the "data points" are a different set of
numbers, leading to a different timewave. McKenna has looked at this
timewave
and agrees that it doesn't appear to represent a map of "novelty" in
the sense that the "real" timewave is claimed to. It is possible that
by changing the "zero date" Dec.
21, 2012,
one could obtain a better fit, but there's no longer any
clear motivation to attempt this, as the main reason for taking the
original
timewave seriously were McKenna's (often very convincing) arguments for
historical correlation. These would all be rendered meaningless without
the
aforementioned step. The footnote associated with this step reads: "This is the mysterious 'half twist'. The reason for this is not well understood at present and is a question which awaits further research." This struck
me as
absurd. After all, why introduce such a step into an (already
overcomplicated)
algorithm whilst admitting that the reason for doing so is "not well
understood at present"? I confronted McKenna on this issue, and he
immediately grasped the significance of my challenge. He would have to
either
(1) justify this mysterious "half twist" or (2) abandon the timewave
theory altogether."
McKenna did not respond adequately at the time, and Watkins concluded:
"Therefore we see that not only is the inclusion of the "half
twist" failing to guarantee the "preservation" of some geometric
property to which McKenna has referred, but the failure is precisely
because of
its inclusion. McKenna's stated reason for this (crucial) step of the
construction is unacceptable. As a mathematician who has met and talked
with
him, who is sympathetic with the majority of his other work, and who is
only
interested in spreading clarity, I must conclude that the "timewave"
cannot be taken to be what McKenna claims it is."
In fact, McKenna
addresses this much later and says,
"The revised wave is very similar in shape but its values are generally
more Novel." Note: very similar in shape. He does not himself go on to
address whether similar is as good as identical.
Sheliak's paper
on the resolution of the
Watkins Objection gives the correlation as
"comparisons ranged
from a low of 0.73 to a high of 0.98 with an average correlation of
0.86."
For the non-mathematically inclined, a correlation of 0.86 is good if, say, you wanted to compare the days you eat an apple with the days the doctor stays away. For a complicated graph that is supposed to predict things
down to
the events of one day in 20 billion years, it is not a good
correlation.
Once all the transformations are done on the data, what one can see is
a complicated
line. It dives down (increases "novelty") at the end. The reason why
the line should represent "novelty" is not well explored. The
definition seems fuzzy. Habit and routine are hills on the graph.
Valleys are
increases in "novelty". The axis is marked in units, but a given unit
of "novelty" is not defined. The line is said to be fractal, so in
the general overview, the entire history of the universe is said to be
represented, and variations in "novelty" are Big Units. In a
close-up, only a few years are shown at once, and the line now
represents
"novelty" only on Earth, and, generally speaking, only where
"novelty" affects the United
States of America.
The units of novelty are smaller.
McKenna explains the fractal scaling of the wavy line (waveform)
generated by
these processes. Briefly, fractal scaling means that a magnification of
a small
portion of the wavy line looks exactly like the overall wavy line.
McKenna's
table, on p. 154, goes from 72 X 10^9 years (billion years) to 1.597 X 10^-27 of a second, which he describes as in the "range of Planck's constant". Note
that this is based on a lunar calendar that may or may not have been
used,
which only needs a day intercalating every few hundred years. This is
very
accurate by our leap-year-every-four-years standards. Remember the part
above
where it is well over one hundred times more accurate? One hundred is
one
followed by two zeros. The range of the graph is from billions of years down to one and a half xontoseconds (which, I am assured by Wikipedia, is the proper word for one octillionth or one quadrilliardth of a second). The "very accurate calendar" that's one hundred times as good as ours is scalable over a range of one to one followed by more than 40 zeros without any kind of adjustment. This seems unlikely.
So, if this timewave can describe the "novelty" in the universe on
scales from xontoseconds to billions of years, and it comes to an end
at the
certain point, how do we determine the exact date of that endpoint? The
answer
is surprising. You guess at the endpoint, based on how you think things
fit.
McKenna says, "First let me explain that we chose the end of the Mayan
Calendar as the "end date" for this graph because we found good
agreement between the events that comprise the historical records and
the wave
itself when this end date was chosen." There is no actual predicted end
date for the timewave – the end date is based on people
examining the up and
down slopes and assigning dates to each one based on how much
"novelty" they believed existed in the universe/earth/United States
in any given epoch/year/day. Although McKenna does not say it in this edition of the book, his original assumption was that the final 67.29-year cycle of the
timewave began on the most "novel" day he could think of, the atomic
bombing of Hiroshima.
Now, this makes two assumptions: first, that our planet is already in the final cycle (assumed without any real justification), and second, that Hiroshima was a 'novel' event in a universally meaningful way. This would put the end date of history
in
mid-November 2012. When McKenna heard about the Mayan calendar, he
apparently
decided that the date should be moved later, to 2012-12-21, to coincide
with
it. Ref: http://www.hermetic.ch/frt/zerodate.html
Others have also noted this assumption. Calleman, in his criticism of John Major Jenkins's Mayan Calendar theory, says, "On this
point some words
should also be said about the work of the late Terence McKenna, who
wrote the
foreword to Jenkins' book. Based on the I Ching McKenna developed a
Time-Wave
function that he claimed would come to an end at the end of the Long
Count, December
21, 2012.
Unfortunately, McKenna is not here to
respond, but Peter Meyer who developed the mathematics of the Time-Wave
in the
McKennas’ book The Invisible Landscape, pointed out already
in his life-time
that McKenna’s time wave, that had been anchored in human
history merely by the
choice of one single event, the Hiroshima Bombing, actually did not end
on
December 21, 2012, but on November 18, 2012. Since McKenna chose not to
respond
to this very serious criticism, we can only assume that it was wishful
thinking
that had led McKenna to say that the end of his time-wave coincided
with the
end-date of the Mayan Long Count."
So the chosen end date for the dive into infinite novelty was
2012-12-21. It is
not a prediction, but an assumption. One usually tests a theory by
making
predictions. Normally, a theory that correctly predicts a future event
is
regarded very highly. It is possible for a theory to be highly regarded
if it
"predicts" something in the past, if that past is not yet known (i.e.
the discovery is in the future). For instance, a theory that says an
asteroid
hit the earth and wiped out the dinosaurs in a certain range of years
(in the
past) is predictive if it says that you can dig in a certain stratum
and find
an Iridium layer – if someone does dig there, and does find an
Iridium layer, it
supports the theory.
What does the Timewave Zero graph predict? Actually, not very much. It
"predicts"
that the end of the Reagan years (in the past) would be followed by a
period of
sharply increasing "novelty". This is not very persuasive. First, the
event is in the past. You're looking at the graph with a history book
in front
of you, and secondly, you are consciously or unconsciously making
assumptions
about what is "novelty" and whether you've had a lot of it or a
little. The graphs are in units of novelty, but a unit has no exact
value even
at a given scale. How novel is Tiananmen Square? How novel was 1968? Was the invasion of Kuwait more or less novel than the fall of the Berlin Wall? Does Tiananmen Square increase or decrease novelty, and does the fall of the Berlin Wall do the same, or the opposite? On the larger scales, isn't a large-scale extinction
event a
decrease in novelty? I can certainly make a case that it has a
potential to
spur a great deal of novelty, but in itself, what is it? The
development of
language was probably an increase in novelty, but what does a broad
peak on a
graph labeled "possible language acquisition" actually mean? How can
the Sixties be a time of steadily decreasing novelty, with the change
coming in
1968? Why does the Black Death occur at the end of an astonishing
increase in
novelty and reverse it? I mean, I can see why it reverses it, but what
caused
the increase just before it broke out? You can read a large number of
these
"predictions" about past events at http://web.archive.org/web/20070806054338/http://levity.com/eschaton/twzdemo.html
(note: this is the web archive as the original has been taken down. The
Wayback Machine does not always answer on the first call, so you may
have to click more than once).
There is a video on YouTube of McKenna going through recent (to him) history.
John Sheliak published a paper called Standard, Revised, and Random Generated TimeWave Results, which goes over the correlations of historical events with the timewave after the Watkins Objection has been addressed. These are the graphs mentioned above, with the 0.73 to 0.98 correlation to the old timewave.
Some of the graphs on this new improved page are similar to the
original
graphs, but in at least one case, where the slope has been reversed,
the
explanation has been reversed also, to match the new slope. For
instance,
McKenna held a strong belief that war is habitual, not novel. Sheliak points out (with a graph): "Figure
19 shows the standard and revised TimeWave comparison graphs for the
period
1935-1955, and there are obvious similarities and clear differences
between the
two waves. Both graphs show that WWII begins and ends during steep
ascents into
habit, but they describe somewhat diverging processes, for much of the
middle
period of the war. The revised TimeWave shows that a very novel process
is
apparently at work for much of the period of the war. The standard
TimeWave
does show novel influences, but it is neither as consistent nor
dramatic as for
the revised TimeWave. Some very potent novel process seems to be
occurring
during much of the war period, and that process may be suppressing a
major
ascent into habit that might otherwise be happening. Could this novel
process
be the development of nuclear science and technology, eventually
leading to the
production and use of nuclear weapons?" He
goes on to suggest that it may seem offensive, but the knowledge of
nuclear weapons is after all knowledge of how stars work, and not
related to
what use the knowledge is put. The use of the power is apparently on a
steep
uphill slope – a sign of habit. "Perhaps the process
of becoming more
aware of nature, and ourselves - is very novel indeed. It is the sacred
knowledge of the shaman, who returns from an immersion into an aspect
of
nature, with guidance or healing for her or his people. We seem to have
lost
the sense of sacred knowledge with its accompanying responsibility,
somewhere
along the way. Perhaps it is time to regain that sense, and reclaim
responsibility for our knowing." What use is a
prediction, if
the graph going either way can be explained by the same phenomenon?
Some of McKenna's other correlations seem like stretches. For instance,
at his
eschaton page, "Here are a series of interesting direct resonances. The
explosion of new life forms in the Cambrian, the crucifixion of Christ
and the
murder of Anwar Sadat are in perfect resonance," says one web page.
Cambrian, Christ and Anwar Sadat?
Show me the assassination of President
Kennedy instead – oh,
it's not there.
One big past event we can all agree increased novelty was the Big Bang. When does the Big Bang occur in Timewave Zero? At 22 billion
years ago.
Scientists today calculate it as around 13.7 ± 0.2 billion
years. This doesn't
dismay the boosters of Timewave Zero. It's a chance to be proven right,
eventually!
In summary, Timewave Zero is a wavy line derived from the arrangement
of
hexagrams in an Ancient Chinese book. You could assume that the
vertical axis
represents "novelty". The graph contains a singularity, where the
line goes below the axis and cannot return. The McKennas assume that
the end
date is 2012-12-21. The graph does not "predict" this date. Rather,
this date is an assumption of the theory. If you call this the end
date, then
you can count days backwards and see if the historical record shows
'novelty' or habit at that time and compare it with the graph. Some people
can see
correlations with historical events.
I'll leave you with McKenna's own words from the book: "It
is possible to suggest…in the wake of the closure of the
vacuum fluctuation, the photonic forms existing and able for the first
time to
obey laws relevant to themselves as photonic holograms (forms) rather
than as
photonic holograms expressed through matter or antimatter, as objects
or
anti-objects. Such a process, though farfetched, would utilize a set of
naturally occurring phenomena (vacuum fluctuations and higher
dimensional
matrices) to create the sort of basic ontological mutation in the
nature of
matter such as, we suggest, may characterize some future shift of
epochs. The
photonic shell, left in the wake of fourth-dimensional merging of
holograms of
matter, and of antimatter, may be the key to a clearer understanding of
the
archetype of a paradisiacal existence at time's end. The spiral
implosion of
time may entail the universe, and every entity in it, meeting and
canceling its
antimatter double to create, through this union of opposites, an
ontological
mutation from matter in a photonic form, which represents tremendous
freedom."
Or of course it may not. I'm willing to bet that it doesn't.
References:
1. Terence McKenna and Dennis McKenna, The Invisible Landscape, HarperSanFrancisco, 1994.