Introduction
We have surveyed scientific theories of parallel universes and found that they naturally form a four-level hierarchy of multiverses (Figure 1), allowing progressively greater differences from our own universe:
- Level I: Other Hubble volumes with different initial conditions.
- Level II: Other post-inflationary bubbles, perhaps with different effective laws of physics (constants, dimensionality, particle content).
- Level III: Other branches of the quantum wavefunction, adding nothing qualitatively new.
- Level IV: Other mathematical structures, with different fundamental equations of physics.
Whereas the Level I universes join seamlessly, there are clear demarcations between universes within Levels II and III, caused by inflating space and decoherence, respectively. The Level IV universes are completely disconnected and need to be considered together only for predicting your future, since "you" may exist in more than one of them.
Although it was Level I that brought Giordano Bruno into conflict with the Inquisition, few astronomers today would suggest that space ends abruptly at the edge of the observable universe. It is ironic, and perhaps a historical accident, that Level III has drawn the most criticism in recent decades, since it is the only level that adds nothing qualitatively new to the typology of universes.
Future prospects
There are ample future prospects for testing, and potentially refuting, these multiverse theories. In the coming decade, dramatically improved cosmological measurements of the microwave background radiation, the large-scale matter distribution, and so on, will test Level I by further constraining the curvature and topology of space, and will test Level II by providing stringent tests of inflation. Progress in both astrophysics and high-energy physics should also clarify the extent to which various physical constants are fine-tuned, thereby weakening or strengthening the case for Level II. If the current worldwide effort to build quantum computers succeeds, it will provide further evidence for Level III, since such computers would, in essence, be exploiting the parallelism of the Level III multiverse for parallel computation (Deutsch 1997). Conversely, experimental evidence of unitarity violation would rule out Level III. Finally, success or failure in the grand challenge of modern physics, unifying general relativity with quantum field theory, will shed further light on Level IV. Either we will eventually find a mathematical structure matching our universe, or we will run up against a limit to the unreasonable effectiveness of mathematics and have to abandon Level IV.
The measure problem
Multiverse theories also raise interesting theoretical problems, most notably the measure problem. As multiverse theories gain credence, the vexing question of how to compute probabilities in physics is growing from a minor nuisance into a major embarrassment. Probabilities matter because, if there really are many copies of "you" with identical past lives and memories, you could not compute your own future even with complete knowledge of the entire state of the multiverse: there is no way to determine which of these copies is "you" (they all feel that they are). All you can predict, therefore, are probabilities for what you will observe, corresponding to the fractions of those observers that experience different outcomes. Unfortunately, computing what fraction of the infinitely many observers perceive what is exceedingly subtle, since the answer depends on the order in which you count them! The fraction of integers that are even is 50% if you order them 1, 2, 3, 4, ..., but approaches 100% if you order them digit by digit, the way a word processor would (1, 10, 100, 1000, ...).
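The order-dependence of such limiting fractions is easy to verify numerically. The sketch below (the helper `fraction_even` is ours, purely for illustration) compares the natural ordering with the start of the word-processor ordering, whose initial segment 1, 10, 100, 1000, ... consists of powers of ten, all but the first of which are even:

```python
def fraction_even(seq):
    """Fraction of the enumerated integers that are even."""
    seq = list(seq)
    return sum(1 for n in seq if n % 2 == 0) / len(seq)

# Natural ordering 1, 2, 3, ..., N: the fraction settles at 1/2.
print(fraction_even(range(1, 10001)))              # 0.5

# Digit-string ordering begins 1, 10, 100, 1000, ...:
# along this initial segment the fraction tends to 1.
print(fraction_even([10**k for k in range(14)]))   # 13/14 ≈ 0.93
```

The same countable set thus yields different limiting fractions under different enumerations, which is exactly why disconnected observers need an explicit measure.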
When observers reside in disconnected universes, there is no obviously natural way to order them, and one must instead sample from the different universes with some statistical weights, referred to by mathematicians as a "measure". This problem crops up in a mild and treatable form at Level I, becomes severe at Level II, has caused much debate over how to extract quantum probabilities at Level III (de Witt 2003), and is horrendous at Level IV. At Level II, for instance, Vilenkin and others have published predictions for the probability distributions of various cosmological parameters, arguing that different parallel universes that have inflated by different amounts should be given statistical weights proportional to their volume (e.g., Garriga & Vilenkin 2001a). On the other hand, any mathematician will tell you that 2 × ∞ = ∞, so there is no objective sense in which an infinite universe that has expanded by a factor of two has become larger. Indeed, an exponentially inflating universe has what mathematicians call a timelike Killing vector, meaning that it is time-translationally invariant and hence unchanging from a mathematical point of view. Moreover, a flat universe with finite volume and the topology of a torus is equivalent to a perfectly periodic universe with infinite volume, both from the mathematical bird's-eye view and from the frog's-eye view of an observer inside it. So why should its infinitely smaller volume give it zero statistical weight? Since Hubble volumes start repeating even within the Level I multiverse (albeit in a random order, not periodically) after about 10^(10^118) metres, should infinite space really be given greater statistical weight than a finite region of that scale? This conundrum must be resolved before models of stochastic inflation can be tested observationally.
If that seems bad, consider the problem of assigning statistical weights to different mathematical structures at Level IV. The fact that our universe seems relatively simple has led many people to suggest that the correct measure somehow involves complexity. For instance, one might reward simplicity by weighting each mathematical structure by 2^(−n), where n is its algorithmic information content measured in bits, defined as the length of the shortest bit string (a computer program, say) that specifies it (Chaitin 1987).
This would correspond to uniform weights for all infinite bit strings (each of which can be represented as a real number such as .101011101...), not for all mathematical structures. If there is such an exponential penalty for high complexity, we should probably expect to find ourselves inhabiting one of the simplest mathematical structures complex enough to contain observers. However, algorithmic complexity depends on how structures are mapped to bit strings (Chaitin 1987; Deutsch 2003), and it is far from obvious that there is a most natural definition that reality might subscribe to.
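The link between the two weightings can be checked numerically: under a uniform measure on infinite bit strings, the set of strings beginning with a given n-bit description has measure exactly 2^(−n). A minimal Monte Carlo sketch (the helper `prefix_weight` is hypothetical, written only for this illustration):

```python
import random

def prefix_weight(prefix, trials=100_000, seed=0):
    """Estimate the fraction of uniformly random bit strings that
    begin with `prefix` -- the measure that a 2^-n weighting
    assigns to an n-bit description."""
    rng = random.Random(seed)
    hits = sum(
        all(rng.getrandbits(1) == b for b in prefix)
        for _ in range(trials)
    )
    return hits / trials

# A 3-bit description should receive weight 2^-3 = 0.125.
print(prefix_weight([1, 0, 1]))  # ≈ 0.125
```

The estimate converges to 2^(−n) regardless of which particular n-bit prefix is chosen, which is the sense in which uniform weights on bit strings penalise complex descriptions exponentially.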
The pros and cons of parallel universes
So should you believe in parallel universes? Let us conclude with a brief discussion of arguments for and against. First, we have seen that this is not a yes/no question; rather, the most interesting issue is whether there are 0, 1, 2, 3, or 4 levels of multiverses. Figure 1 summarises the evidence for the different levels. Cosmological observations support Level I by pointing to a flat, infinite space with an ergodic matter distribution, and Level I combined with inflation elegantly eliminates the initial-condition problem. Level II is supported by the success of inflation theory in explaining cosmological observations, and it can explain the apparent fine-tuning of physical parameters. Level III is supported by both experimental and theoretical evidence for unitarity, and it explains the apparent quantum randomness that bothered Einstein so much, without abandoning causality from the bird's-eye view. Level IV explains Wigner's unreasonable effectiveness of mathematics in describing physics and answers the question: "Why these equations, not others?"
The principal arguments against parallel universes are that they are wasteful and that they are weird, so let us consider these two objections in turn. The first argument is that multiverse theories are vulnerable to Ockham's razor, since they postulate the existence of other worlds that we can never observe. Why should nature be so ontologically wasteful and indulge in such opulence as an infinity of different worlds? Intriguingly, this argument can be turned around to argue for a multiverse. When we feel that nature is wasteful, what precisely are we disturbed about her wasting? Certainly not "space", since the standard flat-universe model with its infinite volume draws no such objections. Certainly not "mass" or "atoms" either, for the same reason: once you have wasted an infinite amount of something, who cares if you waste some more? Rather, it is probably the apparent reduction in simplicity that is disturbing, the amount of information needed to specify all these unseen worlds. However, as discussed in more detail in Tegmark (1996), an entire ensemble is often much simpler than one of its members. For instance, the algorithmic information content of a generic integer n is of order log₂ n (Chaitin 1987), the number of bits required to write it in binary. Yet the set of all integers 1, 2, 3, ... can be generated by quite a trivial computer program, so the algorithmic complexity of the whole set is smaller than that of a generic member. Similarly, the set of all perfect-fluid solutions to the Einstein field equations has a smaller algorithmic complexity than a generic particular solution, since the former is specified simply by giving a few equations while the latter requires the specification of vast amounts of initial data on some hypersurface.
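The integer example can be made concrete. Generating the entire ensemble takes only a few bytes of code, whereas pinning down one generic member of size n costs about log₂ n bits (the helper names below are ours, chosen for the illustration):

```python
import itertools

def all_integers():
    """The whole ensemble: a one-line program yields every positive integer."""
    return itertools.count(1)

def bits_to_specify(n):
    """One generic member: writing n in binary takes ~log2(n) bits."""
    return n.bit_length()

gen = all_integers()
print([next(gen) for _ in range(5)])   # [1, 2, 3, 4, 5]
print(bits_to_specify(10**100))        # 333 -- a googol needs 333 bits
```

The asymmetry only grows with n: the generator stays the same size while the description length of a typical member increases without bound.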
Loosely speaking, the apparent information content rises when we restrict our attention to one particular element of an ensemble, thereby losing the symmetry and simplicity inherent in the totality of all elements taken together. In this sense, the higher-level multiverses have less algorithmic complexity. Going from our universe to the Level I multiverse eliminates the need to specify initial conditions, upgrading to Level II eliminates the need to specify physical constants, and the Level IV multiverse of all mathematical structures has essentially no algorithmic complexity at all. Since it is only in the frog's-eye view, in the subjective perceptions of observers, that this opulence of information and complexity really exists, a multiverse theory is arguably more economical than one that grants only a single ensemble element physical existence (Tegmark 1996).
The second common complaint about multiverses is that they are weird. This objection is aesthetic rather than scientific and, as noted above, really only makes sense in the Aristotelian worldview. In the Platonic paradigm, one might expect observers to complain that the correct Theory of Everything (TOE) was weird if the bird's-eye view differed sufficiently from the frog's-eye view, and there is every indication that this is the case for us. The perceived weirdness is hardly surprising, since evolution endowed us with intuition only for the everyday physics that had survival value for our distant ancestors. Thanks to clever inventions we have glimpsed slightly beyond the frog's-eye view of our normal inside perspective, and sure enough we have encountered bizarre phenomena whenever we departed from human scales in any way: at high speeds (time slows down), on small scales (quantum particles can be in several places at once), on large scales (black holes), at low temperatures (liquid helium can flow upward), at high temperatures (colliding particles can change identity), and so on. As a result, physicists have by and large already accepted that the frog's-eye and bird's-eye views are very different. A prevalent modern view of quantum field theory is that the standard model is merely an effective theory, a low-energy limit of a yet-to-be-discovered theory that is even further removed from our comfortable classical concepts (involving strings in 10 dimensions, say). Many experimentalists are becoming blasé about producing so many "weird" (but perfectly repeatable) experimental results, and simply accept that the world is a weirder place than we had thought, and get on with their calculations.
We have seen that a common feature of all four multiverse levels is that the simplest and arguably most elegant theory involves parallel universes by default, and that one must complicate the theory by adding experimentally unsupported processes and ad hoc postulates (finite space, wavefunction collapse, ontological asymmetry, and so on) to explain away the parallel universes. Our aesthetic judgement therefore comes down to what we find more wasteful and inelegant: many worlds or many words. Perhaps we will gradually get used to the weird ways of our cosmos, and even find its strangeness to be part of its charm.
Acknowledgements: The author (Max Tegmark, Dept. of Physics, Univ. of Pennsylvania, Philadelphia, PA 19104; max@physics.upenn.edu) wishes to thank Anthony Aguirre, Aaron Classens, Angelica de Oliveira-Costa, George Musser, David Raub, Martin Rees, Harold Shapiro, and Alex Vilenkin for stimulating discussions. This work was supported by NSF grants AST-0071213 & AST-0134999, NASA grants NAG5-9194 & NAG5-11099, a fellowship from the David and Lucile Packard Foundation, and a Cottrell Scholarship from Research Corporation.