Is God a Programmer? Analyzing the deep-universe simulation hypothesis at the Planck scale
The simulation hypothesis is the proposal that all of reality, including the Earth and the rest of the universe, could be an artificial simulation, such as a computer simulation. Neil deGrasse Tyson put the odds at 50-50 that our entire existence is a program on someone else's hard drive [1]. David Chalmers noted: "We in this universe can create simulated worlds and there's nothing remotely spooky about that. Our creator isn't especially spooky, it's just some teenage hacker in the next universe up. Turn the tables, and we are essentially gods over our own computer creations." [2][3][4]
The commonly postulated ancestor-simulation approach, which Nick Bostrom called "the simulation argument", argues for "high-fidelity" simulations of ancestral life that would be indistinguishable from reality to the simulated ancestor. However, this simulation variant can still be traced back to an 'organic base reality' (the original programmer ancestors and their physical planet).
The Programmer God hypothesis [5][6][7] conversely states that a (deep-universe) simulation began with the big bang and was programmed by an intelligence external to the physical universe; the Programmer is by definition a God in the creator-of-the-universe sense. Our universe in its entirety, down to the smallest detail and including its life-forms, lies within the simulation; the laws of nature, at their most fundamental level, are coded rules running on top of the simulation operating system. The operating system itself is mathematical (and potentially the origin of mathematics).
Any candidate for a Programmer-God simulation-universe source code must satisfy the conditions outlined below.
Philosophy of mathematics is that branch of philosophy which attempts to answer questions such as: ‘why is mathematics useful in describing nature?’, ‘in which sense, if any, do mathematical entities such as numbers exist?’ and ‘why and how are mathematical statements true?’ This reasoning comes about when we realize (through thought and experimentation) how the behavior of Nature follows mathematics to an extremely high degree of accuracy. The deeper we probe the laws of Nature, the more the physical world disappears and becomes a world of pure math. Mathematical realism holds that mathematical entities exist independently of the human mind. We do not invent mathematics, but rather discover it. Triangles, for example, are real entities that have an existence [8].
The mathematical universe refers to universe models whose underlying premise is that the physical universe has a mathematical origin, the physical (particle) universe is a construct of the mathematical universe, and as such physical reality is a perceived reality. It can be considered a form of Pythagoreanism or Platonism in that it proposes the existence of mathematical objects; and a form of mathematical monism in that it denies that anything exists except these mathematical objects.
Physicist Max Tegmark, in his book "Our Mathematical Universe: My Quest for the Ultimate Nature of Reality" [9][10], proposed that our external physical reality is a mathematical structure.[11] That is, the physical universe is not merely described by mathematics, but is mathematics (specifically, a mathematical structure). Mathematical existence equals physical existence, and all structures that exist mathematically exist physically as well. Any "self-aware substructures will subjectively perceive themselves as existing in a physically 'real' world".[12]
The principal constraints on any mathematical-universe simulation hypothesis are;
1. the computational resources required. An ancestor simulation can resolve this by adapting the virtual-reality approach, where only the observable region is simulated, and only to the degree required; and
2. that any 'self-aware structures' (humans, for example) within the simulation must "subjectively perceive themselves as existing in a physically 'real' world" [13]. Succinctly, our computer games may be able to simulate our physical world, but they are still only simulations of a physical reality (regardless of how realistic they may seem) ... we are not yet able to program actual physical dimensions of mass, space and time from mathematical structures, and indeed this may not be possible with a computer hardware architecture that can only process binary data.
As a deep-universe simulation is programmed by an intelligence external to the universe (the Programmer God hypothesis), we cannot presume a priori knowledge of the simulation source code, other than that the laws of nature emerged from this code. Any deep-universe simulation model we try to emulate must therefore be universal, i.e. independent of any system of units, of the dimensioned physical constants (G, h, c, e ...), and of any numbering system. Furthermore, although a deep-universe simulation source code may use mathematical forms (circles, spheres ...) with which we are familiar (the source code being the origin of these forms), it will have been developed by a non-human intelligence, and so we may have to develop new mathematical tools to decipher the underlying logic. By implication, any theoretical basis for a source code that fits the above criteria (and from which the laws of nature emerge) could be construed as our first tangible evidence of a non-human intelligence.
A physical universe is characterized by measurable quantities (size, mass, color, texture ...), and so a physical universe can be measured and defined. Contrast then becomes information: the statement 'this is a big apple' requires a small apple against which 'big' becomes a relative term. For analytical purposes we select a reference value, for example 0°C or 32°F, and measure all temperatures against this reference. The finer the resolution of the measurements, the greater the information content (the file size of a 32 mega-pixel photo is larger than that of a 4 mega-pixel photo). A simulation universe may be presumed to also have a resolution, dictating how much information the simulation can store and manipulate.
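The relationship between resolution and information content can be made concrete with a small sketch (the value range and resolutions are arbitrary illustrative choices, not figures from the text):

```python
import math

def bits_required(value_range, resolution):
    """Bits needed to store one measurement spanning value_range at the
    given resolution: a finer resolution means more distinct values."""
    distinct_values = round(value_range / resolution)
    return math.ceil(math.log2(distinct_values))

# Measuring temperature over a 0-100 degree range:
print(bits_required(100, 1.0))   # 1-degree steps    -> 7 bits
print(bits_required(100, 0.01))  # 0.01-degree steps -> 14 bits
```

As with the photo example, each halving of the resolution step doubles the number of distinct values and so adds one bit per measurement.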
To measure the fundamental parameters of our universe physics uses physical constants. A physical constant is a physical quantity that is generally believed to be both universal in nature and have a constant value in time. These can be divided into
1) dimensioned (measured using physical units kg, m, s, A ...) such as the speed of light c, the gravitational constant G, the Planck constant h ... as in the table
constant | symbol | SI units |
---|---|---|
Speed of light | c | m/s |
Planck constant | h | J s |
Elementary charge | e | C |
Boltzmann constant | kB | J/K |
2) dimension-less, such as the fine structure constant α. A dimension-less constant does not measure any physical quantity (it has no units; units = 1).
There are also dimensionless mathematical constants such as pi. A mathematical constant is a number that can occur within the simulation; pi, for example, can emerge from the rotation of an object. A fundamental physical constant, conversely, is a parameter specifically chosen by the Programmer and encoded directly into the simulation code; whilst it may be inferable, it is not derived from mathematical constants (see Richard Feynman on the fine structure constant). It should also be dimensionless, otherwise the simulation itself becomes dimensioned (if the simulation is running on a celestial computer it is merely data; it has no physical size or shape), and so the dimensioned constants (G, h, c, e ...) must all be derivable from within the simulation via the dimensionless physical constants embedded in the source code (of which the fine structure constant alpha may be an example).
Physicist Lev Okun noted: "Theoretical equations describing the physical world deal with dimensionless quantities and their solutions depend on dimensionless fundamental parameters. But experiments, from which these theories are extracted and by which they could be tested, involve measurements, i.e. comparisons with standard dimensionful scales. Without standard dimensionful units and hence without certain conventions physics is unthinkable" [14].
The Planck scale refers to the magnitudes of space, time, energy and other units, below which (or beyond which) the predictions of the Standard Model, quantum field theory and general relativity are no longer reconcilable, and quantum effects of gravity are expected to dominate (quantum gravitational effects only appear at length scales near the Planck scale).
Although particles may not be cognizant of our 'laws of physics', they do know the 'laws of nature'. These laws of nature would run directly off the universe OS (operating system), and so below this OS, 'physics' as we know it must necessarily break down. At present the Planck scale is the lowest known level; consequently any attempt to detect evidence of an underlying simulation coding must consider (if not actually begin at) this, the Planck scale [15].
The dimensioned SI (mksa) units are the meter (length), kilogram (mass), second (time) and ampere (electric current). There are Planck units corresponding to these SI units, and so a simulation could use them as discrete building blocks: the Planck length (the smallest possible unit of length), the Planck mass (the unit of mass), the Planck time (the smallest possible unit of time) and the Planck charge (the unit of charge). The speed of light then becomes c = 1 Planck length / 1 Planck time. These units would define the resolution, and so the information-carrying capacity, of the simulation universe.
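For reference, the Planck units can be computed from the dimensioned constants (a sketch using CODATA values; the point is that c re-emerges as 1 Planck length per 1 Planck time):

```python
import math

G    = 6.67430e-11      # gravitational constant, m^3 kg^-1 s^-2
hbar = 1.054571817e-34  # reduced Planck constant h/(2*pi), J s
c    = 299792458.0      # speed of light, m/s

planck_length = math.sqrt(hbar * G / c**3)  # ~1.616e-35 m
planck_time   = math.sqrt(hbar * G / c**5)  # ~5.391e-44 s
planck_mass   = math.sqrt(hbar * c / G)     # ~2.176e-8 kg

# The speed of light expressed in these discrete building blocks:
print(planck_length / planck_time)  # recovers c
```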
As well as our decimal system, computers apply binary and hexadecimal numbering systems. The decimal and hexadecimal systems in particular are of terrestrial origin and may not be considered 'universal'. Furthermore, numbering systems measure only the frequency of an event and contain no information as to the event itself. The number 299 792 458 could refer to the speed of light (299 792 458 m/s) or equally to the number of apples in a container (299 792 458 apples). As such, numbers require a 'descriptive', whether m/s or apples. Numbers also do not include their history: is 299 792 458, for example, a derivation of other base numbers?
Present universe simulations use the laws of physics with the physical constants built in; however, both these laws and the physical constants are known only to a limited precision, and so a simulation with 10⁶² iterations (the present age of the universe in units of Planck time) will accumulate errors. Number-based computing may be sufficient for ancestor-simulation models, where only the observed region needs to be calculated, but it has inherent limitations for deep-universe simulations, where the entire universe is continuously updated. The actual computational requirements for a Planck-scale universe simulation based on a numbering system with the laws of physics embedded would be unknown, and consequently lead to a 'non-testable' hypothesis. This is a commonly applied reasoning for rejecting the deep-universe simulation.
A mathematical constant such as pi refers to a geometrical construct (the ratio of a circle's circumference to its diameter) and so is not constrained by any particular numbering system (in the decimal system π = 3.14159...); it may therefore be considered both universal and eternal. Likewise, by assigning geometrical objects instead of numbers to the Planck units, the problems with a numbering system can be resolved. These objects would however have to fulfill the following conditions;
1) embedded attribute - for example, the object for length must embed the function of length such that a descriptive (km, mile ...) is not required. Electron wavelength would then be measurable in terms of the length object, and as such the length object must be embedded within the electron (the electron object). Although the mass object would incorporate the function mass, the time object the function time ..., it is not necessary that there be an individual physical mass or physical length or physical time ..., only that, in relation to the other units, the object expresses that function (i.e.: the mass object has the function of mass when in the presence of the objects for space and time). The electron could then be a complex event (a complex geometrical object) constructed by combining the objects for mass, length, time and charge into one event; electron charge, wavelength, frequency and mass would then be different aspects of that one geometry (the electron event) and not independent parameters (independent of each other).
2) The objects for mass, length, time and charge must be able to combine with each other Lego-style to form more complex objects (events) such as electrons and apples whilst still retaining the underlying information (the mass of the apple derives from the mass objects embedded within that apple).
Not only must these objects be able to form complex events such as particles, but these events themselves are geometrical objects and so must likewise function according to their geometries. Electrons would orbit protons according to their respective electron and proton geometries, these orbits the result of geometrical imperatives and not due to any built-in laws of physics (the orbital path is a consequence of all the underlying geometries). However, as orbits follow regular and repeating patterns, they can be described (by us) using mathematical formulas. As the events grow in complexity (from atoms to molecules to planets), so too will the patterns (and likewise the formulas we use to describe them). Consequently the laws of physics would then become our mathematical descriptions of the underlying geometrically imposed patterns.
3) These objects would replace coded instructions (the instruction sets would be built into the objects) thereby instigating a geometrically autonomous universe. The electron 'knows' what to do by virtue of the information encoded within its geometry, no coded electron CALL FUNCTION is required. This would be equivalent to combining the hardware, software and CPU together such that the 'software' changes (adjusts) to the changing 'hardware' (DNA may be an analogy).
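These conditions can be caricatured in code. Below is a minimal sketch (the class and attribute names are illustrative inventions, not the model's): each object carries its attribute exponents with it, so no external descriptive or CALL FUNCTION is needed, and objects combine Lego-style while retaining the underlying information.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Obj:
    """A geometrical object whose attributes (here, exponents of mass,
    length, time) are embedded within it rather than described externally."""
    mass: int = 0
    length: int = 0
    time: int = 0

    def __mul__(self, other):
        # Lego-style combination: the underlying information is retained
        return Obj(self.mass + other.mass,
                   self.length + other.length,
                   self.time + other.time)

M = Obj(mass=1)    # the mass object
L = Obj(length=1)  # the length object
T = Obj(time=1)    # the time object

# A composite event built from simpler objects:
apple = M * M * M * L
print(apple.mass)  # -> 3: the apple's mass derives from its embedded mass objects
```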
Note: A purely mathematical universe has no limits in size; it can be infinitely large and infinitely small. A geometrical universe (one that uses objects) has limitations: it can be no smaller than the smallest object, for example, and has discrete parts (those objects). The philosophy of the TOE (theory of everything) is a debate between the mathematical universe and the geometrical universe. The principal difference is that in a mathematical universe the dimensionless constants need not be of greater significance than the dimensioned, whereas in a geometrical universe the dimensioned constants are constructed from the dimensionless constants within the universe (for the simulated universe in sum is dimensionless, and so only the dimensionless constants can be embedded in the source code). This distinction between mathematical and geometrical would only be apparent at the Planck scale.
The laws of physics are our incomplete observations of the natural universe, and so evidence of a simulation may be found in ambiguities or anomalies within these laws. Furthermore, if complexity arises over time, then at unit time the 'handiwork' of the Programmer may be notable for a simplicity and elegance in the geometries employed ... for the Programmer by definition has God-level programming skills. Two potential examples are noted here.
The dimensions of mass, space and time are considered by science to be independent of each other, we cannot measure the distance from Tokyo to London using kilograms and amperes, or measure mass using space and time. Indeed, what characterizes a physical universe as opposed to a simulated universe is the notion that there is a fundamental structure underneath, that in some sense mass 'is', that time 'is' and space 'is' ... thus we cannot write kg in terms of m and s. To do so would render our concepts of a physical universe meaningless. The 26th General Conference on Weights and Measures (2019 redefinition of SI base units) assigned exact numerical values to 4 physical constants (h, c, e, kB) independently of each other (and thereby confirming these as fundamental constants), and as they are measured in SI units (kg, m, s, A, K), these units must also be independent of each other (i.e.: these are fundamental units, for example if we could define m using A then the speed of light could be derived from, and so would depend upon, the value for the elementary charge e, and so the value for c could not be assigned independently from e).
constant | symbol | exact value (2019 SI) |
---|---|---|
Speed of light | c | 299 792 458 m/s |
Planck constant | h | 6.626 070 15 × 10⁻³⁴ J s |
Elementary charge | e | 1.602 176 634 × 10⁻¹⁹ C |
Boltzmann constant | kB | 1.380 649 × 10⁻²³ J/K |
We are familiar with inverse properties; plus charge and minus charge, matter and anti-matter ... and we can observe how these may form and/or cancel each other. A simulation universe however is required to be dimensionless (in sum total), for the simulated universe does not 'exist' in any physical sense outside of the 'Computer' (it is simply data on a celestial hard-disk).
Our universe does not appear to have inverse properties such as anti-mass (-kg), anti-time (-s) or anti-space (anti-length -m), therefore the first problem the Programmer must solve is how to create the physical scaffolding (of mass, space and time). For example, the Programmer can select 2 dimensioned quantities, here denoted r, v [16] such that
Quantities r and v are chosen so that no unit (kg, m, s, A) can cancel another unit (i.e.: the kg cannot cancel the m or the s ...), and so we have 4 independent units (we cannot define the kg using the m or the s ...), however if 3 (or more) units are combined together in a specific ratio, they can cancel.
This fX, although the dimensioned structures for mass, time and length are embedded within it (in the above ratio), would be a dimensionless mathematical structure, units = 1.
The dimensioned quantities r and v can be defined in SI unit terms, giving corresponding expressions for mass, length and time.
And so, although fX is a dimensionless mathematical structure, we can embed within it the (mass, length, time ...) structures along with their dimensional attributes (kg, m, s, A ...). In the mathematical electron model (discussed below), the electron itself is an example of an fX structure: felectron is a dimensionless geometrical object that embeds the physical electron parameters of wavelength, frequency and charge (note: A·m, ampere-meter, are the units for a magnetic monopole).
The premise is that at the macro level (of planets and stars) these fX ratios are not found, and so this level is the domain of the observed physical universe; at the quantum level, however, fX ratios do appear, felectron being an example, and the mathematical and physical domains then blur. This would also explain why physics can measure precisely the parameters of the electron (wavelength, mass ...) but has never found the electron itself.
A satellite can orbit the earth at any distance, but in the atom the electron is limited to prescribed orbits. The main electron orbits are defined by a number n (the principal quantum number). This seems to be because each orbit level can fit only a certain number of electrons; orbit level 1 can carry 1 electron, orbit level 2 can carry 4 electrons, orbit level 3 can carry 9 electrons ... in the 1, 4, 9, 16, 25 ... n² series (if we include spin, then it becomes 2n²). We cannot find electrons between these defined orbits, and thus quantum theories were born. This would be as if the satellite could only orbit the earth at fixed distances, i.e. 1 km, 4 km, 9 km, 16 km ...; at the 1 km orbit there is room for only 1 satellite, at the 4 km orbit there is room for 4 satellites, and so on ...
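The n² / 2n² counting described above is simple to tabulate (the satellite analogy's 1 km, 4 km, 9 km orbits follow the same series):

```python
def orbit_capacity(n, include_spin=False):
    """Capacity of orbit level n: n^2, or 2*n^2 once spin is included."""
    return 2 * n * n if include_spin else n * n

print([orbit_capacity(n) for n in range(1, 6)])                     # [1, 4, 9, 16, 25]
print([orbit_capacity(n, include_spin=True) for n in range(1, 6)])  # [2, 8, 18, 32, 50]
```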
A rocket can fly smoothly into space, but an electron can only 'jump' between n orbits. This process takes time, and so if the electron cannot fly between orbits like a rocket (because it would then no longer be quantized), it must in some sense shrink in one orbit while growing in the other ... the official explanation uses more scientific words.
Curiously, if we map an electron as it leaves the atom as a continuous (non-quantized) motion (as we would a rocket taking off and leaving earth), then we find a spiral trajectory. When we decode this spiral, we find that at set orbits the spiral angles cancel to give these exact same integer values: 4, 9, 16, 25 ...
This suggests that the H atom is not defined by quantization, but rather that quantization is a property of this (hyperbolic) spiral. If so, then the electron could travel classically between orbits, as with the rocket example.
The spiral can be represented in Cartesian coordinates by
The spiral has only 2 revolutions, approaching 720° as the radius approaches infinity. If we set the start radius r = 1, then at given angles the radius r will have integer values (the angle components cancel). We then set r = Bohr radius to solve the transition frequencies (= the spiral perimeter) for an electron travelling between these orbitals (some slight distortion occurs as the electron approaches the proton).
“God vs. science debates tend to be restricted to the premise that a God does not rely on science and that science does not need a God. As science and God are thus seen as mutually exclusive there are few, if any, serious attempts to construct mathematical models of a universe whose principle axiom does require a God. However, if there is an Intelligence responsible for the 14 billion year old universe of modern physics, being the universe of Einstein and Dirac, and beginning with the big bang as the act of 'creation', then we must ask how it might be done? What construction technique could have been used to set the laws of physics in motion?” [17]
The (dimensionless) simulation clock-rate would be defined as the minimum 'time variable' (age) increment to the simulation. Using a simple loop as analogy, at age = 1, the simulation begins (the big bang), certain processes occur, when these are completed age increments (age = 2, then 3, then 4 ... ) until age reaches the_end and the simulation stops.
```
'begin simulation
FOR age = 1 TO the_end    'big bang = 1
    conduct certain processes
    ........
NEXT age
'end simulation
```
Quantum spacetime and Quantum gravity models refer to Planck time as the smallest discrete unit of time and so the incrementing variable age could be used to generate units of Planck time (and other Planck units, the physical scaffolding of the universe). In a geometrical model, to these Planck units could be assigned geometrical objects, for example;
```
Initialize_physical_constants;
FOR age = 1 TO the_end
    generate 1 unit of (Planck) time;      '1 time 'object' T
    generate 1 unit of (Planck) mass;      '1 mass 'object' M
    generate 1 unit of (Planck) length;    '1 length 'object' L
    ........
NEXT age
```
The variable age is the simulation clock-rate; it is simply a counter (1, 2, 3 ...) and so is a dimensionless number, whereas the object T is the geometrical Planck-time object, which is dimensioned and measured by us in seconds. If age is the origin of Planck time (1 increment to age generates 1 T object), then at present age = 10⁶²; this is based on the present age of the universe, which, at 14 billion years, equates to 10⁶² units of Planck time.
For each age, certain operations are performed; only after they are finished does age increment (there is no time interval between increments). As noted, age, being dimensionless, is not the same as dimensioned Planck time, which is the geometrical object T, and this T, being dimensioned, can only appear within the simulation. The analogy would be the frames of a movie: each frame contains dimensioned information, but there is no time interval between frames.
```
FOR age = 1 TO the_end    '(of the movie)
    display frame{age}
NEXT age
```
Although the operations (between increments to age) may be extensive, self-aware structures within the simulation would have no means to determine this; they could only perceive themselves as being in real time (for them the smallest unit of time is 1 T, just as the smallest unit of time in a movie is 1 frame). Their (those self-aware structures') dimension of time would then be a measure of relative motion (a change of state), and so, although ultimately deriving from the variable age, their time would not be the same as age. If there were no motion, if all particles and photons were still (no change of state), then their time dimension could not update (if every frame in a movie were the same, then actors within that movie could not register a change in time); age however would continue to increment.
Thus we have 3 time structures: 1) the dimensionless simulation clock-rate variable age, 2) the dimensioned time unit (the object T), and 3) time as change of state (the observer's time). Observer time requires a memory of past events against which a change of state can be perceived.
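The three time structures can be separated in a toy loop (a sketch; the state strings are arbitrary): age always increments, one T object is generated per increment, but observer time advances only when a change of state is registered.

```python
def run(states):
    """states: one entry per simulation increment."""
    age, t_units, observer_time = 0, 0, 0
    previous = None
    for state in states:
        age += 1                 # 1) dimensionless clock-rate counter
        t_units += 1             # 2) one dimensioned T object per increment
        if previous is not None and state != previous:
            observer_time += 1   # 3) observer registers a change of state
        previous = state
    return age, t_units, observer_time

# A universe that is motionless for its first three increments:
print(run(["still", "still", "still", "moved", "moved", "moved again"]))  # (6, 6, 2)
```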
The forward increment to age would constitute the arrow of time. Reversing this would reverse the arrow of time, the universe would likewise shrink in size and mass accordingly (just as a white hole is the (time) reversal of a black hole).
```
FOR age = the_end TO 1 STEP -1
    delete 1 unit of Planck time;
    delete 1 unit of Planck mass;
    delete 1 unit of Planck length;
    ........
NEXT age
```
Adding mass, length and time objects per increment to age would force the universe expansion (in size and mass), and as such an anti-gravitational dark energy would not be required. However, these objects are dimensioned and are generated within the simulation. This means that they must somehow combine in a specific ratio whereby their units (for mass, length, time, charge: kg, m, s, A) in sum total cancel each other, leaving the sum universe (the simulation itself), residing on that celestial hard-disk, dimensionless.
We may introduce a theoretical dimensionless geometrical object fPlanck within which the dimensioned objects MLTA (mass, length, time, charge) are embedded, and from which they may be extracted.
```
FOR age = 1 TO the_end
    add 1 fPlanck    'dimensionless geometrical 'object'
    {
        extract 1 unit of (Planck) time;      '1 time 'object' T
        extract 1 unit of (Planck) mass;      '1 mass 'object' M
        extract 1 unit of (Planck) length;    '1 length 'object' L
    }
    ........
NEXT age
```
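As a runnable sketch of the loop above (Python standing in for the pseudocode; fPlanck is modelled simply as a counter of dimensionless objects from which the unit counts are extracted):

```python
def simulate(the_end):
    """Each increment adds one dimensionless fPlanck and extracts its
    embedded Planck-unit objects, growing the universe in size and mass."""
    f_planck = 0
    universe = {"T": 0, "M": 0, "L": 0}
    for age in range(1, the_end + 1):
        f_planck += 1        # add 1 dimensionless fPlanck
        universe["T"] += 1   # extract 1 time object T
        universe["M"] += 1   # extract 1 mass object M
        universe["L"] += 1   # extract 1 length object L
    return age, universe

age, universe = simulate(1000)
print(age, universe)  # -> 1000 {'T': 1000, 'M': 1000, 'L': 1000}
```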
Thus, no matter how small or large the physical universe is (when seen internally), in sum total (when seen externally), as a construct of the dimensionless variable age and the dimensionless geometrical structure fPlanck, it is data without physical form.
As the universe expands outwards (through the constant addition of units of mass and length via fPlanck), and if this expansion pulls particles with it (if it is the origin of motion), then now (the present) would reside on the surface of the universe, which expands constantly at the speed of light (c = 1 Planck length / 1 Planck time). The 'past' could then be retained, for in an expanding universe the past cannot be over-written by the present (if the now is on the surface). As this expansion occurs at the Planck scale, information even below quantum states, down to the Planck scale, could be retained; the analogy would be the storing of every keystroke, a Planck-scale version of the Akashic records ... for if our deeds (the past) are both stored and cannot be over-written (by the present), then we have a candidate for the 'karmic heavens' (Matthew 6:19 "But lay up for yourselves treasures in heaven, where neither moth nor rust doth corrupt, and where thieves do not break through nor steal").
This also forms a universe time-line against which previous information can be compared with new information (a 'memory' of events), without which we could not link cause with effect.
In a simulation, the data (software) requires a storage device that is ultimately hardware (RAM, HD ...). In a data world of 1's and 0's, such as a computer game, characters within that game may analyze other parts of their 1's and 0's game, but they have no means to analyze the hard disk upon which they (and their game) are stored, for the hard disk is an electro-mechanical device; it is not part of their 1's and 0's world, it is part of the 'real world', the world of their Programmer. Furthermore, the rules programmed into their game would constitute for them the laws of physics (the laws by which their game operates), but these may or may not resemble the laws that operate in the 'real world' (the world of their Programmer). Thus any region where the laws of physics (the laws of the game world) break down would be significant. A singularity inside a black hole is such a region [18].
For the black-hole electron, its black-hole center would then be analogous to a storage address on a hard disk, the interface between the simulation world and the real world. A massive (galactic) black-hole would be as an entire data sector.
The surface of the black-hole would then be of the simulation world, the size of the black hole surface reflecting the stored information, the interior of the black-hole however would be the interface between the data world and the 'hard disk' of the real world, and so would not exist in any 'physical' terms. It is external to the simulation. As analogy, we may discuss the 3-D surface area of a black-hole but not its volume (interior).
The scientific method is built upon testable hypotheses and reproducible results. Water always boils (under defined conditions) at 100°C. In a geometrical universe, particles behave according to geometrical imperatives: the geometry of the electron and proton ensures that electrons will orbit nuclei in repeating and predictable patterns. The laws of physics would then be a set of mathematical formulas that describe these repeating patterns; the more complex the orbits, the more complex the formulas required to describe them, and so forth. However, if there is a source code from which these geometrical conditions were programmed, then there may also be non-repeating events, back-doors built into the code (a common practice by terrestrial programmers). These, by definition, would lie outside the laws of physics and so be labelled miracles, yet they would be no less valid.
Particles form more complex structures such as atoms and molecules via a system of orbitals: nuclear, atomic and gravitational. The 3-body problem is the problem of taking the initial positions and velocities (or momenta) of three or more point masses and solving for their subsequent motion according to Newton's laws of motion and Newton's law of universal gravitation [20]. Simply put, this means that although a simulation using gravitational orbitals of similar mass may have a pre-determined outcome, it seems that for gods and men alike the only way to know what that outcome will be is to run the simulation itself.
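The point can be demonstrated with a minimal planar N-body integrator (a sketch: G = 1, arbitrary units, illustrative initial conditions): a change of one part in 10⁹ in one starting position still yields a different outcome, and the only way to find either outcome is to run the loop.

```python
def step(bodies, dt):
    """One Euler-Cromer step; each body is a [x, y, vx, vy, mass] list, G = 1."""
    for i, b in enumerate(bodies):
        ax = ay = 0.0
        for j, o in enumerate(bodies):
            if i == j:
                continue
            dx, dy = o[0] - b[0], o[1] - b[1]
            r3 = (dx * dx + dy * dy) ** 1.5
            ax += o[4] * dx / r3
            ay += o[4] * dy / r3
        b[2] += ax * dt
        b[3] += ay * dt
    for b in bodies:
        b[0] += b[2] * dt
        b[1] += b[3] * dt

def final_position(perturb, steps=2000, dt=0.01):
    bodies = [[0.0, 0.0, 0.0, 0.0, 1.0],     # central mass
              [1.0, 0.0, 0.0, 1.0, 1.0],     # orbiting mass
              [-1.0, perturb, 0.0, -1.0, 1.0]]  # orbiting mass, nudged by perturb
    for _ in range(steps):
        step(bodies, dt)
    return bodies[2][0], bodies[2][1]

a = final_position(0.0)
b = final_position(1e-9)  # one part in a billion difference at the start
print(a, b)               # the outcomes differ; only running the loop reveals them
```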
Physicist Eugene Wigner (The Unreasonable Effectiveness of Mathematics in the Natural Sciences) [21] noted:
The miracle of the appropriateness of the language of mathematics for the formulation of the laws of physics is a wonderful gift which we neither understand nor deserve.
The following is adapted from the mathematical electron model [22], this model illustrates how a Planck scale deep universe simulation could be implemented using geometrical objects.
The biggest problem with any mathematical universe approach is constructing a physical reality (the physical dimensions of mass, space and time) from mathematical structures. Our computer games may be able to simulate our physical world, but they are still simulations of a physical reality. The 1999 film The Matrix and the ancestor simulation both still begin with a physical level (a base reality), the planet earth. Here we look at the theory behind constructing physical units from mathematical structures.
Geometrical objects are selected whose attributes are mass M, length L, time T and ampere A. These MLTA objects are the geometry of 2 dimensionless physical constants, the fine structure constant α and Omega Ω (alpha = 137.035999084, Omega = 2.0071349496), and so are themselves dimensionless.
Attribute | Geometrical object | SI unit equivalent
---|---|---
mass | M | kg
time | T | s
length | L | m
velocity | V | m/s
ampere | A | A
These MLTA objects may interact with each other; this can be represented by assigning to each attribute a unit number θ (i.e. θ = 15 ⇔ kg). This unit number dictates the relationship between the objects [23]. As such a mathematical relationship cannot occur in a purely 'physical' universe, evidence of a unit number relationship can be taken as evidence that we are in a simulation, for such a relationship is a requirement of a simulation universe.
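The unit number idea can be sketched as ordinary exponent bookkeeping: every quantity carries an integer θ, multiplication adds θ values and division subtracts them. The class name `Q` is illustrative only; the θ values are those assigned in this article.

```python
# Minimal sketch of the unit-number relationship: each quantity carries an
# integer theta; multiplying quantities adds thetas, dividing subtracts them.
# The theta assignments (mass 15, length -13, time -30, velocity 17, ampere 3)
# are taken from the article's tables.

class Q:
    def __init__(self, value, theta):
        self.value = value
        self.theta = theta
    def __mul__(self, other):
        return Q(self.value * other.value, self.theta + other.theta)
    def __truediv__(self, other):
        return Q(self.value / other.value, self.theta - other.theta)

length = Q(1.0, -13)
time = Q(1.0, -30)
velocity = length / time
print(velocity.theta)  # -13 - (-30) = 17, the unit number assigned to velocity
```

Dividing the length object by the time object yields exactly the θ assigned to velocity, which is how the unit numbers encode the relationships between attributes.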
As alpha and Omega have numerical values, so too the MLTA objects can be expressed numerically. We can then convert these objects to their Planck unit equivalents by including a scalar.
attribute | geometrical object | scalar (unit number)
---|---|---
mass | M | k (θ = 15)
time | T | t (θ = -30)
velocity | V | v (θ = 17)
length | L | l (θ = -13)
ampere | A | a (θ = 3)
For example, V = 25.3123819353..., and so we can use the scalar v to convert the dimensionless geometrical object V to the dimensioned c.
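A worked example of the conversion, under one stated assumption: the form V = 2πΩ² is not given in the text above and is assumed here because it reproduces the quoted value 25.3123819353...; the scalar v then follows from c = V · v.

```python
import math

# Sketch of converting the dimensionless object V to the dimensioned c.
# ASSUMPTION: V = 2*pi*Omega**2, chosen because it reproduces the quoted
# value 25.3123819353...; the scalar v carries the dimension (m/s).

OMEGA = 2.0071349496
V = 2 * math.pi * OMEGA ** 2     # dimensionless geometrical velocity object
c = 299792458                     # SI speed of light, exact by definition
v_scalar = c / V                  # the dimensioned scalar v, in m/s

print(V)         # ~25.31238...
print(v_scalar)  # ~1.18437e7 m/s
```

With a different unit system (miles per second, say) only v_scalar changes; the object V is unchanged, which is the point of separating object from scalar.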
As each scalar incorporates its dimensioned quantity (the quantity for v being m/s, or miles/s), the unit number relationship applies, and we find that only 2 scalars are needed: in defined ratios the scalars overlap and cancel. The scalar unit numbers are: ampere a = u^3, length l = u^-13, time t = u^-30, mass k = u^15 (u^θ represents the unit).
For example, if we know the numerical values for a and l, then we know the numerical value for t, and from l and t we know k ... and so if we know any 2 scalars (α and Ω have fixed values), then we can solve for the Planck units (for that system of units), and from these we can solve for (G, h, c, e, me, kB).
In this table the 2 scalars used are r and v.
attribute | geometrical object | unit number θ | scalar r (θ = 8), v (θ = 17)
---|---|---|---
mass | M | 15 = 8*4 - 17 | r^4/v
time | T | -30 = 8*9 - 17*6 | r^9/v^6
velocity | V | 17 | v
length | L | -13 = 8*9 - 17*5 | r^9/v^5
ampere | A | 3 = 17*3 - 8*6 | v^3/r^6
Solving for the physical constants: the scalars are unit-system dependent, so we need different scalars for different units (meters or miles, etc.). Using α, Ω and CODATA 2014 (in which c and μ0 have exact values) gives values for the scalars v (θ = 17) and r (θ = 8).
constant | geometrical object | calculated (α, Ω, r, v) | CODATA 2014 [24]
---|---|---|---
Planck constant h | | 6.626 069 134 e-34, u^19 | 6.626 070 040(81) e-34
Gravitational constant G | | 6.672 497 192 29 e-11, u^6 | 6.674 08(31) e-11
Elementary charge e | | 1.602 176 511 30 e-19, u^-27 | 1.602 176 620 8(98) e-19
Boltzmann constant kB | | 1.379 510 147 52 e-23, u^29 | 1.380 648 52(79) e-23
Thus we may have dimensioned units within the simulation (when seen from inside), yet still maintain a dimensionless universe externally (external to the universe). We only need a geometrical object fX that is itself dimensionless but embeds the MLTA objects. The electron (fe) is an example.
If the electron is a mathematical particle, and the universe is constructed from mathematical particles, then the universe itself is a mathematical universe
This 'mathematical electron' formula fe embeds the units ALT (AL, as an ampere-meter, being the units for a magnetic monopole).
The AL magnetic monopoles confer the electric properties of the electron and also determine the duration of the electron frequency (0.2389 × 10^23 units of Planck time). At the conclusion of this electric (magnetic monopole) 'wave-state', the AL units intersect with time T; the units then collapse, exposing a unit of M (Planck mass) for 1 unit of Planck time. This is a variation on the black hole electron, where the electron here is centered on this unit of Planck mass, but this mass is normally obscured by the electric (AL) cloud.
In order that the electron may have dimensioned (measurable) parameters (electron mass, wavelength, frequency, charge ...), the geometry of the mathematical electron (the electron 'event') includes (embeds) the geometrical MLTA (mass, length, time, charge) objects; this electron 'event' then dictates how those MLTA objects are arranged into dimensioned electron parameters. The electron itself can be considered as equivalent to a programming sub-routine: it does not have dimensioned units of its own (there is no physical electron); instead the electron is a geometrical formula that encodes the MLTA information required to implement those electron parameters. It is these parameters, and not the electron, that we measure (the existence of the electron is inferred, not observed).
electron mass (M = Planck mass) = 0.910 938 232 11 e-30 kg
electron wavelength (L = Planck length) = 0.242 631 023 86 e-11 m
elementary charge (T = Planck time) = 0.160 217 651 30 e-18 C
Rydberg constant = 10 973 731.568 508 m^-1
The electron formula embeds dimensioned quantities yet is a dimensionless mathematical formula (the scalars have cancelled). Using this unit number relationship we can find other examples of combinations of the physical constants which reduce to their MLTA equivalents (the scalars have cancelled). The precision of the results depends on the precision of the SI constants; combinations with G and kB return the least precise values.
As the theory requires that column 1 (because the scalars have cancelled) is column 2 (i.e. identity, not merely numerical equality), this table can be used to validate the premise that the MLTA objects are natural units, i.e. units used by the universe itself (embedded in the simulation code).
Note: the geometry (integer n ≥ 0) is common to all ratios where units and scalars cancel. Dimensionless combinations are characterized by this geometrical base-15.
CODATA 2014 (mean) | (α, Ω) | units u^θ = 1
---|---|---
α | α |
1.000 8254 | = 1.0 |
0.228 473 639... × 10^-58 | 0.228 473 759... × 10^-58 |
0.326 103 528 6170... × 10^301 | 0.326 103 528 6170... × 10^301 |
0.170 514 342... × 10^92 | 0.170 514 368... × 10^92 |
73 095 507 858. | 73 035 235 897. |
3.376 716 | 3.381 506 |
Scientific American 2005: "These constants (G, h, c, e, me, kB) form the scaffolding around which the theories of physics are erected, and they define the fabric of our universe, but science has no idea why they take the special numerical values that they do, for these constants follow no discernible pattern. The desire to explain the constants has been one of the driving forces behind efforts to develop a complete unified description of nature, or 'theory of everything'. Physicists have hoped that such a theory would show that each of the constants of nature could have only one logically possible value. It would reveal an underlying order to the seeming arbitrariness of nature" [25].
The simulation clock can give us the expansion of the universe in size and mass via these Planck objects; a 14-billion-year-old universe would put age ≈ 10^62. This expansion can also be used to introduce motion (particle momentum) by pulling particles with it. The problem, however, is that this expansion occurs at the speed of light (c = 1 unit of Planck length per 1 unit of Planck time), and so we need a reference for our surroundings (if everything is moving away from us at the speed of light then we have no means to detect anything). One solution is for an expanding 4-axis hyper-sphere universe to project onto a fixed 3-D (Newtonian) background using the mathematics of perspective. If we can perceive only this 3-D background, then we will perceive the motion of all objects as relative to our own motion; the expansion of the universe at the speed of light will be 'invisible' to us. This can be achieved using the electromagnetic spectrum.
The mathematics of perspective is a technique used to project a 3-D image onto a 2-D screen (i.e.: a photograph or a landscape painting), using the same approach here would implement a 4-axis expanding hypersphere super-structure in which 3-D space is the projection [26].
The expanding hyper-sphere can be used to replace independent particle motion (momentum) with motion as a function of the expansion itself: as the universe expands (adding units of mass, space and time in the process), it pulls all particles along with it. The particle aspect of the universe thereby resides on the hyper-sphere surface (3-D space). As photons (the electromagnetic spectrum) have no mass state, they cannot be pulled along by the universe expansion (consequently they are date-stamped; as it takes 8 minutes for a photon to travel from the sun, that photon is 8 minutes old when it reaches us), and so photons would be restricted to a lateral motion within the hyper-sphere. As the electromagnetic spectrum is the principal source of information regarding the environment, a 3-D relative space would be observed (as a projected image from within the 4-axis hyper-sphere); the relativity formulas can then be used to translate between the hyper-sphere co-ordinates and our observable 3-D space co-ordinates [27].
In hyper-sphere co-ordinate terms, age (the simulation clock-rate) and velocity (the velocity of the universe expansion as the origin of c) would be constants, and thus all particles and objects, as they are pulled along by this hyper-sphere expansion, would travel at, and only at, the speed of light c (the photon does not travel away from us at velocity c; it is we who travel away from the photon at c). In 3-D space co-ordinate terms, however, time and motion would be relative to the observer.
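A worked sketch of the translation, under the stated premise: if an object's total hypersphere velocity is always c and its observed 3-D component is v, the remaining component along the expansion (time) axis is sqrt(c² − v²), and the ratio c / sqrt(c² − v²) is the familiar special-relativistic dilation factor γ.

```python
import math

# Premise (from the text): total hypersphere velocity is always c.
# Decomposing it into an observed 3-D component v and a time-axis
# component sqrt(c^2 - v^2) gives gamma = c / sqrt(c^2 - v^2),
# identical to the special-relativistic 1 / sqrt(1 - v^2/c^2).

def gamma(v, c=299792458.0):
    time_axis = math.sqrt(c * c - v * v)  # component along the expansion axis
    return c / time_axis

print(gamma(0.6 * 299792458.0))  # 1.25, matching 1 / sqrt(1 - 0.36)
```

At v = 0 the whole velocity budget goes into the time axis (γ = 1, the fastest possible clock), which matches the claim that motion through space comes at the expense of motion through time.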
To simulate gravity, orbiting objects A, B, C ... are sub-divided into discrete points, each point representing 1 unit of Planck mass mP (for example, a 1 kg satellite would be divided into 1 kg/mP ≈ 45 940 509 points). Each point in object A then forms an orbital pair with every point in objects B, C ..., resulting in a universe-wide, n-body network of rotating point-to-point orbital pairs [28]. Each orbital pair rotates 1 unit of Planck length lp per unit of Planck time tp at velocity c (c = lp/tp) in hyper-sphere space co-ordinates; when mapped over time, gravitational orbits emerge between the objects A, B, C ...
The basic simulation uses only the start position of each point, as it maps only rotations of the points within their respective orbital pairs; information regarding the macro objects A, B, C ... (momentum, center of mass, barycenter, etc.) is not required.
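A toy sketch of a single orbital pair, with loudly stated simplifications: 2-D instead of the hypersphere, one Planck-mass point per object, arbitrary units, and `rotate_pair` is an illustrative construction (rotation about the pair midpoint by a fixed arc length per tick), not the published algorithm. Even so, it shows the claimed character of the scheme: only start positions are needed, and a circular orbit emerges from nothing but repeated pair rotation.

```python
import math

# Toy orbital pair: rotate both points about their midpoint by a fixed
# arc length per tick (standing in for 1 Planck length per Planck time).
# No masses, momenta or barycenters are stored -- only positions.

def rotate_pair(a, b, step=0.001):
    """Rotate points a, b about their midpoint by arc length `step`."""
    mx, my = (a[0] + b[0]) / 2, (a[1] + b[1]) / 2
    r = math.hypot(a[0] - mx, a[1] - my)   # half the pair separation
    angle = step / r                        # arc length -> rotation angle
    cs, sn = math.cos(angle), math.sin(angle)
    def rot(p):
        dx, dy = p[0] - mx, p[1] - my
        return (mx + dx * cs - dy * sn, my + dx * sn + dy * cs)
    return rot(a), rot(b)

a, b = (0.0, 0.0), (2.0, 0.0)
for _ in range(1000):
    a, b = rotate_pair(a, b)
# Separation is an invariant of the rotation (~2.0); both points have
# advanced along a circular orbit traced purely by mapping rotations.
print(math.hypot(a[0] - b[0], a[1] - b[1]))
```

In the full scheme each point would carry one such rotation per counterpart point, and summing the rotations over the whole network is what the article claims produces the emergent orbits.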
Each point represents 1 unit of Planck mass; however, particles have a mass much less than Planck mass, and so a point could comprise 10^20 or more particles. The point itself can then be sub-divided into particle-particle orbital pairs, the classic example being the H atom (with a single proton-electron orbital pair); in the atomic orbital version of the simulation, the H atom electron transition frequencies are used as reference for studying particle-particle orbital pairs.
The simulation treats particles not as distinct entities but as oscillations between an electric wave-state (duration: particle frequency) and a mass point-state (duration: 1tp). As the mass point-state occurs seldom relative to the electric wave-state (duration Planck mass/electron mass ≈ 10^23 units of Planck time), the atomic orbital is predominately a wave-state rotation effect. To ensure that at every unit of Planck time an orbital point has, on average, one particle in the mass point-state (so that 1 unit of Planck mass, and hence a gravity effect, occurs at every unit of Planck time), we would, if for example we have only electrons, require 10^23 electrons, such that on average 1 electron will be in the mass point-state at any given unit of time.
Gravity is therefore not weaker than the electric force; rather it is stronger at the Planck scale (point-point orbitals rotate faster than wave-wave). Its apparent weakness is simply because point-point rotations, when mapped over time, seldom occur relative to wave-wave rotations in orbitals (the probability of occurrence being the inverse of the gravitational coupling constant). This also means that gravitational orbits as we observe them are time-emergent properties of rotating orbitals; at the Planck scale there is no gravity or electric force.
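The probability weighting can be made concrete with standard numbers. Under the oscillation picture above, an electron is in the mass point-state for 1 Planck time per mP/me ≈ 10^23 ticks, so the chance of two electrons being point-state simultaneously is (me/mP)², which is the conventional gravitational coupling constant αG (CODATA values used below).

```python
# Probability sketch: each electron is point-state for 1 tick in ~10^23,
# so two electrons coincide in the point-state with probability
# (me/mP)^2 = alpha_G, the gravitational coupling constant (~1.75e-45).

m_electron = 9.10938356e-31   # electron mass, kg (CODATA)
m_planck = 2.176470e-8        # Planck mass, kg (CODATA)

ratio = m_electron / m_planck   # ~4.2e-23: per-tick point-state probability
alpha_G = ratio ** 2            # ~1.75e-45: both electrons point-state
print(alpha_G)
```

This is the sense in which gravity's weakness becomes a statement about how rarely point-point rotations occur, rather than about the strength of an underlying force.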
Note: for mapping atomic orbitals, the simulation includes an additional alpha term to account for the slower wave-wave rotation; other than this, the principal difference between gravitational and atomic orbitals is one of scale. There are not 2 separate forces used by the simulation; instead particles are treated as oscillations between the 2 states (electric and mass), and as the orbitals themselves are essentially the same, the same program can be used for both.
Particles are treated as an electric wave-state to (Planck) mass point-state oscillation: the wave-state has the duration of the particle frequency in Planck time units; the point-state (this state can be assigned mapping coordinates) has a duration of 1 unit of Planck time; the particle itself is an oscillation between these 2 states (i.e. not a fixed entity). If we require 10^20 assorted particles to make 1 Planck mass, then each gravity point has 10^20 particles. This point would then have an average mass of 1 Planck mass, and so gravity (orbital pair rotation) can occur at each unit of Planck time.
We can then zoom in and create an orbital which has 2 particles instead of 2 mass points. This orbital is predominately electric-wave to electric-wave; the simplest example is the H atom (an electron-proton orbital pair), and as its transition frequencies have been precisely measured, we can use them as a reference to analyse individual orbitals [29]. The gravitational orbital can then be considered as the scaling up of the underlying atomic orbitals, and the gravitational orbit as the time-averaging of gravitational orbitals.
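The reference frequencies in question come from the standard Rydberg formula; the sketch below computes two well-known hydrogen lines from the Rydberg constant quoted earlier (10 973 731.568 508 m⁻¹), which is the kind of precisely measured benchmark the atomic-orbital version of the simulation would be tested against.

```python
# Hydrogen transition wavelengths from the Rydberg formula,
# 1/lambda = R * (1/n1^2 - 1/n2^2), using the Rydberg constant
# quoted earlier in the article.

R_INF = 10973731.568508  # m^-1

def h_wavelength(n1, n2):
    """Wavelength (m) of the H transition n2 -> n1 (n2 > n1 >= 1)."""
    return 1.0 / (R_INF * (1.0 / n1**2 - 1.0 / n2**2))

print(h_wavelength(1, 2) * 1e9)  # Lyman-alpha, ~121.5 nm
print(h_wavelength(2, 3) * 1e9)  # Balmer H-alpha, ~656.1 nm
```

(Strictly, measured hydrogen lines use R_H, which differs from R∞ by the finite proton mass; the difference is below the precision needed here.)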