Entropic Scalar EFT - From Entanglement Microstructure to Gravity and Cosmic Structure

Abstract

We propose that the vacuum has a finite local entanglement capacity and that matter consists of localized defects of that substrate. General relativity emerges as the low-energy capacity geometry rather than being assumed independently. A minimal tetrahedral microstructure determines, with no free parameters, all coefficients of a covariant scalar EFT through a closed ultraviolet-to-infrared chain. The theory recovers Newtonian gravity, fixes the galactic acceleration scale, produces a specific radial-acceleration law, and retains no gravitational slip together with standard post-Newtonian values. Newton's constant is independently derived through two routes yielding the same scale. Extensions cover causal transport, a cosmological sector addressing the Hubble tension, a uniquely determined strong-field completion, and a Many-Pasts interpretation recovering Born-rule quantum mechanics with an entropic arrow of time.


Full Text

Entropic Scalar EFT: A Microphysical Entanglement Theory of Gravity, Dynamics, and Cosmological Structure

Jacob Chinitz

April 12, 2026

Abstract

We propose that the vacuum carries a finite local entanglement capacity and that what we call matter consists of localized defects of that same substrate. Inertial mass is the entanglement content of a defect, gravity is the long-wavelength restructuring of the surrounding capacity, and the galactic phenomenology usually attributed to dark matter is the extended reach of that restructuring rather than an additional particulate component. Because the substrate has a finite maximum update rate, exact spatial isotropy, and no external background manifold, its continuum description is necessarily Lorentzian and generally covariant. General relativity is therefore not assumed as an independent starting point; it emerges as the low-energy capacity geometry of the substrate itself. The main technical result is a closed static weak-field derivation with no free parameters. A minimal tetrahedral boundary microstructure determines, through admissibility closure, edge transport, finite loop dressing, and continuum matching, all coefficients of a covariant scalar EFT. That EFT recovers Newtonian gravity, fixes the galactic acceleration scale a0, produces a specific radial-acceleration law rather than assuming one, and retains no gravitational slip together with the standard post-Newtonian values at the order treated. Newton’s constant G is independently derived through two routes—the matched EFT dictionary and a standalone electron-anchor reduction—both yielding the same gravitational scale. What is new is both the interpretation of gravity and the degree of closure: a finite ultraviolet counting problem is carried through to a predictive weak-field theory in which the metric sector, the scalar coefficients, and the observable outputs are all determined by the same substrate. The broader framework extends to time-dependent transport, a trace-coupled cosmological sector, a bounded-occupancy nonlinear completion, and a Many-Pasts interpretation that recovers standard Born-rule quantum mechanics operationally. Those sectors are less complete than the static weak-field chain, which remains the central result.

1. Introduction: The Physical Claim
2. Canonical Field Content and Definitions
3. The Three Postulates
  3.1 Information–Geometry Equivalence
  3.2 Mass–Entropy Equivalence
  3.3 Many-Pasts Hypothesis
4. Relativistic Continuum Structure
  4.1 Capacity budget and continuum symmetry
  4.2 Dependency Map of the Theory
5. Why a Tetrahedral Boundary Ensemble
6. Admissibility Closure
  6.1 Minimal isotropic kernel
  6.2 Closure condition and uniqueness
  6.3 Effective sharing entropy
7. Edge Kernel and Tree-Level Coupling
8. Finite-Loop Renormalization
9. Continuum Stiffness and SI Normalization
10. Covariant Action
11. Field Equations and Bridge Law
12. Newtonian Gravity and the Point-Source Limit
13. Electron Anchor and the Mass–Entropy Relation
14. Galactic Dynamics
15. Lensing, PPN, and Weak-Field Consistency
16. Why Dynamics Requires Extension Beyond the Static Branch
17. Causal Transport and Telegrapher Dynamics
18. Cosmology and the Hubble-Tension Sector
19. Why These Sectors Belong
20. Strong-Field Branch and Bounded Occupancy
21. Many-Pasts: Operational Reduction and Arrow of Time
Table 1
22. Candidate Microstructure Hamiltonian and Underlying Dynamics
23. Closure-Status Table
24. Falsifiability and Observational Tests
  24.1 Static weak-field falsifiers
  24.2 Dynamical falsifiers
  24.3 Cosmological falsifiers
  24.4 Correlated-constant falsifiers
  24.5 Many-Pasts status
25. What the Theory Would Have to Get Wrong to Fail
26. Comparison with Other Approaches
  26.1 Relative to ΛCDM
  26.2 Relative to MOND-like interpolation programs
  26.3 Relative to Verlinde-style emergent gravity
  26.4 Relative to TeVeS and other multi-field modified gravities
27. Conclusion

Part I. Physical Idea and Canonical Definitions

1. Introduction: The Physical Claim

The central hypothesis is that the vacuum’s local entanglement capacity is a real dynamical resource and that the localized objects we call matter are defects of that same resource rather than independent agents acting on it from outside. In this framework inertial mass is the entanglement content of a localized defect read through κm, gravity is the long-wavelength field carried by the surrounding restructuring of vacuum capacity around that defect, and what is usually modeled by particulate dark matter is the long-range reach of that same restructuring. The weak-field manifestation of that medium is a scalar EFT written in terms of a vacuum-relative entanglement field Sent(x) and its deficit relative to the background capacity. At continuum scale the defect sector is written in ordinary stress-energy variables, but its ontology is unchanged: it is still the coarse description of localized entanglement defects rather than a separate substance.

This is meant as a genuine replacement proposal for part of the usual dark-sector story, not simply as a new vocabulary laid on top of it. In the standard picture, one keeps visible matter and Einstein gravity, then introduces additional dark components to account for the missing gravitational response. Here the alternative hypothesis is that the vacuum already carries a finite entanglement-capacity structure, and that what we call matter is a localized defect of that structure. The same medium is then asked to explain ordinary weak-field gravity, the galactic excess usually attributed to dark matter, and the homogeneous mode relevant in cosmology.

The proposal is also meant to reach deeper than an ordinary scalar extension of Einstein gravity. If the underlying substrate has one finite update budget, an exact isotropic local structure, and no external background manifold, then the continuum description should already be Lorentzian and covariant before any further phenomenology is added. In that reading, GR is not an external geometric stage to which the entanglement field is later appended. It is the low-energy capacity geometry of the same substrate. The scalar sector then keeps track of how that geometry is depleted and redistributed by localized defects.

The derivation proceeds from microstructure to observables. After introducing the field content, postulates, and normalization conventions, the static weak-field coefficient chain is derived from a minimal tetrahedral boundary ensemble with admissibility closure, edge coupling, and finite loop dressing. The resulting EFT then recovers Newtonian gravity, the galactic acceleration scale, the radial acceleration relation, lensing consistency, and the leading weak-field post-Newtonian structure without per-system tuning. Time-dependent transport, cosmology, strong-field completion, and the Many-Pasts sector are taken up afterward as extensions of the same framework, though not all of those sectors are developed to the same degree of closure.

The central claim is not merely that entanglement-inspired effects can imitate aspects of gravity, but that a single closure program can be carried from

microstructure → coefficient chain → continuum EFT → observables.

That chain is the primary claim.

The logical order is simple. Part I says what the theory is about, fixes the variables, and explains why the continuum description should already be relativistic and covariant if the substrate picture is correct. Part II asks whether a minimal UV boundary structure can actually determine the coefficients that later appear in that continuum theory. Part III asks whether those coefficients reproduce ordinary gravity and the galactic weak-field phenomenology. Only after that chain is visible do the later parts discuss time dependence, cosmology, strong-field completion, and quantum-foundational interpretation.

The physical hypothesis is global, but the most complete derivational closure is the static weak-field UV-to-EFT chain. Other sectors are developed as controlled consequences or structured frontier extensions.

2. Canonical Field Content and Definitions

We define the fundamental continuum variable as the vacuum-relative coarse-grained entanglement assigned to a UV probe cell of size L∗ centered at x:

Sent(x) ∈ R,

measured in nats and therefore dimensionless. This is not a literal microscopic entropy density at a mathematical point. It is the leading scalar order parameter associated with a vacuum-relative entanglement defect after coarse-graining over a UV cell.

This definition is meant to keep the microscopic and continuum pictures tied together. At continuum level, Sent(x) is the field that can appear in an action and field equation. At the same time, it is not introduced as an arbitrary extra scalar. It is the coarse variable that records how much local entanglement capacity remains available in the underlying medium after averaging over a UV cell.

The asymptotic vacuum-capacity baseline is denoted S∞, and the deficit field is

δS(x) ≡ S∞ − Sent(x).

Positive δS denotes reduced available vacuum entanglement capacity in the neighborhood of a localized defect or defect distribution. It is the extended restructuring field sourced by the defect sector, not an independent medium acted on by matter from outside. For nonlinear work it is useful to define the bounded occupancy fraction

q(x) ≡ Sent(x)/S∞ = 1 − δS(x)/S∞ ∈ [0, 1].

The variables Sent, δS, and q therefore describe the same local physics in three closely related ways: available capacity, missing capacity relative to vacuum, and surviving-capacity fraction. The weak-field theory is most transparent in δS because it talks directly to the Newtonian potential. The nonlinear completion is most transparent in q because boundedness is built in from the start. The operational meanings are:

• q = 1: vacuum capacity fully available in the absence of local defect-induced restructuring;

• 0 < q < 1: partial local capacity reduction around a defect configuration;

• q = 0: complete local exhaustion of available capacity on the physical branch.

The principal coefficients and derived quantities used throughout are:

γ : entanglement-field stiffness, (1)

κ : defect–entropy coupling, (2)

κm(ℓ) : mass-per-entropy map at scale ℓ, (3)

gshare,max = ln(1680), (4)

gshare,eff : admissibility-weighted effective sharing entropy, (5)

Jbare, Jtree_eff, J(ren)_eff : UV edge-kernel couplings, (6)

a0 = cH0/gshare,eff : galactic acceleration scale. (7)

The gravitational potentials are denoted Φ and Ψ, and the canonical weak-field bridge will be written as

Φ/c² = −δS/(2S∞).

These same symbols reappear in the UV closure chain, in the continuum action, and in the phenomenology sections. From this point onward each one keeps the same meaning, so the later derivations can build on a single notation rather than shifting between parallel conventions.

These definitions are fixed canonically and used without further redefinition below.
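As a concrete check of this variable dictionary, the conversions between Sent, δS, and q and the weak-field bridge can be encoded directly. This is a minimal sketch for illustration only: the numerical values of S∞ and Sent below are invented placeholders, not quantities fixed at this point in the derivation.

```python
import math

def to_q(S_ent, S_inf):
    """Surviving-capacity fraction q = Sent / S_inf."""
    return S_ent / S_inf

def to_deficit(S_ent, S_inf):
    """Deficit field delta_S = S_inf - Sent (positive near a defect)."""
    return S_inf - S_ent

def bridge_phi_over_c2(delta_S, S_inf):
    """Canonical weak-field bridge: Phi / c^2 = -delta_S / (2 S_inf)."""
    return -delta_S / (2.0 * S_inf)

# Placeholder numbers, purely illustrative.
S_inf = 100.0   # background capacity baseline (nats), hypothetical
S_ent = 99.2    # local coarse-grained entanglement (nats), hypothetical

dS = to_deficit(S_ent, S_inf)          # 0.8 nats of missing capacity
q = to_q(S_ent, S_inf)                 # surviving fraction, close to 1
phi = bridge_phi_over_c2(dS, S_inf)    # negative: an attractive potential well

assert 0.0 <= q <= 1.0
assert phi < 0.0                       # positive deficit lowers the potential
assert math.isclose(q, 1.0 - dS / S_inf)
```

The sign conventions match the text: a positive deficit δS around a defect produces a negative Φ, i.e. an attractive well.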

3. The Three Postulates

3.1 Information–Geometry Equivalence

The first postulate states that vacuum-relative entanglement structure contributes to spacetime curvature on equal footing with ordinary stress-energy. In the EFT this means that the scalar field Sent(x) enters a covariant action, contributes its own stress-energy, and couples to a trace-equivalent defect source. At continuum scale that source is written in the usual stress-energy variables, but ontologically it is the coarse description of the localized defect sector. In weak field, metric response is governed not by absolute entropy but by the deficit relative to the vacuum-capacity baseline.

The role of Postulate I is to say what gravity is sensitive to. Einstein gravity already tells us that geometry responds to physical content. The present extension says that local entanglement- capacity structure is part of that content. Once that is accepted, gradients and deficits of the entanglement field are no longer metaphorical; they belong in the gravitational bookkeeping alongside the usual stress-energy variables.

3.2 Mass–Entropy Equivalence

The second postulate identifies inertial mass with the entanglement content of a localized defect. At scale ℓ, m(ℓ) = κm(ℓ) ∆S.

For elementary fermionic sectors the canonical defect increment is

∆Sf = ln 2,

because a spin-1/2 fermionic face exclusion creates a binary occupied/unoccupied defect of the local network and therefore carries exactly one bit of missing entanglement. This provides the cleanest anchor for the mass–entropy map. Mass and entanglement are therefore not two separate substances linked by an empirical proportionality; they are two descriptions of the same localized defect sector at different levels of coarse-graining. For composite sectors, the relevant quantity is the fully dressed bound-state entanglement budget rather than a bare constituent count.

The purpose of this postulate is to remove the temptation to think of matter as external to the medium. In the present ontology, a particle is already a localized defect of the entanglement substrate. Writing m = κm∆S therefore does not assert an analogy between two independent things. It asserts that the inertial content of the defect is the entanglement content of the defect, read in mass units.

3.3 Many-Pasts Hypothesis

The third postulate is part of the full framework, but not every weak-field derivation depends on it directly. In canonical closed form the operational history weight is

P(H|P) ∝ e^(−D(H,P)),

equivalently the branch α = 1, β = 0 of the generalized family. This closed operational branch is fixed because exact Born recovery forces α = 1 and forbidding any extra signaling-sensitive operational bias channel forces β = 0. Its consequences are developed later as part of the theory’s interpretive and cosmological completion sector.

It is worth saying explicitly why this postulate remains in the theory even though the weak-field gravity chain does not need it at every step. Postulates I and II define the gravitational ontology directly. Postulate III belongs to the broader framework because the same entanglement substrate is also being asked to support an account of branch realization and temporal asymmetry. It is therefore part of the total theory, but it enters the derivational order later.

The three postulates define the ontology of the theory. The main text treats them as theory- defining inputs, not as derived outputs.

4. Relativistic Continuum Structure

4.1 Capacity budget and continuum symmetry

In the present framework the continuum description is expected to be covariant not because a geometric axiom is added at the outset, but because the substrate itself is finite-capacity, isotropic, and relational.

The first ingredient is a finite maximal update rate, denoted by the same constant c that later appears in the transport relation D/τ0 = c². In the present interpretation, c measures the largest rate at which the substrate can propagate and reorganize information. A defect at rest spends that budget entirely on local temporal evolution. A defect in motion must spend part of the same budget on spatial restructuring of the surrounding network. Because the substrate is isotropic, the cost of motion can depend only on the rotational scalar v² at leading order, and the boundary conditions are fixed: the temporal rate is maximal at v = 0 and vanishes when the full budget is exhausted at v = c. The surviving temporal fraction is therefore

dτ/dt = √(1 − v²/c²).

In this sense special-relativistic time dilation is read here as a capacity-budget relation rather than as an independent postulate about flat spacetime. Once a finite invariant speed, vacuum homogeneity, and exact isotropy are in place, the Lorentz transformation law follows as the corresponding inertial symmetry rather than the Galilean one.
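The capacity-budget reading of the relation just derived is easy to state numerically. The sketch below simply evaluates the surviving temporal fraction dτ/dt = √(1 − v²/c²): maximal at rest, vanishing when motion exhausts the full budget at v = c.

```python
import math

C = 299_792_458.0  # maximal update rate, identified with the speed of light (m/s)

def temporal_fraction(v):
    """Fraction of the local update budget left for temporal evolution
    after spending part of it on spatial transport at speed v:
    dtau/dt = sqrt(1 - v^2 / c^2)."""
    if not 0.0 <= v <= C:
        raise ValueError("speed must lie within the capacity budget [0, c]")
    return math.sqrt(1.0 - (v / C) ** 2)

assert temporal_fraction(0.0) == 1.0             # all budget spent on time
assert temporal_fraction(C) == 0.0               # budget exhausted by motion
assert math.isclose(temporal_fraction(0.6 * C), 0.8)  # familiar 3-4-5 case
```

The monotone trade-off between the spatial and temporal shares of the budget is the entire content of the relation at this order.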

The same capacity language also unifies motion-induced and gravity-induced clock slowing. In the nonlinear branch the surviving-capacity fraction is

q = Sent/S∞,

so smaller q means that less local update capacity remains available. Motion reduces the temporal share of the budget by consuming part of it in spatial transport; a nearby defect reduces the local budget by depleting available capacity. The two familiar time-dilation effects are therefore interpreted as two regimes of one mechanism.

The second ingredient is the relational character of the substrate. It is not embedded in a prior physical manifold whose coordinate labels carry independent meaning. The physical content is the pattern of local capacities, defects, and neighborhood relations within the network itself. Continuum coordinates are therefore descriptive labels imposed on that relational structure, not additional physical data. Smooth changes of coordinates relabel the same underlying configuration rather than altering the physics. In continuum language this is precisely why the low-energy description should be written in generally covariant form.

The upshot is that the metric sector of the EFT is not being introduced from outside. Lorentzian geometry is the natural coarse description of a finite-capacity, isotropic, relational substrate, and the Einstein sector is its lowest-order continuum gravitational expression. The entanglement scalar then tracks how that same capacity geometry is redistributed by localized defects. The resulting low-energy theory can therefore be written in the usual covariant language, but the intended logic runs from substrate properties to geometry, not the other way around. As with any discrete substrate, this is a continuum statement: exact Lorentz and diffeomorphism symmetry belong to the coarse theory, while lattice-scale corrections may survive near the UV cutoff.

4.2 Dependency Map of the Theory

The logical flow of the theory can be summarized compactly as

Postulates →UV boundary ensemble →admissibility closure

→edge kernel →finite renormalization →continuum matching →weak-field EFT

→static observables →transport / cosmology /

strong field / Many-Pasts.

This is a dependency graph, not an epistemic-equality graph. The static weak-field sector, the UV coefficient chain that feeds it, and the operational Born-recovery branch are more tightly closed than the cosmological or strong-field sectors. Part VI makes that difference explicit in a closure-status table.

The remainder of the argument follows this order so that each later result can build on the same coefficient choices and the same field dictionary.

Part II. UV Coefficient Chain

5. Why a Tetrahedral Boundary Ensemble

The microstructural problem is to identify a minimal discrete boundary-cell architecture capable of supporting finite channel entropy, isotropic closure data, and a continuum scalar response. The canonical choice adopted here is a tetrahedral cell with four structural ingredients:

• a tetrahedral volumetric cell;

• half-integer fermionic face data on each face;

• injective face assignment;

• binary orientation/parity.

This package is not presented as the only imaginable UV completion of emergent gravity. It is presented as the minimal architecture currently known to us that supports the needed closure properties. The tetrahedron is the minimal volumetric simplex in d = 3, injectivity preserves independent boundary information across the four faces, and parity doubling captures the two orientations of the cell. The face-state multiplicity is then not chosen from a menu. Postulate II identifies elementary defects as fermionic, so each face carries half-integer base spin

j0 = 1/2, 3/2, 5/2, . . .

Two cells sharing a face therefore generate the effective boundary sector

j0 ⊗ j0 = 0 ⊕ 1 ⊕ · · · ⊕ 2j0.

Postulate I selects the maximum-capacity boundary channel, so the effective face label is the top channel jeff = 2j0, with

|M| = 2jeff + 1 = 4j0 + 1

distinguishable face states. Injectivity across four tetrahedral faces requires at least four distinct labels, so |M| ≥ 4, i.e. 4j0 + 1 ≥ 4.

The only half-integer option below j0 = 3/2 is j0 = 1/2, which gives jeff = 1 and |M| = 3, so it fails the injectivity condition. The first fermionic choice that works is therefore

j0 = 3/2, jeff = 3, |M| = 7.

In that sense the seven-state face sector is derived from fermionic face data, maximum-capacity channel selection, tetrahedral injectivity, and minimality; it is not selected because it later happens to fit G or a0. The same face-level structure is also where the elementary matter sector enters: fermionic face exclusion creates the binary one-bit defect increment ∆Sf = ln 2 used later in the electron anchor.

The resulting combinatorial state count is

Ωtet = 2 × P(7, 4) = 2 × 840 = 1680,

so the combinatorial sharing ceiling is

gshare,max = ln(1680) ≈ 7.427.

The exact K2 spectrum and multiplicities are carried in the appendices. The essential physical point is that the UV theory begins with a finite microscopic counting problem rather than a free continuum ansatz.
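The counting above is small enough to verify end to end. The sketch below reproduces the injectivity selection of j0 = 3/2, the state count Ωtet = 2 × P(7, 4) = 1680, and the sharing ceiling ln(1680); it uses only the combinatorial rules stated in this section.

```python
import math
from fractions import Fraction

def face_states(j0):
    """|M| = 2*j_eff + 1 with j_eff = 2*j0 (top channel of j0 x j0)."""
    return int(4 * j0 + 1)

# Smallest half-integer spin whose face sector admits an injective
# assignment of four distinct labels (|M| >= 4).
half_integers = [Fraction(n, 2) for n in (1, 3, 5, 7)]
j0 = next(j for j in half_integers if face_states(j) >= 4)
assert j0 == Fraction(3, 2) and face_states(j0) == 7   # j0 = 1/2 gives only 3

# Two parities times ordered injective assignments of 7 labels to 4 faces.
omega_tet = 2 * math.perm(7, 4)
assert omega_tet == 2 * 840 == 1680

g_share_max = math.log(omega_tet)   # combinatorial sharing ceiling, in nats
assert abs(g_share_max - 7.427) < 1e-3
```

Nothing here depends on the admissibility weighting; it is the raw ceiling that the closure of Section 6 then reduces to gshare,eff.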

This fixes the minimal structural package used in the ultraviolet construction; the supporting derivations are collected in the appendices.

6. Admissibility Closure

6.1 Minimal isotropic kernel

The UV boundary ensemble is not used with a flat weighting. The admissibility family is

pη(b) ∝e−ηK2(b),

where K2 is the unique leading quadratic closure-defect scalar compatible with tetrahedral symmetry. This choice is not made because it “works” phenomenologically. It is the minimal isotropic maximum-entropy kernel under normalization and fixed quadratic closure moment. Higher invariants such as K4 would correspond to additional UV information and therefore to subleading refinements rather than competing leading kernels.

The reason for introducing this weighting is that the raw combinatorial ensemble is too permissive to be the whole UV story. Some boundary configurations are closer to the regular closure pattern expected of a smooth medium, while others are more distorted. The scalar K2 is the minimal rotationally invariant way to measure that distortion. The admissibility kernel therefore says, in the mildest possible form, that more badly closed configurations should contribute less to the effective coarse ensemble.

6.2 Closure condition and uniqueness

The canonical closure condition is

⟨K2⟩η = 3/(2η).

This is the self-consistency condition for the admissibility kernel itself. The parameter η sets how strongly the weighting suppresses badly closed configurations, so the ensemble generated by that weighting must in turn exhibit the fluctuation scale that η presumes. In that sense the equation is the entropic analogue of a mean-field fixed-point condition: η is not chosen externally, but fixed by requiring the admissibility kernel to be consistent with its own induced closure fluctuations.
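The fixed-point condition can be made concrete with a small solver. This is a sketch under stated assumptions: the spectrum below is an invented toy stand-in (the exact K2 spectrum and multiplicities live in the appendices), so the resulting η∗ is illustrative only and will not reproduce the canonical value quoted next.

```python
import math

def mean_K2(eta, spectrum):
    """Ensemble average <K2>_eta under p_eta(b) ∝ exp(-eta * K2(b)),
    for a discrete spectrum given as (K2 value, multiplicity) pairs."""
    weights = [g * math.exp(-eta * k2) for k2, g in spectrum]
    total = sum(weights)
    return sum(w * k2 for (k2, _), w in zip(spectrum, weights)) / total

def solve_closure(spectrum, lo=0.01, hi=0.2, tol=1e-12):
    """Bisection for the self-consistency condition <K2>_eta = 3/(2*eta).
    The bracket [lo, hi] must straddle a sign change of the residual."""
    f = lambda eta: mean_K2(eta, spectrum) - 1.5 / eta
    assert f(lo) < 0.0 < f(hi), "bracket does not straddle a fixed point"
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) >= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Toy (K2 value, multiplicity) pairs standing in for the exact spectrum.
toy = [(0.0, 1), (10.0, 40), (40.0, 200), (90.0, 600)]
eta_star = solve_closure(toy)
assert abs(mean_K2(eta_star, toy) - 1.5 / eta_star) < 1e-6
```

The same bisection applied to the exact discrete spectrum is what singles out the unique η∗ reported below; on a toy spectrum the equation need not even have a unique root, which is why the bracket is asserted explicitly.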

On the exact discrete spectrum this equation has a unique solution,

η∗ = 0.0298668443935.

The closed branch is locally stiff: small fractional changes in η produce only small fractional changes in the downstream effective sharing entropy.

6.3 Effective sharing entropy

The admissibility-weighted effective sharing entropy is

gshare,eff = 7.41980002357.

The distinction between gshare,max and gshare,eff is therefore not ad hoc loss inserted by hand. It is the difference between the raw combinatorial ceiling and the admissibility-closed effective boundary entropy that actually propagates into observable couplings.

This distinction is one of the conceptual pivots of the framework. The later continuum theory does not inherit the naive channel-counting ceiling; it inherits the portion of that channel space that remains after closure is imposed. That is why the downstream couplings should be read as consequences of admissibility-closed sharing rather than of raw combinatorics alone.

At this stage the effective sharing entropy is no longer a free choice. The exact spectrum, multiplicities, uniqueness proof, and stiffness numerics are preserved in the appendices for reference.

7. Edge Kernel and Tree-Level Coupling

The same UV closure data fix the tree-level edge kernel. The geometric bridge is the tetrahedral identity

∑_{i=1}^{4} n̂i n̂iᵀ = (4/3) 1,

which implies a channel-averaged transverse fraction of 2/3. This gives the bare edge smoothness coupling

Jbare = 2η∗/3.

The interpretation of Jbare is straightforward: it is the cost assigned to mismatch between the occupancy variables of two neighboring coarse cells. If adjacent cells disagree strongly, the edge pays a larger penalty; if they agree, the penalty is small. The factor 2/3 is the geometric fraction that survives after averaging the tetrahedral channels into the isotropic continuum limit.
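The tetrahedral identity behind the 2/3 factor can be checked directly with the four body-diagonal unit vectors of a cube, which form a regular tetrahedron. The sketch below verifies both the identity and the transverse fraction in pure Python.

```python
import math

# Four unit normals of a regular tetrahedron (normalized cube diagonals).
s = 1.0 / math.sqrt(3.0)
normals = [(s, s, s), (s, -s, -s), (-s, s, -s), (-s, -s, s)]

# Sum of outer products: sum_i n_i n_i^T should equal (4/3) * identity.
M = [[sum(n[a] * n[b] for n in normals) for b in range(3)] for a in range(3)]
for a in range(3):
    for b in range(3):
        expected = 4.0 / 3.0 if a == b else 0.0
        assert math.isclose(M[a][b], expected, abs_tol=1e-12)

# Channel-averaged longitudinal fraction along any direction e is
# (1/4) * sum_i (n_i . e)^2 = 1/3, so the transverse fraction is 2/3.
e = (0.0, 0.0, 1.0)
longitudinal = sum(sum(n[k] * e[k] for k in range(3)) ** 2 for n in normals) / 4
assert math.isclose(longitudinal, 1.0 / 3.0)
assert math.isclose(1.0 - longitudinal, 2.0 / 3.0)
```

The isotropy of the summed outer products is exactly what makes the discrete-to-continuum bridge controlled: any probe direction sees the same 1/3 longitudinal, 2/3 transverse split.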

For a z = 4 regular coarse adjacency graph, the tree-to-lattice reduction then yields

Jtree_eff = Jbare/3 = 2η∗/9.

The division by 3 comes from the branching geometry of the rooted z = 4 graph. One neighboring link points back toward the source, while the remaining z − 1 = 3 links carry the forward transport into the tree. Thus Jtree_eff is not simply the microscopic edge penalty itself, but the part of that penalty that survives as net long-range transport after the local branching structure is taken into account.

The same chain also fixes the horizon-normalization target

σ∗ = π/gshare,eff,

and the rooted shell observable converges to that target rapidly enough that the nonlocal correction is already strongly constrained by small shell depth.

At this point the story is no longer just one of state counting. The edge kernel measures how costly it is for neighboring coarse cells to disagree in local occupancy. The tetrahedral identity is what makes this bridge controlled: it is the statement that the four discrete channel directions average to the correct isotropic tensor structure in the continuum limit.

Tree-level edge transport is fixed at this point by the same microscopic data that fixed the admissibility closure. The shell hierarchy and phase-selection checks are preserved in Appendix C.

8. Finite-Loop Renormalization

Tree level is not the whole UV story. The full lattice admits local closed-return motifs that recycle part of the transmitted information before it contributes to net coarse transport. The leading correction is organized as a local Dyson self-energy dressing,

J(ren)_eff = Jtree_eff / (1 + Jtree_eff Σret).

The need for this step is physically straightforward. A purely tree-like transmission rule would let the relevant amplitude move outward once and never locally return. A real coarse graph is not that simple. Some of the transmitted information cycles back through short closed motifs before contributing to long-distance transport. The renormalized coupling is therefore the true stiffness felt by the coarse field after these local returns have been resummed.

The structural decomposition is the key result. There are seven sector-diagonal local returns, together with one permutation-symmetric shared closure-singlet. The singlet is weighted by the same transverse projection and branch dilution factors that define the tree edge map, 2/9, so the leading local self-energy is

Σret = 7 + 2/9 = 65/9.

Information sent along an edge need not simply move outward once and for all. Some of it can circulate through short local loops before contributing to coarse transport. The seven sector-diagonal returns are the seven face-label channels that return independently without mixing. In addition there is one collective mode, symmetric across channels, that returns as a shared closure-singlet rather than as a channel-specific loop. The self-energy is therefore not a generic loop number but a sum of seven independent return channels plus one shared mode weighted by the same projection and branching factors already present in the tree map. Hence

c(ren)_loop ≡ J(ren)_eff / Jtree_eff = 1/(1 + Jtree_eff Σret) ≈ 0.95426,

and J(ren)_eff ≈ 0.00633348.

This reproduces the shell-target crossing near Jbare,cross ∼0.019 at the 0.05% level.

What matters here is that the loop correction is no longer schematic. The finite renormalization is written as an explicit local self-energy, and the remaining open question is not whether such a correction exists but whether the same number can be recovered from a completely motif-level derivation with no residual compression.
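The arithmetic of the dressing chain is compact enough to re-check end to end from the closed-branch value of η∗. The sketch below reproduces the quoted numbers; the tolerances simply reflect the rounding used in the text.

```python
import math

eta_star = 0.0298668443935          # closure fixed point (Section 6.2)

j_bare = 2.0 * eta_star / 3.0       # bare edge coupling, (2/3) * eta*
j_tree = j_bare / 3.0               # rooted z=4 branching: J_bare/3 = 2*eta*/9
sigma_ret = 7.0 + 2.0 / 9.0         # seven diagonal returns + one 2/9 singlet
assert math.isclose(sigma_ret, 65.0 / 9.0)

c_loop = 1.0 / (1.0 + j_tree * sigma_ret)   # Dyson dressing factor
j_ren = j_tree * c_loop                     # renormalized effective coupling

assert abs(c_loop - 0.95426) < 5e-6
assert abs(j_ren - 0.00633348) < 5e-8
```

The dressing is a five percent effect on the tree coupling, consistent with the loop correction being finite and subleading rather than a resummation of a divergence.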

9. Continuum Stiffness and SI Normalization

The last UV step is not a thermodynamic one. The lattice quadratic form is interpreted as a Euclidean action weight,

IE/ℏ = (J(ren)_eff/2) ∑_{⟨ab⟩} (Qa − Qb)²,

over a microscopic four-cell of size

∆V4 = L∗⁴/c.

Up to this point the derivation has determined a dimensionless lattice weighting. The continuum EFT, however, needs a dimensionful coefficient multiplying derivatives of a field in spacetime. The Euclidean-action interpretation is what upgrades the lattice closure data into a continuum action density with the right units and the right covariant target.

The same tetrahedral identity used in the edge-kernel reduction then yields the continuum coefficient for the occupancy field Qocc,

γQ = (4ℏc/3L∗²) J(ren)_eff.

The field normalization is fixed by horizon capacity:

S = πQocc.

Therefore the canonical EFT coefficient in the (γ/2)(∂S)² convention is

γ = (4ℏc/3π²L∗²) J(ren)_eff.

If the canonical UV cell is identified with the Planck cell, L∗ = LP, this can be written as

γ = (4J(ren)_eff/3π²) c⁴/G.

Within the Euclidean-action convention already assumed by the EFT, the SI-normalized weak-field stiffness coefficient is fixed rather than left schematic.

At a high level, Part II has now completed the micro-to-continuum coefficient story. The tetrahedral ensemble determines the effective sharing entropy, the edge kernel and loop dressing turn that entropy data into a discrete stiffness, and the Euclidean matching turns the discrete stiffness into the continuum coefficient γ that appears in the weak-field EFT.
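Identifying the canonical UV cell with the Planck cell makes the final coefficient numerical. The sketch below checks the identity ℏc/LP² = c⁴/G and evaluates γ with standard CODATA-scale constants; the J(ren)_eff value is the one quoted in Section 8, and the result is illustrative of the normalization rather than a new prediction.

```python
import math

hbar = 1.054571817e-34   # J s
c = 299_792_458.0        # m / s
G = 6.67430e-11          # m^3 / (kg s^2)

L_P = math.sqrt(hbar * G / c**3)   # Planck length, ~1.6e-35 m

# Identity used in the text: hbar * c / L_P^2 == c^4 / G.
assert math.isclose(hbar * c / L_P**2, c**4 / G, rel_tol=1e-12)

j_ren = 0.00633348       # renormalized edge coupling (Section 8)
gamma = 4.0 * hbar * c / (3.0 * math.pi**2 * L_P**2) * j_ren
gamma_alt = 4.0 * j_ren / (3.0 * math.pi**2) * c**4 / G

# The two forms of gamma agree, and the dimensionless prefactor
# 4 * J / (3 * pi^2) is of order 8.6e-4.
assert math.isclose(gamma, gamma_alt, rel_tol=1e-12)
assert math.isclose(4.0 * j_ren / (3.0 * math.pi**2), 8.556e-4, rel_tol=1e-3)
```

The point of the check is that once L∗ is pinned to LP, γ carries no residual normalization freedom: it is c⁴/G times a number fixed entirely by the UV chain.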

Closed UV-to-IR chain. The UV coefficient chain can now be summarized as

{Ω_tet, K₂, η∗, g_share,eff, J_bare, J_eff^tree, Σ_ret, J_eff^(ren), γ} ⟶ {κ, G, a0, g_obs(g_bar)}.

The first bracket is the micro-to-continuum closure chain; the second bracket collects the weak-field observables it feeds. No later section re-derives this chain in full.

With that normalization in place, the stiffness chain runs from the discrete ensemble to the weak-field EFT without an extra coefficient choice. The remaining microscopic question is independent confirmation of the same action-kernel interpretation from fuller inhomogeneous dynamics, not an unresolved normalization constant.

Part III. Weak-Field EFT and Static Phenomenology

10. Covariant Action

With that continuum symmetry structure in place, the canonical weak-field EFT takes the covariant form

I = ∫ d⁴x √−g [ c⁴/(16πG) R − (γ/2) g^μν ∂_μS_ent ∂_νS_ent − λ S_ent − κ χ S_ent ],

with χ(x) ≡ −T^μ_μ / c².

The action should be read as the simplest weak-field continuum realization of the ontology already stated in Part I and the coefficient chain already derived in Part II. The metric sector remains the familiar Einstein one at low energy, but it is now interpreted as the continuum capacity geometry of the substrate rather than as an independent starting theory. It is coupled to a scalar field that tracks available entanglement capacity and to a source channel written in ordinary stress-energy notation while still being interpreted microscopically as the defect sector of the same medium.

At the EFT level χ is written in ordinary stress-energy language, but ontologically it is the coarse trace channel of the localized defect sector. Here γ is the continuum stiffness fixed by the UV chain, while κ is the defect–entropy coupling fixed by the canonical source map,

κ = Ξ_ρ L∗² κ_m(L∗),

and λ controls the background branch. The local UV insertion factor feeding this source map is no longer left fully schematic: Appendix C now fixes the rigid defect amplitude through the exact defect cost ΔS_def = ln(7/6) and the tetrahedral on-site Green constant G_tet(0) = 0.448220394 . . .. What remains in Ξ_ρ is not merely one geometric prefactor, but the full UV–IR source normalization that converts local defect entropy per cell into the continuum mass-density sourcing of the EFT, including the hierarchy built into the vacuum-capacity normalization. Local weak-field dynamics are studied in the renormalized branch

λ_ren ≡ λ + γ□S_bg = 0,

so that local perturbations are sourced only by the defect sector, written at continuum level in ordinary matter variables.

The Einstein–Hilbert coefficient is written here in the already-matched Einstein normalization of the weak-field EFT. The nontrivial claim is not that G is inserted as an independent extra input, but that the entanglement-side dictionary reproduces the same coefficient through

G = c²κ / (8πγS∞).

In other words, the action is presented in the observational normalization of the metric sector, and the closure program shows that the entanglement sector matches that normalization rather than introducing a separate free gravitational constant. This is the simplest covariant realization of the closure chain: one metric, one scalar entanglement field, one trace-equivalent defect-source channel, and one renormalized background branch. The standard Einstein normalization is used here because it is the empirical weak-field normalization of the continuum geometry, while the entanglement-side derivation shows how that same normalization is recovered from the substrate dictionary rather than posited as an unrelated second input.

The logical order matters. The action is not meant to suggest that every term has been guessed independently from phenomenology. Rather, once this weak-field covariant form is accepted as the correct low-energy language of the substrate, the earlier UV closure chain fixes the entanglement-side coefficients that appear in it.

This fixes the canonical weak-field action.

11. Field Equations and Bridge Law

Varying the action with respect to Sent gives the sourced scalar equation

γ□Sent = λ + κχ.

Varying with respect to the metric yields

G_μν = (8πG/c⁴) ( T^(matter)_μν + T^(ent)_μν ),

with the canonical scalar stress-energy induced by the entanglement sector.

These two equations separate the two jobs played by the scalar. The scalar equation tells us how the local entanglement-capacity variable responds to defect sources. The metric equation tells us how that scalar response then contributes back to spacetime curvature. The bridge law below is what turns those two statements into an ordinary weak-field gravitational potential.

The weak-field bridge law is not inserted as an arbitrary interpolation. Under locality, multiplicative redshift composition, additivity of independent deficits, and standard weak-field metric normalization, the unique leading bridge is

Φ/c² = −δS/(2S∞).

In the static weak-field branch the emergent Newton constant is therefore

G = c²κ / (8πγS∞).

This bridge law is where the derivation stops speaking only in the language of entropy variables and starts speaking directly in the language of observable gravity. Without it the theory would remain a scalar model with an entanglement interpretation. With it, the deficit field acquires a unique weak-field normalization in terms of the ordinary gravitational potential.

This bridge closes the canonical weak-field continuum dictionary of the theory.

12. Newtonian Gravity and the Point-Source Limit

In the renormalized static weak-field sector the scalar equation reduces to

∇²δS = −(κ/γ) ρ.

This is the point where the micro-to-macro chain becomes operationally familiar. Once the background is renormalized away and the source is nonrelativistic, the scalar sector obeys an ordinary Poisson equation for the deficit field. The unusual quantity is δS, but the mathematical structure is the same one that underlies standard weak-field gravity.

For a point source M,

δS(r) = κM / (4πγr).

Using the bridge law, Φ/c² = −δS/(2S∞),

the gravitational acceleration becomes

g(r) = (c²κ / (8πγS∞)) · M/r² = GM/r².

Thus Newtonian gravity is recovered as the weak-field response of the entanglement-capacity medium.

Nothing qualitatively exotic has to be inserted at the last step to recover ordinary gravity. The same sourced scalar equation and the same bridge law already imply the familiar point-mass force law. In that sense Newtonian gravity appears here not as a starting axiom but as the first infrared limit of the entanglement medium.
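The point-source chain δS → Φ → g can be traced numerically. The coefficient values below are illustrative assumptions chosen only to satisfy the matched dictionary G = c²κ/(8πγS∞); the check is that the chain then returns GM/r² identically.

```python
import math

c, G = 2.99792458e8, 6.67430e-11

# Hypothetical weak-field coefficients: gamma and S_inf are illustrative
# assumptions, and kappa is then fixed by the matched dictionary
# G = c^2 kappa / (8 pi gamma S_inf).
gamma, S_inf = 1.0, 1.0e60
kappa = 8 * math.pi * gamma * S_inf * G / c**2

M, r = 1.989e30, 1.496e11          # solar mass, 1 AU

dS  = kappa * M / (4 * math.pi * gamma * r)   # point-source deficit
Phi = -c**2 * dS / (2 * S_inf)                # bridge law: Phi/c^2 = -dS/(2 S_inf)
g   = -Phi / r                                # for Phi = -GM/r this is GM/r^2

assert math.isclose(g, G * M / r**2, rel_tol=1e-9)
```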

Interpretation. Ordinary gravity is the small-deficit, weak-curvature limit of the extended entanglement restructuring around localized defects. The Newtonian 1/r² law is therefore emergent, not fundamental.

Once the bridge law, source convention, and UV stiffness are fixed, the Newtonian limit is closed.

13. Electron Anchor and the Mass–Entropy Relation

The mass–entropy relation requires a clean elementary anchor because, in this framework, the elementary matter sector is the localized defect sector itself. For a single fermionic face-exclusion defect the canonical increment is ∆Sf = ln 2.

The physical content is that a single excluded face is a binary occupied/unoccupied topological defect and therefore carries exactly one bit of missing entanglement. At the electron Compton scale ℓ = λ_e the mass–entropy map gives

κ_m(λ_e) = m_e / ln 2.

This is the first step of the anchor logic. If the elementary fermionic defect carries ∆Sf = ln 2, then dividing the electron mass by that fixed entropy increment gives the mass-per-entropy conversion at the electron’s own scale. The electron is the cleanest place to do this because it is the lightest simple fermionic defect and is not obscured by hadronic compositeness.

The UV normalization is

κ_m,UV = ℏ / (c L∗ ln 2),

and the canonical running law is

κ_m(ℓ) = κ_m,UV (L∗/ℓ)^(1+α_cl),   α_cl = 0

in the closed branch. The electron sector supplies a sharp anchor for the mass–entropy map and an independent route into the weak-field closure chain. The next step is then to run that conversion back to the UV scale. The quantity κm,UV is the fundamental mass-per-entropy conversion attached to the cutoff cell itself, and the running law tells us how that conversion appears at longer physical scales. So the logic of the section is: one bit fixes the electron-scale conversion, the running law connects the electron scale back to the UV scale, and the same conversion then feeds the weak-field normalization chain. Appendix D also records the compact companion branch in which G is obtained directly from (ℏ, c, me) together with the local sharing factor and transport exponent, so that the gravitational normalization can be read not only through the matched EFT dictionary but also through a standalone electron-anchor reduction.
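The anchor arithmetic can be checked directly, assuming L∗ = L_P for the cutoff cell (the identification used elsewhere in the text): one bit fixes the electron-scale conversion, and the α_cl = 0 running law carries it back to the UV cell.

```python
import math

hbar = 1.054571817e-34
c    = 2.99792458e8
m_e  = 9.1093837015e-31
G    = 6.67430e-11

lam_e  = hbar / (m_e * c)             # reduced electron Compton wavelength
L_star = math.sqrt(hbar * G / c**3)   # assumption: L* identified with L_P

kappa_m_e  = m_e / math.log(2)                  # electron-scale conversion
kappa_m_UV = hbar / (c * L_star * math.log(2))  # UV conversion

# Running law with alpha_cl = 0: kappa_m(l) = kappa_m_UV * (L*/l),
# which reproduces the electron anchor exactly at l = lam_e.
assert math.isclose(kappa_m_UV * (L_star / lam_e), kappa_m_e, rel_tol=1e-9)
```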

For composite hadrons the claim is different. The relevant quantity is the dressed vacuum-subtracted bound-state entropy,

m_hadron = κ_m(ℓ_H) S^dressed_ent,H,

with the dressed entropy budget generated by confinement, gluonic structure, trace-anomaly dynamics, and chiral vacuum reorganization. A finished lattice derivation of that dressed entropy is not yet available. What is claimed here is structural compatibility between the mass–entropy map and the standard QCD mass budget.

The elementary-fermion anchor is settled in the simple sectors, while the hadronic sector remains structurally compatible but not yet fully coefficient-complete.

14. Galactic Dynamics

The galactic sector is one of the main payoffs of the coefficient chain. The characteristic acceleration scale is

a0 = c H0 g_share,eff / (4π²).

This formula already shows why the galactic phenomenology is not independent of the UV story. The same effective sharing entropy that appears in the closure chain now reappears in the acceleration scale governing departure from the Newtonian branch on galactic outskirts.

The corresponding 1 + 2 channel decomposition separates one longitudinal slot aligned with the baryonic acceleration gradient and two transverse slots carrying the cosmic background scale. This fixes the galactic dimensionless variable as

x = √(g_bar / a0).

This is the right variable because it measures baryonic forcing relative to the intrinsic acceleration scale set by the same closure chain. When gbar ≫ a0, the system should reduce to the ordinary Newtonian branch; when gbar ≪ a0, the response should cross into the low-acceleration completion. The square-root form is the one selected by the canonical 1 + 2 channel structure and is exactly what reproduces the deep-MOND scaling later in the section.

For the massless bosonic entanglement mode, the minimal stationary completion is therefore the Bose–Einstein occupancy branch

1 + n_B(x) = 1 / (1 − e^(−x)).

This bosonic language is not an extra fit ingredient added after the fact. The collective excitation of a scalar entanglement field is itself bosonic, so once the weak-field response is organized as occupancy of a massless scalar mode, Bose–Einstein statistics are the minimal stationary completion. The role of a0 is then to provide the effective scale against which that occupancy is measured, so the galactic law becomes an occupancy statement rather than an empirical interpolation formula chosen by hand.

The resulting radial-acceleration law is

g_obs = g_bar [1 + n_B(x)] = g_bar / (1 − exp(−√(g_bar/a0))).

This has the correct asymptotic limits:

g_bar ≫ a0 ⟹ g_obs ≈ g_bar, (8)

g_bar ≪ a0 ⟹ g_obs ≈ √(a0 g_bar). (9)

The deep-MOND branch therefore gives the baryonic Tully–Fisher law

v⁴ ≈ a0 G M_b.

Once the channel geometry is fixed, the weak-field medium has a minimal bosonic occupancy completion. The interpolation law is not chosen after looking at galaxy data. It is the way the entanglement response fills the available modes when the baryonic source is weak compared with the intrinsic acceleration scale set by the same microstructural chain.
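The occupancy law and its two asymptotic branches can be sketched numerically. The value of a0 used below is the observed order of magnitude, inserted as an assumption rather than computed from g_share,eff.

```python
import math

def g_obs(g_bar, a0):
    """Radial-acceleration law from bosonic occupancy: g_bar * (1 + n_B(x))."""
    x = math.sqrt(g_bar / a0)
    return g_bar / (1.0 - math.exp(-x))

# Assumption: a0 set to the observed order of magnitude (m/s^2); the derived
# value c*H0*g_share_eff/(4 pi^2) depends on g_share_eff fixed upstream.
a0 = 1.2e-10

# Newtonian branch: g_bar >> a0 returns g_obs ~ g_bar.
assert math.isclose(g_obs(1e-6, a0), 1e-6, rel_tol=1e-3)

# Deep-MOND branch: g_bar << a0 returns g_obs ~ sqrt(a0 * g_bar).
assert math.isclose(g_obs(1e-14, a0), math.sqrt(a0 * 1e-14), rel_tol=0.01)
```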

Structurally, the same UV channel geometry that fixes the microscopic coefficient chain also feeds the galactic EFT. There is no separate per-galaxy interpolation function chosen by hand.

The galactic branch is fixed up to the same channel-identification structure already used elsewhere in the weak-field EFT.

15. Lensing, PPN, and Weak-Field Consistency

Because the entanglement sector is scalar, it does not generate anisotropic stress at leading weak-field order. Hence

Φ = Ψ

to the order treated in the present EFT. This means that light bending and dynamical mass estimates are sourced by the same leading metric response. In effective-halo language, the entanglement response can be rewritten as

ρ_halo(r) = (1/(4πGr²)) d/dr [ r² (g_obs − g_bar) ],

which yields the familiar 1/r² outer-halo profile in the asymptotic branch.

This matters because a theory can match galactic rotation curves and still fail lensing if the two metric potentials slip apart. The weak-field branch here avoids that problem at leading order. The same response that governs the dynamics also governs light deflection, so the theory is not buying galactic support at the price of a leading weak-field inconsistency.
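A small finite-difference sketch, under the same illustrative occupancy law and an assumed galaxy-scale mass, confirms that r²ρ_halo(r) flattens to a constant in the outskirts, which is the 1/r² outer-halo profile.

```python
import math

# Assumptions: a0 at the observed order of magnitude, M an illustrative
# galaxy-scale baryonic mass (kg).
G, a0, M = 6.67430e-11, 1.2e-10, 1e41

def g_bar(r):
    return G * M / r**2

def g_tot(r):
    x = math.sqrt(g_bar(r) / a0)
    return g_bar(r) / (1 - math.exp(-x))

def rho_halo(r, h=None):
    """rho = (1/(4 pi G r^2)) d/dr [ r^2 (g_obs - g_bar) ], central difference."""
    h = h or 1e-4 * r
    f = lambda s: s**2 * (g_tot(s) - g_bar(s))
    return (f(r + h) - f(r - h)) / (2 * h) / (4 * math.pi * G * r**2)

# In the deep-MOND outskirts rho_halo ~ sqrt(a0 G M)/(4 pi G) * 1/r^2,
# so r^2 * rho_halo(r) should be nearly constant between two outer radii.
r1, r2 = 1e21, 4e21   # metres, well outside the Newtonian region
assert abs(r1**2 * rho_halo(r1) / (r2**2 * rho_halo(r2)) - 1) < 0.1
```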

The same weak-field structure also returns the GR post-Newtonian values at the order treated:

γ_PPN = β_PPN = 1 + O(Φ²/c⁴).

Thus the leading weak-field EFT does not purchase galactic phenomenology by introducing gravitational slip or obvious solar-system-scale pathologies.

At leading weak-field order this sector is closed. Higher-order precision confrontation remains an audit task rather than an architectural gap.

Part IV. Time-Dependent, Transport, and Cosmological Sectors

16. Why Dynamics Requires Extension Beyond the Static Branch

The static weak-field branch is not the whole theory. If the entanglement-capacity medium is physical, it must admit relaxation, propagation, and causal response to changing sources. The time-dependent sector should therefore not be read as an optional add-on. It is the natural dynamical extension of the same medium that produces the static weak-field EFT.

This section is only a bridge into the dynamical sectors; no independent closure claim is being made here.

17. Causal Transport and Telegrapher Dynamics

The canonical time-dependent completion is the telegrapher equation

τ0 ∂t² δS + ∂t δS = D ∇²δS + A χ(x, t),

with static-matching condition

A/D = κ/γ.

This equation is introduced because a physical medium should not respond instantaneously to changing sources. The static Poisson equation is appropriate when the source has already settled, but once sources evolve in time one needs both propagation and relaxation. The telegrapher form is the minimal causal extension that still reduces to the static branch when time dependence becomes negligible.

Causality requires

D/τ0 = c²,

so the transport sector propagates disturbances at finite speed. In the canonical no-new-IR-scale branch,

τ0⁻¹ = H0,   D = c²/H0.

This transport equation separates two roles that were easy to blur in earlier drafts. Ordinary galactic support still belongs to the near-stationary static branch. The telegrapher sector governs how the same medium propagates, relaxes, and develops lag when sources evolve in time. This choice is therefore not used to generate ordinary static galactic support. It governs transport, lag, relaxation, and merger phenomenology around the near-stationary weak-field branch.

For galactic modes the Appendix E analysis shows that the long relaxation time does not destroy the static limit. Galactic modes lie deep in the underdamped regime, so the static Poisson branch is recovered as the exact time average relevant to ordinary galactic dynamics. The assumption here is that the source is quasi-static on galactic timescales and supported on wavelengths far shorter than the critical scale λc ∼4πc/H0; under those conditions the oscillatory transient averages out instead of competing with the static branch.

This is why the transport sector is not being used to manufacture the ordinary galactic law after the fact. The static branch still does that job. The transport equation is there to describe what happens when the source history is no longer quasi-static: propagation delay, relaxation, and merger-era lag.
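A minimal sketch of one spatial Fourier mode of the telegrapher equation, in illustrative units with D/τ0 = c² = 1, shows the damped relaxation onto the static Poisson value A χ_k/(D k²).

```python
import math

# One spatial Fourier mode of the telegrapher equation,
#   tau0 * s'' + s' = -D k^2 s + A chi_k,
# whose static limit s_inf = A chi_k / (D k^2) is the Poisson-branch value.
# Units and source values here are illustrative assumptions.
tau0, D = 1.0, 1.0            # illustrative units with D / tau0 = c^2 = 1
A, chi_k, k = 1.0, 1.0, 2.0

s_inf = A * chi_k / (D * k**2)

# Semi-implicit Euler integration of the damped-mode ODE.
s, v, dt = 0.0, 0.0, 1e-3
for _ in range(50_000):       # total time t = 50, many relaxation times 2*tau0
    acc = (-v - D * k**2 * s + A * chi_k) / tau0
    v += acc * dt
    s += v * dt

# The mode is underdamped (omega0 = 2, zeta = 0.25) yet settles on s_inf.
assert abs(s - s_inf) < 1e-3 * s_inf
```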

The transport branch is closed at the level of D/τ0 = c2 and the preferred choice τ −1 0 = H0. Detailed merger phenomenology remains frontier.

18. Cosmology and the Hubble-Tension Sector

The cosmological sector should be read as the homogeneous continuation of the same scalar medium, not as an unrelated dark-energy add-on bolted onto the weak-field theory. What changes here is not the ontology but the kinematic regime: the background mode becomes dynamically relevant on horizon scales while the local weak-field branch remains encoded in the inhomogeneous fluctuations.

The cosmological sector uses the same field split,

S(x, t) = S(t) + s(x, t),

where S(t) is the homogeneous mode and s(x, t) the inhomogeneous sector responsible for local weak-field dynamics. The vacuum baseline is fixed by apparent-horizon capacity,

S∞(t) = π R_A(t)² / L∗².

This decomposition is essential. The homogeneous mode and the local weak-field fluctuations are not two unrelated scalar fields. They are two kinematic sectors of the same field. The split lets the background mode change cosmological evolution without automatically rewriting the local weak-field equations that already fixed the galactic phenomenology.

Because the entanglement field couples to the trace of the stress-energy tensor, the homogeneous mode is largely dormant during radiation domination but becomes active near matter–radiation equality. This gives a transient early-energy component of the same general type used in early-dark-energy resolutions of the Hubble tension. In the closed cosmological branch treated here, the effect reduces the sound horizon and shifts the CMB-inferred Hubble constant upward from the high-67 range toward the high-68 to low-69 range.

What matters here is not just the direction of the shift but the timing. A successful Hubble- tension mechanism must turn on near the right epoch, alter the sound horizon in the right direction, and then decouple cleanly enough from the local weak-field sector that the galactic branch is not spoiled. The entanglement medium has exactly that qualitative structure.

The local weak-field predictions are protected by the separation between S(t) and s(x, t). This is the role of the shear-lock logic: changing the homogeneous background mode does not rewrite the local static Poisson branch that governs galactic dynamics and lensing.

The claim is therefore a mechanism with the right direction, timing, and qualitative separation of scales, not a finished precision cosmology package. What is shown is that the trace-coupled homogeneous mode turns on in the relevant epoch and pushes the sound horizon in the required direction; what remains open is the full perturbation propagation and likelihood-level confrontation. The homogeneous mode modifies the background history; the inhomogeneous branch continues to govern the local weak-field observables already fixed earlier in the derivation. That separation is what allows the cosmological extension to remain part of the same scalar medium rather than a re-tuning of the galactic sector.

This is a structurally supported and directionally successful extension, but it is not yet Boltzmann-closed.

Part V. Nonlinear, Interpretive, and Completion Sectors

19. Why These Sectors Belong

The most directly constrained micro-to-weak-field chain is now in hand. The next sections develop three further pieces required for overall framework completeness: nonlinear completion, operational quantum reduction, and candidate underlying dynamics. They belong to the same ontology, but they should not be read as resting on identical evidence.

20. Strong-Field Branch and Bounded Occupancy

The purpose of the strong-field section is not to claim a finished black-hole solution. It is to replace an unspecified breakdown region with the minimal nonlinear completion compatible with the weak-field bridge and the bounded-capacity interpretation of the field.

That goal is modest but important. A weak-field theory that simply says “the approximation fails somewhere near horizons” leaves the ontology incomplete exactly where one would most want to know what the variables mean. The bounded-occupancy completion is meant to supply that missing meaning even though it does not yet solve the full strong-field equations.

The weak-field variable is the deficit δS. For strong field the natural variable is the bounded occupancy fraction

q(x) = S_ent(x)/S∞ ∈ [0, 1].

The canonical nonlinear completion is

N² = q,   g_tt = −q.

This is not introduced as a convenient ansatz. If the static lapse satisfies N² = f(q), then vacuum normalization requires f(1) = 1, horizon normalization requires f(0) = 0, the weak-field bridge requires f′(1) = 1, and multiplicative composition of independent capacity-reduction layers requires f(q₁q₂) = f(q₁)f(q₂).

The continuous solutions are f(q) = q^α, and the weak-field condition forces α = 1. Thus

N² = q

is the unique continuous multiplicative completion compatible with the weak-field bridge.

The force of this uniqueness statement is that it turns the nonlinear completion into a rule rather than a handwave. Once one demands the vacuum limit, the horizon limit, the weak-field derivative match, and multiplicative composition of independent deficit layers, the lapse cannot be chosen freely. The bounded variable points to one minimal completion, not to an arbitrary family.
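The constraint logic can be checked mechanically for the power-law family f(q) = q^α: multiplicative composition holds for any α, while the weak-field derivative match singles out α = 1. This is a numerical sketch of the conditions, not a substitute for the continuity argument that restricts to the power-law family in the first place.

```python
import math, random

def f(q, alpha=1.0):
    """Candidate lapse function N^2 = f(q) from the power-law family q**alpha."""
    return q**alpha

# Multiplicative composition f(q1 q2) = f(q1) f(q2) holds for the power law.
random.seed(0)
for _ in range(100):
    q1, q2 = random.random(), random.random()
    assert math.isclose(f(q1 * q2), f(q1) * f(q2), rel_tol=1e-12)

# Vacuum and horizon normalizations: f(1) = 1 and f(0) = 0.
assert f(1.0) == 1.0 and f(0.0) == 0.0

# Weak-field bridge: f'(1) = alpha, so the derivative match forces alpha = 1.
eps = 1e-7
fp1 = (f(1.0 + eps) - f(1.0 - eps)) / (2 * eps)
assert abs(fp1 - 1.0) < 1e-6

# Any other exponent fails the same condition, e.g. alpha = 2 gives f'(1) = 2.
fp1_alpha2 = (f(1.0 + eps, 2.0) - f(1.0 - eps, 2.0)) / (2 * eps)
assert abs(fp1_alpha2 - 2.0) < 1e-5
```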

In this picture the horizon is the level set q = 0, i.e. complete local exhaustion of available entanglement capacity. The nonlinear completion is therefore not built by adding an independent scalar hair parameter but by slaving the occupancy field directly to the lapse. Full constrained exterior/interior solutions, scalar backreaction, interior regularity, and exact microstate-to-area matching remain frontier questions.

The same bounded variable also sharpens the black-hole reading of the framework. In spherical symmetry the weak-field exterior solution gives

f(r) ≡ δS(r)/S∞ = 2GM/(c²r),

so ordinary compactness is already the weak-field capacity-depletion fraction. The strong-field continuation therefore interprets the horizon kinematically as the first radius at which surviving capacity vanishes,

q(r_h) = 0 ⟺ f(r_h) = 1 ⟺ r_h = 2GM/c².

The strong-field continuation does not present a solved black-hole theory. It shows instead that the weak-field scalar is not abandoned at strong field: it is completed by a bounded variable and a unique multiplicative lapse rule, so the nonlinear regime is at least posed by a concrete prescription rather than left as an unnamed failure zone. The full self-consistent strong-field exterior and interior problem is explicitly deferred.

Bounded occupancy and the horizon criterion give a minimal nonlinear completion rule. The full self-consistent strong-field exterior and interior problem, including scalar backreaction, remains the principal open problem of the strong-field sector.

21. Many-Pasts: Operational Reduction and Arrow of Time

Many-Pasts belongs in the master manuscript because the framework is not only a gravity mechanism. It is also a proposal about branch realization and temporal asymmetry on the same entropic substrate. Operationally, however, it is deliberately conservative.

With

P(H|P) ∝ e^(−D(H,P)),

the Born rule is recovered exactly because

e^(−D(H,P)) = Tr(Π_P ρ_{H→now})

in the projective laboratory limit. Exact Born recovery forces α = 1, and forbidding any extra signaling-sensitive operational bias channel forces β = 0. No-signaling is preserved exactly in this operational branch.
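For a single qubit history state the operational identity can be checked directly: with D(H, P) ≡ −ln Tr(Π_P ρ), the weight e^(−D) is exactly the Born probability of the recorded outcome. This is a minimal sketch with an explicit state, not the general argument.

```python
import math

# Pure qubit history state |psi> = a|0> + b|1>, rho = |psi><psi|,
# with the projective record Pi_P = |0><0|. State amplitudes are
# an arbitrary illustrative choice with |a|^2 + |b|^2 = 1.
a, b = complex(0.6, 0.0), complex(0.0, 0.8)
psi = [a, b]
rho = [[psi[i] * psi[j].conjugate() for j in range(2)] for i in range(2)]

proj0 = [[1, 0], [0, 0]]   # projector onto outcome |0>

# Tr(Pi_P rho) = sum_{i,j} Pi_{ij} rho_{ji}
tr = sum(proj0[i][j] * rho[j][i] for i in range(2) for j in range(2))

D = -math.log(tr.real)     # divergence assigned to this record

# e^{-D} reproduces the Born weight |<0|psi>|^2 = |a|^2 exactly.
assert math.isclose(math.exp(-D), abs(a)**2, rel_tol=1e-9)
```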

The remaining content is interpretive and cosmological. The arrow of time is recovered through conditional typicality: among histories consistent with present macroscopic records, overwhelmingly many exhibit entropy growth toward the future direction defined by those records. This adds no new laboratory probability law; it offers a global consistency account of branch realization and temporal asymmetry.

Operational closure is exact in the laboratory sector; the extra content added here is inter- pretive and cosmological.

22. Candidate Microstructure Hamiltonian and Underlying Dy- namics

The UV closure chain is not meant to float free of possible microscopic dynamics. The candidate realization developed in the appendices is a GFT/condensate picture in which spacetime emerges from a condensate of discrete tetrahedral building blocks, while what is macroscopically read as matter appears as fermionic defects of that same substrate. In Madelung form,

σ(x) = √(n(x)) e^{iθ(x)},

the condensate hydrodynamics generically generate a positive scalar stiffness for the logarithmic-density variable, providing the condensate-side origin of the EFT kinetic term. This does not by itself replace the explicit coefficient closure already carried out in Appendix C, but it shows that the EFT is not a free phenomenological decoration.

This provides a coherent microscopic realization supporting the UV closure chain, but not yet a first-principles inhomogeneous derivation of every continuum term.

Part VI. Closure Status, Falsifiability, and Research Program

23. Closure-Status Table

The closure bookkeeping is concentrated here in one place so the rest of the text can simply derive, state, and move on.

Table 2

| Quantity / Claim | Sector | Status | Type of Support | Where Established |
|---|---|---|---|---|
| Ω_tet, g_share,max | UV counting | Closed | exact combinatorics | Part II, App. B |
| K₂ spectrum and multiplicity closure | | Closed | exact | Part II, App. B |
| η∗ | | Closed | admissibility closure | Part II, App. B |
| g_share,eff | UV entropy | Closed | exact weighted evaluation | Part II, App. B |
| J_bare, J_eff^tree | UV edge kernel | Closed | tetrahedral isotropy identity | Part II, App. C |
| Σ_ret = 65/9 | finite-loop UV | Fixed in the minimal UV return sector | explicit seven-channel plus singlet return count | Part II, App. C |
| J_eff^(ren) | finite-loop UV | Fixed by the UV return resummation | derived from Σ_ret | Part II, App. C |

Table 3

| Quantity / Claim | Sector | Status | Type of Support | Where Established |
|---|---|---|---|---|
| γ | continuum stiffness | Closed in canonical EFT convention | Euclidean-action normalization | Part II, App. C |
| Local defect insertion constant | UV source map | Closed on the canonical lattice branch | exact defect counting + tetrahedral on-site Green function | App. C |
| κ | source coupling | Closed only through the matched weak-field dictionary | UV local insertion fixed; explicit κ/γ estimate; full UV–IR bridge not independently derived | Part III, App. C–D |
| Weak-field bridge law | EFT / gravity | Closed in canonical weak-field branch | uniqueness from multiplicative composition | Part III, App. D |
| G | weak-field gravity | Closed up to anchor / boundary inputs | EFT bridge + UV coefficients | Part III, App. D |
| Electron anchor | mass sector | Closed up to elementary anchor | one-bit fermionic defect branch | Part III, App. D |
| Independent electron-anchor G branch | weak-field normalization | Coherent independent cross-check | closed-form algebraic reduction from (ℏ, c, m_e, g_share,loc, u_tr) | App. D |
| a0 | galactic EFT | Fixed in the closed weak-field realization | UV entropy + cosmic scale | Part III, App. C |
| RAR law | galactic EFT | Fixed in the closed weak-field realization | 1 + 2 channel geometry + bosonic occupancy | Part III, App. C |
| No slip / lensing consistency | weak-field metric | Closed at leading weak-field order | scalar-stress structure | Part III, App. D |
| PPN leading values | weak-field metric | Structurally supported | weak-field expansion | Part III, App. F |
| Telegrapher relation D/τ0 = c² | transport | Closed in canonical transport branch | causal closure | Part IV, App. E |
| Canonical τ0⁻¹ = H0 branch | transport | Fixed in the minimal transport closure | no-new-IR-scale choice | Part IV, App. E |
| Hubble-tension mechanism | cosmology | Structurally supported extension | homogeneous trace-coupled mode | Part IV, App. E |
| Bounded occupancy q, N² = q | strong field | Closed as minimal nonlinear completion | uniqueness of multiplicative completion | Part V, App. F |
| Capacity-saturation horizon criterion | strong field | Structurally supported as a kinematic continuation | weak-field deficit continuation | Part V, App. F |
| Bekenstein–Hawking area-law bridge | strong field | Coherent consistency bridge | horizon-capacity matching | App. F |

Table 4

| Quantity / Claim | Sector | Status | Type of Support | Where Established |
|---|---|---|---|---|
| Strong-field exterior/interior solutions | strong field | Frontier | completion problem | Part V, App. F |
| Many-Pasts Born recovery | quantum foundations | Closed operationally | α = 1 theorem | Part V, App. G |
| No-signaling in operational branch | quantum foundations | Closed operationally | β = 0 theorem | Part V, App. G |
| Arrow-of-time account | quantum foundations | Coherent extension | conditional typicality / counting | Part V, App. G |
| Candidate microstructure Hamiltonian | UV realization | Coherent extension | condensate / GFT realization sketch | Part V, App. H |
| Lepton-shell mass extension | particle-sector extension | Coherent extension | constrained shell ladder with finite generation count | App. I |
| Gauge-redundancy extension | gauge sector | Coherent extension | baseline-redundancy construction with Maxwell/Yang–Mills form | App. I |
| Numerical robustness checks | validation layer | Supportive audit layer | cross-sector consistency tests | App. J |
| EFT consistency checklist | field-theory audit | Supportive audit layer | no-ghost / no-tachyon / causal-propagation checklist with explicit vacuum dispersion stability | App. D |

This is the official epistemic map, replacing the diffuse appendix-style anti-ad-hoc ledger.

24. Falsifiability and Observational Tests

24.1 Static weak-field falsifiers

The static weak-field sector stands or falls on a small number of concrete checks. The most direct are the shape of the RAR transition, the baryonic Tully–Fisher scaling in systems where the EFT should apply, and the weak-field lensing sector. A persistent need for gravitational slip where the scalar stress predicts none would be especially damaging, because it would break the same no-slip structure used to keep dynamics and lensing aligned.

24.2 Dynamical falsifiers

The dynamical extension is more vulnerable, and its failure modes are correspondingly sharper. Time-dependent halo lag, merger offsets, or relaxation signatures that cannot be reconciled with the telegrapher relation D/τ0 = c2 would indicate that the causal completion has the wrong propagation structure even if the static branch survives.

24.3 Cosmological falsifiers

Cosmology presents a different kind of test. The question there is not whether the mechanism points in the right direction, but whether a full Boltzmann treatment allows the trace-coupled homogeneous mode to reduce the sound horizon without spoiling the CMB or structure-growth observables. If it cannot, the cosmological extension fails on its own terms.

24.4 Correlated-constant falsifiers

One of the more distinctive signatures of the framework is that the same microstructural chain feeds both the weak-field gravitational normalization and the galactic acceleration scale. A precision program that could test correlated shifts in G, a0, and the RAR normalization would probe the theory more sharply than isolated single-observable fits, because it would confront the shared coefficient origin directly.

24.5 Many-Pasts status

The Many-Pasts sector is not likely to be challenged first by ordinary laboratory deviations from quantum mechanics, because it is built to reproduce the usual operational structure there. Its more immediate points of failure are internal ones: failure of exact Born recovery, failure of no-signaling, or incompatibility with the thermodynamic arrow structure it is supposed to illuminate.

These points define the canonical falsifiability map.

25. What the Theory Would Have to Get Wrong to Fail

Placed together, the main failure modes have a simple shape:

• If future weak-field observations require persistent gravitational slip in the relevant galactic or cluster regimes, the canonical weak-field branch fails.

• If the RAR transition shape systematically departs from the derived bosonic occupancy law in systems well described by the static branch, the canonical galactic EFT fails.

• If the weak-field UV coefficient chain cannot be reconciled with an independently validated microscopic derivation, the canonical UV closure loses support.

• If the cosmological trace-coupled homogeneous mode cannot survive full Boltzmann likelihood confrontation, the cosmology sector fails even if the static weak-field branch survives.

• If the bounded-occupancy nonlinear completion proves inconsistent with viable strong-field solutions, the strong-field branch fails while the weak-field theory may still remain viable as an EFT.

These are failure modes rather than a separate derivational sector.

26. Comparison with Other Approaches

Because the framework aims to replace dark matter and partially reorganize the usual dark-energy story, it is useful to state briefly how its logic differs from nearby alternatives.

26.1 Relative to ΛCDM

The contrast with ΛCDM begins at the level of ontology. Standard cosmology explains the relevant phenomenology by adding dark matter and an independent cosmological constant or dark-energy sector to otherwise standard gravity. Here the visible matter sector is retained, but it is interpreted as the macroscopic description of localized defects in a vacuum-capacity medium whose weak-field response supplies the effective extra gravitating component. The same closure chain is then asked to feed G, a0, the RAR law, weak-field lensing consistency, and the homogeneous cosmological mode.

26.2 Relative to MOND-like interpolation programs

MOND-like programs usually begin from an acceleration law or interpolation function and ask how much galaxy phenomenology it can explain. The present logic runs the other way. The interpolation law is not taken as primary; it is downstream of the UV entropy, the 1 + 2 channel geometry, and the bosonic occupancy branch. The galactic law is thus treated as an output of the same micro-to-IR closure chain rather than as the phenomenological starting point.

26.3 Relative to Verlinde-style emergent gravity

Verlinde-style emergent-gravity programs share the broad intuition that gravity may be entropic, but they are usually formulated at the level of thermodynamic reasoning or horizon-inspired force laws. The present framework is trying to do something narrower and more explicit: finite tetrahedral boundary counting, admissibility closure, edge coupling, finite renormalization, Euclidean-action normalization, and only then a continuum scalar EFT. Whether that chain is ultimately correct is an empirical matter, but it is a different kind of proposal from a purely macroscopic entropic argument.

26.4 Relative to TeVeS and other multi-field modified gravities

Multi-field relativistic MOND completions such as TeVeS typically introduce additional vector or tensor sectors in order to repair lensing or cosmological problems. The present weak-field construction instead keeps a single scalar entanglement field within a low-energy Einstein continuum sector that is itself interpreted as emergent from the substrate, and relies on the no-slip structure Φ = Ψ at leading order to keep lensing and dynamics aligned. That economy is attractive if the branch survives confrontation with data, and immediately vulnerable if future observations demand persistent slip or extra weak-field structure.

This comparison is meant as context rather than as a derivational sector.

27. Conclusion

The central achievement of the manuscript is a weak-field closure result. A finite entanglement-capacity microstructure is carried through admissibility closure, edge transport, finite renormalization, continuum matching, and a covariant scalar EFT to produce Newtonian gravity, the galactic acceleration scale, the RAR law, and weak-field lensing consistency without per-system tuning. More broadly, the manuscript argues that the continuum metric sector itself should be understood as the low-energy capacity geometry of the same substrate, so Einstein gravity appears here as the continuum limit of the framework rather than as an independent foundation underneath it.

That does not finish the whole framework, but it does change the shape of the open problems. The remaining tasks are no longer the invention of a missing theory; they are the hard completion tasks of an existing one: independent graph-level confirmation of the finite-loop self-energy, fuller microscopic derivation of the action-kernel normalization, full Boltzmann cosmology, and self-consistent strong-field solutions. Time-dependent transport, cosmology, bounded-occupancy strong field, and Many-Pasts all remain part of the same ontology, though they do not yet stand at the same level of derivational closure as the static weak-field chain.

Appendix A: Symbol Dictionary and Canonical Conventions

Appendix A gathers the conventions used throughout the technical material that follows. Its purpose is simply to keep the later appendices readable by fixing the units, field definitions, and couplings in one place before the denser calculations begin.

A.1 Units, signature, and entropy normalization

All dimensional quantities are expressed in SI units unless noted otherwise. The metric signature is (−, +, +, +). Entropies are measured in nats, so Boltzmann's constant is absorbed into the entropy normalization. The canonical UV cell has spatial scale L∗ and volume V∗ = L∗³; when the Planck branch is invoked explicitly, L∗ = LP.

These conventions matter because the argument repeatedly moves between a dimensionless UV counting problem and a dimensionful continuum EFT. The units and signature are what make those two descriptions comparable rather than merely suggestive.

A.2 Core scalar variables

The canonical continuum variable is the vacuum-relative coarse-grained entanglement field

Sent(x),

with vacuum baseline S∞ and deficit

δS(x) = S∞ − Sent(x).

For nonlinear work the bounded occupancy fraction is

q(x) = Sent(x)/S∞ = 1 − δS(x)/S∞ ∈ [0, 1].

The source channel is

χ(x) = −T^µ_µ/c²,

which is the continuum trace channel of the localized defect sector and reduces to the ordinary mass density ρ in the nonrelativistic static limit.

A.3 Couplings and derived observables

The main-text conventions are

γ : entanglement-field stiffness, (10)

κ : continuum defect–entropy coupling, (11)

κm(ℓ) : mass-per-entropy map at scale ℓ, (12)

gshare,max = ln(1680), (13)

gshare,eff : admissibility-weighted sharing entropy, (14)

Jbare, J_eff^tree, J_eff^(ren) : UV edge couplings, (15)

a0 = cH0 gshare,eff/(4π²). (16)

The canonical weak-field bridge and Newton closure are

Φ/c² = −δS/(2S∞),  G = c²κ/(8πγS∞).

Collected in one place, these formulas also make clear which quantities are downstream of the closure chain. The UV data determine the couplings and stiffness first; the observable weak-field constants appear only after the bridge and source map are fixed.

A.4 Notation map

The manuscript uses one notation set throughout. Earlier variants such as mixed gshare / gshare,eff usage, duplicate bridge-law derivations, or shifted definitions of the scalar variable are not carried in parallel; where they matter historically, they are translated into the present conventions before use.

Appendix A serves as the reference layer for those conventions.

Appendix B: UV Boundary Ensemble and Admissibility Closure

Appendix B records the finite ultraviolet counting problem in its explicit form. It shows how the theory begins from a discrete boundary ensemble and ends with a unique admissibility-closed entropy rather than with an unconstrained continuum ansatz.

B.1 Minimal tetrahedral package

The canonical UV cell is a tetrahedron with four structural ingredients:

• a tetrahedral volumetric cell;

• half-integer fermionic face data on each face;

• injective face assignment across the four faces;

• binary orientation/parity.

Postulate II identifies the elementary defect sector as fermionic, so each face carries half-integer base spin j0. For a shared face the effective boundary sector is

j0 ⊗j0 = 0 ⊕1 ⊕· · · ⊕2j0.

Postulate I selects the maximum-capacity channel, hence jeff = 2j0 with |M| = 2jeff +1 = 4j0+1 distinguishable face states. Injectivity across four faces requires |M| ≥4. The j0 = 1/2 option fails because it gives jeff = 1 and |M| = 3. The first half-integer choice that works is therefore j0 = 3/2, giving jeff = 3 and the canonical seven-state face sector. The resulting state count is

Ωtet = 2 × P(7, 4) = 1680, gshare,max = ln(1680) = 7.42654907240.

This is the minimal discrete package currently used in the framework to obtain a finite, isotropic, auditable boundary-channel structure.

The important feature is not just that the counting closes, but that it closes for structural reasons. Fermionic face data, injectivity, and maximum-capacity channel selection together force the seven-state face sector instead of leaving it as a tunable menu choice.

The minimality statement can also be written as a short proof. A volumetric cell in d = 3 needs at least four faces, so a tetrahedron is the first admissible simplex. The closure surrogate is three-component, so the face sector must be rich enough to support a nontrivial quadratic spectrum in d = 3 rather than a degenerate one-dimensional label count. Postulate II makes the face data fermionic, hence half-integer. Maximum-capacity channel selection then gives

jeff = 2j0, |M| = 2jeff + 1 = 4j0 + 1.

Injectivity across four faces requires |M| ≥4. The only half-integer option below j0 = 3/2 is j0 = 1/2, which gives jeff = 1 and |M| = 3, so it fails. The first admissible fermionic choice is therefore j0 = 3/2, giving jeff = 3 and the canonical seven-state face sector. In that precise sense, the (4-face, 7-state) tetrahedral package is the minimal architecture compatible with a three-component isotropic closure mode, injective boundary information, and finite volumetric counting.
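The counting argument above is small enough to verify mechanically. The sketch below is a hedged reconstruction using only the rules stated in this subsection; it checks the injectivity failure at j0 = 1/2 and reproduces Ωtet = 1680 and gshare,max = ln(1680):

```python
from math import log, perm

# Face-sector size under the maximum-capacity channel rule:
# jeff = 2*j0, |M| = 2*jeff + 1 = 4*j0 + 1. We pass j0 as twice its value
# so half-integer spins stay exact integers.
def face_states(two_j0):
    return 2 * two_j0 + 1

# j0 = 1/2 gives |M| = 3 < 4: injective assignment over four faces fails.
assert face_states(1) == 3
# First admissible half-integer choice: j0 = 3/2 gives the 7-state sector.
assert face_states(3) == 7

# Ordered injective assignment of 4 of the 7 labels, times binary parity.
omega_tet = 2 * perm(7, 4)        # 2 * (7*6*5*4) = 1680
g_share_max = log(omega_tet)      # ln(1680)
print(omega_tet, g_share_max)
```

math.perm requires Python 3.8+; the printed entropy agrees with the quoted gshare,max = 7.42654907240 to roughly ten decimal places.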

B.2 Closure invariant, kernel, and unique fixed point

The canonical scalar closure invariant is

K2(b) = 48 − (S² − Σ2)/3,  S = Σ_{i=1}^{4} m_i,  Σ2 = Σ_{i=1}^{4} m_i².

The admissibility family is

pη(b) = e^{−ηK2(b)}/Z(η),  Z(η) = Σ_{b∈B} e^{−ηK2(b)}.

The closure condition

η ⟨K2⟩η = 3/2

has the unique solution η∗ = 0.0298668443935.

Because the parity-symmetric ensemble is finite, the root-finding problem can be written directly from the exact discrete spectrum itself. The distinct closure-defect values and their degeneracies are

K2:    122/3  134/3  142/3  146/3  152/3  154/3  158/3    54  164/3  166/3  170/3
mult:     96     96     96    288    192    144    384   192     48     96     48

with total multiplicity 1680 as required. In particular,

Z(η) = Σ_a n_a e^{−ηK2,a},  ⟨K2⟩η = Σ_a n_a K2,a e^{−ηK2,a} / Σ_a n_a e^{−ηK2,a},

where (K2,a, n_a) run over the table above. The closed-branch value η∗ is therefore the unique root of an exact finite-spectrum equation, not an unseen numerical fit. The corresponding effective sharing entropy is

gshare,eff = −Σ_{b∈B} pη∗(b) ln pη∗(b) = 7.41980002357.

The closed-branch moments used in the UV stiffness discussion are

⟨K2⟩η∗= 50.2229154254, Varη∗(K2) = 15.6889750078, aUV = 0.0637390269.

These values quantify the local stiffness of the canonical closure point rather than a tunable phenomenological uncertainty.

This is where the admissibility parameter stops being free. The kernel introduces η, and the closure condition removes its arbitrariness again by demanding that the fluctuation scale produced by the weighting agree with the weighting itself.
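The finite-spectrum closure problem can be reproduced end to end in a few lines. The sketch below is a hedged reconstruction: it assumes labels m ∈ {−3, …, 3}, 4!·2 = 48 orderings/parities per injective 4-subset, the kernel K2 = 48 − (S² − Σ2)/3, and the closure condition read as η⟨K2⟩η = 3/2. Under those assumptions it reproduces the quoted spectrum, η∗, the closed-branch moments, and gshare,eff:

```python
from itertools import combinations
from math import exp, log

# Build the exact finite spectrum: each unordered 4-subset of the seven
# labels contributes 4! orderings times 2 parities = 48 microstates.
labels = range(-3, 4)
spectrum = {}
for subset in combinations(labels, 4):
    S = sum(subset)
    Sigma2 = sum(m * m for m in subset)
    K2 = 48 - (S * S - Sigma2) / 3
    spectrum[K2] = spectrum.get(K2, 0) + 48

assert sum(spectrum.values()) == 1680

def mean_K2(eta):
    """Gibbs average of K2 over the finite spectrum at weight exp(-eta*K2)."""
    w = {K: n * exp(-eta * K) for K, n in spectrum.items()}
    Z = sum(w.values())
    return sum(K * wK for K, wK in w.items()) / Z

# Bisection on the closure condition eta * <K2>_eta = 3/2 (monotone in eta).
lo, hi = 0.0, 1.0
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if mid * mean_K2(mid) < 1.5:
        lo = mid
    else:
        hi = mid
eta_star = 0.5 * (lo + hi)

m1 = mean_K2(eta_star)
w = {K: n * exp(-eta_star * K) for K, n in spectrum.items()}
Z = sum(w.values())
var = sum(K * K * wK for K, wK in w.items()) / Z - m1 * m1
g_eff = log(Z) + eta_star * m1   # Gibbs entropy of the closed ensemble
print(eta_star, m1, var, g_eff)
```

The printed values land on the quoted η∗ ≈ 0.0298668, ⟨K2⟩ ≈ 50.2229, Var ≈ 15.689, and gshare,eff ≈ 7.41980; note also that the entropy deficit ln(1680) − gshare,eff ≈ η∗²Var/2, as expected for a weakly tilted Gibbs ensemble.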

B.3 Rooted reduction and local benchmarks

Rooting on the shared face reduces the exact parity-symmetric ensemble to 140 rooted microstates and 69 rooted closure classes. The rooted classes can be labeled by α = (m•, K2), so the same reduced state space already supports the local evaluation, the cavity benchmark, and the later shell propagation. The local information observable

σ_ind^(r) = H(X | Y_r)

has the principal pre-nonlocal benchmarks

σ_ind^toy = 0.44997, (17)

σ_ind^loc = 0.44708, (18)

σ_ind^Bethe(J = 0) = 0.44749. (19)

Here the Bethe value is the homogeneous cavity evaluation on the 69×69 rooted-class interaction graph at zero transport coupling,

µα ∝ wα [ Σβ Uαβ(0) µβ ]^{z−1},  Σα µα = 1,

with z = 4 and Uαβ(0) the rooted shared-face compatibility matrix before shell transport is turned on. In other words, σ_ind^Bethe(J = 0) is the cavity-theory benchmark of the same explicit rooted ensemble, not a disconnected numerical insert. The horizon target implied by the effective sharing entropy is

σ∗ = π/gshare,eff = 0.42340665.

The gap between the local benchmarks and σ∗ is therefore genuinely a shell/loop problem rather than a failure of the local admissibility closure.

That separation matters for the later UV story. It means the remaining work is not to repair the local closure ensemble, but to propagate it more accurately through transport and return structure.

B.4 What is fixed at this stage

By the end of the admissibility calculation, the framework has already fixed the microscopic counting ceiling, the unique closure point, the effective sharing entropy, and the local stiffness moments. What remains for the next appendix is not another entropy choice, but the propaga- tion of those quantities into edge transport, finite renormalization, and continuum normalization.

Appendix B completes the UV counting problem and records the unique admissibility closure together with the local benchmarks needed by the coefficient chain.

Appendix C: Edge Kernel, Finite Renormalization, and Continuum Matching

Appendix C carries the middle part of the UV-to-IR derivation. Appendix B fixed what the local boundary ensemble is. Appendix C asks how that local data propagate into edge transport, loop dressing, and finally the continuum stiffness coefficient of the weak-field EFT.

C.1 Channel-averaged isotropy identity and tree coupling

Let n̂i be the four face normals of a regular tetrahedron. The exact identity

Σ_{i=1}^{4} n̂i n̂iᵀ = (4/3) I3

implies a channel-averaged transverse fraction of 2/3. The bare edge stiffness is therefore

Jbare = (2/3) η∗ = 0.0199112296.

For a rooted z = 4 coarse adjacency graph, the tree-to-lattice map gives

J_eff^tree = Jbare/(z − 1) = 2η∗/9 = 0.0066370765.

This is the first place where local closure data become a transport law. The tetrahedral identity fixes the isotropic projection, and the rooted branching structure determines how much of the microscopic edge penalty survives as net outward propagation on the coarse graph.
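Both the isotropy identity and the two coupling reductions are directly checkable. The block below is a hedged sketch that takes the quoted closure point η∗ as its only numerical input and reproduces Jbare and J_eff^tree:

```python
import numpy as np

# Four bond/normal directions of a regular tetrahedron, normalized.
normals = np.array([(1, 1, 1), (1, -1, -1), (-1, 1, -1), (-1, -1, 1)]) / np.sqrt(3)

# Channel-averaged isotropy identity: sum_i n_i n_i^T = (4/3) I_3,
# hence a transverse fraction of 2/3 per channel.
M = sum(np.outer(n, n) for n in normals)
assert np.allclose(M, (4 / 3) * np.eye(3))

eta_star = 0.0298668443935        # closure point quoted in Appendix B.2
J_bare = (2 / 3) * eta_star       # transverse projection of the edge penalty
J_tree = J_bare / (4 - 1)         # rooted z = 4 tree-to-lattice dilution
print(J_bare, J_tree)
```

The printed couplings match the quoted 0.0199112296 and 0.0066370765.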

C.2 Horizon target and shell convergence

The horizon-capacity target is

σ∗ = π/gshare,eff = 0.42340665.

At the derived coupling the explicit shell values are

σ_ind^(2) = 0.42143,  σ_ind^(3) = 0.42166,  ∆2→3 = 0.00023.

The residual shift from the target is already small and stable by shell depth r = 2, isolating the remaining correction to the loopy local-return sector rather than a broad nonlocal ambiguity.

So the shell calculation narrows the open problem substantially. The tree branch already lands very near the target, and the residual discrepancy can be assigned specifically to local returns rather than to an uncontrolled long-range correction.

C.3 Finite-loop self-energy closure

The leading loopy correction is organized as a local Dyson dressing:

J_eff^(ren) = J_eff^tree / (1 + J_eff^tree Σret).

The structural decomposition is

Σret = 7 + 2/9 = 65/9,

and each term has a concrete return-channel origin. A short return motif leaves a shared face, explores a local closed loop, and re-enters the same coarse edge before contributing to net long-range transport. In the canonical label basis m = −3, −2, . . . , 3, there are exactly seven ways to do this without changing sector. These are the seven sector-diagonal returns, one for each face-label channel, and together they contribute

Tr(I7) = 7.

In addition to these label-preserving loops, permutation symmetry allows one collective mode shared across all channels. Writing

Psing = |u⟩⟨u|,  u = (1/√7)(1, 1, . . . , 1),

this shared return is rank one. Any additional off-diagonal return sector would break the permutation symmetry of the canonical local ensemble, so there is no second independent collective channel to count. Only the transverse scalar branch feeds back into the coarse transport law, so the singlet first acquires the same 2/3 projection factor that appeared in the tree coupling. It is then diluted by the rooted branching factor 1/(z − 1) = 1/3 on the z = 4 graph, because only one of the three outward branches returns to the original edge. The collective contribution is therefore

Tr[ (2/3)(1/3) Psing ] = 2/9,

since Tr(Psing) = 1. Equivalently,

Rret = I7 + (2/9) Psing,  Σret = Tr(Rret) = 7 + 2/9.

This is the sense in which the finite-loop coefficient is counted rather than guessed: seven inde- pendent label-preserving returns plus one shared singlet return with exactly the same projection and branching weights already fixed in the tree map. Hence

c_loop^(ren) ≡ J_eff^(ren)/J_eff^tree = 1/(1 + J_eff^tree Σret) ≈ 0.95426,

and J_eff^(ren) ≈ 0.00633348.

This reproduces the shell-target crossing near Jbare,cross ∼0.019 at the stated level of agreement.

The local Dyson dressing is therefore doing one precise job: it corrects the tree branch by accounting for the short motifs that recycle amplitude before it contributes to true coarse trans- port. The renormalized coupling is not a new parameter, but the tree coupling after local returns have been summed.
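The return counting can be written out as an explicit 7×7 matrix computation. The sketch below is a hedged reconstruction of the dressing described above, again with the quoted η∗ as the only numerical input:

```python
import numpy as np

# Permutation-singlet collective return: rank-one projector on (1,...,1)/sqrt(7).
u = np.ones(7) / np.sqrt(7)
P_sing = np.outer(u, u)

# Seven sector-diagonal returns (identity) plus the singlet return carrying
# the same 2/3 transverse projection and 1/3 branching dilution as the tree map.
R_ret = np.eye(7) + (2 / 3) * (1 / 3) * P_sing
Sigma_ret = np.trace(R_ret)               # 7 + 2/9 = 65/9
assert abs(Sigma_ret - 65 / 9) < 1e-10

J_tree = 2 * 0.0298668443935 / 9          # tree coupling from Appendix C.1
c_loop = 1 / (1 + J_tree * Sigma_ret)     # loop-renormalization factor
J_ren = J_tree * c_loop
print(c_loop, J_ren)
```

The printed pair matches the quoted c_loop^(ren) ≈ 0.95426 and J_eff^(ren) ≈ 0.00633348.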

C.4 Euclidean-action normalization and continuum stiffness

The lattice quadratic form is interpreted canonically as a Euclidean action weight,

IE/ℏ = (J_eff^(ren)/2) Σ_{⟨ab⟩} (Qa − Qb)²,

over the microscopic four-cell

∆V4 = L∗⁴/c.

The same tetrahedral identity then yields

γQ = (4ℏc/(3L∗²)) J_eff^(ren)

for the occupancy field Qocc. With the horizon-capacity normalization

S = πQocc,

the canonical convention (γ/2)(∂S)² gives

γ = (4ℏc/(3π²L∗²)) J_eff^(ren) = (4ℏc/(3π²L∗²)) × (2η∗/9)/(1 + (2η∗/9)(65/9)).

If L∗ = LP, this is

γ = (4 J_eff^(ren)/(3π²)) c⁴/G ≈ 8.556 × 10⁻⁴ c⁴/G.

This is the decisive stiffness-side matching step. Up to here the derivation has produced a dimensionless lattice weighting; after Euclidean normalization, that same weighting becomes the dimensionful continuum stiffness that appears in the weak-field action.
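As a numerical cross-check of the Planck-branch coefficient, the sketch below rebuilds the dimensionless prefactor of c⁴/G from η∗ alone, using the identity ℏc/LP² = c⁴/G:

```python
from math import pi

# Coupling chain from the closure point (Appendices B.2 and C.1-C.3).
eta_star = 0.0298668443935
J_tree = 2 * eta_star / 9
J_ren = J_tree / (1 + J_tree * 65 / 9)

# Planck-branch stiffness: gamma = (4 J_ren / (3 pi^2)) * c^4/G.
coeff = 4 * J_ren / (3 * pi ** 2)   # dimensionless prefactor of c^4/G
print(coeff)
```

The printed prefactor reproduces the quoted 8.556 × 10⁻⁴.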

C.5 Local defect insertion and the source-side lattice constant

The stiffness-side matching is not the only UV quantity that can be closed locally. For the canonical rigid defect insertion, excluding one of the seven admissible face labels from one face removes exactly one-seventh of the isotropically averaged local partition weight. Therefore the logarithm of the isotropically averaged partition ratio is exactly

∆Sdef := −ln ⟨Zdef/Zvac⟩iso = ln(7/6).

This is the exact isotropic source benchmark in the canonical seven-label ensemble. The isotrop- ically averaged defect free-energy cost differs from it only at O(10−5) because the admissibility weighting breaks label symmetry only weakly.

To propagate that local defect into the lattice field equation one needs the on-site Green function of the tetrahedral/diamond nearest-neighbor Laplacian. Writing the bond vectors as

δi ∈ { (1,1,1)/√3, (1,−1,−1)/√3, (−1,1,−1)/√3, (−1,−1,1)/√3 },

the corresponding lattice constant is

Gtet(0) = (1/V_BZ) ∫_BZ d³k 4/(16 − |f(k)|²),  f(k) = Σ_{i=1}^{4} e^{ik·δi},

with numerical value Gtet(0) = 0.448220394(5).

Using the field normalization S = πQocc, the rigid local defect shift is

δQdef = ∆Sdef/π = ln(7/6)/π.

The corresponding local source amplitude in lattice units is therefore

sdef = J_eff^(ren) δQdef / Gtet(0),

so that

sdef / J_eff^(ren) = ln(7/6)/(π Gtet(0)) = 0.109472228 . . .

is a pure number fixed by the same UV lattice geometry.
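That pure number can be checked directly from the quoted Green-function value (the Brillouin-zone integral itself is not re-derived here):

```python
from math import log, pi

G_tet0 = 0.448220394        # quoted on-site tetrahedral Green function
dS_def = log(7 / 6)         # exact isotropic defect entropy ln(7/6)
dQ_def = dS_def / pi        # field normalization S = pi * Q_occ
ratio = dQ_def / G_tet0     # s_def / J_eff^(ren)
print(ratio)
```

The printed ratio reproduces the quoted 0.109472228 at the precision of the quoted Gtet(0).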

This does not by itself finish the full continuum source coupling. What it closes is the local lattice insertion factor. The remaining content of the source map is the full UV–IR normalization that converts local defect entropy per cell into the continuum quantity Ξρ appearing in

κ = Ξρ / (L∗² κm(L∗)).

That bridge is not a single leftover geometric number. It includes the mass-per-entropy map κm, the continuum density convention, and the vacuum-capacity normalization against S∞ that carries the cosmological hierarchy. Thus the weak point of the source sector is no longer an unknown local lattice response; it is the full UV–IR source dictionary.

This distinction is worth making explicit because it changes the status of the source problem. The local defect insertion factor is now a finite UV result. What remains open is the larger normalization problem that turns one-cell defect entropy into a continuum mass-density source.

C.6 First explicit UV estimate of κ/γ

The exact local moments of the admissibility-closed ensemble also supply a first explicit UV estimate of the continuum source ratio. From the variance in Appendix B,

aUV := 1/Varη∗(K2) = 0.0637390269,

which is the local zero-mode inverse susceptibility of the closure scalar. Combining this with the same nearest-neighbor gradient template used in the stiffness matching, together with the fixed normalization S = πQocc, gives the first explicit lattice estimate of the source-to-stiffness ratio:

κ/γ ≈ 1.487 × 10³ × Ξρ / ( c_loop^(ren) L∗⁴ κm(L∗) ).

This estimate is valuable because it replaces an open functional freedom by a definite algebraic form built from already-derived UV data: the local branch curvature aUV, the derived gradient template, the loop-renormalization factor, and the standard source map. At the same time, it should not be overstated. It is an explicit UV narrowing, not an independent closure of the full source sector, because it still depends on the full UV–IR bridge encoded in Ξρ and on the loop-renormalization factor.

C.7 UV-to-IR payoff

At this stage the weak-field UV coefficient chain is explicit:

Ωtet → gshare,eff → Jbare → J_eff^tree → Σret → J_eff^(ren) → γ.

The same chain feeds

a0 = cH0 gshare,eff/(4π²),

and, once the matched source map is fixed, determines the leading ratio κ/γ used throughout the weak-field sector. The remaining open issue on the source side is not a missing local lattice factor but an independent derivation of the full UV–IR dictionary linking defect entropy, mass density, and vacuum-capacity normalization.
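As an order-of-magnitude check of the a0 relation, the sketch below evaluates it for an assumed H0 = 67.4 km s⁻¹ Mpc⁻¹ (no specific H0 value is fixed at this point in the text):

```python
from math import pi

c = 2.99792458e8                   # m/s
Mpc = 3.0857e22                    # m
H0 = 67.4 * 1000 / Mpc             # s^-1, assumed value
g_share_eff = 7.41980002357

a0 = c * H0 * g_share_eff / (4 * pi ** 2)
print(a0)                          # of order 1.2e-10 m/s^2
```

The result lands at roughly 1.2 × 10⁻¹⁰ m s⁻², the scale usually quoted for the radial-acceleration relation.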

Appendix C closes the canonical UV branch up through the SI-normalized weak-field stiffness coefficient, the local defect insertion constant, and a first explicit UV estimate of κ/γ. The remaining source-side open item is the full UV–IR source normalization bridge rather than a missing local lattice factor.

Appendix D: Weak-Field Technical Derivations, Electron Anchor, and EFT Consistency

Appendix D gathers the weak-field derivations that are conceptually central but too dense to repeat in full in the main line. It is best read as a technical support layer for the bridge law, Newtonian recovery, the electron anchor, and the basic consistency checks of the EFT.

D.1 Bridge-law uniqueness

The weak-field bridge is derived once and then retired everywhere else. Let the lapse be written as

N = e^{−F(δS/S∞)}.

Additivity of independent deficits requires F(x + y) = F(x) + F(y), so continuity implies F(x) = c1 x for some constant c1. Standard weak-field metric normalization fixes c1 = 1/2, giving

N = e^{−δS/(2S∞)}

and therefore

Φ/c² = −δS/(2S∞)

to leading order. This is the unique weak-field bridge compatible with locality, additive independent deficits, and multiplicative redshift composition.

Writing the argument this explicitly removes one of the most common ambiguities in modified- gravity proposals. The bridge from entropy deficit to gravitational potential is not being chosen phenomenologically after the fact; it is fixed by the structural requirements of the weak-field limit itself.

D.2 Point source, Newton limit, and lensing

In the renormalized static branch,

∇²δS = −(κ/γ) ρ.

For a point source M,

δS(r) = κM/(4πγr),  g(r) = [ c²κ/(8πγS∞) ] M/r² = GM/r².

Because the leading entanglement stress carries no anisotropic stress,

Φ = Ψ

at the order treated. The effective-halo rewrite is

ρhalo(r) = (1/(4πGr²)) d/dr [ r² (gobs − gbar) ].

h r2(gobs −gbar) i .

Thus the same deficit field controls both orbital dynamics and light bending in the leading weak-field regime.

That shared control is the key weak-field consistency test. A viable branch must not reproduce galactic support only by sacrificing lensing, and the scalar deficit sector avoids that failure at the order treated.
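The point-source chain (deficit profile → bridge law → acceleration) can be verified numerically for arbitrary positive couplings. The sketch below uses arbitrary test values for (κ, γ, S∞), not physical inputs, and checks that the recovered acceleration equals GM/r² with G = c²κ/(8πγS∞):

```python
from math import pi

c = 2.99792458e8
kappa, gamma, S_inf = 3.7, 1.9, 5.2e3    # arbitrary positive test values
M, r = 1.0, 2.0                          # arbitrary point mass and radius

def Phi(radius):
    """Bridge law applied to the point-source deficit profile."""
    dS = kappa * M / (4 * pi * gamma * radius)   # deficit deltaS(r)
    return -c ** 2 * dS / (2 * S_inf)            # Phi/c^2 = -dS/(2 S_inf)

# Inward acceleration magnitude g = dPhi/dr, by central difference.
h = 1e-3
g = (Phi(r + h) - Phi(r - h)) / (2 * h)

G_eff = c ** 2 * kappa / (8 * pi * gamma * S_inf)
assert abs(g - G_eff * M / r ** 2) < 1e-5 * abs(g)
print(G_eff)
```

The assertion passes for any positive choice of the three couplings, which is the sense in which the Newton closure is an identity of the chain rather than a fit.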

D.3 Electron anchor and composite matter

The canonical fermionic entropy increment is

∆Sf = ln 2.

The UV mass normalization is

κm,UV = ℏ/(cL∗ ln 2),

and the running law in the closed branch is

κm(ℓ) = κm,UV (L∗/ℓ)^{1+αcl},  αcl = 0.

At the electron Compton scale ℓ = λe this gives

κm(λe) = me/ln 2,

which is the clean elementary anchor used here. Composite hadrons are not reduced to a bare constituent count. Their mass budget is assigned to a dressed bound-state entropy

mhadron = κm(ℓH) S_ent,H^dressed,

whose microscopic decomposition must include confinement, gluonic structure, trace-anomaly contributions, and chiral vacuum reorganization.

The contrast between the two sectors is deliberate. The electron is a clean one-bit defect anchor; hadrons are not. Their inertial content must therefore be assigned to a dressed entropy budget rather than to a naive constituent count.
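A quick check of the anchor algebra: with the running law above, the UV length L∗ cancels at the Compton scale, so κm(λe) = me/ln 2 independently of the branch choice for L∗. A minimal sketch (the Planck-length value is assumed only to make the cancellation explicit):

```python
from math import log

hbar = 1.054571817e-34
c = 2.99792458e8
m_e = 9.1093837015e-31
L_star = 1.616255e-35                   # Planck length, assumed branch

lam_e = hbar / (m_e * c)                # reduced Compton wavelength
kappa_m_UV = hbar / (c * L_star * log(2))
kappa_m_e = kappa_m_UV * (L_star / lam_e) ** (1 + 0)   # alpha_cl = 0

# L_star cancels: kappa_m(lambda_e) = m_e / ln 2 exactly.
assert abs(kappa_m_e - m_e / log(2)) < 1e-9 * kappa_m_e
print(kappa_m_e)
```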

D.4 Independent electron-anchor derivation of G

Besides the matched weak-field identity

G = c²κ/(8πγS∞),

the framework also admits a standalone electron-anchor reduction in which G appears as an output rather than as an input normalization. The branch uses the standard constants (ℏ, c, me), the reduced Compton scale

λe = ℏ/(me c),

a local transport-sharing factor gshare,loc, a fixed SI normalization marker utr with units m−2, and a transport-geometry exponent αtr. In the minimal single-scale isotropic branch used here,

gshare,loc ≡ gshare,eff,  utr = 1 m⁻²,  αtr = 1/2.

Define

F ≡ 4 ln 2 / gshare,loc.

Eliminating the implied UV length at the last algebraic step yields the closed-form branch expression

G = [ 4π² utr c^{3αtr+2} λe^{2αtr+4} me² / ( F² ℏ^{αtr+2} ) ]^{1/αtr}.

For the canonical transport exponent αtr = 1/2, this specializes to the quartic law

G = [ π² utr c^{7/2} λe⁵ me² / ( 4 (ln 2)² ℏ^{5/2} ) ]² gshare,loc^4.

This is the most explicit algebraic version of the branch: once (ℏ, c, me, λe) and the local sharing factor are specified, G is output directly. This branch is non-circular because G is not assumed in the input list; it appears only after eliminating the implied UV length from the electron-anchor chain. Equivalently, the relation is invertible:

F(G) = [ 4π² utr c^{3αtr+2} λe^{2αtr+4} me² / ( ℏ^{αtr+2} G^{αtr} ) ]^{1/2},  gshare,loc = 4 ln 2 / F(G).

For the canonical choice αtr = 1/2, the scaling is quartic,

G ∝ gshare,loc^4,  δG/G = 4 δgshare,loc/gshare,loc.

Using the simple test point gshare,loc = 7.4 gives

Gpred = 6.700223 × 10⁻¹¹ m³ kg⁻¹ s⁻²,

while imposing the strict minimal-closure identification

gshare,loc = gshare,eff = 7.41980002357

gives Gpred = 6.772222 × 10⁻¹¹ m³ kg⁻¹ s⁻².

In the canonical manuscript this branch is not used to replace the matched EFT derivation. Its role is to show that the same framework also contains a compact independent reduction in which Newton’s constant is output from the electron anchor, the local sharing factor, and standard constants.

For that reason this branch is best read as a cross-check rather than as the primary derivation of G. It shows that the weak-field normalization is overconstrained in a useful way: the EFT matching and the electron-anchor reduction point toward the same gravitational scale.
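The quartic branch formula is compact enough to evaluate directly. The sketch below is a hedged reconstruction of the αtr = 1/2 specialization, with CODATA-style values for (ℏ, c, me) as assumed inputs; it reproduces both quoted test points:

```python
from math import log, pi

hbar = 1.054571817e-34
c = 2.99792458e8
m_e = 9.1093837015e-31
u_tr = 1.0                               # SI normalization marker, m^-2

lam_e = hbar / (m_e * c)                 # reduced Compton wavelength
# Bracket of the quartic law: G = B^2 * g_share_loc^4.
B = pi ** 2 * u_tr * c ** 3.5 * lam_e ** 5 * m_e ** 2 / (4 * log(2) ** 2 * hbar ** 2.5)

for g_loc in (7.4, 7.41980002357):
    print(g_loc, B ** 2 * g_loc ** 4)    # ~6.700e-11 and ~6.772e-11
```

Both outputs sit within a fraction of a percent of the measured G ≈ 6.674 × 10⁻¹¹ m³ kg⁻¹ s⁻², which is the overconstraint the text points to.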

D.5 EFT consistency checklist

The weak-field EFT does not rely only on successful phenomenology; it also passes a standard consistency checklist at the level claimed here.

• No ghost: the scalar kinetic term carries positive sign because γ > 0.

• No tachyon: the quadratic fluctuation operator contains no mass term at this order.

• Correct-sign sourcing: the defect-source coupling lowers the available entanglement capacity around positive-mass defect configurations rather than generating repulsive static behavior in the weak-field branch.

• Causal propagation: the transport completion satisfies D/τ0 = c², so the time-dependent sector propagates at finite signal speed.

• Weak-field unitarity below cutoff: once the scalar sector is quantized around the weak-field branch, the absence of ghost or tachyonic modes leaves an ordinary sub-cutoff scalar EFT rather than an obviously pathological one.

• Energy-condition role: the scalar gradient sector contributes positive local stiffness energy, while cosmological acceleration enters through the background branch rather than through a ghost-like local degree of freedom.

These statements are made at the EFT level claimed here. They do not replace the need for a fuller UV derivation, but they do show that the weak-field scalar sector is not buying phenomenology by obvious field-theoretic pathology.

The checklist is intentionally modest. Its role is not to prove ultraviolet completion of the full framework, but to show that the low-energy scalar sector used in the weak-field branch passes the standard first tests of EFT health.

The one place where an explicit formula is worth recording is linear vacuum stability in the time-dependent sector. Writing a small perturbation δs about the vacuum branch, the linearized telegrapher equation is

τ0 ∂t²δs + ∂tδs − D∇²δs = 0.

For a plane-wave mode e^{−iωt+ik·x}, this gives the dispersion relation

τ0ω² + iω − Dk² = 0.

With τ0 > 0 and D > 0, the corresponding mode frequencies have non-growing time dependence, so the vacuum is linearly stable. The same sign structure is what underlies the earlier no-ghost and no-tachyon statements: positive kinetic stiffness, positive transport coefficients, and no negative mass-squared term in the linearized sector.
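The non-growing claim can be checked mode by mode: both roots of τ0ω² + iω − Dk² = 0 satisfy Im ω ≤ 0 in the overdamped and underdamped regimes alike. A hedged sketch with the no-new-IR-scale branch values and an assumed H0:

```python
import cmath

c = 2.99792458e8
H0 = 2.2e-18                    # s^-1, assumed value
tau0, D = 1 / H0, c ** 2 / H0   # no-new-IR-scale branch, D/tau0 = c^2

# k spans overdamped horizon-scale modes through underdamped galactic modes.
for k in (1e-27, 3.7e-27, 1e-15):
    disc = cmath.sqrt(-1 + 4 * tau0 * D * k ** 2)
    for s in (+1, -1):
        w = (-1j + s * disc) / (2 * tau0)
        # root of the dispersion relation, and non-growing in time
        assert abs(tau0 * w ** 2 + 1j * w - D * k ** 2) < 1e-6 * max(1.0, abs(D * k ** 2))
        assert w.imag <= 1e-30
print("all modes non-growing")
```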

D.6 Quadratic fluctuations and weak-field stability

Expanding the action about an on-shell background yields the quadratic fluctuation operator

I^(2)[δS] = −∫ d⁴x √−g (γ/2) g^{µν} ∂µδS ∂νδS.

There is no quadratic mass term at this order, so the low-energy scalar sector contains one massless bosonic mode. Stability requires γ > 0, which is reinforced in the microscopic realization appendix by condensate hydrodynamics.

This is also the local EFT reason the bosonic occupancy language in the galactic section is natural rather than decorative. The weak-field branch genuinely contains a stable massless scalar mode whose occupation can be discussed meaningfully.

Appendix D provides the technical support layer for the weak-field bridge, Newton limit, electron anchor, standalone G branch, and EFT consistency audit.

Appendix E: Transport, Cosmology, and Hubble-Tension Implementation

Appendix E collects the time-dependent and homogeneous extensions of the static branch. The common purpose of these subsections is to show that the same scalar medium can propagate causally, relax toward its static limit, and support a cosmological background mode without losing contact with the weak-field structure already derived.

E.1 Telegrapher equation and causal closure

The time-dependent deficit field obeys

τ0 ∂t²δS + ∂tδS = D∇²δS + Aχ,  A/D = κ/γ.

Causality requires

D/τ0 = c².

In the canonical no-new-IR-scale branch,

τ0⁻¹ = H0,  D = c²/H0.

This is the minimal causal completion of the static Poisson sector. The telegrapher form supplies propagation and relaxation, but it is chosen so that the static weak-field law remains the exact late-time limit rather than being replaced by a new phenomenological rule.

E.2 Static-limit recovery for galaxies

For a Fourier mode k, the telegrapher characteristic equation

τ0s² + s + Dk² = 0

has the roots

s = −1/(2τ0) ± iωk,   ωk ≃ ck,

whenever 4τ0Dk² ≫ 1. Galactic wavelengths are far below the critical scale

λc = 4πc/H0 ≈ 54 Gpc,

so galactic modes are deeply underdamped. Time-averaging the sourced solution over intervals large compared with 2π/ωk returns the static Poisson branch exactly, and the residual ponderomotive correction scales parametrically as

δFpond/Fstatic ∼ e^(−T/(2τ0)) (ωorb/ωk)² ∼ 10⁻⁸.

That estimate is why the transport sector does not undercut the static galactic results. The oscillatory contribution is present, but it is parametrically too small to compete with the near-stationary weak-field branch in ordinary galactic systems.
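The underdamped regime is easy to exhibit numerically for a representative galactic mode. A sketch assuming λ = 10 kpc and H0 ≈ 70 km s⁻¹ Mpc⁻¹ (illustrative inputs, not outputs of the derivation):

```python
import math, cmath

c = 2.998e8
H0 = 70e3 / 3.086e22            # 1/s
tau0, D = 1.0 / H0, c**2 / H0   # canonical branch

lam = 10 * 3.086e19             # 10 kpc in metres
k = 2 * math.pi / lam

# Characteristic equation tau0*s^2 + s + D*k^2 = 0.
disc = 1 - 4 * tau0 * D * k**2              # hugely negative: underdamped
s = (-1 + cmath.sqrt(disc)) / (2 * tau0)
omega_k = abs(s.imag)

print(4 * tau0 * D * k**2)      # >> 1: deeply underdamped
print(omega_k / (c * k))        # ~ 1: omega_k ≃ c k
print(-1 / s.real / 3.156e16)   # damping time 2*tau0 in Gyr

# Critical wavelength where damping takes over: lambda_c = 4*pi*c/H0.
lam_c_Gpc = 4 * math.pi * c / H0 / 3.086e25
print(lam_c_Gpc)                # ≈ 54 Gpc
```

Galactic modes thus oscillate at essentially the luminal frequency ck while decaying only on a Hubble time, which is why time-averaging recovers the static branch.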

E.3 Homogeneous mode and cosmological sourcing

The cosmological split is

S(x, t) = S̄(t) + s(x, t),

with S̄(t) the homogeneous mode and s(x, t) the inhomogeneous weak-field sector. The background capacity is normalized by the apparent horizon,

S∞(t) = πRA(t)²/L∗²,   RA(t) = c/√(H² + kc²/a²).

Because the field couples to the trace of the stress-energy tensor, the homogeneous mode is suppressed during radiation domination and turns on near matter–radiation equality.

This timing is the central cosmological virtue of the mechanism. The homogeneous mode is quiet when it must be quiet, then becomes relevant close to the epoch where a sound-horizon shift is most useful.

E.4 Sound-horizon shift and shear lock

In the closed cosmological branch, the trace-sourced homogeneous mode acts as a transient early-energy contribution. The qualitative payoff is a smaller sound horizon and an upward shift of the CMB-inferred Hubble constant toward the upper-68 / low-69 km s⁻¹ Mpc⁻¹ range. Local weak-field predictions are protected by the separation between S(t) and s(x, t): the homogeneous mode changes the background branch without rewriting the local static Poisson law.

The point of this appendix is therefore qualitative but substantial. It shows how the homoge- neous mode can matter cosmologically without forcing a re-tuning of the local weak-field sector that already fixed the galactic branch.

Appendix E closes the transport relation and fixes the preferred branch, while the cosmological sector remains structurally supported but not yet Boltzmann-closed.

Appendix F: Strong-Field Completion and Post-Newtonian Boundary

Appendix F keeps the strong-field discussion on its proper footing. It does not attempt to solve the black-hole problem in full. Instead it records the bounded-variable completion rule, the resulting horizon criterion, and the exact point at which the weak-field expansion must give way to a nonlinear treatment.

F.1 Bounded occupancy and unique lapse prescription

The nonlinear completion is posed on the bounded variable

q(x) = Sent(x)/S∞ ∈ [0, 1].

Let the static lapse satisfy N² = f(q).

Vacuum normalization requires f(1) = 1, horizon normalization requires f(0) = 0, and weak-field recovery requires f′(1) = 1. If independent capacity-reduction layers compose multiplicatively in both lapse and surviving occupancy,

f(q1q2) = f(q1)f(q2).

The continuous solutions are f(q) = q^α, and the weak-field condition fixes α = 1. Hence

N² = q,   gtt = −q.

The horizon is therefore the level set q = 0, i.e. complete local exhaustion of available capacity.

The force of this subsection is uniqueness. Once one insists on the vacuum limit, the horizon limit, weak-field matching, and multiplicative composition, the nonlinear lapse prescription is no longer a matter of taste.
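For completeness, the uniqueness step is the standard Cauchy functional-equation argument, restated in the notation above:

```latex
% Multiplicative composition becomes additivity under the substitution q = e^{u}:
f(q_1 q_2) = f(q_1)\,f(q_2)
\;\Longrightarrow\;
g(u_1 + u_2) = g(u_1) + g(u_2),
\qquad g(u) \equiv \ln f(e^{u}),\; u \le 0.
% Continuity forces the linear solution g(u) = \alpha u, i.e.\ f(q) = q^{\alpha};
% the weak-field condition f'(1) = \alpha = 1 then fixes the lapse uniquely.
```

No discrete or pathological branch survives once continuity of f on [0, 1] is imposed.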

F.2 Capacity saturation, compactness, and horizon radius

The black-hole reading of the bounded-occupancy branch begins from the same weak-field variable used throughout the weak-field development. In spherical symmetry the static weak-field solution is

∇²δS = −(κ/γ)ρ,   δS(r) = κM/(4πγr),

while the Newton matching relation is

G = c²κ/(8πγS∞).

Hence the dimensionless capacity-depletion fraction is

f(r) ≡ δS(r)/S∞ = 2GM/(c²r).

Thus the ordinary compactness parameter is already the weak-field entanglement deficit written as a fraction of available vacuum capacity.

The nonlinear surviving-capacity variable is

q(r) = Sent(r)/S∞ = 1 − δS(r)/S∞.

Near vacuum this is just the weak-field relation already used in the main text. The black-hole continuation interprets the horizon as the first surface at which surviving capacity vanishes,

q(rh) = 0 ⟺ f(rh) = 1 ⟺ 2GM/(c²rh) = 1,

so that

rh = 2GM/c².

In this form the Schwarzschild radius is not imported from outside as an unrelated geometric fact. It is the radius at which the vacuum-relative deficit reaches complete local capacity exhaustion in the bounded nonlinear branch.

That is the main reason this kinematic continuation is worth keeping even though the full strong-field solution is deferred. It shows that the weak-field scalar language continues to mean something at the onset of strong gravity, rather than simply being abandoned there.
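The deficit reading of compactness can be checked against familiar numbers. A minimal sketch using solar values (illustrative inputs only):

```python
# Compactness as a capacity-depletion fraction, f(r) = 2GM/(c^2 r).
G = 6.674e-11       # m^3 kg^-1 s^-2
c = 2.998e8         # m/s
M_sun = 1.989e30    # kg
AU = 1.496e11       # m

# Deep weak-field regime at Earth's orbit: f ~ 2e-8.
f_1AU = 2 * G * M_sun / (c**2 * AU)
print(f_1AU)

# Horizon criterion f(r_h) = 1 recovers the Schwarzschild radius.
r_h = 2 * G * M_sun / c**2
print(r_h / 1e3)    # ≈ 2.95 km for one solar mass
```

The same dimensionless fraction thus interpolates from the 10⁻⁸ weak-field deficit at 1 AU to complete capacity exhaustion at rh.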

F.3 Horizon microstate capacity and area-law consistency

Even without a finished strong-field exterior, the framework should still be checked against the one macroscopic entropy law any entanglement-based gravity proposal must respect:

SBH = A/(4LP²).

In the present construction, this law is read as the macroscopic horizon-capacity condition. The tetrahedral microstructure contributes a finite boundary-channel capacity through g_share,max = ln(1680) and its closure-weighted refinement g_share,eff, but the manuscript does not identify one tetrahedral cell with one literal Planck-area horizon bit in a naive one-to-one way.

The cleaner statement is a consistency bridge. The same microstructure that fixes the weak-field closure chain supplies the finite local boundary capacity from which a horizon entropy density can be built, while the macroscopic normalization remains the standard Bekenstein–Hawking area law. In that sense there is no conflict between the combinatorial boundary counting and A/(4LP²): the former supplies the microscopic channel capacity and renormalized sharing structure, while the latter remains the macroscopic thermodynamic target for horizons.

This is the strongest statement presently justified in the strong-field sector. A full microstate-to-area derivation for an actual horizon remains part of the unfinished UV completion program, but the framework is at least aligned with the standard area law rather than at odds with it.

F.4 Minimal strong-field action

The simplest bounded completion may be written as

I = ∫ d4x √−g [ (c⁴/(16πG))R − (γS∞²/2) gµν ∂µq ∂νq − Vsat(q) − κχS∞ q ],

with the physical branch restricted to 0 ≤ q ≤ 1 and

V′sat(1) = V″sat(1) = 0

to preserve the weak-field massless-scalar sector near vacuum. This bounded action should be read as a completion rule for posing the nonlinear problem, not as a finished derivation of the full strong-field exterior. No universal strong-field potential has yet been derived that closes the scalar backreaction problem independently of the exterior mass scale.

In other words, the appendix gives a clean statement of what is known and what is not. The nonlinear variable, the horizon criterion, and the area-law compatibility statement are fixed; the self-consistent exterior and interior solutions remain open.

F.5 PPN boundary and breakdown of the weak-field expansion

In the weak-field Solar-System regime, the scalar sector yields

γPPN = βPPN = 1 + O(Φ²/c⁴),

with the remaining PPN coefficients vanishing in the canonical covariant branch. Weak-field truncations fail only when

|Φ|/c² = O(1),   δS/S∞ = O(1),

which is exactly the regime where the bounded-occupancy completion must replace the linear bridge.

This also makes the boundary between sectors precise: the strong-field completion begins exactly where the weak-field expansion ceases to be quantitatively trustworthy.

Appendix F fixes the minimal nonlinear completion rule and the kinematic horizon criterion. The full self-consistent strong-field exterior and interior problem remains frontier.

Appendix G: Many-Pasts Operational Closure and Arrow of Time

Appendix G records the operational content of the Many-Pasts branch in compact form. The main thing to keep in view is that this branch is conservative where laboratory quantum mechanics is concerned and ambitious only in the larger interpretive and cosmological claims built on top of that operational core.

G.1 Closed operational weight

The canonical history weight is

P(H|P) ∝ e^(−D(H,P)),   D(H, P) = −ln Tr(ΠP ρH→now).

This is the operational branch with α = 1, β = 0.

Writing it this way matters because the generalized family is no longer left open in practice. The laboratory branch is fixed before any interpretive discussion begins.

G.2 Born recovery and no-signaling

In the ordinary projective laboratory limit,

e^(−D(H,P)) = Tr(ΠP ρ),

so the standard Born structure is recovered exactly. This fixes α = 1. An independent signaling-sensitive bias channel is forbidden, which fixes β = 0. The result is ordinary operational quantum mechanics rather than a modified laboratory theory.

That is the core closure claim of the appendix. Whatever additional content the Many-Pasts sector adds, it does not do so by changing standard Born-rule laboratory predictions.
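The collapse of the history weight to the Born rule in the projective limit can be exhibited on a single qubit; a minimal numerical sketch (the state and projector are arbitrary illustrative choices):

```python
import numpy as np

# A normalized qubit state and a projective record Pi = |0><0|.
psi = np.array([0.6, 0.8j])            # |0.6|^2 + |0.8|^2 = 1
rho = np.outer(psi, psi.conj())
Pi = np.diag([1.0, 0.0])

# Operational weight D = -ln Tr(Pi rho); the history weight e^{-D}
# is then exactly the Born probability Tr(Pi rho).
D = -np.log(np.real(np.trace(Pi @ rho)))
print(np.exp(-D))                      # 0.36 = |<0|psi>|^2
```

The exponential and the logarithm cancel identically, which is the content of the α = 1 closure: no deformation of laboratory statistics survives.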

G.3 Arrow of time from conditional typicality

Let h = {Mt}t<t0 be a macrohistory conditioned on present records Mt0. If the count of compatible microhistories is Nh, then

P(h|Mt0) ∝ Nh.

Under coarse-grained factorization,

ln P(h|Mt0) ≈ Σ_{t<t0} S(Mt) + Σ_{t<t0} ln T(Mt+∆t|Mt) + const,

so entropy growth appears as a counting dominance effect among record-compatible histories rather than as a new laboratory coupling.

The arrow-of-time claim should therefore be read as a statement about conditional counting in the space of histories, not as the introduction of a new dynamical force.

Appendix G settles the laboratory sector operationally and leaves the cosmological and arrow- of-time content as a coherent interpretive extension.

Appendix H: Microscopic Realization and Coarse-Graining

Appendix H addresses a different question from the weak-field appendices. Instead of asking whether the coefficient chain is internally closed, it asks whether a plausible microscopic realization exists in which the same scalar stiffness and defect ontology arise naturally.

H.1 GFT condensate realization and coarse-graining

The candidate microscopic realization is a GFT/condensate picture with bosonic tetrahedral quanta ϕ(g1, . . . , g4) and fermionic defects ψ. In the condensate regime, the coarse field may be written as

σ(x) = √(n(x)) e^(iθ(x)).

The hydrodynamic identity

|∇µσ|² = (∇µn)²/(4n) + n(∇µθ)²

shows that if

Sent(x) = S0 + α ln(n(x)/nbg),

then the coarse action contains a positive scalar stiffness

γ ∼ Zσnbg/(2α²) > 0.

The coarse source channel arises from fermionic face exclusion: what is macroscopically read as matter is a localized defect of the condensate, and the surrounding reduction of available occupancy is the long-wavelength field captured by the EFT. In this sense the microscopic appendix plays one clean role: it shows that the EFT is not hanging in midair, even though a finished first-principles derivation of every inhomogeneous continuum coefficient from the full underlying kernel is not yet available.
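The hydrodynamic identity itself is elementary to verify symbolically. Writing σ′ = (n′/(2√n) + i√n θ′) e^(iθ) by the chain rule, its modulus-squared reproduces the two stated terms; a sketch (the local gradients are represented by real symbols):

```python
import sympy as sp

n = sp.symbols('n', positive=True)                       # condensate density
dn, th, dth = sp.symbols('dn theta dtheta', real=True)   # gradients and phase

# Chain rule on sigma = sqrt(n) e^{i theta}:
#   dsigma = (dn/(2 sqrt(n)) + i sqrt(n) dtheta) e^{i theta}
dsigma = (dn / (2 * sp.sqrt(n)) + sp.I * sp.sqrt(n) * dth) * sp.exp(sp.I * th)

lhs = sp.expand(dsigma * sp.conjugate(dsigma))   # |grad sigma|^2
rhs = dn**2 / (4 * n) + n * dth**2               # claimed hydrodynamic form
print(sp.simplify(lhs - rhs))                    # -> 0
```

The cross terms cancel against each other and the phase factors annihilate, leaving exactly the density-gradient and phase-gradient contributions quoted above.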

That is why the appendix remains brief but important. It does not replace the explicit coefficient derivation carried out earlier, but it shows that the ontology and sign choices of the EFT are compatible with a concrete microscopic picture rather than merely with an abstract formalism.

Appendix H gives a coherent microscopic realization supporting the closed weak-field chain, but not yet a closure-defining first-principles derivation of every continuum term.

Appendix I: Mass and Gauge Extensions

Appendix I collects sectors that are structurally connected to the same entanglement logic but are not part of the closed weak-field core. They are kept here because they show how the framework may extend, not because the main derivation depends on them.

I.1 Mass extensions and lepton-shell sector

Beyond the electron anchor, the charged-lepton extension is formulated as a shell spectrum of fermionic defect excitations,

log mN = C0 + B0N + A0N²,   N = 0, 1, 2.

The physical picture is that the electron is the ground-state fermionic defect, while the muon and tau are successive radial entanglement-shell excitations of the same core structure. In that reading, the quadratic log-mass ladder is not an arbitrary three-parameter fit laid on top of the particle spectrum, but the closure form taken by a short finite shell sequence.
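With three charged leptons and three coefficients, the ladder is fixed by direct solution rather than by fitting. A sketch using PDG masses in MeV (external inputs, not outputs of this appendix) and reading log as the natural logarithm (an assumption, since the text does not specify the base):

```python
import numpy as np

# Charged-lepton masses in MeV (PDG values, external illustrative inputs).
m = np.array([0.51099895, 105.6583755, 1776.86])   # e, mu, tau
y = np.log(m)

# Solve y_N = C0 + B0*N + A0*N^2 exactly for N = 0, 1, 2.
N = np.array([0, 1, 2])
C0, B0, A0 = np.linalg.solve(np.vander(N, 3, increasing=True), y)

print(C0, B0, A0)                    # A0 < 0: the log-mass ladder is concave
print(np.exp(C0 + B0*N + A0*N**2))   # reproduces the three masses exactly
```

Since three points determine the quadratic uniquely, the physical content lies not in the solve but in whether the resulting coefficients are reproduced by the sharing-entropy degeneracy structure claimed in the text.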

This shell ladder is not independent of the gravity sector. Its degeneracy structure is tied to the same sharing-entropy logic that fixes the weak-field couplings, so the particle hierarchy and the gravitational normalization are not being treated as disconnected subsystems.

Within this extension, the finite tetrahedral boundary topology also constrains the charged-lepton shell ladder to terminate after three generations. That statement should be read carefully: it is a striking structural payoff of the current shell picture, but it belongs to the extension layer rather than to the closed weak-field core. Still, if the framework really admits only three charged-lepton generations in this construction, that is a genuine and falsifiable output rather than an external Standard Model input.

This sector is treated as a constrained extension of the same entanglement closure logic rather than as a replacement for the electron anchor. Composite hadrons remain part of the dressed bound-state entropy program rather than a completed output of the current extension.

The distinction matters. The electron anchor remains the clean weak-field entry point, while the heavier mass sectors are exploratory continuations of the same logic rather than closure-defining ingredients.

I.2 Gauge-structure extension

The same baseline-redundancy logic that underlies the gravity sector can be extended to gauge sectors. For a conserved charge sector Q, introduce an entropy-like potential SQ(x) and require that physical observables depend only on differences of that potential rather than on its absolute baseline. Promoting that redundancy to a local symmetry requires a compensating connection. In the Abelian case, local baseline redundancy is implemented by

DµSQ = ∂µSQ − qAµ,

with

SQ → SQ + α(x),   Aµ → Aµ + (1/q)∂µα.

This yields the standard Abelian gauge structure, with Maxwell-type dynamics for Aµ and straightforward non-Abelian generalization for multiplet-valued entropic potentials, where the same redundancy principle leads to Yang–Mills covariant derivatives and field strengths in the usual form.

The point here is not that the full gauge sector has been derived, but that the baseline-redundancy logic used elsewhere in the framework naturally points toward familiar gauge structure rather than away from it. Gravity and gauge sectors are then aligned by a common principle: only baseline-invariant deficit information is physically meaningful.
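The covariance claim behind this alignment reduces to a one-line check under the stated transformations:

```latex
D_\mu S_Q \;\longrightarrow\;
\partial_\mu\!\left(S_Q + \alpha\right) - q\!\left(A_\mu + \tfrac{1}{q}\,\partial_\mu\alpha\right)
= \partial_\mu S_Q - q A_\mu = D_\mu S_Q .
```

Any action built from DµSQ therefore sees only baseline differences of the entropic potential, exactly as the redundancy principle requires.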

Appendix I remains a coherent extension layer: structurally linked to the same entanglement logic, but not part of the closed static weak-field derivation chain.

Appendix J: Numerical Checks and Robustness

Appendix J is intentionally modest. It does not add new derivations. It collects the main numerical cross-checks that make it easier to see that the same coefficient chain survives repeated contact with independent benchmark calculations.

J.1 Cross-sector numerical checks

The cross-check program used throughout includes:

• the one-bit fermionic defect check ∆Sf = ln 2;

• the rooted-shell convergence check σ_ind^(2) ≃ σ_ind^(3);

• the UV closed-branch moments ⟨K²⟩η∗, Varη∗(K²), and aUV;

• cross-sector consistency between the electron anchor, Newton closure, and the galactic scale a0.

These checks do not replace the derivations, but they show that the same coefficient chain survives independent numerical scrutiny across the sectors where closure is claimed.

That is exactly the right role for this appendix. It is an audit layer for internal consistency, not a substitute for the analytic logic developed earlier.

Appendix J is supportive rather than closure-defining, serving as an audit layer for numerical consistency rather than an additional derivational sector.
