Heisenberg and Quantization

WikiMechanics is bare naked quantum mechanics. Particle attributes are always quantized because we use a finite categorical scheme of binary distinctions to describe sensation. Quantization comes from the logical structure of the descriptive method, even for a continuous sensorium. Time is quantized too, because the time coordinate depends on a chain's event index. Indices are always integers, so t changes in steps. Motion is discontinuous in principle, and sometimes this is even observed, as quantum leaping and tunnelling. We are cautious about using calculus because the logical foundations of both differential and integral calculus depend on assumptions about continuity. So WikiMechanics does not require calculus; instead, calculations are designed to be implemented on digital computers, in a finite number of discrete steps.
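For concreteness, here is a minimal Python sketch of a time coordinate built from integer event indices; the period value and variable names are hypothetical, chosen only for illustration.

```python
# Minimal sketch of a time coordinate that advances in steps.
# The period value below is hypothetical, chosen only for illustration.

PERIOD = 1.0e-20   # assumed duration of one repetitive cycle, in seconds

def time_coordinate(event_index: int) -> float:
    """Return t for an event, given its integer index in the chain."""
    return event_index * PERIOD

# Indices are integers, so t jumps by one period at a time;
# no intermediate values ever occur.
print([time_coordinate(n) for n in range(1, 5)])
```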

Werner Heisenberg, 1901–1976.

Another consequence of unmitigated quantum mechanics is that there are lower limits on the uncertainty of some measurements. For example, consider a particle P described by a historically ordered repetitive chain of events

$\Psi ^{\sf{P}} = \left( \sf{\Omega}_{1}, \sf{\Omega}_{2} \ \ldots \ \sf{\Omega}_{\it{i}} \ \ldots \ \sf{\Omega}_{\it{f}} \ \ldots \ \right)$

If P is isolated then the elapsed time between events $i$ and $f$ is

$\Delta t = \left( \, f-i \right) \hat{\tau}$

where $\hat{\tau}$ is the period. Consider measuring the elapsed time from observations of event indices and periods. The experimental uncertainty in repeated measurements of $\Delta t$ is written as $\delta t$. By the usual rules for assessing the propagation of experimental errors, this uncertainty is given by

$\delta t = \left( \delta f + \delta i \right) \hat{\tau} + \left( \, f-i \right) \delta \hat{\tau}$
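As a sketch of how this propagation rule could be evaluated numerically, assuming nothing beyond the formula above (the function and variable names are illustrative, not part of WikiMechanics):

```python
# Sketch of the linear error-propagation rule quoted above.
# Function and variable names are illustrative only.

def elapsed_time(i: int, f: int, period: float) -> float:
    """Elapsed time between events i and f of an isolated particle."""
    return (f - i) * period

def elapsed_time_uncertainty(i: int, f: int, period: float,
                             delta_i: float, delta_f: float,
                             delta_period: float) -> float:
    """delta_t = (delta_f + delta_i) * period + (f - i) * delta_period."""
    return (delta_f + delta_i) * period + (f - i) * delta_period
```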

As discussed previously, the uncertainty in the period is bounded by

$\delta \hat{\tau} \ge \hat{\tau} k_{S}$

And some unavoidable uncertainty is also associated with event indices. They are required to be integers, so rounding-off errors imply that $\delta i \ge 1/2$ and $\delta f \ge 1/2$. Substituting these bounds into the expression for $\delta t$ gives

$\delta t \ge \hat{\tau} + k_{S} \left(\, f-i \right) \hat{\tau}$

And since $i < f$ we know that $\, f - i \ge 1$ so

$\delta t \ge \left( 1 + k_{S} \right) \hat{\tau}$

or, using the relationship $E = h / \hat{\tau}$ between energy and period, in terms of the energy

$\delta t \ge h \left( 1 + k_{S} \right) /E$

Thus the uncertainty in a time measurement can be decreased by working with high energy particles. In contrast, the hypothesis of temporal homogeneity contends that $\delta E \ge k_{S} E$ where

$k_{S} = \frac{ 1}{2} \sqrt{ 1 + 1 / \pi \ \vphantom{\Sigma^{2}} } - \frac{ 1}{2}$

so the uncertainty in an energy measurement increases for more energetic particles. The two effects cancel in the product of the uncertainties

$\delta E \, \delta t \ge h k_{S} \left( 1 + k_{S} \right)$

Since $k_{S} \left( 1 + k_{S} \right) = 1 / 4 \pi$ exactly, this leaves the constant

$\delta E \, \delta t \ge h /4\pi$

This is one of Werner Heisenberg's uncertainty relationships.
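The following Python snippet is a quick numerical check, not part of the original derivation: it confirms that $k_{S} \left( 1 + k_{S} \right)$ equals $1/4\pi$ and evaluates the time bound for a sample energy (the electron rest energy, used purely as an illustrative value).

```python
# Numerical check of the constants in the uncertainty relationship above.
from math import pi, sqrt, isclose

k_S = 0.5 * sqrt(1 + 1 / pi) - 0.5            # ~ 0.074086
product = k_S * (1 + k_S)

print(f"k_S           = {k_S:.6f}")
print(f"k_S (1 + k_S) = {product:.6f}")       # ~ 0.079577
print(f"1 / (4 pi)    = {1 / (4 * pi):.6f}")  # ~ 0.079577
assert isclose(product, 1 / (4 * pi))         # the product is exactly 1/(4*pi)

# Sample evaluation of the time bound delta_t >= h (1 + k_S) / E,
# using the electron rest energy purely as an illustrative value.
h = 6.62607015e-34                            # Planck constant, J*s
E = 8.187e-14                                 # ~ electron rest energy, J
print(f"delta_t bound = {h * (1 + k_S) / E:.2e} s")   # ~ 8.7e-21 s
```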
Next step: cause and effect.