... not sure about the simulations you mentioned (or others) with regard to calculations and coding, but here's a good one (tho' it's only a little over a minute long): Galaxy Collision: Simulations vs Observations. It sure would be nice to know what was used, Newtonian mechanics or Einstein's general relativity. It's hard to believe that a simulation properly coded with the latter would fail to work.

It would be very interesting to see what assumptions they have made and what was actually coded into the simulation...

I've yet to finish bashing out the trig for my reworking of the sun - earth system...

There's a most excellent thread-in-progress over at the Bad Astronomy and Universe Today forum, especially with regard to proper order of performing calculations of the positions, vectors, etc. - Implementing Solar System Gravity Simulations

The goal there is to get the simulation to match observations over longer intervals of time (to make the simulation accurate), rather than to determine whether gravity works instantaneously or adheres to the principles of general relativity. Clearly the order in which calculations are done (among other things) makes a profound difference as to the accuracy of the simulation, and it's exciting to think that some proof might be derived from that to show just how fast gravity works. How does the trig you're working on compare to the Euler-Cromer method?
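For anyone who wants to play along, the Euler-Cromer (semi-implicit Euler) method mentioned above is only a few lines: update the velocity from the current acceleration first, then the position from the *new* velocity. A minimal sketch in Python - the constants and the one-hour step are my own illustrative choices, not anything taken from the BAUT thread:

```python
import math

# Toy Sun-Earth orbit via Euler-Cromer: velocity is updated from the
# current acceleration FIRST, then position is updated with the NEW
# velocity. This ordering keeps the orbit from spiralling outward the
# way plain (explicit) Euler does.

GM = 1.32712440018e20   # Sun's gravitational parameter, m^3/s^2
AU = 1.495978707e11     # astronomical unit, m

x, y = AU, 0.0          # start Earth at 1 AU
vx, vy = 0.0, 29780.0   # roughly circular orbital speed, m/s
dt = 3600.0             # one-hour step

for _ in range(24 * 365):            # integrate for about one year
    r = math.hypot(x, y)
    ax, ay = -GM * x / r**3, -GM * y / r**3
    vx += ax * dt                    # velocity first ...
    vy += ay * dt
    x += vx * dt                     # ... then position, with the new velocity
    y += vy * dt

print(math.hypot(x, y) / AU)         # stays close to 1.0 over the year
```

Swapping the last two update pairs (position first, then velocity) turns this into explicit Euler, and the orbital radius then drifts visibly over a year - a nice demonstration of how much the order of calculations matters.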

(edit) - I thought there were other experiments performed specifically to measure (or constrain) the rate at which the gravitational force propagates - I don't recall off the top of my head ... I'll have to check ...

Quote:

I would like to guess that the same effects acting there may well be applicable to the electron/photons and Young's Slits example...

The LIGOs and the LHC are very different experiments, but the neat thing is that both should provide clues on how to resolve the issues about how gravity and quantum mechanics "mix" :)

Quote:

There's a most excellent thread-in-progress over at the Bad Astronomy and Universe Today forum, especially with regard to proper order of performing calculations of the positions, vectors, etc. - Implementing Solar System Gravity Simulations

The goal there is to get the simulation to match observations over longer intervals of time (to make the simulation accurate)...

Interesting. I'll take a first look over a lunch break!

Quote:

How does the trig you're working on compare to the Euler-Cromer method?

I'm comparing three competing theories to see if they give the same numbers. I suspect there are some convenient cancellings-out that let things work.

Quote:

(edit) - I thought there were other experiments performed specifically to measure (or constrain) the rate at which the gravitational force propagates - I don't recall off the top of my head ... I'll have to check ...

Let me know what you find. My searches have found mainly just controversy about gravity "speed".

Quote:

The LIGOs and the LHC are very different experiments, but the neat thing is that both should provide clues on how to resolve the issues about how gravity and quantum mechanics "mix" :)

The Heisenberg Uncertainty Principle can be derived from wave arguments. It's really mathematical/classical wave properties re-dressed and re-interpreted. A few assumptions contribute to this:

- waves can be superposed.
- waves superpose linearly.
- waves can vary continuously in wavelength and frequency, with the product being ( phase ) velocity.
- such properties are independent of direction of propagation.
- such properties are independent of position.
- no medium is implied.

If you want to model a 'particle' as a summation of waves then that sum must be restricted/localised in space. To achieve this one needs to combine waves with a spread of wavelengths. That way, as one moves to either side of some 'centre', the waves become progressively out of phase with each other and hence tend to cancel out. In QM this would imply a lessening of probability either side of some maximal region.

To achieve a 'tighter' packet one needs to introduce into the wave mix those that will interfere more at a given distance from the peak. This simply means choosing a greater spread of wavelengths, so that the waves are further out of phase at that distance - hence cancelling to near or at zero. Conversely a narrower spread of wavelengths will cancel only at longer distances from the centre. This implies an inverse relationship between wavelength spread and packet/particle 'width'.
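That inverse relationship is easy to see numerically: superpose plane waves over two different spreads of wavenumber and compare how wide the resulting packets are. A sketch - the grid, the particular spreads, and the 10%-of-peak width measure are purely illustrative choices:

```python
import numpy as np

# Linearly superpose cos(k*x) waves over two spreads of wavenumber k,
# both centred on the same k0. Wider spread in k -> narrower packet in x.

x = np.linspace(-50.0, 50.0, 2001)
k0 = 5.0

def packet(dk, n=200):
    ks = np.linspace(k0 - dk, k0 + dk, n)
    return sum(np.cos(k * x) for k in ks) / n   # waves add linearly

narrow_spread = np.abs(packet(dk=0.2))   # small spread of wavelengths
wide_spread   = np.abs(packet(dk=2.0))   # large spread of wavelengths

# Rough 'width' of the packet: span of points still above 10% of the peak.
def width(env):
    above = x[env > 0.1 * env.max()]
    return above.max() - above.min()

print(width(narrow_spread), width(wide_spread))
```

The narrow spread of wavelengths produces a packet roughly ten times wider than the wide spread, which is exactly the inverse relationship described above.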

In QM momentum is associated linearly with wavenumber, or inverse wavelength, ie:

p ~ k ~ 1/wavelength

so combining this with the inverse spread/width relationship above gives

delta(p) * delta(x) > some minimum

and that 'some minimum' includes PI, and with some experimental input you get Planck's constant involved.
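As a concrete check of that minimum: a Gaussian packet saturates it. Measure the spread in x directly and the spread in k from the Fourier transform, and the product comes out at 1/2, so delta(x)*delta(p) = hbar/2 once p = hbar*k is attached. A sketch, with grid sizes chosen purely for numerical convenience:

```python
import numpy as np

# Gaussian wave packet: compute delta(x) from |psi|^2 and delta(k) from
# the Fourier transform |phi|^2, then check the uncertainty product.

N, L = 4096, 200.0
x = np.linspace(-L/2, L/2, N, endpoint=False)
sigma = 3.0
psi = np.exp(-x**2 / (4 * sigma**2))            # |psi|^2 has std-dev sigma

prob_x = np.abs(psi)**2
prob_x /= prob_x.sum()
dx = np.sqrt((prob_x * x**2).sum())             # spread in position

phi = np.fft.fftshift(np.fft.fft(psi))
k = np.fft.fftshift(np.fft.fftfreq(N, d=L/N)) * 2 * np.pi
prob_k = np.abs(phi)**2
prob_k /= prob_k.sum()
dk = np.sqrt((prob_k * k**2).sum())             # spread in wavenumber

print(dx * dk)    # ~0.5: the Gaussian is the minimum-uncertainty case
```

Any non-Gaussian packet built the same way gives a product strictly larger than 1/2, which is why the relation is an inequality.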

So again : QM uncertainty arose out of the 'wave mechanics' approach, which was later melded to the matrix/operator methods.

Similarly, if one considers wave packets in the time domain then the same logic applies. QM associates energy with frequency, or inverse time, and you get

delta(E) * delta(t) > some minimum

The pairs (p,x) and (E,t) are 'conjugate' in that they participate in these uncertainty relations.

Cheers, Mike.

( edit ) That is, each of a pair is conjugate to the other in the same pair.

( edit ) It can be helpful to think some problems through in, say, momentum space. This is 3D but the units are in inverse lengths [ m^(-1) ]. Similarly for an energy co-ordinate, which has dimensions of inverse time. The special relativity 4-vector (E,p) uses both, and with QM represents oscillation in time along with oscillation in distance! Diffraction from crystals, even classically explained, involves all of the above thinking.

I have made this letter longer than usual because I lack the time to make it shorter ...

... and my other CPU is a Ryzen 5950X :-) Blaise Pascal

As for solar system models, you could do pretty well with Special Relativity alone - thus incorporating the correct time delays etc.

Discussions of the 'speed of gravity' are somewhat moot in that we aren't detecting gravitons anyhows. We are detecting photons and that determines our timings between reference frames. If cause and effect are then deduced to hold assuming such an approach, you'd conclude that gravity and light effects propagate at the same speed.

The Earth's gravitational field strength at its surface is 9.8 m/s^2, and ~ 1/3600 of that at the Moon ( Earth/Moon separation ~ 60x Earth's radius, and 60^2 = 3600 ). This is typical of the Solar system generally - you only get significant field strengths close in to large bodies. So the error in approximating curved ( GR ) spacetime with flat ( SR ) is not as big as you'd think when doing large-separation orbital stuff. Mercury is the famous exception, and even so the GR effect is arc-seconds per century. Admittedly small perturbations/residuals do accumulate to significance given long enough, but it can be hard to distinguish model inaccuracies from erroneous initial conditions.

Poincaré ( in classical terms ) realised this difficulty, hence what is now termed 'chaos', 'sensitive dependence upon initial conditions', or butterflies and whatnot. For simulations it is almost a rule that the 'step time' needs to be honed relative to some desired upper bound upon error. Specifically, nearer approaches lead to greater accelerations and hence require smaller intervals in the calculations to keep error down, compared with further out.
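One common way to hone the step time - a generic sketch, not what any particular simulator does - is to tie dt to the local free-fall timescale so it shrinks automatically at close approaches. Illustrated here on a deliberately eccentric two-body orbit in scaled units, using energy drift as the error gauge:

```python
import math

# Adaptive step: dt tied to the local timescale sqrt(r/|a|), scaled by
# a tolerance eta, so dt is small near perihelion and relaxed far out.
# Euler-Cromer updates as before; units scaled so GM = 1.

GM = 1.0
x, y = 1.0, 0.0
vx, vy = 0.0, 0.5          # well below circular speed -> eccentric orbit
eta = 1e-3

def energy(x, y, vx, vy):
    return 0.5 * (vx**2 + vy**2) - GM / math.hypot(x, y)

E0 = energy(x, y, vx, vy)
t = 0.0
while t < 20.0:            # several orbits of this eccentric ellipse
    r = math.hypot(x, y)
    a = GM / r**2
    dt = eta * math.sqrt(r / a)      # shrinks near close approach
    vx += -GM * x / r**3 * dt
    vy += -GM * y / r**3 * dt
    x += vx * dt
    y += vy * dt
    t += dt

print(abs(energy(x, y, vx, vy) - E0))   # energy drift stays small
```

With a fixed dt sized for the far part of the orbit, the energy error at each perihelion passage is far larger - which is the "nearer approaches require smaller intervals" point in miniature.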

You could even go back to Roemer's observations of the Jovian moons ..... :-)

Cheers, Mike.

( edit ) Believe it or not, the Moon 'falls' less than a quarter of an inch per second toward the Earth! ( away from the tangent line it would have taken had the Earth not been there )
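That figure checks out from the 1/3600 scaling alone: scale surface gravity down by the square of the distance ratio, then use d = (1/2) g t^2 for the fall in one second.

```python
# Quick check of the 'quarter of an inch per second' figure.

g_surface = 9.8                 # m/s^2 at Earth's surface
g_moon = g_surface / 60**2      # ~1/3600 of that at the Moon's distance

fall_per_second = 0.5 * g_moon * 1.0**2          # metres fallen in 1 s
print(fall_per_second / 0.0254)                  # same, in inches
```

The result is a few hundredths of an inch - comfortably under a quarter of an inch.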


So is the Heisenberg Uncertainty Principle then more a limitation for performing measurements/observations, and not so much a fundamental restriction on anything that has energy/position/momentum? Are classical analytical attempts to understand the QM interactions (the couplings?) hampered by not ever being able to know the exact initial conditions, which then leads to dependence on statistics and probabilities for any analysis?

In the double-slit experiment, how can there not be some kind of electron-electron interaction taking place between the test particle and the material with the slits? The interference pattern doesn't occur without there first being some kind of material that has two properly spaced voids. Further, any electron-electron interaction necessarily involves photons as well, and it sort of makes sense then, that these photons are what interfere, in the process of mediating the path of the particles during the coupling (or series of them) - the action of a coupling (the wavy line in Feynman diagrams) must happen at light-speed, and if Huygens' principle is used for the photons involved in the couplings, then might that be how the particles can appear to have interfered while also appearing to have gone through both slits at once? In other words, it's not that the particles feel their way around, but more like the photons of interactions do, which then arrive back in the vicinity of the coupling with wave after wave of photon-tested results, so to speak, and these results add/cancel in the process of mediation of where (and where not) the particle should ultimately arrive at the detector - Are there experiments/observations that prove for certain that no interaction whatsoever takes place between the test particle and the material with the slits?

Quote:

So is the Heisenberg Uncertainty Principle then more a limitation for performing measurements/observations, and not so much a fundamental restriction on anything that has energy/position/momentum?

Ah, I see you are on the cusp of understanding that you don't understand quazee kwantum mechanics! But that's OK, as no-one else does ...... :-)

They are the same thing, in that QM requires a thing to be 'true' only if it can be measured to be that. Einstein railed against this stuff with much the same gut reaction that you are having, hence his 'God doesn't play dice ....'. Planck and many others couldn't stand it either.

Quote:

Are classical analytical attempts to understand the QM interactions (the couplings?) hampered by not ever being able to know the exact initial conditions, which then leads to dependence on statistics and probabilities for any analysis?

Lack of precision in initial conditions is a worry with QM or classical. The difference is that QM proposes that error is not ever reducible to zero. It's really a matter of scale, in that classical theory arose from modelling of things at a much larger size than the scenarios that QM was later developed to describe. Planck's constant defines the scale at which QM begins to dominate.

Also the type of statistics used is not that of 'ordinary large numbers' of things but the quite peculiar rules for combining amplitudes, which are then 'squared' to get a probability. Remember that the interference patterns we are discussing can be experimentally created using literally one photon/electron at a time; each particle is 'alone' in the apparatus, and its fate is no different whether or not any others are about. Set up the gadget and go sailing for a few months - hence the thought that the particle interferes with itself. So we aren't using the statistical/stochastic/randomness methods that we apply to a can of gas molecules, say.

So a baseball, at human scale, has a de Broglie wavelength of some ridiculously small amount and is so massive compared to individual atoms that there is no way one could ever tickle any wave/interference/probability behaviours out of it. Now gradually reduce that ball, throwing chunks of atoms away to pare it down to only handfuls of atoms/molecules. Then the momentum and size are such that it now matters for prediction/explanation that their product is subject to an uncertainty relation. Infinite precision to yield a 'points with paths' model is not possible.
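For a sense of those scales, compare de Broglie wavelengths via lambda = h / (m v). The baseball's mass and speed below are just typical ballpark numbers of my choosing:

```python
# De Broglie wavelength comparison: a thrown baseball vs a slow electron.

h = 6.626e-34                      # Planck's constant, J*s
m_ball, v_ball = 0.145, 40.0       # ~145 g baseball at ~40 m/s
m_e, v_e = 9.109e-31, 1.0e6        # electron at 10^6 m/s

lam_ball = h / (m_ball * v_ball)   # ~1e-34 m: absurdly sub-nuclear
lam_e = h / (m_e * v_e)            # ~7e-10 m: about atomic spacing

print(lam_ball, lam_e)
```

The electron's wavelength is comparable to atomic spacings, which is why crystals diffract electrons; the baseball's is some twenty-five orders of magnitude smaller than an atom, which is why no apparatus could ever show it interfering.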

We have discovered the universe in an unhelpful order. Had we first known of how little things work, then extending that to larger stuff would be simple. We'd say something like 'oh, the interference fringes just merge as it gets bigger' or 'diffraction subsides and you can use the new Point Model Approximation that the weird Darktar Hewson invented, and look up the Continuous Path Equations of Professor Chipper'. But that is an inevitable penalty of being complex macroscopic organisms trapped within the thing we wish to explain.

Quote:

In the double-slit experiment, how can there not be some kind of electron-electron interaction taking place between the test particle and the material with the slits? The interference pattern doesn't occur without there first being some kind of material that has two properly spaced voids.

Quite right, many potential travellers from source to detection plane may be interrupted by whacking into a wall of atoms; some aren't, and are able to reach the distant side. We see the latter and not the former. You could have a grid of detectors on the source side of the wall, but that would be of no interest in explaining how the other electrons behaved while passing through the slits.

Quote:

Further, any electron-electron interaction necessarily involves photons as well, and it sort of makes sense then, that these photons are what interfere, in the process of mediating the path of the particles during the coupling (or series of them) - the action of a coupling (the wavy line in Feynman diagrams) must happen at light-speed, and if Huygens' principle is used for the photons involved in the couplings, then might that be how the particles can appear to have interfered while also appearing to have gone through both slits at once? In other words, it's not that the particles feel their way around, but more like the photons of interactions do, which then arrive back in the vicinity of the coupling with wave after wave of photon-tested results, so to speak, and these results add/cancel in the process of mediation of where (and where not) the particle should ultimately arrive at the detector - Are there experiments/observations that prove for certain that no interaction whatsoever takes place between the test particle and the material with the slits?

I think you're getting at some sort of 'advanced wave' hypothesis here. Indeed I believe this was tried, but I can't recall the objections that buried it. Certainly one is tempted to try this line of thinking, as experimental results do tend to suggest some sort of 'try before you buy' behaviour. I think a lot of theorists have shied away from this area as it drives a Mack truck right through our cherished and comfortable intuitions about causality. To be absolutely consistent you'd have to hypothesise this stuff across universal scale, so that the detection of a photon from the Andromeda Galaxy would be subject to it! It would need a complete re-definition of time too, and not just comparisons across observer stations ( the Relativities ) but at each observation point. Good thinking Chipper! :-)

Cheers, Mike.

( edit ) Perhaps a stronger argument for particles being fundamentally restricted, rather than just our measurements being restricted, is the stability of atoms. There is no classical explanation for why things just don't sit still on top of each other, like an electron on a proton. QM/Heisenberg says the two must wriggle out/away from each other. The act of confining creates a base energy level ( -13.6 eV in the Hydrogen atom ) which cannot be gone under. Classically the lower limit on the energy of interaction in Hydrogen is minus infinity @ separation zero @ velocity zero, after energies have radiated away - and rather quickly at that!
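That -13.6 eV figure can be recovered straight from the constants, via the standard ground-state formula E_1 = -m e^4 / (2 hbar^2 (4 pi eps0)^2):

```python
import math

# Hydrogen ground-state energy from CODATA-style constants.

m_e  = 9.1093837e-31     # electron mass, kg
e    = 1.60217663e-19    # elementary charge, C
hbar = 1.05457182e-34    # reduced Planck constant, J*s
eps0 = 8.8541878e-12     # vacuum permittivity, F/m

E1_joules = -m_e * e**4 / (2 * hbar**2 * (4 * math.pi * eps0)**2)
print(E1_joules / e)     # about -13.6 (eV)
```

The same expression falls out of a Heisenberg-style estimate: confine the electron to radius r, charge it delta(p) ~ hbar/r of kinetic energy, and minimise the total - confinement itself sets the floor.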


You could consider the interference pattern from two slits as being due to the sum of paths for propagation from source to detector without any screen present, minus the subset of those paths that are then excluded by placing the barriers in! So when no obstruction is present, I divide the set of all paths between some point A and some point B into two classes : those that will remain in the calculation ( because they are where a slit will be later 'placed' ) and those that will be tossed out of the integral ( because they are where a wall will be put ). This is valid because to calculate an amplitude, and thus a probability, for a point B ( on the detection plane ) I only need to sum the paths that refer to a particle arrival at point B. If a wall, or any other situation, prevents said arrival then it deserves a deduction of a path(s) from the history integral.
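A toy version of that bookkeeping: give each surviving path a complex amplitude exp(i k r), leave the blocked paths out of the sum, and square the total. Only the straight slit-to-screen paths are kept here, and the geometry numbers are arbitrary:

```python
import cmath, math

# 'Sum the paths that can reach point B': each open slit contributes
# exp(i*k*r) for the straight path from slit to screen point; paths
# through the wall are simply excluded from the sum. Probability ~ |sum|^2.
# Assumed geometry: slits at +/- d/2, detection plane at distance D.

wavelength, d, D = 1.0e-3, 0.01, 1.0
k = 2 * math.pi / wavelength

def intensity(y, open_slits):
    slit_positions = {'upper': +d/2, 'lower': -d/2}
    amp = sum(cmath.exp(1j * k * math.hypot(D, y - slit_positions[s]))
              for s in open_slits)
    return abs(amp)**2

both = [intensity(y * 1e-3, ['upper', 'lower']) for y in range(-200, 201)]
one  = [intensity(y * 1e-3, ['upper'])          for y in range(-200, 201)]

print(max(both), max(one))
```

With both slits open the cross term pushes the brightest fringe to four times a single slit's level (and the darkest to zero), whereas the single open slit gives a flat unit level in this crude straight-path model - the signature of summing amplitudes before squaring.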

'Other situations' could be some path detector apparatus near the slit(s) that puts some paths out of inclusion in a history sum ( for a given target point to detect at ). We visualise this as the particle being knocked 'off course' because the path is no longer toward the detector at point B. Instead it may go to some point C ( also on the detection plane ) and contributes to counts there. Likewise point B's counts may include some detections that would have been recorded elsewhere if not for some deflection towards it. Smearing, if you like.

However still keep in mind the crucial principle that the pattern is qualitatively different for when we do path detection compared with when we don't. When we do path detection we are actually doing a different experiment, and our calculations/statistics at the target plane now include propagators from source to some path detection point ( near a slit ) and again from that intermediate point to the screen. So QM implies not just some irreducible amount of disturbance with measurement, but also a different type of disturbance - the occurrence of 'mid-flight' detection totally changes the path sets considered to sum to an amplitude! The maths changes and cross/interference terms ( from the other slit ) are lost.

Gradually tone down the level of disturbance produced by a path detector ( increase the wavelengths or reduce the energies of whatever we are probing with ) and the fringes will return to the pattern, but now you don't-know/can't-resolve which slit had a particle pass through it, and the paths for both slits now contribute to the integral! Our path detectors have now been relegated to simply informing us that a particle has reached the detection plane side of the barrier.

As mentioned earlier : at any time during the transition from fringes to smear and back, by varying the path probe energy continuously, one can use near co-incidence ( or lack of it ) between 'path' detectors and 'target' detectors to separate out two co-habiting groups of particles. The first subset shows interference fringes, but no path information. The second subset shows a smear, but contains 'which slit' information. The second group numerically dominates the total of all detections when the path detectors are set one way, ie. higher energy/momenta. The first group numerically dominates when the path detectors are set to lower energy/momenta. Slide the path detector/probe energies from low to high in a continuous manner and you will get a gradual change in these subset proportions of the total detected.
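That changing mix can be caricatured with a single fraction f of 'no path information' detections (a stand-in for whatever the probe energy actually controls, not a physical model of the probe): the observed pattern is the weighted blend of the two subsets, and the fringe visibility simply tracks f.

```python
import math

# Blend of the two co-habiting subsets: a fraction f shows fringes
# (|psi1 + psi2|^2, with the cross term), the rest shows a smear
# (|psi1|^2 + |psi2|^2, no cross term). Equal-amplitude slits assumed.

def pattern(phase, f):
    fringes = 2 * (1 + math.cos(phase))   # interference, cross term present
    smear   = 2                           # cross term lost
    return f * fringes + (1 - f) * smear

def visibility(f):
    samples = [pattern(p * 0.01, f) for p in range(0, 629)]   # one full period
    hi, lo = max(samples), min(samples)
    return (hi - lo) / (hi + lo)

for f in (1.0, 0.5, 0.0):
    print(round(visibility(f), 2))
```

Visibility comes out equal to f: full fringes at f = 1, a featureless smear at f = 0, and a smooth fade in between - the continuous trade-off described above.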

When you look, nature swaps new rules in! :-)

Cheers, Mike.


Thanks Mike, everything you pointed out is helpful, as usual :)

Quote:

When you look, nature swaps new rules in! :-)

Despite the probabilities in the rules, the interference pattern doesn't look like something that would result from dice being cast. But whatever form the particle (or atom or molecule) is in while 'non-local', somehow nothing more than a photon is required to 'localize' it ... I'm stumped at the moment, but can't help thinking that the speed of light seems very slow: considering the range of velocities between zero and instantaneous, the speed of light is far closer to the former ... that, and isn't there also a probability (amplitude) for a photon to go faster than c?

edit: Would there necessarily be any limit to how fast a virtual photon could propagate?



Late edit: Yup, Jupiter passed in front of quasar and an experiment was performed to measure the speed of gravity, see: Physicist Defends Einstein's Theory and 'Speed of Gravity' Measurement


Look at the follow-ups as to what "speed" (or rather the "speed" of what) was actually measured...

I believe nothing new was measured there for gravity.

Cheers,

Martin

See new freedom: Mageia Linux

Take a look for yourself: Linux Format

The Future is what We all make IT (GPLv3)


Quote:

Let me know what you find. My searches have found mainly just controversy about gravity "speed".

Indeed so, and hopefully so!

Cheers,

Martin

ps: Still musing over the last few posts!


## The Heisenberg Uncertainty

)

The Heisenberg Uncertainty Principle can be derived from wave arguments. It's really mathematical/classical wave properties re-dressed and re-interpreted. A couple of assumptions contribute to this:

- waves can be superposed.

- waves superpose linearly.

- waves can vary continuously in wavelength and frequency, with the product being ( phase ) velocity.

- such properties are independent of direction of propagation.

- such properties are independent of position.

- no medium is implied.

If you want to model a 'particle' as a summation of waves then you would think that implies restriction/locality in space. To achieve this one needs to combine waves with a spread of wavelengths. That way as one moves either side of some 'centre' the waves will become progressively out of phase with each other and hence tend to cancel out. In QM this would imply a lessening of probability either side of some maximal region.

To achieve a 'tighter' packet one needs to introduce into the wave mix those that will interfere more at a given distance from the peak. This simply means you have to choose a greater spread of wavelengths, and as such will be more out of phase at that distance - hence cancelling to near or at zero. Conversely a narrower spread of wavelengths will cancel only at longer distances away from the centre. This implies an inverse relationship between wavelength spread and packet/particle 'width'.

In QM momentum is associated linearly with wavenumber, or inverse wavelength, ie:

p ~ k ~ 1/wavelength

so combining this

delta(p) * delta(x) > some minimum

and that 'some minimum' includes PI, and with some experimental input you get Planck's constant involved.

So again : QM uncertainty arose out of the 'wave mechanics' approach, which was later melded to the matrix/operator methods.

Similiarly if one considers wave packets in the time domain then the same logic applies. QM associates energy with frequency, or inverse time, and you get

delta(E) * delta(t) > some minimum

The pairs (p,x) and (E,t) are 'conjugate' in that they participate in these uncertainty relations.

Cheers, Mike.

( edit ) That is, each of a pair is conjugate to the other in the same pair.

( edit ) It can be helpful to think some problems through in, say, momentum space. This is 3D but the units are in inverse lengths [ m^(-1) ]. Similarly for an energy co-ordinate, which has dimensions of inverse time. The special relativity 4-vector (E,p) uses both, and with QM represents oscillation in time along with oscillation in distance! Diffraction from crystals, even classically explained, involves all of the above thinking.

I have made this letter longer than usual because I lack the time to make it shorter ...

... and my other CPU is a Ryzen 5950X :-) Blaise Pascal

## As for solar system models,

)

As for solar system models, you could do pretty well with Special Relativity alone - thus incorporating the correct time delays etc.

Discussions of the 'speed of gravity' are somewhat moot in that we aren't detecting gravitons anyhows. We are detecting photons and that determines our timings between reference frames. If cause and effect are then deduced to hold assuming such an approach, you'd conclude that gravity and light effects propagate at the same speed.

The Earth's gravitational field strength at it's surface is 9.8 m/sec, and ~ 1/3600 of that same field at that moon ( Earth/Moon separation ~ 60x Earth's radius ). This is typical of the Solar system generally - you only get significant field strengths close in to large bodies. So the error in approximating curved ( GR ) spacetime with flat ( SR ) is not as big as you'd think when doing large separation orbital stuff. Mercury is the famous exception, and even so the GR effect was arc-seconds per century. Admittedly small perturbations/residuals do accumulate to significance given long enough, but it can be hard to distinguish model inaccuracies vs. erroneous initial conditions.

Poincare ( in classical terms ) realised this difficulty, hence what is now termed 'chaos', 'sensitive dependence upon initial conditions' or butterflies and whatnot. For simulations it is almost a rule that the 'step time' needs to be honed relative to some desired upper bound upon error. Specifically nearer approaches lead to greater accelerations and hence require smaller intervals in the calculations to save error compared with further out.

You could even go back to Roemer's observations of Jovian moon's ..... :-)

Cheers, Mike.

( edit ) Believe or not, the Moon 'falls' less than a quarter of an inch per second toward the Earth! ( away from the tangent line it would have taken had the Earth not been there )

I have made this letter longer than usual because I lack the time to make it shorter ...

... and my other CPU is a Ryzen 5950X :-) Blaise Pascal

## So is the Heisenberg

)

So is the Heisenberg Uncertainty Principle then more a limitation for performing measurements/observations, and not so much a fundamental restriction on anything that has energy/position/momentum? Are classical analytical attempts to understand the QM interactions (the couplings?) hampered by not ever being able to know the exact initial conditions, which then leads to dependence on statistics and probabilities for any analysis?

In the double-slit experiment, how can there not be some kind of electron-electron interaction taking place between the test particle and the material with the slits? The interference pattern doesn't occur without there first being some kind of material that has two properly spaced voids. Further, any electron-electron interaction necessarily involves also photons, and it sort of makes sense then, that these photons are what interfere, in the process of mediating the path of the particles during the coupling (or series of them) â€“ the action of a coupling (the wavy line in Feynman diagrams) must happen at light-speed, and if Huygen's principle is used for the photons involved in the couplings, then might that be how the particles can appear to have interfered while also appearing to have gone through both slits at once? In other words, it's not that the particles feel their way around, but more like the photons of interactions do, which then arrive back in the vicinity of the coupling with wave after wave of photon-tested results, so to speak, and these results add/cancel in the process of mediation of where (and where not) the particle should ultimately arrive at the detector â€“ Are there experiments/observations that prove for certain that no interaction whatsoever takes place between the test particle and the material with the slits?

## RE: So is the Heisenberg

)

Ah, I see you are on the cusp of understanding that you don't understand quazee kwantum mechanics! But that's OK, as no-one else does ...... :-)

They are the same thing, in that QM requires a thing to be 'true' only if it can be measured to be that. Einstein railed against this stuff with much the same gut reaction that you are having, hence his 'God doesn't play dice ....'. Planck and many others couldn't stand it either.

Lack of precision in initial conditions is a worry with QM or classical. The difference is that QM proposes that error is not ever reducible to zero. It's really a matter of scale, in that classical theory arose from modelling of things at a much larger size than the scenarios that QM was later developed to describe. Planck's constant defines the scale at which QM begins to dominate.

Also the type of statistics used is not that of 'ordinary large numbers' of things but the quite peculiar rules to combine amplitudes, which are then 'squared' to get a probability. Remember that the interference patterns we are discussing can be experimentally created using literally one photon/electron at a time, specifically each particle is 'alone' in the apparatus and it's fate is no different to whether any others are about when it is. Set up the gadget and go sailing for a few months, hence the thought that the particle interferes with itself. So we aren't using the statistical/stochastic/randomness methods that we apply to a can of gas molecules say.

So a baseball, at human scale, has a de Broglie wavelength of some ridiculously small amount and is so massive compared to individual atoms that there is no way one could ever tickle any wave/interference/probability behaviours out of it. Now gradually reduce that ball, keep throwing chunks of atoms away to pare it down to only hand fulls of atoms/molecules. Then the momentum and size are such that it now matters for prediction/explanation that their product is subject to an uncertainty relation. Infinite precision to yield a 'points with paths' model is not possible.

We have discovered the universe in an unhelpful order. Had we first known of how little things work, then extending that to larger stuff would be simple. We'd say something like 'oh, the interference fringes just merge as it gets bigger' or 'diffraction subsides and you can use the new Point Model Approximation that the weird Darktar Hewson invented, and look up the Continuous Path Equations of Professor Chipper'. But that is an inevitable penalty of being complex macroscopic organisms trapped within the thing we wish to explain.

Quite right: many potential travellers from source to detection plane are interrupted by whacking into a wall of atoms; some aren't, and reach the far side. We see the latter and not the former. You could have a grid of detectors on the source side of the wall, but that would be of no interest in explaining how the other electrons behaved while passing through the slits.

I think you're getting at some sort of 'advanced wave' hypothesis here. Indeed I believe this was tried, but I can't recall the objections that buried it. Certainly one is tempted to try this line of thinking, as experimental results do tend to suggest some sort of 'try before you buy' behaviour. I think a lot of theorists have shied away from this area as it drives a Mack truck right through our cherished and comfortable intuitions about causality. To be absolutely consistent you'd have to hypothesise this stuff at universal scale, so that the detection of a photon from the Andromeda Galaxy would be subject to it! It would need a complete re-definition of time too, and not just comparisons across observer stations ( the Relativities ) but at each observation point. Good thinking Chipper! :-)

Cheers, Mike.

( edit ) Perhaps a stronger argument that particles are fundamentally restricted, rather than just our measurements of them, is the stability of atoms. There is no classical explanation for why things don't just sit still on top of each other, like an electron on a proton. QM/Heisenberg says the two must wriggle out/away from each other. The act of confining creates a base energy level ( -13.6 eV in the hydrogen atom ) below which the system cannot go. Classically the lower limit on the energy of interaction in hydrogen is minus infinity @ separation zero @ velocity zero, after the energies have radiated away, and rather quickly at that!
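
That floor can be estimated straight from the uncertainty relation: take p ~ ħ/r, so E(r) ≈ ħ²/(2mr²) − e²/(4πε₀r), and find the minimum numerically ( a sketch, not a full QM treatment ):

```python
# Back-of-envelope check of the -13.6 eV floor: substitute p ~ hbar/r from the
# uncertainty relation and minimise E(r) = hbar^2/(2 m r^2) - e^2/(4 pi eps0 r).
import numpy as np

hbar = 1.0546e-34   # reduced Planck constant, J*s
m_e  = 9.109e-31    # electron mass, kg
e    = 1.602e-19    # elementary charge, C
eps0 = 8.854e-12    # vacuum permittivity, F/m

r = np.linspace(1e-11, 5e-10, 100000)               # trial radii, metres
E = hbar**2/(2*m_e*r**2) - e**2/(4*np.pi*eps0*r)    # kinetic + Coulomb terms

i = np.argmin(E)
print(f"minimising radius: {r[i]:.3e} m")   # ~5.3e-11 m, the Bohr radius
print(f"minimum energy:    {E[i]/e:.2f} eV")  # ~ -13.6 eV
```

The kinetic term blows up as r shrinks, which is exactly the 'wriggle away' that stops the electron sitting on the proton.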

I have made this letter longer than usual because I lack the time to make it shorter ...

... and my other CPU is a Ryzen 5950X :-) Blaise Pascal

## Further thoughts

You could consider the interference pattern from two slits as being due to the sum over paths for propagation from source to detector without any screen present, minus the subset of those paths that are excluded by placing the barriers in! So when no obstruction is present, I divide the set of all paths between some point A and some point B into two classes: those that will remain in the calculation ( because they are where a slit will later be 'placed' ) and those that will be tossed out of the integral ( because they are where a wall will be put ). This is valid because to calculate an amplitude, and thus a probability, for a point B ( on the detection plane ) I only need to sum the paths that refer to a particle arrival at point B. If a wall, or any other situation, prevents said arrival then those paths deserve deduction from the history integral.
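
A toy numerical version of that bookkeeping, with all the geometry ( distances, wavelength, slit positions ) made up purely for illustration: propagation is a sum of phases over intermediate points on the barrier plane, and inserting a wall just deletes the blocked terms from the sum.

```python
# Toy 'sum over paths' in 2D: the amplitude at a screen point is a sum of
# phases exp(i*k*L) over intermediate points on the barrier plane. A wall
# removes its terms from the sum; slits are simply the terms that survive.
import numpy as np

lam = 1.0                          # wavelength ( arbitrary units, assumed )
k = 2*np.pi/lam
ys = np.linspace(-20, 20, 4001)    # intermediate points on the barrier plane
d_src, d_scr = 50.0, 50.0          # source->barrier and barrier->screen distances

def amplitude(x_screen, keep):
    """Sum exp(ikL) over the intermediate points flagged True in `keep`."""
    L = np.hypot(d_src, ys) + np.hypot(d_scr, ys - x_screen)
    return np.sum(np.exp(1j*k*L[keep]))

everything = np.ones_like(ys, dtype=bool)   # no barrier: keep every path
slits = (np.abs(ys - 2.0) < 0.25) | (np.abs(ys + 2.0) < 0.25)  # two openings

xs = np.linspace(-15, 15, 301)
open_pattern = [abs(amplitude(x, everything))**2 for x in xs]  # free propagation
slit_pattern = [abs(amplitude(x, slits))**2 for x in xs]       # shows fringes
```

The barrier never 'does' anything to the surviving paths; it only decides which terms stay in the integral, which is the point of the argument above.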

'Other situations' could be some path detector apparatus near the slit(s) that puts some paths out of inclusion in a history sum ( for a given target point to detect at ). We visualise this as the particle being knocked 'off course' because the path is no longer toward the detector at point B. Instead it may go to some point C ( also on the detection plane ) and contributes to counts there. Likewise point B's counts may include some detections that would have been recorded elsewhere if not for some deflection towards it. Smearing, if you like.

However still keep in mind the crucial principle that the pattern is qualitatively different for when we do path detection compared with when we don't. When we do path detection we are actually doing a different experiment, and our calculations/statistics at the target plane now include propagators from source to some path detection point ( near a slit ) and again from that intermediate point to the screen. So QM implies not just some irreducible amount of disturbance with measurement, but also a different type of disturbance - the occurrence of 'mid-flight' detection totally changes the path sets considered to sum to an amplitude! The maths changes and cross/interference terms ( from the other slit ) are lost.
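
In the standard two-slit algebra that loss of cross terms looks like this ( a far-field sketch with assumed unit amplitudes and made-up geometry ):

```python
# Far-field toy model: phases picked up via two narrow slits a distance d apart.
import numpy as np

lam, d = 1.0, 5.0                      # wavelength and slit separation (a.u., assumed)
k = 2*np.pi/lam
theta = np.linspace(-0.3, 0.3, 601)    # angles to points on the screen

A1 = np.exp(1j*k*( d/2)*np.sin(theta))  # amplitude via slit 1
A2 = np.exp(1j*k*(-d/2)*np.sin(theta))  # amplitude via slit 2

# No path detection: amplitudes add first, then square.
# |A1+A2|^2 = |A1|^2 + |A2|^2 + 2*Re(A1*conj(A2))  <- the interference cross term
coherent = np.abs(A1 + A2)**2           # fringes, peaking at 4

# Path detection done: probabilities add, cross term gone.
incoherent = np.abs(A1)**2 + np.abs(A2)**2   # flat smear at 2
```

Same two amplitudes in both lines; the only difference is whether they are added before or after squaring, and that is the whole qualitative change.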

Gradually tone down the level of disturbance produced by a path detector ( increase the wavelengths, or reduce the energies, of whatever we are probing with ) and the fringes will return, but now you don't-know/can't-resolve which slit had a particle pass through it, and the paths for both slits now contribute to the integral! Our path detectors have now been relegated to simply informing us that a particle has reached the detection-plane side of the barrier.

As mentioned earlier : at any time during the transition from fringes to smear and back, by varying the path-probe energy continuously, one can use near coincidence ( or lack of it ) between 'path' detectors and 'target' detectors to separate out two co-habiting groups of particles. The first subset shows interference fringes, but no path information. The second subset shows a smear, but contains 'which slit' information. The second group numerically dominates the total of all detections when the path detectors are set one way, ie. higher energy/momenta. The first group dominates when they are set to lower energy/momenta. Slide the path detector/probe energies from low to high in a continuous manner and you will get a gradual change in the mix/proportions of these subsets within the total detected.
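
That continuous slide can be caricatured as a mixture: suppose a fraction p of particles get 'tagged' by the probe ( contributing a smear ) and the rest don't ( contributing fringes ). The fringe visibility then fades smoothly as p rises; p here is just an assumed knob, not anything measured.

```python
# Mixture caricature of partial path detection: a fraction p of detections
# come from the 'tagged' (smeared) subset, 1-p from the fringing subset.
import numpy as np

def pattern(theta, p, lam=1.0, d=5.0):
    """Screen intensity for tagged fraction p (geometry values are assumed)."""
    k = 2*np.pi/lam
    fringes = 2 + 2*np.cos(k*d*np.sin(theta))  # |A1+A2|^2 for unit amplitudes
    smear = 2.0                                # |A1|^2 + |A2|^2, no cross term
    return (1 - p)*fringes + p*smear

theta = np.linspace(-0.3, 0.3, 601)
for p in (0.0, 0.5, 1.0):
    I = pattern(theta, p)
    vis = (I.max() - I.min())/(I.max() + I.min())   # standard fringe visibility
    print(f"tagged fraction {p:.1f}: visibility {vis:.2f}")
```

Visibility runs from ~1 ( all fringes ) through intermediate values down to 0 ( all smear ), matching the gradual change in subset proportions described above.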

When you look, nature swaps new rules in! :-)

Cheers, Mike.

## Thanks Mike

Thanks Mike, everything you pointed out is helpful, as usual :)

Despite the probabilities in the rules, the interference pattern doesn't look like something that would result from dice being cast. But whatever form the particle (or atom or molecule) is in while 'non-local', somehow nothing more than a photon is required to 'localize' it ... I'm stumped at the moment, but can't help thinking that the speed of light seems very slow: considering the range of velocities between zero and instantaneous, the speed of light is far closer to the former. That, and isn't there also a probability (amplitude) for a photon to go faster than c?

edit: Would there necessarily be any limit to how fast a virtual photon could propagate?