Introduction to Delayed Choice Quantum Eraser Experiment
Physicist Thomas Campbell argues in his lectures about the nature of reality that we are living in a virtual reality. He comes to this conclusion through quantum physics, or to be more precise, through the results of the double slit experiment. This article explains the setup and results of this experiment in easily understandable terms and then lays out Tom's reasoning.
The content of this entire article is also available as a video, which might present the key points in even more comprehensible terms thanks to animated graphics:
1. General introduction to the double slit experiment
The basics of the double slit experiment have already been summarized in a different article in the Knowledge Base, so only the key points will be revisited here:
This experiment originated from the question of whether light is a wave or a particle. When a wave hits a single slit it produces an effect called diffraction: a new circular wave is created behind the slit. With two slits next to each other, the two newly created waves interact with each other and create an interference pattern.
In the double slit experiment photons are shot at a double slit, and the result is monitored on a screen behind it. If a photon - and you could also say light - behaves like a wave, then the image on the screen should show an interference pattern of the two newly created waves behind the double slit, as shown in this image. If, on the other hand, a photon behaves like a particle, we would simply expect two bars on the screen: each particle would fly either through the left or the right slit and would cause a spot of light on the screen behind that slit. Repeating the experiment many times would produce two bars formed by the accumulated individual dots.
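The two expected screen patterns can be sketched numerically. This is a minimal illustration; all numerical values (wavelength, slit separation, bar positions) are invented for the example and do not come from the article:

```python
import math

# Far-field double-slit fringes: I(x) ~ cos^2(pi * d * x / (lam * L))
# Illustrative values only (not from the article):
lam = 500e-9   # wavelength of the light in metres
d   = 50e-6    # slit separation in metres
L   = 1.0      # distance from the slits to the screen in metres

def wave_intensity(x):
    """Wave picture: interference of the two diffracted waves at screen position x."""
    return math.cos(math.pi * d * x / (lam * L)) ** 2

def particle_intensity(x, bar_half_width=0.002):
    """Particle picture: one bar of light behind each slit (bars at x = +-1 cm)."""
    return 1.0 if min(abs(x - 0.01), abs(x + 0.01)) < bar_half_width else 0.0

# The fringe spacing lam * L / d is 1 cm here, so the wave picture predicts
# bright stripes at x = 0, +-1 cm, +-2 cm, ... while the particle picture
# predicts light only in the two bars.
```

The fringe formula is the standard far-field approximation; the point of the sketch is simply that the two hypotheses predict visibly different distributions on the screen.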
In the basic setup of the double slit experiment we always see an interference pattern on the screen. So photons apparently are not particles but behave like a wave, because they show diffraction at the slit and thus cause an interference pattern that can be measured on the screen.
If a photon detector is placed at each slit it becomes possible to identify each individual photon as it flies through either slit. After modifying the experiment in this way, the image on the screen suddenly shifts from an interference pattern to only two bars, one behind each slit, just as one would expect if photons were particles.
By performing a measurement at each of the slits, the photons change their behavior from that of a wave to that of a particle. Since measuring through which slit each photon traveled is not quite as simple as "placing a photon detector at each slit", the precise setup of the experiment will now be presented in the second chapter of this article.
2. Delayed Choice Quantum Eraser Experiment
This experimental setup shows the "delayed choice quantum eraser" experiment as it was proposed in 1982 by Scully and Drühl. At that time the required measurement equipment was not yet available, so the experiment was first performed in 1999. Even though the setup looks a little complicated, you will have understood it in a few minutes:
Behind the double slit a beta barium borate crystal is placed, which splits the photon from either slit into two identical entangled photons with half the frequency of the original photon. Why this splitting is required will become clear in a minute. The yellow lens redirects photons that traveled through the first slit, and thus are on the red path, and photons that traveled through the second slit, and thus are on the bluish path, so that either one hits sensor D0 at exactly the same location. The intention behind this is to be able to measure at sensor D0 whether a bar or a diffraction pattern is created by the photon, without being able to determine through which slit the photon originally traveled. The information about which slit it came from is deliberately erased by using only one sensor, which cannot distinguish which slit the photon came from.
Let's first take a look at the red path. A photon flies through the first slit and is split into two identical photons at the beta barium borate crystal. The first of the two identical photons flies towards the yellow lens, the second flies downwards and is redirected by two prisms until it finally hits the first green semi-transparent mirror. The three green mirrors all have the same properties: they let the light simply pass through 50% of the time, and the other 50% of the time the light is completely reflected and thus redirected. So the decision about which path the photon takes at a green mirror is a random process with a 50/50 chance. On average every second photon is redirected upwards and hits sensor D4. In the other case the photon is not affected at all and simply travels through the mirror. It then strikes the grey mirror, which simply reflects it, and it reaches the second green mirror.
Unfortunately the depiction of this second green mirror is slightly incorrect in the original graphic; it should rather be oriented as indicated here so that the reflection angle matches the setup. In the first case, the photon is not affected by the mirror, simply travels through it and hits sensor D1. In the second case it is reflected and hits sensor D2.
Let's take a look at the bluish path of photons that traveled through the second slit: they are also split into two identical photons, and the one traveling downwards is also redirected by the prisms. At the first green mirror every second photon is reflected and hits sensor D3. In the other case, the photon simply travels through the mirror, hits the grey mirror and subsequently the second green mirror, which you should again imagine to be turned by a few degrees. 50% of all photons travel straight through the mirror and hit sensor D2, the other 50% are reflected and hit sensor D1.
Photons that traveled through the first slit, and thus are on the red path, can only reach sensors D1, D2 or D4; they can never reach sensor D3. Which of these three sensors they actually hit is a random process determined at the green mirrors, but they will inevitably reach one of them. The same logic applies to the bluish path: photons that traveled through the second slit can only reach sensors D1, D2 or D3, but never sensor D4. Thus a photon that hits sensor D4 must have traveled through the first slit, because photons that traveled through the second slit have no way to reach sensor D4. The same applies to sensor D3: any photon reaching it must have traveled through the second slit. For photons that end up in sensor D1 or D2 it is not possible to determine through which slit they traveled, because those two sensors can be reached on the red path from the first slit as well as on the bluish path from the second slit.
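The routing logic described above can be summarized in a short simulation. This is a simplified sketch of the beam-splitter decisions, not an exact optical model; the sensor names follow the article, and the 50/50 splits are modeled as coin flips:

```python
import random

def detect_idler(slit, rng):
    """Trace the downward-traveling partner photon from a given slit (1 or 2)
    through the two 50/50 beam splitters and return the sensor it reaches."""
    # First green mirror: a reflection reveals the which-path information
    if rng.random() < 0.5:
        return "D4" if slit == 1 else "D3"
    # Transmitted photons reach the second green mirror, where both paths
    # can end up at D1 or D2 - the which-path information is erased
    return "D1" if rng.random() < 0.5 else "D2"

rng = random.Random(42)
reachable = {slit: {detect_idler(slit, rng) for _ in range(10_000)}
             for slit in (1, 2)}
# Photons from slit 1 reach D1, D2 and D4 but never D3;
# photons from slit 2 are the mirror image: D1, D2 and D3, never D4.
```

Running this confirms the bookkeeping of the paragraph above: a hit at D3 or D4 pins down the slit, while a hit at D1 or D2 is compatible with both.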
Now you have almost understood the setup of the experiment. Two elements are still missing: the function of sensor D0 at the top and the reason why the photons traveling through either slit had to be split into two entangled photons by the beta barium borate crystal.
Sensors D1 to D4 are identical sensors: they can only detect that a photon has hit them but do not record any image. Sensor D0, in contrast, is similar to a camera and records the exact location where the photon ended up on the screen.
The last aspect that needs to be understood before looking at the results of the experiment is the time it takes the photons to travel through the setup. The shorter the path, the earlier a photon reaches its sensor. The experiment is designed so that the photon traveling towards the yellow lens always hits sensor D0 first, before its entangled partner reaches the first green mirror. Thus the measurement at sensor D0 is always recorded first, and then, with a short delay, the partner photon hits one of the sensors D1 to D4.
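The timing argument can be made concrete with a back-of-the-envelope calculation. The path lengths below are assumptions chosen only for illustration; the article gives no numbers, though the 1999 realization by Kim et al. reportedly had the idler path a few metres longer, giving a delay on the order of 8 ns:

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

# Hypothetical path lengths, chosen only for illustration
signal_path = 1.0   # slit -> lens -> sensor D0, in metres
idler_path  = 3.5   # slit -> prisms -> beam splitters -> D1..D4, in metres

delay_s  = (idler_path - signal_path) / C
delay_ns = delay_s * 1e9
# With these numbers the signal photon is registered at D0 roughly 8 ns
# before its partner can reach any of the sensors D1 to D4.
```

Even a path difference of a few metres thus guarantees that D0 fires first, which is exactly the ordering the experiment relies on.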
The coincidence counter on the right processes the measurement results of all five sensors. It establishes a connection between the measurement result of sensor D0 and the entangled partner photon detected by sensors D1 to D4. This way every single photon measured by sensor D0 can be precisely assigned to the sensor D1 to D4 where its entangled partner was measured. The measurement data of sensor D0 can thus be split up into four individual images, each assigned precisely to one of the sensors D1 to D4.
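The bookkeeping done by the coincidence counter amounts to a simple sort: each D0 hit is filed under the sensor that later caught its partner. A minimal sketch, with invented event data for illustration:

```python
from collections import defaultdict

def split_by_partner(events):
    """Split the D0 measurement data into one sub-image per partner sensor.

    events: iterable of (d0_position, partner_sensor) pairs as reported
    by the coincidence counter."""
    images = defaultdict(list)
    for position, sensor in events:
        images[sensor].append(position)
    return dict(images)

# Invented example events: (position on the D0 screen, partner sensor)
events = [(0.012, "D1"), (-0.003, "D3"), (0.012, "D2"),
          (0.000, "D1"), (0.007, "D4")]
images = split_by_partner(events)
# images["D1"] now holds only the D0 positions whose partner hit D1, etc.
```

Only after this split does the structure in the D0 data become visible, which is why the raw D0 image on its own looks like a featureless bar.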
This picture depicts the raw measurement data of sensor D0 on the upper left. The raw data only shows a wide bar of light in the center of the screen, so in this raw state the data is pretty much useless. Only when each individual spot of light in the raw data is assigned to the sensor D1 to D4 where the entangled partner photon was measured does a pattern become visible:
Two aspects stand out immediately: the data assigned to sensors D1 and D2 shows an interference pattern, while the data assigned to sensors D3 and D4 shows none.
For photons which reached sensor D1 or D2 it is impossible to determine through which of the two slits they traveled, since the path information has been "erased" by the 50/50 chance events at the green mirrors. Their entangled partner always creates an interference pattern, similar to the double slit experiment without photon detectors at the slits.
For photons which reached sensor D3 or D4 the path of the photon through the first or the second slit can be identified. Their entangled partner does not create an interference pattern, similar to the double slit experiment with photon detectors at each slit.
Why should this result really make us think about the nature of reality?
The problem with this result is that the photon traveling on the upper path reaches sensor D0 at a time when its entangled partner photon has not even reached the first green mirror. At this moment the photon cannot yet know whether it should contribute to an interference pattern or not, because due to the longer path it is still undetermined which sensor its entangled partner will end up at.
How can the upper photon know in advance, for every single measurement, on which path its entangled partner will travel through the green mirrors? It cannot know, because the decision at the green mirrors is a random event with a 50/50 chance.
Nevertheless the results show that the upper photon apparently knew exactly what to do in each case; otherwise the data assigned to sensors D1 or D2 would not form an interference pattern while the data assigned to sensors D3 or D4 shows none. Did you grasp the paradoxical nature of this result? In case you didn't, maybe jump back and re-read the last few paragraphs.
What do scientists have to say about the results of this experiment? When approaching the results of the delayed choice quantum eraser experiment with Newtonian physics and a purely materialistic world view there is no way of explaining them, so many scientists simply avoid the entire topic, because it seems as if causality and time are being violated here.
While the formulas of quantum physics can describe the effect mathematically, it is difficult for a layperson to grasp these rather abstract derivations and conclusions. As an alternative to these sophisticated formulas, the third chapter of this article presents a model of reality that physicist Thomas Campbell has come up with, because his model can explain these strange results in plain language.
3. Thomas Campbell's model of a virtual reality
Thomas usually presents his My Big TOE model of reality in a two-day workshop of about 16 hours. Since only few people are willing to invest this much time, not to mention reading an 800-page book, this will be an attempt to use only selected aspects of his model to explain as well as possible what might be going on here. So be prepared that not everything will be derived from square one and that a few things will simply be assumed which Tom derives in detail in his book. Tom claims that we are living in a virtual reality. What exactly does he mean by that? The basic idea will be explained with the help of the following graphic:
The lower part of this graphic represents our 3D physical reality. In it resides everything that we perceive around us. Tom postulates that there is an additional meta layer of information that contains all information about our physical reality but is located outside of our 3D reality. This meta-layer is indicated in the upper part of the graphic. Higher dimensions as a general concept are not really uncommon: many scientific models like string theory use this concept, so the assumption that there is something beyond our 3D reality construct is not unscientific at all.
In order to better understand this concept we will take a look at entangled particles as an example, since they are used in the delayed choice quantum eraser experiment. Due to the process by which entangled particles are created, one has spin up and the other spin down. You can visualize spin-up as a particle rotating around its own axis clockwise and spin-down as a particle rotating counterclockwise.
Due to physical conservation laws the spin of one particle cannot change unless the spin of its entangled partner changes at exactly the same time. This holds even if the two particles are separated by a distance of several light-years.
Albert Einstein once called this effect spooky action at a distance, and it has puzzled scientists for decades. It seems as if the information about the spin is transmitted instantaneously between both particles and thus travels faster than the speed of light, which would violate the theory of relativity. How does Tom explain this phenomenon?
Tom says, that actually the information about both entangled particles is stored in the meta-information layer outside of 3D reality. Our 3D reality is created based on this information, thus the term virtual reality. If the spin of the left particle is being changed, then this information is available instantaneously on the meta-information layer.
With currently available technology we can only measure the spin of the particles but not actively change it at will. Tom thinks that this might become possible in the near future. Since information about both particles is stored virtually in the same location on this meta-information layer, changing the information about one particle would directly change the information about its entangled partner as well.
Since the entangled partner in 3D reality is also created solely from information stored on the meta-information layer, its spin changes instantaneously in our 3D reality without any time delay whatsoever. No information is transferred between the two particles within 3D reality, so relativity theory is not violated. The key point is that the information was transferred outside of 3D reality, not inside of it.
We will now use the concept of a meta-information layer in combination with an analogy to a 3D computer game. Tom claims that in 3D reality objects only have to be rendered if our eyes can see them directly. The term rendered is used in the same sense as in a 3D computer game, where objects are rendered in order to display them on the screen.
If in a 3D computer game our avatar is looking straight ahead, he cannot see any objects that are located behind him, so these objects do not have to be rendered. Only when our avatar turns around and looks in the opposite direction do the objects that used to be behind him, and thus were not visible, enter his field of vision and have to be rendered.
So do objects still exist in the computer game's reality when they are behind our avatar? Well, they definitely exist on the level of information, which in the case of a computer game means in the working memory of our computer, but they are not visible on the screen. In our 3D reality we are convinced that objects are always there, even if nobody is looking at them. But this is just an assumption; there is no way of knowing for sure.
If information on the meta information layer is changed, this can appear to us as if our physical 3D reality changed retro-causally. This is possible as long as no evidence within 3D reality exists - like data from a photon detector - which prohibits this from happening. Reality has to be consistent at all times and any evidence or data that exists within 3D reality must not lead to any contradictions.
Our perception that events have changed retro-causally is based on our assumption that reality is objective and exists independently of us. If our 3D reality is not really objective but is created continuously from information on the meta-information layer, then it can appear to us as if the deletion of the photon detector data changes the measurement data of the screen, seemingly reaching backwards in time, when actually all that changed was data on the meta-information layer. And how does Tom explain the delayed choice quantum eraser experiment?
Tom's explanation is that we are not living in an objective reality and that the photon reaching sensor D0 does not really have to decide immediately whether it should create an interference pattern or not. From a 3D reality viewpoint the measurement result of sensor D0 remains undetermined until the partner photon reaches one of the sensors D1 to D4; only then is the data of sensor D0 determined, in a way that does not violate the consistency rules of reality.
We will never know that there was a short period of time during which the measurement result was not available, because we have no way of looking at this short gap in the data. The 3D reality created after this decision is made is based on data from the meta-information layer, and it will have covered up the gap and filled it with consistent data.
If the partner photon hits sensor D3 or D4 the measurement of sensor D0 has to reflect the behavior of a particle. If the partner photon hits sensor D1 or D2 the measurement of sensor D0 has to show an interference pattern. The key element here is that 3D reality has to be consistent with all data available within 3D reality. The measurement results of sensor D0 have to come out this way in order to be consistent with the measured data of sensors D1 to D4.
At this point it should be clarified that the phrase "the measurement collapses the probability distribution to a physical particle" was not really accurate. If we consider Tom's model to be correct, everything is information only. Every conscious observer perceives 3D reality based on the information of the meta-information layer, and the consistency of all observers' experiences is guaranteed by ensuring the consistency of the information their experience is based upon.
It is not easy to get used to this way of thinking, because we are so accustomed to linear thinking and to assuming that reality is objective. Even though this way of looking at the double slit experiments can explain their strange results, it will probably still take a long time until scientists are willing to open their minds to this perspective.
500 years ago everybody was convinced that the earth was flat. Will it take another 500 years until the idea that we are living in a virtual reality and that reality is not objective becomes an accepted concept? Who knows; hopefully it won't take that long, but until we get there, it might be your job to at least spend a few more minutes thinking about what you have just learned.
Information about all the events within 3D reality that you learn about in the news might be interesting, but information about reality itself might actually be important: if Tom's model of reality is correct, changing information on the meta-information layer can change the 3D reality that is being created, and within certain limitations everybody can access this meta-information layer.
If you want to learn more about this, take a look at Tom's lectures on YouTube or at his book, which is available for free on Google Books. Several short video segments with Tom have been collected in this article. All of Tom's information is available for free: Tom does not want to sell you any snake oil; he only wants to present you with a different way of looking at the world, because he knows from personal experience that this new viewpoint can be really empowering.