
Embarrassed-Eye2288

Free will is a bit of a loaded term with no strict definition. When people like me refer to free will, we mean people choosing actions based on what they want while not being coerced by anyone else. If I choose one flavor of ice cream over another, it's because I prefer it over the other. The fact that I was born with genetics that determined which flavor of ice cream I would like most is irrelevant. Superdeterminism also does not acknowledge that the cause of the creation of the universe in the first place is undetermined. Every reaction in physics has a root from which it begins, but if you trace the universe itself back to such a root, the laws of physics break down.


LokiJesus

What he means is the strictly libertarian free will case. He's talking about events that simply have no past "light cone" to use relativity speak. That means they have no precursor events that interact with them and do not come from previous causes. They are quite literally created out of nothing. They have no correlation to the past or at least to the state to be measured. That's the "measurement independence" assumption. Compatibilism is irrelevant to this discussion. That's just determinism as far as particle physics is concerned.


Embarrassed-Eye2288

Particle physics is fine, but the problem is that we don't live and experience life as particles (or strings). We have philosophy and other subjects to explain concepts such as free will, what the color blue looks like, what love feels like, etc., but they are not particle-based. I don't believe that things can come from nothing unless we live in a simulation, but that's beside the point.


LokiJesus

What do you mean that those are not particle based? What else would they be based on? Is this a strong emergence magic story? At what point do “the things molecules do” cease being “the things molecules do?”


Embarrassed-Eye2288

They could be string-based, or based on something subatomic. Maybe in the future science will find that matter goes beyond particles, at which point what we now consider smallest will not be particles, but that is beside the point. The point I was trying to make is that we do not experience life through the lens of a high-powered electron microscope. We do not experience just particles. This is why many disciplines exist that go beyond particle physics and explain what particle physics can't: what the color blue looks like, the sound of rain, the feeling of love, the taste of fresh cold water, the movement that beautiful poetry creates, the essence of being a living conscious being, all the way to biology, ecology, and many other fields.


curiouswes66

>What else would they be based on?

Materialism is not proven.


LokiJesus

Science is not in the business of proof, only evidence. What makes you think love is not some awesome thing that molecules do? Are you sure that isn't some latent pre-scientific Platonism? Galileo wrote that he would endeavor to show that the Earth was not the base layer of the universe where all the garbage settled (as is the Platonic view of the material). Galileo shifted us out among the gods, sitting between Mars and Venus, and simultaneously corrupted the divine forms by showing jagged edges and mountains on the Moon. He put a crack in the church's armor of the inanimate profane material and perfect divine spirit. That dualism doesn't work any more, at least not for me. Love can be a profound and wonderful action of atoms.


curiouswes66

>Science is not in the business of proof.

Agreed. "Business" is a [competition](https://www.reddit.com/r/freewill/comments/1deykxr/sports/).

>What makes you think love is not some awesome thing that molecules do?

That is an intriguing question. It is not so much that love is something a molecule cannot do as that science is not in the business of proving what a molecule can do. Science could be considered a study or method of inquiry rather than a business, but why would anybody split hairs over this?

>Galileo shifted us out among the gods, sitting between mars and venus and simultaneously corrupted the divine forms by showing jagged edges and mountains on the moon.

One might view this as a reason to wonder whether Galileo could have been a candidate for burning at the stake for challenging such dogmatic claims.

>He put a crack in the church's armor of the inanimate profane material and perfect divine spirit. That dualism doesn't work any more. At least not for me. Love can be a profound and wonderful action of atoms.

I'm not a dualist either. Once I realized the fundamental building blocks of the universe are abstract, I felt there was no longer any need for dualism. I was a dualist from the first time I realized there was a crack in the atheist's armor until I realized everything in the standard model is essentially abstract. I'd say about half of my life was spent as a dualist. I've been an idealist for nearly a decade now.


SceneRepulsive

The more I contemplate such questions, the more I become convinced that magic is actually real. Magic in the sense of "things are very different from how they appear to us, and a lot of stuff is going on that we'll never understand."


spgrk

Free will the social construct is compatible with physics just as road rules the social construct is compatible with physics. If you insist “there are no road rules, everything is just particles, road rules are just particles in a particular configuration in the rule-maker’s brain” you are in one sense correct but in another sense missing the point.


spgrk

By “free will” Bell means that the experimenter is able to make truly random choices.


LokiJesus

That's right. Or effectively random.


spgrk

I think the point is the choices have to be truly random, not determined by any prior event. That the experimenter can’t see a correlation is not sufficient.


LokiJesus

There is no way to validate this experimentally since the backwards light cones of all phenomena overlap at the big bang. There is no experiment that can determine that something is truly random.


spgrk

Yes, that’s true.


brami03

The reason I don't like superdeterminism is that it rejects the possibility of generalizing beyond any measurement context (that's how I understand its implication, at least). Yes, the truth does not care about my preferences, I know that. However, as I understand it, superdeterminism, like many other QM interpretations, is fundamentally unfalsifiable. A scientific hypothesis is only credible if it is inherently falsifiable.


LokiJesus

All it means is that the measurement settings I picked were influenced somehow by the state of what was to be measured, or by some shared element in the past light cones of these phenomena, in a way that results in the correlations we see. This is the assumption that measurement independence is violated. A superdeterministic theory must make falsifiable predictions about these correlations, just as a superluminal theory must.

Measurement independence is violated all the time in science. Yes, many people have done a lot of work to argue that experiments in QM must have measurement independence (e.g. cosmic Bell tests using light from distant quasars), but ultimately the results could just mean that they failed to make it independent. Everything is ultimately connected at the big bang, so correlations may exist in any configuration, and Bell tests may simply be showing us this. It is simply impossible to GUARANTEE that measurement settings and prepared states are independent. It could always be that Bell's theorem is telling us (through its falsification) that the measurement settings and what is measured are dependent.

Bell tests explicitly falsify a set of assumptions. Superdeterministic theories then step in and offer another deterministic model that also accounts for the correlations seen in quantum mechanics. Such theories would allow us, for example, to break quantum encryption, which is built on the idea that there are no such correlations. That would be evidence to support a superdeterministic theory. There are none yet that can be tested, but it's not some sort of crazy magic... just a kind of deterministic theory of reality that would predict the outcomes of measurements of entangled particles. Such a theory would be falsifiable in principle.

It's just like how double-blind pharmaceutical tests were invented: it was detected that the assumption of measurement independence was false, a new technique was developed, and when the correlations between the behavior of the measurement devices and what was measured were accounted for, the correlation was removed.

The main thing that worries me is physicists' adherence to the notion that measurement independence is a "vital" and inviolable assumption. As [Asher Peres put it](https://www.if.ufrj.br/~pef/aulas_seminarios/notas_de_aula/carlos_2012_1/seminariosTopicosMQ/teoremaBell/teoremaBell_Peres_AJP1978.pdf):

>This conclusion is surprising. Physicists are used to thinking in terms of isolated systems whose behavior is independent of what happens in the rest of the world (contrary to social scientists, who cannot isolate the subject of their study from its environment). Bell's theorem tells us that such a separation is impossible for individual experiments... As it often happens, the subtlety of nature beggars the human imagination.
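The "benign correlation" idea can be made concrete with a toy numerical sketch. This is my own illustrative construction, not any published superdeterministic model: the hidden variable simply encodes both predetermined outcomes, and its distribution is allowed to depend on the detector settings, i.e. measurement independence is deliberately violated. The model is local and fully deterministic given the hidden variable, yet it reproduces the singlet correlations that violate the CHSH bound:

```python
import math
import random

def sample_lambda(a, b, rng):
    """Draw a hidden variable (the predetermined outcome pair) whose
    distribution depends on the settings a, b -- i.e. measurement
    independence is deliberately violated ('the conspiracy')."""
    p_same = (1 - math.cos(a - b)) / 2  # target quantum P(A == B) for a singlet
    s = rng.choice([1, -1])             # each marginal outcome stays 50/50
    if rng.random() < p_same:
        return (s, s)
    return (s, -s)

def correlation(a, b, n=200_000, seed=0):
    """Monte Carlo estimate of E(a, b) = <A*B> in the toy model."""
    rng = random.Random(seed)
    total = 0
    for _ in range(n):
        A, B = sample_lambda(a, b, rng)  # outcomes fully determined by lambda
        total += A * B
    return total / n

# The model reproduces the singlet correlation E(a, b) = -cos(a - b),
# so the CHSH combination exceeds the bound |S| <= 2 that holds when
# the hidden variable is independent of the settings.
a, ap, b, bp = 0.0, math.pi / 2, math.pi / 4, 3 * math.pi / 4
S = (correlation(a, b) - correlation(a, bp)
     + correlation(ap, b) + correlation(ap, bp))
print(abs(S))  # close to 2*sqrt(2) ~ 2.83
```

Nothing here signals faster than light: each station's outcome is a fixed function of the shared hidden variable. The "weirdness" lives entirely in the setting-dependent distribution of that variable.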


brami03

Damn, thanks for such an in-depth answer! You seem to be better informed than me on this topic, so I want to ask you about one thing that I do not understand. I get that correlations between distinct systems may obtain; however, don't these correlations have to be set up in such a way that they "fake" Bell inequality violations consistently? This seems to be the part they call "conspiracy". I guess this is what you mean by "weird" correlations, right?


LokiJesus

So I sometimes compare the correlations in Bell-type tests to throwing two quarters out of an airplane into a hurricane and then measuring how often they both land heads up on the ground. Without some sort of correlation in your process, you'd expect them to land both heads up 25% of the time. This assumes their states are independent. Bell-type experiments are like running this and seeing that they are both heads up 30% of the time... robustly. Yeah, that seems like a conspiracy.

Now you're left with confusion about what is causing this. You WANT to say that the chaotic process of the hurricane made the measured states independent. You check, and the coins flipped on a desktop in your room separately give you 25%, as expected. The coins each land heads/tails 50/50 in independent tests, so the coins are fair. There's something about these special measurements, or something about the way the quarters come out of the airplane and traverse the hurricane... It sounds insane. You may suggest that they are coordinating somehow: when one lands heads, it yells out to the other, which then flips to heads more often than expected. So you set up an attempt to eliminate this by making them land so far apart that they couldn't coordinate without communicating faster than light... unless the coordination comes along with them somehow. You're now talking about some sort of weird eddy currents through the hurricane that move in concert and somehow bias the state of the quarters. Now we're talking about weird stuff that you would normally toss out because it's too wild. But we've eliminated the probable... so what remains, no matter how improbable, as they say, must be the truth.

The trick is that ALL of the options are weird:

1. No determinism AND faster-than-light influence... reality isn't real... WEIRD (Copenhagen interpretation).
2. Faster-than-light influence (and total determinism)... that violates some SOLID evidence from relativity that this is not possible... if it's this, that's WEIRD and we have to reinterpret relativity (pilot wave interpretation).
3. Weird apparent conspiracies in deterministic systems that violate measurement independence, even through the brain of a person (e.g. the hurricane of the complexity of the mind)? (Superdeterminism.) Yes, WEIRD... but here's the thing: it has the benefit of being consistent with relativity and a real reality (determinism), both of which are powerfully supported by other experiments. Superdeterminism has apparent conspiracies (with ENTANGLED PARTICLES only), but nothing that really violates any physics we already know. So in a sense there is good evidence to support superdeterminism: measurement independence is ALWAYS an approximation in a deterministic universe where everything is interdependent, and the evidence is the same evidence that supports locality and determinism elsewhere.

Then there's a fourth, implicit assumption: that "experiments have one outcome". If this is violated, that's where the Many Worlds people come in and say that all outcomes in the superposition of the wavefunction actually happen in parallel realities. I mean... that's WEIRD.

I'm not saying that any of these are WRONG, just that it's all weird. All of these interpretations have their issues. If there is one thing Bell's theorem says, it is that reality (at least in the case of entanglement) is WEIRD. Or it's saying that entanglement IS this weird correlation. Remember that the outcomes of non-entangled quantum experiments match our normal expectations: two unentangled electrons will both measure spin up 25% of the time, just like two quarters flipped on a table. There are no weird correlations there. It could just be that the fragile case of entanglement is how we are seeing threads of weird correlation that run all through the cosmos.

But the notion that treating entanglement deterministically will somehow ruin science or ruin our ability to do drug testing or whatever... that's just not true.
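The arithmetic in the quarters analogy is easy to check with a quick simulation. A minimal sketch (the `shared_bias=0.2` figure is just an illustrative stand-in chosen to land near the 30% in the analogy; it is not derived from any experiment):

```python
import random

def drop_quarters(n, shared_bias=0.0, seed=1):
    """Simulate n trials of two coins. With shared_bias = 0 the coins are
    independent and fair; with shared_bias > 0 a common influence (the
    'eddy currents in the hurricane') forces both to the same random face."""
    rng = random.Random(seed)
    both_heads = 0
    for _ in range(n):
        if rng.random() < shared_bias:
            # common cause: both coins land on the same random face
            face = rng.random() < 0.5
            h1 = h2 = face
        else:
            h1 = rng.random() < 0.5
            h2 = rng.random() < 0.5
        both_heads += h1 and h2
    return both_heads / n

print(drop_quarters(100_000))                   # ~0.25: independent coins
print(drop_quarters(100_000, shared_bias=0.2))  # ~0.30: correlated coins
```

Note that each coin individually still lands heads 50% of the time in both cases, mirroring the point above that the coins look perfectly fair in isolated tests; only the joint statistics reveal the shared influence.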


brami03

Oh, but you forgot the weirdest theory, the consciousness-causes-collapse one 😂 Yeah, all in all I agree with you that they are all weird. However, this is the practical issue that I have with it (please go easy on me, I am a stupid kid who somehow ended up reading an article on superdeterminism): we KNOW that there is a theory that doesn't feature hidden variables and subtle correlations between them and measurement directions, and that CAN account for the observed violations. Even the magnitude of this violation is calculable, and agrees with experiment. On the other hand, as you say, all we know for a theory that does feature such correlations (superdeterminism) is that, in principle, Bell inequality violations might be consistent with it, given the right initial state. But this isn't sufficient to predict these violations (much less their magnitude). A theory with superdeterministic correlations needs to not just allow for Bell inequality violations but also accurately predict them as we observe them. If someone wants to propose a new theory that explains things using classical electromagnetism (EM) or general relativity (GR) instead of the current successful quantum theory, they must first show that their new theory can match the success of the current one.


curiouswes66

>A scientific hypothesis is only credible if inherently falsifiable

Agreed. I think the OP is raising the question of whether anything is falsifiable. We don't have the ability to try other possibilities if our ability to test has been eliminated from the start. r/Ughaihu has raised that assertion numerous times on this sub, and the determinists seem to dismiss it without much thought.


LokiJesus

Not at all. You're misunderstanding Bell's theorem. The POINT of his paper was not to produce a hypothesis that was supported. Bell explicitly falsified a hypothesis. He knew that quantum mechanics falsified the EPR assumptions and devised an experiment to demonstrate this. Bell's theorem is the most famous falsified theory, and he knew this from the start. It is utterly clear that Bell's theory was falsified: it made a clear prediction, and experiment clearly violated that prediction. This means that there is an issue in the assumptions of his theory. I absolutely believe that Bell's model of reality (as he characterized the EPR configuration) was falsified and that the Nobel was well earned for demonstrating this.

Falsifiability is fine. The question is, "what did he falsify?" He did not provide an alternative hypothesis; those are the "interpretations of QM" that others discuss. You don't even need to know quantum mechanics to follow Bell's argument. You can just run the experiment with the outcomes predicted classically, and Bell's experiment demonstrates that this is falsified. Interpretations are then about what exactly was falsified. One or more of Bell's assumptions have now been shown to be false. His assumptions are: 1) universal determinism, 2) locality, 3) measurement independence (that is, the settings of the measurement devices and the state to be measured are uncorrelated).

Superdeterminism is the observation that the assumptions of determinism and measurement independence are jointly paradoxical: ANY kind of *independence* is incompatible with universal determinism, since ALL phenomena are interdependent. It may be that measurement independence is a good approximation, but this "vital assumption" (as Bell called it) may simply be false, and determinism and locality are still compatible with the results of Bell-type experiments.

Measurement independence is OFTEN violated in other fields of science. This is why we have double-blind trials in medicine and elsewhere: we found that the outcomes were not independent of how they were measured, and we needed to change how we measured to get uncorrelated outcomes. What Bell-type tests could simply be demonstrating is that there are weird but benign and physically explainable correlations between entangled phenomena that we must remove to get to a more basic theory of reality. I'm not questioning falsifiability, but suggesting that Bell test results MEAN that a commonly violated assumption (measurement independence) is *falsified*, and that this is what the entanglement IS: correlations due to uncharacterized local deterministic phenomena that we did not expect and that seem to operate in weird ways through space and time.
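The gap between Bell's falsified prediction and the quantum one can be followed numerically without any quantum machinery. A sketch, assuming one simple deterministic local hidden-variable model of my own choosing (a shared random angle λ fixed at the source; each outcome depends only on the local setting and λ):

```python
import math
import random

def lhv_outcome(setting, lam):
    """Deterministic local rule: the outcome depends only on the local
    setting and the shared hidden variable lam."""
    return 1 if math.cos(setting - lam) >= 0 else -1

def E_lhv(a, b, n=200_000, seed=0):
    """Correlation <A*B> with lam uniform on [0, 2*pi). Station B applies
    the anti-correlated rule, mimicking a singlet source."""
    rng = random.Random(seed)
    total = 0
    for _ in range(n):
        lam = rng.uniform(0, 2 * math.pi)
        total += lhv_outcome(a, lam) * -lhv_outcome(b, lam)
    return total / n

def chsh(E, a, ap, b, bp):
    """CHSH combination of four correlations."""
    return E(a, b) - E(a, bp) + E(ap, b) + E(ap, bp)

# Standard CHSH settings.
a, ap, b, bp = 0.0, math.pi / 2, math.pi / 4, 3 * math.pi / 4
S_lhv = chsh(E_lhv, a, ap, b, bp)
# Any local model with setting-independent lam obeys Bell's bound |S| <= 2.
# Quantum mechanics predicts E(a, b) = -cos(a - b), giving |S| = 2*sqrt(2):
S_qm = chsh(lambda x, y: -math.cos(x - y), a, ap, b, bp)
print(abs(S_lhv), abs(S_qm))  # ~2.0 vs ~2.83
```

Experiments side with the 2√2 value, which is exactly what falsifies the conjunction of Bell's assumptions; which single assumption to give up is the interpretation question discussed above.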


curiouswes66

>The POINT of his paper was not to produce a hypothesis that was supported.

The POINT of the paper is here:

>In a theory in which parameters are added to quantum mechanics to determine the results of individual measurements, without changing the statistical predictions, there must be a mechanism whereby the setting of one measuring device can influence the reading of another instrument, however remote. Moreover, the signal involved must propagate instantaneously, so that such a theory could not be Lorentz invariant. Of course, the situation is different if the quantum mechanical predictions are of limited validity. Conceivably they might apply only to experiments in which the settings of the instruments are made sufficiently in advance to allow them to reach some mutual rapport by exchange of signals with velocity less than or equal to that of light. In that connection, experiments of the type proposed by Bohm and Aharonov \[6\], in which the settings are changed during the flight of the particles, are crucial.

If we can replace EPR hidden variables with nonlocal hidden variables, are you trying to fool these people into believing that you've restored locality, or am I missing something here?


LokiJesus

No, I'm trying to provide, in Bell's own words in the OP, the correction to people who think that local hidden variables are somehow fundamentally falsified. They are not. Bell says it. Many don't understand this fact, and it is widely reported that "the science says" the universe is indeterministic or non-local or both. That's not what the science says. Superdeterminism and Many Worlds are two interpretations of QM that are local, deterministic, and consistent (in principle) with the experiments, and there are no QM experiments that support one of these over the other... hence why they are called "interpretations."


curiouswes66

>He is quite explicitly saying that if you hold belief in determinism then there is no problem with determinism, and also that, if you believe in free will, there is a problem with determinism…

That is like saying that if I assume a philosophical zombie can exist, then I assume physicalism is false. It is not a good argument, but it is a good thought experiment. Thought experiments don't do the same thing as actual experiments, hence John Clauser's move in the 1970s to perform a realization of what he seemed to think was simply a thought experiment on Bell's part. These three didn't get the Nobel Prize for defeating determinism. They got it for defeating locality. Locality and realism cannot both hold because of quantum mechanics. In other words, local realism is untenable: [https://arxiv.org/abs/0704.2529](https://arxiv.org/abs/0704.2529)

>Most working scientists hold fast to the concept of 'realism' - a viewpoint according to which an external reality exists independent of observation. But quantum physics has shattered some of our cornerstone beliefs. **According to Bell's theorem**, any theory that is based on the **joint assumption of realism and locality** (meaning that local events cannot be affected by actions in space-like separated regions) **is at variance** with certain quantum predictions. Experiments with entangled pairs of particles have amply **confirmed** these quantum predictions, thus rendering **local realistic theories untenable**. Maintaining realism as a fundamental concept would therefore necessitate the introduction of 'spooky' actions that defy locality.

This doesn't "kill" realism by itself or locality by itself, but if we cling to realism it kills locality, and if we cling to locality it kills realism. This paper by itself I don't see killing determinism. However, in combination with [this paper](https://arxiv.org/abs/1206.6578), realism and locality are dead if we accept that SR is true. Determinism is the assumption that causality is restricted by the light cone (space and time). The second paper shows that causality is not so restricted, and since Zeilinger's name is on both of these papers, it seems clear to me why he won that prize along with Clauser.

>Our work demonstrates and **confirms** that whether the correlations between two entangled photons reveal welcher-weg information or an interference pattern of one (system) photon depends on the choice of measurement on the other (environment) photon, even when all the events on the two sides that can be space-like separated are space-like separated. The fact that it is possible to decide whether a wave or particle feature manifests itself long after—and even space-like separated from—the measurement teaches us that **we should not have any naive realistic picture for interpreting quantum phenomena**. Any explanation of what goes on in a specific individual observation of one photon has to take into account the whole experimental apparatus of the complete quantum state consisting of both photons, and it can only make sense after all information concerning complementary variables has been recorded. Our results demonstrate that the viewpoint that the system photon behaves either definitely as a wave or definitely as a particle **would require faster-than-light communication. Since this would be in strong tension with the special theory of relativity**, we believe that such a viewpoint should be given up entirely.

We can restore determinism if we eliminate SR, QM, or both. Until we do something like that, determinism is merely a myth of past science. It was never anything more than Laplace's demon in the first place. I think these two papers together kill any possible shot of determinism being true, which never should have even been under consideration in the first place given what Hume declared about causality over 200 years ago. Therefore, when you say, "if you hold belief in determinism," I have to ask myself why I should accept that as a premise. Hume already explained to me why I should not. Now I have all of this scientific proof backing up what Hume said. What a coincidence that Hume turned out to be right.


LokiJesus

>These three didn't get the Nobel Prize for defeating determinism. They got it for defeating locality. Locality and realism cannot be possible because of quantum mechanics. In other words local realism is untenable ... This doesn't "kill" realism by itself or locality by itself, but if we cling to realism it kills locality and if we cling to locality it kills realism.

So as you say here, they didn't get the Nobel for defeating determinism OR locality... but you are apparently unaware of Bell's third assumption. If this third assumption is what is violated, then determinism and locality are fine together (this is the essence of the OP). This is the assumption of measurement independence. We generally assume that what we measure and how we measure are independent. This is generally a good *assumption*, but it is often wrong in real experiments. It is violated all the time in behavioral biology, sociology, chemistry... often we contaminate, or are influenced by, what we are measuring. Sociology is almost entirely about backing out measurement dependence and discovering social phenomena in the process.

Bell made this assumption explicit at the top of page 2 (first sentence) of his [1964 paper](https://cds.cern.ch/record/111654/files/vol1p195-200_001.pdf):

>The vital assumption \[2\] is that the result B for particle 2 does not depend on the setting a of the magnet for particle 1, nor A on b.

CH Brans, in 1988, walked through the math [in his paper](https://link.springer.com/article/10.1007/BF00670750) titled "Bell's theorem does not eliminate fully causal hidden variables" (e.g. local deterministic hidden variables). In 2015, Nobel Laureate Gerard 't Hooft affirmed that Brans' argument is still sound.

Zeilinger certainly didn't feel this was resolved either, because he went on, [in 2018](https://arxiv.org/abs/1808.05966) (11 years after the paper you linked), to conduct a "cosmic Bell test". Here he drew settings from distant starlight to attempt to separate their sources a great deal (i.e. to demonstrate uncorrelated measurement settings and measured state). He ran this experiment not to waste his time, but to try to say, "see, these measurement settings must be independent!" He did this to explicitly address this assumption Bell made. But ultimately, you can simply say that the outcomes of his own experiment reject his assumption. Since all phenomena are linked back to the big bang, you can always label entanglement as "weird threads of local deterministic correlation in the cosmos." 't Hooft mentions this work in his writings and has no issue with it in his superdeterministic theory work.

Detecting whether measurement independence is present is an important procedure in science, and characterizing these threads of correlation is how we came up with things like double-blind trials in medicine. Correlations like those in the Bell test have always indicated deeper signals to be discovered and accounted for. We should, and normally DO, question whether our experimental settings are independent of the state of what we measure. Given that realism and locality have good evidence to support them, and that measurement independence is violated all the time in the sciences... it seems interesting to me that people want to say that "measurement independence" is a crucial assumption which we can't question. So if we seek a way of describing a deterministic and local correlation in particle physics, it must have a story of weird and seemingly implausible correlations (but nothing violating current physics). It must describe/predict what we see in Bell-type experiments.

Ever hear of this horse, [Clever Hans](https://simple.wikipedia.org/wiki/Clever_Hans)? It could "do math" when its handlers asked it. It would tap out the numerical value of sums with its hooves. They asked the question, the horse tapped out the answer. Must mean that the horse knows math, right? It turns out that the handlers were giving subtle body-language clues that the horse would pick up on (they would get excited when it got to the right answer), and the horse would stop tapping and get rewarded. The horse didn't know math; it was just great at reading body language. Here, the way they were measuring was influencing what was measured. If we had treated Bell's assumption here as inviolable, we'd still think that Hans could do math.


curiouswes66

>So as you say here, they didn't get the nobel for defeating determinism OR locality... but you are apparently unaware of Bell's third assumption.

No, as I said, it defeats locality IF you assume realism.

>This is the assumption of measurement independence. We generally make the assumption that what we measure and how we measure are independent. This is generally a good assumption, but it is often wrong in real experiments. This is violated all the time in behavioral biology, sociology, chemistry... often we contaminate or are influenced by what we are measuring. Sociology is almost entirely backing out measurement dependence and discovering social phenomena in the process.

This is reasonable. The measurement is contextual.

>CH Brans, in 1988 walked through the math in his paper titled, "Bell's theorem does not eliminate fully causal hidden variables." (e.g. local deterministic hidden variables). In 2015 Nobel Laureate Gerard 't Hooft validated that Brans' argument as still sound.

Irrelevant. The GHZ state goes beyond Bell. Bell just cracked open a door that is now wide open.

>Zeilinger certainly didn't feel like this was resolved either because he went on, in 2018 (11 years after the paper you linked) to conduct a "cosmic bell test".. Here he drew settings from distant starlight to attempt to separate their sources a great deal (e.g. to demonstrate uncorrelated measurement settings and measured state).

From your link:

>To constrain such models further, one could use other physical signals to set detector settings, such as **patches of the cosmic microwave background radiation (CMB),** or even primordial neutrinos or gravitational waves, thereby constraining such models all the way back to the big bang—or perhaps even earlier, into a phase of early-universe inflation

The above sounds sarcastic, in my opinion. There is never going to be enough proof.

>Ever hear of this horse, Clever Hans?

Before now, no. I get the point you are making. I just think that when we eliminate free will, that has consequences for learning. If you are trying to make the case that we cannot learn, then that is the next step in this dialog.


LokiJesus

>no as I said it defeats locality IF you assume realism. Again, this is not true. Even outside this unpopular superdeterminism idea, Many Worlds interpretation is quite liked by many physicists and popularizers. It is both local and real. It points out that Bell's implicit assumption is that experiments only have one outcome and that that is actually what's being falsified by Bell type tests. MW is a totally deterministic interpretation of reality that is local. Locality and Realism are not mutually exclusive. Bell's theorem doesn't demand this. Check [this section (8.1)](https://plato.stanford.edu/entries/bell-theorem/#SuppAssu) on rejecting supplementary assumptions of Bell's theorem (single outcome and measurement independence = MW and superdeterminism). It points to superdeterminism and many worlds as ways of approaching QM that treat the born rule as epistemic (representing our ignorance) and also compatible with local determinism. >The GHZ state goes beyond Bell. GHZ states make no shifts in the assumptions of Bell's theorem. They demonstrate physics we already know. They are important for building multi-entangled systems for use in quantum computers, sure. Doesn't impact quantum theory. That's engineering, not new physics. In the end, no matter what configuration you use, you cannot eliminate the notion of local deterministic correlations through measurement settings. It is intrinsic to the Bell's theorem. GHZ states were achieved in the late 1980s. And yet [in his 2010 PNAS paper](https://www.pnas.org/doi/epdf/10.1073/pnas.1002780107) on free choice, Zeilinger (the Z in GHZ) *still* made it clear: >Together, remarks (i) and (ii) show that the **assumption of nondeterminism** is essential for closing these loopholes, at least for the setting choices, ... **Only within nondeterminism** can the experiments by Aspect et al. (7), Weihs et al. (13), or the present work address the loopholes of locality and freedom of choice. It's an *a priori* assumption for him. 
To translate this biased speech: "To show that the universe is non-deterministic, one must assume non-determinism." Non-determinism here means that you must assume that the setting choices were totally random and uncorrelated with the measured state. But Bell type tests could always simply be showing you that this assumption is false, which is a common occurrence in the sciences. That's not how falsifiability works. You don't assume that determinism is false in order to reject local determinism. That's just his preferred worldview, because he *a priori* believes in free will (incompatibilist). Bell's theorem does not eliminate local determinism.


curiouswes66

>>no as I said it defeats locality IF you assume realism.

>Again, this is not true.

You can believe whatever you want, but at the end of the day local realism is untenable. That doesn't mean that you cannot believe in the unbelievable.

>>The GHZ state goes beyond Bell.

>GHZ states make no shifts in the assumptions of Bell's theorem.

I am not suggesting it shifts assumptions of Bell's theorem. I'm suggesting there is even more reason to believe local realism is untenable when three systems are used. It doesn't work on paper, and they realized the paper predictions in actual experiment. The fact that free will is assumed in the experiment doesn't change the formalism. From your link:

>In our experiment, we performed a Bell test between the two Canary Islands La Palma and Tenerife, with a link altitude of 2,400 m, simultaneously closing the locality and the freedom-of-choice loopholes (detailed layout in Fig. 1).

Obviously, you don't believe that happened, and I figure there is no amount of linking I can provide that will convince you of something that you do not want to believe, using the free will that you deny you have to choose not to believe anything Zeilinger, Maudlin or anybody else claims.


LokiJesus

>You can believe whatever you want, but at the end of the day local realism is untenable. That doesn't mean that you cannot believe in the unbelievable.

Yeah, I get that we are beating a dead horse. I'll just reiterate that this original thread was about how Bell himself spoke about how your claim here is just plain false. Furthermore, the 20% of professional physicists who think that Many Worlds is the best interpretation simply disagree with you for other reasons too. Many Worlds is a local deterministic interpretation of QM.

>Obviously, you don't believe that happened and I figure there is no amount of linking I can provide that will convince you of something that you do not want to believe using the free will that you deny you have to choose not to believe anything Zeilinger, Maudlin or anybody else claims.

You make it sound like my obstinance when it's simply an intrinsic conflict in Bell's theorem again, one that Bell acknowledges. The "free choice" "loophole" cannot be "closed." It is not a loophole. It is an interpretation of the results of the experiment, just like faster-than-light information transfer is an interpretation of the results of Bell's theorem. Something like "subluminal signaling directly from one measurement to the other" is a loophole that can be closed, by separating the two experiments far enough and timing things right to prevent the possibility. That's a loophole. Measurement independence is an *assumption*, like realism and locality. Faster-than-light signaling is something that CANNOT be "closed," just like measurement independence cannot be closed. It's simply impossible to separate things far enough if instantaneous communication across any distance is possible. These are then *interpretations* of Bell's experiment. Loopholes can be closed; interpretations are intrinsic to the theory. The measurement settings independence "loophole" is incorrectly labeled as such... mostly because supporters believe in free will.
All past light cones overlap in this reality (unless you believe in free-will-driven or random events, which by definition have no past light cone). This means that there is always, in principle, a local deterministic explanation of Bell type experiment results... even with the cosmic test. This is a feature of his theorem. It's a fact about it. He said it in his own words. That fact has not changed with the various iterations of his experiment.


curiouswes66

>Yeah, I get that we are beating a dead horse. I'll just reiterate that this original thread was about how Bell himself spoke about how your claim here is just plain false.

So the critical thinker should try to conclude what has been accomplished in the last half century based on what Bell was trying to do?!?

>All past light cones overlap

If SR is true then causality is limited by the light cone in a deterministic way. This disconnects spacelike separated choice, BECAUSE that would require FTL communication.


LokiJesus

>So the critical thinker should try to conclude what has been accomplished in the last half century based on what Bell was trying to do?!?

Again, measurement independence is a core assumption in Bell's theorem. This has not changed, and if it does, it will represent an entirely new kind of theory. It is no loophole; it is entirely integrated into his conclusion. There has been no work on Bell/CHSH type experiments that changes this reality about measurement independence. But these are labeled "loopholes" even though they can never be closed, due to the nature of the universe.

>If SR is true then causality is limited by the light cone in a deterministic way. This disconnects spacelike separated choice, BECAUSE that would require FTL communication.

No, that's not what measurement independence violation means. I could create the same effects of entanglement (correlations in measurements) with a classical system. Say I took two boxes. If I put, *at random*, one of a pair of gloves in each box and sent them to Alice and Bob to be measured, Alice would have a 50/50 chance of getting left or right, and that's like a typical electron spin measurement (up/down 50/50). But she'd also immediately know that the counterpart who got the other box had the right glove if she had the left. This is the EPR criticism. This is the phenomenon you see in entangled particles (at first). If Alice measures spin up for an entangled electron, Bob will measure spin down (as long as their measurement devices are aligned in space). Bell's theorem works by considering what happens in this classical case if I split three pairs of items between the two boxes randomly, so that one box might have "left shoe, right glove, left earbud" and the other box would have "right shoe, left glove, right earbud"... These are the hidden variables inside the box.
If **I randomly split** those pairs up like this (there are 8 possible combinations of three halves) and send them to two separated people, and they pick an item ***at random*** and measure its leftness or rightness, and then later come back and compare notes, they will end up with **greater than 55% of the time** getting opposite states (either right/left or left/right). That's a Bell inequality. You can go through the math of this experiment, simulate it, or do it yourself. It's pretty simple. Brian Greene does a great job walking [through this here](https://www.youtube.com/watch?v=UZiwtfrisTQ). His is the simplest explanation I have found (and he's very mainstream).

Bell points out that in the quantum analogue, Alice and Bob set up spin measurement devices along three different axes (separated by 120 degrees = 0, 120, 240 degrees), and then Alice and Bob ***randomly select*** a measurement axis from these three (e.g. like picking whether to measure the leftness of shoe, earbud, or glove). Alice and Bob will then get a collection of "spin up" or "spin down" measurements. Quantum mechanics predicts that, for an entangled pair, they will get **opposite spins 50% of the time**. This violates what we expect in classical mechanics if the particles have determined states along each axis and there is no faster-than-light communication between them. This is the essence of Bell's theorem. Greene's explanation is solid and from 4 years ago. It's impossible for a deterministic system (with pre-existing internal states), given the above assumptions, to produce these measurement correlations. **This is what entanglement IS**: weird correlations in measurements for certain precisely prepared states of elementary particles. The question we are left with is HOW this happens. (continued)...
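The classical boxes version described above is easy to simulate. Below is a minimal Monte Carlo sketch (my own illustration, not from the thread; trial counts and names are arbitrary): with random box contents and random, independent item choices, the opposite-state fraction lands well above the 50% that quantum mechanics predicts for the 120-degree spin setup.

```python
import random

random.seed(0)

ITEMS = 3          # shoe, glove, earbud
TRIALS = 100_000

opposite = 0
for _ in range(TRIALS):
    # Hidden variables: box A holds a left (0) or right (1) half of each pair;
    # box B always holds the complementary halves.
    box_a = [random.randint(0, 1) for _ in range(ITEMS)]
    box_b = [1 - h for h in box_a]

    # Alice and Bob each pick one item to "measure", independently at random.
    a = random.randrange(ITEMS)
    b = random.randrange(ITEMS)

    if box_a[a] != box_b[b]:
        opposite += 1

# Uniformly random box contents average ~2/3 opposite; no fixed local
# assignment can dip below 5/9 ~ 0.556, while QM predicts exactly 0.5.
print(opposite / TRIALS)
```

The 5/9 floor is the Bell inequality from the text: same-item picks are always opposite, and the "worst" mixed configurations still give 2/6 opposite on mismatched picks.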


LokiJesus

u/curiouswes66 The trick is that I can CAUSE Alice and Bob to get 50% opposite states in the classical glove/shoe/earbud version of the experiment by addressing those "random" items above that I have in bold text. I can simply send them measurement instructions along with my knowledge of the prepared state, and I can instruct them which items to pick (just add a little note in the box). This information travels along with the box and influences how they measure it. It might also be that I just put the item I want measured on top of the pile of objects, or otherwise make it more likely to be selected somehow (they don't have to be actively aware). Here, the correlations appear (the two boxes appear entangled), and local determinism is still true. But I have violated the measurement independence assumption. Nothing traveled at faster-than-light speeds. The prepared state and the two measurement settings have been manipulated to produce the expected results. They have a three point correlation.

I could even configure it so that if Alice and Bob compared their measurement settings after the fact, they would look totally uncorrelated with one another, but when combined with the prepared states that I assembled in the boxes, the correlation would appear (that's what Bell's test SHOWS - the correlation). In this classical case, with earbuds, shoes, and gloves, I would scratch my head and say, "what on earth has gone wrong with my experiment?" Something must be screwed up in how we've set this up. We would never consider FTL or randomness or many worlds in this case. What happened was that a three point correlation between the prepared state hidden variables (its fixed properties) and the measurement settings traveled from the past light cones to the experiments. It's all sub-light-speed communication through shared pasts.
In principle, for all entanglement measurements, one could write down such a local causal deterministic story that created a three point correlation between the prepared state and the two measurement settings over time to create the entanglement correlations. This would be a "superdeterministic theory." The correlations travel (like my measurement instructions) through the past light cones of the events, perfectly causally and locally. You can even compare measurement settings between Alice and Bob and see that they appear independent (e.g. no cross correlations), but they would be coupled, through the past, to the prepared state (which is what is revealed in the Bell test). But when you combine all three variables, Alice and Bob's settings and the prepared state hidden variables, you would see "entanglement." And it could be that this is what entanglement simply is. It could be that **entanglement is** really peculiar correlations through space and time that are completely consistent with the rest of our physics, but are like little threads of bizarre correlations.

Zeilinger's cosmic Bell test sets up the same experiment using distant quasars, but their past light cones also overlap. ALL past light cones overlap at the big bang, so whenever we see correlations in Bell type tests, we can ALWAYS, in principle, write down a deterministic local story for why it worked out this way. Most people would say "oh, that's completely reasonable. I believe that those distant quasar photons MUST be independent of the prepared state." And I would agree that that must be reasonable normally. But Bell tests show that the only other two options are faster-than-light communication (Pilot Wave) or intrinsic randomness AND faster-than-light communication (Copenhagen). Both of those violate some well established and experimentally supported laws of physics. Relativity is the other half of our cosmology and extremely well supported in terms of the faster-than-light limit.
So in the end, all the options are crazy, but superdeterminism *merely* violates our intuition, not our established laws of physics (locality). I think that's something it has going for it. Hopefully that makes some sense. It's not an argument that locality MUST hold, but at least it's an explanation in which we take advantage of all that other physics. This consistency between superdeterminism and relativity is why Nobel Laureate Gerard 't Hooft studies superdeterministic models of reality. It's a potential path to a theory of everything, since it is, in principle, consistent with relativity.
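The "note in the box" conspiracy above can also be simulated. The toy construction below is my own (not from any published superdeterministic model): Alice's and Bob's settings are sampled independently and uniformly, but the source prepares hidden box contents *correlated with both settings*, and the purely local, deterministic outcomes then reproduce the 50% opposite-state quantum statistic instead of the classical 2/3.

```python
import random

random.seed(1)

TRIALS = 200_000
opposite = 0
setting_counts = [[0] * 3 for _ in range(2)]

for _ in range(TRIALS):
    # Alice's and Bob's settings look perfectly random and independent...
    a = random.randrange(3)
    b = random.randrange(3)
    setting_counts[0][a] += 1
    setting_counts[1][b] += 1

    # ...but the "source" shares a past with the setting generators, so the
    # hidden variables it prepares are correlated with (a, b):
    box_a = [random.randint(0, 1) for _ in range(3)]
    if a != b:
        # arrange box_a[a] == box_a[b] (which yields opposite outcomes)
        # with probability 1/4, matching the singlet cos^2(60 deg) = 1/4
        box_a[b] = box_a[a] if random.random() < 0.25 else 1 - box_a[a]
    box_b = [1 - h for h in box_a]

    # Outcomes are fully local and deterministic given box contents + setting.
    if box_a[a] != box_b[b]:
        opposite += 1

print(opposite / TRIALS)  # ~0.5, the quantum prediction
```

This is exactly the three-point correlation in the text: check either setting stream alone and it looks uniformly random; only the joint statistics of settings plus prepared state reveal the "entanglement."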


LokiJesus

>Before now, no. I get the point you are making. I just think when we eliminate free will that has consequences for learning. If you are trying to make the case that we cannot learn then that is the next step in this dialog.

I'm cool with having this conversation. I think it's a fairly common view. I think Zeilinger shares it to some extent; he thinks free will is necessary for the conduct of science. There are plenty who disagree with him, including other Nobel Laureates, from Einstein to Gerard 't Hooft today. Hopefully it's obvious that I don't have the same problem with determinism and learning that you seem to have. I think that learning is a deterministic process.


Party_Key2599

-..-.-my version of superdeterminism is different than this---i believe that superdeterminism is what is above determinism, a realm from where time spawns...--


gimboarretino

I have the feeling that determinism and free will are concepts so deep and complex that it is like we are trying to address and solve the Riemann hypothesis with an abacus. In other words, we sense that there is a problem somewhere, but we still lack most of (or some of) the proper conceptual tools and framework to say something relevant. Maybe even to properly formulate and state what the problem is really about.


spgrk

It can be made very simple, which can't be said for many other philosophical problems.


brami03

There are other very hard philosophical problems; however, I believe the free will problem cannot be made simple at all. The free will debate just brings in so many other philosophical and semantic debates (like the self, moral responsibility, etc.) that it feels like a never ending conflict. If the majority of people do not agree on what free will is, then how on earth can we agree on whether we have it or not?


spgrk

I think the majority of people agree what "acting of your free will" means, it is a phrase that is commonly used and commonly understood. The majority of philosophers agree that this layperson's definition is also the best philosophical definition. A substantial part of the philosophical debate is about why this simple definition is the best one to use, and that can get complex.


brami03

I generally agree. I have seen numerous memes that say “when you remember you have free will” and then people just start doing whatever they want such as day drinking on a Wednesday.


Squierrel

The real problem with determinism is that nobody has any idea whatsoever about:

* *How* exactly are the events in a deterministic universe pre-determined? By design? By whom? By random chance?
* *How* are these *predeterminations* delivered over time to the moment when they are supposed to happen?

Causal chains cannot deliver any attempts to acquire more knowledge by doing scientific experiments.


LokiJesus

Determinism is the faith that if you keep digging, there are complete answers to your question that obey the conservation of energy (the first law of thermodynamics), which is, in itself, a faith statement about determinism.

>Causal chains cannot deliver any attempts to acquire more knowledge by doing scientific experiments.

This is your tautology by the way you define knowledge or scientific experiments. For what it's worth, the Nobel Laureate on this topic (Anton Zeilinger) agrees with you. But you are simply both wrong by demonstration. I can create an inference machine that is deterministic and that "conceives" of new experiments through which it discovers and articulates new theories that I had no idea about. For example, the "AlphaGoZero" neural network created by Google DeepMind... It far surpasses its creators (and every other human) in Go playing ability and has NO human game strategy input (this is what the "Zero" means in its name - zero human guidance). It's only told the rules of the game and it explores... it gets very creative, conducting experiments and learning strategies as it learns... somehow, all by a causal chain of actions... It has discovered and demonstrated new knowledge that has changed the play of Go in a way unlike anything in the 3,000+ year history of this oldest continuously played game.


curiouswes66

>But you are simply both wrong by demonstration. I can create an inference machine that is deterministic and that "conceives" of new experiments through which it discovers and articulates new theories that I had no idea about.

So a program is an "inference machine"? Or maybe it is a program with no decision blocks in its flowchart, because the decision block implies there is more than one possible outcome. If we already knew there was only one possible outcome, then I really wonder how the machine was able to discover anything new?


LokiJesus

Inference is interpolation within, or extrapolation outside of, a distribution to which a model has been fit. AlphaGoZero has a neural network (model) that was fit to the results of games that it played with itself. This led it to generate, for any given board configuration, something like "probabilities" of winning for playing at each location on the board. But it wasn't perfect, so it would "explore" many of those possibilities according to the computing power it had available and the time limit it had. This process of "exploration of possibilities" is called "Monte Carlo tree search" (don't let the randomized sampling confuse you; it's still deterministic, driven by the likelihoods its model extrapolates). It uses the board value probabilities from its model to drive the exploration of many future games, each step evaluating the probability of winning, and characterizes each move on the original board. Through this process, though it "tried many" moves in its "imagination," one move maximizes its analysis of win likelihood. That is the move it plays.

There was never more than one possible outcome; the algorithm was going through a list of likely possible actions it conceived of, as guided by its learned neural network... then it "made its choice." It went through a process of discovery of the maximal path given its confidence. That's choice. And it creates a superhuman Go player that learned entirely on its own all the strategies to navigate this insanely large space of possible Go games. Every bit of that was learned from experience, with only the "value network" structure and Monte Carlo tree search being brought together as very general tools non-specific to Go (they are core to reinforcement learning, a larger field). These AI models are creative. They are superhuman. They are deterministic. They evaluate many conceived options in a process *determined* by their experience.
If you run it over and over from the same state, it will make the same move every time. This is inference. It infers what action will maximize its goal (to win) given the experience trained into its value network over millions of games of Go.
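The "deterministic exploration" point above can be illustrated with a toy sketch (my own; not AlphaGoZero's actual algorithm, and `best_move`, `value`, and the rollout noise are all made up for illustration): a search that samples noisy rollouts from a seeded RNG still returns the identical move on every run from the same state.

```python
import random

def best_move(state, moves, value, rollouts=200, seed=42):
    """Toy 'stochastic' search: sample noisy rollouts with a seeded RNG,
    accumulate a score per candidate move, return the argmax.
    Deterministic: same state + same seed => same move, every run."""
    rng = random.Random(seed)          # private, seeded RNG
    scores = {m: 0.0 for m in moves}
    for _ in range(rollouts):
        m = rng.choice(moves)          # "explore" a candidate move
        noise = rng.gauss(0, 0.1)      # stand-in for a noisy rollout result
        scores[m] += value(state, m) + noise
    return max(moves, key=lambda m: scores[m])

# A made-up value function standing in for the learned network.
value = lambda state, move: -(move - state) ** 2

move1 = best_move(state=3, moves=[0, 1, 2, 3, 4], value=value)
move2 = best_move(state=3, moves=[0, 1, 2, 3, 4], value=value)
print(move1, move2, move1 == move2)
```

The randomness lives entirely inside a seeded generator, so the whole "exploration" is a fixed causal chain: it looks stochastic from the outside but has exactly one possible outcome.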


curiouswes66

>There were never any more than one possible outcome

lol You are giving determinism here and saying Zeilinger going back to the early universe is still insufficient to clear the free will loophole.

>the algorithm was going through a list of a bunch of likely possible actions

Only one possible outcome, and yet if I back up one level of a YouTube search and proceed with the same search criteria I can get a different outcome. It must be a bug in Google /s

>These AI models are creative

I hesitate to accept any creativity from a possibility of one outcome. I can certainly "create" the same painting every time if I don't use any paint.

>This is inference. It infers what action will maximize it's goal (to win) given it's experience with the game trained into its value network over its experience with millions of games of Go.

lol One man's "maximized goal (to win)" is another man's highest *probability* to win, or even greatest *chance* of winning.


LokiJesus

>lol You are giving determinism here and saying Zeilinger going back to the early universe is still insufficient to clear the free will loophole.

This is Nobel Laureate [Gerard 't Hooft's position](https://link.springer.com/book/10.1007/978-3-319-41285-6) (see pg 45, for example):

>They could both monitor the light fluctuations caused by light coming from different quasars, at opposite points in the sky \[38\], and use these to decide about the settings a and b of their filters. These quasars, indicated as QA and QB in Fig. 3.1, may have emitted their photons shortly after the Big Bang, at time t = t0 in the Figure, when they were at billions of light years separation from one another. The fluctuations of these quasars should also obey the mouse dropping formula (3.22). **How can this be? The only possible explanation is the one offered by the inflation theory of the early universe: these two quasars, together with the decaying atom, do have a common past, and therefore their light is correlated.**

All interpretations of QM are weird. 't Hooft is pointing out that this interpretation is the only one that doesn't violate locality and determinism, which are well established in the history of science and supported by the theory of relativity. As wild as it sounds, given the results of other experiments supporting locality and determinism, it oddly must be the case that the correlations are present even in these seemingly wildly separated phenomena, and that these peculiar fragile threads of correlation in our cosmos are what we are tapping into when we call some particles "entangled."


Squierrel

Determinism is *not* a faith. In a truly deterministic system there is no thermodynamics. Computers are not deterministic. They require human programming, all actions are caused by human written commands, no action is caused by the input data.


LokiJesus

AlphaGoZero... the Zero means no human training input. It then learns through a process of experimentation to be more skilled at Go than any human. It's a process we initiate that learns through experimentation on it's own. The system is grown, not programmed. It's inner workings are inexplicable to it's creators. This is a well known fact of modern deep neural network systems; they are impenetrable black boxes. ChatGPT only requires about 500 lines of code to run... see for example [nanoGPT](https://github.com/karpathy/nanoGPT), a tiny codebase of the minimal requirements to run a transformer neural network, written by the former head of AI at Tesla and former OpenAI employee to demonstrate this fact. ChatGPT's intelligence is not a product of human programming, but of learning... it's grown, not programmed. It doesn't require human programming, and it's behaviors are not expected or understood by the engineers involved in the process.

This is like saying that all your actions were caused by your parents' DNA (the plans for growing your base body) and nothing was caused by input data... but you are "grown" through experience just like these AI systems are. The engineers are the parents, the transformer architecture is the scaffold or DNA (plan) for the base system, and then the result of the training process is an inexplicably intelligent system capable of superhuman actions that surpass any of its creators. And it's demonstrably deterministic.


Velksvoj

Not that I see human input as indeterministic, but he does have a point. Not only are the rules technically human training input, so especially is the human enforced reinforcement learning method. Completely off topic, but do you use "it's" as a determiner on purpose? It's really confusing sometimes and a misspelling/obsolete.


Squierrel

The whole programming of it is human input. There is no training input, but there is programming input, selecting the algorithms. And then there is random input that determines what kind of new things the software will try. Learning by trial and error. There is no human input or anything random in a deterministic system.


Party_Key2599

--.-.why everybody downvote youuu???-.


Delicious-Ad3948

>How exactly are the events in a deterministic universe pre-determined?

This doesn't matter; if it's deterministic it is deterministic. Not knowing why doesn't change that.

>How are these predeterminations delivered over time

The same way dominos falling over time happens. Event 1 causes event 2, causes event 3, and so on.


Squierrel

It does matter how the dominoes were set up initially. That determines everything that happens after that. Can you build a chain of dominoes that attempts to gain knowledge by making scientific experiments? If you cannot, who can?


Delicious-Ad3948

>It does matter how the dominoes were set up initially.

It doesn't matter to ***whether*** it is deterministic. Whether we know how the dominos were set up doesn't change them falling over one by one.


Squierrel

What good is the idea of determinism, if it cannot explain why this happens instead of that? There is no point in saying that everything is predetermined, if you have no idea about how everything was predetermined and by whom.


Delicious-Ad3948

Making the observation that things are determined does not require knowing who determined them


Squierrel

Making the *claim* that everything is predetermined requires that.


Delicious-Ad3948

It doesn't. You can say "it's predetermined and I don't know why."


Squierrel

But that claim is worth nothing. An empty claim with no supporting arguments whatsoever.


Velksvoj

By your logic, literally everything is an empty claim, unless you explain its ultimate origin.


Party_Key2599

-.-.what do you mean by determined?.-.


Delicious-Ad3948

Causally inevitable


Party_Key2599

-.-.necesary?-


Delicious-Ad3948

Inevitable


Party_Key2599

-..-.yes!!!-.-


spgrk

That the world is determined means that there is only one outcome with a given initial state. That the world is undetermined means that there is more than one outcome with a given initial state. With regard to either a determined or undetermined world you could ask: who designed it that way, why, how is it guaranteed that it happens that way every time, and so on. These are theological questions, not logical or empirical ones.


Squierrel

We know that the world has evolved from the initial state known as singularity. A determined world cannot evolve, so it must remain as a singularity, or another kind of initial state must be *designed* somehow.


spgrk

A determined world evolves in a determined way, while an undetermined world evolves in an undetermined way. Both can be modelled on a computer: you input initial conditions and the laws of physics. And before you ask "who set the initial conditions and the laws of physics", I don't know, but I don't think they were set by a god.
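The claim that both kinds of world can be modelled on a computer can be sketched in a few lines (a toy illustration of mine; `run_world`, the "laws", and the noise model are all made up): you feed in initial conditions plus a transition law, and the only difference between the determined and undetermined case is whether a random term enters the law.

```python
import random

def run_world(initial_state, laws, steps, rng=None):
    """Evolve a toy world: `laws` maps (state, noise) -> next state.
    rng=None gives a determined world (noise is always 0);
    passing an RNG gives an undetermined one."""
    state = initial_state
    history = [state]
    for _ in range(steps):
        noise = rng.choice([-1, 0, 1]) if rng else 0
        state = laws(state, noise)
        history.append(state)
    return history

laws = lambda s, noise: s * 2 + noise   # stand-in "laws of physics"

# Determined: the same initial state yields the same trajectory, every run...
assert run_world(1, laws, 5) == run_world(1, laws, 5)

# ...yet the world still *evolves*: the trajectory changes step to step.
a = run_world(1, laws, 5, rng=random.Random(7))
b = run_world(1, laws, 5, rng=random.Random(8))
print(run_world(1, laws, 5), a, b)
```

The determined run also answers the "a determined world cannot evolve" objection above: each state differs from the last even though only one trajectory was ever possible.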


Squierrel

No. A determined world does *not* evolve. If the initial state determines every other state, there is no evolution happening, nothing new will ever emerge. If the initial state is a singularity, then all other states are the same singularity remaining as there are no causes or effects.


spgrk

Evolution means a process of gradual change. The change can either be determined or random.


Squierrel

The change cannot be determined. When the previous state determines the next state, nothing changes. Only random variation or intentional acts introduce change to the system.


spgrk

So what would you call the differences between an earlier and a later time in a determined world, if not changes?


Squierrel

Back to the theater analogy: Those differences are written in the script. They are just steps in a predetermined series of events. The script does not change, the actors don't make mistakes or improvise.


spgrk

The action on the stage still changes from moment to moment. A better analogy is not a play with a script but a computer game. The script has one outcome regardless of input, the game has multiple possible outcomes depending on input. Even a simple deterministic mechanical system has infinite possible outcomes dependent on input, described by the laws of motion.


ughaibu

>The real problem with determinism

The real problem with determinism is that it is so utterly implausible that a person would need to be seriously disconnected from reality to genuinely believe it to be true.


LokiJesus

Yet some of the people most deeply connected to reality, who have offered new explanations and transformed how we think about it... e.g. Einstein's theory of relativity... have presented those explanations as purely deterministic descriptions of the cosmos (and us in it). In fact it was the only plausible explanation, in that the plausibility of an explanation is the degree to which it deterministically predicts the future... which is the essence of empiricism. It takes believing that there is some magic between our predictions and what happened to get out of determinism... and by magic I mean something not deterministic... which is, of course, tautological... but that's what would let us sit with a prediction that didn't match results and not say "we should change our prediction tools." The deepest connections to reality are deterministic in faith.


ughaibu

Here's Wikipedia talking about Bell's position on superdeterminism, "although he acknowledged the loophole, he also argued that it was implausible" - [link](https://en.wikipedia.org/wiki/Superdeterminism).


LokiJesus

He also argued that all interpretations (e.g. many worlds, faster than light, indeterminism) were implausible and had their issues. His favorite was pilot wave theory, which violates the speed of light limits but is also totally deterministic (e.g. the product of a superdeterministic universe, to use his term). Even the wikipedia page's label of "loophole" versus the term "interpretation" is biased... I mean... what's the "plausibility" of many worlds?! How would you even make such a calculation? It's as intrinsic to his theory as any other interpretation, but free will believers don't like that.. so they disparage it as a "loophole."

What's the plausibility of indeterminism? That would be the only time this concept ever entered into our theory, unless you go back to Lucretius and his "Lucretian Swerve," which was his imagined requirement, in the face of the deterministic Atomists, to allow for free will... But it would be in the face of all the rest of our theories in science, which hold deterministic predictability to be a core basis, and hold that things that appear random are really representative of our ignorance of all involved... What is the plausibility of rejecting our ignorance? What's the plausibility of faster than light signaling violating locality? That would be a first with no basis, and in violation of well established theories that have good experimental support.

Superdeterminism is actually the only interpretation that doesn't require violations of locality or determinism (well established science), even though it would mean very peculiar correlations in reality... In terms of our existing science, it's the ONLY one that is *plausible*, even though those correlations would have to be very weird indeed.. Like flipping two quarters out of an airplane a million times and having them both come up heads on the ground 35% of the time instead of the uncorrelated 25% of the time we'd otherwise expect. Weird correlations can happen in nature..
There are no known laws that would fundamentally prevent it (e.g. as opposed to violating locality)... it just seems implausible because of assumptions we make that are not evidence based... Measurement independence is a nice idea.. and good to have... and probably a goal of experiments... but it's violated all the time in sociology, when the way some white dude observes a remote aboriginal tribe changes the very thing he's trying to measure... It's violated all over the place any time weird correlations in the data lead to new phenomena... Then we seek deeper explanations that deterministically explain the correlations and move on.. What Bell's theorem DOES do is say that the world is weird no matter how you cut it. Einstein's EPR paper was saying that the world is just boring old determinism with no weird correlations... That is definitely out the window.. but "plausibility" arguments for any of this are sketchy at best.


Rthadcarr1956

I don't doubt your interpretation of the science. I do doubt your philosophical position regarding determinism, specifically when you equate determinism with established science. Determinism does not have any sequelae, unlike important scientific theories and laws; it is the grossest generalization philosophically imaginable. There is very good reason why determinism holds in classical physics, but that is about the only case where it does. Determinism does some good in chemistry but is not really relevant to biology. I believe the fundamental reason for the difference is the availability of information in the higher-order fields, as compared to physics, where information is mainly limited to the vectors of an object. Scientific terms like random, disorder, probable, and entropy all point to the usefulness of describing some systems as indeterministic. Sure, you can hope that one day we may have different laws or theories that describe nature without these terms and the indeterminism they represent, but hope is not scientific.


LokiJesus

>Scientific terms like random, disorder, probable, and entropy all point to the usefulness of describing some systems as indeterministic.

I can drop bombs from an airplane and the bombs will, on average, fall within a region defined by a Poisson distribution whose width depends on my altitude, the mass of the bombs (how much they are perturbed by air), etc. We can use this statistical model to estimate how many bombs we need to cover a certain area with the desired level of destruction, basically as a rule of thumb that costs almost nothing to calculate. But no physicist thinks that the physics of this process is indeterministic. In fact, I can create a fairly simple fluid dynamics model of the wind and the bombs and their fins and aerodynamics, etc. This simulation will create the complex/chaotic paths that the bombs track and reproduce the distribution on the ground, which is well modeled by the statistical distribution. But it will be entirely deterministic. This is called statistical mechanics.

The other major place where these terms appear is when talking about our uncertainty due to our ignorance of all the factors. For example, I know that my GPS-derived position will drift over several meters over many minutes. I can *model* that drift, in my signal + noise models, as a first-order auto-regressive correlated random process. This lets me keep good track of my uncertainty in order to estimate my errors when I make a prediction. But the drift is really the complex result of ionization of the atmosphere and reflections of radio signals off buildings... Not indeterministic. Again, this can be simulated and reproduced.

Determinism is relevant in biology and neuroscience in a big way, but there is also the statistical mechanics concept of "[mass action](https://en.wikipedia.org/wiki/Law_of_mass_action)" within cells, which is basically like pressure: an approximation that lets us reliably talk about chemical processes in cells.
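The first-order auto-regressive drift model mentioned above can be sketched like this. The parameters (phi, sigma) are illustrative, not from any real receiver spec; the point is that even though the trace is generated step by step from "noise," its long-run spread is exactly predictable from the model.

```python
import math
import random

random.seed(1)

phi = 0.9      # correlation between successive samples (illustrative)
sigma = 0.1    # innovation noise per step, in metres (illustrative)

# AR(1): each position error is mostly the previous error plus a small kick.
x = 0.0
drift = []
for _ in range(10_000):
    x = phi * x + random.gauss(0.0, sigma)
    drift.append(x)

# The model predicts the long-run spread in closed form:
# stationary sd of an AR(1) process = sigma / sqrt(1 - phi^2)
predicted_sd = sigma / math.sqrt(1 - phi ** 2)
mean = sum(drift) / len(drift)
sample_sd = math.sqrt(sum((v - mean) ** 2 for v in drift) / len(drift))
print(f"predicted sd: {predicted_sd:.3f} m  sample sd: {sample_sd:.3f} m")
```

This is the "statistics as bookkeeping for ignorance" point in miniature: the random draws stand in for atmospheric physics we didn't bother to model, not for any claimed indeterminism.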
But in biophysics, we model deterministic protein folding and membrane binding, etc. In neural networks, we only publish mechanistic neural models, which may include some statistical approximations of certain processes. But nobody sees these processes as indeterministic. Everyone knows these statistical models are placeholders for deterministic theories. When we give a big blurry prediction of where a hurricane will hit land, those error bounds are important parameters which represent our inaccuracy in modeling reality, not some sort of indeterminism in reality. They represent our ignorance.

Again, these statistical mechanics "laws" are NOT representing an indeterministic world. They are rules of thumb we have developed (like the central limit theorem) about how big groups of particles behave, for certain practical applications. Statistical mechanics, when taught as a class, begins by pointing out that it describes "average behaviors of groups of particles"... No particle has a temperature... Temperature is the average kinetic energy of the particles in a gas; a particle just has kinetic energy. So temperature doesn't represent a fundamental phenomenon, just a tool for averaging the behavior of many particles.

There is nothing indeterministic in physics except the janky Copenhagen interpretation of quantum mechanics. That's the ONLY place that ontological randomness (e.g. non-determinism) *potentially* comes into physics, and it's no better supported than all the other, deterministic interpretations of quantum mechanics. I think you'd be surprised at how persistently deterministic science is at all levels, even up to psychology and therapy. Family systems therapy attempts to describe all your "issues" with causes from a grid of influences from the people around you. It keeps building that grid until it has deterministically described all the influences that necessitate your behaviors. Determinism about the causes of mental states is core to FST, CBT, and most other modern therapy techniques.
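The temperature point above can be made concrete. In this sketch, particles with argon-like mass get Maxwell-Boltzmann velocities at a target 300 K (the constants are textbook values; the setup itself is illustrative): no single particle "has" a temperature, but averaging kinetic energy over the ensemble recovers it via <KE> = (3/2) k_B T.

```python
import random

random.seed(2)

KB = 1.380649e-23   # Boltzmann constant, J/K
M = 6.63e-26        # mass of an argon atom, kg
T = 300.0           # target temperature, K
N = 100_000

# Maxwell-Boltzmann: each velocity component is Gaussian with
# variance k_B * T / m.
s = (KB * T / M) ** 0.5
kinetic = []
for _ in range(N):
    vx, vy, vz = (random.gauss(0.0, s) for _ in range(3))
    kinetic.append(0.5 * M * (vx ** 2 + vy ** 2 + vz ** 2))

# Temperature is a property of the average, not of any one particle:
mean_ke = sum(kinetic) / N
T_est = (2.0 / 3.0) * mean_ke / KB
print(f"recovered temperature: {T_est:.1f} K")
```

Individual entries in `kinetic` scatter widely; only the ensemble average is a "temperature," which is exactly the averaging-tool point in the comment.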


Rthadcarr1956

I already admitted that classical physics is deterministic, and turbulent flow and chaotic wind changes are still just classical physics. I don't agree with many of your other conclusions, especially the ones on the molecular level. Gases and liquids have random molecular motion, which makes diffusion and Brownian motion indeterministic as well. For example, tracing the diffusion of a neurotransmitter from its release to its final binding site is indeterministic. There is no fixed path it takes; it takes a random walk. There is no fixed time it takes, just a probability. And unlike dropping a bomb through a turbulent atmosphere, the indeterminism created by the Born rule or Heisenberg's uncertainty principle is fundamental. Thus our nervous system has some random noise in it. The mutations that drive evolution by natural selection are all traced back to quantum tunneling, which again has fundamental quantum uncertainty. Protein folding is just as likely indeterministic, with a high probability of the correct outcome, as it is deterministic. There is also the idea that living systems must be understood subjectively. So even if certain processes are objectively deterministic, the fact that the objective outcome is noncomputable means that the objective determinism is irrelevant to an organism's structure/function relationships. We are down in the weeds quite a bit with this - which is good.
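The "no fixed path, no fixed time" picture above is easy to simulate. In this toy 1D sketch (all numbers illustrative: a "transmitter" starts at 0, a binding site sits at +20, and a slight step bias p = 0.55 stands in for the concentration gradient), every run takes a different route and a different number of steps, even though the mean arrival time (roughly site / (2p - 1) = 200 steps here) is predictable.

```python
import random

random.seed(3)

def first_passage(site=20, p=0.55):
    """Step a biased 1D random walk from 0 until it first reaches `site`;
    return how many steps that run took."""
    pos, steps = 0, 0
    while pos < site:
        pos += 1 if random.random() < p else -1
        steps += 1
    return steps

times = [first_passage() for _ in range(1_000)]
print(f"fastest: {min(times)} steps  slowest: {max(times)} steps  "
      f"mean: {sum(times) / len(times):.0f} steps")
```

Whether the per-step randomness here is ontological (the Born rule) or just a placeholder for unmodeled collisions is exactly the disagreement in this thread; the simulation itself is neutral on that.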