Tuesday, April 8, 2014

A quick philosophical thought experiment

Everyone knows about the brain-in-a-vat scenario, where all of a person's experiences are actually courtesy of a kind of simulated universe, fed to them by a scientist. The usual questions kicked around are: how do you know you are not a brain in a vat, how do you know solipsism is not true, etc., etc.

I have another question: does the possibility of your being a brain in a vat at all increase if you live next door to a group of scientists with brains in vats? Does the technological capability to enact this situation with any given person influence the likelihood we should assign to our being brains in vats?

29 comments:

Vand83 said...

I immediately thought of the movie "The Thirteenth Floor" when I read this. Are you familiar with the movie's premise? If so, doesn't the movie's conclusion present the same dilemma to the main character?

Crude said...

Never saw it. Or heard of it. It doesn't surprise me that the question isn't original, but I haven't seen much discussion of this.

I'll google around for it.

malcolmthecynic said...

My first response is to answer "Yes", because in your scenario we have reason to believe that such a thing is actually possible. Right now we have no actual reason to believe such a thing even CAN be done - only that it's not self-contradictory, and is a fun thing to talk about on the internet and in philosophy class.

I feel like there's more to this question than what meets the eye, though.

Crude said...

Well, I'm specifically talking about such a scenario. Not really going anywhere with this intentionally, I just am wondering about it.

Gyan said...

Sound philosophy starts from things (Father Jaki). That things are.
Solipsism is a disease of the mind, of reason. It cannot be disproved, cannot be argued out of, since reason itself is sick.

I realized the truth of Jaki's point while reading Russell's The Problems of Philosophy, where he argues for idealism using a table as an example: modern science shows that the table is really not what it seems (solid, having particular thing-like features). The point is that Russell wrote a BOOK and I was reading a BOOK. The reading and doing of philosophy presumes the existence of things such as BOOKS, and engaging in debates and arguments presumes the existence of other minds.

So it is weird to write a BOOK saying that things are not basic and we only know sense impressions.

lotharlorraine said...

Hello Crude, this is a terrific question, and the answer depends entirely on one's concept of probability.


According to frequentism, the probability of being in a simulated reality is defined by the ratio (in a theoretical infinite multiverse) between the number of simulated universes and the total number of universes: p(Sim) = lim N(Sim) / N(Uni) as the number of universes goes to infinity.

In practice, p(Sim) = f(Sim) + error. So before the observation p(Sim) = 0 + error; after it, p(Sim) = 0.5 + error′.

Since the new sample is so small, we have absolutely no idea how big the error still is.
So this would not change our epistemic situation much.
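
A minimal Python sketch of that point (my own illustration, on the assumed reading that the observation counts as one simulated world in a hypothetical sample of two, so f(Sim) = 0.5). A standard Wilson score interval shows just how little such a tiny sample pins down:

```python
import math

def wilson_interval(successes, n, z=1.96):
    """95% Wilson score interval for a binomial proportion."""
    if n == 0:
        return (0.0, 1.0)  # no data at all: anything from 0 to 1 is possible
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    margin = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return (max(0.0, centre - margin), min(1.0, centre + margin))

# One simulated world observed in a (hypothetical) sample of two worlds:
# f(Sim) = 0.5, yet the 95% interval is roughly (0.09, 0.91).
print(wilson_interval(1, 2))
```

The interval covers most of [0, 1], which is just the "error" term swallowing the estimate.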


According to Bayesianism, the probability of being in a simulated reality is defined as the degree of belief a rational agent ought to have that this is the case.
There are many forms of Bayesianism out there.
According to the one utilized by the extraordinarily intelligent Dr. Richard Carrier, the degree of belief is always exactly equal to the observed frequency, no matter how small the sample.
So after the observation it would have evolved from 0 to 0.5.
The biggest problem with this system is that it takes no account whatsoever of our overwhelming ignorance, which is why the new number has very little significance (if any).
This is of course one of the main problems with his ideological use of Bayesianism for disproving Christ's existence.

Objective Bayesians use the principle of indifference (which I criticized here) and would say that in a situation of utter ignorance the prior probability of each possibility is equal, thus p(Sim) = p(nonSim) = 0.5.

As I explained in the link just above, this is akin to magical thinking since it turns complete ignorance into very specific knowledge.

After the observation of the simulating brain, they would try to update the probability using Bayes' theorem. The problem is that p(our reality is a simulation GIVEN there is a simulation in our reality) isn't objectively defined.
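
To see that problem in miniature, here is a hedged Python sketch (the likelihood values are arbitrary placeholders, precisely because nothing objectively fixes them): starting from the indifference prior of 0.5, Bayes' theorem sends the posterior wherever the chosen p(E | Sim) points.

```python
def posterior(prior, p_e_given_sim, p_e_given_not_sim):
    """Bayes' theorem: P(Sim | E), with E = 'we observed envatted brains'."""
    num = p_e_given_sim * prior
    return num / (num + p_e_given_not_sim * (1 - prior))

# The problem leaves P(E | Sim) undefined, so try a few arbitrary values
# and watch the posterior swing with them:
for p_e_given_sim in (0.1, 0.5, 0.9):
    print(p_e_given_sim, round(posterior(0.5, p_e_given_sim, 0.5), 3))
# prints 0.167, 0.5, and 0.643 respectively
```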


In either case, I don't think this discovery would have such a huge impact.

Crude said...

According to frequentism, the probability of being in a simulated reality is defined by the ratio (in a theoretical infinite multiverse) between the number of simulated universes and the total number of universes: p(Sim) = lim N(Sim) / N(Uni) as the number of universes goes to infinity.

Man, I gotta say, this sounds a whole lot like 'Let's just make up numbers.'

According to the one utilized by the extraordinarily intelligent Dr. Richard Carrier,

Since when is he extraordinarily intelligent? Last I checked he was a windbag who got shown up by the McGrews the last time he tried to get cocky about Bayes. He's the atheist's version of the 'I can prove God exists using just this banana!' man.

So after the observation it would have evolved from 0 to 0.5

Why .5? Why not higher, if there are multiple brains in vats?

Also, there exist plenty of computer simulations of reality right now - just rather small parts of it. Do I have to include all of those programs in my estimation of whether I'm living in a simulated world? Because if so it seems like I have to believe it.

Either way, I think the big lesson here may be 'it's a mistake to try and spit out exact numbers for probability with regard to metaphysical questions'. Rather leaves us in the dark in some capacities, but I think we can manage that.

aporesis said...

I think most people would intuitively side with Malcolm on this one, but I have to say that I am a little unconvinced. Wouldn't the idea of other brains in vats merely indicate the conceivability of the notion, rather than show that it is a realistic possibility? (Which, given the fact that we are discussing it at the moment, is not really a major point.)
If I really were a brain in a vat, there would be no reason to suppose that those feeding me my perceptions would actually do so based on ideas about the real technology that was keeping me alive, so there would be no reason to suppose the validity of these perceptions. There seems to be no real link between there being other brains in vats and me being one.
But it is an interesting question and, like many things, it perplexes me.

Craig said...

Surely, this would increase the probability: as Malcolm says above, proving that the scenario is physically possible would be a point in its favor.

Depending on what other changes go with the scenario, the effect might be dwarfed, increased or counteracted. Can we talk to these brains in vats? What do they say it's like in there? For that matter, if our science has increased enough to make that possible, we must have made a bunch more discoveries: which of them are relevant to the question?

(You know, I suppose that means that if a group of scientists with brains in vats moved in next to me tomorrow, I'd have to increase my estimate of being in a simulation *drastically*. The world arguably makes more sense as a comprehensive fake than if it's got scientists like that in it and I've never heard of them until now.)

malcolmthecynic said...

Wouldn't the idea of other brains in vats merely indicate the conceivability of the notion, rather than show that it is a realistic possibility? (Which, given the fact that we are discussing it at the moment, is not really a major point.)

Right now, all we know is that the notion is not contradictory. We don't have any reason to believe it's possible - but if such things actually exist, we WOULD know it's possible.

Crude said...

I'm extremely sympathetic to the idea that having that technology in your world - indeed, within walking distance - would raise the probability that you were a brain in a vat. The magnitude of that increase is another issue (as you can see in my discussion with Lothar, I'm pretty skeptical about assigning numerical figures to this sort of thing). But 'increase of some kind'.

But here's one problem I have with this particular kind of reasoning.

If I AM a brain in a vat... then aren't I mistaken about that technology existing in 'my' world? My world is created by that machine.

I'm not sure this is a major problem, but something interesting seems to be going on there.

aporesis said...

Yes, that was the point I was trying to make (badly).

lotharlorraine said...

I was obviously being a bit ironic about Carrier :-)

I do believe he is extraordinarily arrogant and constantly overestimates himself.

He needs to be debunked, not interacted with, since he has repeatedly proven to be unwilling to seriously consider critiques of his cherished ideas.

Crude said...

Lothar,

Ah, sorry, I thought you were serious! My mistake.

malcolmthecynic said...

If I AM a brain in a vat... then aren't I mistaken about that technology existing in 'my' world? My world is created by that machine.

I'm not sure this is a major problem, but something interesting seems to be going on there.


I thought of that, but rejected it as an objection because it amounts to "If I'm a brain in a vat then I have no more reason to think I'm a brain in a vat if I know it's possible for there to be other brains in vats".

In other words, in order for it to work as an objection you need to assume you're a brain in a vat. And then what are you objecting to? It's not "more likely" that you're a brain in a vat; you ARE a brain in a vat.

So we're back to "the odds become higher" again.

Mr. Green said...

Crude: does the possibility of your being a brain in a vat at all increase if you live next door to a group of scientists with brains in vats?

Well, strictly speaking, either it's possible [technologically, at some point in time] or it's not. Knowing about it won't change that. And since the only way to sensibly assign likelihood would be based on whether it's physically or metaphysically possible, it seems not.

So I agree with Lothar. (And I got the jab at Carrier. People assigning percentages when they have nothing to base the numbers on is also a pet peeve of mine. There's nothing wrong with just saying, "I don't know, there's insufficient data!")

If I AM a brain in a vat... then aren't I mistaken about that technology existing in 'my' world? My world is created by that machine.

Exactly! If you have evidence that scientists are doing it, then it's possible. If you only think you have evidence, because you're being manipulated by vat-scientists... then it's also possible. If you don't have evidence — then you've got no basis to form a judgement. Sure, you might think that if science were getting close, you'd at least have heard something about it, but as you note, if it is possible, then we can't trust any evidence we think we have. So either you have evidence that the technology exists or else... you don't have any evidence, period. There's nothing to contrast it with, so no way to calculate a reasonable probability.

(This is of course an interesting companion to Bostrom's simulation scenario. You can argue that intellects are immaterial and thus cannot be simulated, but minds can have simulated experiences, which is close enough to the same thing. In the end, it just shows we can never be more than conditionally confident about anything empirical. Take that, Science™!)

Crude said...

Green,

Well, I wonder about that. To use a comparison example - 'Is it reasonable to suspect I may be hit by a car tomorrow?' The answer seems to be very different granting each of these individual cases:

A) The existence of cars is logically and technologically possible.
B) It's possible, and at least one car exists.
C) It's possible, and there are cars all over the place.

Shouldn't - however marginally - the concern be going up from A-C rather than staying level? And if so, then shouldn't it be going up between these cases:

A) The existence of brains in vats is logically and technologically possible.
B) It's possible, and at least one brain in a vat exists.
C) Ronco's selling these machines on TV for $49.99.

Mr. Green said...

Crude: A) The existence of cars is logically and technologically possible.
B) It's possible, and at least one car exists.
C) It's possible, and there are cars all over the place.


Hm. Or D) cars are all over and you're a brain in a vat... (I'd add E) driving over envatted brains is the national pastime — but let's not get silly!)

It does seem like it should be parallel, but I think the difference is that the invention of automobiles does not change the rules of evidence. If cars become technologically possible and you still don't see any, then it's reasonable to assume that there aren't many around (given various other assumptions, of course).

But the brain-vats are more Heisenbergian. As soon as they become technologically possible, then it undermines any evidence you might have for or against them. If cerebral vats are possible, then you might not see any because you're in one!
Mail-order brain vats only increase the chances if they're real and not simulated vats on simulated TVs... and we can't know that without begging the question.

Crude said...

Green,

Good point about the different nature of the question between 'cars' and 'vats'. Let me ask you a different question.

Does that same nature now change whether or not you can recognize if a brain in a vat is, in fact, technologically possible? I take your initial reply of 'once they are technologically possible, then it undermines any evidence you might have for or against them' to mean 'for or against being a brain in a vat', but now I'm asking whether their technological possibility undermines observation of their technological possibility.

If so, would that mean your position would be that the only way to determine technological possibility would be by a priori philosophical reflection?

malcolmthecynic said...

But the brain-vats are more Heisenbergian. As soon as they become technologically possible, then it undermines any evidence you might have for or against them. If cerebral vats are possible, then you might not see any because you're in one!

This seems really muddled to me. What you're essentially saying is that the odds of us being a brain in a vat don't go up because we might be a brain in a vat!

But we're not actually trying to figure out the probability of us being brains in vats. Of course that's technically impossible to change - we're either brains in vats or not. If we are a brain in a vat, then the probability of us being a brain in a vat is 100%, whether we know it or not.

But that was not the question. The question is whether or not this should influence the likelihood WE should assign to our brains being in vats.

I'm going to stick to my guns here after some reflection and say YES, absolutely, we should now assign a higher likelihood to the possibility. Technically the chances haven't changed but from our perspective a potentiality is now a real possibility.

I'm having trouble putting it into words, but basically what I'm saying is that I find "we might be brains in vats" a very odd thing to say when you're talking about NOT increasing the possibility of us being brains in vats. Something is off in that reasoning, even if I'm having trouble articulating what.

Crude said...

If nothing else, this is a very fun conversation. I'm thankful to all taking part in it this far.

Water into Whine said...

"Solipsism is a disease of mind, of reason. It can not be disproved, can not be argued out of, since the reason itself is sick. "

But then surely the disease is one shared by you, given that you can't disprove or argue yourself out of it.

And, strictly speaking, a philosophy which begins from 'things' begins from appearances, or positivistically, which was the problem with Hegel for example.

Crude said...

But then surely the disease is one shared by you, given that you can't disprove or argue yourself out of it.

I suspect one may say that it doesn't need to be disproved or argued out of - we can just bypass it altogether and still be reasonable, in fact better for it, despite the fact that we are very simply just discarding the idea and moving on with a different set of axioms.

Andrew W said...

In order to have a probability (or a "likelihood"), you need to have a model, even a weak one. The problem in this case is that one of two things is true:

(1) we have good information about the "real world"
(2) we have *no* information about the "real world".

The real question here is "what would constitute good evidence that the world as we^H^H I perceive it is false?". As far as I can tell, the only evidence would be
(1) if someone from outside came in and presented us with compelling evidence (cf. "The Truman Show"), or
(2) if perception of the outside could somehow "leak" inside.

Given this, I suggest that in the general case the likelihood of being a "brain in a vat" is undefined. Having access to alternative models as to how this might work doesn't help, since if we are actually a "brain in a vat" they are fictional anyway.

Crude said...

Alright. Let's complicate the matter further. Something about Andrew's post made me wonder about the following:

What if what I'm trying to ascertain is whether I became a brain in a vat /at some point/? In that case it's still an open possibility that I once lived in the real world, that I really saw the technology that I did - I just am no longer there now.

Mr. Green said...

Crude: If so, would that mean your position would be that the only way to determine technological possibility would be by a priori philosophical reflection?

Yeah, I think it would... My vat-world could even be presenting me with a simulated physics in which the technology is impossible. If I can't somehow get information about the physical laws of the real world, I can't draw any certain physical conclusions. And metaphysically, it must be possible to experience realistic-seeming illusions, because we do that every time we dream.


Malcolm: What you're essentially saying is that the odds of us being a brain in a vat don't go up because we might be a brain in a vat! [...]
But that was not the question. The question is whether or not this should influence the likelihood WE should assign to our brains being in vats.


I agree that this sounds odd, but are you sure that isn't one of those instincts like, "I just tossed ten heads in a row, so the next flip is really really likely to be tails!"?

I think the odds are "unknown" to begin with, so they can't go up or down. A chain is only as strong as its weakest link — I think this is Lothar's point too — so it doesn't matter how good our calculations are for the part about how much the odds would increase. We have to apply that number to a link that is missing altogether, and '???' x anything still equals '???'. So I think as soon as we recognise the catch, we should say, "Since I'm not justified in trusting any of my evidence, I have to throw out my previous idea of what the odds are and say I don't know either way."

Mr. Green said...

Crude: What if what I'm trying to ascertain is whether I became a brain in a vat /at some point/? In that case it's still an open question that I lived in the real world, that I really saw the technology that I did - I just am no longer there now.

In the general case, I don't think that changes anything. If we assume that (somehow) I knew I started out in the real world and try to figure out if I'm still in it? Then either I saw some evidence when I was still in reality, which I could use to estimate the odds, or I was already in the vat, in which case that evidence is worthless. Hm, since I don't know where the cutoff is, I think the likelihood is still undefined. Even if I remember getting kidnapped by brain-scientists and hooked up to a machine, I can't tell whether that experience was just part of the simulation. (Maybe the vat runs vat-simulations so they can see what kind of tricks people would use to try to escape!) So I'm sticking with Andrew's point that it's hopeless without outside info somehow getting in.

Oh, and of course if I'm not in a vat, then there is no "outside" to get help from. So I can't know if I'm not getting outside help because I'm unlucky or because I'm already in reality. And if I do get some outside information/help, I might be in a simulation within a simulation! So I could still never tell I'd reached real reality.

But definitely a fun question!

malcolmthecynic said...

I agree that this sounds odd, but are you sure that isn't one of those instincts like, "I just tossed ten heads in a row, so the next flip is really really likely to be tails!"?

Funny you say that. I know you're referring to the gambler's fallacy, but I always thought it was kind of funny, because in one sense it's true. Think of the coin-toss scene in "Rosencrantz and Guildenstern Are Dead". Getting heads 200 times in a row, or whatever the ludicrously long number is, is really, really unlikely.

When each coin flip is compared only to the previous flip, yeah, it's fifty-fifty. But a string of 200 flips all coming up heads is extremely unlikely.
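
A quick Python check of that distinction, for what it's worth (the 200 is just the illustrative figure from above):

```python
# The whole string of 200 heads versus the next single flip:
p_string = 0.5 ** 200   # about 6.2e-61: astronomically unlikely
p_next_flip = 0.5       # the 201st flip, given 200 heads already: still fifty-fifty
print(p_string, p_next_flip)
```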

Mr. Green said...

Malcolm: I know you're referring to the gambler's fallacy, but I always thought it was kind of funny, because in one sense it's true. Think "Rosencrantz and Guildenstern are Dead"'s coin toss scene.

Indeed. Well, now we're getting into the question of why statistics works at all. After all, (if you think in terms of possible worlds) there's some world where all or most coin-tosses come up heads, so why shouldn't that be our world? Physics doesn't require it to work out that way. I think the reason has to be that God made the universe like this. So really, statistics is another design argument. But that gets back into the metaphysics... so if you have an argument why God, say, wouldn't allow us to be deceived in vats, you could argue for it that way!