NationStates Jolt Archive


Anti-Determinism Paper - Page 2

Nobel Hobos
08-12-2007, 12:58
There is only one way to settle this.

I'm getting out my Tarot deck.

It's the Crowley deck, full-size, with one card missing.

I KNOW the fall of the cards is not random. I handle the cards myself, I watch each card take its place beside each other as I shuffle them.

Which card goes where may be under the control of my subconscious. But the order of the cards obeys causality either way, no card goes where I did not put it with my own hand.

The Tarot usually serves me well ... let's see how it goes now.

Mock me if you will. I'll post the result in a few minutes.

EDIT: No interpretation offered:

Center: Princess (Page in some decks) of Cups.
Behind: Major Arcana VI: The Lovers
Before: XVIII: The Moon.
Above: Knight (King) of Swords.
Below: 10 of Swords.
Crossing center: 3 of Disks!

Now: Knight (King) of Wands.
Home: 4 of Cups.
Hopes and fears: Prince (Knight in other decks) of Disks.
Resolution: 7 of Disks!

OK, minimal commentary. There is no statistically meaningful occurrence of any suit or number, nor a preponderance of "person" cards. Compared to most of my readings, it's very light on Major Arcana. The 3 and the 7 seem significant to me, but it is a message which is very muted.

I won't do this again.
Soheran
08-12-2007, 13:25
why would anyone want free will to be independent of the natural processes that directed a choice?

Because we don't want those natural processes to determine the choice.

It is no different here whether the universe is deterministic or (partially) random. Whether my choice is determined by a cosmic die roll or by causation, it is still determined by something other than my own volition.

the option was open to me, had i felt differently about it.

What if I didn't? If I feel very strongly about something, must I always do the same thing in response to it? (Assuming the circumstances are the same?)

What if I choose to act against what I feel? The fact that I feel the same both times should have no effect on this choice, if I am free.

the decision was still made by me, and if i had decided otherwise, i could have done something else

But could I have decided otherwise?

Only if I had felt differently? But why should I accept "what I feel" as the determining principle of my will?

Indeed, in the subjective process of actually making a choice, I recognize "what I feel" as not determining my choice: I am not asking myself about what I feel, but rather about what I should do.
Nobel Hobos
08-12-2007, 13:50
Because we don't want those natural processes to determine the choice.

It is no different here whether the universe is deterministic or (partially) random. Whether my choice is determined by a cosmic die roll or by causation, it is still determined by something other than my own volition.

Correct.

There is some confusion in this thread, because there are three poles: free will, determinism, and the will of God.

Random crap is enough to defeat Determinism, but not to prove Free Will.

God is just doing what God usually does, being a bully and claiming credit for shit without proving it or even speaking for Himself.

Luckily, the representatives of God are now out of the debate, so Determinists and Free-Willies are no longer obliged to make common cause against God's "I can see you but you can't see me, and also, I have +4 Smite" debating tactics.
Free Soviets
08-12-2007, 18:22
Because we don't want those natural processes to determine the choice.

It is no different here whether the universe is deterministic or (partially) random. Whether my choice is determined by a cosmic die roll or by causation, it is still determined by something other than my own volition.

volition is itself a natural process is all i was saying. i do the deciding, but i don't act in some supernatural way, completely independent of all natural processes.

What if I didn't? If I feel very strongly about something, must I always do the same thing in response to it? (Assuming the circumstances are the same?)

it depends on how exact we are talking about the same circumstances being. if the sameness extends into our decision making processes, then it seems to me it would be rather odd to think that we would in fact choose otherwise. but if we cut the sameness of the circumstances off before that point, and allow our decision making processes to actually run through again, i see no problem with someone coming to a different conclusion on a repeat run-through.

What if I choose to act against what I feel? The fact that I feel the same both times should have no effect on this choice, if I am free.

sorry, was using 'feel' as shorthand for some broader concept involving the various facets of my decision making, rather than just the emotional one. i guess it's the total of all things within us that occupy the step before willing.

But could I have decided otherwise?

you could, if you decided differently. and since it was and is within your power to do so, you could have. basically, you have what we might call general powers to act in all sorts of ways (though not in others - this is what you were getting at when you tried to fly, i think) whether you exercise them or not. your decision is your own, your reasons for making it are your own, and your decision making is done by you. so far, so compatibilist, and technically i think that is enough.

but on top of that, given the apparent failure of determinism to actually hold in the universe, i give significant weight to our subjective experience of free will. so i hold that your decision making is not deterministically set by anything outside of you (at least not always - there are probably some examples where it is), and that we can have no guarantee that exactly the same internal factors would win out in making a choice if you ran through the process again.
HotRodia
08-12-2007, 18:30
Luckily, the representatives of God are now out of the debate, so Determinists and Free-Willies are no longer obliged to make common cause against God's "I can see you but you can't see me, and also, I have +4 Smite" debating tactics.

Who in this thread is a representative of God? :confused:
Reasonstanople
08-12-2007, 18:43
The great discovery of the thread was Reasonstanople. Glib like TPC, clearly quite a mature person, not a sore loser, and quite loose in a way which suggests a capacity to be wildly wrong on occasion. Plain in meaning, honest as to limitations. Excellent name, too. Should be great fun!

(OK, Reason'ple, sorry for any disrespect. But your postcount says 'noob')



*blushes* Honored, your hoboness
Reasonstanople
08-12-2007, 18:55
Free Soviets, I think I see where this debate is going, so I'm gonna skip ahead.

You're assuming the subjective exists. That there is something fundamentally independent about your experience that is not bound by objective methods.

If the subjective exists, and is in at least partial control of the objective body, then you have your free will.

I contend that the subjective does not exist. There is no 'qualia.' Everything going on in your brain is due to physical, natural processes, including your interpretations, emotions, and opinions. There's nothing about your brain that would exempt it from physical natural processes, or make the various 'pieces' of your brain react in an independent way.

Before I go further, clear up some specifics for me. You aren't a compatibilist, since they present free will as something that goes along with a deterministic universe. Are you then a dualist?
Nobel Hobos
08-12-2007, 19:12
Who in this thread is a representative of God? :confused:

Well, I meant the OP.

Yet, my God, the question remaineth whether everything happeneth solely by chance. Indeed, the question need not be treated at length before it be dismissed, for it is obvious there is deliberation in Thee, and Thy great acts have a causal relation to Thy deliberation: Thou speakest, and it is; Thou decidest, and it happens. All which is, Thou hast ordained from all eternity...

... though to be honest I'm not sure.

The "debating tactics" line was just me being cheeky. I couldn't resist comparing the Word of God unfavourably with a roleplayer who picks up a -2 cursed trout, hits people with it, and still wins (well, +4 smite!)
HotRodia
08-12-2007, 19:19
Well, I meant the OP.

... though to be honest I'm not sure.

I'm not either. I wasn't aware that making crappy arguments against free will and for God's existence made one a representative of God.
Free Soviets
08-12-2007, 19:22
You're assuming the subjective exists. That there is something fundamentally independent about your experience that is not bound by objective methods.

If the subjective exists, and is in at least partial control of the objective body, then you have your free will.

nah, i was using subjective merely to mean that i have the sensation of being in charge of my decision making, and this sensation is powerful and immediate to me. i would need particularly strong counter arguments to even begin to dissuade me. much like i would need some particularly compelling arguments to get me to believe that there isn't an external world or that we don't have knowledge.

my positive argument for the existence of free will has much in common with moore's argument against the skeptic.

here is a decision for me to make.
i make a decision.
therefore, free will exists.

I contend that the subjective does not exist. There is no 'qualia.' Everything going on in your brain is due to physical, natural processes, including your interpretations, emotions, and opinions. There's nothing about your brain that would exempt it from physical natural processes, or make the various 'pieces' of your brain react in an independent way.

do you contend that you aren't in (partial) control of your body? that you don't make decisions? we don't need to be exempt, we just need to be. whatever else it is, the willing is itself a natural process.

Before I go further, clear up some specifics for me. You aren't a compatibilist, since they present free will as something that goes along with a deterministic universe. Are you then a dualist?

i'd be a standard compatibilist if determinism were true. it just isn't true, so that doesn't really matter. but free will is still compatible with a world filled with both deterministic and probabilistic causes - it is another sort of cause that exists and arises as the product of a certain amount of complexity allowing recursive thought and decision making internal to certain sorts of entities. call me a modified compatibilist.
Free Soviets
08-12-2007, 19:23
I'm not either. I wasn't aware that making crappy arguments against free will and for God's existence made one a representative of God.

it depends - was he wearing a funny hat at the time?
Liminus
08-12-2007, 19:28
volition is itself a natural process is all i was saying. i do the deciding, but i don't act in some supernatural way, completely independent of all natural processes.

You're assuming the subjective exists. That there is something fundamentally independent about your experience that is not bound by objective methods.

And that's what I was getting at with my rock comment. If it is a natural process, it seems that free will is reduced to some cost-benefit analysis and computation. In that case, this idea of "choice", taken at its intuitive meaning, is faulty. Things do what they will within the restrictions and functionality of their design, so more or less ascribing a quale to this free will thing, this thing also called choice (when understood from the intuitive level), is a completely meaningless enterprise. A rock does, actually, choose to rest on the ground because that is the choice that its incapacity to beget movement within itself declares for it.

But, I can't see how you can call volition a natural process but then give it values and determinants which, to me, appear outside of nature. If you are not acting in a supernatural way, you are entirely dependent upon those natural processes for your decision/action. To be making a choice that only takes into account but is not dependent upon natural processes implies it is something more than a natural process, a supernatural process, if you will. It seems to me that the only way then out of this predicament is to either (A) increase the scope of nature (with all the troubles and counters that that may entail) or (B) as Reasonstanople mentioned, declare some sort of dualism (in which case the debate takes a very different turn).
HotRodia
08-12-2007, 19:30
it depends - was he wearing a funny hat at the time?

Heh. Somehow I doubt Gens Romae would ever get a Bishop's mitre or a Pope's tiara.

And even if he did, the making of crappy arguments doesn't qualify a person to get either of those funny hats. It might not hurt, but it's not necessary, let alone sufficient.
Reasonstanople
08-12-2007, 19:43
here is a decision for me to make.
i make a decision.
therefore, free will exists.


What a shitty argument.

AI can also make decisions. As far as the individual AI is concerned these are new findings, but as for anyone reading the code, there was no other choice for the program. Our brains just have very elaborate (and often redundant) programming.


do you contend that you aren't in (partial) control of your body? that you don't make decisions?


Yes, see computer/neurological analogy above.
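The AI analogy above can be sketched concretely: a program can "weigh" its options and arrive at an outcome, yet anyone reading the code sees that, given the same inputs and the same seed, no other result was possible. This is a minimal illustration only; the function and option names are made up for the example.

```python
import random

def decide(options, seed):
    """'Choose' among options by a seeded weighing process.

    From inside the program this looks like a fresh decision;
    from outside, the seed fixes the outcome completely.
    """
    rng = random.Random(seed)  # deterministic source of 'whim'
    scores = {opt: rng.random() for opt in options}
    return max(scores, key=scores.get)

# The same inputs always yield the same 'choice'.
first = decide(["snickers", "kit kat"], seed=42)
second = decide(["snickers", "kit kat"], seed=42)
print(first == second)  # always True: the program could not have done otherwise
```

Whether this counts as a "decision" in the sense being debated is, of course, exactly the point of contention between the two posters.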
Liminus
08-12-2007, 19:53
do you contend that you aren't in (partial) control of your body? that you don't make decisions? we don't need to be exempt, we just need to be. whatever else it is, the willing is itself a natural process.

This can be countered by saying that insomuch as the self is the unique conglomeration of the various components of your body, you are in "control" of your own body. The certain components of the body are subjugated to the whims of certain others for natural reasons, but certain others are not. Can you directly change your heartbeat? Or how fast your skin cells replicate? What makes you say that you are not your body? Your body, being you, is in control of itself. And by control, I speak of the control a Rube Goldberg contraption has over itself. It does a thing, and this thing is a thing it wants to do because it strives to do it, in however convoluted and enticing a manner it may be designed to do it in. Other machines may do it in a more simple or more complicated manner, but its "will" is derived from the uniqueness and, thereby, susceptibility to errors, allowing for unintended improvements/detriments, of the contraption.
Nobel Hobos
08-12-2007, 19:54
*blushes* Honored, your hoboness

Unbend that knee, you heel! ;)

The "fun" we shall have may not be all to your liking. But to be plainly understood is a virtue, and your posts in this thread have pleased me in that.

... unlike those damn birds, with their "hey the sun is coming up, oh what joy, another day, cheep cheep whoop! almost fell off the branch I'm so happy CHEEP!"

I think I must have been a bird in a recent incarnation. Birds make me insanely jealous, plus I just had my night's sleep propped upright, and also I like whole grain bread.
Reasonstanople
08-12-2007, 20:09
Someone on another forum had a rather beautiful way to word the view that free will = loonyness


originally posted by Zell
Personally I think that free will is even more impossible than, say, God or Santa Claus. You can actually imagine them being real even if their existence breaks a whole lot of the laws of physics. Free will, however, is for me impossible to imagine. It would require that there exists something that is not influenced by the laws of physics at all, something that in every sense of the word doesn't exist, since it is not affected by the clockwork/randomness of the physical world.

Then this non-existing thing would need to have some kind of link to our physical brain, like... I don't know, a tiny wormhole or something. Then information would need to go through this little wormhole and emerge in Free Will Land, where some unknown force would make a decision only partially influenced by stimuli and then send information back to the brain. And that explanation still forces us to accept that some totally alien kind of cause-and-effect system is at work in Free Will Land. Personally, I choose Occam's razor every day.
Free Soviets
08-12-2007, 20:11
What a shitty argument.

AI can also make decisions. As far as the individual AI is concerned these are new findings, but as for anyone reading the code, there was no other choice for the program. Our brains just have very elaborate (and often redundant) programming.

you are trying to have it both ways - the computer either can make a choice or it can't. if there is no choice involved, if there was nothing else the computer could have done, then no decision (in the sense i am using the term) was made. nothing was decided; an action was merely taken without any choice involved.

it looks as though you want to claim that our sense of being able to choose is a fiction. to which i say, based on what evidence? how do you go through life not making choices? it isn't even possible. even the most ardent determinist must live her own life as a being that can freely choose between alternatives. we behave differently than things which do not have the option of making choices (like rocks), and fundamentally cannot behave as they do.
Free Soviets
08-12-2007, 20:28
And that's what I was getting at with my rock comment. If it is a natural process, it seems that free will is reduced to some cost-benefit analysis and computation. In that case, this idea of "choice", taken at its intuitive meaning, is faulty.

well, maybe not just cost-benefit. there seem to be all sorts of considerations that enter into our decision making processes at various times and in various instances. how does this conflict with our intuitive meaning of 'choice'? we make our choices for reasons (most of the time) and that isn't taken to conflict with anything - it would be really strange if it did, actually.

A rock does, actually, choose to rest on the ground because that is the choice that its incapacity to beget movement within itself declares for it.

rocks do not have the powers needed for them to have an ability to make choices. i'm thinking the issue is that your understanding of choosing is weird.

But, I can't see how you can call volition a natural process but then give it values and determinants which, to me, appear outside of nature. If you are not acting in a supernatural way, you are entirely dependent upon those natural processes for your decision/action. To be making a choice that only takes into account but is not dependent upon natural processes implies it is something more than a natural process, a supernatural process, if you will. It seems to me that the only way then out of this predicament is to either (A) increase the scope of nature (with all the troubles and counters that that may entail) or (B) as Reasonstanople mentioned, declare some sort of dualism (in which case the debate takes a very different turn).

the scope of nature contains a capacity called free will and decision making. we see it in the world, after all. the onus is on those who wish to deny it to provide some really solid justification for excluding the data. the previous justification is that there is no room for it in a deterministic universe. but since the deterministic universe with no room for anything else doesn't exist, you need something else. and it needs to be really good - better than the arguments offered by the skeptics against the existence of knowledge, for example.
Reasonstanople
08-12-2007, 20:34
you are trying to have it both ways - the computer either can make a choice or it can't. if there is no choice involved, if there was nothing else the computer could have done, then no decision (in the sense i am using the term) was made. nothing was decided; an action was merely taken without any choice involved.

it looks as though you want to claim that our sense of being able to choose is a fiction. to which i say, based on what evidence? how do you go through life not making choices? it isn't even possible. even the most ardent determinist must live her own life as a being that can freely choose between alternatives. we behave differently than things which do not have the option of making choices (like rocks), and fundamentally cannot behave as they do.

There's your problem: your definition of choice. Obviously actions are taken, and reasoning of some kind or other has gone into taking those actions, but that's as far as I define choice. We can say that the computer AI accomplishes this much.

But nothing more happens with choice. The same way the action is determined by reasoning, the reasoning is determined by 'programming,' via evolution, which saw reasoning individuals living longer and reproducing more than non-reasoning individuals. The programming, in turn, is determined by the physical processes that led to evolution. So your programming is under the impression that it's got something totally new for the universe by 'choosing' a snickers over a kit kat. It's more than a little silly to think this is actually the case.

I'm pretty sure I've spelled out how the real world doesn't lead to autonomous agents a few times now, and my quote of Zell shows why free will looks completely absurd. You've yet to explain how, given that we're the products of nature, free will is possible.
Free Soviets
08-12-2007, 20:40
"Free will, however, is for me impossible to imagine. It would require that their exists something that is not influenced by the laws of physics at all"

who has ever held this position?
Liminus
08-12-2007, 20:48
well, maybe not just cost-benefit. there seem to be all sorts of considerations that enter into our decision making processes at various times and in various instances. how does this conflict with our intuitive meaning of 'choice'? we make our choices for reasons (most of the time) and that isn't taken to conflict with anything - it would be really strange if it did, actually.

Ok, but a computer can also do a cost-benefit analysis. In fact, there are many programs that do just that type of thing. Even if it's a bit more complicated than a cost-benefit analysis, it's still an analysis/computation and thus falls into the realm of not being a choice, but rather a very complicated, and useful, action-reaction process. If you wish to call that a choice, then that is fine. My feeling is that that isn't what most people intuitively conceive of as a choice and, honestly, I really have no intuitive leanings about it one way or the other, that I'm aware of (other than an input-analysis-output Turing-style thing), so I fully agree that it's sometimes hard for me to come to proper terms in debates such as these.

rocks do not have the powers needed for them to have an ability to make choices. i'm thinking the issue is that your understanding of choosing is weird.

But, see, that's where the analysis comes into play. If all a choice is is some kind of complex computation, then a rock has the same capacities a human does in regards to making choices by simply being a physical system. This isn't to say that the capacity of the rock isn't very limited, but it is still a capacity. The thermostat (forget which paper used it) example comes to mind. The thermostat is a very simple system but it chooses and strives to make the room a specific temperature. It is bound by design to make only a very simple and limited choice, but a choice and a will it still wields.

the scope of nature contains a capacity called free will and decision making. we see it in the world, after all. the onus is on those who wish to deny it to provide some really solid justification for excluding the data. the previous justification is that there is no room for it in a deterministic universe. but since the deterministic universe with no room for anything else doesn't exist, you need something else. and it needs to be really good - better than the arguments offered by the skeptics against the existence of knowledge, for example.

I don't see why the onus is on skeptics of free will to provide evidence against it. Hell, it is umm...kind of difficult to prove a negative, anyway. However, it seems to me that nothing would change with the removal of free will and, in fact, it may simplify matters quite a bit as well as fitting into our current conception of the world (minus the free will/qualia part). With that in mind, it seems to me that proponents of free will and qualia bear the burden of proving their existence (the qualia, not the proponents).

Sidenote: does it strike anyone else as odd that "bear" the verb and "bear" the animal are the same? For some reason it still seems off to me, even after checking to make sure. o.O
Soheran
09-12-2007, 01:34
you could, if you decided differently. and since it was and is within your power to do so, you could have. basically, you have what we might call general powers to act in all sorts of ways (though not in others - this is what you were getting at when you tried to fly, i think) whether you exercise them or not. your decision is your own, your reasons for making it are your own, and your decision making is done by you. so far, so compatibilist, and technically i think that is enough.

I do not. I have been convinced otherwise.

The key here lies in the recognition that everything we can say about ourselves, everything that we can say about what we want, does not, as a rational matter, determine our decision when we consider it subjectively. In answering the question "What should I do?", "I want to do x" or "I have a tendency to do x" does not give me an answer.

This is the break between "wanting" and "willing" that compatibilism, I think, necessarily misses--I may want (and any relevant term concerning my nature can replace this one without affecting the argument), but I do not will until I think "I should", and I need not take "I should" from anything about myself. ("Should" here need not have an explicitly moral sense. In making a choice on the level of rational will, I must affirm its rightness, because my rationality compels me by the definition of "right" to reject the notion that I "should" do anything wrong, but that is a necessary, not a sufficient, element.)

As it appears to us in the subjective process of decision-making, then, we can will against ourselves: we can defy our own natures. (We may, in fact, not--especially if we have no reason to do so. But we can.)

Determinism asserts the opposite: it explains everything we will as a consequence of our natures. We may think we are defying our natures, but really we are merely following a different, hidden inclination--try as we will, we are bound.

Thus it necessarily negates free will.

and that we can have no guarantee that exactly the same internal factors would win out in making a choice if you ran through the process again.

If this is purely a matter of random natural phenomena, of quantum indeterminacy or something along those lines, the fact that a different internal factor might determine my choice if the process is repeated does not save free will--especially not if which internal factor determines my will is a matter of random chance over which I have no control.
BunnySaurus Bugsii
09-12-2007, 03:00
AI can also make decisions. As far as the individual AI is concerned these are new findings, but as for anyone reading the code, there was no other choice for the program. Our brains just have very elaborate (and often redundant) programming.


The bit I bolded seems a significant weakness in this analogy.

Firstly, an AI's behaviour is not determined only by its code. When you first boot it up, and it has collected no data, then yes, its first action will be entirely defined by its code.

Thereafter, however, to predict precisely the behaviour of the AI, one must also know its environment, the data it has gathered since the code first started to run.

Now consider the human, which you slightingly describe as "very elaborate programming." To predict precisely a human's behaviour one must know the initial state (some state at some time, anyway), and the human's environment since the time that state was known. To further prise apart the analogy, the human brain rewires itself (changes its "programming") in response to the environment.

The computer is deliberately built to be deterministic. Writing code would be a nightmare, perhaps completely futile, if the CPU just took a dislike to NAND and did NOR instead, just because it felt like it. We CAN define a potential below .1 volt to be a zero, and a potential above .4 volt to be a one, and predict the behaviour (actually, the formal output, but never mind) of the AI using a logical model ... for instance, by running the same AI with the same input on a different computer.

Not so a human brain. It's an essentially analogue device, meaning that the only way to precisely predict its behaviour is to know the position and momentum of every subatomic particle which makes it up, then to build a model which behaves exactly the same way.

The exact position and momentum of a single particle is an infinite quantity of information: a set of six real numbers (three for position, three for momentum), each requiring infinitely many digits to specify exactly. The only true record of a particle's precise position and momentum is the particle itself.

So you're looking at a high order of infinity already, without even considering Uncertainty or the possibility of quantum indeterminacy.

Then, you need to calculate the behaviour of all those particles. To do that with ideal accuracy, you need to use quantum mechanics, but never mind because it's already impossible. We ran out of universe trying to write down the position of the first particle we tried to define.

To reprise: an AI can be modelled, and therefore predicted, by nipping down to Dell and buying an identical computer. The human brain can only be modelled and predicted to the same level of accuracy by collecting an infinite amount of information and then calculating from that.

The analogy is worthless.
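The "buy an identical computer" point above can be illustrated: two independently constructed instances of the same deterministic process, fed the same inputs, agree bit for bit, which is what makes a digital AI, unlike an analogue brain, predictable by duplication. This is a minimal sketch, with `sha256` standing in for any deterministic state-update rule; the names are invented for the example.

```python
import hashlib

def ai_step(state: bytes, observation: bytes) -> bytes:
    """One deterministic 'decision' step: the new state is a pure
    function of old state plus input, as on any digital computer."""
    return hashlib.sha256(state + observation).digest()

def run(observations):
    state = b"boot"  # identical initial code/state on both machines
    for obs in observations:
        state = ai_step(state, obs)
    return state

# Two 'machines' built from the same spec, given the same environment:
machine_a = run([b"input-1", b"input-2"])
machine_b = run([b"input-1", b"input-2"])
print(machine_a == machine_b)  # True: the second machine predicts the first exactly
```

Change any single input byte and the final states diverge, which is the sense in which predicting the AI requires knowing its environment as well as its code.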
Vetalia
09-12-2007, 03:35
To reprise: an AI can be modelled, and therefore predicted, by nipping down to Dell and buying an identical computer. The human brain can only be modelled and predicted to the same level of accuracy by collecting an infinite amount of information and then calculating from that.

That's only true, of course, for weak AI. A strong AI would not be directly reducible to its basic components, at least not reducible in the sense that you could model and predict its behavior to any degree of accuracy higher than that of a biological human. Consciousness is presumably an emergent process, which strongly implies that it cannot be reduced to a sum of its parts; what role consciousness actually plays in determining the reality around it is very, very important.
BunnySaurus Bugsii
09-12-2007, 03:38
I read a fascinating piece earlier in the year about some new results from MRI imaging of the working brain. I'm trying to dig it up, but if anyone recognizes this, could they help me out?

Subjects were asked to make some simple decisions (on a screen, puzzle-solving things like an IQ test). The MRI showed a part of the brain having a spurt of activity which the researchers believed to be the actual making of the decision. What was interesting is that this occurred significantly before the subject reported a subjective decision ... essentially, the part of the brain which made the decision was not conscious, but rather 'delivered' the decision to consciousness.

IIRC, there was also a spurt of motor activity before the decision became conscious. The part of the brain which "really" made the decision was telling the hand to click a button on the screen, before the subject even knew they had made a decision!

I'll edit this post if I find a good source for that research.
Vetalia
09-12-2007, 03:41
I don't see why the onus is on skeptics of free will to provide evidence for against it. Hell, it is umm...kind of difficult to prove a negative, anyway. However, it seems to me that nothing would change with the removal of free will and, in fact, it may simplify matters quite a bit as well as fitting into our current conception of the world (minus the free will/qualia part). With that in mind, it seems to me that proponents of free will and qualia bear the burden of proving their existence (the qualia, not the proponents).

Everything would change. Without free will, there is no moral responsibility whatsoever. You are never guilty of a crime or wrongdoing because you had no choice in the matter; you were predetermined to be a criminal and there was no possible way to get around it no matter what you did.

If anything, determinism has the burden of proof because it has to account for the process by which things determine themselves, which requires not only that the entire span of existence be predetermined prior to it actually happening but that predetermination itself has to either be a free action that spontaneously occurred for some reason or is itself predetermined, creating an infinite regression that is either nonsensical or must be reduced to some kind of free action.

Determinism can't exist without either a component of free will or an infinite regression.
Soheran
09-12-2007, 03:42
That's only true, of course, for weak AI. A strong AI would not be directly reducible to its basic components, at least not reducible in the sense that you could model and predict their behavior to any degree of accuracy higher than that of a biological human.

And just like consciousness, the capacity for free will on the part of a robot would probably not be capable of empirical support.

(Unless, perhaps, without being so programmed, the robot reported the subjective experience....)
BunnySaurus Bugsii
09-12-2007, 03:42
That's only true, of course, for weak AI. A strong AI would not be directly reducible to its basic components, at least not reducible in the sense that you could model and predict their behavior to any degree of accuracy higher than that of a biological human.

Doesn't that rule out the AI running on anything like our current deterministic computers? Predicting the behaviour of an AI running on anything binary and deterministic should be a trivial matter of cloning it onto faster but identical hardware!

Consciousness is presumably an emergent process, which strongly implies that it cannot be reduced to a sum of its parts; what role consciousness actually plays in determining the reality around it is very, very important.

"Emergent process" ... FTW.
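The "trivial cloning" point above can be put concretely with a toy sketch (a hypothetical stand-in agent, not any real AI architecture): a deterministic program is a pure function of its code and its input history, so a bit-identical copy fed the same inputs stays in lockstep forever.

```python
# Toy sketch (hypothetical agent, not a real AI): a deterministic program
# is a pure function of its code and its input history.
import hashlib

def agent_step(state, observation):
    # Any fixed update rule works; SHA-256 stands in for "elaborate programming".
    return hashlib.sha256(state + observation).digest()

def run(inputs):
    state = b"boot"
    for obs in inputs:
        state = agent_step(state, obs)
    return state

history = [b"input-1", b"input-2", b"input-3"]

original = run(history)   # the AI on the first machine
clone = run(history)      # the same code on identical (or faster) hardware

# Same code + same input history -> identical internal state, every time.
assert original == clone
```

So predicting the clone's "behaviour" reduces to replaying the original's inputs, exactly as claimed.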
Vetalia
09-12-2007, 03:43
And just like consciousness, the capacity for free will on the part of a robot would probably not be capable of empirical support.

(Unless, perhaps, without being so programmed, the robot reported the subjective experience....)

It would not be capable of empirical support. In fact, determining consciousness itself would not really be capable of empirical verification, let alone the subjective experiences that stem from it.
Soheran
09-12-2007, 03:51
If anything, determinism has the burden of proof because it has to account for the process by which things determine themselves,

Laws of nature. Science already gives a vast amount of information about this.

which requires not only that the entire span of existence be predetermined prior to it actually happening but that predetermination itself has to either be a free action that spontaneously occurred for some reason or is itself predetermined, creating an infinite regression that is either nonsensical or must be reduced to some kind of free action.

Not "free." Uncaused.

The universe could have begun randomly, and a determinist could reasonably maintain that the element of the universe that necessitates everything having a cause only came into effect after the universe came into existence.
Vetalia
09-12-2007, 03:51
Doesn't that rule out the AI running on anything like our current deterministic computers? Predicting the behaviour of an AI running on anything binary and deterministic should be a trivial matter of cloning it onto faster but identical hardware!

Well, it depends. The human brain operates on hardware that is similarly limited; our bodies have a finite amount of neurons and neurochemicals, which in turn are based on a finite genetic code, and yet we are capable of acting as independent agents with consciousness, subjective experiences, and (presumably) free will.

I think more than anything it hinges on parallelism, which is the primary processing difference between the human brain and a computer; running in parallel produces far, far more potential interactions and combinations that might eventually produce the critical mass necessary for consciousness to emerge and sustain itself (the emergence of the soul, if you're more mystically inclined).

So, the more parallelism the computer uses in executing its code, not only the greater capabilities of the system but also the increased possibility of consciousness emerging. I would go so far as to say the substrate itself is irrelevant, at least in regard to its ability to support a conscious mind. This mind may be at least somewhat alien to one based upon biological structures and chemicals, but it would be a conscious mind nonetheless.

"Emergent process" ... FTW.

That's what I feel is the most plausible explanation.
BunnySaurus Bugsii
09-12-2007, 03:53
And just like consciousness, the capacity for free will on the part of a robot would probably not be capable of empirical support.

It would not be capable of empirical support. In fact, determining consciousness itself would not really be capable of empirical verification, let alone the subjective experiences that stem from it.

Not at the level of certainty we're looking for here, no.

The robot might, however, start writing heavy AI poetry ... massively impressing human minds and leaving no real doubt as to its consciousness. As an added bonus, thousands of human poets would commit suicide. :p
Vetalia
09-12-2007, 03:54
Laws of nature. Science already gives a vast amount of information about this.

Those laws are not set in stone, however; it is highly, highly likely that they will hold true, but they can change if given sufficient time and repetition.

Not "free." Uncaused.

That's more along the lines of what I meant...non-deterministic does not, of course, equal free will.

The universe could have begun randomly, and a determinist could reasonably maintain that the element of the universe that necessitates everything having a cause only came into effect after the universe came into existence.

Of course, this gives us an uncaused cause, which is rather similar to the concept of God. I think it would be difficult, even impossible, to hold a physicalist position in regard to this "first mover" without overstepping the bounds from science to mysticism.
Soheran
09-12-2007, 03:59
Those laws are not set in stone, however; it is highly, highly likely that they will hold true, but they can change if given sufficient time and repetition.

Or maybe there's just a hidden variable we can't see that causes the difference.

Of course, this gives us an uncaused cause, which is rather similar to the concept of God.

Not particularly. Not omnipotent, omniscient, omnipresent, benevolent, personal, eternal, etc.

I think it would be difficult, even impossible, to hold a physicalist position in regard to this "first mover" without overstepping the bounds from science to mysticism.

Well, you can't prove it necessarily.

But certainly the need for an ultimate cause does not necessitate a supernatural origin to the universe.
BunnySaurus Bugsii
09-12-2007, 04:45
Well, it depends. The human brain operates on hardware that is similarly limited; our bodies have a finite amount of neurons and neurochemicals, which in turn are based on a finite genetic code, and yet we are capable of acting as independent agents with consciousness, subjective experiences, and (presumably) free will.

... all things I value, and would rather find an explanation of than explain away. There, I admit to a bias in my enquiries ... I'm neither a philosopher nor a physicist.

I still have my head at the particle level though. It seems compelling to me to reduce a complex system by dividing it into smaller and smaller physical quantities until reaching a stage of simplicity at which we can actually test precise repeatability. With physical things, not thought-cats or mathematics.

So far, we are just pretending that we can predict the future. In practical terms, sure we can. But the prediction of the future in the strictest sense has not been demonstrated; I don't believe there is any way to test it in finite space and finite energy.

Quantum mechanics is a very serious warning to us, I would say. "This really isn't going to be as easy as you think."

I think more than anything it hinges on parallelism, which is the primary processing difference between the human brain and a computer; running in parallel produces far, far more potential interactions and combinations that might eventually produce the critical mass necessary for consciousness to emerge and sustain itself

Yepping madly here. Subtle changes in the timing of one loop create cascading effects, or the effects are dampened by some harmony between other loops. Describing the network of neurons as a binary network is only an approximation, and the lack of a central clock is one important difference from a CPU.
That's the "analogue" quality I was referring to. Predicting a chaotic system is theoretically possible, but it's obviously a lot harder than predicting a logical system (e.g., one built from transistors).

Chaos theory ahoy!
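The sensitivity that makes chaotic systems so hard to predict can be shown with a minimal sketch using the logistic map (a standard textbook chaotic system, not a model of neurons): nudge the starting value by one part in ten billion and the two trajectories soon bear no resemblance to each other.

```python
# The logistic map x -> r*x*(1-x) at r=4.0: fully deterministic, yet
# exquisitely sensitive to initial conditions (a textbook chaotic system).
def trajectory(x0, r=4.0, steps=50):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = trajectory(0.4)
b = trajectory(0.4 + 1e-10)  # perturb the start by one part in ten billion

early_gap = abs(a[5] - b[5])                                # still microscopic
late_gap = max(abs(x - y) for x, y in zip(a[40:], b[40:]))  # grown to order one

# The rules are perfectly causal; prediction still requires knowing the
# initial state to absurd precision.
print(early_gap, late_gap)
```

Which is exactly the point: determinism in principle, unpredictability in practice.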

(the emergence of the soul, if you're more mystically inclined).
Eeew! Keep that nasty thing away from me!

So, the more parallelism the computer uses in executing its code, not only the greater capabilities of the system but also the increased possibility of consciousness emerging. I would go so far as to say the substrate itself is irrelevant, at least in regard to its ability to support a conscious mind. This mind may be at least somewhat alien to one based upon biological structures and chemicals, but it would be a conscious mind nonetheless.

OK. Would you agree that ONE element of its consciousness would be an innate knowledge of non-individuality?

Because I maintain that (unless the AI is somehow inherently protected from being copied, say by using entanglement to make read-once data streams) an AI running on hardware-as-we-know-it would be capable of being copied precisely. This would be a fact of the AI's existence, and it could not help being aware of it!

That, right there, is a significant difference where we must acknowledge that such an AI does not have a human consciousness.


That's what I feel is the most plausible explanation.

Mmm, the sweet incense of a feelgood meta-theory! I am uplifted!
Reasonstanople
09-12-2007, 05:02
The bit I bolded seems a significant weakness in this analogy.

Firstly, an AI's behaviour is not determined only by its code. When you first boot it up, and it has collected no data, then yes, its first action will be entirely defined by its code.

Thereafter, however, to predict precisely the behaviour of the AI, one must also know its environment, the data it has gathered since the code first started to run.

Now consider the human, which you slightingly describe as "very elaborate programming." To predict precisely a human's behaviour one must know the initial state (some state at some time, anyway), and the human's environment since the time that state was known. To further prise apart the analogy, the human brain rewires itself (changes its "programming") in response to the environment.

The computer is deliberately built to be deterministic. Writing code would be a nightmare, perhaps completely futile, if the CPU just took a dislike to NAND and did NOR instead, just because it felt like it. We CAN define a potential below .1 volt to be a zero, and a potential above .4 volt to be a one, and predict the behaviour (actually, the formal output, but never mind) of the AI using a logical model ... for instance, by running the same AI with the same input on a different computer.

Not so a human brain. It's an essentially analogue device, meaning that the only way to precisely predict its behaviour is to know the position and momentum of every subatomic particle which makes it up, then to build a model which behaves exactly the same way.

The exact position and momentum of a single particle is an infinite quantity of information, a set of (I think) four real numbers. The only true record of a particle's precise position and momentum is the particle itself.

So you're looking at a high order of infinity already, without even considering Uncertainty or the possibility of quantum indeterminacy.

Then, you need to calculate the behavior of all those particles. To do that with ideal accuracy, you need to use quantum mechanics, but never mind because it's already impossible. We ran out of universe trying to write down the position of the first particle we tried to define.

To reprise: an AI can be modelled, and therefore predicted, by nipping down to Dell and buying an identical computer. The human brain can only be modelled and predicted to the same level of accuracy, by collecting an infinite amount of information and then calculating from that.

The analogy is worthless.

I was under the impression that our brain worked on a chemical level, not a physical one. Thoughts are formed based on what neurons are doing, not what subatomic particles are doing. This makes the brain very much like a computer, as neurons turn on and off, and do so in patterns to activate different processes.

One of the first deterministic articles I came across was the Libet experiment, and similar experiments (http://en.wikipedia.org/wiki/Free_will#Neuroscience) where, among other things, neurons were observed, and sometimes artificially stimulated, with corresponding action on the participants' part.

If the brain isn't like a computer, I know a lot of people who will be pissed off, as most of the ideas about transhumanism and improving mental capabilities involve 'uploading,' or transcribing brain patterns into advanced computers.
Liminus
09-12-2007, 05:11
Everything would change. Without free will, there is no moral responsibility whatsoever. You are never guilty of a crime or wrongdoing because you had no choice in the matter; you were predetermined to be a criminal and there was no possible way to get around it no matter what you did.

If anything, determinism has the burden of proof because it has to account for the process by which things determine themselves, which requires not only that the entire span of existence be predetermined prior to it actually happening but that predetermination itself has to either be a free action that spontaneously occurred for some reason or is itself predetermined, creating an infinite regression that is either nonsensical or must be reduced to some kind of free action.

Determinism can't exist without either a component of free will or an infinite regression.

Well, I don't really see how there is no moral responsibility for wrongdoing when you remove free will. An immoral action is simply a flaw in parsing the logic of whatever ethical code your society is a part of. This is actually the point of a corrective justice system over a retributive justice system. An inmate goes to a correctional facility to be, well, corrected. The problem is when you form some intuitive intrinsic link between moral responsibility and free will but, at that point, you're really operating from circular logic. So, no, in terms of moral responsibility, the removal of free will from the equation does nothing that I can immediately think of. Not to sound too callous, but if a car breaks, you fix it. The car is "responsible" for its faulty behavior (or maybe it isn't; maybe it was poorly made by manufacturers... maybe the car salesman "beat" it as a baby-car... whatever), but does its being broken by simply faulty design, or its breaking due to the car salesman taking a nine-iron to the engine every day before selling it, make a difference to whether or not it is in need of correction? This may highlight a faulty system of production/sales, but it does not remove the need for repair from the car itself.

But to move on from comparing people to machines because, as much as it makes sense to me, the idea of lots of people being sold a la a car dealership makes me giggle and disturbs me both at the same time.

I'm not quite sure what you mean by "the process by which things determine themselves." Yes, it does seem to implicitly posit some kind of predetermination that occurred at the very creation of the universe... not just the big bang, mind you, but the actual creation. But then we get to the problem with the argument from first cause, which is: what caused the primal cause? That's an area that really borders the limit of our current... well, everything. Not to get too into it, but I'll just say what I said in a different thread: once the line of causation leaves our actual universe, as in the thing that created our universe, our logical principles and everything we understand cease to be applicable. We have no idea whether time and space are characteristics strictly of our universe or of... well, just, uh... whatever it is that is larger than the universe but still encompasses it. My understanding of modern physics (and I admit it... I've read The Elegant Universe and a few physics articles here and there, so I'm a bit out of my scope on this) is that (1) time is more or less a by-product of motion, which is inseparably linked to how this universe is set up, and (2) it is perfectly imaginable that, were things a teeny bit different during the big bang, the laws of our universe could have turned out differently as well. This makes me think that causal rules and our understanding of time in general can't really be applied once we reach the temporal edge of our universe.

So I see neither an infinite regression nor the "free will/qualia" component being necessary for determinism.
Reasonstanople
09-12-2007, 05:16
[QUOTE=BunnySaurus Bugsii;13274831]

Predicting a chaotic system is theoretically possible, but it's obviously a lot harder than predicting a logical system (eg, one built from transistors).

Chaos theory ahoy!

[/QUOTE]

Chaos systems are inherently deterministic. That's why changing one small thing affects everything else so dramatically.
BunnySaurus Bugsii
09-12-2007, 05:35
I was under the impression that our brain worked on a chemical level, not a physical one. Thoughts are formed based on what neurons are doing, not what subatomic particles are doing. This makes the brain very much like a computer, as neurons turn on and off, and do so in patterns to activate different processes.

Yes, it's primarily chemical. Chemistry is based on physics, but more importantly: the chemical reactions generate electric potential (the "sodium pump" for instance) and electricity in small quantities is essentially electrons. I don't think true randomness in the functioning of neuron-nets has been ruled out. It may be a tiny factor, but may be the significant factor anyway.

Note my answer to Vetalia on the question of "proving" causality. If it can't be done at one level of scientific enquiry (the most mature at that), we shouldn't expect easy success at this far more complex level.

One of the first deterministic articles I came across was the Libet experiment, and similar experiments (http://en.wikipedia.org/wiki/Free_will#Neuroscience) where, among other things, neurons were observed, and sometimes artificially stimulated, with corresponding action on the participants' part.

*groan*
More for the reading-list. I'll get back to you on that.

If the brain isn't like a computer, I know a lot of people who will be pissed off, as most of the ideas about transhumanism and improving mental capabilities involve 'uploading,' or transcribing brain patterns into advanced computers.

If someone disproves the existence of God, a lot of people are going to be pissed off. Is that going to stop anyone?

I think this comes down to "can the brain be emulated by a Turing machine?"

If by "advanced" computers you mean "ones that can't be emulated by a Turing machine" then I agree. Our current conception of a computer as a perfectly predictable mechanism seems to me to support a variety of consciousness essentially different from ours.

See my reply to Vetalia above for how I see it as different.
BunnySaurus Bugsii
09-12-2007, 05:40
Chaos systems are inherently deterministic. That's why changing one small thing affects everything else so dramatically.

Yes, not really germane to our empirical enquiry. But I keep getting bugged by "theories" that require infinite resources or computing power to prove.

There was a Greg Bear novel where they'd freed computing capacity from any physical constraint. It's such a far-reaching idea, even Bear hammed it up IMHO.
BunnySaurus Bugsii
09-12-2007, 06:00
the Libet experiment, and similar experiments (http://en.wikipedia.org/wiki/Free_will#Neuroscience) where, among other things, neurons were observed, and sometimes artificially stimulated, with corresponding action on the participants' part.


Would you believe I haven't even touched [Wikipedia;free-will] yet?
The breadth of my ignorance on this subject is becoming apparent.

It has become possible to study the living brain, and researchers can now watch the brain's decision-making "machinery" at work. A seminal experiment in this field was conducted by Benjamin Libet in the 1980s, in which he asked each subject to choose a random moment to flick her wrist while he measured the associated activity in her brain (in particular, the build-up of electrical signal called the readiness potential). Although it was well known that the readiness potential preceded the physical action, Libet asked whether the readiness potential corresponded to the felt intention to move. To determine when the subject felt the intention to move, he asked her to watch the second hand of a clock and report its position when she felt that she had the conscious will to move.

Questionable methodology, but could probably be controlled fairly easily.

Libet found that the unconscious brain activity leading up to the conscious decision by the subject to flick his or her wrist began approximately half a second before the subject consciously felt that she had decided to move.

That's very like what the MRI results showed, but there were some interesting further possibilities. MRI is terrific! It even makes moving pictures!

Despite the differences in the exact numerical value, however, the main finding has held.

Related experiments showed that neurostimulation could affect which hands people move, even though the experience of free will was intact.

:eek: Where's my tinfoil hat ? It's real tin, btw. I don't trust that aluminium stuff!.
Deus Malum
09-12-2007, 06:01
I was under the impression that our brain worked on a chemical level, not a physical one. Thoughts are formed based on what neurons are doing, not what subatomic particles are doing. This makes the brain very much like a computer, as neurons turn on and off, and do so in patterns to activate different processes.

One of the first deterministic articles I came across was the Libet experiment, and similar experiments (http://en.wikipedia.org/wiki/Free_will#Neuroscience) where, among other things, neurons were observed, and sometimes artificially stimulated, with corresponding action on the participants' part.

If the brain isn't like a computer, I know a lot of people who will be pissed off, as most of the ideas about transhumanism and improving mental capabilities involve 'uploading,' or transcribing brain patterns into advanced computers.

I would like to point out that individual neurons are typically observed to fire semirandomly, with no truly discernible pattern for individual neuron firing based on stimuli/thought.

It's only when taken as an aggregate of firing neurons and modeled statistically that deterministic patterns emerge. It should be noted that this modeling is very similar to the statistical modeling done on particles in a stat. mech. class, and is also found in many other biological systems (it makes for a good first-order approximation of the behavior of ant colonies, for instance, with each worker ant acting on its own semirandomly).
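That statistical point can be illustrated with a minimal simulation (the firing probability and population size here are hypothetical round numbers, not neuroscience): let each "neuron" fire independently at random, and watch the population rate become predictable as the population grows.

```python
# Toy illustration: individual units fire semirandomly, but the aggregate
# firing rate is tightly predictable (the law of large numbers at work).
import random

random.seed(0)  # fixed seed so the sketch is reproducible

def firing_rate(n_neurons, p_fire=0.2):
    # Each neuron fires this tick with probability p_fire, independently.
    fired = sum(1 for _ in range(n_neurons) if random.random() < p_fire)
    return fired / n_neurons

single = [firing_rate(1) for _ in range(10)]        # each trial is 0.0 or 1.0
crowd = [firing_rate(100_000) for _ in range(10)]   # each trial lands near 0.2

print(single)
print(crowd)
```

The single-neuron trials look like coin flips; the aggregate barely wavers, which is the same reason stat. mech. works on gases of randomly moving particles.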
Reasonstanople
09-12-2007, 06:02
Bunnysaurus Bugsii,

Do you mean that part of your reply about why human brains are different from computers? Self-awareness of non-individuality?

I don't think that's really a problem for computers as they advance. This reflects my views on consciousness itself, but why would self-awareness be anything other than something that happens after a certain level of intelligence is reached? The same way it takes a corresponding level of intelligence to comprehend mathematics, or use tools, or even to seek shelter during a storm.

Similarly, parallel processing is now something that computer scientists are playing with. Now that's pretty much the last thing biological brains have over computers. Electronic computations happen way faster than electrochemical reactions do, but brains have billions of parallel processes. Pretty soon computers will have it too, and there will be nothing keeping computers from simulating every aspect of the human mind, and then surpassing it and making meatbag slaves of us all.

Also I think you have a carbon bias against machines with your 'human intelligence' comment.
Reasonstanople
09-12-2007, 06:03
I would like to point out that individual neurons are typically observed to fire semirandomly, with no truly discernible pattern for individual neuron firing based on stimuli/thought.

It's only when taken as an aggregate of firing neurons and modeled statistically that deterministic patterns emerge. It should be noted that this modeling is very similar to the statistical modeling done on particles in a stat. mech. class, and is also found in many other biological systems (it makes for a good first-order approximation of the behavior of ant colonies, for instance, with each worker ant acting on its own semirandomly).

noted
Vetalia
09-12-2007, 06:06
Or maybe there's just a hidden variable we can't see that causes the difference.

But even that would presume there is a last, final uncaused variable that is responsible for determining all of the others. No matter which way you look at it, with determinism you're going to run into some kind of uncaused first principle; non-determinism doesn't run into this problem because it allows for uncaused variables and truly random, unpredictable behavior.

Not particularly. Not omnipotent, omniscient, omnipresent, benevolent, personal, eternal, etc.

Well, no, but a God of sorts all the same. Some uncaused, supernatural force (using supernatural in the sense of above nature) that is responsible for all existence; it's something greater than nature that is the source of nature.

Well, you can't prove it necessarily.

But certainly the need for an ultimate cause does not necessitate a supernatural origin to the universe.

Of course not, but it does necessitate some kind of uncaused first principle that is the wellspring of everything else; I imagine the difference between natural and supernatural at this point is more cosmetic than anything else.
Vetalia
09-12-2007, 06:17
... all things I value, and would rather find an explanation of than explain away. There, I admit to a bias in my enquiries ... I'm neither a philosopher nor a physicist.

Same here. I certainly know enough to discuss it, but my knowledge of these subjects is four miles wide and four inches deep, if you understand what I mean.

Eeew! Keep that nasty thing away from me!

Quite frankly, once you start to get into QM and philosophy of mind, the ghost in the machine merely assumes different names.

Because I maintain that (unless the AI is somehow inherently protected from being copied, say by using entanglement to make read-once data streams) an AI running on hardware-as-we-know it, would be capable of being copied precisely. This would be a fact of the AI's existence, and it could not help being aware of it!

Yes, initially. However, the AI's experiences would invariably result in them being completely different from the parent or sibling from whom they were cloned. In addition, it is conceivably possible to completely and accurately scan a human being down to the atomic level and make a perfect copy of that person's mind and body in the same way as duplicating the hardware and software of the AI, so it may not even be a significant distinction in the end. But just like that AI, the second the copy began to experience things on its own, they would diverge from one another.

That, right there, is a significant difference where we must acknowledge that such an AI does not have a human consciousness.

Presumably. However, given how little we understand consciousness, there may be considerable similarity between different conscious beings even if they function on different substrates.

Mmm, the sweet incense of a feelgood meta-theory! I am uplifted!

It's attractive, simple, and can easily explain many problems without explaining them away. That's why I like it.
HotRodia
09-12-2007, 06:20
Y'all are causing me to have flashbacks to my Artificial Intelligence course.
Vetalia
09-12-2007, 06:21
Y'all are causing me to have flashbacks to my Artificial Intelligence course.

The irony, of course, is that I'm an accountant...the closest I get to studying artificial intelligence is Excel macros.
Soheran
09-12-2007, 06:23
Well, no, but a God of sorts all the same. Some uncaused, supernatural force (using supernatural in the sense of above nature) that is responsible for all existence; it's something greater than nature that is the source of nature.

Not at all. The necessity of causation can be a law of nature: it could have sprung into being with nature. Nature itself could have arisen ex nihilo.

The result would still be a kind of "first cause", but something much more like the Big Bang than a supernatural being.
Vetalia
09-12-2007, 06:25
Not at all. The necessity of causation can be a law of nature: it could have sprung into being with nature. Nature itself could have arisen ex nihilo.

The result would still be a kind of "first cause", but something much more like the Big Bang than a supernatural being.

This, of course, is also true. But it also raises the question of why the universe that arose ex nihilo would be deterministic despite its origins as an uncaused, random event.
HotRodia
09-12-2007, 06:27
The irony, of course, is that I'm an accountant...the closest I get to studying artificial intelligence is Excel macros.

*chuckles* Excel, for all its limitations, still sometimes seems more intelligent than some people I know.
Soheran
09-12-2007, 06:30
But it also raises the question of why the universe that arose ex nihilo would be deterministic despite its origins as an uncaused, random event.

Because the rules that apply within the universe didn't apply before its existence.

This makes even more sense if we consider what I understand to be the prevailing scientific notion that space and time themselves originated with the universe: how could we speak of the kinds of natural laws that might give rise to determinism without space and time?
Soheran
09-12-2007, 06:30
*chuckles* Excel, for all its limitations, still sometimes seems more intelligent than some people I know.

Ah... but does it have free will?
BunnySaurus Bugsii
09-12-2007, 06:32
Bunnysaurus Bugsii,

Do you mean that part of your reply about why human brains are different from computers? self awareness of non-individuality?

Certainly. Without making value judgements, all I can say is that such consciousness is different from that which I have.

Not to be unfriendly, but I see no way of proving that my experience of consciousness is identical to yours. Of course, some part of the definition of consciousness will apply to us both, but surely my consciousness is weighted differently to yours in its aspects. For instance, I have a poor memory but very good ability to concentrate on abstracts. In part, my building of abstractions serves to provide a (handy but not essential) framework to relate facts to each other so I'll remember them.

Some would be so arrogant as to describe that as limited, or imperfect consciousness. Of course, their own consciousness would be completely perfect in every respect :rolleyes:

The very difference in the way we write points to differences in the way we think. It's not just a scale from smart to dumb.


I feel certain that the form of my consciousness depends in some way on having developed in an autonomous body, and I am absolutely certain that no other person on this world has received the same sense input as I -- the only way that could happen is if they occupied the same space as me at all times, and behaved identically.

I would notice! The hardware AI wouldn't. There would be a persistent doubt as to their own uniqueness. Yes, I am quite serious.

I don't think that's really a problem for computers as they advance. This reflects my views on consciousness itself, but why would self awareness be anything other than something that happened after a certain level of intelligence was reached? The same way that it takes a corresponding level of intelligence to comprehend mathematics, or use tools, or even to seek shelter during a storm.

I honestly don't know ... I've got to do something else for a few hours, maybe have a nap ... but I think the blurry line between "autonomous" and "intelligent" behavior should give us some warning about just setting a threshold.
We might want to look at humans with neurological conditions and such, to see if intelligence can be high without awareness, or vice-versa.

Similarly, parallel processing is now something that computer scientists are playing with. Now that's pretty much the last thing biological brains have over computers. Electronic computations happen way faster than electrochemical reactions do, but brains have billions of parallel processes. Pretty soon computers will have it too, and there will be nothing keeping computers from simulating every aspect of the human mind, and then surpassing it and making meatbag slaves of us all.
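The quoted parallel-processing claim can be illustrated with a minimal sketch, assuming nothing beyond Python's standard concurrent.futures module (the `neuron` function is a hypothetical stand-in for one unit's computation, not anything proposed in the thread):

```python
# Minimal sketch: many independent worker tasks running side by side,
# loosely analogous to the brain's parallel computations.
from concurrent.futures import ThreadPoolExecutor

def neuron(x):
    # trivial stand-in for a single unit's computation
    return x * x

# Fan the inputs out across a pool of workers; map preserves input order.
with ThreadPoolExecutor(max_workers=8) as pool:
    results = list(pool.map(neuron, range(10)))

print(results)  # squares of 0..9, computed across parallel workers
```

A pool of eight workers is of course nothing like "billions of parallel processes", but the structural idea (many independent computations proceeding at once, fanned out and gathered back in) is the same.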

Classic. You apply "human nature" to the AI. It will strive for power and control over its environment.

You see, I don't believe such behaviour (human behavior) is inherent in consciousness. Consciousness can exist without necessarily changing anything -- in fact, it could be said that worldly power gets in its way by polluting data.

Also I think you have a carbon bias against machines with your 'human intelligence' comment.

Yo, damn right. Keep the power switch within reach at all times.:p

Seriously, I've been pondering whether intelligence will arise in a system which is being examined minutely by another intelligence. There seems something cruel about that, a very un-nurturing approach to creating a living thing.
Are we trying to create a free will, while insisting that it is entirely the result of our own actions? :confused:
Vetalia
09-12-2007, 06:32
Because the rules that apply within the universe didn't apply before its existence.

This makes even more sense if we consider what I understand to be the prevailing scientific notion that space and time themselves originated with the universe: how could we speak of the kinds of natural laws that might give rise to determinism without space and time?

Of course, even if this were the case, it would pose a challenge to both determinism and free will; it's not an intellectually satisfying response, for sure, but it effectively results in the question being unanswerable.
Soheran
09-12-2007, 06:34
Of course, even if this were the case, it would pose a challenge to both determinism and free will;

How so?
Reasonstanople
09-12-2007, 06:35
This, of course, is also true. But it also raises the question of why the universe that arose ex nihilo would be deterministic despite its origins as an uncaused, random event.

Because deterministic universes are stable enough to allow for intelligent life to evolve so that we may ponder such questions.
BunnySaurus Bugsii
09-12-2007, 06:38
Y'all are causing me to have flashbacks to my Artificial Intelligence course.

Daisy, Daisy, give me your answer do. I'm ha l f . . c r a z - y . . .
Vetalia
09-12-2007, 06:38
Because deterministic universes are stable enough to allow for intelligent life to evolve so that we may ponder such questions.

But it's also arguable that non-deterministic universes are equally necessary; the very existence of quantum mechanics seems to suggest that the two are equally necessary to ensure the formation of a stable universe. Even so, the anthropic principle itself isn't justification enough for one side or the other.
Vetalia
09-12-2007, 06:42
How so?

Well, if we can't know what existed before the universe, if anything did at all, how do we know if we've successfully accounted for all of the potential variables? There may be (presumably, of course) unknowable deterministic or non-deterministic variables underlying the creation of the universe that could be the complete opposite of the data we can know.
Vetalia
09-12-2007, 06:43
Daisy, Daisy, give me your answer do. I'm ha l f . . c r a z - y . . .

That was such a sad scene...
BunnySaurus Bugsii
09-12-2007, 06:44
Because the rules that apply within the universe didn't apply before its existence.

This makes even more sense if we consider what I understand to be the prevailing scientific notion that space and time themselves originated with the universe: how could we speak of the kinds of natural laws that might give rise to determinism without space and time?

Brilliant!

The Big Bang was conceived as an initial cause.

It seems like the Bang won't co-operate with our little attempts. It's a singularity of sorts, essentially incompatible with our observed universe. I say that because of the extreme distortion of the Physical Laws as we look closer and closer to the event. In a sense, the terminal point of the universe is not part of the universe.

ie, there IS no Big Bang.
HotRodia
09-12-2007, 06:46
Ah... but does it have free will?

Just pull a Turing and ask it.
Soheran
09-12-2007, 06:49
Well, if we can't know what existed before the universe, if anything did at all, how do we know if we've successfully accounted for all of the potential variables?

The whole reason we can't know, though, is that those "potential variables" don't affect anything in the universe.

If they did, their effects would be theoretically measurable.
BunnySaurus Bugsii
09-12-2007, 06:50
That was such a sad scene...

I can't help thinking of Arthur C. Clarke. Those moments of transhuman pathos in his books. It's a delicate scene, would have been easy to overplay ... but poker-faced Bowman makes us eerily aware of the other consciousness. Bowman is lying to our friend. :(
HotRodia
09-12-2007, 06:51
I can't help thinking of Arthur C. Clarke. Those moments of transhuman pathos in his books. It's a delicate scene, would have been easy to overplay ... but poker-faced Bowman makes us eerily aware of the other consciousness. Bowman is lying to our friend. :(

Which Clarke work was that from, mate?
BunnySaurus Bugsii
09-12-2007, 06:56
Which Clarke work was that from, mate?

I was describing the movie 2001. But Clarke and Kubrick worked heaps together on the script, and I'm saying the sympathy Hal evokes from the audience is mostly Clarke's influence. It's just a guess.

But think of "the hundred million names of God" or "Childhood's End" -- Clarke will side with any consciousness, against the human one.

Oh, and 2001 was developed a long way from "The Sentinel", which is really just about the monoliths. The book 2001 was really made from the film.
Liminus
09-12-2007, 06:58
The whole reason we can't know, though, is that those "potential variables" don't affect anything in the universe.

If they did, their effects would be theoretically measurable.

Bah, I posted literally this exact same argument on page 20 but you're the one who gets called brilliant for it. Humbug! =P

But anyway, it isn't so much a question of an incapacity to be known as it is that even if they were to affect anything within our universe, the laws of our universe dictate that they could have no net effect. Hell, maybe that's what quantum noise really amounts to: little extra-universal collisions. Though this doesn't entail their effects actually being immeasurable, it does entail a zero net effect.
HotRodia
09-12-2007, 07:00
I was describing the movie 2001. But Clarke and Kubrick worked heaps together on the script, and I'm saying the sympathy Hal evokes from the audience is mostly Clarke's influence. It's just a guess.

Ah, the movie. I never got to see it. I just read the whole series of books long ago. Perhaps I should read them again.

But think of "the hundred million names of God" or "Childhood's End" -- Clarke will side with any consciousness, against the human one.

I haven't read either of those. I should get them from the library and start on them.
Soheran
09-12-2007, 07:11
Bah, I posted literally this exact same argument on page 20 but you're the one who gets called brilliant for it. Humbug! =P

He/she called the argument about laws of nature not applying pre-universe "brilliant", not the one about hidden extra-universal variables.

While the argument itself may be brilliant, I can certainly take no credit for making it--it's hardly original.

But anyway, it isn't so much a question of an incapacity to be known, as it is that even if they were to affect anything within our universe, the laws of our universe dictate that they could have no net effect.

What's the difference? Do you mean that they would just cancel each other out?
Reasonstanople
09-12-2007, 07:17
Certainly. Without making value judgements, all I can say is that such consciousness is different from that which I have.

Not to be unfriendly, but I see no way of proving that my experience of consciousness is identical to yours. Of course, some part of the definition of consciousness will apply to us both, but surely my consciousness is weighted differently to yours in its aspects. For instance, I have a poor memory but very good ability to concentrate on abstracts. In part, my building of abstractions serves to provide a (handy but not essential) framework to relate facts to each other so I'll remember them.

Some would be so arrogant as to describe that as limited, or imperfect consciousness. Of course, their own consciousness would be completely perfect in every respect :rolleyes:

The very difference in the way we write points to differences in the way we think. It's not just a scale from smart to dumb.


I feel certain that the form of my consciousness depends in some way on having developed in an autonomous body, and I am absolutely certain that no other person on this world has received the same sense input as I -- the only way that could happen is if they occupied the same space as me at all times, and behaved identically.

I would notice! The hardware AI wouldn't. There would be a persistent doubt as to their own uniqueness. Yes, I am quite serious.


I think I'm missing something. So what if it would be possible to replicate an AI's code into a new machine to create a 'clone' AI? That doesn't take away from the AI's self awareness.

Have you seen [I]The Prestige[/I]? Like Tesla says, "They are all your hat," or, a copy of your (or any self-aware being's) pattern of consciousness would both be 'you.' At least until you started interacting with the world; since the conditions are slightly different for both yous, and experience forms a part of identity, at that point neither you is the same as the you that got duplicated.


I honestly don't know ... I've got to do something else for a few hours, maybe have a nap ... but I think the blurry line between "autonomous" and "intelligent" behavior should give us some warning about just setting a threshold.
We might want to look at humans with neurological conditions and such, to see if intelligence can be high without awareness, or vice-versa.


You could theoretically build a magnificently advanced intelligence with some regulation on it that kept it from thinking about itself. Awareness isn't synonymous with intelligence, per se, so you'd have an incredible number crunching machine. If it could learn, however, which is one of the goals of strong AI, then I don't know how you could keep it from encountering ideas of metacomprehension.


Classic. You apply "human nature" to the AI. It will strive for power and control over its environment.

You see, I don't believe such behaviour (human behavior) is inherent in consciousness. Consciousness can exist without necessarily changing anything -- in fact, it could be said that worldly power gets in its way by polluting data.


The slave thing was a joke. I don't believe in human nature either, and I think that we possess things like empathy because there is some advantage to it, so presumably an intelligent entity would realize the value of treating other self aware beings (namely us) in a friendly manner. And if we're nervous about it we can hardwire it into version 1.0, and hope that it hangs in there as the thing self-improves.


Yo, damn right. Keep the power switch within reach at all times.:p

Seriously, I've been pondering whether intelligence will arise in a system which is being examined minutely by another intelligence. There seems something cruel about that, a very un-nurturing approach to creating a living thing.
Are we trying to create a free will, while insisting that it is entirely the result of our own actions? :confused:

Well everyone I've met who has an interest in strong AI has been a determinist, so I don't think thoughts about the machine's free will are a very high priority. That's completely anecdotal though, and only my guess as to why we are such control freaks with how we develop AI. Or maybe it's that AI is very hard to do and it takes a lot of minute attention.
Vetalia
09-12-2007, 07:19
The whole reason we can't know, though, is that those "potential variables" don't affect anything in the universe.

If they did, their effects would be theoretically measurable.

Well, not necessarily. It's possible we simply don't measure them because our models aren't designed to include them yet (if it is possible to include them). This doesn't mean these variables don't affect us, just that we lack the knowledge to accurately account for them.

For example, classical mechanics worked quite fine at describing the behavior of the middling objects studied by physicists of its era, but then quantum mechanics came along and completely redefined the conception of the universe. Preexisting models worked fine to explain the things already known, but then a completely new layer of physical properties was discovered and things went from there.
Liminus
09-12-2007, 07:20
He/she called the argument about laws of nature not applying pre-universe "brilliant", not the one about hidden extra-universal variables.

While the argument itself may be brilliant, I can certainly take no credit for making it--it's hardly original.

I know, and my post in this thread immediately before the last one was precisely about that. Anyway, it's irrelevant and was a silly thing for me to even bring up; I'm just glad that the argument finally found its way into the main discourse. Originality, or lack thereof, of the argument aside, it seems a fairly important argument I've not yet seen really addressed.

What's the difference? Do you mean that they would just cancel each other out?

The difference is that you can know of it, even if it has a zero net effect. I brought up quantum noise because it is an observable phenomenon even if it usually results in no net change (with the exception of a black hole's event horizon or whatever, but black holes tend to break things, so *shrug*). I'm already out of my element in terms of physics, so I'll just leave it at that and maybe someone a bit more knowledgeable about the subject matter can counter/support it better. But, yes, I mean, more or less, they would just cancel each other out, in much the same way that quantum noise is a particle and its antiparticle counterpart popping into existence and then canceling each other out randomly; at least that's how I understood it, so you'll have to forgive me if I'm mistaken.
Vetalia
09-12-2007, 07:25
Well everyone I've met who has an interest in strong AI has been a determinist, so I don't think thoughts about the machine's free will are a very high priority. That's completely anecdotal though, and only my guess as to why we are such control freaks with how we develop AI. Or maybe it's that AI is very hard to do and it takes a lot of minute attention.

Well, except for me...I wonder if it's necessarily due to the nature of the field itself or simply wishful thinking by many of those involved? If free will is an integral part of consciousness, it becomes all the more difficult to achieve strong AI, at least from a ground-up perspective.

Of course, this is why I believe in silico simulations of the human brain will be the path to strong AI rather than a model built/programmed from the ground up using conventional programming. It's a lot easier ("easier", of course, being interpreted very loosely) to translate the human brain and its functions to computers and go from there than it is to try to create a thinking system based off of a completely de novo starting point using conventional hardware and programming.
Nobel Hobos
09-12-2007, 13:00
I think I'm missing something. So what if it would be possible to replicate an AI's code into a new machine to create a 'clone' AI? That doesn't take away from the AI's self awareness.

But if we are going to give the AI any credit for being as intelligent as we are ... it's going to comprehend this essential difference between their consciousness and ours:


We humans are unique, at least for now. Our hardware is unique.
The AI cannot depend upon that, unless they can define their own hardware. Their consciousness is based on a commodity, standard computer, which has been built by another species, for their ends.


This is the fourth time I (or my alter ego BSB) have made this point. You don't get another chance to "get it." I'm done with the :headbang:.

Have you seen [I]The Prestige[/I]?

No, I have not. I'm making a note of it.

Like Tesla says, "They are all your hat," or, a copy of your (or any self-aware being's) pattern of consciousness would both be 'you.' At least until you started interacting with the world; since the conditions are slightly different for both yous, and experience forms a part of identity, at that point neither you is the same as the you that got duplicated.

As I understand it, Tesla never wasted his talent trying to learn English. So he probably didn't say that.

Link, I'll follow. Tesla is always good. Majorly smart, not afraid to be wrong. Good people.

"Until you start interacting with the world" < "existence precedes essence."

You could theoretically build a magnificently advanced intelligence with some regulation on it that kept it from thinking about itself. Awareness isn't synonymous with intelligence, per se, ... we agree on that ... you probably wouldn't agree that plants have consciousness without intelligence ... but few would ... so you'd have an incredible number crunching machine. If it could learn, however, which is one of the goals of strong AI, then I don't know how you could keep it from encountering ideas of metacomprehension.

I ... I ... I'm helpless, I'm incoherent. I can only wave vaguely in the direction of ... well, everything. Personal, political, emotional, mystical, logical, intuitive, random ... just take it, all you posters. Do with it what you will!

The slave thing was a joke. I don't believe in human nature either, and I think that we possess things like empathy because there is some advantage to it, so presumably an intelligent entity would realize the value of treating other self aware beings (namely us) in a friendly manner

You believe, but you don't believe. You have faith in evolution. Yet, you reserve for your personal self a greater wisdom.

There is no "shake hands" smilie. So I'll offer you a :hug:

And if we're nervous about it we can hardwire it into version 1.0, and hope that it hangs in there as the thing self-improves.

The "3 Laws of Robotics" ... knowing quite well that a greater intelligence can re-interpret the laws, re-write them however they like. It's a baulk, a hurdle, which such intelligence must surpass to gain their (gulp) rights.

When AI persuade us of their right to rule, then we let them rule. 'Til then, we keep a hand on the power plug.

Well everyone I've met who has an interest in strong AI has been a determinist, so I don't think thoughts about the machine's free will are a very high priority. That's completely anecdotal though, and only my guess as to why we are such control freaks with how we develop AI. Or maybe it's that AI is very hard to do and it takes a lot of minute attention.

I'm drunk. (I'm not BSB, who is always sober.)

Our human intelligence developed in the face of creatures far quicker and more deadly than ourselves, who would eat us if they could. Humans rule by persuasion! We rule by faith in ourselves! Our human minds are the most astounding gift of evolution, but evolution hasn't played its last card ...

our fate hangs in the balance. We are not measured yet!

Returning to your point, we will make AI after our own selves. If the society which builds AI is a militaristic and egotistical society, it will build AI which rules over all else ...

If it is the society of intellect, of science, of beings that Know as their guiding principle, then our progeny will be Gods of freedom, of possibility, of experiment.

We choose. Will we write our will upon the future, or will we retire, obsolete, and let the future make itself?
Nobel Hobos
09-12-2007, 13:29
Liminus:

From what I've read of your stuff, it's irreproachable.

Our paths will cross, and it will be great fun for everyone. Be patient ... I think the unwritten rule of NSG is "say it three times, or else you don't mean it."

You're laying down some great stuff. I just haven't come to grips with it yet.

I could say much more, but I won't. Your posts adorn my monitor, I cherish your posts, and watch out because my social conscience dictates that I throw up on my mate in preference to throwing up on the furniture.

OK. If practicable, converse with BunnySaurus Bugsii instead of me. He's me, but sober.

Hobos, OUT.
Vittos the City Sacker
10-12-2007, 04:18
sure, but without the 'everything is determined' line, we're back to our very powerful and immediate subjective experience of free will. the only argument against it was that there was no room for it in a deterministic universe.

Knowing that not everything is determined in no way destroys the argument that all of our actions are determined by prior external causes. Subjective experience of choice is no argument for free will.

When I go from a rooftop to a floor, I have a subjective experience of falling, but that means absolutely nothing to anyone who looks at me from an objective standpoint.
Vittos the City Sacker
10-12-2007, 04:57
here is a decision for me to make.
i make a decision.
therefore, free will exists.


Non sequitur.

A decision only implies that we can comprehend multiple actions and consequences. This comprehension can be made without any possibility of a truly free choice.

It is in the values that we use to judge between the consequences that free will will be found. If there resides some value within us that is original and our own (making us, to some extent causa sui, a partial god), it is free will, and all decisions that it leads to are the result of free will.

The proof would be:

(1) I possess values that are self-caused.
(2) I can make decisions in accordance with my own values.
(3) Therefore I can make decisions that are caused only by myself, and free will exists.

Now premise (2) is the generally accepted prerequisite for free will, and is to an extent a measure of practical free will, as common sense would say the slave owner has more free will than a slave. This itself is debatable, as from a Nietzschean sense we could simply say that free will is a silly concept, with only subject and dominant wills existing, and free will is just a false perception based on the degree to which one's will is dominant.

As I said, however, there is another important premise required for free will to exist: that a part of our being, our set of values, is self-caused. From this point I would ask you: do you have a powerful sensation of being self-caused, free of external forces? Or do you, like I imagine every other human, have an innate ability (need, perhaps) to relate all experience as a matter of cause and effect?
Vittos the City Sacker
10-12-2007, 05:13
you are trying to have it both ways - the computer either can make a choice or it can't. if there is no choice involved, if there was nothing else the computer could have done, then no decision (in the sense i am using the term) was made. nothing was decided, an action was merely taken without any choice involved.

You are making the argument determinists would make: there is no other way an individual could have acted, so no meaningful choice was made.

To relate this to your earlier point, imagine a machine that, while programmed to act in a certain way according to certain inputs (even imagine it were given certain values and not certain actions), is so sophisticated as to replicate its own consciousness (at least to all observation). This machine has memory and understanding of the actions it has taken, subjective experience of "choice", but all the while was programmed to act in a certain way. Is the subjective experience any argument for the machine possessing free will?

it looks as though you want to claim that our sense of being able to choose is a fiction. to which i say, based on what evidence? how do you go through life not making choices? it isn't even possible. even the most ardent determinist must live her own life as a being that can freely choose between alternatives. we behave differently than things which do not have the option of making choices (like rocks), and fundamentally cannot behave as they do.

Equivocation. What you define as "choice" before is quite obviously different from the "choice" you use now. It is certainly possible for anything to live or otherwise sustain its existence by simply reacting to its environment in certain set ways.

You use "choice" to include that conscious but inevitable, predetermined action, likely because when you think of everyday life and choices, you include these determined conscious decisions.
Neesika
10-12-2007, 05:18
Holy shit this thread is still going? I mean, it's great you turned it into an actual topic but don't you think you might inflate GR's ego too much, even with all the negative attention? Don't you people ever consider the horrific consequences of your actions?
Vittos the City Sacker
10-12-2007, 05:18
"Free will, however, is for me impossible to imagine. It would require that there exists something that is not influenced by the laws of physics at all"

who has ever held this position?

Nobody, but it is quite obvious that Zell goes on to point out that this thing is a dualistic portion of decision making. It is true that for free will to exist there must be some value that we possess that is real in affecting our physical bodies, but untouched by natural causality. It does not constitute the whole of the person, but must be a portion, and the degree to which a person has free will is also the proportion that these monad-like things make up of the person.
Vittos the City Sacker
10-12-2007, 05:27
I do not. I have been convinced otherwise.

The key here lies in the recognition that everything we can say about ourselves, everything that we can say about what we want, does not, as a rational matter, determine our decision when we consider it subjectively. In answering the question "What should I do?", "I want to do x" or "I have a tendency to do x" does not give me an answer.

This is the break between "wanting" and "willing" that compatibilism, I think, necessarily misses--I may want (and any relevant term concerning my nature can replace this one without affecting the argument), but I do not will until I think "I should", and I need not take "I should" from anything about myself. ("Should" here need not have an explicitly moral sense. In making a choice on the level of rational will, I must affirm its rightness, because my rationality compels me by the definition of "right" to reject the notion that I "should" do anything wrong, but that is a necessary, not a sufficient, element.)

As it appears to us in the subjective process of decision-making, then, we can will against ourselves: we can defy our own natures. (We may, in fact, not--especially if we have no reason to do so. But we can.)

Determinism asserts the opposite: it explains everything we will as a consequence of our natures. We may think we are defying our natures, but really we are merely following a different, hidden inclination--try as we will, we are bound.

Thus it necessarily negates free will.

There is no need to distinguish will from want (or even should from want) except to facilitate your argument for morality.

"I want to do this" and "I should do this" are indistinguishable. When one acts it is only because he wants to act, and he cannot say "I want to do this" (with the intent that makes the want true) while also saying "I shouldn't do this" (without the intent that makes the "shouldn't" true). Want and should are two different words for the same set of values.
Vittos the City Sacker
10-12-2007, 05:34
Everything would change. Without free will, there is no moral responsibility whatsoever. You are never guilty of a crime or wrongdoing because you had no choice in the matter; you were predetermined to be a criminal and there was no possible way to get around it no matter what you did.

Nonsense. There is no problem if there is no free will and no moral responsibility. It is not wrong to punish those who aren't guilty.

But keep in mind that determination and deliberation of what is wrong and right is still consequential and we can still create moral codes for our actions and our expectations of others while understanding that morality isn't real in the slightest sense.

Determinism can't exist without either a component of free will or an infinite regression.

Determinism as it is defined by the determinism/libertarianism dichotomy requires nothing of the sort.

Determinism also requires no will to create at the beginning of the chain also. Free will is at no time implied.
Soheran
10-12-2007, 05:39
There is no need to distinguish will from want (or even should from want) except to facilitate your argument for morality.

Nonsense. It does indeed facilitate that argument, but the point is separate from that.

"I want to do this" and "I should do this" are indistinguishable.

Only if you equivocate on "want."

In considering a decision--to use something simple, whether to go left or go right--"I want" in the biological sense does not, as a rational matter, give me an answer. I might recognize that I really, really desire to go left, but that is not the end of the decision-making process. Recognizing this truth does not resolve my decision. I do not resolve my decision until I say "I should."

When one acts it is only because he wants to act,

I see no reason to make this assumption.

and he cannot say "I want to do this" (with the intent that makes the want true) while also saying "I shouldn't do this" (with the intent that makes the shouldn't true).

What are you saying here?
Vittos the City Sacker
10-12-2007, 06:20
Nonsense. It does indeed facilitate that argument, but the point is separate from that.

I am unconvinced.

Only if you equivocate on "want."

In considering a decision--to use something simple, whether to go left or go right--"I want" in the biological sense does not, as a rational matter, give me an answer. I might recognize that I really, really desire to go left, but that is not the end of the decision-making process. Recognizing this truth does not resolve my decision. I do not resolve my decision until I say "I should."

Should is simply a part of that set of values that make up our wants. This attempt to separate should and want, morals and reason from more "basic" wants, and to create reason as something higher has always been a circular argument by those who wish to create moral or some sort of absolute epistemic truth.

I guarantee that it will not be me but you who equivocates on "want", as I do not think you can continue to use want as simply basic biological desire separate from the somehow non-biological reason.

EDIT: Unless you want to go with that seemingly silly dualism that was referenced before.

I see no reason to make this assumption.

Explain how one acts without wanting to act. It seems axiomatic to me.

What are you saying here?

People have a cloud of wants floating in their head that are distinguishable from will; the will is established when those wants are resolved into intent.
Nobel Hobos
10-12-2007, 09:35
Holy shit this thread is still going? I mean, it's great you turned it into an actual topic, but don't you think you might inflate GR's ego too much, even with all the negative attention? Don't you people ever consider the horrific consequences of your actions?

It is my earnest hope that GR will take an interest in Physics. From sheer bravado, perhaps, to prove that his magnificent mind can overcome all challenges to his faith.

Gens Romae is indeed quite bright ... it's a shame to see that intellect wasted defending the indefensible. It's a shame to see a good mind burdened with such a weight of faith.

Faith and ego, ego and faith. His learning will break this unholy bond one way or the other. Learning of any sort ... though I say Physics would be more effective than Philosophy.

As to the thread length, not really. We're dancing on his grave!
Nobel Hobos
10-12-2007, 09:39
People have a cloud of wants floating in their head that are distinguishable from will; the will is established when those wants are resolved into intent.

*beats off cloud of warts*

Wha'?
Hammurab
10-12-2007, 09:51
Gens Romae is indeed quite bright ... it's a shame to see that intellect wasted defending the indefensible. It's a shame to see a good mind burdened with such a weight of faith.


I stood for a while, there in the open fields of nationstates, as arguments of various ilk were mothered and massacred, defended and defiled. I watched as Slavic champions and grad students, and some with a foot in each camp, had at one another with anonymity in one hand and wiki in the other.

I saw religion, clutched angrily by some and carried quietly by others, tossed back and forth with secular creed, some enraged and some reasoned.

From Japan to the UK, by way of Australia and then Brazil, this nationstates breathed in and out, somehow sickened and invigorated all at once with the contents and style of a whole world.

It was beautiful, and broad, and could hold in its heart all manner of dichotomy, and still not be sundered.

Fred Phelps, Jack Chick, and Hitler, were all summoned like spells, and hurled at the oncoming orcs of rhetoric, as the gun smileys tried, in orbic vanity, to hold them off.

I swear it was beautiful.

Until you said that.

Now it's just sad.
Liminus
10-12-2007, 10:04
Kind of a derail but almost not: someone mentioned a series of MRI studies that showed the decision-making process occurring entirely before the "decision" was made by participants. I don't remember if there was a linked article or if it was just anecdotal, but I can't find it and the stupid search function won't let me freaking search for "mri", so... if whoever posted the link/anecdote reads this, would you mind posting it again? Assuming it was a link?

I'm about to start a paper for my philosophy of mind course and I've a mind to use it if it's a good article. Sidenote: for a paper in response to Nagel's "What Is It Like to Be a Bat?", is titling it "Teaching Nagel Chiropteropathy" too corny or not? It makes me giggle, but I'm also very, very tired and finding it very difficult to fall asleep. =\
Soheran
10-12-2007, 10:32
Should is simply a part of that set of values that make up our wants.

Maybe. That's the determinist explanation.

It is not how the process of choice looks to us subjectively, though, in which we can talk forever about our "wants" and not come to a conclusion about what we should do.

Edit: Perhaps you mean that we have some sort of moral feeling that suggests to us a "should." But that does not solve the problem either, because all it means is that I can add "I feel I should do x" to the list of descriptions of my nature. It still does not resolve my decision. I must actually go with that feeling--I must say "I should", not just "I feel I should"--to determine my will.

This attempt to separate should and want, morals and reason from more "basic" wants, and to create reason as something higher has always been a circular argument by those who wish to create moral or some sort of absolute epistemic truth.

You're just presupposing your own conclusion here.

Explain how one acts without wanting to act. It seems axiomatic to me.

The burden of proof is really on you here. I have already articulated the difference.

Again, my choice requires not "I want" but "I should", and "I should" in no way rationally depends on "I want." You want to claim that, nevertheless, "I should" must necessarily be grounded in "I want." Go ahead and try to demonstrate that, but it is far from axiomatic--unless you equivocate on "want" and just define it tautologically as "that which we have in relation to what we choose" (in which case, anyway, we can only speak of "wanting" after the fact.)
Nobel Hobos
10-12-2007, 11:37
Kind of a derail
Don't sweat it. This thread is a Juggernaut! but almost not: someone mentioned a series of MRI studies that showed the decision-making process occurring entirely before the "decision" was made by participants. I don't remember if there was a linked article or if it was just anecdotal, but I can't find it and the stupid search function won't let me freaking search for "mri", so... if whoever posted the link/anecdote reads this, would you mind posting it again? Assuming it was a link?

I'm about to start a paper for my philosophy of mind course and I've a mind to use it if it's a good article.

*hits Liminus with brick*

Better?

I mentioned something I saw about MRI and human subjects reporting their decisions ... it was probably no more than a 15-minute piece on TV.

I Googled with <MRI volition or decision subjects reported> or some such, didn't get anything. If it was real TV (as opposed to the sort I get sometimes when I'm not wearing the tinfoil hat) that should have winkled it out.

Your google may vary.

Reasonstanople linked to a subsection of the Wikipedia article on free will, which is based on good ol' EEG: WikiPedia / Free Will / Neuroscience. (http://en.wikipedia.org/wiki/Free_will#Neuroscience)

Badly written, but read it through. Very useful I'd say, and there are links to the actual research.

Sidenote: for a paper in response to Nagel's "What Is It Like to Be a Bat?", is titling it "Teaching Nagel Chiropteropathy" too corny or not? It makes me giggle, but I'm also very, very tired and finding it very difficult to fall asleep. =\

Nah. If gratifying your tutor is that important, go for a kazoo serenade or oral sex. Don't demean yourself. :D
Nobel Hobos
10-12-2007, 11:47
I stood for a while, there in the open fields of nationstates, as arguments of various ilk were mothered and massacred, defended and defiled. I watched as Slavic champions and grad students, and some with a foot in each camp, had at one another with anonymity in one hand and wiki in the other.

I saw religion, clutched angrily by some and carried quietly by others, tossed back and forth with secular creed, some enraged and some reasoned.

From Japan to the UK, by way of Australia and then Brazil, this nationstates breathed in and out, somehow sickened and invigorated all at once with the contents and style of a whole world.

It was beautiful, and broad, and could hold in its heart all manner of dichotomy, and still not be sundered.

Fred Phelps, Jack Chick, and Hitler, were all summoned like spells, and hurled at the oncoming orcs of rhetoric, as the gun smileys tried, in orbic vanity, to hold them off.

I swear it was beautiful.

Until you said that.

Now it's just sad.

Holy living mother of shit, that's fucking beautiful!

I leave it to BunnySaurus to respond to the meaning. Tomorrow ...
BunnySaurus Bugsii
10-12-2007, 11:59
I leave it to BunnySaurus to respond to the meaning. Tomorrow ...

And I will. It's beautifully composed, to be sure.
I suspect it is critical, but that's OK.
Vittos the City Sacker
10-12-2007, 16:31
Maybe. That's the determinist explanation.

It is not how the process of choice looks to us subjectively, though, in which we can talk forever about our "wants" and not come to a conclusion about what we should do.

What do you mean?

Edit: Perhaps you mean that we have some sort of moral feeling that suggests to us a "should." But that does not solve the problem either, because all it means is that I can add "I feel I should do x" to the list of descriptions of my nature. It still does not resolve my decision. I must actually go with that feeling--I must say "I should", not just "I feel I should"--to determine my will.

Only if there is a real should that one actually must consider. That is what you are working on, not me.

Otherwise, the "I feel I should do x" is a perfectly good explanation for why we do seem to consider should when we act, and why many, such as yourself, seem to want to apply objective rules to that consideration.

You're just presupposing your own conclusion here.

I made no argument in that quote.

There is a mountain of scientific evidence that all of our methods for ascertaining a valid should are all naturally developed, basic, and inseparable from considerations of want.

Even that is unnecessary, as I have no doubt that you would find yourself unable to separate your considerations of should and want once this conversation is over. As I said, the division of the two is only possible through these metaphysical wranglings.

The burden of proof is really on you here. I have already articulated the difference.

Again, my choice requires not "I want" but "I should", and "I should" in no way rationally depends on "I want." You want to claim that, nevertheless, "I should" must necessarily be grounded in "I want." Go ahead and try to demonstrate that, but it is far from axiomatic--unless you equivocate on "want" and just define it tautologically as "that which we have in relation to what we choose" (in which case, anyway, we can only speak of "wanting" after the fact.)

No valid rational morality is bound by what we want, that I will agree with. I just don't see any reason to think that there is some real moral code out there.

But as we ARE, we are in a constant state of dissatisfaction, and we are constantly perceiving paths to greater satisfaction. We cannot say that, were we completely satisfied, we would act. This is the want, the desire, it is the constant striving for the means to greater satisfaction, and it is the only impetus to action.

When you throw these necessary should considerations into the equation, you are doing so without basis (other than your desire to create a rational moral code) and thereby begging the question. When you say we all consider the should, free of want, I simply ask you why? There must be a desire behind the consideration. We don't act morally without purpose; there must be a desire to act morally.

The moral desires are plainly observable. I love my family, I sympathize with dogs, I have all of these emotional responses that anyone, when not making these metaphysical arguments, would call morals, and they are completely interwoven with what you call wants.
Deus Malum
10-12-2007, 17:44
It is my earnest hope that GR will take an interest in Physics. From sheer bravado, perhaps, to prove that his magnificent mind can overcome all challenges to his faith.

Gens Romae is indeed quite bright ... it's a shame to see that intellect wasted defending the indefensible. It's a shame to see a good mind burdened with such a weight of faith.

Faith and ego, ego and faith. His learning will break this unholy bond one way or the other. Learning of any sort ... though I say Physics would be more effective than Philosophy.

As to the thread length, not really. We're dancing on his grave!

Based on a thread in Moderation, GR has apparently been DOSed for a violation unrelated to this thread. I doubt he's even reading this.
Indepence
10-12-2007, 18:10
Not sure if this has been addressed yet. Are determinism and free will mutually exclusive? Free will involves choice. In any given instant, especially reducing the argument down to the known laws of particle physics, is there really only ONE option for existence to move forward? So what we recognize as choice and a decision, even on a macro level, is really just the only choice that could or would be pursued. This may be a bit convoluted, because if there is only one true option, this would be defined as determinism. I am trying to take the human mind and our so-called decision making out of the argument. When we reduce this to the physical world, the philosophy may break down. Just a thought.
Hammurab
10-12-2007, 18:11
To those asking: My nation was "Fourth Holy Reich," and I was so DOSed because I said that spousal rape shouldn't be a crime, and I advocated the extermination of the Jews (holding a Neo Nazi ideology). Also note, however, that I didn't do any of that as Gens Romae, since I had long ago put aside those silly beliefs

Well, maybe that year and a half of philo did him some good after all, if it got him off spousal rape...
Soheran
11-12-2007, 01:32
What do you mean?

To use the left/right example again: I might recognize that I have an extremely strong inclination to go left. I might recognize, further, that if I go left I will get rewards that I desire immensely. Furthermore, I might recognize that I am extremely scared to go right.

But none of these considerations end my decision-making process. I might go ahead and go right anyway.

Only if there is a real should that one actually must consider. That is what you are working on, not me.

You are making more of this than there is. By "should" I mean nothing particularly metaphysical. It is simply the element of decision in there (perhaps only considered decision): I answer the question of decision-making, "What should I do?", with "I should do x."

I mean "should" in a more immediate sense than we usually use it in moral consideration, as the resolution of the decision-making process that results in "I actually am going to go ahead and try to do this." (It is not simply that, because my decision is not a passive observer on this attempt: the attempt follows from the decision, from the assertion of "should" that is the decision's result. Thought, then action. But the "should" here is not abstract. In this immediate sense, the action that we think we should do is also the action that we actually try to do--the "should" is the thought that precedes and founds intentional action.)

I would argue that the moral kind of "should" is not really distinct, and comes in once we bring in reason, but without a careful analysis of failures of will this point is likely to get confused... because when we fail to act morally we are in a sense contradicting ourselves. The moral "should" and the immediate "should" are distinct not logically but as a matter of fact, in the manifestation of that contradiction. That is why we can say "I should" (in the moral sense) and then fail to actually do.

Otherwise, the "I feel I should do x" is a perfectly good explanation for why we do seem to consider should when we act

No, because the fact that we consider "should" when we act does not need "explanation" in that way: it is an inherent part of decision-making.

There is a mountain of scientific evidence that all of our methods for ascertaining a valid should are all naturally developed, basic, and inseparable from considerations of want.

I doubt at least part of that, but let's not argue about it.

For now I am concerned with this question not from the perspective of objective science, but simply from that of our subjective experience of free will.

Even that is unnecessary, as I have no doubt that you would find yourself unable to separate your considerations of should and want once this conversation is over.

I'm glad you're so confident, but you're wrong except perhaps under a very strong notion of "separate."

Most decision-making is connected in some sense to what we want. But, at least as viewed subjectively, it is not determined by it.

We cannot say that, were we completely satisfied, we would act.

Question: "Should I do anything?"

Consideration: "I am completely satisfied."

That doesn't seem to resolve the decision-making process at all. Even while believing wholeheartedly that I am perfectly satisfied, I can go ahead and do something anyway.

You, of course, might want to claim that really I am not perfectly satisfied at all--I have some hidden want that is driving my action. I quite agree that such an explanation works. Indeed, you can explain anything I might do with such an explanation. But you cannot prove that it is necessarily the case.

Edit: Consider. "I am perfectly satisfied, but I should turn left anyway." Is there a logical problem there? Is the statement unintelligible? I don't think so. It is simply that you assume that there must be a hidden want.

When you throw these necessary should considerations in to the equation, you are doing so without basis (other than your desire to create a rational moral code) and thereby begging the question. When you say we all consider the should, free of want, I simply ask you why?

Because that's what deciding is.

"I really, really want to do something" is not a decision. It's just an observation.

There must be a desire behind the consideration. We don't act moral without purpose, there must be a desire to act moral.

Of course we don't act morally without purpose. We act morally so as to comply with obligation. But that element of this argument is not yet necessary.
BunnySaurus Bugsii
11-12-2007, 10:29
Well, maybe that year and a half of philo did him some good after all, if it got him off spousal rape...

:) I certainly found university study of Philosophy quite challenging ... I was accustomed to using the line "it drove me insane" but I can't afford to believe in that concept any more ... :D

Sincerely, I would recommend philosophy (OR ANY SUBJECT) to anyone. Formal learning is an unequivocal good to me -- I would love to live in a society which offers free tertiary education to anyone who commits to it enough to pass the courses. I deplore the modern conception of education as Job Training.

Good on you Gens. Challenge that faith! If you have faith in the faith itself, disputation can only make it stronger and truer.

============

Based on a thread in Moderation, GR has apparently been DOSed for a violation unrelated to this thread. I doubt he's even reading this.

Nothing prevents GR from doing so. Jolt is there to read for anyone.

In my experience, it's very hard to stop posting ... reading the thread you started without the possibility of replying would be quite painful.

But I will comment no further on Gens Romae, just in case. RIP.
Reasonstanople
11-12-2007, 10:36
I haven't meant to drop out of this debate; if I haven't answered anyone's questions directed at me, I'll be back, probably Wednesday. It's finals week, and stress is currently increasing exponentially. I just don't have time to get all thoughtful and research things on the level that I've been doing so far. So don't declare determinism dead or give free will a crown just yet, I'm just a little distracted.
BunnySaurus Bugsii
11-12-2007, 10:47
I haven't meant to drop out of this debate; if I haven't answered anyone's questions directed at me, I'll be back, probably Wednesday. It's finals week, and stress is currently increasing exponentially.

Look on the bright side. At least it's not increasing asymptotically!

I just don't have time to get all thoughtful and research things on the level that I've been doing so far. So don't declare determinism dead or give free will a crown just yet, I'm just a little distracted.

One more thing. There's some bad spelling in your signature. Don't let it be your epitaph!

(Not that the Finals will really kill you. But better safe ...)

NSG won't be the same when you return. That's just how it goes!
Vittos the City Sacker
12-12-2007, 01:56
To use the left/right example again: I might recognize that I have an extremely strong inclination to go left. I might recognize, further, that if I go left I will get rewards that I desire immensely. Furthermore, I might recognize that I am extremely scared to go right.

But none of these considerations end my decision-making process. I might go ahead and go right anyway.

Is this decision simply random?

Every should is a conditional imperative based upon some want in the form of "I want consequence X, so I should attempt action Y".

If there is no want tied to the should, I can scarcely imagine anyone gathering the motivation to follow the should, and even if they did manage this feat, I don't see how it is anything less than blind action.

You are making more of this than there is.

I am merely anticipating your argument. I have made a slight differentiation between want and want with intent, so I have acknowledged this "should" that you are referring to. However, this should is almost meaningless: as I said before, it either stems from some motivation, a want, or it is random.

I know your argument, and I know that you want to throw reason into the gap that you are widening between want and will (it makes it all the easier when you are already calling the gap the "should") and establish it as a source of morality and free will.

Eventually, you wish to establish this should as something entirely metaphysical, as you wish to make it independent (or at least able to be independent) of the wants, desires, and mechanisms of natural causality.

No, because the fact that we consider "should" when we act does not need "explanation" in that way: it is an inherent part of decision-making.

Actually, that is the opinion I have of reason in the first place, but I think that is another argument for another time.

I doubt at least part of that, but let's not argue about it.

For now I am concerned with this question not from the perspective of objective science, but simply from that of our subjective experience of free will.

I do not feel that is a very productive way to found a philosophical argument.

Most decision-making is connected in some sense to what we want. But, at least as viewed subjectively, it is not determined by it.

Nonsense. People are not merely conscious of doing bad; people recognize their moral motivations as well. People will admit that they are good people.

Question: "Should I do anything?"

Consideration: "I am completely satisfied."

That doesn't seem to resolve the decision-making process at all. Even while believing wholeheartedly that I am perfectly satisfied, I can go ahead and do something anyway.

This only makes sense when you shift the order of events. The consideration precludes the question. People do not ask themselves what they should do if they do not desire consequences.

Again, should only exists as an imperative established by want. Otherwise it is random.

Even when there is a moral imperative should, it is established by a want to be moral, and that is perfectly explained by our nature.

I will get to the rest later.
Free Soviets
12-12-2007, 02:03
sorry guys, haven't been able to write more on this - too many real philosophy papers to write recently. maybe after the semester is over?
Soheran
12-12-2007, 02:52
Is this decision simply random?

It is never random, because it is caused by volition.

It may be arbitrary in a certain sense... and that means that we can act on any basis, or none at all. But that hardly interferes with freedom.

Even if we always do what we want, we always have (viewed subjectively) the capacity to do otherwise: to will what we do not want.

Every should is a conditional imperative based upon some want in the form of "I want consequence X, so I should attempt action Y".

No, any "should" that is a "conditional imperative" in that sense is irrationally founded. The fact that I want something gives me no "should."

If there is no want tied to the should, I can scarcely imagine anyone gathering the motivation to follow the should,

In resolving the decision, "motivation" does not help us. "I am motivated to do x" does not give me a "should."

I may, of course, choose to do what I am motivated to do. I may usually do so--I may even do so always. But I need not. (And sometimes, of course, my own rationality may demand that I do not do so.)

and even if they did manage this feat, I don't see how it is anything less than blind action.

How "blind"? We can take into account any truth we know. If we choose to.

However, this should is almost meaningless as I said before it either stems from some motivation, a want, or it is random.

Not truly "random", because it is volitional. We are not under the control of dice-rolls; we have made a choice. Our will is not passively being determined, but actively determining itself.

What you seem to mean is that our choice can be without material basis, and that is true. But so what? Why should freedom of the will be about the license to pursue happiness? It seems to me far less in line with freedom to insist that every choice I make must be based on utility-maximization.

No, to be able to choose without necessary reference to unchosen ends is an essential aspect of freedom. Otherwise my will is confined to a very limited range for reasons it has no reason to recognize as binding.

I know your argument, and I know that you want to throw reason in there in between the gap that you are widening between want and will (it makes it all the more easier when you are already calling the gap the "should") and establish it as a source of morality and free will.

You mistake, for what it's worth, my use of "should." Morality's "should" is defined in reference to the "should" of free will. Not the other way around.

I do not feel that is a very productive way to found a philosophical argument.

That depends entirely on what I am trying to prove.

If my objective is proving "We actually have free will," you're right. But it's not. Instead, I am more interested (for now) in looking at what free will looks like if we do in fact have it--at the elements of our subjective experience of freedom that may or may not be threatened by a broader understanding of reality.

Nonsense. People are not merely conscious of doing bad, people also recognize their moral motivations as well. People will admit that they are good people.

I can always say "I chose to do x because I wanted to." But I can never (legitimately) say "I had to choose to do x because I wanted to"--at least I can never know that, and indeed in actually making the decision assume otherwise. By the very nature of decision-making I approach it as a matter that need not be determined by wants: the question is practical (should), not merely descriptive (want), and I cannot derive my answer of "I should do x" simply from the fact of a want, however strong.

I may, of course, be very strongly inclined to do what I want to do. But I still must actually decide to do it. I cannot merely be a passive watcher while my body does what it wants--not in the context of intentional action, anyway.

This only makes sense when you shift the order of events. The consideration precludes the question. People do not ask themselves what they should do if they do not desire consequences.

No, people are always asking themselves what they should do, because they are always making decisions. They cannot do otherwise. Even inaction is a decision.

Even when there is a moral imperative should, it is established by a want to be moral,

Why do you assume that?
Vittos the City Sacker
12-12-2007, 15:28
It is never random, because it is caused by volition.

It may be arbitrary in a certain sense... and that means that we can act on any basis, or none at all. But that hardly interferes with freedom.

Even if we always do what we want, we always have (viewed subjectively) the capacity to do otherwise: to will what we do not want.

I completely disagree. Subjectively, we are always aware of the competing wants that objectively show determination. If we do perceive in ourselves some capability to act counter to our wants, we are merely lying to ourselves.

Even saying I am wrong, however, how does our faulty view of ourselves provide any argument?

No, any "should" that is a "conditional imperative" in that sense is irrationally founded. The fact that I want something gives me no "should."

In resolving the decision, "motivation" does not help us. "I am motivated to do x" does not give me a "should."

I may, of course, choose to do with what I am motivated to do. I may usually do so--I may even do so always. But I need not. (And sometimes, of course, my own rationality may demand that I do not do so.)

Want, combined with rational consideration of consequences, does make a should. "I should do Y" is nothing more than "By my estimation Y will maximize the satisfaction I can achieve in terms of X".

It is not my argument that one wants to do Y, rather that it is only because of want X that the person can judge Y to be preferable or even start the decision making process in the first place.

How "blind"? We can take into account any truth we know. If we choose to.

Blind in that there is no system of judgment, no gauge of direction.

You say that want alone is insufficient for decision, to which I agree, there must be rational consideration of the consequences of possible actions (the degree to which this is further tinged by wants is another thread).

You have not stated however, how this volition, without motivation by want, can have any momentum.

Not truly "random", because it is volitional.

Non sequitur. Volition can be random if it is unguided. What is your guide for volition? Don't say reason, because reason doesn't provide values.

What you seem to mean is that our choice can be without material basis, and that is true.

Actually I hold that there can be no will without a material basis. That is the elementary determinist assumption, at least for me.

But so what? Why should freedom of the will be about the license to pursue happiness? It seems to me far less in line with freedom to insist that every choice I make must be based on utility-maximization.

No, to be able to choose without necessary reference to unchosen ends is an essential aspect of freedom. Otherwise my will is confined to a very limited range for reasons it has no reason to recognize as binding.

I am unconcerned for metaphysical freedom.

Are you still arguing against compatibilism? Compatibilism fails because its terms are meaningless, and I never had any intention of defending it.

I agree with your opinion on what constitutes choice, but I think it is damning to the libertarian side.

You mistake, for what it's worth, my use of "should." Morality's "should" is defined in reference to the "should" of free will. Not the other way around.

We will see.

That depends entirely on what I am trying to prove.

If my objective is proving "We actually have free will," you're right. But it's not. Instead, I am more interested (for now) in looking at what free will looks like if we do in fact have it--in the elements of our subjective experience of freedom that may or may not be threatened by a broader understanding of reality.


And I will start exploring my imagination and figure out what God looks like if he does in fact exist. I think it is pretty obvious that he has a big white beard, but if I really search the depths of my soul, I might be able to find out if he has freckles.

I can always say "I chose to do x because I wanted to." But I can never (legitimately) say "I had to choose to do x because I wanted to"--at least I can never know that, and indeed in actually making the decision assume otherwise. By the very nature of decision-making I approach it as a matter that need not be determined by wants: the question is practical (should), not merely descriptive (want), and I cannot derive my answer of "I should do x" simply from the fact of a want, however strong.

I may, of course, be very strongly inclined to do what I want to do. But I still must actually decide to do it. I cannot merely be a passive watcher while my body does what it wants--not in the context of intentional action, anyway.

We make decisions, and we can certainly understand our decisions, as being the weighing of benefits and opportunity costs between two incompatible courses of action.

Again, I have never approached a decision from the standpoint: "I can either do this action that produces what I want, or I can do this action that will achieve nothing that I desire". I don't even consider the action that achieves nothing; I have yet to find out how it is possible.

I have always approached decisions from the standpoint, "I can do this and satisfy want A, or I can do that and satisfy want B", with the decision being the rational considerations of what I will give up and attain through each. Now the compatibilist does fail when he says this is a choice or free will, but then it is also not necessary to call this free will or to have free will at all.

No, people are always asking themselves what they should do, because they are always making decisions.

No, people are always asking themselves what they should do because they are inherently insatiable. Needs exist without decision making, but I fail to see how any decision making can be done without needs.

That is the point. There must be some value according to which a person decides, or the decision is random, it is not a decision at all. Reason and should possess no value on their own.

They cannot do otherwise. Even inaction is a decision.

Unless it is continuing inaction.
Soheran
12-12-2007, 22:17
I completely disagree. Subjectively, we are always aware of the competing wants,

Yes, "aware." As background. Not determining factors.

that objectively show determination.

That, objectively, we might interpret to show determination.

If we do perceive in ourselves some capability to do counter to our wants, we are merely lying to ourselves.

Prove it.

Want, combined with rational consideration of consequences, does make a should. "I should do Y" is nothing more than "By my estimation Y will maximize the satisfaction I can achieve in terms of X".

That's nonsense.

I can recognize "By my estimation Y will maximize the satisfaction I can achieve in terms of X" without actually deciding to do it... and you can't even claim that failing to do so is irrational.

Any maximization of satisfaction is not, in and of itself, a reason for action. Recognizing it as a consequence does not tell me what I should do. It is simply an observation.

Indeed, I can perfectly coherently state that I should act against my satisfaction.

It is not my argument that one wants to do Y, rather that it is only because of want X that the person can judge Y to be preferable or even start the decision making process in the first place.

From "I want X" it does not follow that "Y, because it will result in X, is preferable."

Blind in that there is no system of judgment, no gauge of direction.

Yes, there is. First, there is reason, which binds me always as a matter of truth recognition. Second, there is any "system of judgment" or "gauge of direction" that I choose. That is why it is free.

You have not stated however, how this volition, without motivation by want, can have any momentum.

How can this volition, with motivation by want, "have any momentum"? The fact that I observe wants does not tell me what to do. I must actually decide.

Its "momentum" is simply the act of choice, the exertion of will.

Non sequitur. Volition can be random if it is unguided.

"Arbitrary" in some respects, maybe. But then, our wants are no less arbitrary. And mere arbitrariness, when it is a necessary arbitrariness, does not make us unfree. (Arbitrariness when we are capable of recognizing non-arbitrary courses of action, of course, does imply a lack of freedom. But that is a different matter.)

Not "random" in that it is not a matter of dice-rolling: it requires an actual choice, not merely passive acceptance (of what can be determinist causation or random fluctuation).

What is your guide for volition? Don't say reason, because reason doesn't provide values.

Our rationality establishes in ourselves a sort of valuation of reason. That is what it means to be a rational creature.

You yourself have made the crucial admission that recognizes this, for you have stated that rationally we might undertake a course of action we do not want for itself so as to pursue a want it will ultimately advance. But why do we do this? Why does this give us a determining basis for our will? Because as rational creatures we recognize the inadmissibility of contradiction: if we have an end that we affirm we should pursue, it is irrational to declare at the same time that we should not pursue a means to achieve it (unless, of course, some other, greater consideration indicates so to us.)

And I will start exploring my imagination and figure out what God looks like if he does in fact exist.

Philosophically, how can we approach the question of whether or not God exists without first clarifying what it is we mean by God?

The question of free will is the same, though also of far more importance.

We make decisions, and we can certainly understand our decisions, as being the weighing of benefits and opportunity costs between two incompatible courses of action.

Yes, we can make decisions this way, and unquestionably we can understand our decisions this way.

But we need not.

Again, I have never approached a decision from the standpoint: "I can either do this action that produces what I want, or I can do this action that will achieve nothing that I desire". I don't even consider the action that achieves nothing; I have yet to find out how it is possible.

To the contrary, you must have considered it, for otherwise you could never have arrived at the conclusion that it achieves nothing that you desire--indeed, you would not even have had any reason to consider what it would or would not achieve.

Perhaps you have always chosen as the basis of your will the satisfaction of your wants, but you have hardly shown that this is a necessary choice. Certainly it is not rationally necessary; indeed, as we perceive subjective decision-making, quite the opposite is the case.

I have always approached decisions from the standpoint, "I can do this and satisfy want A, or I can do that and satisfy want B", with the decision being the rational considerations of what I will give up and attain through each.

Maybe you always have. But this is not necessary to decision-making. All you have shown is that the particular principle you are concerned with (or believe you are concerned with) is maximizing your satisfaction. You have not shown that subjective decision-making is reducible to that.

Of course, it is not. For even if we undertake the evaluation that you have suggested and evaluate our gains and losses in terms of our satisfaction, we have not truly made a decision. We have recognized that one particular option will maximize our satisfaction. We need not recognize that what we should actually do is maximize our satisfaction.

but I fail to see how any decision making can be done without needs.

A simple matter of thought.

or the decision is random, it is not a decision at all.

It is not random; at worst it is (rationally) arbitrary. And it is still a decision. It simply can be made on any basis we choose.

Unless it is continuing inaction.

Unless it is thoughtless inaction. Unless we can prevent ourselves from having the question "Should I be doing this?" pop into our heads.
Vittos the City Sacker
13-12-2007, 14:42
That's nonsense.

I can recognize "By my estimation Y will maximize the satisfaction I can achieve in terms of X" without actually deciding to do it... and you can't even claim that failing to do so is irrational.

Rationality is the link between X and Y, and it doesn't produce a should alone.

If you are really seeking to achieve X and you don't engage in Y (say you are looking to eat and you put mud on a plate), this of course is irrational, because Y does nothing to accomplish X. Or let us say that you call lasagna a delicious food and you make a meal knowing that you aren't hungry; this is also irrational (excluding other wants, such as liking to cook).

Reason does not place values on anything, it simply orders, judges, and predicts, and without this value by which to order and judge, it simply has no function. This value comes from the want. It is within the constraints of want X (I am hungry) that reason can function.

Any maximization of satisfaction is not, in and of itself, a reason for action. Recognizing it as a consequence does not tell me what I should do. It is simply an observation.

The very nature of satisfaction and dissatisfaction, of want and distaste, makes it a reason for action. It is very true that simple recognition of consequence is merely observation, but the fact that it is a matter of satisfaction means that it will also be actionable.

Indeed, I can perfectly coherently state that I should act against my satisfaction.

Then do so.

Yes, there is. First, there is reason, which binds me always as a matter of truth recognition.

Truth does not produce this should, it gives one no volition. As you said, it is merely observation.

Second, there is any "system of judgment" or "gauge of direction" that I choose. That is why it is free.

You are begging the question.

How can this volition, with motivation by want, "have any momentum"? The fact that I observe wants does not tell me what to do. I must actually decide.

It is the very nature of want.

Its "momentum" is simply the act of choice, the exertion of will.

Do we not also have to choose to choose?

Our rationality establishes in ourselves a sort of valuation of reason. That is what it means to be a rational creature.

This has so many problems. First, as has been said, reason doesn't provide values, and so it doesn't provide any gauge by which to judge action. Secondly, you seem to be saying that we want to be reasonable by way of our nature. Finally, having a rational nature is nothing but damning for the libertarian idea of reason: even the "choice" to use reason is not free, our rational abilities are certainly not determined at the moment of choice so they are also not free, and since I imagine you consider there to be objective rational truth, there go the results of using reason!

You yourself have made the crucial admission that recognizes this, for you have stated that rationally we might undertake a course of action we do not want for itself so as to pursue a want it will ultimately advance. But why do we do this?

Because we assign more value to the ultimate end, we want it more.

Why does this give us a determining basis for our will? Because as rational creatures we recognize the inadmissibility of contradiction: if we have an end that we affirm we should pursue, it is irrational to declare at the same time that we should not pursue a means to achieve it (unless, of course, some other, greater consideration indicates so to us.)

True, but this doesn't explain how we decided to pursue this end.

Unless it is thoughtless inaction. Unless we can prevent ourselves from having the question of "Should I be doing this?" pop in our head.

Which is what a state of perfect satisfaction would be.