Gens Romae
29-06-2007, 06:58
In another thread, someone said something about how "certain people don't accept logic." I confronted this person, asking him what logic was. He professed ignorance.
Therefore, for all who might feel inclined to make such comments in the future, be educated before you speak on things of which you know not. Read my instructional:
I had posted this on another forum, from which I've since been banned. It seems a waste to let such a work simply disappear, and so I have decided to post it on this forum, for any who might find it useful.
Therefore, in the course of this thread I intend to give a brief outline of what logic is, how logic works, and how to use it. Even if you have absolutely no intention of pursuing any logical training, read at least this thread. That said, this thread shall ultimately follow this outline:
I. Definition of logic
II. Logical notation
III. Truth Tables
IV. Rules for the Boolean connectives
V. Brief foray on First Order logic
VI. Topics in informal logic
I. What Logic Is
Logic by definition is the science by which arguments are analyzed. The driving principle of Logic is that contradictions (Something both being the case and not being the case) are impossible. It is from this principle that logic works.
In this sense, Logic is not merely a “man made thing,” insofar as logic is merely the means by which men argue. No, that is rhetoric. Logic is not rhetoric. Logic transcends everything, and is truly a means by which absolutely everything is to be determined, even statements about God. For example, you can’t say both that God is incorporeal and that God literally has a hand.
II. Logical notation
Logic is expressed formally in an artificial language. There are 5 main “Boolean connectives” utilized in this language to express the entirety of basic propositional logic. The English equivalents are “or” “not” “if, then” “if and only if” and “and.”
The common way that these sentiments are expressed are shown as below:
“Or” is symbolized by a “v.”
“If, then” is symbolized by an arrow “->.”
“If and only if” is symbolized by two oppositely pointing arrows “<- ->.”
“And” is symbolized by a “ ^.”
“Not” is symbolized by something like “~,” except that the first stroke is straight across rather than curved, and the second is a short line straight down (in many texts it is written “¬”).
Statements are reduced in propositional logic to capital letters. So, for example, say you had the statement “If you don’t shut up, I am going to punch the crap out of you.”
You would probably symbolize the “If you don’t shut up” as an “A” and “I am going to punch the crap out of you” as a “B.” Combined with the symbol for “if, then,” you would get “A -> B.”
In cases in which one does not want to go with letters, such as when names and predicates are important, predicates are placed on the outside of parentheses, and names are placed within from left to right. For example, if I want to say “Bob hit Betty,” I would say:
Hit(Bob, Betty)
Names, when formalized, are expressed as lower case letters, whereas predicates are expressed as capital letters. So the same would be expressed as:
H(b, c)
Last but not least, when multiple operators appear in a single statement, parentheses generally have to be used to separate them (they can be omitted when a string of operators is all v’s or all ^’s, since those are associative).
So, if you wanted to say “If both the crayon is red and I desire to color, then I shall draw,” you would say something like this:
((R ^ D) -> S)
I used an additional set of parentheses at the very outside of the statement, but these really aren’t needed.
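For those who think in code rather than symbols, here is how the five connectives might be sketched in Python (the language choice and the function names are my own illustration, not part of any standard notation):

```python
# The five Boolean connectives as Python functions on True/False.
def not_(a):
    return not a                 # ~A

def and_(a, b):
    return a and b               # A ^ B

def or_(a, b):
    return a or b                # A v B

def implies(a, b):
    return (not a) or b          # A -> B: only false when A true, B false

def iff(a, b):
    return a == b                # A <- -> B: true when both sides match

# "((R ^ D) -> S)": the crayon example from above.
def crayon(r, d, s):
    return implies(and_(r, d), s)

print(crayon(True, True, False))   # False: red crayon, want to color, no drawing
```

Note how `implies` is the only one that needs spelling out; the others map straight onto Python's own operators.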
III. Truth Tables
The logical language is a very unambiguous one, and each of the connectives has a fixed meaning. Taken together, the connectives determine when the statement as a whole is true or false. A statement that is always true is called a “tautology.” A statement that is always false is a “contradiction.” A statement that is sometimes true and sometimes false is called “contingent.”
A tool for us to use to grasp how these connectives work both alone and together is a thing called a “truth table.” Each connective has certain rules associated with it in a truth table. To form a truth table, one draws a vertical line, and a horizontal line near the top, placing the unique atomic sentences in the upper left of the table, and the actual argument in the top right. See the below link:
See a truth table here. (http://www.gaiaonline.com/gaia/redirect.php?r=http%3A%2F%2Fonegoodmove.org%2Ffallacy%2Fimages%2Fiff.gif)
Underneath that, create a number of rows equal to 2 to the nth power, where n is the number of unique atomic sentences. So if you have the sentence “P -> Q,” the truth table should have 4 rows. Then, under the first atomic sentence, write a number of T’s equal to half the number of rows, followed by the same number of F’s. Under the second, alternate blocks of T’s and F’s each a fourth of the rows long; under the third, blocks an eighth long…you get the point.
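Incidentally, the halves-then-fourths-then-eighths pattern is exactly what you get by enumerating every combination of T and F. A short Python sketch (my own illustration) that generates the rows:

```python
from itertools import product

def rows(n):
    """All 2**n truth-value combinations for n atomic sentences,
    in the halves/fourths/eighths order described above."""
    return list(product([True, False], repeat=n))

# Two atomics (P and Q) give 4 rows:
for p, q in rows(2):
    print(p, q)
# True True / True False / False True / False False
```

The first column comes out half T's then half F's, the second alternates in fourths, just as the by-hand recipe says.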
In the body of the truth table, to the right of the T’s and F’s but below the actual sentence, you should then calculate the truth values of each of the connectives, going from the connectives which affect the fewest atomics to the one which affects the most…this last one being called the “main connective.”
So, say you have a sentence “(P -> Q) ^ R.” You would calculate the truth value first of (P -> Q), and then you would calculate the truth value of that in relation to ^ R.
So, without further ado, I shall give the rules for the connectives.
A statement with ~ (not) is true if and only if the statement it modifies is false. So a statement ~A is true if and only if A is false. So if A is true, ~A (not A) is false.
A statement with ^ (and) is true if and only if both conjuncts (the things conjoined by and) are true. So a statement A ^ B is true if and only if A is true and B is true, but A ^ B is false if either A or B is false.
A statement with v (or) is true if and only if at least one of the disjuncts (the things disjoined by or) are true. So a statement A v B v C is true if and only if A, B, or C is true, but false if they are all false.
For -> (if, then), we need a bit of terminology. The statement to the left of the -> is called the antecedent, or sufficient condition, and the statement to the right is called the consequent, or the necessary condition. A statement with -> is true if and only if it is NOT the case that the antecedent is true and the consequent false. So as long as the antecedent is false or the consequent is true, the statement is true.
A statement with <- -> (if and only if, or iff) is true if and only if the statements to the left and to the right of the iff symbol have the same truth value. So in a statement A <- -> B, if A and B are both true, or both false, then the statement is true. But if one of A and B is true and the other false, then the entire statement is false.
In any case in which you have an argument (say, A ^ B, B -> C, C -> D, and therefore (A ^ B) -> D), you should conjoin the premises (the parts of the argument that aren’t the conclusion) using ^, wrap parentheses around that, and then create an if, then statement with the conjoined premises as the antecedent and the conclusion as the consequent. So the argument becomes:
((A ^ B) ^ (B -> C) ^ (C -> D)) -> ((A ^ B) -> D)
The truth table method is very powerful for determining the validity of arguments. Unfortunately, I think you can all see how this becomes very tedious in more advanced arguments. Even the above example, which only has 4 atomic sentences, would require 16 rows on a truth table. Therefore, there is a simpler method for determining logical inference. This is, of course, a logically deductive system, which leads me to the next point:
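If you would rather not draw all 16 rows by hand, a brute-force Python sketch (my own, not part of the logical notation) can grind through them and confirm the conditional above is a tautology:

```python
from itertools import product

def argument(a, b, c, d):
    """((A ^ B) ^ (B -> C) ^ (C -> D)) -> ((A ^ B) -> D), evaluated on one row."""
    def implies(p, q):
        return (not p) or q
    premises = (a and b) and implies(b, c) and implies(c, d)
    conclusion = implies(a and b, d)
    return implies(premises, conclusion)

# Tautology: true on every one of the 2**4 = 16 rows.
print(all(argument(*row) for row in product([True, False], repeat=4)))  # True
```

The `all(...)` call plays the role of scanning the main-connective column: one False anywhere and the statement would not be a tautology.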
IV. Rules for the Boolean Connectives
While the truth table exhaustively shows when a statement is true and false, a deductive system specifically shows logical inference, or deductive validity. Deductive validity means that it is impossible for the premises of an argument to be true and the conclusion following from them to be false. Soundness is, of course, when an argument is both valid and all of its premises are true. If one can derive a certain conclusion from a set of premises by doing a logical proof, then the argument is said to be logically valid.
However, if a conclusion cannot be drawn from the premises, then it’s probably not valid, and the best way to show the invalidity is by constructing a situation in which the premises of the argument form are true, yet the conclusion is false.
So say…you have the argument form A ^ B, therefore C. A good counterexample would be something like… “The grass is green and the sky is blue. Therefore, women do not exist.” Obviously, the premises are true. The grass is green, and the sky is assuredly blue. However, it is not true that women do not exist.
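This hunt for a counterexample can be mechanized: search every truth-table row for one where the premises come out true and the conclusion false. A Python sketch (my own, with made-up helper names):

```python
from itertools import product

def counterexample(premises, conclusion, n):
    """Return the first assignment making all premises true and the
    conclusion false, or None if the argument form is valid."""
    for row in product([True, False], repeat=n):
        if all(p(*row) for p in premises) and not conclusion(*row):
            return row
    return None

# "A, B, therefore C" -- the grass/sky/women argument form:
print(counterexample([lambda a, b, c: a, lambda a, b, c: b],
                     lambda a, b, c: c, 3))
# (True, True, False): premises true, conclusion false, so the form is invalid
```

A return value of `None`, by contrast, would mean no row refutes the argument, i.e. the form is deductively valid.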
Before I give any of the rules, I would like to point out that presuming you have some premise, or have proven something, you can at any point reiterate it, or repeat it.
For example:
1.A
…
10. A (reiteration of line 1)
So, what are the rules for the Boolean connectives? For each Boolean connective, there is an introduction rule and an elimination rule. I’m going to give them now.
The first you should know is the ^ introduction rule. If you have a premise, or have somewhere proven, some statement A, and also some statement B, then you can conclude A ^ B.
On a proof, this would look something like this:
1. A
2. B
3. A ^ B (^ introduction, lines 1 and 2)
The ^ elimination rule is basically the opposite of the intro rule. If you have some sentence A ^ B, then you can conclude A, and you can conclude B.
So a proof would look something like this:
1. A ^ B
2.A (^ elimination, line 1)
3.B (^ elimination, line 1)
The v intro rule is simple. If you have some sentence A, then you can conclude A v B. I’ll get to the v elimination rule later.
A proof would look something like this:
1.A
2.A v B (v introduction, line 1)
The rule for -> elimination is also easy. If you have some sentence A -> B, and also A, then you can conclude B. For, remember, there is no way for a true conditional statement (if, then) to have a true antecedent and a false consequent. I’ll get to the introduction rule later.
A proof would look something like this:
1. A -> B
2. A
3. B (-> elimination, lines 1 and 2)
The rule for <- -> elimination is also easy. If you have some sentence A <- -> B, and either A or B, then you can conclude the other part of the biconditional (the iff statement). I’ll get to the introduction rule later.
A proof would look something like this:
1. A <- -> B
2. B
3. A (<- -> elimination, lines 1 and 2)
The rule for ~ elimination is simple. If you have some sentence ~~A (not not A), then you can remove negation symbols in increments of two. I’ll get to the intro rule later.
A proof would look something like this:
1.~~~~~A
2.~~~A (~ elimination, line 1)
3. ~A (~ elimination, line 2)
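The increments-of-two behavior can even be mimicked mechanically. Here is a toy Python sketch (my own illustration, treating formulas as plain strings):

```python
def eliminate_double_negation(formula):
    """Remove leading negation signs two at a time, as the ~ elimination
    rule allows. '~~~~~A' reduces to '~A', not to 'A'."""
    while formula.startswith("~~"):
        formula = formula[2:]
    return formula

print(eliminate_double_negation("~~~~~A"))  # ~A
```

An odd negation always survives, which is exactly why the proof above stops at ~A rather than A.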
So far, I have given 6 of the rules, but not the other 4. The reason for this is that these require sub proofs. A sub proof is where one temporarily assumes some premise for the sake of argument in order to derive something needed in the proof; the assumption is then discharged and the target conclusion is placed in the main proof.
The easiest of these to understand is the -> introduction rule. The -> rule says that if you presume some premise A, and can derive some conclusion B based on that premise, then you can conclude A -> B, citing the sub proof in which you proved it.
A proof looks something like this:
1. A -> B
2. B -> C
3.|A (assumed for sub proof)
4.|B (-> elimination, lines 1 and 3)
5.|C (-> elimination, lines 2 and 4)
6. A -> C (-> introduction, lines 3-5)
The next rule is the <- -> intro rule. This rule says that if you can assume some premise A and conclude some conclusion B, and then assume some premise B, and conclude some conclusion A, then you can conclude A <- -> B, citing both of the sub proofs in which these things are proven.
A proof looks something like this:
1.A
2.|A
3.|A (reiteration of 1)
4.|A
5.|A (reiteration of 1)
6. A <- -> A (<- -> introduction, lines 2-3 and 4-5)
The next rule is the v elimination rule. This rule says that if you have a disjunction A v B, and can conclude some conclusion C from each of the disjuncts, then you can conclude C. After all, if the disjunction is true, then at least one of the disjuncts is true. So if you can conclude something from whichever one of them is true, then that something must be true. One should cite the disjunction and the sub proofs in which each disjunct is assumed.
A proof looks something like this:
1. A v B
2. A -> C
3. B -> D
4.|A
5.|C (-> elimination, lines 4 and 2)
6.|C v D (v introduction, line 5)
7.|B
8.|D (-> elimination, lines 7 and 3)
9.|C v D (v introduction, line 8)
10. C v D (v elimination, lines 1, 4-6, 7-9)
At this point, I’d like to bring up another set of rules, though not for a “connective,” per se. The rule is for absurdity, usually symbolized by a vertical line standing upon a horizontal line, something like this: _|_.
The introduction rule for absurdity is this: If you have some sentence A, and some sentence ~A, then you can conclude _|_. Simply put, absurdity means that some contradiction has occurred, and one or more of the premises used to arrive at this contradiction are false. Cite the two contradictory premises.
A proof looks something like this:
1.A
2.B
3.B -> ~A
4.~A (-> elimination rules, lines 2 and 3)
5._|_ (_|_ introduction, lines 1 and 4)
The elimination rule for Absurdity is this: You can conclude anything from absurdity. After all, validity means that it is impossible that the premises of an argument be true and the conclusion false. Clearly, it is impossible for the premises of a contradictory argument to be true. Therefore, it is impossible that the premises of such an argument be true and the conclusion false. Therefore, such is always valid, but never sound. Once again, anything follows from absurdity. Such a proof is like this:
1.A
2.~A
3._|_ (absurdity introduction, lines 1 and 2)
4. R (absurdity elimination, line 3)
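We can confirm semantically why such an argument is always valid: no row of a truth table ever makes both A and ~A true. A quick Python check (my own sketch):

```python
from itertools import product

# No row makes both A and ~A true, so "A, ~A, therefore R" can never have
# true premises and a false conclusion -- it is (vacuously) valid.
rows_with_true_premises = [
    (a, r) for a, r in product([True, False], repeat=2) if a and (not a)
]
print(rows_with_true_premises)  # []
```

An empty list means there is no row left on which the conclusion could fail, which is the semantic face of "anything follows from absurdity."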
This is particularly important for a proof called “disjunctive syllogism.” This proof is like this:
1.A v B
2.~A
3.|A
4.|_|_ (absurdity intro, lines 2 and 3)
5.|B (absurdity elimination, line 4)
6.|B
7.|B (reiteration of line 6)
8.B (v elimination, lines 1, 3-5, 6-7)
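The proof above shows disjunctive syllogism is derivable; we can also confirm it is semantically valid by checking every truth-table row. A quick Python sketch (my own):

```python
from itertools import product

# Disjunctive syllogism: from A v B and ~A, conclude B.
valid = all(
    b                       # the conclusion holds...
    for a, b in product([True, False], repeat=2)
    if (a or b) and not a   # ...on every row where both premises are true
)
print(valid)  # True
```

Only one row (A false, B true) satisfies both premises, and B is true there, so the inference goes through.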
The last rule, then, is that of ~ introduction, or reductio ad absurdum. The rule for negation introduction says that if you can presume some premise and derive absurdity (_|_), then you can conclude that the premise is false. Cite the sub proof.
Here is such a proof:
1.A -> ~B
2.B
3.|A
4.|~B (-> elimination, 1 and 3)
5.| _|_ (absurdity intro, lines 3 and 4)
6.~A (~ introduction, lines 3-5)
These are all of the rules for doing deductive proofs in basic propositional systems.
V. Brief foray on First Order logic
Last but not least, then, is First Order. First Order is basically propositional logic with quantifiers. The additional symbols are as follows:
The upside down A (the universal quantifier).
The upside down E (the existential quantifier).
The identity (or equal) sign, or =.
The universal quantifier means “for all,” whereas the existential quantifier means “there exists at least one.”
A well formed formula is one in which there are no unbound variables, which is to say that for every variable in a formula there is a corresponding quantifier, and everything bound to a quantifier lies inside that quantifier’s parentheses. For example, ExEy (A(x) ^ B(x, y)) is a well formed formula, whereas Ey (A(x) ^ B(x, y)) is not, and ExEy (A(x) ^ B(x, y) is not. Parentheses are very important here.
For the record, the well formed formula above means “There exists some x, and there exists some y such that x is A, and x is B in relation to y.”
The basic forms built with the quantifiers are “some,” “all,” and “none.”
For all, one should use the universal quantifier. For example, “All men are loved by God” is expressed Ax (Man(x) -> Loves(God, x)), or “For all cases x, if x is a man, then God loves x.”
Some is expressed with the existential quantifier. For example, “Some men love Jesus” is expressed as Ex(Man(x) ^ Loves(x, Jesus)), or “There exists a man such that this man loves Jesus.”
None can be expressed one of two ways, with either the Universal or the Existential quantifier. “No man is not loved by God” can be expressed either as:
Ax(Man(x) -> ~~Loves(God, x)), or “For all cases x, if x is a man, then it is not the case that God does not love x,” or…
~Ex (Man(x) ^ ~Love(God, x)), or “There does not exist an x such that x is a man and God does not love x.”
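Over a finite domain, the universal and existential quantifiers behave like Python's all() and any(), and the two ways of saying "none" come out equivalent. A sketch (my own, with a made-up domain and dictionaries standing in for the Man and Loves predicates):

```python
# A toy finite domain, with made-up predicates for illustration.
domain = ["alice", "bob", "carol", "fido"]
is_man = {"alice": False, "bob": True, "carol": False, "fido": False}
loved = {"alice": True, "bob": True, "carol": True, "fido": False}

# Ax(Man(x) -> ~~Loves(God, x)): "for all x, if x is a man, x is loved"
universal_form = all((not is_man[x]) or loved[x] for x in domain)

# ~Ex(Man(x) ^ ~Loves(God, x)): "there is no unloved man"
existential_form = not any(is_man[x] and not loved[x] for x in domain)

print(universal_form, existential_form)  # True True -- the two forms agree
```

The agreement is no accident: ~Ex ~P(x) and Ax P(x) are interdefinable, which is exactly the equivalence the two formalizations above exploit.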
Therefore, I shall give the formal rules for the quantifiers and for identity:
The rule for = introduction is fairly simple. Everything is equal to itself. John is equal to John. Bob is equal to Bob. God is equal to God. No lines have to be cited.
A proof looks like this:
1. a = a (= introduction; no premises are needed)
The rule for = elimination is kind of simple too. If something is true of some thing a, and a is equal to b, then this thing is true of b. A proof looks like this:
1.P(a)
2.a = b
3.P(b) (= elimination, lines 1 and 2)
The rule for E introduction is this: If you have some predicate true of some proper name, then you can assert that there exists something of which that predicate is true. A proof looks like this:
1.P(a)
2.Ex (P(x)) (E introduction, line 1)
The rule for A elimination is this: If you have some universal statement, and some proper name, then this universal statement must be true of this proper name. A proof looks like this:
1.Ax (P(x))
...[a] (produced somewhere in the course of argument)
N. P(a) (A elimination, line 1)
The rule for A introduction is this: If you can choose some arbitrary name and draw some conclusion, then you can conclude that this conclusion is true of everything. Likewise, if you assume only that the arbitrary name has some property and draw some conclusion, then the conclusion is true of everything with that property, for the name was chosen completely arbitrarily. Cite the sub proof.
A proof looks like this:
1.Ax (Ax -> Bx)
2.Ax (Bx -> Cx)
3.|[a] Aa
4.|Aa -> Ba (A elimination, line 1)
5.|Ba -> Ca (A elimination, line 2)
6.|Ba (-> elimination, lines 3 and 4)
7.|Ca (-> elimination, lines 6 and 5)
8. Ax (Ax -> Cx) (A introduction, lines 3-7)
The rule for E elimination is this: If you have some existential statement, and you can introduce some arbitrary name with the qualities in this existential statement and derive some conclusion, then you can draw that conclusion, citing the existential statement and the sub proof. However, the conclusion must not still contain the temporary name. A proof looks like this:
1. Ex (Ax)
2. Ax (Ax -> Bx)
3.|[a] Aa
4.|Aa -> Ba (A elimination, line 2)
5.|B(a) (-> elimination, lines 3 and 4)
6.|Ex (B(x)) (E intro, line 5)
7. Ex (B(x)) (E elimination, lines 1 and 3-6)
This more or less encompasses the basics of propositional and first order logic. Most arguments can be summed up with these connectives.
VI. Topics in informal logic
All that is left, then, is my rant on informal logic. Not every good argument is derived by deductive logic. Some things are not, and indeed cannot be, deduced from first principles, as in the deductive system(s) I have shown above. One such case is natural science. Natural science isn’t based on presuming some first principle and deducing some conclusion from that presumption. It is based on predicting things based on what is thought to be known.
However, this doesn’t mean that science is always wrong. Here is where I define inductive strength and cogency. An argument is inductively strong if it is unlikely that the premises be true and the conclusion false. An argument is cogent if it is inductively strong and all of the premises are true.
Two major forms of inductive reasoning are “Appeal to authority” and “Argument from analogy.”
A strong appeal to authority is one in which the authority really is authoritative in the given subject. For example, appealing to a doctor’s diagnosis to support the assertion “I have a cold,” is good reasoning. A doctor is trained to diagnose illnesses, and therefore it is perfectly legitimate to accept such a conclusion based on the authority of the doctor. Granted, truth value is not guaranteed, but it is nonetheless likely enough to presume it to be true.
An argument from analogy is when various scenarios are used to support some given conclusion. An effective argument from analogy is one in which the scenarios appealed to are similar to the one being argued. For example, “The last two cases in which a man was found guilty of murder, he was executed. This man is found guilty of murder, and the facts are more or less the same, and the laws have not changed. Therefore, this man will probably be executed.” Again, while the truth value is not guaranteed, it is still nonetheless very likely.
That said, I would like to outline some basic, commonly misunderstood fallacies. A fallacy is committed when the premises, even if true, do not truly support the conclusion given, whether because of irrelevance, weak induction, etc.
The first fallacy I’d like to talk about is Ad Verecundiam. Ad Verecundiam is known as “Appeal to unqualified authority.” The fallacy is committed when the authority cited is not authoritative in the subject given. For example, if you appeal to your physician’s opinion to support the conclusion “I think the carburetor’s busted,” and your physician is not also a mechanic, then clearly, the premise does not support the conclusion. Ad Verecundiam is committed.
For the record, however, not every appeal to authority is fallacious. Only those appeals in which the cited authority isn’t an authority in the particular field are fallacious.
The second fallacy I’d like to talk about is that of “Ad Ignorantiam,” or “to ignorance.” The fallacy is committed when something is either rejected or affirmed due to ignorance of its existence or nonexistence. For example, “There is no proof of God. Therefore, He does not exist.” “There is no proof that Dumbo does not exist, and therefore he does exist.”
However, not all conclusions based on ignorance are fallacious. For example, “We are not sure that there are or are not snipers in that building, and this is a location in which snipers are known to shoot people. Therefore, we should be cautious.”
This isn’t fallacious.
“Your honor, my client has not been proven to be guilty. Therefore, he must be said to be innocent!”
This isn’t fallacious either. That’s how the legal system works.
The third fallacy I’d like to talk about is that of “Ad Antiquatem,” or “To tradition.” This fallacy is committed when some course of action is suggested based solely on it always having been done that way. For example:
“We should ride horses rather than drive cars. After all, we’ve ridden horses for a long, long time!” This is fallacious because horses were used for so long probably only because cars didn’t exist at the time.
However, appealing to tradition in general isn’t always a fallacy, particularly when referring to some historical event or belief. For example:
“It has always been believed that a certain Spartan said in the war against the Persians, ‘Their arrows will block out the sun? Then we shall fight in the shade.’ Therefore, such a Spartan probably did say this.”
Not fallacious.
Last but not least, I’d like to talk about Ad Populum, or “Appeal to the people.” This fallacy is committed when some conclusion is drawn based on popular opinion. I’d like to note, however, that the conclusion drawn in Ad Populum is that of objective truth.
For example, “Everyone believes that there are 9 planets. Therefore, there are 9 planets.” Fallacy.
However, if the argument given is concerning an action taken in a system in which majority rules…clearly, it’s not a fallacy.
“The majority of the United States citizens believe a Republican should be President of the United States, at least as represented in the electoral college. Therefore, a Republican should be President of the United States.”
Not a fallacy.
That said, I truly hope that everyone reads this, and everyone who does read this shall take the time to try to understand it. I thank everyone who read it for taking the time to do so, and I truly hope that this will raise the standard of debate in ED.
Apologist.
Therefore, for all who might feel inclined to make such comments in the future, be educated before you speak on things of which you know not. Read my instructional:
I had posted this on another forum, from which I've since been banned. It seems like a waste to let such a work go to waste, and so I have decided to post it on this forum, for any who might find it useful.
Therefore, in the course of this thread I intend to give a brief outline of what logic is, how logic works, and how to use it. Even if you have no absolutely no intention of pursuing absolutely any logical training, read at least this thread. That said, this thread shall ultimately follow this outline:
I. Definition of logic
II. Logical notation
III. Truth Tables
IV. Rules for the Boolean connectives
V. Brief foray on First Order logic
VI. Topics in informal logic
I. What Logic Is
Logic by definition is the science by which arguments are analyzed. The driving principle of Logic is that contradictions (Something both being the case and not being the case) are impossible. It is from this principle that logic works.
In this sense, Logic is not merely a “man made thing,” insofar as logic is merely the means by which men argue. No, that is rhetoric. Logic is not rhetoric. Logic transcends everything, and is truly a means by which absolutely everything is to be determined, even statements about God. For example, you can’t say both that God is incorporeal and that God literally has a hand.
II. Logical notation
Logic is expressed formally in an artificial language. There are 5 main “Boolean connectives” utilized in this language to express the entirety of basic propositional logic. The English equivalents are “or” “not” “if, then” “if and only if” and “and.”
The common way that these sentiments are expressed are shown as below:
“Or” is symbolized by a “v.”
“If, then” is symbolized by an arrow “->.”
“If and only if” is symbolized by two oppositely pointing arrows “<- ->.”
“And” is symbolized by a “ ^.”
“Not” is symbolized by something like “~.” Except that the first curve is straight across and not curved, and the second curve is a line straight down.
Statements are reduced in propositional logic to capital letters. So, for example, you had the statement “If you don’t shut up, I am going punch the crap out of you.”
You would probably symbolize the “If you don’t shut up” as an “A” and “I am going to punch the crap out of you” as a “B.” Combined with the symbol for “if, then,” you would get “A -> B.”
In cases in which one does not want to go with letters, such as when names and predicates are important, predicates are placed on the outside of parentheses, and names are placed within from left to right. For example, if I want to say “Bob hit Betty,” I would say:
Hit(Bob, Betty)
Names, when formalized, are expressed as lower case letters, whereas predicates are expressed as capital letters. So the same would be expressed as:
H(b, c)
Last but not least, when multiple operators are expressed in a single statement, parentheses generally have to be used (except when the only operators are v and ^) to separate them.
So, if you wanted to say “If both the crayon is red and I desire to color, then I shall draw,” you would say something like this:
((R ^ D) -> S)
I used an additional set of parentheses at the very outside of the statement, but these really aren’t needed.
III. Truth Tables
The logical language is a very unambiguous one, and each of the connectives has a given meaning. Taken together, the connectives can be determined to show when the statement as a whole can be true or false. A statement that is always true is called “tautology.” A statement that is always false is a “contradiction.” A statement that is sometimes true and sometimes false is called “contingent.”
A tool for us to use to grasp how these connectives work both alone and together is a thing called a “truth table.” Each connective has certain rules associated with it in a truth table. To form a truth table, one draws a vertical line, and a horizontal line near the top, placing the unique atomic sentences in the upper left of the table, and the actual argument in the top right. See the below link:
See a truth table here. (http://www.gaiaonline.com/gaia/redirect.php?r=http%3A%2F%2Fonegoodmove.org%2Ffallacy%2Fimages%2Fiff.gif)
Underneath that, take the number of unique atomic sentences, and create a number of rows beneath that equal to 2 to the nth power, in which n is equal to the number of unique atomic sentences. So if you have the sentence “P -> Q,” the truth table should have 4 rows. Then, under the first atomic sentence, write a number of T’s equal to half the number of rows, then below that a number of F’s. Below the second, write a number of T’s equal to a fourth, then a number of F’s equal to a fourth, and so forth until you have that filled in. For the third, write a number of T’s equal to an eighth…you get the point.
In body of the truth table to the right of the T’s and F’s, but below the actual sentence, you should then calculate the truth values of each of the connectives going from the connectives which affect the least number of atomics to the one which affects the most…this one called the “main connective.”
So, say you have a sentence “(P -> Q) ^ R. You would calculate the truth value first of (P -> Q), and then you would calculate the truth value of that in relation to ^ R.
So, without further ado, I shall give the rules for the connectives.
A statement with ~ (not) is true if and only if the statement it modifies is false. So a statement ~A is true if and only if A is false. So if A is true, ~A (not A) is false.
A statement with ^ (and) is true if and only if both conjuncts (the things conjoined by and) are true. So a statement A ^ B is true if and only if A is true and B is true, but A ^ B is false if either A or B is false.
A statement with v (or) is true if and only if at least one of the disjuncts (the things disjoined by or) are true. So a statement A v B v C is true if and only if A, B, or C is true, but false if they are all false.
For -> (if, then), we need a bit of terminology. The statement to the left of the -> is called the antecedent, or sufficient condition, and the statement to the right is called the consequent, or the necessary condition. A statement with -> is true if and only if it is NOT the case that the antecedent is true and the conclusion false. So as long as the antecedent is false or the consequent is true, then the statement is true.
A statement with <- -> (if and only if, or iff) is true if and only if the truth value of the atomics to the left and to the right of the iff symbol have the same truth values. So in a statement A <- -> B, if A and B are both true, or they are both false, then the statement is true. But if A or B is true, and the other is false, then the entire statement is false.
In any case in which you have an argument, say…(A ^ B, B -> C, C -> D, and therefore ((A ^ B) -> D), you should conjoin the parts of the argument using ^, wrap some parentheses around that, and then create an if, then statement placing the premises (the part of the argument that isn’t the conclusion) as the antecedent, and the conclusion as the consequent. So this becomes this:
((A ^ B) ^ (B -> C) ^ (C -> D)) -> ((A ^ B) -> D)
The truth table method is very powerful for determining the validity of arguments. Unfortunately, I think you can all see how this becomes very tedious in more advanced arguments. Even the above example, which has only 4 atomic sentences, would require 16 rows on a truth table. Therefore, a simpler method exists for determining logical inference. This is, of course, a logically deductive system, which leads me to the next point:
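For the curious, the 16-row check on the example above can be mechanized. A small Python sketch (my own illustration of the truth table method):

```python
# Check by brute force that the combined conditional from the example,
# ((A ^ B) ^ (B -> C) ^ (C -> D)) -> ((A ^ B) -> D),
# is true on every row of its 16-row truth table (i.e. the argument is valid).
from itertools import product

def implies(p, q):
    return (not p) or q

rows = 0
for A, B, C, D in product([True, False], repeat=4):
    premises = (A and B) and implies(B, C) and implies(C, D)
    conclusion = implies(A and B, D)
    assert implies(premises, conclusion)  # no row has true premises and a false conclusion
    rows += 1

print(rows)  # 16
```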
IV. Rules for the Boolean Connectives
While a truth table exhaustively shows when a statement is true and false, a deductive system specifically shows logical inference, or deductive validity. An argument is deductively valid when it is impossible for its premises to be true and its conclusion false. Soundness is, of course, when an argument is both valid and all of its premises are true. If one can derive a certain conclusion from a set of premises by doing a logical proof, then the argument is said to be logically valid.
However, if a conclusion cannot be drawn from the premises, then the argument is probably not valid, and the best way to show invalidity is to construct a situation in which the premises of the argument form are true, yet the conclusion is false.
So say you have the argument form A ^ B, therefore C. A good counterexample would be something like “The grass is green and the sky is blue. Therefore, women do not exist.” Obviously, the premises are true: the grass is green, and the sky is assuredly blue. However, it is not true that women do not exist, so the form is invalid.
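A counterexample like this can also be hunted down mechanically. A short Python sketch (my own illustration) that searches the truth table of the argument with premises A and B and conclusion C for a row where the premises are true and the conclusion false:

```python
# Search the truth table of the argument "A ^ B, therefore C" for a
# counterexample: a row where the premise is true but the conclusion is false.
from itertools import product

counterexamples = [(A, B, C)
                   for A, B, C in product([True, False], repeat=3)
                   if (A and B) and not C]

print(counterexamples)  # [(True, True, False)] -- the "women do not exist" row
```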
Before I give any of the rules, I would like to point out that presuming you have some premise, or have proven something, you can at any point reiterate it, or repeat it.
For example:
1.A
…
10. A (reiteration of line 1)
So, what are the rules for the Boolean connectives? For each Boolean connective, there is an introduction rule and an elimination rule. I’m going to give them now.
The first you should know is the ^ introduction rule. If you have a premise, or have somewhere proven, some statement A, and also some statement B, then you can conclude A ^ B.
On a proof, this would look something like this:
1. A
2. B
3. A ^ B (^ introduction, lines 1 and 2)
The ^ elimination rule is basically the opposite of the intro rule. If you have some sentence A ^ B, then you can conclude A, and you can conclude B.
So a proof would look something like this:
1. A ^ B
2.A (^ elimination, line 1)
3.B (^ elimination, line 1)
The v intro rule is simple. If you have some sentence A, then you can conclude A v B. I’ll get to the v elimination rule later.
A proof would look something like this:
1.A
2.A v B (v introduction, line 1)
The rule for -> elimination is also easy. If you have some sentence A -> B, and also A, then you can conclude B. For, remember, there is no way in a true conditional statement (if, then) that the antecedent can be true and the consequent false. I’ll get to the introduction rule later.
A proof would look something like this:
1. A -> B
2. A
3. B (-> elimination, lines 1 and 2)
The rule for <- -> elimination is also easy. If you have some sentence A <- -> B, and either A or B, then you can conclude the other part of the biconditional (the iff statement). I’ll get to the introduction rule later.
A proof would look something like this:
1. A <- -> B
2. B
3. A (<- -> elimination, lines 1 and 2)
The rule for ~ elimination is simple. If you have some sentence ~~A (not not A), then you can remove negation symbols in increments of two. I’ll get to the intro rule later.
A proof would look something like this:
1.~~~~~A
2.~~~A (~ elimination, line 1)
3. ~A (~ elimination, line 2)
So far, I have given 6 of the rules, but not the other 4. The reason is that these require sub proofs. A sub proof is one in which you temporarily assume some premise for the sake of argument in order to derive something needed for the proof, upon which the sub proof is discharged and the target conclusion is placed in the main proof.
The easiest of these to understand is the -> introduction rule. The -> rule says that if you assume some premise A, and can derive some conclusion B based on that premise, then you can conclude A -> B, citing the sub proof in which you proved it.
A proof looks something like this:
1. A -> B
2. B -> C
3.|A (assumed for sub proof)
4.|B (-> elimination, lines 1 and 3)
5.|C (-> elimination, lines 2 and 4)
6. A -> C (-> introduction, lines 3-5)
The next rule is the <- -> intro rule. This rule says that if you can assume some premise A and conclude some conclusion B, and then assume some premise B, and conclude some conclusion A, then you can conclude A <- -> B, citing both of the sub proofs in which these things are proven.
A proof looks something like this:
1.A
2.|A (assumed for sub proof)
3.|A (reiteration of 1)
4.|A (assumed for sub proof)
5.|A (reiteration of 1)
6. A <- -> A (<- -> introduction, lines 2-3 and 4-5)
The next rule is the v elimination rule. This rule says that if you have a disjunction A v B, and can reach some conclusion C from each of the disjuncts, then you can conclude C. After all, if the disjunction is true, then at least one of the disjuncts is true. So if the same thing follows from either one of them, then that thing must be true. One should cite the disjunction and the sub proofs in which each disjunct is assumed.
A proof looks something like this:
1. A v B
2. A -> C
3. B -> D
4.|A (assumed for sub proof)
5.|C (-> elimination, lines 4 and 2)
6.|C v D (v introduction, line 5)
7.|B (assumed for sub proof)
8.|D (-> elimination, lines 7 and 3)
9.|C v D (v introduction, line 8)
10. C v D (v elimination, lines 1, 4-6, 7-9)
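The validity of this v elimination example can be double-checked semantically. A small Python sketch (my own illustration) that enumerates every truth assignment:

```python
# Verify that A v B, A -> C, B -> D together entail C v D:
# on no row of the truth table are all three premises true while C v D is false.
from itertools import product

def implies(p, q):
    return (not p) or q

bad_rows = [(A, B, C, D)
            for A, B, C, D in product([True, False], repeat=4)
            if (A or B) and implies(A, C) and implies(B, D) and not (C or D)]

print(bad_rows)  # [] -- no counterexample, so the inference is valid
```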
At this point, I’d like to bring up another set of rules, though not for a “connective,” per se. The rule is for absurdity, usually symbolized by a vertical line standing upon a horizontal line, something like this: _|_.
The introduction rule for absurdity is this: If you have some sentence A, and some sentence ~A, then you can conclude _|_. Simply put, absurdity means that some contradiction has occurred, and one or more of the premises used to arrive at this contradiction are false. Cite the two contradictory premises.
A proof looks something like this:
1.A
2.B
3.B -> ~A
4.~A (-> elimination, lines 2 and 3)
5._|_ (_|_ introduction, lines 1 and 4)
The elimination rule for Absurdity is this: You can conclude anything from absurdity. After all, validity means that it is impossible that the premises of an argument be true and the conclusion false. Clearly, it is impossible for the premises of a contradictory argument to be true. Therefore, it is impossible that the premises of such an argument be true and the conclusion false. Therefore, such is always valid, but never sound. Once again, anything follows from absurdity. Such a proof is like this:
1.A
2.~A
3._|_ (absurdity introduction, lines 1 and 2)
4. R (absurdity elimination, line 3)
This is particularly important for a proof called “disjunctive syllogism.” This proof is like this:
1.A v B
2.~A
3.|A (assumed for sub proof)
4.|_|_ (absurdity intro, lines 2 and 3)
5.|B (absurdity elimination, line 4)
6.|B (assumed for sub proof)
7.|B (reiteration of line 6)
8.B (v elimination, lines 1, 3-5, 6-7)
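Disjunctive syllogism can also be confirmed semantically rather than by proof. A short Python sketch (my own illustration):

```python
# Confirm that disjunctive syllogism is valid: on every row of the truth
# table where both premises (A v B and ~A) are true, the conclusion B
# is true as well.
from itertools import product

rows = [(A, B) for A, B in product([True, False], repeat=2)
        if (A or B) and not A]          # rows where both premises hold
valid = all(B for A, B in rows)         # conclusion holds on every such row

print(rows, valid)  # [(False, True)] True
```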
The last rule, then, is that of ~ introduction, or reductio ad absurdum. The rule for negation introduction says that if you can assume some premise and reach absurdity (_|_), then you can conclude that the premise is false. Cite the sub proof.
Here is such a proof:
1.A -> ~B
2.B
3.|A (assumed for sub proof)
4.|~B (-> elimination, 1 and 3)
5.| _|_ (absurdity intro, lines 3 and 4)
6.~A (~ introduction, lines 3-5)
These are all of the rules for doing deductive proofs in basic propositional systems.
V. Brief foray on First Order logic
Last but not least, then, is First Order. First Order is basically propositional logic with quantifiers. The additional symbols are as follows:
The upside down A (the universal quantifier).
The upside down E (the existential quantifier).
The identity (or equal) sign, or =.
The universal quantifier means “for all,” whereas the existential quantifier means “there exists at least one.”
A well formed formula is one in which there are no unbound variables, which is to say that for every variable in the formula there is a corresponding quantifier, and everything bound to a quantifier is inside that quantifier’s parentheses. For example, ExEy (A(x) ^ B(x, y)) is a well formed formula, whereas Ey (A(x) ^ B(x, y)) is not (the x has no quantifier), and ExEy (A(x) ^ B(x, y) is not (a closing parenthesis is missing). Parentheses are very important here.
For the record, the well formed formula above means “There exists some x, and there exists some y such that x is A, and x is B in relation to y.”
The basic forms expressed with the quantifiers are “some,” “all,” and “none.”
For all, one should use the universal quantifier. For example, “All men are loved by God” is expressed Ax (Man(x) -> Loves(God, x)), or “For all cases x, if x is a man, then God loves x.”
Some is expressed with the existential quantifier. For example, “Some men love Jesus” is expressed as Ex(Man(x) ^ Loves(x, Jesus)), or “There exists a man such that this man loves Jesus.”
None can be expressed one of two ways, with either the Universal or the Existential quantifier. “No man is not loved by God” can be expressed either as:
Ax(Man(x) -> ~~Loves(God, x)), or “For all cases x, if x is a man, then it is not the case that God does not love x,” or…
~Ex (Man(x) ^ ~Loves(God, x)), or “There does not exist an x such that x is a man and God does not love x.”
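That these two formulations say the same thing can be spot-checked on a small finite domain, where the universal quantifier becomes Python’s all() and the existential becomes any(). A toy sketch (the domain, names, and predicates here are made up purely for illustration):

```python
# On a small finite domain, check that the two "none" formulations agree:
# Ax (Man(x) -> Loves(God, x))  vs  ~Ex (Man(x) ^ ~Loves(God, x)).

domain = ["peter", "paul", "rover"]          # rover is a dog, not a man
man    = {"peter", "paul"}
loved  = {"peter", "paul"}                   # whom God loves, in this toy model

# Man(x) -> Loves(God, x) is rendered as (not Man(x)) or Loves(God, x)
universal   = all((x not in man) or (x in loved) for x in domain)
existential = not any((x in man) and (x not in loved) for x in domain)

print(universal, existential)  # True True -- both formulations agree here
```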
Therefore, I shall give the formal rules for the quantifiers and for identity:
The rule for = introduction is fairly simple. Everything is equal to itself. John is equal to John. Bob is equal to Bob. God is equal to God. No lines have to be cited.
A proof looks like this:
1. a = a (= introduction)
The rule for = elimination is kind of simple too. If something is true of some thing a, and a is equal to b, then this thing is true of b. A proof looks like this:
1.P(a)
2.a = b
3.P(b) (= elimination, lines 1 and 2)
The rule for E introduction is this: If you have some predicate true of some proper name, then you can assert that there exists things of which that predicate is true. A proof looks like this:
1.P(a)
2.Ex (P(x)) (E introduction, line 1)
The rule for A elimination is this: If you have some universal statement, and some proper name, then this universal statement must be true of this proper name. A proof looks like this:
1.Ax (P(x))
...[a] (produced somewhere in the course of argument)
N. P(a) (A elimination, line 1)
The rule for A introduction is this: If you can pick some completely arbitrary name and draw some conclusion about it, then you can conclude that the conclusion is true of everything; or, if you assume an arbitrary name with some property and draw some conclusion, then the conclusion is true of everything with that property, for the name was chosen completely arbitrarily. Cite the sub proof.
A proof looks like this:
1.Ax (P(x) -> Q(x))
2.Ax (Q(x) -> R(x))
3.|[a] P(a) (assumed for sub proof)
4.|P(a) -> Q(a) (A elimination, line 1)
5.|Q(a) -> R(a) (A elimination, line 2)
6.|Q(a) (-> elimination, lines 3 and 4)
7.|R(a) (-> elimination, lines 5 and 6)
8. Ax (P(x) -> R(x)) (-> introduction and A introduction, lines 3-7)
The rule for E elimination is this: If you have some existential statement, and you introduce some arbitrary name with the qualities in that existential statement and derive some conclusion, then you can draw that conclusion, citing the existential statement and the sub proof. However, the conclusion you draw cannot still contain the temporary name. A proof looks like this:
1. Ex (P(x))
2. Ax (P(x) -> Q(x))
3.|[a] P(a) (assumed for sub proof)
4.|P(a) -> Q(a) (A elimination, line 2)
5.|Q(a) (-> elimination, lines 3 and 4)
6.|Ex (Q(x)) (E intro, line 5)
7. Ex (Q(x)) (E elimination, lines 1 and 3-6)
This more or less encompasses the basics of propositional and first order logic. Most arguments can be expressed with these connectives and quantifiers.
VI. Topics in informal logic
All that is left, then, is my rant on informal logic. Not every good argument is deductive. Some things are not deduced from first principles, as in the deductive system I have shown above; indeed, some things cannot be. One such case is natural science. Natural science isn’t based on presuming some first principles and deducing conclusions from those presumptions. It is based on making predictions from what is thought to be known.
However, this doesn’t mean that science is therefore unreliable. Here is where I define inductive strength and cogency. An argument is inductively strong if it is unlikely that the premises be true and the conclusion false. An argument is cogent if it is inductively strong and all of its premises are true.
Two major forms of inductive reasoning are “Appeal to authority” and “Argument from analogy.”
A strong appeal to authority is one in which the authority really is authoritative in the given subject. For example, appealing to a doctor’s diagnosis to support the assertion “I have a cold,” is good reasoning. A doctor is trained to diagnose illnesses, and therefore it is perfectly legitimate to accept such a conclusion based on the authority of the doctor. Granted, truth value is not guaranteed, but it is nonetheless likely enough to presume it to be true.
An argument from analogy is one in which various similar scenarios are used to support some given conclusion. An effective argument from analogy is one in which the scenarios appealed to are similar to the one being argued. For example, “The last two cases in which a man was found guilty of murder, he was executed. This man is found guilty of murder, the facts are more or less the same, and the laws have not changed. Therefore, this man will probably be executed.” Again, while the truth value is not guaranteed, it is nonetheless very likely.
That said, I would like to outline some basic, commonly misunderstood fallacies. A fallacy is committed when the premises, even though true, do not truly support the conclusion given. The fallacy might be such because of irrelevance, because of weak induction, etc.
The first fallacy I’d like to talk about is Ad Verecundiam. Ad Verecundiam is known as “Appeal to unqualified authority.” The fallacy is committed when the authority cited is not authoritative in the subject given. For example, if you appeal to your physician’s opinion to support the conclusion “I think the carburetor’s busted,” and your physician is not also a mechanic, then clearly, the premise does not support the conclusion. Ad Verecundiam is committed.
For the record, however, not every appeal to authority is fallacious. Only those appeals in which the authority isn’t an authority in the particular field are fallacious.
The second fallacy I’d like to talk about is that of “Ad Ignorantiam,” or “appeal to ignorance.” The fallacy is committed when something is either affirmed or rejected merely due to ignorance of its existence or non-existence. For example: “There is no proof of God. Therefore, He does not exist.” Or: “There is no proof that Dumbo does not exist, and therefore he does exist.”
However, not all conclusions based on ignorance are fallacious. For example, “We are not sure that there are or are not snipers in that building, and this is a location in which snipers are known to shoot people. Therefore, we should be cautious.”
This isn’t fallacious.
“Your honor, my client has not been proven to be guilty. Therefore, he must be said to be innocent!”
This isn’t fallacious either. That’s how the legal system works.
The third fallacy I’d like to talk about is that of “Ad Antiquatem,” or “To tradition.” This fallacy is committed when some course of action is suggested based solely on it always having been done that way. For example:
“We should ride horses rather than drive cars. After all, we’ve ridden horses for a long, long time!” This is fallacious because horses were ridden for so long simply because cars didn’t exist for most of that time.
However, appealing to tradition isn’t always a fallacy, particularly when referring to some historical event or belief. For example:
“It has always been believed that a certain Spartan said in the war against the Persians, ‘Their arrows will block out the sun? Then we shall fight in the shade.’ Therefore, such a Spartan probably did say this.”
Not fallacious.
Last but not least, I’d like to talk about Ad Populum, or “Appeal to the people.” This fallacy is committed when some conclusion is drawn based on popular opinion. I’d like to note, however, that the conclusion drawn in Ad Populum is that of objective truth.
For example, “Everyone believes that there are 9 planets. Therefore, there are 9 planets.” Fallacy.
However, if the argument given is concerning an action taken in a system in which majority rules…clearly, it’s not a fallacy.
“The majority of the United States citizens believe a Republican should be President of the United States, at least as represented in the electoral college. Therefore, a Republican should be President of the United States.”
Not a fallacy.
That said, I truly hope that everyone reads this, and everyone who does read this shall take the time to try to understand it. I thank everyone who read it for taking the time to do so, and I truly hope that this will raise the standard of debate in ED.
Apologist.