Scientific Method
- Define the question
- Gather information and resources (observe)
- Form hypothesis
- Perform experiment and collect data
- Analyze data
- Interpret data and draw conclusions that serve as a starting point for a new hypothesis
- Publish results
- Retest (frequently done by other scientists)
Science as a Way of Knowing
Science is a powerful “way of knowing” based on experimentation and observations of the natural world. We depend on science for unbiased and verifiable information to make important decisions about our lives. Although there are other ways of knowing that may be important in our personal and cultural lives, they rely on opinion, belief, and other factors rather than on evidence and testing.
The scientific method uses a series of facts, hypotheses, laws, and theories to explain observations of the natural world. Everyday use of these terms differs from their scientific use, leading to both unintentional and intentional confusion. Theory is one of the most important—yet most misunderstood—terms. While theory is commonly used to mean a “hunch” or “opinion,” in science a theory is a strongly supported explanation of a natural phenomenon, grounded in a wealth of well-documented evidence. A theory must meet the following criteria:
- It must be tested by experimentation and observation of the natural world.
- It must be falsifiable (i.e. experiments must exist that could prove it false).
- It cannot be proven, only confirmed or disconfirmed.
- It is subject to revision and change.
A Rough Guide To Spotting Bad Science
Sensationalized Headlines – Headlines of articles are commonly designed to entice viewers into clicking on and reading the article. At best, they over-simplify the findings of research. At worst, they sensationalize and misrepresent them.
Misinterpreted Results – News articles sometimes distort or misinterpret the findings of research for the sake of a good story, intentionally or otherwise. If possible, try to read the original research, rather than relying on the article based on it for information.
Conflicts of Interest – Many companies employ scientists to carry out and publish research. Whilst this does not necessarily invalidate the research, it should be analysed with this in mind. Research can also be misrepresented for personal or financial gain.
Correlation and Causation – Be wary of confusion of correlation & causation. Correlation between two variables doesn’t automatically mean one causes the other. Global warming has increased since the 1800s, and pirate numbers decreased, but lack of pirates doesn’t cause global warming.
Speculative Language – Speculations from research are just that: speculation. Be on the lookout for words such as ‘may’, ‘could’, and ‘might’, as it is unlikely the research provides hard evidence for any conclusions they precede.
Small Sample Group Size – In human trials, the smaller a sample size, the lower the confidence in the results from that sample when applied to the whole population. Conclusions drawn from smaller sample sizes should be considered with this in mind.
Unrepresentative Samples – In human trials, researchers will try to select individuals that are representative of a larger population. If the sample is different from the population as a whole, then the conclusions may well also be different.
No Control Group Used – In clinical trials, results from test subjects should be compared to a ‘control group’ not given the substance being tested. Groups should also be allocated randomly. In general experiments, a control test should be used where all variables are controlled.
No Blind Testing Used – To prevent intentional and unconscious bias, subjects should not know whether they are in the test group or the control group. In double-blind testing, even researchers don’t know which group subjects are in until after testing.
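The allocation step described in the last two points can be sketched in a few lines (hypothetical subject names; real trials use far more careful protocols): subjects are shuffled randomly into equal groups, and the assignment key stays sealed until the trial ends, so neither subjects nor measuring researchers know who is in which group.

```python
# Sketch of randomized, double-blind allocation (hypothetical trial).
import random

def allocate(subjects, seed=0):
    """Randomly split subjects into equal test and control groups.

    Returns the sealed key mapping subject -> group; in a real trial
    this stays hidden from subjects and researchers until unblinding.
    """
    rng = random.Random(seed)
    shuffled = subjects[:]
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return {s: ("test" if i < half else "control")
            for i, s in enumerate(shuffled)}

key = allocate([f"subject-{i}" for i in range(10)])
print(sorted(key.values()).count("test"))  # 5 in each group
```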
Cherry-Picked Results – This involves selecting results which support the conclusion of the research whilst ignoring those that do not. If a research paper draws conclusions from only a selection of its results, rather than all of them, it may be cherry-picking.
Unreplicable Results – The idea that research must be reproducible is an important one in science. Experiments should be repeated more than once, and ideally there should be evidence that other scientists have been able to reproduce the results.
Poor Journal and Citations – Research published in major journals, such as Nature and Science, is likely to have undergone a more stringent peer-review process. Additionally, if a paper has few citations from other papers, it may be a sign that it is not well regarded.
Guide to Arguments and Slippery Tactics
Ad Hominem Fallacy – An argument is discounted based on attacking the character of the person making the argument. “He is wrong when he says there is no God, because he is a fool.”
Strawman Fallacy – Arguing against a position by creating a different, weaker, or irrelevant position and refuting that position instead of the original. “There is no God,” misrepresents, “There isn’t sufficient evidence that God exists.”
Circular Reasoning – The truth of the conclusion is assumed in order to justify the premises. “The fool says there is no God, because anyone who says there is no God is a fool.”
Begging the Question – The argument assumes the truth of the very proposition under dispute, often by smuggling it into a related secondary proposition that is itself never argued for. Here, the existence of God is assumed while the question of whether God exists is being addressed.
Fallacy of Inconsistency – The argument is inconsistent with other arguments within the same context. In the Christian context, Jesus commands against the invective in Psalms 53:1, warning that “whoever says ‘You fool!’ shall be liable to the hell of fire,” in Matthew 5:22.
Special Pleading – The inappropriate attribution of emotive functions to objects that do not have that capability. Hearts are not capable of “knowing” or of feeling emotions.
Redundancy – Psalm 53 is identical to Psalm 14.
Questionable Premise – It is obviously not the case that all atheists do nothing but bad deeds. This premise is invalidated by a single example of an atheist doing a single charitable act.
False Dilemma – The logical fallacy of false dilemma (also called false dichotomy, the either-or fallacy) involves a situation in which only two alternatives are considered, when in fact there are other options. Closely related are failing to consider a range of options and the tendency to think in extremes, called black-and-white thinking. Strictly speaking, the prefix “di” in “dilemma” means “two”. When a list of more than two choices is offered, but there are other choices not mentioned, then the fallacy is called the fallacy of false choice, or the fallacy of exhaustive hypotheses.
A false dilemma can arise intentionally, when the fallacy is used in an attempt to force a choice (“If you are not with us, you are against us.”). But the fallacy can also arise by accidental omission—possibly through a form of wishful thinking or ignorance—rather than by deliberate deception (“I thought we were friends, but all my friends were at my apartment last night and you weren’t there.”).
When two alternatives are presented, they are often, though not always, two extreme points on some spectrum of possibilities. This can lend credence to the larger argument by giving the impression that the options are mutually exclusive, even though they need not be. Furthermore, the options are typically presented as being collectively exhaustive, in which case the fallacy can be overcome, or at least weakened, by considering other possibilities, or perhaps by considering a whole spectrum of possibilities, as in fuzzy logic.
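The fuzzy-logic idea mentioned above can be sketched in a few lines: truth becomes a degree in [0, 1] rather than a forced choice between two extremes, using the standard min/max connectives.

```python
# Sketch: fuzzy logic assigns degrees of truth in [0, 1] instead of
# forcing a binary choice, modelling the spectrum between two extremes.
def fuzzy_and(a: float, b: float) -> float:
    return min(a, b)   # standard t-norm for conjunction

def fuzzy_or(a: float, b: float) -> float:
    return max(a, b)   # standard t-conorm for disjunction

def fuzzy_not(a: float) -> float:
    return 1.0 - a

# "With us" need not be 1.0 (fully) or 0.0 (against):
support = 0.7                        # partial agreement
print(round(fuzzy_not(support), 2))  # degree of "against us" is only 0.3
```

Under this reading, “if you are not with us, you are against us” wrongly collapses every value strictly between 0 and 1 onto one of the endpoints.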
Critical Thinking
Critical thinking is the intellectually disciplined process of actively and skillfully conceptualizing, applying, analyzing, synthesizing, and/or evaluating information gathered from, or generated by, observation, experience, reflection, reasoning, or communication, as a guide to belief and action. In its exemplary form, it is based on universal intellectual values that transcend subject matter divisions: clarity, accuracy, precision, consistency, relevance, sound evidence, good reasons, depth, breadth, and fairness.
It entails the examination of those structures or elements of thought implicit in all reasoning: purpose, problem, or question-at-issue; assumptions; concepts; empirical grounding; reasoning leading to conclusions; implications and consequences; objections from alternative viewpoints; and frame of reference. Critical thinking – in being responsive to variable subject matter, issues, and purposes – is incorporated in a family of interwoven modes of thinking, among them: scientific thinking, mathematical thinking, historical thinking, anthropological thinking, economic thinking, moral thinking, and philosophical thinking.
Critical thinking can be seen as having two components: 1) a set of information and belief generating and processing skills, and 2) the habit, based on intellectual commitment, of using those skills to guide behavior. It is thus to be contrasted with: 1) the mere acquisition and retention of information alone, because it involves a particular way in which information is sought and treated; 2) the mere possession of a set of skills, because it involves the continual use of them; and 3) the mere use of those skills (“as an exercise”) without acceptance of their results.
Critical thinking varies according to the motivation underlying it. When grounded in selfish motives, it is often manifested in the skillful manipulation of ideas in service of one’s own, or one’s group’s, vested interest. As such it is typically intellectually flawed, however pragmatically successful it might be. When grounded in fair-mindedness and intellectual integrity, it is typically of a higher order intellectually, though subject to the charge of “idealism” by those habituated to its selfish use.
Critical thinking of any kind is never universal in any individual; everyone is subject to episodes of undisciplined or irrational thought. Its quality is therefore typically a matter of degree and dependent on, among other things, the quality and depth of experience in a given domain of thinking or with respect to a particular class of questions. No one is a critical thinker through-and-through, but only to such-and-such a degree, with such-and-such insights and blind spots, subject to such-and-such tendencies towards self-delusion. For this reason, the development of critical thinking skills and dispositions is a life-long endeavor.
The Problem
Everyone thinks; it is our nature to do so. But much of our thinking, left to itself, is biased, distorted, partial, uninformed or down-right prejudiced. Yet the quality of our life and that of what we produce, make, or build depends precisely on the quality of our thought. Shoddy thinking is costly, both in money and in quality of life. Excellence in thought, however, must be systematically cultivated.
A Definition
Critical thinking is that mode of thinking – about any subject, content, or problem – in which the thinker improves the quality of his or her thinking by skillfully taking charge of the structures inherent in thinking and imposing intellectual standards upon them.
The Result
A well cultivated critical thinker:
- Raises vital questions and problems, formulating them clearly and precisely.
- Gathers and assesses relevant information, using abstract ideas to interpret it effectively.
- Comes to well-reasoned conclusions and solutions, testing them against relevant criteria and standards.
- Thinks open-mindedly within alternative systems of thought, recognizing and assessing, as need be, their assumptions, implications, and practical consequences.
- Communicates effectively with others in figuring out solutions to complex problems.
Critical thinking is, in short, self-directed, self-disciplined, self-monitored, and self-corrective thinking. It presupposes assent to rigorous standards of excellence and mindful command of their use. It entails effective communication and problem-solving abilities and a commitment to overcoming our native egocentrism and sociocentrism.
Baloney Detection
From the “Fine Art of Baloney Detection” chapter of Carl Sagan’s fine book, “The Demon-Haunted World: Science as a Candle in the Dark”.
The following are suggested as tools for testing arguments and detecting fallacious or fraudulent arguments:
- Wherever possible there must be independent confirmation of the facts.
- Encourage substantive debate on the evidence by knowledgeable proponents of all points of view.
- Arguments from authority carry little weight (in science there are no “authorities”).
- Spin more than one hypothesis – don’t simply run with the first idea that caught your fancy.
- Try not to get overly attached to a hypothesis just because it’s yours.
- Quantify, wherever possible.
- If there is a chain of argument every link in the chain must work.
- “Occam’s razor” – if there are two hypotheses that explain the data equally well, choose the simpler.
- Ask whether the hypothesis can, at least in principle, be falsified (shown to be false by some unambiguous test). In other words, is it testable? Can others duplicate the experiment and get the same result?
Additional issues are:
- Conduct control experiments – especially “double-blind” experiments, where the person taking measurements does not know which are the test subjects and which are the controls.
- Check for confounding factors – separate the variables.
Common Fallacies of Logic and Rhetoric
- Ad hominem – attacking the arguer and not the argument.
- Argument from “authority”.
- Argument from adverse consequences (putting pressure on the decision maker by pointing out dire consequences of an “unfavourable” decision).
- Appeal to ignorance (absence of evidence is not evidence of absence).
- Special pleading (typically referring to god’s will).
- Begging the question (assuming an answer in the way the question is phrased).
- Observational selection (counting the hits and forgetting the misses).
- Statistics of small numbers (such as drawing conclusions from inadequate sample sizes).
- Misunderstanding the nature of statistics (President Eisenhower expressing astonishment and alarm on discovering that fully half of all Americans have below average intelligence!)
- Inconsistency (e.g. military expenditures based on worst case scenarios but scientific projections on environmental dangers thriftily ignored because they are not “proved”).
- Non sequitur – “it does not follow” – the logic falls down.
- Post hoc, ergo propter hoc – “it happened after so it was caused by” – confusion of cause and effect.
- Meaningless question (“what happens when an irresistible force meets an immovable object?”).
- Excluded middle – considering only the two extremes in a range of possibilities (making the “other side” look worse than it really is).
- Short-term v. long-term – a subset of excluded middle (“why pursue fundamental science when we have so huge a budget deficit?”).
- Slippery slope – a subset of excluded middle – unwarranted extrapolation of the effects (give an inch and they will take a mile).
- Confusion of correlation and causation.
- Straw man – caricaturing (or stereotyping) a position to make it easier to attack.
- Suppressed evidence or half-truths.
- Weasel words – for example, use of euphemisms for war such as “police action” to get around limitations on Presidential powers. “An important art of politicians is to find new names for institutions which under old names have become odious to the public.”
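The “misunderstanding the nature of statistics” item above rewards a concrete look (the incomes below are invented for illustration). For a roughly symmetric quantity like measured intelligence, about half the population below the mean is exactly what should be expected; for a skewed quantity, far more than half can be “below average,” because the mean is pulled toward the outliers while the median is not.

```python
# Sketch with invented data: in a skewed distribution, "below average"
# can describe far more than half the population.
from statistics import mean, median

incomes = [20, 22, 25, 28, 30, 32, 35, 40, 60, 400]  # one large outlier

m = mean(incomes)
print(m)                                  # 69.2 -- pulled up by the outlier
print(median(incomes))                    # 31.0 -- unaffected by it
print(sum(1 for x in incomes if x < m))   # 9 of 10 earn "below average"
```

This is also a small-numbers caution: a single extreme value in a sample of ten dominates the mean entirely.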