Does The Scientific Method Necessarily Always Produce Reliable Knowledge

Posted by admin

The Scientific Method

Psychologists use the scientific method to conduct their research. The scientific method is a standardized way of making observations, gathering data, forming theories, testing predictions, and interpreting results. Researchers make observations in order to describe and measure behavior. After observing certain events repeatedly, researchers come up with a theory that explains these observations. A theory is an explanation that organizes separate pieces of information in a coherent way. Researchers generally develop a theory only after they have collected a lot of evidence and made sure their research results can be reproduced by others.

Example: A psychologist observes that some college sophomores date a lot, while others do not. He observes that some sophomores have blond hair, while others have brown hair. He also observes that in most sophomore couples at least one person has brown hair.

In addition, he notices that most of his brown-haired friends date regularly, but his blond friends don’t date much at all. He explains these observations by theorizing that brown-haired sophomores are more likely to date than those who have blond hair. Based on this theory, he develops a hypothesis that more brown-haired sophomores than blond sophomores will make dates with people they meet at a party. He then conducts an experiment to test his hypothesis. In his experiment, he has twenty people go to a party, ten with blond hair and ten with brown hair.


He makes observations and gathers data by watching what happens at the party and counting how many people of each hair color actually make dates. If, contrary to his hypothesis, the blond-haired people make more dates, he’ll have to think about why this occurred and revise his theory and hypothesis. If the data he collects from further experiments still do not support the hypothesis, he’ll have to reject his theory.

Making Research Scientific

Psychological research, like research in other fields, must meet certain criteria in order to be considered scientific. Research must be replicable, falsifiable, precise, and parsimonious.

Research Must Be Replicable

Research is replicable when others can repeat it and get the same results. When psychologists report what they have found through their research, they also describe in detail how they made their discoveries. This way, other psychologists can repeat the research to see if they can replicate the findings.

After psychologists do their research and make sure it’s replicable, they develop a theory and translate the theory into a precise hypothesis. A hypothesis is a testable prediction of what will happen given a certain set of conditions. Psychologists test a hypothesis by using a specific research method, such as naturalistic observation, a case study, a survey, or an experiment.

If the test does not confirm the hypothesis, the psychologist revises or rejects the original theory.
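To make the example concrete, here is a minimal sketch of how such a hypothesis could be analysed. The counts are invented for illustration; with only ten people per group, Fisher's exact test on a 2x2 table is one reasonable choice.

```python
# A minimal sketch of analysing the party experiment's data.
# The counts below are hypothetical, not from the text.
from scipy.stats import fisher_exact

#                made a date   no date
table = [
    [7, 3],   # brown-haired (hypothetical counts)
    [3, 7],   # blond-haired (hypothetical counts)
]

# alternative="greater" tests the one-sided hypothesis that brown-haired
# sophomores are MORE likely to make dates than blond sophomores.
odds_ratio, p_value = fisher_exact(table, alternative="greater")
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.4f}")

# If p is not small, the data fail to support the hypothesis, and the
# psychologist revises or rejects the theory, as described above.
```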

Does The Scientific Method Necessarily Always Produce Reliable Knowledge

Too many of the findings that fill the academic ether are the result of shoddy experiments or poor analysis. A rule of thumb among biotechnology venture-capitalists is that half of published research cannot be replicated. Even that may be optimistic.

Last year researchers at one biotech firm, Amgen, found they could reproduce just six of 53 “landmark” studies in cancer research. Earlier, a group at Bayer, a drug company, managed to repeat just a quarter of 67 similarly important papers. A leading computer scientist frets that three-quarters of papers in his subfield are bunk. In 2000-10 roughly 80,000 patients took part in clinical trials based on research that was later retracted because of mistakes or improprieties.

What a load of rubbish

Even when flawed research does not put people’s lives at risk (and much of it is too far from the market to do so), it squanders money and the efforts of some of the world’s best minds.

The opportunity costs of stymied progress are hard to quantify, but they are likely to be vast. And they could be rising.

One reason is the competitiveness of science. In the 1950s, when modern academic research took shape after its successes in the second world war, it was still a rarefied pastime. The entire club of scientists numbered a few hundred thousand. As their ranks have swelled, to 6m-7m active researchers on the latest reckoning, scientists have lost their taste for self-policing and quality control.

The obligation to “publish or perish” has come to rule over academic life. Competition for jobs is cut-throat. Full professors in America earned on average $135,000 in 2012, more than judges did. Every year six freshly minted PhDs vie for every academic post. Nowadays verification (the replication of other people’s results) does little to advance a researcher’s career. And without verification, dubious findings live on to mislead.

Careerism also encourages exaggeration and the cherry-picking of results. In order to safeguard their exclusivity, the leading journals impose high rejection rates: in excess of 90% of submitted manuscripts.


The most striking findings have the greatest chance of making it onto the page. Little wonder that one in three researchers knows of a colleague who has pepped up a paper by, say, excluding inconvenient data from results “based on a gut feeling”. And as more research teams around the world work on a problem, the odds shorten that at least one will fall prey to an honest confusion between the sweet signal of a genuine discovery and a freak of statistical noise. Such spurious correlations are often recorded in journals eager for startling papers. If they touch on drinking wine, going senile or letting children play video games, they may well command the front pages of newspapers, too.
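To see how easily noise masquerades as discovery, here is a small illustrative simulation (not from the article): one hundred hypothetical teams each compare two groups drawn from the same distribution, so any “significant” result is a false positive.

```python
# Illustrative simulation: many teams test a hypothesis that is in fact
# false (no real effect exists), each at the conventional 5% level.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
n_teams, n_per_group = 100, 30
false_positives = 0

for _ in range(n_teams):
    # Both samples come from the SAME normal distribution: no true effect.
    a = rng.normal(0.0, 1.0, n_per_group)
    b = rng.normal(0.0, 1.0, n_per_group)
    _, p = ttest_ind(a, b)
    if p < 0.05:
        false_positives += 1

print(f"{false_positives} of {n_teams} teams found a spurious 'discovery'")
# Expect about 5: each looks like a publishable result but is pure noise.
```

With journals favouring striking positives and shunning negatives, it is those few chance “hits” that tend to get written up.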

Conversely, failures to prove a hypothesis are rarely even offered for publication, let alone accepted. “Negative results” now account for only 14% of published papers, down from 30% in 1990. Yet knowing what is false is as important to science as knowing what is true. The failure to report failures means that researchers waste money and effort exploring blind alleys already investigated by other scientists.

The hallowed process of peer review is not all it is cracked up to be, either. When a prominent medical journal ran research past other experts in the field, it found that most of the reviewers failed to spot mistakes it had deliberately inserted into papers, even after being told they were being tested.

If it’s broke, fix it

All this makes a shaky foundation for an enterprise dedicated to discovering the truth about the world. What might be done to shore it up? One priority should be for all disciplines to follow the example of those that have done most to tighten standards.

A start would be getting to grips with statistics, especially in the growing number of fields that sift through untold oodles of data looking for patterns. Geneticists have done this, and turned an early torrent of specious results from genome sequencing into a trickle of truly significant ones.
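The article does not spell out what the geneticists changed; one standard remedy in that spirit is to correct for multiple comparisons. Below is a minimal sketch of the Benjamini-Hochberg procedure, run on hypothetical p-values, showing how a torrent of naive “hits” becomes a trickle of discoveries.

```python
# Sketch of Benjamini-Hochberg false-discovery-rate control (illustrative).
import numpy as np

def benjamini_hochberg(p_values, fdr=0.05):
    """Return a boolean mask of discoveries at false-discovery rate `fdr`."""
    p = np.asarray(p_values)
    m = len(p)
    order = np.argsort(p)
    # Compare the k-th smallest p-value against (k/m) * fdr.
    thresholds = (np.arange(1, m + 1) / m) * fdr
    below = p[order] <= thresholds
    keep = np.zeros(m, dtype=bool)
    if below.any():
        k = np.nonzero(below)[0].max()   # largest rank passing the test
        keep[order[: k + 1]] = True      # reject all nulls up to that rank
    return keep

rng = np.random.default_rng(1)
# 10,000 tests: 9,990 pure noise plus 10 genuine effects with tiny p-values.
p_vals = np.concatenate([rng.uniform(size=9990), rng.uniform(0, 1e-6, size=10)])
print("naive p < 0.05:", int((p_vals < 0.05).sum()))             # hundreds of 'hits'
print("BH discoveries:", int(benjamini_hochberg(p_vals).sum()))  # roughly the 10 real ones
```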

Ideally, research protocols should be registered in advance and monitored in virtual notebooks. This would curb the temptation to fiddle with the experiment’s design midstream so as to make the results look more substantial than they are. (It is already meant to happen in clinical trials of drugs, but compliance is patchy.) Where possible, trial data should also be open for other researchers to inspect and test. The most enlightened journals are already becoming less averse to humdrum papers.

Some government funding agencies, including America’s National Institutes of Health, which dish out $30 billion on research each year, are working out how best to encourage replication. And growing numbers of scientists, especially young ones, understand statistics.

But these trends need to go much further. Journals should allocate space for “uninteresting” work, and grant-givers should set aside money to pay for it. Peer review should be tightened, or perhaps dispensed with altogether in favour of post-publication evaluation in the form of appended comments. That system has worked well in recent years in physics and mathematics. Lastly, policymakers should ensure that institutions using public money also respect the rules.

Science still commands enormous, if sometimes bemused, respect. But its privileged status is founded on the capacity to be right most of the time and to correct its mistakes when it gets things wrong.

And it is not as if the universe is short of genuine mysteries to keep generations of scientists hard at work. The false trails laid down by shoddy research are an unforgivable barrier to understanding.