
Gödel’s Incompleteness: The #1 Mathematical Breakthrough of the 20th Century

Every now and again I come across a fantastic article that warrants posting here; I recently came across one on CosmicFingerprints.com which, I thought, was pretty insightful. Be edified.

“Faith and Reason are not enemies. In fact, the exact opposite is true! One is absolutely necessary for the other to exist. All reasoning ultimately traces back to faith in something that you cannot prove.”

In 1931, the young mathematician Kurt Gödel made a landmark discovery, as powerful as anything Albert Einstein developed.

In one salvo, he completely demolished an entire class of scientific theories.

Gödel’s discovery not only applies to mathematics but literally all branches of science, logic and human knowledge. It has earth-shattering implications.

Oddly, few people know anything about it.

Allow me to tell you the story.

Mathematicians love proofs. They were hot and bothered for centuries, because they were unable to PROVE some of the things they knew were true.

So for example if you studied high school Geometry, you’ve done the exercises where you prove all kinds of things about triangles based on a set of theorems.

That high school geometry book is built on Euclid’s five postulates. Everyone knows the postulates are true, but in 2500 years nobody’s figured out a way to prove them.

Yes, it does seem perfectly “obvious” that a line can be extended infinitely in both directions, but no one has been able to PROVE that. We can only demonstrate that Euclid’s postulates are a reasonable, and in fact necessary, set of 5 assumptions.

Towering mathematical geniuses were frustrated for 2000+ years because they couldn’t prove all their theorems. There were so many things that were “obviously true,” but nobody could find a way to prove them.

In the early 1900s, however, a tremendous wave of optimism swept through mathematical circles. The most brilliant mathematicians in the world (like Bertrand Russell, David Hilbert and Ludwig Wittgenstein) became convinced that they were rapidly closing in on a final synthesis.

A unifying “Theory of Everything” that would finally nail down all the loose ends. Mathematics would be complete, bulletproof, airtight, triumphant.

In 1931 this young Austrian mathematician, Kurt Gödel, published a paper that once and for all PROVED that a single Theory Of Everything is actually impossible. He proved they would never prove everything. (Yeah I know, it sounds a little odd, doesn’t it?)

Gödel’s discovery was called “The Incompleteness Theorem.”

If you’ll give me just a few minutes, I’ll explain what it says, how Gödel proved it, and what it means – in plain, simple English that anyone can understand.

Gödel’s Incompleteness Theorem says:

“Anything you can draw a circle around cannot explain itself without referring to something outside the circle – something you have to assume but cannot prove.”

You can draw a circle around all of the concepts in your high school geometry book. But they’re all built on Euclid’s 5 postulates which we know are true but cannot be proven. Those 5 postulates are outside the book, outside the circle.

Stated in Formal Language:
Gödel’s theorem says: “Any effectively generated theory capable of expressing elementary arithmetic cannot be both consistent and complete. In particular, for any consistent, effectively generated formal theory that proves certain basic arithmetic truths, there is an arithmetical statement that is true, but not provable in the theory.”

The Church-Turing thesis says that a physical system can express elementary arithmetic just as a human can, and that the arithmetic of a Turing Machine (computer) is not provable within the system and is likewise subject to incompleteness.
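
Before moving from mathematical theories to physical systems, it may help to see the theorem in symbols. The following is a standard textbook rendering, not a quotation from Gödel or from the article above: for any effectively axiomatized theory T that is consistent and strong enough to express elementary arithmetic,

\[
  \exists\, G_T \ \text{such that}\ \ T \nvdash G_T \ \text{and}\ \ T \nvdash \lnot G_T,
\]

and G_T is nevertheless true of the natural numbers. In other words, T is incomplete, and the truth of G_T can only be recognized from outside T.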

Any physical system subjected to measurement is capable of expressing elementary arithmetic. (In other words, children can do math by counting their fingers, water flowing into a bucket does integration, and physical systems always give the right answer.)

Therefore the universe is capable of expressing elementary arithmetic and, like both mathematics itself and a Turing machine, is incomplete.

Syllogism:

1. All non-trivial computational systems are incomplete

2. The universe is a non-trivial computational system

3. Therefore the universe is incomplete

You can draw a circle around a bicycle. But the existence of that bicycle relies on a factory that is outside that circle. The bicycle cannot explain itself.

You can draw the circle around a bicycle factory. But that factory likewise relies on other things outside the factory.

Gödel proved that there are ALWAYS more things that are true than you can prove. Any system of logic or numbers that mathematicians ever came up with will always rest on at least a few unprovable assumptions.

Gödel’s Incompleteness Theorem applies not just to math, but to everything that is subject to the laws of logic. Everything that you can count or calculate. Incompleteness is true in math; it’s equally true in science, language and philosophy.

Gödel created his proof by starting with “The Liar’s Paradox” — which is the statement

“I am lying.”

“I am lying” is self-contradictory: if it’s true, then I really am lying, which makes the statement false; and if it’s false, then I’m not lying, which means I’m telling the truth, so the statement is true.

Gödel, in one of the most ingenious moves in the history of math, converted this Liar’s Paradox into a mathematical formula: a statement that, in effect, says “This statement cannot be proved.” He showed that such a statement is true, yet cannot be proved within its own system.
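
In symbols, the move looks roughly like this (a standard textbook sketch, not Gödel’s own notation): instead of “I am lying,” Gödel constructed, inside arithmetic itself, a sentence G that says “G is not provable in this system.” The diagonal (fixed-point) lemma guarantees that such a sentence exists:

\[
  T \vdash G \;\leftrightarrow\; \lnot\mathrm{Prov}_T(\ulcorner G \urcorner)
\]

If T is consistent, it cannot prove G; and since G asserts exactly its own unprovability, G is true. Its truth is visible only from outside the circle of T.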

You always need an outside reference point.

The Incompleteness Theorem was a devastating blow to the “positivists” of the time. They insisted that literally anything you could not measure or prove was nonsense. Gödel showed that their positivism was nonsense.

Gödel proved his theorem in black and white and nobody could argue with his logic. Yet some of his fellow mathematicians went to their graves in denial, believing that somehow or another Gödel must surely be wrong.

He wasn’t wrong. It was really true. There are more things that are true than you can prove.

A “theory of everything” – whether in math, or physics, or philosophy – will never be found.  Because it is mathematically impossible.

OK, so what does this really mean? Why is this super-important, and not just an interesting geek factoid?

Here’s what it means:

  • Faith and Reason are not enemies. In fact, the exact opposite is true! One is absolutely necessary for the other to exist. All reasoning ultimately traces back to faith in something that you cannot prove.
  • All closed systems depend on something outside the system.
  • You can always draw a bigger circle but there will still be something outside the circle.

Reasoning inward from a larger circle to a smaller circle (from “all things” to “some things”) is deductive reasoning.

Example of deductive reasoning (a symbolic version follows the list):

1.    All men are mortal
2.    Socrates is a man
3.    Therefore Socrates is mortal
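
For the symbolically minded, here is the same syllogism as a one-line derivation in predicate logic (a standard textbook rendering, not something from the original article):

\[
  \forall x\,\bigl(\mathrm{Man}(x) \rightarrow \mathrm{Mortal}(x)\bigr),\;\; \mathrm{Man}(\mathrm{Socrates}) \;\vdash\; \mathrm{Mortal}(\mathrm{Socrates})
\]

The conclusion is already contained in the premises; the smaller circle sits inside the bigger one. The inductive examples below have no comparable derivation, because no finite list of observed cases logically entails a statement about all cases.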

Reasoning outward from a smaller circle to a larger circle (from “some things” to “all things”) is inductive reasoning.

Examples of inductive reasoning:

1. All the men I know are mortal
2. Therefore all men are mortal

1. When I let go of objects, they fall
2. Therefore there is a law of gravity that governs all falling objects

Notice that when you move from the smaller circle to the larger circle, you have to make assumptions that you cannot 100% prove.

For example you cannot PROVE gravity will always be consistent at all times. You can only observe that it’s consistently true every time.

Nearly all scientific laws are based on inductive reasoning. All of science rests on an assumption that the universe is orderly, logical and mathematical based on fixed discoverable laws.

You cannot PROVE this. (You can’t prove that the sun will come up tomorrow morning either.) You literally have to take it on faith. In fact most people don’t know that outside the science circle is a philosophy circle. Science is based on philosophical assumptions that you cannot scientifically prove. Actually, the scientific method cannot prove; it can only infer.

(Science originally came from the idea that God made an orderly universe which obeys fixed, discoverable laws – and because of those laws, He would not have to constantly tinker with it in order for it to operate.)

Now please consider what happens when we draw the biggest circle we possibly can – around the whole universe.
(If there are multiple universes, we’re drawing a circle around all of them too):

  • There has to be something outside that circle. Something which we have to assume but cannot prove
  • The universe as we know it is finite – finite matter, finite energy, finite space and 13.8 billion years of time
  • The universe (all matter, energy, space and time) cannot explain itself
  • Whatever is outside the biggest circle is boundless. So by definition it is not possible to draw a circle around it.
  • If we draw a circle around all matter, energy, space and time and apply Gödel’s theorem, then we know what is outside that circle is not matter, is not energy, is not space and is not time. Because all the matter and energy are inside the circle. It’s immaterial.
  • Whatever is outside the biggest circle is not a system – i.e. is not an assemblage of parts. Otherwise we could draw a circle around them. The thing outside the biggest circle is indivisible.
  • Whatever is outside the biggest circle is an uncaused cause, because you can always draw a circle around an effect.

We can apply the same inductive reasoning to the origin of information:

  • In the history of the universe we also see the introduction of information, some 3.8 billion years ago. It came in the form of the Genetic code, which is symbolic and immaterial.
  • The information had to come from the outside, since information is not known to be an inherent property of matter, energy, space or time.
  • All codes we know the origin of are designed by conscious beings.
  • Therefore whatever is outside the largest circle is a conscious being.

When we add information to the equation, we conclude that not only is the thing outside the biggest circle infinite and immaterial, it is also self-aware.

Isn’t it interesting how all these conclusions sound suspiciously similar to how theologians have described God for thousands of years?

Small wonder, then, that 80-90% of the people in the world believe in some concept of God. Yes, it’s intuitive to most folks. But Gödel’s theorem indicates it’s also supremely logical. In fact it’s the only position one can take and stay in the realm of reason and logic.

The person who proudly proclaims, “You’re a man of faith, but I’m a man of science” doesn’t understand the roots of science or the nature of knowledge!

Interesting aside…

If you visit the world’s largest atheist website, Infidels, on the home page you will find the following statement:

“Naturalism is the hypothesis that the natural world is a closed system, which means that nothing that is not part of the natural world affects it.”

If you know Gödel’s theorem, you know all systems rely on something outside the system. So according to Gödel’s Incompleteness theorem, the folks at Infidels cannot be correct. Because the universe is a system, it has to have an outside cause.

Therefore atheism violates the laws of mathematics.

The Incompleteness of the universe isn’t proof that God exists. But… it IS proof that in order to construct a consistent model of the universe, belief in God is not just 100% logical… it’s necessary.

Euclid’s 5 postulates aren’t formally provable and God is not formally provable either. But… just as you cannot build a coherent system of geometry without Euclid’s 5 postulates, neither can you build a coherent description of the universe without a First Cause and a Source of order.

Thus faith and science are not enemies, but allies. They are two sides of the same coin. It had been true for hundreds of years, but in 1931 this skinny young Austrian mathematician named Kurt Gödel proved it.

At no time in the history of mankind has faith in God been more reasonable, more logical, or more thoroughly supported by rational thought, science and mathematics.

Perry Marshall

“Math is the language God wrote the universe in.” –Galileo Galilei, 1623

Further reading:

“Incompleteness: The Proof and Paradox of Kurt Gödel” by Rebecca Goldstein – fantastic biography and a great read

A collection of quotes and notes about Gödel’s proof from Miskatonic University Press

Formal description of Gödel’s Incompleteness Theorem and links to his original papers on Wikipedia

Science vs. Faith on CoffeehouseTheology.com

Originally published on cosmicfingerprints.com and can be found here.

The 7 Biggest Problems Facing Science, According to 270 Scientists

Every now and again I come across a fantastic article that warrants posting here.  I have seen a recent proliferation of articles in respected publications pointing out, bemoaning, and/or highlighting increasing problems with the trustworthiness of the alleged findings of the contemporary scientific community.  I find these articles to be particularly interesting given how our society looks to science as a (the?) source of ultimate truths (often as a mutually exclusive alternative to spirituality).  This sort of scientism may be misplaced, and these articles delve into the pitfalls that come with such an approach.

Here are the links to the other articles I posted on this subject:

Be edified.

_______________

“Science, I had come to learn, is as political, competitive, and fierce a career as you can find, full of the temptation to find easy paths.” — Paul Kalanithi, neurosurgeon and writer (1977–2015)

Science is in big trouble. Or so we’re told.

In the past several years, many scientists have become afflicted with a serious case of doubt — doubt in the very institution of science.

As reporters covering medicine, psychology, climate change, and other areas of research, we wanted to understand this epidemic of doubt. So we sent scientists a survey asking this simple question: If you could change one thing about how science works today, what would it be and why?

We heard back from 270 scientists all over the world, including graduate students, senior professors, laboratory heads, and Fields Medalists. They told us that, in a variety of ways, their careers are being hijacked by perverse incentives. The result is bad science.

The scientific process, in its ideal form, is elegant: Ask a question, set up an objective test, and get an answer. Repeat. Science is rarely practiced to that ideal. But Copernicus believed in that ideal. So did the rocket scientists behind the moon landing.

But nowadays, our respondents told us, the process is riddled with conflict. Scientists say they’re forced to prioritize self-preservation over pursuing the best questions and uncovering meaningful truths.

“I feel torn between asking questions that I know will lead to statistical significance and asking questions that matter,” says Kathryn Bradshaw, a 27-year-old graduate student of counseling at the University of North Dakota.

Today, scientists’ success often isn’t measured by the quality of their questions or the rigor of their methods. It’s instead measured by how much grant money they win, the number of studies they publish, and how they spin their findings to appeal to the public.

Scientists often learn more from studies that fail. But failed studies can mean career death. So instead, they’re incentivized to generate positive results they can publish. And the phrase “publish or perish” hangs over nearly every decision. It’s a nagging whisper, like a Jedi’s path to the dark side.

“Over time the most successful people will be those who can best exploit the system,” Paul Smaldino, a cognitive science professor at University of California Merced, says.

To Smaldino, the selection pressures in science have favored less-than-ideal research: “As long as things like publication quantity, and publishing flashy results in fancy journals are incentivized, and people who can do that are rewarded … they’ll be successful, and pass on their successful methods to others.”

Many scientists have had enough. They want to break this cycle of perverse incentives and rewards. They are going through a period of introspection, hopeful that the end result will yield stronger scientific institutions. In our survey and interviews, they offered a wide variety of ideas for improving the scientific process and bringing it closer to its ideal form.

Before we jump in, some caveats to keep in mind: Our survey was not a scientific poll. For one, the respondents disproportionately hailed from the biomedical and social sciences and English-speaking communities.

Many of the responses did, however, vividly illustrate the challenges and perverse incentives that scientists across fields face. And they are a valuable starting point for a deeper look at dysfunction in science today.

The place to begin is right where the perverse incentives first start to creep in: the money.

Academia has a huge money problem

To do most any kind of research, scientists need money: to run studies, to subsidize lab equipment, to pay their assistants and even their own salaries. Our respondents told us that getting — and sustaining — that funding is a perennial obstacle.

Their gripe isn’t just with the quantity, which, in many fields, is shrinking. It’s the way money is handed out that puts pressure on labs to publish a lot of papers, breeds conflicts of interest, and encourages scientists to overhype their work.

In the United States, academic researchers in the sciences generally cannot rely on university funding alone to pay for their salaries, assistants, and lab costs. Instead, they have to seek outside grants. “In many cases the expectations were and often still are that faculty should cover at least 75 percent of the salary on grants,” writes John Chatham, a professor of medicine studying cardiovascular disease at University of Alabama at Birmingham.

Grants also usually expire after three or so years, which pushes scientists away from long-term projects. Yet as John Pooley, a neurobiology postdoc at the University of Bristol, points out, the biggest discoveries usually take decades to uncover and are unlikely to occur under short-term funding schemes.

Outside grants are also in increasingly short supply. In the US, the largest source of funding is the federal government, and that pool of money has been plateauing for years, while young scientists enter the workforce at a faster rate than older scientists retire.

Take the National Institutes of Health, a major funding source. Its budget rose at a fast clip through the 1990s, stalled in the 2000s, and then dipped with sequestration budget cuts in 2013. All the while, rising costs for conducting science meant that each NIH dollar purchased less and less. Last year, Congress approved the biggest NIH spending hike in a decade. But it won’t erase the shortfall.

The consequences are striking: In 2000, more than 30 percent of NIH grant applications got approved. Today, it’s closer to 17 percent. “It’s because of what’s happened in the last 12 years that young scientists in particular are feeling such a squeeze,” NIH Director Francis Collins said at the Milken Global Conference in May.

Some of our respondents said that this vicious competition for funds can influence their work. Funding “affects what we study, what we publish, the risks we (frequently don’t) take,” explains Gary Bennett, a neuroscientist at Duke University. It “nudges us to emphasize safe, predictable (read: fundable) science.”

Truly novel research takes longer to produce, and it doesn’t always pay off. A National Bureau of Economic Research working paper found that, on the whole, truly unconventional papers tend to be less consistently cited in the literature. So scientists and funders increasingly shy away from them, preferring short-turnaround, safer papers. But everyone suffers from that: the NBER report found that novel papers also occasionally lead to big hits that inspire high-impact, follow-up studies.

“I think because you have to publish to keep your job and keep funding agencies happy, there are a lot of (mediocre) scientific papers out there … with not much new science presented,” writes Kaitlyn Suski, a chemistry and atmospheric science postdoc at Colorado State University.

Another worry: When independent, government, or university funding sources dry up, scientists may feel compelled to turn to industry or interest groups eager to generate studies to support their agendas.

Finally, all of this grant writing is a huge time suck, taking resources away from the actual scientific work. Tyler Josephson, an engineering graduate student at the University of Delaware, writes that many professors he knows spend 50 percent of their time writing grant proposals. “Imagine,” he asks, “what they could do with more time to devote to teaching and research?”

It’s easy to see how these problems in funding kick off a vicious cycle. To be more competitive for grants, scientists have to have published work. To have published work, they need positive (i.e., statistically significant) results. That puts pressure on scientists to pick “safe” topics that will yield a publishable conclusion — or, worse, may bias their research toward significant results.

“When funding and pay structures are stacked against academic scientists,” writes Alison Bernstein, a neuroscience postdoc at Emory University, “these problems are all exacerbated.”

Fixes for science’s funding woes

Right now there are arguably too many researchers chasing too few grants. Or, as a 2014 piece in the Proceedings of the National Academy of Sciences put it: “The current system is in perpetual disequilibrium, because it will inevitably generate an ever-increasing supply of scientists vying for a finite set of research resources and employment opportunities.”

“As it stands, too much of the research funding is going to too few of the researchers,” writes Gordon Pennycook, a PhD candidate in cognitive psychology at the University of Waterloo. “This creates a culture that rewards fast, sexy (and probably wrong) results.”

One straightforward way to ameliorate these problems would be for governments to simply increase the amount of money available for science. (Or, more controversially, decrease the number of PhDs, but we’ll get to that later.) If Congress boosted funding for the NIH and National Science Foundation, that would take some of the competitive pressure off researchers.

But that only goes so far. Funding will always be finite, and researchers will never get blank checks to fund the risky science projects of their dreams. So other reforms will also prove necessary.

One suggestion: Bring more stability and predictability into the funding process. “The NIH and NSF budgets are subject to changing congressional whims that make it impossible for agencies (and researchers) to make long term plans and commitments,” M. Paul Murphy, a neurobiology professor at the University of Kentucky, writes. “The obvious solution is to simply make [scientific funding] a stable program, with an annual rate of increase tied in some manner to inflation.”

Another idea would be to change how grants are awarded: Foundations and agencies could fund specific people and labs for a period of time rather than individual project proposals. (The Howard Hughes Medical Institute already does this.) A system like this would give scientists greater freedom to take risks with their work.

Alternatively, researchers in the journal mBio recently called for a lottery-style system. Proposals would be measured on their merits, but then a computer would randomly choose which get funded.

“Although we recognize that some scientists will cringe at the thought of allocating funds by lottery,” the authors of the mBio piece write, “the available evidence suggests that the system is already in essence a lottery without the benefits of being random.” Pure randomness would at least reduce some of the perverse incentives at play in jockeying for money.
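
To make the mechanism concrete, here is a minimal sketch of a merit-threshold-plus-lottery allocation; the threshold, the number of funded slots, and the data layout are illustrative assumptions rather than anything specified in the mBio piece quoted above:

import random

# Hypothetical proposals with peer-review merit scores (illustrative data only)
proposals = [
    {"id": "P1", "merit": 8.2},
    {"id": "P2", "merit": 6.9},
    {"id": "P3", "merit": 9.1},
    {"id": "P4", "merit": 7.4},
    {"id": "P5", "merit": 5.0},
]

MERIT_THRESHOLD = 7.0  # peer review still screens out weak proposals (assumed cutoff)
FUNDED_SLOTS = 2       # the budget covers only this many grants (assumed)

# Step 1: judge proposals on their merits; Step 2: draw the winners at random
eligible = [p for p in proposals if p["merit"] >= MERIT_THRESHOLD]
funded = random.sample(eligible, k=min(FUNDED_SLOTS, len(eligible)))
print("Funded:", [p["id"] for p in funded])

The point of the randomness is that, among proposals good enough to fund, no amount of grantsmanship changes the odds.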

There are also some ideas out there to minimize conflicts of interest from industry funding. Recently, in PLOS Medicine, Stanford epidemiologist John Ioannidis suggested that pharmaceutical companies ought to pool the money they use to fund drug research, to be allocated to scientists who then have no exchange with industry during study design and execution. This way, scientists could still get funding for work crucial for drug approvals — but without the pressures that can skew results.

These solutions are by no means complete, and they may not make sense for every scientific discipline. The daily incentives facing biomedical scientists to bring new drugs to market are different from the incentives facing geologists trying to map out new rock layers. But based on our survey, funding appears to be at the root of many of the problems facing scientists, and it’s one that deserves more careful discussion.

Too many studies are poorly designed. Blame bad incentives.

Scientists are ultimately judged by the research they publish. And the pressure to publish pushes scientists to come up with splashy results, of the sort that get them into prestigious journals. “Exciting, novel results are more publishable than other kinds,” says Brian Nosek, who co-founded the Center for Open Science at the University of Virginia.

The problem here is that truly groundbreaking findings simply don’t occur very often, which means scientists face pressure to game their studies so they turn out to be a little more “revolutionary.” (Caveat: Many of the respondents who focused on this particular issue hailed from the biomedical and social sciences.)

Some of this bias can creep into decisions that are made early on: choosing whether or not to randomize participants, including a control group for comparison, or controlling for certain confounding factors but not others. (Read more on study design particulars here.)

Many of our survey respondents noted that perverse incentives can also push scientists to cut corners in how they analyze their data.

“I have incredible amounts of stress that maybe once I finish analyzing the data, it will not look significant enough for me to defend,” writes Jess Kautz, a PhD student at the University of Arizona. “And if I get back mediocre results, there’s going to be incredible pressure to present it as a good result so they can get me out the door. At this moment, with all this in my mind, it is making me wonder whether I could give an intellectually honest assessment of my own work.”

Increasingly, meta-researchers (who conduct research on research) are realizing that scientists often do find little ways to hype up their own results — and they’re not always doing it consciously. Among the most famous examples is a technique called “p-hacking,” in which researchers test their data against many hypotheses and only report those that have statistically significant results.
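
A small simulation shows why this matters. The sketch below is a constructed illustration (the 20 hypotheses, 30 subjects per group, and the absence of any real effect are assumptions, not data from any study cited here); it estimates how often a study that tests many null hypotheses can still report at least one “significant” p < 0.05 result:

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def smallest_p(n_hypotheses=20, n_subjects=30):
    """Run n_hypotheses comparisons where no real effect exists; return the smallest p-value."""
    p_values = []
    for _ in range(n_hypotheses):
        group_a = rng.normal(0.0, 1.0, n_subjects)  # both groups drawn from the same distribution
        group_b = rng.normal(0.0, 1.0, n_subjects)
        _, p = stats.ttest_ind(group_a, group_b)
        p_values.append(p)
    return min(p_values)

runs = 2000
hits = sum(smallest_p() < 0.05 for _ in range(runs))
print(f"Studies with at least one 'significant' finding: {hits / runs:.0%}")
# Expect roughly 1 - 0.95**20, i.e. about 64%, even though every true effect is zero.

Report only the winning comparison and the paper looks like a discovery; report all twenty and it is plainly noise.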

In a recent study, which tracked the misuse of p-values in biomedical journals, meta-researchers found “an epidemic” of statistical significance: 96 percent of the papers that included a p-value in their abstracts boasted statistically significant results.

That seems awfully suspicious. It suggests the biomedical community has been chasing statistical significance, potentially giving dubious results the appearance of validity through techniques like p-hacking — or simply suppressing important results that don’t look significant enough. Fewer studies share effect sizes (which arguably gives a better indication of how meaningful a result might be) or discuss measures of uncertainty.

“The current system has done too much to reward results,” says Joseph Hilgard, a postdoctoral research fellow at the Annenberg Public Policy Center. “This causes a conflict of interest: The scientist is in charge of evaluating the hypothesis, but the scientist also desperately wants the hypothesis to be true.”

The consequences are staggering. An estimated $200 billion — or the equivalent of 85 percent of global spending on research — is routinely wasted on poorly designed and redundant studies, according to meta-researchers who have analyzed inefficiencies in research. We know that as much as 30 percent of the most influential original medical research papers later turn out to be wrong or exaggerated.

Fixes for poor study design

Our respondents suggested that the two key ways to encourage stronger study design — and discourage positive results chasing — would involve rethinking the rewards system and building more transparency into the research process.

“I would make rewards based on the rigor of the research methods, rather than the outcome of the research,” writes Simine Vazire, a journal editor and a social psychology professor at UC Davis. “Grants, publications, jobs, awards, and even media coverage should be based more on how good the study design and methods were, rather than whether the result was significant or surprising.”

Likewise, Cambridge mathematician Tim Gowers argues that researchers should get recognition for advancing science broadly through informal idea sharing — rather than only getting credit for what they publish.

“We’ve gotten used to working away in private and then producing a sort of polished document in the form of a journal article,” Gowers said. “This tends to hide a lot of the thought process that went into making the discoveries. I’d like attitudes to change so people focus less on the race to be first to prove a particular theorem, or in science to make a particular discovery, and more on other ways of contributing to the furthering of the subject.”

When it comes to published results, meanwhile, many of our respondents wanted to see more journals put a greater emphasis on rigorous methods and processes rather than splashy results.

“I think the one thing that would have the biggest impact is removing publication bias: judging papers by the quality of questions, quality of method, and soundness of analyses, but not on the results themselves,” writes Michael Inzlicht, a University of Toronto psychology and neuroscience professor.

Some journals are already embracing this sort of research. PLOS One, for example, makes a point of accepting negative studies (in which a scientist conducts a careful experiment and finds nothing) for publication, as does the aptly named Journal of Negative Results in Biomedicine.

More transparency would also help, writes Daniel Simons, a professor of psychology at the University of Illinois. Here’s one example: ClinicalTrials.gov, a site run by the NIH, allows researchers to register their study design and methods ahead of time and then publicly record their progress. That makes it more difficult for scientists to hide experiments that didn’t produce the results they wanted. (The site now holds information for more than 180,000 studies in 180 countries.)

Similarly, the AllTrials campaign is pushing for every clinical trial (past, present, and future) around the world to be registered, with the full methods and results reported. Some drug companies and universities have created portals that allow researchers to access raw data from their trials.

The key is for this sort of transparency to become the norm rather than a laudable outlier.

Replicating results is crucial. But scientists rarely do it.

Replication is another foundational concept in science. Researchers take an older study that they want to test and then try to reproduce it to see if the findings hold up.

Testing, validating, retesting — it’s all part of a slow and grinding process to arrive at some semblance of scientific truth. But this doesn’t happen as often as it should, our respondents said. Scientists face few incentives to engage in the slog of replication. And even when they attempt to replicate a study, they often find they can’t do so. Increasingly it’s being called a “crisis of irreproducibility.”

The stats bear this out: A 2015 study looked at 83 highly cited studies that claimed to feature effective psychiatric treatments. Only 16 had ever been successfully replicated. Another 16 were contradicted by follow-up attempts, and 11 were found to have substantially smaller effects the second time around. Meanwhile, nearly half of the studies (40) had never been subject to replication at all.

More recently, a landmark study published in the journal Science demonstrated that only a fraction of recent findings in top psychology journals could be replicated. This is happening in other fields too, says Ivan Oransky, one of the founders of the blog Retraction Watch, which tracks scientific retractions.

As for the underlying causes, our survey respondents pointed to a couple of problems. First, scientists have very few incentives to even try replication. Jon-Patrick Allem, a social scientist at the Keck School of Medicine of USC, noted that funding agencies prefer to support projects that find new information instead of confirming old results.

Journals are also reluctant to publish replication studies unless “they contradict earlier findings or conclusions,” Allem writes. The result is to discourage scientists from checking each other’s work. “Novel information trumps stronger evidence, which sets the parameters for working scientists.”

The second problem is that many studies can be difficult to replicate. Sometimes their methods are too opaque. Sometimes the original studies had too few participants to produce a replicable answer. And sometimes, as we saw in the previous section, the study is simply poorly designed or outright wrong.

Again, this goes back to incentives: When researchers have to publish frequently and chase positive results, there’s less time to conduct high-quality studies with well-articulated methods.

Fixes for underreplication

Scientists need more carrots to entice them to pursue replication in the first place. As it stands, researchers are encouraged to publish new and positive results and to allow negative results to linger in their laptops or file drawers.

This has plagued science with a problem called “publication bias” — not all studies that are conducted actually get published in journals, and the ones that do tend to have positive and dramatic conclusions.

If institutions started to reward tenure positions or make hires based on the quality of a researcher’s body of work, instead of quantity, this might encourage more replication and discourage positive results chasing.

“The key that needs to change is performance review,” writes Christopher Wynder, a former assistant professor at McMaster University. “It affects reproducibility because there is little value in confirming another lab’s results and trying to publish the findings.”

The next step would be to make replication of studies easier. This could include more robust sharing of methods in published research papers. “It would be great to have stronger norms about being more detailed with the methods,” says University of Virginia’s Brian Nosek.

He also suggested more regularly adding supplements at the end of papers that get into the procedural nitty-gritty, to help anyone wanting to repeat an experiment. “If I can rapidly get up to speed, I have a much better chance of approximating the results,” he said.

Nosek has detailed other potential fixes that might help with replication — all part of his work at the Center for Open Science.

A greater degree of transparency and data sharing would enable replications, said Stanford’s John Ioannidis. Too often, anyone trying to replicate a study must chase down the original investigators for details about how the experiment was conducted.

“It is better to do this in an organized fashion with buy-in from all leading investigators in a scientific discipline,” he explained, “rather than have to try to find the investigator in each case and ask him or her in detective-work fashion about details, data, and methods that are otherwise unavailable.”

Researchers could also make use of new tools, such as open source software that tracks every version of a data set, so that they can share their data more easily and have transparency built into their workflow.

Some of our respondents suggested that scientists engage in replication prior to publication. “Before you put an exploratory idea out in the literature and have people take the time to read it, you owe it to the field to try to replicate your own findings,” says John Sakaluk, a social psychologist at the University of Victoria.

For example, he has argued, psychologists could conduct small experiments with a handful of participants to form ideas and generate hypotheses. But they would then need to conduct bigger experiments, with more participants, to replicate and confirm those hypotheses before releasing them into the world. “In doing so,” Sakaluk says, “the rest of us can have more confidence that this is something we might want to [incorporate] into our own research.”

Peer review is broken

Peer review is meant to weed out junk science before it reaches publication. Yet over and over again in our survey, respondents told us this process fails. It was one of the parts of the scientific machinery to elicit the most rage among the researchers we heard from.

Normally, peer review works like this: A researcher submits an article for publication in a journal. If the journal accepts the article for review, it’s sent off to peers in the same field for constructive criticism and eventual publication — or rejection. (The level of anonymity varies; some journals have double-blind reviews, while others have moved to triple-blind review, where the authors, editors, and reviewers don’t know who one another are.)

It sounds like a reasonable system. But numerous studies and systematic reviews have shown that peer review doesn’t reliably prevent poor-quality science from being published.

The process frequently fails to detect fraud or other problems with manuscripts, which isn’t all that surprising when you consider researchers aren’t paid or otherwise rewarded for the time they spend reviewing manuscripts. They do it out of a sense of duty — to contribute to their area of research and help advance science.

But this means it’s not always easy to find the best people to peer-review manuscripts in their field, that harried researchers delay doing the work (leading to publication delays of up to two years), and that when they finally do sit down to peer-review an article they might be rushed and miss errors in studies.

“The issue is that most referees simply don’t review papers carefully enough, which results in the publishing of incorrect papers, papers with gaps, and simply unreadable papers,” says Joel Fish, an assistant professor of mathematics at the University of Massachusetts Boston. “This ends up being a large problem for younger researchers to enter the field, since that means they have to ask around to figure out which papers are solid and which are not.”

That’s not to mention the problem of peer review bullying. Since the default in the process is that editors and peer reviewers know who the authors are (but authors don’t know who the reviewers are), biases against researchers or institutions can creep in, opening the opportunity for rude, rushed, and otherwise unhelpful comments. (Just check out the popular #SixWordPeerReview hashtag on Twitter).

These issues were not lost on our survey respondents, who said peer review amounts to a broken system, which punishes scientists and diminishes the quality of publications. They want to not only overhaul the peer review process but also change how it’s conceptualized.

Fixes for peer review

On the question of editorial bias and transparency, our respondents were surprisingly divided. Several suggested that all journals should move toward double-blinded peer review, whereby reviewers can’t see the names or affiliations of the person they’re reviewing and publication authors don’t know who reviewed them. The main goal here was to reduce bias.

“We know that scientists make biased decisions based on unconscious stereotyping,” writes Pacific Northwest National Laboratory postdoc Timothy Duignan. “So rather than judging a paper by the gender, ethnicity, country, or institutional status of an author — which I believe happens a lot at the moment — it should be judged by its quality independent of those things.”

Yet others thought that more transparency, rather than less, was the answer: “While we correctly advocate for the highest level of transparency in publishing, we still have most reviews that are blinded, and I cannot know who is reviewing me,” writes Lamberto Manzoli, a professor of epidemiology and public health at the University of Chieti, in Italy. “Too many times we see very low quality reviews, and we cannot understand whether it is a problem of scarce knowledge or conflict of interest.”

Perhaps there is a middle ground. For example, eLife, a new open access journal that is rapidly rising in impact factor, runs a collaborative peer review process. Editors and peer reviewers work together on each submission to create a consolidated list of comments about a paper. The author can then reply to what the group saw as the most important issues, rather than facing the biases and whims of individual reviewers. (Oddly, this process is faster — eLife takes less time to accept papers than Nature or Cell.)

Still, those are mostly incremental fixes. Other respondents argued that we might need to radically rethink the entire process of peer review from the ground up.

“The current peer review process embraces a concept that a paper is final,” says Nosek. “The review process is [a form of] certification, and that a paper is done.” But science doesn’t work that way. Science is an evolving process, and truth is provisional. So, Nosek said, science must “move away from the embrace of definitiveness of publication.”

Some respondents wanted to think of peer review as more of a continuous process, in which studies are repeatedly and transparently updated and republished as new feedback changes them — much like Wikipedia entries. This would require some sort of expert crowdsourcing.

“The scientific publishing field — particularly in the biological sciences — acts like there is no internet,” says Lakshmi Jayashankar, a senior scientific reviewer with the federal government. “The paper peer review takes forever, and this hurts the scientists who are trying to put their results quickly into the public domain.”

One possible model already exists in mathematics and physics, where there is a long tradition of “pre-printing” articles. Studies are posted on an open website called arXiv.org, often before being peer-reviewed and published in journals. There, the articles are sorted and commented on by a community of moderators, providing another chance to filter problems before they make it to peer review.

“Posting preprints would allow scientific crowdsourcing to increase the number of errors that are caught, since traditional peer-reviewers cannot be expected to be experts in every sub-discipline,” writes Scott Hartman, a paleobiology PhD student at the University of Wisconsin.

And even after an article is published, researchers think the peer review process shouldn’t stop. They want to see more “post-publication” peer review on the web, so that academics can critique and comment on articles after they’ve been published. Sites like PubPeer and F1000Research have already popped up to facilitate that kind of post-publication feedback.

“We do this a couple of times a year at conferences,” writes Becky Clarkson, a geriatric medicine researcher at the University of Pittsburgh. “We could do this every day on the internet.”

The bottom line is that traditional peer review has never worked as well as we imagine it to — and it’s ripe for serious disruption.

Too much science is locked behind paywalls

After a study has been funded, conducted, and peer-reviewed, there’s still the question of getting it out so that others can read and understand its results.

Over and over, our respondents expressed dissatisfaction with how scientific research gets disseminated. Too much is locked away in paywalled journals, difficult and costly to access, they said. Some respondents also criticized the publication process itself for being too slow, bogging down the pace of research.

On the access question, a number of scientists argued that academic research should be free for all to read. They chafed against the current model, in which for-profit publishers put journals behind pricey paywalls.

A single article in Science will set you back $30; a year-long subscription to Cell will cost $279. Elsevier publishes 2,000 journals that can cost up to $10,000 or $20,000 a year for a subscription.

Many US institutions pay those journal fees for their employees, but not all scientists (or other curious readers) are so lucky. In a recent issue of Science, journalist John Bohannon described the plight of a PhD candidate at a top university in Iran. He calculated that the student would have to spend $1,000 a week just to read the papers he needed.

As Michael Eisen, a biologist at UC Berkeley and co-founder of the Public Library of Science (or PLOS), put it, scientific journals are trying to hold on to the profits of the print era in the age of the internet. Subscription prices have continued to climb, as a handful of big publishers (like Elsevier) have bought up more and more journals, creating mini knowledge fiefdoms.

“Large, publicly owned publishing companies make huge profits off of scientists by publishing our science and then selling it back to the university libraries at a massive profit (which primarily benefits stockholders),” Corina Logan, an animal behavior researcher at the University of Cambridge, noted. “It is not in the best interest of the society, the scientists, the public, or the research.” (In 2014, Elsevier reported a profit margin of nearly 40 percent and revenues close to $3 billion.)

“It seems wrong to me that taxpayers pay for research at government labs and universities but do not usually have access to the results of these studies, since they are behind paywalls of peer-reviewed journals,” added Melinda Simon, a postdoc microfluidics researcher at Lawrence Livermore National Lab.

Fixes for closed science

Many of our respondents urged their peers to publish in open access journals (along the lines of PeerJ or PLOS Biology). But there’s an inherent tension here. Career advancement can often depend on publishing in the most prestigious journals, like Science or Nature, which still have paywalls.

There’s also the question of how best to finance a wholesale transition to open access. After all, journals can never be entirely free. Someone has to pay for the editorial staff, maintaining the website, and so on. Right now, open access journals typically charge fees to those submitting papers, putting the burden on scientists who are already struggling for funding.

One radical step would be to abolish for-profit publishers altogether and move toward a nonprofit model. “For journals I could imagine that scientific associations run those themselves,” suggested Johannes Breuer, a postdoctoral researcher in media psychology at the University of Cologne. “If they go for online only, the costs for web hosting, copy-editing, and advertising (if needed) can be easily paid out of membership fees.”

As a model, Cambridge’s Tim Gowers has launched an online mathematics journal called Discrete Analysis. The nonprofit venture is owned and published by a team of scholars, it has no publisher middlemen, and access will be completely free for all.

Until wholesale reform happens, however, many scientists are going a much simpler route: illegally pirating papers.

Bohannon reported that millions of researchers around the world now use Sci-Hub, a site set up by Alexandra Elbakyan, a Russia-based neuroscientist, that illegally hosts more than 50 million academic papers. “As a devout pirate,” Elbakyan told us, “I think that copyright should be abolished.”

One respondent had an even more radical suggestion: that we abolish the existing peer-reviewed journal system altogether and simply publish everything online as soon as it’s done.

“Research should be made available online immediately, and be judged by peers online rather than having to go through the whole formatting, submitting, reviewing, rewriting, reformatting, resubmitting, etc etc etc that can take years,” writes Bruno Dagnino, formerly of the Netherlands Institute for Neuroscience. “One format, one platform. Judge by the whole community, with no delays.”

A few scientists have been taking steps in this direction. Rachel Harding, a genetic researcher at the University of Toronto, has set up a website called Lab Scribbles, where she publishes her lab notes on the structure of huntingtin proteins in real time, posting data as well as summaries of her breakthroughs and failures. The idea is to help share information with other researchers working on similar issues, so that labs can avoid needless overlap and learn from each other’s mistakes.

Not everyone might agree with approaches this radical; critics worry that too much sharing might encourage scientific free riding. Still, the common theme in our survey was transparency. Science is currently too opaque, research too difficult to share. That needs to change.

Science is poorly communicated to the public

“If I could change one thing about science, I would change the way it is communicated to the public by scientists, by journalists, and by celebrities,” writes Clare Malone, a postdoctoral researcher in a cancer genetics lab at Brigham and Women’s Hospital.

She wasn’t alone. Quite a few respondents in our survey expressed frustration at how science gets relayed to the public. They were distressed by the fact that so many laypeople hold on to completely unscientific ideas or have a crude view of how science works.

They griped that misinformed celebrities like Gwyneth Paltrow have an outsize influence over public perceptions about health and nutrition. (As the University of Alberta’s Timothy Caulfield once told us, “It’s incredible how much she is wrong about.”)

They have a point. Science journalism is often full of exaggerated, conflicting, or outright misleading claims. If you ever want to see a perfect example of this, check out “Kill or Cure,” a site where Paul Battley meticulously documents all the times the Daily Mail reported that various items — from antacids to yogurt — either cause cancer, prevent cancer, or sometimes do both.

Sometimes bad stories are peddled by university press shops. In 2015, the University of Maryland issued a press release claiming that a single brand of chocolate milk could improve concussion recovery. It was an absurd case of science hype.

Indeed, one review in BMJ found that one-third of university press releases contained either exaggerated claims of causation (when the study itself only suggested correlation), unwarranted implications about animal studies for people, or unfounded health advice.

But not everyone blamed the media and publicists alone. Other respondents pointed out that scientists themselves often oversell their work, even if it’s preliminary, because funding is competitive and everyone wants to portray their work as big and important and game-changing.

“You have this toxic dynamic where journalists and scientists enable each other in a way that massively inflates the certainty and generality of how scientific findings are communicated and the promises that are made to the public,” writes Daniel Molden, an associate professor of psychology at Northwestern University. “When these findings prove to be less certain and the promises are not realized, this just further erodes the respect that scientists get and further fuels scientists’ desire for appreciation.”

Fixes for better science communication

Opinions differed on how to improve this sorry state of affairs — some pointed to the media, some to press offices, others to scientists themselves.

Plenty of our respondents wished that more science journalists would move away from hyping single studies. Instead, they said, reporters ought to put new research findings in context, and pay more attention to the rigor of a study’s methodology than to the splashiness of the end results.

“On a given subject, there are often dozens of studies that examine the issue,” writes Brian Stacy of the US Department of Agriculture. “It is very rare for a single study to conclusively resolve an important research question, but many times the results of a study are reported as if they do.”

But it’s not just reporters who will need to shape up. The “toxic dynamic” of journalists, academic press offices, and scientists enabling one another to hype research can be tough to change, and many of our respondents pointed out that there were no easy fixes — though recognition was an important first step.

Some suggested the creation of credible referees that could rigorously distill the strengths and weaknesses of research. (Some variations of this are starting to pop up: The Genetic Expert News Service solicits outside experts to weigh in on big new studies in genetics and biotechnology.) Other respondents suggested that making research free to all might help tamp down media misrepresentations.

Still other respondents noted that scientists themselves should spend more time learning how to communicate with the public — a skill that tends to be under-rewarded in the current system.

“Being able to explain your work to a non-scientific audience is just as important as publishing in a peer-reviewed journal, in my opinion, but currently the incentive structure has no place for engaging the public,” writes Crystal Steltenpohl, a graduate assistant at DePaul University.

Reducing the perverse incentives around scientific research itself could also help reduce overhype. “If we reward research based on how noteworthy the results are, this will create pressure to exaggerate the results (through exploiting flexibility in data analysis, misrepresenting results, or outright fraud),” writes UC Davis’s Simine Vazire. “We should reward research based on how rigorous the methods and design are.”

Or perhaps we should focus on improving science literacy. Jeremy Johnson, a project coordinator at the Broad Institute, argued that bolstering science education could help ameliorate a lot of these problems. “Science literacy should be a top priority for our educational policy,” he said, “not an elective.”

Life as a young academic is incredibly stressful

When we asked researchers what they’d fix about science, many talked about the scientific process itself, about study design or peer review. These responses often came from tenured scientists who loved their jobs but wanted to make the broader scientific project even better.

But on the flip side, we heard from a number of researchers — many of them graduate students or postdocs — who were genuinely passionate about research but found the day-to-day experience of being a scientist grueling and unrewarding. Their comments deserve a section of their own.

Today, many tenured scientists and research labs depend on small armies of graduate students and postdoctoral researchers to perform their experiments and conduct data analysis.

These grad students and postdocs are often the primary authors on many studies. In a number of fields, such as the biomedical sciences, a postdoc position is a prerequisite before a researcher can get a faculty-level position at a university.

This entire system sits at the heart of modern-day science. (A new card game called Lab Wars pokes fun at these dynamics.)

But these low-level research jobs can be a grind. Postdocs typically work long hours and are relatively low-paid for their level of education — salaries are frequently pegged to stipends set by NIH National Research Service Award grants, which start at $43,692 and rise to $47,268 in year three.

Postdocs tend to be hired on for one to three years at a time, and in many institutions they are considered contractors, limiting their workplace protections. We heard repeatedly about extremely long hours and limited family leave benefits.

“Oftentimes this is problematic for individuals in their late 20s and early to mid-30s who have PhDs and who may be starting families while also balancing a demanding job that pays poorly,” wrote one postdoc, who asked for anonymity.

This lack of flexibility tends to disproportionately affect women — especially women planning to have families — which helps contribute to gender inequalities in research. (A 2012 paper found that female job applicants in academia are judged more harshly and are offered less money than males.) “There is very little support for female scientists and early-career scientists,” noted another postdoc.

“There is very little long-term financial security in today’s climate, very little assurance where the next paycheck will come from,” wrote William Kenkel, a postdoctoral researcher in neuroendocrinology at Indiana University. “Since receiving my PhD in 2012, I left Chicago and moved to Boston for a post-doc, then in 2015 I left Boston for a second post-doc in Indiana. In a year or two, I will move again for a faculty job, and that’s if I’m lucky. Imagine trying to build a life like that.”

This strain can also adversely affect the research that young scientists do. “Contracts are too short term,” noted another researcher. “It discourages rigorous research as it is difficult to obtain enough results for a paper (and hence progress) in two to three years. The constant stress drives otherwise talented and intelligent people out of science also.”

Because universities produce so many PhDs but have way fewer faculty jobs available, many of these postdoc researchers have limited career prospects. Some of them end up staying stuck in postdoc positions for five or 10 years or more.

“In the biomedical sciences,” wrote the first postdoc quoted above, “each available faculty position receives applications from hundreds or thousands of applicants, putting immense pressure on postdocs to publish frequently and in high impact journals to be competitive enough to attain those positions.”

Many young researchers pointed out that PhD programs do fairly little to train people for careers outside of academia. “Too many [PhD] students are graduating for a limited number of professor positions with minimal training for careers outside of academic research,” noted Don Gibson, a PhD candidate studying plant genetics at UC Davis.

Laura Weingartner, a graduate researcher in evolutionary ecology at Indiana University, agreed: “Few universities (specifically the faculty advisors) know how to train students for anything other than academia, which leaves many students hopeless when, inevitably, there are no jobs in academia for them.”

Add it up and it’s not surprising that we heard plenty of comments about anxiety and depression among both graduate students and postdocs. “There is a high level of depression among PhD students,” writes Gibson. “Long hours, limited career prospects, and low wages contribute to this emotion.”

A 2015 study at the University of California Berkeley found that 47 percent of PhD students surveyed could be considered depressed. The reasons for this are complex and can’t be solved overnight. Pursuing academic research is already an arduous, anxiety-ridden task that’s bound to take a toll on mental health.

But as Jennifer Walker explored recently at Quartz, many PhD students also feel isolated and unsupported, exacerbating those issues.

Fixes to keep young scientists in science

We heard plenty of concrete suggestions. Graduate schools could offer more generous family leave policies and child care for graduate students. They could also increase the number of female applicants they accept in order to balance out the gender disparity.

But some respondents also noted that workplace issues for grad students and postdocs were inseparable from some of the fundamental issues facing science that we discussed earlier. The fact that university faculty and research labs face immense pressure to publish — but have limited funding — makes it highly attractive to rely on low-paid postdocs.

“There is little incentive for universities to create jobs for their graduates or to cap the number of PhDs that are produced,” writes Weingartner. “Young researchers are highly trained but relatively inexpensive sources of labor for faculty.”

Some respondents also pointed to the mismatch between the number of PhDs produced each year and the number of academic jobs available.

A recent feature by Julie Gould in Nature explored a number of ideas for revamping the PhD system. One idea is to split the PhD into two programs: one for vocational careers and one for academic careers. The former would better train and equip graduates to find jobs outside academia.

This is hardly an exhaustive list. The core point underlying all these suggestions, however, was that universities and research labs need to do a better job of supporting the next generation of researchers. Indeed, that’s arguably just as important as addressing problems with the scientific process itself. Young scientists, after all, are by definition the future of science.

Weingartner concluded with a sentiment we saw all too frequently: “Many creative, hard-working, and/or underrepresented scientists are edged out of science because of these issues. Not every student or university will have all of these unfortunate experiences, but they’re pretty common. There are a lot of young, disillusioned scientists out there now who are expecting to leave research.”

Science is not doomed.

For better or worse, it still works. Look no further than the novel vaccines to prevent Ebola, the discovery of gravitational waves, or new treatments for stubborn diseases. And it’s getting better in many ways. See the work of meta-researchers who study and evaluate research — a field that has gained prominence over the past 20 years.

But science is conducted by fallible humans, and it hasn’t been human-proofed to protect against all our foibles. The scientific revolution began just 500 years ago. Only over the past 100 years has science become professionalized. There is still room to figure out how best to remove biases and align incentives.

To that end, here are some broad suggestions:

One: Science has to acknowledge and address its money problem. Science is enormously valuable and deserves ample funding. But the way incentives are set up can distort research.

Right now, small studies with bold results that can be quickly turned around and published in journals are disproportionately rewarded. By contrast, there are fewer incentives to conduct research that tackles important questions with robustly designed studies over long periods of time. Solving this won’t be easy, but it is at the root of many of the issues discussed above.

Two: Science needs to celebrate and reward failure. Accepting that we can learn more from dead ends in research and studies that failed would alleviate the “publish or perish” cycle. It would make scientists more confident in designing robust tests and not just convenient ones, in sharing their data and explaining their failed tests to peers, and in using those null results to form the basis of a career (instead of chasing those all-too-rare breakthroughs).

Three: Science has to be more transparent. Scientists need to publish the methods and findings more fully, and share their raw data in ways that are easily accessible and digestible for those who may want to reanalyze or replicate their findings.

There will always be waste and mediocre research, but as Stanford’s Ioannidis explains in a recent paper, a lack of transparency creates excess waste and diminishes the usefulness of too much research.

Again and again, we also heard from researchers, particularly in the social sciences, who felt that cognitive biases in their own work, influenced by pressures to publish and advance their careers, caused science to go off the rails. If more human-proofing and de-biasing were built into the process — through stronger peer review, cleaner and more consistent funding, and more transparency and data sharing — some of these biases could be mitigated.

These fixes will take time, grinding along incrementally — much like the scientific process itself. But the gains humans have made so far using even imperfect scientific methods would have been unimaginable 500 years ago. The gains from improving the process could prove just as staggering, if not more so.

By Julia Belluz, Brad Plumer, and Brian Resnick and originally published on July 14, 2016 on vox.com and can be seen here.

Why Neil deGrasse Tyson is a Philistine

Every now and again I come across a fantastic article that warrants posting here.  I have seen a recent proliferation of articles in respected publications pointing out, bemoaning, and/or highlighting increasing problems with the trustworthiness of the alleged findings of the contemporary scientific community.  I find these articles to be particularly interesting given how our society looks to science as a (the?) source of ultimate truths (often as a mutually exclusive alternative to spirituality).  This sort of scientism may be misplaced, and these articles delve into the pitfalls that come with such an approach.

Here are the links to the other articles I posted on this subject:

Be edified.

_______________

Neil deGrasse Tyson may be a gifted popularizer of science, but when it comes to humanistic learning more generally, he is a philistine. Some of us suspected this on the basis of the historically and theologically inept portrayal of Giordano Bruno in the opening episode of Tyson’s reboot of Carl Sagan’s Cosmos.

But now it’s been definitively demonstrated by a recent interview in which Tyson sweepingly dismisses the entire history of philosophy. Actually, he doesn’t just dismiss it. He goes much further — to argue that undergraduates should actively avoid studying philosophy at all. Because, apparently, asking too many questions “can really mess you up.”

Yes, he really did say that. Go ahead, listen for yourself, beginning at 20:19 — and behold the spectacle of an otherwise intelligent man and gifted teacher sounding every bit as anti-intellectual as a corporate middle manager or used-car salesman. He proudly proclaims his irritation with “asking deep questions” that lead to a “pointless delay in your progress” in tackling “this whole big world of unknowns out there.” When a scientist encounters someone inclined to think philosophically, his response should be to say, “I’m moving on, I’m leaving you behind, and you can’t even cross the street because you’re distracted by deep questions you’ve asked of yourself. I don’t have time for that.”

“I don’t have time for that.”

With these words, Tyson shows he’s very much a 21st-century American, living in a perpetual state of irritated impatience and anxious agitation. Don’t waste your time with philosophy! (And, one presumes, literature, history, the arts, or religion.) Only science will get you where you want to go! It gets results! Go for it! Hurry up! Don’t be left behind! Progress awaits!

There are many ways to respond to this indictment. One is to make the case for progress in philosophical knowledge. This would show that Tyson is wrong because he fails to recognize the real advances that happen in the discipline of philosophy over time.

I’ll leave this for others to do, since I don’t buy such progress myself. I very seriously believe that Plato, Aristotle, Aquinas, Hume, Kant, Hegel, Nietzsche, Heidegger, or Wittgenstein may have gotten just about everything right all those decades, centuries, and even millennia ago — and I know of no professional philosophers writing today who come anywhere close to rivaling the brilliance and depth of these thinkers.

Tyson is right about one thing: Philosophy is primarily about posing questions. But he’s wrong to view such questioning as a pernicious waste of time. If Socrates is to be believed, it may actually be the best way of life for a human being — and quite possibly the only way to avoid the dogmatism to which all thinking is prone, and to which Tyson himself certainly has fallen prey.

Allow me to explain.

Philosophy arose in the West when a handful of ancient Greeks began to question the truth of received (dogmatic) explanations for various occurrences. Whereas it was commonly presumed that the gods were responsible for the weather, crop yields, and a city’s success or failure on the battlefield, these early philosophers proposed, instead, that something called “nature,” which operates according to regular and necessary laws, might be the true cause.

These early philosophers were forerunners of today’s natural scientists, in other words, and one imagines that Tyson would treat them with the kind of condescending respect that scientists often reserve for their forerunners in the history of science. This is especially likely in the case of Democritus, who made an uncannily good guess when he proposed at some point late in the fifth century B.C. that all matter is composed of indivisible particles called “atoms.”

Socrates appears to have been one of these natural philosophers in his youth. But at some point he became convinced that the anti-dogmatism of his fellow philosophers concealed an even deeper dogmatism. Like the poets, politicians, and craftsmen he regularly talked to on the streets of Athens, the natural philosophers were incapable of giving a coherent account of their own activity and why it was good. They couldn’t explain the nature and origins of the concepts they presupposed in their own thinking. They couldn’t consistently define what they meant by such fundamental ideas as truth, goodness, nobility, beauty, and justice. Neither could they consistently explain what they hoped for from the knowledge they so passionately pursued.

If the natural philosophers truly wished to liberate themselves from dogma in all of its forms and live lives of complete intellectual wakefulness and self-awareness, they would need to pose far more searching questions. They would need to begin reflecting on human nature as both a part of and distinct from the wider natural world. They would need to begin examining their own minds and motives, very much including their motives in taking up the pursuit of philosophical knowledge in the first place.

Philosophy rightly understood is the mind’s rigorous, open-ended, radically undogmatic pursuit of this self-knowledge.

If what you crave is answers, the study of philosophy in this sense can be hugely frustrating and unsatisfying. But if you want to understand yourself as well as the world around you — including why you’re so impatient for answers, and progress, in the first place — then there’s nothing more thrilling and gratifying than training in philosophy and engaging with its tumultuous, indeterminate history.

Not that many young people today recognize its value. There is always an abundance of reasons to resist raising the peskiest, most difficult questions of oneself and the world. To that list, our time has added several more: technological distractions, economic imperatives, cultural prejudices, ideological commitments.

And now Neil deGrasse Tyson has added another — one specially aimed at persuading scientifically minded young people to reject self-examination and the self-knowledge that goes along with it.

He should be ashamed of himself.

By Damon Linker and originally published in The Week on May 6, 2014 and can be found here.

Big Science is Broken

Every now and again I come across a fantastic article that warrants posting here.  I have seen a recent proliferation of articles in respected publications pointing out, bemoaning, and/or highlighting increasing problems with the trustworthiness of the alleged findings of the contemporary scientific community.  I find these articles to be particularly interesting given how our society looks to science as a (the?) source of ultimate truths (often as a mutually exclusive alternative to spirituality).  This sort of scientism may be misplaced, and these articles delve into the pitfalls that come with such an approach.

Here are the links to the other articles I posted on this subject:

Be edified.

___________________________

Science is broken.

That’s the thesis of a must-read article in First Things magazine, in which William A. Wilson accumulates evidence that a lot of published research is false. But that’s not even the worst part.

Advocates of the existing scientific research paradigm usually smugly declare that while some published conclusions are surely false, the scientific method has “self-correcting mechanisms” that ensure that, eventually, the truth will prevail. Unfortunately for all of us, Wilson makes a convincing argument that those self-correcting mechanisms are broken.

For starters, there’s a “replication crisis” in science. This is particularly true in the field of experimental psychology, where far too many prestigious psychology studies simply can’t be reliably replicated. But it’s not just psychology. In 2011, the pharmaceutical company Bayer looked at 67 blockbuster drug discovery research findings published in prestigious journals, and found that three-fourths of them weren’t right. Another study of cancer research found that only 11 percent of preclinical cancer research could be reproduced. Even in physics, supposedly the hardest and most reliable of all sciences, Wilson points out that “two of the most vaunted physics results of the past few years — the announced discovery of both cosmic inflation and gravitational waves at the BICEP2 experiment in Antarctica, and the supposed discovery of superluminal neutrinos at the Swiss-Italian border — have now been retracted, with far less fanfare than when they were first published.”

What explains this? In some cases, human error. Much of the research world exploded in rage and mockery when it was discovered that a highly popularized finding by the economists Ken Rogoff and Carmen Reinhart linking higher public debt to lower growth was due to an Excel error. Steven Levitt, of Freakonomics fame, largely built his career on a paper arguing that abortion led to lower crime rates 20 years later because the aborted babies were disproportionately future criminals. Two economists went through the painstaking work of recoding Levitt’s statistical analysis — and found a basic arithmetic error.

Then there is outright fraud. In a 2011 survey of 2,000 research psychologists, over half admitted to selectively reporting those experiments that gave the result they were after. The survey also concluded that around 10 percent of research psychologists have engaged in outright falsification of data, and more than half have engaged in “less brazen but still fraudulent behavior such as reporting that a result was statistically significant when it was not, or deciding between two different data analysis techniques after looking at the results of each and choosing the more favorable.”

Then there’s everything in between human error and outright fraud: rounding out numbers the way that looks better, checking a result less thoroughly when it comes out the way you like, and so forth.

But surely peer review and the self-correcting machinery of replication catch these problems before they spread? Well, maybe not. There’s actually good reason to believe the exact opposite is happening.

The peer review process doesn’t work. Most observers of science guffaw at the so-called “Sokal affair,” where a physicist named Alan Sokal submitted a gibberish paper to an obscure social studies journal, which accepted it. Less famous is a similar hoodwinking of the very prestigious British Medical Journal, to which a paper with eight major errors was submitted. Not a single one of the 221 scientists who reviewed the paper caught all the errors in it, and only 30 percent of reviewers recommended that the paper be rejected. Amazingly, the reviewers who were warned that they were in a study and that the paper might have problems with it found no more flaws than the ones who were in the dark.

This is serious. In the preclinical cancer study mentioned above, the authors note that “some non-reproducible preclinical papers had spawned an entire field, with hundreds of secondary publications that expanded on elements of the original observation, but did not actually seek to confirm or falsify its fundamental basis.”

This gets into the question of the sociology of science. It’s a familiar bromide that “science advances one funeral at a time.” The greatest scientific pioneers were mavericks and weirdos. Most valuable scientific work is done by youngsters. Older scientists are more likely to be invested, both emotionally and from a career and prestige perspective, in the regnant paradigm, even though the spirit of science is the challenge of regnant paradigms.

Why, then, is our scientific process so structured as to reward the old and the prestigious? Government funding bodies and peer review bodies are inevitably staffed by the most hallowed (read: out of touch) practitioners in the field. The tenure process ensures that in order to further their careers, the youngest scientists in a given department must kowtow to their elders’ theories or run a significant professional risk. Peer review isn’t any good at keeping flawed studies out of major journals, but it can be deadly efficient at silencing heretical views.

All of this suggests that the current system isn’t just showing cracks, but is actually broken, and in need of major reform. There is very good reason to believe that much scientific research published today is false, there is no good way to sort the wheat from the chaff, and, most importantly, that the way the system is designed ensures that this will continue being the case.

As Wilson writes:

Even if self-correction does occur and theories move strictly along a lifecycle from less to more accurate, what if the unremitting flood of new, mostly false, results pours in faster? Too fast for the sclerotic, compromised truth-discerning mechanisms of science to operate? The result could be a growing body of true theories completely overwhelmed by an ever-larger thicket of baseless theories, such that the proportion of true scientific beliefs shrinks even while the absolute number of them continues to rise. Borges’ Library of Babel contained every true book that could ever be written, but it was useless because it also contained every false book, and both true and false were lost within an ocean of nonsense. [First Things]

This is a big problem, one that can’t be solved with a column. But the first step is admitting you have a problem.

Science, at heart an enterprise for mavericks, has become an enterprise for careerists. It’s time to flip the career track for science on its head. Instead of waiting until someone’s best years are behind her to award her academic freedom and prestige, abolish the PhD and grant fellowships to the best 22-year-olds, giving them the biggest budgets and the most freedoms for the first five or 10 years of their careers. Then, with only a few exceptions, shift them away from research to teaching or some other harmless activity. Only then can we begin to fix Big Science.

Originally published in The Week on April 18, 2016 and can be found here.

Why So Many Scientists are so Ignorant

Every now and again I come across a fantastic article that warrants posting here.  I have seen a recent proliferation of articles in respected publications pointing out, bemoaning, and/or highlighting increasing problems with the trustworthiness of the alleged findings of the contemporary scientific community.  I find these articles to be particularly interesting given how our society looks to science as a (the?) source of ultimate truths (often as a mutually exclusive alternative to spirituality).  This sort of scientism may be misplaced, and these articles delve into the pitfalls that come with such an approach.

Here are the links to the other articles I posted on this subject:

Be edified.

___________________________

Science has enormous cachet and authority in our culture — for very understandable reasons! And that has led scientists (and non-scientists who claim the mantle of science) to claim public authority, which is all well and good in their areas of expertise. The problem is when they claim authority in areas where they don’t have much expertise.

One recent example is Bill Nye, the “Science Guy,” who isn’t actually a scientist but owes his career as a popular entertainer to his purported scientific expertise. Bill Nye was recently asked to opine about whether philosophy is a worthy pursuit.

As Olivia Goldhill points out in Quartz, Nye’s answer was as self-assured as it was stunningly ignorant. Here’s Goldhill:

The video, which made the entire U.S. philosophy community collectively choke on its morning espresso, is hard to watch, because most of Nye’s statements are wrong. Not just kinda wrong, but deeply, ludicrously wrong. He merges together questions of consciousness and reality as though they’re one and the same topic, and completely misconstrues Descartes’ argument “I think, therefore I am” — to mention just two of many examples. [Quartz]

Nye fell into the same trap that Neil DeGrasse Tyson and Stephen Hawking have been caught up in. Philosophy, these men of science opine, is largely useless, because it can’t give us the sort of certain answers that science can, and amounts to little more than speculation.

There’s obviously a grain of truth in this. Philosophy does not give us the certainty that math or experimental science can (but even then — as many philosophers would point out — these fields do not give us as much certainty as is sometimes claimed). But that doesn’t mean that philosophy is worthless, or that it doesn’t have rigor. Indeed, in a sense, philosophy is inescapable. To argue that philosophy is useless is to do philosophy. Moreover, some existential questions simply can’t be escaped, and philosophy is one of the best, or at least least bad, ways we’ve come up with to address those questions.

Instead, we’ve become a philosophically illiterate culture at large. Seemingly every day, you can find examples of people displaying stunning cultural illiteracy — people in positions where that simply should not happen. The great philosophical tradition that our civilization is built on is left largely untaught. Even “liberal arts” curricula in many colleges do not teach the most influential thinkers. If our elites aren’t being taught this great tradition, then it should come as no surprise that some subset of that elite — experimental scientists and their hangers-on — don’t know it.

That’s part of the problem. But it’s just a part of it. After all, as a group, scientists have an obvious objective interest in experimental science being recognized as the only path to valuable knowledge, and therefore an interest in disdaining other paths to knowledge as less valid. People who listen to scientists opine about philosophy ought to keep that in mind.

And then there’s another factor at play. Many, though certainly not all, of the scientists who opine loudest about the uselessness of philosophy are public atheists. The form of atheism they promote is usually known as “eliminative materialism,” or the notion that matter is the only thing that exists. This theory is motivated by “scientism,” or the notion that the only knowable things are knowable by science. Somewhat paradoxically, these propositions are essentially religious — to dismiss entire swathes of human experience and human thought requires a venture of faith. They’re also not very smart religion, since they end up simply shouting away inconvenient propositions.

Fundamentalism is not a belief system or a religion; it’s a state of mind. There can be fundamentalist religion, fundamentalist atheism, fundamentalist socialism, fundamentalist libertarianism. What all of them have in common is, in David Bentley Hart’s words, “a stubborn refusal to think.” The fundamentalist is not the one whose ideas are too simple or too crude. He’s the one who stubbornly refuses to think through either other ideas, or those ideas themselves.

Sadly, many of our greatest minds give us an example of this state of mind.

Originally published on March 8, 2016 by The Week and can be found here.

How Our Botched Understanding of ‘Science’ Ruins Everything

Every now and again I come across a fantastic article that warrants posting here.  I have seen a recent proliferation of articles in respected publications pointing out, bemoaning, and/or highlighting increasing problems with the trustworthiness of the alleged findings of the contemporary scientific community.  I find these articles to be particularly interesting given how our society looks to science as a (the?) source of ultimate truths (often as a mutually exclusive alternative to spirituality).  This sort of scientism may be misplaced, and these articles delve into the pitfalls that come with such an approach.

Here are the links to the other articles I posted on this subject:

Be edified.
_________________

Here’s one certain sign that something is very wrong with our collective mind: Everybody uses a word, but no one is clear on what the word actually means.

One of those words is “science.”

Everybody uses it. Science says this, science says that. You must vote for me because science. You must buy this because science. You must hate the folks over there because science.

Look, science is really important. And yet, who among us can easily provide a clear definition of the word “science” that matches the way people employ the term in everyday life?

So let me explain what science actually is. Science is the process through which we derive reliable predictive rules through controlled experimentation. That’s the science that gives us airplanes and flu vaccines and the Internet. But what almost everyone means when he or she says “science” is something different.

To most people, capital-S Science is the pursuit of capital-T Truth. It is a thing engaged in by people wearing lab coats and/or doing fancy math that nobody else understands. The reason capital-S Science gives us airplanes and flu vaccines is not because it is an incremental engineering process but because scientists are really smart people.

In other words — and this is the key thing — when people say “science”, what they really mean is magic or truth.

A little history: The first proto-scientist was the Greek intellectual Aristotle, who wrote many manuals of his observations of the natural world and who also was the first person to propose a systematic epistemology, i.e., a philosophy of what science is and how people should go about it. Aristotle’s definition of science became famous in its Latin translation as: rerum cognoscere causas, or, “knowledge of the ultimate causes of things.” For this, you can often see Aristotle described in textbooks as the Father of Science.

The problem with that is that it’s absolutely not true. Aristotelian “science” was a major setback for all of human civilization. For Aristotle, science started with empirical investigation and then used theoretical speculation to decide what things are caused by.

What we now know as the “scientific revolution” was a repudiation of Aristotle: science, not as knowledge of the ultimate causes of things but as the production of reliable predictive rules through controlled experimentation.

Galileo disproved Aristotle’s “demonstration” that heavier objects should fall faster than light ones by creating a subtle controlled experiment (contrary to legend, he did not simply drop two objects from the Tower of Pisa). What was so important about this Galileo Moment was not that Galileo was right and Aristotle wrong; what was so important was how Galileo proved Aristotle wrong: through experiment.

This method of doing science was then formalized by one of the greatest thinkers in history, Francis Bacon. What distinguishes modern science from other forms of knowledge such as philosophy is that it explicitly forsakes abstract reasoning about the ultimate causes of things and instead tests empirical theories through controlled investigation. Science is not the pursuit of capital-T Truth. It’s a form of engineering — of trial by error. Scientific knowledge is not “true” knowledge, since it is knowledge about only specific empirical propositions — which is always, at least in theory, subject to further disproof by further experiment. Many people are surprised to hear this, but the founder of modern science said it. Bacon, who had a career in politics and was an experienced manager, actually wrote that scientists would have to be misled into thinking science is a pursuit of the truth, so that they would be dedicated to their work, even though it is not.

Why is all this ancient history important? Because science is important, and if we don’t know what science actually is, we are going to make mistakes.

The vast majority of people, including a great many very educated ones, don’t actually know what science is.

If you ask most people what science is, they will give you an answer that looks a lot like Aristotelian “science” — i.e., the exact opposite of what modern science actually is. Capital-S Science is the pursuit of capital-T Truth. And science is something that cannot possibly be understood by mere mortals. It delivers wonders. It has high priests. It has an ideology that must be obeyed.

This leads us astray. Since most people think math and lab coats equal science, people call economics a science, even though almost nothing in economics is actually derived from controlled experiments. Then people get angry at economists when they don’t predict impending financial crises, as if having tenure at a university endowed you with magical powers. Countless academic disciplines have been wrecked by professors’ urges to look “more scientific” by, like a cargo cult, adopting the externals of Baconian science (math, impenetrable jargon, peer-reviewed journals) without the substance and hoping it will produce better knowledge.

Because people don’t understand that science is built on experimentation, they don’t understand that studies in fields like psychology almost never prove anything, since only replicated experiment proves something and, humans being a very diverse lot, it is very hard to replicate any psychological experiment. This is how you get articles with headlines saying “Study Proves X” one day and “Study Proves the Opposite of X” the next day, each illustrated with stock photography of someone in a lab coat. That gets a lot of people to think that “science” isn’t all that it’s cracked up to be, since so many studies seem to contradict each other.

This is how you get people asserting that “science” commands this or that public policy decision, even though with very few exceptions, almost none of the policy options available to us as a polity have been tested through experiment (or can be). People think that a study that uses statistical wizardry to show correlations between two things is “scientific” because it uses high school math and was done by someone in a university building, except that, correctly speaking, it is not. While it is a fact that increased carbon dioxide in the atmosphere leads, all else equal, to higher atmospheric temperatures, the idea that we can predict the impact of global warming — and anti-global warming policies! — 100 years from now is sheer lunacy. But because it is done using math by people with tenure, we are told it is “science” even though by definition it is impossible to run an experiment on the year 2114.

This is how you get the phenomenon of philistines like Richard Dawkins and Jerry Coyne thinking science has made God irrelevant, even though, by definition, religion concerns the ultimate causes of things and, again, by definition, science cannot tell you about them.


You might think of science advocate, cultural illiterate, mendacious anti-Catholic propagandist, and possible serial fabulist Neil DeGrasse Tyson and anti-vaccine looney-toon Jenny McCarthy as polar opposites on a pro-science/anti-science spectrum, but in reality they are the two sides of the same coin. Both of them think science is like magic, except one of them is part of the religion and the other isn’t.

The point isn’t that McCarthy isn’t wrong on vaccines. (She is wrong.) The point is that she is the predictable result of a society that has forgotten what “science” means. Because we lump many different things together, there are bits of “science” that aren’t actual science that get lumped into society’s understanding of what science is. It’s very profitable for those who grab some of the social prestige that accrues to science, but it means we live in a state of confusion.

It also means that for all our bleating about “science” we live in an astonishingly unscientific and anti-scientific society. We have plenty of anti-science people, but most of our “pro-science” people are really pro-magic (and therefore anti-science).

This bizarre misunderstanding of science yields the paradox that even as we expect the impossible from science (“Please, Mr Economist, peer into your crystal ball and tell us what will happen if Obama raises/cuts taxes”), we also have a very anti-scientific mindset in many areas.

For example, our approach to education is positively obscurantist. Nobody uses rigorous experimentation to determine better methods of education, and someone who would dare to do so would be laughed out of the room. The first and most momentous scientist of education, Maria Montessori, produced an experimentally based, scientific education method that has been largely ignored by our supposedly science-enamored society. We have departments of education at very prestigious universities, and absolutely no science happens at any of them.

Our approach to public policy is also astonishingly pre-scientific. There have been almost no large-scale truly scientific experiments on public policy since the welfare randomized field trials of the 1990s, and nobody seems to realize how barbaric this is. We have people at Brookings who can run spreadsheets, and Ezra Klein can write about it and say it proves things, so we have all the science we need, thank you very much. But that is not science.

Modern science is one of the most important inventions of human civilization. But the reason it took us so long to invent it, and the reason we still haven’t quite understood what it is 500 years later, is that it is very hard to be scientific. Not because science is “expensive” but because it requires a fundamental epistemic humility, and humility is the hardest thing to wring out of the bombastic animals we are.

But until we take science for what it really is, which is both more and less than magic, we will still have one foot in the barbaric dark.

Originally published in The Week on September 19, 2014 and can be found here.

How Academia’s Liberal Bias is Killing Social Science

Every now and again I come across a fantastic article that warrants posting here.  I have seen a recent proliferation of articles in respected publications pointing out, bemoaning, and/or highlighting increasing problems with the trustworthiness of the alleged findings of the contemporary scientific community.  I find these articles to be particularly interesting given how our society looks to science as a (the?) source of ultimate truths (often as a mutually exclusive alternative to spirituality).  This sort of scientism may be misplaced, and these articles delve into the pitfalls that come with such an approach.

Here are the links to the other articles I posted on this subject:

Be edified.
_________________

I have had the following experience more than once: I am speaking with a professional academic who is a liberal. The subject of the underrepresentation of conservatives in academia comes up. My interlocutor admits that this is indeed a reality, but says the reason why conservatives are underrepresented in academia is because they don’t want to be there, or they’re just not smart enough to cut it. I say: “That’s interesting. For which other underrepresented groups do you think that’s true?” An uncomfortable silence follows.

I point this out not to score culture-war points, but because it’s actually a serious problem. The social sciences and humanities cannot be completely divorced from the philosophy of those who practice them. And groupthink causes some questions not to be asked, and some answers not to be overly scrutinized. It is making our science worse. Anyone who cares about the advancement of knowledge and science should care about this problem.

That’s why I was very gratified to read this very enlightening draft paper written by a number of social psychologists on precisely this topic, attacking the lack of political diversity in their profession and calling for reform. For those who have the time and care about academia, the whole thing truly makes for enlightening reading. The main author of the paper is Jonathan Haidt, well known for his Moral Foundations Theory (and a self-described liberal, if you care to know).

Although the paper focuses on the field of social psychology, its introduction as well as its overall logic make many of its points applicable to disciplines beyond social psychology.

The authors first note the well-known problems of groupthink in any collection of people engaged in a quest for the truth: uncomfortable questions get suppressed, confirmation bias runs amok, and so on.

But it is when the authors move to specific examples that the paper is most enlightening.

They start by debunking published (and often well-publicized) social psychology findings that seem to suggest moral or intellectual superiority on the part of liberals over conservatives, which smartly serves to debunk both the notion that social psychology is bereft of conservatives because they’re not smart enough to cut it, and that groupthink doesn’t produce shoddy science. For example, a study that sought to show that conservatives reach their beliefs only through denying reality achieved that result by describing ideological liberal beliefs as “reality,” surveying people on whether they agreed with them, and then concluding that those who disagree with them are in denial of reality — and lo, people in that group are much more likely to be conservative! This has nothing to do with science, and yet in a field with such groupthink, it can get published in peer-reviewed journals and passed off as “science,” complete with a Vox stenographic exercise at the end of the rainbow. A field where this is possible is in dire straits indeed.

The study also goes over many data points that suggest discrimination against conservatives in social psychology. For example, at academic conferences, the number of self-reported conservatives by a show of hands is even lower than the already low numbers in online surveys, suggesting that conservative social psychologists are afraid of identifying as such in front of their colleagues. The authors say they have all heard groups of social psychologists make jokes at the expense of conservatives — not just at bars, but from the pulpits of academic conferences. (This probably counts as micro-aggression.)

The authors also drop this bombshell: In one survey they conducted of academic social psychologists, “82 percent admitted that they would be at least a little bit prejudiced against a conservative [job] candidate.” Eighty-two percent! It’s often said discrimination works through unconscious bias, but here 82 percent even have conscious bias.

The authors also submitted different test studies to different peer-review boards. The methodology was identical, and the variable was that the purported findings either went for, or against, the liberal worldview (for example, one found evidence of discrimination against minority groups, and another found evidence of “reverse discrimination” against straight white males). Despite equal methodological strengths, the studies that went against the liberal worldview were criticized and rejected, and those that went with it were not.

I hope this paper starts a conversation. Again, this is not about culture-war squabbling — it is about something much more important: the search for knowledge.

This article was originally published in The Week on December 17, 2014 and can be found here.


How a Liberal Bias is Killing Science

Every now and again I come across a fantastic article that warrants posting here.  I have seen a recent proliferation of articles in respected publications pointing out, bemoaning, and/or highlighting increasing problems with the trustworthiness of the alleged findings of the contemporary scientific community.  I find these articles to be particularly interesting given how our society looks to science as a (the?) source of ultimate truths (often as a mutually exclusive alternative to spirituality).  This sort of scientism may be misplaced, and these articles delve into the pitfalls that come with such an approach.

Here is the link to the other article I posted on this subject:

Be edified.
_________________

Oh boy. Remember when a study came out that said that conservative political beliefs are associated with psychotic traits, such as authoritarianism and tough-mindedness, while liberalism is associated with “social desirability”?

The American Journal of Political Science recently had to print a somewhat embarrassing correction, as the invaluable website Retraction Watch pointed out: It turns out somebody made an Excel error. And the study’s results aren’t a little off. They aren’t a lot off. They are exactly backwards.

 Writes the American Journal of Political Science:

The interpretation of the coding of the political attitude items in the descriptive and preliminary analyses portion of the manuscript was exactly reversed. Thus, where we indicated that higher scores in Table 1 (page 40) reflect a more conservative response, they actually reflect a more liberal response. [American Journal of Political Science]

In other words, at least according to this study, it’s liberals who are psychotic and conservatives who are awesome.

Well, obviously, as a conservative, I first had to stop laughing for 10 minutes before I could catch my breath.

I could also make a crassly political point, like of course liberals are psychotic given liberal authoritarianism, and of course conservatives are more balanced — after all, we’re happier and we have better sex.

But actually, this is bigger than that. Adds Retraction Watch, “That 2012 paper has been cited 45 times, according to Thomson Reuters Web of Science.”

I’ve been a harsh critic of shoddy scientific research. Criticizing American academia’s liberal bias earned me a lot of pushback, mostly from progressives on Twitter patiently explaining to me that it’s not “bias” to turn down equally qualified conservatives for tenure or promotion, or to reject their papers, since after all conservatives are intrinsically unreasonable and stupid (they could have added psychotic for good measure. After all, science proves it!).

Contacted by Retraction Watch, the authors of the study hem and haw and say that their point was not about conservatives or liberals, but about the magnitude of differences between those camps. Yeah, right.

Actually, as independent reviewers point out, the paper itself is so shoddy that we conservatives shouldn’t use it to crow about how liberals are psychos. The correlations are “spurious,” explains one reviewer. And looking at the methodology, I couldn’t help but agree.

The reason the study was made, and the reason it was published, and the reason it was cited so often despite its shoddy methodology, was simply to smear conservatives, and to use “science” as a weapon in our soul-deadening cultural-political war.

Isn’t it time we see that this is killing science and its credibility? Isn’t it time to do something about it? That is, if science is an actual disinterested pursuit, and not a priestly class that, like all priestly classes, eventually forgets its calling and just seeks to aggrandize its power and control the masses.

The political bias problem is merely the visible part of the iceberg.

Science’s problems run much deeper. The social prestige associated with the word science has led to excesses in many directions, leading us to believe that “science” is the equivalent of “magic” when it is a specific and flawed process for doing important but limited things. We’re not helped by the fact that most scientists are themselves ignorant about how science works.

The end result is that Big Science is now broken: it is nearly certain that most published research findings are false — and, most importantly, nobody has any idea what to do about it. And nobody is panicking! Because science is infallible, so how could anything be wrong with it?

It’s time for scientists and the scientific establishment to wake up. Only 11 percent of preclinical cancer research could be reproduced according to a recent survey. False results have spawned entire fields of literature and of study and grants. And this is just one example. At stake is much more than political and culture wars.

This article was originally published in The Week on June 10, 2016 and can be found here.

Scientific Regress

Every now and again I come across a fantastic article that warrants posting here; I just came across one in First Things, which is a journal (print and online) published by the Institute on Religion and Public Life.  It is a scholarly and rather academic publication which has many well respected contributors.  I found this piece to be particularly interesting given how our society looks to science as a (the?) source of ultimate truths (often as a mutually exclusive alternative to spirituality).  This sort of scientism may be misplaced, and this article delves into the pitfalls that come with such an approach.  Be edified.

______

The problem with science is that so much of it simply isn’t. Last summer, the Open Science Collaboration announced that it had tried to replicate one hundred published psychology experiments sampled from three of the most prestigious journals in the field. Scientific claims rest on the idea that experiments repeated under nearly identical conditions ought to yield approximately the same results, but until very recently, very few had bothered to check in a systematic way whether this was actually the case. The OSC was the biggest attempt yet to check a field’s results, and the most shocking. In many cases, they had used original experimental materials, and sometimes even performed the experiments under the guidance of the original researchers. Of the studies that had originally reported positive results, an astonishing 65 percent failed to show statistical significance on replication, and many of the remainder showed greatly reduced effect sizes.

Their findings made the news, and quickly became a club with which to bash the social sciences. But the problem isn’t just with psychology. There’s an unspoken rule in the pharmaceutical industry that half of all academic biomedical research will ultimately prove false, and in 2011 a group of researchers at Bayer decided to test it. Looking at sixty-seven recent drug discovery projects based on preclinical cancer biology research, they found that in more than 75 percent of cases the published data did not match up with their in-house attempts to replicate. These were not studies published in fly-by-night oncology journals, but blockbuster research featured in Science, Nature, Cell, and the like. The Bayer researchers were drowning in bad studies, and it was to this, in part, that they attributed the mysteriously declining yields of drug pipelines. Perhaps so many of these new drugs fail to have an effect because the basic research on which their development was based isn’t valid.

When a study fails to replicate, there are two possible interpretations. The first is that, unbeknownst to the investigators, there was a real difference in experimental setup between the original investigation and the failed replication. These are colloquially referred to as “wallpaper effects,” the joke being that the experiment was affected by the color of the wallpaper in the room. This is the happiest possible explanation for failure to reproduce: It means that both experiments have revealed facts about the universe, and we now have the opportunity to learn what the difference was between them and to incorporate a new and subtler distinction into our theories.

The other interpretation is that the original finding was false. Unfortunately, an ingenious statistical argument shows that this second interpretation is far more likely. First articulated by John Ioannidis, a professor at Stanford University’s School of Medicine, this argument proceeds by a simple application of Bayesian statistics. Suppose that there are a hundred and one stones in a certain field. One of them has a diamond inside it, and, luckily, you have a diamond-detecting device that advertises 99 percent accuracy. After an hour or so of moving the device around, examining each stone in turn, suddenly alarms flash and sirens wail while the device is pointed at a promising-looking stone. What is the probability that the stone contains a diamond?

Most would say that if the device advertises 99 percent accuracy, then there is a 99 percent chance that the device is correctly discerning a diamond, and a 1 percent chance that it has given a false positive reading. But consider: Of the one hundred and one stones in the field, only one is truly a diamond. Granted, our machine has a very high probability of correctly declaring it to be a diamond. But there are many more diamond-free stones, and while the machine only has a 1 percent chance of falsely declaring each of them to be a diamond, there are a hundred of them. So if we were to wave the detector over every stone in the field, it would, on average, sound twice—once for the real diamond, and once when a false reading was triggered by a stone. If we know only that the alarm has sounded, these two possibilities are roughly equally probable, giving us an approximately 50 percent chance that the stone really contains a diamond.
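
For readers who want to check the arithmetic, here is a minimal sketch, in Python rather than prose, of the Bayes calculation behind the diamond example; it is an illustration added for this post, not something from the original article.

```python
# Bayes' rule applied to the diamond-detector example: 101 stones,
# exactly 1 containing a diamond, and a detector that is 99 percent accurate.
prior = 1 / 101          # chance a randomly chosen stone holds the diamond
sensitivity = 0.99       # P(alarm | diamond)
false_positive = 0.01    # P(alarm | no diamond)

# Total probability that the alarm sounds for a randomly chosen stone.
p_alarm = sensitivity * prior + false_positive * (1 - prior)

# Probability the stone really holds a diamond, given that the alarm sounded.
p_diamond_given_alarm = sensitivity * prior / p_alarm
print(f"P(diamond | alarm) = {p_diamond_given_alarm:.3f}")  # about 0.50
```

The posterior comes out to roughly 0.5, which is the article’s point: a 99-percent-accurate detector does not mean that 99 percent of its alarms are genuine.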

This is a simplified version of the argument that Ioannidis applies to the process of science itself. The stones in the field are the set of all possible testable hypotheses, the diamond is a hypothesized connection or effect that happens to be true, and the diamond-detecting device is the scientific method. A tremendous amount depends on the proportion of possible hypotheses which turn out to be true, and on the accuracy with which an experiment can discern truth from falsehood. Ioannidis shows that for a wide variety of scientific settings and fields, the values of these two parameters are not at all favorable.

For instance, consider a team of molecular biologists investigating whether a mutation in one of the countless thousands of human genes is linked to an increased risk of Alzheimer’s. The probability of a randomly selected mutation in a randomly selected gene having precisely that effect is quite low, so just as with the stones in the field, a positive finding is more likely than not to be spurious—unless the experiment is unbelievably successful at sorting the wheat from the chaff. Indeed, Ioannidis finds that in many cases, approaching even 50 percent true positives requires unimaginable accuracy. Hence the eye-catching title of his paper: “Why Most Published Research Findings Are False.”

What about accuracy? Here, too, the news is not good. First, it is a de facto standard in many fields to use one in twenty as an acceptable cutoff for the rate of false positives. To the naive ear, that may sound promising: Surely it means that just 5 percent of scientific studies report a false positive? But this is precisely the same mistake as thinking that a stone has a 99 percent chance of containing a diamond just because the detector has sounded. What it really means is that for each of the countless false hypotheses that are contemplated by researchers, we accept a 5 percent chance that it will be falsely counted as true—a decision with a considerably more deleterious effect on the proportion of correct studies.
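
The same bookkeeping can be applied to published findings. The sketch below is only illustrative; the base rate of true hypotheses and the statistical power are assumptions chosen for the example, not figures from the article.

```python
# How often is a "statistically significant" result actually true?
# Illustrative assumptions: 1 in 100 tested hypotheses is true, and a real
# effect is detected 80 percent of the time; alpha is the usual 0.05 cutoff.
alpha = 0.05       # accepted false-positive rate
power = 0.80       # chance a real effect is detected
base_rate = 0.01   # assumed fraction of tested hypotheses that are true

true_positives = power * base_rate
false_positives = alpha * (1 - base_rate)

ppv = true_positives / (true_positives + false_positives)
print(f"Share of positive findings that are true: {ppv:.2f}")  # about 0.14
```

Under those assumptions only about one positive finding in seven reflects a real effect, even though the nominal false-positive cutoff is one in twenty.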

Paradoxically, the situation is actually made worse by the fact that a promising connection is often studied by several independent teams. To see why, suppose that three groups of researchers are studying a phenomenon, and when all the data are analyzed, one group announces that it has discovered a connection, but the other two find nothing of note. Assuming that all the tests involved have a high statistical power, the lone positive finding is almost certainly the spurious one. However, when it comes time to report these findings, what happens? The teams that found a negative result may not even bother to write up their non-discovery. After all, a report that a fanciful connection probably isn’t true is not the stuff of which scientific prizes, grant money, and tenure decisions are made.

And even if they did write it up, it probably wouldn’t be accepted for publication. Journals are in competition with one another for attention and “impact factor,” and are always more eager to report a new, exciting finding than a killjoy failure to find an association. In fact, both of these effects can be quantified. Since the majority of all investigated hypotheses are false, if positive and negative evidence were written up and accepted for publication in equal proportions, then the majority of articles in scientific journals should report no findings. When tallies are actually made, though, the precise opposite turns out to be true: Nearly every published scientific article reports the presence of an association. There must be massive bias at work.

Ioannidis’s argument would be potent even if all scientists were angels motivated by the best of intentions, but when the human element is considered, the picture becomes truly dismal. Scientists have long been aware of something euphemistically called the “experimenter effect”: the curious fact that when a phenomenon is investigated by a researcher who happens to believe in the phenomenon, it is far more likely to be detected. Much of the effect can likely be explained by researchers unconsciously giving hints or suggestions to their human or animal subjects, perhaps in something as subtle as body language or tone of voice. Even those with the best of intentions have been caught fudging measurements, or making small errors in rounding or in statistical analysis that happen to give a more favorable result. Very often, this is just the result of an honest statistical error that leads to a desirable outcome, and therefore it isn’t checked as deliberately as it might have been had it pointed in the opposite direction.

But, and there is no putting it nicely, deliberate fraud is far more widespread than the scientific establishment is generally willing to admit. One way we know that there’s a great deal of fraud occurring is that if you phrase your question the right way, ­scientists will confess to it. In a survey of two thousand research psychologists conducted in 2011, over half of those surveyed admitted outright to selectively reporting those experiments which gave the result they were after. Then the investigators asked respondents anonymously to estimate how many of their fellow scientists had engaged in fraudulent behavior, and promised them that the more accurate their guesses, the larger a contribution would be made to the charity of their choice. Through several rounds of anonymous guessing, refined using the number of scientists who would admit their own fraud and other indirect measurements, the investigators concluded that around 10 percent of research psychologists have engaged in outright falsification of data, and more than half have engaged in less brazen but still fraudulent behavior such as reporting that a result was statistically significant when it was not, or deciding between two different data analysis techniques after looking at the results of each and choosing the more favorable.

Many forms of statistical falsification are devilishly difficult to catch, or close enough to a genuine judgment call to provide plausible deniability. Data analysis is very much an art, and one that affords even its most scrupulous practitioners a wide degree of latitude. Which of these two statistical tests, both applicable to this situation, should be used? Should a subpopulation of the research sample with some common criterion be picked out and reanalyzed as if it were the totality? Which of the hundreds of coincident factors measured should be controlled for, and how? The same freedom that empowers a statistician to pick a true signal out of the noise also enables a dishonest scientist to manufacture nearly any result he or she wishes. Cajoling statistical significance where in reality there is none, a practice commonly known as “p-hacking,” is particularly easy to accomplish and difficult to detect on a case-by-case basis. And since the vast majority of studies still do not report their raw data along with their findings, there is often nothing to re-analyze and check even if there were volunteers with the time and inclination to do so.
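How much can this latitude matter? The following simulation is a purely hypothetical illustration of one such practice: the data contain no real effect at all, but the analyst is free to examine ten subgroups and report any comparison that happens to clear the conventional 5 percent threshold:

```python
import numpy as np
from scipy import stats

# Hypothetical illustration of subgroup-hunting. The null hypothesis is true by
# construction, yet looking at many slices of the data inflates the chance of
# finding "significance" somewhere. All parameters are illustrative.

rng = np.random.default_rng(0)
n_experiments = 10_000
n_per_group = 40
n_subgroups = 10
alpha = 0.05

false_positives = 0
for _ in range(n_experiments):
    # Treatment and control are drawn from the same distribution: no real effect.
    treatment = rng.normal(size=(n_subgroups, n_per_group))
    control = rng.normal(size=(n_subgroups, n_per_group))
    # Declare success if any single subgroup comparison clears the threshold.
    p_values = [stats.ttest_ind(treatment[i], control[i]).pvalue
                for i in range(n_subgroups)]
    if min(p_values) < alpha:
        false_positives += 1

print(f"Nominal false-positive rate: {alpha:.0%}")
print(f"Observed rate with 10 looks: {false_positives / n_experiments:.0%}")  # ~40%
```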

One creative attempt to estimate how widespread such dishonesty really is involves comparisons between fields of varying “hardness.” Its author, Daniele Fanelli, theorized that the farther from physics one gets, the more freedom creeps into one’s experimental methodology, and the fewer constraints there are on a scientist’s conscious and unconscious biases. If all scientists were constantly attempting to influence the results of their analyses, but had more opportunities to do so the “softer” the science, then we might expect that the social sciences have more papers that confirm a sought-after hypothesis than do the physical sciences, with medicine and biology somewhere in the middle. This is exactly what the study discovered: A paper in psychology or psychiatry is about five times as likely to report a positive result as one in astrophysics. This is not necessarily evidence that psychologists are all consciously or unconsciously manipulating their data—it could also be evidence of massive publication bias—but either way, the result is disturbing.

Speaking of physics, how do things go with this hardest of all hard sciences? Better than elsewhere, it would appear, and it’s unsurprising that those who claim all is well in the world of science reach so reliably and so insistently for examples from physics, preferably of the most theoretical sort. Folk histories of physics combine borrowed mathematical luster and Whiggish triumphalism in a way that journalists seem powerless to resist. The outcomes of physics experiments and astronomical observations seem so matter-of-fact, so concretely and immediately connected to underlying reality, that they might let us gingerly sidestep all of these issues concerning motivated or sloppy analysis and interpretation. “E pur si muove” (“and yet it moves”), Galileo is said to have remarked, and one can almost hear in his sigh the hopes of a hundred science journalists for whom it would be all too convenient if Nature were always willing to tell us whose theory is more correct.

And yet the flight to physics rather gives the game away, since measured any way you like—volume of papers, number of working researchers, total amount of funding—deductive, theory-building physics in the mold of Newton and Lagrange, Maxwell and Einstein, is a tiny fraction of modern science as a whole. In fact, it also makes up a tiny fraction of modern physics. Far more common is the delicate and subtle art of scouring inconceivably vast volumes of noise with advanced software and mathematical tools in search of the faintest signal of some hypothesized but never before observed phenomenon, whether an astrophysical event or the decay of a subatomic particle. This sort of work is difficult and beautiful in its own way, but it is not at all self-evident in the manner of a falling apple or an elliptical planetary orbit, and it is very sensitive to the same sorts of accidental contamination, deliberate fraud, and unconscious bias as the medical and social-scientific studies we have discussed. Two of the most vaunted physics results of the past few years—the announced detection by the BICEP2 experiment in Antarctica of primordial gravitational waves, hailed as evidence for cosmic inflation, and the supposed discovery of superluminal neutrinos by the OPERA experiment, which beamed them from CERN to a detector in Italy—have since been withdrawn, with far less fanfare than when they were first announced.

Many defenders of the scientific establishment will admit to this problem, then offer hymns to the self-correcting nature of the scientific method. Yes, the path is rocky, they say, but peer review, competition between researchers, and the comforting fact that there is an objective reality out there whose test every theory must withstand or fail, all conspire to mean that sloppiness, bad luck, and even fraud are exposed and swept away by the advances of the field.

So the dogma goes. But these claims are rarely treated like hypotheses to be tested. Partisans of the new scientism are fond of recounting the “Sokal hoax”—physicist Alan Sokal submitted a paper heavy on jargon but full of false and meaningless statements to the postmodern cultural studies journal Social Text, which accepted and published it without quibble—but are unlikely to mention a similar experiment conducted on reviewers of the prestigious British Medical Journal. The experimenters deliberately modified a paper to include eight different major errors in study design, methodology, data analysis, and interpretation of results, and not a single one of the 221 reviewers who participated caught all of the errors. On average, they caught fewer than two—and, unbelievably, these results held up even in the subset of reviewers who had been specifically warned that they were participating in a study and that there might be something a little odd in the paper that they were reviewing. In all, only 30 percent of reviewers recommended that the intentionally flawed paper be rejected.

If peer review is good at anything, it appears to be keeping unpopular ideas from being published. Consider the finding of another (yes, another) of these replicability studies, this time from a group of cancer researchers. In addition to reaching the now unsurprising conclusion that only a dismal 11 percent of the preclinical cancer research they examined could be validated after the fact, the authors identified another horrifying pattern: The “bad” papers that failed to replicate were, on average, cited far more often than the papers that did! As the authors put it, “some non-reproducible preclinical papers had spawned an entire field, with hundreds of secondary publications that expanded on elements of the original observation, but did not actually seek to confirm or falsify its fundamental basis.”

What they do not mention is that once an entire field has been created—with careers, funding, appointments, and prestige all premised upon an experimental result which was utterly false due either to fraud or to plain bad luck—pointing this fact out is not likely to be very popular. Peer review switches from merely useless to actively harmful. It may be ineffective at keeping papers with analytic or methodological flaws from being published, but it can be deadly effective at suppressing criticism of a dominant research paradigm. Even if a critic is able to get his work published, pointing out that the house you’ve built together is situated over a chasm will not endear him to his colleagues or, more importantly, to his mentors and patrons.

Older scientists contribute to the propagation of scientific fields in ways that go beyond educating and mentoring a new generation. In many fields, it’s common for an established and respected researcher to serve as “senior author” on a bright young star’s first few publications, lending his prestige and credibility to the result, and signaling to reviewers that he stands behind it. In the natural sciences and medicine, senior scientists are frequently the controllers of laboratory resources—which these days include not just scientific instruments, but dedicated staffs of grant proposal writers and regulatory compliance experts—without which a young scientist has no hope of accomplishing significant research. Older scientists control access to scientific prestige by serving on the editorial boards of major journals and on university tenure-review committees. Finally, the government bodies that award the vast majority of scientific funding are either staffed or advised by distinguished practitioners in the field.

All of which makes it rather more bothersome that older scientists are the most likely to be invested in the regnant research paradigm, whatever it is, even if it’s based on an old experiment that has never successfully been replicated. The quantum physicist Max Planck famously quipped: “A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it.” Planck may have been too optimistic. A recent paper from the National Bureau of Economic Research studied what happens to scientific subfields when star researchers die suddenly and at the peak of their abilities, and finds that while there is considerable evidence that young researchers are reluctant to challenge scientific superstars, a sudden and unexpected death does not significantly improve the situation, particularly when “key collaborators of the star are in a position to channel resources (such as editorial goodwill or funding) to insiders.”

In the idealized Popperian view of scientific progress, new theories are proposed to explain new evidence that contradicts the predictions of old theories. The heretical philosopher of science Paul Feyerabend, on the other hand, claimed that new theories frequently contradict the best available evidence—at least at first. Often, the old observations were inaccurate or irrelevant, and it was the invention of a new theory that stimulated experimentalists to go hunting for new observational techniques to test it. But the success of this “unofficial” process depends on a blithe disregard for evidence while the vulnerable young theory weathers an initial storm of skepticism. Yet if Feyerabend is correct, and an unpopular new theory can ignore or reject experimental data long enough to get its footing, how much longer can an old and creaky theory, buttressed by the reputations and influence and political power of hundreds of established practitioners, continue to hang in the air even when the results upon which it is premised are exposed as false?

The hagiographies of science are full of paeans to the self-correcting, self-healing nature of the enterprise. But if raw results are so often false, the filtering mechanisms so ineffective, and the self-correcting mechanisms so compromised and slow, then science’s approach to truth may not even be monotonic. That is, past theories, now “refuted” by evidence and replaced with new approaches, may be closer to the truth than what we think now. Such regress has happened before: In the nineteenth century, the (correct) vitamin C deficiency theory of scurvy was replaced by the false belief that scurvy was caused by proximity to spoiled foods. The heliocentric model of the solar system, proposed in antiquity by Aristarchus, was supplanted by the geocentric theory of Ptolemy. The Whiggish view of scientific history is so dominant today that this possibility is spoken of only in hushed whispers, but ours is a world in which things once known can be lost and buried.

And even if self-correction does occur and theories move strictly along a lifecycle from less to more accurate, what if the unremitting flood of new, mostly false, results pours in faster? Too fast for the sclerotic, compromised truth-discerning mechanisms of science to operate? The result could be a growing body of true theories completely overwhelmed by an ever-larger thicket of baseless theories, such that the proportion of true scientific beliefs shrinks even while the absolute number of them continues to rise. Borges’s Library of Babel contained every true book that could ever be written, but it was useless because it also contained every false book, and both true and false were lost within an ocean of nonsense.

Which brings us to the odd moment in which we live. At the same time as an ever more bloated scientific bureaucracy churns out masses of research results, the majority of which are likely outright false, scientists themselves are lauded as heroes and science is upheld as the only legitimate basis for policy-making. There’s reason to believe that these phenomena are linked. When a formerly ascetic discipline suddenly attains a measure of influence, it is bound to be flooded by opportunists and charlatans, whether it’s the National Academy of Sciences or the monastery of Cluny.

This comparison is not as outrageous as it seems: Like monasticism, science is an enterprise with a superhuman aim whose achievement is forever beyond the capacities of the flawed humans who aspire toward it. The best scientists know that they must practice a sort of mortification of the ego and cultivate a dispassion that allows them to report their findings, even when those findings might mean the dashing of hopes, the drying up of financial resources, and the loss of professional prestige. It should be no surprise that even after outgrowing the monasteries, the practice of science has attracted souls driven to seek the truth regardless of personal cost and despite, for most of its history, a distinct lack of financial or status reward. Now, however, science and especially science bureaucracy is a career, and one amenable to social climbing. Careers attract careerists: people who are, in Feyerabend’s words, “devoid of ideas, full of fear, intent on producing some paltry result so that they can add to the flood of inane papers that now constitutes ‘scientific progress’ in many areas.”

If science was unprepared for the influx of careerists, it was even less prepared for the blossoming of the Cult of Science. The Cult is related to the phenomenon described as “scientism”; both have a tendency to treat the body of scientific knowledge as a holy book or an a-religious revelation that offers simple and decisive resolutions to deep questions. But it adds to this a pinch of glib frivolity and a dash of unembarrassed ignorance. Its rhetorical tics include a forced enthusiasm (a search on Twitter for the hashtag “#sciencedancing” speaks volumes) and a penchant for profanity. Here in Silicon Valley, one can scarcely go a day without seeing a t-shirt reading “Science: It works, b—es!” The hero of the recent popular movie The Martian boasts that he will “science the sh— out of” a situation. One of the largest groups on Facebook is titled “I f—ing love Science!” (a name which, combined with the group’s penchant for posting scarcely any actual scientific material but a lot of pictures of natural phenomena, has prompted more than one actual scientist of my acquaintance to mutter under her breath, “What you truly love is pictures”). Some of the Cult’s leaders like to play dress-up as scientists—Bill Nye and Neil deGrasse Tyson are two particularly prominent examples—but hardly any of them have contributed any research results of note. Rather, Cult leadership trends heavily in the direction of educators, popularizers, and journalists.

At its best, science is a human enterprise with a superhuman aim: the discovery of regularities in the order of nature, and the discerning of the consequences of those regularities. We’ve seen example after example of how the human element of this enterprise harms and damages its progress, through incompetence, fraud, selfishness, prejudice, or the simple combination of an honest oversight or slip with plain bad luck. These failings need not hobble the scientific enterprise broadly conceived, but only if scientists are hyper-aware of and endlessly vigilant about the errors of their colleagues . . . and of themselves. When cultural trends attempt to render science a sort of religion-less clericalism, scientists are apt to forget that they are made of the same crooked timber as the rest of humanity and will necessarily imperil the work that they do. The greatest friends of the Cult of Science are the worst enemies of science’s actual practice.”

By William A. Wilson, a software engineer in the San Francisco Bay Area.

This article can be found on the First Things website here and was published in the May 2016 edition.
