judicialsupport

Legal Writing for Legal Reading!

Archive for the tag “christianity”

As an Atheist, I Truly Believe Africa Needs God

Every now and again I come across a fantastic article that warrants posting here; I recently came across one in The Times (UK) which, I thought, was pretty insightful. Be edified.

_______________________

Before Christmas I returned, after 45 years, to the country that as a boy I knew as Nyasaland. Today it’s Malawi, and The Times Christmas Appeal includes a small British charity working there. Pump Aid helps rural communities to install a simple pump, letting people keep their village wells sealed and clean. I went to see this work.

It inspired me, renewing my flagging faith in development charities. But travelling in Malawi refreshed another belief, too: one I’ve been trying to banish all my life, but an observation I’ve been unable to avoid since my African childhood. It confounds my ideological beliefs, stubbornly refuses to fit my world view, and has embarrassed my growing belief that there is no God.

Now a confirmed atheist, I’ve become convinced of the enormous contribution that Christian evangelism makes in Africa: sharply distinct from the work of secular NGOs, government projects and international aid efforts. These alone will not do. Education and training alone will not do. In Africa Christianity changes people’s hearts. It brings a spiritual transformation. The rebirth is real. The change is good.

I used to avoid this truth by applauding – as you can – the practical work of mission churches in Africa. It’s a pity, I would say, that salvation is part of the package, but Christians black and white, working in Africa, do heal the sick, do teach people to read and write; and only the severest kind of secularist could see a mission hospital or school and say the world would be better without it. I would allow that if faith was needed to motivate missionaries to help, then, fine: but what counted was the help, not the faith.

But this doesn’t fit the facts. Faith does more than support the missionary; it is also transferred to his flock. This is the effect that matters so immensely, and which I cannot help observing.

First, then, the observation. We had friends who were missionaries, and as a child I stayed often with them; I also stayed, alone with my little brother, in a traditional rural African village. In the city we had working for us Africans who had converted and were strong believers. The Christians were always different. Far from having cowed or confined its converts, their faith appeared to have liberated and relaxed them. There was a liveliness, a curiosity, an engagement with the world – a directness in their dealings with others – that seemed to be missing in traditional African life. They stood tall.

At 24, travelling by land across the continent reinforced this impression. From Algiers to Niger, Nigeria, Cameroon and the Central African Republic, then right through the Congo to Rwanda, Tanzania and Kenya, four student friends and I drove our old Land Rover to Nairobi.

We slept under the stars, so it was important as we reached the more populated and lawless parts of the sub-Sahara that every day we find somewhere safe by nightfall. Often near a mission.

Whenever we entered a territory worked by missionaries, we had to acknowledge that something changed in the faces of the people we passed and spoke to: something in their eyes, the way they approached you direct, man-to-man, without looking down or away. They had not become more deferential towards strangers – in some ways less so – but more open.

This time in Malawi it was the same. I met no missionaries. You do not encounter missionaries in the lobbies of expensive hotels discussing development strategy documents, as you do with the big NGOs. But instead I noticed that a handful of the most impressive African members of the Pump Aid team (largely from Zimbabwe) were, privately, strong Christians. “Privately” because the charity is entirely secular and I never heard any of its team so much as mention religion while working in the villages. But I picked up the Christian references in our conversations. One, I saw, was studying a devotional textbook in the car. One, on Sunday, went off to church at dawn for a two-hour service.

It would suit me to believe that their honesty, diligence and optimism in their work was unconnected with personal faith. Their work was secular, but surely affected by what they were. What they were was, in turn, influenced by a conception of man’s place in the Universe that Christianity had taught.

There’s long been a fashion among Western academic sociologists for placing tribal value systems within a ring fence, beyond critiques founded in our own culture: “theirs” and therefore best for “them”; authentic and of intrinsically equal worth to ours.

I don’t follow this. I observe that tribal belief is no more peaceable than ours; and that it suppresses individuality. People think collectively; first in terms of the community, extended family and tribe. This rural-traditional mindset feeds into the “big man” and gangster politics of the African city: the exaggerated respect for a swaggering leader, and the (literal) inability to understand the whole idea of loyal opposition.

Anxiety – fear of evil spirits, of ancestors, of nature and the wild, of a tribal hierarchy, of quite everyday things – strikes deep into the whole structure of rural African thought. Every man has his place and, call it fear or respect, a great weight grinds down the individual spirit, stunting curiosity. People won’t take the initiative, won’t take things into their own hands or on their own shoulders.

How can I, as someone with a foot in both camps, explain? When the philosophical tourist moves from one world view to another he finds – at the very moment of passing into the new – that he loses the language to describe the landscape to the old. But let me try an example: the answer given by Sir Edmund Hillary to the question: Why climb the mountain? “Because it’s there,” he said.

To the rural African mind, this is an explanation of why one would not climb the mountain. It’s… well, there. Just there. Why interfere? Nothing to be done about it, or with it. Hillary’s further explanation – that nobody else had climbed it – would stand as a second reason for passivity.

Christianity, post-Reformation and post-Luther, with its teaching of a direct, personal, two-way link between the individual and God, unmediated by the collective, and unsubordinate to any other human being, smashes straight through the philosophical/spiritual framework I’ve just described. It offers something to hold on to for those anxious to cast off a crushing tribal groupthink. That is why and how it liberates.

Those who want Africa to walk tall amid 21st-century global competition must not kid themselves that providing the material means or even the knowhow that accompanies what we call development will make the change. A whole belief system must first be supplanted.

And I’m afraid it has to be supplanted by another. Removing Christian evangelism from the African equation may leave the continent at the mercy of a malign fusion of Nike, the witch doctor, the mobile phone and the machete.

By Matthew Parris, published in The Times on 12/27/08; it can be found here.


Feminism’s Self-Defeating About-Face on Porn

Every now and again I come across a fantastic article that warrants posting here; I recently came across one in Life Site News which, I thought, was pretty insightful. Be edified.

_____________

“Pornography is the theory,” renowned feminist Robin Morgan once wrote, “rape is the practice.”

Indeed, feminists used to widely understand that pornography was, at its very best, dehumanizing and degrading, a product by men and for men that portrayed women only as objects of male desire. At its very worst, it was a gory celebration of the destruction of the feminine, with women being beaten, raped, humiliated, and otherwise assaulted for the perverse pleasures of misogynists who claimed that their woman-hating was a “fetish.”

Today, however, feminists are supposed to be “sex-positive,” which means they have to support pornography, because with over 80% of the male population viewing it, resistance is futile.

I remember a debate on pornography in one of my first political science classes in university—out of the entire class, only myself and one other guy were opposed to pornography. Most of the guys sat quietly, trying to avoid contributing to the discussion, while a few of the girls were the most vociferous defenders of this filth—almost as if they had something to prove.

Pornography, our new sexual dogmas say, is harmless, if not beneficial. And when I asserted in a number of articles that pornography fuels rape culture, the backlash from guys who couldn’t stop looking at porn was quick and angry.

So I began contacting experts in the field, people who had studied the impact of pornography on men and women. The most revealing and chilling interview I conducted was with Dr. Mary Anne Layden, director of the Sexual Trauma and Psychopathology Program in the Department of Psychiatry at the University of Pennsylvania. I had cited her work on pornography and violence before, and wanted to see what sort of things her research had uncovered.

Why, I asked Dr. Layden, did you start researching the links between violence and pornography?

“When I started as a psychotherapist, just about thirty years ago, I started treating patients who were victims of sexual violence and felt a special call to the damage that sexual violence did to these patients,” she replied,

When I had been doing the work for about ten years, because I’m a little bit of a slow learner, it occurred to me that I had not treated one case of sexual violence that didn’t involve pornography… some were rape cases, some were incest cases, some were child molestation cases, some were sexual harassment cases – in all of these different kinds of cases, pornography showed up in every single one.

So I said there seems to be some connection here. Over time, I got interested in what is common in the perpetrators of sexual violence because I realized we were never going to solve the problem of sexual violence by treating victims who’ve been damaged by the problem and treating them one at a time and trying to put them back together. There weren’t enough therapists in the world. There were too many victims in the world. We couldn’t solve this by pulling them out of the river one at a time. We were going to have to go upstream and see who was pushing them in.

And as Dr. Layden discovered, it was the porn industry that was pushing people into the river. Men are not born rapists, she pointed out to me. But for some reason, many are increasingly justifying sexual violence. Why? Because pornography has turned the bodies of women and girls into a commodity. It is shaping the way men see women.

“It’s a product,” Dr. Layden said, her voice getting more emphatic.

This is a business and I think that a lot of pimps would stop doing this if there wasn’t any money involved, but it’s a business and as soon as you tell somebody it’s a product, as soon as you say this [is] something you buy, then this is something you can steal. Those two things are hooked. If you can buy it, you can steal it, and even better if you steal it because then you don’t pay for it. So the sexual exploitation industry, whether it’s strip clubs or prostitution or pornography, is where you buy it. Sexual violence is where you steal it – rape and child molestation and sexual harassment is where you steal it.

So these things are all seamlessly connected. There isn’t a way to draw a bright line of demarcation between rape and prostitution and pornography and child molestation. There are not bright lines of demarcation. The perpetrators share a common set of beliefs, and when we look at the research we can see some of those common beliefs, so that we know that individuals who are exposed to pornographic media have beliefs such as [thinking that] rape victims like to be raped, they don’t suffer so much when they’re raped, ‘she got what she wanted’ when she was raped, women make false accusations of rape because it isn’t really rape, sex is really either good or great and there isn’t any other option other than good or great, no one is really traumatized by it.

All of these are part of the rape myth. People who use pornography accept the rape myth to a greater degree than others. So we have a sense that pornography is teaching them to think like a rapist and then triggering them to act like rapists.

Pornography, like all other products, has done to the female body what economics always does to any product: If you commodify something, you cheapen it. It’s really that simple. But when your marketing strategy is inflaming lust and appealing to power by degrading women, there are devastating results. As Dr. Layden pointed out to me, we even stop seeing each other as human.

“When you cheapen sex and you cheapen women’s bodies, when you treat people like things, there’s a consequence, and one of the consequences is sexual violence, but one of the consequences is also relationship damage,” she pointed out.

There’s an interesting series of studies that actually highlights a bit of the phenomenon of how this works. They were showing people just mildly sexualized pictures. They were men and women in swimsuits, men and women in their underwear, sort of relatively mild sexualized pictures, and they showed them either right side up or upside down and looked at the processing in the brain, because it will display which part of your brain you’re using to process that picture that you see.

What we see is that when people look at men, and look at them in their swimsuits or in their underwear, they’re using the part of their brain that processes humans and human faces; but when we look at women in their swimsuits and their underwear, we use the part of our brain that processes tools and objects. And when you process a woman as a tool or an object, you use the rules that we use when we deal with tools or objects: if it’s not doing its job, then throw it away, get another one.

So the feminists years ago said these men are treating women as sex objects and we thought that was a metaphor. It wasn’t a metaphor. It was an actual statement of reality, that they’re using the part of their brain which they use to process objects and things and there’s a consequence in the society when you start treating sex as a product and women as a thing.

Those who point these things out, of course, and those who oppose porn, are condemned as old-fashioned, prudish, and “anti-sex.” When I reminded Dr. Layden of this, she was decidedly unimpressed.

The desire for love is built into us. [One of my colleagues] said, ‘The real damage is that it threatens the loss of love in a world where only love brings happiness.’ That summarizes what we are doing, that everybody is hardwired to love and be loved. That’s what feeds our hungry heart, and we have a generation who are starved and have hungry hearts and yet they are eating the sexual junk food and becoming sexually obese because they’re so starved they would eat junk food if that’s all that’s available to them.

And so partly we need to have people talk about the glory of good sex, the wonderfulness of good sex, of how it bonds committed couples together and helps them keep their promises to each other, that there is a thing called good sexuality that is enhancing and enlivening and is love-based, but all of this sexual junk food that is out there is not it.

In short? Those who oppose pornography are not anti-sex. They are simply wise enough to recognize that pornography is poison. When used as a substitute for love, it is the equivalent of giving salt water to a man dying of thirst—it will merely inflame the desire further without bringing any satisfaction. To Dr. Mary Anne Layden, this is self-evident. And she intends to make sure as many other people as possible see it that way, too.

“If I said to people, ‘I want you to eat healthy food and don’t go to McDonald’s,’ they wouldn’t call me anti-food,” she said. “They would say you just want to promote healthy food and you don’t want people to go see that Supersize Me movie and find out if you eat McDonald’s every day for 30 days you’ll have a fatty liver. Well that’s what I want to do with sexuality. I want to promote healthy, loving, enhancing, soul-feeding sexuality, not sexual junk food.”

And the way to do that? With sky-high rates of porn addiction, is it possible? Dr. Layden has so many ideas that they come out in a rush.

“I think we’ve got to educate ourselves, we’ve got to tell the truth to others, you’ve got to speak truth to authority because once you know this stuff if you’re silent, silence is complicity,” she says.

We’ve got to go in to our schools and our libraries and say you’ve got to protect our children, we’ve got to say to our governments you’ve got to stop spreading permission-giving beliefs and that means don’t legalize prostitution. It tells men that it’s fine and more men will go to prostitutes. We’ve got to have laws against things that damage people; we’ve got to have outrage in this society when sexual violence is swept under the rug, when a professional athlete does it.

We’ve got to come together and have the journalists, the lawyers, the parents to get together as a mighty team and say this society is worth saving, our children are worth saving, sexuality is sacred. We’ve got to do it together and so it takes a concerted effort … When I hear people say we can’t put the genie back in the bottle I say fifty years ago 60% of the people in New York City smoked, today 18% in NYC smoke. Put the genie back in the bottle. We can do this one as well and it’s worth doing.

Like Dr. Mary Anne Layden, I am not anti-sex, although I don’t particularly object to being called old-fashioned. I am, however, very anti-porn—and that is because pornography is rapidly turning healthy, loving, and committed relationships into something “old-fashioned.” It is robbing the current generation of their ability to enjoy life-long and happy commitments. And as such, we have a responsibility to heed the call of Dr. Layden and so many other experts to fight the porn threat wherever it is found. Those who claim that pornography is harmless are, at the end of the day, woefully uneducated.

By Jonathon Van Maren, published on Life Site News on January 26, 2015; it can be found here.


How conservatives out-intellectualized progressives

Every now and again I come across a fantastic article that warrants posting here; I recently came across one in The Week which, I thought, was pretty insightful. Be edified.

_____________

The vital center is imploding throughout the Western world. Liberal norms and institutions face a greater challenge than at any time since the end of the Second World War. And so defenders of the liberal order seek, often desperately, to remind themselves of what principles they stand for and the premises that underlie their deepest political and moral convictions.

That’s what I take Molly Worthen to be doing in her recent, admirable essay in The New York Times. Worthen writes as a liberal who admires the way the American right has built an infrastructure of programs and institutes where young conservatives receive instruction in the history of political philosophy from Aristotle and Xenophon on down to James Madison, Adam Smith, and beyond.

Worthen thinks liberals should do something similar:

Liberals have their own activist workshops and reading groups, but these rarely instruct students in an intellectual tradition, a centuries-long canon… [Great Books] are powerful tools for preparing the next generation of activists to succeed in the bewildering ideological landscape of the country that just elected Mr. Trump. [The New York Times]

Indeed. So why don’t liberals follow the lead of their conservative counterparts in reading classic texts?

Though Worthen never says so explicitly, the germ of an explanation can be found in her essay when she writes, somewhat defensively, that liberals “can’t afford to dismiss Great Books as tools of white supremacy.” And why would they be tempted to do that? Because most so-called liberals today aren’t liberals at all. They’re progressives — and progressivism is an ideology that has little if any interest in learning from the greatest books, ideas, and thinkers of the past. And that’s because, as the name implies, progressivism is a theory of historical progress. It doesn’t see itself as an ideological project with premises and goals that had to be established against alternative views. Rather, at any given moment it identifies itself with empiricism, pragmatism, and the supposedly neutral, incontestable examination of facts and data, which it marshals for the sake of building a future that is always self-evidently superior (in a moral sense) to everything that came before.

Whereas conservatives look to the past in search of wisdom, inclined as they are to presume that the greatest writers of past ages may well have been wiser than we are — and displayed greater understanding about morality and politics than we do — progressives tend to see that same past as a graveyard packed with justly dead ideas.

No wonder they don’t spend time reading Great Books.

Like a physicist who is too busy pushing the boundaries of scientific knowledge to study the history of past errors and halting advances (now surpassed) within his own field, most progressives would rather continue their project of expanding the administrative-welfare state of which they consider themselves the rightful guardians (while stigmatizing its opponents) than turn back to examine the origins of and strongest case for their own most cherished ideas.

That’s why conservatives are much better placed than progressives to do the work of examining the intellectual foundations of the liberal political order. But that doesn’t mean liberals who are willing to distance themselves from progressive assumptions couldn’t follow Worthen’s advice and do something similar.

There are already tentative signs that some are doing just that. Liberal Bill Galston has recently gotten together with conservative Bill Kristol to encourage precisely this kind of rethinking and defense of liberal premises in the face of the populist challenge. Even more promising might be the efforts of classical liberal political theorist Jacob Levy and liberaltarian author Will Wilkinson, who will be pursuing their own similar projects through the libertarian Niskanen Center.

Maybe these efforts will even spawn the kind of Great Books programs for liberals that Worthen pines for. If they do, liberalism will be much the better for it — not least because it would be a sign that liberals had begun to separate themselves and their ideas from the powerful but pernicious ideology of progressivism.

By Damon Linker, originally published in The Week on December 6, 2016; it can be found here.

Is Belief in God Like Belief in Santa, Leprechauns, or Fairies? A Reflection

Every now and again I come across a fantastic article that warrants posting here; I recently came across one on Brian Nicholson’s Blog which, I thought, was pretty insightful. Be edified.

_________________

No.

Can you imagine if I had left it at that? A one-word post on WordPress. Brother, I’d get comments “for dayz.” I’d also probably get some pretty strong retorts.

When it comes to topics related to the origins of the universe, many have come to conclude that there is Someone behind it all. In this post, I will compare belief in God to Santa Claus, fairies, and leprechauns, hoping to show that the existence of a Creator is something far more worthy of conversation than these characters. The goal isn’t to prove that God exists. For that, see my equation below:

Just kidding.

The objective is to critically compare these characters of fantasy and folklore, and see if they bear any resemblance to the existence of a deity. So, let’s get this party started! Jeeves, turn on my mix-tape…

“If you believe in God without evidence, then I can assert that leprechauns and Santa exist without evidence.”

– Various YouTube Commenters Since Pre-Extinction of the Dodo Bird

The problem here, of course, is that we have good reasons to think those things don’t exist, and no comparably good reasons to think God does not. It’s not simply that we don’t have evidence for Santa; we have positive reasons to think Santa does not exist. We know there’s no workshop at the North Pole, there aren’t Santa sightings around the holiday season, and the milk and cookies are obviously eaten by the parents… I mean, come on, do your kids really expect that Santa ALSO went gluten-free around the same time you did?

Negative Claims

But, we can’t prove that things don’t exist, right? In fact, there are plenty of examples where we can prove negative statements. For example, we know Leonardo da Vinci is no longer alive. We know George Bush isn’t the President anymore. We can certainly prove these negative claims. And even if we couldn’t prove that God does not exist (which I don’t believe is the case), this certainly wouldn’t mean that He does. It just means that a claim about His non-existence is also a knowledge claim, which requires justification. So at the very least, one should be agnostic.

Another problem with drawing these false analogies is that God, if He exists, is beyond the natural, or is supernatural. That’s why we can’t observe Him in nature or put Him in a test tube. But moral values and mathematics are also not observable in nature, and yet we see their effects all the same. Things like leprechauns, if they existed, would be a part of the natural world, and would certainly be making their appearance known if they wanted to. So, just because God cannot be tested scientifically does not mean it’s worthless to talk about His existence. To stubbornly assert science as the only route to truth is self-refuting, because:

Can the statement, “you should only believe what can be scientifically proven,” itself be scientifically proven?

Here we see that there are other valid methods of discerning truth besides science. For example, we all accept moral truths as real, but we can’t prove that those exist by scientific means. Mathematical truths and logic are valid ways of discerning truth, but these are beyond the realm of scientific inquiry as well.

On the contrary, leprechauns, Santa, and fairies are all purportedly within our spatio-temporal realm, frolicking with Chips Ahoy!, delivering presents, stealing your credit card, and forever trying to increase the value of ye olde pot of gold with Rosland Capital. Someone might say, “what if we simply define Santa or Paul Bunyan as existing outside the universe?” Well, at that point we really cease to be talking about Santa or Mr. Bunyan at all. If we make Santa an immaterial, all-powerful mind existing outside our universe, it really becomes just another name for God. This is much like the debate Dr. William Lane Craig had with Dr. Lewis Wolpert, where Wolpert said, “I think a computer did it!” (talking about creating the universe). But a computer is a device comprised of matter, and needs time to operate, so if we rob it of all the attributes that make it a computer and just define it as a space-less and timeless computer, we are really just re-naming God.

In the case of a Creator, we aren’t peering into telescopes looking for some bearded man resembling the Renaissance images of God The Father. We are looking for the effects of God… things like, say, the existence of a finite universe, or the remarkable fine-tuning of the universe for things like stars, chemistry, and us. We may also consider the potential reliability of miracle claims, such as the resurrection of Jesus, something that has raised many an eyebrow for a long time. Even the skeptic scholar Paula Fredriksen admits, “they must’ve seen something,” talking about Jesus’ disciples.

For more on the fine-tuning argument, please click here https://briannicholsonblog.wordpress.com/2016/08/28/first-blog-post/

By contrast we do not see the effects of these mystical creatures. So we can reasonably say they don’t exist.

What’s probably the bigger issue is that God is not detectable like other things in our world are, and this is where I feel the larger disagreement stems from. Let’s take a look at the objection Carl Sagan presented in his book, “The Demon-Haunted World,” where Sagan compares God to an invisible, undetectable dragon in someone’s garage.

“Now, what’s the difference between an invisible, incorporeal, floating dragon who spits heatless fire and no dragon at all?  If there’s no way to disprove my contention, no conceivable experiment that would count against it, what does it mean to say that my dragon exists?”

I’m not sure who was going around saying God could decisively be found in a garage, but this point seems to miss what I mentioned earlier. We aren’t looking for God within space and time, but for signs of something from beyond the universe, signs that there may have been an Agent involved in bringing the cosmos to life. In Sagan’s case, the person should really be asking why there is a reality for a garage to exist in, in the first place. That’s where at least the possibility of God comes into play. Things like the Big Bang, the fine-tuning, the logically incoherent idea of an infinite series of past events, and the surprising fact that there is something rather than nothing are just a few reasons that we shouldn’t dismiss God’s existence a priori. This doesn’t mean He does exist, but certainly this topic that has engaged philosophers and scientists for millennia is worth discussing.

In future posts I will talk more about problems with an infinite regress and Leibniz’ Contingency Argument. But for now, I hope we can see that the existence of God certainly deserves a place at the podium.

You can find the above blog post here.

Why Sex is Making us Morally Stupid

Every now and again I come across a fantastic article that warrants posting here; I recently came across one in Mustard Seed Faith which, I thought, was pretty insightful. Be edified.

_______________

C.S. Lewis, writing on June 3, 1956 to a man who asked him about masturbation, offered the following striking and relevant advice:

For me the evil of masturbation would be that it takes an appetite which, in lawful use, leads the individual out of himself to complete (and correct) his own personality in that of another (and finally in children and even grandchildren) and turns it back: sends the man back into the prison of himself, there to keep a harem of imaginary brides. And this harem, once admitted, works against his ever getting out and really uniting with a real woman. For the harem is always accessible, always subservient, calls for no sacrifices or adjustments, and can be endowed with erotic and psychological attractions which no real woman can rival. Among those shadowy brides he is always adored, always the perfect lover: no demand is made on his…


Students’ Broken Moral Compasses

Every now and again I come across a fantastic article that warrants posting here; I recently came across one in The Atlantic which, I thought, was pretty insightful. Be edified.

_____________________________

A few months ago, I presented the following scenario to my junior English students: Your boyfriend or girlfriend has committed a felony, during which other people were badly harmed. Should you or should you not turn him or her in to the police?

The class immediately erupted with commentary. It was obvious, they said, that loyalty was paramount—not a single student said they’d “snitch.” They were unequivocally unconcerned about who was harmed in this hypothetical scenario. This troubled me.

This discussion was part of an introduction to an essay assignment about whether Americans should pay more for ethically produced food. We continued discussing other dilemmas, and the kids were more engaged than they’d been in weeks, grappling with big questions about values, character, and right versus wrong as I attempted to expand their thinking about who and what is affected by their caloric choices—and why it matters.

I was satisfied that students were clearly thinking about tough issues, but unsettled by their lack of experience considering their own values. “Do you think you should discuss morality and ethics more often in school?” I asked the class. The vast majority of heads nodded in agreement. Engaging in this type of discourse, it seemed, was a mostly foreign concept for the kids.

Widespread adoption of the Common Core standards—despite resistance by some states—arguably continues the legacy of the No Child Left Behind Act. The 2002 law charged all public schools to achieve 100 percent proficiency in reading and math by 2014, meaning that all students were expected to be on grade level. This unrealistic target forced schools to track and measure the academic achievement of all students, a goal lauded by most, but one that ultimately elevated standardized testing and severely narrowed curricula. Quantifying academic gains remains at the forefront of school-improvement efforts to the detriment of other worthwhile purposes of schooling.

As my students seemed to crave more meaningful discussions and instruction relating to character, morality, and ethics, it struck me how invisible these issues have become in many schools. By omission, are U.S. schools teaching their students that character, morality, and ethics aren’t important in becoming productive, successful citizens?

For many American students who have attended a public school at some point since 2002, standardized-test preparation and narrowly defined academic success has been the unstated, but de facto, purpose of their schooling experience. And while school mission statements often reveal a goal of preparing students for a mix of lifelong success, citizenship, college, and careers, the reality is that addressing content standards and test preparation continues to dominate countless schools’ operations and focus.

In 2014, an annual end-of-year kindergarten show in New York was canceled so students could focus on college-and-career readiness. Test-prep rallies have become increasingly commonplace, especially at the elementary level. And according to a 2015 Council of the Great City Schools study, eighth-graders spend an average of 25.3 hours a year taking standardized tests. In Kentucky, where I teach, high schools are under pressure to produce students who are ready for college, defined as simply reaching benchmark scores in reading, English, and math on the ACT.

Talking with my students about ethics and gauging their response served as a wake-up call for me to consider my own role as an educator and just how far character development, ethics, and helping students develop a moral identity have fallen in the debate over what schools should teach. The founders of this country, Jessica Lahey wrote in The Atlantic, would “likely be horrified by the loss of this goal, as they all cite character education as the way to create an educated and virtuous citizenry.” According to Gallup polling, Lahey added, 90 percent of adults support the teaching in public schools of honesty, acceptance of others, and moral courage, among other character traits. What adults hope occurs in schools, however, is in sharp contrast to observations provided by teens themselves.

The 2012 Josephson Report Card on the Ethics of American Youth reveals a pressing need to integrate elements of character education into the country’s public-school curriculums. According to the study, 57 percent of teens stated that successful people do what they have to do to win, even if it involves cheating. Twenty-four percent believe it is okay to threaten or hit someone when angry. Thirty-one percent believe physical violence is a big problem in their schools. Fifty-two percent reported cheating at least once on an exam. Forty-nine percent of students reported being bullied or harassed in a manner that seriously upset them.

In the recently released UnSelfie: Why Empathetic Kids Succeed in Our All-About-Me World, Michele Borba claims narcissism is on the rise, especially in the Western world, as more teens concur with the statement: “I am an extraordinary person.” If empathy is crucial to developing a moral identity, then this trend should be troubling to parents and educators who hope that students foster the ability to see the world through others’ eyes.

My own observations support the data. I’m frequently unnerved by the behaviors I see in classrooms and hallways every day, from physical and verbal bullying, to stereotyping, to students leaving trash strewn all over the outdoor cafeteria courtyard.

“Teaching character education in schools is actually unavoidable … [E]verything the school chooses to do or not do in terms of curriculum choices” influences the culture of a school and the character of its students, Steve Ellenwood, the director of Boston University’s Center for Character and Social Responsibility (CCSR), wrote in an email. His words resonated with me. During my 12 years in education, I can’t recall a single meeting in which the discussion of student character and ethics was elevated to anything close to the level of importance of academics within school curricula.

Groups like the CCSR and the Josephson Institute of Ethics’ Character Counts! initiative strive to enhance existing school programs and curricula to address these issues, proof that efforts do exist to transform schools into places where character education is elevated within traditional curricula. But Ellenwood laments that many educators “blithely accept that schools must be value-neutral,” adding that there is legal precedent for teaching about religions (and not imposing any set of beliefs), character, and ethics. And divisive national politics have left many educators with difficult choices about addressing certain issues, especially those who teach immigrant students who are actively afraid of their fates if Donald Trump wins the election.

A reluctance to teach about religions and value systems is coinciding with a steady decline of teen involvement in formal religious activity over the past 50 years, according to research led by San Diego State Professor Jean Twenge. And while attending church is only one way young people may begin to establish a moral identity, schools don’t seem to be picking up the slack. There’s undoubtedly a fear about what specific ethical beliefs and character traits schools might teach, but one answer might be to expose students to tough issues in the context of academic work—not imposing values, but simply exploring them.

At a recent convening of 15 teacher-leaders from around the country at the Center for Teaching Quality in Carrboro, North Carolina, I spoke to some colleagues about the balance between teaching academic content and striving to develop students’ moral identities. Leticia Skae-Jackson, an English teacher in Nashville, Tennessee, and Nick Tutolo, a math teacher in Pittsburgh, both commented that many teachers are overwhelmed by the pressure and time demands in covering academic standards. Focusing on character and ethics, they said, is seen as an additional demand.

Nonetheless, Tutolo engages his math students at the beginning of the school year by focusing on questions of what it means to be a conscientious person and citizen while also considering how his class could address community needs. His seventh-grade class focused on the issue of food deserts in Pittsburgh and began a campaign to build hydroponic window farms. While learning about ratios and scaling—skills outlined in the Common Core math standards—students began working to design and distribute the contraptions to residents in need, a project that will continue this fall as Tutolo “loops” up to teach eighth grade.

William Anderson, a high-school teacher in Denver, takes a similar approach to Tutolo, but told me that “most teachers haven’t been trained to design instruction that blends academic content with an exploration of character and ethics.” He emphasized that schools should promote this approach to develop well-rounded students. Addressing academic skills and challenging students to consider ethics and character should not, he argued, be mutually exclusive.

When I reflect upon my own education, two classes stand out with regard to finding the balance between imparting academic skills and developing my own moral identity. My high-school biology teacher Phil Browne challenged us to think about the consequences of our consumer choices and individual actions as they related to ecosystems and the environment, in a way that pushed us to see ourselves as ethical actors.

A couple years later, I signed up for a freshman seminar in college titled “Education and Social Inequality” at Middlebury College in Vermont. I remember being moved by Jonathan Kozol’s Savage Inequalities and his moral outrage at dilapidated, underfunded, and understaffed schools in impoverished areas; early on in the course, I struggled to articulate my thoughts during essay assignments. My professor, Peggy Nelson, would sit quietly during seminars, watching us squirm in our seats while we grappled with big ideas such as personal responsibility, systemic injustice, and racism.

Entering my 13th year in the classroom this fall, I hope to continue striving to capture the dynamic that Browne, Nelson, Tutolo, Skae-Jackson, Anderson, and other skilled educators have achieved by blending academic instruction with the essential charge of developing students as people. It’s time for critical reflection about the values our schools transmit to children when our curricula omit the essential human challenges of character development, morality, and ethics. Far too often, “we’re sacrificing the humanity of students for potential academic and intellectual gain,” Anderson said.

By Paul Barnwell, originally published in The Atlantic on July 25, 2016; it can be seen here.

Assimilate! How Modern Liberalism Is Destroying Individuality

Every now and again I come across a fantastic article that warrants posting here; I recently came across one in National Review which, I thought, was pretty insightful. Be edified.

____________

Progressives claim to celebrate diversity, but demand that everyone fit their mold.

I was once called a “cracker” by a member of the Nation of Islam. It was in the mid-1980s and I was driving through Washington, D.C., in the kind of neighborhood that conservatives call dangerous and liberals call “transitioning.” I saw a member of the Nation of Islam, bow tie and all, on the corner hawking copies of The Final Call, the NOI’s newspaper. I rolled down the window and asked for a copy. That’s when he hit me with it: “F*ck off, cracker.”

I thought of this gentleman fondly when I was reading the new book, The Demon in Democracy: Totalitarian Temptations in Free Societies by Polish scholar Ryszard Legutko. The book is an intense read that argues that liberal democracies are succumbing to a utopian ideal where individuality and eccentricity might eventually be banned. As liberals push us towards a monoculture where there is no dissent, no gender, and no conflict, the unique and the great will eventually cease to exist. No more offbeat weirdos, eccentric crazies, or cults. No more Nation of Islam there to call me a cracker. No more of the self-made and inspired figures of the past: Duke Ellington, Hunter Thompson, Annie Leibovitz.

Legutko’s thesis is that liberal democracies have something in common with communism: the sense that time is inexorably moving towards a kind of human utopia, and that progressive bureaucrats must make sure it succeeds. Legutko first observed this after the fall of communism. Thinking that communist bureaucrats would have difficulty adjusting to Western democracy, he was surprised when the former Marxists smoothly adapted — indeed, thrived — in a system of liberal democracy. It was the hard-core anti-communists who couldn’t quite fit into the new system. They were unable to untether themselves from their faith, culture, and traditions.

Both communism and liberal democracy call for people to become New Men by jettisoning their old faith, customs, arts, literature, and traditions. Thus a Polish anti-communist goes from being told by communists that he has to abandon his old concepts of faith and family to become a member of the larger State, only to come to America after the fall of the Berlin Wall and be told he has to forego those same beliefs for the sake of the sexual revolution and the bureaucratic welfare state. Both systems believe that societies are moving towards a certain ideal state, and to stand against that is to violate not just the law but human happiness itself. Legutko compares the two: “Societies — as the supporters of the two regimes are never tired of repeating — are not only changing and developing according to a linear pattern but also improving, and the most convincing evidence of the improvement, they add, is the rise of communism and liberal democracy. And even if a society does not become better at each stage and in each place, it should continue improving given the inherent human desire to which both regimes claim to have found the most satisfactory response.”

Legutko argues that, of course, there are huge differences between communism and liberal democracy — liberal democracy is obviously a system that allows for greater freedom. He appreciates that in a free society people are able to enjoy the arts, books, and pop culture that they want. Our medical system is superior. We don’t suffer from famines. Yet Legutko argues that with so much freedom has come a kind of flattening of taste and of the hard work of creating original art.

We’ve witnessed a slow and steady debasement of our politics and popular culture — see, for example, those “man on the street” interviews where Americans can’t name who won the Revolutionary War. Enter the unelected bureaucrats who appoint themselves to steer the ship; in other words, we’re liberals and we’re here to help. Inspired by the idea that to be against them is to be “on the wrong side of history,” both communism and contemporary liberalism demand absolute submission to the progressive plan. All resistance, no matter how grounded in genuine belief or natural law, must be quashed.

Thus in America came the monochromatic washing of a country that once could boast not only crazies like Scientologists and Louis Farrakhan, but creative and unusual icons like Norman Mailer, Georgia O’Keeffe, Baptists, Hindus, dry counties, John Courtney Murray, Christian bakers, orthodox Jews, accents, and punk rockers. The eccentric and the oddball, as well as the truly great, are increasingly less able to thrive. As Legutko observes, we have a monoculture filled with people whose “loutish manners and coarse language did not have their origin in communism, but, as many found astonishing, in the patterns, or rather anti-patterns that developed in Western liberal democracies.” The revolution didn’t devour its children; progressive-minded bureaucrats did.

By Mark Judge, originally published in National Review on August 11, 2016; it can be seen here.


Mammon Ascendant

Every now and again I come across a fantastic article that warrants posting here; I recently came across one in First Things which, I thought, was pretty insightful. It is by one of my favorite theologians/philosophers, David Bentley Hart (see here), and regards the relationship between capitalism and Christianity, and takes a view with which I tend to agree. Be edified.

_______________

So, there I was, pondering, with an old familiar feeling of perplexity (about which more anon), certain reactions to my reaction to various reactions to the pope’s last encyclical, when it occurred to me that the one thing on which Hegelians of every stripe—right or left, theological or materialist, contemplative or activist—are undoubtedly correct is that the logic of history is not the logic of individuals, or of parties, or of states. It is not ideology, that is to say, that determines the course of cultural evolution, but the dialectic of history, which (even if it is not materialist) can never float free of material conditions. Hence Hegel’s famous “master-slave dialectic”: that process by which the material economy of ancient society slowly but inevitably inverted the order of knowledge and power upon which that society rested. History—its meaning, its irony—reveals itself only by way of a continuous pragmatic labor, an engagement between spirit and matter; and the final issue of that labor becomes manifest not in the abstractions we profess but in the culture we create.

Take, for instance, American political history of the last thirty-five years. One of the great political masterstrokes of the late twentieth century was Ronald Reagan’s successful creation of a coalition between cultural “conservatives” and fiscal “conservatives,” one that seemed to a great many at the time and for a long while thereafter not only a stable alliance, but a natural association. All at once, Wall Street Journal–reading mandarins began caring about abortion, “family values,” and even school prayer; pro-life Christians and Jews became genuine partisans of supply-side economics, reduced marginal tax rates, and expansive free-trade agreements; and both sides shared just enough traditional American traits (sincere patriotism unburdened by the disenchantments of postwar Europe, genial optimism, the language of self-reliance, pioneer myths, small-town ideals, and so forth) to overcome whatever regional and cultural differences might otherwise have separated them. It was an invincible political force.

But, again, it is culture—not politics—that pronounces the final historical verdict on our transitory ideologies and grand social projects and high ideals. The coalition that Reagan wrought has largely collapsed, and has done so as the result not of external hostile forces, but under the weight of its own contradictions. It has been said often enough that in the long aftermath of the 1960s, it became evident that the “right” had won the economic argument over culture and the “left” the moral argument. At least, I have heard one of my friends say it often enough (usually in a slurred voice and under fairly dim lighting: free markets and free love, corporatism and hedonism, low taxes and high times, Ben and Jerry’s, Whole Foods, Bill Gates . . .).

And, of course, it is true. The social revolution of the late 1960s was a marvelous impasto of cultural, political, social, and moral gestures, many of them more spasmodic than deliberate, and most of them only accidentally associated with one another. The most licentiously self-indulgent hedonism dallied for a gay flirtatious season with the grimly severe moralism of Trotskyite or Maoist rhetoric; the revolution was proclaimed by cosseted children of the middle class who imagined the socialist utopia as an interminable revel of psychedelic drugs, casual copulation, and ever shorter skirts; Madison Avenue was relentlessly denounced by its most servile victims. (And, oh, how I sigh with genuine nostalgia for the idiot happiness of those days.)

But, once the mists had cleared and the lava lamps had dimmed, things began sorting themselves out very rapidly. The economic radicalism faded, but the new social mores persisted, and grew in power, and became the common social grammar. A once very fashionably idealist generation found the adventure of revolution far less exhilarating than the venture of capital; it continued to cling to the old new Bohemianism (which, after all, always sold very well), but realized that endless self-indulgence requires the sort of resources that only canny investment can secure. Apple Records began as a collectivist idyll but a few bats of the eyelashes later was a tightly controlled distribution firm with security cameras at the gate; George Harrison soon learned that it was easier to find time for Krishna and room for organic farming on the sprawling grounds of an English manor house; Haight-Ashbury tie-dye mutated into Silicon Valley office casual; the homiletics of public property yielded to the legalese of the public offering; cannabis was just the new Chivas. All that now remained of economic debate were procedural details: the relative preponderance of development and regulation, the shifting balance of power between business sectors and state agencies, and so on.

In another sense, however, the notion that two opposed ideologies divided the spoils of the culture between them is deeply false. In truth, no political faction won or lost, because none was involved in the process at all except as one of the forces employed (and then perhaps discarded) by a deeper power. What in fact won the day was a single historical dynamism, a single indivisible cultural philosophy. That its political expressions had been distributed among different parties was a purely incidental matter of process, an especially exquisite example of “the cunning of history,” which effectively hid the true form of what everyone really wanted behind the spectacle of superficial antagonisms. Or rather, I should say: not superficial, but certainly futile. The struggle over “values” was quite real on both sides, as far as personal commitments were concerned. But, once again, the reasons for which individuals act are not the reasons by which history unfolds.

All right. Perhaps I am not as much of a German idealist as all that. But I do believe that the relation between material conditions and moral concepts is never accidental, and that cultural logic invariably discovers the real harmonies and balances and accords that our fleeting intellectual paradigms generally cannot. As late modern persons, we live in a society whose highest values—in every sphere: moral, religious, economic, domestic, cultural, and so on—can loosely be described as “libertarian.” We understand freedom principally as an individual’s sovereign liberty of deliberative and acquisitive choice, and we understand individual desires (so long as they fall within certain minimal legal constraints) either as rights or at least as protected by rights. And we are increasingly disposed to see almost every restriction placed upon the pursuit of those desires as an unreasonable imposition. Our natural economic philosophy, then, is of course “neoliberal” (or, as it is also called in America, “neoconservative”) while our natural moral philosophy is voluntarist, individualist, and hedonist (in a not necessarily opprobrious sense). Not only is there no contradiction here; there is an essential unity.

And, in this sense, talk of history’s dialectic is not only pardonable, but probably necessary. The story of how, over a far longer period than thirty-five years, we arrived where we are has been told often before: on the side of ideas, the rise of late scholastic voluntarism, the emergence not only of epistemological nominalism but even of nominalist accounts of the good, theologies that subordinated all divine attributes to the supreme attribute of absolute sovereignty and that increasingly conceived of sovereignty as something like pure spontaneity of will, then the eventual migration of this idea of sovereign freedom from theology to anthropology, as well as the rise of mechanistic metaphysics (and so on); on the side of material circumstance, the rise of the absolute state, the creation of denatured ecclesial establishments, the rise of early market institutions, the growth of an enfranchised merchant class, the rise of the power of large capital in the age of industrialization, the inevitable emergence of consumerism (and so on); and, between the two sides, a dynamic, fluctuating, oscillating, but ultimately inexorable dialectical process. It is an absorbing tale, but it has gone through so many editions by now that even the effort of declining to repeat it is tedious.

Even so, just at the moment I feel as if somehow I have to remark this essential, indissoluble concomitance between the logic of late modern secularity and that of late modern capitalism. It all has to do, I suppose, with those reactions to my reaction to those reactions I mentioned above, and with that old familiar perplexity they occasioned in me. At least I feel I want to confess, if nothing else, the limits of my imagination on one vexing point. Simply said, I have never been able to understand those (almost exclusively American) souls who expend such energy both on lamenting the late modern collapse of so many of the moral accords and cultural values and religious aspirations of the past and also on vigorously promoting the very system of material and social practices that made that collapse inevitable.

The history of capitalism and the history of secularism are not two accidentally contemporaneous tales, after all; they are the same story told from different vantages. Any dominant material economy is complicit with, and in fact demands, a particular anthropology, ethics, and social vision. And a late capitalist culture, being intrinsically a consumerist economy, of necessity promotes a voluntarist understanding of individual freedom and a purely negative understanding of social and political liberty. The entire system depends not merely on supplying needs and satisfying natural longings, but on the ceaseless invention of ever newer desires, ever more choices. It is also a system inevitably corrosive of as many prohibitions of desire and inhibitions of the will as possible, and therefore of all those customs and institutions—religious, cultural, social—that tend to restrain or even forbid so many acquisitive longings and individual choices.

This is what Marx genuinely admired about capitalism: its power to dissolve all the immemorial associations of family, tradition, faith, and affinity, the irresistible dynamism of its dissolution of ancient values, its (to borrow a loathsome phrase) “gales of creative destruction.” The secular world—our world, our age—is one from which as many mediating and subsidiary powers have been purged as possible, precisely to make room for the adventures of the will. It is a reality in which all social, political, and economic associations have been reduced to a bare tension between the individual and the state, each of which secures the other against the intrusions and encroachments of other claims to authority, other demands upon desire, other narratives of the human. Secularization is simply developed capitalism in its ineluctable cultural manifestation.

Mind you, part of the difficulty of convincing American Christians of this lies in the generous vagueness with which we have come to use the word “capitalism” in recent decades. For many, the term means nothing more than a free market in goods, or the right to produce and trade, or buying and selling as such. In that sense, every culture in recorded history would have been “capitalist” in some degree. And for many, then, it also seems natural to think that all free trade and all systems of market exchange are of a piece, and that to defend the dignity of production and trade in every sphere, it is necessary also to defend the globalized market and the immense power of current corporate entities—or, conversely, to think that any serious and sustained criticism of the immorality, environmental devastation, exploitation of desperate labor markets, or political mischief for which such entities might often justly be arraigned is necessarily an assault on every honest entrepreneur who tries to build a business, create some jobs, or produce something useful or delightful to sell.

But, in long historical perspective, the capitalist epoch of market economies has so far been one of, at most, a few centuries. At least, in the narrower acceptation of the term generally agreed on by economic historians, capitalism is what Proudhon in 1861 identified as a system—at once economic and social—in which, as a general rule, the source of income does not belong at all to those who make it operative by their labor. If that is too vague, we can say it is the set of economic conventions that succeeded those of the “mercantilism” of the previous era, with its tariff regimes and nationalist policies of trade regulation, and that took shape in the age of industrialization. Historically, this meant a shift in economic eminence from the merchant class—purveyors of goods contracted from and produced by independent artisanal labor or subsidiary estates or small local markets—to the capitalist investor who is at once producer and seller of goods, and who is able to generate immense capital at the secondary level of investment speculation: a purely financial market where wealth is generated and enjoyed by those who produce nothing except an incessant circulation of investment and divestment.

Along with this came a new labor system: the end of most of the contractual power of free skilled labor, the death of the artisanal guilds, and the genesis of a mass wage system; one, that is, in which labor became a commodity, different markets could compete against one another for the cheapest, most desperate laborers, and (as the old Marxist plaint has it) both the means of production and the fruit of labor belonged not to the workers but only to the investors. Hence the accusation of early generations of socialists, like William Morris and John Ruskin, that capitalism was to be eschewed not because it was a free-market system, but because it destroyed the true freedom of the market economies that had begun to appear at the end of the Middle Ages, and concentrated all real economic and contractual liberty in the hands of a very few.

This is a system that not only allows for, but positively depends upon, immense concentrations of private capital and private dispositive use of that capital, as unencumbered by fiscal regulation as possible. It also obviously allows for the exploitation of material and human resources on an unprecedentedly massive scale, one that even governments cannot rival. And it is a system that inevitably eventuates not only in economic, but cultural, “consumerism,” because it can continue to create wealth sufficient to sustain the investment system only by a social habit of consumption extravagantly in excess of mere natural need or even (arguably) natural want. Thus it must dedicate itself not only to fulfilling desire, but to fabricating new desires, prompted by fashion, or by seductive appeals to what 1 John calls “the lust of the eyes”—the high art of which we call “advertising.”

Now, without question, capitalism works. It is magnificently efficient at generating enormous wealth, and increasing the wealth of society at large—if not necessarily of all individuals or classes—and adjusting to the supersession of one form of commercial production by another. But this is practically a tautology. That is its entire purpose, and it is no great surprise that over time it should have evolved ever more refined and comprehensive means for achieving it. It generates immense returns for the few, which sometimes redound to the benefit of the many, but which often do not; it can create and enrich or destroy and impoverish, as prudence warrants; it can encourage liberty and equity or abet tyranny and injustice, as necessity dictates. It has no natural attachment to the institutions of democratic or liberal freedom; China has proved beyond any reasonable doubt that endless consumer choices can comfortably coexist with a near total absence of civil liberties. Capitalism has no moral nature at all. The good it yields is not benevolence; the evil is not malice. It is a system that cannot be abused, but only practiced with greater or lesser efficiency. It admits of no other criterion by which to judge its consequences.

This last point, moreover, needs to be particularly stressed, at least in America, where many of capitalism’s apologists are eager (perhaps commendably) to believe that our market system is not only conducive of large social benefits, but possessed of deep structural virtues. This belief often leads them both to exaggerate those benefits and to ignore the damages, or to explain them away (like good Marxists preaching the socialist eschaton) as transient evils that will be redeemed by a final general beatitude (“rising tide” . . . “all boats” . . . “supply-side” . . . “trickle down” . . . “Walmart may destroy small businesses and force the formerly well-employed into inferior jobs, but, hey, think of the joy that all those cheap—if occasionally toxic—Chinese goods produced by ruthlessly exploited laborers will provide the lower middle class in its ceaseless fiscal decline!”). But, given the sheer magnitude of capitalism’s ability to alter material, social, economic, and cultural reality, to cherish even the faintest illusions regarding some kind of inherent goodness in the system is to risk more than mere complacency.

Yes, venture capital built Manhattan—its shining cloud-capped towers, its millions of jobs, its inexhaustible bagels—but the cost of a world where Manhattans are built has to be reckoned in more than capital. And one does not even need to travel any great distance to assess some of the gravest of those costs. One need go no farther than the carboniferous tectonic collision zones of West Virginia and eastern Kentucky to find a land where a once poor but propertied people were reduced to helotry on land they used to own by predatory mineral-rights purchases, and then forced into dangerous and badly remunerated labor that destroyed their health, and then kept generation upon generation in servile dependency on an industry that shears the crests off mountains, chokes river valleys with slurry and chemical toxins, and subverts local politics. And what one must remember is that all that devastation was not the result of one of capitalism’s failures, but of one of its most conspicuous successes. All the investors realized returns on their initial expenditures many thousands of times over. Those who win at the game can win everything and more, while those who lose—who more often than we care to acknowledge lose everything and forever—are simply part of the cost of doing business.

None of which is to deny that capital investment can achieve goods that governments usually cannot. While it is certainly not the case that, say, the world’s rising mean life span or the increase in third-world literacy are straightforwardly consequences of globalization, it certainly is the case that global investment and trade have created resources that have made rapid medical progress, improvements in nutrition, and distribution of goods and services—by private firms, charities, governments, and international humanitarian organizations—possible in ways that less fluid commercial systems never could have done. There are regions of sub-Saharan Africa currently enjoying the kind of economic development that once seemed impossible because certain governments and businesses (such as numerous small technology firms) have set aside generations of post-colonial prejudice and finally begun building businesses there.

On the other hand, untold tens of thousands of Africans have died as a result of large Western pharmaceutical firms, concerned for their market share and their proprietary rights, exerting fiscal and government pressure to deny access to affordable antiretroviral drugs manufactured in Thailand and elsewhere. The market gives life; the market murders. It creates cities; it poisons oceans. And throughout the third world, as well as in less fortunate districts of the developed world, the price of industrialization remains (as ever) environmental damage of a sort that cannot be remedied in centuries, along with all its attendant human suffering. The World Health Organization, on very judiciously gathered data, estimates that roughly 12.6 million persons die each year as a result of environmental degradation, particularly pollution from industrial waste products. This being so, it seems only decent to wonder whether a thriving market system might be run on more humane principles—which is to say, on principles alien to capitalism as it has always existed.

Perhaps, though, I am allowing myself to drift away from my original point. Even if it were not so—even if fully developed capitalism, per impossibile, operated without any destruction of ecologies, communities, and lives—it would still carry moral costs that would render it ultimately antagonistic to any but an essentially secularized culture. At least, it could not coexist indefinitely with a culture informed by genuine Christian conviction. Even the fact of the system’s necessary reliance on immense private wealth makes it a moral problem from the vantage of the Gospel, for the simple reason that the New Testament treats such wealth not merely as a spiritual danger, and not merely as a blessing that should not be misused, but as an intrinsic evil. I know there are plentiful interpretations of Christianity that claim otherwise, and many of them have been profoundly influential of American understandings of the faith. Calvin’s scriptural commentaries, for instance, treat almost all of the New Testament’s more consequential moral teachings—Christ’s advice to the rich young ruler, his exhortations to spiritual perfection, and so on—as exercises in instructive irony, meant to demonstrate the impossibility of righteousness through works. Calvin even remarks that having some money in the bank is one of the signs of election. But that is offensive nonsense. The real text of the New Testament, uncolored by theological fancy, is utterly perspicuous and relentlessly insistent on this matter. Christ’s concern for the ptōchoi—the abjectly destitute—is more or less exclusive of any other social class.

What he says about the rich youth selling all his possessions and giving the proceeds to the poor, and about the indisposition of camels trying to pass through needles’ eyes, is only the beginning. In the Sermon on the Plain’s list of beatitudes and woes, he not only tells the poor that the kingdom belongs to them, but explicitly tells the rich that, having had their pleasures in this world, they shall have none in the world to come. He condemns those who buy up properties and create large estates for themselves. You cannot serve both God and mammon. Do not store up treasure on earth, in earthly vessels, for where your treasure is, there your heart will also be. The apostolic Church in Jerusalem adopted an absolute communism of goods. Paul constantly condemns pleonexia, which is often translated as “excessive greed” or even “thievery,” but which really means no more than an acquisitive desire for more than one needs. He instructs the Corinthian Christians to donate all their profits to the relief of the poor in other church assemblies. James says that God’s elect are the poor of this world; the rich he condemns as oppressors and revilers of the divine name, who should howl in terror at the judgment that is coming upon them, because the rust of their treasure shall eat their flesh like fire on the last day. And on and on. This is so persistent, pervasive, and unqualified a theme of the New Testament that the genius with which Christians down the centuries have succeeded in not seeing it, or in explaining it away, or in pretending that it does not mean what it unquestionably means may be among the greatest marvels of the faith.

But, again, even if it were not so—even if there is a way of possessing wealth purely as a blameless stewardship of God’s bounty, or if the system could function as well in a society with more equitably distributed capital, or what have you—the problem with which I began remains. As a cultural reality, late capitalism is not merely a regulatory regime for markets, but also a positive system of values, necessarily at odds with other orders of desire, especially those that seek to limit acquisition or inhibit expressions of the will. We may think we are free to believe as we wish because that is what our totalitarian libertarianism or consumerist collectivism chiefly needs us to think. But, while our ancestors inhabited a world full of gods or saints, ours is one in which they have all been chased away by advertising, into the hidden world of personal devotion or private fixation. Public life is a realm of pure elective spontaneity, in every sphere, and that power of choice must be ceaselessly directed toward an interminable diversity of consumer goods, and encouraged to expand into ever more regions of fiscal, moral, and spiritual life. We are shaped by what we desire, and what we desire is shaped by the material culture that surrounds us, and by the ideologies and imaginative possibilities that it embodies and sustains.

This is not to say that believing Christians, Jews, and other retrograde types cannot live peacefully amid the heaven-scaling towers and abyss-plumbing indulgences of late modernity. Believers of every kind are strangers and sojourners in this life, and should not seek to build enduring cities in this world. Still, all of us must make our livings, and seek to provide for others, and that means buying and selling, hiring and being hired, seeking justice and enduring injustice. That is the business of life, and conducted well, it can bring about many good things. And who knows? Perhaps it is possible to reimagine a real market economy on a more truly human and humane scale, of the sort envisaged by E. F. Schumacher or various other religious “economists of the small.” After all, the exchange of goods, the common commerce of everyday life, the community that exists wherever one person trades one “gift” for another—all of these are natural goods, part of the corporal grammar of community, and can usually in some way exhibit a generosity more original and more ultimate than any calculus of greed or selfish appetite. But, beyond that, the claim that capitalist culture and Christianity are compatible—indeed, that they are not ultimately inimical to one another—seems to me not only self-evidently false, but quaintly (and perhaps perilously) deluded.

David Bentley Hart is a fellow of the Notre Dame Institute for Advanced Study. His most recent book is A Splendid Wickedness and Other Essays.

Originally published in First Things in June 2016; it can be seen here.

Gödel’s Incompleteness: The #1 Mathematical Breakthrough of the 20th Century

Every now and again I come across a fantastic article that warrants posting here; I recently came across one on CosmicFingerprints.com which, I thought, was pretty insightful. Be edified.

“Faith and Reason are not enemies. In fact, the exact opposite is true! One is absolutely necessary for the other to exist. All reasoning ultimately traces back to faith in something that you cannot prove.”

In 1931, the young mathematician Kurt Gödel made a landmark discovery, as powerful as anything Albert Einstein developed.

In one salvo, he completely demolished an entire class of scientific theories.

Gödel’s discovery not only applies to mathematics but literally all branches of science, logic and human knowledge. It has earth-shattering implications.

Oddly, few people know anything about it.

Allow me to tell you the story.

Mathematicians love proofs. For centuries they were hot and bothered because they were unable to PROVE some of the things they knew were true.

So, for example, if you studied high school geometry, you’ve done the exercises where you prove all kinds of things about triangles based on a set of postulates and previously proved theorems.

That high school geometry book is built on Euclid’s five postulates. Everyone knows the postulates are true, but in 2500 years nobody’s figured out a way to prove them.

Yes, it does seem perfectly “obvious” that a line can be extended infinitely in both directions, but no one has been able to PROVE that. We can only demonstrate that Euclid’s postulates are a reasonable set of 5 assumptions — and, as the discovery of non-Euclidean geometries showed, not even the only possible set.
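
For reference, here is a standard modern paraphrase of the five postulates (this list is my addition, not the article’s):

```latex
\documentclass{article}
\begin{document}
\begin{enumerate}
  \item A straight line segment can be drawn joining any two points.
  \item Any straight line segment can be extended indefinitely in a
        straight line.
  \item Given any straight line segment, a circle can be drawn having
        the segment as radius and one endpoint as center.
  \item All right angles are equal to one another.
  \item (The parallel postulate, in Playfair's form.) Through a point
        not on a given line, exactly one line can be drawn parallel to
        the given line.
\end{enumerate}
\end{document}
```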

Towering mathematical geniuses were frustrated for 2000+ years because they couldn’t prove all their theorems. There were so many things that were “obviously true,” but nobody could find a way to prove them.

In the early 1900s, however, a tremendous wave of optimism swept through mathematical circles. The most brilliant mathematicians and logicians in the world (like Bertrand Russell, David Hilbert and Ludwig Wittgenstein) became convinced that they were rapidly closing in on a final synthesis.

A unifying “Theory of Everything” that would finally nail down all the loose ends. Mathematics would be complete, bulletproof, airtight, triumphant.

In 1931 this young Austrian mathematician, Kurt Gödel, published a paper that once and for all PROVED that a single Theory Of Everything is actually impossible. He proved they would never prove everything. (Yeah I know, it sounds a little odd, doesn’t it?)

Gödel’s discovery was called “The Incompleteness Theorem.”

If you’ll give me just a few minutes, I’ll explain what it says, how Gödel proved it, and what it means – in plain, simple English that anyone can understand.

Gödel’s Incompleteness Theorem says:

“Anything you can draw a circle around cannot explain itself without referring to something outside the circle – something you have to assume but cannot prove.”

You can draw a circle around all of the concepts in your high school geometry book. But they’re all built on Euclid’s 5 postulates, which we know are true but cannot be proven. Those 5 postulates are outside the book, outside the circle.

Stated in Formal Language:

Gödel’s theorem says: “Any effectively generated theory capable of expressing elementary arithmetic cannot be both consistent and complete. In particular, for any consistent, effectively generated formal theory that proves certain basic arithmetic truths, there is an arithmetical statement that is true, but not provable in the theory.”

The Church-Turing thesis says that a physical system can express elementary arithmetic just as a human can, so the arithmetic of a Turing machine (computer) is likewise subject to incompleteness: there are truths about the system that are not provable within the system.
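
Rendered symbolically (a standard textbook formulation, supplied here for reference rather than taken from the article), the first incompleteness theorem reads:

```latex
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
For any consistent, effectively axiomatized theory $T$ that interprets
elementary arithmetic, there is a sentence $G_T$ such that
\[
  T \nvdash G_T \qquad \text{and} \qquad T \nvdash \lnot G_T ,
\]
even though $G_T$, read as a statement about the natural numbers, is true.
\end{document}
```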

Any physical system subjected to measurement is capable of expressing elementary arithmetic. (In other words, children can do math by counting their fingers, water flowing into a bucket does integration, and physical systems always give the right answer.)

Therefore the universe is capable of expressing elementary arithmetic and, like both mathematics itself and a Turing machine, is incomplete.

Syllogism:

1. All non-trivial computational systems are incomplete

2. The universe is a non-trivial computational system

3. Therefore the universe is incomplete

You can draw a circle around a bicycle. But the existence of that bicycle relies on a factory that is outside that circle. The bicycle cannot explain itself.

You can draw a circle around a bicycle factory. But that factory likewise relies on other things outside the factory.

Gödel proved that there are ALWAYS more things that are true than you can prove. Any system of logic or numbers that mathematicians have ever come up with will always rest on at least a few unprovable assumptions.

Gödel’s Incompleteness Theorem applies not just to math, but to everything that is subject to the laws of logic — everything that you can count or calculate. Incompleteness is true in math; it’s equally true in science, language, and philosophy.

Gödel created his proof by starting with “The Liar’s Paradox” — which is the statement

“I am lying.”

“I am lying” is self-contradictory: if it’s true, then I really am lying, which makes the statement false; and if it’s false, then I am not lying, which makes it true.

Gödel, in one of the most ingenious moves in the history of math, converted a careful variant of this Liar’s Paradox into a mathematical formula: a statement that says, in effect, “This statement cannot be proved.” If the system could prove it, the system would prove a falsehood; so, if the system is consistent, the statement is true but unprovable within the system.

You always need an outside reference point.
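
Self-reference of this kind can be built mechanically, not just asserted. As a loose illustration (my sketch, not Gödel’s actual construction), here is a Python “quine”: a program whose output is exactly its own source code. Gödel’s diagonal lemma performs the analogous trick inside arithmetic, producing a formula that refers to its own numerical encoding.

```python
# A minimal quine: running this program prints its own source code
# (the two code lines below; the comments aside).
src = 'src = %r\nprint(src %% src)'
print(src % src)
```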

The Incompleteness Theorem was a devastating blow to the “positivists” of the time. They insisted that literally anything you could not measure or prove was nonsense. Gödel showed that, by that standard, positivism itself was nonsense.

Gödel proved his theorem in black and white and nobody could argue with his logic. Yet some of his fellow mathematicians went to their graves in denial, believing that somehow or other Gödel must surely be wrong.

He wasn’t wrong. It was really true. There are more things that are true than you can prove.

A “theory of everything” – whether in math, or physics, or philosophy – will never be found, because it is mathematically impossible.

OK, so what does this really mean? Why is this super-important, and not just an interesting geek factoid?

Here’s what it means:

  • Faith and Reason are not enemies. In fact, the exact opposite is true! One is absolutely necessary for the other to exist. All reasoning ultimately traces back to faith in something that you cannot prove.
  • All closed systems depend on something outside the system.
  • You can always draw a bigger circle but there will still be something outside the circle.

Reasoning inward from a larger circle to a smaller circle (from “all things” to “some things”) is deductive reasoning.

Example of deductive reasoning:

1. All men are mortal
2. Socrates is a man
3. Therefore Socrates is mortal

Reasoning outward from a smaller circle to a larger circle (from “some things” to “all things”) is inductive reasoning.

Examples of inductive reasoning:

1. All the men I know are mortal
2. Therefore all men are mortal

1. When I let go of objects, they fall
2. Therefore there is a law of gravity that governs all falling objects

Notice that when you move from the smaller circle to the larger circle, you have to make assumptions that you cannot 100% prove.

For example you cannot PROVE gravity will always be consistent at all times. You can only observe that it’s consistently true every time.
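
A classic worked example of induction’s limits (my addition, using Euler’s famous prime-generating polynomial, not an example from the article): a pattern can hold for forty straight cases and still fail on the forty-first.

```python
# n*n + n + 41 is prime for n = 0, 1, 2, ..., 39 -- forty successes
# in a row that "inductively" look like a law. Then n = 40 breaks it.

def is_prime(k):
    """Trial-division primality check; fine for small numbers."""
    if k < 2:
        return False
    return all(k % d for d in range(2, int(k ** 0.5) + 1))

for n in range(45):
    value = n * n + n + 41
    if not is_prime(value):
        print(f"pattern breaks at n = {n}: {value} = 41 * {value // 41}")
        break
```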

Nearly all scientific laws are based on inductive reasoning. All of science rests on the assumption that the universe is orderly, logical, and mathematical, based on fixed, discoverable laws.

You cannot PROVE this. (You can’t prove that the sun will come up tomorrow morning either.) You literally have to take it on faith. In fact, most people don’t know that outside the science circle is a philosophy circle. Science is based on philosophical assumptions that you cannot scientifically prove. Actually, the scientific method cannot prove; it can only infer.

(Science originally came from the idea that God made an orderly universe which obeys fixed, discoverable laws – and because of those laws, He would not have to constantly tinker with it in order for it to operate.)

Now please consider what happens when we draw the biggest circle we possibly can – around the whole universe (if there are multiple universes, we’re drawing a circle around all of them too):

  • There has to be something outside that circle – something which we have to assume but cannot prove.
  • The universe as we know it is finite – finite matter, finite energy, finite space, and 13.8 billion years of time.
  • The universe (all matter, energy, space and time) cannot explain itself.
  • Whatever is outside the biggest circle is boundless, so by definition it is not possible to draw a circle around it.
  • If we draw a circle around all matter, energy, space and time and apply Gödel’s theorem, then we know that what is outside that circle is not matter, not energy, not space and not time. Because all the matter and energy are inside the circle, it is immaterial.
  • Whatever is outside the biggest circle is not a system – i.e., not an assemblage of parts; otherwise we could draw a circle around them. The thing outside the biggest circle is indivisible.
  • Whatever is outside the biggest circle is an uncaused cause, because you can always draw a circle around an effect.

We can apply the same inductive reasoning to the origin of information:

  • In the history of the universe we also see the introduction of information, some 3.8 billion years ago. It came in the form of the genetic code, which is symbolic and immaterial.
  • The information had to come from the outside, since information is not known to be an inherent property of matter, energy, space or time.
  • All codes we know the origin of are designed by conscious beings.
  • Therefore whatever is outside the largest circle is a conscious being.

When we add information to the equation, we conclude that not only is the thing outside the biggest circle infinite and immaterial, it is also self-aware.

Isn’t it interesting how all these conclusions sound suspiciously similar to how theologians have described God for thousands of years?

Maybe that’s why 80-90% of the people in the world believe in some concept of God. Yes, it’s intuitive to most folks. But Gödel’s theorem indicates it’s also supremely logical. In fact it’s the only position one can take and stay in the realm of reason and logic.

The person who proudly proclaims, “You’re a man of faith, but I’m a man of science” doesn’t understand the roots of science or the nature of knowledge!

Interesting aside…

If you visit the world’s largest atheist website, Infidels, on the home page you will find the following statement:

“Naturalism is the hypothesis that the natural world is a closed system, which means that nothing that is not part of the natural world affects it.”

If you know Gödel’s theorem, you know all systems rely on something outside the system. So according to Gödel’s Incompleteness theorem, the folks at Infidels cannot be correct. Because the universe is a system, it has to have an outside cause.

Therefore atheism violates the laws of mathematics.

The Incompleteness of the universe isn’t proof that God exists. But… it IS proof that in order to construct a consistent model of the universe, belief in God is not just 100% logical… it’s necessary.

Euclid’s 5 postulates aren’t formally provable and God is not formally provable either. But… just as you cannot build a coherent system of geometry without Euclid’s 5 postulates, neither can you build a coherent description of the universe without a First Cause and a Source of order.

Thus faith and science are not enemies, but allies. They are two sides of the same coin. This had been true for hundreds of years, but in 1931 a skinny young Austrian mathematician named Kurt Gödel proved it.

At no time in the history of mankind has faith in God been more reasonable, more logical, or more thoroughly supported by rational thought, science and mathematics.

Perry Marshall

“Math is the language God wrote the universe in.” –Galileo Galilei, 1623

Further reading:

“Incompleteness: The Proof and Paradox of Kurt Gödel” by Rebecca Goldstein – a fantastic biography and a great read

A collection of quotes and notes about Gödel’s proof from Miskatonic University Press

Formal description of Gödel’s Incompleteness Theorem and links to his original papers on Wikipedia

Science vs. Faith on CoffeehouseTheology.com

Originally published on CosmicFingerprints.com; it can be found here.

The 7 Biggest Problems Facing Science, According to 270 Scientists

Every now and again I come across a fantastic article that warrants posting here.  I have seen a recent proliferation of articles in respected publications pointing out, bemoaning, and/or highlighting increasing problems with the trustworthiness of the alleged findings of the contemporary scientific community.  I find these articles to be particularly interesting given how our society looks to science as a (the?) source of ultimate truths (often as a mutually exclusive alternative to spirituality).  This sort of scientism may be misplaced, and these articles delve into the pitfalls that come with such an approach.

Here are the links to the other articles I posted on this subject:

Be edified.

_______________

“Science, I had come to learn, is as political, competitive, and fierce a career as you can find, full of the temptation to find easy paths.” — Paul Kalanithi, neurosurgeon and writer (1977–2015)

Science is in big trouble. Or so we’re told.

In the past several years, many scientists have become afflicted with a serious case of doubt — doubt in the very institution of science.

As reporters covering medicine, psychology, climate change, and other areas of research, we wanted to understand this epidemic of doubt. So we sent scientists a survey asking this simple question: If you could change one thing about how science works today, what would it be and why?

We heard back from 270 scientists all over the world, including graduate students, senior professors, laboratory heads, and Fields Medalists. They told us that, in a variety of ways, their careers are being hijacked by perverse incentives. The result is bad science.

The scientific process, in its ideal form, is elegant: Ask a question, set up an objective test, and get an answer. Repeat. Science is rarely practiced to that ideal. But Copernicus believed in that ideal. So did the rocket scientists behind the moon landing.

But nowadays, our respondents told us, the process is riddled with conflict. Scientists say they’re forced to prioritize self-preservation over pursuing the best questions and uncovering meaningful truths.

“I feel torn between asking questions that I know will lead to statistical significance and asking questions that matter,” says Kathryn Bradshaw, a 27-year-old graduate student of counseling at the University of North Dakota.

Today, scientists’ success often isn’t measured by the quality of their questions or the rigor of their methods. It’s instead measured by how much grant money they win, the number of studies they publish, and how they spin their findings to appeal to the public.

Scientists often learn more from studies that fail. But failed studies can mean career death. So instead, they’re incentivized to generate positive results they can publish. And the phrase “publish or perish” hangs over nearly every decision. It’s a nagging whisper, like a Jedi’s path to the dark side.

“Over time the most successful people will be those who can best exploit the system,” says Paul Smaldino, a cognitive science professor at the University of California, Merced.

To Smaldino, the selection pressures in science have favored less-than-ideal research: “As long as things like publication quantity, and publishing flashy results in fancy journals are incentivized, and people who can do that are rewarded … they’ll be successful, and pass on their successful methods to others.”

Many scientists have had enough. They want to break this cycle of perverse incentives and rewards. They are going through a period of introspection, hopeful that the end result will yield stronger scientific institutions. In our survey and interviews, they offered a wide variety of ideas for improving the scientific process and bringing it closer to its ideal form.

Before we jump in, some caveats to keep in mind: Our survey was not a scientific poll. For one, the respondents disproportionately hailed from the biomedical and social sciences and English-speaking communities.

Many of the responses did, however, vividly illustrate the challenges and perverse incentives that scientists across fields face. And they are a valuable starting point for a deeper look at dysfunction in science today.

The place to begin is right where the perverse incentives first start to creep in: the money.

Academia has a huge money problem

To do most any kind of research, scientists need money: to run studies, to subsidize lab equipment, to pay their assistants and even their own salaries. Our respondents told us that getting — and sustaining — that funding is a perennial obstacle.

Their gripe isn’t just with the quantity, which, in many fields, is shrinking. It’s the way money is handed out that puts pressure on labs to publish a lot of papers, breeds conflicts of interest, and encourages scientists to overhype their work.

In the United States, academic researchers in the sciences generally cannot rely on university funding alone to pay for their salaries, assistants, and lab costs. Instead, they have to seek outside grants. “In many cases the expectations were and often still are that faculty should cover at least 75 percent of the salary on grants,” writes John Chatham, a professor of medicine studying cardiovascular disease at University of Alabama at Birmingham.

Grants also usually expire after three or so years, which pushes scientists away from long-term projects. Yet as John Pooley, a neurobiology postdoc at the University of Bristol, points out, the biggest discoveries usually take decades to uncover and are unlikely to occur under short-term funding schemes.

Outside grants are also in increasingly short supply. In the US, the largest source of funding is the federal government, and that pool of money has been plateauing for years, while young scientists enter the workforce at a faster rate than older scientists retire.

Take the National Institutes of Health, a major funding source. Its budget rose at a fast clip through the 1990s, stalled in the 2000s, and then dipped with sequestration budget cuts in 2013. All the while, rising costs for conducting science meant that each NIH dollar purchased less and less. Last year, Congress approved the biggest NIH spending hike in a decade. But it won’t erase the shortfall.

The consequences are striking: In 2000, more than 30 percent of NIH grant applications got approved. Today, it’s closer to 17 percent. “It’s because of what’s happened in the last 12 years that young scientists in particular are feeling such a squeeze,” NIH Director Francis Collins said at the Milken Global Conference in May.

Some of our respondents said that this vicious competition for funds can influence their work. Funding “affects what we study, what we publish, the risks we (frequently don’t) take,” explains Gary Bennett, a neuroscientist at Duke University. It “nudges us to emphasize safe, predictable (read: fundable) science.”

Truly novel research takes longer to produce, and it doesn’t always pay off. A National Bureau of Economic Research working paper found that, on the whole, truly unconventional papers tend to be less consistently cited in the literature. So scientists and funders increasingly shy away from them, preferring short-turnaround, safer papers. But everyone suffers from that: the NBER report found that novel papers also occasionally lead to big hits that inspire high-impact, follow-up studies.

“I think because you have to publish to keep your job and keep funding agencies happy, there are a lot of (mediocre) scientific papers out there … with not much new science presented,” writes Kaitlyn Suski, a chemistry and atmospheric science postdoc at Colorado State University.

Another worry: When independent, government, or university funding sources dry up, scientists may feel compelled to turn to industry or interest groups eager to generate studies to support their agendas.

Finally, all of this grant writing is a huge time suck, taking resources away from the actual scientific work. Tyler Josephson, an engineering graduate student at the University of Delaware, writes that many professors he knows spend 50 percent of their time writing grant proposals. “Imagine,” he asks, “what they could do with more time to devote to teaching and research?”

It’s easy to see how these problems in funding kick off a vicious cycle. To be more competitive for grants, scientists have to have published work. To have published work, they need positive (i.e., statistically significant) results. That puts pressure on scientists to pick “safe” topics that will yield a publishable conclusion — or, worse, may bias their research toward significant results.

“When funding and pay structures are stacked against academic scientists,” writes Alison Bernstein, a neuroscience postdoc at Emory University, “these problems are all exacerbated.”

Fixes for science’s funding woes

Right now there are arguably too many researchers chasing too few grants. Or, as a 2014 piece in the Proceedings of the National Academy of Sciences put it: “The current system is in perpetual disequilibrium, because it will inevitably generate an ever-increasing supply of scientists vying for a finite set of research resources and employment opportunities.”

“As it stands, too much of the research funding is going to too few of the researchers,” writes Gordon Pennycook, a PhD candidate in cognitive psychology at the University of Waterloo. “This creates a culture that rewards fast, sexy (and probably wrong) results.”

One straightforward way to ameliorate these problems would be for governments to simply increase the amount of money available for science. (Or, more controversially, decrease the number of PhDs, but we’ll get to that later.) If Congress boosted funding for the NIH and National Science Foundation, that would take some of the competitive pressure off researchers.

But that only goes so far. Funding will always be finite, and researchers will never get blank checks to fund the risky science projects of their dreams. So other reforms will also prove necessary.

One suggestion: Bring more stability and predictability into the funding process. “The NIH and NSF budgets are subject to changing congressional whims that make it impossible for agencies (and researchers) to make long term plans and commitments,” M. Paul Murphy, a neurobiology professor at the University of Kentucky, writes. “The obvious solution is to simply make [scientific funding] a stable program, with an annual rate of increase tied in some manner to inflation.”

Another idea would be to change how grants are awarded: Foundations and agencies could fund specific people and labs for a period of time rather than individual project proposals. (The Howard Hughes Medical Institute already does this.) A system like this would give scientists greater freedom to take risks with their work.

Alternatively, researchers in the journal mBio recently called for a lottery-style system. Proposals would be measured on their merits, but then a computer would randomly choose which get funded.

“Although we recognize that some scientists will cringe at the thought of allocating funds by lottery,” the authors of the mBio piece write, “the available evidence suggests that the system is already in essence a lottery without the benefits of being random.” Pure randomness would at least reduce some of the perverse incentives at play in jockeying for money.

There are also some ideas out there to minimize conflicts of interest from industry funding. Recently, in PLOS Medicine, Stanford epidemiologist John Ioannidis suggested that pharmaceutical companies ought to pool the money they use to fund drug research, to be allocated to scientists who then have no exchange with industry during study design and execution. This way, scientists could still get funding for work crucial for drug approvals — but without the pressures that can skew results.

These solutions are by no means complete, and they may not make sense for every scientific discipline. The daily incentives facing biomedical scientists to bring new drugs to market are different from the incentives facing geologists trying to map out new rock layers. But based on our survey, funding appears to be at the root of many of the problems facing scientists, and it’s one that deserves more careful discussion.

Too many studies are poorly designed. Blame bad incentives.

Scientists are ultimately judged by the research they publish. And the pressure to publish pushes scientists to come up with splashy results, of the sort that get them into prestigious journals. “Exciting, novel results are more publishable than other kinds,” says Brian Nosek, who co-founded the Center for Open Science at the University of Virginia.

The problem here is that truly groundbreaking findings simply don’t occur very often, which means scientists face pressure to game their studies so they turn out to be a little more “revolutionary.” (Caveat: Many of the respondents who focused on this particular issue hailed from the biomedical and social sciences.)

Some of this bias can creep into decisions that are made early on: choosing whether or not to randomize participants, including a control group for comparison, or controlling for certain confounding factors but not others. (Read more on study design particulars here.)

Many of our survey respondents noted that perverse incentives can also push scientists to cut corners in how they analyze their data.

“I have incredible amounts of stress that maybe once I finish analyzing the data, it will not look significant enough for me to defend,” writes Jess Kautz, a PhD student at the University of Arizona. “And if I get back mediocre results, there’s going to be incredible pressure to present it as a good result so they can get me out the door. At this moment, with all this in my mind, it is making me wonder whether I could give an intellectually honest assessment of my own work.”

Increasingly, meta-researchers (who conduct research on research) are realizing that scientists often do find little ways to hype up their own results — and they’re not always doing it consciously. Among the most famous examples is a technique called “p-hacking,” in which researchers test their data against many hypotheses and only report those that have statistically significant results.
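
To see how easily this manufactures false positives, here is a minimal simulation (a sketch of my own, not taken from the article; the helper name `permutation_pvalue` is invented for the example). It tests twenty “hypotheses” on pure noise and reports only those that cross the conventional p < 0.05 bar; on average, about one of the twenty will look “significant” by chance alone.

```python
import random

random.seed(0)

def permutation_pvalue(a, b, n_perm=2000):
    """Two-sided permutation test for a difference in group means."""
    observed = abs(sum(a) / len(a) - sum(b) / len(b))
    pooled = a + b
    hits = 0
    for _ in range(n_perm):
        random.shuffle(pooled)
        x, y = pooled[:len(a)], pooled[len(a):]
        if abs(sum(x) / len(x) - sum(y) / len(y)) >= observed:
            hits += 1
    return hits / n_perm

# Twenty hypotheses, every one of them null: both groups are drawn
# from the same distribution, so any "effect" is pure noise.
significant = []
for hypothesis in range(20):
    group_a = [random.gauss(0, 1) for _ in range(30)]
    group_b = [random.gauss(0, 1) for _ in range(30)]
    p = permutation_pvalue(group_a, group_b)
    if p < 0.05:
        significant.append((hypothesis, p))

# Reporting only these tests, and shelving the rest, is p-hacking.
print(significant)
```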

In a recent study, which tracked the misuse of p-values in biomedical journals, meta-researchers found “an epidemic” of statistical significance: 96 percent of the papers that included a p-value in their abstracts boasted statistically significant results.

That seems awfully suspicious. It suggests the biomedical community has been chasing statistical significance, potentially giving dubious results the appearance of validity through techniques like p-hacking — or simply suppressing important results that don’t look significant enough. Fewer studies share effect sizes (which arguably give a better indication of how meaningful a result might be) or discuss measures of uncertainty.
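
An effect size can be as simple as a standardized difference between group means. The sketch below (my illustration; the sample numbers are made up) computes Cohen’s d, one common such measure:

```python
from statistics import mean, stdev

def cohens_d(a, b):
    """Cohen's d: difference in means over the pooled standard deviation."""
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * stdev(a) ** 2 + (nb - 1) * stdev(b) ** 2) / (na + nb - 2)
    return (mean(a) - mean(b)) / pooled_var ** 0.5

treatment = [5.1, 4.8, 5.6, 5.3, 4.9]
control = [4.2, 4.5, 4.1, 4.6, 4.4]

# Unlike a bare p-value, this says how large the difference is,
# in units of the data's own variability.
print(round(cohens_d(treatment, control), 2))
```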

“The current system has done too much to reward results,” says Joseph Hilgard, a postdoctoral research fellow at the Annenberg Public Policy Center. “This causes a conflict of interest: The scientist is in charge of evaluating the hypothesis, but the scientist also desperately wants the hypothesis to be true.”

The consequences are staggering. An estimated $200 billion — or the equivalent of 85 percent of global spending on research — is routinely wasted on poorly designed and redundant studies, according to meta-researchers who have analyzed inefficiencies in research. We know that as much as 30 percent of the most influential original medical research papers later turn out to be wrong or exaggerated.

Fixes for poor study design

Our respondents suggested that the two key ways to encourage stronger study design — and discourage positive results chasing — would involve rethinking the rewards system and building more transparency into the research process.

“I would make rewards based on the rigor of the research methods, rather than the outcome of the research,” writes Simine Vazire, a journal editor and a social psychology professor at UC Davis. “Grants, publications, jobs, awards, and even media coverage should be based more on how good the study design and methods were, rather than whether the result was significant or surprising.”

Likewise, Cambridge mathematician Tim Gowers argues that researchers should get recognition for advancing science broadly through informal idea sharing — rather than only getting credit for what they publish.

“We’ve gotten used to working away in private and then producing a sort of polished document in the form of a journal article,” Gowers said. “This tends to hide a lot of the thought process that went into making the discoveries. I’d like attitudes to change so people focus less on the race to be first to prove a particular theorem, or in science to make a particular discovery, and more on other ways of contributing to the furthering of the subject.”

When it comes to published results, meanwhile, many of our respondents wanted to see more journals put a greater emphasis on rigorous methods and processes rather than splashy results.

“I think the one thing that would have the biggest impact is removing publication bias: judging papers by the quality of questions, quality of method, and soundness of analyses, but not on the results themselves,” writes Michael Inzlicht, a University of Toronto psychology and neuroscience professor.

Some journals are already embracing this sort of research. PLOS One, for example, makes a point of accepting negative studies (in which a scientist conducts a careful experiment and finds nothing) for publication, as does the aptly named Journal of Negative Results in Biomedicine.

More transparency would also help, writes Daniel Simons, a professor of psychology at the University of Illinois. Here’s one example: ClinicalTrials.gov, a site run by the NIH, allows researchers to register their study design and methods ahead of time and then publicly record their progress. That makes it more difficult for scientists to hide experiments that didn’t produce the results they wanted. (The site now holds information for more than 180,000 studies in 180 countries.)

Similarly, the AllTrials campaign is pushing for every clinical trial (past, present, and future) around the world to be registered, with the full methods and results reported. Some drug companies and universities have created portals that allow researchers to access raw data from their trials.

The key is for this sort of transparency to become the norm rather than a laudable outlier.

Replicating results is crucial. But scientists rarely do it.

Replication is another foundational concept in science. Researchers take an older study that they want to test and then try to reproduce it to see if the findings hold up.

Testing, validating, retesting — it’s all part of a slow and grinding process to arrive at some semblance of scientific truth. But this doesn’t happen as often as it should, our respondents said. Scientists face few incentives to engage in the slog of replication. And even when they attempt to replicate a study, they often find they can’t do so. Increasingly it’s being called a “crisis of irreproducibility.”

The stats bear this out: A 2015 study looked at 83 highly cited studies that claimed to feature effective psychiatric treatments. Only 16 had ever been successfully replicated. Another 16 were contradicted by follow-up attempts, and 11 were found to have substantially smaller effects the second time around. Meanwhile, nearly half of the studies (40) had never been subject to replication at all.

More recently, a landmark study published in the journal Science demonstrated that only a fraction of recent findings in top psychology journals could be replicated. This is happening in other fields too, says Ivan Oransky, one of the founders of the blog Retraction Watch, which tracks scientific retractions.

As for the underlying causes, our survey respondents pointed to a couple of problems. First, scientists have very few incentives to even try replication. Jon-Patrick Allem, a social scientist at the Keck School of Medicine of USC, noted that funding agencies prefer to support projects that find new information instead of confirming old results.

Journals are also reluctant to publish replication studies unless “they contradict earlier findings or conclusions,” Allem writes. The result is to discourage scientists from checking each other’s work. “Novel information trumps stronger evidence, which sets the parameters for working scientists.”

The second problem is that many studies can be difficult to replicate. Sometimes their methods are too opaque. Sometimes the original studies had too few participants to produce a replicable answer. And sometimes, as we saw in the previous section, the study is simply poorly designed or outright wrong.

Again, this goes back to incentives: When researchers have to publish frequently and chase positive results, there’s less time to conduct high-quality studies with well-articulated methods.

Fixes for underreplication

Scientists need more carrots to entice them to pursue replication in the first place. As it stands, researchers are encouraged to publish new and positive results and to allow negative results to linger in their laptops or file drawers.

This has plagued science with a problem called “publication bias” — not all studies that are conducted actually get published in journals, and the ones that do tend to have positive and dramatic conclusions.

If institutions started to reward tenure positions or make hires based on the quality of a researcher’s body of work, instead of quantity, this might encourage more replication and discourage positive results chasing.

“The key that needs to change is performance review,” writes Christopher Wynder, a former assistant professor at McMaster University. “It affects reproducibility because there is little value in confirming another lab’s results and trying to publish the findings.”

The next step would be to make replication of studies easier. This could include more robust sharing of methods in published research papers. “It would be great to have stronger norms about being more detailed with the methods,” says University of Virginia’s Brian Nosek.

He also suggested more regularly adding supplements at the end of papers that get into the procedural nitty-gritty, to help anyone wanting to repeat an experiment. “If I can rapidly get up to speed, I have a much better chance of approximating the results,” he said.

Nosek has detailed other potential fixes that might help with replication — all part of his work at the Center for Open Science.

A greater degree of transparency and data sharing would enable replications, said Stanford’s John Ioannidis. Too often, anyone trying to replicate a study must chase down the original investigators for details about how the experiment was conducted.

“It is better to do this in an organized fashion with buy-in from all leading investigators in a scientific discipline,” he explained, “rather than have to try to find the investigator in each case and ask him or her in detective-work fashion about details, data, and methods that are otherwise unavailable.”

Researchers could also make use of new tools, such as open source software that tracks every version of a data set, so that they can share their data more easily and have transparency built into their workflow.

Some of our respondents suggested that scientists engage in replication prior to publication. “Before you put an exploratory idea out in the literature and have people take the time to read it, you owe it to the field to try to replicate your own findings,” says John Sakaluk, a social psychologist at the University of Victoria.

For example, he has argued, psychologists could conduct small experiments with a handful of participants to form ideas and generate hypotheses. But they would then need to conduct bigger experiments, with more participants, to replicate and confirm those hypotheses before releasing them into the world. “In doing so,” Sakaluk says, “the rest of us can have more confidence that this is something we might want to [incorporate] into our own research.”

Peer review is broken

Peer review is meant to weed out junk science before it reaches publication. Yet over and over again in our survey, respondents told us this process fails. It was one of the parts of the scientific machinery that elicited the most rage among the researchers we heard from.

Normally, peer review works like this: A researcher submits an article for publication in a journal. If the journal accepts the article for review, it’s sent off to peers in the same field for constructive criticism and eventual publication — or rejection. (The level of anonymity varies; some journals have double-blind reviews, while others have moved to triple-blind review, where authors, editors, and reviewers don’t know one another’s identities.)

It sounds like a reasonable system. But numerous studies and systematic reviews have shown that peer review doesn’t reliably prevent poor-quality science from being published.

The process frequently fails to detect fraud or other problems with manuscripts, which isn’t all that surprising when you consider researchers aren’t paid or otherwise rewarded for the time they spend reviewing manuscripts. They do it out of a sense of duty — to contribute to their area of research and help advance science.

But this means that it’s not always easy to find the best people to peer-review manuscripts in their field, that harried researchers delay doing the work (leading to publication delays of up to two years), and that when they finally do sit down to peer-review an article they may be rushed and miss errors in studies.

“The issue is that most referees simply don’t review papers carefully enough, which results in the publishing of incorrect papers, papers with gaps, and simply unreadable papers,” says Joel Fish, an assistant professor of mathematics at the University of Massachusetts Boston. “This ends up being a large problem for younger researchers to enter the field, since that means they have to ask around to figure out which papers are solid and which are not.”

That’s not to mention the problem of peer review bullying. Since the default in the process is that editors and peer reviewers know who the authors are (but authors don’t know who the reviewers are), biases against researchers or institutions can creep in, opening the opportunity for rude, rushed, and otherwise unhelpful comments. (Just check out the popular #SixWordPeerReview hashtag on Twitter.)

These issues were not lost on our survey respondents, who said peer review amounts to a broken system, which punishes scientists and diminishes the quality of publications. They want to not only overhaul the peer review process but also change how it’s conceptualized.

Fixes for peer review

On the question of editorial bias and transparency, our respondents were surprisingly divided. Several suggested that all journals should move toward double-blinded peer review, whereby reviewers can’t see the names or affiliations of the authors they’re reviewing and authors don’t know who reviewed them. The main goal here was to reduce bias.

“We know that scientists make biased decisions based on unconscious stereotyping,” writes Pacific Northwest National Laboratory postdoc Timothy Duignan. “So rather than judging a paper by the gender, ethnicity, country, or institutional status of an author — which I believe happens a lot at the moment — it should be judged by its quality independent of those things.”

Yet others thought that more transparency, rather than less, was the answer: “While we correctly advocate for the highest level of transparency in publishing, we still have most reviews that are blinded, and I cannot know who is reviewing me,” writes Lamberto Manzoli, a professor of epidemiology and public health at the University of Chieti, in Italy. “Too many times we see very low quality reviews, and we cannot understand whether it is a problem of scarce knowledge or conflict of interest.”

Perhaps there is a middle ground. For example, eLife, a new open access journal that is rapidly rising in impact factor, runs a collaborative peer review process. Editors and peer reviewers work together on each submission to create a consolidated list of comments about a paper. The author can then reply to what the group saw as the most important issues, rather than facing the biases and whims of individual reviewers. (Oddly, this process is faster — eLife takes less time to accept papers than Nature or Cell.)

Still, those are mostly incremental fixes. Other respondents argued that we might need to radically rethink the entire process of peer review from the ground up.

“The current peer review process embraces a concept that a paper is final,” says Nosek. “The review process is [a form of] certification, and that a paper is done.” But science doesn’t work that way. Science is an evolving process, and truth is provisional. So, Nosek said, science must “move away from the embrace of definitiveness of publication.”

Some respondents wanted to think of peer review as more of a continuous process, in which studies are repeatedly and transparently updated and republished as new feedback changes them — much like Wikipedia entries. This would require some sort of expert crowdsourcing.

“The scientific publishing field — particularly in the biological sciences — acts like there is no internet,” says Lakshmi Jayashankar, a senior scientific reviewer with the federal government. “The paper peer review takes forever, and this hurts the scientists who are trying to put their results quickly into the public domain.”

One possible model already exists in mathematics and physics, where there is a long tradition of “pre-printing” articles. Studies are posted on an open website called arXiv.org, often before being peer-reviewed and published in journals. There, the articles are sorted and commented on by a community of moderators, providing another chance to filter problems before they make it to peer review.

“Posting preprints would allow scientific crowdsourcing to increase the number of errors that are caught, since traditional peer-reviewers cannot be expected to be experts in every sub-discipline,” writes Scott Hartman, a paleobiology PhD student at the University of Wisconsin.
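For readers unfamiliar with arXiv, its contents are also machine-readable: the site exposes a public query API at export.arxiv.org/api/query that returns Atom XML, which makes the kind of crowdsourced filtering Hartman describes easier to build. A minimal sketch in Python (the search category and result count are arbitrary choices):

    # Fetch a few preprint titles from arXiv's public query API.
    import urllib.parse
    import urllib.request
    import xml.etree.ElementTree as ET

    params = urllib.parse.urlencode({
        "search_query": "cat:math.CO",  # combinatorics, as an example category
        "start": 0,
        "max_results": 5,
    })
    url = f"http://export.arxiv.org/api/query?{params}"
    with urllib.request.urlopen(url) as response:
        feed = ET.fromstring(response.read())

    atom = "{http://www.w3.org/2005/Atom}"
    for entry in feed.findall(f"{atom}entry"):
        print(entry.find(f"{atom}title").text.strip())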

And even after an article is published, researchers think the peer review process shouldn’t stop. They want to see more “post-publication” peer review on the web, so that academics can critique and comment on articles after they’ve been published. Sites like PubPeer and F1000Research have already popped up to facilitate that kind of post-publication feedback.

“We do this a couple of times a year at conferences,” writes Becky Clarkson, a geriatric medicine researcher at the University of Pittsburgh. “We could do this every day on the internet.”

The bottom line is that traditional peer review has never worked as well as we imagine it to — and it’s ripe for serious disruption.

Too much science is locked behind paywalls

After a study has been funded, conducted, and peer-reviewed, there’s still the question of getting it out so that others can read and understand its results.

Over and over, our respondents expressed dissatisfaction with how scientific research gets disseminated. Too much is locked away in paywalled journals, difficult and costly to access, they said. Some respondents also criticized the publication process itself for being too slow, bogging down the pace of research.

On the access question, a number of scientists argued that academic research should be free for all to read. They chafed against the current model, in which for-profit publishers put journals behind pricey paywalls.

A single article in Science will set you back $30; a year-long subscription to Cell will cost $279. Elsevier publishes 2,000 journals that can cost up to $10,000 or $20,000 a year for a subscription.

Many US institutions pay those journal fees for their employees, but not all scientists (or other curious readers) are so lucky. In a recent issue of Science, journalist John Bohannon described the plight of a PhD candidate at a top university in Iran. He calculated that the student would have to spend $1,000 a week just to read the papers he needed.
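The arithmetic behind that estimate is easy to check. A back-of-the-envelope sketch in Python, assuming the $30-per-article price quoted above applies to every paper the student needs:

    # Rough paywall arithmetic; the per-article price is the figure cited above.
    price_per_article = 30      # dollars
    weekly_budget = 1_000       # dollars, Bohannon's estimate

    papers_per_week = weekly_budget / price_per_article
    print(f"Papers per week at that budget: {papers_per_week:.0f}")  # about 33
    print(f"Cost over a year: ${weekly_budget * 52:,}")              # $52,000

Roughly 33 papers a week, a modest reading load for a PhD candidate, is enough to hit that $1,000 figure.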

As Michael Eisen, a biologist at UC Berkeley and co-founder of the Public Library of Science (or PLOS), put it, scientific journals are trying to hold on to the profits of the print era in the age of the internet. Subscription prices have continued to climb, as a handful of big publishers (like Elsevier) have bought up more and more journals, creating mini knowledge fiefdoms.

“Large, publicly owned publishing companies make huge profits off of scientists by publishing our science and then selling it back to the university libraries at a massive profit (which primarily benefits stockholders),” Corina Logan, an animal behavior researcher at the University of Cambridge, noted. “It is not in the best interest of the society, the scientists, the public, or the research.” (In 2014, Elsevier reported a profit margin of nearly 40 percent and revenues close to $3 billion.)

“It seems wrong to me that taxpayers pay for research at government labs and universities but do not usually have access to the results of these studies, since they are behind paywalls of peer-reviewed journals,” added Melinda Simon, a postdoc microfluidics researcher at Lawrence Livermore National Lab.

Fixes for closed science

Many of our respondents urged their peers to publish in open access journals (along the lines of PeerJ or PLOS Biology). But there’s an inherent tension here. Career advancement can often depend on publishing in the most prestigious journals, like Science or Nature, which still have paywalls.

There’s also the question of how best to finance a wholesale transition to open access. After all, journals can never be entirely free. Someone has to pay for the editorial staff, maintaining the website, and so on. Right now, open access journals typically charge fees to those submitting papers, putting the burden on scientists who are already struggling for funding.

One radical step would be to abolish for-profit publishers altogether and move toward a nonprofit model. “For journals I could imagine that scientific associations run those themselves,” suggested Johannes Breuer, a postdoctoral researcher in media psychology at the University of Cologne. “If they go for online only, the costs for web hosting, copy-editing, and advertising (if needed) can be easily paid out of membership fees.”

As a model, Cambridge’s Tim Gowers has launched an online mathematics journal called Discrete Analysis. The nonprofit venture is owned and published by a team of scholars, it has no publisher middlemen, and access will be completely free for all.

Until wholesale reform happens, however, many scientists are going a much simpler route: illegally pirating papers.

Bohannon reported that millions of researchers around the world now use Sci-Hub, a site set up by Alexandra Elbakyan, a Russia-based neuroscientist, that illegally hosts more than 50 million academic papers. “As a devout pirate,” Elbakyan told us, “I think that copyright should be abolished.”

One respondent had an even more radical suggestion: that we abolish the existing peer-reviewed journal system altogether and simply publish everything online as soon as it’s done.

“Research should be made available online immediately, and be judged by peers online rather than having to go through the whole formatting, submitting, reviewing, rewriting, reformatting, resubmitting, etc etc etc that can take years,” writes Bruno Dagnino, formerly of the Netherlands Institute for Neuroscience. “One format, one platform. Judged by the whole community, with no delays.”

A few scientists have been taking steps in this direction. Rachel Harding, a genetic researcher at the University of Toronto, has set up a website called Lab Scribbles, where she publishes her lab notes on the structure of huntingtin proteins in real time, posting data as well as summaries of her breakthroughs and failures. The idea is to help share information with other researchers working on similar issues, so that labs can avoid needless overlap and learn from each other’s mistakes.

Not everyone will agree with approaches this radical; critics worry that too much sharing might encourage scientific free riding. Still, the common theme in our survey was transparency. Science is currently too opaque, and research too difficult to share. That needs to change.

Science is poorly communicated to the public

“If I could change one thing about science, I would change the way it is communicated to the public by scientists, by journalists, and by celebrities,” writes Clare Malone, a postdoctoral researcher in a cancer genetics lab at Brigham and Women’s Hospital.

She wasn’t alone. Quite a few respondents in our survey expressed frustration at how science gets relayed to the public. They were distressed by the fact that so many laypeople hold on to completely unscientific ideas or have a crude view of how science works.

They griped that misinformed celebrities like Gwyneth Paltrow have an outsize influence over public perceptions about health and nutrition. (As the University of Alberta’s Timothy Caulfield once told us, “It’s incredible how much she is wrong about.”)

They have a point. Science journalism is often full of exaggerated, conflicting, or outright misleading claims. If you ever want to see a perfect example of this, check out “Kill or Cure,” a site where Paul Battley meticulously documents all the times the Daily Mail reported that various items — from antacids to yogurt — either cause cancer, prevent cancer, or sometimes do both.

Sometimes bad stories are peddled by university press shops. In 2015, the University of Maryland issued a press release claiming that a single brand of chocolate milk could improve concussion recovery. It was an absurd case of science hype.

Indeed, one review in BMJ found that one-third of university press releases contained either exaggerated claims of causation (when the study itself only suggested correlation), unwarranted implications about animal studies for people, or unfounded health advice.

But not everyone blamed the media and publicists alone. Other respondents pointed out that scientists themselves often oversell their work, even if it’s preliminary, because funding is competitive and everyone wants to portray their work as big and important and game-changing.

“You have this toxic dynamic where journalists and scientists enable each other in a way that massively inflates the certainty and generality of how scientific findings are communicated and the promises that are made to the public,” writes Daniel Molden, an associate professor of psychology at Northwestern University. “When these findings prove to be less certain and the promises are not realized, this just further erodes the respect that scientists get and further fuels scientists’ desire for appreciation.”

Fixes for better science communication

Opinions differed on how to improve this sorry state of affairs — some pointed to the media, some to press offices, others to scientists themselves.

Plenty of our respondents wished that more science journalists would move away from hyping single studies. Instead, they said, reporters ought to put new research findings in context, and pay more attention to the rigor of a study’s methodology than to the splashiness of the end results.

“On a given subject, there are often dozens of studies that examine the issue,” writes Brian Stacy of the US Department of Agriculture. “It is very rare for a single study to conclusively resolve an important research question, but many times the results of a study are reported as if they do.”

But it’s not just reporters who will need to shape up. The “toxic dynamic” of journalists, academic press offices, and scientists enabling one another to hype research can be tough to change, and many of our respondents pointed out that there were no easy fixes — though recognition was an important first step.

Some suggested the creation of credible referees that could rigorously distill the strengths and weaknesses of research. (Some variations of this are starting to pop up: The Genetic Expert News Service solicits outside experts to weigh in on big new studies in genetics and biotechnology.) Other respondents suggested that making research free to all might help tamp down media misrepresentations.

Still other respondents noted that scientists themselves should spend more time learning how to communicate with the public — a skill that tends to be under-rewarded in the current system.

“Being able to explain your work to a non-scientific audience is just as important as publishing in a peer-reviewed journal, in my opinion, but currently the incentive structure has no place for engaging the public,” writes Crystal Steltenpohl, a graduate assistant at DePaul University.

Reducing the perverse incentives around scientific research itself could also help reduce overhype. “If we reward research based on how noteworthy the results are, this will create pressure to exaggerate the results (through exploiting flexibility in data analysis, misrepresenting results, or outright fraud),” writes UC Davis’s Simine Vazire. “We should reward research based on how rigorous the methods and design are.”

Or perhaps we should focus on improving science literacy. Jeremy Johnson, a project coordinator at the Broad Institute, argued that bolstering science education could help ameliorate a lot of these problems. “Science literacy should be a top priority for our educational policy,” he said, “not an elective.”

Life as a young academic is incredibly stressful

When we asked researchers what they’d fix about science, many talked about the scientific process itself, about study design or peer review. These responses often came from tenured scientists who loved their jobs but wanted to make the broader scientific project even better.

But on the flip side, we heard from a number of researchers — many of them graduate students or postdocs — who were genuinely passionate about research but found the day-to-day experience of being a scientist grueling and unrewarding. Their comments deserve a section of their own.

Today, many tenured scientists and research labs depend on small armies of graduate students and postdoctoral researchers to perform their experiments and conduct data analysis.

These grad students and postdocs are often the primary authors on many studies. In a number of fields, such as the biomedical sciences, a postdoc position is a prerequisite before a researcher can get a faculty-level position at a university.

This entire system sits at the heart of modern-day science. (A new card game called Lab Wars pokes fun at these dynamics.)

But these low-level research jobs can be a grind. Postdocs typically work long hours and are relatively low-paid for their level of education — salaries are frequently pegged to stipends set by NIH National Research Service Award grants, which start at $43,692 and rise to $47,268 in year three.

Postdocs tend to be hired on for one to three years at a time, and in many institutions they are considered contractors, limiting their workplace protections. We heard repeatedly about extremely long hours and limited family leave benefits.

“Oftentimes this is problematic for individuals in their late 20s and early to mid-30s who have PhDs and who may be starting families while also balancing a demanding job that pays poorly,” wrote one postdoc, who asked for anonymity.

This lack of flexibility tends to disproportionately affect women — especially women planning to have families — which helps contribute to gender inequalities in research. (A 2012 paper found that female job applicants in academia are judged more harshly and are offered less money than males.) “There is very little support for female scientists and early-career scientists,” noted another postdoc.

“There is very little long-term financial security in today’s climate, very little assurance where the next paycheck will come from,” wrote William Kenkel, a postdoctoral researcher in neuroendocrinology at Indiana University. “Since receiving my PhD in 2012, I left Chicago and moved to Boston for a post-doc, then in 2015 I left Boston for a second post-doc in Indiana. In a year or two, I will move again for a faculty job, and that’s if I’m lucky. Imagine trying to build a life like that.”

This strain can also adversely affect the research that young scientists do. “Contracts are too short term,” noted another researcher. “It discourages rigorous research as it is difficult to obtain enough results for a paper (and hence progress) in two to three years. The constant stress drives otherwise talented and intelligent people out of science also.”

Because universities produce so many PhDs but have way fewer faculty jobs available, many of these postdoc researchers have limited career prospects. Some of them end up staying stuck in postdoc positions for five or 10 years or more.

“In the biomedical sciences,” wrote the first postdoc quoted above, “each available faculty position receives applications from hundreds or thousands of applicants, putting immense pressure on postdocs to publish frequently and in high impact journals to be competitive enough to attain those positions.”

Many young researchers pointed out that PhD programs do fairly little to train people for careers outside of academia. “Too many [PhD] students are graduating for a limited number of professor positions with minimal training for careers outside of academic research,” noted Don Gibson, a PhD candidate studying plant genetics at UC Davis.

Laura Weingartner, a graduate researcher in evolutionary ecology at Indiana University, agreed: “Few universities (specifically the faculty advisors) know how to train students for anything other than academia, which leaves many students hopeless when, inevitably, there are no jobs in academia for them.”

Add it up and it’s not surprising that we heard plenty of comments about anxiety and depression among both graduate students and postdocs. “There is a high level of depression among PhD students,” writes Gibson. “Long hours, limited career prospects, and low wages contribute to this emotion.”

A 2015 study at the University of California, Berkeley found that 47 percent of PhD students surveyed could be considered depressed. The reasons for this are complex and can’t be solved overnight. Pursuing academic research is already an arduous, anxiety-ridden task that’s bound to take a toll on mental health.

But as Jennifer Walker explored recently at Quartz, many PhD students also feel isolated and unsupported, exacerbating those issues.

Fixes to keep young scientists in science

We heard plenty of concrete suggestions. Graduate schools could offer more generous family leave policies and child care for graduate students. They could also increase the number of female applicants they accept in order to balance out the gender disparity.

But some respondents also noted that workplace issues for grad students and postdocs were inseparable from some of the fundamental issues facing science that we discussed earlier. The fact that university faculty and research labs face immense pressure to publish — but have limited funding — makes it highly attractive to rely on low-paid postdocs.

“There is little incentive for universities to create jobs for their graduates or to cap the number of PhDs that are produced,” writes Weingartner. “Young researchers are highly trained but relatively inexpensive sources of labor for faculty.”

Some respondents also pointed to the mismatch between the number of PhDs produced each year and the number of academic jobs available.

A recent feature by Julie Gould in Nature explored a number of ideas for revamping the PhD system. One idea is to split the PhD into two programs: one for vocational careers and one for academic careers. The former would better train and equip graduates to find jobs outside academia.

This is hardly an exhaustive list. The core point underlying all these suggestions, however, was that universities and research labs need to do a better job of supporting the next generation of researchers. Indeed, that’s arguably just as important as addressing problems with the scientific process itself. Young scientists, after all, are by definition the future of science.

Weingartner concluded with a sentiment we saw all too frequently: “Many creative, hard-working, and/or underrepresented scientists are edged out of science because of these issues. Not every student or university will have all of these unfortunate experiences, but they’re pretty common. There are a lot of young, disillusioned scientists out there now who are expecting to leave research.”

Science is not doomed.

For better or worse, it still works. Look no further than the novel vaccines to prevent Ebola, the discovery of gravitational waves, or new treatments for stubborn diseases. And it’s getting better in many ways. See the work of meta-researchers who study and evaluate research — a field that has gained prominence over the past 20 years.

But science is conducted by fallible humans, and it hasn’t been human-proofed to protect against all our foibles. The scientific revolution began just 500 years ago. Only over the past 100 years has science become professionalized. There is still room to figure out how best to remove biases and align incentives.

To that end, here are some broad suggestions:

One: Science has to acknowledge and address its money problem. Science is enormously valuable and deserves ample funding. But the way incentives are set up can distort research.

Right now, small studies with bold results that can be quickly turned around and published in journals are disproportionately rewarded. By contrast, there are fewer incentives to conduct research that tackles important questions with robustly designed studies over long periods of time. Solving this won’t be easy, but it is at the root of many of the issues discussed above.

Two: Science needs to celebrate and reward failure. Accepting that we can learn more from dead ends in research and studies that failed would alleviate the “publish or perish” cycle. It would make scientists more confident in designing robust tests and not just convenient ones, in sharing their data and explaining their failed tests to peers, and in using those null results to form the basis of a career (instead of chasing those all-too-rare breakthroughs).

Three: Science has to be more transparent. Scientists need to publish their methods and findings more fully, and share their raw data in ways that are easily accessible and digestible for those who may want to reanalyze or replicate their findings.

There will always be waste and mediocre research, but as Stanford’s Ioannidis explains in a recent paper, a lack of transparency creates excess waste and diminishes the usefulness of too much research.

Again and again, we also heard from researchers, particularly in the social sciences, who felt that cognitive biases in their own work, fed by pressures to publish and advance their careers, were causing science to go off the rails. If more human-proofing and de-biasing were built into the process — through stronger peer review, cleaner and more consistent funding, and more transparency and data sharing — some of these biases could be mitigated.

These fixes will take time, grinding along incrementally — much like the scientific process itself. But the gains humans have made so far using even imperfect scientific methods would have been unimaginable 500 years ago. The gains from improving the process could prove just as staggering, if not more so.

By Julia Belluz, Brad Plumer, and Brian Resnick; originally published on July 14, 2016, on vox.com and can be seen here.
