Legal Writing for Legal Reading!

Archive for the category “Articles By Others”

Millions Are Hounded for Debt They Don’t Owe. One Victim Fought Back, With a Vengeance

Every now and again I come across a fantastic article that warrants posting here; I recently came across one on Bloomberg which I thought was pretty insightful. Be edified.


On the morning a debt collector threatened to rape his wife, Andrew Therrien was working from home, in a house with green shutters on a cul-de-sac in a small Rhode Island town. Tall and stocky, with a buzz cut and a square, friendly face, Therrien was a salesman for a promotions company. He’d always had an easy rapport with people over the phone, and on that day, in February 2015, he was calling food vendors to talk about grocery store giveaways.

Therrien was interrupted midpitch by a call from his wife. She’d gotten a voicemail from an authoritative-sounding man saying Therrien was in some kind of trouble. “I need to verify an address to present you with your formal claim,” the man had said. “Andrew Therrien, you are officially notified.”

A few minutes later, Therrien’s phone buzzed. It was the same guy. He gave his name as Charles Cartwright and said Therrien owed $700 on a payday loan. But Therrien knew he didn’t owe anyone anything. Suspecting a scam, he told Cartwright just what he thought of his scare tactics.

Cartwright hung up, then called back, mad. He said he wanted to meet face-to-face to teach Therrien a lesson.

“I will,” Cartwright said, “and I hope your wife is at home.”

That’s when he made the rape threat.

Therrien got so angry he couldn’t think clearly. He wasn’t going to just let someone menace and disrespect his wife like that. He had to know who this Cartwright guy was, and his employer, too. Therrien wanted to make them pay.

At the same time, he worried that the call might not be a swindle. What if some misinformed loan shark really was coming for them? But Therrien didn’t have any real information he could take to the police.

Then he remembered Cartwright had left a number with his wife.

He dialed.

Somewhere—at the top of a ladder of dirty debt collectors that Therrien would spend the next two years relentlessly climbing—a man named Joel Tucker had no idea what was coming.

Earlier this year, I met Therrien, 33, at a Panera Bread restaurant in central Providence. He had reluctantly agreed to be interviewed, on the condition that I not reveal his hometown or his wife’s name.

Therrien had been caught up in a fraud known as phantom debt, where millions of Americans are hassled to pay back money they don’t owe. The concept is centuries old: Inmates of a New York debtors’ prison joked about it as early as 1800, in a newspaper they published called Forlorn Hope. But systematic schemes to collect on fake debts started only about five years ago. A scheme begins when someone scoops up troves of personal information that are available cheaply online—old loan applications, long-expired obligations, data from hacked accounts—and reformats it to look like a list of debts. Then they make deals with unscrupulous collectors who will demand repayment of the fictitious bills. Their targets are often poor and likely to already be getting confusing calls about other loans. The harassment usually doesn’t work, but some marks are convinced that because the collectors know so much, the debt must be real.

The problem is as simple as it is intractable. In 2012 a call center in India was busted for making 8 million calls in eight months to collect made-up bills. The Federal Trade Commission has since broken up at least 13 similar scams. In most cases, regulators weren’t able to identify the original perpetrators because the data files had been sold and repackaged so many times. Victims have essentially no recourse but to take the abuse.

Most victims, that is. When the scammers started to hound Therrien, he hounded them right back. Obsessed with payback, he spent hundreds of hours investigating the dirty side of debt. By day he was still promoting ice cream brands and hiring models for liquor store tastings. But in his spare time, he was living out a revenge fantasy. He befriended loan sharks and blackmailed crooked collectors, getting them to divulge their suppliers, and then their suppliers above them. In method, Therrien was like a prosecutor flipping gangster underlings to get to lieutenants and then the boss. In spirit, he was a bit like Liam Neeson’s vigilante character in the movie Taken—using unflagging aggression to obtain scraps of information and reverse-engineer a criminal syndicate. Therrien didn’t punch anyone in the head, of course. He was simply unstoppable over the phone.

When Therrien dialed the number Cartwright had left, a woman answered and said she worked for Lakefront Processing Solutions in Buffalo. She’d never heard of Charles Cartwright, though, and implied he must be some kind of freelancer or bounty hunter. Regardless, she said, Therrien could clear everything up by making a payment. Her records indicated that he owed a payday lender called Vista.

Therrien had indeed once taken out a loan, but he didn’t think it was from Vista. He’d been selling copiers at the time, and when his boss stiffed him on a $20,000 commission, he turned to an online lender to make a car payment. Therrien says he paid back the debt promptly. He was offended by the Lakefront woman’s suggestion that he was a deadbeat. “I’m a person who believes in personal friggin’ responsibility,” Therrien tells me. “I signed an agreement. And I fulfilled my obligation.”

On his laptop, Therrien started digging. He found a securities filing saying Vista had merged with a company called That Marketing Solution Inc. After paying a few dollars to an online people-search service, he got its president on the line. “You sold my personal information to a bunch of thugs,” Therrien recalls telling the man. “I want to know why, and I want to know what you’re going to do about it.” Within hours, the company provided a letter saying that Therrien had never borrowed from Vista.

Armed with proof the debt was invalid, Therrien turned back to Lakefront. More searches yielded a corporate parent, owned by two Buffalo men. Therrien called them, then their lawyer. When the lawyer stalled, Therrien bombarded him with more calls, at home and on his cell—enough to put Lakefront off him for good. (The parties eventually reached a confidential settlement, and Lakefront—whose name I found in a public record—declined to comment.)

By the morning after Cartwright’s call, Therrien’s fears of a psycho collector had been assuaged—no one had shown up at his house. But swatting down Lakefront turned out to be just the first round in a game of whack-a-mole. More collection agencies contacted him, his wife, his brother, even his grandparents. The calls made it clear to Therrien that an overarching force was at play. His name had to be getting on these lists somehow.

Each night, after his wife went to sleep, he cracked open his laptop to comb lawsuits, unearth filings, and uproot the owners of the agencies calling him. When he got names, he’d phone them, often surprising them at home, and make clear that he wouldn’t go away until they’d revealed who supplied their debt portfolios. “Here’s the deal,” he’d say. “I don’t really care about you. There’s a million guys like you out there. You’ll never get your money back. You might as well get blood out of it. Tell me what I need to know to put these guys in jail.”

Sometimes, Therrien would make a small payment on the fake debt, then check bank records to see where it went. He found people with convictions for counterfeiting, stock fraud, drug dealing, and child molestation. He started a spreadsheet, Scums.xlsx, to keep track. On weekends he’d harangue them from his couch while watching New England Patriots games. He used persuasion techniques he’d learned selling copiers, some drawn from a book called Getting Into Your Customer’s Head. On the phone, Therrien is a savant. He has an instinct for when to be a friend—one gruff payday lender tells me, sheepishly, that he simply doesn’t know why he speaks with Therrien so frequently—and when to be a bully.

Therrien would threaten to report the collectors to regulators unless they helped him figure out what was going on. “You are either with me in this, or you are against me,” he wrote to one man. Others he tried to shame. “If my intentions are right, I’ll have God on my side,” Therrien emailed one source. “You may not love poor people, but He does.”

The targets were shocked by Therrien’s doggedness. In their world, complaints are common, but most victims give up after being promised they won’t be called again. One shady-debt player tells me he suspected Therrien was an undercover federal investigator because he’d gathered so much information on his business. “It’s an obsession, it’s unbelievable, an outright vigilante crusade,” another says. “It doesn’t seem to equal the harm that was done to him.”

Therrien knew his fixation seemed odd. He didn’t tell his friends and family much about his nighttime activity. But the collectors’ threats brought back feelings of rage and fear that he’d struggled to suppress since childhood. He grew up in working-class Connecticut, where his father was a factory man and his mother had a series of part-time jobs. Therrien says they mistreated him and his brother, and he moved out at 16 after an incident he won’t discuss. He told me he regrets not doing more to protect his brother. (Therrien’s father is dead, and his mother denies she did anything wrong.)

In college, Therrien worked at a J.Crew store, where a customer spotted his talent for sales and offered him a job. Therrien makes a good living now, and he takes pride in being a more responsible person than his parents—paying his bills on time, going to church on Sunday, and taking care of those close to him. “If it’s just about me, I don’t particularly give a f—,” he tells me, with an incongruous laugh. “You call my wife, and you call my grandparents? You just opened up a door that got really f—ing ugly, and now I’m going to make sure that I just ruin your life.”

As more collectors yielded to Therrien’s persistence and talked, he dropped his pursuit of Charles Cartwright, concluding that it was an untraceable alias, and focused on understanding their business. Phantom debt, he learned, is blended with real debt in ways that are almost impossible to untangle.

Americans are currently late on more than $600 billion in bills, according to Federal Reserve research, and almost one person in 10 has a debt in collectors’ hands. The agencies recoup what they can and sell the rest down-market, so that iffier and iffier debt is bought by shadier and shadier individuals. Deception is common. Scammers often sell the same portfolios of debt, called “paper,” to several collection agencies at once, so a legitimate IOU gains illegitimate clones. Some inflate balances, a practice known as “overbiffing.” Others create “redo” lists—people who’ve settled their debt, but will be harassed again anyway. These rosters are actually more valuable, because the targets have proved willing to part with money over the phone. And then there are those who invent debts out of whole cloth.

Portfolios are combined and doctored until they contain thousands of entries. One collector told Therrien that he’d paid cash at a diner for a thumb drive with a database containing Therrien’s name. Some collectors told him they thought the files were partially legitimate; others knew their paper was completely falsified. Yet they continued to trade it, referring to the people they pursued as deadbeats and losers. The more Therrien learned, the more disgusted he grew with everyone involved.

His search for the ur-source rarely traveled in a straight line. For a time, Therrien focused on Buffalo, one of the poorest cities in the U.S. and a hub for the collections industry—home to agencies that work the oldest, cheapest paper. Debt collector is a more common job there than bartender or construction worker, according to the Bureau of Labor Statistics. As Therrien wore down as many Buffalo collectors as he could, one name kept surfacing: Joel Tucker, a former payday-loan mogul from Kansas City, Mo. By the summer of 2015, Therrien was convinced he’d found his guy.

Therrien needed an ally inside the Kansas City racket. He found one in Frampton “Ted” Rowland III, a middle-aged insurance-broker-turned-predatory-lender whose company was listed as the original creditor for one of Therrien’s supposed loans. When Therrien called, Rowland said he was sorry—and kept talking. His life was falling apart. He’d been sued by the FTC over his lending practices, he’d lost all his money, and his wife was leaving him. Therrien sympathized. He sensed Rowland was a good man who’d made a bad choice out of a desire to provide for his family. They started to speak regularly, and Rowland told Therrien he blamed Tucker for everything.

Tucker had created the local industry with his two brothers. Scott, the oldest, was the brains. He’d served time in prison for a scam in which he’d pretended to work for JPMorgan Chase & Co. The middle son, Blaine, was popular and a talented musician. Joel, tall and handsome, was a natural salesman. But when he was 21, he was selling furniture and working at a mini-mart, so hard up that he got arrested for bouncing a $12 check. (The case was dismissed.)

In the mid-1990s, Scott opened a payday-loan store and gave his brothers jobs. Lending money to people who don’t have any is surprisingly profitable. In states where payday stores are legal, such as Missouri, they’re more common than McDonald’s franchises. But in the 15 states where they’re against the law, there are millions of desperate people willing to pay for fast cash and no one to give it to them. Scott pioneered what he thought was a clever legal loophole that would give him access to that market: He created websites that were owned on paper by an American Indian tribe, which could claim sovereign immunity from regulators. Those sites charged as much as $150 interest on a two-week, $500 loan—an annualized interest rate of about 700 percent.
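The annualized figure above can be sanity-checked with simple interest arithmetic; the exact percentage depends on day-count and compounding assumptions, which the article doesn’t state. A minimal sketch in Python (the function name and the non-compounding, 365-day-year assumptions are mine):

```python
# Back-of-the-envelope check of the payday-loan pricing described above.
# Assumption (mine, not the article's): a simple, non-compounding
# annualization over a 365-day year and an exact 14-day loan term.

def annualized_rate(interest, principal, term_days, year_days=365):
    """Simple APR: the per-term rate scaled up to a full year."""
    per_term = interest / principal          # e.g. 150/500 = 30% per term
    return per_term * (year_days / term_days)

apr = annualized_rate(interest=150, principal=500, term_days=14)
print(f"{apr:.0%}")  # prints 782%
```

A straight annualization of a 30 percent fee over a 14-day term lands near 780 percent; a longer assumed term or a different day-count convention pulls the figure down toward the article’s “about 700 percent.”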

The loophole was ridiculously lucrative. Scott’s operation generated $2 billion in revenue from 2003 to 2012. He bought a private jet and spent more than $60 million to start his own professional Ferrari racing team. Around 2005, Joel split to start a company that would allow anyone to get into online payday lending—supplying software to process applications and loans and offering access to a steady stream of customers. All the clients had to bring was money and a willingness to bypass state law. Word spread around Kansas City’s country clubs and private schools that if you wanted to get rich, Joel Tucker was your man.

With Tucker’s help, one property management executive and his son, a general contractor, started a lender that saw $161 million in revenue over eight years. An investor presentation from that period shows that Tucker was personally clearing tens of millions of dollars in profit per year.

One of his clients was Rowland, until the gravy train crashed in 2013. Under pressure from regulators, banks stopped doing business with the sketchiest payday lenders, making it hard for them to issue loans and collect payments. In 2014 federal authorities raided Rowland’s office, and the FBI began investigating the Tucker brothers. Blaine committed suicide by jumping off a parking garage in 2014; Scott was charged two years later with racketeering, and prosecutors called his tribal arrangement a sham. (He declined to comment.)

By the time Therrien came looking for Joel Tucker in the fall of 2015, he’d become a hard man to find. Twice divorced, he was moving from place to place, ducking his creditors. A booking photo from the time when he was briefly imprisoned for failing to show up for court in an unrelated lawsuit shows him with bristly gray hair and dark circles under deep-set blue eyes. Therrien couldn’t find a working phone number for him—not even when he reached his 81-year-old mother, Norma. She claimed not to know where he was.

Therrien’s tactics grew more intense, mirroring those of the debt collectors he loathed. As he had in Buffalo, he developed a network of sources in Kansas City, figuring out who hated whom and playing them off each other. He got a burner app that provided disposable numbers for his smartphone, with any area code he wanted. He called wives, widows, business partners, even a waitress who’d once worked at a restaurant the Tuckers owned. He’d have his sources drive by places where he thought Tucker might be living, to look for his car. He told one broker’s mother-in-law that she should investigate who her daughter was married to. Therrien acknowledges that sometimes he went too far.

By November 2015, he had developed a simple theory. Tucker’s business had given him access to a huge database of people who’d applied for loans—including, just maybe, the one Therrien had taken out in his copier-selling days. What if, when Tucker was broke and needed money, he’d taken applicants’ personal information, invented loan balances, and sold the list as a portfolio of delinquent debt?

Therrien took his hypothesis to the FBI and FTC. His emails were breathless and confusing, but the authorities were patient, taking his calls and talking to him at length. It was clear they knew about Tucker, but Therrien got frustrated by what he saw as inaction. “There are millions of people out there being threatened daily by these actions and I’m doing my part to try and stop it,” he wrote to an FTC investigator in early 2016, begging him to hold Tucker accountable.

January 2016 saw a breakthrough: A former employee of Tucker’s agreed to arrange a call between him and Therrien to clear the air. Therrien couldn’t believe his unseen antagonist was willing to talk. So anxious he couldn’t sit down, he set up a recording device in his home office, put his phone on speaker, and called.

Tucker seemed hyper and defensive, telling Therrien that if any of the portfolios he’d sold now contained phantom debt, they must have been doctored after leaving his hands. “F—ing shame on them,” he said. “Wasn’t me. It had to have been them.”

Therrien was trying to hold back his anger, but his voice wavered. He wanted to impress Tucker, mentioning tidbits he knew about his business. Tucker didn’t understand why Therrien, this guy he’d never met, was so extravagantly invested.

“I’ll tell you why I care,” Therrien said calmly. “I’ll tell you why I care. I believe, and I’m just telling you what I believe, you sold my personal information 21 separate times. I’ve gotten close to 100 f—ing calls, and because I’ve gotten those 100 calls from scumbag collectors that you facilitated, I’m going to make sure that that kind of shit ends now.”

Tucker was incredulous: “You think this is my fault?”

“You got desperate because you spent two dollars for every dollar you had,” Therrien said.

“What are you talking about? Are you trying to micromanage my life? You don’t know jack shit about me.”

“I know what happened. You f—ing stole money from people,” Therrien said. “I’m giving you the opportunity to come clean.”

“I don’t know who you are, Andrew,” Tucker said. “Who are you?”

“A person that you f—ed with too many times.”

When Therrien played the tape for me, I was amazed at how fluently he channeled emotion—his own and Tucker’s—to get what he wanted. Incredibly, by the end of the half-hour call, Tucker was offering to help Therrien collect evidence about crimes committed by other people in the payday-loan business. “We need to get this stuff resolved,” Tucker said on the tape, with a sigh. “’Cause this—it’s not healthy for anybody.”

The two men started talking and texting a few times a week. “I think he has a mental illness that allows him to think he did nothing wrong,” Therrien told me. (Tucker didn’t respond to most of my emailed questions and kept putting off interview requests. “Lies are not stories,” he wrote in one email. He said that any debt he’d sold was legitimate.)

Tucker’s denials made Therrien hate him more, but Therrien masked his feelings to keep the conversation going. The one-year anniversary of his quest was approaching, and he wanted real evidence of wrongdoing—something Tucker couldn’t deny and officials couldn’t ignore.

Therrien soon obtained two crucial sets of documents to that end. In March 2016 he flew to California to meet a debt broker, who handed over some contracts Tucker had signed. Separately, Therrien received an email from the manager of a collection agency, to whose conscience he’d spent weeks appealing. The email, whose subject line read “Have faith in the good in heart,” included actual phantom-debt files, with names and Social Security numbers. The metadata yielded a new name: Rob Harsh, Tucker’s IT guy. (The author of the email died of a drug overdose a few months later.)

In May 2016, Therrien emailed his discoveries to the FTC. A lawyer replied right away: “Andrew, we need to talk about this.” Therrien also gave his intel to some private lawyers who were going after Tucker in Texas. They contacted Harsh, and in August 2016 he submitted an affidavit to the court. Harsh, who declined to comment for this story, testified that Tucker had asked him to manipulate a database of almost 8 million payday-loan applications, writing in a made-up lender and adding an amount owed of $300 for each person.

Therrien had been right all along.

Vindication didn’t make Therrien happy, not even when the FTC suit against Rowland’s company took a karmic swerve that drew in Tucker, directing him to return $30 million he’d received in ill-gotten profits from the business. Tucker told the court he was broke.

Meanwhile, Rowland was spiraling. He confided in Therrien that he was considering suicide, and one day that summer he called Therrien to say goodbye. “Don’t do anything stupid,” Therrien texted him afterward. “I may be callous with you lately but I still care and don’t want anything bad to happen.” Therrien told me he’d informed the police of Rowland’s plan and that they had intervened. But that October, Rowland shot himself. His death added to Therrien’s outrage at Tucker and other predatory lenders like him who hadn’t faced any real legal consequences.

Finally, in December 2016, the FTC sued Tucker for selling phantom debt. According to the regulator, everything had happened pretty much as Therrien imagined: Tucker had invented more than 7.7 million fake debts and sold them to a series of middlemen for $4.2 million. This September, a judge ruled for the agency, ordering Tucker to pay back that money on top of the $30 million he already owed.

The FTC has never credited Therrien, and Michael Tankersley, an agency lawyer, declined to discuss their interactions. But Tankersley told me that Harsh and the California broker were two key sources of information establishing Tucker’s wrongdoing.

Therrien, as usual, was unsatisfied. He was still getting calls from collectors, for one thing. And he felt that if he’d done a better job investigating, Tucker would be facing criminal charges—not a civil fine he’d never end up paying. Therrien has stayed in touch with the FBI’s Kansas City office. An FBI spokeswoman declines to say whether Tucker is being investigated, but three of his associates told me that agents had contacted them about his debt sales.

After the ruling against Tucker, Therrien heard from him for the first time in months, and they started talking again. Amid their conversations, which were recorded, Tucker’s brother, Scott, was convicted on all 14 charges he faced. Without directly asking Therrien to drop his vendetta, Tucker seemed to be pleading for mercy. “I’ve f—ing had enough harm done,” he said. “I’ve lost a brother. Got a brother going to prison. Put it this way, Andrew. I’m tired, buddy. I’m f—ing tired.”

“I’m tired too,” Therrien replied, “because I’m still getting harassed by these motherf—ers.”

By Zeke Faux, originally published in Bloomberg, and can be found here.



Polarization and the Counter-Factual Crisis

Every now and again I come across a fantastic article that warrants posting here; I recently came across one in Splice Today by Dr. Crispin Sartwell, my old philosophy professor from back in my Penn State days, which I thought was pretty insightful. Be edified.


It seems impossible for people at the moment to grasp that I’m not on their side and also not on the other side. So, for example, when I predict that Trump will be impeached and declare that it will be richly deserved, I’m taken to be a Democrat. Or when I describe and sneer exhaustedly at the way CNN and The New York Times have transformed themselves from news organizations into obsessive anti-Trump ranters, it’s obvious that I’m a Republican.

I don’t even understand this line of thought: a society in which you could infer someone’s whole politics from his prediction of whether Trump will be impeached is entirely irrational, because the whole edifice of anyone’s political beliefs is completely irrelevant to the factual claim. If political ideology correlates across the population with whether people think Trump will be impeached, the right conclusion is that everyone is operating on the same standard of evidence: wishful thinking.

Be that as it may, I think that in 2017 CNN and the Times and many others compromised their own mission, as they themselves purport to understand it. Watching Jake Tapper or Don Lemon in a righteous lather every afternoon and evening: you might as well be watching Sean Hannity. But Hannity knows who he is: a right-wing polemicist. Tapper and Lemon are still purporting to be news anchors. The Times did somewhat better with a news/opinion firewall. But it should understand, as well, that when more or less all its columnists are unanimous anti-Trump verbal abusers and its every editorial hits the same tone of indiscriminate outrage, it’s making its institutional culture evident to its readers. I know and they know that they’d do anything to destroy Trump. That may even be their duty as citizens or something as they understand it. But it’s incompatible with the values and purposes of their profession.

Also, they might want to consider that Charles Blow, Timothy Egan, and Paul Krugman are, as verbal abusers, completely incompetent: excruciatingly repetitive, utterly predictable, indistinguishable from one another or thousands of others of their ilk. You could sell me a year of insults directed at the President, but you’ve got to write better than that.

These organizations may think they can take a virtually unanimous political position as individuals and on their opinion pages and also function as neutral arbiters of the facts and gatekeepers of relevant information. But this distinction doesn’t necessarily come naturally to readers, and the organizations should reflect that they’re giving readers many good reasons to suspect that numerous aspects of their news coverage are consciously or semi-consciously devoted to inculcating their shared ideology, or to motivating rather than informing people.

To all these difficulties, they’ve managed to add a layer of continual self-righteous defense of their own neutrality. One of the biggest stories they covered last year was Trump’s supposed repression of and misunderstanding of journalism. Ruth Marcus in The Washington Post: “How scary it is to have a president who derides us as ‘the enemy of the American people.’ To have a cable news network that inflames his worst instincts and recklessly flings suggestions of a ‘coup’ by special counsel Robert S. Mueller III. To have nearly half the public, egged on by Trump’s bellowing about ‘fake news,’ believing that reporters simply invent negative stories about the president.” (How scary is it that Fox News exists? She slipped that one in there, but it’s the mirror image of Trump’s attacks on CNN, for example.) Like many alleged news stories last year, the underlying events in Marcus’ crisis consisted of little but a series of tweets.

Indeed, it couldn’t be more obvious that the self-image of these newsrooms as part of the political resistance infects their news coverage every day. There might’ve been 30 tweets last year that were greeted as actual crises, covered as though they were acts of terrorism or natural disasters. The fake news is the enemy of the people. Mika had a facelift. NFL players shouldn’t kneel. Jeff Sessions is “beleaguered.” In the end, almost none had any effect on anything, and yet they filled the pages and airwaves day after day.

An astonishing example of the extreme slant (and we might throw in The Washington Post, MSNBC, and the network news broadcasts) is one of the biggest stories of 2017, which has dominated the news on and off since May: the counter-factual Mueller firing crisis. What if Trump were to fire Mueller? You can ask every guest that question; you can desperately probe for leaks to the effect that he may have mentioned it. In other words, you can cover a story that hasn’t happened, and that you have no particular reason to think will happen, heavily for a year. It’s an interpretation of Trump’s personality, or a personal takedown. It purports to be objective coverage (of the non-facts, mind you).

Perhaps Trump will fire Mueller at some point. That would be the time to write your “Mueller-firing-constitutional-crisis” story, if you regard your primary function as reporting the facts. Or you could, instead, blame Trump for events that haven’t occurred and call that news.

Media organizations have to think about readership numbers, ratings, and page views. I think they’re doing well in this regard; their total anti-Trump obsession is paying off in that sense, at least at the moment. But they should also consider what sort of operations they’re becoming; more and more, CNN represents a mirror image of Fox, and The New York Times of Breitbart. Marcus, who is one of the better and more reasonable columnists working today in many ways, pretty much says right there that the job of the Post is to compensate for the existence of Fox. But to do that, all of these publications are becoming more like Fox every day.

Well, it seems like we all are, so now you’re probably reading me as a Republican. If so, there’s just going to be no point in trying to communicate.

Originally published on January 1, 2018 and can be found here.

I’m a Pediatrician. Here’s What I Did When a Little Boy Patient Said He Was a Girl.

“Congratulations, it’s a boy!” Or, “Congratulations, it’s a girl!”

As a pediatrician for nearly 20 years, that’s how many of my patient relationships began. Our bodies declare our sex.

Biological sex is not assigned. Sex is determined at conception by our DNA and is stamped into every cell of our bodies. Human sexuality is binary. You either have a normal Y chromosome, and develop into a male, or you don’t, and you will develop into a female. There are at least 6,500 genetic differences between men and women. Hormones and surgery cannot change this.

An identity is not biological; it is psychological. It has to do with thinking and feeling. Thoughts and feelings are not biologically hardwired. Our thinking and feeling may be factually right or factually wrong.

If I walk into my doctor’s office today and say, “Hi, I’m Margaret Thatcher,” my physician will say I am delusional and give me an anti-psychotic. Yet, if instead, I walked in and said, “I’m a man,” he would say, “Congratulations, you’re transgender.”

If I were to say, “Doc, I am suicidal because I’m an amputee trapped in a normal body, please cut off my leg,” I would be diagnosed with body integrity identity disorder. But if I walk into that doctor’s office and say, “I am a man, sign me up for a double mastectomy,” my physician will. See, if you want to cut off a leg or an arm, you’re mentally ill, but if you want to cut off healthy breasts or a penis, you’re transgender.

No one is born transgender. If gender identity were hardwired in the brain before birth, identical twins would have the same gender identity 100 percent of the time. But they don’t.

I had one patient we’ll call Andy. Between the ages of 3 and 5, he increasingly played with girls and “girl toys” and said he was a girl. I referred the parents and Andy to a therapist. Sometimes mental illness of a parent or abuse of the child are factors, but more commonly, the child has misperceived family dynamics and internalized a false belief.

In the middle of one session, Andy put down the toy truck, held onto a Barbie, and said, “Mommy and Daddy, you don’t love me when I’m a boy.” When Andy was 3, his sister with special needs was born, and required significantly more of his parents’ attention. Andy misperceived this as “Mommy and Daddy love girls. If I want them to love me, I have to be a girl.” With family therapy Andy got better.

Today, Andy’s parents would be told, “This is who Andy really is. You must ensure that everyone treats him as a girl, or else he will commit suicide.”

As Andy approaches puberty, the experts would put him on puberty blockers so he can continue to impersonate a girl.

It doesn’t matter that we’ve never tested puberty blockers in biologically normal children. It doesn’t matter that when blockers are used to treat prostate cancer in men, and gynecological problems in women, they cause problems with memory. We don’t need testing. We need to arrest his physical development now, or he will kill himself.

But this is not true. Instead, when supported in their biological sex through natural puberty, the vast majority of gender-confused children get better. Yet, we chemically castrate gender-confused children with puberty blockers. Then we permanently sterilize many of them by adding cross-sex hormones, which also put them at risk for heart disease, strokes, diabetes, cancers, and even the very emotional problems that the gender experts claim to be treating.

P.S. If a girl who insists she is male has been on testosterone daily for one year, she is cleared to get a bilateral mastectomy at age 16. Mind you, the American Academy of Pediatrics recently came out with a report that urges pediatricians to caution teenagers about getting tattoos because they are essentially permanent and can cause scarring. But this same AAP is 110 percent in support of 16-year-old girls getting a double mastectomy, even without parental consent, so long as the girl insists that she is a man, and has been taking testosterone daily for one year.

To indoctrinate all children from preschool forward with the lie that they could be trapped in the wrong body disrupts the very foundation of a child’s reality testing. If they can’t trust the reality of their physical bodies, who or what can they trust? Transgender ideology in schools is psychological abuse that often leads to chemical castration, sterilization, and surgical mutilation.

By Michelle Cretella and published on December 11, 2017 in The Daily Signal and can be seen here.

The First Sexual Revolution

Every now and again I come across a fantastic article that warrants posting here; I recently came across one in First Things which, I thought, was pretty insightful. Be edified.


Epictetus was the sort of figure that only the Roman Empire could have produced. He was born in the Phrygian hills of Anatolia in the middle of the first century. Enslaved and brought to the capital, he served in the household of the freedman Epaphroditos. Epaphroditos, in turn, was in the direct employ of the emperors. Epictetus has told us nothing about his circumstances in these years, but he must have had a close-up view of the swarm of peoples and ideas that passed through the corridors of power. We do not know whether Epictetus noticed, or cared, when in A.D. 64 the emperor Nero fastened blame for the Great Fire on a tiny band of religious eccentrics known as “the people of the anointed one.” We do know that he met the Roman aristocrat and Stoic philosopher Musonius Rufus and fell under the master’s spell. Epictetus earned his freedom and lived out his days, many of them in exile, as a Stoic sage. The former slave from Phrygia was a sensational teacher, sought out by the sons of the gentry from across the empire; he shaped the best minds of a generation.

Epictetus sought to attain the Stoic ideal of “apathy,” a majestic indifference to all things without moral value, including pain and death. For the Stoic, the lover of virtue does his duty—to family, to city—without concern for wealth or status. True freedom, taught the ex-slave, is not a legal condition, but a kind of moral Zen achieved by emancipation from the passions, including the pangs of sexual desire. Musonius Rufus seems to have gone so far as to advise against all sex for the purpose of pleasure, even within marriage. Epictetus, too, reckoned the conquest of physical desire an integral part of the philosopher’s task. But sexual desire claimed no special place of distinction in the wide array of the world’s enticements. “Learn to use wine with refinement,” Epictetus said, “and to hold back from some little lass or a little flatcake.” The precise tone of Stoic advice in sexual matters is nowhere clearer than in his Stoic Handbook. “Remain as pure as you can before marriage with regard to sexual pleasures, and insofar as they are engaged in, let them be lawful. Yet do not become oppressive or reproachful toward those who do indulge, and do not hold forth all the time on your own restraint.”

Of course, when it comes to sex in the ancient world, the moral decency of the imperial Stoics is not what immediately leaps to mind. We are more apt to imagine modern scenes of Roman debauchery (“I’d like a sit-down orgy for forty”) and the naughty pictures on lamps and living room walls dug up in places like Pompeii. But the tame austerity of the philosophers and the ebullient eroticism of the streets coexisted in easy proximity. In fact, they shared a hidden premise. Both presumed that sex was just sex, one instinctual need among others, to be channeled in certain fundamental ways.

For sages and sensualists alike, there were consensus “no-go” zones, where the rules were hard and fast. Expectations of chastity for respectable women, whether maidens or wives, were clear and inflexible. Female purity was heavily guarded. Men were governed by entirely different rules. The code of masculinity abhorred any hint of feminine passivity, in the public square and in the bedchamber alike. The stern threats of public law hovered in the background of these norms. But male sexual restraint was not a prerequisite of dynastic purity, and men were not restrained by the protocols that regulated female chastity. For instance, there is not even a word for “male virgin” in Latin or Greek. It is a little misleading to say that Roman sexual culture had a double standard. There were, very frankly, two entirely different sets of standards of erotic behavior, precisely because sexual morality was determined by the imperatives of reproducing the family and the city, and the bodies of men and women had different roles in that endeavor. The perpetuation of socially honorable households, generation after generation, was the enduring mental frame of public sexual morality. Stoic morality, hard-edged as it might at times be, ran along the grain of this world.

The Roman Empire that nurtured Stoic moralists such as Musonius and Epictetus was really an agglomeration of societies connected by bustling roads and busy sea-lanes. It was a sprawling, polyglot, and agrarian empire. The empire was home to a galaxy of cities—some one thousand of them, most of them smaller than their proud marble ruins might suggest. A grievously poor and unlettered peasantry constituted the silent majority, and some 10 or 15 percent of the empire’s inhabitants had the misfortune of finding themselves in bondage, as chattel slaves whose bodies could as well have been inert matter in the moral imagination of ancient philosophers. Life expectancy at birth was in the mid-twenties. The evanescence of all life turned eros into a divine blessing to be enjoyed in proper season. But the grim realities of Roman life expectancy also made reproduction urgent. Epictetus’s short list of human duties encompassed “citizenship, marriage, child production, piety to God, care of one’s parents.” Sex was a civic duty.

This was the scene onto which the Christians came loudly striding. The Christian movement’s sexual demands were not just austere or unusual. They were jolting, and deliberately so. The apostolic generation did not pour out of the Levant onto the open roads of the empire with anything like a detailed packet of sexual rules. Paul’s letters show us that Christian sexual morality was settled on the go, adapting the gospel’s searing ethic of radical love and interior purity to the realities of life in the towns of the empire. Paul’s letter to the fledgling Christian community in Corinth provides the clearest example. It is the most direct entrée we have to the confrontation between the nascent Christian Church and the habits and half-articulate expectations that governed sexual life in a Greek or Roman city.

First Corinthians shows that Paul’s message was heard in the most contradictory ways, even by sympathetic ears. Some of the new adherents to the faith had drawn startlingly libertine conclusions from Paul’s language of Christian freedom: “All things are lawful for me.” This was not altogether surprising. In the society from which they came, sexual ethics were not invested with much more significance than dietary guidelines. The desire for some “little lass” and the desire for a “little flatcake” were treated with the same moral gravity. So it stood to reason that just as the Gentile Christians were freed from the magnificently intricate regulations of the Jewish dietary code, so too they might expect a certain laxness in erotic matters.

Paul stops this line of thinking in its tracks. His letter unleashes a barrage of ideas and metaphors that came to define the boundaries of Christian sexual orthodoxy. He could have ruled narrowly—along the lines that sex is a moral category like violence or greed, not a merely ethnic cultic norm like rules about shellfish and the Sabbath. He could have enjoined Gentile Christians to obey the old Jewish codes, which regulated sex in detailed ways. Instead, he offered a conceptual framework that, while drawing some of its language and logic from familiar sources, offered an entirely fresh way of grounding sexual morality. His model of human sexuality flowed from a much grander vision than any we find in pagan antiquity. Sexual morality was part of the proclamation of a half-hidden story of God’s restoration of the created cosmos.

The keystone of Paul’s reaction to the Corinthians was his steadfast opposition to “fornication,” porneia in Greek. The word’s underlying associations are rich and esoteric, and we must approach the term with due caution. Consider that the Latin word for it, fornicatio, seems to have been invented for no other purpose than to capture all the fugitive associations of the original. Fornication in English is a churchy word, with little place in the vernacular. (As I tell my students, it is impossible to imagine “fornication” in a text message or tweet.) The root of the word in Greek is “prostitute,” pornê. In ordinary Greek from the classical period onward, the meaning of porneia was prostitution. Before Jews and Christians took hold of the term, the exclusive meaning of porneia was prostitution in the active sense, from the pimp’s or prostitute’s perspective. Porneia was the business of trading sex for money, not the act of patronizing a brothel—and certainly not premarital or extramarital sex tout court.

Yet Paul has something much broader in mind than running a cathouse when he uses the term in his letters. The generic translation in English, “sexual immorality,” won’t do. The locution is too anodyne and reflects a failure of nerve in the face of the intimidating range of meanings that porneia takes in Paul’s usage. It fails to shed light on what the word meant for Paul and leaves a fog around the origins of a distinctive Christian sexual ethics.

Paul’s use of porneia fuses two very different frames of reference, one biblical and the other drawn from the experience of life in the Greco-Roman towns where the apostle preached. In the Old Testament, prostitution (zenuth in the Hebrew, which became porneia in the Septuagint) became a metaphor for idolatry. It is the visceral image of Israel’s betrayal of her exclusive covenantal relationship with Yahweh, and it appears frequently in the Old Testament. The English “harlotry” may still capture some of the abrasive sound of this evocation of covenant infidelity. Closer still: Idolaters are spiritual sluts. The metaphor is easily reversed, so sexual sin can be considered a form of religious betrayal. The prostitute, especially the non-Israelite harlot, who had many lovers, threatens to lure men into idolatry, the worship of many gods. In the Old Testament, sexual and covenantal infidelity are blurred, and thus the imperative of fidelity also has fused meaning. Religious matters of supreme significance merge with and elevate what the surrounding cultures considered matters of worldly propriety.

Paul not only summoned the high-stakes history of porneia in Israel’s Scripture but also deployed it in a way that made the word’s resonance unmistakable. In his usage, prostitution was a synecdoche for the many forms of erotic permissiveness in the culture around him. Moving in a society where it was totally unexceptional—and casually expected—for men to indulge their sexual desires with prostitutes, slaves, and others who lacked social honor, Paul forbade it. Not only that, he proclaimed sexual congress to be a mysterious union of the flesh, something of transcendent significance. The body is a temple, a site of sacred communication. Sexual sin, therefore, is a kind of pollution, as scandalous and disruptive as the desecration of a holy sanctum. We are a long way from the rigorous but pragmatic counsels of Epictetus. The Stoic urged self-control, on the grounds that physical pleasure was a dangerous distraction from the virtuous life. Paul does so because sex implicates us in something with sacred significance.

Paul concedes in his Letter to the Corinthians that marriage is a legitimate safeguard. Because of the lures of the city, the followers of Christ would be allowed to marry. But Paul’s words are hesitant and qualified. Ideally, he writes, followers of Christ would be as he is—in a state of sexual abstinence (possibly but not certainly lifelong celibacy). Marriage is permissible, but only by way of concession, not command. It seems an implicit rejection, or at least a fundamental qualification, of the original imperative “Be fruitful and multiply.” Yet, for Paul, marriage does look back to the original acts of creation. It requires a level of mutual fidelity between partners that mirrors the original congress of God and the human creature. This emphasis on fidelity was alien to the patriarchal culture in which he proclaimed the gospel. With these few words, Paul charted the future course of Christian sexual discipline: Virginity as the highest mode of life and marriage as second best, yet also infused with a divine significance that jealously reserves sexual union for itself.

It is easy enough, and not entirely misleading, to say that Paul’s thought was compressed by the heavy weight of the apocalyptic atmosphere. He wanted his churches to live devotedly toward the coming age, during the small slice of time remaining. But that never led ancient Christians to doubt the larger significance of Paul’s austere counsels. After all, as the time between Christ’s ascension and return lengthened, the entire orthodox tradition in early Christianity chose not to write off Paul’s rigorism as a distortion of his apocalyptic lens; quite the opposite, it tended to accentuate the more extreme and anti-erotic possibilities latent in his thought. The possibility of full-blown Encratism stalked much of early Christian history. (Auden’s “Roman Wall Blues” is about right: “Piso’s a Christian, he worships a fish; / There’d be no kissing if he had his wish.”) In the second century, Clement of Alexandria held fast to the view that within marriage, only sex solely for the purpose of procreation was permissible. Not until the Jovinianist controversy was extinguished in the late fourth century, and Augustine’s tour de force “Of the Good of Marriage” was written, did it become completely clear within Christianity that marriage could be a genuine good and not merely some kind of lesser evil.

Over this same span of centuries, the Church gradually worked out another revolutionary implication of Paul’s message: Sexual morality would require moral agency for all persons, even those whose bodies were beyond the field of vision for ancient thinkers. In today’s terms, Christian sexual morality was inclusive. To be sure, Paul hardly announced the legal emancipation of the unfree. But already (so I have argued, though not all agree) Paul’s ban on porneia restricted one of the slave-owner’s most ordinary prerogatives: sexual access to his slaves. We can trace a dawning awareness in the early Church, unlike anything in pagan antiquity, of the sexual integrity of all persons. By the fifth century, Christian emperors were actually taking proactive (if still, by our standards, limited) measures to protect the bodily integrity of vulnerable women. The heightened place of sexuality in the overarching structure of morality, the respect for the human dignity of all persons, and the insistence on the value of the transcendent and sacred over the secular and the civic—these all went hand in hand in the growth of Christian culture.

Paul’s prohibition on fornication, his highly qualified acceptance of the practical necessity of marriage, and the liberatory movement of Christian individualism form a coherent ethic: For the early Christians, sexual morality was woven inseparably into their whole effort to live rightly in the world. Sex, by its essence, is entangled in the most fundamental questions about the nature of the self and its relation to God. Once launched, the revolution was not easily contained, and when the early Christians tore sexual morality away from the familiar outlines provided by the civic background, the repercussions were not confined to one discrete section of the moral code. Sex came to occupy a place in the foreground of moral instruction in a way that it simply never had in Judaism, or even the most stringent pagan philosophies. The conspicuous austerity of the early Christians caught the eye of early observers, including the Greek doctor Galen. In the competitive marketplace of Roman imperial religion, the way in which Paul loaded questions of sexual morality with dramatic salvific significance gave the moral teaching of this small but vocal movement a particular flavor. The proclamation of the gospel and this strange, spiritualized rigorism were inseparable.

The Christian movement did not come, in the first place, to overthrow the Stoic sages, but rather the folk and civic polytheism that ruled in the hearths and streets of the ancient Mediterranean. Despite the importance of the philosophical schools in shaping literate morality, traditional paganism prevailed. The Roman Empire was not an age of spiritual decadence, as once believed. Christianity did not triumph over a tired or limping polytheism. The old gods confidently ruled. The cities thrummed with their sounds, and the streets were fogged with altar smoke. Later Roman Alexandria, we happen to know, had some 2,500 temples. So it is no accident that the Roman Empire gave birth to the genre of deeply religious literature we call the Greek romance. The romances may be as close as we can get to the warm, earthy spirit of mature paganism in the centuries when Christianity rose to prominence. These long, prose stories of love—of eros, erotic love—start to appear in the first century. They celebrate the idea that two young people, a boy and a girl of high station and uncommon beauty, can fall in love with each other and overcome the obstacles thrown in their way. In the end, all tensions are resolved, as reliably as the stars move across the heavens. The lovers wed and are physically united. Sex is a blessing, the source of all generation and renewal.

These romances proclaim that we belong to the world; we are ordered toward its endless pattern of sexual consummation and new life. The presiding god is Eros, the son of Aphrodite, a god of this world if ever there was one. In Daphnis and Chloe, a second-century pastoral romance that Goethe advised rereading every year, the innocent, natural desire of the two protagonists is likened to the same lush power of nature that impelled the herds of rams and ewes in their season of love. The springs of desire well up from deep inside us and sweep us through life on their raging currents. Sex is an immanent, divine force running through the cycles of time. In these narratives, the whole course of vegetable life—desire, love, marriage, sex, childbirth—constitute who we truly are. We belong here, to the earth, to the benevolent gods, and to the dancing cosmos.

Despite its charms, the romance told Christians exactly what they were not. They did not belong in this world. It is telling that early Christians shaped their imaginations with the diffuse body of legends known as the apocryphal acts of the apostles (whence come such integral stories as the quo vadis and upside-down crucifixion of Peter). These stories are, despite their low literary register, clever anti-romances. In these stories, the Christian apostle often rends a convert away from sex and marriage. Usually, the apostle convinces the beautiful wife of a powerful Roman to believe in Christ, and even to renounce conjugal relations. The Christians in these narratives are ruthlessly hunted by a ruling order that is not benevolent. The assault on physical eros throws ice water in the face of those who walk through life oblivious to the false promises of this world. The stories end not in marriage and the renewal of life but in abstinence and spectacular, sanguinary acts of dying. The renunciation of sex is integral to the apocryphal acts, not as a discrete moral commandment, but as a way of orienting the self in the world. In the early Christian imagination, sexual renunciation turns humanity away from the transient cosmos and toward the eternal reality of divine truth. For the early Christians, a rigorous sexual morality was integral to its spiritual project, which was to move through a world that was always ebbing away and toward the immaterial and transcendent God.

It was not the austere sexual morality itself that set Christians apart from the world so much as its central place within an effort to redefine how humanity ought to live in a created but fallen order. This transforming vision was something new and altogether estranging—in antiquity and ever since. Michel Foucault was neither the first nor the last to look at the rigors of Stoic virtue and see antecedents for Christian austerity. But appearances of continuity are deceptive. However close they were in time, place, and occasionally idiom, what seem like subtle differences between Epictetus and Paul in fact point toward an impassable chasm. The Christian revolution in sexual morality was a departure from, not an acceleration of, Stoic asceticism. And it was a radical break from the warm and earthy pagan eroticism of the kind we find in romance. Christianity put forward a new cosmology, a new ethics, and a new vision of human solidarity, in short, a new view of human destiny that makes sex far more important. Sexual morality is integral to the Christian vision of redemption.

The experience of the early Church might suggest that there have always been, and will always be, uneasy fault lines between the Church and the culture around it. These fault lines have become more visible and dramatic in recent decades. In A Secular Age, Charles Taylor describes the 1960s as the “hinge moment” in the longer arc of modern secularization. The triumph of the secular, by Taylor’s account, does not mean the simple abolition or erasure of the divine from the modern world. Instead, it is a change in the background conditions of all beliefs. The self is no longer imagined as journeying toward final redemption. Human existence is pictured within an indifferent and infinite universe made up of what T. S. Eliot called the “vacant interstellar spaces.”

In this model, sex was, and is, the crux of secularization. According to Taylor, the 1960s saw the sensibility of romanticism broadened into a mass phenomenon. By romanticism he does not mean the dynamic of the ancient Greek romances, a fusion of erotic desire with a fecund, living cosmos. Modern romanticism is more anthropocentric. Romanticism in this sense means an ethic of individual expressivism in accord with codes of authenticity and freedom. Unable to recover eros as worldly god—and unmoored from a shared, public culture whose picture of the universe has a measure of enchantment and meaning—we are left with eros as a private prerogative.

Secularization is not just the scraping away of a religious crust and the return to a pristine condition. (Indeed, it is worth observing that the social assumptions of pre-Christian sexual morality, such as the casual exploitation of the bodies of non-persons, seem incomprehensible precisely because the Christian revolution so completely swept away that old order.) The dethroning of a broadly Christian public morality in the last generations has seen the revival of eros, but not a return to a pre-Christian framework. Eros is no longer a god that weaves us mysteriously into the fabric of an enchanted cosmos. The Christians killed that god dead. Nor does modern sexuality bear any trace of the Stoic sensibility, in which the needs of the city provide moral order to the desires of the individual subject. The power of eros simply is.

Thus, the modern Church finds itself in an odd position. It is surrounded by a culture that bears some of its own values, but they are shorn of their enchanted origins and presented as neutral axioms of the universe. Ironically, some of the most unabashedly secular models of human sexuality also share with Christianity a belief in the central place of the erotic within the architecture of morality. This is utterly alien to Epictetus, and for that matter to most religions outside the Christian (and to some extent the Jewish) tradition. An avowed secularist is as likely as a Christian activist to proclaim the universal dignity of all individuals and insist upon the individual’s freedom. And yet, however moralized the domain of sex might be, the vast, vacant universe seems to have left only authenticity and consent as the shared, public principles of sexual morality. These axioms derive from a picture of the universe different from the one imagined by Paul, who always envisioned the individual—including the sexual self—within the larger story of the gospel and its picture of a created cosmos in the throes of restoration.

And so we live in a fractured culture, with a shared background of meaning that is as thin as gossamer and yet whose values bear the ghostly presence of ancient religious revolutions. The friction between old codes and new ones is not about restraint versus liberty, repression versus authenticity, any more than the difference between Stoic sexual morality and the Pauline view can be described in terms of strict versus lax. In our secular age, just as in the early years of Christianity, differences in sexual morality are really about the clash between different pictures of the universe and the place of the individual within it.

The modern sexual revolution, Taylor writes, has “a tremendous impact on churches whose stance in recent centuries has laid so much stress on these issues [sexual ethics], and where piety has often been identified with a very stringent sexual code.” That is putting it delicately. For stance, read core. For recent centuries, all the way back. For piety, orthodoxy. In the early Church, sexual morality was not baggage, afterthought, or accident. It was the plane on which Christians tried to live in the world, but not of it. Which is why adapting this sexual morality to the modern age has proven as simple as extricating a taut thread from a spider’s web.

By Kyle Harper and originally published in First Things in January 2018 and can be seen here.

Liberalism and the Wrath of the Privileged Whites

Every now and again I come across a fantastic article that warrants posting here; I recently came across one in First Things which, I thought, was pretty insightful. Be edified.


Each of our parties is acting crazy, thanks to its own elites. The Republicans are acting crazy thanks to the narcissism and entitlement of the right-leaning business and professional classes. The Democrats are acting crazy thanks to racial politics—specifically, the angry racial politics of upper-middle-class white liberals.

One irony of recent American politics is that the exodus of wage-earning whites from the Democratic party has tended to make the rump of white Democratic voters more affluent, better educated, and more doctrinaire leftist. According to Pew, about 35 percent of Democrats and Democratic-leaners are white “solid liberals.”

Solid liberals are left-of-center on both economic and social issues, and they are pessimistic about American society. One presumes that they are pessimistic about other people in American society. The solid liberals are also the best educated and most affluent segment of the Democratic party’s factions.

The weakness of solid liberals is that they are electorally nothing, absent alliances with less affluent, less ideologically rigid, and less secular groups. This creates all kinds of complications. The largely white and affluent solid liberals are notionally egalitarian and opposed to white privilege, but they include many of the most privileged whites in America. How can they participate in a coalition that is largely poorer, less educated, and darker-skinned than they are, while maintaining their comfortable position (both economically and socially)?

One solution would be for them not to maintain their privileged position, but instead to prioritize the interests of the poorer, less secular, and more moderate parts of their coalition. But that hasn’t happened so far. An overwhelming majority of Hispanics opposes increasing immigration, but their position is entirely unrepresented in the Democratic party. It seems possible that the Democrats will throw away a winnable Senate seat in Alabama because they have nominated a pro-abortion extremist against a Republican who has been credibly accused of sexual assault and ephebophilia (probably better that you don’t look that up).

Even ten years ago, Democrats were willing to nominate candidates who were culturally conservative (or at least willing to pretend to be culturally conservative) in order to replace conservative Republicans with somewhat-more-liberal Democrats. What changed?

The first thing was the alleged coming of the “emerging Democratic majority,” which was supposed to be brought about by demographic change and a larger nonwhite share of the electorate. This Democratic majority has been a little late in arriving, but that isn’t the only important part of the story.

Many liberal whites wanted to be rid of the culturally conservative, economically liberal, working-class white voters whom Democrats had courted in the previous decade. Upper-middle-class whites were embarrassed by these people. After all these centuries of white privilege, they never managed to get into a good school—or even a state college—and now they were making demands about trade and immigration.

One of the themes that emerges from Shattered (a chronicle of the Clinton campaign) is that the Clinton operation didn’t want to make a strong play for working-class white voters in swing states. The Clintonites thought these voters were disposable. It was left to Barack Obama to point out that he had done better than Clinton in many heavily working-class white areas, because he had done those voters the courtesy of treating them as though they were as important as any other American.

In one sense, it was easy for Obama. He didn’t risk being called a racist by playing to working-class whites. This is the dilemma facing affluent white liberals: They want to lead a coalition in favor of equality, but their identity places them under suspicion.

And they do want to lead. Hillary Clinton’s slogan was “I’m with Her.” That is why the loudest yelps about white privilege come from pale-skinned students at the most expensive liberal arts colleges. The strategy is to make the bad whites a justification for the privilege and power of the good, solidly liberal whites. See? We are using our position to make America a better place (and living rather well in the meantime).

This helps explain the biggest rhetorical difference between Barack Obama and Hillary Clinton. Obama’s rhetorical vision included all but the most right-wing of Americans. Millions of working-class whites felt that Obama was talking about them, too, when he said, “There’s not a liberal America and a conservative America—there’s the United States of America. There’s not a black America and white America and Latino America and Asian America; there’s the United States of America.”

And many of those same Americans knew that Hillary Clinton was talking about them when she ranted about the “racist, sexist, homophobic, xenophobic, Islamophobic—you name it” deplorables.

The Trump administration, for all of its obnoxiousness, seems most to have irritated affluent white liberals, rather than the nonwhite and relatively poor who are supposedly Trump’s great targets. Part of this is ideology, of course, since affluent white liberals are the most extreme segment of the Democratic coalition. But part of it is the rage of a privileged class.

By Pete Spiliakos; originally published in First Things on December 7, 2017, and can be seen here.



Something is wrong on the internet

Every now and again I come across a fantastic article that warrants posting here; I recently came across one on Medium which, I thought, was pretty insightful. Be edified.


I’m James Bridle. I’m a writer and artist concerned with technology and culture. I usually write on my own blog, but frankly I don’t want what I’m talking about here anywhere near my own site. Please be advised: this essay describes disturbing things and links to disturbing graphic and video content. You don’t have to read it, and are advised to take caution exploring further.

As someone who grew up on the internet, I credit it as one of the most important influences on who I am today. I had a computer with internet access in my bedroom from the age of 13. It gave me access to a lot of things which were totally inappropriate for a young teenager, but it was OK. The culture, politics, and interpersonal relationships which I consider to be central to my identity were shaped by the internet, in ways that I have always considered to be beneficial to me personally. I have always been a critical proponent of the internet and everything it has brought, and broadly considered it to be emancipatory and beneficial. I state this at the outset because thinking through the implications of the problem I am going to describe troubles my own assumptions and prejudices in significant ways.

One of the thus-far hypothetical questions I ask myself frequently is how I would feel about my own children having the same kind of access to the internet today. And I find the question increasingly difficult to answer. I understand that this is a natural evolution of attitudes which happens with age, and at some point this question might be a lot less hypothetical. I don’t want to be a hypocrite about it. I would want my kids to have the same opportunities to explore and grow and express themselves as I did. I would like them to have that choice. And this belief broadens into attitudes about the role of the internet in public life as a whole.

I’ve also been aware for some time of the increasingly symbiotic relationship between younger children and YouTube. I see kids engrossed in screens all the time, in pushchairs and in restaurants, and there’s always a bit of a Luddite twinge there, but I am not a parent, and I’m not making parental judgments for or on anyone else. I’ve seen family members and friends’ children plugged into Peppa Pig and nursery rhyme videos, and it makes them happy and gives everyone a break, so OK.

But I don’t even have kids and right now I just want to burn the whole thing down.

Someone or something or some combination of people and things is using YouTube to systematically frighten, traumatise, and abuse children, automatically and at scale, and it forces me to question my own beliefs about the internet, at every level. Much of what I am going to describe next has been covered elsewhere, although none of the mainstream coverage I’ve seen has really grasped the implications of what seems to be occurring.

To begin: Kids’ YouTube is definitely and markedly weird. I’ve been aware of its weirdness for some time. Last year, there were a number of articles posted about the Surprise Egg craze. Surprise Eggs videos depict, often at excruciating length, the process of unwrapping Kinder and other egg toys. That’s it, but kids are captivated by them. There are thousands and thousands of these videos and thousands and thousands, if not millions, of children watching them.

From the article linked above:

The maker of my particular favorite videos is “Blu Toys Surprise Brinquedos & Juegos,” and since 2010 he seems to have accrued 3.7 million subscribers and just under 6 billion views for a kid-friendly channel entirely devoted to opening surprise eggs and unboxing toys. The video titles are a continuous pattern of obscure branded lines and tie-ins: “Surprise Play Doh Eggs Peppa Pig Stamper Cars Pocoyo Minecraft Smurfs Kinder Play Doh Sparkle Brilho,” “Cars Screamin’ Banshee Eats Lightning McQueen Disney Pixar,” “Disney Baby Pop Up Pals Easter Eggs SURPRISE.”

As I write this he has done a total of 4,426 videos and counting. With so many views — for comparison, Justin Bieber’s official channel has more than 10 billion views, while full-time YouTube celebrity PewDiePie has nearly 12 billion — it’s likely this man makes a living as a pair of gently murmuring hands that unwrap Kinder eggs. (Surprise-egg videos are all accompanied by pre-roll ads, and sometimes mid-video ads as well.)

That should give you some idea of just how odd the world of kids online video is, and that list of video titles hints at the extraordinary range and complexity of this situation. We’ll get into the latter in a minute; for the moment know that it’s already very strange, if apparently pretty harmless, out there.

Another huge trope, especially among the youngest children, is nursery rhyme videos.

Little Baby Bum, which made the above video, is the 7th most popular channel on YouTube. With just 515 videos, they have accrued 11.5 million subscribers and 13 billion views. Again, there are questions as to the accuracy of these numbers, which I’ll get into shortly, but the key point is that this is a huge, huge network and industry.

On-demand video is catnip to both parents and to children, and thus to content creators and advertisers. Small children are mesmerised by these videos, whether it’s familiar characters and songs, or simply bright colours and soothing sounds. The length of many of these videos — one common video tactic is to assemble many nursery rhyme or cartoon episodes into hour+ compilations — and the way that length is marketed as part of the video’s appeal, points to the amount of time some kids are spending with them.

YouTube broadcasters have thus developed a huge number of tactics to draw parents’ and children’s attention to their videos, and the advertising revenues that accompany them. The first of these tactics is simply to copy and pirate other content. A simple search for “Peppa Pig” on YouTube in my case yielded “About 10,400,000 results” and the front page is almost entirely from the verified “Peppa Pig Official Channel”, while one is from an unverified channel called Play Go Toys, which you really wouldn’t notice unless you were looking out for it:

Play Go Toys’ channel consists of (I guess?) pirated Peppa Pig and other cartoons, videos of toy unboxings (another kid magnet), and videos of, one supposes, the channel owner’s own children. I am not alleging anything bad about Play Go Toys; I am simply illustrating how the structure of YouTube facilitates the delamination of content and author, and how this impacts on our awareness and trust of its source.

As another blogger notes, one of the traditional roles of branded content is that it is a trusted source. Whether it’s Peppa Pig on children’s TV or a Disney movie, whatever one’s feelings about the industrial model of entertainment production, they are carefully produced and monitored so that kids are essentially safe watching them, and can be trusted as such. This no longer applies when brand and content are disassociated by the platform, and so known and trusted content provides a seamless gateway to unverified and potentially harmful content.

(Yes, this is the exact same process as the delamination of trusted news media on Facebook feeds and in Google results that is currently wreaking such havoc on our cognitive and political systems and I am not going to explicitly explore that relationship further here, but it is obviously deeply significant.)

A second way of increasing hits on videos is through keyword/hashtag association, which is a whole dark art unto itself. When some trend, such as Surprise Egg videos, reaches critical mass, content producers pile onto it, creating thousands and thousands more of these videos in every possible iteration. This is the origin of all the weird names in the list above: branded content and nursery rhyme titles and “surprise egg” all stuffed into the same word salad to capture search results, sidebar placement, and “up next” autoplay rankings.

A striking example of the weirdness is the Finger Family videos (harmless example embedded above). I have no idea where they came from or the origin of the children’s rhyme at the core of the trope, but there are at least 17 million versions of this currently on YouTube, and again they cover every possible genre, with billions and billions of aggregated views.

Once again, the view numbers of these videos must be taken under serious advisement. A huge number of these videos are essentially created by bots and viewed by bots, and even commented on by bots. That is a whole strange world in and of itself. But it shouldn’t obscure that there are also many actual children, plugged into iPhones and tablets, watching these over and over again — in part accounting for the inflated view numbers — learning to type basic search terms into the browser, or simply mashing the sidebar to bring up another video.

What I find somewhat disturbing about the proliferation of even (relatively) normal kids videos is the impossibility of determining the degree of automation which is at work here; how to parse out the gap between human and machine. The example above, from a channel called Bounce Patrol Kids, with almost two million subscribers, shows this effect in action. It posts professionally produced videos, with dedicated human actors, at the rate of about one per week. Once again, I am not alleging anything untoward about Bounce Patrol, which clearly follows in the footsteps of pre-digital kid sensations like their fellow Australians The Wiggles.

And yet, there is something weird about a group of people endlessly acting out the implications of a combination of algorithmically generated keywords: “Halloween Finger Family & more Halloween Songs for Children | Kids Halloween Songs Collection”, “Australian Animals Finger Family Song | Finger Family Nursery Rhymes”, “Farm Animals Finger Family and more Animals Songs | Finger Family Collection – Learn Animals Sounds”, “Safari Animals Finger Family Song | Elephant, Lion, Giraffe, Zebra & Hippo! Wild Animals for kids”, “Superheroes Finger Family and more Finger Family Songs! Superhero Finger Family Collection”, “Batman Finger Family Song — Superheroes and Villains! Batman, Joker, Riddler, Catwoman” and on and on and on. This is content production in the age of algorithmic discovery — even if you’re a human, you have to end up impersonating the machine.

Other channels do away with the human actors to create infinite reconfigurable versions of the same videos over and over again. What is occurring here is clearly automated. Stock animations, audio tracks, and lists of keywords being assembled in their thousands to produce an endless stream of videos. The above channel, Videogyan 3D Rhymes — Nursery Rhymes & Baby Songs, posts several videos a week, in increasingly byzantine combinations of keywords. They have almost five million subscribers — more than double Bounce Patrol — although once again it’s impossible to know who or what is actually racking up these millions and millions of views.

I’m trying not to turn this essay into an endless list of examples, but it’s important to grasp how vast this system is, and how indeterminate its actions, process, and audience. It’s also international: there are variations of Finger Family and Learn Colours videos for Tamil epics and Malaysian cartoons which are unlikely to pop up in any Anglophone search results. This very indeterminacy and reach is key to its existence, and its implications. Its dimensionality makes it difficult to grasp, or even to really think about.

We’ve encountered pretty clear examples of the disturbing outcomes of full automation before — some of which have been thankfully leavened with a dark kind of humour, others not so much. Much has been made of the algorithmic interbreeding of stock photo libraries and on-demand production of everything from t-shirts to coffee mugs to infant onesies and cell phone covers. The above example, available until recently on Amazon, is one such case, and the story of how it came to occur is fascinating and weird but essentially comprehensible. Nobody set out to create phone cases with drugs and medical equipment on them, it was just a deeply weird mathematical/probabilistic outcome. The fact that it took a while to notice might ring some alarm bells however.

Likewise, the case of the “Keep Calm and Rape A Lot” t-shirts (along with the “Keep Calm and Knife Her” and “Keep Calm and Hit Her” ones) is depressing and distressing but comprehensible. Nobody set out to create these shirts: they just paired an unchecked list of verbs and pronouns with an online image generator. It’s quite possible that none of these shirts ever physically existed, were ever purchased or worn, and thus that no harm was done. Once again though, the people creating this content failed to notice, and neither did the distributor. They literally had no idea what they were doing.

What I will argue, on the basis of these cases and of those I’m going to describe further, is that the scale and logic of the system is complicit in these outputs, and requires us to think through their implications.

(Also again: I’m not going to dig into the wider social implications of such processes outside the scope of what I am writing about here, but it’s clear that one can draw a clear line from examples such as these to pressing contemporary issues such as racial and gender bias in big data and machine intelligence-driven systems, which require urgent attention but in the same manner do not have anything resembling easy or even preferable solutions.)

Let’s look at just one video among the piles of kid videos, and try to parse out where it comes from. It’s important to stress that I didn’t set out to find this particular video: it appeared organically and highly ranked in a search for ‘finger family’ in an incognito browser window (i.e. it should not have been influenced by previous searches). This automation takes us to very, very strange places, and at this point the rabbithole is so deep that it’s impossible to know how such a thing came into being.

Once again, a content warning: this video is not inappropriate in any way, but it is decidedly off, and contains elements which might trouble anyone. It’s very mild on the scale of such things, but. I describe it below if you don’t want to watch it and head down that road. This warning will recur.

The above video is entitled Wrong Heads Disney Wrong Ears Wrong Legs Kids Learn Colors Finger Family 2017 Nursery Rhymes. The title alone confirms its automated provenance. I have no idea where the “Wrong Heads” trope originates, but I can imagine, as with the Finger Family Song, that somewhere there is a totally original and harmless version that made enough kids laugh that it started to climb the algorithmic rankings until it made it onto the word salad lists, combining with Learn Colors, Finger Family, and Nursery Rhymes, and all of these tropes — not merely as words but as images, processes, and actions — to be mixed into what we see here.

The video consists of a regular version of the Finger Family song played over an animation of character heads and bodies from Disney’s Aladdin swapping and intersecting. Again, this is weird but frankly no more than the Surprise Egg videos or anything else kids watch. I get how innocent it is. The offness creeps in with the appearance of a non-Aladdin character —Agnes, the little girl from Despicable Me. Agnes is the arbiter of the scene: when the heads don’t match up, she cries, when they do, she cheers.

The video’s creator, BABYFUN TV (screenshot above), has produced many similar videos. As many of the Wrong Heads videos as I could bear to watch all work in exactly the same way. The character Joy from Inside Out weeps through a Smurfs and Trolls head swap. It goes on and on. I get the game, but the constant overlaying and intermixing of different tropes starts to get inside you. BABYFUN TV only has 170 subscribers and very low view rates, but then there are thousands and thousands of channels like this. Numbers in the long tail aren’t significant in the abstract, but in their accumulation.

The question becomes: how did this come to be? The “Bad Baby” trope also present on BABYFUN TV features the same crying. While I find it disturbing, I can understand how it might provide some of the rhythm or cadence or relation to their own experience that actual babies are attracted to in this content, although it has been warped and stretched through algorithmic repetition and recombination in ways that I don’t think anyone actually wants to happen.

Screenshot from Toy Freaks channel

[Edit, 21/11/2017: Following the publication of this article, the Toy Freaks channel was removed by YouTube as part of a widespread removal of contentious content.]

Toy Freaks is a hugely popular channel (68th on the platform) which features a father and his two daughters playing out — or in some cases perhaps originating — many of the tropes we’ve identified so far, including “Bad Baby”, (previously embedded above). As well as nursery rhymes and learning colours, Toy Freaks specialises in gross-out situations, as well as activities which many, many viewers feel border on abuse and exploitation, if not cross the line entirely, including videos of the children vomiting and in pain. Toy Freaks is a YouTube verified channel, whatever that means. (I think we know by now it means nothing useful.)

As with Bounce Patrol Kids, however you feel about the content of these videos, it feels impossible to know where the automation starts and ends, who is coming up with the ideas and who is roleplaying them. In turn, the amplification of tropes in popular, human-led channels such as Toy Freaks leads to them being endlessly repeated across the network in increasingly outlandish and distorted recombinations.

There’s a second level of what I’m characterising as human-led videos which are much more disturbing than the mostly distasteful activities of Toy Freaks and their kin. Here is a relatively mild, but still upsetting example:

A step beyond the simply pirated Peppa Pig videos mentioned previously are the knock-offs. These too seem to teem with violence. In the official Peppa Pig videos, Peppa does indeed go to the dentist, and the episode in which she does so seems to be popular — although, confusingly, what appears to be the real episode is only available on an unofficial channel. In the official timeline, Peppa is appropriately reassured by a kindly dentist. In the version above, she is basically tortured, before turning into a series of Iron Man robots and performing the Learn Colours dance. A search for “peppa pig dentist” returns the above video on the front page, and it only gets worse from here.

[Edit, 21/11/2017: the original video cited here has now been removed as part of YouTube’s recent purge, although many similar videos remain on the platform.]

Disturbing Peppa Pig videos, which tend towards extreme violence and fear, with Peppa eating her father or drinking bleach, are, it turns out, very widespread. They make up an entire YouTube subculture. Many are obviously parodies, or even satires of themselves, in the pretty common style of the internet’s outrageous, deliberately offensive kind. All the 4chan tropes are there, the trolls are out, we know this.

In the example above, the agency is less clear: the video starts with a trollish Peppa parody, but later syncs into the kind of automated repetition of tropes we’ve seen already. I don’t know which camp it belongs to. Maybe it’s just trolls. I kind of hope it is. But I don’t think so. Trolls don’t cover the intersection of human actors and more automated examples further down the line. They’re at play here, but they’re not the whole story.

I suppose it’s naive not to see the deliberate versions of this coming, but many are so close to the original, and so unsignposted — like the dentist example — that many, many kids are watching them. I understand that most of them are not trying to mess kids up, not really, even though they are.

I’m trying to understand why, as plainly and simply troubling as it is, this is not a simple matter of “won’t somebody think of the children” hand-wringing. Obviously this content is inappropriate, obviously there are bad actors out there, obviously some of these videos should be removed. Obviously too this raises questions of fair use, appropriation, free speech and so on. But reports which simply understand the problem through this lens fail to fully grasp the mechanisms being deployed, and thus are incapable of thinking through its implications in totality, and responding accordingly.

The New York Times, headlining their article on a subset of this issue “On YouTube Kids, Startling Videos Slip Past Filters”, highlights the use of knock-off characters and nursery rhymes in disturbing content, and frames it as a problem of moderation and legislation. YouTube Kids, an official app which claims to be kid-safe but is quite obviously not, is the problem identified, because it wrongly engenders trust in users. An article in the British tabloid The Sun, “Kids left traumatised after sick YouTube clips showing Peppa Pig characters with knives and guns appear on app for children” takes the same line, with an added dose of right-wing technophobia and self-righteousness. But both stories take at face value YouTube’s assertions that these results are incredibly rare and quickly removed: assertions utterly refuted by the proliferation of the stories themselves, and the growing number of social media posts, largely by concerned parents, from which they arise.

But as with Toy Freaks, what is concerning to me about the Peppa videos is how the obvious parodies and even the shadier knock-offs interact with the legions of algorithmic content producers until it is completely impossible to know what is going on. (“The creatures outside looked from pig to man, and from man to pig, and from pig to man again; but already it was impossible to say which was which.”)

Here’s what is basically a version of Toy Freaks produced in Asia (screenshot above). Here’s one from Russia. I don’t really want to use the term “human-led” any more about these videos, although they contain all the same tropes and actual people acting them out. I no longer have any idea what’s going on here and I really don’t want to and I’m starting to think that that is kind of the point. That’s part of why I’m starting to think about the deliberateness of this all. There is a lot of effort going into making these. More than spam revenue can generate — can it? Who’s writing these scripts, editing these videos? Once again, I want to stress: this is still really mild, even funny stuff compared to a lot of what is out there.

Here are a few things which are disturbing me:

The first is the level of horror and violence on display. Some of the times it’s troll-y gross-out stuff; most of the time it seems deeper, and more unconscious than that. The internet has a way of amplifying and enabling many of our latent desires; in fact, it’s what it seems to do best. I spend a lot of time arguing for this tendency, with regards to human sexual freedom, individual identity, and other issues. Here, and overwhelmingly it sometimes feels, that tendency is itself a violent and destructive one.

The second is the levels of exploitation, not of children because they are children but of children because they are powerless. Automated reward systems like YouTube algorithms necessitate exploitation in the same way that capitalism necessitates exploitation, and if you’re someone who bristles at the second half of that equation then maybe this should be what convinces you of its truth. Exploitation is encoded into the systems we are building, making it harder to see, harder to think and explain, harder to counter and defend against. Not in a future of AI overlords and robots in the factories, but right here, now, on your screen, in your living room and in your pocket.

Many of these latest examples confound any attempt to argue that nobody is actually watching these videos, that these are all bots. There are humans in the loop here, even if only on the production side, and I’m pretty worried about them too.

I’ve written enough, too much, but I feel like I actually need to justify all this raving about violence and abuse and automated systems with an example that sums it up. Maybe after everything I’ve said you won’t think it’s so bad. I don’t know what to think any more.

[Edit, 21/11/2017: the original video cited here has now been removed as part of YouTube’s recent purge, although many similar videos remain on the platform. The video used animations from the Grand Theft Auto game series overlaid with cartoon characters assaulting, killing, and burying one another.]

This video, BURIED ALIVE Outdoor Playground Finger Family Song Nursery Rhymes Animation Education Learning Video, contains all of the elements we’ve covered above, and takes them to another level. Familiar characters, nursery tropes, keyword salad, full automation, violence, and the very stuff of kids’ worst dreams. And of course there are vast, vast numbers of these videos. Channel after channel after channel of similar content, churned out at the rate of hundreds of new videos every week. Industrialised nightmare production.

For the final time: There is more violent and more sexual content like this available. I’m not going to link to it. I don’t believe in traumatising other people, but it’s necessary to keep stressing it, and not dismiss the psychological effect on children of things which aren’t overtly disturbing to adults, just incredibly dark and weird.

A friend who works in digital video described to me what it would take to make something like this: a small studio of people (half a dozen, maybe more) making high volumes of low quality content to reap ad revenue by tripping certain requirements of the system (length in particular seems to be a factor). According to my friend, online kids’ content is one of the few alternative ways of making money from 3D animation because the aesthetic standards are lower and independent production can profit through scale. It uses existing and easily available content (such as character models and motion-capture libraries) and it can be repeated and revised endlessly and mostly meaninglessly because the algorithms don’t discriminate — and neither do the kids.

These videos, wherever they are made, however they come to be made, and whatever their conscious intention (i.e. to accumulate ad revenue) are feeding upon a system which was consciously intended to show videos to children for profit. The unconsciously-generated, emergent outcomes of that are all over the place.

To expose children to this content is abuse. We’re not talking about the debatable but undoubtedly real effects of film or videogame violence on teenagers, or the effects of pornography or extreme images on young minds, which were alluded to in my opening description of my own teenage internet use. Those are important debates, but they’re not what is being discussed here. What we’re talking about is very young children, effectively from birth, being deliberately targeted with content which will traumatise and disturb them, via networks which are extremely vulnerable to exactly this form of abuse. It’s not about trolls, but about a kind of violence inherent in the combination of digital systems and capitalist incentives. It’s down to that level of the metal.

This, I think, is my point: The system is complicit in the abuse.

And right now, right here, YouTube and Google are complicit in that system. The architecture they have built to extract the maximum revenue from online video is being hacked by persons unknown to abuse children, perhaps not even deliberately, but at a massive scale. I believe they have an absolute responsibility to deal with this, just as they have a responsibility to deal with the radicalisation of (mostly) young (mostly) men via extremist videos — of any political persuasion. They have so far shown absolutely no inclination to do this, which is in itself despicable. However, a huge part of my troubled response to this issue is that I have no idea how they can respond without shutting down the service itself, and most systems which resemble it. We have built a world which operates at scale, where human oversight is simply impossible, and no manner of inhuman oversight will counter most of the examples I’ve used in this essay. The asides I’ve kept in parentheses throughout, if expanded upon, would allow one, with very little effort, to rewrite everything I’ve said to be not about child abuse, but about white nationalism, about violent religious ideologies, about fake news, about climate denialism, about 9/11 conspiracies.

This is a deeply dark time, in which the structures we have built to sustain ourselves are being used against us — all of us — in systematic and automated ways. It is hard to keep faith with the network when it produces horrors such as these. While it is tempting to dismiss the wilder examples as trolling, of which a significant number certainly are, that fails to account for the sheer volume of content weighted in a particularly grotesque direction. It presents many and complexly entangled dangers, including that, just as with the increasing focus on alleged Russian interference in social media, such events will be used as justification for increased control over the internet, increasing censorship, and so on. This is not what many of us want.

I’m going to stop here, saying only this:

What concerns me is not just the violence being done to children here, although that concerns me deeply. What concerns me is that this is just one aspect of a kind of infrastructural violence being done to all of us, all of the time, and we’re still struggling to find a way to even talk about it, to describe its mechanisms and its actions and its effects. As I said at the beginning of this essay: this is being done by people and by things and by a combination of things and people. Responsibility for its outcomes is impossible to assign but the damage is very, very real indeed.

By James Bridle; published on Medium on November 6, 2017, and can be found here.

Inside America’s Largest Religious Revival You Know Nothing About [the religion of Athletica]

Every now and again I come across a fantastic article that warrants posting here; I recently came across one in The Federalist which, I thought, was pretty insightful. Be edified.


For decades, demographic studies have indicated the steady decline of religion in America, but new measures suggest that, on the contrary, at least one religion in America is alive and well, thriving in every community, and claiming devoted adherents in nearly every household.

This new religious revival has remained under the radar in large part because its adherents do not claim any religious attachment to this social institution, but by every measure of behaviors typically associated with religion, it is deceitful to label it as anything less. Although it shies away from adopting an overarching organization or name for itself, for the purposes of this study, it will be considered under the name Athletica.

Forget One Service Per Week. We Have Daily Meetings

Whereas in traditional American Christianity followers would regularly meet together once or twice a week (a timetable most now find unduly onerous), members of Athletica gather four, five, six, or even seven days a week. Despite the significant time demands, the families of adherents dutifully and unflinchingly keep these meeting commitments and accept as normal the stringent penalties imparted to those who miss a gathering—penalties usually enacted by limiting the devotee’s rights of participation in important group ceremonies.

Nor are the youngest members of Athletica uninitiated in their family’s devotion. Athletica parents regularly begin teaching their children its basic skills as soon as they are able to toddle, and some begin their benevolent indoctrination well before that by dressing their infants in tiny versions of the liturgical vestments. By age four or five, their parents have already catechized most of these youngsters in the basic tenets of Athletica, though this pious education will continue to deepen through daily family conversations, oral and written retellings of important historical moments in Athletica, and inclusion in the essential Athletica ceremonies.

Eager young zealots of elementary age and upward often relish memorizing not only the many Athletica rules, but also masses of historical information about specific persons and events. Although it is hard to believe such memorization would be undertaken voluntarily, there is no trace of a “drill and kill” mentality about this phenomenon. These youngsters apparently love this imparted faith enough that they simply cannot help trying to absorb everything about it that they can, and they especially find pleasure in learning of the great heroes of Athletica’s past, whom they inevitably long to emulate.

Start ‘Em Young for Optimal Results

Athletica will, for instance, provide child-sized items when physical stature would otherwise prevent participation, but in most ways teaches children through full involvement. The astonishing result of interacting with its children through the sometimes daunting vocabulary and directives of the adult adherents is that these youngest disciples prove all the more eager to learn the tenets of Athletica and to mature into full membership.

Around the time of elementary or middle school age, children deemed physically and mentally ready begin to adopt the ascetic lifestyle of Athletica. Depending upon the particular denominational strain, parents will insist either that children rise well before sunup to practice for several hours in Athletica training or that such practice be dutifully performed immediately after school. Some adherents do both.

Late evening hours and weekends are reserved for the equivalent of local and regional worship services, at which Athletica adherents gather corporately, following intricate and time-honored liturgies that can often appear as a tangle of somewhat arbitrary rules to the uninitiate, but which perceptibly rouse Athletica followers into heights of emotional experience.

So Dedicated, This Religion Affects Food and Sleep

Those most dedicated to this life will carefully regulate their sleep to ensure supreme attunement and awareness in the practice of Athletica. Attaining sufficient sleep in the midst of such a demanding schedule can be difficult, but most adherents find that short nights due to Athletica events can be compensated for by using times formerly set aside for other religious activities (e.g. Sunday mornings) to gain extra hours of sleep.

Of course, not every child demonstrates the natural ability to progress to the highest levels of Athletica. However, as in churches of yore, there is room in this religion not only for those who will carry out the priestly duties but also for devoted laity. While tens of thousands participate actively in the life of Athletica at the local level, hundreds of thousands participate in less all-consuming fashion at the national level, transferring hope for their own advancement into hope for vicarious vindication through the advancement of others.

Long before more traditional religious groups thought of using media as a means for finding and retaining converts, Athletica had a well-established presence on radio, television, and the internet. Its devotees are therefore long accustomed to setting aside Sabbath times when Athletica events will be broadcast and to treating these devotional times as sacred. Despite the physical disconnectedness of these media-based believers, such Athletica followers display an astonishing level of knowledge, fervor, and devotion. The younger members of such “observer” Athletica families sometimes even surpass their “participant” peers in sheer memorization of knowledge.

This is clearly reflected in the eagerness of the laity to clothe themselves fittingly for their observance of Athletica rituals. Far from the prevailing Christian drift toward an “anything goes” mentality of dress for religious occasions, Athletica followers put surprising amounts of care and expense into the clothes they wear, even when participating in their own homes via televised events.

The Dark Side of This Popular Religion

This popular religion does have a dark side. Alarmingly, it is not uncommon for those striving to advance through the ranks of Athletica to suffer chronic pain or serious injury from their devout exertions. However, it is a tribute to the depth of conviction Athletica elicits in most of its followers that this does not deter them from persisting in their daily routines. Almost universally, the response to such suffering is that it is simply part of the affliction that must be borne in the Athletica life, and that they endure such pain because of the glory for which they hope.

For, like every religion, Athletica does offer its devotees a form of hope. In comparison to more traditional religions that typically offer extravagant rewards (e.g., life after death, forgiveness of terrible sins) to virtually any willing convert, Athletica is a more stringent and elitist sect. Its promise is of financial gain and personal glory, but only for the most elect.

Of the tens of thousands who hope for financial reward through Athletica, only 2 percent will be granted their desire. Of those who work to earn a spot in the highest ranks of the Athletica hierarchy, hardly more than one out of a thousand will find their hope fulfilled. Interestingly, though, Athletica adherents commonly convince themselves that they (or more often, their children) will be among the favored few, despite statistical data to the contrary, and many who hope for the financial gain accompanying such advancement fail to recognize the more significant financial outlays they have unquestioningly offered up on its proverbial altars.

Underlying all these devoted practices is the recurring theme that for its faithful, Athletica is more than a religion to attend to for a couple hours per week. It is a complete lifestyle and way of thinking. A rudimentary calculation reveals that Athletica devotees typically spend anywhere from five to ten times as many hours dedicating themselves to religious learning and activity as the typical weekly church-goer. Whereas Christians now tend to compartmentalize their religious and non-religious activity, Athletica adherents purposefully infuse their beliefs into every aspect of their lives, from finances to scheduling to family entertainment.

Unquestionably, the ongoing success of Athletica is rooted in its centrality to the lives of its devotees. As Christianity fades in the West, dying from a desire to be like everything else except itself, Athletica has risen to ascendancy as the self-assured, pervasive cultural influence. Where the Judeo-Christian world has laid down its mantle, Athletica has picked it up, unwittingly following the directives of the Hebrew Bible to teach the tenets of the faith to their children, “Talking of them when you are sitting in your house, and when you are walking by the way, and when you lie down, and when you rise.”

This depth of enculturation is most certainly the key to the trenchant, growing success of Athletica, which—by all reasonable evidence—has already replaced its rival religions in most American homes.

By Heather Smith; published on November 10, 2017, in The Federalist and can be found here.

The Illusionist

Every now and again I come across a fantastic article that warrants posting here; I recently came across one in The New Atlantis which, I thought, was pretty insightful. Be edified.


Daniel Dennett’s latest book marks five decades of majestic failure to explain consciousness.

David Bentley Hart

It seems to me that we have come this way before. Some of the signposts are new, perhaps — “Bacteria,” “Bach,” and so on — but the scenery looks very familiar, if now somewhat overgrown, and it is hard not to feel that the path is the same one that Daniel Dennett has been treading for five decades. I suppose it would be foolish to expect anything else. As often as not, it is the questions we fail to ask — and so the presuppositions we leave intact — that determine the courses our arguments take; and Dennett has been studiously avoiding the same set of questions for most of his career.

In a sense, the entire logic of From Bacteria to Bach and Back (though not, of course, all the repetitious details) could be predicted simply from Dennett’s implicit admission on page 364 that no philosopher of mind before Descartes is of any consequence to his thinking. The whole pre-modern tradition of speculation on the matter — Aristotle, Plotinus, the Schoolmen, Ficino, and so on — scarcely qualifies as prologue. And this means that, no matter how many times he sets out, all his journeys can traverse only the same small stretch of intellectual territory. After all, Descartes was remarkable not because, as Dennett claims, his vision was especially “vivid and compelling” — in comparison to the subtleties of earlier theories, it was crude, bizarre, and banal — but simply because no one before him had attempted systematically to situate mental phenomena within a universe otherwise understood as a mindless machine. It was only thus that the “problem” of the mental was born.

The modern scientific novum organum — as Francis Bacon dubbed the new rationality that he hoped would replace classical and medieval sophistries — achieved its first systematic expression in the seventeenth century. With its ambition to perfect a method of pure induction, it proposed to the imagination the idea of a “real” physical world hidden behind the apparent one, an occult realm of pure material causation, utterly devoid of all the properties of mind, most especially intentional purposes. From at least the time of Galileo, a division was introduced between what Wilfrid Sellars called the “manifest image” and the “scientific image” — between, that is, the phenomenal world we experience and that imperceptible order of purely material forces that composes its physical substrate. And, at least at first, the divorce was amicable, inasmuch as phenomenal qualities were still granted a certain legitimacy; they were simply surrendered to the custody of the immaterial soul. But mind was now conceived as an exception within the frame of nature.

In the pre-modern vision of things, the cosmos had been seen as an inherently purposive structure of diverse but integrally inseparable rational relations — for instance, the Aristotelian aitia, which are conventionally translated as “causes,” but which are nothing like the uniform material “causes” of the mechanistic philosophy. And so the natural order was seen as a reality already akin to intellect. Hence the mind, rather than an anomalous tenant of an alien universe, was instead the most concentrated and luminous expression of nature’s deepest essence. This is why it could pass with such wanton liberty through the “veil of Isis” and ever deeper into nature’s inner mysteries.

The Cartesian picture, by contrast, was a chimera, an ungainly and extrinsic alliance of antinomies. And reason abhors a dualism. Moreover, the sciences in their modern form aspire to universal explanation, ideally by way of the most comprehensive and parsimonious principles possible. So it was inevitable that what began as an imperfect method for studying concrete particulars would soon metastasize into a metaphysics of the whole of reality. The manifest image was soon demoted to sheer illusion, and the mind that perceived it to an emergent product of the real (which is to say, mindless) causal order.

Here, in this phantom space between the phenomenal and physical worlds, is just where the most interesting questions should probably be raised. But Dennett has no use for those. He is content with the stark choice with which the modern picture confronts us: to adopt either a Cartesian dualism or a thoroughgoing mechanistic monism. And this is rather a pity, since in fact both options are equally absurd.

Not that this is very surprising. After five decades, it would be astonishing if Dennett were to change direction now. But, by the same token, his project should over that time have acquired not only more complexity, but greater sophistication. And yet it has not. For instance, he still thinks it a solvent critique of Cartesianism to say that interactions between bodies and minds would violate the laws of physics. Apart from involving a particularly doctrinaire view of the causal closure of the physical (the positively Laplacian fantasy that all physical events constitute an inviolable continuum of purely physical causes), this argument clumsily assumes that such an interaction would constitute simply another mechanical exchange of energy in addition to material forces.

In the end, Dennett’s approach has remained largely fixed. Rather than a sequence of careful logical arguments, his method remains, as ever, essentially fabulous: That is, he constructs a grand speculative narrative, comprising a disturbing number of sheer assertions, and an even more disturbing number of missing transitions between episodes. It is often quite a beguiling tale, but its power of persuasion lies in its sprawling relentlessness rather than its cogency. Then again, to be fair, it is at least consistent in its aims. No less than the ancient Aristotelian model of reality, Dennett’s picture is meant to be one in which nature and mind are perfectly congruent with one another, and in which, therefore, the post-Cartesian dilemma need never rear its misshapen head.

Rather, however, than attempt to explain nature in terms of a “mind-like” order of rational relations, as Aristotelian tradition did, Dennett seeks to do very nearly the opposite: to reduce mind and nature alike to a computational system, which emerges from “uncomprehending competences,” as he calls them — small, particulate functions wholly unaware of the larger functions they accomplish in the aggregate — of the sort first fully understood by Alan Turing. And those functions, as retained, combined, and developed by the slow, diffident, mindless designing hand of natural selection, are — like the hugely intricate ensemble of discrete lines of code hiding behind the illusory simplicity of the icons on a computer’s screen — the real engines of everything that happens, hiding behind the phenomenal simplicity of perceptible nature.

In Dennett’s telling, it is all very obvious: Under certain chemical and environmental conditions, life will emerge in time and develop organisms with large brains, and these organisms will of necessity be social organisms. And social organisms require mental activity to survive and flourish. For Dennett, all evolutionary developments occur because they incorporate useful adaptations. He has no patience for talk of “spandrels” — phenotypic traits that are supposedly not adaptations but byproducts of the evolution of other traits — or of large, inexplicable, fortuitous hypertrophies (such as, say, the sudden acquisition of language) that have no specific evolutionary rationale at all.

So sanguine, in fact, is Dennett in his certainty that adaptive usefulness is sufficient explanation for why things happen that he often fails to consider whether the things that he claims have happened are, strictly speaking, possible. For him it seems evident that in the right circumstances, in time, natural selection will generate and preserve ever more competences without comprehension until, at some point of cumulative complexity, certain ensembles of those competences will become comprehension. Slowly, what we think of as self-awareness and reflective consciousness emerged from, and in fact remains wholly dependent upon, innumerable small, unconscious, discrete forces.

Exactly how all of this happens, of course — how physical causality is wondrously inverted into phenomenal awareness — is never quite clear. But for Dennett, once again, the distinction between the useful and the possible is a hazy one at best. And in a sense it hardly matters, since even the appearance of rational conscious agency, as something in addition to or formally distinguishable from those tiny competences underlying it, is for Dennett only a useful illusion; and, again, since usefulness explains all things — well, I shall return to this below.

In any event, something happened, and then there was language, which (once more) was very, very useful, and therefore naturally emerged, under the pressure of the social need to communicate, out of originally quite meaningless sounds and gestures. And once there were minds using language, culture evolved, and brains began shaping the reality they inhabited far more rapidly than the previous dynamisms of natural selection ever had. Even so, however, the process was more or less the same: an algorithmic distillation and recombination of “uncomprehending competences.”

Even the mental and cultural worlds were, it turns out, emergent results of such competences rather than consciously designing or designed realities. They were the product of “memes,” fragments of cultural usage that colonized and slowly reconfigured anthropoid brains and societies, and perished or survived according to the mindless logic of natural selection.

And that — though agonizingly protracted over several hundred pages — is the tale Dennett tells. Were it not for a half-dozen or so explanatory gaps, some of which are positively abyssal in size, it would no doubt amount to something more than just a ripping yarn. But, as it stands, it is nonsense.

Admittedly, part of the problem bedeviling Dennett’s narrative is the difficulty of making a case that seems so hard to reconcile with quotidian experience. But that difficulty is only exacerbated by his fierce adherence to an early modern style of materialism, according to whose tenets there can be no aspect of nature not reducible to blind physical forces. For him, the mechanistic picture, or its late modern equivalent, is absolute; it is convertible with truth as such, and whatever appears to escape its logic can never be more than a monstrosity of the imagination. But then the conscious mind constitutes a special dilemma, since this modern picture was produced precisely by excluding all mental properties from physical nature. And so, in this case, physicalist reduction means trying to explain one particular phenomenon — uniquely among all the phenomena of nature — by realities that are, in qualitative terms, quite literally its opposite.

Really, in this regard, we have progressed very little since Descartes’s day. The classical problems that mental events pose for physicalism remain as numerous and seemingly insoluble as ever. Before all else, there is the enigma of consciousness itself, and of the qualia (direct subjective impressions, such as color or tone) that inhabit it. There is simply no causal narrative — and probably never can be one — capable of uniting the phenomenologically discontinuous regions of “third-person” electrochemical brain events and “first-person” experiences, nor any imaginable science logically capable of crossing that absolute qualitative chasm.

Then there is the irreducible unity of apprehension, without which there could be no coherent perception of anything at all, not even disjunctions within experience. As Kant among others realized, this probably poses an insuperable difficulty for materialism. It is a unity that certainly cannot be reduced to some executive material faculty of the brain, as this would itself be a composite reality in need of unification by some still-more-original faculty, and so on forever, and whatever lay at the “end” of that infinite regress would already have to possess an inexplicable prior understanding of the diversity of experience that it organizes. For, even if we accept that the mind merely represents the world to itself under an assortment of convenient fictions, this would involve a translation of sense data into specific perceptions and meanings; and translation requires a competence transcending the difference between the original “text” and its rendition.

This problem, moreover, points toward the far more capacious and crucial one of mental intentionality as such — the mind’s pure directedness (such that its thoughts are about things), its interpretation of sense experience under determinate aspects and meanings, its movement toward particular ends, its power to act according to rationales that would appear nowhere within any inventory of antecedent physical causes. All of these indicate an irreducibly teleological structure to thought incongruous with a closed physical order supposedly devoid of purposive causality.

Similarly, there is the problem of the semantic and syntactic structure of rational thought, whose logically determined sequences seem impossible to reconcile with any supposed sufficiency of the continuous stream of physical causes occurring in the brain. And then there is the issue of abstraction, and its necessary priority over sense experience — the way, for instance, that primordial and irreducible concepts of causality and of discrete forms are required for any understanding of the world of events around us, or the way some concept of resemblance must already be in place before one is able to note likenesses and unlikenesses between things, or even the way in which the bare concepts of Euclidean geometry permit us to recognize their imperfect analogues in nature. And then, also, there are those more than abstract — in fact, transcendental — orientations of the mind, such as goodness or truth or beauty in the abstract, which appear to underlie every employment of thought and will, and yet which correspond to no concrete objects within nature. And so on and so forth.

Traditionally, most philosophical approaches to these issues have merely restated the problems without any real advance in clarity (theories of supervenience, for example), or tried awkwardly to evade them altogether (neutral monism, mysterianism). Sometimes a certain fatigue with the inconclusiveness of simple reductionism has prompted vogues in more exotic naturalisms (say, materialist panpsychism or quantum theories of consciousness), but these simply defer the question to an atomic or subatomic level without in any way diminishing the enigma. In a sense, perhaps, Dennett should be commended for his fidelity to the purer reductionisms of early modernity. In its austere emergentism, his position is very near to eliminativism: Whatever cannot be reduced to the most basic physical explanations cannot really exist.

But, alas, his story does not hold together. Some of the problems posed by mental phenomena Dennett simply dismisses without adequate reason; others he ignores. Most, however, he attempts to prove are mere “user-illusions” generated by evolutionary history, even though this sometimes involves claims so preposterous as to verge on the deranged.

In every case, most of his argument consists in a small set of simple logical errors. The most conspicuous is one I think of as the “pleonastic fallacy”: the attempt to explain away an absolute qualitative difference — such as that between third-person physical events and first-person consciousness — by positing an indefinite number of minute quantitative steps, genetic or structural, supposedly sufficient to span the interval. Somewhere in the depths of phylogenic history something happened, and somewhere in the depths of our neurological machinery something happens, and both those somethings have accomplished within us an inversion of brute, mindless, physical causality into, at the very least, the appearance of unified intentional consciousness.

Then also there is Dennett’s tendency to confuse questions about natural capacities with questions about their contents, as when he repeatedly mistakes the issue of intrinsic, subjective, qualitative consciousness for the issue of the extrinsic, objective verifiability of the objects of consciousness; or as when he fails to distinguish between the mystery of rational thought as such and the simple etiological question of how sophisticated practices of reasoning might have evolved. And then there is what one might call his “Narcissan fallacy”: to wit, the tendency to mistake the reflection of human intentional agency in mindless objects, such as computers, for something analogous to a separate instance of mental agency. And then, also, there is his frequent failure to discern the difference between the literal and the metaphorical…. But I am getting ahead of myself.

Dennett is an orthodox neo-Darwinian, in the most gradualist of the sects. Everything in nature must for him be the result of a vast sequence of tiny steps. This is a fair enough position, but the burden of any narrative of emergence framed in those terms is that the stochastic logic of the tale must be guarded with untiring vigilance against any intrusion by “higher causes.” But, where consciousness is concerned, this may very well be an impossible task.

The heart of Dennett’s project, as I have said, is the idea of “uncomprehending competences,” molded by natural selection into the intricate machinery of mental existence. As a model of the mind, however, the largest difficulty this poses is that of producing a credible catalogue of competences that are not dependent for their existence upon the very mental functions they supposedly compose.

Certainly Dennett fails spectacularly in his treatment of the evolution of human language. As a confirmed gradualist in all things, he takes violent exception to any notion of an irreducible, innate, universal grammar, like that proposed by Noam Chomsky, Robert Berwick, Richard Lewontin, and others. He objects even when those theories reduce the vital evolutionary saltation between pre-linguistic and linguistic abilities to a single mutation, like the sudden appearance in evolutionary history of the elementary computational function called “Merge,” which supposedly all at once allowed for the syntactic combination of two distinct elements, such as a noun and a verb.

Fair enough. From Dennett’s perspective, after all, it would be hard to reconcile this universal grammar — an ability that necessarily began as an internal faculty of thought, dependent upon fully formed and discrete mental concepts, and only thereafter expressed itself in vocal signs — with a truly naturalist picture of reality. So, for Dennett, language must have arisen out of social practices of communication, rooted in basic animal gestures and sounds in an initially accidental association with features of the environment. Only afterward could these elements have become words, spreading and combining and developing into complex structures of reference. There must then, he assumes, have been “proto-languages” that have since died away, liminal systems of communication filling up the interval between animal vocalizations and human semiotic and syntactic capacities.

Unfortunately, this simply cannot be. There is no trace in nature even of primitive languages, let alone proto-languages; all languages possess a full hierarchy of grammatical constraints and powers. And this is not merely an argument from absence, like the missing fossils of all those dragons or unicorns that must have once existed. It is logically impossible even to reverse-engineer anything that would qualify as a proto-language. Every attempt to do so will turn out secretly to rely on the syntactic and semiotic functions of fully developed human language. But Dennett is quite right about how immense an evolutionary saltation the sudden emergence of language would really be. Even the simple algorithm of Merge involves, for instance, a crucial disjunction between what linguists call “structural proximity” and “linear proximity” — between, that is, a hypotactic or grammatical connection between parts of a sentence, regardless of their spatial and temporal proximity to one another, and the simple sequential ordering of signifiers in that sentence. Without such a disjunction, nothing resembling linguistic practice is possible; yet that disjunction can itself exist nowhere except in language.

Dennett, however, writes as if language were simply the cumulative product of countless physical ingredients. It begins, he suggests, in mere phonology. The repeated sound of a given word somehow embeds itself in the brain and creates an “anchor” that functions as a “collection point” for syntactic and semantic meanings to “develop around the sound.” But what could this mean? Are semiotic functions something like iron filings and phonemes something like magnets? What is the physical basis for these marvelous congelations in the brain? The only possible organizing principle for such meanings would be that very innate grammar that Dennett denies exists — and this would seem to require distinctly mental concepts. Not that Dennett appears to think the difference between phonemes and concepts an especially significant one. He does not hesitate, for instance, to describe the “synanthropic” aptitudes that certain organisms (such as bedbugs and mice) acquire in adapting themselves to human beings as “semantic information” that can be “mindlessly gleaned” from the “cycle of generations.”

But there is no such thing as mindless semantics. True, it is imaginable that the accidental development of arbitrary pre-linguistic associations between, say, certain behaviors and certain aspects of a physical environment might be preserved by natural selection, and become beneficial adaptations. But all semantic information consists in the interpretation of signs, and of conventions of meaning in which signs and references are formally separable from one another, and semiotic relations are susceptible of combination with other contexts of meaning. Signs are intentional realities, dependent upon concepts, all the way down. And between mere accidental associations and intentional signs there is a discontinuity that no gradualist — no pleonastic — narrative can span.

Similarly, when Dennett claims that words are “memes” that reproduce like a “virus,” he is speaking pure gibberish. Words reproduce, within minds and between persons, by being intentionally adopted and employed.

Here, as it happens, lurks the most incorrigibly problematic aspect of Dennett’s project. The very concept of memes — Richard Dawkins’s irredeemably vague notion of cultural units of meaning or practice that invade brains and then, rather like genetic materials, thrive or perish through natural selection — is at once so vapid and yet so fantastic that it is scarcely tolerable as a metaphor. But a depressingly substantial part of Dennett’s argument requires not only that memes be accorded the status of real objects, but that they also be regarded as concrete causal forces in the neurology of the brain, whose power of ceaseless combination creates most of the mind’s higher functions. And this is almost poignantly absurd.

Perhaps it is possible to think of intentional consciousness as having arisen from an improbable combination of purely physical ingredients — even if, as yet, the story of that seemingly miraculous metabolism of mechanism into meaning cannot be imagined. But it seems altogether bizarre to think of intentionality as the product of forces that would themselves be, if they existed at all, nothing but acts of intentionality. What could memes be other than mental conventions, meanings subsisting in semiotic practices? As such, their intricate interweaving would not be the source, but rather the product, of the mental faculties they inhabit; they could possess only such complexity as the already present intentional powers of the mind could impose upon them. And it is a fairly inflexible law of logic that no reality can be the emergent result of its own contingent effects.

This is why, also, it is difficult to make much sense of Dennett’s claim that the brain is “a kind of computer,” and mind merely a kind of “interface” between that computer and its “user.” The idea that the mind is software is a fairly popular delusion at the moment, but that hardly excuses a putatively serious philosopher for perpetuating it — though admittedly Dennett does so in a distinctive way. Usually, when confronted by the computational model of mind, it is enough to point out that what minds do is precisely everything that computers do not do, and therein lies much of a computer’s usefulness.

Really, it would be no less apt to describe the mind as a kind of abacus. In the physical functions of a computer, there is neither a semantics nor a syntax of meaning. There is nothing resembling thought at all. There is no intentionality, or anything remotely analogous to intentionality or even to the illusion of intentionality. There is a binary system of notation that subserves a considerable number of intrinsically mindless functions. And, when computers are in operation, they are guided by the mental intentions of their programmers and users, and they provide an instrumentality by which one intending mind can transcribe meanings into traces, and another can translate those traces into meaning again. But the same is true of books when they are “in operation.” And this is why I spoke above of a “Narcissan fallacy”: computers are such wonderfully complicated and versatile abacuses that our own intentional activity, when reflected in their functions, seems at times to take on the haunting appearance of another autonomous rational intellect, just there on the other side of the screen. It is a bewitching illusion, but an illusion all the same. And this would usually suffice as an objection to any given computational model of mind.

But, curiously enough, in Dennett’s case it does not, because to a very large degree he would freely grant that computers only appear to be conscious agents. The perversity of his argument, notoriously, is that he believes the same to be true of us.

For Dennett, the scientific image is the only one that corresponds to reality. The manifest image, by contrast, is a collection of useful illusions, shaped by evolution to provide the interface between our brains and the world, and thus allow us to interact with our environments. The phenomenal qualities that compose our experience, the meanings and intentions that fill our thoughts, the whole world of perception and interpretation — these are merely how the machinery of our nervous systems and brains represent reality to us, for purely practical reasons. Just as the easily manipulated icons on a computer’s screen conceal the innumerable “uncomprehending competences” by which programs run, even while enabling us to use those programs, so the virtual distillates of reality that constitute phenomenal experience permit us to master an unseen world of countless qualityless and purposeless physical forces.

Very well. In a sense, Dennett’s is simply the standard modern account of how the mind relates to the physical order. The extravagant assertion that he adds to this account, however, is that consciousness itself, understood as a real dimension of wholly first-person phenomenal experience and intentional meaning, is itself only another “user-illusion.” That vast abyss between objective physical events and subjective qualitative experience that I mentioned above does not exist. Hence, that seemingly magical transition from the one to the other — whether a genetic or a structural shift — need not be explained, because it has never actually occurred.

The entire notion of consciousness as an illusion is, of course, rather silly. Dennett has been making the argument for most of his career, and it is just abrasively counterintuitive enough to create the strong suspicion in many that it must be more philosophically cogent than it seems, because surely no one would say such a thing if there were not some subtle and penetrating truth hidden behind its apparent absurdity. But there is none. The simple truth of the matter is that Dennett is a fanatic: He believes so fiercely in the unique authority and absolutely comprehensive competency of the third-person scientific perspective that he is willing to deny not only the analytic authority, but also the actual existence, of the first-person vantage. At the very least, though, he is an intellectually consistent fanatic, inasmuch as he correctly grasps (as many other physical reductionists do not) that consciousness really is irreconcilable with a coherent metaphysical naturalism. Since, however, the position he champions is inherently ridiculous, the only way that he can argue on its behalf is by relentlessly, and in as many ways as possible, changing the subject whenever the obvious objections are raised.

For what it is worth, Dennett often exhibits considerable ingenuity in his evasions — so much ingenuity, in fact, that he sometimes seems to have succeeded in baffling even himself. For instance, at one point in this book he takes up the question of “zombies” — the possibility of apparently perfectly functioning human beings who nevertheless possess no interior affective world at all — but in doing so seems to have entirely forgotten what the whole question of consciousness actually is. He rejects the very notion that we “have ‘privileged access’ to the causes and sources of our introspective convictions,” as though knowledge of the causes of consciousness were somehow germane to the issue of knowledge of the experience of consciousness. And if you believe that you know you are not a zombie “unwittingly” imagining that you have “real consciousness with real qualia,” Dennett’s reply is a curt “No, you don’t” — because, you see, “The only support for that conviction is the vehemence of the conviction itself.”

It is hard to know how to answer this argument without mockery. It is quite amazing how thoroughly Dennett seems to have lost the thread here. For one thing, a zombie could not unwittingly imagine anything, since he would possess no consciousness at all, let alone reflective consciousness; that is the whole point of the imaginative exercise. Insofar as you are convinced of anything at all, whether vehemently or tepidly, you do in fact know with absolute certitude that you yourself are not a zombie. Nor does it matter whether you know where your convictions come from; it is the very state of having convictions as such that apprises you of your intrinsic intentionality and your irreducibly private conscious experience.

Simply enough, you cannot suffer the illusion that you are conscious because illusions are possible only for conscious minds. This is so incandescently obvious that it is almost embarrassing to have to state it. But this confusion is entirely typical of Dennett’s position. In this book, as he has done repeatedly in previous texts, he mistakes the question of the existence of subjective experience for the entirely irrelevant question of the objective accuracy of subjective perceptions, and whether we need to appeal to third-person observers to confirm our impressions. But, of course, all that matters for this discussion is that we have impressions at all.

Moreover, and perhaps most bizarrely, Dennett thinks that consciousness can be dismissed as an illusion — the fiction of an inner theater, residing in ourselves and in those around us — on the grounds that behind the appearance of conscious states there are an incalculable number of “uncomprehending competences” at work in both the unseen machinery of our brains and the larger social contexts of others’ brains. In other words, because there are many unknown physical concomitants to conscious states, those states do not exist. But, of course, this is the very problem at issue: that the limpid immediacy and incommunicable privacy of consciousness is utterly unlike the composite, objective, material sequences of physical causality in the brain, and seems impossible to explain in terms of that causality — and yet exists nonetheless, and exists more surely than any presumed world “out there.”

That, as it happens, may be the chief question Dennett neglects to ask: Why presume that the scientific image is true while the manifest image is an illusion when, after all, the scientific image is a supposition of reason dependent upon decisions regarding methods of inquiry, whereas the manifest image — the world as it exists in the conscious mind — presents itself directly to us as an indubitable, inescapable, and eminently coherent reality in every single moment of our lives? How could one possibly determine here what should qualify as reality as such? Dennett certainly provides small reason why anyone else should adopt the prejudices he cherishes. The point of From Bacteria to Bach and Back is to show that minds are only emergent properties of our brains, and brains only aggregates of mindless elements and forces. But it shows nothing of the sort.

The journey the book promises to describe turns out to be the real illusion: Rather than a continuous causal narrative, seamlessly and cumulatively progressing from the most primitive material causes up to the most complex mental results, it turns out to be a hopelessly recursive narrative, a long, languid lemniscate of a tale, twisting back and forth between low and high — between the supposed basic ingredients underlying the mind’s evolution and the fully realized mental phenomena upon which those ingredients turn out to be wholly dependent. It is nearly enough to make one suspect that Dennett must have the whole thing backward.

Perhaps the scientific and manifest images are both accurate. Then again, perhaps only the manifest image is. Perhaps the mind inhabits a real Platonic order of being, where ideal forms express themselves in phenomenal reflections, while the scientific image — a mechanistic regime devoid of purpose and composed of purely particulate causes, stirred only by blind, random impulses — is a fantasy, a pale abstraction decocted from the material residues of an immeasurably richer reality. Certainly, if Dennett’s book encourages one to adopt any position at all, reason dictates that it be something like the exact reverse of the one he defends. The reduction of the phenomena of mental existence to a purely physical history has been attempted before, and has so far always failed. But, after so many years of unremitting labor, and so many enormous books making wildly implausible claims, Dennett can at least be praised for having failed on an altogether majestic scale.

By David Bentley Hart, published in The New Atlantis in Fall 2017; it can be seen here.

In defense of ‘thoughts and prayers’

Every now and again I come across a fantastic article that warrants posting here; I recently came across one in The Week which, I thought, was pretty insightful. Be edified.


When a tragedy occurs — particularly one that involves gun violence, like Sunday’s mass shooting in Texas — two things are quite predictable in the aftermath: First, lots of people, including politicians, will offer their “thoughts and prayers.” And second, an increasingly large cadre of critics will react to these offerings of “thoughts and prayers” with outrage.

Why? It seems people think “thoughts and prayers” are a lazy substitute for embarking on some real political action that might help prevent such tragedies from occurring in the future. Critics believe those who offer up thoughts and prayers — particularly Republican officeholders who get donations from the National Rifle Association — are trying to deflect from their own inaction, or that they are complicit with the status quo.

It’s true some politicians are being opportunistic when they chime in with such platitudes. But in general, this line of thinking is insane, and, what’s more, it makes the world worse.

Contrary to the enraged certainties of many anti-gun liberals, there are actually few policies we know of that could serve as easy remedies to things like gun massacres. Even if you could magically make the NRA go poof, and make the Republican Party go poof, and make the Second Amendment go poof, and suddenly change the minds of the majority of Americans who support gun rights, the country would still be full of guns, and episodic massacres would still occur.

The point isn’t that America’s gun legislation is perfect, or that nothing can or should be done. The point is that NRA-GOP obstruction is not the one and only thing preventing the end of gun violence. The urgency and vigor of those who despise the notion of “thoughts and prayers” would only be justified in their reaction if there were indeed a magic button we could push to fix the problem tomorrow. And there isn’t.

But there’s something more fundamental at play. This isn’t just about guns. It’s about how we see political action. The implicit, maybe unconscious, but clear premise of the anti-“thoughts and prayers” line is that the only proper response to bad things happening is always political action. But turning everything into a political battle ensures that every single issue will become a conflictual one, leading to the progressive fraying away of social norms and of the belief in shared American values — which is what allows for political debate to begin with. Political debate in a democracy is what happens among groups who agree on more things than they disagree, which is why the losers lose gracefully and the winners don’t press their advantage too much. If you disagree with the other side on everything, then there’s no point in having a debate. The only solution is a civil war.

But the problem here isn’t just political — it’s spiritual. No doubt part of what drives people mad about “thoughts and prayers” is that they think prayer doesn’t do anything, presumably because they don’t believe in God. Of course, there are lots of people who believe that prayer is not only effective, it is, at the end of the day, the only effective thing, and that political action without a connection to a higher power ultimately becomes self-defeating.

Some people have to offer “thoughts and prayers” because they genuinely want to express their grief over an unthinkable act. If the only thing you think about after a tragedy is the next bill that should be passed, then you have no consideration for the victims as human beings — they are simply pawns in your political calculations. You are using still-warm bodies as props in a political marketing campaign — how noble!

In this context, “thoughts and prayers” are not beside the point, they are the whole point. They remind us that the Sturm und Drang of politics is not about enemies and allies, winning and losing. It is — or it should be — about actual individual human lives, every single one of whom is endowed with inalienable dignity and splendor. Being reminded of that might be one of the very few things that keeps us from falling off the precipice into a world incalculably more cruel than this already wretched Earth.

And I’ll be praying that doesn’t happen.

By Pascal-Emmanuel Gobry, originally published in The Week on November 7, 2017; it can be found here.



The Research Proves The No. 1 Social Justice Imperative Is Marriage

Every now and again I come across a fantastic article that warrants posting here; I recently came across one in The Federalist which, I thought, was pretty insightful. Be edified.


For the last 20 years, marital status has increasingly become the central factor in whether our neighbors rise above, remain, or descend into poverty. The research is astounding.

A foundational value in our nation is the opportunity for all its citizens to be able to compete for a fair and meaningful shot at the American dream. This begins with access to citizenship, educational opportunity, and securing meaningful work that leads to greater life opportunities via commitment, diligence, and self-sacrifice. But an important contributor to putting and keeping men, women, and children on the escalator toward the American dream is little-known and widely ignored.

Just 70 years ago, social mobility and protection from poverty were largely a factor of employment. Those who had full-time work of any kind were seldom poor. Fifty years ago, education marked the gulf separating the haves from the have-nots. For the last 20 years or more, though, marital status has increasingly become the central factor in whether our neighbors and their children rise above, remain, or descend into poverty. The research is astounding.

Charles Murray of the American Enterprise Institute explains in his important book “Coming Apart: The State of White America” that in 1960, the poorly and moderately educated were only 10 percent less likely to be married than the college educated, with both numbers quite high: 84 percent and 94 percent, respectively. That parity largely held until the late 1970s.

Today, these two groups are separated by a 35 percent margin, and the gap continues to expand. All the movement is on one side. Marriage is sinking dramatically among lower- and middle-class Americans, down to a minority of 48 percent today, while it has remained generally constant among the well-to-do. No indicators hint at any slowing. This stark trend line led Murray to lament, “Marriage has become the fault line dividing America’s classes.” He has company in this conclusion.

Marriage Matters Lots More than Income and Race

Jonathan Rauch, writing in the National Journal and certainly no conservative, notes that “marriage is displacing both income and race as the great class divide of the new century.” Isabel Sawhill, a senior scholar at the center-left Brookings Institution, boldly and correctly proclaimed some years ago that “the proliferation of single-parent households accounts for virtually all of the increase in child poverty since the early 1970s.” Virtually all of the increase!

Professor Bill Galston, President Clinton’s domestic policy advisor and now a senior fellow at Brookings, explained in the early 1990s that an American need only do three things to avoid living in poverty: graduate from high school, marry before having a child, and have that child after age twenty. Only 8 percent of people who do so, he reported, will be poor, while 79 percent who fail to do all three will.

Sociologists have referred to keeping these things in proper order as the “success sequence.” It remains true, according to a new research investigation from the Brookings and American Enterprise institutes, which takes a deeper look at this “first comes love, then comes marriage” sequence by class and generation.

The increase of baby carriages coming before marriage is terribly alarming among the working poor. Working-class women are nearly three times more likely to have babies out of wedlock than upper-class women. Poor women are about five times more likely. These two groups are far less likely to be married overall and twice as likely to be cohabiting, suffering further from the inherent instability of living together without marriage.

These troubling family-path trends leading to decreased life success are unfortunately true for millennials, as well.

A recent report on this topic focusing on millennials reports that 97 percent of those who follow the success sequence—earn at least a high-school diploma, work, and marry before having children—will not be poor as they enter their 30s. This is largely true for ethnic minorities and those who grew up in poor families. But sadly, fewer millennials are keeping these things in order, compared to their Boomer and Xer forebears.

The success sequence of “first comes love” is so much more than moral choice or romantic idealism. These are deeply pragmatic, economic decisions powerfully affecting class mobility, where people live on the social scale, and the opportunities they will be able to provide for their children. This is because of the extraordinary economic power of marriage.

Marriage Boosts Every Measure Of Human Well-Being

This is why it’s not merely one-parent versus two-parent families that makes the difference. The U.S. Census Bureau finds the poverty rate for children living with two unmarried cohabiting parents is closer to that of single-mother homes than to that of children living with their married mother and father. Married people, regardless of how much they have, tend to manage their money differently than divorced, single, and cohabiting people.

Only 4 percent of homes with a married mother and father are on food stamps at any given time. But 21 percent of cohabiting and 28 percent of single-mother homes require such public assistance. Likewise, 78 percent of married people own their own home—a central goal in achieving the American Dream—while only 41 percent of cohabiting adults and 44 percent of singles do. Data indicates that marital status boosts home ownership more than home ownership increases marital opportunities.

Shotgun Marriages Also Confer Big Benefits

Robert Lerman, an economist at the Urban Institute, reports the marriage benefit holds for even the most poor, and to a lesser degree but still consequentially for those who marry between conception and the birth of their first child. Despite “academic ability, school completion, family background, race, and age at pregnancy, women who are married between pregnancy and the birth of their first child averaged a 30 percent higher income-to-needs ratio and a 15 percent lower degree of [financial volatility].”

These numbers are not insignificant. Such marriages were associated with reducing the number of years the mother, father, and children spent in poverty by half, compared with those who did not marry before the birth of their first child. Remarkably, this difference was even greater—by substantial margins—for black mothers and those with low educational test scores. Lerman concludes, “Even among the mothers with the least qualifications and highest risk of poverty, marriage effects are consistently large and statistically significant.”

Even women entering marriage between the conception and birth of their first child, regardless of class, education, and race, benefit from a greater standard of living by the following percentages.

  • 65 percent over a single mother with no other live-in adult
  • 50 percent over a single mother living with a non-romantic adult
  • 20 percent over a single mother living with a man

Shotgun weddings are not just a moral action. Even among households with similar incomes and comparable demographic and educational characteristics, married households are at least half as likely to have had difficulty meeting their basic monthly living expenses and bills over the past year. This was most pronounced for black families.

A major 2014 report from the American Enterprise Institute and the Institute for Family Studies at the University of Virginia reports that:

  • Adjusting for family size, family income is 73 percent higher for married women compared to that of their unmarried peers.
  • Married men benefit from an average annual economic “marriage premium” of at least $15,900 per year compared to their unmarried peers.
  • This investigation also finds that the marriage premium is even more substantial for the most disadvantaged.

Marriage Is Good for People of Every Race

The advantages of growing up in an intact family and being married extend across the population. They apply as much to blacks and Hispanics as they do to whites. For instance, black men enjoy a marriage premium of at least $12,500 in their individual income compared to their single peers. The advantages also apply, for the most part, to men and women who are less educated. For instance, men with a high-school degree or less enjoy a marriage premium of at least $17,000 compared to their single peers.


So marriage is far more than just a personal, sentimental institution, giving folks something to feel good about at each year’s anniversary. It produces profoundly practical and essential value. The scholars at the National Marriage Project, working from the University of Virginia, explain this is not simply because the well-to-do are more likely to marry, but that marriage itself is “a wealth-generating institution.” The sociological evidence on this fact is dramatic.

Marriage generates wealth largely because marriage molds men into producers, providers, and savers. Singleness and cohabiting don’t. Nobel-winning economist George Akerlof, in a prominent lecture more than a decade ago, explained the pro-social and market influence of marriage upon men and fathers: “Married men are more attached to the labor force, they have less substance abuse, they commit less crime, are less likely to become the victims of crime, have better health, and are less accident prone.”

Akerlof explains this is because “men settle down when they get married and if they fail to get married, they fail to settle down.” This is precisely why every insurance company offers lower premiums on health and auto insurance to married men. Settled-down men also work more, earn more, save more, and spend more money on their families than on themselves. They boost the well-being of women and children in every important way.

If You Really Care About Inequality, You Support Marriage

The evidence is impossible to ignore or explain away. Even The New York Times noted its importance in a major story some years ago entitled “Two Classes, Divided by ‘I Do.’” Marriage drives well-being and upward mobility. The absence of marriage diminishes it. Thus, the growing class divide. Any smart and compassionate effort to alleviate poverty and increase the well-being of our communities and their citizens cannot ignore this fact.

Today, many unfortunately believe that what kinds of families adults create and raise children in should be no one’s business. It’s a personal matter. Such people have no idea what a family is or does anthropologically. Each family is as much a public institution as it is private, if not more so. Its strengths and weaknesses are felt throughout each community in countless ways. Government expands as marriage declines.

Working for healthy, well-formed, enduring marriages is one of the most effective ways we can do the work of social justice. That the effort is not hip and trendy has no bearing on its ability to change lives for the better. Decades of research and the lives of real people make the case over and again every day, for good or for bad. Let’s resolve as a nation to choose and work for the good and halt the ever-widening chasm of class.

By Glenn T. Stanton, published in The Federalist on November 3, 2017; it can be found here.
