judicialsupport

Legal Writing for Legal Reading!

Archive for the month “April, 2016”

If God Is Dead…

Every now and again I come across a fantastic article that warrants posting here; I just came across one on Pat Buchanan’s blog, found on his official website (see here).  Buchanan channels Nietzsche and speculates about what it means for our culture if God “dies.”  Be edified.

______________________

In a recent column Dennis Prager made an acute observation.

“The vast majority of leading conservative writers … have a secular outlook on life. … They are unaware of the disaster that godlessness in the West has led to.”

These secular conservatives may think that “America can survive the death of God and religion,” writes Prager, but they are wrong.
And, indeed, the last half-century seems to bear him out.

A people’s religion, their faith, creates their culture, and their culture creates their civilization. And when faith dies, the culture dies, the civilization dies, and the people begin to die.

Is this not the recent history of the West?

Today, no great Western nation has a birthrate that will prevent the extinction of its native-born. By century’s end, other peoples and other cultures will have largely repopulated the Old Continent.

European Man seems destined to end like the 10 lost tribes of Israel — overrun, assimilated and disappeared.

And while the European peoples — Russians, Germans, Brits, Balts — shrink in number, the U.N. estimates that the population of Africa will double in 34 years to well over 2 billion people.

What happened to the West?

As G. K. Chesterton wrote, when men cease to believe in God, they do not then believe in nothing, they believe in anything.

As European elites ceased to believe in Christianity, they began to convert to ideologies, to what Dr. Russell Kirk called “secular religions.”

For a time, these secular religions — Marxism-Leninism, fascism, Nazism — captured the hearts and minds of millions. But almost all were among the gods that failed in the 20th century.

Now Western Man embraces the newer religions: egalitarianism, democratism, capitalism, feminism, One Worldism, environmentalism.

These, too, give meaning to the lives of millions, but these, too, are inadequate substitutes for the faith that created the West.

For they lack what Christianity gave man — a cause not only to live for, and die for, but a moral code to live by, with the promise that, at the end of a life so lived, would come eternal life. Islam, too, holds out that promise.

Secularism, however, has nothing on offer to match that hope.

Looking back over the centuries, we see what faith has meant.

When, after the fall of the Roman Empire, the West embraced Christianity as a faith superior to all others, as its founder was the Son of God, the West went on to create modern civilization, and then went out and conquered most of the known world.

The truths America has taught the world, of an inherent human dignity and worth, and inviolable human rights, are traceable to a Christianity that teaches that every person is a child of God.

Today, however, with Christianity virtually dead in Europe and slowly dying in America, Western culture grows debased and decadent, and Western civilization is in visible decline.

Rudyard Kipling prophesied all this in “Recessional”:

“Far-called, our navies melt away; / On dune and headland sinks the fire: / Lo, all our pomp of yesterday / Is one with Nineveh and Tyre!”

All the Western empires are gone, and the children of once-subject peoples cross the Mediterranean to repopulate the mother countries, whose native-born have begun to age, shrink and die.

Since 1975, only two European nations, Muslim Albania and Iceland, have maintained a birthrate sufficient to keep their peoples alive.

Given the shrinking populations inside Europe and the waves of immigrants rolling in from Africa and the Middle and Near East, an Islamic Europe seems to be in the cards before the end of the century.

Vladimir Putin, who witnessed the death of Marxism-Leninism up close, appears to understand the cruciality of Christianity to Mother Russia, and seeks to revive the Orthodox Church and write its moral code back into Russian law.

And what of America, “God’s country”?

With Christianity excommunicated from her schools and public life for two generations, and Old and New Testament teachings rejected as a basis of law, we have witnessed a startlingly steep social decline.

Since the 1960s, America has set new records for abortions, violent crimes, incarcerations, and drug consumption. While HIV/AIDS did not appear until the 1980s, hundreds of thousands have perished from it, and millions now suffer from it and related diseases.

Forty percent of U.S. births are out of wedlock. For Hispanics, the illegitimacy rate is over 50 percent; for African-Americans, it’s over 70 percent.

Test scores of U.S. high school students fall annually and approach parity with Third World countries.
Suicide is a rising cause of death for middle-aged whites.

Secularism seems to have no answer to the question, “Why not?”

“How small, of all that human hearts endure, That part which laws or kings can cause or cure,” wrote Samuel Johnson.

Secular conservatives may have remedies for some of America’s maladies. But, as Johnson observed, no secular politics can cure the sickness of the soul of the West — a lost faith that appears irretrievable.

You can find the above on Pat Buchanan’s blog site here.

 

 


Editing

Here is the latest post by Angela and Daz Croucher on their blog, A.D. Croucher! They are up-and-coming young adult authors. Check them out!

A.D. Croucher

We’ve spoken a lot about that first draft. It’s the place where you let loose, write anything and everything that comes to mind. It’s the time to riff like you’re in minute 5 of a guitar solo and you just don’t want to stop. It’s the improv phase. Even if you had an outline.

So we’re going to assume you had a blast, and now you have a first draft on your screen. A big, beautiful, messy, crazy first draft.

[Image: an unhappy Matt Damon in The Martian] You, staring at that first draft like…

What now?

Now, you get ready to edit.

When editing, you’ll focus on a myriad of things: plot, character, world-building, scenes, beats, every line of dialogue… every line… every word…. We’ll look at these in more detail in future posts, but right now, we recommend doing what will feel so unnatural to you: set that first draft aside for a while.

Not for too…


Yes Tour Books: The Ladder Tour

Here is another addition to my series of Yes music posts.  I started this series here and a collection of all my Yes-related posts is here.

I saw the progressive rock band Yes play at the Tower Theater in Upper Darby, Pennsylvania on December 12, 1999 during The Ladder Tour.  I posted a review from this show here.  You can also read more about this show here.

Every now and again Yes produces a tour book that is sold at the merchandise table at their shows during a given tour.  As it turns out, Yes produced and sold a tour book for The Ladder Tour!  I took photographs of each page of the tour book and posted them below.  Enjoy!

 

[Gallery: photographs of each page of The Ladder Tour book]

Movie Review: Batman v. Superman: Dawn of Justice

I recently saw the movie Batman v. Superman: Dawn of Justice (“BvS”) and these are my thoughts about it (this review contains some spoilers).  It should be noted that I am a big fan of comic books and have been since at least the age of five.  I am sure that fandom biases my review in some way.  This movie is based on the DC comic books and characters.

Introduction:

This movie is the second installment in the DC Extended Universe, DC’s serialized motion picture series, which began with the Superman origin film, Man of Steel.

Man of Steel introduced the character of Superman and many of his traditional supporting characters. The film concluded with an epic battle between Superman and his similarly powered fellow Kryptonian General Zod, which destroyed a large portion of the city of Metropolis, killing thousands of people in the process.

BvS is set about a year-and-a-half after the events of Man of Steel, and finds the world trying to come to terms with what it means to have a being like Superman in its midst. I do appreciate the realism of this aspect of the DC films thus far. The naïve and unquestioning acceptance of an incredibly powerful and invulnerable being as being nothing more than a completely selfless hero – as is traditionally the case in Superman stories – is simply incredible. So, I like the idea that the world wrestles with the implication of the arrival of the Superman.

Batman, who is introduced for the first time in the DC Extended Universe by way of this film, has the basics of his origin story told through the opening credits (i.e.: the murder of his parents and falling into a bat cave).  Aside from being referred to as a vigilante in Gotham City, the film offers no explanation as to who Batman is, what his motivations are, how and why he has his fighting skills and technology, or really much of anything else for that matter. Thankfully, Batman is a near universally known character who has appeared in film many times over the course of decades, so the near total lack of description had little effect on the film.  It almost seems like the screenwriters decided to take a shortcut on an already long film because, if Batman were a new character, I do not think they could have gotten away with such thin development for him.  I do hope that, at some point, some further development is offered in the Extended Universe as, otherwise, the Extended Universe would remain fairly incomplete in terms of its internal story and cogency.

Plot Summary:

A building owned by Bruce Wayne (i.e.: Batman’s true identity) is destroyed during the battle between Superman and Zod (referred to above).  Batman recognizes that Superman may be “good” today, but one day that may not be the case, and no one has the ability to stop him.  Seeing the sheer power of Superman, and its effect on the world, Batman instantly concludes that Superman is too great a threat to be allowed to go unopposed and resolves to kill him.

Concurrent with Batman’s agenda, Lex Luthor sees Superman as similarly threatening and, through LexCorp, seeks permission via government contracts to weaponize recently found kryptonite.  Lex Luthor manipulates the government and public opinion (through shrewd tactics and terrorist attacks) into becoming suspicious of Superman.  Eventually, for some reason, Luthor captures Superman’s mother Martha and agrees to allow her to live only if Superman kills Batman.

Based on the above, the battle between the heroes inevitably occurs.  Batman gains the upper hand and intends to kill Superman until Superman cries out that “Martha” will die.  By coincidence, Bruce Wayne’s murdered mother was named “Martha,” and the threat to Superman’s mother (along with the memories of his own mother) gives Batman sufficient pause; he stops his assault on Superman and saves Martha Kent, while Superman dedicates his time to another threat which has arisen during their battle.

Lex Luthor, having been given access to Zod’s scout ship by the government, is able to create a Kryptonian monster called Doomsday, who wreaks havoc across Metropolis.  As a result, the kryptonite weapons Batman created to defeat Superman now need to be used on the greater threat of Doomsday.

Superman, Batman, and Wonder Woman (who, as Diana Prince, has been pursuing her own designs throughout the film) team up to defeat Doomsday.  After the ensuing battle, Bruce Wayne intends to contact other people who are cropping up and appear to be metahuman.

My Thoughts:

I went into this movie with cautious optimism and, I think, that turned out to be the right decision.  I really enjoyed this movie, and it was great fun seeing these two titans of comic books finally encounter one another in a blockbuster movie and, perhaps more importantly, see the foundation of a budding movie franchise.

The action sequences are fantastic.  The look of the characters is near perfect (except Luthor; see below).  The battle scenes were well choreographed.  The characters were introduced well.  I loved seeing what were basically two famous DC comics story lines come to life (i.e.: the Dark Knight Returns and The Death of Superman story lines).  The story had a nice build up and did a good job in presenting why people are leery of Superman, and why Batman finds it necessary to oppose him.  It was a great experience for a comic book fan.

In saying all this, I did say my optimism was cautious.  The makers of this movie, like many others in this genre, felt the need to extend its running time to nearly two-and-a-half hours.  Now, as this was a big epic story, I expected it to be long.  What I did not expect was a really good two-hour movie with another twenty to thirty minutes hastily appended to it.

The story should have been about the origin, build up, and resolution of a conflict between Batman and Superman (as the title suggests).  Indeed, it seems that was what the movie was going to be until it appeared that the resolution of the conflict between the heroes would be the death of one of them at the hands of the other.  Not wanting that to happen, the writers threw in Doomsday as a plot device to unite the heroes.

The Doomsday portion seemingly came out of nowhere in the last thirty minutes of the movie, and introduced new elements of the film completely absent from the rest of it.  For example, Luthor’s motivation throughout the film is to gain government contracts and access to Kryptonian technology.  Batman barely registers on his radar.  For no reason necessary to the rest of the story, Luthor suddenly provokes a conflict between Superman and Batman.  It isn’t necessary as the story already built up Batman’s own motivation to engage Superman.  Furthermore, upon gaining access to the scout ship, Luthor suddenly has facility with advanced alien technology and, somehow, has learned how to create a creature, and not just a creature, but one containing his own DNA (why his DNA is needed is never explained).  Capturing Martha Kent to motivate Superman to fight Batman was completely contrived, and highlighting the fact that both of their mothers have the same Christian name was obviously a contrived plot device.  Why in the world would someone use his mother’s first name when talking about her if it were not a lame attempt to call Bruce Wayne’s mother to Batman’s mind?  In addition, it is not entirely clear why this would give Batman pause in his fight with Superman anyway.  Batman has grave concerns over a being with nearly unstoppable power.  Why the commonality of mothers’ names would suddenly distract and/or change Batman’s mind about the potential danger of Superman is never explained and, quite frankly, makes little sense in light of the rest of the film.

Speaking of bad plot devices, Batman uses a kryptonite spear to battle Superman.  Batman, who is extremely diligent and always has a well thought out plan, simply and randomly drops the spear and walks away without any thought of the implication of doing so (which could mean losing his opportunity to kill Superman), which is entirely out-of-character for Batman.  Dropping the spear for no good reason becomes a problem later in the film, merely to add to the drama in an unrealistic way and to give Lois Lane a role and purpose in helping to save the day instead of being a perpetual damsel in distress.

The concern over Superman’s power makes sense.  What does not make sense is for Luthor to unleash a similarly powerful but totally mindless beast in order to fight Superman.  At least Superman is rational and helps people.  Doomsday is just a destructive monster (who is a far greater threat than Superman), and Luthor developed no plan for how to stop it despite his concerns about Superman.  In short, Doomsday’s creation makes absolutely no sense in terms of both the story and Luthor’s motivations throughout the film.  Finally, perhaps as a way to make the battle with Doomsday even more epic, after Superman battles Doomsday for about five minutes (literally) and tries to fly him into space, the United States government, in that incredibly short period of time, suddenly and out of nowhere decides to fire a nuclear rocket at Doomsday in the atmosphere above Metropolis (and, of course, no fallout from this decision is noticed in the film).  What a hasty decision!

I also felt that, perhaps to distract from the nonsensical Doomsday portion of the film, the score suddenly becomes extremely melodramatic and hokey.  The music during the big battle with the three heroes and Doomsday was loud rock(ish) music that tried to send the viewer the message that this battle was cool, awesome, and, perhaps, totally epic.  It was so obvious and transparent.  Similarly, the swooning and melodramatic music at the climax of the battle just seemed so over the top.  Subtlety is, apparently, not a superpower.

Criticisms Made by Others:

Some people have concerns over how the characters are portrayed.  Some of these concerns seem legitimate while others less so.  I like the tentative Superman who is still working out his heroism and role in the world (and, so far, has always ultimately chosen the heroic path).  I am not at all keen on how this movie series has presented the Kents.  Instead of a good wholesome couple who teach their son selflessness, heroism, and righteousness, they tend to be rather apathetic about the needs of others, sometimes discourage Superman from being heroic, tell Superman to think of himself over others, and, overall, are not the rather virtuous couple they are traditionally presented as being.  I loved this version of Batman, and Ben Affleck’s performance as Batman is my favorite thus far.  This movie even explains why Batman has a growly voice!  Affleck looks like enough of a playboy to be a convincing Bruce Wayne, but he also looks sufficiently grizzled to accurately represent the character.  Perry White was a fun character (and it is now impossible for me to see Laurence Fishburne without thinking of his character on Blackish); however, I thought that, in this movie, he was a sort of J. Jonah Jameson light as opposed to a character in his own right.  Lex Luthor was not presented well in light of the comics.  Taken independently of the comics, this Lex Luthor is a really interesting and compelling character.  In light of the comics, however, he is presented as a young, sort of skittish, almost Joker-like character instead of the cold and calculating middle-aged man he is usually presented as being.  As in Superman, he has hair throughout the film until the end, when he assumes his traditional bald look.

There has also been a lot of negative talk about the tone of the movie being serious or even dour.  I am not sure why this is a negative.  This only seems like a negative because people are comparing it to the wildly popular Marvel Cinematic Universe, which tends to be fairly light-hearted even at its heaviest, instead of looking at the film in its own right.  I found the tone to be perfectly fine and completely appropriate for its subject matter.  The fact that people have a hard time viewing comic book movies as “serious” ought not to be a negative reflection on the movie but, rather, on the viewer who insists on a narrow view of comic book movies.

Another common criticism I have seen of this movie is that it is sequel baiting.  I find this criticism completely out-of-place; quite honestly, it does not describe anything I would consider a negative.  The movie, as noted above, gives extremely limited background on Batman.  I imagine this will all be fleshed out in future Batman films and/or future DC Extended Universe films.  There are also cameo appearances by the Flash, Cyborg, and Aquaman.  Wonder Woman, while not a main character, has more than a mere cameo appearance.  She, as Diana Prince, appears here and there throughout the film, and as Wonder Woman in the big fight scene at the end.  Aside from her direct role in the plot, there is precious little revealed about her (or her alter ego) at all.  Again, I assume this will all be fleshed out in her own film.  Finally, although probably only noticeable to a hardcore comics fan like me, there are at least three references to Darkseid in the movie (Batman’s weird dream sequence in the desert, the large omega symbol in the sand, and Luthor’s crazy ramblings at the end of the movie (as an aside, if Luthor’s motive and ability to create Doomsday is later revealed to be the result of Darkseid’s influence, then some of my criticisms of this movie will be somewhat tempered)).  Some say all of this is evidence of poor writing and exposition.  I disagree.  We now live in an age of serialized movie making and world building.  These movies presume sequels and greater exposition in those sequels.  The era of a self-contained superhero movie is nearly gone.  This movie revealed as much as required for the story to be told.  All of the other references here and there will be explained in later movies.  When the future movies are made, and all viewed together as a cohesive story, the gaps described above will presumably be filled, and there will no longer be a lack of information.  If they are not filled, then the criticisms of poor writing and exposition will have a lot more merit.  I simply think the criticisms about sequel baiting ignore the new reality that modern superhero movies are serialized and go through progressive world building.

Finally, DC’s approach seems to be the opposite of Marvel’s.  Marvel presented a series of solo movies first, slowly developing each character and revealing their interrelationships, until it climaxed in the big crossover movie The Avengers, and the franchise has continued since then in a similar pattern.  This way of world building was really satisfying and helped develop really good characters.  DC has taken the opposite approach, releasing the big crossover movie very early in the franchise in the hopes that it will be a springboard into other movies (especially solo movies) where the characters can develop.  Although DC’s first crossover movie did not have well-developed characters, their approach may still pay off just as well as Marvel’s has.  I think it is too early, at this point, to determine whether Marvel or DC has the better approach.  I think that question should be revisited in a year or two when DC has had opportunity to release a few more movies.

Conclusion:

I would highly recommend this movie, especially to a fan of comics and superheroes.  I am very excited to see where this franchise goes, and I think this movie is a very good start as long as DC does not blow the opportunities it now has in movies.

AN EMPLOYMENT DOCUMENT PRIMER II

Check out Faye Cohen’s post to her blog Toughlawyerlady!

ToughLawyerLady

This blog is a continuation of my previous blog post, titled An Employment Document Primer I, which discussed documents an employee should make certain they receive when they are hired. This Primer II discusses documents which an employee should locate and keep copies of during the course of their employment. When one’s employment becomes problematic, or an employee is terminated or laid off, or chooses to resign, these documents or policies will govern the terms of their employment.

During the course of one’s employment many documents cross an employee’s path, whether they are in written format or kept on a computer or in the “cloud”. It is very important for an employee to make certain that any documents or policies pertaining to them are in their possession or can be easily retrieved. The term “easily retrieved” does not mean storing this information on one’s office or…


Scientific Regress

Every now and again I come across a fantastic article that warrants posting here; I just came across one in First Things, which is a journal (print and online) published by the Institute on Religion and Public Life.  It is a scholarly and rather academic publication which has many well respected contributors.  I found this piece to be particularly interesting given how our society looks to science as a (the?) source of ultimate truths (often as a mutually exclusive alternative to spirituality).  This sort of scientism may be misplaced, and this article delves into the pitfalls that come with such an approach.  Be edified.

______

The problem with science is that so much of it simply isn’t. Last summer, the Open Science Collaboration announced that it had tried to replicate one hundred published psychology experiments sampled from three of the most prestigious journals in the field. Scientific claims rest on the idea that experiments repeated under nearly identical conditions ought to yield approximately the same results, but until very recently, very few had bothered to check in a systematic way whether this was actually the case. The OSC was the biggest attempt yet to check a field’s results, and the most shocking. In many cases, they had used original experimental materials, and sometimes even performed the experiments under the guidance of the original researchers. Of the studies that had originally reported positive results, an astonishing 65 percent failed to show statistical significance on replication, and many of the remainder showed greatly reduced effect sizes.

Their findings made the news, and quickly became a club with which to bash the social sciences. But the problem isn’t just with psychology. There’s an unspoken rule in the pharmaceutical industry that half of all academic biomedical research will ultimately prove false, and in 2011 a group of researchers at Bayer decided to test it. Looking at sixty-seven recent drug discovery projects based on preclinical cancer biology research, they found that in more than 75 percent of cases the published data did not match up with their in-house attempts to replicate. These were not studies published in fly-by-night oncology journals, but blockbuster research featured in Science, Nature, Cell, and the like. The Bayer researchers were drowning in bad studies, and it was to this, in part, that they attributed the mysteriously declining yields of drug pipelines. Perhaps so many of these new drugs fail to have an effect because the basic research on which their development was based isn’t valid.

When a study fails to replicate, there are two possible interpretations. The first is that, unbeknownst to the investigators, there was a real difference in experimental setup between the original investigation and the failed replication. These are colloquially referred to as “wallpaper effects,” the joke being that the experiment was affected by the color of the wallpaper in the room. This is the happiest possible explanation for failure to reproduce: It means that both experiments have revealed facts about the universe, and we now have the opportunity to learn what the difference was between them and to incorporate a new and subtler distinction into our theories.

The other interpretation is that the original finding was false. Unfortunately, an ingenious statistical argument shows that this second interpretation is far more likely. First articulated by John Ioannidis, a professor at Stanford University’s School of Medicine, this argument proceeds by a simple application of Bayesian statistics. Suppose that there are a hundred and one stones in a certain field. One of them has a diamond inside it, and, luckily, you have a diamond-detecting device that advertises 99 percent accuracy. After an hour or so of moving the device around, examining each stone in turn, suddenly alarms flash and sirens wail while the device is pointed at a promising-looking stone. What is the probability that the stone contains a diamond?

Most would say that if the device advertises 99 percent accuracy, then there is a 99 percent chance that the device is correctly discerning a diamond, and a 1 percent chance that it has given a false positive reading. But consider: Of the one hundred and one stones in the field, only one is truly a diamond. Granted, our machine has a very high probability of correctly declaring it to be a diamond. But there are many more diamond-free stones, and while the machine only has a 1 percent chance of falsely declaring each of them to be a diamond, there are a hundred of them. So if we were to wave the detector over every stone in the field, it would, on average, sound twice—once for the real diamond, and once when a false reading was triggered by a stone. If we know only that the alarm has sounded, these two possibilities are roughly equally probable, giving us an approximately 50 percent chance that the stone really contains a diamond.
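For readers who want to check the arithmetic, the diamond-detector story is a direct application of Bayes’ rule. Here is a minimal sketch in Python using exactly the numbers from the story above (one diamond among 101 stones, a detector that is right 99 percent of the time):

```python
# Bayes' rule applied to the diamond-detector story.
p_diamond = 1 / 101            # prior: one of the 101 stones holds the diamond
p_alarm_if_diamond = 0.99      # detector correctly flags the diamond 99% of the time
p_alarm_if_plain = 0.01        # detector falsely flags a plain stone 1% of the time

# Total probability that the alarm sounds for a randomly examined stone
p_alarm = (p_alarm_if_diamond * p_diamond
           + p_alarm_if_plain * (1 - p_diamond))

# Posterior probability that the flagged stone really holds the diamond
posterior = p_alarm_if_diamond * p_diamond / p_alarm
print(f"P(diamond | alarm) = {posterior:.3f}")  # ~0.497, i.e., about 50 percent
```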

This is a simplified version of the argument that Ioannidis applies to the process of science itself. The stones in the field are the set of all possible testable hypotheses, the diamond is a hypothesized connection or effect that happens to be true, and the diamond-detecting device is the scientific method. A tremendous amount depends on the proportion of possible hypotheses which turn out to be true, and on the accuracy with which an experiment can discern truth from falsehood. Ioannidis shows that for a wide variety of scientific settings and fields, the values of these two parameters are not at all favorable.

For instance, consider a team of molecular biologists investigating whether a mutation in one of the countless thousands of human genes is linked to an increased risk of Alzheimer’s. The probability of a randomly selected mutation in a randomly selected gene having precisely that effect is quite low, so just as with the stones in the field, a positive finding is more likely than not to be spurious—unless the experiment is unbelievably successful at sorting the wheat from the chaff. Indeed, Ioannidis finds that in many cases, approaching even 50 percent true positives requires unimaginable accuracy. Hence the eye-catching title of his paper: “Why Most Published Research Findings Are False.”

What about accuracy? Here, too, the news is not good. First, it is a de facto standard in many fields to use one in twenty as an acceptable cutoff for the rate of false positives. To the naive ear, that may sound promising: Surely it means that just 5 percent of scientific studies report a false positive? But this is precisely the same mistake as thinking that a stone has a 99 percent chance of containing a diamond just because the detector has sounded. What it really means is that for each of the countless false hypotheses that are contemplated by researchers, we accept a 5 percent chance that it will be falsely counted as true—a decision with a considerably more deleterious effect on the proportion of correct studies.
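To make concrete how badly that 5 percent cutoff interacts with a low base rate of true hypotheses, the same Bayesian bookkeeping can be used to compute the chance that a positive finding is actually true. The following is a minimal sketch of the Ioannidis-style calculation; the 1-in-100 prior and the 80 percent statistical power are illustrative assumptions, not figures from the article:

```python
# Chance that a "positive" finding is true, given a prior on true hypotheses,
# a statistical power, and a 5% false-positive cutoff (all assumed values).
def prob_positive_is_true(prior, power=0.8, alpha=0.05):
    true_pos = power * prior          # true hypotheses that test positive
    false_pos = alpha * (1 - prior)   # false hypotheses slipping past the cutoff
    return true_pos / (true_pos + false_pos)

# If only 1 in 100 contemplated hypotheses is true, most positives are spurious:
print(f"{prob_positive_is_true(prior=0.01):.3f}")  # ~0.139: about 1 in 7 is real
```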

Paradoxically, the situation is actually made worse by the fact that a promising connection is often studied by several independent teams. To see why, suppose that three groups of researchers are studying a phenomenon, and when all the data are analyzed, one group announces that it has discovered a connection, but the other two find nothing of note. Assuming that all the tests involved have a high statistical power, the lone positive finding is almost certainly the spurious one. However, when it comes time to report these findings, what happens? The teams that found a negative result may not even bother to write up their non-discovery. After all, a report that a fanciful connection probably isn’t true is not the stuff of which scientific prizes, grant money, and tenure decisions are made.

And even if they did write it up, it probably wouldn’t be accepted for publication. Journals are in competition with one another for attention and “impact factor,” and are always more eager to report a new, exciting finding than a killjoy failure to find an association. In fact, both of these effects can be quantified. Since the majority of all investigated hypotheses are false, if positive and negative evidence were written up and accepted for publication in equal proportions, then the majority of articles in scientific journals should report no findings. When tallies are actually made, though, the precise opposite turns out to be true: Nearly every published scientific article reports the presence of an association. There must be massive bias at work.

Ioannidis’s argument would be potent even if all scientists were angels motivated by the best of intentions, but when the human element is considered, the picture becomes truly dismal. Scientists have long been aware of something euphemistically called the “experimenter effect”: the curious fact that when a phenomenon is investigated by a researcher who happens to believe in the phenomenon, it is far more likely to be detected. Much of the effect can likely be explained by researchers unconsciously giving hints or suggestions to their human or animal subjects, perhaps in something as subtle as body language or tone of voice. Even those with the best of intentions have been caught fudging measurements, or making small errors in rounding or in statistical analysis that happen to give a more favorable result. Very often, this is just the result of an honest statistical error that leads to a desirable outcome, and therefore it isn’t checked as deliberately as it might have been had it pointed in the opposite direction.

But, and there is no putting it nicely, deliberate fraud is far more widespread than the scientific establishment is generally willing to admit. One way we know that there’s a great deal of fraud occurring is that if you phrase your question the right way, ­scientists will confess to it. In a survey of two thousand research psychologists conducted in 2011, over half of those surveyed admitted outright to selectively reporting those experiments which gave the result they were after. Then the investigators asked respondents anonymously to estimate how many of their fellow scientists had engaged in fraudulent behavior, and promised them that the more accurate their guesses, the larger a contribution would be made to the charity of their choice. Through several rounds of anonymous guessing, refined using the number of scientists who would admit their own fraud and other indirect measurements, the investigators concluded that around 10 percent of research psychologists have engaged in outright falsification of data, and more than half have engaged in less brazen but still fraudulent behavior such as reporting that a result was statistically significant when it was not, or deciding between two different data analysis techniques after looking at the results of each and choosing the more favorable.

Many forms of statistical falsification are devilishly difficult to catch, or close enough to a genuine judgment call to provide plausible deniability. Data analysis is very much an art, and one that affords even its most scrupulous practitioners a wide degree of latitude. Which of these two statistical tests, both applicable to this situation, should be used? Should a subpopulation of the research sample with some common criterion be picked out and reanalyzed as if it were the totality? Which of the hundreds of coincident factors measured should be controlled for, and how? The same freedom that empowers a statistician to pick a true signal out of the noise also enables a dishonest scientist to manufacture nearly any result he or she wishes. Cajoling statistical significance where in reality there is none, a practice commonly known as “p-hacking,” is particularly easy to accomplish and difficult to detect on a case-by-case basis. And since the vast majority of studies still do not report their raw data along with their findings, there is often nothing to re-analyze and check even if there were volunteers with the time and inclination to do so.

One creative attempt to estimate how widespread such dishonesty really is involves comparisons between fields of varying “hardness.” The author, Daniele Fanelli, theorized that the farther from physics one gets, the more freedom creeps into one’s experimental methodology, and the fewer constraints there are on a scientist’s conscious and unconscious biases. If all scientists were constantly attempting to influence the results of their analyses, but had more opportunities to do so the “softer” the science, then we might expect that the social sciences have more papers that confirm a sought-after hypothesis than do the physical sciences, with medicine and biology somewhere in the middle. This is exactly what the study discovered: A paper in psychology or psychiatry is about five times as likely to report a positive result as one in astrophysics. This is not necessarily evidence that psychologists are all consciously or unconsciously manipulating their data—it could also be evidence of massive publication bias—but either way, the result is disturbing.

Speaking of physics, how do things go with this hardest of all hard sciences? Better than elsewhere, it would appear, and it’s unsurprising that those who claim all is well in the world of science reach so reliably and so insistently for examples from physics, preferably of the most theoretical sort. Folk histories of physics combine borrowed mathematical luster and Whiggish triumphalism in a way that journalists seem powerless to resist. The outcomes of physics experiments and astronomical observations seem so matter-of-fact, so concretely and immediately connected to underlying reality, that they might let us gingerly sidestep all of these issues concerning motivated or sloppy analysis and interpretation. “E pur si muove,” Galileo is said to have remarked, and one can almost hear in his sigh the hopes of a hundred science journalists for whom it would be all too convenient if Nature were always willing to tell us whose theory is more correct.

And yet the flight to physics rather gives the game away, since measured any way you like—volume of papers, number of working researchers, total amount of funding—deductive, theory-building physics in the mold of Newton and Lagrange, Maxwell and Einstein, is a tiny fraction of modern science as a whole. In fact, it also makes up a tiny fraction of modern physics. Far more common is the delicate and subtle art of scouring inconceivably vast volumes of noise with advanced software and mathematical tools in search of the faintest signal of some hypothesized but never before observed phenomenon, whether an astrophysical event or the decay of a subatomic particle. This sort of work is difficult and beautiful in its own way, but it is not at all self-evident in the manner of a falling apple or an elliptical planetary orbit, and it is very sensitive to the same sorts of accidental contamination, deliberate fraud, and unconscious bias as the medical and social-scientific studies we have discussed. Two of the most vaunted physics results of the past few years—the announced discovery of both cosmic inflation and gravitational waves at the BICEP2 experiment in Antarctica, and the supposed discovery of superluminal neutrinos at the Swiss-Italian border—have now been retracted, with far less fanfare than when they were first published.

Many defenders of the scientific establishment will admit to this problem, then offer hymns to the self-correcting nature of the scientific method. Yes, the path is rocky, they say, but peer review, competition between researchers, and the comforting fact that there is an objective reality out there whose test every theory must withstand or fail, all conspire to mean that sloppiness, bad luck, and even fraud are exposed and swept away by the advances of the field.

So the dogma goes. But these claims are rarely treated like hypotheses to be tested. Partisans of the new scientism are fond of recounting the “Sokal hoax”—physicist Alan Sokal submitted a paper heavy on jargon but full of false and meaningless statements to the postmodern cultural studies journal Social Text, which accepted and published it without quibble—but are unlikely to mention a similar experiment conducted on reviewers of the prestigious British Medical Journal. The experimenters deliberately modified a paper to include eight different major errors in study design, methodology, data analysis, and interpretation of results, and not a single one of the 221 reviewers who participated caught all of the errors. On average, they caught fewer than two—and, unbelievably, these results held up even in the subset of reviewers who had been specifically warned that they were participating in a study and that there might be something a little odd in the paper that they were reviewing. In all, only 30 percent of reviewers recommended that the intentionally flawed paper be rejected.

If peer review is good at anything, it appears to be keeping unpopular ideas from being published. Consider the finding of another (yes, another) of these replicability studies, this time from a group of cancer researchers. In addition to reaching the now unsurprising conclusion that only a dismal 11 percent of the preclinical cancer research they examined could be validated after the fact, the authors identified another horrifying pattern: The “bad” papers that failed to replicate were, on average, cited far more often than the papers that did! As the authors put it, “some non-reproducible preclinical papers had spawned an entire field, with hundreds of secondary publications that expanded on elements of the original observation, but did not actually seek to confirm or falsify its fundamental basis.”

What they do not mention is that once an entire field has been created—with careers, funding, appointments, and prestige all premised upon an experimental result which was utterly false due either to fraud or to plain bad luck—pointing this fact out is not likely to be very popular. Peer review switches from merely useless to actively harmful. It may be ineffective at keeping papers with analytic or methodological flaws from being published, but it can be deadly effective at suppressing criticism of a dominant research paradigm. Even if a critic is able to get his work published, pointing out that the house you’ve built together is situated over a chasm will not endear him to his colleagues or, more importantly, to his mentors and patrons.

Older scientists contribute to the propagation of scientific fields in ways that go beyond educating and mentoring a new generation. In many fields, it’s common for an established and respected researcher to serve as “senior author” on a bright young star’s first few publications, lending his prestige and credibility to the result, and signaling to reviewers that he stands behind it. In the natural sciences and medicine, senior scientists are frequently the controllers of laboratory resources—which these days include not just scientific instruments, but dedicated staffs of grant proposal writers and regulatory compliance experts—without which a young scientist has no hope of accomplishing significant research. Older scientists control access to scientific prestige by serving on the editorial boards of major journals and on university tenure-review committees. Finally, the government bodies that award the vast majority of scientific funding are either staffed or advised by distinguished practitioners in the field.

All of which makes it rather more bothersome that older scientists are the most likely to be invested in the regnant research paradigm, whatever it is, even if it’s based on an old experiment that has never successfully been replicated. The quantum physicist Max Planck famously quipped: “A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it.” Planck may have been too optimistic. A recent paper from the National Bureau of Economic Research studied what happens to scientific subfields when star researchers die suddenly and at the peak of their abilities, and finds that while there is considerable evidence that young researchers are reluctant to challenge scientific superstars, a sudden and unexpected death does not significantly improve the situation, particularly when “key collaborators of the star are in a position to channel resources (such as editorial goodwill or funding) to insiders.”

In the idealized Popperian view of scientific progress, new theories are proposed to explain new evidence that contradicts the predictions of old theories. The heretical philosopher of science Paul Feyerabend, on the other hand, claimed that new theories frequently contradict the best available evidence—at least at first. Often, the old observations were inaccurate or irrelevant, and it was the invention of a new theory that stimulated experimentalists to go hunting for new observational techniques to test it. But the success of this “unofficial” process depends on a blithe disregard for evidence while the vulnerable young theory weathers an initial storm of skepticism. Yet if Feyerabend is correct, and an unpopular new theory can ignore or reject experimental data long enough to get its footing, how much longer can an old and creaky theory, buttressed by the reputations and influence and political power of hundreds of established practitioners, continue to hang in the air even when the results upon which it is premised are exposed as false?

The hagiographies of science are full of paeans to the self-correcting, self-healing nature of the enterprise. But if raw results are so often false, the filtering mechanisms so ineffective, and the self-correcting mechanisms so compromised and slow, then science’s approach to truth may not even be monotonic. That is, past theories, now “refuted” by evidence and replaced with new approaches, may be closer to the truth than what we think now. Such regress has happened before: In the nineteenth century, the (correct) vitamin C deficiency theory of scurvy was replaced by the false belief that scurvy was caused by proximity to spoiled foods. Many ancient astronomers believed the heliocentric model of the solar system before it was supplanted by the geocentric theory of Ptolemy. The Whiggish view of scientific history is so dominant today that this possibility is spoken of only in hushed whispers, but ours is a world in which things once known can be lost and buried.

And even if self-correction does occur and theories move strictly along a lifecycle from less to more accurate, what if the unremitting flood of new, mostly false, results pours in faster? Too fast for the sclerotic, compromised truth-discerning mechanisms of science to operate? The result could be a growing body of true theories completely overwhelmed by an ever-larger thicket of baseless theories, such that the proportion of true scientific beliefs shrinks even while the absolute number of them continues to rise. Borges’s Library of Babel contained every true book that could ever be written, but it was useless because it also contained every false book, and both true and false were lost within an ocean of nonsense.

Which brings us to the odd moment in which we live. At the same time as an ever more bloated scientific bureaucracy churns out masses of research results, the majority of which are likely outright false, scientists themselves are lauded as heroes and science is upheld as the only legitimate basis for policy-making. There’s reason to believe that these phenomena are linked. When a formerly ascetic discipline suddenly attains a measure of influence, it is bound to be flooded by opportunists and charlatans, whether it’s the National Academy of Science or the monastery of Cluny.

This comparison is not as outrageous as it seems: Like monasticism, science is an enterprise with a superhuman aim whose achievement is forever beyond the capacities of the flawed humans who aspire toward it. The best scientists know that they must practice a sort of mortification of the ego and cultivate a dispassion that allows them to report their findings, even when those findings might mean the dashing of hopes, the drying up of financial resources, and the loss of professional prestige. It should be no surprise that even after outgrowing the monasteries, the practice of science has attracted souls driven to seek the truth regardless of personal cost and despite, for most of its history, a distinct lack of financial or status reward. Now, however, science and especially science bureaucracy is a career, and one amenable to social climbing. Careers attract careerists, in Feyerabend’s words: “devoid of ideas, full of fear, intent on producing some paltry result so that they can add to the flood of inane papers that now constitutes ‘scientific progress’ in many areas.”

If science was unprepared for the influx of careerists, it was even less prepared for the blossoming of the Cult of Science. The Cult is related to the phenomenon described as “scientism”; both have a tendency to treat the body of scientific knowledge as a holy book or an a-religious revelation that offers simple and decisive resolutions to deep questions. But it adds to this a pinch of glib frivolity and a dash of unembarrassed ignorance. Its rhetorical tics include a forced enthusiasm (a search on Twitter for the hashtag “#sciencedancing” speaks volumes) and a penchant for profanity. Here in Silicon Valley, one can scarcely go a day without seeing a t-shirt reading “Science: It works, b—es!” The hero of the recent popular movie The Martian boasts that he will “science the sh— out of” a situation. One of the largest groups on Facebook is titled “I f—ing love Science!” (a name which, combined with the group’s penchant for posting scarcely any actual scientific material but a lot of pictures of natural phenomena, has prompted more than one actual scientist of my acquaintance to mutter under her breath, “What you truly love is pictures”). Some of the Cult’s leaders like to play dress-up as scientists—Bill Nye and Neil deGrasse Tyson are two particularly prominent examples— but hardly any of them have contributed any research results of note. Rather, Cult leadership trends heavily in the direction of educators, popularizers, and journalists.

At its best, science is a human enterprise with a superhuman aim: the discovery of regularities in the order of nature, and the discerning of the consequences of those regularities. We’ve seen example after example of how the human element of this enterprise harms and damages its progress, through incompetence, fraud, selfishness, prejudice, or the simple combination of an honest oversight or slip with plain bad luck. These failings need not hobble the scientific enterprise broadly conceived, but only if scientists are hyper-aware of and endlessly vigilant about the errors of their colleagues . . . and of themselves. When cultural trends attempt to render science a sort of religion-less clericalism, scientists are apt to forget that they are made of the same crooked timber as the rest of humanity and will necessarily imperil the work that they do. The greatest friends of the Cult of Science are the worst enemies of science’s actual practice.

William A. Wilson is a software engineer in the San Francisco Bay Area.

This article can be found on the First Things website here and was published in the May 2016 edition.

Case Again Examines NLRB Jurisdiction Over Religious Colleges

This is from religionclause.blogspot.com which you can find here:

“Last year in the Pacific Lutheran University case, the National Labor Relations Board developed a new test for when it will assert jurisdiction over a religiously-affiliated college. Even if the college holds itself out as providing a religious educational environment, the NLRB will assert jurisdiction unless the faculty members seeking to organize are themselves held out as performing a specific role in maintaining the college’s religious character. (See prior posting.) Last March, applying that test, an NLRB Regional Director held that it had jurisdiction over a faculty union election at Seattle University. (See prior posting.) The University appealed to the full NLRB, and in June it ordered the Regional Director to reopen the record so the parties could introduce additional evidence relevant to the NLRB’s new Pacific Lutheran test. (Docket).

In an August 17, 2015 opinion (full text), the Regional Director examined at length that additional evidence relating to how the faculty is held out and again concluded that the NLRB has jurisdiction over them.  Lexology analyzes that decision. On August 31, the University filed a 50-page request for review of the Regional Director’s latest decision (full text), arguing not just that the Pacific Lutheran test was misapplied, but arguing also:

The new test under PLU contravenes the United States Supreme Court’s holding in National Labor Relations Board v. Catholic Bishop of Chicago … which held that Congress did not intend to bring teachers at church-operated schools within the jurisdiction of the Act. The PLU test contains the same constitutional infirmities as existed in the Board’s former “substantial religious character” test, which caused the D.C. Circuit Court of Appeals to require a simple, “bright line” test to determine Board jurisdiction over religiously-affiliated colleges and universities…”

You can learn more about this issue here.

NEARfest 2000 Event Program

This post is in my series regarding the North East Art Rock Festival (NEARFest).  You can find all of my posts regarding NEARFest here and I started the series here.

At each NEARFest, the Festival organizers created a weekend event program.  I was lucky enough to have purchased one at each of the Festivals I attended, and I will post photographs of them all here.  These programs were expertly crafted, with many beautiful photographs and well-written descriptions and histories.  Of course, they also contain their fair share of ads, as one may expect.

I was able to purchase a program at NEARFest 2000 and I thought it would be fun to post it here for prog rock fans who may not have had the opportunity to go to the Festival and/or purchase the program.  Accordingly, I took photographs of each page of the program and posted them below.

Enjoy!

[Gallery: photographs of each page of the NEARFest 2000 event program]

Modern Weddings Have Lost Interest in the Marriage Bed

Every now and again I come across a fantastic article that warrants posting here; I just came across one in First Things, which is a journal (print and online) published by the Institute on Religion and Public Life.  It is a scholarly and rather academic publication which has many well respected contributors.  I have been a commentator on the changes in sexual culture in the West and its abandonment of traditional sexual ethics and mores (see here for an example).  It should not be a shock to anyone who knows me or reads my material that I think these changes are and/or will be a disaster to our culture, children, families, and marriages.  The article I am sharing here reflects yet another one of those changes.  Due to the diminution and redefinition of marriage, the once holy, sacred, and, indeed, exciting marriage bed – which is to be first experienced on one’s wedding night – no longer has much meaning, and this lack of meaning is reflected in how weddings are planned and scheduled.  I will leave it at that and let the article do the rest of the talking.  Be edified.

_________

I became engaged at Easter, and, as I’ve started planning our wedding with my fiancé, I’ve noticed a suspicious lacuna in the wedding how-to’s I’ve picked up. I would have thought, after one magazine’s handbook covered strategies for getting your pet turtle to join your wedding procession (it won’t walk down the aisle quickly enough, so you must tow its tank in a tulle-swathed wagon), that there was nothing the wedding-industrial complex was going to leave undiscussed.

Except the wedding night.

It’s not that the books and magazines and websites draw a modest veil over the occasion or that their remit stops when the ceremony ends (there are plenty of discussions of honeymoon planning). As I read through The Knot Book of Wedding Lists, it was clear that the wedding night wasn’t simply being ignored but actively treated as an afterthought.

The book encouraged readers to remember that the festival spirit of a wedding wasn’t limited to the ceremony and reception.

There’s not just the one (huge) celebration to think about—kick off your engagement with a cocktail party; throw a rehearsal dinner to remember; extend the wedding-night celebrations with an after-party; and send your guests off with a post-wedding brunch.

It’s those last two that cause the problem. The book recommends planning (and planning to attend) a second party after the reception winds down, telling spouses-to-be: “An after-party is more than just an extension of your wedding day—it’s a great way to show off more of your wedding style with surprising details and personal touches.”

How late will that party go? Well, the planning guide notes that it’s up to you, but that most venues will close by two or three in the morning. “A good rule of thumb is two to four hours, depending on the time your reception ends.”

After that, the newlyweds go home, get into bed, and, just before they pass out from exhaustion, they set the alarm for the recommended farewell brunch, now just a couple hours away.

It can’t be that the book’s authors didn’t notice that they’d squeezed the wedding night down to nothing (this is a book that reminds you that if you’re only booking one hairstylist for you and your bridesmaids, someone will need to volunteer for the early morning slot).

It’s simply that this is a plan that assumes there will be nothing particularly special about the first night that a couple spends together. It’s a to-do list for engaged couples who have already been sexually intimate before marriage and don’t need to reserve any time or energy for consummation. In all the hustle and bustle of a wedding weekend, there’s no time for non-essentials, and one more night together doesn’t make the schedule.

But the editors of The Knot and the brides and grooms who listen to them aren’t simply not choosing the wedding night; they’re neglecting it in favor of something that does deserve a little more respect than processional turtles. The reason they recommend a wedding brunch (when many run-ragged spouses might prefer to sleep in) is that it’s a “chance to thank your guests and spend a bit more time with loved ones who’ve travelled far to partake in the celebration.”

If the bride and groom have already lived as man and wife, then it may be their friends who seem to offer the rarest, most urgent opportunity to give and receive love. It might be the one time this year you see the friend who moved out to California, or the very busy former roommate whose job keeps her traveling, or the cousin with a lot of small children who isn’t making a lot of trips until the youngest can fly. So why not pack in all the time with your guests that you can, since the bridegroom you will always have with you, but everyone else will be gone by Monday?

This is a kinder sort of error than the conventional forms of wedding excess. It is rooted in a love for others and a desire to make as great a self-gift as is possible. But it’s still a form of profligacy. Party after party robs the newlyweds of the chance to give themselves to each other.

Far better, even for a couple who has been sexually active before marriage, to set aside their night as their own, and to recognize that, as much as they love their friends, they are no longer only their own, their time not only their own to spend.

Instead of recommending wedding schedules that erase the bride and groom’s obligation to (and delight in) each other, The Knot and other wedding guides might do well to carve more time out of the reception for the couple to spend together. They could borrow the tradition of the Yichud Room from Jewish weddings. After they are wed, a Jewish bride and groom head into a separate, locked room for a private interlude. It may be brief (eight minutes is the minimum required), but it allows them not to be hosts, but simply to be two people, a little awed by what they’ve offered to each other.

By Leah Libresco, a blogger for Patheos who works as a statistician in Washington, DC. Her first book, recently published, is Arriving at Amen: Seven Catholic Prayers That Even I Can Offer.

This article can be found on the First Things website here and was published on April 19, 2016.

Burden is Heavy When Considering the Weight of the Evidence

In the matter of Hatchigian v. The Connelly Firm, Pennsylvania Superior Court, Case No. 1413 EDA 2013, the Court weighed in on when a verdict rendered at trial may be set aside as against the weight of the evidence.

The underlying matter was a lawsuit brought by a client against a law firm. Plaintiff retained Defendant for legal representation at a reduced fee of $75 per hour and, accordingly, remitted a $750 non-refundable retainer. After three months, Plaintiff terminated Defendant’s representation only two days prior to a pre-trial hearing. Plaintiff proceeded pro se, and Defendant filed for, and was granted, leave to withdraw as Plaintiff’s attorney. Defendant did not refund any of the retainer to Plaintiff.

Plaintiff subsequently filed legal malpractice and breach of fiduciary duty claims against Defendant. In response, Defendant filed a counterclaim seeking compensation for the work it performed in excess of the $750 retainer.

At arbitration, Defendant prevailed against Plaintiff’s claims and was awarded damages against Plaintiff on its counterclaim. Plaintiff appealed, and the matter proceeded to a jury trial, at which the jury returned a verdict against Plaintiff on his claims and against Defendant on its counterclaim. Plaintiff appealed again, this time to the Superior Court.

On appeal, the Superior Court reviewed Plaintiff’s argument that the jury’s verdict was against the weight of the evidence presented at trial; Plaintiff sought a new trial. As an initial matter, Plaintiff argued that the trial court, in ruling on post-trial motions, had held that he waived his weight-of-the-evidence argument; the Superior Court found that the trial court did not rule that the argument was waived but, instead, dealt with it directly.

In making his argument that the jury’s verdict was against the weight of the evidence, Plaintiff claimed that the jury did not consider his contentions that Defendant was willing to be flexible regarding its fees, would return unused fees, and did not provide appropriate representation.

The Court noted that it must give the “gravest consideration” to the trial court’s findings and reasons in reviewing its determination that the jury’s verdict was not against the weight of the evidence presented at trial. Furthermore, the Court pointed out that witness credibility is an issue solely for the jury to determine. Finally, it is only when the jury’s verdict is so contrary to the evidence that it shocks the conscience that a court is authorized to award a new trial.

In making its ruling, the Superior Court listed all of the facts to which the parties agreed prior to trial, a list which was, indeed, quite lengthy; in light of that, the facts actually in dispute were fairly few in number. Considering the parties’ broad agreement on the facts, and the jury’s discretion when it comes to credibility, there simply was no basis to conclude that the jury’s verdict was against the weight of the evidence.

Ultimately, the argument that a jury’s verdict is against the weight of the evidence must meet an exceedingly high standard in order to succeed.

This article was originally published in Upon Further Review on April 21, 2015 and can be seen here.
