When did anti-semitism become “horrible”?

I was just reading about the case of Steven Salaita, who had his offer of a tenured professorship of American Indian Studies at the University of Illinois withdrawn because of some fairly ferocious anti-Israel tweets that he perpetrated. Now, I strongly support his right to write whatever he wants, particularly in his free time in a non-academic forum, as long as it does not cross the line into outright personal abuse or overt racism, sexism, etc.

Nonetheless, I feel obliged to point out that the content of these tweets would not encourage me to believe that their author is a clear and careful thinker. In particular, there was this one:

Zionists: transforming ‘anti-Semitism’ from something horrible into something honorable since 1948.

For someone in a field with a significant historical component this is particularly embarrassing. For substantial portions of respectable society anti-semitism was considered perfectly honourable, until the Nazis embarrassed everyone by taking it too far. So maybe there was a period of about 3 years when anti-semitism was “horrible”. Then it went back to being honourable. But it’s all the fault of the Zionists.

Actually, there need not be any gap at all, since some of the atrocities committed by Jewish fighters in Palestine were at least as bad as the current attack on Gaza. So he might have made an even better tweet:

Zionists: preventing ‘anti-Semitism’ from being horrible after 1945.

I’m guessing he wouldn’t have felt comfortable with that one, though.

But I’m still writing to the University of Illinois chancellor to protest against this firing. I am appalled by the weaselly excuses of former AAUP president Cary Nelson (who proudly drapes that emeritus title about himself while undermining the AAUP’s principles), that this is striking a blow for “civility”, and that Salaita was fomenting violence.

La planète des singes

Imagine a French cartoon depicting a scene from “‘Batman’ à l’américaine”: a portly Batman stuffing his face with a cheeseburger, demonstrating how an American Batman would differ from the normal French Batman that everyone is familiar with.

The most recent (28 July, 2014) issue of The New Yorker has a cartoon, showing two figures trudging along a beach, from which the top of the Eiffel Tower can be seen poking through the sand; the woman holds the reins of a horse, the man has hurled himself to the ground, crying “Non!” The caption is “‘Planet of the Apes’ in French”.

So, I found myself wondering how this is supposed to be understood. Do most people know that Planet of the Apes was originally a novel (in French) by Pierre Boulle (author as well of The Bridge on the River Kwai), and that the Eiffel Tower does play an important role in the end of the novel? Does the author know? Does he expect the readers to know that? It very much affects how you read the cartoon, and what the humour is (which I’m having difficulty discerning). If he knows, then it’s presumably supposed to be some sort of comment on Hollywood appropriation of other countries’ icons. If he doesn’t know, then the joke is supposed to be about how ridiculous it would be for space explorers to be exotic Gauls, rather than normal Americans. Hence my thought above about how a comparably ignorant French cartoonist might “americanise” a normal French character that he doesn’t realise was already American to begin with.

Early 20th century MOOCs

It is always enlightening to see how the same breathless optimism now derived from our newest innovations, the claims that perennial problems are going to be solved at last, was once derived from innovations a century or more old, when they were new. In particular, I was struck by Kevin Birmingham’s account (in his remarkable book on the genesis of James Joyce’s Ulysses) of the early days of Random House, and its Modern Library series:

Both within and beyond universities, people began thinking that certain books illuminated eternal features of the human condition. They didn’t demand expertise — one didn’t need to speak classical Greek or read all of Plato to benefit from The Republic — all they demanded was, as [Professor John] Erskine put it, “a comfortable chair and a good light.” […]

The Modern Library offered commodified prestige with the illusion of self-reliance. Readers could have the benefits of institutional culture without the institutions. They could rise above the masses by purchasing a dozen inexpensive books.

Replace “good light” with “fast internet connection”, and you have the promise of Coursera. Of course, that jibes well with the feelings of many skeptics, who wonder why we need new technology to democratise education. As long as you’re lecturing to the masses, where personal feedback is logistically impossible, doesn’t a well-stocked library suffice?

Porn suits

One of Roald Dahl’s final and most bitter stories (from the late 1980s) tells of a scam engineered by a rare book seller, who picks names out of the obituaries, and then sends a bill to the grieving family that includes expensive items of exotic erotica. The families inevitably pay the bill and avoid asking any embarrassing questions, assuming that the deceased had kept these proclivities well hidden.

I was reminded of this when I saw this article in The New Yorker, about the web site X-art.com, which has become “the biggest filer of copyright-infringement lawsuits” in the US.

Today, they average more than three suits a day, and defendants have included elderly women, a former lieutenant governor, and countless others. “Please be advised that I am ninety years old and have no idea how to download anything,” one defendant wrote in a letter, filed in a Florida court. Nearly every case settles on confidential terms, according to a review of dozens of court records…

It is hard to see why anyone facing such a suit would choose not to settle: hiring a lawyer costs more than settling, and damages are exponentially higher in the event of a loss at trial. Plus, no one wants to be publicly accused of stealing pornography. To avoid embarrassment, many defendants may choose to settle before Malibu Media names them in a complaint.

The long arm of the gay mafia

I was amused by the intimations that cropped up in reports on Brendan Eich’s dismissal as CEO of Mozilla that he had been (in the words of one comedian) “whacked by the gay mafia”. Now, the “X mafia” is a standard lazy joke, and the more nonviolent the image of the group whose mafia this is supposed to be, the better the appeal to those whose livelihood depends on a steady stream of cheap laughs. But my first reaction was that for gay people to be accused of mafia tactics must be a marker of progress — people don’t like the mafia, but they respect its power! Surely the notion that gay people are too powerful would have been a difficult concept to formulate until very recently.

I was wrong, at least as regards the entertainment industry. In Terry Teachout’s fascinating new biography of Duke Ellington, Mercer Ellington is quoted as saying that his father was unconcerned about Billy Strayhorn’s homosexuality.

But Mercer also reports that Ellington believed in the existence of “a Faggot Mafia… He went on to recount how homosexuals hired their own kind whenever they could, and how, when they had achieved executive status, they maneuvered to keep straight guys out of the influential positions.”

The domestic elephant

I’ve long been bemused by the function of the elephant in the popular phrase “the elephant in the living room”. When it was invented by the recovery movement — I think in the 1980’s — it clearly was supposed to be both a shocking and ridiculous image. Families, it was saying, often deal with huge and obvious problems, such as addiction or abuse, by developing elaborate mechanisms for ignoring the very existence of the problem, mechanisms that to an outsider seem both confounding and absurd. It’s as though you had an elephant in your living room but pretended it wasn’t there.

The weird thing about the later career of the expression is that it has come to be an everyday expression — “That’s the elephant in the living room, isn’t it?” — as though it were perfectly ordinary to have such a thing; indeed, as though every living room has its elephants. I thought of this when I encountered an early use of elephants in the domestic setting, but with a different thrust. In Dominic Sandbrook’s history of Britain in the late 1970’s, Seasons in the Sun, there is a quote from Labour’s Welsh Secretary John Morris, acknowledging defeat in the devolution referendum:

If you see an elephant on your doorstep, you know what it is.

(The second episode of the new season of the BBC’s Sherlock made excellent comic use of the phrase, playing on its strange ubiquity. Giving a wedding toast to Watson, Sherlock reels off a list of some of their cases, concluding with “And then there’s the elephant in the living room.” For a moment it sounds like he’s switching modes, from the CV to something more personal, but then we have a split-second flashback to the detective encountering a real elephant in a real living room, and you remember that “The Elephant in the Living Room” does sound kind of like the title of a Conan Doyle story.)

Demographic fallacies and classical music

I was just reading an article in Slate with the title “Classical Music in America is Dead”. The argument boils down to two points:

  1. Classical music listeners are a small portion of the population.
  2. There are relatively few young people in the audience.

With regard to (1), I thought it interesting that he writes

Just 2.8 percent of albums sold in 2013 were categorized as classical. By comparison, rock took 35 percent; R&B 18 percent; soundtracks 4 percent. Only jazz, at 2.3 percent, is more incidental to the business of American music.

What’s interesting is that, while jazz is certainly a minority taste, and its trajectory in American culture has closely paralleled that of classical music in the 20th century, I don’t think anyone would claim that jazz is dead.

He quotes the critic Greg Sandow, who makes a demographic argument:

And the aging audience is also a shrinking one. The NEA, ever since 1982, has reported a smaller and smaller percentage of American adults going to classical music performances. And, as time goes on, those who do go are increasingly concentrated in the older age groups (so that by now, the only group going as often as it did in the past are those over 65).

Which means that the audience is most definitely shrinking. Younger people aren’t coming into it. In the 1980s, the NEA reported, the percentage of people under 30 in the classical music audience fell in half. And older people also aren’t coming into the classical audience. If they were, we’d see a steady percentage of people in their 40s and 50s going to classical events, but we don’t. That percentage is falling.

Of course, this is vastly overstated. “Younger people” are “coming into it”… in smaller numbers than before. It’s an absurd fallacy, not an uncommon one, and first addressed (to my knowledge) in theoretical ageing research by Yashin et al. in the 1980s, that you can determine the longitudinal dynamics for individuals by looking at the cross-sectional age distribution.

Consider a model where individuals are recruited into classical music at a constant rate over their lifetimes, ending with 10% of the 80-year-olds. (We’ll leave the older population out of it.) Then about 11% of the adult audience would be under 30. Suppose there were now a change, such that children under 15 were no longer being recruited into classical music, but after that age they continued to enter at the same rate as before. Then the fraction of the adult audience under 30 would be halved, to about 5.5%. The number of people in their 40s and 50s going to concerts would decline by about 15%.
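The model can be sketched in a few lines of code. The exact percentages depend on where you draw the age boundaries; taking “adults” to mean ages 18 to 80 is my assumption here, and it yields shares slightly lower than the figures above, but the rough halving of the under-30 share falls out the same way.

```python
# Toy version of the recruitment model above. The population is uniform
# across ages; under the old regime people enter the audience at a
# constant rate from birth, so attendance at age a is proportional to a.
# Under the new regime recruitment only begins at 15, so attendance at
# age a is proportional to max(a - 15, 0).

def audience_share_under_30(recruitment_start):
    """Fraction of the adult (18-80) audience that is under 30,
    when recruitment begins at age `recruitment_start`."""
    def attending(age):
        return max(age - recruitment_start, 0)
    under_30 = sum(attending(a) for a in range(18, 30))
    total = sum(attending(a) for a in range(18, 80))
    return under_30 / total

old_share = audience_share_under_30(0)   # recruitment from birth
new_share = audience_share_under_30(15)  # no recruitment before 15
print(f"under-30 share: {old_share:.1%} before, {new_share:.1%} after")
```

With these cutoffs the under-30 share drops from roughly 9% to roughly 5% — about halved — even though every cohort over 15 is being recruited exactly as before.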

I’m not arguing that this is what is going on. A lot of the story is probably the general splintering of the music audience, and the fact that people increasingly prefer to stay home for their entertainment. (This is one reason why I have argued that the classical music establishment’s reliance on enormously expensive orchestras and opera companies is a mistake.) Just that you can’t make inferences about individual trajectories over time without data about individual trajectories.

American exceptionalism: Harassing tourists and others

A discussion broke out on The Dish about the high-handed and sometimes abusive treatment that foreigners entering the US are subjected to, even citizens of international peers, like the EU, compared with the treatment that Americans (and others) receive entering most European countries. All foreigners entering the US are, by law, treated as “an intending immigrant” when they arrive, and need to prove otherwise. Now, a former immigration official has replied with a justification:

Congress demands by law that every applicant for a tourist visa (or any nonimmigrant visa) be considered “an intending immigrant” until they prove otherwise. With good reason – a lot of them are intending immigrants. Why is it Americans have such an easier time traveling to other countries than citizens of those countries have traveling here? Because Americans go home, that’s why.

Even when US citizens work off the books for a year or two overseas, they almost always wind up coming home. The same can’t be said of most foreigners who come here, even Europeans.

Sounds pretty convincing. But is it true? How would he know? I’m always suspicious of categorical claims like this, even when I make them myself.

How about if we compare the number of people from different countries living abroad? According to the French government, there are 1.6 million French citizens living abroad, so about 2.7% of the population. About 2 million Germans (not counting the 600,000 or so Russians who are officially considered “Germans” by ancient descent), so about 2.5%. And Americans? According to the Bureau of Consular Affairs (part of the State Department) there are 7.6 million Americans living abroad. Divided into a population of 316 million, we get about 2.4%. Even if some of these estimates are off, it’s clearly not a qualitative difference.
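The arithmetic is easy to check. The expatriate counts are the ones quoted above; the home-population figures are rough mid-2010s values that I am supplying as assumptions to back out the percentages.

```python
# (citizens living abroad, resident home population), approximate
expat_figures = {
    "France":  (1.6e6, 60e6),
    "Germany": (2.0e6, 80e6),
    "USA":     (7.6e6, 316e6),
}

for country, (abroad, home) in expat_figures.items():
    print(f"{country}: {abroad / home:.1%} of citizens live abroad")
```

All three shares come out between 2.4% and 2.7% — no qualitative difference.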

Sorry, America, the world just isn’t as into you as you like to imagine.

Compute the interest

Another comment based on Sharon Ann Murphy’s wonderful book on 19th century life insurance in the US: She describes an 1852 case in which the American Mutual Insurance Company tried to renege on a claim, where a preëxisting condition was found in an autopsy.

Not surprisingly, the jury sided with the beneficiaries; they “were out thirteen minutes, just long enough to compute the interest” on the original claim.

Indeed, the verdict is not surprising. What is surprising, however, is that the jury computed the interest. I wonder how likely it is that a jury of twelve today would include even a single person capable of computing compound interest.
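For what it’s worth, the computation itself is a one-liner. The $5,000 claim, 6% annual rate, and two-year delay below are invented figures for illustration, not details from the 1852 case.

```python
def compound_interest(principal, annual_rate, years):
    """Total owed after `years`, with interest compounded annually."""
    return principal * (1 + annual_rate) ** years

# e.g. a $5,000 claim at 6%, held up for two years
print(f"${compound_interest(5000, 0.06, 2):,.2f}")  # $5,618.00
```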

Cool nerds

An interesting article by Carl Wilson (apparently the start of a month-long series) in Slate looks at the word “cool” in its past and current incarnations. It’s a lot more readable and to the point than jazz critic Ted Gioia’s fundamentally trivial book The Birth and Death of the Cool, but I found myself hung up on his comment

 You’d be unlikely to use other decades-old slang—groovy or rad or fly—to endorse any current cultural object, at least with a straight face, but somehow cool remains evergreen.

As it happens, I was just recently having a conversation about the word nerd. I have a very clear memory that when the ’50s nostalgia wave broke in the mid-1970s (so I was about 8 years old), I encountered the word in TV programs like Happy Days as an antiquated idiom. I had never heard anyone use the word, and I associated it with my parents’ childhoods. When I was a student the prevailing word for someone too bookish to be cool (such as myself) was weenie. As late as 1993, according to an OED citation, Scientific American felt the need to explain

 ‘Nerd’..is movie shorthand for scientists, engineers and assorted technical types who play chess, perhaps, or the violin.

And I remember encountering the word again in the self-righteous name of the Society of Nerds and Geeks (SONG), an undergraduate club that popped up at Harvard about 1989 (when I was a graduate student in mathematics). This was a self-conscious attempt to co-opt these words, which at the time were exclusively terms of abuse, along the lines of the way what was formerly the sexual invert community, or whatever, renamed itself gay, and later queer. Harvard mathematics graduate student Leonid Fridman, who advised the club, published an op-ed on Jan 11, 1990 in the NY Times arguing that the popular disdain for the brainy and bookish would put the US at a disadvantage in competing with its economic and military competitors. (Remember, this was still the Cold War.) The article concluded with this plea:

Until the words “nerd” and “geek” become terms of approbation and not derision, we do not stand a chance.

This dream has been fulfilled, in the linguistic sense, beyond anything that could have been imagined, but my impression is that there has been little change in the effective social status of academically-inclined American youth. Fridman’s NY Times op-ed is mysteriously unfindable in the Times online archive, so I have copied the text below: