James Green, preeminent labor historian, has died. His books reached far beyond the academy to transform popular understanding of the United States’ most dramatic labor conflicts. Probably his most famous book is Death in the Haymarket: A Story of Chicago, the First Labor Movement, and the Bombing that Divided America. His last book was also brilliant. The Devil Is Here in These Hills: West Virginia’s Coal Miners and Their Battle for Freedom brought into public consciousness the brutal lives of those workers and the rebellion that culminated in the Battle of Blair Mountain, the largest civil uprising in the United States since the Civil War. PBS adapted the book into a documentary for The American Experience titled The Mine Wars, which is also quite excellent. A great historian and a great loss.
Because Pink Floyd makes most things better. (For everything else there’s Robitussin.)
This is a guest post by Jacob Remes, who is clinical assistant professor at NYU’s Gallatin School of Individualized Study. His book, Disaster Citizenship: Survivors, Solidarity, and Power in the Progressive Era, is available from the University of Illinois Press. He tweets at @jacremes.
Charles Lee worked at a patent leather factory in the Blubber Hollow neighborhood of Salem, Massachusetts. It was unpleasant work in a rickety building. Workers like Lee dissolved flammable scrap celluloid film in flammable amyl acetate and alcohol, painted it on leather, added another layer of wood alcohol, and then steam heated it.
A hundred and two years ago today, on the afternoon of June 25, 1914, the inevitable came: a fire broke out. Charles Lee was the worker standing closest to the fire’s origin, and he broke both his legs jumping out of a window to escape the flames. Half an hour later, 300 workers had been forced to flee their factories. By evening, the fire had consumed 50 factories across the city, including, most devastatingly, Salem’s largest employer, a sheet factory called Pequot Mills. More than 18,000 people were left homeless or jobless.
Every disaster is a workplace disaster for someone. Sometimes, as for Charles Lee, the disaster is part of work. Other times, as for Pequot Mills employees, a disaster destroys opportunity for work. For others, including the 87 firefighters who died in the line of duty in 2015, it is disaster itself that is the worksite. Workers have long responded to workplace disasters by coming together with their coworkers and neighbors to think about–and fight over–the conditions of their labor.
Changes in illumination, heating, firefighting, and transportation technologies–together with organizing and government regulation–led to a gradual decline in the sort of fires that once regularly destroyed large swaths of cities. In 1918, a Canadian government researcher counted 290 urban conflagrations in the United States and Canada between 1815 and 1915, more than half of the global total. Salem’s was among the last.
But industrial risk was not vanquished. In the United States in 2014, the last year for which data were available, about 13 people a day were killed at work, whether in small accidents or big disasters. This risk–of lives lost, of bodies mangled, of property and livelihoods diminished–is never evenly distributed (as Erik, among others, has reminded us). Who bears the bodily risk of industrialism is a political choice we all make. Most of the time, workers die in ones or twos, invisible except to their families, coworkers, and friends. Disasters–like when 29 coal miners died at Upper Big Branch, West Virginia, in 2010, or when more than 1,100 garment workers died in the 2013 collapse of the Rana Plaza factory building in Dhaka, Bangladesh–are the times we see our choices and have an opportunity to correct them.
After the Salem fire, as after disasters today, people debated how to organize society and its risk in their neighborhoods and churches, in town meetings and voting booths. Six months after the fire, Salemites recalled their mayor in the first modern recall election in New England. Catholic laypeople argued with priests and the archbishop about how their parish should be rebuilt. Neighbors argued about whether a new building code, designed to make the city less flammable, was worth the cost.
Most of all, they fought for power in their workplaces. Pequot Mills was rebuilt and reopened a year and a half after the fire, in 1916. Soon, workers began to experiment with new ways of organizing and building power across skill, gender, and ethnicity. At a time when in most Massachusetts textile mills only the most skilled workers, mostly men, were welcomed into unions, workers at Pequot Mills organized a union that included women, unskilled workers, and French Canadians, whom many labor leaders at the time thought were unorganizable.
Workers at Pequot Mills fought for, and won, higher pay, but more importantly they wanted a say in how the factory would be run. They won seniority rights, a grievance system, and defined job categories and so limited management’s arbitrary ability to hire, fire, promote, and discipline workers. By the late 1920s, the union had taken charge of the company’s sales and marketing departments, and it controlled a joint labor-management committee that sought to increase productivity through scientific management. Workers’ willingness to sacrifice some material gains for control over how the factory would run got press attention as a national model.
It did not last. While at first union power meant democratic control of the workplace by workers, within a few years the business manager, not the workers themselves, controlled the process. “I didn’t bother to report,” he told visiting researchers, “because they are a bunch of ignorant Canucks and Polacks who wouldn’t understand anyway.”
After a few years of growing union autocracy, workers took the skills they had honed in the aftermath of the fire and rebelled against their own leaders. Led by women, who were especially hurt by the business manager, they struck in 1933 and again in 1935 to found a new, more democratic union. A generation after the fire, workers were still debating with each other, with management, and with their neighbors how to organize work.
In our own era of workplace disasters, we too can debate how labor should be organized. Disasters offer opportunities for solidarity in the workplace, in the community, and up and down the supply chain. They are times when the choices society makes about whose lives are more or less valuable become visible, and they are times we can make different choices.
One example was the prosecution of Massey Energy CEO Don Blankenship for his role in creating the Upper Big Branch disaster. (He was sentenced to a mere year in prison.) The aftermath of the Rana Plaza factory collapse was another. The horror of that disaster forced the North American companies that had subcontracted work to those factories to impose greater–though still inadequate–safety standards. More importantly, it spurred greater garment worker organizing, so that in Dhaka, as in Salem, workers can build power and set their own standards.
This post also encourages readers to donate to the Rosenberg Fund, which supports the children of targeted activists. You can read more about the Rosenberg Fund here.
This is the 182nd post in this series. Previous posts are archived here.
Good morning, newly sovereign Britain and welcome to your new leadership team! We had Blair-Brown, then Cameron-Osborne, and now we’ll enjoy Prime Minister Johnson and Chancellor of the Exchequer Rubble.
Unlike yesterday’s immediate reaction post, this one is written on 24 hours of sleep . . . uh, written 24 hours later on seven hours of sleep.
First, Scott L has it bang-on. While there are distinct similarities in the motivations of the Leavers and Trump supporters (as well as parts of Sanders’s support), the contexts of the two votes are different enough that the lessons to be drawn are background at best, and not worth basing a forecast upon. Of course, given my very recent track record of superiority in forecasting, this should probably make people worry. I did get the Cameron resignation right, but then, that was low-hanging fruit . . .
We now have some (patchy) data with which to assess the two variables I said we should be attentive to in the run-up to the referendum. Turnout we can’t assess yet; in terms of raw turnout, we have nothing to compare it with, so it may never be properly assessed. However, the received wisdom of political science and psephology–that undecideds break significantly for the status quo the closer we get to polling day (I estimated 3:1)–is not receiving support, and this might be one key to understanding how my 52:48 prediction went ass-backwards (as well as why NCP’s forecast of 53:47 was just a bit off).
Those data are from an Ashcroft poll (details here; a lot of interesting stuff to pore over) conducted immediately after the referendum, with an N of over 12,000. Those who decided a week out split 7%-6% for remain, those deciding a few days out 8%-7%, and those deciding on the day 10%-9%. While there’s a marginal advantage for remain among the late deciders, these data suggest it’s only about 53%, not the 2:1 or 3:1 break for the status quo we typically expect.
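For anyone who wants to check the arithmetic, here’s a minimal sketch in Python, assuming (as the post above implies) that the Ashcroft splits are shares of the whole electorate:

```python
# Ashcroft post-referendum poll: late deciders' remain/leave splits,
# quoted above as percentages of all voters.
late_deciders = {
    "decided a week out": (7, 6),      # (remain, leave)
    "decided a few days out": (8, 7),
    "decided on the day": (10, 9),
}

for label, (remain, leave) in late_deciders.items():
    print(f"{label}: {remain / (remain + leave):.1%} remain")

total_remain = sum(r for r, _ in late_deciders.values())
total_leave = sum(l for _, l in late_deciders.values())
share = total_remain / (total_remain + total_leave)
print(f"pooled late deciders: {share:.1%} remain")
# ~53.2% -- nowhere near the ~67-75% that a 2:1 or 3:1
# status-quo break would have implied.
```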
Demographics did work out as expected. The age gap (which we’ve known about for well over a month) is receiving a lot of attention in the media at present, as though everybody is surprised. Social class worked out as well: the higher up the socio-economic ladder one is, the greater the odds of voting remain. However, where pre-referendum models suggested social class C1 (the lower middle class) would narrowly support remain, they ultimately narrowly supported leave (and the professional classes, A and B, did not support remain at the rate initially thought). The age gap is striking.
The next two figures illustrate the support for each side in party-political terms, and how the parties’ own supporters voted. I have two observations here. First, it was Conservative voters who drove Brexit. 40% of leave support was Tory, 25% UKIP, and 21% Labour. Yes, a nice cross-party distribution, but there were over three Tory/UKIP supporters (in the parlance of Plymouth Labour, “Blukip,” as we’re now enjoying a Con-Kip coalition in this fair city) for every one Labour supporter voting leave. This also tests my off-the-cuff suggestion yesterday morning that greater than 35% of Labour supporters voted leave. Ashcroft estimates the figure at 37%. While the traditional Labour heartlands of the northwest and northeast went huge for leave, Labour supporters by and large did not. 58% of Conservatives did.
As the Labour Party itself is going through an uncertain period (charitably stated), one thing I’ve been hearing and reading does need to stop now: we didn’t “lose” more of our supporters than originally expected. Indeed, the 37% estimate is in line with expectations. Nor does it mean that all of the supporters we “lost” to leave were from the traditional working-class base. That figure must include a large number of Lexiters, as evidenced by the 25% of Green Party supporters who inexplicably voted leave.
And just who are those 4% of UKIP supporters who voted remain? Statistically, there had to be some, of course, but it’s still hilarious fun to point it out.
And now, the money shot. Why did the lies of the leave campaign resonate? Why was Gove sage to suggest that Britain has had enough of experts experting us to death with facts in their expert ways? Self-reported political attentiveness breaks as we would think.
It’s not as stark as I’d have thought, but this is self-reported. There’s a lot more there in the link above worth looking at, of course. Later today, but more likely tomorrow, I’ll have some further thoughts on the result of the result, Article 50, and speculate as to just what the hell Boris Johnson is up to.
Running a marathon. Why the hell would somebody do that to themselves?
Indeed a vast, disturbing literature has now accumulated on the ill effects of running marathons. Studies find that up to 1 in 12 participants end up seeking medical help during the race. (At hot-weather events, runners can end up “dropping like flies.”) As many as four-fifths report having gastrointestinal problems such as bloating, cramps, vomiting, diarrhea, and fecal incontinence while on the course. Some runners suffer from blood poisoning. Others must endure a blitz of dermatological conditions: sore nipples (affecting up to 1 in 6 on race day); chafing (another 1 in 6); blisters (1 in 3); and jogger’s toe (1 in 40). Given all the risks, it’s no wonder that some marathon organizers have asked doctors to embed as race participants so they can quickly tend to runners who collapse.
When researchers consider all the injuries that accrue during the period of training—and not just on the day of the marathon itself—they find even greater cause for alarm. One study looked at 255 participants in an extended, 32-week marathon training program and found that 90 of them—that’s 35 percent—experienced “overuse” injuries. (Among the most common training ailments are anterior knee pain, Achilles tendinitis, shin splints, and stress fractures.) Another research group surveyed 725 men who raced in the 2005 Rotterdam Marathon, and found that more than half of them had sustained a running injury over the course of the year. Among those who sustained a new injury during the month leading up to the race, one-quarter were still suffering, to some extent, three months later.
Deaths do occur during the marathon, but I’m glad to say they’re very, very rare. Most runners’ ailments will be temporary; then again, most runners won’t have any benefits to weigh against those modest costs. Even if they don’t ruin their knees, twist their ankles, or bang their toes while training, their weekly hobby won’t do much to help their health. Marathoners fail to lose weight, as a rule, and while aerobic exercise may be good for the heart, doing a huge amount of aerobic exercise brings at best diminishing returns.
The sport isn’t merely dangerous; it’s extravagant. It costs more than $250 just to enter the New York City Marathon and to have the chance to chafe your nipples alongside 50,000 other people. Meanwhile, humanity’s oldest form of exercise has spawned a multibillion-dollar industry in footwear. Even efforts to pare down the sport to fundamentals have been absorbed into this marketing, such that there now exists a set of high-priced products known, improbably enough, as “barefoot running shoes.”
I get the feeling that marathoners think of themselves as gritty, motivated types, who would rather train and get things done than sit around watching videos on Facebook. Indeed, they’ll often note the fact of their accomplishment (we might think of this as “showing off”) on social media. For them, the pursuit of running 26 miles may have less to do with any functional reward than merely having gone through training in the first place. It’s an exercise of will, not one of purpose; the marathoner views achievement as a virtue of its own—like climbing Everest because it’s there.
It’s telling that this monomania gets rewarded—every single time, with cheering crowds and Facebook likes—despite its lack of substance. (At least Everest has a view!) I guess the form itself excites us: We’re so starved for ways to show self-discipline, and to regiment our time, that any goal will do, even one so imbecilic as the marathon. This only calls attention to the wasted opportunity: If we want to celebrate the act of building up to something hard—if we’re ready to devote ourselves, for at least 100 hours, to regimented training—then we should strive for something better. Instead of spending all that time purely for the sake of having spent it, let’s pursue a goal that has some meaning in itself.
I can think of no form of “leisure” less appealing than running 26 miles. Except for those crazy supermarathon bastards running 100 miles or whatever while they figure out where to poop.
Opinions may differ.
Growth in median compensation may have slowed lately, and pay may even have fallen for some of the highest-paid chief executives. But this is little recompense for workers who have seen their wages stagnate or fall for decades.
Last year, the average chief executive of an S&P 500 company was paid 335 times more than the average nonsupervisory worker, according to the AFL-CIO’s useful interactive site, Executive Paywatch.
This stunning disparity has been the norm since the 1990s, but it wasn’t always this way. In 1965, the average CEO made 20 times the pay of the average worker; it was around 34-to-1 in 1980. By 1998, it was nearly 322-to-1.
What to do about it is fairly obvious. Tax the living heck out of them:
One measure would be returning to the progressive taxation system that operated from the 1940s until 1981, with a top marginal rate of, say, 70 percent as opposed to today’s 39.6 percent.
Another is to eliminate the stock-option loophole, which helps subsidize high compensation. (It allows companies to deduct the market value of the options they grant, even though they are not a real expense, thus lowering their taxes. This arguably encourages companies to grant even more options in big comp packages.) According to a report from Citizens for Tax Justice, 315 big companies have used this loophole to avoid $64.6 billion in taxes over the past five years.
Corporate tax rates could be set higher for firms with high CEO-to-worker pay differentials. Say-on-pay could be made mandatory rather than advisory. Public companies could be required to separate the chairman and chief-executive jobs. And unionization could be made easier, giving workers greater bargaining power.
Unionization certainly does need to be made easier, although I’m not sure that it would really make a difference unless unions start bargaining over peak executive pay. I’d love that, but it seems that tax, tax, tax is the answer, in a variety of ways. Time to reclaim that money for the public good.
Will hidden sexism hurt Hillary Clinton this fall? Quite possibly, although I’d like to see more than yet another story on Real American Voters, i.e., working-class white men from the Rust Belt, to suggest it may be so.
The great Ralph Stanley died last night at the age of 89. Stanley was the last major living figure of the early bluegrass era. He began recording with his brother Carter in 1947. They never had major financial success–really only Bill Monroe and Flatt & Scruggs did. They were a great band, pretty squarely within the emerging bluegrass tradition. But when Carter died in 1966, Ralph took his music back in time a bit. He always thought of himself as an old-time singer and banjo player, not a bluegrass musician. And that’s accurate. Bluegrass quickly developed into something pretty slick, with fancy instrumentals and a certain sense of virtuosity. Monroe developed the music by taking old-time and combining it with jazz, pop, and country music. While Stanley never completely rejected that, he emphasized the old-time Appalachian music much more. This led to some really outstanding music in the years after Carter’s death. I want to suggest a few starting points for Dr. Ralph’s (he received an honorary Ph.D. from Lincoln Memorial University in Harrogate, Tennessee) discography. His 1969 album, Hills of Home, is an outstanding entry point. While I don’t care for the subject matter, the 1972 gospel album, Cry from the Cross, is probably the best bluegrass gospel album ever recorded. During these years, he mentored a number of young Appalachian singers, including Roy Lee Centers, Keith Whitley, and Ricky Skaggs. The last two of course became stars after switching to country music, while Centers was pointlessly murdered. My collection of Stanley from these years is this 2-CD set, which includes live performances from all three. Really amazing stuff.
I saw Ralph Stanley perform twice. The first time was in about 1998 at the Tennessee Theater in Knoxville. By this point, he was singing with his son, Ralph II. His son doesn’t have a good bluegrass lead voice–good enough for country music, but not good enough for that style. So it wasn’t like seeing him in the 1970s, but it was a ton of fun nonetheless, especially in front of a crowd that cared deeply about that style of music. I saw him again in about 2002 in Albuquerque. By this time, his late-career revival thanks to O Brother, Where Art Thou? had kicked in. He played to a packed house, played “Man of Constant Sorrow” like 3 times, and during the set break shook every hand and sold every piece of merchandise he could. An old man by then, he was going to cash in while he could. And who could blame him, given his long struggle to be financially successful, even if it meant the set break was a full hour.
Rest in Peace, Ralph. You were a true giant of American music. A few sample cuts:
And since this is a political blog, let’s not forget his endorsement of Barack Obama in 2008.
I suspect we’re going to see a lot of this kind of analysis:
Britain’s stunning vote to leave the European Union suggests that we’ve been seriously underestimating Donald Trump’s ability to win the presidential election.
When you consider all his controversies and self-inflicted wounds over the past month, combined with how much he’s getting outspent on the airwaves in the battleground states, it is actually quite surprising that Trump and Hillary Clinton are so close in the polls. He’s holding his own, especially in the Rust Belt.
Does this make sense? Not really:
- I mean, on one level it’s scary that Trump is within 6 points. But still, 6 points, in presidential election terms, is getting your ass kicked. And there’s no reason to think he has much upside potential.
- It might be possible for a formidable campaign organization to overperform the polls. But Trump has the opposite of that. Clinton’s dominance of the airwaves and superior organization are going to make it harder for Trump to overcome a substantial deficit and to get his supporters out.
- The argument against these facts seems to be something like “nobody expected Brexit to win, nobody expected Trump to win, but Brexit won, and Trump has already won once, so Trump can win twice.” But this doesn’t really make any sense. Unlike with Brexit, Trump took a commanding lead in the polls early on in the primaries; skeptics (like me) were ignoring the polls. I don’t think there’s any reason to believe there’s a large reservoir of untapped support for Trump that polls aren’t picking up.
- One major comparative advantage for Brexit is that none of the prominent assholes on its side were actually on the ballot. People who would never dream of voting for Nigel Farage or Boris Johnson in a national election could vote Brexit. Implicitly voting against Cameron didn’t require voting for someone you hate as much or more. If the question on the ballot in November were “do you want Hillary Clinton to be president?” I would be pretty worried. But it’s not. If Trump is going to win, he’s going to need a plurality of voters to affirmatively vote for him, and he’s a very well-known and widely despised figure heading a nationally unpopular party while barely running a presidential campaign at all.
- The United States is a much bigger and more diverse country, which makes a big difference in how a campaign based around mobilizing white resentment will play out. How is Trump going to win Florida, barely a majority-white state? What’s his path to the Electoral College without it? (Hint: even if he can win Ohio and Pennsylvania, that’s not enough.)
- Brexit is helpful to Trump for one reason only: if it harms the American economy, it hurts the incumbent party. Will the effects on the American economy be enough to make a big difference? I doubt it, but that’s the only reason to worry about Brexit in terms of the American presidential election.
I was going to cover this ridiculous piece of crap, I swear, but beloved dumb jerk Roy Edroso beat me to it. Having just moved into a new house and being on entertain-the-child duty pretty much 24/7 makes blogging difficult, so I suppose I should thank him. But, dang, this kind of aimless stupidity is really in my wheelhouse.
Anyway, it’s about how too many boobies make men stop right in the middle of inventing their time machines so they can whack it. Now, you may be thinking “But can’t women invent things if the men are too busy choking the chicken?” To which I say “WHAT ARE YOU SOME KIND OF BOOBY-SHOWING FEMINIST?!! WOMEN. DON’T. INVENT. THINGS.” Geez.
I don’t want to fill up the cyberpages of LGM with sordid academic squabbles, but I also don’t want to let Steve Diamond quote me in a fraudulent way without making a record of it. Posted as a comment on Taxprof:
Stephen Diamond is a very dishonest man. Diamond does not link to the LGM post of mine that he quotes. This is not merely a matter of netiquette: he quotes me in a way that intentionally hides the fact that my major criticism of him has nothing to do with his quibble over the minor point I made in the part of the post he does quote. He is intentionally misquoting me, and in such an egregious way that his behavior is a form of academic fraud. (ETA: Warren Terra in comments suggests that the phrase “academic fraud” shouldn’t apply in this context–a blog post–even if Diamond is behaving in a way that would be academic fraud in a more formal setting. I’m of two minds about this.)
Here’s what I wrote:
Let’s go to the numbers. Diamond cites Bureau of Labor Statistics occupational employment stats for his claim that incomes for lawyers “have increased steadily for at least two decades.” That’s a very misleading statement, for two reasons, one relatively minor, and the other not minor at all. The relatively minor reason is that, adjusted for inflation, median salaries (a crucial term, as we’ll see shortly) for lawyers have been essentially flat since the mid-1990s, which is as far back as the BLS stats go: the median salary for lawyers has increased by less than 5%, from $110,000 to $115,000. **That’s approximately half the wage growth experienced by the average American worker over the past two decades — which, needless to say, have hardly comprised a banner era for American workers in general.**
Note that Diamond removes the bolded portion of the paragraph, for reasons that will soon be painfully evident.
Diamond’s complaint is that I compared growth in median lawyer salaries with growth in mean worker salaries. That is a fair point as far as it goes, but it doesn’t go very far: median salaries of all workers still increased more in percentage terms than median salaries of lawyers, and in any case this is all a distraction from my main initial point, which is that the earnings of salaried lawyers have been, as I said, essentially flat. (Diamond claims that a five percent cumulative growth rate in salaries over 17 years means salaried lawyers are staying “comfortably ahead” of inflation. Over this same time frame, Diamond’s employer increased sticker tuition for Diamond’s students by 60% in constant dollars. I wonder if Diamond’s students think that raising tuition 12 times faster than the growth rate in lawyer salaries constitutes a “comfortable” rate of growth for the cost of a Santa Clara law degree?).
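To make the back-of-the-envelope arithmetic in that parenthetical explicit, here’s a minimal sketch using only the figures from the paragraph above (the 17-year window and the 5% and 60% cumulative real growth rates):

```python
# Cumulative real (inflation-adjusted) growth over the ~17-year window.
lawyer_salary_growth = 0.05   # BLS median for salaried lawyers
tuition_growth = 0.60         # Santa Clara sticker tuition, constant dollars

# Tuition grew 12 times faster than lawyer salaries in cumulative terms.
print(tuition_growth / lawyer_salary_growth)   # 12.0

# Annualized, 5% over 17 years barely outpaces inflation at all.
annualized = (1 + lawyer_salary_growth) ** (1 / 17) - 1
print(f"{annualized:.2%} per year in real terms")   # ~0.29%
```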
But as I said in the original post, my initial point was a minor one, because Diamond’s claims about lawyer earnings are actually far more misleading. My main point, which Diamond hides from any readers he might have by distorting my text via elision, was this:
But, misleading as that part of Diamond’s statement is in context, that’s a minor point in comparison to another one, which is that the BLS wage statistics Diamond cites don’t include self-employed workers. How important is this omission when calculating the actual compensation of lawyers? (Let alone law school graduates, which is a very different category).
Consider that 75% of American lawyers are in private practice, and the large majority of those people are self-employed, either as individuals or in partnerships, meaning that they’re not salaried or hourly workers, and thus not included in the BLS wage stats. Diamond is aware of this, and thinks it means lawyers are making even more money than the BLS stats suggest:
Now, these numbers are “employed” lawyers so they do not include solo practitioners or partners who qualify as employers. But the first number is relatively small, approximately 4% on average of all practicing lawyers over that time period. And the second number is likely to skew income higher not lower, so excluding that number does not help the critics case that much. Arguably solos do less well financially (though we don’t know for sure based on the BLS data) so perhaps they cancel each other out.
Factor in higher paid partners and [it’s] likely they [lawyers] have stayed comfortably ahead of inflation.
Steve Diamond, a man who pontificates regularly on the economic status of lawyers, thinks that 4% of practicing lawyers are in solo practice. He produces this estimate by citing NALP data on the employment status of law graduates nine months after graduation. But many lawyers — perhaps most — graduated from law school more than nine months ago. How many of them are in solo practice? According to the ABA, the answer is roughly two out of every five, i.e., approximately ten times as many as the learned professor estimated. And what’s happened to their wages?
Fortunately, we don’t have to guess: the mean earnings (the median is certainly much lower) of solo practitioners have declined by 30% in real terms over the past 25 years, from $71,000 to $49,000 per year.
In other words, if we combine the BLS data on median lawyer salaries with tax data on the earnings of self-employed lawyers, we find that the median real compensation for lawyers – again, not law school graduates, but actual employed lawyers — is surely a good deal lower than it was a generation ago.
As is evident if one actually reads it, the main point of my post, as I emphasized at the time, was that, contrary to the assertions of Diamond and Michael Simkovic, the median earnings of lawyers (not just salaried lawyers) have decreased over the period covered by BLS data, because almost half of all lawyers in private practice are solos, and their income has decreased markedly over this period.
Looking at the actual post would also reveal to readers that Diamond’s analysis of lawyer income was so radically off the mark because, absurdly, he used the employment status of new law graduates to estimate how many lawyers are solos: the real figure is 825% higher than his estimate.
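A minimal sketch of the arithmetic here: the 4% estimate and the $71,000/$49,000 figures come from the post itself, while the 37% solo share is an assumption inferred from the 825% claim and the ABA’s “roughly two out of every five.”

```python
# Diamond's estimate vs. the ABA-derived reality (share of lawyers
# in solo practice). The 37% figure is an inferred assumption; the
# post itself says "roughly two out of every five."
diamond_estimate = 0.04
aba_share = 0.37

gap = (aba_share - diamond_estimate) / diamond_estimate
print(f"{gap:.0%} higher than the estimate")       # 825%
print(f"{aba_share / diamond_estimate:.1f}x")      # ~9x, i.e. the
# "approximately ten times as many" of the post above.

# Real mean earnings of solo practitioners over the past 25 years:
then, now = 71_000, 49_000
print(f"declined {(then - now) / then:.0%}")       # ~31%, the roughly
# 30% real decline cited above.
```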
Imagine if Donald Trump claimed that affirmative action was destroying the career prospects of white men in America, and cited the “fact” that in America today only 4% of 25-to-29-year-old white men have college degrees. If it were then pointed out to him that the real figure is 37%, would you expect him to give up his pretensions of being an expert on the subject and slink away quietly?
Of course not: what you would expect would be for Trump to then misquote his critics, while throwing rhetorical dust in the air and brazenly ignoring the fact that he had been exposed as someone who has no idea what he is talking about. But at least Santa Clara law students aren’t paying $75,000 per year for the privilege of having that particular lying blowhard spout ignorant nonsense at them.