
Student Evals and Sexism



As nearly every faculty member knows, student evaluations are a horrible way to measure teaching. That’s for many reasons. Students primarily evaluate teachers on the grade they expect to receive, the easiness of the class, whether the instructor is a white man or a person of color or a woman, how the instructor dresses, etc. Yet student evaluations are often the only way administrations want to measure teaching because a) they don’t want to put the resources into evaluating teaching and b) they want to have happy customers who return the next semester. But these evals can be tremendously damaging, especially to the boatloads of contingent faculty who increasingly teach college courses. On the connection between evaluations and sexism:

There’s mounting evidence suggesting that student evaluations of teaching are unreliable. But are these evaluations, commonly referred to as SET, so bad that they’re actually better at gauging students’ gender bias and grade expectations than they are at measuring teaching effectiveness? A new paper argues that’s the case, and that evaluations are biased against female instructors in particular in so many ways that adjusting them for that bias is impossible.

Moreover, the paper says, gender biases about instructors — which vary by discipline, student gender and other factors — affect how students rate even supposedly objective practices, such as how quickly assignments are graded. And these biases can be large enough to cause more effective instructors to get lower teaching ratings than instructors who prove less effective by other measures, according to the study based on analyses of data sets from one French and one U.S. institution.

“In two very different universities and in a broad range of course topics, SET measure students’ gender biases better than they measure the instructor’s teaching effectiveness,” the paper says. “Overall, SET disadvantage female instructors. There is no evidence that this is the exception rather than the rule.”

Accordingly, the “onus should be on universities that rely on SET for employment decisions to provide convincing affirmative evidence that such reliance does not have disparate impact on women, underrepresented minorities, or other protected groups,” the paper says. Absent such specific evidence, “SET should not be used for personnel decisions.”

Needless to say, university administrations will at best pay lip service to this problem.

  • J. Otto Pohl

    I agree they are a horrible way to measure teaching ability. It seems to me that if there are going to be any meaningful teaching evaluations at universities, then they need to be done by some sort of peer mechanism. From the point of view of providing useful information for improvement, it is in the interest of faculty to have peer evaluations rather than student ones. Currently our student evaluations are done online and are completely voluntary. This means that out of every 100 students maybe 3 fill out an evaluation form, so lots of lecturers go years without any evaluations. But, given their very limited utility, that probably doesn’t matter much. We are fixing to move to some sort of peer review of teaching in the future, but it is still rather vague right now.

    • DAS

      The problem is that peer evaluations can also be quite biased and can be used as a tool by those in positions of power to create a hostile work environment. For example, in my department (the offending faculty have fortunately since retired) there were two junior faculty members whom some of the senior faculty really didn’t like (the junior faculty members in question did in fact have, shall we say, “issues,” but the senior faculty members were curmudgeons who could only relate to people younger than they were as students and not as colleagues … especially if said younger people were women). One of the ways in which the junior faculty were harassed was the use of near-constant peer evaluations. In general, I think most of what can be said about what is wrong with students evaluating teachers can also apply to what happens in peer teaching evaluations.

      In general (and this overlaps with Nick never Nick’s comments below), I tend to think that there is no good method of teaching evaluation. If there were a good method of teaching evaluation, then, as my university is a “teaching-oriented” institution, both our university’s administration and our union would absolutely embrace such a method, and the key factor in getting a promotion would be performance on those teaching evaluations (*). Thus, since even at my “teaching-oriented” institution the key factor in getting a promotion is research/scholarly achievement (why? because it’s easy to evaluate “objectively”: if you get peer-reviewed papers published or are invited to give talks at national conferences or are writing books or doing whatever constitutes recognized scholarly work in your field, you must be doing well in research/scholarship!), by syllogism it must be the case that there is no good way of doing teaching evaluations.

      That is why in the debates over K-12 education, when people talk about “merit pay” and “evaluating teacher performance,” my first reaction is “that’s all fine and good, and I’d love to reward good teachers, but there is no ‘objective’ system of identifying such teachers … after all, if they had such a system, we’d be using it here at 2nd Tier State U as the primary factor in determining who gets promoted.”

      * The converse of this statement is not true: I wouldn’t put it past my university’s administration to adopt some trendy new form of teaching evaluation, or the union to push the administration to adopt one, even if that method of teaching evaluation is obviously horrid and useless. However, the converse needn’t be true for my syllogism to apply.

      • J. Otto Pohl

        Certainly there can be bad peer evaluation of teaching, just like there is bad peer review of written scholarship. But I think it is possible that peer evaluations could be done in a beneficial manner. I am not exactly sure what it would look like right now. However, just as there is beneficial constructive criticism from peer reviewers of manuscripts, I think there can be similar benefits regarding teaching. Certainly, it has more potential than student evaluations.

        • DAS

          I’ve had a lot of criticism of my work that is less than constructive, but the trick is to ask yourself “if that critique were a constructive critique, what would that constructive version of the critique be, and how would I respond to it?” Of course, in peer review, a lot of it comes down to having a good editor whose comments usually are of great assistance in figuring out how to translate peer comments into constructive criticism and hence how to respond to the comments. I don’t know who could play that role in terms of teaching evaluations: a department chair? An administrator? A committee from your union composed of people who know nothing of your field and what constitutes effective teaching in your field, and who may be jealous of your department besides? And are those the people whom you want to play the role an editor plays in the peer review process?

  • Nick never Nick

    Teaching can be good, or bad, in so many ways that measuring it is, to me, kind of absurd. All of the following represent something that could be termed good teaching, but could also be considered bad teaching, depending on the situation:

    1) a teacher who succeeds in teaching the basic information to a broad section of the class and has few people who fail to learn it (but doesn’t make students interested in the material or go beyond the prescribed outlines to give a deeper and more meaningful understanding).

    2) a teacher who is brilliant at teaching the few people in the class who are most likely to go forward in the subject (but doesn’t do such a great job of reaching those who are just there for some requirement or other).

    3) a teacher whose students largely fail, for reasons that can’t necessarily be pinned on the instructor (but whose students leave the class with a sense of enjoyment, interest in the subject, and a willingness to try again).

    Education, and especially university education, is a medieval institution that predates the modern state, and absolutely predates the modern trend towards assessment and accountability. It formed as a personal interaction between a master and a student, and now it’s being fit into the measurability matrices that accompany standardization. How do you standardize such a thing? Just as a minister might give excellent sermons but suck at pastoral services, there are many ways of being a good or a bad teacher. Rating people on teaching seems to assume that different ‘good’ attributes co-exist.

    Obviously, I’m pretty ignorant of teacher evaluation theory. I think that something more important than a standardized approach to evaluations is a clear institutional understanding of what a class should accomplish, how it can be accomplished, and what role is expected of the teacher in accomplishing it.

    • ThrottleJockey

      Good examples. I was taught by my college that professor evaluations are important. My classmates and I took them seriously. The last day of the semester was dedicated to filling them out, in class, with just a proctor there to collect them afterward.

      There’s one that I remember in particular. We had a new professor come in my senior year to teach our capstone micro-Econ class. It had been a rave class before–everyone wanted to take it, the teaching was fabulous, the discussion stimulating, the insight timeless. The new professor, though, was straight out of a post-doc. He had done his doc at U of Chicago and liked to brag about Gary Becker being his adviser. He spent the entire semester doing a deep-fucking-dive on Utility Theory. It was all mathematical proofs all the time. It wasn’t so bad for me with my physics and calc background, but good Lord did a lot of the kids without STEM training suffer. And for what? So we could write some elegant proofs for a theoretical world which existed solely in Gary Becker’s mind? Really??

      I took 30 minutes writing that teaching evaluation. I reamed his ass. To be fair, I had given him this feedback directly, much earlier in the semester. To this day I’m pissed they let him teach that class.

    • Lamont Cranston

      I should probably look this up, rather than just assuming it, but… isn’t there a whole field of pedagogy? We surely have some idea of how to objectively evaluate teaching.

      I agree with every point you made, but I would think they could be plotted on graphs – this teacher engaged X percentage of students; Y percentage of students mastered the material; and Z percentage obtained perfunctory understanding (obviously, there could be analysis of how these variables interacted with one another).
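
      For instance, a toy sketch of those metrics in Python, where the roster data, the attendance proxy for “engaged,” and the 85-point “mastered” cutoff are all invented purely for illustration:

        # Hypothetical roster: (attended_regularly, final_score) per student.
        students = [
            (True, 92), (True, 78), (False, 55), (True, 88),
            (False, 61), (True, 70), (True, 45), (False, 83),
        ]

        n = len(students)
        # X: share "engaged" (proxied here by regular attendance)
        engaged = sum(1 for attended, _ in students if attended) / n
        # Y: share who "mastered" the material (proxied by scoring 85+)
        mastered = sum(1 for _, score in students if score >= 85) / n
        # Z: share with a perfunctory grasp (scores in the 60-84 band)
        perfunctory = sum(1 for _, score in students if 60 <= score < 85) / n

        print(f"engaged: {engaged:.0%}, mastered: {mastered:.0%}, "
              f"perfunctory: {perfunctory:.0%}")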

      Surely some professionals study this kind of thing, right?

  • muntz

    It doesn’t surprise me that there are biases in these evaluations. I believe there is also a bias that rewards professors who “dumb down” their courses. If you teach “less” and make the course easier, students feel like they have a better mastery of the material and are more likely to give the professor a better evaluation.

    However, I paid a hell of a lot of money for graduate school and I expected to receive value for that money. I was the customer and expected value in exchange for my time, effort, money and opportunity costs. Some of my professors were fantastic, a few were sub-par and a couple complete jokes. How do we reward the great teachers and get rid of those who don’t belong in a classroom?

  • ribber

    Anecdote: I went to a very science-oriented school. All the undergrads were very adept at math because the admission standards were the same for aspiring physicists and for the poetry majors. Grad students, on the other hand, were specialized for their field. I had a professor who taught a very math-heavy portion of our education, and I took two undergrad courses from him; due to my timing I needed one more high-level course to graduate, so I enrolled in the grad course in that sector of the program and had him again, the only undergrad in that course. This professor got rave reviews from the undergrads and got torched by the grad students. Why? Because the undergrads could do the math, and the grad students could not. The grad class was easier, but he was getting raked over the coals because, though it was supposed to be advanced, people were failing it because they couldn’t do the math like the nerdy undergrads could. That is when I learned that course evaluations are entirely about what grade the students are getting.

    • Philip

      This is one of the reasons (as a student) I’m glad I went to a very small (800 undergrads, no grad students) school. I knew every professor in my major personally, and probably more than half the professors in other departments too. I usually knew which classes would be good and which wouldn’t, and if I didn’t I could ask a friend what a professor was like, and being friends with them have a pretty good sense of the biases in what they’d say.

      (And fairly earned grades wouldn’t factor too heavily, since we all got used to heavy grade deflation after the first year or so)

      And on the faculty side, it was small enough that it always seemed to me the professors all had a very good sense of what other professors’ classes were like, often even cross department, because again, tiny school.

    • ThrottleJockey

      Maybe a lot depends on where you go to school–or if it’s graduate work. As an undergrad, tough vs. easy almost never entered into the discussion. In fact, if you let on that you were interested in taking easy classes, you were immediately reamed by other students for being dumb. Admittedly my college was nerdy, so perhaps it’s a poor case to generalize from.

      That being said, the school’s response to poor teaching evals is to figure out if the professor is being too hard, and if so why. If the school has done a poor job readying students for a ridiculously difficult 301 course, then perhaps the 301 professor needs to teach down, or restrict entry to the class to just the top 1%.

      On the other hand, if a professor is only as hard as the other professors, then blame lies with the students. Teaching evaluations should be only one metric in evaluating a professor’s performance–like “360 Reviews” in corporate America.

  • janitor_of_lunacy

    There is also a pretty strong correlation regarding electives. From what I remember (when I was last in academia, fifteen years ago), on a scale of 1-5, mandatory courses within a major scored a half point lower than electives, but a half point higher than service courses that were mandatory for students in other majors. So teaching Excel and dBase to business majors scored a whole point worse than teaching AI to Comp. Sci. majors.

    • DAS

      That’s my understanding too. But such strong correlations are easily accounted for: you just treat a median evaluation of 3 in a mandatory service course as equivalent to a median evaluation of 3.5 in a mandatory majors’ level course and to a median evaluation of 4 in an elective course.
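
      A minimal sketch of that kind of adjustment, using the equivalences above as the offsets (the course-type labels and the function are hypothetical, not any real system’s):

        # Offsets from the example above: a 3 in a mandatory service course
        # is treated like a 3.5 in a majors' course or a 4 in an elective.
        COURSE_TYPE_OFFSET = {
            "service": 1.0,    # mandatory for students in other majors
            "mandatory": 0.5,  # mandatory within the major
            "elective": 0.0,   # baseline
        }

        def adjusted_median(median_eval, course_type):
            """Normalize a median evaluation (1-5 scale) by course type."""
            return median_eval + COURSE_TYPE_OFFSET[course_type]

        # All three of the "equivalent" medians come out the same:
        assert (adjusted_median(3.0, "service")
                == adjusted_median(3.5, "mandatory")
                == adjusted_median(4.0, "elective"))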

  • CrunchyFrog

    When I attended grad school in the late 1990s (MBA), the one time I consulted teacher evaluations before a course I looked at the comments instead of the raw numbers. I knew, for example, that in one class I attended, the professor had everyone in the group rate the others in our group on a confidential card, and that one of the people in our group was given a universally low rating for not doing anything. That person naturally gave the professor the lowest rating. So having the occasional really bad numerical rating may not be a bad thing, but the written comments at least gave you some insight.

    In particular, I was trying to avoid a repeat of a situation in which a professor had a grading system whereby if your first draft project – which was positioned as “no big deal” – didn’t get at least a B+ you had no chance of getting an A in the course. Sure enough, several people had made comments for him along those lines (but he never changed the grading system).

    Similarly, when evaluating managers who’ve worked for me I’ve cared much less about raw numbers and much more about specifics that would be included in comments – anonymous or otherwise. It’s not a science by any means, but if applied properly it gets better results.

  • Crusty

    While it might happen subconsciously, I felt like I never evaluated a class based on the grade I expected to receive, and I didn’t penalize hard professors. Perhaps that was in part because it always seemed like something between a B and an A- was a foregone conclusion if I did the required work.

    I felt it unfair that professors were penalized if lecturing and being funny weren’t their strengths. I tried to go by whether a class was a rewarding experience, which might sometimes have occurred through the professor’s careful planning and selection of reading materials, even if the act of attending class didn’t feel like watching my favorite TV show.

    • Nick never Nick

      But what, really, is the evaluation based on?

      The class was boring as hell. Is it the fault of the professor? Many people will think so — and yet, some classes are naturally as boring as hell, and are still useful. “Nuclear applications to archeology” was one such class where I studied. Anthropologists hated it, no one else took it, but what it taught was very useful.

      My grade. This is a combination of the professor’s decisions, the student’s decisions, and a certain amount of luck. Suppose the student blows the 80% final exam — is it the professor’s fault for having an 80% final exam, or the student’s fault for blowing it? Some percentage of the class will like an 80% final exam, some will hate it; the professor is guaranteed some bad evaluations. Suppose the ones who like the 80% final exam don’t really notice it, whereas the ones who hate it do, and are vocal about it?

      The course mechanics. Do we really notice what we like? Or do we know if the things that we don’t like could have been worse? The textbook sucked and it was expensive. Do we know if the prof was ordered to use it, or if they used it because it had been used before and there were a few used copies available (which I didn’t get), or if the other textbooks both suck and cost more?

      If I were going to pay attention to evaluations, they would not have a rating system — rating systems are bullshit. Instead, they would have a few open-ended questions: what sucked about the course, what did you like about the course, etc. These would be compiled into a list, and the professor would answer them to some sort of peer-review panel. That way if 30% of students point out that the 80% final exam was crap, the prof could answer that 70% of students were OK with it. If you have a rating, every student — who feels (not unjustly) that this is their opportunity to be a critic — finds things to criticize and lowers the ‘rating’, but they all find different things.

      • notahack

        “Nuclear applications to archeology”

        I thought that this was a joke until Google reminded me that radioactive decay is a thing.

        • DrDick

          Also isotopic analysis.

        • ajay

          Sometimes you get bored of just digging holes very slowly.

      • Thirtyish

        The class was boring as hell. Is it the fault of the professor? Many people will think so — and yet, some classes are naturally as boring as hell, and are still useful. “Nuclear applications to archeology” was one such class where I studied. Anthropologists hated it, no one else took it, but what it taught was very useful.

        I am about to start a class in my graduate studies for the spring semester that I *know* is going to be boring as hell. The nature of the topic virtually guarantees that it will be (for me). In some ways I feel for the department professors/adjuncts who will be teaching this course, because they probably realize that most students dread the material and only take the course to complete the program. It can’t be terrific to be known as one of the instructors of “that class.” But then, I’m sure at least some of them came to teach the course out of genuine passion for the subject, in which case I say it’s great they’ve found their bliss.

  • junker

    My two favorite eval stories are from friends. I have a friend from India, and the first time he taught a lab as a grad student, at the start of the first class he apologized for his accent and said that if anyone needed him to repeat himself he would be happy to clarify. At the end of the semester he got several complaints from students about how hard it was to understand him. The following semester he omitted the apology, and not one person complained.

    Another friend wore sandals to class, once, and at the end of the semester got several complaints about his lack of professionalism for wearing sandals.

    My eval stories are mainly of the contradictory type, like getting “he moves too fast” and “he doesn’t move fast enough” in the same set of evals.

    • TribalistMeathead

      I have a friend from India, and the first time he taught a lab as a grad student, at the start of the first class he apologized for his accent and said that if anyone needed him to repeat himself he would be happy to clarify. At the end of the semester he got several complaints from students about how hard it was to understand him.

      I took a 15-year break from academia, but I spent those 15 years in corporate America, so I’m going to go out on a limb here and guess that none of the students who complained about how hard it was to understand him asked him to repeat anything he said during that semester.

    • NonyNony

      My eval stories are mainly of the contradictory type, like getting “he moves too fast” and “he doesn’t move fast enough” in the same set of evals.

      These drive me crazy.

      I’ve started giving the students some examples of feedback that are helpful, and examples of feedback that are not helpful. It pains me to have to spend minutes of class time on this, but it’s the only way I’ve been able to get effective feedback. Thankfully at least it’s stopped the ubiquitous “I hated this class”/“This was my favorite class” non-comments that I used to get. Now at least it’s “I hated this class because” or “This was my favorite class because,” which can be somewhat helpful.

    • Lost Left Coaster

      That’s what my evals are like — too fast, not fast enough — but I actually had more than one student write that they wished I talked more about my own research.

      Careful what you wish for…

  • Matt_L

    Gah. It’s nice to see someone quantitatively prove the gender discrimination inherent in student teaching evals. It’s something I have always suspected and only had anecdotal evidence for. But damn, this is going to be difficult to root out and fight against.

    The problem is that student teaching evaluations can help an individual instructor to improve the class and their teaching methods. Generally speaking students know when they have learned something and when they have not. If you construct a good survey you can find some problems with the course or your teaching. But I am in a position where I get to write my own surveys and choose when to administer them.

    It is invidious to use these Standardized Teaching Evals for rating someone’s teaching or to compare someone with their peers. The questions on the standardized forms are so generic as to be useless. The comparative aspect practically invites sexism and racism.

    • alex284

      “quantitatively prove ”

      I wouldn’t go that far. This is one of the situations where “the results do not support the null hypothesis” is distinct from “prove.”

      The linked article discusses two studies. The first is more relevant – it looks at thousands of evals and finds gender bias (among male students), but it just compared evals with the gender of the prof and tried to control for the few things it could control for. There are still a lot of things that it couldn’t control for (whether students choose a class based on the prof’s gender, whether students think they’ll get a better grade from a male prof for whatever reason, whether there’s gender bias in how classes get assigned to profs that accounts for some of the discrepancy, etc.). Still, at least it’s a large dataset consistent with the idea that there is gender bias in student evals.

      The other study, though, is more interesting but less useful. They looked at four sections of an online class, two taught by a man and two taught by a woman. Each instructor used the male name once and the female name once. Female students rated the male name better (no matter the actual instructor), but male students rated both the same.

      Interesting and it controls for a lot of things, but it’s still just 4 sections, presumably at the same university and in the same subject at the same level.

      The idea that evals are gender-biased is probably true in my book (although I find the idea that only female students are responsible for the bias pretty far-fetched), but these studies are still a long way from definitively proving it.
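
      To make the null-hypothesis point concrete, here is a toy permutation test on invented eval averages; this is not the paper’s actual methodology, just the general shape of the inference:

        import random

        # Invented per-instructor mean eval scores (1-5 scale).
        male_prof_scores = [4.1, 3.9, 4.3, 4.0, 4.2]
        female_prof_scores = [3.7, 3.8, 3.6, 4.0, 3.5]

        def mean(xs):
            return sum(xs) / len(xs)

        observed_gap = mean(male_prof_scores) - mean(female_prof_scores)

        # Under the null hypothesis (instructor gender doesn't matter),
        # shuffling the gender labels should often produce gaps this large.
        pooled = male_prof_scores + female_prof_scores
        n_male = len(male_prof_scores)
        trials, extreme = 10_000, 0
        for _ in range(trials):
            random.shuffle(pooled)
            if mean(pooled[:n_male]) - mean(pooled[n_male:]) >= observed_gap:
                extreme += 1

        # A small p-value means "inconsistent with the null," which is weaker
        # than proof: confounders like course assignment aren't controlled.
        print(f"gap = {observed_gap:.2f}, p ~ {extreme / trials:.3f}")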

  • xq

    They claim that teacher evals don’t measure effectiveness because final exam scores don’t correlate with teacher eval scores. But if the students disagree with the final exam on teacher effectiveness, I’m not sure I trust the final exam more.

    • TribalistMeathead

      wrong place

    • alex284

      Yeah, I thought the same thing when I read the article.

      Especially since the data came from France, where profs have much more leeway in terms of grading than in the US. Maybe the male profs were just harder graders and not teaching less information, who knows. Or maybe they were teaching different sections, different levels, or whatever.

      But claiming that students learned less from male profs is a bit of a stretch if the final is the only way of evaluating how much they learned.

      Also too, are we now saying that the only measure of teacher effectiveness is how well students do on a test? Ok.

  • Thom

    One of my colleagues, who teaches East Asian history, had an evaluation that said “I wanted to learn about China, but not this much.”

    • notahack

      Hah!

    • DrDick

      Back in the days when I taught Native American Studies, I used to get a number of comments on the evaluations from students in my intro class (which filled a gen ed requirement) that I was a racist who hated white people. It came as rather a shock to this predominantly German white boy.

      • Jackov

        I hope you saved and framed the most hyperbolic example.

  • TribalistMeathead

    Yet student evaluations are often the only way administrations want to measure teaching because a) they don’t want to put the resources into evaluating teaching and b) they want to have happy customers who return the next semester.

    Given that the head of the MBA program at my school saw fit to give the Intro to Finance section taught on Saturday mornings (to a cohort that consists entirely of students who also work full time) to a professor whose policy is to base the grade for the course entirely on the results of three exams, who does not allow ANY excused absences from one of the exams, and who is so proud of himself for being an asshole that he includes excerpts from poor student evaluations in the syllabus for the course, I don’t really think b) is universal.

    • alex284

      It’s a standard trope among a certain set: university professors are awesome, students are whiny and need to shut up and listen, and administrators hate profs and give students everything they want because money.

      I’ve had plenty of terrible profs who the university just plain didn’t care about. I don’t think I put much effort into student evals because of that – obviously no one cared.

      My high school ran a much tighter ship when it came to teacher effectiveness than any university I attended. I don’t remember a single high school teacher showing up and talking about the pie he ate last night or some such nonsense for the entire class (and for every lecture for the whole semester), but somehow university profs can get away with that.

      Of course this is entirely different for adjunct lecturers and profs who just got hired, and varies with the institution.

      • TribalistMeathead

        Last semester I had one tenured prof and one adjunct, and it was very easy to tell which was which based on the amount of effort they put into teaching the class.

  • Joseph Slater

    I don’t doubt that student evaluations can be infected with sex, race, and other forms of bias, and I strongly believe that senior colleagues should be on the lookout for that. Having said that, there are only a limited number of ways teaching can be evaluated — the standard other way, in university education, is peer evaluation. Peer evaluations are important, but they can also be flawed, not only (as was pointed out above) by sex, race, and other forms of bias, but also (most obviously) by the fact that peers won’t be able to observe very many classes.

    So, Erik (or anyone else interested in answering), even acknowledging the sexism problem is real, two questions. First, are you aware of many examples of a prof getting, on a consistent basis, really good peer evaluations but poor student evaluations? Second, while some student evaluation comments are eye-rollingly dumb in all sorts of ways, and lots of good teachers have an “off” course every so often where something doesn’t click, if a prof is consistently getting really bad student evaluations, is it likely to be true that this person is in fact a good teacher?

  • Murietta

    I’m a woman adjunct, and the comments on my evals with numbing regularity describe me as “nice,” as though that had anything to do with anything. If I were not “nice,” would that matter? My male colleagues tell me, anecdotally, that they don’t get the nice comment. I also get comments on my clothing style, my shoes, my laugh, and my voice. Do men? Does anyone carefully discount those evals? Or control for them? I think the question answers itself.

    It is impossible for me to separate this nonsense from the numerical ratings I get, particularly the one for “How reflective was this course of current developments in its field?” As if a class of freshmen (it’s a course for first-years) would know!! Why even ask them that? The course, by the way, is on a very hot topic in my field, and almost all of the readings are from the last few years — though it is a humanities field where this is not critical in the way it might be in the sciences.

    I tell them about the recentness of the material, of course, but my theory is that since they are not in a position to assess that themselves, their answer falls under the category of their perception of me more broadly. Which, of course, is gender-driven.

    There is no winning at that. You don’t have to get bad evals to not have your course renewed. You just have to get less-good ones.

    • DrDick

      My female colleagues definitely get criticized for things my male colleagues do not, as well as often getting less respect from students.

    • Linnaeus

      I also get comments on my clothing style, my shoes, my laugh, and my voice. Do men?

      I have received some comments in that vein, but not nearly as much as my female colleagues did.

    • Warren Terra

      I also get comments on my clothing style, my shoes, my laugh, and my voice.

      I think it would be useful if the numerical scores from the people who think your shoes are any of their business were weighted as coming from less valuable responders.
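
      A toy version of that weighting, where the flags, the discount factor, and the scores are all invented for illustration:

        # (score, comment_was_off_topic) pairs from hypothetical respondents.
        ratings = [(2, True), (4, False), (5, False), (1, True)]

        def discounted_mean(ratings, discount=0.25):
            """Average scores, counting flagged respondents at a discount."""
            weights = [discount if flagged else 1.0 for _, flagged in ratings]
            total = sum(w * score for w, (score, _) in zip(weights, ratings))
            return total / sum(weights)

        print(f"{discounted_mean(ratings):.2f}")  # 3.90, vs. a plain mean of 3.00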

  • sahd

    I’m not an expert on pedagogical methods, but my understanding is that the one clear finding of the research is that -many- different teaching styles can be effective (or ineffective). Each year a new crop of Education savants touts the latest trendy methodology. Some of them work for some teachers, some don’t. In the end, no one has ever come up with a formula for what makes a successful teacher, or even a catalog of methods that inevitably improve the effectiveness of teachers.

    I second the comments above about student evals. When I get mine, I look for broad trends of substantive complaints or praise and sometimes change some things accordingly. But it’s common to get comments that are diametrically opposed–several folks really liked X and thought it was effective, several others hated X and insist it detracted from the class. I specifically ask my students to be as concrete as they can in their evaluations, which seems to have helped a little re: getting useful feedback.

    In my experience, peer reviews are little better. Every formal peer review I have been subjected to consisted of an older colleague observing part of one or two of my classes and then praising the things I do that are like the way they teach, while complaining about things I do that aren’t their style. (Asking a colleague informally to observe a class and give feedback is another story. And in that case, no one has to complete a standardized form that goes into my personnel jacket.) This problem goes back to my first observation: as I understand it, the data show that “effective teachers” (whatever that means) come in many forms. Some may stand at the lectern and mumble, others may prance about telling jokes and using multimedia presentations. Both styles can work.

    (And in case anyone wants to talk about “learning styles”… Do a little research. You will quickly find that despite decades of trying, no one has ever been able to demonstrate that such things exist or that they matter to educational outcomes. And yet every faculty I have been part of persists in talking about them, even though they are one small rung above phrenology so far as evidentiary support is concerned.)

  • DAS

    If I ever receive anything like the following as a comment on a student evaluation, I will know the whole process is worthwhile:

    I wish Prof. DAS would stop pestering us with his left-wing, ideologically biased exploration of the so-called scientific subject of “biochemistry”. All his talk of “macromolecules”, “catabolism”, “anabolism”, the so-called “laws of thermodynamics”, “linked equilibria” and “reciprocal regulation” merely indicates the quasi-Marxist agenda Prof. DAS has regarding science. Shouldn’t a science course involve more explosions and less of the left-wing hermeneutics pinko professors like Dr. DAS call “mathematics”?
