Via Kay Steiger, an excellent Chronicle of Higher Education article about U.S. News and World Report’s ludicrously arbitrary university ranking system. The rankings, built from formulae with little internal logic, are worse than useless: first, because the apparent certainty of quantification gives them an authority they don’t remotely merit; and more importantly, because their arbitrariness makes them easy to game, prompting universities to shift priorities toward improving rankings that are educationally meaningless but widely believed to be meaningful. The two pathologies work together in distorting the educational missions of institutions:
In other words, you have to act like Baylor. One of the first steps the university took, after appointing Van Gray, associate vice president for strategic planning and improvement, to oversee the efforts of all departments, was to tie money for new programs to the standards set forth in its strategic plan. Any official who wanted money beyond his or her budget for a new project had to fill out a form stating how that project would further the goals of Baylor 2012.
At Baylor, as at many other institutions, the admissions office plays a crucial role in improving the rankings because 15 percent of U.S. News’s formula is determined by measures of student selectivity, including scores on standardized entrance exams and the institution’s acceptance rate. To improve those numbers, Baylor increased its total scholarship offerings from $38-million in 2001 to $86-million in 2005 and created an honors college. Since 2002 applications have increased (from 7,431 to 26,421) and the acceptance rate has dropped from 81 percent to 42 percent. Over the last five years, the average SAT score of enrolling first-year students has risen 30 points, to 1219.
“We looked very deliberately at what kind of class we wanted because that’s an issue that’s somewhat controllable,” says Mr. Gray. “I believe we have attracted much higher-performing students as the direct result of this 10-year plan.”
While Baylor says the changes it is making are within the overall mission of the institution, colleges that are ranked lower and want to rise may need to change their very nature.
Take, for example, Chapman University.
Chapman, in the heart of Orange County, Calif., has long been known as a college that gave a second chance to underachieving high-school students who showed promise. When James L. Doti became president, in 1991, he says, Chapman essentially had no admissions criteria, other than the best judgment of the staff.
Students were “using Chapman like a community college,” he says. Only 42 percent of students graduated within five years. The university had one endowed chair. There was almost no money for merit-based financial aid.
So Mr. Doti dropped the athletics program from Division II to Division III, thereby eliminating all athletics scholarships.
“We took that $2-million a year in athletic aid and added it to the financial-aid budget,” he says. The institution increased its tuition one year by 25 percent, so parents and students would perceive that the college had as good a program as “the colleges we wanted to compete with.”
Mr. Doti decided to set a minimum SAT score required for admission. “It was 740, which is nothing great, but for Chapman, at least it was something,” he says. “The next year, it was 760. That lops off a lot of people at the bottom. Every year we went up another 10 or 20 points.” The university began a scholars program with grants for high-achieving students.
Almost all the changes were designed expressly to help the college rise in the U.S. News rankings. “I can quibble with the methodology, but what else is out there?” says Mr. Doti. “We probably use it more than anything else to give us objective data to see if we are making progress on our strategic goals.”
The liberal arts colleges that refuse to participate have the right idea.