The Game's Method
Warning: Due to changes in the available data, the first 6 Examples under the examples tab no longer work properly.
The general approach
All scores on one criterion are divided by the standard deviation of those scores. At that point, all criteria have equal weight. The criteria are ready for your weightings. The scores are multiplied by the weights you supply and then are added up for a total score.
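The calculation just described can be sketched in a few lines of Python. The school names, raw scores, and weights below are made up for illustration; this is not the game's actual code, and whether the game uses the population or sample standard deviation is not stated (the sketch uses the population version).

```python
import statistics

def rank_scores(raw_scores, weights):
    """raw_scores: {school: [score per criterion]}; weights: [weight per criterion]."""
    n_criteria = len(weights)
    # Standard deviation of each criterion across all schools.
    sds = [
        statistics.pstdev(s[i] for s in raw_scores.values())
        for i in range(n_criteria)
    ]
    # Divide each score by its criterion's SD (equalizing the criteria),
    # multiply by the supplied weight, and sum for a total score.
    return {
        school: sum(w * s[i] / sd for i, (w, sd) in enumerate(zip(weights, sds)))
        for school, s in raw_scores.items()
    }

# Hypothetical data: two criteria (say, LSAT and S/F-like scores).
scores = {"A": [160, 20], "B": [150, 10], "C": [155, 15]}
totals = rank_scores(scores, weights=[1.0, 0.5])
```

Note that dividing by the standard deviation equalizes the spread of the criteria without centering them; that matches the description above, which mentions no subtraction of means.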
Specific criteria and their components
Many of the data in this game are entered after viewing a book of statistical information published each year titled *** LSAC Official Guide to ***-Approved Law Schools, which is produced by the Law School Admission Council (LSAC) and a section of the ***. For some reason, the ***, a well-known association of members of American bars, does not want its name used on this website, so the source will be cited as "*** LSAC". The reader should note, however, that data for past years might have come from a publication that was produced by the *** without the help of the LSAC. The reader should also keep in mind that the data were not supplied to us by the *** or the LSAC. They published their books and we copied some of their data. Needless to say, the *** and the LSAC are not responsible for our errors in copying. We did ask the *** to supply the data in electronic form to minimize the chance of copying errors, but the *** did not respond to that request.
- tuition These data are taken from the American Bar Association book of statistical information on "Approved Law Schools" (*** LSAC). The figures are at least a year out of date by now. Still, they give a good indication of comparative price. Although I have given it no initial weight, give tuition the weight you think it deserves.
- student/faculty ratio criterion A common measure of the quality of educational opportunity in schools is the student/faculty ratio. Student enrollment is divided by faculty size. Unfortunately, this factor is not perfectly reliable. A school looks a bit better than it should if it does not require the usual amount of teaching from its faculty. Some schools are wealthy enough to afford faculty with light teaching loads. Just because the teachers are being paid at some schools does not mean that you will get to see them. The writing that teachers do improves their ability to teach. In addition, research done by the faculty enhances the reputation of a school, which in turn redounds to the benefit of the students. Hence, the ideal teacher would spend considerable time writing for publication. It is likely, however, that some teachers spend more of their time writing than they would if they were trying to maximize student welfare. I am not suggesting that law schools should consider only students; good schools have other valuable functions in society. The point to remember is that members of the faculty of some schools dedicate more hours to the students than do teachers at other schools. Despite this difference, I think the ratio of students to faculty members is a useful criterion.
- faculty/student ratio criterion All else equal, students should benefit from having a larger faculty. However, having to share the faculty with lots of other students reduces that benefit. Faculty/student ratio is faculty size divided by student enrollment. Interestingly, this criterion does not have the same effect as the student/faculty ratio, its reciprocal. The reason they are different is that neither is a linear transformation of the other. Taking a reciprocal is not like dividing by a constant. Actually, taking a reciprocal is like dividing a number by its square.
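The point can be checked numerically. In the sketch below (with made-up ratios), each measure is divided by its own standard deviation, as the game does; the spacing between the same pair of schools comes out differently on the two scales, so the two criteria contribute differently to a weighted total even though one is the reciprocal of the other.

```python
import statistics

sf = [5.0, 10.0, 30.0]        # hypothetical student/faculty ratios
fs = [1.0 / x for x in sf]    # the corresponding faculty/student ratios

# Each criterion divided by its own standard deviation, per the game's method.
z_sf = [x / statistics.pstdev(sf) for x in sf]
z_fs = [x / statistics.pstdev(fs) for x in fs]

gap_sf = z_sf[1] - z_sf[0]    # spacing between the first two schools, S/F scale
gap_fs = z_fs[0] - z_fs[1]    # spacing between the same pair, F/S scale
```

The two gaps differ substantially: the reciprocal stretches the differences among schools at one end of the scale and compresses them at the other, which a linear transformation could not do.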
- academic and non-academic The academic reputation and non-academic reputation scores are taken from data published by US News & World Report. That magazine attempts to provide a useful informational service in generating these reputational data. However, some faculty members and administrators who are familiar with the survey forms US News & World Report distributes question whether those forms allow enough variance to make the resulting reputational rankings reliable. Moreover, since US News & World Report does not publish its method for generating reputational data, I cannot determine whether that method is a good one.
Student aptitude and knowledge criteria
- LSAT and GPA Both law students and their teachers learn a lot from the students present in a school. The better the students, the more can be learned from them. The LSAT is a measure of student aptitude and knowledge. The GPA data should be viewed and used with great caution because grading practices vary widely across undergraduate institutions and the GPA data have not been adjusted to account for these differences. The medians are taken from *** LSAC data.
Career placement criteria
- percentage employed This figure is calculated from *** LSAC data.
- comparative bar pass rate The comparative bar pass rate for each school is the quotient of the school's passing percentage in the state where the plurality of its graduates take the exam over the passing percentage for all first-time examinees in that state, with both figures taken from the *** LSAC data.
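With hypothetical pass rates (the 90% and 75% figures below are made up), the quotient just described is simply:

```python
def comparative_bar_pass(school_pass_rate, state_pass_rate):
    # Ratio > 1: the school's graduates pass more often than the state's
    # first-time takers overall; ratio < 1: less often.
    return school_pass_rate / state_pass_rate

ratio = comparative_bar_pass(school_pass_rate=0.90, state_pass_rate=0.75)
# ratio is approximately 1.2: the school beat the statewide first-time rate.
```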
An alternative approach would be to use each school's pass rate unadjusted by jurisdiction. Doing that would give an advantage to schools whose graduates take the bar exam in states where the test is easy and the overall passing rate is high. On the other hand, the problem with the current USN&WR method is that it gives an advantage to schools in jurisdictions where a large number of students fail the exam. The problem stems from the available data. We cannot tell whether a high pass rate in a given state is the result of well-prepared students or an easy test or both. The method I have not used is better if the students pass primarily because they are well qualified. The method I have used is the better approach if the variation in state pass rates is due primarily to differences in the test rather than the test takers. It is my guess that more of the state-by-state variation in pass rates is due to differences in bar exams than to differences in applicants' qualifications. For most schools it does not matter much which method is used. But it can make a difference in some cases.
A more subtle problem is also created by this method. Imagine a school that produces most of the applicants to the state bar. Its bar pass rate will be very similar to the state's bar pass rate, so the ratio will be about 1, which is an average score. If that school is above average on other criteria, its dominance of the local bar will pull it down in the ranking. On the other hand, if it is below average on other criteria, its dominance of the local bar will pull it up.
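This pull toward 1 is easy to see with hypothetical numbers. Below, a school whose graduates pass at 90% is compared against other in-state takers passing at 60%; as the school's share of the state's examinees grows, the state rate converges on the school's own rate, and the ratio collapses toward 1 regardless of the school's quality.

```python
def ratio_vs_state(school_rate, other_rate, school_share):
    # The state rate is a share-weighted blend of the school's takers
    # and everyone else's.
    state_rate = school_share * school_rate + (1 - school_share) * other_rate
    return school_rate / state_rate

# A strong school (90% pass) among weaker takers (60% pass):
small_share = ratio_vs_state(0.90, 0.60, school_share=0.10)  # well above 1
big_share = ratio_vs_state(0.90, 0.60, school_share=0.90)    # close to 1
```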
Yet another problem is that some schools send more of their top graduates to employers out of state. For example, Chicago is a major market for top law school graduates. For that reason, many of the top students from Illinois will stay in state. Compared to Illinois schools, more of the top graduates from Indiana schools will probably leave the state to take the bar. Since the students who do well on law school exams (and have greater opportunity to work in Chicago) tend to do well on the bar exam, the emigration of Indiana's top students will lower Indiana's pass rate on the Indiana bar. To compensate for this disadvantage for schools with lots of students leaving the state, I have added a criterion (see above) that gives an advantage to schools sending graduates to other states.
An additional problem is that bar pass rates vary substantially from year to year. Indiana-Bloomington's was 82% one year and 94% the next. Those results suggest a dramatic improvement in the school, yet very little had changed in the classrooms or in admissions.
I was tempted to omit the bar pass criterion entirely. In addition to the problems just noted, it has a number of other serious faults as a measure of quality. For starters, it is not clear that it adds useful information to other criteria such as student quality and employment percentage. After all, if students do not pass the bar, that should show up in employment statistics since students must pass the bar in order to get most legal jobs. More important, many legal academics do not consider it their job to prepare students to pass the bar. Indeed, since independent commercial operators offer bar review courses specifically for that purpose in most states, it would be a shame to waste years of study and thousands of tuition dollars on training to pass the bar.
Many good law schools consider their function to be preparing students to be good lawyers and citizens, rather than bar passers, and society will likely benefit more from good lawyering than from additional bar passage. The bar pass rate mismeasures quality because time spent on teaching lessons related to the bar exam may decrease class time spent on material that is more important for students to learn to be good lawyers. The students who graduate less prepared for the bar may be more prepared to be good lawyers. Moreover, it may be socially harmful to compare law schools on bar passage because doing so creates an incentive for them to try to increase the passage rates. This may result in poorer classes and could even result in law schools cutting back admissions of students belonging to groups that are statistically less likely to pass the bar. The bottom line is that the inclusion of bar pass rates in ranking systems may make law schools and lawyers worse.
Despite all these problems, I have included bar pass ratio as a potential criterion. However, I urge you to give bar passage zero weight. And if you do give it weight, consider giving equal weight to the out-of-state-percentage criterion (if available) to balance the bias against schools in states with smaller employment markets.
- library titles and active serial subscriptions These two items are taken from *** LSAC data. They measure how many different titles and subscriptions there are in the library. The larger the number, the greater the chances the library has the book you need for your research.
- campus beauty The scores on this criterion are derived from a book titled "Campus as a Work of Art", by Thomas A. Gaines. Summing scores on four criteria, Gaines gave his top schools total scores from 19 down to 17. He did not publish the scores below that in his table. I have given all other schools 16, whether they deserve it or not. Weighting this criterion says you like to go to school or attend your homecomings in an attractive setting.
- Tibetan restaurants within 400 meters Just to prove, if it is not already obvious, that my choices of criteria (like everyone else's) are subjective and idiosyncratic, I have included a factor designed to account for whether there are adequate restaurants within walking distance of the law school. Rather than count the restaurants near each school, which would be entirely too much work, however pleasurable, I have chosen to simplify the search in a non-random fashion. My proxy for the availability of adequate repast is the number of Tibetan restaurants within 400 meters. (If I have miscounted this number for any school, please alert me.) I have given this criterion no weight in the initial ranking on the ground that you might not agree with my choice of proxy. However, if you start with the initial weightings and give this factor its appropriate weight of 1, the ranking will reflect the rightful place of Indiana-Bloomington in the law-school world.
Criteria not included
- 25%tile LSAT and GPA This program uses the 50th percentile figures on GPA and the median (50th percentile) on LSAT. This program does not use the 25th percentile data on GPA and LSAT, data which are now also available. Despite the fact that the purpose of this program is to reduce reliance on rankings, I am aware that some schools might respond by trying to improve on the criteria used here. If I were to use the 25th percentile data, schools might try to increase the GPA and LSAT of the top 75 percent of their students. The only way to increase the GPA and LSAT is to decrease the attention paid to other student information. Admissions officers would have even less freedom to favor students that they predict will either be better lawyers or add more to the experience of other students (and faculty) in the school. The figures used in the Ranking Game give an indication of student quality without putting much pressure on the admissions process.
- attrition Some people, for society's sake and their own, should flunk out of law school. I do not include attrition as a criterion because doing so might increase pressure on teachers to refrain from failing students who have not performed at a passing level. Attrition is also not a direct measure of school quality.
- budget As a general point, prospective students should care about the quality of faculty and other resources, not what the law school spends to get them. Professors, for example, often take a pay cut to take their teaching job, and demand more pay to teach at lesser schools. In other words, to get the same professor, a lesser school would have to spend more. For that reason, a smaller budget does not necessarily mean a lesser law school. Even if you adopt the assumption that more spending creates a better education, financial data are helpful only if they are very carefully used. First, the total operating budget should be reduced by the amount students contribute by way of tuition plus the amount given to students in financial aid. If the preceding adjustments are not made, then a school can increase its ranking by simply increasing tuition and increasing financial aid, dollar for dollar. Once that figure (the amount of money the school is spending that does not come from or go to students) is determined, overhead should be subtracted and the result should be divided by the number of students. Then that amount should be adjusted for the cost of living in the area of the school, since some schools can buy more benefits for students with each dollar. There may be other appropriate adjustments that I have not considered and do not include.
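The sequence of adjustments just described can be sketched as follows. All field names and dollar figures are hypothetical; this illustrates the proposed adjustment, not a calculation the game actually performs.

```python
def adjusted_budget_per_student(total_budget, tuition_revenue,
                                financial_aid, overhead,
                                n_students, cost_of_living_index):
    # Remove money that comes from or goes back to students, then overhead;
    # spread the remainder over students; deflate by local cost of living.
    net = total_budget - tuition_revenue - financial_aid - overhead
    return net / n_students / cost_of_living_index

per_student = adjusted_budget_per_student(
    total_budget=40_000_000, tuition_revenue=20_000_000,
    financial_aid=5_000_000, overhead=3_000_000,
    n_students=600, cost_of_living_index=1.1)
```

Without the tuition and financial-aid subtractions, raising tuition and aid by the same amount would inflate the total budget while leaving real resources unchanged, which is exactly the gaming opportunity the text warns about.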
- percentage of minority students This criterion is clearly and appropriately relevant to the decisions of students. It is not, however, a direct measure of quality, even as to diversity. A higher minority percentage is not unambiguously better than a lower percentage. For example, Howard, with 20% white, is probably not 4 times better on diversity than Hawaii, with 20% minority. There is no good, simple measure of diversity. Therefore, although I think students should consider the admissions policy of the school and their own needs for a diverse set of peers, I do not include a diversity or gender criterion in this game.
Explanation of columns in the data set
The data set provides a number of columns (including all of the data used by the game). The data under each column heading are as follows:
- rank the rank of the school
- sum the sum of the weighted scores
- school the law school name
- price tuition
- beauty total score in "Campus as a Work of Art"
- S/F student/faculty ratio
- LSAT median LSAT for entering class
- GPA median GPA of entering class
- ac-rep academic reputation
- o-rep non-academic reputation
- %employ percentage known employed
- bar ratio of bar passage
- titles total number of titles in the library
- Tibet Tibetan restaurants within 400 meters
- f/s faculty/student ratio
- af/s faculty/student ratio including adjuncts
- full-s full time students
- part-s part time students
- students students
- tentrack tenure track faculty members
- admin administrators teaching more than 1/2 time
- teachers other full time teachers
- employees sum of the full-time teachers listed above (tentrack, admin, and teachers)
- adjunct part time teachers
- total teachers sum of full-time and adjunct teachers
- known percentage of grads whose employ is known
- jobs percentage of known that are employed
- gradschool percentage of known that are in school
- barpass percentage of students passing local bar
- allpass percentage of all takers passing local bar
- volumes total number of volumes in library
- seats number of seats in library