Course allocation is the problem of allocating seats in university courses among students. Many universities impose an upper bound on the number of students allowed to register for each course, in order to ensure that teachers can give sufficient attention to each individual student. Since the demand for some courses exceeds the upper bound, a natural question is which students should be allowed to register for each course.
Many institutions allow students to register on a first-come, first-served basis. However, this may lead to unfair outcomes: a student who happens to be near his/her computer when registration starts can manage to register for all the most-wanted courses, while a student who arrives too late might find that all desired courses are already full and be able to register only for less-wanted courses. To mitigate this unfairness, many institutions use more sophisticated allocation mechanisms.[1]
In a draft mechanism (also called round-robin), students take turns picking courses from the set of courses with available seats. The choosing order is random in the first round and reverses in each subsequent round. In practice, students do not have to pick by rounds: they can simply report their preferences over individual courses to a computer, and the computer chooses courses for them one at a time. This procedure has been used, for example, at the Harvard Business School since the mid-1990s.[2] An important advantage of draft mechanisms is that they are relatively fair, in the sense that every student receives his/her t-th course before any student receives a (t+1)-th course.
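The round-by-round logic can be sketched as follows (a minimal illustration in Python; names such as `preferences` and `capacities` are illustrative, and real implementations also enforce timetable constraints):

```python
import random

def draft(students, preferences, capacities, max_courses):
    """Round-robin draft: students pick one course per round,
    in an order that reverses after every round."""
    order = list(students)
    random.shuffle(order)                       # random order in the first round
    allocation = {s: [] for s in students}
    for _ in range(max_courses):
        for s in order:
            # pick the highest-ranked course that still has a seat
            # and is not already in the student's schedule
            for c in preferences[s]:
                if capacities.get(c, 0) > 0 and c not in allocation[s]:
                    allocation[s].append(c)
                    capacities[c] -= 1
                    break
        order.reverse()                         # the picking order reverses each round
    return allocation
```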
One problem with the draft procedure is that it is not strategyproof: students may potentially get better courses by manipulating their reported preferences. Moreover, the draft is easy to manipulate: students should overreport how much they like the more-wanted courses, and underreport how much they like the less-wanted courses. Results from a field study at Harvard show that students indeed manipulate their preferences, and that this manipulation leads to allocations that are not Pareto efficient and have low social welfare.[2]
A variant of the draft that can potentially reduce the inefficiencies due to manipulation is the proxy draft. In this mechanism, the students still report their preferences to a computer, but this time the computer manipulates the preferences on their behalf in an optimal way, and then plays the original draft. This procedure reduces the welfare loss caused by manipulation mistakes and by lack of knowledge of one's position in the choosing sequence.[3]: 5 Other variants are the quest draft[4] and the Pareto-improving draft.[5]
Another problem with the draft is that it considers only the ordinal rankings of the students, and ignores their cardinal valuations. This may lead to inefficiencies. For example, suppose the first student in the random order slightly prefers course A to course B, whereas the second student strongly prefers A to B; say the first student values A and B at 6 and 5, while the second values them at 9 and 2. The draft mechanism will allocate A to the first student, but it would have been more efficient (at least from a utilitarian perspective) to allocate A to the second student: the total utility would then be 5 + 9 = 14 rather than 6 + 2 = 8.
Economic theorists have proved that random serial dictatorship (RSD) is the only strategyproof mechanism that is ex-post Pareto-efficient and satisfies some other natural properties. Based on this theoretical result, they suggested using it in practice for course allocation.[6][7][8]
However, field experiments have shown that RSD performs worse than the manipulable draft mechanism on natural measures such as the number of students who get their first choice and the average rank of the courses each student receives.[3]: 5
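Under RSD, students choose in a single random order, and each student takes an entire best available schedule at once, rather than one course per round as in the draft. A minimal sketch, assuming a hypothetical helper best_schedule that returns a student's most preferred feasible schedule given the remaining seats:

```python
import random

def random_serial_dictatorship(students, best_schedule, capacities):
    """RSD: students choose whole schedules, one student at a time,
    in a single random order."""
    order = list(students)
    random.shuffle(order)
    allocation = {}
    for s in order:
        # best_schedule is a hypothetical helper: it returns the student's
        # most preferred feasible schedule given the remaining seats
        schedule = best_schedule(s, capacities)
        allocation[s] = schedule
        for c in schedule:
            capacities[c] -= 1                  # the whole schedule is taken at once
    return allocation
```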
In a bidding mechanism, each student is given a fixed budget of artificial currency, and can allocate this "money" among the courses he/she wishes to take. The bids of all students on all courses are ordered from highest to lowest and processed one at a time. Each bid in turn is honored if and only if the student has not yet filled his/her schedule and the course still has available seats. Similar mechanisms are used at the Ross School of Business, Columbia Business School, Haas School of Business, Kellogg School of Management, Princeton University, Yale School of Management[9] and Tel-Aviv University.[10]
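The bid-processing rule can be sketched as follows (a minimal illustration with illustrative names; real systems add tie-breaking rules and timetable checks):

```python
def bidding_allocation(bids, capacities, max_courses):
    """Process bids from highest to lowest; honor a bid only if the student
    still has room in his/her schedule and the course still has a seat."""
    allocation = {}
    for student, course, amount in sorted(bids, key=lambda b: b[2], reverse=True):
        schedule = allocation.setdefault(student, [])
        if (len(schedule) < max_courses
                and capacities.get(course, 0) > 0
                and course not in schedule):
            schedule.append(course)
            capacities[course] -= 1
    return allocation
```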
Bidding mechanisms have several disadvantages. First, just like first-price auctions, they are not strategyproof. This may cause students to spend a lot of effort deciding how much to bid on each course, by guessing how much other students will bid on those courses. Second, the outcomes may be inefficient. The students' bids play two roles: inferring student preferences, and determining who has a bigger claim on seats. These two roles may conflict, which may lead to inefficient outcomes.[11] Third, the outcome of a bidding mechanism may be very unfair: some students may receive no desired course, while others receive all their desired courses.[12]
Kominers, Ruberry and Ullman[1] introduce a proxy bidding mechanism, which aims to compute high-quality manipulations on behalf of each student. According to their simulations, this mechanism decreases the incentives to manipulate, and hence may improve efficiency.
In an equilibrium mechanism, each student is allowed to rank all the feasible schedules of courses (i.e., all subsets of courses in which no two courses overlap in time, no two courses teach the same material at different times, etc.). Then, a computer finds a competitive equilibrium from equal incomes in this market. Since an exact competitive equilibrium may not exist, a mechanism often used in practice is the Approximate Competitive Equilibrium from Equal Incomes (A-CEEI). Eric Budish developed the theory;[12] Othman and Sandholm[13] provided efficient computer implementations. Budish, Cachon, Kessler and Othman improved the implementation; their system, called CourseMatch, has been deployed at the Wharton School, replacing the previous bidding-based mechanism.[14] A commercial implementation is offered by Cognomos.[15] Recently, Budish, Gao, Othman, Rubinstein and Zhang presented a new algorithm for finding an approximate CEEI, which is substantially faster, attains zero clearing error on all practical instances, and has better incentive properties than the previous algorithm.[16]
The need to report a ranking over schedules is a major challenge in implementing such algorithms, since the number of feasible schedules may be very large.[17][18] Overcoming this challenge requires designing a simple language that allows students to describe their preferences in a reasonable time. The language developed at Wharton allows students to specify a utility for each individual course, and an "adjustment value" for each pair of courses. The utility of each pair is the sum of the utilities of the individual courses, plus the adjustment value; a zero, positive or negative adjustment value indicates that the two courses are independent goods, complementary goods or substitute goods, respectively. In addition, some specific combinations of courses (e.g., those given at the same time or with the same content) are explicitly disallowed. While this language cannot express every possible ranking over schedules, it is sufficient in practice.[14]
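A minimal sketch of this additive-with-pairwise-adjustments structure (illustrative names only; the handling of disallowed combinations is omitted):

```python
from itertools import combinations

def schedule_utility(schedule, course_utility, pair_adjustment):
    """Utility of a schedule in the pairwise-adjustment language:
    the sum of individual course utilities plus, for every pair of courses
    in the schedule, an adjustment value (0 = independent, > 0 = complements,
    < 0 = substitutes)."""
    total = sum(course_utility[c] for c in schedule)
    for a, b in combinations(schedule, 2):
        total += pair_adjustment.get(frozenset((a, b)), 0)
    return total
```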
Soumalis, Zamanlooy, Weissteiner and Seuken[19] present a method that uses machine learning to learn and correct errors in the students' reports.
A major problem with A-CEEI is that it is very computationally intensive: it needs to search through the space of price vectors, and for each candidate price vector it must compute an optimal bundle for each of many students.
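The sketch below illustrates why each candidate price vector is expensive to evaluate. It assumes a hypothetical helper demand that returns a student's best affordable schedule at the given prices (itself a hard optimization problem), and uses a simplified version of the clearing-error measure that guides the search:

```python
def clearing_error(prices, students, budgets, capacities, demand):
    """Evaluate one candidate price vector: compute every student's demanded
    schedule at these prices and measure how far the market is from clearing.
    'demand' is a hypothetical helper returning a student's best affordable
    schedule, which is itself a hard optimization problem in practice."""
    seats_demanded = {c: 0 for c in capacities}
    for s in students:                          # one optimization per student
        for c in demand(s, prices, budgets[s]):
            seats_demanded[c] += 1
    # a simplified error measure: sum of squared excess demands
    return sum(max(0, seats_demanded[c] - capacities[c]) ** 2 for c in capacities)
```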
Atef-Yekta and Day[20] aim to improve the efficiency of draft mechanisms by incorporating elements of bidding, while keeping the round-by-round structure that enhances fairness. They present several heuristic algorithms.
They compared their five algorithms with the bidding and draft mechanisms on 100 sample markets, each with 900 students, where each student may take up to 6 courses. There were 112 course sections, some of which belong to the same course and some of which overlap (so they cannot be taken together). The course capacities were drawn at random from discrete uniform distributions. These characteristics are similar to those of the Harvard Business School. They evaluated the algorithms using several metrics, covering binary, ordinal and cardinal measures of both efficiency and fairness.
On the binary and ordinal measures, OC scored best on both efficiency and fairness, followed by SP-O and TTC-O, then Draft, SP and TTC; Bidding scored worst. On the cardinal measures, OC and BPM were the most efficient, but SP-O and TTC-O were the fairest; Draft was very inefficient, BPM was very unfair, and SP and TTC were moderately efficient and moderately fair.
As no algorithm is strategyproof, they also studied the incentives for strategic manipulation in each algorithm, that is, how much a student can gain by manipulating. Their experiments show that both the gain to manipulators and the harm to truthful students are highest in the Bidding mechanism, while the lowest combined gain and harm occurs in TTC-O, SP-O and Draft.
Most works assume that only the students have preferences over the courses, whereas the courses do not have preferences; that is, the market is one-sided. However, some works assume that courses may also have preferences, and therefore the market is two-sided.[21] The main goal in a two-sided market is finding a stable matching, and the main algorithm is the Gale-Shapley algorithm (deferred-acceptance, DA).
Diebold, Aziz, Bichler, Matthes and Schneider[22] compare two mechanisms: student-optimal DA, and efficiency-adjusted DA. They also survey recent extensions regarding assignment of schedules of courses, rather than individual courses. They report a field experiment showing the benefits of stable matching mechanisms.
Diebold and Bichler[23] compare various mechanisms for two-sided matching on course-allocation data.
Krishna and Unver[24] and Sonmez and Unver[9] consider a one-sided market, but still suggest using a two-sided matching algorithm. Their rationale is that, in existing mechanisms, the student bids play two different roles: they determine who has a bigger claim on each course seat (and in this role they are used as a strategic tool), and they are used to infer the preferences of the students. They suggest splitting these two roles: each student reports both a cardinal value for each course and an ordinal ranking of the courses, and these two reports need not be consistent. When running DA, the preferences of the students are determined by their ordinal rankings, and the preferences of the courses are determined by the students' cardinal values; effectively, a course "prefers" to accept the students who want it more. As in the DA algorithm, each student "proposes" to the course ranked highest in his/her ordinal ranking; over-demanded courses then order the proposing students according to their cardinal values and reject the lowest ones. They report that, based on theory and field experiments, this scheme improves the efficiency of the allocation. However, reporting two inconsistent sets of preferences may increase the incentive problems, and the algorithm has no fairness guarantees.
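A minimal sketch of this student-proposing deferred-acceptance scheme, in which students propose according to their ordinal rankings and each course keeps the students with the highest bids on it (illustrative names; for simplicity this version assigns at most one course per student):

```python
def bid_based_deferred_acceptance(ordinal, bids, capacities):
    """Student-proposing deferred acceptance: students propose according to
    their ordinal rankings; an over-demanded course keeps the students with
    the highest cardinal bids on it and rejects the rest."""
    next_choice = {s: 0 for s in ordinal}        # next course to propose to
    tentative = {c: [] for c in capacities}      # tentatively accepted students
    free = set(ordinal)
    while free:
        s = free.pop()
        if next_choice[s] >= len(ordinal[s]):
            continue                             # the student exhausted his/her list
        c = ordinal[s][next_choice[s]]
        next_choice[s] += 1
        tentative[c].append(s)
        if len(tentative[c]) > capacities[c]:
            # reject the tentatively accepted student with the lowest bid on c
            tentative[c].sort(key=lambda t: bids[(t, c)], reverse=True)
            free.add(tentative[c].pop())
    return {s: c for c, accepted in tentative.items() for s in accepted}
```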
Other mechanisms for course allocation use fair random assignment.[25]