From the fact-based fiction of Tom Wolfe’s I Am Charlotte Simmons to the undercover anthropology of Rebekah Nathan’s My Freshman Year, scholars, journalists, and educators have begun to depict the college campus as a place where academic effort is scarcely detectable and the primary student activities are leisure-based. But if history is a guide, every generation has a tendency to slander its progeny with allegations of decadence and sloth. Do recent characterizations of a shift in college culture reflect real, quantifiable changes over time in the choices and behaviors of students, or are they little more than the rants of curmudgeons, stoking the common prejudice with selective examples?
We answer this question with hard data from time-use surveys that go back half a century. Figure 1 offers a condensed preview of the results. In 1961, the average full-time student at a four-year college in the United States studied about twenty-four hours per week, while his modern counterpart puts in only fourteen hours per week – a whopping ten-hour decline. As we explain below, the trend depicted in Figure 1 is not explained by differences in the wording of survey questions, is clearly visible across a dozen separate data sets, and does not appear to be driven by changes in the composition of the college-going population over time. Study time fell for students from all demographic subgroups, for students who worked and those who did not, within every major, and at four-year colleges of every type, degree structure, and level of selectivity. This mountain of evidence suggests that a change in college culture has taken place over the past fifty years, a change that may have profound implications for the production of human capital and economic growth.
While it is not clear why time spent studying has declined, we argue that the observed ten-hour-per-week decline could not have occurred without the cooperation of postsecondary institutions. Education-policy observers commonly use the word “standards” in reference to education outputs, such as student achievement or learning. But in a university setting, “standards” often refers to inputs, such as time spent in class or time spent studying, as well as outputs. Universities commonly claim that eliciting student effort is a goal and even define a unit of academic credit as the number of hours per week a student should have to spend in class and studying in order to earn it. For decades, educators and administrators have also expressed a common expectation about the amount of study time that should correspond to each hour spent in class, what we call the “traditional effort standard”: in general, the standard is that students study two or more hours outside of class for every hour of class time. We will also present evidence that study time has meaningful benefits and that colleges produce these benefits when they elicit student effort.
Figure 1. Average Study Time for Full-Time Students at Four-Year U.S. Colleges, 1961 and 2003
Data and Findings
We base our analysis on four large data sets that cover the time periods 1961, 1981, 1987-89, and 2003-2005, and we have restricted the samples to full-time students at four-year colleges in the United States. Each survey asked students to report the number of hours per week they spend studying outside of class. Data for 1961 time use come from Project Talent, for 1981 from the National Longitudinal Survey of Youth (NLSY79), for the late 1980s from the Higher Education Research Institute (HERI), and for the post-2000 years from HERI (2003-2005) and the National Survey of Student Engagement (NSSE). Very recent data (for study times after 2003-2005) show a similar trend, and the decline we document here can be replicated using eight alternative data sets stretching all the way back to 1928. We make three comparisons over time – 1961-1981, 1988-2004, and 1961-2003 – based on the comparability of the surveys. We compare the 1961 and 1981 samples because both are nationally representative. We compare the HERI surveys (1988 and 2004) but restrict the data to a subset of forty-six colleges for which data are available in both periods. And finally, we compare a consistent set of schools between 1961 and 2003 using 156 NSSE colleges that have data available in both time periods. As we will show, study time declined significantly in each of these periods.
Comparing different surveys over time, however, raises important issues of interpretation. We confront two issues in the next section: first, that these trends are a function of differences in survey questions rather than real differences in behavior; and second, that these trends are the result of changes in the types of students who attend college, rather than changes in student behavior while they are in college.
Different Questions on Different Surveys. The relevant study-time questions in the various time-use surveys were not identical. It could be that subtle differences in the framing of the questions evoked very different answers from students and created the illusion of a study-time decline. To account for this possibility, we estimate these framing effects empirically. Our finding is that framing effects account for very little of the overall study-time decline. (Results displayed in Figure 1 and throughout this Outlook include the adjustment for framing effects.) After accounting for differences in the wording of the surveys, we observe statistically significant declines in study time of about eight hours per week between 1961 and 1981, about two hours per week between 1988 and 2004, and about ten hours per week between 1961 and 2003. The evidence clearly indicates that the study-time decline is not an artifact of the way the questions were asked in the different surveys.
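The logic of this randomized-wording estimate can be illustrated with a toy simulation: students are randomly assigned one of two question wordings, and the difference in mean reported study time recovers the framing effect. Every number below (the 14-hour mean, the 1-hour wording bump, the sample size) is invented for the illustration; none comes from the authors’ survey.

```python
import random

# Toy simulation of a framing-effect design: random assignment to one of
# two question wordings, then a difference in group means. All parameters
# here are hypothetical.

random.seed(0)

TRUE_MEAN_HOURS = 14.0   # hypothetical true mean study time
TRUE_SD_HOURS = 4.0      # hypothetical spread across students
FRAMING_BUMP = 1.0       # hypothetical effect of wording "B"

def reported_hours(wording):
    """Simulated survey response under a given question wording."""
    true_hours = random.gauss(TRUE_MEAN_HOURS, TRUE_SD_HOURS)
    return true_hours + (FRAMING_BUMP if wording == "B" else 0.0)

responses = {"A": [], "B": []}
for _ in range(5000):
    wording = random.choice(["A", "B"])  # random assignment
    responses[wording].append(reported_hours(wording))

def mean(xs):
    return sum(xs) / len(xs)

# Because assignment is random, this difference estimates the pure
# effect of wording, which can then be netted out of cross-survey gaps.
framing_effect = mean(responses["B"]) - mean(responses["A"])
print(round(framing_effect, 2))  # close to the 1.0-hour bump built in
```

The point of the design is that randomization makes the two groups comparable, so any gap in reported hours reflects wording alone, not differences in actual behavior.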
The rest of our analysis focuses on the NSSE colleges, as these allow comparison over the longest period for a large, representative set of colleges. It is worth reiterating that the broad study-time patterns we document are not limited to these particular schools or these particular years. The patterns are clearly visible in data sets stretching from 1928 to 2008.
Changes in the College-Going Population. The college-going population has changed in many ways that could be related to study choices. For instance, a greater fraction of students work at jobs now than was the case in earlier years. Are students studying less because they are working more? Working students do, indeed, study less on average than nonworking students; however, only a small fraction of the change in study time can be accounted for by changes in work hours. As shown in Figure 2, study hours fell for students in every category of work intensity, including those who did not work at all. Holding work hours constant, then, students invested far less time studying in the 2000s than they did in 1961. The evidence indicates not only that college students are studying less than they used to, but also that the vast majority of the time they once devoted to studying is now being allocated to leisure activities, rather than paid work. Leisure means time that is spent neither working (for pay) nor studying.
Are recent cohorts of students simply better prepared than earlier ones? This seems unlikely, as there is little evidence of rising preparedness in the test scores of entering students. Further, changes in parental characteristics do not explain the study-time decline: Figure 2 shows that study time declined even while holding parental education constant. How about gender? More women now go to college than did so before. Are female students lazier or less serious, and does that explain the move away from studying? The answer is a resounding no. In Figure 2, we observe that women in recent cohorts studied more than men and that study time fell dramatically for both women and men. Could it be that students have simply begun to choose less demanding majors? Again, the answer is no. Although different majors require different levels of academic time investment, study time plunged for all majors, as shown in Figure 3. Perhaps a few low-quality colleges have begun to resemble diploma mills, but higher-quality colleges have maintained their effort standards, which would mean that the erosion in studying is restricted to a narrow class of colleges. But the evidence indicates otherwise: although students at liberal arts colleges or highly selective universities did study more than other students, both in 1961 and in the 2000s, Figure 4 shows that studying fell dramatically at universities of every type.
The bottom line: study time fell within every demographic subgroup, for working students and those without jobs, for every major, and at every type of college. Further, students do not appear to have reduced study time to work for pay. Students appear to be studying less in order to have more leisure time.
Figure 2. Average Study Time for Full-Time Students at Four-Year U.S. Colleges by Work Status, Parental Education, and Gender, 1961 and 2003
Why Study Time Has Fallen
The findings above raise many questions about the practices and cultures of postsecondary institutions. Given that eliciting academic effort has been, and continues to be, an explicit part of the university mission, why have postsecondary institutions allowed this decline to occur? Possible explanations fall into two broad categories: improvements in education technology and declines in academic standards.
Improvements in Education Technology. Information technologies may have reduced the time required for some study tasks. Term papers have certainly become less time-consuming to write with the advent of word processors, and the search for texts in libraries has become faster with help from the Internet. We acknowledge these factors but seriously doubt that they tell the whole story. A major reason for our skepticism is that most of the study-time decline took place prior to 1981, well before the relevant technological advances. Moreover, the study-time decline is visible across disciplines, despite the fact that some disciplines, such as mathematics or engineering, feature little or no paper writing or library research. We conclude from the evidence that the Internet and word processors are, at best, a small part of the answer.
Falling Standards. The other explanation for the study-time decline is that colleges have lowered achievement standards. Because there is no uniform measure of student learning in college – no exit exam for undergraduates – it is difficult to determine conclusively whether students are, in fact, learning less in college than they used to. It is possible that achievement standards have not declined, even though student effort has. College instructors may have become so masterful at delivering knowledge to their charges that today’s students are able to match or exceed the achievement of their predecessors without putting in much effort. (As college professors ourselves, we are flattered by the idea that we possess these magical talents, but we find it hard to believe.) However, if we take universities at their word about the average amount of academic effort necessary to produce the appropriate level of learning in college, we can examine their performance based on this metric. The traditional effort standard, virtually unchanged for the better part of a century, requires that students put in two or more hours of study time per week for every hour of class time (or course unit).
Recent formulations of this standard abound in college catalogs and websites, the writings of educators, and university regulations that define how units of academic credit are to be awarded. Based on average course loads in national data sets, this effort standard requires that full-time students study thirty hours per week to pass their courses. College students used to come close to meeting this standard, but they now study only fourteen hours per week. So even though we lack the data to observe directly whether college has been “dumbed down,” we are able to draw from the data a solid conclusion about university practices: standards for effort have plummeted – in practice, if not in word.
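The arithmetic behind the thirty-hour figure can be sketched as follows. The fifteen-hour course load is our assumption for a typical full-time schedule; the article reports only the two-to-one standard and the resulting thirty-hour total.

```python
# Back-of-envelope arithmetic for the traditional effort standard.
# The 15-hour class load is an assumed typical full-time schedule.

CLASS_HOURS_PER_WEEK = 15   # assumed full-time course load
STUDY_RATIO = 2             # two study hours per class hour (the standard)

implied_study_hours = CLASS_HOURS_PER_WEEK * STUDY_RATIO
observed_study_hours = 14   # recent survey average cited in the text
shortfall = implied_study_hours - observed_study_hours

print(implied_study_hours)  # 30 hours/week implied by the standard
print(shortfall)            # 16-hour weekly gap between standard and practice
```

Under these assumptions, students now fall more than sixteen hours per week short of the standard their institutions nominally endorse.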
Figure 3. Average Study Time for Full-Time Students at Four-Year U.S. Colleges by Major, 1961 and 2003
Why has this happened? Educators have put forth a few theories. David L. Kirp, in Richard Hersch and John Merrow’s Declining by Degrees, emphasizes student empowerment vis-à-vis the university and argues that increased market pressures have caused colleges to cater to students’ desires for leisure. In the same volume, Murray Sperber emphasizes a change in faculty incentives: “A nonaggression pact exists between many faculty members and students: Because the former believe that they must spend most of their time doing research and the latter often prefer to pass their time having fun, a mutual nonaggression pact occurs with each side agreeing not to impinge on the other.” Consistent with this explanation, recent evidence suggests that student evaluations of instructors (which exploded in popularity in the 1960s and 1970s) create perverse incentives: “easier” instructors receive higher student evaluations, and a given instructor in a given course receives higher ratings during terms when he or she requires less or grades more leniently. Because students appear to put in less effort when grading is more lenient, grade inflation may have contributed to the decline. Perhaps it is not surprising that effort standards have fallen. We are hard-pressed to name any reliable, noninternal reward that instructors receive for maintaining high standards – and the penalties for doing so are clear.
Figure 4. Average Study Time for Full-Time Students at Four-Year U.S. Colleges by Institution Type and Selectivity, 1961 and 2003
Student Incentives. If standards have fallen at colleges, and if the explanation for this change is that colleges are catering to the leisure preferences of their students, this raises the question of why students would demand more leisure and fewer study hours in the first place. After all, time investment in college is supposed to benefit the students themselves. If students study less, they learn less; if learning is a determinant of earnings, students who demand more leisure will reduce their future earning power.
One theory is that the population has become wealthier over time and that this “wealth effect” has caused students to demand more leisure. Oddly, though, students are spending more time working for pay while in college than they did before. This does not fit well with the theory of a wealthier student population that demands more free time. Further, as shown in Figure 2, advantaged students from educated families appear to study more than other students. This, too, casts doubt on the theory that increased wealth and advantage have caused lower study time. Another theory is that the opposite has occurred, and students feel poorer due to tuition increases: in response to a perceived increase in the cost of college, students could be working more and studying less. But we have already seen that students are studying less even when work choices are held constant. In other words, students do not appear to be studying less to work more. Thus, neither of these human-capital explanations seems very convincing.
Another theory is that some components of leisure are activities that build human capital and that today’s students are engaged in more of these types of activities, such as volunteer work. Though we do not have the breakdown for leisure activities by subcategory in the early data sets, it does not look as though today’s students are spending much time on this activity. Students in the post-2000 era spend about two hours per week on volunteer work. (By contrast, students in 2006 in the University of California system spent 11.4 hours per week playing on their computers “for fun” – a category of leisure that would not have existed in 1961.) We see little evidence that volunteer work or other worklike leisure activities account for the decline in study time.
An alternative to the human-capital explanations is that students acquire a degree for the signal it sends to future employers, regardless of whether they have learned anything. It has been documented that differences in student ability between colleges have increased over time, while differences in student ability within colleges have decreased. In other words, colleges differ more from one another, whereas students in a given college differ less from one another, than they once did. In the past, then, some students may have worked hard to signal they were high-ability types, relative to the other students in their college. But if students within a given college are now of similar ability, grades or rankings may now lack content as a signal. Perhaps there is no longer as great a reward for students distinguishing themselves in college because employers learn most of what they need to know from the name of a student’s alma mater.
Research on hiring decisions adds support for this explanation: studies have found that employers have come to rely less on college grades in hiring decisions in recent years. Also, students appear to put more time than they once did into preparing for college entrance exams, tailoring their high school resumes for the purpose of college admission, hiring college admissions consultants, and filling out their college applications. Consistent with the above explanation, students seem to be allocating more time toward distinguishing themselves from their competitors to get into a good college, but less time distinguishing themselves academically from their college classmates once they get there.
We have discussed only a few of many possible explanations of why students may be demanding more leisure and fewer study hours. Based on the data, we are not able to prove conclusively which one – if any – is right. As educators, we remain somewhat puzzled by students’ apparent demand for leisure and the reduction in learning that this demand seems to entail.
Should we be alarmed by the study-time decline? The answer depends on whether studying is an important input to the production of knowledge, skills, and human capital. There is strong empirical evidence to this effect. Ralph Stinebrickner and Todd R. Stinebrickner show that randomly induced decreases in study time of about forty minutes per day produce a decrease in student GPAs of 0.24 points. Thus, studying is clearly related to knowledge or learning, as captured by grades.
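To put the Stinebrickner estimate in the weekly units used elsewhere in this Outlook, the forty-minutes-per-day reduction can be converted to hours per week. The linearity implicit in the per-hour figure is our simplifying assumption for illustration, not a claim made by the study.

```python
# Purely illustrative unit conversion of the Stinebrickner estimate.
# Assumes (our assumption) a roughly linear effect of study time on GPA.

minutes_per_day = 40
hours_per_week = minutes_per_day * 7 / 60   # about 4.67 hours/week
gpa_drop = 0.24                             # estimated GPA decrease

gpa_per_weekly_hour = gpa_drop / hours_per_week
print(round(hours_per_week, 2))       # 4.67
print(round(gpa_per_weekly_hour, 3))  # roughly 0.05 GPA points per weekly hour
```

Even without extrapolating beyond the study’s range, the conversion makes clear that the estimated effect operates at the scale of a few weekly hours, well within the ten-hour decline documented here.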
A more compelling question is whether study time is a good predictor of productivity in the long run. Some of the longitudinal data cited above bear on this question directly. The NLSY79 includes data on time use in college and long-run wages, allowing us to combine time-use data from students who were in college in 1981 with subsequent wage data for these students at two-year intervals from 1986 to 2004. We find that post-college wages are positively correlated with study time in college. The increase in wages associated with studying is small in the early post-college years, but it grows over time, becoming large and statistically significant in the later years. By 2004, one standard deviation in hours studied in 1981 is associated with a wage gain of 8.8 percent. We do not claim to have proved a causal effect, but we conclude – consistent with common sense and the intuitions of educators – that increased effort in college is associated with increased productivity later in life.
If one believes that declining study time signifies declining acquisition of human capital, as suggested by the evidence here, then the study-time trend is a serious problem. Human capital is extremely important, both for the individuals who acquire it and for the nation as a whole. Evidence indicates that increases in the human capital of the workforce accounted for most of the economic growth in the United States over the twentieth century.
On the plus side, declining study time also implies increased access to college because it makes college more affordable. Returns from a college degree remain high, but because students need to invest less time per week to earn a degree, college attendance now requires a much smaller sacrifice in terms of lost wages. This makes college more affordable to more people. The common perception that college is becoming less affordable ignores this apparent reduction in opportunity cost. Our evidence indicates that for most people (that is, those who choose public institutions) college is actually cheaper now than it was in 1961. The savings in time cost (based on the average wages for workers with a high school degree) more than compensates for rising tuition. Though it may be good news that college is cheaper than most people think, this appears to be a byproduct of lowering standards. We would question whether this is the optimal strategy for making college more affordable.
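The opportunity-cost comparison in the preceding paragraph can be sketched with a simple accounting identity. All of the specific numbers below are hypothetical, chosen only to illustrate the comparison the text describes; they are not the authors’ estimates.

```python
# Sketch of the opportunity-cost argument, with hypothetical inputs.

def net_cost_change(tuition_increase, hours_saved_per_week,
                    weeks_per_year, hs_hourly_wage):
    """Change in the annual cost of college: positive means college got
    more expensive on net; negative means the time savings dominate."""
    time_savings = hours_saved_per_week * weeks_per_year * hs_hourly_wage
    return tuition_increase - time_savings

# Hypothetical illustration: a $2,000 annual tuition increase versus ten
# fewer study hours per week over a 30-week academic year, valued at a
# $10/hour high-school wage.
change = net_cost_change(2000, 10, 30, 10)
print(change)  # -1000: in this example the time savings outweigh tuition
```

The sign of the result depends entirely on the inputs; the authors’ claim is that, for typical public-institution students, the time-savings term has in fact grown large enough to outweigh tuition increases.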
We have argued that academic effort is an important input to the production of skills and human capital, but whether or not student effort matters, the pattern in the data is clear. Postsecondary institutions in the United States are falling short of their self-stated standard for academic time investment, and the amount they fall short by has quadrupled over time. We submit that if academic effort is, in fact, a crucial input to the production of knowledge, and eliciting such effort is an important part of the university’s mission, then this widespread deterioration of the standard for student effort demands attention and considered action from all who have a stake in the quality of higher education in the United States.
1) For convenience, we will refer to the multiyear samples by their middle year.
2) For detailed information on these data sets, see Philip Babcock and Mindy Marks, “The Falling Time Cost of College: Evidence from Half a Century of Time-Use Data,” Review of Economics and Statistics (forthcoming). The additional data sets include Americans’ Use of Time (1965), Time Use in Economic and Social Accounts (1975), Americans’ Use of Time (1985), the U.S. Bureau of Labor Statistics’ American Time Use Survey (2003), and several early surveys from the 1920s and 1930s.
3) Seymour Sudman, Norman Bradburn, and Norbert Schwarz, Thinking about Answers: The Application of Cognitive Processes to Survey Methodology (San Francisco: Jossey-Bass, 1996).
4) Specifically, we administered surveys to four large classes of students at a major public university in California. For each survey referenced, we created a survey instrument that contained a verbatim re-creation of the study-time question from that survey. The undergraduates were then randomly assigned to a different question wording, so any significant differences across the treatment groups were attributable to differences in survey wording. This allowed us to estimate the specific effects of differences in wording.
5) Not only did study time fall when work choice was held constant, but our best evidence indicates that time allocated toward leisure increased by about nine hours per week between 1961 and the 2000s (see Philip Babcock and Mindy Marks, “The Falling Time Cost of College: Evidence from Half a Century of Time Use Data”).
6) Formulations of this standard can be found in Alfred G. Goldsmith and C. C. Crawford, “How College Students Spend Their Time,” School and Society 27 (1928): 399-402; Margaret F. Lorimer, “How Much Is a Credit Hour? A Plea for Clarification,” Journal of Higher Education 33, no. 6 (1962): 302-6; and George Kuh, “How Are We Doing? Tracking the Quality of the Undergraduate Experience, 1960s to the Present,” Review of Higher Education 22, no. 2 (1999): 99-119. Up-to-the-minute examples of the effort standard are posted on the official websites (URLs available upon request) of numerous colleges, including Auburn University, Pennsylvania State University, Ohio State University, Purdue University, North Carolina State University, University of California, University of Michigan, University of Mississippi, and University of New Hampshire.
7) Richard Hersch and John Merrow, Declining by Degrees: Higher Education at Risk (New York: Palgrave Macmillan, 2005).
8) Philip Babcock, “Real Costs of Nominal Grade Inflation? New Evidence from Student Course Evaluations,” Economic Inquiry (forthcoming).
9) Steven Brint and Allison M. Cantwell, Undergraduate Time Use and Academic Outcomes: Results from UCUES 2006 (Berkeley, CA: Center for Studies in Higher Education, University of California-Berkeley, 2008), available at http://cshe.berkeley.edu/publications/docs/ROPS-Brint-TimeUse-9-24-08.pdf (accessed July 23, 2010).
10) Caroline Hoxby, “The Effects of Geographic Integration and Increasing Competition in the Market for College Education” (mimeo, Harvard University, 2000).
11) Henry Rosovsky and Matthew Hartley, Evaluation and the Academy: Are We Doing the Right Thing? (Cambridge, MA: American Academy of Arts and Sciences, 2002).
12) See, for example, Alex Williams, “Lost Summer for the College-Bound,” New York Times, June 4, 2006.
13) Ralph Stinebrickner and Todd R. Stinebrickner, “The Causal Effect of Studying on Academic Performance,” B. E. Journal of Economic Analysis and Policy 8, no. 1 (2008). These decreases in study time were associated with having been randomly assigned a roommate who had an Xbox.
14) Detailed results from these regressions are available from the authors upon request.
15) J. Bradford DeLong, Claudia Goldin, and Lawrence F. Katz, “Sustaining U.S. Economic Growth,” in Agenda for the Nation, ed. H. Aaron et al. (Washington, DC: Brookings Institution, 2003), 17-60.
Philip Babcock is an assistant professor at the University of California-Santa Barbara. Mindy Marks is an assistant professor at the University of California-Riverside.
This article originally appeared in AEI Education Outlook No. 7. Reprinted with the permission of the American Enterprise Institute for Public Policy Research, Washington, D.C.