By Jonathan M. Turk & Nicholas Hillman
Since taking office in 2009, the Obama administration has made expanding college access a major goal. While the administration has taken a multifaceted approach to increasing the number of Americans with a postsecondary credential, a key focus has been on informing the college choice process. Specifically, the administration has sought to ensure that better information about colleges and universities reaches students and their families—the most recent example being the 2015 relaunch of the College Scorecard—while also holding institutions more accountable for student outcomes.
We agree generally that the college choice process can be aided by accessible and meaningful data on institutions. But for many students, the decision of whether to attend college, let alone which college, is likely determined more by where they live than by graduation rates, programs offered or even salary after completion. The importance of place therefore needs to be emphasized not only in the college choice process, but also in the broader discussion of higher education accountability.
The goals of consumer information tools such as the College Scorecard, College Navigator and the Financial Aid Shopping Sheet are twofold. First, these tools are designed to provide students and families with more detailed information about colleges and universities so that they may make a decision that maximizes the student’s chances of success. The second goal is to foster greater accountability within the higher education market. After all, the revamped College Scorecard, version 2.0, was born out of the administration’s abandoned plans for an institutional rating system.
The utility of any consumer information tool in increasing college access and success and institutional accountability rests on a set of major assumptions about college student choice. The first assumption is that students and families actually use such tools when selecting a college. Many do not. Even for those who do, the next assumption is that the tool not only contains the information students and families are looking for, but also that they can accurately digest that information, despite the data-quality issues and other limitations that can hinder accessibility.
Ultimately, the primary assumption is that when comparable information about a set of colleges and universities is presented to students—information about graduation rates, costs and available programs—they will select a higher-performing institution rather than a lower-performing one. In effect, students are expected to select an institution where they perceive their chances of success are maximized.
If all of these assumptions hold true, accountability is achieved: higher-performing institutions are rewarded with increased enrollment and tuition revenue, while lower-performing institutions are driven to improve or face dwindling enrollment and financial distress.
But what if these assumptions prove to be false or only partially true? A recent study (Hurwitz & Smith, 2016) helps us answer this question by examining whether the release of College Scorecard data affected where students sent their SAT scores. These “score sends” are a key step in the college application process, so the Scorecard data should theoretically have induced more students to send scores to higher-performing colleges.
But this largely did not happen: releasing more information about graduation rates and costs did not affect score-sending behaviors. Nor did the new data influence score sending among students from high-minority high schools or high schools with large shares of students receiving free or reduced-price lunch. It did, however, encourage students from well-resourced high schools to send their scores to colleges reporting higher median earnings. The study suggests that Scorecard data likely benefits students who are already highly privileged and least likely to make suboptimal college choices.
While information about costs, program availability and completion is important and potentially influential for many students, college choice is a complex, multi-stage process that is also heavily shaped by social, cultural, economic and geographic factors. In fact, for most students, college choice is limited by the number of institutions within 50 miles of their home. And for some students, this reduces college choice to a matter of whether to go, rather than where to go.
As highlighted in the recent ACE report “Education Deserts: The Continued Significance of ‘Place’ in the Twenty-First Century,” place matters. Policymakers must consider the reality that for many students, the market for higher education is not a national or even a state market, but a local one. For reasons such as current employment status, family connections, financial resources and academic preparation, prospective students, particularly post-traditional students, are often restricted to attending a nearby institution.
At face value, this is not necessarily problematic. While there are communities throughout the country that support multiple higher education institutions (e.g., a community college, a public university and a private liberal arts college), there are others where a single community college serves as the only broad-access institution, and still others with no institutions at all. These latter communities, where college opportunities are few or nonexistent, are education deserts. They exist in all regions of the United States; by one estimate, nearly 25.3 million adults call an education desert home. Even students outside of education deserts often find themselves limited to a small number of broad-access institutions.
The prevalence of education deserts requires federal and state policymakers to consider the geography of opportunity as it relates not only to college access and success, but also to the accountability movement. We offer the following recommendations to help achieve the aims of greater college access, student success and increased accountability.
1. Research consumer information tools and their influence on college choices. Lessons from financial aid policy show that simply providing “better information” is less effective than coupling that information with personal guidance, mentorship and coaching (Bettinger & Baker, 2014; Castleman, Page, & Schooley, 2014). Information tools like those mentioned above are likely to have limited impact if they are not part of a broader effort to help make that information useful.
2. Federal policymakers should consider expanding Title III of the Higher Education Act to help institutions build the capacity necessary to improve student retention, degree opportunities and completion rates. Institutions in education deserts, as well as institutions that serve largely first-generation and underserved students, need to be strengthened, not weakened further by limiting the flow of financial support. One of the strongest predictors of timely degree completion is the amount of resources a campus has to serve its students, and so targeting resources to colleges with the least current capacity is likely to yield the greatest results (Bound & Turner, 2007; Bound, Lovenheim, & Turner, 2012).
3. State policymakers should consider the disproportionately negative impact performance-based funding formulas can have on institutions operating in education deserts. Such colleges may require additional resources to meet local needs and improve student outcomes, and states should invest accordingly. Without these investments, colleges might unintentionally become more selective and less diverse in response to performance incentives (Dougherty & Reddy, 2013; Kelchen & Stedrak, 2016; Umbricht, Fernandez, & Ortagus, 2015).
Efforts to provide students and families with new and meaningful data to inform college choice are admirable. However, if policymakers want to improve postsecondary attainment levels and strengthen institutions, simply trying to nudge students to make “better choices” about where to attend is not sufficient. Policymakers also need to consider the supply and capacity of colleges and universities—where they are located, whether they are serving their local communities and the role geography and place play in shaping students’ choices.
References
Bettinger, E. P., & Baker, R. B. (2014). The effects of student coaching: An evaluation of a randomized experiment in student advising. Educational Evaluation and Policy Analysis, 36(1), 3–19. http://doi.org/10.3102/0162373713500523
Bound, J., & Turner, S. (2007). Cohort crowding: How resources affect collegiate attainment. Journal of Public Economics, 91(5–6), 877–899. http://doi.org/10.1016/j.jpubeco.2006.07.006
Bound, J., Lovenheim, M. F., & Turner, S. (2012). Increasing time to baccalaureate degree in the United States. Education Finance and Policy, 7(4), 375–424.
Castleman, B. L., Page, L. C., & Schooley, K. (2014). The forgotten summer: Does the offer of college counseling after high school mitigate summer melt among college-intending, low-income high school graduates? Journal of Policy Analysis and Management, 33(2), 320–344. http://doi.org/10.1002/pam.21743
Dougherty, K. J., & Reddy, V. (2013). Performance funding for higher education: What are the mechanisms? What are the impacts? ASHE Higher Education Report, 39(2).
Hurwitz, M., & Smith, J. (2016). Student responsiveness to earnings data in the College Scorecard. Available at SSRN: http://ssrn.com/abstract=2768157
Kelchen, R., & Stedrak, L. J. (2016). Does performance-based funding affect colleges’ financial priorities? Journal of Education Finance, 41(3), 302–321.
Umbricht, M. R., Fernandez, F., & Ortagus, J. C. (2015). An examination of the (un)intended consequences of performance funding in higher education. Educational Policy.