
IPEDS and the trouble with student metrics in the US

The IPEDS education data surveys carry great weight in the US higher education system, but they are not inclusive enough and thus no longer fit for purpose, says Elizabeth Harris

7 Jan 2022
The higher education sector in the US is using an antiquated system of judging student metrics

Created in partnership with

Colorado State University Global

We’re in the midst of the most magical time of year for the institutional research community. That’s right, the Integrated Postsecondary Education Data System (IPEDS) surveys have been released for the winter and spring collections. For non-traditional, four-year institutions, this heralds the Sisyphean task of stuffing square pegs into round holes.

IPEDS is a series of 12 surveys that schools are required to answer. It’s considered the “primary source for information on US colleges, universities and technical and vocational institutions” and contains metrics such as the number of degrees awarded, enrolment figures, and retention and graduation rates, which then feed into almost every major accreditation and ranking.

Under the Higher Education Act, institutions are required to include this information on their websites, and it has become an important factor for students assessing colleges because the US Department of Education College Scorecard pulls directly from IPEDS data.

This is critical because the College Scorecard is a tool prospective students use to search for and compare colleges that are the right fit for them. Consequently, some institutions, particularly online universities, that report lower graduation or retention rates to IPEDS are either misrepresented or, worse, have their data missing entirely. They may therefore not be viewed favourably, or at all, by prospective students who could actually be a natural fit and excel in their programmes.

The metrics required for IPEDS do not capture non-traditional students in meaningful ways because they rely heavily on first-time students. This can have very negative consequences for online schools such as ours – Colorado State University Global – and others that serve older, non-first-time, often part-time adult students, because those students are poorly represented in several of the metrics. While IPEDS has improved its representation of part-time students and of students who took college-level classes at other institutions in some metrics, such as graduation rates, fall-to-fall retention remains on the Scorecard and still counts only first-time students.

Indeed, fall-to-fall retention is the holy grail of metrics; IPEDS first collected it in 2003 for first-time, degree-seeking undergraduates. It tracks the percentage of students who return to their school after their first year, and it is reported for both full-time and part-time students who have not attended college prior to enrolling at the reporting institution. It’s designed to show whether students are progressing through their education as expected, because students who persist to their second year are more likely to stay in school and, in turn, graduate.
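
As a rough illustration of why this definition matters, here is a minimal Python sketch of the cohort logic described above. The field names ("first_time", "returned_next_fall") and the sample figures are hypothetical, not how IPEDS actually ingests data:

```python
# Minimal sketch of fall-to-fall retention as defined above.
# Field names and figures are hypothetical, for illustration only.

def fall_to_fall_retention(fall_cohort):
    """Share of first-time, degree-seeking entrants who return the next fall."""
    # Only first-time students enter the calculation; transfer and
    # returning adult learners are excluded from numerator and denominator.
    first_time = [s for s in fall_cohort if s["first_time"]]
    if not first_time:
        return None  # no first-time entrants: the metric simply vanishes
    returned = sum(1 for s in first_time if s["returned_next_fall"])
    return returned / len(first_time)

# A hypothetical online university: 1,000 students, only 10 of them first-time.
cohort = (
    [{"first_time": True, "returned_next_fall": i < 6} for i in range(10)]
    + [{"first_time": False, "returned_next_fall": True} for _ in range(990)]
)
print(fall_to_fall_retention(cohort))  # 0.6, computed from just 10 of 1,000 students
```

In this sketch, 990 successfully retained transfer students have no effect on the reported rate, which rests entirely on the 10 first-time entrants.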

The landscape of higher education was, of course, a bit different in 2003, when only 15.6 per cent of students were enrolled in distance education courses. By 2019, according to National Center for Education Statistics data, 36.6 per cent of students were enrolled in distance education courses. Students who take these courses are often “non-traditional” learners, such as those who have previous college credits and come to an online campus to complete their degree. However, those students are not included in this metric because they do not meet the definition of a first-time student.

As such, the retention rates for institutions with small populations of first-time students are highly volatile, because they are calculated from such small cohorts: an institution that enrols only ten first-time students sees its retention rate swing by ten percentage points for every single student who leaves. Additionally, at online schools, classes do not begin only in the fall; start dates are dotted throughout the year to accommodate students’ schedules and needs.

However, retention remains an important metric for the bodies that hold universities accountable. In addition to appearing in the publicly available College Navigator data, it is collected as a component of the Common Data Set, an agreed-upon list of data points that underpins college rankings, including those from College Board, a non-profit that provides college readiness programmes; US News & World Report, a recognised media leader in college rankings; and Peterson’s, a leading educational services company.

Rankings help potential students select where to go to university, but retention is also a metric common to accrediting bodies, which ensure that colleges meet quality standards in their academic programmes and provide public accountability.

As a result, institutions with many non-traditional students who have previous college credits are forced to report much lower retention rates to IPEDS, which does not accurately capture their student body’s success and progression to degree completion. These IPEDS metrics should not dictate our understanding of retention or persistence through a programme. Alternative measures can be used internally to track retention and successful progression through an academic programme, rather than relying only on IPEDS. But those internal metrics matter little if prospective students and accreditors still rely on measures that are antiquated and fail to represent all students.

I recommend that IPEDS swiftly take steps towards better and more inclusive reporting by reimagining data collection and putting an end to IPEDS as we know it. Currently, data are supplied not in their raw form but as answers to the questions in the surveys. This system made sense in 1993, when it was initially designed, but institutions now supply data to the government in much more fluid ways.

For example, National Science Foundation grants already collect raw data, and the data requested for these grants can answer every question asked on IPEDS. If institutions supplied raw data instead of filling out long surveys, the government could present better, more agile and more inclusive data on retention and persistence to potential students, ensuring they select the schools that best meet their needs. It’s long past time that IPEDS entered the “new” century.

Elizabeth Harris is director of institutional effectiveness at Colorado State University Global. She has worked in institutional research for six years and in assessment and evaluation for the past decade.
