Late last week, after five years of deliberation and debate, the U.S. Department of Education released new regulations to hold the nation’s teacher preparation programs accountable for the quality of their graduates. The controversial regulations require states to track the performance of new teachers after they enter the classroom, use the data to rank programs based on how well they prepared those teachers, and then publicly report the findings. For a sector unaccustomed to much accountability, this is radical.

Under these new rules, programs that aren’t effective can lose access to some funding, and prospective teachers and the schools that hire them will have the information they need to make informed decisions. In theory, this combination of market forces and financial incentives might push preparation programs to improve.

Yet the track record on accountability measures for teacher preparation programs is not encouraging. For decades, schools of education at colleges and universities have worked with sympathetic legislators, advocates and policymakers to undermine all manner of accountability efforts and reforms.

That’s why the most promising thing about these regulations isn’t their specifics or sharp edges, which will inevitably be softened and aren’t even scheduled to be fully implemented until 2022. Instead, these regulations finally set the stage for the systematic collection of information that can give preparation programs and policymakers a roadmap for improvement.

The craziest aspect of teacher preparation isn’t even the lack of accountability. Rather, it’s that despite a century of research and theory, hundreds of millions of dollars in public money spent annually, enormous costs and opportunity costs for prospective teachers, and an array of accrediting bodies and bureaucracies, we still know depressingly little about how to prepare an effective teacher or how to design a high-quality teacher preparation program.

The dearth of proven methods is not for lack of trying. For much of the history of teacher preparation, programs and policymakers assumed they could mix the perfect cocktail of inputs, like the number and content of courses and student teaching hours, to guarantee program quality. This approach remains more or less the policy status quo today. Yet research finds little evidence linking these inputs to quality. In fact, the differences in performance among candidates within any given route into teaching are greater than the overall differences between the various types of preparation.

More recently, some analysts and policymakers have advocated a different strategy: outcomes-based research and accountability. The new teacher preparation regulations are an example of this approach. Instead of prescribing what a teacher prep program should look like, it assumes that data on how program completers perform in the classroom will reveal which preparation programs are best.

So far, that hasn’t happened where it’s been tried. Existing outcomes-based research can’t differentiate among many programs based on how well teachers do after their training. Most programs produce completers who are broadly similar, at least in terms of effectiveness in the classroom.

It sounds hopeless, yes, but it’s actually an exciting time for teacher preparation. The one thing combatants on all sides of the debate agree on is that teacher preparation needs genuine improvement. A new analysis from Bellwether Education Partners shows the regulations provide an opportunity for a better research agenda, one that tests which components of program design are effective: how effective, for whom, and under what circumstances. Those are the kinds of nuanced questions lost in today’s debate over whether to have accountability, or teacher preparation itself, at all.

Doing this type of research in a meaningful way will require genuine partnerships among teacher preparation programs, states and researchers. Even more challenging: It will require not just the generally shared understanding that the state of teacher preparation is unacceptable, but the harder acknowledgement that we know far less about exactly how to fix it than many advocates, experts and stakeholders would have you believe.