Wednesday, May 30, 2012

Assessment indicators

In our discussions on educational data analysis and reporting, we've been looking at what data to collect and display, and what to compare it against. A student at JMSS studies a set of subjects in Year 10. These have assessment tasks, and are also given VELS scores. Some subjects carry across both semesters; some don't. The student then studies a completely different set of subjects in Year 11. Some have a certain amount of carry-through; many don't. These subjects have assessment tasks and VCE outcomes, which are pass/fail. Year 12 is similar, but slightly different again.

Any piece of data, to be meaningful, requires context: it needs more data around it. That context can be provided by time, by peer data, by goals, and more. The effect of our situation is that there is no meaningful data to compare over time, nor for the student to set goals against. We could aggregate certain assessment data at the end of each semester and use that, but a semester is a long time, and subjects are quite diverse, as are the assessments and marking practices even within one subject group. We could compare a student's datum against the cohort (e.g. place that student on a box-and-whisker plot), but that arguably doesn't achieve much - it's basically a league table, and this sort of competition can be counter-productive.
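To make that cohort comparison concrete, here is a minimal sketch in Python, assuming scores are simple percentages; the cohort and student values are made up for illustration. It places one student's result on a box-and-whisker plot of the cohort.

```python
# A sketch of the cohort comparison idea: one student's score marked on a
# box-and-whisker plot of the whole cohort. All numbers here are invented.
import matplotlib.pyplot as plt

cohort_scores = [55, 62, 64, 68, 71, 73, 75, 78, 80, 84, 91]  # hypothetical cohort
student_score = 73                                            # hypothetical student

fig, ax = plt.subplots()
ax.boxplot(cohort_scores, vert=False)             # the cohort distribution
ax.plot(student_score, 1, "ro", label="student")  # the single box sits at y=1
ax.set_xlabel("Assessment score (%)")
ax.set_yticks([])
ax.legend()
plt.show()
```

This shows where a student sits relative to peers, but little more - it is the league-table view, just drawn as a plot.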

This is where VELS is really quite nice: it provides a consistent set of indicators across year levels. Perhaps we need to extend collection of VELS data beyond Year 10, or develop our own set of indicators. This would increase teachers' assessment and reporting workload, and so would need to be extremely well thought out, and integrated into existing practice. If every assessment task used a rubric that related directly back to these indicators, then the mere act of marking the assessment would cover the extra work. Developing these indicators would be no mean feat - curriculum frameworks take time to develop. But informal discussions suggest at least two faculties have already been considering such a set of indicators - maybe it's worth investigating.
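As a rough illustration of how that integration might work, here is a sketch in which each rubric criterion maps to an indicator code, so marking the rubric produces indicator scores as a side effect. The criteria and codes are invented; real ones would come from whatever framework the faculties develop.

```python
# A sketch of rubric-to-indicator mapping. Criterion names and indicator
# codes are hypothetical, purely for illustration.
RUBRIC = {
    "explains method":       "SCI.INQ.1",
    "interprets results":    "SCI.INQ.2",
    "communicates findings": "COMM.WRI.1",
}

def indicator_scores(criterion_marks):
    """Turn a marked rubric into indicator scores."""
    scores = {}
    for criterion, mark in criterion_marks.items():
        scores.setdefault(RUBRIC[criterion], []).append(mark)
    # Average where several criteria feed the same indicator.
    return {ind: sum(marks) / len(marks) for ind, marks in scores.items()}

print(indicator_scores({"explains method": 4,
                        "interprets results": 3,
                        "communicates findings": 5}))
# {'SCI.INQ.1': 4.0, 'SCI.INQ.2': 3.0, 'COMM.WRI.1': 5.0}
```

The point is that the teacher only ever marks the rubric; the indicator data falls out automatically, so the extra reporting workload stays near zero.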

It is worth asking: if we can't use assessment data meaningfully, what is the purpose of that assessment?

(The data discussed here is what is directly linked to assessment items, i.e. markable things. We also have monthly progress reports, whose data we can use more meaningfully.)
