Thursday, May 31, 2012

Weekly wrap up

A very thoughtful week - discussing and digesting the nature and purposes of data in a school environment. Also a number of interruptions - from PD and a study tour to Bendigo in weeks past, to various meetings with a variety of people (Monash Security, a Swiss International School, Samsung, a Monash eSolutions strategic consultant, Cisco wireless engineers, and then some).

Any scheduled meeting inherently affects what I do on that day. I won't get started on something I know could take hours if a meeting is coming up, even though I'd likely get interrupted and not spend hours on it anyway. A little mental barrier to overcome, or work with.

Anyway, on to the thoughtful stuff: system control vs individual control. I've been looking at this through the lens of data collection, and more specifically, assessment. The more control put into the system, the more useful the data becomes at a systemic level. Certain tasks can be automated and taken off the individual's plate. Decisions can be removed from the individual (this can be a good thing as well as a bad one). Things get consistent - from a student's (and administrator's) point of view this can be a great thing. The flip side is that systems bring momentum, and become hard to change. Individuals can experiment and try new things when there are fewer system constraints.

Our school has long been down the individual (or small group) end of the scale. Teachers and subject groups can use what they want, and adapt how they see fit. Play around with stuff, and keep what works. The drawback is in sharing what works, and in building a useful body of student data - with no consistent performance measures, this is difficult to achieve. The next question is to what end this data is being collected and used - I can see why a typical school would do this, but JMSS ain't no typical school. I also need to think about how badges can fit into this equation.

Wednesday, May 30, 2012

Assessment indicators

In our discussions on educational data analysis and reporting, we've been looking at what data to collect and display, and what to compare it against. A student at JMSS will study some subjects in Year 10. Those subjects have assessment tasks, and students are also given VELS scores. Some subjects carry across both semesters, some don't. The student will study a completely different set of subjects in Year 11. Some will have a certain amount of carry-through, many won't. These subjects have assessment tasks, and VCE outcomes, which are pass/fail. Year 12 is similar, but slightly different again.

Any piece of data, to be meaningful, requires a context. It needs more data around it. That can be provided by time, by peer data, by goals, and more. The effect of our situation is that there is no meaningful data to compare over time, nor for the student to set goals against. We could aggregate certain assessment data at the end of each semester and use that, but a semester is a long time, and subjects are quite diverse, as are the assessments and marking practices even within one subject group. We could compare a student's datum against the cohort (e.g. place that student on a box-and-whisker plot), but that arguably doesn't achieve much - it's basically a league table, and that sort of competition can be counter-productive.
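(As an aside, that cohort comparison is trivial to produce. A minimal sketch in Python/matplotlib, with completely made-up marks, would look something like this:)

```python
# Minimal sketch with made-up data: one student's mark overlaid on a
# box-and-whisker plot of the cohort's marks for the same task.
import matplotlib.pyplot as plt

cohort_marks = [54, 61, 63, 67, 70, 72, 74, 75, 78, 81, 85, 90]  # invented cohort marks
student_mark = 72                                                # invented student mark

fig, ax = plt.subplots(figsize=(5, 2.5))
ax.boxplot(cohort_marks, vert=False, widths=0.5)   # cohort distribution
ax.scatter([student_mark], [1], color="red", zorder=3, label="this student")
ax.set_xlabel("Task mark (%)")
ax.set_yticks([])
ax.legend(loc="lower right")
plt.tight_layout()
plt.show()
```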

This is where VELS is really quite nice. It provides a consistent set of indicators across year levels. Perhaps we need to extend the collection of VELS data beyond Year 10, or develop our own set of indicators. This would increase teachers' assessment and reporting workload, and thus would need to be extremely well thought out - and, in fact, integrated into existing assessment practice. If every assessment task used a rubric that related directly back to these indicators, then the mere act of marking the assessment would cover that extra work. Developing these indicators would be no mean feat - curriculum frameworks take time to develop. But informal discussions suggest at least two faculties have already been considering such a set of indicators - maybe it's worth investigating.
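To make the rubric idea concrete, here's a minimal sketch (the indicator codes, criteria and marks are entirely invented): if each rubric row carries an indicator code, then marking the task produces indicator-level scores as a by-product.

```python
# Sketch only: each rubric criterion on an assessment task is tagged with an
# indicator code, so marking the task also yields indicator-level data.
# The indicator codes, criteria and marks below are invented for illustration.
from collections import defaultdict

rubric = [
    # (criterion, indicator code, mark awarded, mark available)
    ("Designs a controlled experiment", "SCI-INQ-1",  4, 5),
    ("Analyses quantitative data",      "SCI-DATA-2", 3, 5),
    ("Communicates findings clearly",   "COMM-3",     5, 5),
]

def indicator_summary(rubric_rows):
    """Roll rubric marks up into a 0-1 score per indicator."""
    awarded = defaultdict(int)
    available = defaultdict(int)
    for _criterion, indicator, mark, out_of in rubric_rows:
        awarded[indicator] += mark
        available[indicator] += out_of
    return {ind: awarded[ind] / available[ind] for ind in awarded}

print(indicator_summary(rubric))
# {'SCI-INQ-1': 0.8, 'SCI-DATA-2': 0.6, 'COMM-3': 1.0}
```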

It is worth asking, if we can't use assessment data meaningfully, what is the purpose of that assessment?

(The data discussed here is what is directly linked to assessment items - i.e. markable things. We also have monthly progress reports, whose data we can use more meaningfully.)

Sunday, May 27, 2012

The School and the Software Developer

JMSS have much of their eServices in the cloud, or running locally but created and maintained by an external provider. Basically, most things are commodity systems.

Department Systems

The Department tries to provide all the systems a school would need. Unfortunately, being a large bureaucracy, it doesn't do a particularly good job of fitting any one school's needs. Nonetheless, there are certain mandated IT systems in place, which we have to use, or at least interface with. Smaller schools tend to rely on these systems for financial reasons.

Commodity systems

Things not covered by the department, or not done well, are often outsourced. This might be a service in the cloud, a service hosted locally but maintained remotely, or one hosted locally with support provided. Some of these fill the exact niche required. They tend to be written by organisations specialising in that type of software or service, so they have experience with multiple clients to draw on. Some fit well enough, or provide some flexibility in their implementation. Others fit well enough at the time of commissioning, but can't adapt to the school's changing circumstances or requirements.

Home-grown systems

Where an outsourced system isn't available, is too expensive, doesn't fit, or isn't agile enough, schools can implement home-grown systems. This obviously requires a developer (or team thereof). While a home-grown system can be tailored exactly to the school's requirements (even moving targets), the big downside is maintainability. Programmers don't come cheap, and sysadmins who try their hand at programming may not be sufficiently skilled in software lifecycle management - documentation, maintenance, scaling. When the person who wrote the software leaves - what happens then?

This leads us to the role of the programmer in the school. There are three options:
  1. Settle for a less-than-ideal solution by using a department or commodity system.
  2. Work closely with a company to create/modify and maintain a custom system.
  3. Employ a developer (or team thereof).

While option 1 might be the only way for some schools, I shall not seriously consider it here. Option 2 is a viable possibility, but most software development organisations work for business (that being where the money is). They don't necessarily understand the needs of an education environment. Further, this option requires someone at the school to create a watertight specification of the service required - and in doing that, you're half-way to option 3. Before embarking on a school-driven software project, however, a spec is certainly needed, and processes need to be in place: source code management, a documentation system and conventions, and a succession plan. My preference is for small components that work with other systems (be they home-grown, commodity, or department) - something like the sketch below. Modularity leads to flexibility, and agility.
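To illustrate what I mean by a small component, here's a minimal sketch (the file name, and the assumption that another system hands us a CSV export, are mine): a translator this size is cheap to document, hand over, or throw away, which is exactly the point of keeping components small.

```python
# Sketch of a deliberately small component: read a CSV export from one system
# and emit JSON that another system (or script) can consume.
# The file name "compass_export.csv" is hypothetical - any CSV export would do.
import csv
import json
from pathlib import Path

def export_to_json(csv_path):
    """Convert a CSV export into a JSON array of records."""
    with open(csv_path, newline="") as f:
        records = list(csv.DictReader(f))
    return json.dumps(records, indent=2)

if __name__ == "__main__":
    export_path = Path("compass_export.csv")   # hypothetical export file
    if export_path.exists():
        print(export_to_json(export_path))
    else:
        print("No export found - drop a CSV export here to try it.")
```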

Wednesday, May 2, 2012

On multiple systems

Before I get started, a comic: xkcd #927, "Standards" (source: xkcd.com/927).

Not quite the same, but very similar to the multitude of ICT systems we use at JMSS. The Department of Education provides the school with CASES (the central school admin system), which does some things, but falls behind in access and UI. So some smart people developed Compass. In an earlier day and age, a school's techies might have developed their own database systems to keep track of the school's data (attendance, timetabling, reporting, finances, etc.). But in-house developed software requires maintenance, which might not be feasible in small organisations like schools. I understand that Compass grew out of such a system, but the development and maintenance happen elsewhere. Great.

Only, rather than replacing all of the existing systems, it adds to them. While it has messaging capabilities, people still use email. There is a schedule, but only for school events, and it doesn't gel with our calendaring system quite as nicely as you'd like. Purchase orders still require a print-out. We use a separate timetable system, and reporting has been put on hold. Which is what I'm leading in to.

We do semester reports using Accelerus - a dedicated system for assessment data reporting and analysis. But interim reports go to Compass. Compass doesn't do much with them except present them to parents and students, so one of our staff members developed an Excel sheet to present this data in a colourful format that can be filtered by student, tute group, etc., so tutors can get an idea of where their cohort is at. We've now cut off parent access to the interim reports, and are using them for school use only. So:

  Interim report data entry: Compass (web based).
  Interim report analysis: Excel.
  Semester report data entry: Accelerus.
  Semester report analysis: Accelerus.
  Semester report parent viewing: Compass.

Is that too much? Should there be one system to rule them all, or does diversity bring resilience? If a teacher has to enter three different spaces to gather the data on a student/class/subject/etc., they might start forgetting which is which. The systems might have different representations, or not 'link' together. Or perhaps I'm trying to dumb this down too much. It is a complicated problem. And these aren't even all of the relevant data we have on each student.
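For the curious, here is roughly what that Excel analysis amounts to, sketched in pandas. The file names and column headings are assumptions on my part - I'm only assuming each system can produce some kind of CSV export, not describing the real ones.

```python
# Rough sketch only: merge hypothetical CSV exports from Compass (interim
# reports) and Accelerus (semester results) into one view per student.
# All file names and column names below are assumptions, not the real exports.
import pandas as pd

interim = pd.read_csv("compass_interim.csv")     # assumed columns: StudentID, TuteGroup, Subject, Indicator
semester = pd.read_csv("accelerus_results.csv")  # assumed columns: StudentID, Subject, Grade

combined = interim.merge(semester, on=["StudentID", "Subject"], how="outer")

# One tute group, one row per student, subjects across the columns -
# roughly what the colourful Excel sheet gives the tutors.
tute = combined[combined["TuteGroup"] == "10A"]  # "10A" is an invented tute group
print(tute.pivot_table(index="StudentID", columns="Subject",
                       values="Indicator", aggfunc="first"))
```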

The same goes for any other system. As soon as 'one system to rule them all' is implemented, another requirement arises that can't or won't be implemented in that system. Someone will come along and implement it independently. The two might talk to some extent, but there are now two systems. Big systems have large momentum, and don't respond quickly to user needs. Small, independent, communicating systems (The Unix Way) help, but with heavily interlinked data, how do they fit in?

Tuesday, May 1, 2012

A meta-reflection

Most of the blogs I read are somewhat widely read. The blogger writes for an audience: an audience that is mostly unknown, perhaps loosely known through some comments/social systems, and maybe a few well-known readers. The style of writing is thus quite different from writing for oneself, or for a small, well-known audience - among other things, context is a major consideration. How much background info does one need to spell out? Such reflection is certainly useful for later reading by oneself, and may at some later point attract readers, so in one sense, one should probably write as though one has an unknown audience - even if that unknown audience is one's future self. Still, it feels awkward to be writing to many, when one knows that in fact very few (if any) people are actually reading it now.

I usually start a reflection spree with some meta-reflection such as this; some actual content will be forthcoming.