Thursday, June 17, 2021

How the Service Victoria QR code check-in system can miss out on check-ins

Update #1

I can't replicate the issue today (18/6/21); here's hoping they have fixed it since I recorded the problem on 17/6/21. Interestingly, the last update to the app was on 9/6/21.


Victoria's QR code contact-tracing check-in system, while addressing an important goal, appears to have a relatively big usability flaw (specifically in the Android app) that can result in check-ins being missed. The problem occurs when you scan a QR code with a reader app that is not the Service Vic app (e.g. Google Lens) while having the Service Vic app installed on the phone. I've been able to semi-regularly reproduce this on my own phone, but haven't tried any others, so I've only got a small sample size.

What's the problem?

The Service Vic app will sometimes ignore the current location and instead check you in to your previous location a second time. This occurs if you have the Service Vic app installed on your phone but scan the QR code with a separate QR reader app such as Google Lens.

Steps to reproduce

  1. Ensure that the Service Vic app and a QR code reader are installed. I have only tested with the Google Lens app, but by my understanding of Android, this should be the same with any QR code reader app.
  2. Check in to a location (initially, it doesn't matter whether through the Service Vic app or the QR code reader) - let's call this Location 1.
  3. When the "Check-in successful" page in the Service Vic app appears, do not press the "Done" button - leave it there (possibly as proof to the shop you're entering that you've checked in).
  4. Press the Home button on your phone, to put the Service Vic app into the background.
  5. Go to another check-in location (Location 2).
  6. Open Google Lens and point it at the QR code for Location 2.
  7. Google Lens should now open the Service Vic app; however, the app will still be showing the "Check-in successful" page for Location 1, with no option to check in to Location 2.
  8. Pressing the "Done" button will take you back to the main page of the Service Vic app.
  9. Reviewing where you are currently checked in will show Location 1, while Location 2 has not been logged at all.


Why is it a problem?

Because check-in locations might be skipped, and a chain of transmission could be lost.


Why not just use the Service Vic app to read the QR code?

You could, and this would solve the problem. But people use their phones in different ways, and might want to use a single QR code reader for all of their QR code reading purposes (e.g. if they travel interstate and have to use different state check-in systems, or they are used to the one app they used before there was a unified Service Vic QR code system). And given that the functionality exists within the Service Vic app to accept a QR code (actually, a URL) from another app, this is a valid use case that has already been considered, but implemented incorrectly.


But shouldn't people check their app to make sure they're checked into the right place?

Yes, they should. But some (perhaps many) won't. When you're going shopping and checking into a new place every couple of minutes, you're probably not going to read the details; you'll just look for the "Check-in successful" page, and you're done. Some people are busy, some people don't notice details, some people aren't good with this technology that's been forced on them, some people can't read well.


What are the technical details?

I haven't delved too deeply into this, but from what I know about Android apps (and it's been a couple of years since I've done any development there), here's what I think is going on:
  • When you check in to a location in Service Victoria, it shows the "Check in successful" page; I think this is implemented as an Android Activity.
  • When you go Home, or to another app, the Service Victoria app remains running in the background, with the "Check in successful" activity at the front.
  • When a QR code reader app finds a valid QR code (in this case, for a URL), it fires an Intent to the rest of the Android system to ask which app is best equipped to handle this particular address.
  • The Service Vic app has previously registered that it's the one for these types of Intents/URLs, so it gets sent the Intent/URL.
  • But the Service Vic app is on the "Check-in successful" activity, which either doesn't know how to deal with the new Intent/URL or ignores it, so it stays on its current page and doesn't register the new check-in location.

Can I see this in action?

Sure, here's a video of me trying to check in to Roti Road, but after scanning the QR code, it goes to "Check-in successful" for the previous location: 



Tuesday, November 6, 2018

Observations of self-paced programming systems for teaching year 8 students

My year 7 and 8 Digital Technologies classes have recently completed a programming course using Grok, which I've also used with year 10s and 11s in the past. It's a relatively straightforward concept - a set of programming challenges/mini-tasks interspersed with descriptions and examples of what you need to know to solve them. The examples are interactive - they let you run the code in-line, and also edit it to see what happens if you change something. Students can learn at their own pace.

I started the (mostly unmotivated) year 8 class on a turtle module, thinking this would give them something less abstract to cling to. Some observations follow:

  • A _lot_ (most) of the students didn't bother reading the explanations and examples, and tried to jump straight to the tasks, whereupon they struggled badly as they hadn't learned what they needed to complete them. One of the main things I needed to do as a teacher was remind students to read the pages before the challenge. I had to do this repeatedly for many students.
  • Partly due to the above, many students didn't get the idea of a repeat/loop construct, preferring to copy/paste or re-type code multiple times rather than use a for loop (see the example after this list). Even after discussion with them, pointing to the pages that step them through this, and showing them how to use such a loop, many still went on (in later exercises) to duplicate code rather than loop.
  • Many students didn't read the error messages from the auto-marking process. Sometimes the error messages gave the wrong line number (e.g. due to unclosed brackets), and sometimes the error messages were unintelligible or hard to see (UI issues on a laptop screen).
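
To make the loop point concrete, here's the kind of contrast involved (a generic square-drawing exercise in Python turtle, not one of Grok's actual tasks) - first the duplicated version many students produced, then the loop version the course teaches:

import turtle

t = turtle.Turtle()

# What many students wrote: the same two lines duplicated four times.
t.forward(100); t.left(90)
t.forward(100); t.left(90)
t.forward(100); t.left(90)
t.forward(100); t.left(90)

# Move aside, then draw the same square with a for loop.
t.penup()
t.goto(150, 0)
t.pendown()
for _ in range(4):
    t.forward(100)
    t.left(90)

turtle.done()
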
From the first two points, one conclusion is that students are just trying to rush through the activities as quickly as possible, which makes me think there's a lack of engagement and interest. While the turtle does make things more appealing to visual learners, I'm thinking it's not enough. Perhaps a more thorough exploration of why we're doing this might also be useful - we did cover this a bit, but perhaps not enough. On the other hand, if not done carefully, this might disengage those students even more - I was hoping that by diving into building things sooner, more students would be motivated.

Another emerging theme is the difficulty students have with independent learning. I have found this over a large section of my teaching career - students don't like, or don't know how to, work through things by themselves. I don't have a great answer to this, short of researching ways to promote it; perhaps gamify the process a bit more?

The year 7 class, being an accelerated class, didn't have quite as many of the above issues, but they were all still present to a lesser extent. It also helps that year 7s are generally more motivated than year 8s and 9s. The year 7s got through a much harder course in Grok.

Next year I will try the BBC Micro:bits for the first programming activity, as these are quite engaging. Another option worth considering is to start with a simpler approach to following instructions - either unplugged (program the teacher) or with a simpler or less abstract application (the LightBot app, or perhaps a simple robot). I might also consider introducing a project that we'll be working towards _before_ starting Grok, so that we have a context and interest for the programming learning. Another thing to try to embed (perhaps less related to anything talked about, but still relevant) is learning how to read and understand code, not just write it.

Monday, August 28, 2017

AzureAD: attributes, groups and roles in SAML applications

Recently while migrating a bunch of things to AzureAD, I entered attribute hell: the attributes required by the relying party (application) don't match the attributes sent from the IDP (AzureAD in this case); the supposed convention of just swapping XML files and having everything work doesn't really hold; and the terminology differs between IDPs, even IDPs from the same vendor (ADFS and AzureAD). But the biggest discovery was how groups and roles work in AzureAD SAML apps. I'll detail some of the backstory here.

So in the old days, you had an application that used AD to authenticate, and also to determine access levels using AD groups. In some cases these groups might already exist (students and staff), and can be used directly. Sometimes they exist, but the application needs a group with a specific name, so you add the group (staff) to another group (appl-staff) and hope the application supports nested groups. Or sometimes they don't exist, so you need to create them as required. Maintenance of these groups can suck. And an application tied into the AD has access to all users and all groups in the AD (perhaps this can be restricted using OU permissions, but that again creates maintenance hassles, and I suspect it is not often actually done).

When first trying to get a particular application rolling as an AzureAD SAML app, I tried to expose a user's group memberships as a SAML attribute. Eventually I managed to do this (it wasn't easy, and required PowerShell, or possibly downloading and uploading a manifest JSON file). But this only provides the group GUIDs, not their names, which turns out to be useless for most cases. So this is where application roles come in. When creating a SAML app registration in AzureAD, you can create roles specific to that application (that are not visible anywhere else), then assign users and groups to those roles. So for a particular application (let's say a library system), you might have the roles SuperAdmin, Librarian, Staff and Student. Then you assign users or groups from AzureAD to those roles. When a SAML claim is processed, those roles will come through in the user.assignedroles claim, which you can map to whatever attribute the application requires (e.g. groupMembership).

This is all really quite sensible and logical.

Except for one thing.

Creating application roles is something of a pain. This seemingly simple thing can't be done through the Azure portal (at least at the time of writing; maybe they'll get to it one day). There are two ways to do it. One is via an application manifest, which the application provider has to provide; however, this assumes that the application provider knows what they're doing, which, in our experience, hasn't been the case. A description of setting this up using an application manifest is available here and here, but I don't have any experience doing it this way as none of the vendors provided one. The way we did it was with the venerable PowerShell. A code snippet, largely borrowed from this repository, follows:

Import-Module AzureAD
Connect-AzureAD

# Build an AppRole object with the given name and description
Function CreateAppRole([string] $Name, [string] $Description)
{
    $appRole = New-Object Microsoft.Open.AzureAD.Model.AppRole
    $appRole.AllowedMemberTypes = New-Object System.Collections.Generic.List[string]
    $appRole.AllowedMemberTypes.Add("User")  # this has to be User; we can still add groups to it
    $appRole.DisplayName = $Name
    $appRole.Id = New-Guid
    $appRole.IsEnabled = $true
    $appRole.Description = $Description
    $appRole.Value = $Name  # the value that comes through in the SAML claim
    return $appRole
}

$app_id = "PUT_APPLICATION_OBJECT_ID_HERE"  # get this from the portal or some other AzureAD powershell command
$librarianRole = CreateAppRole -Name "Librarians" -Description "Librarian Role"

# Append the new role to the application's existing roles
$app = Get-AzureADApplication -ObjectId $app_id
$appRoles = $app.AppRoles
# $appRoles = New-Object System.Collections.Generic.List[Microsoft.Open.AzureAD.Model.AppRole] # might need this if your app doesn't have any roles yet
$appRoles.Add($librarianRole)

Set-AzureADApplication -ObjectId $app_id -AppRoles $appRoles


With app roles created, you can assign users and groups to those roles. In the portal, this is very tedious (whoever wrote that UI should be shot); if you have a lot of assignments you'll probably want to delve into PowerShell or the Graph API to implement it faster.
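
For what it's worth, here's a rough sketch of a single group-to-role assignment via the Graph API in Python - illustrative only: all the IDs are placeholders, and it assumes you've already acquired a token with the AppRoleAssignment.ReadWrite.All permission (we used PowerShell ourselves):

import requests

# Placeholders - substitute your own values.
token = "ACCESS_TOKEN_HERE"      # token with AppRoleAssignment.ReadWrite.All
group_id = "GROUP_OBJECT_ID"     # the AzureAD group being assigned
sp_id = "SERVICE_PRINCIPAL_ID"   # object ID of the app's service principal
role_id = "APP_ROLE_GUID"        # the Id of the app role created earlier

resp = requests.post(
    f"https://graph.microsoft.com/v1.0/groups/{group_id}/appRoleAssignments",
    headers={"Authorization": f"Bearer {token}"},
    json={"principalId": group_id, "resourceId": sp_id, "appRoleId": role_id},
)
resp.raise_for_status()
print(resp.json())
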

One gotcha is that nested group membership doesn't appear to work: if you have a role called AppStaff, and assign to it a group called Staff which has as a member another group called Teaching-Staff, and your teachers are only direct members of Teaching-Staff, then those teachers will not get the AppStaff role in the SAML claim - they need to be a direct member of the group assigned to the role. Perhaps one day MS will implement nested groups for roles, at least as an option.

Tuesday, October 4, 2016

Hacking an Aldi Cocoon360 360 degree VR camera

This cheap little number is quite decent, and the apps are OK as well. But what if you want more? To live stream from the camera, you turn on Wi-Fi and connect your phone to the camera's AP. I connected my laptop instead and ran nmap to see what's open - lo and behold, ports 21 (FTP) and 554 (RTSP) are open. Easy. Pointing VLC at the RTSP port (rtsp://192.168.1.1) yields a stream straight up. This is almost too easy.
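
And it's not just VLC - anything that speaks RTSP should work. As a quick sketch, a few lines of Python with OpenCV (assuming the opencv-python package is installed) will pull frames from the same URL:

import cv2

# The camera's address on its own AP, from the nmap scan above.
stream = cv2.VideoCapture("rtsp://192.168.1.1")

while True:
    ok, frame = stream.read()  # grab one frame from the RTSP stream
    if not ok:
        break
    cv2.imshow("Cocoon360", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press q to quit
        break

stream.release()
cv2.destroyAllWindows()
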

But how can we change the view that the camera is showing? For this I installed Packet Capture on my phone, and captured a few packets while changing the view in the app. At first glance, it seems the app uses FTP for changing the view. The username and password are both wificam (I wonder if this can be changed). After setting passive mode, we see a RETR \FULL_VIEW.BIN; unfortunately this doesn't seem to yield much - it's possibly an update mechanism for the app.
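
If you want to poke at the FTP side yourself, here's a minimal Python sketch of that same fetch, using the credentials and filename from the capture (I haven't confirmed the file is good for anything):

from ftplib import FTP

# Credentials and filename as observed in the packet capture.
ftp = FTP("192.168.1.1")
ftp.login(user="wificam", passwd="wificam")
ftp.set_pasv(True)  # the app switched to passive mode before the RETR

with open("FULL_VIEW.BIN", "wb") as f:
    ftp.retrbinary(r"RETR \FULL_VIEW.BIN", f.write)

ftp.quit()
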

Moving on, we see port 15740 in use. Annoyingly, it's a binary protocol (15740 is the registered port for PTP/IP - Picture Transfer Protocol over TCP/IP - which would fit both the binary traffic and the handshake behaviour). Playing around and looking at the traffic dump, it seems that a straight replay of the previous commands from the app doesn't quite work - it doesn't give the same response. I suspect there is a handshake going on, but I don't have the time to decipher it at the moment.

Thankfully, it appears that whatever view mode was last set via the Android app persists when watching the RTSP stream, so that might be all I need in order to re-broadcast a live stream.

Annoyingly, the Wi-Fi connection on this camera seems to be flaky during the connection phase; specifically, both my Android phone and my OS X laptop often fail to pick up a DHCP lease.

Monday, September 9, 2013

Software and service development in schools

One of the dilemmas a modern school faces is whether to develop systems and software in-house, or to use an external provider.

The biggest problem with the former is in on-going maintenance, particularly if the lead developer departs the school. And, let's face it, most school IT people aren't software engineers, so development practices aimed at maintainability are likely to be lacking.

On the other hand, external providers often don't quite fit all of the school's needs, meaning the school either has to adapt to what the provider can offer, or add a custom system as well, which leads to the problems of multiple systems.

Another possibility is to have an external contractor develop the service or system. This can probably work, provided that development processes are high quality. Extensive consultation is imperative though - product requirements and specifications are notoriously difficult to lock down, and without a solid understanding of how schools, teachers and students operate, a contractor could be too far removed from the end-user environment to be effective.

The solution to these that has got me thinking (and somewhat excited) of late is to develop open source software/open systems, with a strong focus on community building. In all likelihood, a school will not be unique in its need for a particular system. By building its own system, but making it open source and actively sharing and promoting it to the broader educational community, a school could alleviate the maintenance problems by 'pooling' the maintenance issues among other users. Schools could also turn this into an income stream by providing paid support or development of specific features. The DISCO ICT management project is one example of such a development practice that looks promising. Community-building takes time and energy, and is often not in the skill-set of a software developer or technician - the more successful larger OSS projects often have dedicated non-techie people in this role.


Thursday, July 26, 2012

Communications and the flood of information

We live with too much information. Perhaps 'work' rather than 'live' - the individual can choose what their information landscape looks like outside of work. Here's something of a taxonomy (in our school; likely somewhat similar in many schools):

  • email (access anywhere, anytime)
  • online calendars
  • portals (e.g. compass)
  • intranet/websites
  • social media
  • files on a computer
  • cloud based files/applications
  • smartphone apps
  • tablet apps
  • paper-based chronicles
  • books
Compare to 5 years ago:
  • email (access at laptop on desk, probably wired connection)
  • paper-based chronicles
  • books
  • files on a computer
  • intranet/websites
  • social media
Compare to 10 years ago:
  • email (access via PC)
  • paper-based chronicles
  • books
  • files on a computer
As the process has been quite gradual, there hasn't really been any discussion of the implications, nor training on how to manage the flood. This is certainly a workload issue, but for many the situation exists outside of work too, or the demarcation of work blurs (an issue in itself). And it's not only staff who need the discussion and training - this applies as much to students.

A major concern here is the signal-to-noise ratio. The noise is both getting more frequent (more communications occurring) and stronger (every communication fighting to stand out). So we end up missing things, and the cycle feeds back into itself. Finding the signal in the noise takes time and energy. There are two possible ways forward: decrease the noise, or filter the noise. The former comes from training, etiquette, and careful selection of systems (perhaps cutting out some systems). The latter comes from training people to use systems more effectively, and employing smarter/better-suited systems.

A second (perhaps more important) concern is determining whether the flood of information is worth it. And if it's not, whether it can be stopped (you can't stop progress).

This issue will be the focus of discussion for much of the coming year.

Continuous and Connected

On paper, data has to follow slow cycles. Computers make such cycles unnecessary, but our workflows seem stuck in a paper-based ideology. Here's what's possible now:

Continuous Collection

There is simply no need to have a reporting cycle at the end of each semester, when that data can be continuously gathered throughout the semester.

Continuous Analysis

Then, the data can be continuously looked at, trends noted, and students and teachers who are falling behind given a hand. The concept of reporting could almost be dropped. Parents could also be given the data on a continuous basis, but that opens up a separate can of worms.

Multiple Sources

Data is connected. Some systems may try to become the One True Data Source(TM), but in reality they fail, and become just another data source in the equation. Data analysis has to account for this. I don't know how well analysis services can query disparate DB systems - for the moment I'm figuring some synchronisation will be the easiest way to achieve this (i.e. creating the One True Data Source(TM) from a number of external ones).
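
To sketch what I mean by synchronisation (with entirely made-up database and column names), SQLite's ATTACH shows the pattern - pull rows from each source into one combined store, tagging each row with where it came from:

import sqlite3

# Hypothetical example: merge result rows from two source databases
# into a single combined database for analysis.
combined = sqlite3.connect("combined.db")
combined.execute("ATTACH DATABASE 'markbook.db' AS markbook")
combined.execute("ATTACH DATABASE 'reporting.db' AS reporting")

combined.execute(
    "CREATE TABLE IF NOT EXISTS results "
    "(student_id TEXT, task TEXT, score REAL, source TEXT)"
)
combined.execute(
    "INSERT INTO results "
    "SELECT student_id, task, score, 'markbook' FROM markbook.results"
)
combined.execute(
    "INSERT INTO results "
    "SELECT student_id, task, score, 'reporting' FROM reporting.results"
)
combined.commit()
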

The (newly developed) school reporting package we've just moved to doesn't account for this. It can handle the continuous nature of data collection, but less so the continuous nature of analysis, and it barely handles multiple data sources at all. Those latter two can probably be hacked onto it using some SQL, but it's a shame the analysis and import features of the product itself look to the past more than to the future.

On the other hand, if everything is continuously continuous, we lose milestones - a sense of accomplishment, of finishing something. Our minds may adapt to this in time (a Facebook status vs a letter to a dear friend, web snippets vs a book, YouTube vs a feature movie), but for now there is still probably some need to have clear end points for some of this (i.e. end of semester reports). But that can co-exist with the continuous collection and analysis.