As part of a regular Emerging Technology series, the Service Integration team at the Department of Internal Affairs (DIA), in partnership with the Ministry of Social Development (MSD), brought together presenters from inside and outside of government to share how they were using measurement to inform decision-making.
Good measurement helps us understand the impacts of our actions and interventions. The presenters shared how they use sensors, data and analytics to pilot new ways to communicate important messages at the border, monitor the home environment, better target health services for individual needs, and inform better service delivery across government. We also heard from a measurement scientist about the need to understand the purpose of your measurement to ensure you use the right approach.
If you’re interested in finding out more about emerging technologies at our regular showcase then please sign up to our mailing list. We encourage you to also view the presentations from our previous Emerging Tech session.
Making sense with sensors
Holger Spill from the Ministry for Primary Industries (MPI) talked about measuring people's engagement with a holographic display in the NZ Customs area of Auckland International Airport. MPI took an experimental approach to the problem of attracting travellers' attention to important biosecurity messages, and used measurements from sensors to assess the success of the pilot.
Transcript of Holger Spill's presentation on measuring engagement with a hologram
Hello, Kia Ora. I'm going to take you on a little tour to Auckland Airport. MPI has installed a holo display unit there, and I have the little brother, so while I duck out and get it, you can ponder this one [QUESTION in the presentation slide "How many international visitors came to New Zealand in the September 2017 year?"]
So this is the small version, and I'm going to pass it around so you can actually see what we've installed at Auckland Airport. It tries to convey, to emphasise, a biosecurity message, because we have a lot of international visitors coming to New Zealand and they bring all kinds of things, including biosecurity risk items that we don't want brought into the country.
So, just for a quick feel, how many international travellers did we have coming to New Zealand?
[Audience] Ah, sorry I thought you were asking who was an international traveler coming to New Zealand.
Ah, no, we have that as well, but just for the last twelve months, how many people? A million? Two million? More. Three point seven million.
Yes. It's staggering. And if you look a little bit into it, Australia features heavily there, as you would expect. But then there's enormous growth from China and the US, and then it drops off quite a bit. So we're getting more and more people, so you can expect three point nine; soon we'll have more than four million. And it is quite hard to convey to everyone that New Zealand is a little bit different, that you should please be careful about what you bring into the country. 'Food' is not 'food', for example: we're after a lot of food items that, in different cultures and different languages, aren't classified that way. So there are interesting things around that. And MPI is always looking for ways to convey the message, and to actually entice people to dispose of what's coming into the country. And we're running different campaigns: social media, advertising. Certain visitors need a visa, so they're already committed, they're in the process. They get a little booklet: please drive on the left of the road, please don't bring oranges, things like that. And so throughout the whole customer journey of tourists and international travellers coming to New Zealand we have this consistent messaging.
And Auckland International Airport, because you can imagine a lot of these visitors arrive in Auckland by plane, is the last chance where we can actually engage. And the message flips from informing to 'Hey, there's a fine. This is your last chance. Please dispose of these items here.' And usually we use posters and cute dogs. And you see the message in the background: big banners, but it's static, it's paper, it's on the wall. You can imagine you're coming off the plane; it's just one of these things in the environment. So, could we actually find something that would be a bit more engaging? That's where the idea of the holographic display was born. And if we trial it, can we actually also see whether it makes a difference?
So, the whole concept was born out of the RTI practice, Research, Technology and Innovation, which has been with MPI for a while now. And the idea is to just trial it for a short period, to actually measure, to gain the feedback; start with a concept, but then see where it takes us. And after a few design rounds we ended up with this device, which is a bigger version of this one, and it flips up from the head. So the actual monitor is down here, and you see the glass panes; this is where the hologram plays. Branding on there, and one of the amnesty bins. These amnesty bins are all around the airport and you can dispose of your items there, no harm done. And we check these bins. We actually see what gets disposed of. Is it of interest, or is it just something that shouldn't have been thrown away, but people tried to be extra careful? So, we're getting stats on that, but we had an opportunity to also put more sensors into the device and capture actual quantitative data, to enrich what we're doing anyhow with interviewing passengers and the qualitative data we have. So, when you open it you see in the back there is a fair amount of space. Down here is the disposal bin. Here is where the electronics is housed. And the whole unit can be moved around. It's pretty heavy, about 250kg, because with the glass, the monitors and everything else around it, it's not really lightweight. And it's a bit more than 2 metres high. You'll see that in a moment in one of the images on location.
So, from the unit itself we get depth information. There's a depth camera in there that gives us some information. We can't identify people, because we didn't want that in the airport; we would have had to get permission for that. But you can actually track whether someone is looking at the unit and how long they engaged. And then we also built a weight sensor and put that in there, so we can track how heavy the items in the amnesty bin get as they are disposed of. And obviously we know how much video is playing, things like that. Unfortunately, we couldn't hook it up to wifi or a network or anything like that, so it had to be pretty self-contained. One of the learnings later on: if you ever do this, push really hard for real-time feedback and real-time connections.
So with the depth sensor, it looks a bit like that. You get a pixelated image of the environment and then, through the SDK from the vendor (if you're familiar with Microsoft's Kinect, it's a similar device), you can actually track individuals. This person is picked up; these people here in the background are not. It picks up a little bit, but not too much. So, we used it as it comes out of the box to track how many people, up to ten, and how long they engaged with the unit. But we're not actually recording the raw data; it's processed on the spot in the unit on the host PC. The other part is a bit of hobby electronics: we put weight sensors in there, and as with any good innovation project, it's Lego. So we put the sensors in there as well. It has a base and then the weigh bin sits on top. This became a problem later on, and I'm going to talk about that as well. We hooked up a little Arduino that connects by USB to the host PC; again, wifi and all this stuff would have been easier, but we had to work around it.
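To make the 'processed on the spot' idea concrete, here is a minimal Python sketch of how frame-by-frame tracking IDs could be reduced to per-person engagement times without storing any raw depth data. The input shape (a list of tracked person IDs per frame) and the frame interval are assumptions for illustration; the actual vendor SDK output isn't described in the talk.

```python
from collections import defaultdict

def engagement_durations(frames, frame_interval_s=0.5):
    """Sum up how long each tracked person ID was visible.

    `frames` is a list of per-frame lists of person IDs (the SDK
    tracks up to ten people at once). Only per-person totals are
    kept, so no raw sensor data needs to be recorded.
    """
    frame_counts = defaultdict(int)
    for ids in frames:
        for pid in ids:
            frame_counts[pid] += 1
    return {pid: n * frame_interval_s for pid, n in frame_counts.items()}

# Person 1 appears in three frames, person 2 in three frames.
frames = [[1], [1, 2], [1, 2], [2]]
print(engagement_durations(frames))  # {1: 1.5, 2: 1.5}
```

The point of the sketch is the privacy property: the raw frames can be discarded as soon as they are counted.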
After all that, putting it together, trialling it, we shipped it out to Auckland and put it in an area between passport control and the luggage pick-up. If you know Auckland Airport, you go down the stairs, around the bend, and then there's this small corridor, and we were right in that corridor, so you couldn't miss the unit. And then we waited to see what we actually got back.
This bit you can really see from the video, and also from what's playing on the little sibling I was passing around. So, this went in in September, so pretty recent, and this is only the first batch of data. What you're seeing and what I'm talking about is not the final report; it's not all our insights. These are still pretty early days. But we got 23 days from this first location (the unit has now been moved to a second location), and we got over 12,000 observations, which is great because it means we somehow picked up 12,000 people passing the unit. The camera is programmed so that if you just walk past and don't make eye contact, it doesn't pick you up.
So, we downloaded it with a USB stick and then unfortunately discovered we didn't have weight data. What actually happened (because we had a little bit) is that when the bin got pulled out, the base got pulled off with it, and despite trying to be careful with the cable, it got ripped, and that was the end of that. And since we didn't have real-time feedback, we didn't pick it up. It wasn't part of our process: obviously the amnesty bin gets emptied and the unit gets checked, but that check wasn't part of the protocol. So, after a month it was kinda like, bummer, we wish we had thought of that earlier.
However, we got data, and this is simply every interaction plotted in terms of viewing distance. It's a bit hard to see in terms of scale, but you see 1 metre, 2 metres, 3 metres, up to 5 metres. The sensor doesn't pick up beyond 6 metres, so that's the furthest we get. And you see lots of bunching around the 2 metre mark here. This section was actually the trial we did in Wellington, in Pastoral House, before we shipped the unit up to Auckland. This other part here is essentially transporting and assembling it, and this is when it actually went live, until we got the first data out. You can see in the big block that there are gaps, and we want to know what's happening there, because yes, we had outages, and we could actually determine how big they were. For some of them the unit's power had been turned off, things like that, and we had to improve our protocol for how the unit recovers itself. So, that was the first thing we got from it. And then we were wondering, because we're measuring how long people stay around, where the sensor picks up the most. You can't really see it, but there's this yellow band; this is where we got the most interactions. The bigger blobs, the ones where people hung around, there were a lot of those in the beginning, and we suspect that was 'oh, that's not what people normally see at the airport' and 'that's a helpful use of advertising', and then it became more business as usual. We did also look at differences by weekday, things like that. And we will also have to verify what our sensor is doing: is it actually picking up people close to the unit at the same rate as people who step back further, and what's happening there?
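Spotting those outage gaps is essentially a scan over the observation timestamps for intervals longer than expected. A hedged sketch (the one-hour threshold and epoch-second timestamps are illustrative assumptions, not MPI's actual process):

```python
def find_gaps(timestamps, max_gap_s=3600):
    """Return (start, end, length) for every gap between consecutive
    observation timestamps (seconds) that exceeds max_gap_s.

    Long gaps suggest the unit was powered off or otherwise not
    recording, which is worth distinguishing from quiet periods."""
    ts = sorted(timestamps)
    gaps = []
    for earlier, later in zip(ts, ts[1:]):
        if later - earlier > max_gap_s:
            gaps.append((earlier, later, later - earlier))
    return gaps

# Observations at 0s and 60s, then nothing until 8000s: one gap.
print(find_gaps([0, 60, 8000]))  # [(60, 8000, 7940)]
```

With real-time connectivity the same check could fire an alert instead of being discovered a month later.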
So, the next things we're doing: overlay our estimated arrival numbers, the volume of people coming through, because if we know that, then that's our expectation. It's getting closer to Christmas, it will be busier, so you want to be able to compare things. It will be a useful cross-check with the items we've actually found in the bin: is there a difference? We'll also interview passengers and mix in the qualitative information. Then we'll modify the trial from what we've learnt. We've moved the display unit around; it's now in a second location behind passport control. And we'll also place a standard bin in the old location and then measure the interaction with that. And the message is pretty long: if you watched the whole thing, it's like 70 seconds, and that was the initial compromise of how to get all the content in there. But is it really suitable? If you're actually travelling you want to get to your luggage, and yeah, you see it, but are you willing to hang around 8 seconds, 10 seconds? So, we're going to play around with that. Or simply make it static: just a rotating orange getting crossed off, or something like that, and see whether it makes a difference. It might be that yes, we measured a difference, but it's actually the design of the unit; it looks quite different to the normal things we have and attracts more attention.
Yeah, so more data to come in. We hope to have everything in by January and then actually learn from the trial whether it's worthwhile and what the difference is. That's me.
Transcript of Hīria Te Rangi's presentation on measuring the home environment
But 18 degrees is the World Health Organisation recommendation for a healthy home. We created Whare Houora because, with our technical skill sets, we can build these things. Software development, I've been doing it for 15 years. Brenda's been in hardware for God knows how long. And so we all came up with-- but Brenda mostly-- came up with, "what is not measured is not managed".
Because we all know, right? We all know. But we don't actually do anything about it because we aren't absolutely sure. There is no evidence. So we created Whare Houora. Whare Houora is a charity; it is for the resident. We don't actually expect to make any money, ever. Our technology isn't the flashiest. And to be honest, we just buy our components off AliExpress.
But it's that kind of reuse-- our technology is actually built in code clubs for schools, because it's that accessible. We want it to be that simple. We have two business models. One is for community, all about the community. We go in, we teach them how to build their own sensors, their own gateways. And then we leave them with a health professional that is already in their community. But we pay them to support them, and to make sure that these sensors are up and running.
We also have the other model, the commercial model. Because in order for this to keep going, to have legs, we're going to need money. And so we're in talks to start scaling out, to start creating house sets so that if you're not low socio-economic or you're not in one of our courses, you can buy our set. And out of that money, we'll be able to give another set to someone else. Or to fund our community noho marae.
This is how our whare sensors work. So we teach the whanau how to build the sensors, and then you put one sensor in each room. We teach you how to make the gateway. The gateway pulls the data from the whare sensors and pushes it up to the cloud, then to the dashboard. In practice, here's our first prototype-- don't laugh-- and it's not a fire hazard.
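As a rough illustration of that sensor-to-gateway-to-cloud flow, here is a sketch of a gateway bundling room readings into one payload for upload. All field names, units and the JSON shape here are assumptions for illustration; Whare Houora's actual protocol isn't specified in the talk.

```python
import json
import time

def gateway_payload(readings, gateway_id="gw-01"):
    """Bundle per-room sensor readings into one JSON document that a
    gateway could push to the cloud for the dashboard to display.

    `readings` maps room name to a (temperature_c, humidity_pct) pair.
    """
    return json.dumps({
        "gateway": gateway_id,
        "collected_at": int(time.time()),
        "rooms": [
            {"room": room, "temperature_c": t, "humidity_pct": h}
            for room, (t, h) in sorted(readings.items())
        ],
    })

payload = gateway_payload({"bedroom": (16.5, 72), "lounge": (19.0, 55)})
print(payload)
```

Keeping the payload as plain JSON keeps the gateway simple enough for a code-club build.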
This is the first one that Brenda ever made. It was big, clunky. There's an Arduino, wires all over the place, big power cord. But it worked. She was able to measure her child's room. And out of that, she was able to realise that it was making her child sick. So she moved it down to the next best room, and then moved her child there. That's how easy it is. Because if you know, you can make changes.
We're on our fourth prototype. This one was the pre-prototype, just to see if it works, and this is our fourth go. So we went to Kahao funding, the partnership between TPK and MBIE, and we pitched for some money. And out of that, we got $150K. And we were able to miniaturise. Because as you saw, the last one was like this big.
But you need one in each room. Kids sometimes unplug them because they want to charge their phone, or someone might kick it, or any number of things. So we're miniaturising. That's our new PCB, our Printed Circuit Board, and it's only about that big. And here's our new cases.
And it's battery powered. Because some of the feedback that we got from Kahao was A, it's too big. B, what happens if they don't have a power source? And C, what happens if they don't have internet? We'll come to the internet part soon. So that's where you put in the batteries. That's the PCB. And then you just slip a command strip adhesive on the back, turn it on, put it on the wall. That's it.
This is what the dashboard looks like. This is our new iteration. At the moment, it's got all the data underneath it, but you can tell that it was a bunch of devs that did it: it works, but it doesn't look like this yet. So as a resident, I'll be able to go onto the Web, log in, and do the setup. If I've got new sensors, or a new gateway, I'll be able to say: that gateway's mine, those sensors are mine. And add in any whanau that you want to have access as well; it might be that this is your nan's house, and so I'd want access.
Then you break it out, each sensor per room. So we've got temperature and humidity. And this shows you the data-- I think it's every 10 minutes it takes a reading-- and you can change that too, depending on what your need is. Say if you were in a bach, you wouldn't need that, right? And then we give you an interpretation of what that data means.
Slightly too cold, but otherwise healthy. Because we know the basics, right, from the World Health Organisation: 18 degrees. We also know about humidity; this one, for instance, obviously has a spa pool in there. And this one is too cold, but the humidity is OK. But it's still way too cold. A child in there, over time, will definitely get sick.
So we've got the temperature, humidity for right now.
And then if you clicked on analyse, it would give you the data over time. We give you a breakdown of what it is, but we also calculate the dew point. That's when moisture in the air condenses enough to form actual water: on window sills, windows, maybe in the carpet. Who knows? And then we give it a rating, because we can tell, you know, if the temperature is well above dew point, the room appears healthy, warm, and dry. Because that's what we're going for, right? Warm and dry.
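Dew point can be approximated from temperature and relative humidity with the standard Magnus formula, and a simple rating derived from it. In the sketch below, the 3°C 'well above dew point' margin is an illustrative assumption, not Whare Houora's actual rating rule; the 18°C floor is the WHO guideline mentioned in the talk.

```python
import math

def dew_point_c(temp_c, humidity_pct):
    """Approximate dew point in °C via the Magnus formula."""
    a, b = 17.62, 243.12
    gamma = math.log(humidity_pct / 100.0) + a * temp_c / (b + temp_c)
    return b * gamma / (a - gamma)

def room_rating(temp_c, humidity_pct, margin_c=3.0):
    """'healthy' when the room is at or above the 18°C guideline and
    the temperature sits well above dew point, so condensation on
    windows and carpets is unlikely."""
    dp = dew_point_c(temp_c, humidity_pct)
    if temp_c < 18.0:
        return "too cold"
    if temp_c - dp < margin_c:
        return "too damp"
    return "healthy"

print(round(dew_point_c(20.0, 100.0), 1))  # 20.0 (saturated air: dew point = air temperature)
print(room_rating(19.0, 55.0))  # healthy
```

The same two inputs the sensors already report (temperature and humidity) are enough to drive the warm-and-dry rating.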
Then we have the readings, you can switch between temperature or humidity so that you know what's going on here. Why is the humidity so high but the temperature is quite low? Something might have happened there, so it pays to check, right? If you know, then you can check. Something occurs at that time. Maybe it's daily. Who knows?
We also are able to tell them when it's mediocre or when it's really bad. So we're working with Otago University to ensure that our interpretation of the data is correct. We take a first crack at it, because that's what we do, and then they correct us. Because they are really, really helpful. They don't need to be, but they are. So we try to make sure that we don't say anything that's really dumb. And we show them our sources of where we got the information from, usually World Health Organisation recommendations. But the interpretation of that is tricky. And so we go to them to make sure we're OK.
But yes, it means that we can give health warnings.
And this is the web version. On the mobile phone, we'll give them notifications. So for instance, you put the kids to bed at 8 o'clock. It was quite warm then, and you leave the window open. At about, I don't know, 10:00 or 11:00, it gets cold; the temperature goes under 18 degrees. It will give you a notification and you'll go, oh, that's right, the window is still open. Go and close it. Or something like that. But of course, if it's not measured and you don't get a notification, then how will you possibly know?
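That 'window still open' notification amounts to detecting the moment a reading crosses below the 18-degree guideline, rather than alerting on every cold reading. A minimal sketch; the exact rule Whare Houora uses isn't specified, so this is an assumption:

```python
def cold_alerts(readings, threshold_c=18.0):
    """Return the indices of readings where the temperature crosses
    from at/above the threshold to below it, i.e. the moments a
    'close the window' notification would fire."""
    alerts = []
    for i in range(1, len(readings)):
        if readings[i - 1] >= threshold_c and readings[i] < threshold_c:
            alerts.append(i)
    return alerts

# Warm at bedtime, then two separate dips below 18°C:
# exactly two notifications, not one per cold reading.
temps = [21.0, 19.5, 17.8, 17.2, 18.3, 17.6]
print(cold_alerts(temps))  # [2, 5]
```

Alerting only on the crossing keeps the phone from buzzing all night while the room stays cold.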
And this is the data underlying it.
Which gets us to data privacy and security. Because we are for the resident, we only keep the address long enough to find the suburb they live in and the sensor's area; after that we remove it. We do that for security. If we're ever hacked, they can have all the data in the world as long as they're never able to correlate it with an address. Because if you can, you can figure out when someone is home just from heat, or whether someone's been away for a long period of time.
This might need to change if we ever actually hit the commercial model, because then we have to start dealing with addresses associated with homes. If someone funded you to have your set so that they could have an aggregate of that data, then they'd need the addresses that go with it. But we will worry about that when we get there. One thing at a time.
So we do anonymise where possible, and we also provide an aggregate, because we're open data proponents. So we have a rule that we will release a map of all of our data if there are at least 10 homes in a suburb that are on Whare Houora. With anything less than that, you'd pretty much be able to figure out whose data is whose. And we do that just for public good, and to see if maybe that will help in making decisions around prioritisation of suburbs, for respiratory illness, say. We don't know, but we're going to do it.
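That 10-home rule is a small-count suppression threshold applied before release. A sketch of how it might look in practice; the field names and the averaged statistic are illustrative assumptions:

```python
from collections import defaultdict

def releasable_suburbs(homes, min_homes=10):
    """Aggregate average temperature per suburb, publishing only
    suburbs with at least `min_homes` participating homes so that no
    individual home's data can be singled out.

    `homes` is a list of (suburb, average_temperature_c) pairs.
    """
    by_suburb = defaultdict(list)
    for suburb, avg_temp_c in homes:
        by_suburb[suburb].append(avg_temp_c)
    return {
        suburb: round(sum(temps) / len(temps), 1)
        for suburb, temps in by_suburb.items()
        if len(temps) >= min_homes
    }

# Twelve homes in one suburb pass the threshold; a lone home does not.
homes = [("Newtown", 16.0 + i * 0.2) for i in range(12)] + [("Karori", 18.5)]
print(releasable_suburbs(homes))  # Karori (1 home) is suppressed
```

The threshold trades some map coverage for the guarantee that a published suburb average never reflects a single identifiable household.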
So could Whare Houora have helped my nan? Yes. We would have known. She would still have had a cold, but we would have known that her room was under 18 degrees for long periods of time. That's the key thing: long periods of time. After she had died and we had stayed in her home for a week, I measured, because that's what guilt does to you. And her room was, each night, between 11 and 12 degrees. So I know for a fact.
So we keep going. One third of all houses; one in six people have respiratory illness, the third leading cause of death. That's something that we really have to fix. And so we're going to. Whare Houora's mission-- and I'm putting that down to a year, I've got a year to do this-- is to get whare sensors into every living and sleeping space in Aotearoa. That's 1.7 million homes. That's a lot of sensors. And so, of course, we have a scaling issue.
Yeah, but of course, we'll get to that part when we get there. And there are some manufacturers in New Zealand that want to help. And, remarkably, all of our PCBs, our printed circuitry, are free: a place called OSH Park gives them to us. Our software has been built by volunteers. Everything that is tech-based has pretty much been given to us freely in people's time, because they know the need and they see that we have to fix this. So yeah: Rabid, OSH Park and Kahao, without which we really wouldn't be here. Yay, volunteers.
Kia ora tatou [APPLAUSE]
A more complete picture with smart analytics
Kay Poulsen from the social enterprise Help4U talked about using standardised taxonomies and smart analytics to monitor key health and wellbeing indicators for individual patients, understand how best to target interventions, and assess the overall effectiveness of their care.
Transcript of Kay Poulsen's presentation on monitoring health indicators
OK, so my name's Kay Poulsen. I'm founder and managing director of Help4U. We are actually a social enterprise, so not specifically a charity. And we've been in the role of health navigators and case managers for just on ten years. So what I'm here to talk about today is the learnings we've had over the last ten years in terms of providing that service and the data we've collected, and the next step as we understand it in this landscape of measurement and data, involving a number of organisations in that process, which is about engaging the consumer, not only in terms of co-design or engagement, but consumer-led care coordination.
A little bit about us. We've helped navigate over three and a half thousand people over that time. The key thing is, 80% of our activity is resolved on the same day. So if you think of any kind of issue with access to health services, finding out what's happening, ACC, insurance, Healthy Homes: pretty much anything you can think of in terms of keeping someone healthy, well, safe and confident in their home environment, that's what we do. We turn those cases around, 80% on the same day, 90% within 24 hours. And the way we do that is through the database and the systems of process that we've developed.
In 2009 we started out by running a government-funded 'language of health' forum, where we engaged a range of stakeholders across health and social services, including patients and families and poor people, and looked at understanding of the language of health. And what we found was that all the terms and languages were used interchangeably and there was not a common understanding of them.
From there we developed a community care dataset, which again was supported by government funding through the Ministry of Health. It took the 18 national collections that we already had in government and tried to map them to some sort of standard index. We've done work with Plunket and with Nurse Maude over the last five or six years in terms of supporting their electronic health record initiatives.
We were established in 2007, doing case management and care coordination, with particular expertise in taxonomy, guidelines and pathways. We have a software application that we developed on Microsoft Dynamics CRM: the Activities, Events, Services, and Outcomes Planner, and it tells patient stories. And we use a taxonomy, which I'm going to talk about briefly, called the Omaha System, an international methodology for capturing patient, individual, family, and community level information.
In terms of measurement, which is one of the themes of today, our measurement objectives are primarily to document the inputs to care and evaluate the effectiveness of the services. We assume in that that there was a plan. We assume for any individual, family, or community that we are dealing with that we actually know what we are planning to do for them. We are looking for, at a glance, visibility of those inputs across the care continuum, and everything that entails.
Obviously, we want a minimum of duplication and rework, which I think we're all in agreement around, but it also creates an opportunity, with good data integrity, to look at the costs of the inputs of care, both in terms of what's planned and what we do. And obviously from that we've evolved to wanting to understand, what's the difference between what the pathway is, which is the whole population of what's possible, relative to what's the plan for this person in their context, to what did actually happen.
We have a platform called AESOP. You know, Aesop wrote fables. We think of our software as providing short, succinct patient stories with a moral message, much like a fable, focused on community and home-based support. As I mentioned, we use the Omaha System taxonomy. We map to all the national collections and international coding systems. It's on Microsoft. We've been in production since 2011, and we've been a finalist in national and international awards.
I'm going to take a moment to talk about the Omaha System, which is the taxonomy of choice. We undertook a process of looking at all of the different methodologies for structuring data for health and social services. We did this through a couple of government grants, but also through extensive international research, including travelling overseas quite a bit.
And we found this taxonomy out of Omaha, Nebraska, in the US. It's been around for about forty years, and it was designed to support home-based services. There's been a lot of work done by ourselves, Plunket, government agencies, the University of Auckland and others, who've identified that it is probably a good fit for community care in New Zealand.
There are a lot of challenges around different coding systems, I don't know if you've heard of them: SNOMED, ICD, NIC, NOC, NANDA, lots of different things which most of you won't know what I'm talking about, but there's lots of debate about what systems you use. This one works, and has been proven to work. So it basically starts from the concept of a problem: what is your problem? Rather than talking to patients about diagnoses and those medicalised terms, it talks about a problem, and then from the problem we take a look at that person's knowledge, their behaviour, and the status of that problem.
How well do they understand it? How compliant are they with the recommendations that we give them, and as a result of that, what is their status?
Now most of the medicalised environment focuses solely on status. This is about dealing with the social and behavioural determinants of health. We know from the work we've done over many years that capturing that has a significant impact on the quality of the outcome.
This has been around for about 40-odd years. It's the taxonomy of choice for the Netherlands: the Netherlands' government has approved this as the way in which health services in the Netherlands will be funded. And numerous other countries around the world have picked this up for community services.
So: problem, knowledge, behaviour, and status. From there you can determine what kinds of interventions you provide.
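That problem, knowledge, behaviour and status structure can be pictured as one small record per problem. In the Omaha System's Problem Rating Scale for Outcomes, each of the three concepts is scored on a 1-to-5 Likert scale; the class below is an illustrative sketch of that shape, not Help4U's actual AESOP schema.

```python
from dataclasses import dataclass

@dataclass
class ProblemRating:
    """One Omaha System problem with its outcome ratings.

    knowledge, behaviour and status are each scored from
    1 (lowest) to 5 (highest) on the Problem Rating Scale
    for Outcomes."""
    problem: str
    knowledge: int
    behaviour: int
    status: int

    def needs_education(self, threshold=3):
        # Low knowledge suggests teaching/guidance interventions
        # rather than (or alongside) treatments and procedures.
        return self.knowledge < threshold

p = ProblemRating("neuromuscular-skeletal function",
                  knowledge=2, behaviour=3, status=4)
print(p.needs_education())  # True
```

Scoring knowledge and behaviour alongside status is what lets the intervention choice reflect more than the purely medical picture.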
So, task-based, funding-based: what am I going to do for this person? But what it factors in is not only treatment and procedures, but also what kind of education we can provide, what case management we can support, what kind of surveillance we need to put in to address the knowledge and behavioural issues, which is the outcome rating scale there.
The other key thing, beyond problem, intervention, and knowledge, behaviour, and status, is that it allows for what we call client-specific information: all of the other information that goes into constructing the story for this person. And that's particularly why I'm here, because some of the work being done with LabPlus and the Service Innovation Lab is around making available all of the services, all of the resources in the public domain that people are accessing, or could potentially access, to support them living in their homes.
This particular taxonomy has a framework to support that. So we can talk about things like what kind of assessment tools or eligibility rules apply, what kinds of organisations are involved, what kinds of providers are involved, what qualifications they have to have, products, equipment, you name it. So this will allow for those things to go into this framework and say, for this person, these are the raw materials, or this is the recipe, that's going to keep them in their homes.
This is a really rough-- I won't spend too long on this. The key point I'm trying to demonstrate: this is an AESOP page that we've got in our software, and everything you see there is a lookup to a data table. And each table maps to a national standard, an international coding system, or some reference source that's standardised.
We're looking for a single source of truth. So the key thing about this is, when we're putting together what the problem is for this person, what are we going to put in? How well do they understand what's going on? How compliant are they? And what are all the services and rules going into this person's plan? Everything is linked to a table; we don't have free text of any kind. It's all pulling from existing data sources.
And this is an example of the kind of pathway, at its widest level: what's possible. So this is really just talking about neuromuscular-skeletal function. It's for when you're looking to see an orthopaedic surgeon, or you've got some problem with mobility; it's the kind of thing you'd be looking at. It describes, as a pathway, everything that's possible for that kind of condition. If I'm in a health sector environment, what kinds of interventions would I do? And here's an example of all the different types of information or resources that I can apply to it.
From there, we go from a pathway, the whole-of-population options, through to what's planned for this person, down to what have I done today. So you go from something like 66 options, 42 of which are relevant for this person, and today I did five of those.
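That narrowing (66 pathway options, 42 planned, 5 delivered today) is just successive filtering, which can be sketched as follows. The option names and the relevance rules here are invented; only the structure is the point.

```python
# A toy sketch of the narrowing the speaker describes: a pathway lists
# everything possible (66 options), a plan selects what is relevant for
# this person (42), and a visit records what was done today (5).
pathway_options = [f"option-{i:02d}" for i in range(66)]   # whole-of-population pathway

def narrow(options, predicate):
    """Keep only the options the predicate says apply."""
    return [o for o in options if predicate(o)]

# Hypothetical relevance rule: the first 42 options apply to this person.
plan = narrow(pathway_options, lambda o: int(o[-2:]) < 42)

# Hypothetically, today's visit delivered the first 5 of the planned options.
done_today = narrow(plan, lambda o: int(o[-2:]) < 5)
```

Keeping each level as a subset of the one above is what makes the later visualisations possible: every recorded action rolls up cleanly to a plan, and every plan rolls up to a standard pathway.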
Really quickly, in terms of data visualisations, this is what you can get out of the Omaha System taxonomy. So here's an example of a number of nurses dealing with the same patient scenario -- same symptoms, same problems -- but what it's demonstrating is that different providers and different organisations are applying the recommended interventions differently. I won't spend too much time, but it gives you a demonstration.
This one here is what we call the sunburst example. What it's showing is, again, mothers with mental health problems. The different colours indicate how many problems they've got, the different rings indicate how many interventions we are doing for them, and the little bits of bling, what we call the bling on the edges, are how many signs and symptoms they're demonstrating.
So immediately you can look at that and say: lots and lots of colours and lots and lots of bling means lots of problems, and lots of actual signs and symptoms, or evidence, that they've got problems going on. So those are the people you'd want to target in terms of what you're putting into services.
We've done all that, but what have we learned? We've learned that in order to deliver this kind of data there's a significant burden of documentation on the providers to capture it -- we're using tablets in the field, and it's considerable. There's lots and lots of dispute and debate around language and terminology. We're mostly interested in capturing what we get paid for, and don't really focus too much on social and behavioural determinants of health in terms of data collection. We don't have any information around regional resources, or what's available in my town, in my community. And there's very little consumer engagement, despite our rhetoric.
But if you're on my side of it, where you're an advocate or a case manager or a care coordinator, what you find is that people are more likely to engage if they're looking at their knowledge and behaviour. They will respond to different types of interventions, such as teaching and guidance, education and coordination. And there's massive scope to make available to them what resources are available -- what is my entitlement?
How do we match those two -- the burden of documentation against what we want to make available to the consumers? We've taken AESOP, we've stripped it out, still remaining with the Omaha System architecture, and said let's build a consumer application. We are looking at engaging individuals in self-directed documentation. We've got the plans already -- they're clinically valid, they've been proven, benchmarked, best practice -- so let's engage the individuals in participating in that documentation. Make them the owner of the information. Use the pathways that are already defined as best practice nationally to drive the workflow in terms of what people can and can't document. And most importantly, use consumer-facing language.
Really quickly: so we've gone from that really complex diagram that I was showing you a moment ago to what the app looks like now -- just a really simple "what's bothering me today? why am I worried about it?" I won't waste too much time there, but it's fundamentally the same architecture, just a much simpler app. It goes from a two-hour assessment to a two-minute journal entry.
What are we going to do next? Make it available to consumers and their advocates. Make the navigational pathways that we have available directly to consumers. Obviously we stick with the Omaha System, because it's a very generic, workable application in terms of language. We are currently preparing an R&D project around the best technological platform to do this, and we're going to look at the feasibility of blockchain as part of that exercise.
And we're doing quite a bit of international benchmarking. We're part of an international community of practice. And you can see there, if you're interested in learning more about Omaha System, there's a couple of websites there, and that will give you a lot more insight into who and what we're working with. And here's our website and my email, so feel free to give me a call and contact me, I'm happy to talk to you. Thank you.
Transcript of Pia Andrews' presentation on service analytics
Not up and running, it's not a final design. It's not a full fait accompli. But it's part of the problem that we've identified around measurement when it comes to service design and delivery in an all-of-government context. So I'm going to talk to you about a couple of things: the problem that we're seeing, the hypothesis that we have to try to address that problem, and the approach that we're considering taking, with a bit of a proof of concept around that problem.

So first, the problem. What we tend to see right now is that an individual agency will have, within usually a series of teams, some visibility of the various channels of service delivery. Some of these will be web, some of them will be application-based transactional services, some of them will be help desk, telephony, even social media. There are many, many channels.
One team might have access to one of these. In some agencies there are some excellent, I guess, omnichannel-- to use sort of the current buzzword for it-- view of the service analytics. But in most agencies, what we're seeing is a huge amount of investment and time going into data analytics, where they're looking how they use their administrative data in better ways.
But there's not actually a lot of attention at all, apart from a few small pockets, on service analytics -- on actually pulling in the real-time data. And that's where service analytics gets really interesting. Because with data analytics through administrative systems, there are some agencies-- and just a quick call out to what MSD are doing, the operationalisation of their admin analytics, which is amazing.
There are a few that are doing that sort of real-time use of their administrative data, but a lot of people's use of administrative data is not real time. It's updated every month, every three months, every year in some cases. So what you get is wonderful historical analysis, and to some degree some predictive stuff.
But in the applied use of it, you also get a lot of issues around normative approaches to data and forming policy moving forward, which is a bit worrying. What service analytics gives you is still necessarily retrospective -- and I think we all need to be aware, when we're thinking about measurement, that the fact that you've captured data means it's in the past, which means it's not necessarily where things are going -- but it's real time.
And where it can actually inform service design and delivery is in a couple of key ways. First of all, it shows you behaviour right now. It gives you opportunities, which is similar to what Kay was talking about, around opportunities to, if not intervene, then to direct people according to the behaviour that they have. I'll come to an example of that in a second.
You have opportunities around-- we suddenly see a spike of people from a particular area. And we don't actually want to know who people are, just to be clear. We are not interested in knowing who the person is. We want to understand the behavioural trends, the user journeys, and what that means in those broader trends across the system.
But if you can see an upsurge of people from, let's say, a particular area, broadly making requests across the entire government domain around, for instance, drug dependency -- being able to feed that trend through to the front line service delivery people, to say, look, you might see an increase in this, here's some additional service information around drug dependency for your area -- that would be really quite powerful.
It's about taking all of the intelligence that comes out of these systems, and there's a lot of intelligence out of those systems to be-- there's a lot of untapped potential in the data that sits behind those service analytic systems that we could actually use, both for, I guess, interventions, which doesn't have to be a hard intervention, to actually inform the front line, or inform the service delivery, I guess, more broadly.
Service delivery, and of course, to improve and continuously improve our services on an ongoing basis, in some cases to use that data to get rid of services, in some cases to use that data to identify gaps in the service and actually create new services. And the other part there is getting that all government view of a user's journey, and pain points, and behaviours, and where they're going.
Because if you have a whole bunch of websites, and of course they will be across multiple domains, multiple agencies, and multiple sectors, of course, if the person going through that journey, goes back out to Google for a search, comes back in, goes back out, comes back in, and ends up at a contact page somewhere, they're probably having a pretty bad day.
However, we can probably tell that 80% of people who go to that page, or who are looking at that type of content, are also interested in this type of content, or that page. Why wouldn't we use that to start to automate and prompt people -- "did you also mean..." -- you know, Amazon-style recommendation or preference sort of options? And again, we don't need to know who people are. I think we have this habit in government of saying, as soon as I know who you are, I'll figure out what you need and then I'll tell you.
Now if we look at this trend of moving towards user-centred service design-- and service design generally, putting the users at the centre of the design-- I think we've taken a little too literally the idea that we're going to figure out what the user needs, and just do everything to the user that we think they need. Because the problem there is that it depends on my agency, my mandate, my view of the world, as to what I'm giving to them of what they need. Because it will always be a subset of what they need that my agency has to provide to them.
So how do I actually look at all the needs of the user and then redirect them accordingly? Imagine if we could actually make available, across all of government, to all of our front lines, whether it's a help desk for immigration, or a help desk for DIA citizenship, or a help desk for MSD-- someone just calling up any of our front line help desks-- and everyone had the same access to information about services, the same information about the business rules, the same sort of logic flows to be able to direct people. So that rather than being 14 hops, it might turn into one hop.
Anyway, I'm going slightly sideways. So the problem we have is that we don't have the ability to understand, often enough, across one department, across multiple channels, across multiple services, let alone across all of government. And what that means is that there are some perverse incentives starting to emerge.
So I have a quite understandable and natural incentive to reduce my cost of service delivery and to improve the experience that my users have. But if that creates a cost imposed on another agency, I have no motivation to wonder about that, to worry about that, to think about that, to take responsibility for that. There's an incentive to say, well, if someone's not coming to me for my service, then I'm sorry, it's not with us.
And of course, a lot of our front line try to make up for that by saying, well, I think you can go to them or I think you can go to them. We end up using our front line as a bit of a manual switching service. And it's not all that efficient or all that effective, which is why we end up with a whole bunch of service integration actors in the non-profit sector and the for-profit sector actually trying to fill those boots, because we don't necessarily do it all ourselves.
So there's also, I guess, just finally, the opportunity or the problem around what success looks like. And again, service analytics kind of helps with this. So it's not just about the analytics and the user journey. It's also about the systems and services themselves.
We did a bit of a cheeky thing in Australia in the Digital Transformation Office, where we took the top 100 websites by volume and set up a public-facing dashboard. Then we could actually rank them by volume and by cost, just for fun. And then we just started doing ping tests. A ping test is just a "hello, are you alive?", and them coming back and saying, "yes, I am."
And what you get from a ping test-- you can do it as often as you like. It just tells you whether the thing is up, and how long it takes to respond to you. That's all it gives you. But tracking that every-- I think we were doing it every minute or every 30 seconds-- we got some pretty accurate uptimes and latency for some of those major services.
We actually found, in some cases, some government services-- and they weren't human services in all those cases, it might be an API, it might be a website-- that would actually switch off at 5:30 PM on a Friday--
Come back up at 9 o'clock on a Monday. That was interesting.
We found that the uptime and latency of the most expensive services or websites didn't necessarily correlate with the most up, the most reliable, the most effective. There are interesting things you can get from service analytics, not just about your users, but about the systems and platforms themselves. And the whole point of this is not to embarrass anyone. It's to identify and prioritise funding, user needs, and service improvements over time, and actually improve the whole system.
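The kind of monitoring described here is simple enough to sketch. The version below is a toy: it takes the actual request mechanism (ICMP ping, HTTP HEAD, whatever) as a parameter rather than implementing one, and all the names are invented for illustration.

```python
import time

def probe(url, fetch):
    """One 'hello, are you alive?' check: records whether the service
    responded and how long it took. `fetch` performs the actual request
    (e.g. an HTTP HEAD or ICMP ping); injecting it keeps the sketch
    testable without a network."""
    start = time.monotonic()
    try:
        fetch(url)
        up = True
    except Exception:
        up = False
    return {"url": url, "up": up, "latency_s": time.monotonic() - start}

def uptime(results):
    """Fraction of probes that succeeded -- the 'uptime' the talk mentions."""
    return sum(r["up"] for r in results) / len(results)
```

Run once a minute against each of the top 100 sites, as the talk describes, this is already enough to surface services that go dark at 5:30 PM on a Friday.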
OK, so our hypothesis is, what if we could actually pull intelligence out of all of government around service analytics, what benefits would we get from that? And we're looking at setting up a little bit of a proof of concept around exactly this topic. Again, we don't want to identify data about people. We don't want to do that at all.
What we do want is to be able to see the patterns and the journeys, broadly speaking, in a de-identified way. So your basic inputs for this end up being web analytics, as the first one. Now there are different sorts of web analytics tools that agencies use, whether it's AWStats, or Piwik, or indeed Google Analytics. All of these can be drawn together to get some intelligence -- they all do their analytics slightly differently, so you have to take that into account.
In Australia, we brought together the agencies that were using Google Analytics already. We set up a Google Analytics Premium account, so that if they chose, through their own systems, to use Google Analytics and they wanted access to the Premium account, then they could get it from us. And that is now in the process of being set up in TSSD.
So if you're in government and you've already chosen to use Google Analytics for some of your services, then please chat to me afterwards about getting access to that all-of-government financial arrangement that's being put in place. But at the same time, we highly recommend a whole bunch of the non-Google analytics tools for that.
But you can actually pull those analytics into something. The second source for this kind of intelligence is your transactional systems -- where you are applying for something, or being paid something, or paying something, or updating something -- where you don't tend to get good analytics from the web; you have to get it out of logs. And then you have systems around help desks, of course, telephony, social media.
So just starting with the web: our plan is to create what is commonly known these days as a data lake, rather than shoving it all into an analytics system, where you only get the ability to reuse data within the scope of that particular tool. Pulling stuff into a data lake basically means it's in a structured format that's not beholden to a particular vendor or a particular product, which means we can then create an analysis layer on top.
So we've got the data in a structured format, and we can switch and swap whatever we want in the analysis layer. And then we can create dashboards, we can create analysis, we can create user journeys. We can create a personalisation engine, which is my personal favourite little thing we want to do as part of this proof of concept, where we will be able to say: pass me a URL, and I'll tell you the five most closely correlated URLs. Just a simple thing to be able to serve up to our users -- "are you also interested in..." -- to help them bypass this and, hopefully, not have to get to this.
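A minimal version of that "pass me a URL" idea can be built from co-occurrence counts over de-identified sessions. This is a toy sketch, not the proof of concept the speaker describes; the session data and URL paths are invented, and a real engine would need weighting, recency, and volume thresholds.

```python
from collections import Counter

def correlated_urls(sessions, url, top_n=5):
    """Given de-identified sessions (each just a list of URLs, with no
    user identity attached), return the URLs most often co-visited with
    `url` -- the 'five most closely correlated URLs' idea in miniature."""
    counts = Counter()
    for session in sessions:
        if url in session:
            # Count each co-visited URL once per session.
            counts.update(u for u in set(session) if u != url)
    return [u for u, _ in counts.most_common(top_n)]

# Invented example sessions across hypothetical government pages.
sessions = [
    ["/passport", "/travel-advice"],
    ["/passport", "/travel-advice", "/visas"],
    ["/benefits", "/housing"],
]
suggestions = correlated_urls(sessions, "/passport")
```

Note that nothing here identifies a person: the input is only which pages appeared together in a journey, which is exactly the de-identified trend data the talk insists on.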
And then what we'll be able to do, hopefully, is tell trends around services. And again, this is all a little bit hypothetical -- though not entirely, because we've done it once before in another country. The idea here is, once we set up the proof of concept, to see what the value of it is and what the agency interest would be. Agencies would have access to their own data and access to the full analytics layer, and we would work with them to pull in what is needed, de-identified before we get it.
Because here's where it gets cool, and this is the last thing I'll say. You start to see trends. And then when a new service is introduced to or removed from the system, what impact did that have across the whole system? That's one of the key things we want to be able to see, and in many cases that's going to be a great validation for excellent services being set up.
It will also create a motivation to create services that genuinely help people, because we'll be able to see whether they do or don't. You can imagine a future where we start to combine the insights -- not the data, but the insights -- between that and the IDI, between that and potentially other things. But for the moment, this is a proof of concept that we're hoping to set up over the next three or four months. If anyone's interested in coming to play, please let me know. Cheers.
The science of measurement
Annette Koo from the Measurement and Standards Laboratory (MSL), Callaghan Innovation talked about the need to understand the purpose of your measurement to ensure you use the right approach. As well as supporting New Zealand’s quality system of measurement, MSL scientists help people to measure the right things in the right way to achieve the desired outcome.
Transcript of Annette Koo's presentation on the science of measurement
Good afternoon. Thanks so much for giving us this opportunity. As you can imagine, working at the Measurement Standards Laboratory, the topic of this afternoon is very close to our hearts.

So what is the Measurement Standards Laboratory? We are a team of about 30 scientists who comprise New Zealand's National Metrology Institute. That means we are responsible for ensuring that measurements made in New Zealand are equivalent to those made anywhere else, and that we can talk about measurements in a meaningful way with anyone within the system New Zealand has signed up to internationally. So our team of experts, and their technical understanding of measurement, make it possible to operate in New Zealand with confidence whenever we quote a measurement.
For example, you go to the supermarket and buy two kilogrammes of potatoes; you don't worry about whether you're being cheated or not. If Airbus in Germany contract a manufacturer in New Zealand to precision-engineer a component for a jet engine, neither party has to worry about whether it will fit when it gets shipped over there. If Fonterra make a big-money decision based on a temperature sensor inside their processing plant, it's a low-risk decision because the measurement is accurate.
And the way that New Zealand government and industry access the expertise that we have is through calibration service. We calibrate instruments that make measurements. We provide training to those who also calibrate instruments, or who use measurements. And we provide consultancy service to solve the really difficult and complex measurement problems.
To be precise, these are the actual areas of expertise we have. So electrical measurements, all parts of electrical measurements. Current, voltage, impedance, et cetera. Temperature and humidity measurements, mass and pressure, time and frequency-- so if you're time-stamping anything, you're probably accessing the clocks at MSL-- length, and then light and colour.
So let me just tell you a few stories of how we impact New Zealand. Importantly, we're part of New Zealand's quality infrastructure. Glow-in-the-dark items are not only good for kids' parties and toys; they are also incorporated into emergency lighting for safety purposes. So if you're going to install lighting on your ship or in a stadium, you need to know that if it ever gets put into use, it's actually going to achieve the aim: to get people to the exit they need to find. And to do that you need several players.
The first one is a documentary standard, which describes, for example, how bright these strips have to be an hour after the lights go out in order to be visible to the human visual system. Standards New Zealand verify and publish the documentary standards that materials need to comply with.
In order to check that your product does indeed comply with this, you need a lab which is accredited by an authority to say that, yes, it does. This laboratory is doing the testing in compliance with the standard. And in New Zealand, that authority is IANZ.
And for your laboratory to get accreditation, they'll have to prove that they're doing the measurement correctly. Which means that their instruments are calibrated, and that's where MSL comes in. We actually provide the technical underpinning of the quality system that allows us to trust systems that have been put in place, for example, for safety, and for other things.
But what about when measurements have ethical or legal implications? The Ministry for Primary Industries, for example, try to protect our fisheries by setting limits on the size of crayfish that you can take out of the ocean. Similarly, the New Zealand Police protect drivers by enforcing speed limits. But the measurements they make have to be not only fair; they also have to stand up in court. Both of these agencies rely on the technical expertise of MSL to support enforcement when necessary, but also to protect the rights of New Zealand citizens against inaccurate measurements. So we act as an impartial third-party arbiter of measurement accuracy in this context.
More broadly, we're just really interested in making sure that all the measurements made in New Zealand are fit for purpose. As humans, we have used light as therapy for as long as we've existed, but also with a very high appreciation of the dangers of sources of light. Recent research has shown how much we can do with light and how it can affect our lives positively. We use laser sources for eye surgery. We can adjust our mood by influencing our circadian rhythm. We use it to treat eczema, and jaundice in babies.
But we've also become very aware of some of the risks of the lighting systems that we've put in. Blue light can also damage our circadian rhythms. Obviously UV, we understand now, induces cancer. And there are flicker and glare issues with some of the new types of lighting that are coming out. So as devices are proliferating with very clever tunable LED sources, we often have to ask the question, is this device safe?
Seems like a simple question. Well, we'll make a measurement. But a simple question like "is this light safe?" actually forces a whole lot of other questions. Can a user look directly into this light or not? Or will it only be exposed on the skin? What wavelength is it? Is it in the UV, is it visible, is it infrared? Is it pulsed or is it continuous?
All of these questions have to be answered before you can decide what measurement you should make, and then inform your decision about how are you going to protect people, or whether there's even an issue there to worry about. And the scientists at MSL understand all the various technologies-- how you can make measurements, for example, of light-- and navigate that difficult-- What am I trying to achieve? And what's the best solution, so that I can make a good decision at the end with the data that I receive?
Another quick example, if you take a chicken out of the oven, you might ask, what temperature is it? Well, again it depends what's the purpose of your measurement? If you want to know whether you will get burned by touching it, we need to know the surface temperature and we need to know how well chicken conducts heat. But if you want to know, is it safe to eat? We need to obviously measure the core temperature and you might use completely different technologies, different sensors, to answer a single question, what's the temperature? And again, that's where MSL is really able to help you think through the purpose of measurement, and what's the best solution for you.
So just before I finish, I want to flag a couple of issues that we see coming in measurement. The first one is the proliferation of smart systems. Smart cities, smart farms, smart lighting, smart houses. Everything's getting smarter. And what that means is that we are yielding decision making to systems, based on data that they have collected from, usually, some system of sensors. It may be a sensor in the road detecting, via a magnetic measurement, whether a car is in a car park. It may be light sensors checking how many people are passing a certain point. Maybe cameras in an airport, or temperature sensors in a home. All of these data are collected so that we can make decisions -- or rather, so that some autonomous system can make a decision.
And obviously the risk around decision making depends on the accuracy of the data that's being fed into it. Whenever we build these systems, we need to be able to answer questions like: is this sensor fit for purpose? Is it measuring what we think it's measuring? Has it been calibrated? How stable is this sensor over time? Is it going to be affected if next door turns into a construction site, or if we put a cable through the wall behind it? And to answer all those questions, you need to technically understand the technology, and measurement, and where errors can creep in. And so that's something that we're working quite hard on.
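The "how stable is this sensor, and is it fit for purpose" question can be illustrated with a trivial drift check against a calibrated reference. This is only a toy, not an MSL calibration procedure; the readings, reference value, and tolerances are invented.

```python
def drift(readings, reference):
    """Mean offset of a sensor's readings from a calibrated reference
    value -- a crude measure of how far the sensor has wandered."""
    return sum(r - reference for r in readings) / len(readings)

def fit_for_purpose(readings, reference, tolerance):
    """Is the sensor's average error within what the application can bear?
    The tolerance comes from the decision the data feeds, not the sensor:
    a home thermostat and a milk pasteuriser demand very different ones."""
    return abs(drift(readings, reference)) <= tolerance
```

The point the talk makes is hidden in the `tolerance` argument: the same sensor with the same drift can be fit for one purpose and dangerously unfit for another.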
The next thing that we are starting to really sweat about a little bit is the future of measurement. What's coming? Sensors now are taking advantage of advances in physics, in terms of understanding the quantum world, and other technological developments to miniaturise, to digitise, to put sensors into complex systems that might be, for example, wearable devices that are monitoring our health. Or devices that might claim to improve educational outcomes by measuring the conditions of a classroom. And so at MSL, we're also investing in what's coming in the future. We have scientists working on measuring or modelling single photons. Measuring single electrons. Quantum cryptography. Remote calibration. Can we do some of this hard measurement work over communications systems and the internet of things?
So I hope I've given you a picture of the depth of capability we have at MSL. And I hope that, if you do have a measurement problem that you're trying to solve, that you'll give us a call, because we'd love to talk about it. And our scientists really have a very deep and intimate knowledge of all things measurement.