Product Owner Case Study #4: Survey Reporting

This post is about my software product Hyperion. You can read more about it here.

The Original Design

The very first feature of Hyperion was the custom survey. An admin account comes pre-loaded with a survey that asks the questions required by the Department of Housing and Urban Development (HUD), but you can also edit and extend it.

These pre-loaded questions power other Hyperion features. For instance, the answers to the “initials” and “age” questions are used as basic summary information throughout the app.

The biggest win we provide our customers is bypassing the manual data entry required with paper forms. Another big win is automatically calculating the HUD numbers. The questions we ask have all the information we need, so we just need to parse the answers, right?

The Problem

Almost all my customers (for the first year) heavily edited the initial survey. For example, they would delete the “initials” question and replace it with one asking for the person’s full name. Or they would take the “age” question and change it from a number field to a date field, and change the wording to ask for the person’s date of birth.

These changes broke things. Sometimes they failed silently, skewing the reporting numbers without the administrators realizing anything was wrong.

Luckily, I only had 15 organizations using Hyperion for the first year, and I saw this issue coming a few months before the live Point-in-Time (PIT) counts. As soon as I saw one person experience the issue, I realized my mistake and looked into all the accounts.

I reached out to each of the users personally, apologizing and explaining the issue and what needed to happen to fix it. I tried to understand the intent behind the changes to support the functionality they wanted while ensuring everything still worked.

Obviously, I never should have been in this position. People shouldn’t be able to make changes that break things. It was a miss.

I didn’t have time to rearchitect Hyperion at that point. The PIT count was coming up, and all my customers had already built their surveys anyway.

The Complication

I’m using a library called SurveyJS. It handles the survey editor, displaying the survey, and exporting the answers.

Because I’m using this library, I have less flexibility. Questions are identified by their “names” (really, SurveyJS identifies them that way), but those names are editable in the survey editor, and I can’t change that behavior.
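For context, a SurveyJS survey definition keys every question by its `name`, and the exported answers use those same names as object keys. A minimal sketch of the problem (the question names and titles here are illustrative, not Hyperion’s actual survey):

```typescript
// A SurveyJS-style survey definition: each question carries a "name",
// and exported answers are keyed by that name. If an admin renames
// "initials" to "fullName" in the editor, the result key changes too,
// and any code that looks up result["initials"] silently gets undefined.
const surveyJson = {
  elements: [
    { type: "text", name: "initials", title: "Initials" },
    { type: "text", name: "age", title: "Age", inputType: "number" },
  ],
};

// The shape of an exported answer set for the survey above:
const exportedAnswers: Record<string, unknown> = {
  initials: "JD",
  age: 34,
};
```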

The Solution

The question names are editable, and the only way I can identify questions is by their names. So what do I do?

Part 1

I add another (custom) step to the survey creation process where admins link the editable question names to the conceptual pieces of information I’m looking for.

They can keep the initial question names – or not. Regardless, I’ll have the information I need. I’ll just save that metadata and use it for interpreting the data later.
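That linking step could be sketched like this. The field names, mapping shape, and `getAnswer` helper are hypothetical, not Hyperion’s actual code; the one thing taken from the source is that SurveyJS keys exported answers by question name:

```typescript
// Conceptual fields the reporting code needs, mapped to whatever
// question names the admin chose in the survey editor.
type QuestionMapping = Record<string, string>;

// Example mapping saved during the custom setup step:
const mapping: QuestionMapping = {
  initials: "fullName",   // admin replaced "initials" with a full-name question
  age: "dateOfBirth",     // admin replaced "age" with a date-of-birth question
};

// SurveyJS exports answers as an object keyed by question name.
type SurveyResult = Record<string, unknown>;

// Look up a conceptual field through the mapping, so reporting code
// never depends on the editable question names directly.
function getAnswer(
  result: SurveyResult,
  mapping: QuestionMapping,
  field: string
): unknown {
  const questionName = mapping[field];
  return questionName !== undefined ? result[questionName] : undefined;
}

const result: SurveyResult = { fullName: "Jane Doe", dateOfBirth: "1985-04-12" };
getAnswer(result, mapping, "initials"); // → "Jane Doe"
```

The point of the indirection is that admins can rename questions freely; only the saved mapping has to stay current.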

Part 2

Before, each record was parsed automatically and the result fed into the aggregate calculations. This wasn’t transparent and could confuse people: data entered in a non-standard way could be interpreted incorrectly, and there wasn’t any visibility at the individual record level.

Going forward, I will add a “reconciliation” step. After the PIT count is complete, the administrators will go through each record and validate the assumptions and conclusions about it: that person’s age, race, gender, homeless status, and so on. In most cases these calculations will be automatic and correct. If admins have removed some required questions, they’ll have to enter those conclusions manually. Either way, they’ll have more visibility and control.
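The automatic-with-manual-fallback idea could look something like this. A sketch under assumptions: the `deriveAge` helper and `Conclusion` shape are hypothetical, and only the age conclusion is shown:

```typescript
interface Conclusion {
  value: number | null;
  needsReview: boolean; // true → the admin must confirm or enter it manually
}

// Try to conclude an age automatically: accept a plain number, or
// compute from an ISO "YYYY-MM-DD" date of birth; anything else is
// flagged for the reconciliation step.
function deriveAge(answer: unknown, asOf: Date): Conclusion {
  if (typeof answer === "number" && Number.isFinite(answer)) {
    return { value: answer, needsReview: false };
  }
  if (typeof answer === "string") {
    const m = /^(\d{4})-(\d{2})-(\d{2})$/.exec(answer);
    if (m) {
      const [year, month, day] = [Number(m[1]), Number(m[2]), Number(m[3])];
      let age = asOf.getFullYear() - year;
      const hadBirthdayThisYear =
        asOf.getMonth() + 1 > month ||
        (asOf.getMonth() + 1 === month && asOf.getDate() >= day);
      if (!hadBirthdayThisYear) age -= 1;
      return { value: age, needsReview: false };
    }
  }
  // The question was removed or the answer is unparseable:
  // surface this record in the reconciliation UI for manual entry.
  return { value: null, needsReview: true };
}
```

Accepting both a number and a date of birth covers the two survey variants described above, while anything unrecognizable is surfaced instead of silently miscounted.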
