After proving that a simpler scoring experience was possible, the product team at Taskstream set out to develop a simpler way to manage assessment processes across campuses.
Taskstream has been a leader in the assessment software space for the past 15 years. Their Learning Achievement Tool is a robust solution that supports a wide array of use cases. Such flexibility comes at a cost, however: it can easily generate unnecessarily complex interaction paths for users.
After partnering with AAC&U to design a simpler scoring experience for the MSC project, Taskstream had a unique opportunity to bring that simplicity to campuses across the country, so the product team set out to create an easier way of managing assessment processes across campus.
To create a truly simple solution for assessment on campuses, we first needed to map out the problem. We conducted exploratory user research to better identify our users' needs and to explore how users currently dealt with the problem, both inside and outside the existing system.
We used the following exploratory methods:
We synthesized our research using several different models:
Help build a better culture of assessment on campus: Meet them where they are
We realized that while many assessment coordinators were willing to deal with the inherent complexity of assessment, most faculty viewed the process as meaningless extra work. Helping assessment coordinators get faculty to buy into the assessment process and engage with it therefore became a key goal.
Proposed approach:
‘Meet them where they are’ in terms of engagement.
“I’ll take whatever they [faculty] can give me”
A guided experience: There’s gotta be a better way, the Taskstream way!
Of all the emerging patterns, one really struck us: several power users mentioned feeling unsure whether they were using the system ‘the right way’. The lack of positive feedback loops and the minimal sense of completion in the current system left our users with an underlying feeling of distrust and lack of control. Creating a guided experience that provides positive reassurance and boosts users' confidence, while remaining flexible enough to keep them in control, became one of our main tenets.
Proposed approach:
“There’s gotta be a better way…”
We proposed a solution based on three tenets: the concept of a ‘project’ as a wrapper for assessment activities, a navigation structure to support that concept, and a pattern library that would allow us to embody our guiding principles throughout the interface.
The concept of a Project as an organizing narrative.
Making the project narrative the backbone of our UX.
Patterns to support guidance
We wanted to create a simple workflow that mapped to these key questions: “What are we measuring? Where are we collecting the evidence? How are we measuring it? Who is evaluating the evidence?”
We had a great foundation for the scoring experience, but we iterated on it to support additional use cases, like scoring an assignment with more than one rubric and scoring multiple files within a single assignment.
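As a rough illustration of what those use cases imply for the data (a TypeScript sketch with hypothetical names; the case study doesn't describe Taskstream's actual schema), rubrics attach to the assignment while files attach to the submission:

```typescript
// Hypothetical shapes, invented for illustration: an assignment may
// carry several rubrics, and a submission may carry several files.
interface Score {
  rubricId: string;
  criterionId: string;
  value: number;
}

interface Assignment {
  id: string;
  rubricIds: string[]; // more than one rubric may apply
}

interface Submission {
  id: string;
  assignmentId: string;
  files: string[]; // multiple files scored under one assignment
  scores: Score[]; // entries span every rubric attached to the assignment
}
```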
All of those phases – the project setup, the collection of work and the scoring – are critical points of engagement, but are ultimately means to an end: steps to get to meaningful data about student learning outcomes. So we sought to remove the friction at each of those steps and offer a payoff at the end, in the form of intuitive, interactive reports about student performance.
Originally, the MVP database was structured around capability modules, making each area of the product – managing submissions, scoring, and reporting – a separate feature that then needed to ‘load’ the proper project data.
The research around navigation made it clear that users organized their mental model around one project at a time, so we created a navigation strategy to support that narrative.
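To make the contrast concrete, here is a minimal sketch (again in TypeScript, with hypothetical names and a hypothetical endpoint) of a project-centric model, where every assessment activity hangs off a single Project record and shares one load path, rather than each capability module loading its own slice of project data:

```typescript
// Hypothetical project-centric model: one Project record wraps all
// assessment activities, matching the "one project at a time" mental
// model instead of the module-centric MVP structure.
type Phase = "plan" | "manage-evidence" | "assess" | "analyze";

interface Project {
  id: string;
  name: string;
  phase: Phase;
  outcomeIds: string[];        // what are we measuring?
  evidenceSourceIds: string[]; // where are we collecting the evidence?
  rubricIds: string[];         // how are we measuring it?
  evaluatorIds: string[];      // who is evaluating the evidence?
}

// A single entry point returns everything the UI needs for a project,
// so the submissions, scoring, and reporting views share one load path.
async function loadProject(id: string): Promise<Project> {
  const res = await fetch(`/api/projects/${id}`); // hypothetical endpoint
  if (!res.ok) throw new Error(`Failed to load project ${id}`);
  return (await res.json()) as Project;
}
```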
We explored concept grouping with an open card sorting exercise in person, and then tested the proposed IA with a closed card sorting exercise online.
We used these insights to inform our layout: cards on the homepage show key info for each project and let users track project status and scoring progress, while the ‘project layout’ navigation structure aligns with the assessment narrative: Plan – ID/Manage Evidence – Assess – Analyze.
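As an illustration of what such a card surfaces (a hypothetical React/TypeScript sketch; the case study doesn't specify the front-end stack), each card carries a project's name, its place in the assessment narrative, and its scoring progress:

```tsx
import React from "react";

// Hypothetical summary shape for one homepage project card.
interface ProjectCardProps {
  name: string;
  phase: "Plan" | "ID/Manage Evidence" | "Assess" | "Analyze";
  scored: number;   // submissions scored so far
  expected: number; // total submissions expected
}

// Each card shows key info for one project: where it sits in the
// assessment narrative, and how far scoring has progressed.
export function ProjectCard({ name, phase, scored, expected }: ProjectCardProps) {
  const pct = expected > 0 ? Math.round((scored / expected) * 100) : 0;
  return (
    <section className="project-card">
      <h3>{name}</h3>
      <p>Current phase: {phase}</p>
      <progress value={scored} max={expected} />
      <p>{pct}% of expected work scored</p>
    </section>
  );
}
```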
Here are some examples from the library of patterns and best practices we’ve built.
Immersiveness
We propose immersiveness as a way of supporting our principle of ‘guided experience’. We use it in two ways:
Full-screen mode:
To keep the user focused on the most important task at hand (e.g., scoring student work)
Sliding Panel:
To allow the user to open and complete secondary tasks, then easily return to the point where they left off in their primary workflow (see the sketch below).
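As a hypothetical sketch of the sliding panel behavior (assuming React; this is our illustration, not Taskstream's production code), the essential property is that the primary task stays mounted, its state intact, while the secondary task opens alongside it:

```tsx
import React, { useState } from "react";

// Hypothetical sliding-panel wrapper: the primary workflow is never
// torn down while a secondary task is open, so closing the panel
// drops the user exactly where they left off.
export function Workspace({
  primary,
  secondary,
}: {
  primary: React.ReactNode;
  secondary: React.ReactNode;
}) {
  const [panelOpen, setPanelOpen] = useState(false);
  return (
    <div className="workspace">
      {/* Primary task stays rendered, preserving its state. */}
      <main>{primary}</main>
      <button onClick={() => setPanelOpen(true)}>Open secondary task</button>
      {panelOpen && (
        <aside className="sliding-panel">
          {secondary}
          <button onClick={() => setPanelOpen(false)}>Done</button>
        </aside>
      )}
    </div>
  );
}
```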
Inviting empty states
Every element requiring user input has a clear empty state (never an error message, as was common practice in the legacy system) that invites the user to provide that input.
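A minimal sketch of the pattern (a hypothetical React component and copy, for illustration only): when required input is missing, render an invitation with a clear call to action rather than an error:

```tsx
import React from "react";

// Hypothetical empty-state pattern: missing input is treated as an
// invitation to act, never as an error, unlike the legacy system.
export function RubricList({
  rubrics,
  onAdd,
}: {
  rubrics: string[];
  onAdd: () => void;
}) {
  if (rubrics.length === 0) {
    return (
      <div className="empty-state">
        <p>No rubrics yet. Add one to define how you’ll measure this outcome.</p>
        <button onClick={onAdd}>Add a rubric</button>
      </div>
    );
  }
  return (
    <ul>
      {rubrics.map((r) => (
        <li key={r}>{r}</li>
      ))}
    </ul>
  );
}
```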