University College London

Sentire

What can others learn from the origins of, thinking behind, and design of your technique or tool?: 

The origins of the research question and the eventual technique are tied to this conundrum:

We strive to design for users; however, “users” can be an extremely scarce resource during the design process.

Furthermore, involving users only at occasional, pre-defined milestones may be a risky move: what if a sub-optimal decision has already been taken (resulting in major re-work)? What if the issue is never uncovered at all? And the usual question: what is the right number of test participants?

UX issues are highly accentuated in e-government services: users have no other choice, as there is no alternative provider. This may result in non-adoption (a risk for governments – low ROI and political retaliation) and penalties for the citizen (a risk for citizens). Where use is compulsory, resentment ensues (a risk for both).

At this point we started to ask whether an elegant solution to this problem exists.

Solution 1 (unrealistic): For each new project, invite a statistically significant number of users to participate in each and every design meeting. This would allow us to obtain their feedback immediately (synchronously) at each design decision.

Problem: This scenario is prohibitive, both financially and logistically. A more practical solution – one to which UX practitioners are accustomed – is required.

Solution 2 (current): Represent “the user” during requirements and design meetings with user-centered techniques and tools, including, but not limited to, personas. Occasional meetings with end-users help us confirm or reject decisions taken throughout the process (major rework is always a risk).

Problems:

  1. How would one define “occasional” given the costs and time involved – what is the correct number of user-review meetings?
  2. Personas as user archetypes help direct and inform the design discourse during the early stages; however, this approach relies on heuristics, is highly subjective, and depends strongly on the designers’ (contractor’s) experience with these tools.
  3. In the e-government scenario it is uncommon to find in-house UX specialists (although some countries have invested in this). Furthermore, when contracts are awarded competitively, UX is generally one of the first items dropped in order to qualify as the lowest bidder, mainly because there are no concrete and **measurable** ways to define UX requirements (as opposed to usability requirements).
  4. Even if UX is considered, knowledge gained stays with the contractor and is not shared across government departments/entities.

This led us to the following questions:

1) How can we bridge the gap between Solution 1 and Solution 2?

  • Can we create a systematic requirements and design process that places the user and UX at its heart, as part of the workflow, rather than as an afterthought?
  • Can we simulate user feedback at design time and for each design decision in real-time? This would help us understand the users’ lived experience through simulations based on pre-generated user models, and without the need to get users into the design room for each and every design decision being considered.
  • This might not be possible for every aspect of design; however, we can start by looking at critical design decisions. By critical we mean those elements, such as enrolment, that can have a significant impact on the lived experience due to excessive workload and which may also lead to non-adoption.
  • Finally, can we develop a way to specify measurable and verifiable UX requirements?

 

2) Can we store and re-use UX knowledge?

  • We believe that a systematic process for designing acceptable services and processes must also build on previous knowledge.
  • E-government projects are commonplace across departments and entities, and we believe that knowledge accumulation and re-use is a critical success factor.
  • Knowledge on user behavior is extremely valuable, and storing it for future re-use in e-gov projects is equivalent to building a UX-oriented national reserve.

These questions were the foundation of Sentire – a requirements and design tool that ‘listens’ to Personas. It extends the Volere requirements process into a rigorous yet agile requirements elaboration, specification and design framework. It is built on the idea of Calibrated Personas – personas extended with behavioral models – which are stored in a library for cross-project and cross-entity re-use.
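To make the idea concrete, a Calibrated Persona can be imagined as an ordinary persona record enriched with a behavioral model and kept in a shared library. The sketch below is a minimal illustration in Python; all class names, fields, and the simple linear model are our assumptions for illustration, not Sentire’s actual implementation.

```python
from dataclasses import dataclass


@dataclass
class BehavioralModel:
    """Hypothetical behavioral model: linear weights mapping design
    factors (e.g. number of form fields) to a predicted response."""
    weights: dict          # factor name -> weight (assumed structure)
    intercept: float = 0.0

    def predict(self, design_factors: dict) -> float:
        # Sum of weighted factor values; unknown factors contribute zero.
        return self.intercept + sum(
            w * design_factors.get(name, 0.0)
            for name, w in self.weights.items()
        )


@dataclass
class CalibratedPersona:
    """A persona extended with a behavioral model (illustrative)."""
    name: str
    description: str
    model: BehavioralModel


class PersonaLibrary:
    """Cross-project store for calibrated personas, so models built for
    one e-government project can be re-used by another."""
    def __init__(self):
        self._personas = {}

    def add(self, persona: CalibratedPersona) -> None:
        self._personas[persona.name] = persona

    def get(self, name: str) -> CalibratedPersona:
        return self._personas[name]
```

Once a persona is calibrated and stored, a later project could retrieve it by name and query its model without repeating the calibration effort.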

What insights, outcomes, information, etc. does your tool or technique have the capacity to generate or illuminate that might have been harder or impossible to arrive at using existing tools and techniques?: 

Calibrated Personas are objective and statistically sound user behavioral models that explain attitudes towards specific design factors that may have a significant impact on the users’ experience.

Sentire adopts these models as part of the requirements and design process to simulate reactions to design alternatives represented by scenarios and use cases. Feedback is generated at the use-case level for each individual persona involved.

Different user groups (represented by different personas) may react differently to specific design decisions and Sentire provides quantitative insights that allow us to compare design alternatives and their performance across different user groups.

This introduces the concept of UX-analytics at design time as part of a systematic and iterative process which aims at shortening the feedback loop on critical design decisions at the earliest stages of a system’s lifecycle. This has the effect of ‘listening’ to the user during the requirements and design process, informing designers on the possible reaction towards specific designs in an objective and actionable manner. The user is now placed at the heart of decision making.
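Design-time UX-analytics of this kind could be sketched as a small simulation loop: each design alternative is described by a handful of design factors, and each persona’s behavioral model scores it, yielding a comparison table across user groups. Everything below – factor names, personas, and model coefficients – is invented for illustration; it is not Sentire’s actual scoring method.

```python
def simulate(personas: dict, alternatives: dict) -> dict:
    """Score each design alternative against each persona's behavioral
    model. `personas` maps persona name -> model function taking a dict
    of design-factor values; `alternatives` maps alternative name ->
    factor values. Returns alternative -> persona -> predicted score."""
    return {
        alt: {name: round(model(factors), 2)
              for name, model in personas.items()}
        for alt, factors in alternatives.items()
    }


# Hypothetical models: predicted workload as a function of form fields
# and required documents for an enrolment use case (invented weights).
personas = {
    "Maria (low digital literacy)":
        lambda f: 2.0 * f["form_fields"] + 3.0 * f["documents"],
    "Tom (power user)":
        lambda f: 0.5 * f["form_fields"] + 1.0 * f["documents"],
}
alternatives = {
    "single long form": {"form_fields": 20, "documents": 3},
    "wizard, reuse national ID data": {"form_fields": 6, "documents": 1},
}
scores = simulate(personas, alternatives)
```

A dashboard view would then surface, per use case, which user groups are likely to struggle with which alternative – here, both personas prefer the wizard, but the gap is far larger for Maria.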

How broadly can the results and outputs of your tool or technique be applied, and what are the limits of their validity and applicability?: 

User models are built through a calibration process, which requires several persona representatives to complete a small set of standard tasks during which various measurements are taken.

A larger number of participants would generate stronger models; however, a saturation point exists beyond which additional participants do not yield marginal benefits significant enough to justify the costs. So far, calibration for different user groups has involved 9 to 20 participants. We are currently studying the validity of supervised versus unsupervised calibration processes to see whether there is significant degradation in modelling power – this could help us determine situations where remote calibration would be possible.
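A minimal sketch of such a calibration step, assuming per-participant task measurements are simply averaged per measure (the actual modelling technique is not specified here), together with a naive saturation check on the marginal effect of one more participant:

```python
import statistics


def calibrate(measurements: list) -> dict:
    """Aggregate per-participant measurements (dicts of measure name ->
    value, e.g. a workload score) into a persona-level model -- here a
    plain mean per measure, purely for illustration."""
    keys = measurements[0].keys()
    return {k: statistics.mean(m[k] for m in measurements) for k in keys}


def has_saturated(measurements: list, new_measurement: dict,
                  tolerance: float = 0.05) -> bool:
    """Heuristic saturation check: does adding one more participant
    shift every aggregated measure by less than `tolerance` (relative)?
    The 5% threshold is an arbitrary assumption."""
    before = calibrate(measurements)
    after = calibrate(measurements + [new_measurement])
    return all(
        abs(after[k] - before[k]) <= tolerance * max(abs(before[k]), 1e-9)
        for k in before
    )
```

In practice one would recruit participants until `has_saturated` holds for a few consecutive additions, which is consistent with the 9-to-20-participant range observed so far.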

User models are stored in a persona library and re-used for other government projects, this time without the need for calibration. The initial calibration effort involves a structured process of understanding the user group's attitudes, and interesting insights may emerge that also inform the construction and evolution of project personas. The most significant benefits of calibrated personas are obtained when these models are re-used across different projects.

Nonetheless, occasional maintenance – re-calibration – might be beneficial due to changes in cultures and contexts. Calibration could be outsourced to research institutes or commercial entities interested in building such models.

We are currently focusing on the enrolment process as a critical success factor in e-government services. The current user-modelling technique (persona calibration) is used to build behavioral models that provide insights into the levels of:

  • Perceived workload 
  • Willingness to complete the task at hand (i.e. sign-up versus give-up).
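Purely as an illustration of how these two quantities might relate, a logistic link could map a persona’s predicted workload to a give-up probability during enrolment. The functional form and both parameters below are invented for the sketch, not calibrated values.

```python
import math


def give_up_probability(perceived_workload: float,
                        steepness: float = 0.8,
                        tolerable_workload: float = 6.0) -> float:
    """Hypothetical logistic link: probability that a persona abandons
    enrolment (gives up rather than signs up) given its predicted
    workload score. At `tolerable_workload` the odds are even; the
    steeper the curve, the sharper the drop-off beyond that point."""
    return 1.0 / (1.0 + math.exp(-steepness * (perceived_workload - tolerable_workload)))
```

Under these assumed parameters, designs whose predicted workload sits well below the tolerable threshold keep the give-up probability low, while workload above it quickly pushes the persona towards abandonment.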

 

Although our focus is on e-service enrolment we believe that this technique can be used as a design pattern for other critical design factors, such as checkout processes, security processes and so forth.

How might your tool or technique serve as the inspiration or starting point for future innovations?: 

By adopting Sentire, government entities (but also research and commercial design entities) could start building a knowledge base of user behaviour models which could be used to simulate reactions to specific design decisions across projects and across different user groups – from the earliest stages (i.e. requirements and design).

Sentire and Calibrated Personas do not replace heuristics and user-centered design tools but augment these by providing a systematic framework that captures user behavior and simulates reactions to critical design decisions (such as enrolment processes, but possibly also other factors such as pricing, security and so forth). This puts the user at centre-stage while mitigating the risk of late changes (or uncaptured issues) on those aspects that carry the highest risks in terms of consequence severity. Through Sentire we want to introduce the idea of design-time UX-analytics, providing dashboard-styled simulated feedback on the impact that different designs might have on the users’ experience – in a quantitative and actionable way.

Sentire gives users a voice throughout the requirements process, following a systematic path to understanding them while encouraging knowledge re-use.