Putting experience design to the test in the MCD Insights Lab
Here at MCD Partners, customers inform everything we do: what they want, how they use a site, and what they need. We create journey maps, construct proto personas, and use any number of techniques to get into their mindset. But nothing is more enlightening than actually testing an experience with users. So we wanted to share some of the things we've learned over the years that help us make the most of our testing sessions.
The benefits of moderated testing
We use a number of tools that allow for remote quantitative testing, such as Validately and UserTesting.com, among others. This can be a quick and easy way to test at scale and to see whether users can understand a process or complete a given task. However, one key limitation of most remote user testing is the missing human factor: having a live moderator in the room with the participant often lets you gain insights that unmoderated sessions cannot.
For example, if a user appears confused, we can stop and probe further to find out what their concerns are. Why did they hesitate before continuing? What are they unsure of? What is preventing them from answering a question or taking an action? A good moderator can identify opportunities to delve deeper and better understand user behavior.

Tips for preparing a test plan
One of the first things to prepare is a test plan that identifies the study goals and what you want to learn from testing. Think through which parts of the experience can benefit most from user input. If the test includes tasks for users to complete, nail down which screens users will need to see and interact with.
Generate a list of questions you want to ask users; this will later be shaped into a moderator's guide with a more formal plan for how the sessions will be run. Work with the client to identify the desired audience for the testing and the specific screener questions. Are you after any and all users, or is there a specific set of users this new experience is aimed at? How do we describe those users' demographics? Can we screen for them to come in for testing?

Building a prototype
When testing multiple page designs, we tend to use tools like Marvel or InVision to link together a series of designs or wireframes. This works quite well for most concept and design validation testing, as long as the prototype doesn't require users to type or provide much other input. Users can easily imagine typing out simple information such as their name or email address, so a shortcut where the user taps a field and the prototype advances to a screen with that field already filled in is completely acceptable.
For more complex experiences, a fully developed HTML prototype with working inputs and JavaScript works better. We tend to use HTML prototypes for interactions where it is important for users to see how their input and actions change the experience, such as calculators or payment flows. These interactive prototypes are ideal for testing since they are very close approximations of the final developed product. This can really help with design validation, as users can get themselves into all sorts of unexpected places that a clickable static prototype can't reach. Things like turning optional fields on and off, problems with mobile keyboards or specialized inputs, and error handling can only be observed through an HTML prototype.
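To make that concrete, here is a minimal sketch of the kind of live calculator logic an HTML prototype can exercise and a static clickable prototype cannot. It's plain JavaScript; the function names, validation rules, and numbers are illustrative assumptions, not taken from a client build:

```javascript
// Minimal sketch: working input logic for a payment-calculator prototype.
// Function names, validation rules, and figures are illustrative only.

// Fixed monthly payment for a simple amortized loan.
function monthlyPayment(principal, annualRatePct, months) {
  const r = annualRatePct / 100 / 12; // monthly interest rate
  if (r === 0) return principal / months;
  return (principal * r) / (1 - Math.pow(1 + r, -months));
}

// Validate raw form input the way a working prototype would,
// so participants hit real error states instead of imagined ones.
function validateAmount(raw) {
  const value = Number(raw);
  if (raw.trim() === "" || Number.isNaN(value)) {
    return { ok: false, error: "Please enter a valid amount." };
  }
  if (value <= 0) {
    return { ok: false, error: "Amount must be greater than zero." };
  }
  return { ok: true, value };
}

// A participant typing "25,000" on a mobile keyboard triggers the same
// error handling a real build would. That is the behavior we watch for.
console.log(validateAmount("25,000")); // { ok: false, error: "Please enter a valid amount." }
console.log(monthlyPayment(25000, 4.5, 60).toFixed(2)); // "466.08"
```

Because this logic actually runs, sessions surface the input quirks and error states described above rather than a scripted happy path.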

Setting up the lab
Our typical lab setup involves a few HD cameras to capture the participant and how they are handling the computer or mobile device. We tend to use Silverback for Mac to capture computer input. For mobile, we run an HDMI dongle from the device into a video mixer. We record and stream a video combining the audio from the session, the screen of the device or computer, and video of the user and their interactions with the device. This mix of sources gives us a single view with all the relevant information.
Our lab has an observation area behind mirrored glass, and we also stream our tests to allow clients to participate remotely. Typically we will have one designated person in the observation area take all client input and questions, both remote and local, and then feed those questions to the moderator. This helps us keep the stream of messages the moderator sees during the test to a minimum, while also allowing client concerns to be heard.
Choosing a moderator
For our formal testing we try to use a third-party moderator to avoid any internal bias. We brief the moderator on user needs, the functions of the prototype, and the test plan. We then generally have the moderator draft their own guide for how they see the test going. MCD and the client both provide feedback on that draft, working toward a final approved guide for the test.

Deciding what you want to learn
With concept testing, we take two or more concepts into the lab to learn user preferences. During the test we ask the user about their preferences, have them compare and contrast, and probe for what they like and why. We also rotate the order in which the concepts are presented, and we alternate devices so that mobile and desktop experiences are shown equally across sessions. After testing, we often find that specific elements from different concepts resonate with users, so our final recommendation may combine the best elements of each concept.
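As a sketch of how simple that rotation can be, here is one hypothetical way to counterbalance concept order and starting device by participant number (the concept labels and session count are made up for illustration):

```javascript
// Hypothetical counterbalancing sketch: rotate concept order and
// device order across sessions so no single ordering dominates.
const concepts = ["Concept A", "Concept B"];
const devices = ["mobile", "desktop"];

function sessionPlan(participantIndex) {
  // Flip which concept is shown first from one participant to the next.
  const conceptOrder =
    participantIndex % 2 === 0 ? concepts : [...concepts].reverse();
  // Flip the starting device at half the rate, so every combination
  // of first concept and first device gets covered.
  const deviceOrder =
    Math.floor(participantIndex / 2) % 2 === 0 ? devices : [...devices].reverse();
  return { conceptOrder, deviceOrder };
}

// Eight sessions cover each of the four ordering combinations twice.
for (let i = 0; i < 8; i++) {
  const { conceptOrder, deviceOrder } = sessionPlan(i);
  console.log(
    `P${i + 1}: ${conceptOrder.join(" then ")} starting on ${deviceOrder[0]}`
  );
}
```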
Design validation testing helps us understand whether user-identified needs are being met. We will take a single design, with as many pages of the experience as needed to meet that user need. For example, if the need is the ability to schedule an automatic credit card payment, this type of testing entails taking users through the process of enrolling in automatic payments and then verifying that their need was met. We also look for opportunities to improve the experience by targeting the areas users struggled with.
We recommend multi-day testing
When we plan for testing, we typically recommend at least two days of testing, with a day off in between. This allows us to apply learnings from the first day to our prototypes, smooth out any rough spots, or introduce new concepts for the second day.
At the end of each test day, the MCD team and our clients discuss what went well, what didn't, and what we should improve for the next day. That could mean changing the order of screens, adding or removing elements from the experience, or adjusting language.
There have been times when a core experience tested well enough on the first day that we used the second day to focus on just a portion of the original test, to get deeper insight into that area. The idea is to spend the time where it will be most valuable.

Wrapping it up with a final report
Our final deliverable is a report written by the test moderator, along with saved copies of the session videos. Depending on the type of testing, this can include a recommendation on the preferred concept or detailed findings covering what was successful and what could be improved.
