Once you understand where the bottlenecks in your pipeline are occurring, it's time to decide where improving the tools used in that process can help, and what improvements should be made.
There are a variety of methods for testing usability, including surveys, user interviews, inspection techniques, thinking aloud protocol, and eye tracking.
Some techniques are difficult to implement, or give subjective results that must be interpreted by an expert, while others give a more formal score that just about anyone can use to gauge the usability of a product.
Using the simpler techniques before investing in the more difficult ones will give you a good sense of where to concentrate your efforts.
The System Usability Scale (SUS) was developed at Digital Equipment Corporation (DEC) to measure usability. It is probably one of the simplest methods to implement because it is survey-based.
Participants are asked to rate the ten statements shown in the figure below on a scale from "strongly disagree" to "strongly agree". Each response maps to a value from 0 to 4; for even-numbered items the scoring is reversed.
An overall score from 0 to 100 is derived by adding the individual values together and multiplying the sum by 2.5.
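The scoring described above is simple enough to express in a few lines. Here is a minimal sketch (the function name is my own) that turns ten questionnaire responses into an SUS score:

```python
def sus_score(responses):
    """Compute a System Usability Scale score from ten responses.

    responses: ten integers in questionnaire order, each from
    1 ("strongly disagree") to 5 ("strongly agree").
    """
    if len(responses) != 10:
        raise ValueError("SUS requires exactly ten responses")
    total = 0
    for i, r in enumerate(responses):
        if i % 2 == 0:       # odd-numbered statements (1, 3, 5, ...)
            total += r - 1   # maps 1-5 onto 0-4
        else:                # even-numbered statements: reversed scoring
            total += 5 - r   # maps 5-1 onto 0-4
    return total * 2.5       # scale the 0-40 sum to 0-100


# A user who strongly agrees with every positive statement and strongly
# disagrees with every negative one scores a perfect 100.
print(sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))  # 100.0
```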
Software in the 70-80 range has average usability. If your software ranks higher than that, you may want to look elsewhere for savings. If it scores less than that, there's a good chance you can get significant savings in your development time by investing in usability.
The Single Usability Metric (SUM) is a bit more complex to implement than SUS, since it requires that a user complete three to four common tasks with the software, while a moderator scores how each task is completed. The final score is a combination of task completion rates, task time, error counts and user satisfaction.
Task completion is the ratio of successfully completed tasks to attempts. Error rate is measured by breaking each task down into subtasks, called "error opportunities," and counting how many of those steps are performed successfully.
Satisfaction is measured with a simple three-question survey given after each task, asking the user to rate how easy the task was, how long it took, and how satisfied they felt using the tool.
Task time is just the time it takes to complete each task compared to an ideal time, which can be derived using the task times of the users who reported the highest satisfaction.
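To make the combination concrete, here is a simplified sketch of how the four components might be folded into one score. Note this is an illustration under my own assumptions, not the published SUM procedure (which standardizes each measure statistically before averaging); the function and parameter names are hypothetical:

```python
def sum_score(completion_rate, error_free_rate, satisfaction,
              task_time, ideal_time):
    """Combine four per-task usability measures into one 0-1 score.

    completion_rate: successful completions / attempts, 0-1
    error_free_rate: error opportunities passed without error, 0-1
    satisfaction:    mean of the three post-task ratings, rescaled to 0-1
    task_time:       observed time for the task (seconds)
    ideal_time:      target time, e.g. derived from the most
                     satisfied users' task times
    """
    # A task finished at or under the ideal time scores 1.0;
    # slower tasks score proportionally less.
    time_score = min(ideal_time / task_time, 1.0)
    # Simple average of the four normalized components.
    return (completion_rate + error_free_rate + satisfaction + time_score) / 4
```

A task completed by everyone, without errors, at the ideal time, with full satisfaction scores 1.0; weaker tasks score lower and stand out as candidates for improvement.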
Because SUM measures usability at the task level, you can use this method to determine which features should be looked at for improvement. Combining this info with thinking aloud protocol -- sitting with someone using the tool while they describe what they are thinking -- will give more detail on how to fix issues.
Of course, overall usability should be the goal, and a redesign of the interface based on users' goals will often give much better results than targeting specific functionality.
The simplest tools give the user the ability to edit data. Your favorite text editor gives you at least this much functionality. Not coincidentally, it also has the widest scope, since any type of data that can be represented as text can be edited.
These tools are very unforgiving, since nothing protects users from themselves. Iteration is key for their users: it is easy to make mistakes, and when errors surface at runtime, the bad data may be extremely difficult to find. Users may spend hours looking for the number with the wrong decimal place or the text with the incorrect capitalization.
The first stop for many game development tools, and often the last, is one that adds a layer of protection against user error by limiting input to valid values. This is a major step forward: developers can create game assets that are less prone to error, and designers and artists can begin to feel safe in the knowledge that "at least it will run".
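"Limiting input to valid values" can be as simple as rejecting out-of-range data at edit time rather than letting it surface as a runtime bug. A minimal sketch, with illustrative names of my own, using a Python descriptor as the validation layer:

```python
class RangedFloat:
    """Descriptor that only accepts floats within [lo, hi]."""

    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __set_name__(self, owner, name):
        self.name = "_" + name  # backing attribute on the instance

    def __get__(self, obj, objtype=None):
        return getattr(obj, self.name)

    def __set__(self, obj, value):
        value = float(value)
        if not (self.lo <= value <= self.hi):
            # Bad data is caught the moment it is entered,
            # not hours later when the game misbehaves.
            raise ValueError(
                f"{self.name[1:]} must be in [{self.lo}, {self.hi}]")
        setattr(obj, self.name, value)


class SpawnPoint:
    # Designers can only enter a radius the game can actually handle.
    radius = RangedFloat(0.0, 100.0)

    def __init__(self, radius):
        self.radius = radius
```

With this in place, `SpawnPoint(200.0)` fails immediately with a clear message instead of shipping a broken asset.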