This week I tried another approach to writing my reading reflection. Previously I wrote a review after reading the text entirely, which could be too abstract or vague, or sometimes additive, not really referring to the textbook. So today I tried to write down what I learned or questioned during the reading itself. The disadvantage is that it is too detailed and the passage has no single overall subject. The advantage is that it reflects the reading in more detail. So I want some feedback about which is better. If you are a kind reader, please leave a comment about which you prefer.
1. Because iteration is basically a circular process, it can scare managers. A usability report showing that each iteration is getting better can help relieve this worry for managers.
2. Setting a quantitative target level for a UX measure seems too arbitrary to me, because it then poses the difficult problem of how to set the target value. I think common sense may work better in this case.
3. I learned a lot from reading “Not all errors are created equal”. I had doubts about relying only on the quantitative aspect of performance evaluation. There are various kinds of errors, and it is impossible to eliminate them all. For example, voice dictation can produce many errors. But rather than reviewing and correcting all the errors, just showing the memo with its errors can be the better solution, because users are usually busy when they make a voice memo and they usually have no problem recognizing the content despite the errors.
4. When to stop improving was a nice question during the Apple CTO presentation. Even though the book mentions this problem, it offers only general remarks, like stopping when the team feels it is enough or when the budget and time run out.
5. The formative data analysis method by Whitney Quesenbery was very interesting to me, because what I felt was the hardest part was gathering all the stakeholders and having them watch the video patiently. It would be perfect if only I could do that.
6. “We suggest bringing in the problem analyst as early as possible, especially if the analyst is not on the data collection team.”
“We set up the evaluator team to ensure that someone with the requisite designer knowledge will be present during the evaluation session to include that information in the UX problem instance content that we now need in this transition to data analysis.”
These sentences really worry me. I think my domain and the author's domain do not overlap. Maybe it is because I am a follower of the Cooper and IDEO methods, which usually use a team of 2~5 members for the design. By my standard, there is no such thing as a problem analyst, a data collection team, and a designer. The designer learns about the problem through user research and collects the data himself/herself, because he/she/the team wants to observe the non-verbal, subjective interaction firsthand. However, there seems to be a trend toward this specialization of roles, which is reflected in the Persona usage paper too.
7. The distinction between critical incidents and UX problem instances was quite new to me, although it did not look very useful.
8. I learned how to decide priority among UX problems systematically. The breakdown into factors like importance and cost, and calculating their ratio, was new to me. Distinguishing group cost from single cost was refreshing, and putting the UX problems into quadrants was interesting, too. Adding actual cost estimates so that we can get feedback is clever, even though it might be a little sophisticated. The cumulative cost and the line of affordability were a simple but effective decision method.
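The ratio-and-cutoff idea above can be sketched in a few lines of Python. Note that the problem names, the numbers, the importance/cost ratio as the priority formula, and the budget value are all my own illustrative assumptions, not the book's exact definitions:

```python
# Hypothetical UX problems: (name, importance, estimated fix cost in person-hours).
# All values below are made up for illustration.
problems = [
    ("confusing label", 8, 2),
    ("broken undo", 9, 12),
    ("slow search", 5, 6),
]

# Priority ratio (assumed here as importance / cost):
# high importance and low cost means fix it first.
ranked = sorted(problems, key=lambda p: p[1] / p[2], reverse=True)

# Walk down the ranked list, accumulating cost until we cross
# the "line of affordability" (the hypothetical budget below).
budget = 10
cumulative = 0
to_fix = []
for name, importance, cost in ranked:
    if cumulative + cost > budget:
        break
    cumulative += cost
    to_fix.append(name)

print(to_fix)  # the problems we can afford to fix, in priority order
```

Running this with the toy numbers above, the cheap-but-important label fix ranks first, and the expensive undo fix falls below the affordability line.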
9. Reading chapter 17, the author clearly states that these reports are for internal use, and that if they are meant for people other than the design team, one should be careful about how they are used. Reading this, I began to think that maybe I am focusing on only a small domain of interaction design, where a small design team completes the interaction design part and passes it to the dev team. There may be other domains where the engineering part holds the key, or large organizations where the role of design is limited to changing existing designs. For example, if I were an interaction designer at Google and wanted to change labels that I find confusing, I could not just tell them to change it. I would have to gather evidence to support my claim, like user testing with 100 people showing that it is really confusing. I am beginning to drop my prejudice and be open-minded about accepting new learnings from the book. It is quite refreshing how reading books changes one's understanding.
10. I learned that there are industry standards for usability reports by ANSI and ISO. However, these are for summative reports. Even though most usability reports are formative, the scope, audience, and format differ so much between reports that there is no standard for formative reports yet.
11. An individual problem report may contain the following content:
- the problem description
- a best judgement of the causes of the problem
- its severity or impact
- suggested solutions
12. The section about “introducing UX engineering to your audience” is general, but it contains practical advice for the situation where you have to explain and sell your role to get support. Even though UX has become popular nowadays, many other teams may not be familiar with the concept yet, so starting by teaching the concepts is a good approach.
13. There is much practical advice on how to survive office politics, in the manner of “How to Win Friends and Influence People”. Even though it is general and a little out of the scope of this book, it is useful. I began to see this book as a handbook that tries to explain every detail. It may not be pleasant reading, because if you already know something, it can feel too verbose. But it will be useful as a reference when you would like to make a checklist.
14. The section about how to effectively convince others and sell a UX problem was interesting and useful. After all, if you are not Steve Jobs, you have to convince others about the problem to get it fixed. Because it is a hot issue, there is no given answer. Some prefer to receive design recommendations, even though they do not actually use them. Also, giving an estimated effort to fix a problem was not effective.
15. Even though I knew that delivering a report alone will not make any change, I would like to emphasize it again: you should explain it again and again in face-to-face meetings.
So that’s all! Thanks for reading.