
Monday 23 February 2015

Reading Seminar #2

Here are my notes for the second reading seminar! I have some points I really want to bring up and discuss, plus some things I want to highlight, hence the bullet points.

  • Conceptual models are best done with wireframes, which is what our team has been doing. They are really great at giving you an overview of the product. This is what we instantly did when tasked with designing the website for our conventional design from the last exercise (the Route Planner thingy). (chapter 11, p. 409)
  • What functions will the product perform? The course literature has an interesting scenario in which a travel service is discussed. The authors point out how difficult it is to draw the line and say: this is where we stop letting our program/service take on more tasks. It is all about defining the boundaries. Deciding this is called task allocation. (chapter 11, p. 408)
  • After the design stages we will need to evaluate our designs and get feedback from users of our prototypes, so it is good to have an iterative workflow to rely on. The course literature gives one good example of such a workflow: first, do a field study, get some early feedback, and make some design changes. Then test those changes in a usability test with users, go back out on a field study, and finish with one last design-change phase. This is something we might do with our designs.
  • The DECIDE framework for evaluation seems to be the way to go. The framework consists of the following points. One main thing to take away is that the order does not matter: it is considered iterative, and you can move backwards and forwards between the steps. I for one think this is a good starting point for the next step in our designs.
    • Determine the goals.
      • Who wants it and why?
      • High-level goals?
      • Determine the scope of the design.
    • Explore the questions.
      • Why are the trends as we see them?
      • How is X more/less/etc for the users?
      • Be specific: are the menus difficult to navigate? Is there not enough feedback? Et cetera ...
    • Choose the evaluation methods.
      • What data do we want, and what data do we already have?
      • How do we want our data?
      • Theories? Frameworks?
    • Identify the practical issues.
      • Pilot study!
      • Unpredictable events/consequences 
    • Decide how to deal with the ethical issues.
      • See ethical codes
      • Privacy, etc ...
    • Evaluate, analyze, interpret, and present the data.
      • Reliability: how well the method produces the same results on different occasions.
      • Validity: does the evaluation method measure what it was intended to measure?
      • Ecological validity: how the evaluation setting itself might distort the results. The placebo effect is a classic example in this subcategory.
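To make the reliability point above concrete, one common way to check it is test-retest: run the same evaluation twice and correlate the measurements. Below is a minimal sketch of that idea; the `pearson` helper and all the task times are invented for illustration, not data from our project.

```python
# Sketch: test-retest reliability as the Pearson correlation between
# task completion times from two evaluation sessions.
# All data here is made up for illustration.
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Task times (seconds) for five participants, measured on two occasions.
session_1 = [42.0, 55.0, 38.0, 61.0, 47.0]
session_2 = [40.0, 58.0, 35.0, 65.0, 45.0]

r = pearson(session_1, session_2)
print(f"test-retest correlation: {r:.3f}")  # values near 1.0 suggest a reliable method
```

A correlation near zero would mean the method gives different answers each time it is run, which is exactly the reliability problem the course literature warns about.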
This is me! In case you forgot to check the link last time! :-)

Sunday 22 February 2015

Concerning evaluations and user studies

Evaluation

There are several ways of gathering data when developing a prototype.
Below is a general description of some of them:

Quick and dirty evaluation - Quick feedback, not very carefully documented. Done in a short space of time. Inexpensive and therefore quite attractive to companies.

Usability testing - Observe the user; record and process every action. This type of test measures the general efficiency of the prototype.

Field studies - Natural setting. Observe the user's natural actions. Good for identifying certain needs and determining certain requirements.

Different techniques for evaluating

Observing users - Don't disturb or interfere, just observe.

Asking users - What do they want, how do they think and why? How many users will be asked?

Asking experts - Cheap and easy. Experts often have solutions to certain problems.

User testing - Conducted in a controlled environment with well-defined tasks. The data collected mainly revolves around the time to complete a certain task, the number of errors made, and how easy the prototype is to use.
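As a sketch of what analysing that kind of user-test data might look like, the snippet below tallies completion time, error count, and success rate from a made-up log. The record format and participant IDs are assumptions for the example, not a standard.

```python
# Sketch: summarising user-test measurements.
# Each record is (participant, seconds_to_complete, errors, completed).
# The log below is invented for illustration.
from statistics import mean

log = [
    ("P1", 95, 2, True),
    ("P2", 120, 5, True),
    ("P3", 80, 1, True),
    ("P4", 150, 7, False),
]

times = [t for _, t, _, done in log if done]          # only completed tasks
errors = [e for _, _, e, _ in log]
success_rate = sum(1 for *_, done in log if done) / len(log)

print(f"mean time (completed tasks): {mean(times):.1f} s")
print(f"mean errors per participant: {mean(errors):.1f}")
print(f"success rate: {success_rate:.0%}")
```

Even this small a summary is enough to compare two prototype versions on the three measures the literature mentions.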

Framework to guide evaluation

Determine goals - Why evaluate? Who needs data? Why?

Explore questions to be answered - What to ask? Can questions be divided into sub-questions?

Identify practical issues - Think of these before evaluating. Budget, equipment, facilities?

Users - Are users chosen for evaluation relevant to the study?

Ethical issues

Tell the users the goal of the evaluation. Ensure no personal information is used without permission.

Pilot study 

A small study or evaluation to check that the real study is viable.

Asking users and experts

Interviews

Planning - avoid long questions, as they are hard to remember. Split broad questions into more specific ones. Speak in a way the interviewee is comfortable with.

Unstructured interviews

  • Make sure the interviewee is at ease.
  • Respond carefully and with sympathy. Do not try to change the interviewee's opinion.
  • Analyse data as soon as possible.
Structured interviews
  • Short, well-defined and clearly worded questions.
  • "Closed" questions that require precise answers.
Semi-structured interviews
  • Both closed and open questions.
  • Observe body language.
  • Do not prompt answers.
Group interviews
  • Interviewees develop opinions by talking to each other.
  • Facilitator guides and prompts the discussion.
Testing and modeling users

User testing 

Observe how the prototype is used. What errors occur? How long does it take to perform a certain task?

Doing user testing

Plan the testing thoroughly. Are the conditions the same for every participant?
Try to avoid environments with a lot of noise and other disturbances. If possible, modify the testing space to match a relevant environment.
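One practical way to keep conditions comparable across participants is to counterbalance the task order, so that no single ordering systematically favours anyone. Below is a sketch of a simple Latin-square rotation; the task names are placeholders I made up for the example.

```python
# Sketch: a Latin-square rotation of task order for counterbalancing.
# Each task appears exactly once in every position across participants.
# Task names are placeholders.
tasks = ["plan route", "save favourite", "share trip"]

def rotated_orders(tasks):
    """One task order per participant group, cyclically rotated."""
    n = len(tasks)
    return [[tasks[(start + i) % n] for i in range(n)] for start in range(n)]

for participant, order in enumerate(rotated_orders(tasks), start=1):
    print(f"participant {participant}: {order}")
```

With three tasks this gives three orderings to cycle through as participants arrive, which spreads any learning or fatigue effect evenly over the tasks.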

Be sure to inform the participants about the presence of cameras, microphones, etc.

Question for the second reading seminar: what would be our best way of gathering data?