Monday, 23 February 2015

Reading Seminar #2

Here are my notes for the second reading seminar! I have some points I really want to bring up and discuss, plus some things I want to highlight, hence the bullet points.

  • Conceptual models are best explored with wireframes, which is what we in our team have been doing. They are really great at giving you an overview of the product. This is what we instantly did when tasked with designing the website for our conventional design from the last exercise (the Route Planner thingy). (chapter 11, page 409)
  • What functions will the product perform? The course literature has an interesting scenario in which a travel service is discussed. The authors point out how difficult it is to draw a line and say "this is the point where we do not let our program/service do anything more." It is all about defining the boundaries. Deciding which tasks the system should handle and which are left to the user is called task allocation. (chapter 11, page 408)
  • After the design stages we will need to evaluate our designs and get feedback from users of our prototypes, so it is good to have an iterative workflow to rely on. The course literature brings up one good example of such a workflow. The first part is a field study to get early feedback, followed by a round of design changes. The second part is to test those design changes in some sort of usability test with users, then go back out on another field study, after which you do one last round of design changes. This is something we might do with our designs.
  • The DECIDE framework for evaluation seems to be the way to go. The framework consists of the following points. One main thing to take away from this framework is that the order does not matter: it is considered iterative, and you can go backwards and forwards between the steps. I for one think this is a good starting point for the next step in our designs.
    • Determine the goals.
      • Who wants it and why?
      • High-level goals?
      • Determine the scope of the design.
    • Explore the questions.
      • Why are the trends as we see them?
      • How is X more/less/etc for the users?
      • Be specific; are the menus difficult to navigate? Not enough feedback? Et cetera ...
    • Choose the evaluation methods.
      • What data do we want/already have?
      • How do we want our data?
      • Theories? Frameworks?
    • Identify the practical issues.
      • Pilot study!
      • Unpredictable events/consequences 
    • Decide how to deal with the ethical issues.
      • See ethical codes
      • Privacy, etc ...
    • Evaluate, analyze, interpret, and present the data.
      • Reliability; how well the method produces the same results on different occasions, etc ...
      • Validity; consider whether the evaluation method measures what it was intended to measure.
      • Ecological validity; this part is about how the environment in which the evaluation takes place might corrupt the results. Placebo is an excellent example in this subcategory.
This is me! In case you forgot to check the link last time! :-)