Monday, December 7, 2009
Week 9
Why We Play Games: Four Keys to More Emotion Without Story – Nicole Lazzaro, XEODesign
Somewhat rambling article about a small study examining the role of emotion in game play—specifically emotion that is not tied to story. The author identifies four “keys” that evoke emotion in players:
1. Hard fun – the challenge of strategy and problem solving produces positive, enjoyable emotion.
2. Easy fun – the chance to explore, play, and satisfy one's curiosity in an absorbing game produces emotions of excitement and pleasure.
3. Altered states – refers to how games can alter the player's mood and change his or her internal experience in an enjoyable way.
4. The People Factor – refers to playing with friends and the emotions produced by teamwork, competition, and social bonding.
I found this article kind of hard to read because it was not written well, but it did make an interesting distinction between emotion tied to story and emotion tied to game play. I am not generally a fan of games – either making them up or playing them – so I don't have much to say on this topic. Although it may be interesting to consider why I am not a fan of games: I resist them, often finding them boring, though sometimes when pressed to play I am capable of enjoying the experience. Maybe it has to do with a tendency to be internally oriented, and games force an extended engagement with something outside of myself. I like to relax by reading, which is a more self-contained, internal experience. I just sort of revert to my inner world whenever possible, I guess.
Why We Play Games Together: The People Factor – Nicole Lazzaro, XEODesign
A companion piece that outlines 7 ways to produce emotion during group game play:
1. Support player interaction—build in mechanisms that allow players to affect one another, even help one another
2. Put on a spectacle – make the game interesting to watch as well as play, which will encourage interaction, especially among players at different levels of expertise
3. Include tools to communicate emotion and to enable users to create their own meanings
4. Use non-player characters that display emotion and inspire emotion in players
5. Create emotionally expressive tools and objects as part of the game
6. Emotion cycles, feedback, chains – take into account how groups of players process emotions, how one person's emotion affects another, and how situations can be created specifically to form feedback loops and interactions
7. "Save money," by which she means that developers should include user testing early in the development process, so you can get it right the first time. Early inclusion of user feedback is the only way to know what works and produce the best possible "entertainment appliance."
Saturday, December 5, 2009
Week 8
Digital Acting – George Maestri
The reading outlines several instructional uses of animation:
· To support visualization and mental representation of concepts to be learned
· To produce a cognitive conflict, for example, in scenarios where learners are asked to choose the correct animation of a process or phenomenon from several provided
· To enable learners to explore a phenomenon interactively in a process that involves generating and testing hypotheses
She states that research on the effects of animation on learning has yielded inconsistent results, and that the real issue is determining when and why animation is more effective than static graphics. In some cases, animation can actually hinder learning: “It may induce a shallow processing of the animated content, and consequently leads to what can be called the illusion of understanding. Then the elaboration of a mental model is inhibited by animation.”
This seems mostly to deal with animation used to convey concepts or processes, not animation used to engage the learner, possibly by adding entertainment value or humor to a piece of communication. The latter is also an important component of education, and the role of animation here shouldn't be forgotten.
Thursday, December 3, 2009
Week 7
Monday, November 2, 2009
Week 6
Multimedia Learning – Richard E. Mayer—Chapter 14
A summary of Mayer's theory of multimedia learning that puts forth five conditions for effective multimedia presentations:
1. Spatial contiguity – related words and pictures are near each other on the page.
2. Temporal contiguity – related words and pictures are presented simultaneously rather than successively.
3. Coherence – extraneous words and pictures are kept to a minimum.
4. Modality – words are presented as speech instead of text.
5. Redundancy – words are presented as speech rather than speech and text.
Three assumptions apply: learning involves separate visual and auditory channels; these channels have limited capacity; and meaningful learning requires actively selecting, organizing and integrating information.
These are good basic guidelines to keep in mind – and they also seem like fairly obvious, commonsense principles. Still, it's good to have research to back up intuition. Another point made is that multimedia learning is more effective for some types of learners than for others. It is best for learners who are low in prior knowledge about the subject and for those who have high spatial ability.
Elements of Experience Design – Nathan Shedroff
Provides a broad view of all the elements to consider when analyzing an experience or designing one. These include the stages of an experience – attraction, engagement and conclusion – as well as presentation and organization; visualization and design; and navigation, interactivity and creativity. The article is all theory and abstraction and thus kind of hard to plow through – but again it provides a framework to help you think about experience design. This would be handy to refer to when working on a project.
Confronting the Challenges of Participatory Culture: Media Education for the 21st Century—Jenkins White Paper
Really interesting exploration of contemporary youth media culture, the challenges and issues it raises, and how it is affecting society, education and young people. This is certainly a topic I have gotten to experience first-hand as a parent. I'll offer a few observations about how the culture and experience of the internet have affected my daughter. First, she doesn't have enough patience to read an entire book. She is used to getting information quickly and in short chunks. She has relied on Spark Notes her entire school career, much to her parents' dismay. I will not claim that ALL teenagers do this, but I think many do – so many that even my daughter's English teacher at one of the best public high schools in Manhattan said that she had given up on assigning novels. She was moving to short stories because she couldn't get the kids to really read an entire novel.
My daughter's social life was conducted largely online. She came home from school and went immediately online and stayed there for hours, just as we used to get on the phone with our friends. Then texting started and consumed even more time each day than Facebook. Only when my husband and I started using Facebook and texting ourselves did we start to understand how addictive they are, and also how truly new and different online social life is. For my daughter, the experience of going away to college and leaving her high school friends is quite different from what it was for previous generations, because she can still be in constant touch with her old friends via Facebook and texting. Yet she can also watch them as they move on into a new life. When we visited her at college early in the semester, I noticed her looking at a friend's Facebook page where he had posted photos of himself with his new college friends. I had a sense of how this may have been hard for her – to be connected to him and yet actually be able to see him making new friends and possibly starting to move away from a close relationship with her. It sort of dramatized the fact that she was losing her old ties, or at least that they were changing significantly. It struck me that this is really something new. It adds an element that has never been present in social life before – kind of a blending and blurring of old boundaries. It's just a small example of the vast changes the Jenkins paper addresses.
Monday, October 19, 2009
Panwapa observations
Week 5 readings
Week 4 readings
Tuesday, October 6, 2009
55-word story
Sunday, October 4, 2009
Week 3 Readings
These are complementary introductions to gestural interfaces, their current uses and the considerations that go into their design. There is a huge amount of information here – lots to think about. I am intrigued by the idea of finding the most natural way to match gesture-enabled technology to people's natural behavior. It reminds me a bit of the notion of distributed cognition – how, for example, a calculator functions as an extension of the human brain to offload certain mental tasks and free up mental resources for tasks that cannot be automated. In the same way, gesture might enable technology that would seamlessly enhance and extend physical functioning. The idea of potentially using the entire human body as the tool for enabling technology is so interesting and opens up a huge world of possibility.
What Every Game Developer Needs to Know about Story – John Sutherland
Oy. I hate games. I have never in my life played a computer game except when I had to play around with Second Life for an ECT assignment. My daughter used to love Sims, and she would sometimes ask me to sit and do it with her, and I just went into a coma, I found it so boring. I literally could not make myself pay attention to it. These fantasy worlds, especially the ones featuring kill-or-be-killed scenarios, are just immensely boring and irritating to me. They're for the adolescent boy aged 6-96.
Maybe if a game were set in the world I live in, I would be interested. I have in mind a Dilbert-inspired game in which the story and all it entails – protagonist, inciting incident, risk, reversal, more risk, more reversal, character-revealing choices, triumphant or disastrous ending – was built around the absurdities of the work world. An "Office"-like game – that would be interesting to me, one in which I would get to plot against an irrational, semi-competent, passive-aggressive boss like the one I actually have. This is actually a good idea – to use a game format to teach people how to deal with workplace situations and conflicts, and how different choices lead to different results – questions like "Should I go to HR about this, or just ignore it, or find a way to deal with it head-on?" Hmmm…..
The Design of Everyday Things – Norman—Chapter 2
An elaboration of the psychology of people who tend to blame themselves when they have trouble operating machines or devices due to poor design. He reiterates the principles of good design:
• Visibility—the user can easily tell what to do, how to effect a result and where to start
• A good conceptual model – one that is logical, consistent and easy to understand
• Good mappings – strong, logical relationships between actions and results, controls and their effects, the system state and what is visible
• Feedback – the user receives feedback so he knows what the system is doing
The story about the airplane was a sobering reminder of just how high the stakes are in product design.
Monday, September 28, 2009
September 29--Interface critique
Home blood pressure testing apparatus
This is a pretty simple device that is quite easy to use. I have no idea how accurate it is: my experience is that blood pressure readings are all over the map, very different when taken with different equipment at different times of day and by different people. But accuracy aside, here’s my critique of the device itself.
Overall look and feel
It is a fairly small (about 7 inches by 5 inches), lightweight plastic box with a Velcro-closing cuff attached by a plastic cable. The box is angled down to make the top easier to see. At the back it's about 4 inches high, sloping down to the front, where it's about an inch and a half high. The top features a screen with demarcations to the left that say "sys" (for systolic pressure), "dia" (for diastolic pressure) and "pulse." Below the screen there is a small, round button, and above that is a small clock icon. To the right of that is a larger button that says "memory." To the right of the screen is a large blue button that says "start" on top and "stop" on the bottom. The cuff has a small label sewn on with a graphic showing where the cuff goes on the arm. It is labeled "1/2 inch" and "~2 cm" to indicate how far above the elbow the cuff is to be placed. There is also an arrow at the edge of the cuff where the cord comes out of it, which corresponds to an arrow in the graphic to indicate that the cuff should be placed with the arrow facing the user's elbow.
Function and feedback
To use the cuff you place it on your arm and hit the start button. You feel the cuff tighten on your arm, and while it inflates the machine makes a whirring sound that tells you it is in the process of measuring your blood pressure. You also see a series of numbers that start higher and go lower. When the device reaches the point of inflation at which it can take the measurement, the numbers stop and the remaining air is let out of the cuff. You can see the three measurements (systolic pressure, diastolic pressure and pulse) as well as the time. The time is recorded because the device stores your readings. When you press the button marked "memory," the most recent reading appears with the date, and then the date switches to the time. Press "memory" again and the next most recent reading appears with the date, and then the time. The feedback comes in the form of the sound that tells you the cuff is inflating and then deflating to the necessary point, and in the form of the numbers on the screen.
Affordances
The three buttons and the cuff. They are all labeled to make clear what they do and how to use them. The cuff shows you where on your body to put it; then you push "start" and the machine measures your blood pressure. If you want to know how today's blood pressure compares to yesterday's reading or the one before that, push "memory." Pretty simple.
Mapping
The cuff is literally mapped to its use, since it has a diagram showing you exactly how to use it. You would have to read English to use the device, since "start," "stop" and "memory" are in words. The clock is an icon that doesn't depend on language. When you hit the clock button, the numbers flash first the date and then, on a second push, the time. There is no indication of how to change the date or time, though there must have been an instruction book that outlined a way to set them the first time the device was used. The "start," "stop" and "memory" buttons couldn't be simpler.
Comment:
This is a simple device that pretty much explains how to use itself. However, when the batteries run out, I assume you would have to set the date and time again, and there is no indication of how to do that. It also might be nice to have a brief explanation of what "sys" and "dia" mean, but I guess there is an assumption that you are using this device on doctor's orders and have been given a basic explanation. Also, it might be nice to have the basic instructions written on the back, i.e., whether you should have your arm resting on something or dangling down, whether it matters if you stand or sit – that sort of thing. That could also be conveyed via diagrams such as the one on the cuff.
September 22
An exhaustive analysis of different types of data and different ways of visualizing it.
The guiding principle is this "visual-information-seeking mantra":
Overview first, zoom and filter, then details on demand.
Categories include linear, map, 3D world, multidimensional, temporal, tree, network, overview task, zoom task, filter task, and on and on… it's complicated. There are many possibilities and considerations – it seems endless. I have no idea what to say on this topic beyond that.
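The mantra is really a procedural idea, so it can be sketched in code. Below is a minimal sketch of my own (the city data and function names are hypothetical, not from the reading) showing how "overview first, zoom and filter, then details on demand" might structure a simple data-browsing interface:

```python
# A toy illustration of Shneiderman's visual-information-seeking mantra.
# The dataset and function names are made up for the example.

records = [
    {"id": 1, "city": "New York", "population": 8_400_000},
    {"id": 2, "city": "Boston", "population": 690_000},
    {"id": 3, "city": "Chicago", "population": 2_700_000},
]

def overview(data):
    """Overview first: a one-glance summary of the whole collection."""
    pops = [r["population"] for r in data]
    return {"count": len(data), "min": min(pops), "max": max(pops)}

def filter_by(data, predicate):
    """Zoom and filter: narrow the collection to items of interest."""
    return [r for r in data if predicate(r)]

def details(data, record_id):
    """Details on demand: the full record for one selected item."""
    return next((r for r in data if r["id"] == record_id), None)

# The user moves from the broad view to the narrow one:
print(overview(records))
big_cities = filter_by(records, lambda r: r["population"] > 1_000_000)
print([r["city"] for r in big_cities])
print(details(records, 2))
```

The point of the sketch is just the ordering: the summary comes before the filtered subset, and the full record appears only when a specific item is requested.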
Schnotz and Bannert – Construction and interference in learning from multiple representation
A paper describing a research study on how the addition of graphic representations to text affects learning. The authors discuss how mental models are constructed during learning and how the addition of graphics or pictures affects that process. They find that in some cases pictorial representations actually interfere with learning. Their findings show the importance of choosing a pictorial representation that is well matched to the mental task. The study challenges Paivio's dual coding theory by examining the question in greater detail and showing that Paivio's theory is too simplistic. The addition of graphics to text does not always reinforce and enhance learning. Rather, how pictorial representations are used, which ones are used (and for which kinds of learners), as well as when they are used have a profound effect on learning.
Norman—The Design of Everyday Things Chapter 1
This had me saying "Yes, yes, yes, yes, yes" to myself as I was reading. As someone who is frequently baffled by the technology I need to use on a daily basis, I was happy to read his complaints about how overcomplicated and badly designed so many things are. Telephones, DVD players, cable remotes – nothing is as simple and intuitive as it is often promised to be. I am allergic to reading manuals, so I pretty much just do what I can figure out without them. Not having grown up with technology, I don't have the native grasp of how these things work that, for example, my daughter does. Has anyone done a study of this issue? Everyone talks about it – the kids can just pick up the remotes and the cell phones and somehow know what to do without instruction or reading the manual. My daughter is always rolling her eyes in utter frustration with both her parents – that we don't seem to understand the simplest things. We need things to be self-explanatory, simple, logical, and easy to use. They rarely are. I don't have a DVD player in our bedroom simply because I was so confused about the different formats that I never got as far as buying one. I can't even understand it well enough to buy it, much less use it! I watch DVDs on my laptop because that's all I can handle. Actually, I just started downloading them to my laptop and bypassing the issue entirely.
As I am part of the huge boomer generation, you would think people would be madly working on making technology more accessible to us, especially as we get older, more forgetful, less able to hit tiny keys and read instructions in tiny type. Much as we like to pretend that we are not now and never will be old – it’s reality. I don’t see a whole lot of attention being paid to that fact.
Tuesday, September 22, 2009
Wednesday, September 16, 2009
Week One -- September 8
Horn: Information Design: Emergence of a New Profession
This is an overview that gives a general introduction to the emerging profession of information design. Horn defines it as “the art and science of preparing information so that it can be used by human beings with efficiency and effectiveness.” (p 15)
He describes how this is still a fragmented profession, with numerous participants who work in different fields under different nomenclature, e.g., "information graphics" in newspapers and magazines, "interface design" in computing, "signage" or "wayfinding" in architecture, and "presentation graphics" in the business world.
Horn names the people who have led the way in the field (Gui Bonsiepe, Scott McCloud, Will Eisner, William Bowman, Michael Twyman, and others) and describes general categories within it: universalists (working toward a purely visual form of communication); collectors (who work on documenting the field); writers of instruction manuals; aestheticians (especially Edward Tufte and his concepts of the data-ink ratio and chartjunk); popularizers, such as those who write about information design in advertising and popular media; researchers; and the British Information Design Society, which hosts important conferences bringing together practitioners in the field. He also discusses structured writing, a "systematic way of analyzing any subject matter to be conveyed in a written document," a method that sorts information into information blocks.
His discussion of visual language, defined as the "tight coupling of words, images and shapes into a unified communication unit" [Horn 1998], reminds me of my advertising training, which stressed that words and images must work synergistically. Ideally, in a piece of communication the words and images are interdependent, and either one alone does not convey the meaning. The tension between the two is necessary to draw the reader in, and the leap that is required of the reader – the gap the reader must bridge in order to understand the communication – is desirable, since it involves the reader and thus communicates more powerfully.
Hall: Representation, Meaning and Language
This book chapter explains the notion of representation via verbal and visual language and some of the ways it has been discussed and understood. Hall says that representation “connects meaning and language to culture.” He explains the three categories of how representation has been analyzed: reflective (language simply reflects a meaning that already exists in the world), intentional (language expresses only what the individual speaker or writer personally wants to say) and constructionist (meaning is constructed in and through language). The last understanding has been the most influential, so the remaining discussion focuses on constructionist ideas of how meaning is created and represented referring in particular to the influential work of Swiss linguist Ferdinand de Saussure (semiotic approach) and French philosopher and historian Michel Foucault (discursive approach).
Hall says that “meaning depends on the system of concepts and images formed in our thoughts which can stand for or ‘represent’ the world, enabling us to refer to things both inside and outside our heads.” He shows how these are culturally determined using as an example our system of traffic lights. The actual colors used and the sequence of the lights is arbitrary, yet we are culturally induced to understand what a green light means, etc. We have a shared conceptual map as members of this culture. The term used to refer to words, sounds or images that convey meaning is “signs.” Visual signs are called iconic signs. Verbal signs are called indexical signs. Meaning is not in the object being described or in the word that describes it – meaning is constructed by us, using the system of representation by means of a code that we all learn and adhere to. It is “the result of a signifying practice – a practice that produces meaning, that makes things mean.” This fact underlies the popular notion of cultural relativism.
One intriguing part of this reading: the painting by the Spanish painter Cotan (1560-1627) that is reproduced as an exercise. The reader is supposed to find meaning in the painting and then is directed to an explanation that was not included in the pages we received. I had NO IDEA what the heck the painting was trying to say! The only notion I could come up with had to do with sexual symbols in the fruits and vegetables represented, but even that made no sense to me. I eagerly await tonight's class, hoping for illumination.
Plass and Salisbury: A living-systems design model for web-based knowledge management systems
The authors describe their development of a knowledge management system for a large organization. In contrast to more commonly used design models, they devised a model based on a living-systems approach that incorporates the ever-changing requirements of the system's various users. This model is grounded in situated cognition and cognitive flexibility theories, and it is designed to accommodate the changes that are continuously happening in an organization. It can also more effectively uncover and meet user needs because it has built-in feedback mechanisms that solicit user input and thus enable the system to organically adapt to changing conditions.
The basic steps of this model are:
1. Analyze end-user requirements
2. Design the instructional information architecture, then perform developmental evaluation and adjust as needed
3. Develop the instructional interaction design, then perform developmental evaluation and adjust as needed
4. Develop the instructional information design, then perform developmental evaluation and adjust as needed
5. Implement the system design
Constant reassessment with input from users is the most important feature of this design; it is considered a "living-system" design because it is never finished, but always in a process of development and change. They refer to the various mechanisms that automate this feedback from users as the system's "digital nervous system," a metaphor that underscores the "living systems" model.
This is a very complex model for a very complex knowledge management system that also must be very expensive to develop and maintain, thus it would seem to have limited applicability. In addition, the major role played by user input in developing the system creates one issue: how to get users to participate fully by offering their opinions and expertise. Also, as users change, the system changes – but is that always a good thing? In some cases, more standardization and less customization might be desirable to help enforce certain performance goals and standards.
RANDOM OBSERVATION:
I was at a party (reunion brunch for a bunch of 50-somethings) and I really loved what one person said while several of us were trying to take group photos with our cell phones:
"Why is it that everyone always looks at their cell phone as if it belonged to somebody else?"
--Jim Cathcart
Meaning, we were all squinting and looking puzzled as we jabbed at the buttons. I am a recent convert to the Blackberry, which has become permanently attached to my hand, but I wish there were some way to make the buttons bigger. It's just impossible to write text messages – the only way my daughter will communicate – without huge numbers of typos.
For some reason I have many "b's" scattered randomly throughout all my IMs, which I hate. It's possible but laborious to correct the text but why do the buttons have to be so tiny? And why don't they have a version of the Blackberry designed specifically for my age group? It would have to be somehow cool to appeal to the baby boomers, while also being somehow adapted to people who can't see very clearly and don't have the manual dexterity to text intelligibly on those tiny keys. Okay, it would have to be bigger, or maybe it could fold out with buttons on either side? I see this as a big problem -- and hence, also a big opportunity. Adapting technology for use by the huge baby boom generation as it ages.