Faucet and Knob Designs

Today I am going to write about some very basic things: faucets, door knobs, and switches. I recently moved into a new house and I have been struggling with its basic fixtures ever since. Initially I thought I would get used to them, but it has been a month and I still haven't gotten the hang of it. If I hadn't been a student of user experience design, I would have blamed myself for being dumb and slow to adapt to the new design. But now I know it's the faulty design. It's frustrating to fiddle with the faucets and door knobs every time you use them. So I decided to do some research into how these things are supposed to be designed and what basic principles underlie them.

So, to talk about door knobs and my confusion about their afforded rotation. The ability to rotate a knob points to the affordance perceived by the user interacting with it. My confusion is the result of incorrectly perceived affordances. So I did some research into why I was confused and what thinking underlies the design of a door knob, thinking that has gone wrong in my new house and led me to this confusion. I came across something that stuck with me: door knobs do not convey any affordance of rotation. There is no indication on a door knob of the direction in which it should be rotated; it only affords being gripped. The rotation and its direction come from convention and use. This somewhat explained my confusion, because this new door knob conveyed nothing about the direction of rotation.

 

Affordance is basically everything that can be done with an object, while perceived affordance is what is conveyed to the user about the functions the system affords. Conventions are cultural constraints; they are practices built over time. So when I come to think of it: systems should have perceived affordances that direct the user to the right action, but if those are not available, the user automatically falls back on convention, and if even that doesn't work, the user realizes there is friction in the system.

Next, the faucets. We have cross-handle faucets, and they rotate inwards to open and outwards (away from the faucet) to close. This is definitely not what you expect. Similarly, the door knobs turn towards the lock to open, as opposed to the usual movement away from the lock. This becomes irritating when it happens every time. It is frustrating because you think you should be able to remember it rather than struggling with it on every use. It is the same with the switches: here a switch is off when pushed down, while in India it is the other way around, lighting is switched on by pushing the switch downwards.

Exemplar of Micro-interaction.

After trying to understand micro-interactions by researching them, my next project is to observe one: examining a system to point out the micro-interactions used in it. I chose the Airbnb app and looked for some exemplars of micro-interactions in it.

The first part of a micro-interaction is a trigger. The welcome screen of the app says “Hi Sunakshi”. This is information known to the system, and it gives the system a human touch. It could be the trigger for a micro-interaction, but there is no interaction attached to this information, so it doesn't really qualify as the trigger of any micro-interaction. The app also animates its banner photos: moving the picture while keeping static content below it indicates an affordance of scrolling. This offers an indication of an interaction.

The next thing in the app is an “action button” providing search functionality. Tapping that icon opens a text box, and my current location appears as a search suggestion. The “current location” suggestion serves as a trigger. Tapping it brings up the search results for that location; this is the “rule” of the micro-interaction. Touching the “current location” text creates a ripple effect on the screen, which is the feedback for the touch. For this micro-interaction there are no modes or loops, because the current location is static information: there cannot be a different current location for a different mode or an iterative loop.

There is another micro-interaction at the edge of the search box: a mic icon. The icon is the trigger. Tapping it brings up the speech detection function, which contains the text “Where are you going?”. The speech detection function describes the rule of the micro-interaction: if the user speaks the name of a specific location, the search results for that location pop up. This micro-interaction has several modes as well, one for an invalid entry and another for when the user doesn't input any data. The feedback is the ripple effect on touching the mic icon. This micro-interaction has a loop as well: every user input becomes the topmost item on the suggestion list. Observing and breaking down a micro-interaction helped me understand it much better.
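To fix the pattern in my head, I tried sketching this mic micro-interaction as a toy model. Everything here, the names, the locations, the behaviour, is my own invention for illustration, not Airbnb's actual code:

```python
# Toy sketch of the mic micro-interaction: a trigger (the mic tap)
# invokes a rule, which has modes for empty and invalid input and a
# loop that promotes every successful query to the top of the
# suggestion list. Entirely hypothetical -- not Airbnb's real code.

KNOWN_LOCATIONS = {"paris", "tokyo", "goa"}  # stand-in for a real index

def voice_search(spoken_text, suggestions):
    """Rule of the micro-interaction: map speech to search results."""
    feedback = ["ripple on mic icon"]            # subtle touch feedback
    query = spoken_text.strip().lower()

    if not query:                                # mode: no input at all
        return feedback + ["prompt: Where are you going?"], suggestions
    if query not in KNOWN_LOCATIONS:             # mode: invalid entry
        return feedback + [f"no results for '{query}'"], suggestions

    # Loop: every recognised input becomes the topmost suggestion.
    updated = [query] + [s for s in suggestions if s != query]
    return feedback + [f"results for '{query}'"], updated
```

Calling `voice_search("Goa", ["paris"])` returns the results plus the updated suggestion list `["goa", "paris"]`, which is the loop at work.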

Search is the major task in the app, and within it there are several micro-interactions. These micro-interactions, like the mic and the current-location suggestion, define the intricate details of the search task.

Delving into micro-interactions.

I’ve been intrigued by this whole new thing called “micro-interactions” for quite some time now. I had heard it in conversations and read it in some articles, but I could never make out its exact meaning (beyond the literal one). I didn't know what people were referring to when they talked about micro-interactions. Maybe it's not a new thing for professional designers, but it was new to my design vocabulary.

So I started reading about it. After some preliminary research, I came across a few basic concepts that shape micro-interactions. These were pretty common across all the resources I went through, and they have shaped my current understanding of the term.

 

This is what I understand by the term “micro-interactions”.

“Micro-interactions” are small, contained interactions within a task. These interactions combine to make up the bigger tasks. They might be used to smooth out the task flow or to transition between task steps. They can take the form of informational feedback or of animations that create some cute moments for the app.

 

There are some guidelines, or principles, governing these interactions. One of them is that designers should never assume they have no information about the user; there is always some information available, whether about the user themselves, their context, or their habits. Drawing on it makes users feel recognized, as if the app knows them. This initial information acts as the trigger for the micro-interaction: to get users to interact, designers provide a trigger, and the trigger is based on whatever information the designer has about the user or their context.

Next are the rules of micro-interactions. The rules are basically the definition of the micro-interaction: triggers are a way to introduce the interaction, and rules are what define it. The rule is essentially the interaction itself, like the Gmail message we get when we forget to include an attachment in an email.
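The Gmail example boils down to a small conditional rule. Here is a minimal sketch with a made-up keyword list; a real client uses far smarter heuristics than this:

```python
# Toy attachment-reminder rule: warn if the message body mentions an
# attachment but nothing is attached. The keyword list is invented;
# Gmail's actual heuristic is more sophisticated.

ATTACHMENT_HINTS = ("attached", "attachment")

def should_warn(body, attachments):
    """Rule: fire the reminder micro-interaction before sending."""
    mentions = any(hint in body.lower() for hint in ATTACHMENT_HINTS)
    return mentions and not attachments
```

The trigger is the user pressing send; the rule is this check; the feedback is the reminder dialog.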

The next principle is feedback, and it is different from feedback in general. Generally, feedback should be conveyed clearly to the user, while for micro-interactions feedback should be very subtle. Micro-interactions are supposed to be invisible but effective, so their feedback should be subtle as well. There is the concept of the “overlooked”, i.e., using a part of the UI that is already present rather than creating a new element for the feedback. But the feedback should be present in the system nevertheless; it conveys an output to the user, the occurrence of something.

The last part is modes and loops. Modes are generally avoided; they are introduced into a system only if they are necessary, and they should occur infrequently. Loops define the use of a micro-interaction over time: they record the user's regular use and change accordingly. Designers are advised to make micro-interactions adapt to each use.

These are the guidelines used to define a micro-interaction. After going through several sources, these four were the most commonly occurring elements that define a micro-interaction.

Gesture for Facebook

Today, just as I woke up I opened the Facebook app on my phone. My cousin had uploaded some pictures of her recent trip and I decided to browse through them. As I was browsing, I got confused about the gesture to see the next photo.

Let me explain. As you might know, the Facebook mobile app has two modes for browsing photos. If someone has uploaded a large number of pictures and you tap on one of them, you enter the first mode: a vertical list of pictures, with a footer for comments and likes under each picture. When you tap the picture again, a larger view (the second mode) of that photo opens up. Now, to see the next photo you have to scroll through the pictures horizontally. In my half-asleep state, I got confused about how to scroll through these enlarged photos (vertically or horizontally?). This was because, somewhere in my unconscious mind, I had learned that both vertical and horizontal scrolling work for Facebook photos. I had never acknowledged that there are two gestures for different scenarios. Should I now remember that vertical scrolling works for the zoomed-out version and horizontal scrolling for the zoomed-in version of the photos?

I think this is bad design. I am not sure whether having both vertical and horizontal scrolling in the same app, within the same feature, is the right thing to do. All I know is that, as a user of the Facebook app, I woke up to confusion!

Maybe Facebook should have only one type of scrolling, or maybe the gesture could be cued. If user testing approves of combining two types of scrolling within the same feature (Facebook must have tested this), the ambiguity could be removed just by using an arrow to indicate horizontal scrolling.

P.S. Maybe this happened to me because I was half asleep. But checking Facebook as soon as you wake up is a common habit nowadays, and maybe other users get confused too. If nothing else, it's a usability issue, and it deserves to be considered and user tested.

Mental Models and Pasta

Today I read about mental models and their use in performing tasks. Mental models help users perform various tasks: they are descriptions of how processes take place in real life. These descriptions might be inaccurate; they are descriptions as the user holds them. For example, in a user's mind, electric current might flow the way water flows through sewage pipes. Designers need to make sure that their designs evoke the mental model that corresponds to a particular task, even if the model in the user's mind is inaccurate.

Donald Norman’s book “The Design of Everyday Things” argues that designers should make sure their designs instill an accurate mental model in the user's mind. That way users do not have to learn the process by rote, and it doesn't require a lot of mental effort. To understand this better, I tried to think of a process where an accurate mental model would have helped me understand and carry it out better. I tried thinking in terms of software, but I ended up thinking about pasta.

So, sometimes I end up making mushy pasta, and other times it is cooked perfectly. I was never able to figure out the cause. Once, I made mushy pasta for a friend. She was observing me, and later she informed me that my cooking technique was incorrect: adding the uncooked pasta before the water began boiling was the source of the inconsistency. I didn't understand how adding the pasta before versus after the water boils makes such a difference; both ways involve boiling water and pasta added to that water. Later I tried adding the uncooked pasta after the water began to boil, and it cooked perfectly. I could see my friend's suggestion was true, but I didn't understand the reason, so I simply had to memorize it: wait until the water boils, then add the pasta. It is not a big thing to remember, but it was still an extra effort.

After reading about mental models and their effects on users, I started thinking about this pasta-boiling process. I related it to having a correct mental model and its effect on the effort required to perform the task. I did some research for a scientific explanation of the process, and after some extensive research, I found my answer.

What I understood from my research was that cooking is about the way you want heat to enter the food. For example, to boil a potato, we want the potato cooked completely, inside and outside. In that case, you put the potato in cold water and then boil both together, so the entire potato receives heat simultaneously. If, however, you put the potato in after the water boils, the outer layer comes into contact with the high temperature instantly, while the inside does not. Since the outer layer is in contact with the high temperature for a longer period than the inside, it gets cooked first. That is the opposite of what we want for a potato, but it is exactly what we want for pasta: the outer covering should heat up and set before the inside. If we put the pasta in cold water and then boil the contents, the pasta sits in the water for a long time; the water penetrates the inner starch and gelatinizes it, i.e., breaks it down. This makes it mushy.

So the information about how food and boiling water interact helped me understand the inconsistencies in my pasta dishes. Having a better understanding of the process helped me remember it; it is no longer an extra burden.

This made me realize another thing: designers need to impart only the amount of information required for the user to perform a task correctly. Knowledge of starch gelatinization does not give me any extra benefit in boiling pasta, so it is not required.

Information Imparted

So today I read about the information that is imparted through design. The information used for a task is a combination of internal and external knowledge. Internal knowledge is the knowledge in the user's mind required to perform the task, while external knowledge is the information supplied by the design. The goal of any design is to make it easier for the user to perform the task, so if the designer provides more external knowledge, the user has to rely less on internal knowledge, which means less cognitive load.

I started thinking about scenarios or examples where this concept applies, and icons came to mind. I once did a research project trying to understand which characteristics of icon design make icons easily interpretable. One characteristic that came up was that icon design should be abstract, i.e., it shouldn't have complex details; it should be made up of simple lines and arrows, without intricate detail. Now I understand the reason.

As designers we should provide the minimum amount of information: the minimum required for the user to distinguish between different icons. Using lines and arrows serves this purpose. They ensure low complexity, and thus low cognitive load for the user. Working with simple elements like lines and arrows makes it hard to design something overly complex, while an icon can still be different enough to be distinguishable from other icons.

Similar to this is the concept of affordances. Design elements should have correct affordances so that the user gets a good amount of external knowledge about how to use them; they should not require developing too much internal knowledge, i.e., learning things or processes by heart. An example of this is the iPhone toggle button. It is a solid circle inside a depressed, pill-shaped track. The track gives the control its affordance: it imparts the knowledge that the circle can be slid. This control uses “constraints” as well, to direct users towards the right action: the knob is constrained on every side except along the sliding direction, which guides users to slide it the right way.
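The constraint part can be sketched in code: a drag handler that discards vertical movement and clamps the knob to the track. The dimensions and names here are invented for illustration, not Apple's implementation:

```python
# Sketch of a slide-to-toggle constraint: the knob may only move
# horizontally, and only within the track. Dimensions are made up.

TRACK_MIN_X, TRACK_MAX_X = 0, 40     # hypothetical track bounds (points)

def drag_knob(x, y):
    """Constrain a drag: ignore y entirely, clamp x to the track."""
    clamped_x = max(TRACK_MIN_X, min(x, TRACK_MAX_X))
    return clamped_x, 0              # y is always 0: no vertical freedom
```

Dragging wildly to (100, 25) still lands the knob at (40, 0): the constraint, not the user's precision, produces the right action.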

Design Exploration

Today I start my 100 days of Design Exploration.

I start today not only to write this blog about my design understanding but to do 100 days of running as well. I told the world (actually just my brother), but now I have to accomplish it. I have registered for swimming too, and I am going to join the gym. Running can be either in the gym or outside; this way the workout stays regular even without a gym.

It's surprising that I finished my Masters and am only now starting my exploration, but honestly, this is the time I am actually starting to read and explore the field of design on my own. This is the time I am sitting and thinking about design concepts.

So today I start by reading Edward Tufte. He has written a lot about information visualization, so I am reading his principles on how to represent data. Initially I think about graphs and the other types of statistical information we come across in various articles, and I start thinking that I don't want to read about stats and quantitative data; I want to read about design, color, typography, etc. But then something my sister told me comes to mind. She suggested I read Tufte and gave me an example: read the design principles and try to apply them to something, like a boarding pass or a website. The principles can be applied to displaying information like the departure time on a boarding pass. It's just one piece of information, the time, but it has to be conveyed to the user properly. Similarly, everything, websites, documents, etc., presents some information, and designers have to make sure it gets a good visualization.