Date

March 2016 (2 weeks)

Course

HCID 521 Prototyping Studio

Link to GitHub Repository

Role

Interaction Design, Electronics Engineering, UX Design, User Research, Visual Design

Collaborators

Xinglu Yao, Yitao Wang

Motivation

WonderLamp is a smart lamp project that aims to bring tangible interaction to mundane everyday experiences. We started our ideation by asking ourselves one question: how might we make our home and working environments more human? We realized that plain 2D interfaces struggled to capture the richness of such interactions, so what if we moved the interactions that traditionally lived on 2D screens into the physical world? This inspired us to explore a more playful way for technology to interact with people.

Hence, combining our goal of making home and working environments more human with the potential for non-verbal interaction in physical objects, WonderLamp was born. Inspired by Pixar's Luxo Jr., we designed a smart lamp that embodies human-like emotions while tracking, reminding, and motivating people to get things done.


Process Overview

Over the course of two weeks, we built four iterations of the prototype. Our first two iterations helped us explore which non-verbal gestures had the best effect, which technology was the best and cheapest for building the product, and how users would respond to different sets of gestures. Our third and final iterations focused on aesthetics and fine-tuning the lamp's behaviors.

In the final iteration, WonderLamp consists of three components: three servos that provide three degrees of freedom, enabling the lamp to shake or nod its head while turning left and right; a NeoPixel ring that changes its light patterns and colors to support the lamp's non-verbal communication; and an infrared reflective sensor that detects the user's interaction with the to-do list board.
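A minimal setup sketch for these three components might look like the following, assuming the Adafruit NeoPixel library, a 16-LED ring on pin 6, servos on pins 9 through 11, and the IR sensor on A0 (all pin assignments here are illustrative, not our exact wiring):

```cpp
// Minimal wiring sketch for WonderLamp's three components.
#include <Servo.h>
#include <Adafruit_NeoPixel.h>

Servo panServo, tiltServo, nodServo;                  // three degrees of freedom
Adafruit_NeoPixel ring(16, 6, NEO_GRB + NEO_KHZ800);  // 16-LED NeoPixel ring on pin 6

void setup() {
  panServo.attach(9);
  tiltServo.attach(10);
  nodServo.attach(11);
  ring.begin();
  ring.show();         // start with all LEDs off
  pinMode(A0, INPUT);  // IR reflective sensor on the to-do board
}

void loop() {
  // gesture and light behaviors go here
}
```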

*Iterations one to four, from left to right.

Interaction Design

At the very beginning of our process, we built an interaction model for the lamp. The goal of this interaction model was to demonstrate a proof of concept: that animated physical objects could facilitate richer interactions. All four iterations of our prototypes were based on this interaction model, as shown below.

*Interaction model.

Crafting the emotion

Using the Arduino Servo library, we deconstructed each servo movement down to precise angles. For instance, to accomplish "nodding," we programmed the servo to move to a 100-degree angle, then to 170 degrees, then back to 100. We also incorporated the animation principle of slow-in, slow-out to give the servos smooth motion. The chart above illustrates how we accomplished this.
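As a rough illustration, the nodding gesture could be sketched like this; the 100-to-170-degree sweep comes from our notes above, while the smoothstep easing function, pin number, and timing constants are placeholders:

```cpp
// A rough sketch of the nodding gesture, assuming the tilt servo on pin 9.
#include <Servo.h>

Servo tiltServo;  // controls the lamp's head tilt

// Move between two angles with slow-in, slow-out easing.
void easeTo(int fromDeg, int toDeg, int durationMs) {
  const int steps = 50;
  for (int i = 0; i <= steps; i++) {
    float t = (float)i / steps;            // normalized time, 0..1
    float eased = t * t * (3.0 - 2.0 * t); // smoothstep: slow-in, slow-out
    tiltServo.write((int)(fromDeg + (toDeg - fromDeg) * eased));
    delay(durationMs / steps);
  }
}

void setup() {
  tiltServo.attach(9);   // hypothetical pin assignment
  tiltServo.write(100);  // neutral position
}

void loop() {
  easeTo(100, 170, 600); // nod down
  easeTo(170, 100, 600); // nod back up
  delay(2000);           // pause between nods
}
```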

*Ideating the lamp movement

Understanding the Design

We conducted six rounds of user tests during our second iteration of WonderLamp. The first three were conducted before we added the lights and the IR sensor, so we used the Wizard of Oz technique. During the tests, we asked users to "guess" what the lamp was trying to communicate. We had three major learnings:

  1. The lamp's head position was too low in the neutral position.
  2. All three major interactions happened too quickly, giving users limited time to comprehend them.
  3. Some emotions' turning angles were too subtle for users to perceive their granularity.

Based on these user tests, we adjusted the servo behaviors. For the next three rounds of user tests, we added the IR sensor and lights. Because of its light patterns and turning rhythm, two users had a hard time recognizing the "happy" emotion, so we changed the light color and reprogrammed the servo behaviors for the "happy" mode. Finally, within the design team, we conducted an additional six rounds of tests to tweak small behaviors such as blinking timing and nodding frequency.

On demo day, we received a great deal of informal feedback from users, including suggestions to move beyond the scope of the to-do list and to connect the lamp to the cloud. These are future directions we are excited to explore.

*User testing sessions one and two.

Challenges and Learning

In a nutshell, iterating rapidly, continuously gathering feedback, combining different prototyping techniques, and setting clear goals for each stage of the prototype helped us move forward quickly as a team.

Physical computing (Arduino)

Although all three of us had some experience with other programming languages, the addition of physical components added complexity to the development process. Furthermore, the three electronic components we chose for the design (servo, NeoPixel, and IR sensor) did not work well together. These two constraints made our prototyping process challenging. However, thanks to the open-source nature of the technology, we were able to come up with workarounds (such as having two Arduinos communicate with each other over digital pins) through consultation with our peers and instructors. Additionally, to synchronize the timing between the servos and the LED patterns, we needed to calculate the time for each behavior manually and revise the LED speed accordingly.
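A stripped-down sketch of the two-Arduino workaround might look like the following, assuming a single shared digital line (pin 7 on both boards, grounds tied together); the one-pulse "start LED pattern" protocol here is illustrative rather than our exact implementation:

```cpp
// Sender side (the servo Arduino): pulse a digital pin to cue the LED board.
const int SIGNAL_OUT = 7;

void setup() {
  pinMode(SIGNAL_OUT, OUTPUT);
  digitalWrite(SIGNAL_OUT, LOW);
}

void loop() {
  // ...run a servo gesture here, then tell the LED Arduino to start
  digitalWrite(SIGNAL_OUT, HIGH);
  delay(50);                       // hold the pulse long enough to be read
  digitalWrite(SIGNAL_OUT, LOW);
  delay(5000);                     // wait before the next gesture
}

// Receiver side (a separate sketch on the NeoPixel Arduino):
// const int SIGNAL_IN = 7;
// void setup() { pinMode(SIGNAL_IN, INPUT); }
// void loop() {
//   if (digitalRead(SIGNAL_IN) == HIGH) {
//     // start the matching LED animation
//   }
// }
```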

Because of the complexity and unpredictability of the hardware, we needed to build flexibility into our code so that we could modify each electronic component's behavior on the fly. For example, the IR sensor is sensitive to ambient light: when we filmed our video prototype in a room with broad daylight, we had to revise our code accordingly. The same issue happened with our servos after we installed the new laser-cut lamp body: we had to change the servos' maximum and minimum angles to match the weight of the model and avoid overloading them.
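One simple way to express this kind of flexibility, sketched below with illustrative values, is to group every environment-dependent number (IR threshold, servo angle limits) into named constants at the top of the sketch so they can be revised in one place:

```cpp
// A minimal sketch of keeping tuning values adjustable, assuming an analog
// IR sensor on A0 and a head servo on pin 9. All numbers are illustrative.
#include <Servo.h>

// --- Tunable parameters, grouped so they can be changed in one place ---
const int IR_THRESHOLD  = 500;  // raise or lower to match ambient light
const int SERVO_MIN_DEG = 40;   // tightened for a heavier lamp body
const int SERVO_MAX_DEG = 150;

Servo headServo;

void setup() {
  headServo.attach(9);
}

void loop() {
  int reading = analogRead(A0);  // 0-1023 from the IR reflective sensor
  if (reading > IR_THRESHOLD) {
    // user interacted with the to-do board: react within safe angle limits
    headServo.write(constrain(170, SERVO_MIN_DEG, SERVO_MAX_DEG));
  }
  delay(20);
}
```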

Behavioral prototype (Wizard of Oz)

Because of the iterative nature of our process, we wanted to collect user feedback in parallel with our implementation. Rather than testing the interaction only after integrating all parts of the system, we used the Wizard of Oz technique in the second iteration to simulate the proposed interaction model between users and the lamp. This technique also allowed us to quickly demonstrate multiple versions of the design during user tests. In addition to gathering user feedback, we also used this technique when creating our video prototypes.

Using behavioral prototypes allowed us to determine the appropriate level of feedback needed to deliver a good experience. For example, how quickly should the lamp respond to user interaction? What is the ideal amount of lamp movement to catch the user's attention? However, once we had built the integrated prototype, some users suspected that we were controlling the lamp behind the scenes even though it was reacting to their input autonomously.