Keith Hamilton

Cozmo:
Lost in Reddit

Anki approached our team to do some off-the-beaten-path advertising for their first AI robot, Cozmo. I got involved as Game Designer, taking the initial creative concepts and designing a game from them: game mechanics and gameplay. I also had the opportunity to pilot Cozmo for the six-hour live game, the first-ever Reddit-initiated live-stream event on the platform.


Client
Anki, Inc.
 

Role
Game Design lead, Puppeteering, Emotion Management


Cozmo idling about in r/HailCorporate.

game overview

Reddit had six hours to collectively vote on every action Cozmo took. There were seven rooms, all fashioned after popular subreddits. Three of the rooms contained challenges Cozmo could only play if he first earned Karma in the non-challenge rooms. Winning each challenge earned Cozmo a golden "key cube," and all three were needed to unlock the final door, letting him exit the escape room.
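As a rule set, the game boils down to a small amount of state. Here's a minimal sketch in Python; the names and the exact Karma economy are illustrative, not the production logic used on the day.

```python
# Minimal sketch of the game's rule state. Names and the Karma
# economy are illustrative, not the logic used during the event.

KEYS_NEEDED = 3  # golden key cubes required for the final door

class GameState:
    def __init__(self):
        self.karma = 0
        self.key_cubes = 0

    def earn_karma(self, amount):
        # Karma is earned in the non-challenge rooms.
        self.karma += amount

    def may_enter_challenge(self, required_karma):
        # Challenge rooms unlock only once enough Karma has been earned.
        return self.karma >= required_karma

    def win_challenge(self):
        self.key_cubes += 1

    def can_escape(self):
        return self.key_cubes >= KEYS_NEEDED
```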

Cozmo: Lost in Reddit was nominated for a One Show award for Innovation in Interactive Gaming.

 

My initial grids for the game—on the right, each color represents a theme of subreddits.

mapping

Initial comps of the layout of rooms. I took heavy inspiration from Zelda, first considering each room (subreddit) to be a mini-dungeon of its own.

 

Initial concept for r/Gaming including gameplay mechanics, inspiration, and color direction.

Room design

I worked very closely with our Set Designer and production partners to ensure the layout and decor of each room was in harmony with its gameplay.

 

That's me on the right. I had to develop a special overhand claw grip to accommodate Cozmo's range of motion during the live event.

Piloting

I spent hours practicing with Cozmo using custom software written in-house that translated actions on a PS4 controller to Cozmo's movements. I had direct access to the code, so I was able to make tweaks to the controller's response as I went, dialing in the perfect setup.
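The in-house tool isn't public, but the core idea can be sketched with the public Cozmo Python SDK and pygame's joystick API. The response curve and speed constant below stand in for the kind of parameters I was tweaking; treat this as an illustration, not the production code.

```python
# Sketch of PS4-stick-to-Cozmo driving, assuming the public Cozmo SDK
# and pygame's joystick API (the actual tool was custom, in-house code).
import cozmo
import pygame

RESPONSE_EXPONENT = 1.6  # >1 softens small stick movements
MAX_SPEED = 150.0        # mm/s; the sort of value tweaked while dialing in feel

def curve(axis_value):
    """Non-linear response curve for finer control near center."""
    sign = 1 if axis_value >= 0 else -1
    return sign * (abs(axis_value) ** RESPONSE_EXPONENT)

def drive(robot: cozmo.robot.Robot):
    pygame.init()
    pygame.joystick.init()
    stick = pygame.joystick.Joystick(0)
    stick.init()
    while True:
        pygame.event.pump()
        forward = -curve(stick.get_axis(1))  # left stick Y (inverted)
        turn = curve(stick.get_axis(0))      # left stick X
        left = (forward + turn) * MAX_SPEED
        right = (forward - turn) * MAX_SPEED
        robot.drive_wheels(left, right)
        pygame.time.wait(50)  # ~20 updates per second

cozmo.run_program(drive)
```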

A logical flow of how Cozmo could move around r/HailCorporate. Colored blocks indicate encapsulated logic patterns shown in detail elsewhere.

mechanics

Each room of the game was meticulously plotted out in a series of logical pathways that we could expect Cozmo to take throughout the event.

 

storyboards

I worked with a storyboard artist to help engage clients early in the process and to help the set designers and fabricators visualize the flow of each room.

 

Each key maps to a different emotion, or a level of a single emotion.

emotional expression

Expressing emotion is one of Cozmo's key features, so we mapped the broadest possible range of emotions to a standard Apple keyboard, with layered functions for each room.
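The real mapping and animation set were custom to the event, but the layering idea can be sketched against the public SDK's animation triggers: a base layer of emotion keys, with a per-room layer that overrides it. The key bindings and trigger choices below are illustrative.

```python
# Sketch of the layered key-to-emotion mapping idea. Bindings and
# trigger choices are illustrative; the run-of-show set was custom.
import cozmo

# Base layer: one key per broad emotion.
BASE_LAYER = {
    "h": cozmo.anim.Triggers.CodeLabHappy,
    "u": cozmo.anim.Triggers.CodeLabUnhappy,
    "v": cozmo.anim.Triggers.CodeLabVictory,
}

# Per-room layers override or extend the base, so the same physical
# key can mean something room-appropriate in each subreddit.
ROOM_LAYERS = {
    "r/Gaming": {"v": cozmo.anim.Triggers.CodeLabCelebrate},
}

def trigger_for(key, room):
    layered = dict(BASE_LAYER, **ROOM_LAYERS.get(room, {}))
    return layered.get(key)

def express(robot: cozmo.robot.Robot, key, room):
    trigger = trigger_for(key, room)
    if trigger is not None:
        robot.play_anim_trigger(trigger).wait_for_completed()
```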

 

Gently tucking Cozmo away into his sling. The goal here was to shoot him across the room into a net.

a superior siege weapon

In r/WhatCouldGoWrong, we gave a nod to r/TrebuchetMemes and included a functioning trebuchet that Reddit needed to use to win the challenge.

 

A scan of the actual keyboard used during the run of show. The keys off to the right, "BODY," "HEAD," and "LIFT," were used most frequently to mute parts of Cozmo while I was driving him around.

 
 

Nike Makers' Experience

For over three years we'd been developing and building a new type of experience for Nike. One insight made us think: people wear shoes on their feet, but the Nike iD customization process happens on a screen. What if you could design custom Nike shoes while you were wearing them? In August of 2017, we launched Nike by You in partnership with the Nike Space Kitchen, and a whole bunch of people got to find out.


Client
Nike / Nike Space Kitchen

Role
Experience Design, iPad Tooling Design, Prototyping


The experience

Participants entered a voice-assisted environment and, with the help of a docent, were walked through a unique customization experience: first entering their name or a piece of text, then choosing a pattern and color combination, with each step of the process spoken into the environment and reflected in real time. The patterns were generated in part from the text each participant chose, ensuring each pair was unique.
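That text-to-pattern step can be sketched simply: hash the participant's text into a seed, then draw pattern parameters deterministically from it, so the same name always yields the same pattern. The parameter names below are illustrative; the production generator was far richer.

```python
# Minimal sketch of text-seeded pattern parameters: identical text
# always yields the same pattern, and different text diverges.
import hashlib
import random

def pattern_params(text, palette):
    digest = hashlib.sha256(text.encode("utf-8")).digest()
    rng = random.Random(int.from_bytes(digest[:8], "big"))
    return {
        "stripe_count": rng.randint(3, 12),
        "rotation_deg": rng.uniform(0, 360),
        "scale": rng.uniform(0.5, 2.0),
        "colors": rng.sample(palette, 2),
    }

print(pattern_params("KEITH", ["volt", "crimson", "obsidian", "white"]))
```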

Using state-of-the-art technology, each participant would walk out of the studio with a new, custom pair of Nike Presto shoes in about 45 minutes.

This project was massive and included many moving parts: projection mapping, iPad tooling, lighting, multiple monitors updating in real-time with changes on the shoes, motion tracking cameras, and a whole complement of fabricators behind the scenes.

 

prototyping

Initial interaction flow and comps were generated using Illustrator and Photoshop. The comps were then worked into Framer as interactive prototypes we used to test assumptions.

 

tooling

The final iPad tool was an optional control mechanism for the docent. The tool design accounted for each possible step of the design process.

 

annotated designs

To serve as documentation for the team and on-site handlers, I created detailed annotations for the iPad tool. These annotations covered the actions triggered by buttons and gestures, menu trees, and the API calls that are mapped out in separate interaction flows.

interaction flow

I created a detailed interaction flow to map each phase of the experience. This document served two purposes: to educate client and team about what to expect at each stage, and to document for our engineers and site handlers which API calls were being made throughout the system at each stage.

 
 

scripting

A prototype script was used with the production team to walk everyone through the experience as we developed it, accounting for timing, which elements of the experience were active or inactive, and the docent's dialog with each guest.

 

 

Needybot

Robots do things for us all the time. They're efficient, tireless, and dedicated to serving humanity. We got together and discussed what it would be like if we had a robot that needed us instead of helping us. How would we treat it? Would we help it? Could we empathize with it? Better yet, could we get others to empathize with it?


Client
W+K Lodge (internal project)


ROLE
Experience Design lead, Software Engineering, Conversation Design, Speech Recording


The Concept

It started with a simple idea: what if we made a robot whose only desire was to ride the elevator? The catch was that it couldn't push the button to get on the elevator, so it needed to enlist humans as helpers.

That seed developed into an autonomous floor-walker, roaming the W+K Portland office, seeking connection with humans. Its goal became to meet everyone at W+K Portland; if it was to enlist us as helpers, it needed to first get to know us. Using a large, sympathetic eye, a child-like voice, and a diminutive, furry frame, Needybot successfully met nearly half of the employees in our building before breaking down so badly that it stopped running.

We learned a lot about what drives people to interact with robotics and software during the four months of operation. We also learned that some people don't have any empathy for robots, seeing them only as targets to be bullied. 

 

An exploded, illustrated view of Needybot's internal composition.

engineering

Needybot was simple on the outside, but on the inside it was a complex web of parts, cameras, and various USB-attached components. An infrared camera, connected to the CPU via a Raspberry Pi, was used to lock onto and follow humans. Speakers announced Needy's arrival, and the TurtleBot base was operated via ROS, the Robot Operating System.
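On a TurtleBot under ROS, the follow behavior reduces to reading where the tracked person is and publishing velocity commands. A minimal rospy sketch, assuming a hypothetical upstream node publishes the person's horizontal offset on /person_offset; Needybot's actual perception and behavior stack was more involved.

```python
#!/usr/bin/env python
# Sketch of a "lock onto and follow a human" loop on ROS 1 (rospy).
# Assumes an upstream node publishes the tracked person's horizontal
# offset on /person_offset as a Float32 in [-1, 1] (hypothetical topic).
import rospy
from std_msgs.msg import Float32
from geometry_msgs.msg import Twist

class Follower:
    def __init__(self):
        self.cmd_pub = rospy.Publisher("/cmd_vel", Twist, queue_size=1)
        rospy.Subscriber("/person_offset", Float32, self.on_offset)

    def on_offset(self, msg):
        cmd = Twist()
        cmd.linear.x = 0.2               # creep forward (m/s)
        cmd.angular.z = -0.8 * msg.data  # steer toward the person
        self.cmd_pub.publish(cmd)

if __name__ == "__main__":
    rospy.init_node("needy_follower")
    Follower()
    rospy.spin()
```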

 

conversation

In all, we wrote, recorded, and filtered over 2,000 audio clips to create Needybot's speech. We needed names, emotions, questions, desires, the whole gamut. Using ROS, we would trigger sequences that called for multiple clips to get stitched together, resulting in the artifice of speech.
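Mechanically, that stitching is just concatenation of clips recorded in a matching format. A minimal sketch with Python's standard-library wave module; the file names are hypothetical, and the real pipeline was driven by ROS sequence triggers.

```python
# Sketch of stitching pre-recorded clips into one phrase. Assumes all
# clips share the same sample rate and format, as a single recorded
# voice library typically would.
import wave

def stitch(clip_paths, out_path):
    frames, params = [], None
    for path in clip_paths:
        with wave.open(path, "rb") as clip:
            if params is None:
                params = clip.getparams()
            frames.append(clip.readframes(clip.getnframes()))
    with wave.open(out_path, "wb") as out:
        out.setparams(params)
        for chunk in frames:
            out.writeframes(chunk)

# e.g. greeting + name + request (file names are hypothetical)
stitch(["hi.wav", "name_keith.wav", "help_me.wav"], "phrase.wav")
```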

Needy feeling sad.

Needybot reacts after receiving help.

Needybot feeling optimistic about life.

Needy cries after failing at a task.

eye contact

Needybot's main channel of communication was its eye. Needy depended on being able to lure humans in from a hundred feet off, so we designed a large, cyclopean eye for it to use to communicate its emotional state. We created loops for happiness, sadness, and in-between, as well as victory, needing assistance, and failure.

 

A rabid pack of human children fails to resist Needybot's gravitational pull.

insights

Over the course of Needy's run, we learned a lot. Most important are these four findings:

  1. Parents and children empathized with Needybot more easily than others.
  2. People enjoyed bullying Needybot—these people were presumably not parents and children.
  3. Needybot drew humans closer together—evidence of small tribal affiliations surfaced during Needy's life.
  4. One interaction can turn cynicism to joy—Needy had an uncanny ability to make believers out of us on first contact.
 

Monocle podcast

Needybot was featured in an episode of the Monocle podcast. Check it out for more information about the project and some insights into what the team at large found out about company culture through the little furry robot that couldn't.

 
 

Operation: Queso Fresco

It's not every day you get a blank check on creative direction for an ad agency's website, so when Colleen DeCourcy, W+K's Global Chief Creative Officer, pinged W+K Lodge to concept around how to do something unique for W+K's corporate site, we jumped. Although it never made it to production, due to many factors, the prototyped concept of the world's first really real website was pretty awesome.


Client
W+K Global


ROLE
Experience Design lead, Software Architecture, Backend Development, Prototyping


My architecture map for each piece of the prototype website.

The Concept

Everyone has websites online. We decided to build a website in real life. Using twin cameras, one RGB and one infrared, we would print full-color content carrying QR codes visible only in the IR spectrum. The QR code would identify the asset, and the asset would be pinned to the "website": a wall in our building.

The wall would be live-streamed 24 hours a day. During office hours, people could come in and "visit" our web page. If you visited it online, all of the printed content would be translated into live digital content based on its source: images, videos, web pages, and so on. The translation worked by reading the hidden QR code on each page and substituting the live content in place of the printed paper.
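The substitution hinges on one lookup: decode the hidden code, map its payload to a live asset, and pin that asset where the paper sits. A sketch using OpenCV's built-in QR detector; the asset registry and payload format here are illustrative.

```python
# Sketch of the asset-substitution idea: decode the QR code from the
# IR frame, then look up the live asset it identifies. Registry
# entries and payload format are hypothetical.
import cv2

LIVE_ASSETS = {  # QR payload -> live content URL
    "asset-042": "https://example.com/live/video-042",
}

def live_content_for(ir_frame):
    payload, points, _ = cv2.QRCodeDetector().detectAndDecode(ir_frame)
    if not payload:
        return None, None
    # Return the live asset plus the code's corner points for pinning.
    return LIVE_ASSETS.get(payload), points
```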

 

Computer Vision

The CV layer of the application was one of the most crucial. It was responsible both for streaming a live RGB feed directly to the browser over WebSocket and for parsing the QR codes printed in invisible IR ink on the content attached to the website. I had to fork multiple processes in different languages to handle all of the information coming in at 24 FPS.
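The streaming half of that layer can be sketched in a few lines: JPEG-encode each camera frame and push it over a WebSocket at roughly 24 FPS. This uses Python's websockets package as a stand-in; the production system split streaming and QR parsing across separate processes.

```python
# Sketch of the RGB-feed half: encode frames as JPEG and push them to
# the browser over a WebSocket, capped near 24 FPS. Assumes a recent
# version of the websockets package (single-argument handler).
import asyncio
import cv2
import websockets

async def stream(websocket):
    cap = cv2.VideoCapture(0)  # RGB camera
    try:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            _, jpg = cv2.imencode(".jpg", frame)
            await websocket.send(jpg.tobytes())
            await asyncio.sleep(1 / 24)  # ~24 FPS
    finally:
        cap.release()

async def main():
    async with websockets.serve(stream, "0.0.0.0", 8765):
        await asyncio.Future()  # serve forever

asyncio.run(main())
```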
 

Printing

Ideally, the printer would have been a stand-alone unit in the space where the website was erected. Inside it would have been all of the hardware needed to run the site: print mechanism, cameras, and Linux servers.
 

Prototype

For the purposes of the prototype, I stood up both cameras on tripods, aimed at my desk. A partner of mine wrote some great front-end code that translated the X-Y position of a printed artifact placed on my desk, accounting for the offset between the RGB and IR cameras.
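One way to handle that offset is a one-time calibration: capture the same reference marks with both cameras, fit a homography, then map any IR-space detection into RGB-frame coordinates. A sketch with OpenCV; the point values are placeholders, and my partner's actual front-end approach may have differed.

```python
# Sketch of RGB/IR offset correction via a calibrated homography.
# Calibration point values below are placeholders.
import cv2
import numpy as np

# Four or more matched calibration points: the same physical marks
# as seen by each camera.
ir_pts  = np.array([[100, 80], [500, 90], [510, 400], [95, 390]],
                   dtype=np.float32)
rgb_pts = np.array([[112, 95], [515, 102], [522, 415], [104, 402]],
                   dtype=np.float32)

H, _ = cv2.findHomography(ir_pts, rgb_pts)

def ir_to_rgb(x, y):
    """Translate an artifact's position from the IR frame to the RGB frame."""
    src = np.array([[[x, y]]], dtype=np.float32)
    return cv2.perspectiveTransform(src, H)[0][0]

print(ir_to_rgb(300, 200))
```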

We sold global management on the concept with a simple prototype using some local hardware, including a $70 inkjet printer outfitted with custom ink I had to inject by syringe, a few cameras, and some AWS infrastructure. Unfortunately, due to external circumstances, the project stalled and was overtaken by client work. I suppose there's a silver lining there, 'cause hey: I still had a job.

 

Gear 360 Retail Experience

We were asked by Samsung, U.S.A. to rethink the in-store demo experience around the Samsung Gear 360 camera. The experience was meant to immerse the viewer in the world of rich 360º content using a Samsung Gear VR headset and the latest Samsung smartphone. Conceptually, we wanted to show the viewer real content that would push them to feel inspired and capable of telling their own story in 360º.


Client
Samsung, U.S.A.


ROLE
Experience Design, Concept


Interaction mapping

Our clients were very technically minded and asked us early on for a granular view of the experience we had planned. I abstracted the various steps and logic of the experience into interaction maps, which satisfied their appetite and added comfort to the process. This type of interaction mapping has become a cornerstone of my process for reasoning out an experience.

 

360º Content

We wanted to create four main categories of 360-shot content that looked stunning, was fun and engaging, and, most importantly, was the kind of footage anyone could theoretically shoot. This last point was critical, as the experience was to be found in-store at Best Buy, where anyone could view it and think, "I could do this."

 

Wireframes

We created detailed wireframes for the end-to-end experience to enable team and client to better understand and tweak the experience as needed.

phone-based menu system

Because we had no real means of predicting the in-store capabilities of any given Best Buy where the experience would be shown, we concepted a take-down of the VR-based flagship experience that used only a Galaxy S8 as the navigation tool.