Stroke Rehabilitation Research

Photo by Loren Olson


Stroke is a leading cause of disability, but through repetitive physical therapy, recovery gains can be made. As a result, significant research in stroke rehabilitation aims to better understand the mechanisms of recovery and the best way to structure and deliver therapy, which has led researchers to interactive technology solutions. Unfortunately, in many current rehabilitation approaches, interactive experiences (such as video games) serve merely as a wrapper around traditional methods, and this veneer of gamified therapy is presented as the next step. But truly engaging, embodied learning experiences aren't, and can't be, constructed that way. Centuries of arts practice and current research in learning models demonstrate how people best engage with and consume information, as well as how individuals construct cognitive models. At the School of Arts, Media and Engineering (AME) at Arizona State University, I worked with a team of artists, engineers, and physical therapists to build rehabilitation experiences that could maximize engagement and benefit. I led the design and development of a minimally supervised interactive rehabilitation system that patients used for four weeks of therapy, demonstrating significant gains in functional recovery.

Role: Lead Researcher and Experience Designer

  • Project lead for an interdisciplinary team (Computer Science, Digital Arts, Electrical Engineering, Fabrication, Rehabilitation Science and Physical Therapy)
  • Translated research spanning user interviews, literature reviews, and test sessions into comprehensive interactive system design guidelines.
  • Designed and implemented the core software architecture to extend and integrate multiple software and hardware components.
  • Developed sensing, analysis, and control modules in Objective-C.
  • Designed two five-week patient training protocols for interactive learning in a minimally supervised environment.
  • Managed multiple parallel cycles of iterative design throughout the system development timeline to meet client deadlines.
  • Delivered two complete systems to Emory University and Rehabilitation Institute of Chicago and conducted a pilot study with stroke patients.
  • Analyzed study data and synthesized insights and recommendations in both presentations and academic publications.

Dates:  August 2009 – December 2014

The full dissertation can be found here



When someone has a stroke, they can lose not only fine motor control but also proprioception: the sense we all have of knowing where our limbs are in space without actively looking at them. As a result, patients will use compensatory strategies when completing tasks, such as using their non-affected limb or other parts of the body that typically wouldn't be utilized during the task. (For example: imagine reaching for a glass, but instead of opening your elbow to extend your hand out, you keep your elbow fixed and lean forward to move your hand.) The stroke therapy process involves repeating physical tasks over and over again with feedback from a therapist, helping the patient essentially re-learn how their limbs move and relate to one another during a task.

When I joined the rehabilitation team at AME, they were just about to deploy an interactive, mixed reality rehabilitation system. The system was found to be very successful and showed gains over traditional approaches to therapy. However, the system was not suitable for long-term therapy. AME’s system utilized a clinical space with the permanent installation of motion capture cameras and also required the continual presence of a physical therapist. It also required the stroke patient to travel to the clinic three or more times a week for physical therapy sessions.


At AME, I served as a lead experience designer and developer (along with Nicole Lehrer) and my task was to take the interactive media approaches that worked well in AME’s previous system and translate them to a form and experience that was more appropriate for a patient’s home.  There were a few key problems that needed to be addressed:

  1. Physical design: We can no longer expect to configure a room with full motion capture (plus, putting individual reflective markers on a patient requires a trained assistant and significant time).  Our solution needs to be cognizant and supportive of the home environment.
  2. Long-term interactive experiences: In the previous AME system, patients engaged with an hour-long experience, three times a week. However, at home, there are no longer explicit restrictions on time and the system experience needs to support that. In other words, we could no longer design an interactive short story. We needed to design an interactive novel.
  3. Lack of supervision: As a result of moving into the home, the therapist is no longer present, which is both a benefit and a challenge. The benefit is that, without requiring a therapist to observe each session, therapy can scale to levels not previously attainable. However, the therapist knows how to craft and adapt the therapy in the moment given a particular patient's needs. How can we ensure that therapy progresses at an appropriate challenge level while also continuing to support some level of human engagement between patient and therapist? In addition, since the therapist is not present, the system needs to be fully configurable and controllable by a physically impaired user.
  4. Designing for unknowns: Through observations of AME's previous system and thorough academic literature reviews, I had some sense of appropriate approaches to physical therapy in an interactive media context. But many details were unknown and could only be revealed through research. How do we design a system that can utilize current understandings but also quickly adapt to new information as it is collected during use?



I first wanted to understand the context of the problem space. I sat in on many therapy sessions with AME's first system, interviewed the physical therapist and some of the patients, and reviewed extensive academic research spanning motor learning and constructivist learning. I also explored principles in long-term media forms as well as reductionist hierarchy forms.

From the contextual research, my first goal was to identify key design guidelines and constraints. I started with thinking about what tasks we should be asking patients to complete as part of the therapy and as a result, identified key task constraints that began to lay the foundation for the system design requirements. 

Similarly, I also identified key constraints for assessing patient progress during therapy. Ultimately, the metric for the system's success was the level of patient recovery, and there are many recognized traditional methods for assessing progress. The constraints I presented later served as key metrics for software and hardware development, ensuring we could make measurements at the appropriate resolution.

With these key task design and assessment constraints in hand, I began to think about the overall design of the system that could meet these constraints.  One key consideration at this stage was the overall cost of the system. To lower costs, we emphasized open-source and quick prototyping solutions (such as 3D printed components) instead of expensive proprietary solutions. There was certainly a balance, as going the route of open-source can require significant development time, so I had to make decisions based on available team resources and research priorities.

Photo by Tim Trumble


A key design contribution of mine was a framework and architecture for the system. We had multiple team members, all developing individual components of the system at various time scales. However, all of these pieces needed to serve the overall task and assessment constraints. Therefore, I designed a modular architecture for the system that could support easy integration of components. I developed most of the components of the system (excluding the audio and video feedback) in Objective-C, as well as the methods by which components were connected to one another. The idea was that this architecture would remain stable and maintain key design constraints while the individual implementations could change.
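The spirit of that architecture can be sketched as a pipeline of swappable components behind a common interface. This is an illustrative sketch in Python (the actual system was written in Objective-C), and all class and field names here are hypothetical, not the system's real identifiers:

```python
# Illustrative sketch: each module (sensing, analysis, etc.) conforms to a
# common interface, and a pipeline wires them together. The wiring stays
# stable while individual implementations can be swapped out.

class Component:
    """Base interface every module conforms to."""
    def process(self, frame: dict) -> dict:
        raise NotImplementedError

class Sensor(Component):
    """Stand-in for a sensing module; the real one pulled live sensor data."""
    def process(self, frame):
        frame["position"] = frame.get("raw", (0.0, 0.0, 0.0))
        return frame

class Analyzer(Component):
    """Stand-in for an analysis module: derives a kinematic measure."""
    def process(self, frame):
        x, y, z = frame["position"]
        frame["reach_distance"] = (x**2 + y**2 + z**2) ** 0.5
        return frame

class Pipeline:
    """Connects components in order and routes each data frame through them."""
    def __init__(self, components):
        self.components = components

    def run(self, frame):
        for component in self.components:
            frame = component.process(frame)
        return frame

pipeline = Pipeline([Sensor(), Analyzer()])
result = pipeline.run({"raw": (3.0, 4.0, 0.0)})
print(result["reach_distance"])  # → 5.0
```

Because each stage only depends on the shared interface, an improved `Analyzer` can replace the old one without touching the rest of the pipeline, which mirrors how the system's constraints stayed fixed while implementations evolved.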

I also developed the main control component of the system. As previously discussed, a therapist would no longer be present, so the system needed to automate certain portions of the experience and streamline required interactions to support patient autonomy. Therefore, I designed a system component that configured the therapy and necessary on-screen instructions and only required the patient to let the system know when he or she was ready to continue to subsequent tasks. There was no need for a keyboard, mouse or any fine motor control. Just, essentially, a yes or no input.
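A minimal sketch of that control flow, assuming a session is an ordered list of tasks gated by a single binary input (the task names and function signatures below are invented for illustration):

```python
# Hypothetical sketch of a session controller driven by one yes/no input:
# the system presents each task's instructions, then waits for the patient
# to signal readiness before advancing. No keyboard, mouse, or fine motor
# control required.

TASKS = [
    "reach to touch: near target",
    "reach to touch: far target",
    "transport object between locations",
]  # illustrative task names, not the actual protocol

def run_session(tasks, ready_input):
    """ready_input() returns True when the patient signals 'continue'."""
    completed = []
    for task in tasks:
        print(f"On-screen instructions: {task}")
        while not ready_input():  # block until the single binary input fires
            pass
        completed.append(task)
    return completed

# Simulated patient who is immediately ready for each task:
done = run_session(TASKS, ready_input=lambda: True)
print(len(done))  # → 3
```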

I was also responsible for designing the user experience of how a patient would progress through five weeks (three hour-long sessions a week) of training. Drawing inspiration from constructivist learning and musical instrument instruction models, I created two training paths that progressed a patient from simple reach-to-touch tasks to more complex transportation tasks (in which a patient moved an object between multiple locations). Ultimately the goal was to adapt these training path compositions computationally in response to a patient's live performance of a task; however, these fixed paths served as a first iteration to identify which dimensions (task sequence, feedback sensitivity, etc.) are important when adapting a training protocol.
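One way to picture a fixed training path is as an ordered task list with increasing complexity, where each week unlocks harder tasks. The task names, complexity scale, and unlocking rule below are all assumptions for illustration, not the actual protocol:

```python
# Hedged sketch of a fixed training path: tasks progress from simple
# reach-to-touch toward compound transport tasks, and more complex tasks
# unlock as the weeks advance. Contents are illustrative only.

from dataclasses import dataclass

@dataclass
class Task:
    name: str
    complexity: int  # 1 = simple reach, higher = compound transport

TRAINING_PATH = [
    Task("reach to touch, midline target", 1),
    Task("reach to touch, lateral target", 1),
    Task("reach and grasp object", 2),
    Task("transport object between two locations", 3),
    Task("transport object across three locations", 4),
]

def tasks_for_week(path, week, total_weeks=5):
    """Linearly unlock higher-complexity tasks as weeks advance."""
    top = max(t.complexity for t in path)
    max_complexity = 1 + (week - 1) * (top - 1) // (total_weeks - 1)
    return [t for t in path if t.complexity <= max_complexity]

print([t.name for t in tasks_for_week(TRAINING_PATH, week=1)])
```

Representing the path as data rather than code is what would later make computational adaptation possible: an adaptive version could reorder or reweight the same task list in response to live performance.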

The feedback (developed by Nicole Lehrer) was designed to provide information to the patient on their performance of a task at multiple time scales. I collaborated with her to identify the resolution and content of the feedback that should be presented to a patient during and after completing a task.  This was especially a challenge given that the experience needed to scale over long periods of time (weeks to months) of use at home.
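The multiple-time-scale idea can be sketched as nested summaries: frame-level data rolls up into per-trial feedback, and trials roll up into per-session feedback. The specific metrics below (peak speed, duration) are plausible stand-ins, not necessarily the measures the system used:

```python
# Sketch of feedback at multiple time scales: frame data is summarized per
# trial (in-the-moment feedback), and trials are summarized per session
# (end-of-task feedback). Metric names are illustrative assumptions.

def trial_summary(frames):
    """Per-trial feedback, e.g. peak movement speed and trial duration."""
    speeds = [f["speed"] for f in frames]
    return {
        "peak_speed": max(speeds),
        "duration": frames[-1]["t"] - frames[0]["t"],
    }

def session_summary(trials):
    """Per-session feedback: averages across all trial summaries."""
    n = len(trials)
    return {
        "mean_peak_speed": sum(t["peak_speed"] for t in trials) / n,
        "mean_duration": sum(t["duration"] for t in trials) / n,
    }

frames = [
    {"t": 0.0, "speed": 0.0},
    {"t": 0.5, "speed": 1.2},
    {"t": 1.0, "speed": 0.1},
]
trial = trial_summary(frames)
print(trial["peak_speed"])  # → 1.2
```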


In order to test the efficacy of the previously identified design constraints and the resulting system, we tested it with eight patients. We did not feel that the system was quite home-ready, so testing took place in a clinical environment, but with minimal supervision, serving as a next step toward eventually moving into the home.

Eight patients were recruited across two sites, the Rehabilitation Institute of Chicago and Emory University, and used the system three days a week, for an hour each session, for four weeks total.

The Fugl-Meyer Assessment and the Functional Ability portion of the Wolf Motor Test, traditional measures used in physical therapy, both improved significantly when measured before and after the four-week therapy protocol. Individual kinematic measures were more inconsistent and did not show significant group-level changes. We think this was partially because the system did not provide individually adaptable therapy: since we were only able to provide two training paths as a starting point, more able participants were not given a significant challenge.

Overall, when talking with patients and therapists, reactions to the experience were positive and people could see its potential. This indicated that we were on the right track with the overall experience and system design, but some key details needed to be tweaked to better accommodate individual needs.



This project was an incredible experience for me.  It really established me as an experience designer.  I got to work at the intersection of design, research and development.  Looking back on that work, I am incredibly proud of what I accomplished, but have many things I would want to change or improve, that now inform how I approach projects.

This project was my first opportunity to develop code. There was a need early in the project's timeline for an Objective-C developer, so I stepped into that role. I started having never written code in an object-oriented language and left the project having written five modules that could pull data from sensors, perform calculations at multiple timescales, and route data frames to appropriate targets, such as a database. I also built the main control GUI. That process taught me a lot about writing clean, extensible code, and there are many concepts I proposed in my dissertation to improve the code so that it could be further extended by future developers.

I also have reflected on the user experience and while I think I succeeded in creating an engaging, beneficial experience for patients, I wasn’t able to accomplish the same for the therapist.  In hindsight, it was a big oversight of priorities.  So much focus, by necessity, needs to be spent in making the system of value to patients, but equal consideration needs to be given to the supporters of stroke survivors. As a result, I proposed a complete redesign of the interface for future work that would make it easy for therapists to use and get the information they need.

This was also my first opportunity to lead the development and design of a massive interactive media project.  There were multiple students and faculty with individual interests as well as clinical stakeholders across two different locations. This reinforced my approach that an architecture is necessary, not only in terms of designing a system, but also how to allocate team resources and address research questions.

At the end of the day, the motto for my process with this project was: modularize in the face of complexity. Just because all of the details of optimal implementation are not known, that should not necessarily prevent development.  However, the system design needs to support iteration while still meeting overall goals.  I truly love this style of design that can tackle complex human issues through interactive experiences.