CSE590W: Accessibility Research Seminar

Instructors

Who can attend? CREATE members and students at any level who are excited about the intersection of disability, race, and accessibility. Those who can't register directly for the course should reach out to Kelly or Venkatesh.

Seminar Expectations

Please remember to help us create an inclusive, respectful, and accessible environment for all participants. This means many things, but some examples are: taking turns when speaking; introducing yourself, and speaking clearly and slowly to help improve caption accuracy; keeping your video on if possible to make lip reading easier; doing the reading when any is required; listening and avoiding assumptions about what others experience; doing the work to teach yourself and learn together; checking in regularly; and letting others speak before speaking again.

To sign up for the seminar and mailing list, fill out this form.

Seminar Format

This quarter, we are having CREATE members present about their research. These presentations could include practice talks, presenting works in progress, or asking for feedback for a project; it's quite flexible. Please reach out to Kelly or Venkatesh if you're interested in presenting.

Schedule

1/10/22: Overview

Intro to seminar expectations, grading for credits, and the reading for this quarter.

1/17/22: Martin Luther King Jr. Day

Holiday; No seminar.

1/24/22: Megan Hofmann

Title: Optimizing Medical Making

Megan is a PhD candidate at the Human-Computer Interaction Institute at Carnegie Mellon University. Her work in the emerging area of Medical Making, the application of digital fabrication in healthcare, has won multiple awards at ACM CHI and ASSETS. Additionally, Megan is a leading researcher in the burgeoning field of automatic machine knitting. Currently, she is teaching a new graduate special topics course on machine knitting at the Paul G. Allen School of Computer Science & Engineering at the University of Washington. Her research has been published at top HCI conferences such as CHI, UIST, ASSETS, and CSCW.

1/31/22: Momona Yamagami

Title: Closed-Loop Human-Machine Interfaces for Accessibility & Health

Momona Yamagami is a Ph.D. candidate in the Electrical & Computer Engineering department at the University of Washington. She received her BS in Bioengineering from Rice University in 2016. Her research focuses on improving the accessibility of human-machine interfaces for people with and without limited motion using data-driven modeling, control theory, and wearable sensor technologies. Momona is the recipient of the 2021 UW College of Engineering Student Research Award, the UW ECE Irene Peden Fellowship, and the UW Institute of Neuroengineering Innovation Graduate Fellowship. She was a 2020 Microsoft Research Ph.D. intern. Momona’s 2021 proposal on "Framework for Diverse EMG Gesture Recognition" co-written with Dr. Jennifer Mankoff was awarded $150,000 from Meta Research.

2/7/22: Nancy Alajarmeh

Title: Password Manager Use Among Users Who Are Visually Impaired: Awareness, Adoption, and Rejection

2/14/22: Kim Ingraham

Title: Enabling human mobility through personalized robotic control

Kim Ingraham (she/her) is a post-doctoral fellow at the University of Washington in the departments of Rehabilitation Medicine and Mechanical Engineering, and is a member of the multidisciplinary group CREATE (Center for Research and Education on Accessible Technology and Experiences). Kim holds degrees in Mechanical Engineering from the University of Michigan (PhD, 2021) and in Biomedical Engineering from Vanderbilt University (BE, 2012). Prior to beginning graduate school, she worked on an interdisciplinary team of scientists, engineers, and clinicians as a Research Engineer at the Shirley Ryan AbilityLab (2012-2015). Kim is passionate about the potential for assistive robots to advance human mobility in all forms. In her research, she has developed and evaluated physiologically-inspired control systems for a variety of assistive devices, including powered lower-limb prostheses, robotic exoskeletons, and powered wheelchairs for young children with disabilities. Kim is an NSF graduate research fellow, and served as both a graduate student instructor and an engineering teaching consultant at the University of Michigan.

Talk abstract: Assistive robotic technologies—like prostheses, exoskeletons, and semi-autonomous powered wheelchairs—have the potential to meaningfully transform human mobility. Yet, despite great strides in the development of wearable assistive robots over the last several decades, people are not widely using these systems in their daily lives. Fundamentally, we do not yet know how to apply robotic assistance to the human body in order to promote meaningful clinical improvements or achieve targeted physiological goals. To this end, my work is focused on the design and experimental evaluation of personalized, adaptive control strategies for assistive robotic devices. In this seminar, I will present work that advances our understanding of how to provide robotic assistance to users outside the laboratory environment. I will discuss how data-driven modeling and wearable sensors can be used to estimate important physiological metrics (e.g., metabolic cost) outside the lab. Such measurements are necessary for adaptive control strategies to respond to real-time changes in the user’s physiology. As an example of a personalized control system, I will demonstrate that exoskeleton users can quickly and precisely identify features of robotic assistance that they prefer, and highlight characteristics of user preference that make the design of personalized control systems both compelling and challenging. While many assistive robots support users during walking, I will also share insights into how these methods may be applied to wheeled mobility technologies, specifically for children with disabilities. Together, this work supports my future research goal of designing personalized, adaptive control strategies for wearable assistive robots in order to enable people to meet their goals and achieve full participation in their daily lives.

2/28/22: Lotus Zhang

Title: Understanding and supporting digital content creation by blind and low vision individuals

Lotus Zhang is a third-year PhD student in the department of Human Centered Design and Engineering, where she conducts accessibility research with professor Leah Findlater. She is passionate about understanding and supporting people who are blind or have low vision in creating and consuming digital content.

Talk abstract: Digital content creation has become an increasingly common activity in our work and personal lives. A substantial part of workplace practices and education now involves digital content creation, and user-generated content has gained popularity online too. However, digital creative experiences are largely visually oriented, introducing access barriers for people who are blind or have low vision. In this seminar, I will introduce an early-stage research effort that I recently started as the focus of my PhD: understanding and supporting digital creative experiences of blind and low vision people. I will talk about related work around this topic and discuss how we as accessibility researchers could better approach this problem.

3/7/22: Aashaka Desai

Title: Understanding and supporting speechreading experiences of d/Deaf and hard-of-hearing individuals

Aashaka Desai is a second-year PhD student in Computer Science and Engineering. She is advised by Jennifer Mankoff and Richard Ladner. Her research is in the area of accessibility and human-computer interaction. Her recent projects focus on supporting speechreading experiences of DHH individuals, examining lived experiences of innovators with disabilities, and exploring embroidered tactile graphics creation.

Talk abstract: Speechreading is the art of using visual and contextual cues in the environment to support listening. Often used by DHH individuals, speechreading can range from reading lips to using body language and the topic of conversation to guess what is being said. It offers richer information, but can be cognitively demanding. In this work, we explore speechreading experiences of DHH individuals during video calls and technological supports for speechreading through a three-part study consisting of formative interviews, design probes, and design sessions. We offer a better understanding of the richness and variety of techniques DHH individuals use to provision access, design recommendations for speech visualizations, and directions for future speechreading technology research.

3/14/22: Reflect and plan.