Cornell University

Shiri Azenkot, September 17th, 2014

Wednesday, September 17, 2014 - 4:00pm to 5:15pm
Athreya Conference Room (Rm 310)

Please join us for the Information Science Colloquium with guest speaker Shiri Azenkot. Shiri Azenkot is an Assistant Professor at the Jacobs Technion-Cornell Institute at Cornell Tech who is broadly interested in human-computer interaction and accessibility. She recently received her PhD in Computer Science from the University of Washington, where she focused on eyes-free input on mobile devices using gestures and speech. Shiri received two Best Paper awards from ACM's ASSETS conference and has presented her work at other top HCI conferences (CHI and UIST). She received the University of Washington graduate student medal, a National Science Foundation Graduate Research Fellowship, and an AT&T Labs Graduate Fellowship. Shiri also holds a BA in computer science from Pomona College and an MS in computer science from the University of Washington.

Title: Eyes-Free Input on Mobile Devices

 

Abstract: I will discuss new methods and studies that aim to improve eyes-free data entry for blind mobile device users. Currently, mobile devices are generally accessible to blind people, but text entry is almost prohibitively slow. Studies show that blind people enter text on an iPhone at a rate of just 4 words per minute.

 

I will present *Perkinput*, a chording text entry method in which users touch the screen with one to three fingers at a time in patterns based on Braille. Instead of soft keys, Perkinput uses concepts from signal detection theory to determine the user's input. Based on Perkinput, I developed *PassChords*, a touchscreen authentication method that has no audio feedback. Unlike current eyes-free input methods, PassChords doesn't echo a user's input, so it won't broadcast the user's password for others to hear. Finally, I will discuss another modality for eyes-free input: speech. I conducted a survey and a study to determine the patterns and challenges of using speech input to compose paragraphs on mobile devices. I will conclude by presenting current work on eyes-free methods for correcting speech recognition errors.
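As background on the Braille patterns that Perkinput's chords are based on, a standard six-dot Braille cell can be sketched as a lookup from touched dot positions to letters. This is only a simplified illustration of the encoding, not Perkinput's actual detection algorithm, which (per the abstract) uses signal detection theory rather than a fixed lookup:

```python
# Illustrative sketch: decoding six-dot Braille patterns into letters.
# Dot numbering follows the standard Braille cell:
#   1 4
#   2 5
#   3 6
# Only the letters a-j (the first Braille decade) are shown here.
BRAILLE_LETTERS = {
    frozenset({1}): "a",
    frozenset({1, 2}): "b",
    frozenset({1, 4}): "c",
    frozenset({1, 4, 5}): "d",
    frozenset({1, 5}): "e",
    frozenset({1, 2, 4}): "f",
    frozenset({1, 2, 4, 5}): "g",
    frozenset({1, 2, 5}): "h",
    frozenset({2, 4}): "i",
    frozenset({2, 4, 5}): "j",
}

def decode_chord(dots):
    """Map a set of touched dot positions to a letter, or None if unknown."""
    return BRAILLE_LETTERS.get(frozenset(dots))
```

For example, a chord touching dots 1 and 2 decodes to "b". A real eyes-free method must additionally infer *which* dots a noisy multi-finger touch was intended to hit, which is where the signal-detection machinery comes in.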