About me

Hi! I am a third-year PhD student at the University of Colorado. My research focuses on providing independent access to images and videos online for people with vision impairments using off-the-shelf technologies.

In my research, I typically work with people with vision impairments to design, build, and evaluate systems which aim to improve access to online images and videos.

Ultimately, I aim to contribute novel techniques and systems that have the potential to improve access and can be used without specialized expertise in programming or hardware.

Superimposed image (two parts): (1) the background is a 3D scene featuring a cube on screen at a 45-degree angle; (2) a zoomed-in picture shows a Sphero Ollie (a cylindrical two-wheeled robot) being rotated to 45 degrees, which rotates the virtual cube in Blender in real time.
Sphero Ollie robot controlling the virtual cube's orientation. Rotating the robot rotates the virtual cube directly.
Personal photo of Darren in front of his desktop computer, which shows the Angry Birds game onscreen, with a Sphero Ollie in the background.

Projects


Accessible Videos

Using Robots to Improve the Accessibility of Online Videos

Online educational materials such as interactive simulations and videos are increasingly relied upon for self-directed learning. However, many online simulations and educational videos are entirely visual, making them inaccessible to individuals with vision impairments. Even when audio is included, it often lacks the narration and spatial details needed to interpret what is happening in the scene.

In my dissertation work, I explore ways to improve access to dynamic visual media using wireless robots. For this submission to the ASSETS 2017 Doctoral Consortium, I developed an early prototype that conveys dynamic motion content using robots as tangible sprites, translating visual motion into an accessible tangible representation. Tangible videos can be generated using a mix of computer vision and human editing. The system pairs the original source video and audio with the tangible representation, and supports optional spoken narration so that authors can provide additional detail about important events in the scene.
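The core translation step can be pictured as a simple coordinate mapping: a position tracked in the video frame is scaled into the robot's physical workspace. The sketch below is only illustrative; the function name and workspace dimensions are assumptions, not the prototype's actual API.

```javascript
// Hypothetical sketch: mapping a tracked on-screen position to tabletop
// coordinates for a robot acting as a "tangible sprite". The video and
// table dimensions here are example values.
function videoToWorkspace(x, y, videoW = 1920, videoH = 1080,
                          tableWcm = 60, tableHcm = 40) {
  // Normalize the pixel position, then scale into the tabletop area (cm).
  return { x: (x / videoW) * tableWcm, y: (y / videoH) * tableHcm };
}
```

A sprite tracked at the center of the frame would then be placed at the center of the tabletop workspace.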

See DC Submission

GUI Robots

Using Off-the-Shelf Robots as Tangible Input and Output Devices for Unmodified GUI Applications

In this work, I created a small framework that enables end users to create tangible user interfaces for existing software applications using off-the-shelf robots. We wanted to allow users with little programming experience to create their own tangible interfaces for their commonly used applications, so we provided a small JavaScript framework for controlling the robot and the GUI. In addition to letting users react to traditional user input, we also provided a small computer vision library so that users could create custom robotic events that coincide with events onscreen.
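The event-mapping idea can be sketched as pairing a named onscreen event (for example, one detected by the vision library) with a robot behavior. The `registerEvent`/`trigger` helpers below are illustrative stand-ins, not the published framework API.

```javascript
// Hypothetical sketch of mapping onscreen events to robot actions.
const handlers = {};

function registerEvent(name, robotAction) {
  // Associate an onscreen event name with a robot behavior callback.
  handlers[name] = robotAction;
}

function trigger(name, payload) {
  // Called when the vision library detects the named event onscreen.
  return handlers[name] ? handlers[name](payload) : null;
}

// Usage: roll the robot when a launch is detected in the game window.
registerEvent('birdLaunched', ({ speed }) => `roll(${speed})`);
```

In the real framework the callback would issue a command to the robot; here it simply returns the command string it would send.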

To demonstrate the framework's use in creating tangible user interfaces, we had 12 developers participate in 90-minute sessions to create their own custom tangible interfaces for the desktop. The evaluation began with an API introduction and brainstorming phase, in which each developer was given documentation about the framework and encouraged to sketch out initial designs and ask questions about the API and software framework. Developers were then asked to create two tangible user interfaces: an Angry Birds controller and a controller for the Movie Maker editing platform. The development time for each controller was 30 minutes or less. Finally, developers were interviewed about how the framework could be improved and whether or how they would use the system in their own interfaces.

Affiliated Faculty:
Daniel Szafir
Shaun Kane

Full Paper

GUI Robots: Using Off-the-Shelf Robots as Tangible Input and Output Devices for Unmodified GUI Applications from Darren Guinness on Vimeo.

Bibtex Citation:
@inproceedings{Guinness2017GUIRU,
  title={GUI Robots: Using Off-the-Shelf Robots as Tangible Input and Output Devices for Unmodified GUI Applications},
  author={Darren Guinness and Daniel Szafir and Shaun K. Kane},
  booktitle={Conference on Designing Interactive Systems},
  year={2017}
}

Caption Crawler

Enabling Reusable Alternative Text Descriptions using Reverse Image Search

Accessing images online is often difficult for users with vision impairments. This population relies on text descriptions of images that vary based on website authors’ accessibility practices. Where one author might provide a descriptive caption for an image, another might provide no caption for the same image, leading to inconsistent experiences.

In this work, we present the Caption Crawler system, which uses reverse image search to find existing captions on the web and make them accessible to a user’s screen reader. We report our system’s performance on a set of 481 websites from alexa.com’s list of most popular sites to estimate caption coverage and latency, and also report blind and sighted users’ ratings of our system’s output quality. Finally, we conducted a user study with fourteen screen reader users to examine how the system might be used for personal browsing.
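The caption-reuse idea can be sketched as follows: given alt-text candidates harvested from pages that embed the same image (found via reverse image search), pick the most descriptive one. Ranking by length is only a stand-in heuristic here; the system's actual ranking may differ.

```javascript
// Illustrative sketch: choose the best caption from candidates gathered
// across pages that use the same image.
function bestCaption(candidates) {
  const useful = candidates
    .map(c => c.trim())
    // Drop empty strings and unhelpful placeholder alt text.
    .filter(c => c.length > 0 && c.toLowerCase() !== 'image');
  if (useful.length === 0) return null;
  // Stand-in heuristic: prefer the longest remaining candidate.
  return useful.reduce((a, b) => (b.length > a.length ? b : a));
}
```

The chosen caption would then be exposed to the user's screen reader in place of the missing or unhelpful original alt text.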

Full Paper

Caption Crawler: Enabling Reusable Alternative Text Descriptions Using Reverse Image Search from Darren Guinness on Vimeo.

Personal Space

Modeling Mid-air Gestural Interaction as an Adaptable User Interface

This work was conducted as part of my Master's program at Baylor University. We wanted to create an adaptive interface for users who may have limited limb mobility or motor impairments that affect their ability to use traditional input devices, or that cause them to switch between their dominant and non-dominant limbs during input. We chose to examine how gestural interaction might enable easier expertise transfer between dominant- and non-dominant-handed input based on prior body awareness.

To enable this, we introduced the Personal Space approach, which allows users to calibrate a custom interaction space for cursor control using the Leap Motion and Myo armband. We studied how this input could be used with dominant and non-dominant limbs, how it could serve as an assistive technology for people who have difficulty using traditional input methods, and how different ways of modeling the interaction space affected user performance.
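The calibration idea can be pictured as mapping a hand position inside a user-calibrated interaction box to screen cursor coordinates. The box fields and clamping below are illustrative assumptions, not the study's exact interaction-space model.

```javascript
// Hypothetical sketch: map a hand position within a calibrated box
// (user-defined bounds in sensor coordinates) to the screen cursor.
function toCursor(pos, box, screen = { w: 1920, h: 1080 }) {
  const clamp = v => Math.min(1, Math.max(0, v));
  // Normalize the hand position within the calibrated box, clamped to [0, 1].
  const nx = clamp((pos.x - box.minX) / (box.maxX - box.minX));
  const ny = clamp((pos.y - box.minY) / (box.maxY - box.minY));
  // Scale the normalized position to screen pixels.
  return { x: nx * screen.w, y: ny * screen.h };
}
```

Because the box is per-user, the same small hand motion can cover the full screen for a user with a limited range of motion.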

Modeling Interaction Space Paper

Contact me