Click on the images to learn more about our research projects.
Sonification in Image-guided Surgery
One challenge facing image-guided surgery systems is presenting 3D information on a 2D surface. As a potential solution, we are investigating the utility of sonification (transforming data into sound) for providing real-time, continuous distance information within the context of image-guided surgery. More specifically, we are interested in the ability of real-time sound synthesis to facilitate the perception of depth (i.e. "hearing depth"), the dimension that suffers the most in 2D visualizations. The sounds we use in this research are inspired by sonification techniques used in commercial applications, such as the automotive and aerospace industries.
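As a minimal sketch of the idea, the snippet below maps a tool-to-target distance to an audible frequency so that a shrinking distance is "heard" as a rising pitch. The clamp range, frequency limits, and exponential mapping are illustrative assumptions, not the parameters used in our studies.

```python
MIN_DIST_MM, MAX_DIST_MM = 1.0, 50.0   # assumed clamp range for the distance reading
F_NEAR_HZ, F_FAR_HZ = 880.0, 220.0     # near target -> high pitch (values are assumptions)

def distance_to_frequency(distance_mm: float) -> float:
    """Map a distance in millimetres to a pitch in Hz (closer = higher)."""
    d = min(max(distance_mm, MIN_DIST_MM), MAX_DIST_MM)
    # Normalize to [0, 1], where 0 = closest and 1 = farthest.
    t = (d - MIN_DIST_MM) / (MAX_DIST_MM - MIN_DIST_MM)
    # Exponential interpolation suits the ear's roughly logarithmic
    # perception of pitch better than a linear ramp.
    return F_NEAR_HZ * (F_FAR_HZ / F_NEAR_HZ) ** t

print(distance_to_frequency(1.0))   # closest distance -> 880.0 Hz
print(distance_to_frequency(50.0))  # farthest distance -> 220.0 Hz
```

In a real system this frequency would drive a continuous synthesizer voice updated at the tracking rate, rather than being printed.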
Adaptive Computer-Generated Voices
Intelligent personal assistants (such as Apple’s Siri, Google Now, or Amazon’s Echo) represent a new generation of audio-based human-computer interaction. Audio-based HCI solutions are useful for many applications, especially those that already demand significant visual attention from the user. While intelligent personal assistants (IPAs) have many limitations, one in particular pertains to “emotionless” speech. Human speech is highly expressive not only because of our vocabulary and ability to convey contextual information, but also because of a rich set of tone-of-voice cues we employ to nuance semantic meaning. This research explores the use of affective tone-of-voice cues within IPA applications to adapt to and shape the tone of the user.
Medical Image Visualization
Appropriate visualization of scientific data can significantly improve understanding of the information captured within the data. In the context of 3D medical images (e.g. MRI, CT, angiography), proper visualization has the potential to make anatomical features perceptible to medical practitioners and can aid in the spatial understanding of anatomy and pathology. This can aid in the prevention of disease through the discovery of imaging biomarkers, as well as improve treatment — leading to better diagnosis, planning, and therapy. Motivated by the potential to dramatically improve clinical workflows and ultimately patient care, the focus of this research area is to develop and evaluate new image processing and visualization algorithms that allow for intuitive understanding and analysis of anatomical data.
Augmented Reality in Image-guided Surgery
Augmented reality images are difficult to interpret in terms of depth because the virtual, computer-generated part of the image tends to look as if it is floating above the real world. In this research, we study rendering methods that account for the discrepancy between real and virtual images and improve spatial and depth understanding. New algorithms grounded in pictorial depth cues such as depth of focus, distance-based edge depiction, ambient shading, and perspective distortion will be developed and tested. Furthermore, new ways to combine real and virtual elements to create AR views will be developed.
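To give a flavour of one of the pictorial depth cues mentioned above, the sketch below implements a toy version of distance-based edge depiction: the edges of a virtual structure are drawn more faintly the deeper it lies below the visible surface, so it no longer appears to float on top of the scene. The exponential falloff and its constant are illustrative assumptions, not our actual rendering algorithm.

```python
import math

FALLOFF_MM = 20.0  # assumed e-folding depth for edge visibility

def edge_opacity(depth_below_surface_mm: float) -> float:
    """Return an edge opacity in (0, 1]: deeper structures fade out."""
    d = max(depth_below_surface_mm, 0.0)
    # Exponential falloff: full opacity at the surface, fading with depth.
    return math.exp(-d / FALLOFF_MM)

print(edge_opacity(0.0))                 # structure at the surface -> 1.0
print(round(edge_opacity(20.0), 2))      # 20 mm deep -> 0.37
```

In an actual AR renderer, this opacity would modulate the alpha channel of the virtual structure's silhouette edges per fragment, using depth values from the tracked camera and preoperative model.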