Adaptive Computer-Generated Voices
Intelligent personal assistants (such as Apple’s Siri, Google Now, or Amazon’s Echo) represent a new generation of audio-based human-computer interaction. Audio-based HCI solutions are useful for many applications, especially those that already demand significant visual attention from the user. While intelligent personal assistants (IPAs) have many limitations, one in particular pertains to “emotionless” speech. Human speech is highly expressive not only because of our vocabulary and ability to convey contextual information, but also because of the rich set of tone-of-voice cues we employ to nuance semantic meaning. This research explores the use of affective tone-of-voice cues within IPA applications to adapt to, and shape, the tone of the user.
Medical Image Visualization
Appropriate visualization of scientific data can significantly improve understanding of the information captured within the data. In the context of 3D medical images (e.g., MRI, CT, angiography), proper visualization has the potential to make anatomical features perceptible to medical practitioners and can aid in the spatial understanding of anatomy and pathology. This can help prevent disease through the discovery of imaging biomarkers, as well as improve treatment, leading to better diagnosis, planning, and therapy. Motivated by the potential to dramatically improve clinical workflows and ultimately patient care, the focus of this research area is to develop and evaluate new image processing and visualization algorithms that allow for intuitive understanding and analysis of anatomical data.
Augmented Reality in Image-guided Surgery
Augmented reality images are difficult to interpret in terms of depth because the virtual, computer-generated part of the image tends to look like it is floating above the real world. This research studies rendering methods that account for the discrepancy between real and virtual images and that improve spatial and depth understanding. New algorithms grounded in pictorial depth cues, such as depth of focus, distance-based edge depiction, ambient shading, and perspective distortion, will be developed and tested. Furthermore, new ways of combining real and virtual elements to create AR views will be explored.