Mar 22, 12:00 PM

Computational Imaging Lunch: Motion Resolved Dynamic MRI

Title: Motion resolved dynamic MRI
Speakers: Frank Ong

Description: In some MRI applications, we are interested in visualizing motion over time, such as cardiac or respiratory motion, for functional information. However, there are a few challenges in reconstructing these motion dynamics accurately: 1) each time frame is vastly undersampled in the Fourier domain, so some prior on the motion is needed for reconstruction; 2) accurate representations of complex body motion are difficult to come up with; and 3) it is not clear how to incorporate motion models in an inverse problem formulation (what is the cost function?).

In this talk, I will give an overview of the problem and present preliminary results in modeling non-rigid body motion as local linear translations, which leads to a cleaner formulation.
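
As a concrete illustration of the inverse-problem framing above, here is a minimal NumPy sketch of a generic cost function for motion-resolved reconstruction: a per-frame undersampled-Fourier data-consistency term plus a simple temporal-difference prior. The array names, sampling setup, and prior are illustrative assumptions only; this is not the local-translation model presented in the talk.

```python
import numpy as np

def dynamic_mri_cost(frames, masks, kspace, lam=0.1):
    """frames: (T, N, N) image series; masks: (T, N, N) binary sampling masks;
    kspace: (T, N, N) measured k-space (zero where unsampled)."""
    data_term = 0.0
    for x_t, m_t, y_t in zip(frames, masks, kspace):
        # Data consistency: predicted k-space vs. measurements at sampled locations.
        data_term += np.sum(np.abs(np.fft.fft2(x_t) * m_t - y_t) ** 2)
    # Toy motion prior: penalize frame-to-frame differences (temporal TV).
    prior = np.sum(np.abs(np.diff(frames, axis=0)))
    return data_term + lam * prior

# Example usage with random data: 4 frames of size 64x64, ~4x undersampled.
rng = np.random.default_rng(0)
frames = rng.standard_normal((4, 64, 64))
masks = (rng.random((4, 64, 64)) < 0.25).astype(float)
kspace = np.fft.fft2(frames, axes=(-2, -1)) * masks
print(dynamic_mri_cost(frames, masks, kspace))
```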

Feb 22, 12:00 PM

Computational Imaging Lunch: Non-Uniform FFTs

Title: Non-Uniform FFTs
Speakers: Teresa Ou

Description:  The non-uniform FFT (NUFFT) generalizes the FFT from equispaced samples to arbitrary sampling patterns. NUFFT has applications in many areas, including MRI, optics, radar, and astronomy. We present a fast GPU-based NUFFT library with auto-tuning, which allows users to accelerate iterative reconstructions without the difficulties of choosing the optimal parameters and algorithms. In this talk, I will present an overview of algorithms, strategies for optimization, and performance analysis of our implementation.
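
For context, the sketch below (NumPy) spells out the operation a NUFFT approximates: direct evaluation of the non-uniform discrete Fourier transform, which costs O(N·M) and is exactly what fast gridding-based implementations accelerate. Function and variable names here are illustrative and are not our library's API.

```python
import numpy as np

def nudft2(image, coords):
    """Evaluate the 2D Fourier transform of `image` at arbitrary k-space
    coordinates `coords` (M, 2), given in radians per pixel."""
    ny, nx = image.shape
    # Pixel grid centered at the origin.
    y, x = np.meshgrid(np.arange(ny) - ny // 2, np.arange(nx) - nx // 2,
                       indexing="ij")
    out = np.empty(len(coords), dtype=complex)
    for m, (ky, kx) in enumerate(coords):
        out[m] = np.sum(image * np.exp(-1j * (ky * y + kx * x)))
    return out

# Example: sample a 16x16 image along one radial spoke of 32 points.
img = np.random.default_rng(1).standard_normal((16, 16))
t = np.linspace(-np.pi, np.pi, 32)
spoke = np.stack([t * np.sin(0.3), t * np.cos(0.3)], axis=1)
print(nudft2(img, spoke).shape)   # (32,)
```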

Feb 14, 12:00 PM

Computational Imaging Lunch: Building the Cloud Button

Title: Building the Cloud Button
Speakers: Eric Jonas

Description: At comp imaging lunch last term, Professor Ren Ng asked "why is there no cloud button?" We realized, listening to Ren, that indeed #theCloudIsTooDamnHard. Fortunately, recent technologies developed for web services and internet startups can be repurposed to enable a much lower-friction cloud experience. Our goal is to make the power, elasticity, and dynamism of commercial cloud services like Amazon's EC2 accessible to busy applied physicists and EEs, and to offer a compelling new capability over MATLAB, hopefully encouraging migration. We built PyWren, a transparent distributed execution engine on top of AWS Lambda, which hopefully simplifies many scale-out use cases for computational imaging. We will demo applications built on our framework and seek user input on next directions.
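
A minimal sketch of what PyWren-style scale-out looks like, based on the project's public examples; the worker function here (a toy per-dataset FFT job) is a placeholder assumption, not an actual imaging workload.

```python
import numpy as np
import pywren

def simulate(seed):
    # Stand-in for an expensive per-dataset imaging computation.
    data = np.random.default_rng(seed).standard_normal((256, 256))
    return float(np.abs(np.fft.fft2(data)).mean())

pwex = pywren.default_executor()            # each call runs on AWS Lambda
futures = pwex.map(simulate, range(100))    # fan out 100 jobs
results = [f.result() for f in futures]     # gather results locally
print(len(results), results[:3])
```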

Joint work with Shivaram Venkataraman, Ion Stoica, and Ben Recht. 

Speaker Bio: Eric Jonas is an exhausted postdoc working in EECS with Ben Recht on machine learning and computational acquisition and clever hashtags. 

Feb 1, 12:00 PM

Computational Imaging Lunch: Harnessing the Creative Power of Drones

Title: Harnessing the Creative Power of Drones
Speakers: Mike Roberts

Date: February 1 at 12pm

Description:  Drones are becoming a popular camera platform due to their maneuverability, small size, and low cost. Indeed, drones are being used to film scenes in a growing number of Hollywood movies, and are emerging as a powerful tool for 3D content creation. However, drones remain difficult to control, both for humans and for computers. In this talk, I will present my ongoing work aiming to make it easier for humans to interact with drones, focusing specifically on the creative tasks of cinematography and 3D modeling. I will begin by presenting an interactive tool for designing drone camera shots, describing the key insights and algorithms that inform its design. I will show examples of ambitious aerial cinematography that have been created using this tool, by users with absolutely no experience flying drones. I will then present my ongoing efforts to build a fully automatic, end-to-end system for scanning large outdoor scenes in 3D using off-the-shelf consumer drones.

Bio: Mike Roberts is a fifth-year PhD candidate in the Computer Graphics Laboratory at Stanford University advised by Pat Hanrahan. Mike’s work is at the intersection of computer graphics and robotics, where he focuses on using drones to support human creativity. Mike has interned at Harvard University, Skydio, and most recently at Microsoft Research. Mike’s joint work with the Harvard Center for Brain Science was published on the cover of Cell in 2015, and has been featured in BBC Horizon, The Guardian, Huffington Post, National Geographic, Nature News, The New York Times, and Popular Science. In 2013, Mike co-developed the Introduction to Parallel Programming course at Udacity, which has enrolled over 80,000 students.

Jan 18, 12:00 PM

Computational Imaging Lunch: 3D Compressed Sensing with a Diffuser

Title: 3D Compressed Sensing with a Diffuser 
Speakers: Grace Kuo, Nick Antipa

Description: When capturing higher-dimensional data (3D or light-fields) with a 2D sensor, one usually must sacrifice either spatial resolution or time resolution. For example, a plenoptic light-field camera distributes pixels over four dimensions, resulting in lower spatial resolution than a traditional camera. Scanning and multi-shot methods can maintain high spatial resolution, but they sacrifice time resolution. In this work, we propose an alternative: if we can encode 3D information on the 2D sensor in such a way that it meets the requirements of compressed sensing, then we should be able to recover the 3D object from a single image without loss of resolution, assuming the object is sparse in some domain. In this talk, we will give an overview of compressed sensing, demonstrate that a diffuser meets the practical requirements of compressed sensing, and show results from our prototype system.
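
To make the compressed sensing setup concrete, below is a minimal NumPy sketch of the generic sparse-recovery problem: reconstruct a sparse vector from a small number of random measurements by minimizing (1/2)||Ax - y||^2 + lam*||x||_1 with iterative soft-thresholding (ISTA). The random Gaussian measurement matrix here is a stand-in assumption, not the diffuser forward model from the talk.

```python
import numpy as np

def ista(A, y, lam=0.01, n_iter=2000):
    """Iterative soft-thresholding for (1/2)||Ax - y||^2 + lam*||x||_1."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2    # 1 / Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - step * (A.T @ (A @ x - y))                        # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft-threshold
    return x

# Example: recover a 10-sparse signal of length 400 from 100 measurements.
rng = np.random.default_rng(0)
n, m, k = 400, 100, 10
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x_true
x_hat = ista(A, y)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```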

This paper by Candes gives a nice introduction to compressed sensing:

http://ieeexplore.ieee.org/document/4472240/?arnumber=4472240&tag=1

Nick Antipa’s paper from ICCP last year discusses how a diffuser can be used to capture light-fields:

http://ieeexplore.ieee.org/document/7492880/
