Research

Intelligence at Extreme-Edge through Processing-in-Pixel Paradigm: Visual data in cameras are usually captured as analog voltages by a sensor pixel array and then converted to the digital domain by analog-to-digital converters (ADCs) for subsequent AI processing. These high-resolution input images need to be streamed from the camera to the AI processing unit, frame by frame, causing energy, bandwidth, and security bottlenecks. To mitigate this problem, we propose a novel Processing-in-Pixel paradigm that customizes the pixel array with support for massively parallel analog computing, implementing all the computations needed for the first few layers of modern Convolutional Neural Networks. Results show that the processing-in-pixel paradigm can reduce the data bandwidth from sensors to the cloud by 10-25x. This work is in collaboration with semiconductor manufacturing experts and algorithm and application designers.
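
As a rough illustration of where the bandwidth savings come from, the sketch below compares the bits needed to stream a raw digitized frame against the bits needed to stream the output of a hypothetical in-pixel front end. The frame size, bit-widths, channel count, and 8x downsampling factor are assumptions chosen for illustration, not the exact configurations used in our work.

```python
# Back-of-the-envelope estimate of off-sensor bandwidth savings when the
# first few convolutional layers are computed inside the pixel array.
# All shapes and bit-widths below are hypothetical examples.

def tensor_bits(height, width, channels, bits_per_value):
    """Bits needed to stream one raw frame or feature map."""
    return height * width * channels * bits_per_value

# Raw sensor frame: e.g., a 1080p frame digitized at 12 bits per pixel.
raw_bits = tensor_bits(1080, 1920, 1, 12)

# Hypothetical in-pixel front end: overall 8x spatial downsampling,
# 16 output channels, activations quantized to 4 bits before readout.
inpixel_bits = tensor_bits(1080 // 8, 1920 // 8, 16, 4)

print(f"raw frame       : {raw_bits / 8e6:.2f} MB")
print(f"in-pixel output : {inpixel_bits / 8e6:.2f} MB")
print(f"bandwidth reduction: {raw_bits / inpixel_bits:.1f}x")   # ~12x here
```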

Neuromorphic Computing in Sensors and Processors: Brain-inspired neuromorphic computing has shown promising potential to achieve high energy efficiency and low latency on complex spatio-temporal tasks. Our group explores device-technology-circuit-algorithm co-design approaches to enable neuromorphic systems spanning compute-in-sensor to compute-in-memory processors. The group has proposed a first-of-its-kind compute-enabled processing-in-pixel scheme for neuromorphic vision based on Spiking Neural Networks. This work is in collaboration with neuromorphic algorithm experts.
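
For readers unfamiliar with spiking models, the sketch below shows a minimal leaky integrate-and-fire (LIF) neuron, the basic unit underlying the Spiking Neural Networks referenced above. The leak factor and firing threshold are illustrative values only, not parameters from a specific design.

```python
import numpy as np

# Minimal leaky integrate-and-fire (LIF) neuron sketch with illustrative
# parameters (leak factor, threshold, hard reset).

def lif_neuron(input_current, leak=0.9, v_threshold=1.0):
    """Simulate one LIF neuron over a sequence of input currents.

    Returns the binary spike train produced by the neuron.
    """
    v_mem = 0.0                # membrane potential
    spikes = []
    for i_t in input_current:
        v_mem = leak * v_mem + i_t     # leaky integration of the input
        if v_mem >= v_threshold:       # fire when the threshold is crossed
            spikes.append(1)
            v_mem = 0.0                # hard reset after a spike
        else:
            spikes.append(0)
    return np.array(spikes)

# Example: a constant weak input produces a sparse, periodic spike train.
print(lif_neuron(np.full(20, 0.3)))
```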

Retina-Inspired Cameras: State-of-the-art retina-inspired cameras are based on a late-1980s understanding of retinal biology. Recent advances in our understanding of the complex computations performed by the retina have not yet been translated into computer vision technology. This collaborative work, carried out with a one-of-a-kind interdisciplinary team that includes experimental and theoretical retinal neuroscientists and algorithm designers, and advised by robotics and autonomous systems experts, aims to use 3D heterogeneous integration to embed retinal computations, from the input photoreceptors to the retinal output, into mass-manufacturable CMOS image sensor technology. Nicknamed IRIS (Integrated Retinal Functionality in Image Sensors), this project aims to develop the next generation of truly retina-inspired cameras, closely following recent discoveries in retinal neuroscience.
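
To give a flavor of the kind of retinal computation such a sensor could embed, the sketch below implements a textbook center-surround (difference-of-Gaussians) receptive field with rectified ON and OFF channels. This is only a generic illustration of a classical retinal operation, not the model or circuit developed in the IRIS project.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Textbook center-surround (difference-of-Gaussians) receptive field with
# rectified ON and OFF channels; sigmas are illustrative values.

def center_surround(frame, sigma_center=1.0, sigma_surround=3.0):
    """Return rectified ON and OFF responses of a DoG receptive field."""
    center = gaussian_filter(frame, sigma_center)      # narrow center
    surround = gaussian_filter(frame, sigma_surround)  # wide surround
    dog = center - surround
    on_response = np.maximum(dog, 0.0)    # ON channel: brighter than surround
    off_response = np.maximum(-dog, 0.0)  # OFF channel: darker than surround
    return on_response, off_response

# Example on a synthetic frame: a bright square on a dark background.
frame = np.zeros((64, 64))
frame[24:40, 24:40] = 1.0
on, off = center_surround(frame)
print(on.max(), off.max())   # strongest responses appear at the edges
```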

Optical Computing and Memory: Optical devices, specifically those built using silicon photonics, provide an alternative state variable for computing at speeds well beyond existing CMOS technology. Our group is exploring new techniques to enable computing and memory solutions in the optical domain that can replace or augment existing CMOS technology. We have proposed, for the first time, an optical analogue of electrical SRAM using foundry-ready optical devices. Optical solutions are being explored for AI/ML, high-precision scientific applications, and traditional digital computation, along with high-speed optical interconnects.
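
To make the idea of computing in the optical domain concrete, the sketch below is a purely behavioral model of an incoherent photonic dot product, with inputs encoded as optical power on separate wavelengths, weights as attenuator transmissions, and a photodetector performing the summation. The ideal lossless, noiseless devices are simplifying assumptions and this does not describe our specific optical designs.

```python
import numpy as np

# Behavioral sketch of an incoherent photonic dot product: each input rides
# on its own wavelength, each weight is the transmission of a tunable
# attenuator, and a photodetector sums the resulting optical powers.
# Ideal (lossless, noiseless) devices are an illustrative assumption.

def photonic_dot_product(inputs, weights):
    """Ideal behavioral model: per-wavelength attenuation + photodetection."""
    powers = np.asarray(inputs, dtype=float)       # optical power per wavelength
    transmissions = np.clip(weights, 0.0, 1.0)     # passive attenuators: 0..1
    detected = powers * transmissions              # element-wise modulation
    return detected.sum()                          # photodetector sums all powers

x = [0.8, 0.2, 0.5, 0.9]    # input activations (normalized optical power)
w = [0.5, 1.0, 0.25, 0.0]   # weights mapped to transmission coefficients
print(photonic_dot_product(x, w))   # 0.725
```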

In-Memory Computing: The segregation of memory and computing in the traditional von Neumann architecture leads to throughput and energy bottlenecks. In-memory computing in CMOS and emerging devices aims to minimize data movement between the memory and the processor. Our group has worked on almost all flavors of in-memory computing, including digital, analog, and mixed-signal approaches, and we have authored some of the initial pioneering works on SRAM-based in-memory computing.
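
The sketch below gives a behavioral model of the analog flavor of in-memory computing: weights stored as cell conductances along a bitline, inputs applied on the word lines, current summation on the bitline, and an ADC at the column output. The linear, noise-free cell model and the 6-bit ADC are illustrative assumptions, not a description of any specific SRAM macro from our papers.

```python
import numpy as np

# Behavioral sketch of an analog in-memory multiply-and-accumulate (MAC):
# the bitline current naturally sums the products of word-line inputs and
# stored cell conductances (Kirchhoff's current law), and an ADC digitizes
# the result. Cell and ADC parameters are illustrative assumptions.

def analog_imc_column(inputs, weights, adc_bits=6):
    """Ideal current-summing MAC along one column, followed by an ADC."""
    voltages = np.asarray(inputs, dtype=float)       # word-line activations
    conductances = np.asarray(weights, dtype=float)  # weights stored in cells
    i_bitline = np.dot(voltages, conductances)       # analog current summation

    # Uniform ADC over the worst-case full-scale range of the column.
    full_scale = np.abs(conductances).sum() * np.abs(voltages).max()
    levels = 2 ** adc_bits - 1
    code = np.round(i_bitline / full_scale * levels)
    return code / levels * full_scale                # quantized MAC result

x = np.random.rand(64)   # 64 activations driving 64 word lines
w = np.random.rand(64)   # 64 weights stored in one column
print(np.dot(x, w), analog_imc_column(x, w))   # exact vs. quantized result
```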