(With a little effort this might work itself into reasonable shape. Due to perpetual lack of time, the information below should suffice to whet your appetite for now. If you want more information on any of these projects, please drop me a note.)
The overarching research umbrella at the Parallel Architecture Group at Northwestern (PARAG@N) is energy-efficient computing. At the macro scale, computers consume inordinate amounts of energy, negatively impacting the economics and environmental footprint of computing. At the micro scale, power constraints prevent us from riding Moore's Law. We attack both problems by identifying sources of energy inefficiency and developing hardware/software techniques for cross-stack energy optimization. Thus, our work extends from circuit and hardware design, through programming languages and OS optimizations, all the way to application software. In a nutshell, our work aims to minimize the overheads associated with data storage and data transfers (e.g., through adaptive memory hierarchy designs, new memory technologies, and silicon photonics), computation (e.g., through specialized computing on dark silicon and approximate computing), and circuits (e.g., through speculative arithmetic units and fused accelerators), and in the long term aims to push back the bandwidth and power walls by designing 1000+-core virtual macro-chips with nanophotonic interconnects and optical memories. An overview of our research at PARAG@N was presented in invited talks at IBM T.J. Watson Research Center and Google Chicago in March 2012. That talk is a little old and many things have happened since then, but it is a good starting point.
More specifically, we work on:
Elastic Memory Hierarchies
In this project we develop adaptive cache designs and memory hierarchy sub-systems that minimize the overheads of storing, retrieving, and communicating data to/from memories and other cores. An incarnation of Elastic Caches for near-optimal data placement was published at ISCA 2009 and won an IEEE Micro Top Picks award in 2010, while newer papers at DATE 2012 and in the IEEE Computer Special Issue on Multicore Coherence in 2013 present an instance of Elastic Caches that minimizes interconnect power by collocating directory metadata with sharer cores. You can also find an interview on Dynamic Directories conducted by Prof. Srini Devadas (MIT) here. This thrust currently focuses on revisiting memory hierarchy designs, optical memories, and new hardware-software co-designs for virtual-to-physical address mapping. This work was partially funded by NSF CCF-1218768.
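To give a flavor of the directory-placement idea, here is a minimal sketch (not the published mechanism; the tile coordinates and cost model are invented for illustration). It contrasts a conventional address-hashed "home" directory with dynamically collocating the directory entry at the sharer that minimizes total mesh hops to the other sharers:

```python
# Illustrative sketch of dynamic directory placement on a 2D mesh.
# All names and numbers are hypothetical, for exposition only.

def hops(a, b):
    # Manhattan distance between two tiles (x, y) on a 2D mesh NoC.
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def best_directory_tile(sharers):
    # A conventional design hashes the address to a fixed home tile,
    # regardless of where the sharers are. A dynamic directory instead
    # collocates the entry with the sharer whose position minimizes the
    # total hop count of coherence messages to all other sharers,
    # reducing interconnect traffic and power.
    return min(sharers, key=lambda t: sum(hops(t, s) for s in sharers))

# Three cores share a cache line; two are clustered, one is far away.
sharers = [(0, 0), (0, 1), (3, 3)]
tile = best_directory_tile(sharers)  # picks a tile inside the cluster
```

The key point of the sketch is that the placement decision uses the *sharing pattern*, which only the running system knows, rather than a static address hash.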
Elastic Fidelity: Disciplined Approximate Computing
At the circuit level, shrinking transistor geometries and the race for energy-efficient computing result in significant error rates at smaller technology nodes, due to process variation and low operating voltages (especially with near-threshold computing). Traditionally, these errors are handled at the circuit and architectural layers, as computations expect 100% reliability. Elastic Fidelity computing is based on the observation that not all computations and data require 100% fidelity; we can judiciously let errors manifest in the error-resilient data and handle them higher in the stack. We develop programming language extensions that allow data objects to be instantiated with specific accuracy guarantees, which are recorded by the compiler and communicated to the hardware, which in turn steers computations and data to separate ALU/FPU blocks and cache/memory regions that relax the guardbands and run at lower voltages to conserve energy. This work was funded by NSF CCF-1218768 and NSF CCF-1217353.
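The flow of accuracy guarantees from source code to execution can be sketched as follows. The `Approx` wrapper, the `relaxed_add` stand-in for a low-voltage adder, and the steering rule are all hypothetical illustrations, not the project's actual language extension or hardware interface:

```python
import random

# Hypothetical sketch of disciplined approximation: a data object carries
# an explicit error budget, and only budgeted values may be steered to
# "relaxed" (low-voltage, error-prone) functional units. Unannotated data
# always uses fully reliable hardware.

class Approx:
    def __init__(self, value, max_rel_error):
        self.value = value
        self.max_rel_error = max_rel_error  # accuracy guarantee from the source code

def relaxed_add(a, b, error_rate=0.01):
    # Stand-in for an adder with relaxed guardbands: occasionally perturbs
    # the result slightly, emulating timing errors at low voltage.
    result = a + b
    if random.random() < error_rate:
        result += 1e-6 * result
    return result

def add(x, y):
    # The "compiler/hardware" steering rule: use the relaxed unit only when
    # both operands declare tolerance for error; otherwise stay exact.
    if isinstance(x, Approx) and isinstance(y, Approx):
        budget = min(x.max_rel_error, y.max_rel_error)
        return Approx(relaxed_add(x.value, y.value), budget)
    return x + y

pixel_a = Approx(0.5, max_rel_error=1e-3)  # error-resilient data (e.g., image pixels)
pixel_b = Approx(0.5, max_rel_error=1e-3)
control = add(2, 3)                        # control data stays 100% reliable
```

The essential discipline is that approximation is opt-in and tracked: the error budget travels with the data, so an error can never silently leak into pointers, control flow, or other reliability-critical state.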
SeaFire: Design for Dark Silicon
While Elastic Fidelity and Elastic Caches cut back on energy consumption, they do not push the power wall far enough. To gain another order of magnitude, we must minimize the overheads of modern computing. The idea behind the SeaFire project is that instead of building conventional high-overhead multicores that we cannot fully power, we should repurpose the dark silicon for specialized energy-efficient cores. A running application powers up only the cores most closely matching its computational requirements, while the rest of the chip remains off to conserve energy. Preliminary results on SeaFire have been published in a highly cited IEEE Micro article in July 2011, an invited USENIX ;login article in April 2012, the ACLD workshop in 2010, a keynote at ISPDC in 2010, an invited presentation at the NSF Workshop on Sustainable Energy-Efficient Data Management in 2011 (the abstract is here), and an invited presentation at HPTS in 2011. This work was funded by an ISEN Booster award and now continues as part of the Intel Parallel Computing Center at Northwestern (here is the Intel press release), which I co-founded with faculty from the IEMS department.
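The matching step can be sketched as a toy scheduler. The core types, capability vectors, and power numbers below are invented for illustration and do not come from the SeaFire design:

```python
# Toy sketch of the SeaFire scheduling idea: a sea of specialized cores
# sits dark, and for each application we power up only the core whose
# capabilities best match its demands per watt. All numbers are made up.

CORE_TYPES = {
    # name: (capability vector {resource: relative strength}, power in watts)
    "simd_wide":  ({"vector": 1.0, "branch": 0.2, "memory": 0.5}, 2.0),
    "ooo_scalar": ({"vector": 0.2, "branch": 1.0, "memory": 0.6}, 4.0),
    "throughput": ({"vector": 0.6, "branch": 0.3, "memory": 1.0}, 2.0),
}

def match_score(demand, capability):
    # How well a core's strengths line up with what the application asks for.
    return sum(weight * capability.get(resource, 0.0)
               for resource, weight in demand.items())

def pick_core(demand):
    # Power up the core with the best match per watt; every other core on
    # the chip stays dark, spending no energy.
    return max(CORE_TYPES,
               key=lambda name: match_score(demand, CORE_TYPES[name][0])
                                / CORE_TYPES[name][1])

# A vector-heavy (e.g., signal-processing) application profile:
dsp_app = {"vector": 0.9, "branch": 0.1, "memory": 0.3}
core = pick_core(dsp_app)
```

The point of the sketch is the inversion of the usual design: rather than one general-purpose core that is mediocre per joule for everything, the chip holds many specialized cores and pays the power cost of only the one that fits.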
Galaxy: Computer Architecture Meets Silicon Photonics
This project combines advances in parallel computer architecture and silicon photonics to develop architectures that break past the power, bandwidth, and utilization (dark silicon) walls that plague modern processors. The Galaxy architecture of optically-connected disintegrated processors argues that instead of building monolithic chips, we should split them into several smaller chiplets and form a "virtual macro-chip" by connecting them with optical links. The optics allow communication at such high bandwidth that it breaks the bandwidth wall entirely, and at such low latency that the virtual macro-chip behaves as a single tightly-coupled chip. As each chiplet has its own power budget and the optical links eliminate the traditional chip-to-chip communication overheads, the macro-chip behaves as an oversized multicore that scales beyond single-chip area limits, while maintaining high yield and reasonable cost (only faulty chiplets need replacement). Our preliminary results indicate that Galaxy scales seamlessly to 4000 cores, making it possible to shrink an entire rack's worth of computational power onto a single wafer. The full design was presented at an EPFL talk in 2014 and published at ICS 2014. This project has advanced the state of the art in silicon photonic interconnects by designing laser power-gating NoCs, co-designing the on-chip NoC with the architecture, escalating laser power-gating to datacenter optical networks, and overcoming the thermal transfer problems of 3D-stacked electro-optical processor/photonics chips. A full list of publications appears on the web page of the NSF CCF-1453853 project on energy-efficient and energy-proportional silicon photonic manycore architectures, which funded this work.