Three papers chosen as IEEE Micro Top Picks
Three papers authored by CSE researchers have been selected for IEEE Micro’s Top Picks from the 2018 Computer Architecture Conferences. Top Picks is an annual special edition of IEEE Micro magazine that acknowledges the 10-12 most significant research papers from computer architecture conferences in the past year based on novelty and potential for long-term impact.
The authors found a smarter use for CPU memory, developed a new way for programming languages to use persistent memory, and unearthed a major security vulnerability in Intel chips.
Speeding up neural networks through smart use of CPU memory
In “Neural Cache: Bit-Serial In-Cache Acceleration of Deep Neural Networks,” the authors detail a new way to speed up neural network computation by orders of magnitude over standard CPU and GPU execution. The proposed architecture, Neural Cache, re-purposes a CPU's caches as data-parallel accelerators that execute deep neural network inference in place. By computing inside the cache, inference workloads achieve efficiency gains of 679X over a CPU and 128X over a GPU. Neural Cache accomplishes this with only small extensions to the cache arrays, at an area cost of roughly 10 mm^2 in a 22 nm process (about 2% additional area on a standard CPU die); by comparison, the evaluated GPU die measures 471 mm^2 in 16 nm.
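The core idea named in the title is bit-serial arithmetic over data stored "transposed" across cache bitlines, so that one bitwise operation advances many computations by one bit position at once. The C sketch below is only a software emulation of that style, not the paper's in-SRAM circuitry; the 8-bit operand width, 64-lane packing, and all function names are assumptions made for illustration.

```c
/*
 * Minimal software sketch of bit-serial, data-parallel arithmetic -- not the
 * paper's actual in-cache hardware. Operands are stored transposed: bit i of
 * all 64 lanes sits in one 64-bit word, so each bitwise operation advances
 * 64 additions by one bit position.
 */
#include <stdint.h>
#include <stdio.h>

#define BITS  8   /* assumed 8-bit operands, as in quantized inference */
#define LANES 64  /* one lane per bit of a uint64_t "bitline" word */

/* Pack per-lane values into bit-planes: planes[i] holds bit i of every lane. */
static void transpose_in(const uint8_t vals[LANES], uint64_t planes[BITS]) {
    for (int i = 0; i < BITS; i++) {
        planes[i] = 0;
        for (int lane = 0; lane < LANES; lane++)
            planes[i] |= (uint64_t)((vals[lane] >> i) & 1u) << lane;
    }
}

/* Read one lane's value back out of the bit-plane layout. */
static uint8_t lane_out(const uint64_t planes[BITS], int lane) {
    uint8_t v = 0;
    for (int i = 0; i < BITS; i++)
        v |= (uint8_t)(((planes[i] >> lane) & 1u) << i);
    return v;
}

/* Bit-serial ripple-carry addition of all 64 lanes in parallel. */
static void bitserial_add(const uint64_t a[BITS], const uint64_t b[BITS],
                          uint64_t sum[BITS]) {
    uint64_t carry = 0;
    for (int i = 0; i < BITS; i++) {
        uint64_t axb = a[i] ^ b[i];       /* half-sum for bit i of every lane */
        sum[i] = axb ^ carry;
        carry  = (a[i] & b[i]) | (carry & axb);
    }
}

int main(void) {
    uint8_t a[LANES], b[LANES];
    for (int lane = 0; lane < LANES; lane++) { a[lane] = (uint8_t)lane; b[lane] = 3; }

    uint64_t pa[BITS], pb[BITS], ps[BITS];
    transpose_in(a, pa);
    transpose_in(b, pb);
    bitserial_add(pa, pb, ps);  /* 64 additions using ~3*BITS bitwise word ops */

    printf("lane 5: %u + %u = %u\n",
           (unsigned)a[5], (unsigned)b[5], (unsigned)lane_out(ps, 5));
    return 0;
}
```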
The paper was authored by: Charles Eckert (CSE PhD student), Xiaowei Wang (CSE PhD student), Jingcheng Wang (ECE PhD student), Arun Subramaniyan (CSE PhD student), Ravi Iyer (Intel Fellow), Prof. Dennis Sylvester, Prof. David Blaauw, and Prof. Reetuparna Das.
Programming language approach to using powerful new memory
In “Language-level Persistency,” the authors describe a new way for programming languages to work with a powerful emerging memory technology. Persistent Memories (PMs) offer better durability, density, and energy efficiency than dynamic RAM while delivering comparable performance. These properties have spawned many efforts to adopt PM across computer science, from data structures to software systems to computer architecture. One of the most disruptive potential use cases for PM is hosting in-memory recoverable data structures. PMs blur the traditional divide between byte-addressable, volatile main memory and block-addressable, persistent storage, letting programmers manipulate recoverable data structures directly with processor loads and stores rather than through performance-sapping software intermediaries like the operating system and file system.
Ensuring that these data structures are recoverable requires programmer control over the order in which writes become persistent in memory. Prior work has considered persistency models at the abstraction of the instruction set architecture; this paper instead argues for extending the language-level memory model to provide guarantees on the order of persistent writes. The authors explore a taxonomy of guarantees a language-level persistency model might provide and describe a number of optimizations that improve performance by up to 33.2% (19.8% on average) over baseline models.
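For a concrete sense of the ordering problem, the C sketch below hand-orders two persistent writes using x86 write-back and fence intrinsics (CLWB and SFENCE, built with -mclwb): the payload must reach persistent memory before the flag that marks it valid, or a crash could leave a "valid" but garbage record. This is only an illustration of the ISA-level mechanisms the paper builds on; the struct layout and helper names are hypothetical, and a language-level persistency model would let the programmer express this ordering in the source language instead of hand-placing flushes and fences.

```c
/*
 * Illustration only: hand-ordered persistent writes using x86 intrinsics
 * (CLWB + SFENCE; requires a CPU with CLWB support and -mclwb). The record
 * layout and helper names are hypothetical, not the paper's interface.
 */
#include <immintrin.h>
#include <stdint.h>

typedef struct {
    uint64_t payload;
    uint64_t valid;   /* 1 => payload is durable and meaningful */
} record_t;

/* Write the cache line holding 'addr' back toward persistent memory and
   order that write-back before any later stores from this thread. */
static void persist(void *addr) {
    _mm_clwb(addr);
    _mm_sfence();
}

/* 'rec' is assumed to live in persistent memory (e.g., a DAX-mapped file). */
void publish_record(record_t *rec, uint64_t value) {
    rec->payload = value;
    persist(&rec->payload);  /* step 1: payload becomes durable ...            */

    rec->valid = 1;
    persist(&rec->valid);    /* step 2: ... only then does the valid flag flip */
}
```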
The paper was authored by: Aasheesh Kolli (Penn State University), Vaibhav Gogte (CSE PhD student), Ali Saidi (Amazon Web Services), Stephan Diestelhorst (ARM), William Wang (ARM), Prof. Peter Chen, Prof. Satish Narayanasamy, and Prof. Thomas F. Wenisch.
Foreshadow: Heading off a major Intel vulnerability
In “Foreshadow: Extracting the Keys to the Intel SGX Kingdom with Transient Out-of-Order Execution,” researchers identified a processor vulnerability with the potential to put sensitive information at risk on any Intel-based PC manufactured since 2008. It could affect users who rely on a digital lockbox feature known as Intel Software Guard Extensions (SGX), as well as those who use common cloud-based services. The SGX security hole, called Foreshadow, was identified in January 2018 and reported to Intel. That led Intel to discover a broader set of related vulnerabilities and their potential risks to the cloud. These further variants, known as Foreshadow-NG and referred to by Intel as L1 Terminal Fault (L1TF), target the Intel-based virtualization environments that cloud providers like Amazon and Microsoft use to create thousands of virtual PCs on a single large server.
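At a high level, attacks in this family combine a transient (speculative or faulting) read of data the program should never see with a cache side channel that makes the transiently read value observable. The C sketch below shows only the second half, a generic Flush+Reload probe; it is not the Foreshadow exploit, which additionally requires coaxing the processor into transiently reading L1-resident SGX or virtual-machine data. The array name, stride, and "secret" byte are placeholders for illustration.

```c
/*
 * Generic Flush+Reload cache-timing probe -- the covert channel that
 * transient-execution attacks use to observe a leaked byte. This is NOT the
 * Foreshadow exploit itself; the transient, privileged read is deliberately
 * replaced by an ordinary access to a placeholder value. x86 only.
 */
#include <stdint.h>
#include <stdio.h>
#include <x86intrin.h>

#define STRIDE 4096  /* one page per byte value to sidestep the prefetcher */
static uint8_t probe[256 * STRIDE];

/* Time one load; a fast reload means the line was already in the cache. */
static uint64_t time_load(volatile uint8_t *addr) {
    unsigned aux;
    uint64_t start = __rdtscp(&aux);
    (void)*addr;
    uint64_t end = __rdtscp(&aux);
    return end - start;
}

int main(void) {
    /* 1. Flush every probe line so nothing is cached. */
    for (int v = 0; v < 256; v++)
        _mm_clflush(&probe[v * STRIDE]);
    _mm_mfence();

    /* 2. Stand-in for the transient access: a real attack would touch
       probe[secret * STRIDE] during speculative or faulting execution. */
    uint8_t secret = 42;  /* placeholder byte for illustration */
    (void)*(volatile uint8_t *)&probe[secret * STRIDE];

    /* 3. Reload each candidate and pick the fastest (i.e., cached) one. */
    uint64_t best_time = UINT64_MAX;
    int best = -1;
    for (int v = 0; v < 256; v++) {
        uint64_t t = time_load(&probe[v * STRIDE]);
        if (t < best_time) { best_time = t; best = v; }
    }
    printf("recovered byte: %d\n", best);
    return 0;
}
```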
While these vulnerabilities were caught before causing major damage, they expose the fragility of secure enclave and virtualization technologies, says Ofir Weisse, the graduate student researcher involved in the work. He believes that the key to keeping these technologies secure lies in making designs open and accessible to researchers, so that vulnerabilities can be identified and repaired quickly.
The paper was authored by: Jo Van Bulck (imec-DistriNet, KU Leuven), Marina Minkin (CSE PhD student), Ofir Weisse (CSE PhD student), Prof. Daniel Genkin, Prof. Baris Kasikci, Frank Piessens (imec-DistriNet, KU Leuven), Mark Silberstein (Technion), Prof. Thomas F. Wenisch, Yuval Yarom (University of Adelaide and Data61), and Raoul Strackx (imec-DistriNet, KU Leuven).