In our “Big-data Era”, a vast amount of data is being collected continuously from interactive social networks, eCommerce, web searches, and various sensors. The economic value of this “big data” depends heavily on how cost-effectively one can store and analyze it. In the “elastic cloud”, where application servers and storage servers are connected by a relatively slow network, latency-sensitive applications, such as internet searches and social networking, employ a middle layer of DRAM-based “caching services” to overcome the long latency and coarse granularity of storage accesses. For throughput-bound applications, such as analytics on terabyte/petabyte datasets, application servers push computation into the storage servers to reduce data movement over the network. Both solutions require large numbers of CPUs and large amounts of DRAM, which are expensive in terms of equipment cost, area, and power. We present an alternative solution that uses flash storage with hardware accelerators to make big-data applications more affordable.
In this talk I will describe two novel hardware-accelerated flash-based architectures we have built: BlueCache, a scalable flash-based key-value cache for data centers, and AQUOMAN, an in-storage analytic-query offloading machine for SQL analytics. Both systems significantly reduce CPU and DRAM resource requirements without sacrificing application performance.
Thesis Committee: Prof. Arvind (supervisor), Profs. Sanchez and Belay (readers)
To attend this defense, please contact the doctoral candidate for details at shuotao at mit dot edu