Doctoral Thesis: Devices and Algorithms for Analog Deep Learning
Efforts to realize analog processors have surged over the last decade, as energy-efficient deep learning accelerators have become imperative for the future of information processing. However, the absence of two entangled components stands in the way of their practical implementation: devices satisfying algorithm-imposed requirements, and algorithms built on nonideality-tolerant routines. This thesis demonstrates a near-ideal device technology and a superior neural network training algorithm that, combined, can ultimately propel analog computing. The CMOS-compatible nanoscale protonic devices demonstrated here show unprecedented characteristics, incorporating the benefits of nanoionics with extreme acceleration of ion transport and reactions under strong electric fields. Enabled by a material-level breakthrough, the use of phosphosilicate glass (PSG) as a proton electrolyte, this operation regime achieves controlled shuttling and intercalation of protons in nanoseconds at room temperature in an energy-efficient manner. A theoretical analysis is then carried out to explain the well-known incompatibility between asymmetric device modulation and conventional neural network training algorithms. By establishing a powerful analogy with classical mechanics, a novel method, Stochastic Hamiltonian Descent, is developed that exploits device asymmetry as a useful feature. Overall, the devices and algorithms developed in this thesis have immediate applications in analog deep learning, while the overarching methodology provides further insight for future advancements.
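To give a flavor of the classical-mechanics analogy mentioned above: in Hamiltonian-style views of optimization, network weights play the role of position and an auxiliary velocity variable plays the role of momentum, with friction damping the dynamics toward a minimum. The sketch below shows the textbook heavy-ball (momentum) update on a toy quadratic loss. It is only an illustration of that analogy, not the thesis's Stochastic Hamiltonian Descent; the function names and hyperparameters (`quadratic_loss_grad`, `lr`, `friction`) are assumptions made for this example.

```python
import numpy as np

def quadratic_loss_grad(w):
    # Gradient of the toy convex objective f(w) = 0.5 * ||w||^2.
    return w

def heavy_ball_step(w, p, lr=0.1, friction=0.9):
    # Treat w as "position" and p as "momentum": the step integrates
    # damped Hamiltonian-like dynamics, so the trajectory spirals into
    # the minimum instead of following the raw gradient.
    p = friction * p - lr * quadratic_loss_grad(w)
    return w + p, p

w = np.array([1.0, -2.0])   # initial weights (position)
p = np.zeros_like(w)        # initial momentum
for _ in range(200):
    w, p = heavy_ball_step(w, p)
# After 200 damped steps, w has settled near the minimum at the origin.
```

The damping (friction < 1) is what makes the dynamics dissipative: energy is drained until the "particle" rests at the loss minimum, which is the intuition asymmetric-device-aware variants build on.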
- Date: Monday, April 11
- Time: 10:00 am
Prof. Jesús A. del Alamo (Thesis Supervisor)
Prof. Bilge Yıldız, Prof. Jing Kong, Prof. Ju Li (readers)
To attend via Zoom, please find the link below: