## Event Speaker:

## Event Location:

## Event Date/Time:

## Abstract:

In this thesis, we primarily study problems that revolve around Fourier analysis; more specifically, we focus on the magnitudes of the frequency components. Firstly, we perform a study on the hypercube. It is well known that the Delsarte linear programming bounds provide rich information on the magnitudes of the Fourier coefficients, grouped by Hamming weight. Classically, such information is used primarily to attack coding problems, where the objective is to maximize the cardinality of a subset of a metric space subject to a minimum distance constraint. Here, we use it to study anticoding problems, where the objective is to maximize the cardinality of a subset of a metric space subject to a maximum distance (diameter) constraint. One motivation for this study is the problem of finding memories that are cheap to update, where the cost of an update is a function of the distance in the metric space. This view naturally supports the study of cost functions that go beyond hard diameter constraints. We accordingly work with a variety of cost functions, with a particular focus on completely monotonic functions. Our emphasis is on the phenomenon of "universal optimality", where the same subset (anticode) simultaneously optimizes a wide range of natural cost functions. Among other things, our work here gives some answers to a question in computer science, namely finding Boolean functions with maximal noise stability subject to an expected value constraint.
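As background for readers less familiar with these notions, the following are the standard definitions of complete monotonicity and noise stability (not taken from the thesis itself, which may use different normalizations):

```latex
% Background definitions (standard; not claims from the thesis).
% A function $f$ is completely monotonic on $(0,\infty)$ if
\[
  (-1)^k f^{(k)}(t) \ge 0 \qquad \text{for all } t > 0 \text{ and } k = 0, 1, 2, \dots,
\]
% e.g.\ $f(t) = e^{-ct}$ with $c > 0$.
% The noise stability of $f \colon \{-1,1\}^n \to \{-1,1\}$ at correlation $\rho \in [0,1]$ is
\[
  \mathrm{Stab}_\rho[f] = \mathbb{E}\bigl[f(x)\,f(y)\bigr]
                        = \sum_{S \subseteq [n]} \rho^{|S|}\, \hat{f}(S)^2,
\]
% where $y$ is a $\rho$-correlated copy of $x$: independently in each coordinate,
% $y_i = x_i$ with probability $(1+\rho)/2$.
```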

Secondly, we work with Fourier analysis on the integers modulo a fixed integer, drawing upon Nazarov's general solution to the "coefficient problem". Roughly speaking, the coefficient problem asks one to construct time-domain signals with prescribed magnitudes of the frequency components, subject to certain natural constraints on the signal; in particular, Nazarov's solution handles p-norm constraints in time. This solution to the coefficient problem allows us to give an essentially complete resolution to the mathematical problem of designing optimal coded apertures, which arises in computational imaging. However, the solution applies to an infinity-norm constraint on the aperture. We believe it is important to also examine a binary-valued constraint on the aperture, since such apertures do not require synthesizing partial occluders. We are unable to give a resolution in this more stringent setting, and instead provide some preliminary results as well as directions for future research.
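For context, one standard way to phrase the coefficient problem on the cyclic group (a rough sketch; the thesis may use a different normalization) is:

```latex
% Background: one common phrasing of the coefficient problem on $\mathbb{Z}_N$.
% Given target magnitudes $m_0, \dots, m_{N-1} \ge 0$, construct
% $f \colon \mathbb{Z}_N \to \mathbb{C}$ with
\[
  |\hat{f}(k)| \ge m_k \quad \text{for all } k,
  \qquad \text{subject to} \qquad \|f\|_p \le 1,
\]
% where $\hat{f}(k) = \frac{1}{N} \sum_{x \in \mathbb{Z}_N} f(x)\, e^{-2\pi i k x / N}$.
% Roughly, Nazarov's theorem guarantees such an $f$ exists whenever the targets
% are compatible with the norm constraint; for $p = \infty$, having $\sum_k m_k^2$
% bounded by an absolute constant suffices (and Parseval shows a bound of this
% shape is necessary).
```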

Finally, inspired by the recent breakthroughs in understanding sphere packing and universal optimality in dimensions 8 and 24 of Euclidean space, we attempt to show that the associated lattices (E8 and the Leech lattice, respectively) are also optimal for the problem of vector quantization, in the sense of minimizing mean squared error. We are unable to resolve this question with our methods; however, they still yield improved lower bounds on the mean squared error in all sufficiently large dimensions.
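For context, the standard scale-invariant figure of merit for lattice quantization (a background definition, not taken from the abstract) is the normalized second moment:

```latex
% Background: for a lattice $\Lambda \subset \mathbb{R}^n$ with Voronoi cell $V$,
% the normalized second moment is
\[
  G(\Lambda) = \frac{1}{n}\,
    \frac{\int_V \|x\|^2 \, dx}{\bigl(\operatorname{vol} V\bigr)^{1 + 2/n}},
\]
% the mean squared quantization error per dimension, normalized so that $G$ is
% invariant under scaling of $\Lambda$. The optimality question above asks whether
% $E_8$ and the Leech lattice minimize $G$ in dimensions 8 and 24, respectively.
```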

Committee: Gregory Wornell, Henry Cohn, Yury Polyanskiy

Thesis Advisors: Gregory Wornell, Henry Cohn

Please contact the doctoral candidate to obtain the Zoom address:

gajjanag@fastmail.com