  • Tuesday, February 6, 2018
  • 11:00 - 11:30

Tanner (Oxford): Sparse Non-Negative Super-Resolution: Simplified and Stabilized

Super-resolution is a technique by which one seeks to overcome the inherent accuracy limit of a measurement device by exploiting further information.  Applications are very broad; in particular, these methods have been used to great effect in modern microscopy and underpin recent Nobel prizes in chemistry.  This topic has received renewed theoretical interest starting in approximately 2013, when notions from compressed sensing were extended to this continuous setting.  The simplest model is a one-dimensional discrete measure @%\mu = \sum_{j=1}^{k} \alpha_j \delta_{t_j}%@, which models @%k%@ discrete objects at unknown locations @%t_j%@ with unknown amplitudes @%\alpha_j%@ (typically non-negative). The measurement device can be viewed as a blurring operator, where each discrete spike is replaced by a function @%\psi(s,t_j)%@ such as a Gaussian @%\exp(-\sigma |s-t_j|^2)%@, in which case one can make measurements of the form @%y(s)=\psi(s,t)\star\mu=\sum_{j=1}^k \alpha_j \psi(s,t_j)%@.  Typically one measures @%m>2k+1%@ discrete values; that is, @%y(s_i)%@ for @%i=1,\ldots, m%@. The aim is then to recover the @%2k%@ parameters @%\{t_j\}_{j=1}^k%@ and @%\{\alpha_j\}_{j=1}^k%@ from the @%m%@ samples and knowledge of @%\psi(s,t)%@. In this talk we extend recent results by Schiebinger, Robeva, and Recht to show that the @%2k%@ parameters are uniquely determined by @%2k+1%@ samples, and that any solution consistent with the measurements to within @%\tau%@ is proportionally consistent with the original measure. This work is joint with A. Eftekhari, J. Tanner, A. Thompson, B. Toader, and H. Tyagi.
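A minimal sketch of the measurement model described above, assuming a Gaussian point-spread function; the spike locations, amplitudes, blur parameter, and sampling grid below are made-up values for illustration only, not data from the talk:

```python
import numpy as np

k = 3                                  # number of spikes
t = np.array([0.2, 0.5, 0.8])          # unknown locations t_j (hypothetical)
alpha = np.array([1.0, 0.5, 2.0])      # non-negative amplitudes alpha_j (hypothetical)
sigma = 50.0                           # Gaussian blur parameter (hypothetical)

def psi(s, tj):
    """Gaussian point-spread function psi(s, t_j) = exp(-sigma * |s - t_j|^2)."""
    return np.exp(-sigma * np.abs(s - tj) ** 2)

m = 2 * k + 1                          # minimal number of samples, 2k + 1
s = np.linspace(0.0, 1.0, m)           # sampling points s_i

# y(s_i) = sum_{j=1}^k alpha_j * psi(s_i, t_j)
y = psi(s[:, None], t[None, :]) @ alpha
print(y.round(3))
```

Recovering the @%2k%@ parameters from such a vector `y` is the inverse problem the talk addresses; the sketch only shows the forward (blur-and-sample) map.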