## Abstract

We study the problem of recovering a vector x ∈ ℝ^{n} from its magnitude measurements y_{i} = |⟨a_{i}, x⟩|, i = 1, …, m. Our work follows the line of the Wirtinger flow (WF) approach of Candès et al. [2015], which solves the problem by minimizing a nonconvex loss function via a gradient algorithm and can be shown to converge to a globally optimal point under good initialization. In contrast to the smooth loss function used in WF, we adopt a nonsmooth but lower-order loss function, and design a gradient-like algorithm (referred to as reshaped-WF). We show that for random Gaussian measurements, reshaped-WF enjoys geometric convergence to a globally optimal point as long as the number m of measurements is on the order of O(n), where n is the dimension of the unknown x. This improves the sample complexity of WF, and achieves the same sample complexity as truncated-WF Chen and Candès [2015] but without truncation in the gradient step. Furthermore, reshaped-WF has a lower computational cost than WF, and runs faster numerically than both WF and truncated-WF. By bypassing higher-order terms in the loss function and truncations in the gradient loop, the analysis of reshaped-WF is also simplified.
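To make the approach concrete, the following is a minimal sketch of the reshaped-WF idea in Python: minimize the lower-order loss L(z) = (1/2m) Σᵢ (|aᵢᵀz| − yᵢ)² with constant-step gradient-like updates. The untruncated spectral initializer, the step size μ = 0.8, and the iteration count are illustrative assumptions, not the paper's exact specification.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 50, 400                       # m on the order of n, as the theory requires
x = rng.standard_normal(n)           # unknown signal
A = rng.standard_normal((m, n))      # Gaussian measurement vectors a_i (rows)
y = np.abs(A @ x)                    # magnitude-only measurements y_i = |<a_i, x>|

# Initialization: a simple (untruncated) spectral estimate; the paper's own
# initializer differs in detail, but any sufficiently accurate z0 suffices here.
lam = np.sqrt(np.pi / 2) * y.mean()  # norm estimate: E|<a, x>| = sqrt(2/pi) * ||x||
Y = (A.T * y) @ A / m                # (1/m) sum_i y_i a_i a_i^T
_, V = np.linalg.eigh(Y)
z = lam * V[:, -1]                   # scaled top eigenvector of Y

# Gradient-like iterations on the nonsmooth lower-order loss
# L(z) = (1/2m) sum_i (|a_i^T z| - y_i)^2
mu = 0.8                             # constant step size (assumed value)
for _ in range(500):
    p = A @ z
    z = z - (mu / m) * (A.T @ (p - y * np.sign(p)))

# x is recoverable only up to a global sign from magnitude measurements
err = min(np.linalg.norm(z - x), np.linalg.norm(z + x)) / np.linalg.norm(x)
print(f"relative error (up to global sign): {err:.2e}")
```

Note the per-iteration cost is dominated by the two matrix-vector products with A, the same order as WF but with a cheaper gradient formula and no truncation test on individual measurements.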

Original language | English (US)
---|---
Pages (from-to) | 2630-2638
Number of pages | 9
Journal | Advances in Neural Information Processing Systems
State | Published - 2016

## ASJC Scopus subject areas

- Computer Networks and Communications
- Information Systems
- Signal Processing