

Poster

Strong NP-Hardness for Sparse Optimization with Concave Penalty Functions

Yichen Chen · Dongdong Ge · Mengdi Wang · Zizhuo Wang · Yinyu Ye · Hao Yin

Gallery #27

Abstract: Consider the regularized sparse minimization problem, whose objective is an empirical sum of loss functions over $n$ data points (each of dimension $d$) plus a nonconvex sparsity penalty. We prove that finding an $\mathcal{O}(n^{c_1}d^{c_2})$-optimal solution to the regularized sparse optimization problem is strongly NP-hard for any $c_1, c_2\in [0,1)$ such that $c_1+c_2<1$. We also prove strong NP-hardness results for the sparsity-constrained optimization problem. These results apply to a broad class of loss functions and sparse penalty functions. They suggest that one cannot even approximately solve the sparse optimization problem in polynomial time, unless P $=$ NP.
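For concreteness, a minimal sketch of the kind of formulation the abstract refers to, assuming a loss $\ell$ applied to data pairs $(a_i, b_i)$ and a concave penalty $p$ applied coordinate-wise (the symbols $\ell$, $p$, $a_i$, $b_i$, $\lambda$, and $k$ are illustrative placeholders, not notation taken from the paper itself):

$$\min_{x\in\mathbb{R}^d}\ \sum_{i=1}^{n}\ell\big(a_i^{\top}x,\,b_i\big) \;+\; \lambda\sum_{j=1}^{d}p\big(|x_j|\big)$$

with the sparsity-constrained variant replacing the penalty term by a constraint of the form $\|x\|_0 \le k$. The hardness statement above concerns approximating the optimal value of such problems within an additive $\mathcal{O}(n^{c_1}d^{c_2})$ factor.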
