Abstract
Activation functions are essential for introducing nonlinearity into neural networks, with the Rectified Linear Unit (ReLU) often favored for its simplicity and effectiveness. Motivated by the structural similarity between a single layer of a Feedforward Neural Network (FNN) and a single iteration of the Projected Gradient Descent (PGD) algorithm for constrained optimization problems, we interpret ReLU as the projection from R onto the nonnegative half-line R+. Building on this interpretation, we generalize ReLU to a Generalized Multivariate projection Unit (GeMU), a projection operator onto a convex cone, such as the Second-Order Cone (SOC). We prove that the expressive power of FNNs activated by our proposed GeMU is strictly greater than that of FNNs activated by ReLU. Experimental evaluations further corroborate that GeMU is versatile across prevalent architectures and diverse tasks, and that it can outperform various existing activation functions.
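To make the projection view concrete, the sketch below contrasts ReLU, i.e., the entrywise Euclidean projection onto R+, with the standard closed-form Euclidean projection onto the second-order cone. This is only an illustrative sketch of the general idea; the function names are ours, and the paper's actual GeMU parameterization and cone partitioning may differ.

```python
import numpy as np

def relu(x):
    # ReLU: entrywise Euclidean projection onto the half-line R+.
    return np.maximum(x, 0.0)

def soc_projection(z):
    # Euclidean projection of z = (t, x) onto the second-order cone
    # K = {(t, x) : ||x||_2 <= t}, using the standard closed-form rule.
    t, x = z[0], z[1:]
    norm_x = np.linalg.norm(x)
    if norm_x <= t:          # already inside the cone
        return z
    if norm_x <= -t:         # inside the polar cone: project to the origin
        return np.zeros_like(z)
    # otherwise, project onto the boundary of the cone
    alpha = (t + norm_x) / 2.0
    return np.concatenate(([alpha], alpha * x / norm_x))
```

In this view, a ReLU layer applies the R+ projection coordinate-by-coordinate, whereas a cone-based activation such as GeMU would couple several coordinates through a single multivariate projection.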