%A Chunhui Chen
%A O. L. Mangasarian
%T Smoothing Methods for Convex Inequalities and Linear Complementarity Problems
%D November 1993 / Revised November 1994
%R tr1191
%I COMPUTER SCIENCES DEPARTMENT, UNIVERSITY OF WISCONSIN
%C MADISON, WI
%X A smooth approximation $\bold{p(x, \alpha)}$ to the plus function
$\bold{\max \{ x, 0 \}}$ is obtained by integrating
the sigmoid function $\bold{ 1/(1+e^{- \alpha x}) }$, commonly used in
neural networks. By means of this approximation,
linear and convex inequalities are converted into smooth, convex unconstrained
minimization problems, the solution of which approximates the solution of
the original problem to a high degree of accuracy for $\bold{\alpha}$
sufficiently large. In the special case when a Slater constraint qualification
is satisfied, an exact solution can be obtained for finite $\bold{\alpha}$.
Speedup over MINOS 5.4 was as high as 1142 times for linear inequalities of
size $\bold{ 2000 \times 1000}$, and 580 times for convex inequalities with 400
variables. Linear complementarity problems are converted into a system of smooth
nonlinear equations and are solved by a quadratically convergent Newton method.
For monotone LCPs with as many as 10,000 variables, the proposed approach was
as much as 63 times faster than Lemke's method.
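As an illustrative sketch (not the authors' code), integrating the sigmoid $\bold{ 1/(1+e^{- \alpha x}) }$ gives the closed form $\bold{p(x, \alpha) = x + (1/\alpha) \log (1+e^{- \alpha x})}$, which can be evaluated stably via a log-sum-exp; the maximum gap to $\bold{\max \{ x, 0 \}}$ is $\bold{(\log 2)/\alpha}$, attained at $\bold{x = 0}$, so the approximation tightens as $\bold{\alpha}$ grows:

```python
# Sketch of the Chen-Mangasarian smooth plus function (assumed helper name).
import numpy as np

def smooth_plus(x, alpha):
    """p(x, alpha) = x + (1/alpha) * log(1 + exp(-alpha*x)),
    the antiderivative of the sigmoid 1/(1 + exp(-alpha*x)).
    Algebraically equal to (1/alpha) * log(1 + exp(alpha*x)),
    computed with logaddexp to avoid overflow for large |alpha*x|."""
    return np.logaddexp(0.0, alpha * x) / alpha

x = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
for alpha in (1.0, 10.0, 100.0):
    gap = np.max(np.abs(smooth_plus(x, alpha) - np.maximum(x, 0.0)))
    # Worst-case gap is log(2)/alpha, so it shrinks like 1/alpha.
    assert gap <= np.log(2.0) / alpha + 1e-12
```

The same smoothing underlies the report's reformulations: replacing each plus function in the residual of an inequality system or LCP by $\bold{p(\cdot, \alpha)}$ yields smooth unconstrained problems or smooth nonlinear equations.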