%A Spyridon A. Kontogiorgis
%T ALTERNATING DIRECTIONS METHODS FOR THE PARALLEL SOLUTION OF LARGE-SCALE BLOCK-STRUCTURED OPTIMIZATION PROBLEMS
%D August 1994 (Revised: June 1995)
%R 94-13
%I COMPUTER SCIENCES DEPARTMENT, UNIVERSITY OF WISCONSIN
%C MADISON, WI
%X In this thesis we develop efficient algorithms for the
solution of large-scale convex block-angular minimization problems
in a Multiple-Instruction-Multiple-Data (MIMD) parallel environment.
From the method of Alternating Directions we derive three
decomposition algorithms that exploit the block structure and can
be viewed as block Gauss-Seidel modifications of the Augmented
Lagrangian algorithm combined with a coordinating step. We prove
convergence of an extended Alternating Directions method and develop
heuristics that reduce the number of iterations to convergence
by using information on constraint violation to adjust the penalty parameter.
We investigate theoretical aspects of the three methods and their
practical performance on the Thinking Machines CM-5 parallel
supercomputer under the fork-join coordination protocol.
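A penalty heuristic of the kind described above can be sketched as follows. This is an illustrative, residual-balancing rule in the same spirit, not the thesis's exact update; the function name, the residual arguments, and the thresholds `mu` and `tau` are our own choices.

```python
# Illustrative sketch of a penalty-adjustment heuristic driven by
# constraint violation (not the thesis's exact rule). If the primal
# residual (constraint violation) dominates, the penalty is increased
# to enforce feasibility; if the dual residual dominates, it is
# decreased. Names mu, tau and the factor-of-two update are assumptions.

def adjust_penalty(rho, primal_res, dual_res, mu=10.0, tau=2.0):
    if primal_res > mu * dual_res:
        return rho * tau   # constraints badly violated: penalize more
    if dual_res > mu * primal_res:
        return rho / tau   # over-penalized: relax the penalty
    return rho             # residuals balanced: leave rho unchanged
```

For example, `adjust_penalty(1.0, 5.0, 0.1)` returns `2.0`, doubling the penalty because the constraint violation is large relative to the dual residual.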
Each block of the problem is assigned to a processor. In the
fork phase, each processor solves a minimization subproblem
involving only block constraints and variables, and an objective
function that consists of a block of the original convex
block-separable objective plus a linear Lagrangian and a quadratic
proximal term. The (AP) algorithm proximizes activities, the (RP)
algorithm proximizes resources, and the (ARP) algorithm proximizes
both resources and activities. In the join phase, processors
simply exchange solution information and form new aggregate values
for the subproblem proximal and Lagrangian terms. Then the cycle
repeats. Fast global communication functions on the CM-5
allow for an efficient implementation of the join step.
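The fork-join cycle above can be illustrated on a toy consensus problem. The sketch below is not the thesis's code: it runs a standard consensus-form Alternating Directions iteration in which each "block" holds a simple quadratic objective f_i(x) = (x - a_i)^2 / 2, the fork step solves each proximized subproblem in closed form, and the join step aggregates solutions and updates the (scaled) Lagrangian terms. The names `rho`, `n_iter`, and the quadratic objectives are our own assumptions.

```python
# Toy fork-join Alternating Directions (consensus ADMM) sketch.
# Each block i minimizes f_i(x) = (x - a_i)^2 / 2 subject to agreement
# with a coordinating variable z; u holds the scaled dual variables.

def admm_consensus(a, rho=1.0, n_iter=200):
    m = len(a)
    z = 0.0                # coordinating (aggregate) variable
    u = [0.0] * m          # scaled Lagrangian (dual) terms, one per block
    for _ in range(n_iter):
        # fork: each block solves argmin_x f_i(x) + (rho/2)(x - z + u_i)^2,
        # which for these quadratics has the closed form below
        x = [(a[i] + rho * (z - u[i])) / (1.0 + rho) for i in range(m)]
        # join: exchange solutions, form the new aggregate and dual values
        z = sum(x[i] + u[i] for i in range(m)) / m
        u = [u[i] + x[i] - z for i in range(m)]
    return z

print(round(admm_consensus([1.0, 2.0, 6.0]), 6))  # prints 3.0, the consensus average
```

In a parallel implementation, each list comprehension in the fork step would run on a separate processor, and the join step would reduce to a global sum, which is the communication pattern the CM-5 functions mentioned above support efficiently.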
Our computational experience on randomly generated linear and
quadratic multicommodity network problems indicates that, because
of the reduced size of the subproblems solved at each iteration,
the (AP) and (RP) algorithms perform significantly better than
the (ARP) algorithm. On large-scale problems, our parallel
implementation of the (AP) algorithm is one to two orders of
magnitude faster than the standard serial optimization package
MINOS 5.4 on a DEC-5000 workstation.