Quasi-Newton solution technique

Quasi-Newton methods can be cost-effective for large nonlinear problems where the Jacobian is symmetric, is fairly well conditioned, and does not change greatly from one iteration to the next.

Related Topics
In Other Guides
Convergence criteria for nonlinear problems

Products: Abaqus/Standard

A major contribution to the computational effort involved in nonlinear analysis is the solution of the nonlinear equilibrium equations. In most cases Abaqus/Standard uses Newton's method to solve these equations, as described in Nonlinear solution methods in Abaqus/Standard. The principal advantage of Newton's method is its quadratic convergence rate when the approximation at iteration $i$ is within the "radius of convergence": that is, when the gradients defined by $K_i^{NM}$ provide an improvement to the solution. The method has two major disadvantages: the Jacobian matrix has to be calculated, and the linear system defined by this same matrix has to be solved. Calculating the Jacobian is a problem because, in many important cases, it is difficult to derive the form of the matrix algebraically. Solving the linear system is a problem because of the computational effort involved: as the problem size increases, the direct solution of the linear equations can come to dominate the entire computational effort.

There are a number of important nonlinear applications where the Jacobian is symmetric, is fairly well conditioned, and does not change greatly from one iteration to the next. Examples are implicit dynamic time integration with time increments that are small relative to the periods of the natural vibrations participating in the response, or small-displacement elastic-plastic analysis where the yielding is confined (as occurs in many practical fracture mechanics applications). In such cases, especially when the problem is large, it can be less expensive to use an alternative to Newton's method to solve the nonlinear equations. The "quasi-Newton" methods are one such approach. Matthies and Strang (1979) have shown that, for systems of equations with a symmetric Jacobian matrix, the BFGS (Broyden, Fletcher, Goldfarb, Shanno) method can be written in a simple form that is especially effective on the computer and is successful in such applications. This method is implemented in Abaqus/Standard and is described in this section. The user must select it explicitly: by default, Abaqus/Standard uses the standard Newton method.

The basis of quasi-Newton methods is to obtain a series of improved approximations to the Jacobian matrix, $\tilde{K}_i^{NM}$, that satisfy the secant condition:

(1)   $F^N(u_i^M) - F^N(u_{i-1}^M) = \tilde{K}_i^{NM} \left( u_i^M - u_{i-1}^M \right)$,

so that $\tilde{K}_i^{NM}$ approaches $K_i^{NM}$ as the iterations proceed. Equation 1 is the basic quasi-Newton equation.

For convenience we define the change in the residual from one iteration to the next as

$\gamma_i^N = F_i^N - F_{i-1}^N$,

so that Equation 1 can be written

(2)   $\gamma_i^N = \tilde{K}_i^{NM} c_i^M$,

where $c_i^M$ is the correction to the solution from the previous iteration, defined in Nonlinear solution methods in Abaqus/Standard.

Matthies and Strang's implementation of the BFGS method is a computationally inexpensive way of creating a series of approximations to $[\tilde{K}_i^{NM}]^{-1}$ that satisfy Equation 1 and retain the symmetry and positive definiteness of $\tilde{K}_i^{NM}$. They accomplish this by updating $[\tilde{K}_{i-1}^{NM}]^{-1}$ to $[\tilde{K}_i^{NM}]^{-1}$ using a "product plus increment" form:

(3)   $[\tilde{K}_i^{MN}]^{-1} = \left[ I^{NL} - \rho_i\, c_i^N \gamma_i^L \right] [\tilde{K}_{i-1}^{PL}]^{-1} \left[ I^{PM} - \rho_i\, \gamma_i^P c_i^M \right] + \rho_i\, c_i^N c_i^M$,

where

$\rho_i = \left( c_i^M \gamma_i^M \right)^{-1}$.
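As a concrete sketch (illustrative only, not the Abaqus implementation), the "product plus increment" update of Equation 3 can be written with NumPy; all names here are hypothetical:

```python
import numpy as np

def bfgs_inverse_update(K_inv_prev, c, gamma):
    """Product-plus-increment BFGS update of the inverse Jacobian (Equation 3).

    K_inv_prev -- symmetric approximate inverse from iteration i-1
    c          -- solution correction c_i
    gamma      -- residual change gamma_i = F_i - F_{i-1}
    """
    rho = 1.0 / (c @ gamma)                       # rho_i = (c_i . gamma_i)^-1
    n = c.size
    left = np.eye(n) - rho * np.outer(c, gamma)   # [I - rho_i c_i gamma_i^T]
    right = np.eye(n) - rho * np.outer(gamma, c)  # [I - rho_i gamma_i c_i^T]
    return left @ K_inv_prev @ right + rho * np.outer(c, c)
```

By construction the updated inverse maps $\gamma_i$ back to $c_i$ (the secant condition of Equation 2) and remains symmetric whenever the previous inverse is symmetric.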

In the actual implementation of this version of the BFGS method, each $[\tilde{K}_i^{MN}]^{-1}$ is not stored: rather, a "kernel" matrix, $[\tilde{K}_I^{MN}]^{-1}$, is used (as the decomposition of $\tilde{K}_I^{MN}$), and the update is accomplished by premultiplication of the kernel matrix by the terms

$\left[ I^{NL} - \rho_j\, c_j^N \gamma_j^L \right]$

and postmultiplication of the kernel matrix by the terms

$\left[ I^{PM} - \rho_j\, \gamma_j^P c_j^M \right]$

for $j = I+1, I+2, \ldots, i$. Because of the form of these terms, the premultiplication and postmultiplication operations reduce to inner products of vectors and scalings of vectors by constants: it is this organization that makes the method computationally attractive. However, too many such products ($i - I$ larger than, say, 5 to 10) are not attractive, so a new kernel matrix is usually formed and stored after some number of iterations. In the Abaqus/Standard implementation the kernel is the actual Jacobian matrix, $\partial F^N / \partial u^M$. It is formed whenever a specified number of iterations have been done without obtaining a convergent solution; the default number of iterations is 8. Abaqus/Standard does not reform the kernel unless this value is exceeded, so the same kernel can be used for several increments if the BFGS updates are successful.
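A minimal sketch of how the stored kernel factorization and the update vector pairs might be applied, assuming hypothetical names (`kernel_solve` stands in for back-substitution with the factored kernel):

```python
import numpy as np

def apply_updated_inverse(kernel_solve, updates, r):
    """Apply the updated inverse to a vector r without forming any matrix.

    kernel_solve -- callable performing back-substitution with the
                    factored kernel (hypothetical stand-in)
    updates      -- list of (c_j, gamma_j) pairs for j = I+1, ..., i
    r            -- vector to which the updated inverse is applied

    Each update costs only inner products and vector scalings, which is
    what makes the organization computationally attractive.
    """
    if not updates:
        return kernel_solve(r)
    c, gamma = updates[-1]
    rho = 1.0 / (c @ gamma)
    # Rightmost factor [I - rho gamma c^T] acting on r:
    w = r - rho * (c @ r) * gamma
    v = apply_updated_inverse(kernel_solve, updates[:-1], w)
    # Leftmost factor [I - rho c gamma^T] plus the rank-one
    # "increment" term rho c c^T r:
    return v - rho * (gamma @ v) * c + rho * (c @ r) * c
```

Applying the result to $\gamma_i$ returns $c_i$ exactly, as the secant condition requires, while the kernel factorization is reused unchanged across all updates.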

In general, the rate of convergence of the quasi-Newton method is slower than the quadratic rate of convergence of Newton's method, though faster than the linear rate of convergence of the modified Newton method.
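The convergence-rate comparison can be seen on a simple scalar example (illustrative only, unrelated to Abaqus): Newton's method versus a secant iteration, the one-dimensional analogue of a quasi-Newton update, applied to $x^2 - 2 = 0$:

```python
import math

def newton_errors(x, iters):
    """Newton's method for x^2 - 2 = 0; returns |error| after each iteration."""
    root, errs = math.sqrt(2.0), []
    for _ in range(iters):
        x = x - (x * x - 2.0) / (2.0 * x)  # exact derivative used
        errs.append(abs(x - root))
    return errs

def secant_errors(x0, x1, iters):
    """Secant iteration (scalar quasi-Newton) for x^2 - 2 = 0."""
    f = lambda x: x * x - 2.0
    root, errs = math.sqrt(2.0), []
    for _ in range(iters):
        # Slope approximated from the last two iterates (secant condition)
        x0, x1 = x1, x1 - f(x1) * (x1 - x0) / (f(x1) - f(x0))
        errs.append(abs(x1 - root))
    return errs
```

Newton roughly squares the error at every iteration, while the secant error decays at the golden-ratio rate of about 1.618: slower than Newton's quadratic rate, but still faster than the fixed linear rate of a modified Newton iteration.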