Configuring the Large Scale Generalized Reduced Gradient (LSGRG) Technique

You can configure the Large Scale Generalized Reduced Gradient (LSGRG) technique options.

  1. Select the optimization technique as described in Configuring the Technique and Execution Options.

  2. In the Optimization Technique Options area, enter or select the following:

    Max Iterations: This parameter sets the maximum number of design iterations you want the optimizer to run. The value type is integer. The default value is 40. Other possible values are integers ≥ 1.
    Convergence Epsilon: This parameter sets the convergence criterion for LSGRG. If the relative change in the objective function is smaller than Convergence Epsilon for a given number of consecutive iterations (set by Convergence Iterations, below), or if the necessary Kuhn-Tucker optimality conditions are satisfied to within Convergence Epsilon, the optimization is terminated (a sketch of this convergence test appears after this table). The value type is real. The default value is 0.0010.
    Rel Step Size: This parameter sets the relative gradient step size used by LSGRG when calculating gradients by finite differencing. The absolute step is calculated by LSGRG as follows:

    dx = (1 + x) × GradientStepSize

    where x is the current value of a design variable. For small values of x (x = 0), GradientStepSize is the absolute step size; for large values of x (x >> 1), it is the relative step size. In general, the best value for GradientStepSize is sqrt(eps), where eps is the relative error in the computed function values (simcode outputs); for example, if function values are available to full double precision (eps = 1e-16), GradientStepSize should be about 1e-8. The value type is real. The default value is 0.0010. Other possible values are >0. A sketch of this step-size calculation appears after this table.

    Convergence Iterations: This parameter sets the number of consecutive iterations used in the convergence check (see the definition of Convergence Epsilon above). The value type is integer. The default value is 3. Other possible values are integers ≥ 1.
    Binding Constraint Epsilon: This parameter sets the threshold for binding constraints: if a constraint value is within this epsilon of its bound, the constraint is assumed to be binding (a sketch of this test appears after this table). This parameter can have a strong effect on the speed of convergence. Increasing it can sometimes speed convergence (by requiring fewer Newton iterations during a one-dimensional search), while decreasing it occasionally yields a more accurate solution or gets the optimization moving if the algorithm gets “stuck.” Treat values larger than 1e-2 with caution, as well as values smaller than 1e-6. The value type is real. The default value is 1×10⁻⁴. Other possible values are >0.
    Phase 1 Objective Ratio: This parameter sets the ratio of the true objective value to the sum of constraint violations used as the objective function during the so-called Phase 1 of optimization. LSGRG uses the Generalized Reduced Gradient method, which is designed to work in the feasible domain. If the initial design is not feasible, the first step is to obtain a feasible point, from which feasibility is maintained thereafter; this is known as Phase 1 of optimization with LSGRG. The Phase 1 objective function is the sum of the constraint violations plus, optionally, a fraction of the true objective (a sketch appears after this table). This phase terminates either with a message that the problem is infeasible or with a feasible solution. If an infeasibility message is produced, the program may have become stuck at a local minimum of the Phase 1 objective (or too large a part of the true objective was incorporated), and the problem may actually have feasible solutions. If you suspect that this is the case, the suggested remedy is to choose a different starting point (or reduce the proportion of the true objective) and try again. The default value is 1.0. Other possible values range from 0 to 1.
    Maximum Failed Runs: This parameter sets the maximum number of failed subflow evaluations tolerated by the optimization technique. If the number of failed runs exceeds this value, the optimization component terminates execution. To disable this feature, set the option to any negative value (e.g., -1); the optimization then continues execution regardless of the number of failed subflow runs.
    Failed Run Penalty Value: This parameter represents the value of the Penalty parameter used for all failed subflow runs. The default value is 1×10³⁰.
    Failed Run Objective Value: This parameter represents the value of the Objective parameter used for all failed subflow runs. The default value is 1×10³⁰.
    Save Technique Log: Most optimization techniques create a log file of information and messages as they run. This information can be useful for determining why an optimizer took the path that it did or why it converged. Some of these log files can get extremely large, so they are not stored with the run results by default. Select this option if you want to store the log with the run results (as a file parameter) for later viewing.
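
    The convergence test controlled by Convergence Epsilon and Convergence Iterations can be sketched as follows. This Python fragment is a minimal illustration, not LSGRG's actual implementation: the function name is hypothetical, only the relative-change part of the test is shown, and the Kuhn-Tucker check is omitted.

        def has_converged(objective_history, epsilon=1e-3, n_iterations=3):
            # Converged if the relative change in the objective stays below
            # epsilon for n_iterations consecutive iterations.
            if len(objective_history) <= n_iterations:
                return False
            recent = objective_history[-(n_iterations + 1):]
            for prev, curr in zip(recent, recent[1:]):
                rel_change = abs(curr - prev) / max(abs(prev), 1e-30)
                if rel_change >= epsilon:
                    return False
            return True

        # Example: the objective stalls near 10.0 for several iterations.
        history = [25.0, 14.2, 10.3, 10.001, 10.0009, 10.0008, 10.0008]
        print(has_converged(history))  # True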
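    The Rel Step Size formula can be illustrated with a short sketch. The helper names below are hypothetical (not part of LSGRG or Isight), and abs(x) is a defensive assumption for negative design variables; the manual's formula is written as (1 + x).

        import math

        def absolute_step(x, gradient_step_size=1e-3):
            # dx = (1 + x) * GradientStepSize: an absolute step when x = 0,
            # a relative step when x >> 1.
            return (1.0 + abs(x)) * gradient_step_size

        def forward_difference_gradient(f, x, gradient_step_size=1e-3):
            # One-sided finite difference: df/dx ~ (f(x + dx) - f(x)) / dx.
            dx = absolute_step(x, gradient_step_size)
            return (f(x + dx) - f(x)) / dx

        # Rule of thumb from the text: GradientStepSize ~ sqrt(eps).
        step = math.sqrt(1e-16)  # about 1e-8 for full double precision

        # d/dx of x**2 at x = 3 is 6; the estimate is accurate to ~1e-8.
        print(forward_difference_gradient(lambda t: t * t, 3.0, step))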
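    The binding-constraint test controlled by Binding Constraint Epsilon reduces to a one-line check. The sketch below assumes constraints of the form g(x) ≤ bound and is illustrative only:

        def is_binding(constraint_value, bound, epsilon=1e-4):
            # A constraint g(x) <= bound is treated as binding (active)
            # if its value lies within epsilon of the bound.
            return abs(constraint_value - bound) <= epsilon

        print(is_binding(0.99995, 1.0))  # True: within 1e-4 of the bound
        print(is_binding(0.9, 1.0))      # False: well inside the feasible region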
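    Finally, the Phase 1 objective described under Phase 1 Objective Ratio can be sketched as the sum of constraint violations plus a fraction of the true objective. This is one plausible reading of the description, not LSGRG's internal formulation; the helper name and the ≤-style constraints are assumptions.

        def phase1_objective(true_objective, constraint_values, bounds, ratio=1.0):
            # Sum of violations for constraints g_i(x) <= bound_i, plus
            # ratio * true objective (the Phase 1 Objective Ratio).
            violations = sum(max(g - b, 0.0)
                             for g, b in zip(constraint_values, bounds))
            return violations + ratio * true_objective

        # Infeasible point: g = [1.2, 0.5] against bounds [1.0, 1.0] gives a
        # violation of 0.2, so the Phase 1 objective is 0.2 + 1.0 * 3.0 = 3.2.
        print(phase1_objective(3.0, [1.2, 0.5], [1.0, 1.0]))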

  3. Click Update Component to save your changes.