About Using Distribution Options with the Abaqus Component

This section describes how the Abaqus component can use distribution options.

About Configuring Abaqus Jobs to Run on an LSF Cluster

The LSF option is used to configure Abaqus jobs to run on an LSF cluster. LSF gives you the ability to execute multiple Abaqus runs, possibly in parallel, on a distributed compute cluster with cluster scheduling provided by LSF.

Some of the benefits of using this option include:

  • Seamless integration of LSF job management into the Abaqus environment

  • No detailed end-user configuration or knowledge of LSF is required

  • Optimization of the Abaqus run-time environment by using LSF to take advantage of advances in Abaqus solver technology

The component supports LSF with Linux-to-Linux and Windows-to-Windows. You may be able to customize your system to enable LSF with Windows-to-Linux or Linux-to-Windows; however, these configurations are not supported by SIMULIA and have not been tested. In addition, only the scenario where the machine running Isight can directly submit jobs to the LSF cluster (the machine running Isight must be able to run the LSF “bsub” command) is supported.
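Because the machine running Isight must be able to invoke the LSF bsub command directly, a quick pre-check is to confirm that the command is visible on that machine's PATH. The following sketch is illustrative only and is not part of Isight; it simply reports whether bsub can be found:

```shell
# Illustrative pre-check (not part of Isight): report whether the LSF
# "bsub" command is visible on this machine's PATH.
if command -v bsub >/dev/null 2>&1; then
    BSUB_STATUS="available"
else
    BSUB_STATUS="missing"
fi
echo "bsub is $BSUB_STATUS on this machine"
```

If the status is "missing", the machine cannot submit jobs to the LSF cluster and the LSF option will not work from it.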

Before you try to use Isight to execute jobs with an LSF cluster, you must verify that the FIPER_TEMP environment variable is set to a shared directory. If you have to change the variable setting, it is recommended that you close and restart Isight. In addition, if you want to ensure that parallel runs for process components work properly in standalone mode, do not set the local execution directory.
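The FIPER_TEMP requirement above can be verified with a short script before submitting any jobs. This is a hedged sketch, not part of Isight; /tmp is used only as a placeholder default, whereas in practice FIPER_TEMP must name the shared directory:

```shell
# Illustrative check (not part of Isight): verify that FIPER_TEMP is set
# and names an existing, writable directory. The /tmp default is only a
# placeholder; FIPER_TEMP must really point to the shared directory.
FIPER_TEMP="${FIPER_TEMP:-/tmp}"
if [ -d "$FIPER_TEMP" ] && [ -w "$FIPER_TEMP" ]; then
    FIPER_TEMP_OK="yes"
else
    FIPER_TEMP_OK="no"
fi
echo "FIPER_TEMP=$FIPER_TEMP usable: $FIPER_TEMP_OK"
```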

For more information on setting environment variables or sharing directories, contact your local system administrator.

You use the options on the Grid tab to generate a script. This script is used to submit an Abaqus analysis to an LSF cluster. You can edit the script on the Grid tab and provide site-specific customization, such as adding ncpus=N for MPI jobs. In addition, you may have to modify the script if all of the shared file system and working directory requirements are not satisfied.

You can add the variable substitution string {var <parsed variable name>} in the script to replace the parameter name with the parameter value during execution.
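To illustrate what that substitution does, the sketch below mimics it with sed. The parameter name "cpus" and the sed call are this example's own inventions; Isight performs the real {var <name>} replacement internally before the script runs:

```shell
# Illustration only: mimic Isight's {var <name>} substitution with sed.
# The parameter name "cpus" is hypothetical.
SCRIPT_LINE='abaqus job=myjob cpus={var cpus} interactive'
CPUS_VALUE=4                        # example parameter value
RESULT=$(printf '%s' "$SCRIPT_LINE" | sed "s/{var cpus}/$CPUS_VALUE/")
echo "$RESULT"
# prints: abaqus job=myjob cpus=4 interactive
```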

The following are examples of Windows LSF scripts (64-bit):

@echo off
call abaqus.bat input=*inp_file* job=*inp_cmd**ID* *StudyRunCount* cpus=2 interactive
call abaqus.bat exec.exe -odb *inp_cmd**ID*.odb -config "config.txt"

The following are examples of Linux LSF scripts (64-bit):

#!/bin/sh
abaqus input=*inp_file* job=*inp_cmd**ID* *StudyRunCount* cpus=2 interactive
abaqus exec.exe -odb *inp_cmd**ID*.odb -config "config.txt"

The *inp_file* and *inp_cmd* strings in this reference file are replaced at design time based on the Abaqus input file name. The *ID* strings are replaced at run time to ensure that unique job names exist on the LSF cluster. You can also add the string “*StudyRunCount*” as an argument to supply design study information from Isight to Abaqus for reduced Abaqus license token consumption when appropriate. When design study information is not available, the argument is ignored; therefore, it is safe to provide it in any case.
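To make these replacements concrete, the sketch below mimics them with sed. The input file name "beam.inp", the job name "beam", and the identifier "42" are hypothetical, and the final sed expression stands in for the case where no design study information is available and *StudyRunCount* is simply dropped; Isight performs the real substitutions internally:

```shell
# Illustration only: mimic the design-time (*inp_file*, *inp_cmd*) and
# run-time (*ID*) substitutions Isight applies to the script template.
# All concrete values here are hypothetical.
TEMPLATE='abaqus input=*inp_file* job=*inp_cmd**ID* *StudyRunCount* interactive'
RESULT=$(printf '%s' "$TEMPLATE" | sed \
    -e 's/\*inp_file\*/beam.inp/' \
    -e 's/\*inp_cmd\*/beam/' \
    -e 's/\*ID\*/42/' \
    -e 's/\*StudyRunCount\* //')
echo "$RESULT"
# prints: abaqus input=beam.inp job=beam42 interactive
```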

This script is executed on the remote machine running LSF.

See Running Your Jobs on an LSF Cluster for details on using LSF-specific scripts in the Abaqus component editor.

About Configuring Abaqus Jobs to Run with PBS/TORQUE

The PBS/TORQUE option is used to configure Abaqus jobs to run on a PBS/TORQUE cluster. PBS/TORQUE gives you the ability to execute multiple Abaqus runs, possibly in parallel, on a distributed compute cluster with cluster scheduling provided by PBS/TORQUE.

Some of the benefits of using this option include:

  • Seamless integration of PBS/TORQUE job management into the Abaqus environment

  • No detailed end-user configuration or knowledge of PBS/TORQUE is required

  • Optimization of the Abaqus run-time environment by using PBS/TORQUE to take advantage of advances in Abaqus solver technology

The component supports PBS/TORQUE with a Linux-to-Linux configuration. You may be able to customize your system to enable PBS/TORQUE with Windows systems; however, these configurations are not supported by SIMULIA. In addition, only the scenario where the system running Isight can directly submit jobs to the PBS/TORQUE cluster is supported. In this scenario, the system running Isight must be able to run the PBS/TORQUE qsub command and related commands.

Before you try to use Isight to execute jobs with a PBS/TORQUE cluster, you must verify that the following configuration requirements have been met:

  • The FIPER_TEMP environment variable is set to a shared directory. If you have to change the variable setting, it is recommended that you close and restart Isight.

  • The Isight working directory is set to the shared directory specified for the FIPER_TEMP environment variable. The working directory is set using the Preferences dialog box. For more information about setting preferences, see Setting Preferences in the Isight User’s Guide.

For more information on setting environment variables or sharing directories, contact your local system administrator.
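The two requirements above can be sanity-checked together with a short script. This is an illustrative sketch, not part of Isight; /tmp stands in for the shared directory, and the working-directory value is assumed to mirror the FIPER_TEMP setting:

```shell
# Illustrative consistency check (not part of Isight): confirm that the
# working directory resolves to the same physical path as FIPER_TEMP.
# /tmp is only a stand-in for the shared directory.
FIPER_TEMP="${FIPER_TEMP:-/tmp}"
WORK_DIR="$FIPER_TEMP"              # the Isight working directory setting
if [ "$(cd "$WORK_DIR" && pwd -P)" = "$(cd "$FIPER_TEMP" && pwd -P)" ]; then
    WORKDIR_OK="yes"
else
    WORKDIR_OK="no"
fi
echo "working directory matches FIPER_TEMP: $WORKDIR_OK"
```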

You use the options on the Grid tab to generate a script. This script is used to submit an Abaqus analysis to a PBS/TORQUE cluster. You can edit the script on the Grid tab and provide site-specific customization, such as adding *StudyRunCount* to supply design driver information for Abaqus token reduction. This script is executed on the remote machine using PBS/TORQUE. For example:

#!/bin/sh
abaqus input=*inp_file* job=*inp_cmd**ID* *StudyRunCount* interactive
abaqus exec.exe -odb *inp_cmd**ID*.odb -config "config.txt"

You can add the variable substitution string {var <parsed variable name>} in the script to replace the parameter name with the parameter value during execution.

See Running Your Jobs with PBS/TORQUE for details on using PBS/TORQUE-specific scripts in the Abaqus component editor.

About Configuring Abaqus Jobs to Run with SSH

The SSH option is used to configure Abaqus jobs to run on a remote system using SSH. SSH lets you distribute Abaqus jobs to more capable remote systems or to systems where licenses are more readily available. The SSH option does not provide any scheduling capability and, therefore, does not directly enable parallel Abaqus job execution on remote nodes (the remote system’s host name must be specified in the component configuration).

The benefits of using this option include:

  • Seamless integration of SSH functionality into the Abaqus environment

  • No detailed end-user configuration or knowledge of SSH is required

Your computer/network must be configured to allow remote login without a password. The exact mechanism depends on the protocol used: an SSH key agent for the SSH protocols, or the usual .rhosts/hosts.equiv mechanism for rsh. If you are prompted for a password, the command will fail.
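A simple way to test the passwordless-login requirement is shown below. The host name is hypothetical; with BatchMode=yes, ssh refuses to prompt for a password, so the command fails immediately instead of hanging when key-based login is not configured:

```shell
# Hypothetical connectivity check (the host name is illustrative): with
# BatchMode=yes, ssh will not prompt for a password, so the command fails
# fast if passwordless login is not set up.
REMOTE_HOST="compute01.example.com"   # replace with your remote system
if ssh -o BatchMode=yes -o ConnectTimeout=5 "$REMOTE_HOST" true 2>/dev/null; then
    SSH_LOGIN="ok"
else
    SSH_LOGIN="failed"
fi
echo "passwordless SSH to $REMOTE_HOST: $SSH_LOGIN"
```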

Furthermore, you can set preference options for SSH using the OS Command component preferences. The OS Command component and the Abaqus component share the same distribution (grid) options. For more information on setting these SSH preferences, see Setting SSH Grid Plug-in Options.

You use the options on the Grid tab to generate a script. This script is used to submit an Abaqus analysis to a remote system using SSH. You can edit the script on the Grid tab and provide site-specific customization, such as adding *StudyRunCount* to supply design study information for Abaqus token reduction. This script is executed on the remote machine using SSH. For example:

call abaqus.bat input=*inp_file* job=*inp_cmd**ID* *StudyRunCount* interactive
call abaqus.bat exec.exe -odb *inp_cmd**ID*.odb -config "config.txt"

You can add the variable substitution string {var <parsed variable name>} in the script to replace the parameter name with the parameter value during execution.

See Running Your Jobs with SSH for details on using SSH-specific scripts in the Abaqus component editor.

About Scripts that Customize Job Execution

The Custom option is used to configure a custom script for non-standard or highly customized situations (such as connecting to a non-LSF distributed environment).

Note: If your script will be used to connect to an LSF cluster, it is highly recommended that you use the LSF option as described in Running Your Jobs on an LSF Cluster. LSF gives you the ability to execute multiple Abaqus runs, possibly in parallel, on a distributed compute cluster with cluster scheduling provided by LSF.

You use the options on the Grid tab to generate a script. This script is used during execution. You can edit the script on the Grid tab and provide site-specific customization. In addition, you may have to modify the script if all of the shared file system and working directory requirements are not satisfied.

You can add the variable substitution string {var <parsed variable name>} in the script to replace the parameter name with the parameter value during execution.

The following are examples of custom Windows scripts used to connect to an LSF cluster:

@echo off
REM Sample Isight Abaqus component custom GRID script configuration file
REM Save to <isightinstalldir>\config\AbqLSFScript.bat
REM  1) Execute Abaqus solver on remote machine
REM The *StudyRunCount* command line argument for reduced Abaqus analysis token consumption
bsub -q normal -J myjob*ID* "cd %CD% & abaqus inp=myjob.inp job=myjob*ID* *StudyRunCount* interactive"
REM  2) Extract output parameters from Abaqus output (odb) file
bsub -q normal -w ended("myjob*ID*") "cd %CD% & abaqus exec.exe -odb myjob*ID*.odb -config config.txt -fieldvalues yes"

The following are examples of custom Linux scripts used to connect to an LSF cluster:

#!/bin/sh
# Sample Isight Abaqus component custom GRID script configuration file
# Save to <isightinstalldir>/config/AbqLSFScript.sh
#----------------------------------------------------------------
#  1) Execute Abaqus solver on remote machine (assumes a shared file system)
abaqus input=*inp_file* job=*inp_cmd**ID* queue=default

bsub -q normal -J myjob*ID* /PATH/TO/abaqus/Commands/abaqus \
    input=myjob.inp job=myjob*ID* *StudyRunCount* interactive
#----------------------------------------------------------------
#  2) Extract output parameters from Abaqus output (odb) file
bsub -q normal -w ended\("myjob*ID*"\) \
    /PATH/TO/abaqus/Commands/abaqus python \
    exec.exepy -odb myjob*ID*.odb -config "config.txt" -fieldvalues yes

You can configure the default script that is generated by modifying the AbqLSFScript.bat file (AbqLSFScript.sh on Linux) in your <Isight_install_directory>\<operating_system>\reffiles\SMAFIPconfig directory. The *inp_file* and *inp_cmd* strings in this reference file are replaced at design time based on the Abaqus input file name. The *ID* strings are replaced at run time to ensure that unique job names exist on the LSF cluster. You can also add the string “*StudyRunCount*” as an argument to supply design study information from Isight to Abaqus for reduced Abaqus license token consumption when appropriate. When design study information is not available, the argument is ignored; therefore, it is safe to provide it in any case. The contents of the default file included with the component are shown below.

The following shows the content of the default Windows script:

@echo off
abaqus.bat input=*inp_file* job=*inp_cmd**ID* queue=default
bsub -q default -w ended("*inp_cmd**ID*") abaqus.bat exec.exe -odb *inp_cmd**ID*.odb -config "config.txt"

The following shows the content of the default Linux script:

#!/bin/sh
abaqus input=*inp_file* job=*inp_cmd**ID* queue=default
bsub -q default -w ended\("*inp_cmd**ID*"\) abaqus exec.exe -odb \
*inp_cmd**ID*.odb -config "config.txt"

This script is executed on the machine running the component, in a temporary working directory specified by the FIPER_TEMP environment variable.

When used to connect to LSF, the script assumes that the LSF cluster nodes have direct access to this same folder via a shared file system (the LSF job runs in the same folder it was submitted from).

Note: Although this custom script can be used to connect to an LSF cluster, it is recommended that you use the LSF option described in Running Your Jobs on an LSF Cluster.

When used with LSF, the script performs the following functions:

  • Submits an LSF job to run Abaqus on the input file myjob.inp, producing an output database file called myjob*ID*.odb, where *ID* is replaced by a unique identifier that is generated by Isight before the script is run. This action ensures that all LSF jobs have unique names.

  • Fills in the job name (myjob) based on the file name used on the Input and Output tabs.

  • Assigns the Abaqus LSF job a unique LSF job name with the -J argument.

  • Submits a second job that uses a Python script supplied by Isight to extract information from the output database file created by the first job. The Python script creates the output file params.txt containing the values of the output parameters.

  • Configures the second job with an LSF job dependency on the first job (-w “ended”), so that it will not start until the first job has completed.

See Running Your Jobs Using a Custom Script for details on using custom scripts in the Abaqus component editor.

About Configuring Abaqus Jobs

The Abaqus option is used to configure Abaqus jobs. Abaqus jobs run on a queue that utilizes an LSF cluster that is configured by your Abaqus administrator. The LSF job management is integrated into the Abaqus environment, and you cannot configure the LSF cluster that is used by Abaqus. You configure a single Abaqus job to run in parallel across multiple processors by selecting a queue name and the number of processors.

For more details about setting up an LSF cluster or defining Abaqus queues, contact your local Abaqus administrator.

See Running Your Jobs Using Abaqus for details on selecting a queue name and the number of processors.