.. _run_setup:

Execute
=======

``Execute`` is the application that drives the simulation. It handles
gathering all of the environment variables and files required to run the
simulation. While the job is running, ``Execute`` will update the job
status in d3VIEW, call ``preview`` to push files, and call ``publisher``
when the simulation completes.

To add a solver in ``Execute``, find the ``run`` property in the
``d3view.json``. The main structure of the run configuration looks as
follows::

    "run": {
        "info": "Run configuration file used by d3VIEW. Do not edit without consulting d3VIEW Dev Team.",
        "script_directory": "",
        "scratch_dir": "",
        "file_transfer": "local",
        "monitoring": {},
        "applications": {}
    }

Script directory
~~~~~~~~~~~~~~~~

The ``run -> script_directory`` key is the directory in which custom run
scripts reside. The value should be a string containing the directory's
path. Setting this is useful when you want to provide a custom run script
for a custom solver. If this is set, the
``run -> applications -> <application> -> run_script`` key must be updated
to contain the name of the script to be run. For instance, with the
application ``custom_solver`` we could create a ``custom_solver.sh``
script that runs only one application, and the minimal configuration for
the ``d3view.json`` would look as follows::

    "run": {
        "info": "Run configuration file used by d3VIEW. Do not edit without consulting d3VIEW Dev Team.",
        "script_directory": "/path/to/script",
        "scratch_dir": "",
        "file_transfer": "local",
        "abort_interval": 60,
        "applications": {
            "custom_solver": {
                "run_script": "custom_solver.sh",
                "env": {},
                "versions": {
                    "default": {
                        "binary": "none"
                    }
                }
            }
        }
    }

.. note:: Custom scripts are disabled if ``run -> script_directory`` is
   set to ``""``.

Scratch directory
~~~~~~~~~~~~~~~~~

Some solvers support using a scratch directory. To specify a custom
scratch directory, such as ``/mnt/resource/``, set this parameter to the
directory's path.
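For example, a minimal sketch that points the run utility at a node-local
scratch mount; the path here is only an illustration, so substitute
whatever scratch storage your hosts actually provide::

    "run": {
        …
        "scratch_dir": "/mnt/resource/",
        …
    }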
File transfer
~~~~~~~~~~~~~

This option is ``local`` by default, meaning files will be copied from one
directory on the host to another directory on the same host. If changed to
a different setting, the remote host specified in submit will be the host
that files are copied from.

Applications
~~~~~~~~~~~~

As shown in the `Script directory`_ section, defining a new application
can be very simple. Each application in the ``applications`` section is a
dictionary containing certain required keys. The keys required for each
application are `env`_ and `versions`_; these give the basic details
required to drive the simulation. There is also a lot of flexibility in
the d3VIEW run utility, such as optional settings like `custom_scripts`_.

env
+++

The ``env`` key is a dictionary whose keys and values correspond to
environment variable names and their respective values. All environment
variables required for pre- and post-processing must also be included.
In the example of dyna, the ``env`` looks as follows::

    "env": {
        "LSTC_LICENSE_SERVER": "d3VIEW.license.server",
        "LSTC_MEMORY": "AUTO",
        "ALTAIR_LICENSE_PATH": "ALTAIR_LICENSE_PATH"
    }

versions
++++++++

The ``versions`` key is a dictionary of the binaries required for each
solver. Each key in the dictionary should be the solver version for those
binaries. An example for lsdyna is as follows::

    "lsdyna": {
        …
        "versions": {
            "r712sp": {
                "binary": "/path/to/ls-dyna_mpp_s_r7_1_2_95028_x64_redhat54_ifort131_sse2_platformmpi",
                "mpi_bin": "/path/to/platform_mpi/bin/mpirun",
                "post_bin": {
                    "l2a_bin": "/path/to/ls-dyna_mpp_s_r7_1_2_95028_x64_redhat54_ifort131_sse2_platformmpi.l2a"
                }
            }
        },
        …
    }

For a custom application, the default values should be used::

    "versions": {
        "default": {
            "binary": "none"
        }
    }

custom_scripts
++++++++++++++

Custom scripts can be defined for each solver, and they must reside in the
`Script directory`_. Therefore, if the `Script directory`_ is not set or
does not exist, none of the custom scripts will be executed.
``run -> applications -> <application> -> <script>`` defines the path to
the script. Except for the ``run_script``, each of these scripts executes
after the task has completed for predefined solvers. For example, Lucy
comes packaged with the ability to run dyna without a custom run script,
but if a custom run script is configured, it will be executed instead of
the predefined script. Lucy also comes with predefined cleanup steps for
dyna; if a custom cleanup script is configured, the custom cleanup will
execute after the standard cleanup for dyna (a sketch of this pattern
appears at the end of this section).

The possible custom scripts are as follows:

1) ``run_script`` - the main solver run script (this will run instead of
   the packaged solver's run process)
2) ``post_script`` - adds additional post-processing
3) ``cleanup_script`` - adds additional cleanup steps
4) ``abort_Script`` - adds custom abort handling

For example::

    "custom_solver": {
        "run_script": "custom_solver.sh",
        "long_interval": "custom_long_interval.sh",
        "post_script": "custom_post_script.sh",
        "cleanup_script": "custom_cleanup_script.sh",
        "abort_Script": "custom_abort_Script.sh",
        "versions": {
            "default": {
                "binary": "none"
            }
        }
    }

.. warning:: These settings will do nothing if the `Script directory`_ is
   not defined.
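As a sketch of the cleanup pattern described above, the following keeps
the packaged dyna run process and only adds a custom cleanup step. The
script name is a hypothetical placeholder; whatever name you use must
exist in the `Script directory`_::

    "lsdyna": {
        …
        "cleanup_script": "dyna_extra_cleanup.sh",
        …
    }

Because dyna ships with predefined cleanup steps, this script runs after
the standard dyna cleanup rather than replacing it.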
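Putting it all together, a complete ``run`` block for a custom solver
might look like the following sketch. All paths, script names, and the
environment variable shown are placeholders for illustration only::

    "run": {
        "info": "Run configuration file used by d3VIEW. Do not edit without consulting d3VIEW Dev Team.",
        "script_directory": "/path/to/scripts",
        "scratch_dir": "/path/to/scratch",
        "file_transfer": "local",
        "monitoring": {},
        "applications": {
            "custom_solver": {
                "run_script": "custom_solver.sh",
                "post_script": "custom_post_script.sh",
                "env": {
                    "CUSTOM_LICENSE_SERVER": "/path/to/license"
                },
                "versions": {
                    "default": {
                        "binary": "none"
                    }
                }
            }
        }
    }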