

Monitoring Incremental Elapsed Time as a Function of Simulation Time


When running explicit simulations in LS-DYNA, it is important to understand the total CPU time and the total elapsed time used by the solver. This information is printed at the bottom of every D3HSP file written by LS-DYNA, as shown below. The total elapsed time reported in the file is the difference between the termination time and the execution start time. This is a single static number that gives a good overview of the turnaround time for the simulation. However, it provides no insight at incremental intervals, which is often useful for verifying that the solver did not spend an excessive amount of time in any particular phase of the solution. There are two ways to view incremental elapsed time; both are briefly discussed below.
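As a small convenience, the timing summary at the end of the D3HSP file can be pulled out with a few lines of script. The sketch below is only an illustration: it assumes the summary lines contain the phrases "Elapsed time" or "CPU time", which may differ between LS-DYNA versions, so adjust the keywords if nothing is found.

```python
import sys

def report_times(d3hsp_path="d3hsp"):
    """Print the timing summary lines from the end of a D3HSP file.

    A rough sketch: assumes the termination summary contains lines with
    'Elapsed time' and/or 'CPU time'; the exact wording can vary between
    LS-DYNA versions, so adjust the keywords if nothing is printed.
    """
    keywords = ("Elapsed time", "CPU time")
    with open(d3hsp_path, "r", errors="ignore") as f:
        hits = [line.rstrip() for line in f if any(k in line for k in keywords)]
    if not hits:
        print("no timing summary lines found -- check the keywords", file=sys.stderr)
    for line in hits:
        print(line)

if __name__ == "__main__":
    report_times(sys.argv[1] if len(sys.argv) > 1 else "d3hsp")
```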

1. Time Per Zone Cycle (TZC) in GLSTAT output
LS-DYNA writes a variable called 'time per zone cycle', in nanoseconds, as a function of simulation time into the GLSTAT file. TZC represents the elapsed clock time per element cycle at the time of output and can be plotted against simulation time in any post-processor, or extracted with a small script as sketched below.
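The following sketch is one possible way to automate this: it parses the ASCII GLSTAT file and plots TZC against simulation time with matplotlib. The line labels matched below ('time.....' and 'time per zone cycle') are assumptions based on typical GLSTAT output and may need adjusting for your LS-DYNA version.

```python
import matplotlib.pyplot as plt

def plot_tzc(glstat_path="glstat"):
    """Plot time per zone cycle (ns) against simulation time from GLSTAT.

    A minimal sketch: assumes each output block of the ASCII GLSTAT file has
    a 'time.....' line followed later by a 'time per zone cycle' line, each
    ending in a single number. Labels vary slightly between LS-DYNA versions,
    so the matching below may need adjusting.
    """
    times, tzc = [], []
    current_time = None
    with open(glstat_path, "r", errors="ignore") as f:
        for line in f:
            label = line.strip().lower()
            if label.startswith("time.") and "per zone" not in label:
                current_time = float(line.split()[-1])   # simulation time of this block
            elif "time per zone cycle" in label and current_time is not None:
                tzc.append(float(line.split()[-1]))      # TZC in nanoseconds
                times.append(current_time)
    plt.plot(times, tzc)
    plt.xlabel("Simulation time")
    plt.ylabel("Time per zone cycle (ns)")
    plt.show()

if __name__ == "__main__":
    plot_tzc()
```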

2. Incremental Elapsed Time based on D3PLOT File TimeStamp
To better understand the turnaround time, we can use the TIMESTAMP of the D3PLOT files (write at least 10 per simulation by setting the output interval to ENDTIM/10) and plot the incremental time difference between successive D3PLOT TIMESTAMPs, starting from the first D3PLOT. For example, if we have 10 states of information (D3PLOT[geometry], D3PLOT01, D3PLOT02, ….., D3PLOT10), then the successive incremental elapsed times are D3PLOT01_TIMESTAMP-D3PLOT_TIMESTAMP, D3PLOT02_TIMESTAMP-D3PLOT01_TIMESTAMP, ….., D3PLOT10_TIMESTAMP-D3PLOT09_TIMESTAMP. This information can then be plotted against the simulation time at which each D3PLOT was written, as shown below.

[Figure: incremental elapsed time between successive D3PLOT states plotted against simulation time]
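One way to automate this bookkeeping is sketched below. It is only an approximation: it uses the d3plot files' modification timestamps as the write times (which assumes the files were not copied or touched after the run) and a user-supplied dt_state, the d3plot output interval (e.g. ENDTIM/10), to label the simulation time.

```python
import glob
import os

def incremental_elapsed(run_dir=".", dt_state=None):
    """Print the wall-clock time between successive d3plot family files.

    A rough sketch: uses each file's modification timestamp as a proxy for
    the time its state was written, which assumes the files were not copied
    or touched after the run and that each plot state went to its own family
    member. dt_state is the simulation-time interval between dumps
    (e.g. ENDTIM/10) and is only used to label the output.
    """
    files = glob.glob(os.path.join(run_dir, "d3plot*"))
    files.sort(key=os.path.getmtime)          # order family members by write time
    stamps = [os.path.getmtime(f) for f in files]
    for i in range(1, len(files)):
        wall = stamps[i] - stamps[i - 1]      # incremental elapsed time, seconds
        sim_t = dt_state * i if dt_state else i
        print(f"{os.path.basename(files[i]):>10}  sim time {sim_t:>12}  "
              f"incremental elapsed {wall:8.1f} s")

if __name__ == "__main__":
    incremental_elapsed()
```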

Several factors can cause the run to progress more slowly than it did at the start. A dropping time step, adaptivity (where elements are added dynamically), and changing contact-impact conditions are a few possible causes. Some, such as adaptivity, may be unavoidable, but others, such as a dropping time step or inappropriate contact definitions that lead to large bucket sizes, should be reviewed and fixed to improve the overall job turnaround time. Since many design iterations are run over the course of the product design cycle, any improvement made early, during the debugging phase, can significantly reduce the overall turnaround time.

  • Andy H says:

    Hi Suri,

    Time per zone is certainly useful for identifying a model performance problem. However, my current situation is what you would think to be quite simple. Two models with different element densities running on the same number of CPUs. Basically the geometry of the models is identical, and the reduced model (reduced by approx. 40%, to 600,000 elements) indicates a higher minimum contact timestep in the d3hsp: “The LS-DYNA time step size should not exceed #### to avoid contact instabilities.” But the time per zone in the reduced model is approximately 50% higher than in the larger model.

    Is there any way to identify bucket size issues or other possible causes that have increased the time per zone? Typically, mesh coarsening away from the area of simulation interest is the first step in reducing run time. So balancing this against a method of assessing the potential gains, in relation to how LS-DYNA calculates the bucket sizes, would be a useful guide for users who are now moving to calculations on ever more CPUs.

    LS-DYNA is a powerful tool. However, the problem in the real world is that users always have to provide project results yesterday for more and more load cases, so the argument for improved model turnaround will always come back to optimal model design for CPU turnaround, balanced against the modelling detail needed to provide guidance to the design teams. This also comes back to your post regarding “Best practices for building flexible models with reduced file sizes”. Big models require significant resources, and the pre/post-processors struggle to manage the new magnitude of model data for visualisation. This is another area on the same theme. It is all related: MODEL SIZE -> RUN TURNAROUND -> OUTPUT SIZE -> VISUALIZATION/STORAGE REQUIREMENTS. We are limited by the end result as much as by the run turnaround.

  • Suri Bala says:

    Andy,

    An increasing time per zone could also indicate a drop in the time step if DT2MS is not used, bucket sizes getting larger when there is non-physical element deformation, and so on. If the coarser model has completed normally, perhaps a review of the maximum shell element area between the reduced and original models could indicate any increase in contact cost.
    If necessary, you can send the d3hsp files from both runs to info at d3view dot com.
