# PEACOCK — LS-DYNA PARSER
Runs the Peacock C++ parser on a remote host to parse d3plot, binout, and LS-DYNA keyword files, producing JS3D visualisation assets, energy histories, BOM CSVs, parameterised keyword decks, and sweep variants. Host, executable path, and SOLVERAI endpoint are drawn from admin site-variables (Settings › Peacock); user credentials are injected automatically from the active d3VIEW session.
## When to use
Tagged: babylon, binout, bom, cae, d3plot, js3d, keyword, keyword_manip.
## Inputs
| Label | ID | Type | Default | Required | Description |
|---|---|---|---|---|---|
| Files | files | file | — | ✓ | Input file(s) for Peacock. Accepts a single file, a keyword file (e.g. main_model.key), or multiple files making up a d3plot/binout dataset. All uploaded files are placed in the same remote folder and peacock auto-discovers content. |
| Input Files Dataset | input_files_ds | dataset | — | | Optional: dataset of {name, path} rows. Each file is copied into the remote input folder alongside the attachments above. |
| Output Name | output_name | text | out | | Passed to peacock as -o <name>. The output directory/zip will be named accordingly. |
| Tasks | tasks | select | (complex) | | Peacock tasks to run. Defaults cover standard d3plot+keyword parsing. Add keyword_bom_csv / keyword_conn / keyword_params etc. for extended extraction. |
| Keyword Remove Part PID | keyword_remove_part_pid | number | — | | Only used when Tasks contains ‘keyword_remove_part’. The *PART ID to drop. The mutator removes the *PART block, all elements assigned to it, nodes used only by those elements, *SET_PART_LIST entries containing it, *ELEMENT_MASS_PART rows, *INITIAL_VELOCITY_GENERATION pairs, *CONTACT_*_ID blocks resolving to it, and orphan *DATABASE_HISTORY_NODE_ID rows. Runs keyword_validate after the mutation as a self-check. |
| Keyword Clone Part PID | keyword_clone_part_pid | number | — | | Only used when Tasks contains ‘keyword_clone_part’. The source *PART ID to duplicate. The mutator copies the *PART block (new PID = max+1), every *ELEMENT_SOLID/SHELL/BEAM/DISCRETE/TSHELL row referencing it (new EIDs = max+1..N), and the unique *NODE rows those elements reference (new NIDs, coords translated by keyword_clone_offset_x/y/z). Runs keyword_validate after the mutation as a self-check. |
| Keyword Clone Offset X (mm) | keyword_clone_offset_x | number | 200 | | Only used when Tasks contains ‘keyword_clone_part’. Translation in X (mm) applied to the cloned part’s nodes. Default 200 mm. Use 0 to keep cloned coords identical (the clone will overlap the source; typically you want a non-zero offset). |
| Keyword Clone Offset Y (mm) | keyword_clone_offset_y | number | 0 | | Only used when Tasks contains ‘keyword_clone_part’. Translation in Y (mm) applied to the cloned part’s nodes. |
| Keyword Clone Offset Z (mm) | keyword_clone_offset_z | number | 0 | | Only used when Tasks contains ‘keyword_clone_part’. Translation in Z (mm) applied to the cloned part’s nodes. |
| Panel X-min (mm) | keyword_panel_x_min | number | -100 | | Only used when Tasks contains ‘keyword_add_panel’. Lower X bound of the synthesized panel’s bounding box (mm). Combined with keyword_panel_x_max, controls the panel’s width. |
| Panel X-max (mm) | keyword_panel_x_max | number | 100 | | Only used when Tasks contains ‘keyword_add_panel’. Upper X bound of the synthesized panel’s bounding box (mm). |
| Panel Y-min (mm) | keyword_panel_y_min | number | -100 | | Only used when Tasks contains ‘keyword_add_panel’. Lower Y bound of the synthesized panel’s bounding box (mm). |
| Panel Y-max (mm) | keyword_panel_y_max | number | 100 | | Only used when Tasks contains ‘keyword_add_panel’. Upper Y bound of the synthesized panel’s bounding box (mm). |
| Panel Z (mm) | keyword_panel_z | number | -50 | | Only used when Tasks contains ‘keyword_add_panel’. Z coordinate (mm) at which the flat shell panel is placed. Default -50 places it below the existing geometry. |
| Panel Nodes (X) | keyword_panel_nx | number | 21 | | Only used when Tasks contains ‘keyword_add_panel’. Number of nodes along the X axis (must be >=2). Element count along X is nx-1. Default 21 nodes → 20 elements. |
| Panel Nodes (Y) | keyword_panel_ny | number | 21 | | Only used when Tasks contains ‘keyword_add_panel’. Number of nodes along the Y axis (must be >=2). Element count along Y is ny-1. Default 21 nodes → 20 elements. Total panel elements = (nx-1)*(ny-1). |
| Panel Thickness (mm) | keyword_panel_thickness | number | 2 | | Only used when Tasks contains ‘keyword_add_panel’. Shell thickness (mm) applied uniformly to all four corner thickness fields (t1=t2=t3=t4) in the synthesized *SECTION_SHELL card. |
| Keyword Slice Drop List | keyword_slice_drop | textarea | — | | Only used when Tasks contains ‘keyword_slice’. Comma-separated list of include-file basenames to drop from the deck (e.g. Rr_Sus_1.incl,Tank_Fuel_1.incl). The slicer removes the matching *INCLUDE lines and prunes every reference to parts/nodes defined by those files in every surviving card in a single pass. |
| Parameterize Target Keywords | parameterize_target_keywords | string | SECTION_SHELL,MAT | | Only used when Tasks contains ‘keyword_parameterize’. Comma-separated keyword categories to scan for parameterizable values. Default ‘SECTION_SHELL,MAT’ covers shell thicknesses + material properties. Add ‘INITIAL_VELOCITY’ to also parameterize impact velocities. |
| Parameterize Target Properties | parameterize_target_properties | string | — | | Only used when Tasks contains ‘keyword_parameterize’. Comma-separated property names to restrict parameterization (e.g. ‘T1,SIGY,E’). Empty = use sensible per-keyword defaults (T1 only for SECTION_SHELL; all known properties for the matched MAT type; VX/VY/VZ for INITIAL_VELOCITY). |
| Parameterize Part IDs | parameterize_part_ids | string | — | | Only used when Tasks contains ‘keyword_parameterize’. Comma-separated *PART IDs to restrict parameterization to (e.g. ‘2000226,2000227’). Empty = parameterize values referenced by every *PART in the deck. |
| Parameterize Naming Convention | parameterize_naming_convention | select | keyword_id | | Only used when Tasks contains ‘keyword_parameterize’. Controls the generated *PARAMETER name format. ‘keyword_id’ produces S{sid}_{prop} / M{mid}_{prop} / V{nsid}_{prop} (capped at 8 chars). ‘short’ produces {prop}_{id}. Names are de-collided by suffixing digits. |
| Parameterize Min % | parameterize_min_percentage | number | 50 | | Only used when Tasks contains ‘keyword_parameterize’. Lower bound of each parameter’s design range as a percentage below the baseline value (default 50 → min = 0.5 × value). Written into the parameters.csv ‘min’ column for downstream DOE / sweep tooling. |
| Parameterize Max % | parameterize_max_percentage | number | 50 | | Only used when Tasks contains ‘keyword_parameterize’. Upper bound of each parameter’s design range as a percentage above the baseline value (default 50 → max = 1.5 × value). |
| Keyword Sweep Experiments (dataset) | keyword_sweep_experiments | dataset | — | | Only used when Tasks contains ‘keyword_sweep_apply’. Experiments dataset (one row per variant), typically wired directly from doe_sampling_point_generator’s experiments output. Each row’s keys are parameter names matching the *PARAMETER block in the parameterized deck; values are the parameter values for that variant. Metadata columns (id, name, sampling_type, baseline, bucket_index) are auto-recognized and used for variant naming / ignored as appropriate. |
| Keyword Sweep Experiments (file) | keyword_sweep_experiments_file | string | — | | Alternate input for keyword_sweep_apply. Filename (basename) of an experiments CSV that has already been uploaded as an attachment and listed in the files input. Use only when chaining from doe_sampling_point_generator is impractical; the dataset-shaped keyword_sweep_experiments input is preferred. |
| Threads | threads | number | 4 | | Number of OpenMP threads passed to peacock with -t. |
| Recursive | recursive | select | yes | | When Yes, adds -r so peacock processes all subfolders containing d3plot files. |
| Zip Output | zip_output | select | no | | When Yes, adds -zip so peacock zips its output directory into a single archive. Default No: each output file is saved as its own attachment, readable directly by downstream tools (file_read, file_parser). Only set Yes when the caller wants a bundled archive. |
| Extra Args | extra_args | textarea | — | | Optional extra CLI flags appended verbatim to the peacock command. Useful for d3plot/binout-specific flags (-binout, -binout_input, -state, -part, …) or keyword-specific flags (-keyword_delete, -keyword_main_only, …). Caller owns shell-safety of this value. |
| Save Files | save_files | select | yes | | Whether to save output files as attachments. Default yes so agents get durable attachment IDs to pass to downstream tool calls. |
| Execution Timeout (seconds) | execution_timeout | number | 600 | | Maximum seconds to wait for peacock to finish on the remote host. |
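Putting the CLI-facing inputs together: the flags -o, -t, -r, and -zip are documented above, so the remote invocation presumably looks roughly like the sketch below. The executable path shown is a placeholder (the real one is resolved from admin site-variables), and the exact assembly order is an assumption.

```python
# Illustrative sketch of how the worker might assemble the remote peacock
# command. Flag names -o, -t, -r, -zip come from the Inputs table above;
# the executable path is a hypothetical placeholder.
import shlex

def build_peacock_cmd(exe, files, output_name="out", threads=4,
                      recursive=True, zip_output=False, extra_args=""):
    argv = [exe, "-o", output_name, "-t", str(threads)]
    if recursive:
        argv.append("-r")
    if zip_output:
        argv.append("-zip")
    argv += list(files)
    cmd = " ".join(shlex.quote(a) for a in argv)
    if extra_args:  # appended verbatim; caller owns shell-safety (see Extra Args)
        cmd += " " + extra_args
    return cmd

print(build_peacock_cmd("/opt/peacock/peacock", ["main_model.key"]))
# → /opt/peacock/peacock -o out -t 4 -r main_model.key
```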
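For keyword_add_panel, the node/element arithmetic follows directly from the table: elements per axis = nodes − 1, total elements = (nx−1)·(ny−1). The grid layout below is an illustrative reconstruction; only the counts are documented.

```python
# Node/element arithmetic implied by the keyword_add_panel inputs.
# Only the counts are documented; the node ordering is an assumption.
def panel_mesh(x_min=-100, x_max=100, y_min=-100, y_max=100, z=-50,
               nx=21, ny=21):
    assert nx >= 2 and ny >= 2, "need at least 2 nodes per axis"
    dx = (x_max - x_min) / (nx - 1)
    dy = (y_max - y_min) / (ny - 1)
    nodes = [(x_min + i * dx, y_min + j * dy, z)
             for j in range(ny) for i in range(nx)]
    n_elements = (nx - 1) * (ny - 1)
    return nodes, n_elements

nodes, n_elements = panel_mesh()
print(len(nodes), n_elements)  # → 441 400 (21x21 nodes, 20x20 shells)
```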
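The parameterize naming and range rules above can be sketched as follows. This is a reconstruction of the documented behaviour (8-char cap, digit-suffix de-collision, min/max as ± percentage of baseline), not the actual implementation.

```python
# Illustrative reconstruction of the documented *PARAMETER naming scheme
# and design-range math; the real mutator may differ in edge cases.
def param_name(prefix, ident, prop, taken, style="keyword_id"):
    base = (f"{prefix}{ident}_{prop}" if style == "keyword_id"
            else f"{prop}_{ident}")[:8]          # capped at 8 chars
    name, n = base, 1
    while name in taken:                          # de-collide by suffixing digits
        name = base[:8 - len(str(n))] + str(n)
        n += 1
    taken.add(name)
    return name

def param_range(value, min_pct=50, max_pct=50):
    # default 50 → min = 0.5 × value, max = 1.5 × value
    return value * (1 - min_pct / 100), value * (1 + max_pct / 100)

taken = set()
print(param_name("S", 101, "T1", taken))  # → S101_T1
print(param_range(2.0))                   # → (1.0, 3.0)
```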
## Outputs
| Label | ID | Type | Description |
|---|---|---|---|
| Agent Next Steps | agent_next_steps | textarea | LLM-facing guidance (read this first). Lists saved attachment IDs to use for downstream tool calls and explicitly warns that the ‘files’ output contains ephemeral scratch paths that MUST NOT be used for downstream tool calls. |
| Attachments | attachments | dataset | Saved attachments with durable IDs. Use these for every downstream tool call (file_read, file_parser, file_search, peacock_parser). |
| Parameters (chain-ready for DOE) | parameters | dataset | Populated only by keyword_parameterize. Mirrors parameters.csv in dataset shape (columns: name, value, defaultValue, min, max, type, valueType). Chain directly into doe_sampling_point_generator’s variables input — no CSV re-parse needed. |
| Files from Peacock (scratch paths — do not use for tool calls) | files | dataset | Raw listing of files produced on the remote scratch (/dev/shm). The IDs here are sha1 hashes, NOT attachment IDs. Agents MUST NOT pass these to downstream tool calls. |
| STDOUT | stdout | textarea | Standard output from the remote peacock run. |
| STDERR | stderr | textarea | Standard error from the remote peacock run. |
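The parameters output is documented to mirror parameters.csv in dataset shape (columns: name, value, defaultValue, min, max, type, valueType). A hypothetical row is shown below; the column names are from the table above, but the example values and the 'continuous'/'float' strings are invented for illustration.

```python
# Hypothetical example of the 'parameters' dataset shape produced by
# keyword_parameterize. Column names are documented; values are made up.
parameters = [
    {"name": "S101_T1", "value": 2.0, "defaultValue": 2.0,
     "min": 1.0, "max": 3.0, "type": "continuous", "valueType": "float"},
]

# This shape chains directly into doe_sampling_point_generator's
# 'variables' input, with no re-parse of parameters.csv needed.
required_cols = {"name", "value", "defaultValue", "min", "max",
                 "type", "valueType"}
assert all(set(row) == required_cols for row in parameters)
```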
## Disciplines
- cae.postprocessing.extraction
- cae.postprocessing.visualization
- cae.preprocessing.deck_authoring
- cae.solver
- platform.integration
- platform.job_submission
Auto-generated from platform schema. Worker id: peacock_parser. Schema hash: 00f731c2bd67. Hand-curated docs in workerexamples/ override this page when present.