# Launch DOE Study from Simulation
Launches a Design of Experiments (DOE) study by cloning and parametrically sweeping an existing simulation. Given a parameter matrix (the experiment table), it creates or reuses a named study, submits each experiment run to HPC, and collects responses via a response template.
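The experiment matrix is the heart of the worker: each row is one design point, each column a parameter substituted into the cloned deck. A minimal sketch of such a matrix, with purely illustrative parameter names (not part of the worker's schema):

```python
# Hypothetical experiment matrix: one row per design point, one column
# per parameter to substitute into the cloned simulation deck.
# Parameter names here are illustrative only.
experiments = [
    {"thickness_mm": 1.2, "yield_mpa": 250.0},  # design point 1
    {"thickness_mm": 1.5, "yield_mpa": 250.0},  # design point 2
    {"thickness_mm": 1.2, "yield_mpa": 300.0},  # design point 3
]

# A well-formed matrix defines the same parameter columns in every row.
columns = set(experiments[0])
assert all(set(row) == columns for row in experiments)
```

A table like this would be supplied through the Experiments input described below.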
## When to use
Tagged: baseline, doe, hpc, optimization, parametric_sweep, random_sampling, response_collection, simulation.
## Inputs
| Label | ID | Type | Default | Required | Description |
|---|---|---|---|---|---|
| Study Name | study_name | scalar | — | ✓ | Human-readable name assigned to the new (or existing) DOE study; must be unique within the project unless add_to_existing_study is enabled. |
| Study Description | study_description | textarea | — | | Optional free-text description summarising the study objective; stored as metadata on the study record. |
| Simulation | source_simulation_id | remote_lookup | — | ✓ | Platform ID(s) of the baseline simulation(s) whose deck will be cloned and parameter-substituted for each experiment run. |
| Experiments | parameters | dataset | — | ✓ | Dataset (tabular) defining the experiment matrix — each row is one design point and each column is a named parameter to be substituted into the simulation deck. |
| HPC Config | hpc_settings | remote_lookup | — | | Reference to a saved HPC configuration (cluster, queue, resource limits) used to submit all experiment jobs; leave blank to inherit the project default. |
| Response Template | responsetemplate_id | remote_lookup | — | | ID(s) of response template(s) that define which KPIs/time-histories to extract automatically after each simulation completes. |
| Type of Experiment Selection | baseline_only | select | yes | ✓ | Controls which experiments are executed: ‘Run Baseline Only’ (default, for deck verification), ‘Run all Experiments’, ‘Run Random Experiments’, ‘Run 3 Design Bounds’, ‘Collect Responses from Completed Simulations’, or ‘Verify Parameter Replacements’. |
| Add to existing Study | add_to_existing_study | select | no | | When enabled and a study with the given study_name already exists, new experiments are appended to it rather than creating a duplicate study. |
| Skip Submission | skip_submission | select | no | ✓ | |
| Sleep Interval After Job Submission | sleep_after_submission | — | 1 | | Number of seconds to wait after each job submission before submitting the next. |
| Iteration number | iteration_number | — | 1 | | If the DOE runs over more than one iteration, specify the iteration number here; it is included in the simulation name. |
| Simulation Start Number | start_index | — | 1 | | Starting index for simulation numbering: the baseline receives this number and each subsequent experiment increments it by 1. |
| Ignore Source Simulation | ignore_source | select | no | ✓ | |
| Format | format | select | no | | |
| Number of Random Sampling | num_random_sampling | — | 3 | | When experiment selection is set to ‘Run Random Experiments’, this determines how many random experiments are run. |
| Remove Simulations belonging to Study before creation | clean_simulations | select | no | ✓ | |
| Verify Template Extractions | verify_template | select | no | | |
| Reference Test | physicaltest_id | remote_lookup | — | | |
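The ‘Type of Experiment Selection’ input decides which rows of the matrix are actually submitted. The behaviour of the three most common modes can be sketched as follows; this is an illustrative model of the documented semantics, not the worker's actual implementation:

```python
import random

def select_experiments(rows, mode, num_random=3, seed=None):
    """Illustrative selection logic; mode names mirror the
    'Type of Experiment Selection' input described above."""
    if mode == "Run Baseline Only":
        return rows[:1]  # the baseline is the first design point
    if mode == "Run all Experiments":
        return list(rows)
    if mode == "Run Random Experiments":
        rng = random.Random(seed)  # seed only for reproducible demos
        return rng.sample(rows, min(num_random, len(rows)))
    raise ValueError(f"unsupported mode: {mode}")

rows = [{"run": i} for i in range(10)]
baseline = select_experiments(rows, "Run Baseline Only")
sampled = select_experiments(rows, "Run Random Experiments", num_random=3, seed=1)
```

Here `num_random` plays the role of the Number of Random Sampling input, defaulting to 3 as in the table above.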
## Outputs
| Label | ID | Type | Description |
|---|---|---|---|
| Study Id | study_id | text | Platform ID of the created or reused DOE study; use this to reference the study in downstream workers. |
| Simulations | simulations | dataset | Dataset listing all simulation records spawned by the study (one row per experiment), including their platform IDs and status. |
| Template | template_id | integer | Integer ID of the response template applied to the study; propagated for use in downstream response-processing workers. |
| Responses | responses | dataset | Dataset of extracted KPI/response values collected from all completed experiment simulations, keyed by experiment and response name. |
| Experiments with Simulation Ids | experiments_with_sim_ids | dataset | Dataset merging the original experiment parameter matrix with the assigned simulation IDs, enabling traceability between design points and runs. |
| Logs | logs | dataset | Dataset of execution and submission log messages for the study launch, useful for debugging failed or skipped experiments. |
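The experiments_with_sim_ids and responses outputs are designed to be joined for traceability. A minimal sketch of that join, using hypothetical field names (the actual column names come from the platform schema and your response template):

```python
# Hypothetical post-processing: join the experiment-to-simulation mapping
# with extracted responses, keyed by simulation ID. Field names are
# illustrative, not guaranteed by the platform schema.
experiments_with_sim_ids = [
    {"experiment": 1, "simulation_id": "sim-001"},
    {"experiment": 2, "simulation_id": "sim-002"},
]
responses = {
    "sim-001": {"max_stress": 412.0},
    "sim-002": {"max_stress": 398.5},
}

# One merged row per experiment; missing responses simply leave
# the response columns absent for that row.
merged = [
    {**exp, **responses.get(exp["simulation_id"], {})}
    for exp in experiments_with_sim_ids
]
```

This is the kind of merge a downstream response-processing worker would perform with the study_id and template_id outputs in hand.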
## Disciplines
- cae.postprocessing.response
- design_exploration.doe
- design_exploration.optimization
- platform.hpc_config
- platform.job_submission
- platform.workflow
Auto-generated from platform schema. Worker id: doe_study_launcher_from_simulation. Schema hash: 0e4069495e3f. Hand-curated docs in workerexamples/ override this page when present.