ASSEMBLER EVALUATE LOADCASE¶
Evaluates all simulations under one or more loadcases against their associated response templates, ranks them using a configurable method (weighted sum, threshold count, or best/worst), and identifies the top-performing simulation. Use this worker to perform structured post-simulation assessment and scoring within the d3VIEW Assembler framework.
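The three ranking methods can be illustrated with a minimal sketch. This is not the worker's internal implementation; the simulation names, response names, and values below are hypothetical, and only "weighted_sum" is shown:

```python
# Sketch of the "weighted_sum" ranking strategy: min-max normalize each
# response, flip "minimize" responses so lower raw values score higher,
# then rank by the weighted total. Illustrative only.

def weighted_sum_rank(sims, directions, weights=None):
    """Rank simulations by a weighted sum of normalized response scores.

    sims: {sim_name: {response_name: value}}
    directions: {response_name: "minimize" or "maximize"}
    """
    responses = list(directions)
    weights = weights or {r: 1.0 for r in responses}
    lo = {r: min(s[r] for s in sims.values()) for r in responses}
    hi = {r: max(s[r] for s in sims.values()) for r in responses}

    def score(values):
        total = 0.0
        for r in responses:
            span = (hi[r] - lo[r]) or 1.0       # avoid divide-by-zero
            norm = (values[r] - lo[r]) / span   # 0..1, higher = larger raw value
            if directions[r] == "minimize":
                norm = 1.0 - norm               # flip so smaller raw values score higher
            total += weights[r] * norm
        return total

    return sorted(sims, key=lambda name: score(sims[name]), reverse=True)

# Hypothetical responses for three simulations under one loadcase.
sims = {
    "run_a": {"peak_force": 12.0, "energy": 950.0},
    "run_b": {"peak_force": 10.5, "energy": 900.0},
    "run_c": {"peak_force": 11.0, "energy": 1000.0},
}
directions = {"peak_force": "minimize", "energy": "maximize"}
print(weighted_sum_rank(sims, directions))  # best simulation first
```

"threshold_count" would instead count how many criteria each simulation satisfies, and "best_worst" would rank by a single best or worst response.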
When to use¶
Classification: default.
Tagged: assembler, best_simulation, evaluation, kpi, loadcase, ranking, response_template, scoring.
Inputs¶
| Label | ID | Type | Default | Required | Description |
|---|---|---|---|---|---|
| Loadcase IDs | loadcase_ids | remote_lookup | — | ✓ | One or more loadcase IDs whose child simulations will be evaluated; at least one is required and multiple selections are supported. |
| Project ID | project_id | remote_lookup | — | | Optional project context used to scope the loadcase lookup; automatically inferred from the selected loadcase(s) if not provided. |
| Criteria JSON | criteria_json | json | — | | Optional JSON object defining per-response evaluation thresholds and optimization direction; format: {"response_name": {"direction": "minimize"/"maximize", "threshold": <number>}}. Omit to skip criteria-based filtering. |
| Ranking Method | ranking_method | select | — | | Algorithm used to rank simulations across responses: "weighted_sum" aggregates normalized scores, "threshold_count" counts criteria met, and "best_worst" ranks by the best or worst single response; defaults to "weighted_sum" when omitted. |
| Re-extract Responses | reextract_responses | boolean | — | | When true, re-triggers Simlyzer response extraction before evaluation rather than using previously stored response values; default is false (use cached values). |
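A Criteria JSON value following the documented format might look like this sketch. The response names and threshold values are illustrative, not taken from a real model:

```python
import json

# Build a criteria_json payload in the documented shape:
# {"response_name": {"direction": "minimize"/"maximize", "threshold": <number>}}
criteria = {
    "peak_force": {"direction": "minimize", "threshold": 12.5},       # hypothetical response
    "absorbed_energy": {"direction": "maximize", "threshold": 900.0}, # hypothetical response
}
criteria_json = json.dumps(criteria, indent=2)
print(criteria_json)
```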
Outputs¶
| Label | ID | Type | Description |
|---|---|---|---|
| Records Dataset | records | dataset | Tabular dataset with one row per simulation and one column per response, providing the raw response values used for ranking. |
| Evaluation JSON | evaluation_json | json | Structured JSON containing per-simulation scores, per-criteria pass/fail results, and the best simulation identified for each loadcase. |
| Best Simulation ID | best_simulation_id | text | Platform ID of the top-ranked simulation (resolved from the first loadcase when multiple loadcases are evaluated). |
| Best Simulation Name | best_simulation_name | text | Human-readable name of the top-ranked simulation, suitable for display or downstream reporting. |
| Simulation Count | simulation_count | number | Integer count of the total number of simulations evaluated across all specified loadcases. |
| Summary | summary | text | Plain-text narrative summarizing the evaluation outcome, including the best simulation and key scoring results; intended for human review or LLM agent consumption. |
| Status | status | text | Short status string (e.g., "success" or an error message) indicating whether the worker completed successfully; intended for programmatic checks in agent workflows. |
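In an agent workflow, the Status and Evaluation JSON outputs might be consumed along these lines. The output payload below is a hypothetical sketch keyed by the output IDs in the table above; the exact structure inside evaluation_json is not guaranteed:

```python
import json

# Hypothetical worker output -- field names follow the Outputs table,
# but the evaluation_json internals are an assumed example structure.
outputs = {
    "status": "success",
    "best_simulation_id": "sim_001",
    "best_simulation_name": "baseline_v2",
    "simulation_count": 3,
    "evaluation_json": json.dumps({
        "loadcase_1": {"best": "sim_001", "scores": {"sim_001": 1.67, "sim_002": 1.0}}
    }),
}

# Programmatic check on the short status string before using results.
if outputs["status"] == "success":
    evaluation = json.loads(outputs["evaluation_json"])
    best = outputs["best_simulation_id"]
    print(f"Best of {outputs['simulation_count']} simulations: {best}")
else:
    raise RuntimeError(f"Evaluation failed: {outputs['status']}")
```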
Disciplines¶
- ai_ml.agents
- cae.postprocessing.extraction
- cae.postprocessing.response
- platform.workflow
Auto-generated from platform schema. Worker id: assembler_evaluate_loadcase. Schema hash: e35eddc6ab7f. Hand-curated docs in workerexamples/ override this page when present.