###############################
ML_MODEL_SELECTOR Worker
###############################

Overview
--------

The ``ML_MODEL_SELECTOR`` worker is a newly added component under **Shapes** in Workflows. It enables users to compare multiple available machine learning models directly within a workflow. The worker provides an **ML Model Investigation** tab, where users can analyze model performance using a ranking table and visualizations based on training data, helping them select the most suitable model for a given use case.

Key Features
------------

- **Model Comparison Capability**
  Allows evaluation of multiple ML models within a single workflow.
- **ML Model Investigation Tab**
  Dedicated interface for analyzing model performance and behavior.
- **Ranking Table**
  Displays models ranked by performance metrics for easy comparison.
- **Training Data Visualizations**
  Provides visual insights into model performance using training datasets.
- **Seamless Workflow Integration**
  Easily connects with existing ML pipelines and data sources.

Usage
-----

1. Open the Workflow canvas.
2. Navigate to the **Shapes** panel.
3. Add the ``ML_MODEL_SELECTOR`` worker to the canvas.
4. Connect inputs such as training data and model configurations.
5. Run the workflow.
6. Open the **ML Model Investigation** tab to:

   - View the ranking table of models
   - Analyze visualizations of training data
   - Compare model performance metrics

.. video:: _static/movies/modelselectorworker.mp4
   :width: 100%

|

ML Model Investigation Models Toggle
===============================================

The **ML Model Investigation** feature now includes a **Models toggle** in the header, giving users greater control over model comparison management. With this update, users can view all models included in the comparison, add new models, and manage existing ones through edit and delete actions.
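Conceptually, the toggle manages an editable set of models behind the comparison. The sketch below mirrors the view, add, edit, and delete actions; the class and configuration names are hypothetical illustrations, not the product API:

```python
class ModelComparison:
    """Editable collection of models backing an investigation view (illustrative)."""

    def __init__(self):
        self._models = {}  # model name -> configuration dict

    def add(self, name, config):
        # The "+" icon: include an additional model in the comparison.
        self._models[name] = dict(config)

    def edit(self, name, **changes):
        # Edit action: adjust an existing model's configuration in place.
        self._models[name].update(changes)

    def remove(self, name):
        # Delete action: drop a model from the comparison.
        self._models.pop(name, None)

    def list_models(self):
        # "View all models": names currently part of the investigation.
        return sorted(self._models)


comparison = ModelComparison()
comparison.add("linear_regression", {"alpha": 0.0})
comparison.add("random_forest", {"n_estimators": 100})
comparison.edit("random_forest", n_estimators=200)
comparison.remove("linear_regression")
print(comparison.list_models())
```

In this sketch, the comparison table and visualizations would simply re-render from ``list_models()`` after each action, matching the dynamic updates described below.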
Key Features
------------

- **Models Toggle in Header**
  Provides quick access to view and manage all models included in the comparison.
- **View All Models**
  Displays the complete list of models currently part of the investigation.
- **Add New Models**
  Enables users to include additional models in the comparison using the **+** icon.
- **Edit Models**
  Allows modification of existing model configurations directly from the toggle panel.
- **Delete Models**
  Provides the ability to remove models from the comparison.

Usage
-----

1. Open the **ML Model Investigation** tab.
2. Locate the **Models toggle** in the header.
3. Click the toggle to open the models panel.
4. Perform the desired actions:

   - Click the **+** icon to add a new model
   - Select an existing model to edit its configuration
   - Use the delete option to remove a model from the comparison

5. The comparison table and visualizations update dynamically based on the selected models.

.. video:: _static/movies/modelsmlmodelinvestigation.mp4
   :width: 100%

|

Coefficients Widget Enhancement in ML Investigation
=====================================================

The Coefficients widget in the **ML Investigation** tab has been enhanced to provide a more intuitive and interactive visualization of model variables. It now displays coefficients per model using color-coded bars, improving interpretability and comparison across models. Additional enhancements, such as intercept separation, hover highlights, and grouped bar comparisons, further enrich the analysis experience.

Key Features
------------

- **Color-Coded Bars**
  Visualizes model coefficients using distinct colors for better differentiation.
- **Per-Model Visualization**
  Displays variables and their corresponding coefficients for each model.
- **Intercept Separation**
  Clearly distinguishes intercept values from other variable coefficients.
- **Hover Highlights**
  Highlights bars on hover to improve readability and focus.
- **Grouped Bar Comparison**
  Enables side-by-side comparison of coefficients across multiple models.

Usage
-----

1. Open the **ML Investigation** tab.
2. Navigate to the Coefficients widget.
3. Select or view multiple models.
4. Analyze:

   - Variable importance via color-coded bars
   - Intercept values separately
   - Grouped bars for cross-model comparison

5. Hover over bars to view detailed highlights and insights.

.. thumbnail:: /_images/Images/coefficientsmlinvestigation.png
   :title: Coefficients Widget

.. centered:: :sup:`Coefficients Widget`

|

ML Model Investigation Baseline Comparison
=====================================================

The **ML Model Investigation** feature now supports setting a model as a baseline using the **Set as Baseline** option in the table context menu. This enhancement lets users compare model performance relative to a selected baseline model. The **Baseline Comparison** view provides clear visual indicators of performance differences, including percentage changes, directional arrows, and color-coded deltas.

Key Features
------------

- **Set as Baseline Option**
  Allows users to designate any model as the baseline directly from the table context menu.
- **Baseline Comparison View**
  Displays performance differences of all models relative to the selected baseline.
- **Percentage Difference Indicators**
  Shows the percentage change for each metric compared to the baseline.
- **Directional Arrows**
  Uses up and down arrows to indicate performance improvement or decline.
- **Color-Coded Deltas**

  - Green → positive improvement over the baseline
  - Red → negative decline compared to the baseline

- **Baseline Badge**
  Highlights the selected baseline model with a visible badge in the table.

.. video:: _static/movies/modelbaselinemlinvestigation.mp4
   :width: 100%

|

ML_MODEL_SELECTOR Outputs
=====================================

The ``ML_MODEL_SELECTOR`` worker in Workflows has been enhanced to provide two new outputs: **Selected Model** and **Verified Predictions**. These outputs enable seamless integration of model selection results into downstream workflow steps. The **Selected Model** is determined directly from the ranking table using the context menu, while **Verified Predictions** are generated from the selected model’s predicted values.

Key Features
------------

- **Selected Model Output**
  Outputs the model chosen by the user from the ranking table.
- **Ranking Table Integration**
  Allows users to select a model via the context menu within the ranking table.
- **Verified Predictions Output**
  Provides predictions generated from the selected model.
- **Dynamic Updates**
  Outputs are automatically updated based on the selected model and its predictions.

Usage
-----

1. Add the ``ML_MODEL_SELECTOR`` worker to the workflow.
2. Connect training data and model configurations.
3. Execute the workflow.
4. Open the **ML Model Investigation** tab.
5. In the ranking table:

   - Right-click on a model
   - Select the desired option to mark it as the selected model

6. Access the outputs:

   - **Selected Model** → available for downstream workflow connections
   - **Verified Predictions** → contains predicted values from the selected model

.. video:: _static/movies/outputsinvestigationmlmodelworker.mp4
   :width: 100%

|

Compare Models Button for Dataset Outputs
===========================================

A new **Compare Models** button is now available for dataset outputs that contain a ``model_path`` column. This feature enables quick access to model comparison capabilities directly from dataset results. By leveraging the ``model_path`` information, users can launch model comparison workflows without manually configuring inputs.
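The availability rule can be illustrated with a minimal sketch in which a dataset output is modeled as a list of row dictionaries; the row contents, file paths, and helper names are illustrative assumptions, not the product API:

```python
# Minimal sketch: a dataset output is modeled as a list of row dicts.
def has_model_path(rows):
    """Return True when every row carries a 'model_path' value,
    i.e. when the Compare Models button would be shown."""
    return bool(rows) and all("model_path" in r for r in rows)


def collect_model_paths(rows):
    """Gather the distinct model paths, preserving first-seen order,
    so a comparison can be launched without manual input configuration."""
    seen, paths = set(), []
    for r in rows:
        p = r["model_path"]
        if p not in seen:
            seen.add(p)
            paths.append(p)
    return paths


dataset = [
    {"metric": "rmse", "value": 0.42, "model_path": "models/linreg.pkl"},
    {"metric": "rmse", "value": 0.37, "model_path": "models/gbm.pkl"},
    {"metric": "mae", "value": 0.29, "model_path": "models/gbm.pkl"},
]

if has_model_path(dataset):
    print(collect_model_paths(dataset))
```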
Key Features
------------

- **Automatic Availability**
  The **Compare Models** button appears when a dataset includes a ``model_path`` column.
- **Direct Model Comparison Access**
  Allows users to initiate model comparison directly from dataset outputs.
- **Seamless Integration**
  Connects dataset outputs with model comparison tools such as the **ML Model Investigation** interface.
- **Reduced Manual Setup**
  Eliminates the need to manually gather and configure model paths for comparison.

.. thumbnail:: /_images/Images/comparemodelsbuttonfordatasetinputes.png
   :title: Compare Models button

.. centered:: :sup:`Compare Models button`

|
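The percentage changes, directional arrows, and color-coded deltas shown in the **Baseline Comparison** view described earlier can be sketched as follows; the metric values and the higher-is-better convention are illustrative assumptions, not the product's exact computation:

```python
def baseline_delta(value, baseline, higher_is_better=True):
    """Percent change of a metric relative to the baseline, plus a
    direction marker and a color, as in the comparison view (sketch)."""
    pct = (value - baseline) / baseline * 100.0
    improved = pct > 0 if higher_is_better else pct < 0
    arrow = "▲" if pct > 0 else ("▼" if pct < 0 else "-")
    color = "green" if improved else ("red" if pct != 0 else "neutral")
    return round(pct, 1), arrow, color


# Accuracy: higher is better, so 0.90 against a 0.85 baseline improves.
print(baseline_delta(0.90, 0.85))
# RMSE: lower is better, so 0.42 against a 0.37 baseline declines.
print(baseline_delta(0.42, 0.37, higher_is_better=False))
```

A lower-is-better flag (or equivalent metric metadata) is needed so that an upward arrow can still be colored red when an increase means worse performance.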