
Clone Detection

Improving Clone Detection in Simulink
Improving the clone detection workflow in Simulink through usability testing.


In Model-Based Design workflows, components are often reused across different models. Using clone detection, these copies can be located and mapped to a single component, making the model more efficient and easier to track and test.


This page details a usability test conducted to improve the clone detection workflow in Simulink.

Completed during employment at MathWorks
Role: UX Research
Duration: 2 months
Tools: Power BI, G Suite

Introduction to Clone Detection



Clones are modeling patterns that have identical block types and connections. There are two types of clones:

EXACT CLONES: clones that have identical block types, connections, and parameter values.

SIMILAR CLONES: clones that have identical block types and connections but different block parameter values.
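The distinction can be sketched in a few lines (illustrative Python, not the actual Simulink implementation; the `Block` structure and its field names are hypothetical):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Block:
    block_type: str        # e.g. "Gain", "Sum"
    connections: tuple     # how the block is wired to its neighbors
    parameters: tuple = () # (name, value) pairs

def is_similar_clone(a: Block, b: Block) -> bool:
    """Similar clones: identical block types and connections;
    parameter values may differ."""
    return a.block_type == b.block_type and a.connections == b.connections

def is_exact_clone(a: Block, b: Block) -> bool:
    """Exact clones: identical block types, connections,
    AND parameter values."""
    return is_similar_clone(a, b) and a.parameters == b.parameters
```

For example, two Gain blocks wired identically but with gain values of 2 and 5 would count as similar clones, but not exact clones.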

The Clone Detector App

The Clone Detector App is used to find and replace clones in Simulink models. It works through three main stages:

01. Find Clones
Specify settings and detect clones in the model


02. Replace Clones
Refactor the detected clones by creating library blocks and linking them to the model clones

03. Check Equivalency
Check if the refactored model is equivalent to the original model


The Clone Detector App


The need for UX research arose from the following concerns.

01. Low Adoption Rate
Usage of the Clone Detector App was limited to specific regions and companies; there was a need to improve the workflow and drive adoption.

02. Bugs and Enhancements 
There were several customer reports of issues in the App, along with requests for enhancements.

03. New Feature 
There were plans to add a new feature to the App to address some of these requests; it needed UX testing.

The New Feature: Detect Clones Across Model

The new feature was proposed to enable users to detect clones (exact clones only) based on block patterns; unlike before, the clones did not have to be subsystems.


There were two new settings associated with the feature.

01. Minimum Region Size
Minimum number of blocks in the detected clones

02. Minimum Clone Group Size
Minimum number of clones in a detected clone group
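How two thresholds like these might filter raw detection results can be sketched as follows (illustrative Python; the data shapes and function name are my assumptions, not the Simulink implementation):

```python
def filter_clone_groups(groups, min_region_size, min_clone_group_size):
    """Keep only clone groups that satisfy both thresholds.

    groups: list of clone groups; each group is a list of clones,
    and each clone is a list of block names (its region).
    """
    kept = []
    for group in groups:
        # Drop clones whose region is smaller than the minimum region size.
        big_enough = [clone for clone in group if len(clone) >= min_region_size]
        # Keep the group only if enough clones remain in it.
        if len(big_enough) >= min_clone_group_size:
            kept.append(big_enough)
    return kept
```

With a minimum region size of 2, single-block clones are discarded; if that leaves a group with fewer clones than the minimum clone group size, the whole group is dropped.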

NOTE: The work detailed in the remaining sections focused on stage 1 (Find Clones) of the clone detection workflow.

The Research Process

Moderated usability testing
To evaluate the new feature and find improvements in the workflow
5 internal surrogate users







Formulating Research Questions and Hypotheses

Recruiting Participants

Designing the tasks

Conducting the study

Analyzing the results

Proposing design modifications

The Target Workflow


Research Questions and Hypotheses

Can the user notice and enable the feature for detecting clones across the model? (Discoverability)


  • The user will notice and enable the feature to detect clones across the model

Will the user be able to understand if clones were successfully detected?


  • The user will understand the status of clone detection from the logs

  • The user will refer to the Results report for more details regarding the clone detection run

  • The user may expect more details about the Detect Clones Across Model settings in the Results Report

Can the user navigate through the clone results and understand the metrics shown?

  • The user will notice the fields for the number of clone groups detected and the number of clones in each group, and take action accordingly

  • The user will be able to navigate within the Results pane to get more details about the detected clones

Will the user be able to configure different settings for detecting clones across the model? (Minimum Region Size, Minimum Clone Group Size)

  • The user will notice these settings and set their values correctly

Can the user understand the different clone groups detected?

  • The user will be able to differentiate between the different clone groups detected 

  • The user will expect the colors used for highlighting to be similar to the colors used in Clone Detection at the subsystem level

  • The user may refer to the Results pane to see the list of detected clones and clone groups

The Participants

Five participants were recruited for the study. All of them were internal surrogate users: Customer-Facing Engineers (CFEs) at MathWorks. They were chosen because they had been working closely with multiple users (individual and organizational) whose workflows could benefit from clone detection or who were already using it.

A translator was included in sessions where the participants were non-native English speakers.

Some statistics regarding the participants:

3 Non-native English Speakers
All familiar with the Clone Detector App 
2 familiar with Clone Detection scripts and APIs

The Sessions

The study began with a briefing where I informed participants about the format and purpose of the study, emphasizing the fact that the product was being evaluated, not them.

I gave them two tasks to complete.


The following ratings were collected before and after each task.

Ease of Use: 1 (very difficult) to 7 (very easy)
Confidence: 1 (not confident) to 7 (very confident)
Frustration: 1 (not frustrated) to 7 (very frustrated)

After each session, further feedback and ratings were collected to understand the user's thoughts on the new feature and the workflows they had tried out. 


The following qualitative and quantitative data were gathered from the test sessions.

Quantitative Data


  1. Pre- and post-task ratings

  2. Time spent to complete each task

  3. Steps taken to complete each task

Qualitative Data 

User comments and feedback 

Graphs were plotted in Microsoft Power BI to analyze the quantitative data. The qualitative data was analyzed through data affinitization in Excel. The analysis indicated the following issues.

01. Discoverability Issues

1. Users did not notice the feature

All users failed to notice the feature on their own; they had to be prompted to explore the settings, which is where the feature resides.


Graph showing the average number of assists given to users to complete Task 1

Users took 30 minutes on average to find and learn how to use the feature


Graph showing the average number of minutes taken by each user to complete Task 1

2. Users were confused about what the feature did

Users could not understand what 'Detect Clones Across Model' meant; they had to refer to the documentation for help. Not all users knew that clones outside subsystems were not detected by default.


Moreover, the name of the tool does not indicate that only exact clones will be detected. Users were expecting similar clones to be detected too.


Users took 98 steps, compared to 63 in the target workflow, to complete Task 1

NOTE: Target steps were calculated as 3× the minimum number of steps needed to complete the task, to provide sufficient leeway
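That calculation can be written out as a one-liner (illustrative Python; the function name is mine, and the minimum step counts below are back-calculated from the stated targets of 63 and 33):

```python
def target_steps(min_steps, leeway_factor=3):
    """Target step count: the minimum steps needed, times a leeway factor."""
    return min_steps * leeway_factor

# A 21-step minimum yields the 63-step target for Task 1;
# an 11-step minimum yields the 33-step target for Task 2.
```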

02. Configuring the Settings

1. The Settings were not intuitive

Users could not understand what the terms 'Clone Group Size' and 'Region Size' meant, and resorted to trial and error to work out their meaning.


Users took 55 steps and 10 minutes, on average, to complete Task 2

(Target steps for Task 2: 33)


The Confidence rating increased for Task 2, but the Ease of Use rating remained the same after the task: users did not find the settings easy to use, but their confidence increased after completing the task.


Pre- vs. post-task ratings for Confidence and Ease of Use for Task 2

2. Users faced issues during Trial and Error

Users were getting stuck in a loop during trial and error.


User workflow during trial and error

Users disabled the feature by mistake many times: clicking anywhere in the setting's area toggled the checkbox, even without clicking the box directly.

03. Understanding Clone Detection Results

The results of the clone detection run are shown in the 'Clone Detection Actions and Results' pane. 

It contains information regarding the clone groups that have been detected, with metrics such as the number of clones in each group.


Breakdown of Issues within the Workflow

01. Go to the Results Pane

02. Go to the 'Map Clone Groups to Library' tab

03. Observe the entries in the table

04. Click on the entries to see them highlighted in the model

05. Go through the logs

06. Open the Results Report in the Logs tab

Issue counts were recorded at different steps of this workflow: 3, 7, 5, and 8 issues.

Revisiting the Research Questions

Based on the analysis of the results, some of the hypotheses were supported (highlighted in green) and others refuted (highlighted in red).


Images showing the analysis being mapped back to the original hypotheses

Design and Impact

Using the results of the study, changes were proposed to the Clone Detection App.

The feature for detecting clones throughout the model was shipped in Simulink R2021b, incorporating some of these changes and reaching 5 million+ users. Several other changes have been planned for future releases. A few of the changes are mentioned below.

01. Changes to the Name and Placement of the feature
To aid discoverability and ease of use


02. Changes to the settings

Better explanations for terms

More meaningful error messages

Ability to revert to original values

03. Changes to feature functionality
Both similar clones and exact clones to be detected


04. Changes to the Results Pane

Changes to the placement and naming of the tab

Changes to the dynamic behavior of tab content

Changes to the Results Report

05. Changes to Highlighting

A bird's-eye view of the model; option to highlight all clones from the Results pane


01. User Reaction

All participants found the feature to be very helpful once they understood its capabilities.


02. Participant Criteria

Conducting the study with internal surrogate users still gave me a lot of useful input.


03. Analysis

The analysis of the quantitative metrics collected during the study helped back up the qualitative feedback we received.


04. Future

Future studies with an emphasis on the next two stages of clone detection can unearth more usability issues and improve the App.

