Analysing OEE
We can see from our driver model that our overall equipment effectiveness (OEE) is only 57%. Our availability, quality and performance numbers are all above 80%, which looks fine on the surface, but because OEE is the product of all three factors, we aren’t really that effective at producing widgets.
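The gap between the individual factors and the overall figure follows directly from OEE being a product. A minimal sketch of the arithmetic, using illustrative factor values (assumptions for demonstration, not necessarily the model's exact inputs):

```python
# OEE = Availability x Performance x Quality.
# Each factor below is at or above 80%, yet the product compounds
# down to roughly 57% -- the effect described above.
# These specific values are illustrative assumptions.
availability = 0.85
performance = 0.84
quality = 0.80

oee = availability * performance * quality
print(f"OEE = {oee:.0%}")  # ~57%, despite every factor being >= 80%
```

This is why "all factors above 80%" is not as reassuring as it sounds: small losses in each factor multiply rather than average out.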
That is our baseline scenario. We now want to investigate different scenarios to see which “levers” we can pull to increase our OEE. While it’s possible to simply adjust the numbers in the driver model, doing so gives us no way to compare scenarios against each other.
But first, so that we can create meaningful reports and compare scenarios, we need to set OEE, Availability, Performance and Quality to publish to results, as shown here. We also want to set their parameter group to Results so that they are grouped together within the Research grid.
Now switch to the Research grid. Notice that because we haven’t “executed” the scenario yet, no results appear in the grid. This is different from the Build screen, where automatic evaluation is turned on. Run the model using the blue Play button at the top.
We now want to see how our metrics change as we pull different levers.