So how do you make that initial transition from the manual, clipboard approach to OEE to taking a technology-enabled, systemic approach to driving operational efficiency against your KPIs?
Rockwell Automation’s Matt Giordano, technical evangelist, broke it down into three steps:
- Visualize
- Benchmark
- Optimize
The first step, visualization, puts a stake in the ground, he said, by collecting relevant information that can be presented to decision makers, who can then start the ball rolling on action. Visualization “is that foundational element.”
For benchmarking, visualization becomes part of a broader solution that brings in data from different value streams, collecting downtime data and connecting it to financial information to start considering downtime in dollars and cents, for instance. Plant operators could also layer in things like alarms and notifications that are triggered when OEE drops below a predetermined threshold. “I can start to correlate when I’m seeing poor quality,” Giordano said. “What is the information coming out of the lab versus what I’m seeing on the shop floor? It starts to broaden the value of OEE and its data. Benchmarking takes that OEE core and expands it in terms of the kind of analysis I can do and the data I can visualize.”
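The benchmarking ideas above can be sketched in a few lines: compute OEE from its three standard factors, translate downtime hours into dollars, and fire a notification when OEE drops below a threshold. The 85% threshold and the $1,200-per-hour downtime cost are illustrative assumptions, not figures from Rockwell Automation.

```python
def oee(availability: float, performance: float, quality: float) -> float:
    """OEE is the product of its three standard factors (each 0..1)."""
    return availability * performance * quality

def downtime_cost(downtime_hours: float, cost_per_hour: float) -> float:
    """Express downtime in dollars rather than hours."""
    return downtime_hours * cost_per_hour

def should_alarm(oee_value: float, threshold: float = 0.85) -> bool:
    """True when OEE has fallen below the predetermined threshold."""
    return oee_value < threshold

score = oee(availability=0.90, performance=0.95, quality=0.98)
print(f"OEE: {score:.1%}")                        # 83.8%
print(f"Alarm: {should_alarm(score)}")            # True: below the 85% threshold
print(f"Lost: ${downtime_cost(4.0, 1200):,.0f}")  # $4,800 for 4 hours down
```

Even this much, wired to live plant data, is enough to start correlating quality events with financial impact.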
The optimization step, again, further broadens the scope of OEE. This is where the ability to do things like predictive analytics comes into play: using structured data to understand when and why a machine or part is likely to fail, and intervening before unplanned downtime hits your OEE numbers. “This OEE journey, you can start at any one of these levels or you can start at visualization and work your way up,” Giordano said. “We designed [this] to be value-add at every step of the way.”
Getting into the nuts and bolts, Rockwell Automation’s Terry Gansen, a solution architect, said he works with customers to address the visualization step by looking for sensors and automated processes already in place from which data can be pulled without further capital spend. From there, with increasing granularity and visibility into operations, you can swap out those clipboards for something more accurate and replicable.
With benchmarking in place and optimization underway, “That’s when we start to look at these events that occur and find out if there is a statistical correlation,” Gansen said. “Then we can start to perform that prediction: after so many cycle counts this machine generally starts to need lubrication or some type of adjustment. This lets us not only track but change the manufacturing environment faster.”
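The cycle-count heuristic Gansen describes reduces to a simple check once the correlation work has produced a service interval. A minimal sketch, where the 50,000-cycle lubrication interval is an assumed value standing in for whatever the statistical analysis of a given machine yields:

```python
# Interval learned (hypothetically) by correlating downtime events
# with cycle counts for this machine class.
LUBRICATION_INTERVAL = 50_000

def needs_service(cycle_count: int, last_service_at: int,
                  interval: int = LUBRICATION_INTERVAL) -> bool:
    """Flag service once a full interval has elapsed since the last service."""
    return (cycle_count - last_service_at) >= interval

print(needs_service(cycle_count=152_000, last_service_at=100_000))  # True
print(needs_service(cycle_count=120_000, last_service_at=100_000))  # False
```

The point is the workflow, not the arithmetic: the statistical correlation step turns historical events into an interval, and the check turns the interval into an intervention before unplanned downtime occurs.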
In terms of moving from a limited pilot to broader deployment, or scaling up an initial deployment, Gansen sketched out the process. “You have the top-level enterprise, below that multiple sites, below that areas, and within areas you have lines, and within lines there are work cells. Having that structure, you can work on one particular machine…then start to roll that up in terms of that machine across its line, then compare different lines at one facility.” And then the virtuous cycle kicks in.
“Having that organizational structure in place automatically starts to align the data and the information. If you come up with three to five basic data points, you can start to use that same foundation and expand it across multiple machines and lines and work cells and continue to scale up,” Gansen said. This scale enables deeper analysis and optimization, allowing a manufacturer to start breaking down production by part, by operator, or both. You can even drill further down to look at shifts and individual employees to gain insight into particular sets of knowledge or training, and then automate that to make it consistent and scalable.
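The enterprise-to-work-cell structure Gansen sketches lends itself to a simple tree, with a basic data point (here, OEE) reported at the work-cell leaves and rolled up through lines and sites. The node names and the plain averaging are illustrative assumptions; a real rollup would likely weight by production volume.

```python
from statistics import mean

class Node:
    """One level of the enterprise / site / area / line / work-cell tree."""
    def __init__(self, name, children=None, oee=None):
        self.name = name
        self.children = children or []
        self._oee = oee  # set only on leaf work cells

    def oee(self) -> float:
        """A leaf reports its own OEE; a parent averages its children."""
        if self._oee is not None:
            return self._oee
        return mean(child.oee() for child in self.children)

# Hypothetical plant: two lines of work cells rolled up to one site.
line1 = Node("Line 1", [Node("Cell A", oee=0.82), Node("Cell B", oee=0.88)])
line2 = Node("Line 2", [Node("Cell C", oee=0.91)])
site = Node("Site Alpha", [line1, line2])

print(f"{line1.name}: {line1.oee():.1%}")  # 85.0%
print(f"{site.name}: {site.oee():.1%}")    # 88.0%
```

With this shape in place, comparing one machine against its line, or one line against another at the same facility, is just a matter of reading siblings in the tree.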