

Using data to contain the damage of a recall


Contributed by: Richard Brine, CTO, and David Mannila

It’s the “R” word that makes manufacturers cringe – recall. The direct cost is only part of the damage – there is also the impact on brand reputation and, consequently, on future sales.

A quality issue in the field will trigger an internal investigation to determine its scope and severity. Forensic experiments may have to be conducted to reproduce the manufacturing issue, which consumes resources, product and production time.

Anything that focuses and shortens this process, and avoids the need for a blanket recall, can be worth millions of dollars, depending on the scope of the recall and the value of the product in question.

If the promise of Manufacturing 4.0 is a connected environment in which root cause analysis and correlation are easy, how does a manufacturer get there? It's all about gathering the right data and then applying the best data management and analytics tools.

All data is the right data – scalars and digital process signatures from processes and test stations on the line, as well as machine vision images with their related datasets.

Break down the silos

All this data must be consolidated into a centralized database, indexed by serial number. The result is a single, consolidated birth history record for each part or assembly.
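
To make the idea concrete, here is a minimal sketch of what a serial-number-indexed birth history record could look like in code. The types, field names, and station examples are hypothetical illustrations, not Sciemetric's actual data model.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class StationResult:
    """One process or test result for a part (hypothetical schema)."""
    station: str                # e.g. "leak_test", "press_fit"
    scalars: Dict[str, float]   # named scalar measurements (peak force, leak rate, ...)
    signature: List[float]      # sampled digital process signature
    passed: bool

@dataclass
class BirthHistory:
    """Consolidated record of everything that happened to one part."""
    serial_number: str
    results: List[StationResult] = field(default_factory=list)

# The centralized store, indexed by serial number: one consolidated
# record per part or assembly, built up as it moves down the line.
birth_records: Dict[str, BirthHistory] = {}

def record_result(serial: str, result: StationResult) -> None:
    birth_records.setdefault(serial, BirthHistory(serial)).results.append(result)
```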

With the right data management tools, signatures can be converted into histograms that can be correlated with the other data types associated with the part to illustrate the profile of a good part and the range of acceptable deviation. This makes it easy to create and visualize a baseline against which to compare all parts.
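
As a simple sketch of that baseline step (assuming, for illustration, that a single scalar feature such as peak pressure has already been extracted from each signature), the profile of a good part and its range of acceptable deviation can be derived from a population of known-good parts:

```python
import numpy as np

# Synthetic stand-in for feature values extracted from known-good parts'
# signatures; in practice these would come from the centralized database.
rng = np.random.default_rng(0)
good_part_peaks = rng.normal(loc=52.0, scale=0.8, size=500)

# Histogram the feature to visualize the profile of a good part.
counts, bin_edges = np.histogram(good_part_peaks, bins=30)

# Derive a band of acceptable deviation from the distribution
# (a 3-sigma band is an assumption here, not a prescribed rule).
mean, sigma = good_part_peaks.mean(), good_part_peaks.std(ddof=1)
lower, upper = mean - 3 * sigma, mean + 3 * sigma

def within_baseline(peak: float) -> bool:
    """Compare a new part's feature against the good-part baseline."""
    return lower <= peak <= upper
```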

Now you’re ready when a warranty claim walks in the door.

Take this example, in which a fuel rail leak was detected at a vehicle plant. Analyzing the test data revealed that every failed part had been a marginal pass on the line. The test limits in use on the test stand were those originally supplied by the part designer and had not been revisited after production startup.

The quality manager used one week of manufacturing test data to determine the impact of applying statistically derived limits. Tightening the test limits would have caught the faulty fuel rails while having only a minor impact on throughput. Two months’ worth of part data was then re-analyzed under the new limits to identify other suspect parts, and three additional “failures” were found. The result? A handful of vehicles were recalled instead of thousands.
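
A hedged sketch of that re-analysis might look like the following. The three-sigma rule and the variable names are assumptions for illustration; the article does not specify how the statistical limits were actually derived.

```python
import numpy as np

def derive_limits(week_of_data: np.ndarray, n_sigma: float = 3.0):
    """Derive tighter, statistically based limits from recent production data."""
    mean = week_of_data.mean()
    sigma = week_of_data.std(ddof=1)
    return mean - n_sigma * sigma, mean + n_sigma * sigma

def flag_suspects(history: np.ndarray, lower: float, upper: float) -> np.ndarray:
    """Re-apply the new limits to archived measurements; True marks a suspect part."""
    return (history < lower) | (history > upper)

# Usage (measurement arrays pulled from the centralized database are assumed):
#   lower, upper = derive_limits(week_leak_rates)
#   suspect_mask = flag_suspects(two_months_leak_rates, lower, upper)
#   suspect_serials = two_months_serials[suspect_mask]
```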

Act and improve

To recap, centralized data collection and analysis allow you to quickly triage your production line and contain a warranty claim.
You can:

  • Use report data to make decisions and act on them to solve the current issue (i.e., fix what is causing problems NOW)
  • Use failure paretos to target the top failure modes on your line (apply the 80/20 rule to determine which adjustments or refinements will have the biggest impact on quality; see the sketch after this list)
  • Use first time yield data to help determine whether an issue is trending in your manufacturing process
  • Use trend reports to determine when a specific issue started to occur
  • And then close the loop:
    • Implement corrective actions for machine tooling
    • Implement a new test algorithm, or limit adjustment
    • Introduce a new quality check or test specification for components coming in from a supplier
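
As a minimal sketch of the failure pareto and first time yield calculations mentioned in the list above (the failure-mode labels and data shape are hypothetical), both reduce to simple aggregations over test records:

```python
from collections import Counter

# Hypothetical stream of first-attempt test results: (serial_number, failure_mode or None).
results = [
    ("SN001", None), ("SN002", "leak"), ("SN003", None),
    ("SN004", "leak"), ("SN005", "press_force_high"), ("SN006", None),
]

# Failure pareto: rank failure modes by count so the 80/20 rule can be applied.
failure_counts = Counter(mode for _, mode in results if mode is not None)
for mode, count in failure_counts.most_common():
    print(f"{mode}: {count}")

# First time yield: fraction of parts that passed on the first attempt.
fty = sum(1 for _, mode in results if mode is None) / len(results)
print(f"First time yield: {fty:.1%}")
```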

For more information and examples of how to use data to solve problems in manufacturing, view the archive of our webinar with Industry Week, titled “Solving Your Top 5 Manufacturing Issues – With Data.”