No discussion of the basics of Six Sigma is complete without a breather on Measurement System Analysis. One of the great things about implementing Six Sigma in the workplace is that there is no single “best answer” or “best possible solution” to any problem. Six Sigma simply provides us with a tool box – and a vast one at that. Experienced 6S professionals don’t evangelize the same approach for every problem they encounter; the contributors to the problem and the team dynamics are always in flux, and adaptation is essentially mastery.
Adaptation and innovation are what I think make 6S stand out from other quality management systems such as TQM, ISO 9000, and Baldrige. These systems propose – I’m careful not to say “impose” – a clear-cut template or recipe based on a pre-defined approach. Only Six Sigma sets out to understand and reduce variation through a statistical process control (SPC) approach.
Six Sigma offers many tools, and the Six Sigma Black Belt learns, with increasing skill and positive results, which tools to apply to a specific problem, whether it concerns product design (software, hardware, or systems), manufacturing, service, and so on. Sometimes I’ll need pliers and a hammer. Sometimes I’ll need a screwdriver and a saw. And sometimes I’ll need a ratchet and a crowbar. Nonetheless, the Six Sigma Black Belt should always promote the right tool and train and empower everyone on how and when to use it.
When is a problem good enough for Six Sigma?
Should we use Six Sigma to identify root causes of variation in the irregular replacement of toilet paper in the company bathroom? No. Let’s use a little common sense. Many executive and upper management teams will recognize the need to change certain corporate behaviors. The first key to implementing Six Sigma successfully in an organization is executive sponsorship and recognizing the need to change the corporate culture into a Lean enterprise. An executive who considers hiring a Six Sigma Master Black Belt to fix a single specific problem is missing the point. Implementing Six Sigma is a long-term commitment, much like a marriage, not a one-night stand.
Measurement System Analysis (MSA) and the Voice of the Customer (VOC)
The point has been made. We’ll use Six Sigma as a corporate strategy to effect change, empower employees to succeed, and provide them with the tools to recognize and reduce variation in whatever they do, process-wise or product-manufacturing-wise. This sounds philosophical and is meant to be. We need everyone to accept ownership, responsibility, and accountability, and most importantly to know how to have fun and keep smiling when times are tough. Positive change won’t occur if everyone is mapping a SIPOC every five minutes just for the sake of knowing how. You’ll just end up with a lot of SIPOCs and no sustainable results.
Now for a little Six Sigma oversimplification. (When you learn about TRIZ and the 40 Principles, you learn simplification and separation of the components of a problem, so I owe no apologies for oversimplifying 6S in the next few paragraphs – I say this tongue in cheek, so please read on.) If I were to choose one major Six Sigma theme to frame a Measurement Systems Analysis, it would be how the VOC ties into the MSA.
I won’t go into detail right now about Measurement System Analysis, granularity, Gage R&R, or repeatability and reproducibility. I want to stay (as my Master Black Belt taught me) as high as I can for as long as I can, so please allow me to throw a little more philosophy at you.
Any properly conducted VOC will identify your customer’s requirements for a product or service. No two VOCs are the same; they draw on customer interviews, focus groups, cross-functional teams, Kano or other properly weighted surveys, tech-support complaints, and other sources where customer data can be captured and an action plan put to use. Before conducting a VOC, there must be a goal or objective in place. VOCs that set out to “tell our sales team what they can improve” are a waste of time and won’t reveal the sweet spots our marketplace is telling us we can further improve. A VOC should determine whether the customer’s requirements distinguish their wants from their needs (more on Kano another time) and whether we can improve specific aspects of their experience. How can we improve the customer’s experience with our product or service so that they purchase more of it, so that we reduce our costs and increase our return while perhaps even lowering the price (but not the value) they pay, and so that, ultimately, we don’t lose them to the competition?
Therefore, all VOC data must be used to determine our MSA guidelines before we can accurately stand up to our executive teams and tell them their data is wrong. An MSA mustn’t be used to put reporting dashboards in place for the hell of it. Data mining for the sake of data mining is futile and is actually a form of waste (yes, overproduction and transportation).
A properly set up MSA should highlight the current measurement and reporting systems in a company and whether they’re telling us anything. A weekly Microsoft Excel report indicating the number of product returns should be considered in the MSA. So should tech-support call logs indicating who, when, where, and what customers are calling about (the 5Ws).
The first two questions I ask anybody who proudly shows me a report and tells me it is part of their MSA are:
I see a bunch of numbers, graphs, and colors but:
a) What is it that you’re trying to show me?
b) Have you shown this to the customer?
I don’t mean the internal customer we call Product Management or Manufacturing. I mean have you shown this to our paying customer?
I love the reactions on people’s faces and the typical responses I get:
a) If you can’t interpret the data I’m showing you, then you don’t understand.
b) The only customer that matters is the next one in my flow, the internal customer.
These are exactly the behaviors we need to change if we want to strive for continued success!
Any report that provides data should be optimized to show data that can be benchmarked and that leads to improvement. That is what metrics are for. Metrics are simply markers that make data meaningful. Numbers by themselves are arbitrary: 1 is 1, 812482348 is 812482348, .881248 is .881248, Pi is Pi. Nothing more can be said about them. Numbers become meaningful only when there’s a clear context – when I know what it is you want me to understand and help change.
The number of product returns is important when I know what the numbers are over time, and what they are when I change a key input like manufacturing location, product features, and so on. The number of calls received is arbitrary and meaningless unless I know you’re showing the number of calls for a certain product or in a demographic area or top DMA, whether the customer has been trained, or whether the product has been recently launched or redesigned.
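To make the point concrete, here is a minimal sketch with hypothetical numbers: a raw count of returns becomes a metric only once context (units shipped, over time) is attached.

```python
# Raw counts vs. a contextualized metric: returns per 1,000 units shipped.
# All figures below are hypothetical, for illustration only.
returns = {"Jan": 42, "Feb": 45, "Mar": 90}
shipped = {"Jan": 14000, "Feb": 15000, "Mar": 15000}

# Normalize each month's count against volume to get a comparable rate.
rate = {month: 1000 * returns[month] / shipped[month] for month in returns}
print(rate)
```

The raw counts suggest a gradual rise, but the normalized rate shows the real signal: a flat 3 per thousand through February, then a jump to 6 per thousand in March – that is the number worth investigating.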
Metrics and measurements
I want to conclude by suggesting that the measurements you observe should point you to the key metrics to measure and report. Here are some pointers for the not-so-obvious:
• Measure things that matter – to our customer (Critical to Satisfaction)
• Measure consistency and understand variation (Critical to Quality)
• Monitor and evaluate performance (Critical to Delivery)
• Measure conformance to the process
• Identify improvement opportunities
• Make your metrics practical and graphical
• The right metrics drive the right behaviors
Not just for calibration
An MSA should help you better understand “hidden” factories, processes, defects, and anything that causes variation. MSA should be approached scientifically, and objectively in order to best analyze the validity of an existing measurement system. But it doesn’t stop there. An MSA can be as effective where no measurement systems exist and shouldn’t be limited to Gage R&R. Think of an MSA like a project within a project to ensure that scope, time, and budget are respected.
So be careful before making any adjustments.
An MSA should validate the following points of an existing measurement system:
1. Resolution – let’s not measure a mile with a micrometer, and let’s not measure a micron with a yardstick
a. Simplest measurement system problem
b. Poor resolution is a common issue
c. Impact is rarely recognized and/or addressed
d. Easily detected
e. No special studies are necessary
f. No “known standards” are needed
2. Stability – or consistency
a. Measurements remain constant and predictable over time
b. No drifts or sudden shifts
c. Evaluated using control charts
3. Accuracy – to check for bias (either skewed data or operator error)
a. Calibrate when needed/scheduled
b. Use operations instructions
c. Review specifications
d. Review software logic
e. Create and Promote Standard Operating Procedures (à la ISO 9001)
4. Linearity – is accuracy consistent across the measurement range?
a. Can measurement data be correlated?
5. Precision – Reproducibility and Repeatability
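Sub-point 2c above – evaluating stability with control charts – can be sketched as follows. The readings are hypothetical repeated measurements of one reference part, and the limits use the standard individuals/moving-range (I-MR) estimate of short-term sigma.

```python
# Individuals (I-MR) control-chart check for measurement stability.
# The measurement data below are hypothetical.

def control_limits(values):
    """3-sigma limits from the average moving range (d2 = 1.128 for n = 2)."""
    center = sum(values) / len(values)
    moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]
    mr_bar = sum(moving_ranges) / len(moving_ranges)
    sigma = mr_bar / 1.128  # standard I-MR estimate of short-term sigma
    return center - 3 * sigma, center + 3 * sigma

def out_of_control(values):
    """Return the readings that fall outside the control limits."""
    lcl, ucl = control_limits(values)
    return [v for v in values if v < lcl or v > ucl]

measurements = [10.01, 9.98, 10.02, 10.00, 9.99, 10.01, 10.60, 10.00]
print(out_of_control(measurements))  # the 10.60 reading signals a shift
```

A stable system produces an empty list here; any flagged reading is a drift or sudden shift worth investigating before trusting the gage.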
Who’s measuring? Who cares?
The one thing an effective MSA will ensure is that no matter who takes the measurements or does the observing, the end result is the same. The MSA must ensure that no matter how many times a report is run, or by how many different people, the results are identical: objective and independent of the operator.
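That operator-independence is exactly what a Gage R&R study quantifies. Here is a deliberately simplified sketch with hypothetical data (a real study would use the full ANOVA method from the AIAG MSA manual): repeatability as the variation within each operator/part cell, reproducibility as the variation between operator averages.

```python
# Minimal Gage R&R sketch: 3 operators each measure 2 parts 3 times.
# All measurement values are hypothetical; this is a simplification,
# not the full AIAG ANOVA procedure.
from statistics import mean, pvariance

# data[operator][part] = list of repeat measurements
data = {
    "op1": {"A": [10.0, 10.1, 10.0], "B": [12.0, 12.1, 12.0]},
    "op2": {"A": [10.2, 10.3, 10.2], "B": [12.2, 12.3, 12.2]},
    "op3": {"A": [10.0, 10.0, 10.1], "B": [12.0, 12.0, 12.1]},
}

# Repeatability (equipment variation): pooled variance of repeat readings.
cells = [trials for parts in data.values() for trials in parts.values()]
repeatability_var = mean(pvariance(trials) for trials in cells)

# Reproducibility (appraiser variation): variance of the operator means.
operator_means = [mean(t for trials in parts.values() for t in trials)
                  for parts in data.values()]
reproducibility_var = pvariance(operator_means)

grr_var = repeatability_var + reproducibility_var
print(f"repeatability={repeatability_var:.5f}, "
      f"reproducibility={reproducibility_var:.5f}, GRR={grr_var:.5f}")
```

In this made-up data set the reproducibility term dominates – op2 reads consistently high – which is precisely the operator-dependence an MSA is meant to expose.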
Keys for successful Measurement Systems Analysis
1. Define and validate measurement process
2. Identify known elements of the measurement process (operators, gages, SOP, setup, etc.)
3. Clarify purpose and strategy for evaluation
4. Set acceptance criteria
5. Implement preventive/corrective action procedures
6. Establish on-going assessment criteria and schedules
A checklist of questions to ask about any measurement system:
1. Written inspection/measurement procedure?
2. Detailed Process Map developed?
3. Specific measuring system and set-up defined?
4. Trained or Certified Operators?
5. Instrument calibration performed in a timely manner?
6. Tracking Accuracy?
7. Tracking % R&R?
8. Tracking Bias?
9. Tracking Linearity?
10. Tracking Discrimination?
11. Correlation with supplier or customer where appropriate?
12. Have we picked the right measurement system? Is this measurement system associated with either critical inputs or outputs?
13. What do the precision, accuracy, tolerance, P/T ratio, % R&R, and trend chart look like?
14. What are the sources of variation and what is the measurement error?
15. What needs to be done to improve this system?
16. Have we informed the right people of our results?
17. Who owns this measurement system?
18. Who owns trouble shooting?
19. Does this system have a control plan in place?
20. What is the calibration frequency? Is that frequent enough?
21. Do identical systems match?
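Question 13 above mentions the P/T ratio and %R&R. With hypothetical variance components, the two headline numbers reduce to a couple of ratios; note that the common acceptance thresholds (under 10% is good, 10–30% is marginal) are a widely used rule of thumb, not a universal standard.

```python
# Hypothetical results from a Gage R&R study (standard deviations).
sigma_grr = 0.05      # combined repeatability + reproducibility
sigma_total = 0.25    # total observed variation (process + measurement)
usl, lsl = 10.5, 9.5  # hypothetical spec limits -> tolerance = 1.0

# %R&R: share of total observed variation consumed by the gage.
pct_rr = 100 * sigma_grr / sigma_total

# P/T ratio: gage spread (6 sigma) relative to the tolerance band.
pt_ratio = 6 * sigma_grr / (usl - lsl)

print(f"%R&R = {pct_rr:.0f}%")   # prints "%R&R = 20%"
print(f"P/T  = {pt_ratio:.2f}")  # prints "P/T  = 0.30"
```

At 20% R&R this hypothetical gage sits in the marginal zone: usable, but a fifth of what the reports call “process variation” is really measurement noise.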
“When you can measure what you are speaking about, and express it in numbers, you know something about it; but when you cannot express it in numbers, your knowledge is of a meager and unsatisfactory kind. It may be the beginning of knowledge, but you have scarcely, in your thoughts, advanced to the state of science.” – Lord Kelvin
“If you cannot measure, you cannot improve!”
You can download many Six Sigma Templates at our sister site templatestaff.com