Measuring System Analysis 300
This class explains the purpose and methods of measuring system analysis, including measurement variation and gage repeatability and reproducibility (GRR) studies.
Number of Lessons: 20
- What Is Measuring System Analysis?
- What Is SPC?
- Purpose of MSA
- Defining the Measuring System
- Measurement Value
- Measurement Uncertainty
- Measurement Assurance
- Types of Measuring System Variation
- Gage Variation
- Gage Calibration
- Gage Capability Studies
- Measuring System Performance Studies
- Short Form GRR Study
- Long Form GRR Study
- Analysis of Variance Study
- Choosing Variables
- Measurement Study Preparation
- Acceptance Criteria
Objectives
- Define measuring system analysis.
- Compare SPC with MSA.
- Describe the purpose of MSA.
- List common factors that can impact the measuring system.
- Identify the four variables that combine to yield a measurement value.
- Explain how measurement uncertainty affects tolerance limits.
- Describe common practices that are included in measurement assurance operations.
- Distinguish between sources of random variation and system variation.
- Distinguish among the primary sources of gage variation.
- Describe the commonality between gage linearity and gage stability.
- Describe how to conduct an ideal gage capability study.
- Describe how to conduct a practical gage capability study.
- Describe gage repeatability and reproducibility.
- Distinguish the short form GRR study from other types of measuring system performance studies.
- Distinguish the long form GRR study from other types of measuring system performance studies.
- Distinguish the analysis of variance study from other types of measuring system performance studies.
- Explain common approaches for choosing optimum test variables.
- List the steps of effective measurement study preparation.
- Describe the typical range of acceptance criteria used in manufacturing.
Vocabulary
acceptance criteria
An agreed upon condition or characteristic that must be present in order for a part to pass inspection.
accuracy
The predicted difference on average between the measurement and the true value. Accuracy is also known as bias.
actual measured value
The number indicated on the measuring device as the size of the dimension. Actual measured value is part of measurement value.
analysis of variance study
A series of measurement trials that analyzes repeatability, reproducibility, and the interaction between them and other causes of variation. Analysis of variance is also known by the acronym ANOVA.
ANOVA
The acronym for analysis of variance.
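As a rough sketch of how an ANOVA study separates the sources of variation: with a balanced parts × operators layout and repeated trials, the standard two-way random-effects formulas yield variance components for repeatability, reproducibility, and the part-to-part spread. The readings below are invented, and the layout (2 operators, 3 parts, 2 trials) is just an assumption for illustration:

```python
# ANOVA GRR sketch for a balanced, crossed design (made-up readings).
# data[operator][part] -> list of repeated trials
data = {
    "A": {1: [2.0, 2.1], 2: [2.4, 2.3], 3: [1.9, 2.0]},
    "B": {1: [2.1, 2.2], 2: [2.5, 2.4], 3: [2.0, 2.1]},
}
ops = sorted(data)
parts = sorted(data[ops[0]])
o, p = len(ops), len(parts)
r = len(data[ops[0]][parts[0]])          # trials per cell

all_vals = [x for op in ops for pt in parts for x in data[op][pt]]
gm = sum(all_vals) / len(all_vals)       # grand mean

part_mean = {pt: sum(x for op in ops for x in data[op][pt]) / (o * r)
             for pt in parts}
op_mean = {op: sum(x for pt in parts for x in data[op][pt]) / (p * r)
           for op in ops}
cell_mean = {(op, pt): sum(data[op][pt]) / r for op in ops for pt in parts}

# Sums of squares for parts, operators, interaction, and equipment
ss_p = o * r * sum((part_mean[pt] - gm) ** 2 for pt in parts)
ss_o = p * r * sum((op_mean[op] - gm) ** 2 for op in ops)
ss_po = r * sum((cell_mean[(op, pt)] - part_mean[pt] - op_mean[op] + gm) ** 2
                for op in ops for pt in parts)
ss_e = sum((x - cell_mean[(op, pt)]) ** 2
           for op in ops for pt in parts for x in data[op][pt])

ms_p = ss_p / (p - 1)
ms_o = ss_o / (o - 1)
ms_po = ss_po / ((p - 1) * (o - 1))
ms_e = ss_e / (p * o * (r - 1))

# Variance components (negative estimates clamped to zero)
repeatability = ms_e
interaction = max((ms_po - ms_e) / r, 0.0)
operator = max((ms_o - ms_po) / (p * r), 0.0)
part = max((ms_p - ms_po) / (o * r), 0.0)
grr = repeatability + interaction + operator   # measuring system variance
print(round(grr, 5), round(part, 5))
```

The appeal of the ANOVA study over the range-based methods is visible here: the operator-by-part interaction gets its own component instead of being folded into reproducibility.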
bias
The predicted difference on average between the measurement and the true value. Bias is also known as accuracy.
common cause
A fixed source of variation within a system. A system may have multiple forms of common cause.
control chart
A graph used during SPC or MSA methods that charts data and provides a picture of how a process is performing over time.
control limit
A horizontal line on a control chart that represents a boundary for a process. If the process strays beyond a control limit, it is out of control.
A detailed plan for a part or object that includes dimensions and other precise descriptions of its manufacturing requirements.
gage calibration
The comparison of a device of unknown accuracy to a device with a known, accurate standard to eliminate any variation in the device being checked.
gage capability
A gage's predictable range of ability, even when under the influence of natural variation due to common causes.
gage repeatability and reproducibility
The interaction between gage accuracy and precision. GRR helps determine the type of variation in the measuring system, isolate product variation from measuring system variation, and reduce the overall gage error.
gage system error
The combination of gage stability, linearity, accuracy, repeatability, and reproducibility.
gage variation
The difference in multiple measurements taken by the same gage under similar conditions.
ideal gage capability study
A series of trials in which a gage is tested under the best circumstances to verify that gage specifications can be met. Ideal gage capability studies are often performed in a laboratory with as much random variation removed as possible.
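In spirit, an ideal gage capability study amounts to measuring a known reference repeatedly under controlled conditions and summarizing the gage's bias and spread. The reference value and readings below are invented for illustration:

```python
# Ideal gage capability sketch: repeated measurements of one reference
# standard in a controlled setting. All numbers are made up.
from statistics import mean, stdev

reference = 25.000                      # certified reference value (assumed)
readings = [25.001, 24.999, 25.002, 25.000, 24.998,
            25.001, 25.000, 24.999, 25.002, 25.001]

bias = mean(readings) - reference       # accuracy (bias) estimate
spread = stdev(readings)                # repeatability estimate
print(round(bias, 4), round(spread, 4))
```

A practical study would repeat the same arithmetic, but on readings taken in the shop rather than in a laboratory.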
The quality management system based on the standard published by the International Organization for Standardization.
linearity
The amount of error change throughout an instrument's measurement range. Linearity is also the amount of deviation from an instrument's ideal straight-line performance.
long form GRR study
A series of measurement trials that offers an accurate, mathematical method of calculating GRR. Long form GRR is also known as the range and average method.
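To make the range and average method concrete: for a layout of 2 operators, 2 trials, and 5 parts, repeatability (equipment variation, EV) comes from the average within-part range, and reproducibility (appraiser variation, AV) from the spread of the operator averages. The K1 and K2 constants below are the values commonly tabulated for that layout; the readings are invented:

```python
# Long form (average and range) GRR sketch. Data are made up;
# K1/K2 are the commonly tabulated constants for 2 trials / 2 operators.
from math import sqrt

# data[operator] -> list of parts, each a list of trials
data = {
    "A": [[2.0, 2.1], [2.4, 2.3], [1.9, 2.0], [2.6, 2.7], [2.2, 2.2]],
    "B": [[2.1, 2.0], [2.5, 2.5], [2.0, 2.1], [2.7, 2.6], [2.3, 2.4]],
}
n_parts, n_trials = 5, 2

# Repeatability (EV): average within-part range times K1
all_ranges = [max(t) - min(t) for op in data.values() for t in op]
r_bar = sum(all_ranges) / len(all_ranges)
K1 = 0.8862                     # constant for 2 trials
ev = r_bar * K1

# Reproducibility (AV): spread of operator averages, EV removed
op_means = [sum(sum(t) for t in op) / (n_parts * n_trials)
            for op in data.values()]
x_diff = max(op_means) - min(op_means)
K2 = 0.7071                     # constant for 2 operators
av_sq = (x_diff * K2) ** 2 - ev ** 2 / (n_parts * n_trials)
av = sqrt(av_sq) if av_sq > 0 else 0.0

grr = sqrt(ev ** 2 + av ** 2)   # combined gage R&R
print(round(ev, 4), round(av, 4), round(grr, 4))
```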
manufacturing error
Variation caused by the manufacturing process that affects the size of the part. Manufacturing error is part of measurement value.
mean
The average of a numerical set. It is found by dividing the sum of a set of numbers by the number of members in the set.
measurement assurance
The ability to quantify measurement uncertainty and show that the total uncertainty is small enough to meet product specifications. Measurement assurance requires a process approach similar to quality assurance.
measurement error
Variation caused by the measuring process that affects the measured size of the part. Measurement error is part of measurement value.
measurement uncertainty
The estimate of the difference between a measured value and the true value. Measurement uncertainty values are often included in the final value of a part.
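One common way uncertainty interacts with tolerance limits is guard banding: the acceptance zone is tightened by the uncertainty so a part that passes inspection is very likely within specification. The feature dimensions and uncertainty below are hypothetical:

```python
# Guard banding sketch: shrink the tolerance zone by the measurement
# uncertainty U before deciding pass/fail. All values are made up.

def acceptance_limits(lower, upper, uncertainty):
    """Tighten the tolerance limits by the measurement uncertainty."""
    return lower + uncertainty, upper - uncertainty

# Hypothetical 25.00 mm +/- 0.05 mm feature, U = 0.01 mm
lo, hi = acceptance_limits(24.95, 25.05, 0.01)
print(lo, hi)                     # tightened acceptance zone

measured = 25.045
print(lo <= measured <= hi)       # False: in tolerance, but rejected
                                  # once guard banding is applied
```

The larger the uncertainty, the narrower the usable acceptance zone, which is why reducing gage error directly recovers tolerance.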
measurement value
A combination of the actual measured value indicated by the instrument, the nominal value, manufacturing error, and measurement error.
measurement variation
A type of variation that occurs when the same object is measured multiple times but produces different results.
measuring system
The unique devices and processes used to measure and inspect a part, including the measuring device, the operator, the operator's technique, the part, the feature being measured, the environment, and time.
measuring system analysis
The methods used to verify and monitor the accuracy and quality of a measuring system using statistical study of repeated tests of the gages and other parts of the system. MSA tools identify the amount of variation in the gage by isolating the measurement variation from the process variation.
measuring system performance study
A series of trials in which the measuring system is tested to determine the system's ability to consistently produce measurements using actual parts and realistic conditions.
measuring system study
A series of trials in which the measuring system is tested to determine the system's ability to consistently produce measurements.
nominal value
The size by which an object is known, such as a two-inch screw. Nominal value is part of measurement value.
practical gage capability study
A series of trials in which a gage is tested under real conditions to verify that gage specifications can be met. Practical gage capability studies are often performed in the shop with the typical sources of variation present.
precision
The degree to which an instrument will repeat the same measurement over a period of time.
process variation
A type of variation that occurs when there are differences in multiple instances of the same process.
random variation
Differences that occur due to outside influences, such as temperature or vibration. Random variation can be detected and corrected through MSA.
range and average method
Another term for long form GRR study.
range method
Another term for short form GRR study.
repeatability
The variation that occurs among measurements made by the same operator. Repeatability is a form of random variation.
reproducibility
The difference in the average of groups of repeated measurements made by different operators. Reproducibility is also known as appraiser variation and between operator variation.
short form GRR study
A series of measurement trials that offers a crude estimate of combined GRR. Short form GRR is also known as the range method.
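The short form boils down to two appraisers measuring the same small set of parts once each; the average of the between-appraiser ranges, divided by a tabulated d2* constant, estimates the combined GRR. The readings, the d2* value of 1.19 (commonly tabulated for two appraisers and five parts), and the process sigma below are all illustrative:

```python
# Short form (range method) GRR sketch. Two appraisers, five parts,
# one trial each. All numbers are illustrative assumptions.

appraiser_a = [0.85, 0.75, 1.00, 0.45, 0.50]
appraiser_b = [0.80, 0.70, 0.95, 0.55, 0.60]

ranges = [abs(a - b) for a, b in zip(appraiser_a, appraiser_b)]
r_bar = sum(ranges) / len(ranges)       # average between-appraiser range

D2_STAR = 1.19                          # tabulated d2* for 2 appraisers, 5 parts
grr = r_bar / D2_STAR                   # estimated measuring system sigma

process_sigma = 0.0777                  # assumed from a prior process study
pct_grr = 100 * grr / process_sigma     # GRR as a percent of process variation
print(round(grr, 4), round(pct_grr, 1))
```

The method is quick but crude, as the definition above notes: it cannot separate repeatability from reproducibility.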
stability
The ability of a measuring instrument to retain its calibration over a long period of time. Stability determines an instrument's consistency over time.
standard deviation
A number representing the degree of variation within a numerical set.
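The mean and standard deviation defined above can be computed directly; the values here are arbitrary:

```python
# Mean and sample standard deviation, computed from their definitions.
values = [2.0, 2.2, 1.9, 2.1, 2.3]

mean = sum(values) / len(values)                  # sum / count
variance = sum((v - mean) ** 2 for v in values) / (len(values) - 1)
std_dev = variance ** 0.5                         # sample standard deviation
print(mean, round(std_dev, 4))
```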
statistical process control
The use of statistics and control charts to measure key quality characteristics and control how the related manufacturing process behaves.
system variation
Fixed differences that are considered a series of common causes. System variation often exists within the gage itself in the form of bias or accuracy.
variation
The deviation of a value from a norm or standard.
working standard
A measurement standard used to check or calibrate measuring instruments. Gage blocks are an example of a working standard.
x-bar chart
A chart used to track a series of sample averages.
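The chart just described can be sketched numerically: subgroup averages are tracked against control limits built from the average range and the standard A2 factor (0.577 for subgroups of five). The subgroup data are made up:

```python
# X-bar chart sketch: control limits from the average subgroup range.
# Subgroup readings are invented; A2 = 0.577 is the standard factor
# for subgroups of size 5.

subgroups = [
    [5.1, 5.0, 4.9, 5.2, 5.0],
    [5.0, 5.1, 5.0, 4.8, 5.1],
    [4.9, 5.0, 5.2, 5.0, 4.9],
    [5.1, 5.2, 5.0, 5.1, 5.0],
]
x_bars = [sum(g) / len(g) for g in subgroups]     # subgroup averages
ranges = [max(g) - min(g) for g in subgroups]     # subgroup ranges

x_double_bar = sum(x_bars) / len(x_bars)          # center line
r_bar = sum(ranges) / len(ranges)
A2 = 0.577

ucl = x_double_bar + A2 * r_bar                   # upper control limit
lcl = x_double_bar - A2 * r_bar                   # lower control limit
out_of_control = [i for i, xb in enumerate(x_bars)
                  if not lcl <= xb <= ucl]
print(round(lcl, 3), round(ucl, 3), out_of_control)
```

This is the SPC side of the picture: the same chart, applied to repeated gage readings rather than process output, is one way MSA monitors a measuring system over time.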