Good morning, Daniel. I sent you the copies last week. I tried to load the data into Minitab. There are two types of attribute studies: 1. Attribute Agreement Analysis – the results correspond to the AIAG manual. 2. Attribute Gage Study (Analytic Method) – FAILED to generate results because there are too few trials per part. According to the statistical guide, Minitab accepts either summarized data or raw data for attribute gage studies, with the condition that the number of trials for each part must be greater than or equal to 15. If you use the AIAG method (the default) to test for gage bias, you need exactly 20 trials per part. Even ignoring the appraisers, I only have 9 trials per piece. Can you help me with the issue described above, Daniel? Looking forward to your answer, and thank you in advance…

Best regards, Alice
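Before running either analysis, it can help to verify that the data actually contain enough trials per part. Here is a minimal sketch in Python with pandas, using made-up column names and values (not Alice's data, and not Minitab's own checks), that counts the trials recorded for each part and compares them against the two requirements quoted above:

    import pandas as pd

    # Hypothetical raw attribute-gage data: one row per inspection result.
    # Column names and values are assumptions for illustration only.
    data = pd.DataFrame({
        "Part":      [1, 1, 1, 2, 2, 2, 3, 3, 3],
        "Appraiser": ["A", "B", "C"] * 3,
        "Result":    ["Pass", "Pass", "Fail", "Pass", "Pass", "Pass", "Fail", "Fail", "Fail"],
    })

    # Count how many trials were recorded for each part (all appraisers combined).
    trials_per_part = data.groupby("Part")["Result"].count()
    print(trials_per_part)

    # Thresholds quoted from the Minitab guidance discussed above:
    #   at least 15 trials per part for the Attribute Gage Study (Analytic Method),
    #   exactly 20 trials per part for the default AIAG method.
    if (trials_per_part < 15).any():
        print("Too few trials per part for the Attribute Gage Study (Analytic Method).")
    elif (trials_per_part != 20).any():
        print("Enough trials for the Analytic Method, but not exactly 20 as the AIAG method requires.")
    else:
        print("Trial counts meet the AIAG requirement of exactly 20 trials per part.")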

I recently 100% inspected some parts for attribute defects. The supplier had delivered roughly 1 or 2 defective parts per 100 in each of the three previous shipments. The defect was a tapped hole that was not centered in the keyway, as indicated on the print. When I inspected these parts, it appeared that all of the tapped holes were centered in the keyway. Later, I was shown that at least 60% of the tapped holes were in fact off center. My employer called my inspection negligent. My question is: can someone be preconditioned, or change their behavior, when checking something they have seen before, and miss an obvious defect? Does anyone have any data, especially quantitative data, that supports this theory?

Since the % agreement for each appraiser falls within the confidence interval for the other appraisers, we must conclude that there is no statistically significant difference between the three appraisers. We introduced the kappa value in the last newsletter. It can be used to measure an appraiser's agreement with the known standard. Kappa can range from -1 to 1. A kappa value of 1 represents perfect agreement between the appraiser and the standard. A kappa value of -1 represents perfect disagreement between the appraiser and the standard. A kappa value of 0 indicates that the observed agreement is no better than what would be expected by chance alone. Therefore, kappa values close to 1 are desired.

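To make the kappa calculation above concrete, here is a minimal sketch in Python, using made-up pass/fail ratings for one appraiser against the known standard (the data and variable names are illustrative assumptions, not results from the newsletter):

    from collections import Counter

    # Hypothetical ratings for 20 parts: the known standard and one appraiser.
    # The data values are assumptions for illustration only.
    standard  = ["P","P","F","P","F","P","P","F","P","P","F","P","P","P","F","P","F","P","P","F"]
    appraiser = ["P","P","F","P","P","P","P","F","P","P","F","P","F","P","F","P","F","P","P","F"]

    n = len(standard)

    # Observed agreement: fraction of parts where the appraiser matches the standard.
    p_observed = sum(s == a for s, a in zip(standard, appraiser)) / n

    # Chance agreement: probability the two would agree if each rated at random
    # with their own observed Pass/Fail proportions.
    std_counts = Counter(standard)
    app_counts = Counter(appraiser)
    p_chance = sum((std_counts[c] / n) * (app_counts[c] / n) for c in ("P", "F"))

    # Cohen's kappa: how much better than chance the observed agreement is.
    kappa = (p_observed - p_chance) / (1 - p_chance)

    print(f"% agreement with standard: {100 * p_observed:.1f}%")
    print(f"kappa vs. standard:        {kappa:.3f}")

With this made-up data the appraiser disagrees with the standard on 2 of 20 parts, giving 90% agreement but a kappa of only about 0.78; kappa comes out lower than the raw % agreement because it discounts the agreement that would be expected purely by chance.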
I did an attribute Gage R&R study for an inspection process (pass/fail). We are currently doing 100% inspection. Normally, you would think there is an alpha risk and a beta risk for the analysis. Is there a way to calculate them? Is there also a way to assess the trade-off between reducing the frequency of inspection and shipping rejected parts to the customer? Thanks.

Matt, here are some basic guidelines for performing an attribute Gage R&R study:

1) Choose 20 parts or samples for inspection.