Practice Exams:

Lean Six Sigma Green Belt – Six Sigma Measure Phase Part 3

  1. Attribute Agreement Analysis Case Studies

All right, here is a case study. We have 25 patient registration forms; that is what PRF stands for. We have four appraisers: appraiser one, two, three, and four. These four appraisers evaluate the patient registration forms twice, so we have two trials, trial one and trial two. For appraiser two we also have trial one and trial two, and the same follows for appraisers three and four. We have the details of whether a specific patient registration form is complete or incomplete.

We also have a standard, which is the chief medical examiner. This person is considered to be always correct because he has a wealth of experience backing him. In order to perform attribute agreement analysis, all we need to do is go to Stat on the Minitab menu, go to Quality Tools, and select the option which says Attribute Agreement Analysis. So let me click on this. There we go.

Since we have the data in multiple columns there, you need to select Multiple columns. Hey, nothing is appearing here. The reason is that you will see the column names in this space only after you click in this window, or in this field rather. There we go. The moment I click, you see all these options here. All we need to do is select appraiser one trial one, press down the Shift key on your keyboard, and select appraiser four trial two.

So I'm selecting all the appraisers, and I click on Select. There we go. Appraiser one trial one, trial two, appraiser two trial one, trial two, and so on until appraiser four trial two; all the columns are added here. Multiple columns are added. How many appraisers do we have? We have four appraisers: appraiser one, two, three, and four. How many trials is each person conducting? Two trials. So I'll select two there.

Do we know any standard or attribute against which we want to measure the performance of the other appraisers? Absolutely. That's your chief medical examiner. All we need to do is click on OK, and the magic box will show you the result here. All right, there we go. Rather than spend time here, I want to spend time on the raw data. So I'll click on this to see what's going on. Let me click on Attribute Agreement Analysis. There we go. Within Appraisers, which you are seeing here, shows how many times each person is agreeing with themselves, and this is called repeatability.

You can always go back to the Measure phase, Attribute Agreement Analysis, and try to understand this. Any value which is less than 90% is an area of concern for us. Out of the 25 patient registration forms that appraiser one has evaluated, he is agreeing with himself only in 22 places, or for 22 patient registration forms. Hence, the accuracy is 88%, and you have an accuracy of 92% for appraisers two and three. Appraiser four is your top performer; he has 100% accuracy. This is within-appraiser repeatability. So you can cautiously accept this, provided you have appraiser one undergo some kind of formal training to ensure that this value increases to probably 90% or greater.
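If you want to see the arithmetic behind that 88%, here is a minimal Python sketch of the within-appraiser (repeatability) percentage. The ratings in it are hypothetical placeholders, not the actual PRF data; Minitab of course computes this for you.

```python
# Minimal sketch of within-appraiser repeatability: the share of forms on which
# an appraiser's two trials agree with each other. Data values are hypothetical.

def repeatability(trial1, trial2):
    """Fraction of items on which the two trials give the same rating."""
    matches = sum(a == b for a, b in zip(trial1, trial2))
    return matches / len(trial1)

# Hypothetical ratings for one appraiser across a few forms
appraiser1_trial1 = ["Complete", "Incomplete", "Complete", "Complete"]
appraiser1_trial2 = ["Complete", "Incomplete", "Incomplete", "Complete"]

print(f"Within-appraiser agreement: {repeatability(appraiser1_trial1, appraiser1_trial2):.0%}")
# In the case study, 22 matches out of 25 forms gives the 88% shown in the output.
```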

The kappa statistics in this output are part of your Six Sigma Black Belt discussion, so I'm not going to discuss them at this point of time. Let me show you Between Appraisers next. Between appraisers is called reproducibility. That is, in how many instances out of 25 patient registration forms are all four appraisers, across all the trials, agreeing with each other? That is only in 17 instances out of 25, which is 68%, and that is not a good sign for us. If it is greater than 90%, you can safely assume that your measurement system is quite reliable. Anything between 70% and 90% says, all right, you can cautiously accept your measurement system and proceed with further analysis. If it is less than 70%, we tend to reject it. So there's a lot of mismatch among your various appraisers. I think this "between appraisers" could be called "among appraisers", because we have more than two appraisers in this scenario. All right, that is what this statistic is all about.

Within appraisers and between appraisers is what we have discussed so far. Let us look into Each Appraiser vs Standard. This is called individual accuracy. We are trying to measure the performance of each appraiser versus the chief medical examiner, who happens to be the standard or the subject matter expert. And here the percentage is not so good for appraisers one and two.

It's kind of okay for appraiser three, but appraiser four is doing a great job. So maybe your chief medical examiner has to conduct some additional knowledge transfer sessions for your first, second, and third appraisers, just to ensure that the percentage is greater than 90%. Now let us finally look into All Appraisers vs Standard.

And this is called team accuracy: 64%, which is not a good sign at all. Out of 25 patient registration forms, only 16 forms matched the standard. I mean, I'm checking whether all the appraisers' evaluation results across the two trials matched the chief medical examiner's, and that happens to be 64%; pathetic. It should have been at least greater than 70% for us to proceed with the measurement system. So you put everyone through trainings, through knowledge transfer sessions from your chief medical examiner, and thereby achieve the end result. This is one case study for us.
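To make the 68% and 64% figures concrete, here is a small Python sketch of how the between-appraisers, each-appraiser-vs-standard, and all-appraisers-vs-standard percentages can be computed. The ratings here are hypothetical ("C" for complete, "I" for incomplete), not the actual case-study data, and the output shape is deliberately simplified compared with Minitab's session window.

```python
# Minimal sketch of the remaining agreement percentages; ratings are hypothetical.

def all_agree(*rating_series):
    """Fraction of items where every listed rating series gives the same value."""
    n = len(rating_series[0])
    agree = sum(len({series[i] for series in rating_series}) == 1 for i in range(n))
    return agree / n

# Hypothetical: two trials for two appraisers plus the standard (chief medical examiner)
a1_t1 = ["C", "I", "C", "C"]; a1_t2 = ["C", "I", "C", "C"]
a2_t1 = ["C", "I", "I", "C"]; a2_t2 = ["C", "I", "C", "C"]
standard = ["C", "I", "C", "C"]

# Between appraisers (reproducibility): every appraiser, every trial agrees
print("Between appraisers:", all_agree(a1_t1, a1_t2, a2_t1, a2_t2))

# Each appraiser vs standard (individual accuracy): both of that appraiser's trials match the standard
print("Appraiser 1 vs standard:", all_agree(a1_t1, a1_t2, standard))

# All appraisers vs standard (team accuracy): everyone, every trial, matches the standard
print("All appraisers vs standard:", all_agree(a1_t1, a1_t2, a2_t1, a2_t2, standard))
# In the case study, 16 matching forms out of 25 gives the 64% team accuracy seen above.
```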

Now let me show you another case study, which will help us get an even better understanding. Let us now look into the second case study. This is about CBSH Bank. We'll be looking into this Minitab file and trying to solve this problem. The compliance team at CBSH Bank checks application forms to verify whether all documents are complete or not. There have been recent complaints of them sending incomplete application forms to the credit check team.

Now, the compliance manager wanted to assess the reliability of the current inspection activity, and therefore he got access to 20 application forms. He got those 20 application forms checked by four compliance team executives. So the four compliance team executives would be called your appraisers. All right, the compliance manager sent the same application forms again to the four compliance team executives for inspection, and the four compliance team members weren't aware that they were inspecting the same forms. Now, is this measurement system acceptable or not? That is what we are going to solve.

But who is the standard here? Who is our subject matter expert? It happens to be the compliance manager, because he has been doing this activity day in, day out, and he has reached this level after having a thorough understanding of the compliance checks and how to perform them. So this person would be our standard or subject matter expert. Now let us open this CBSH Bank Minitab file and try to solve this. All right, here is our CBSH MTW case study. Let us try to solve this. So we have 20 application forms. We have compliance executives.

We have four of them: executive one, two, three, and four. Each person conducts two trials, trial one and trial two. And you also have a standard or subject matter expert, who happens to be your compliance manager. In order to solve this attribute agreement analysis case study, all we need to do is go to Stat, then Quality Tools, and select the option which says Attribute Agreement Analysis. There we go.

Since we have the data in multiple columns, we need to select this option, the radio button which says Multiple columns. In order to view all the columns here, you need to click within this field. There we go. I need to select executive one trial one through executive four trial two, and if I click on Select, everything is listed here. How many appraisers do we have?

We have four appraisers, because we have four compliance executives. And how many trials is each person performing? Each person performs two trials. So let me select two there. Finally, do we have a standard or known attribute? Absolutely. We have the compliance manager here, so let me select the compliance manager and click on OK. I'll do the same activity: I'll go back and look at the session window. Here is the attribute agreement analysis.

Within Appraisers is called repeatability. That is, how many times is each person agreeing with himself when it comes to evaluating the forms? Except for the first appraiser, everyone else is doing an amazing job. In all 20 places appraiser two has agreed with himself, appraiser three has agreed with himself all 20 times, and appraiser four has agreed with himself 20 times out of the 20 applications he has evaluated. Only appraiser one has missed out on one application form.

The rest of the people are doing a great job. If you want to go back to the worksheet and check, hey, look at this. Executives two, three, and four have 100% agreement. Executive two is saying that application one is complete, and he's agreeing with himself even in the second trial, so on and so forth. So 100% of the time, people have agreed with themselves when it comes to your executives two, three, and four.

Your appraiser one, or executive one, however, has not agreed with himself on one occasion. Where is that? Here, application form 13: in the first evaluation, the first trial, executive one is saying that application form 13 is complete, and in the second trial he is saying it is incomplete. He's not agreeing with himself.

Now, let me go back to this. There we go. So probably you need to have him attend some kind of additional training or get some kind of knowledge transfer session from the other appraisers. But 95% is not a bad thing; it's a good thing. Greater than 90% is amazing. Now, let us look into Between Appraisers. How are they agreeing among themselves? This is called reproducibility. Here, in 80% of the instances, all four appraisers agree among themselves. That is also kind of okay, because greater than 90% would be amazing, but anything which lies between 70% and 90% is also kind of okay.

Now let us look into Each Appraiser vs Standard. This is called individual accuracy. When it comes to individual accuracy, most of them are at 90% or above, except for your first appraiser. That is, the first appraiser is not always agreeing with the standard, which happens to be the compliance manager's evaluation results. If you see here, executive one says complete, complete, and the compliance manager is also saying it's complete. So there he has agreed with the compliance manager.

Where is he not agreeing? Let us try to quickly find that. Okay, here, obviously: the first executive's answer does not match, because your compliance manager says it's complete for application form 14. Though executive one has agreed with himself in both these trials, he has not agreed with the compliance manager there. Even with respect to the 20th application form, incomplete and incomplete is what executive one says in both trials, as opposed to complete, which is your compliance manager's evaluation result. Let me go back here.

So probably your compliance manager has to give an additional knowledge transfer session to appraiser one. And when it comes to All Appraisers vs Standard, this is called team accuracy. Here it's 80%. So you can cautiously accept the measurement system; you cannot accept it outright, but cautiously we can accept it. This brings us to the end of the attribute agreement analysis case study discussion. The other case studies which follow would be discussed in the subsequent recordings. Thank you so much.

  2. Capability Indices Case Studies

Hello. Welcome to the exercise of calculating Cp and Cpk. We need to calculate Cp and Cpk for all of these five entries. The formula for Cp is USL, which is the upper specification limit, minus LSL, the lower specification limit, divided by six times the standard deviation. If you want to calculate Cp for the first entry, it happens to be 229 minus 224, because it is USL minus LSL, and then you divide this by six times the standard deviation.

And the standard deviation happens to be 1.5. We can write this as five divided by six times 1.5, which would be nine, and this would be approximately equal to 0.55, so on and so forth. Now, a Cp value of greater than or equal to 1.66 is a good indication; that means the variation in the process would be low. However, since here we have 0.55, the variation in the process, the variation among the different data points, would be higher. If you look at this table, USL and LSL happen to be the same.

The standard deviation is also the same. So throughout this table, for each and every entry, Cp would be the same, which is 0.55. When we calculate Cpk, the formula changes slightly. The formula would be the minimum of USL minus mean divided by three times the standard deviation, and mean minus LSL divided by three times the standard deviation.

Let us try to calculate this for the first entry here. So, if I substitute these values into this formula, it turns out to be the minimum of USL minus mean, which is 229 minus 225, divided by three times the standard deviation, which happens to be three times 1.5, and mean minus LSL divided by three times the standard deviation. What is the mean here? It is 225. So that's 225 minus LSL, which happens to be 224, divided by three times the standard deviation, which happens to be three times 1.5.

If you solve these two: the minimum of 229 minus 225, which is USL minus mean, upper specification limit minus mean, happens to be four, divided by three times 1.5, which happens to be 4.5; and 225 minus 224, which is mean minus LSL, happens to be one, divided by three times 1.5, which is 4.5. If I expand this further, it would be the minimum of four divided by 4.5. Any guesses on what that value would be? It would be approximately 0.89. And then we have one divided by 4.5, which happens to be approximately 0.22. The minimum of these two would be 0.22, and that would be your Cpk.

And as a rule of thumb, a Cpk value of greater than or equal to 1.33 is a good indication; that means your process mean would be close to the target. In this scenario, your Cpk value happens to be 0.22, so your process mean is extremely far away from the target. And since the process mean varies for each and every row, the Cpk values across these rows will change when you calculate them. I'm assuming that this makes your understanding clear on how to calculate the Cpk values for the rest of the entries. Thank you so much for listening to this recording. Stay tuned to learn more about further case studies, which were left for you all for practice. Thank you.
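If it helps to double-check this arithmetic, here is a minimal Python sketch of the Cp and Cpk formulas just discussed, plugged with the first row's values (USL 229, LSL 224, mean 225, standard deviation 1.5). It is only a worked illustration of the formulas, not a replacement for a full capability study.

```python
# Minimal sketch of the Cp and Cpk formulas worked through above.

def cp(usl, lsl, sd):
    """Cp = (USL - LSL) / (6 * standard deviation)."""
    return (usl - lsl) / (6 * sd)

def cpk(usl, lsl, mean, sd):
    """Cpk = min((USL - mean) / (3 * sd), (mean - LSL) / (3 * sd))."""
    return min((usl - mean) / (3 * sd), (mean - lsl) / (3 * sd))

print(f"Cp  = {cp(229, 224, 1.5):.2f}")        # (229 - 224) / 9
print(f"Cpk = {cpk(229, 224, 225, 1.5):.2f}")  # min(4/4.5, 1/4.5) = 1/4.5 ≈ 0.22
```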

  3. Sigma Level – Continuous Case Studies

Here we go. So we have the processing time, and the first step would be to check whether the data follows normal distribution or not. Alongside that, we also need to calculate the mean and standard deviation of the data. So I've selected processing time. When you open this for the first time, you don't see anything here in the variables field, so we need to double-click on this, or select this and click on Select. Yeah, let me click on OK. Now, there we go. The moment we look at this curve, it says that the data follows normal distribution, or it appears as if it's a normal distribution. Also, the p-value happens to be greater than 0.05, which is an indication that the data follows normal distribution.

What are the mean and standard deviation here? The mean is 22.92 and the standard deviation happens to be 5.963. If I substitute these two values, along with 20, which is the upper specification limit, into the sigma level calculator: I've placed the mean here, and I've placed the standard deviation and the upper specification limit, since the lower specification limit wasn't given.

And it was clear from the case study that the lower specification limit can be taken as zero. So I've entered zero and 20. The moment I key in these values, it has calculated the Z short term, which happens to be 1.15. All right. And the difference between the sigma level short term and long term would always be 1.5 sigma. So 1.15 minus 1.5 would give you the sigma level long term.
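As a quick illustration of that relationship, here is a one-line Python sketch of the conventional 1.5-sigma shift between short-term and long-term sigma level; the Z short term value is simply the one read off the calculator above.

```python
# Conventional 1.5-sigma shift between short-term and long-term sigma level.
z_short_term = 1.15              # value read from the sigma level calculator above
z_long_term = z_short_term - 1.5
print(z_long_term)               # ≈ -0.35
```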

In order to understand what sigma level short term and long term are, I request you all to go back to the Measure phase recording on sigma level and try to get a better understanding, if you haven't achieved that understanding yet. Now we also need to discuss how to calculate the probability, because another part of this specific question was: what is the probability of processing the internet bills within 20 minutes? So let us try to solve that. All right, we are back to this case study. We need to calculate the probability of processing the internet bills within 20 minutes.

So this is the second part that we're trying to solve. Off to our Minitab file. This was the data set wherein we had the processing time details. All we need to do is go to Graph and select Probability Distribution Plot, and since I want to view the probability, I'll select View Probability and click on OK. The mean and standard deviation that we calculated earlier for this specific data set need to be provided here, and since the data follows normal distribution, we have selected Normal. Then I go to the Shaded Area tab. We want to calculate the probability of the processing time being less than 20 minutes, so I select the left tail, because it is "less than", and I give the value as 20. If I click on OK... ah, look at that. The mean and standard deviation have been taken by default as zero and one, so I need to rectify that. To do that, I go back here. I need to change this.

The mean was 22.92, so I'll put that in, and the standard deviation happens to be 5.963. Oh my God. Now I go to the shaded area. Since I want the probability of less than 20 minutes, I select the left tail. Since it is 20 minutes, I select the x value as 20 and then I click on OK. There we go. There is a 36.29% probability. Next, we have the test time of the lab.
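For readers who prefer to verify a shaded-area result outside Minitab, here is a minimal Python sketch that computes a left-tail probability from a normal distribution the same way the probability distribution plot does. It assumes SciPy is available and reuses the mean and standard deviation read from the graphical summary.

```python
from scipy.stats import norm

# Left-tail probability P(X < 20) for a normal distribution with the mean and
# standard deviation taken from the graphical summary of the processing times.
mean, sd = 22.92, 5.963
p_within_20_minutes = norm.cdf(20, loc=mean, scale=sd)
print(f"P(processing time < 20 min) = {p_within_20_minutes:.2%}")
```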

All we first need to do is check whether the data follows normal distribution or not, by going to Stat, Basic Statistics, Graphical Summary. Here I'm going to select the test time, double-click on that to select it, and simply click on OK. All right, the data seems to be following a normal distribution; the red curve is a neat bell-shaped curve. Here the p-value is 0.74, which is greater than 0.05 and indicates that the data follows normal distribution. We also know the mean and standard deviation.

All we need to do is substitute these values into the sigma level calculator to get the Z value. So let me go to the sigma level calculator now. All right, I've substituted these values into the sigma level calculator, which says that the sigma level short term has gone into the negative, which is minus 1.35, which is pathetic. So the performance of this process happens to be pathetic. All right, here is a case study of cricket ball diameter, or rather circumference. All right, let me go to Stat, Basic Statistics, and click on... rather than going to Display Descriptive Statistics, we would select Graphical Summary. This will let us know whether the data follows normal distribution or not.

And we will also be able to calculate the mean and standard deviation. So let me select this. Let me select the cricket ball... it has to be circumference. My bad, I need to change that. Now let me click on OK. There we go. The data seems to follow normal distribution, looking at the red curve. The p-value is greater than 0.05, which is also an indication that the data follows normal distribution.

We also have the mean and standard deviation for this data, which happen to be 226.36 and 1.63. Let me substitute these two values into the sigma level calculator to get the sigma level short term. The moment we key in the mean and standard deviation values, along with the lower specification limit and upper specification limit values, the sigma level short term happens to be 2.64, and that's your sigma level short term.