Lean Six Sigma Green Belt – Six Sigma Measure Phase Part 2
- Yield – Traditional, FTY, RTY
Yield is a first-time-right percentage, and we have first time yield (FTY) and rolled throughput yield (RTY). First time yield is used to determine the quality level of an individual sub-process. If I multiply the first time yield of each and every sub-process, I ultimately get the rolled throughput yield. Let us understand this with an example. Does this picture look familiar?
A Formula One race. Hey, let me ask you something interesting about pit stops. If you're wondering what NASCAR stands for, it is the National Association for Stock Car Auto Racing. And I'm sure you all know what a pit stop is. After a number of laps around the racing track, your tires wear out, you run low on fuel, and your braking system might get eroded.
So you'll have to stop, have your tires changed, have your fuel filled, have your braking system checked, and then move on, right? So let me ask you this question: how much time does it take to replace the four tires, fill the fuel, check the braking system, and let the driver go? If you're thinking of a value greater than ten seconds, you're out of the race, my dear friend.
If you are thinking of a value greater than five seconds, you're still behind in the race. The right value is 1.923 seconds. In the 2013 US Grand Prix, the Red Bull Racing team, with Mark Webber as the driver, completed a pit stop in 1.923 seconds, under 2 seconds. Amazing stuff. Extraordinary achievement. This was the fastest Formula One pit stop by far.
All right, that's the interesting part. But now let us get into traditional yield. Suppose there were 450 sports car tires, and 400 out of these 450 had the air pressure within the required specification limits. If this is the case, how would you calculate traditional yield? It's simply 400 divided by 450, which gives you 88.89%. Think about this.
Why could this be considered misleading? Why is this misleading? Think about it. Here is the answer. Initially, an inspection was performed, and you found that 160 tires did not comply with the pressure specification, right? However, the operators corrected 110 of these 160 tires, leaving only 50 that could not be brought back to the specification. Right.
And what were you looking at for traditional yield? You were looking at only the final 50. The final result does not consider the entire number of tires that were outside your specification, right? So if I subtract 160 from 450, I get 290. If I divide that by 450, which is the total count, I get the first time yield: 64.44%. Look at the difference between traditional yield and first time yield.
Traditional yield is covering up. It is trying to tell you that there were only 50 tires which could not be brought back to the specification. But in reality, during the first check, there were 160 tires which did not comply with the pressure specification, and you have to consider these 160 for your calculation, right? And then you get first time yield. This is the right number you should be looking at. First time yield focuses on the yield of the process before inspection and rework are carried out. That's the interesting part. You have to look at that.
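The tire-pressure arithmetic above can be sketched in a few lines. This is a minimal illustration using the transcript's numbers; the variable names are mine, not part of any standard library.

```python
# Tire-pressure example: 450 tires checked, 160 failed the first
# inspection, 110 were reworked back into spec, leaving 50 final rejects.
total_units = 450
first_pass_failures = 160   # defects found at the first inspection
final_rejects = 50          # defects remaining after rework

# Traditional yield only looks at the final result, hiding the rework.
traditional_yield = (total_units - final_rejects) / total_units

# First time yield (FTY) counts every unit that failed the FIRST check.
first_time_yield = (total_units - first_pass_failures) / total_units

print(f"Traditional yield: {traditional_yield:.2%}")  # 88.89%
print(f"First time yield:  {first_time_yield:.2%}")   # 64.44%
```

The gap between the two numbers (88.89% vs. 64.44%) is exactly the rework that traditional yield hides.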
Now look into this first time yield and rolled throughput yield calculation. For this entire process, 100 units were taken into process one, right? And you found that 91 out of 100 were meeting the specification, so nine were not. So if I were to find the first time yield for process one, what would that be? It would be 91 divided by 100, which is nothing but 0.91, right? Now, these 91 are given to process two, and 82 out of these 91 were meeting the specifications.
So you have again lost money. If I were to find the first time yield for this second process, it would be 82 divided by 91; you get a number, right? So on and so forth. These 82 were given to process three, and you got 70 which were meeting your specifications. So the first time yield for your third process is 70 divided by 82. You get a number once again, right?
These are the first time yields for each and every step. If I multiply the first time yield of step one by the first time yield of step two and the first time yield of step three, I get the rolled throughput yield. Rolled throughput yield is for the entire process; first time yield is for a particular step of the process. This is how you calculate your first time yield, this is how you calculate your rolled throughput yield, and this is how you baseline your current performance, right?
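The step-by-step multiplication above can be written as a short sketch. The counts are the transcript's three-step example; the variable names are mine.

```python
import math

# Units entering and passing each step, per the three-step example:
# 100 enter process 1, 91 pass; those 91 enter process 2, 82 pass;
# those 82 enter process 3, 70 pass.
units_in = [100, 91, 82]
units_ok = [91, 82, 70]

# FTY of each individual step
fty = [ok / n for ok, n in zip(units_ok, units_in)]

# Rolled throughput yield (RTY) for the whole process is the product of
# the step-level FTYs; with no rework between steps it collapses to
# 70 good units out of the 100 that started.
rty = math.prod(fty)

print([f"{y:.3f}" for y in fty])  # ['0.910', '0.901', '0.854']
print(f"RTY: {rty:.2f}")          # 0.70
```

Note how the intermediate denominators cancel: RTY ends up as final good units over initial units, but only because every step's output feeds the next step directly.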
- Capability Indices – Cp, Cpk_Part 1
This is a metric used to quantify and evaluate your process capability. It shows how capable your process is of meeting both your internal and external customer requirements. Look at these four scenarios. Even before that, do you know what this particular image is? This is a dartboard. You place it on the wall and start hitting it with darts. Let me move back to the previous slide, okay?
Have you ever played it before? I hope most of you have played it, or are at least aware of it. And for the married people, this is what I would suggest: if you have not played it, take a picture of your spouse, pin it to the wall, and start throwing darts at it. Vent out your frustrations and then you can lead a happy life from there on. Take my word on that. Free suggestion. All right? If you don't buy my free suggestion, that's also fine.
Okay, let us look into this. So this is the dartboard and this is the bullseye. If I hit this red dot, I get maximum points because I would be very accurate. If I continuously land my darts on it, accuracy would be high. Why is scenario one low in accuracy and low in precision? Because I have landed darts away from the target, I have low accuracy.
Why is it low in precision? Because these three darts are not close to each other. Let us look into scenario two. It is low in accuracy because the darts are away from the target. That's fine. But why is it high in precision? Because the darts are close to each other. Think of these as the marks I obtained in my 10th standard in my school days: I fail in mathematics, okay? I fail in social studies, okay? I fail in science. Though I'm low in accuracy, I'm high in precision. I consistently fail in all subjects.
This example is only for illustration, by the way. All right, let us look into scenario three. It is high in accuracy because your darts are close to the target, but it is low in precision because the darts are away from each other, unlike the previous scenario. Let us look into the fourth scenario now: it is high in accuracy and high in precision. Why is it high in accuracy? Because all the darts hit the target.
Why is it high in precision? Because all these darts are close to each other. Let us understand these things in the context of Cp and Cpk. We use the capability index Cp to speak about precision: how close your data points are to each other is explained using Cp, and as a thumb rule this value should always be greater than or equal to 1.66. How close the data points are to the target is explained using a capability index called Cpk.
As a thumb rule, this value should be greater than or equal to 1.33. For me to say that my data points are close to the target, that my mean is close to the target, my Cpk should be greater than or equal to 1.33. Let's jump into an exercise and understand the Cp capability index. It is the ratio of the tolerance, or specification width, to the inherent variability, which is the process width. Here is the formula.
Cp is equal to specification width divided by process width. Here is the case study for us: the Lean Six Sigma team specified to the hotel that the temperature in the training room should be 21 plus or minus 3 degrees. The average room temperature is 22 and the standard deviation is 0.5. What is Cp? Look at the inputs given to us. The upper specification limit is 24. Where are we getting this 24 from? It is 21 plus 3. Where are we getting the lower specification limit of 18 from?
It is 21 minus 3. Where are we getting the target from? It is 21. And we are also given the standard deviation, which is 0.5. How do you calculate Cp? Cp is equal to specification width divided by process width. What is the specification width? It is the upper specification limit minus the lower specification limit. What is the process width? Normally, for a repeatable process, you keep the control limits at plus or minus three standard deviations, and that is your process width.
It is statistically calculated based on the data you input. So if it is plus three standard deviations on one side and minus three standard deviations on the other, it sums up to six standard deviations. So your process width is six standard deviations. You simply substitute and get the Cp value as 2. What do you infer? What are we doing now?
We are in the Measure phase. I'm trying to find out my current performance, and the Cp value should be greater than or equal to 1.66 for me to claim that the variation in the process is low, right? And here I'm getting 2, which is extremely good. So the variation in the process is low; I'm doing things consistently. That is what my Cp says. But doing things consistently alone does not give the complete picture of the process. A high Cp value only tells us that the variation in the process is low. We also have to look into whether we are on target or away from the target, and for that we need the Cpk value. So let us move on and understand what Cpk means. Here is the formula, right? We will understand the formula while we solve this particular case study. The Lean Six Sigma team specified to the hotel
that the temperature in the training room should be between 21 plus or minus 3 degrees. The average room temperature is 22 and the standard deviation is 0.5. These values are given to us. What is the Cpk value? How do I calculate it? The upper specification limit is given as 24; it is 21 plus 3. The lower specification limit of 18 is 21 minus 3. The target of 21, the mean of 22, and the standard deviation of 0.5 are all given in the case study. Okay, all you need to do is first calculate CPU, which is the upper specification limit minus the mean, divided by three times the standard deviation. If I substitute the values, I get 1.33. I then need to calculate CPL.
CPL measures how close the process mean is to the lower specification limit. If I simply use the formula, mean minus lower specification limit divided by three times the standard deviation, I get (22 - 18) divided by three times 0.5, and the calculation results in 2.67. What is the Cpk value? It is the minimum of CPU and CPL, so it's the minimum of these two values, which turns out to be 1.33. That's my Cpk. What do I infer now? Remember, we are in the Measure phase; we are baselining the current performance of the process, right? As a thumb rule, the Cpk value should be greater than or equal to 1.33 for me to say that the process mean is close to the target, and it is 1.33 here, so I need not worry, right? There is less variation in the process based on the Cp value we calculated, which is 2, and Cpk is also doing well, so my process mean is close to my target. This is how I infer.
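The whole hotel-temperature case study can be worked through in a short sketch. The inputs are the transcript's; the variable names are mine.

```python
# Hotel-temperature case study: spec is 21 +/- 3 degrees,
# observed mean 22, standard deviation 0.5.
usl, lsl = 24.0, 18.0
mean, sd = 22.0, 0.5

# Cp: specification width over process width (+/- 3 sigma -> 6 sigma)
cp = (usl - lsl) / (6 * sd)

# Cpk: the worse (minimum) of the two one-sided indices
cpu = (usl - mean) / (3 * sd)   # distance from mean to the upper limit
cpl = (mean - lsl) / (3 * sd)   # distance from mean to the lower limit
cpk = min(cpu, cpl)

print(f"Cp  = {cp:.2f}")   # 2.00 -> low variation (>= 1.66 thumb rule)
print(f"Cpk = {cpk:.2f}")  # 1.33 -> mean close to target (>= 1.33 thumb rule)
```

Taking the minimum of CPU and CPL is what makes Cpk sensitive to an off-center mean: the side the mean has drifted toward always produces the smaller, and therefore governing, value.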
- Capability Indices – Cp, Cpk_Part 2
The lower the Cp value, the more the variation. Look at this: there is huge variation here. As my Cp value increases, the variation reduces; there is less variation when the Cp value is 2. That is how you infer. How about Cpk? The lower my Cpk value, the farther my process mean is from the target. As your Cpk value increases, your process mean comes closer to the target. That is what you infer, my dear friends. Let's move on and look into this small case study, okay? For a health organization, the vision is to achieve a life expectancy of 80 plus or minus ten years. So my target is given as 80, the upper specification limit is 90 (80 plus 10), and the lower specification limit is 70 (80 minus 10). The mean, x-bar, is given as 88 and the standard deviation is given as 1.5. I just need to use the Cp and Cpk formulas, calculate those values, and then look into the inference. The Cp value, as a thumb rule, should be greater than or equal to 1.66; the Cpk value, as a thumb rule, should be greater than or equal to 1.33. Look into these and select the appropriate option on what you want to improve.
Do you want to move the mean closer to the target? Or do you want to reduce the variation? Or do you want to do both? You will be able to say so by calculating the Cp and Cpk values. All right, here is another case study: calculate Cp and Cpk. If you are wondering what to do, please do not worry. Try solving it; practice makes you perfect. I repeat: if you do not arrive at a solution, do not get bogged down, do not press the panic button yet. We have recorded the solutions to all the case studies and exercises in the last recording. Wait until then. All right, we have understood the explanations of Cp and Cpk, right? But let us look into this: if Cpk is equal to Cp, what will happen to the process mean and the target? Yes, the process mean will be equal to your target, right? And look at that: if Cp is equal to 1, what will happen? What is the formula for Cp? Specification width divided by process width. If Cp were 1, the specification width and the process width would be equal.
If Cp is less than 1, your denominator must be greater than your numerator, so your process width is greater than your specification width, right? If you have a different explanation, you can write to me, but otherwise this is the right explanation. Okay, here is another brain teaser, but we will solve this one here. Case one: Cp is 2, which is greater than 1.66. Cpk is 2, which is greater than 1.33. Hence, you need not do anything, right? There is less variation and your process mean is close to the target.
So you need not worry much here. Look into case number two. Cp is 2, which is greater than 1.66. Good, so the variation in the process is low. Look into Cpk: it's 1.1, which is less than 1.33, right? So you have to try and move the mean closer to the target. That is your job for case two. Looking at case three, the Cp value is 0.85, which is less than 1.66, hence you have to focus on reducing the variation. Look into Cpk, which is 1.85; this value is greater than 1.33, which is the thumb rule, right? Hence it's a good situation: you need not move the mean closer to the target, you just need to focus on reducing the variation for case three. For case four, the Cp value is 0.7, which is less than 1.66, hence you need to focus on reducing the variation. The Cpk value is 0.55, which is less than 1.33, hence you also have to focus on moving the mean closer to the target. Ideally, you need to focus on improving both in case number four.

Here is a quick word about capability and stability. All statistical analysis, Cp, Cpk, and so on, must be performed only after your process has achieved stability, and only if you feel that all non-random trends have been removed from the process, at least to the best of your knowledge and ability. Let's look into capability and stability. How does a process look if it has high stability and low capability? This is how it looks: values are consistently falling outside your specification limits on either side, but it is stable; all look alike. Now think about a process which is low in capability and low in stability.
This is how it looks: it's random, showing that there is no stability in the process, and there are a lot of points falling outside your specification limits, showing that it is low in capability. Now think about something which is high in capability and low in stability. High in capability means all points fall within your specification limits; low in stability means there is no consistency, there is a lot of variation, even though everything falls within the specification. A process which is high in stability and high in capability looks like this: everything falls within the specification limits and everything is consistent. Wow. Okay, this brings us to the end of the Measure phase. Let us look into the outputs of the Measure phase. The key output is baselining the current performance of your output Y. Apart from that, there are also a few other outputs.
The first is the performance standard for Y. Second, we measure the measurement system: we look into the reliability of the measurement system using attribute agreement analysis. And the third output was the baseline based on the performance metrics: we looked into the sigma level and we looked into process capability analysis with Cp and Cpk. All right, we came to the end of the Measure phase; let us quickly recap. We discussed critical to quality and the operational definition, and we finalized the performance standard and documented it. We have also seen how to assess the reliability of the measurement system as part of attribute agreement analysis. AAA has three measures: we looked into repeatability, reproducibility, and accuracy.
We looked into these thumb rules: the measurement system is acceptable if the agreement is greater than 90%, it is conditionally acceptable if the agreement is greater than 70% and less than or equal to 90%, and the measurement system is rejected if the agreement is less than or equal to 70%. We have also understood that the data collection plan should be executed only after the measurement system's reliability is established. If you feel that the measurement system is reliable, then we move on to calculate the sigma level: for continuous data we calculate it one way, and if the data is attribute data, we have three different scenarios, and that is how we calculate it. And then we also looked into process capability once again.
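The acceptance thumb rules from the recap can be sketched as a small helper. These thresholds are the course's rules of thumb, and the function name and return strings are my own illustration, not a standard API.

```python
def classify_measurement_system(agreement_pct: float) -> str:
    """Apply the course's thumb rules for attribute agreement analysis.

    agreement_pct is the percent agreement (0-100). The 70%/90% cut-offs
    are the transcript's rules of thumb, not a universal standard.
    """
    if agreement_pct > 90:
        return "acceptable"
    if agreement_pct > 70:
        return "conditionally acceptable - consider improving it"
    return "rejected - fix the measurement system first"

print(classify_measurement_system(95))  # acceptable
print(classify_measurement_system(80))  # conditionally acceptable - consider improving it
print(classify_measurement_system(65))  # rejected - fix the measurement system first
```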
You calculate the process capability indices using Cp and Cpk. The Cp value should be greater than or equal to 1.66 for you to say that the variation in the process is low. The Cpk value should be greater than or equal to 1.33 for you to say that the process mean is close to the target. Why are we doing all these things? Because we want to baseline the current performance and keep it handy to compare against the values at a later point in time, to check whether we have actually achieved an improvement or not. This brings us to the successful closure of the Measure phase. Please look out for the Analyze, Improve, and Control phases, which are about to come. Thank you so much for attending this session. See you in the Analyze phase. Bye for now.