ISTQB CTFL-2018 – Test Management
- Risk And Testing
We have mentioned before how risk is an important factor in the testing activity. We base our testing efforts on the amount of risk in delivering the product too early. If the risk is high, then we need to spend more effort in testing the software. If the risk is low enough, then we can deliver the software. Definition of Risk. So what is risk, after all? There are two parts to the definition of risk. The first part: risk involves the possibility of an event in the future which has negative consequences. My friends who are PMP certified might not like this definition, because in the PMBOK a risk may result in future negative or positive consequences.
But people in ISTQB consider that a risk may only result in future negative consequences. Risk is used to focus the effort required during testing. It's used to decide where and when to start testing and to identify areas that need more attention. Testing is used to reduce the probability of a negative event occurring, or to reduce the impact of a negative event. So if we are worried that the client will get upset if there's a miscalculation in a report, which is a negative event that might happen, then we can add more testing around the report to make sure we don't miss any major defects.
This action will lower the probability or the impact of the negative event. Risk-based testing draws on the collective knowledge and insight of the project stakeholders to carry out product risk analysis, to ensure that the likelihood of product failure is minimized. Risk management activities provide a disciplined approach to: analyze, and reevaluate on a regular basis, what can go wrong (the risks); determine which risks are important to deal with; implement actions to mitigate those risks; and make contingency plans to deal with the risks should they become actual events.
In addition, testing may identify new risks, help to determine what risks should be mitigated, and lower uncertainty about risks. I will try here to give you a risk management course in five minutes. So let's talk first about analyzing what can go wrong. Product and Project Risks. One of the many different ways that a project team can identify the risks in their project is to look at the different classifications of risks and ask whether any of those risks could actually happen to them or to their project. We can classify the risks into two categories: project risks and product risks.
What is the difference between a project and a product? Easy. A product is the software itself. A project is the set of activities or steps needed to create the product. So a product risk is naturally related to the software itself, while a project risk is related to how we develop the software. Now, one of the common exam questions I have seen is to distinguish between the two types of risks. I even got this question in my Advanced Level Test Manager exam.
Product risk involves the possibility that a work product (for example, a specification, component, system, or test item) may fail to satisfy the legitimate needs of its users and/or stakeholders. Product risks are associated with specific quality characteristics of a product: for example, functional suitability, reliability, performance efficiency, usability, security, compatibility, maintainability, and portability; in other words, all the quality characteristics.
Product risks are also called quality risks. Examples of product risks include: the software might not perform its intended functions according to the specification; the software might not perform its intended functions according to user, customer, and/or stakeholder needs; the system architecture may not adequately support some non-functional requirements.
A particular computation may be performed incorrectly in some circumstances; a loop control structure may be coded incorrectly; response times may be inadequate for a high-performance transaction processing system; user experience (UX) feedback might not meet product expectations. And now for the second type of risks: project risks. Project risks involve situations that, should they occur, may have a negative effect on the project's ability to achieve its objectives.
Examples of project risks include project issues: delays may occur in delivery, test completion, or satisfaction of exit criteria or definition of done; estimates may be inaccurate; reallocation of funds to higher-priority projects, or general cost-cutting across the organization, may result in inadequate funding; late changes may result in substantial rework.
Under project risks we also have organizational issues: skills, training, and staff may not be sufficient; personnel issues may cause conflict and problems; users, business staff, or subject matter experts may not be available due to conflicting business priorities. And under project risks we also have political issues: testers may not communicate their needs and/or the test results adequately; developers and/or testers may fail to follow up on information found in testing and reviews (for example, not improving development and testing practices).
There may be an improper attitude toward, or expectations of, testing (for example, not appreciating the value of finding defects during testing). Also, under project risks, we have technical issues: requirements may not be defined well enough; requirements may not be met given existing constraints; the test environment may not be ready on time.
Data conversion, migration planning, and their tool support may be late. Weaknesses in the development process may impact the consistency or quality of project work products such as design, code, configuration, test data, and test cases. Poor defect management and similar problems may result in accumulated defects and other technical debt.
And last, under project risks, we have supplier issues: a third party may fail to deliver a necessary product or service, or go bankrupt, and contractual issues may cause problems for the project. Project risks may affect both development activities and test activities. In some cases, project managers are responsible for handling all project risks, but it's not unusual for test managers to have responsibility for test-related project risks.
Now for the risk analysis part. The second part of the definition of risk is that the level of risk is determined by the likelihood of the event and the impact of the harm from that event: level of risk equals probability of the risk multiplied by its impact if it did happen. So, for example, suppose we have two risks. The first is the risk of having a UI issue. The probability of this risk happening is four, on a scale of one to five, one being low and five being high.
But the impact if this risk happens is very low, only one, using the same scale. Then the level of risk, or risk score, for this first risk is four multiplied by one, which equals four. A second risk might be a miscalculation in one of the reports. The probability of such a risk is low, say two, but the impact of such a defect would be high, as the customer would be really upset if he saw such a defect.
So the impact might be three. The level of risk in this case is two multiplied by three, which equals six. So the level of risk for the miscalculation is higher than that of the UI issue. This means that if we have very limited time for testing, then we should concentrate our efforts on testing the report, to lower the probability or the impact of the miscalculation. Prioritizing risks is an attempt to find the potentially critical areas of the software as early as possible.
As I said, there are many ways to identify risks. Any identified risk should be analyzed and classified for better risk management. So now we have a long list of possible risks. We should calculate the risk level for each risk and sort the risks accordingly. That's how we will know where to focus our testing attention. Risk-Based Testing and Product Quality. As we have said, risks are used to decide where to start testing, where to test more (making some testing decisions), and when to stop testing.
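The scoring and sorting described above can be sketched in a few lines of code. The risk names and numbers come from the lecture's own example; the function name is an illustrative assumption:

```python
# Risk-based prioritization sketch: level of risk = probability x impact,
# both on a 1-5 scale, as in the UI-issue vs. miscalculation example.

def risk_level(probability, impact):
    """Higher score means this area deserves testing attention first."""
    return probability * impact

risks = [
    ("UI issue", 4, 1),               # likely to happen, but low impact
    ("Report miscalculation", 2, 3),  # unlikely, but upsets the customer
]

# Sort by risk level, highest first, to decide where to focus testing.
prioritized = sorted(risks, key=lambda r: risk_level(r[1], r[2]), reverse=True)
print(prioritized[0][0])  # the miscalculation (score 6) outranks the UI issue (score 4)
```

With limited testing time, the team would work down this sorted list from the top, exactly as the lecture concludes.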
Therefore, testing is used as a risk mitigation activity to provide feedback about identified risks as well as providing feedback on residual or unresolved risks. A risk based approach to testing provides proactive opportunities to reduce the levels of product risk.
Proactive means that we will not wait until the risk happens to deal with it, but rather we will be ready for it, and maybe even get rid of it before it even happens. To summarize what we have learned so far: risk-based testing involves product risk analysis, which includes the identification of product risks and the assessment of each risk's likelihood and impact.
The resulting product risk information is used to guide test planning; the specification, preparation, and execution of test cases; and test monitoring and control. Analyzing product risks early contributes to the success of the project. In a risk-based approach, the results of product risk analysis are used to determine the test techniques to be employed.
They are also used to determine the particular levels and types of testing to be performed (for example, security testing, accessibility testing, and so on), to determine the extent of testing to be carried out, to prioritize testing in an attempt to find the critical defects as early as possible, and to determine whether any activities in addition to testing could be employed to reduce risk.
For example, providing training to inexperienced designers. Now we have analyzed our risks and prioritized them. What we need to do next is to manage and handle those risks by lowering their risk levels. This is beyond the scope of the ISTQB Foundation syllabus, but here you are: there are four ways we can handle or respond to risks.
One, avoid: doing anything to make the risk level zero, meaning either making the probability zero or making the impact of the risk zero. Let's imagine a risk where we have heard rumors that one of the team members, let's call him Jack, might move to another company. To avoid such a risk, you would not assign Jack to your project in the first place and would get someone else, so the impact on your project would be zero.
Two, mitigate: mitigate means that you will lower the risk level, and you can achieve this by either lowering the likelihood or lowering the impact of the risk. So what should we do with Jack? You can lower the likelihood of him moving by giving him a promotion or a salary increase, or you can lower the impact by giving him only minor tasks to work on. The third action you can take to handle risks is transfer, meaning moving the risk from your side to another side. You might ask Jack's manager to assure you that if Jack leaves the company for any reason, then he would be responsible for finding you another resource with the same qualifications; or you might outsource the whole job to another company. And four, you will accept the risk.
You can passively accept the risk by simply waiting for the risk to happen and seeing what to do then, or you can accept it actively by putting in place a plan to be executed in case Jack leaves the company, like planning for a two-week handover from Jack to a new resource. This is called a contingency plan. Wow, that was a tough risk management course in, I hope, five minutes, so I hope you liked it. It's clear that any project risk will later affect the product itself. So the objective of all our risk management efforts is to.
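The four responses above can be illustrated with a small sketch. The Risk model, the names, and the numbers are assumptions for illustration, reusing the 1-to-5 probability and impact scales from the earlier scoring example:

```python
# Illustrative model of the four risk responses: avoid, mitigate,
# transfer, accept. Probability and impact are on a 1-5 scale.
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Risk:
    name: str
    probability: int  # 1-5
    impact: int       # 1-5

    @property
    def level(self):
        return self.probability * self.impact

jack_leaves = Risk("Jack moves to another company", probability=3, impact=4)

# Avoid: don't assign Jack to the project at all, so the impact drops to zero.
avoided = replace(jack_leaves, impact=0)
# Mitigate: a promotion or raise lowers the likelihood of him leaving.
mitigated = replace(jack_leaves, probability=1)
# Transfer and accept don't change the numbers: transfer moves ownership of
# the risk elsewhere, and accept (with a contingency plan) just prepares
# for the event instead of preventing it.

print(avoided.level, mitigated.level)  # 0 and 4, down from the original 12
```

The point of the sketch is that avoid and mitigate attack the risk level itself, while transfer and accept change who deals with the consequences, not the score.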
- Independent Testing
In this course we say that testing tasks can be done by anyone. They may be done by people in a specific testing role or by people in another role, for example customers. The relationship between the tester and the test object has an effect on the testing itself. By the relationship, we mean how psychologically attached the tester is to what he is testing.
This relationship represents how dependent on, or independent from, the test object the tester is. A certain degree of independence often makes the tester more effective at finding defects, due to the differences between the author's and the tester's mental biases. We talked about mental bias when we discussed the psychology of testing in the first section of this course. Independence is not, however, a replacement for familiarity, and developers can efficiently find many defects in their own code.
In this lecture we will elaborate more on how independence affects the test management of the software project. The approaches to organizing a test team vary from one company to another and from one project to another. What we are trying to achieve here is to understand that testing independence should be taken into consideration when organizing a testing team. Degrees of independence in testing include the following, from a low level of independence to a high level.
On one side of testing independence lies a developer with low independence who tests his own code. And by the way, notice that when I say low independence, that is equivalent to high dependence.
So please take care when they mention dependence or independence. So again, on one side of testing independence lies a developer with low independence who tests his own code. A little higher in independence is a tester from the development team; this could be developers testing their colleagues' products. Then comes the independent testing team inside the organization, reporting to project management or executive management.
Next come independent testers from the business organization or user community, or with specializations in specific test types such as usability, security, performance, regulatory compliance, or portability. And on the very other side, with very high independence, lie independent testers external to the organization (a third party or a contractor), either working on site (insourcing) or off site (outsourcing). Independent testing is surely a good thing, but it doesn't mean that we should only consider highly independent testers.
So let's look at each type of tester from the independence point of view and see the pros and cons of adding this type of tester to the testing team. First, the developer, the author of the code: should we allow him to test his own code, even though he is highly dependent on the code?
The pros of using the developer for testing are: knowing the code best, he will find problems that the testers would miss, and he can find and fix faults cheaply. The cons of using the developer for testing are: it is difficult to destroy your own work (it's his own baby, after all), a tendency to see expected results rather than actual results, and subjective assessment. So let's consider a tester from the development team other than the developer. The pros are: an independent view of the software (more independent than the developer).
He is dedicated to testing, not coding and testing at the same time, and he is part of the team working toward the same goal, which is quality. The cons are: lack of respect (he's a buddy), a lonely, thankless task (he's the only tester on the project), corruptible (peer pressure), and a single view or opinion (again, he's the only tester on the project). Then comes the independent test team, whose main job is testing. The pros: a dedicated team just to do testing, specialist testing expertise, and testing that is more objective and more consistent. The cons are the over-the-wall syndrome: there's a wall between the developers and the testers, our department versus your department.
Okay? And there could be some political issues as well; it may even become confrontational. There is also over-reliance on testers: developers will become lazy about testing, depending on the testers to do the job for them. What about the specialized testers, either from the user community or with a specialization in a specific testing type (security, performance, and so on)?
Sure, they are the highest specialists in their field, but they need good people skills, and communication could be very tough with the developers. Last, with the highest independence and lowest dependence, comes the third-party organization, where we outsource the testing of the software to another organization. The pros: highly specialized testing expertise (if outsourcing to a good organization, of course) and independence from internal politics. The cons: lack of product knowledge (they don't know what they are testing; they are not from the same industry), expertise gained goes outside the company, and it could be expensive.
Actually, it is expensive, and confidential information may leak from inside the organization to the third-party organization. Therefore, the idea is to get as much as possible of the pros of independent testing and to avoid as much as you can of the cons. For most types of projects, especially complex or safety-critical projects, it's usually best to have multiple levels of testing, with some or all of the levels done by independent testers.
Development staff may participate in testing, especially at the lower levels, so as to exercise control over the quality of their own work. We should consider asking the users to help with the testing, and we should also consider asking testing subject matter experts to test the critical parts of the application or software if needed, and so on. In addition, the way in which independence of testing is implemented varies depending on the software development lifecycle.
For example, in Agile development, testers may be part of a development team. In some organizations using Agile methods, these testers may be considered part of a larger independent test team as well. In addition, in such organizations, product owners may perform acceptance testing to validate user stories at the end of each iteration. To summarize, potential benefits of test independence include: independent testers are likely to recognize different kinds of failures compared to developers, because of their different backgrounds, technical perspectives, and biases; and an independent tester can verify, challenge, or disprove assumptions made by stakeholders during specification and implementation of the system. For example, if a developer assumes that a value should be in a specific range, then the tester will verify this assumption and will not take it for granted.
Potential drawbacks of test independence include: the more independence, the more isolation from the development team, leading to a lack of collaboration, delays in providing feedback to the development team, or a confrontational relationship with the development team. Developers may lose a sense of responsibility for quality. Many times I have heard developers say that they should not test their own code because it's the testers' responsibility, which of course is not right at all (I'm saying that in the nicest possible way). Independent testers may be seen as a bottleneck or blamed for delays in releases.
Independent testers may lack some important information about the test object. Many organizations are able to successfully achieve the benefits of test independence while avoiding the drawbacks. So let's all hope we can do the same.
- Tasks of Test Manager and Tester
Testing tasks may be done by people in a specific testing role, or may be done by someone in another role, such as a project manager, quality manager, developer, business and domain expert, infrastructure, or IT operations. Anyone! The ISTQB curriculum talks in detail about two roles only: the test manager and the tester, though the same people may play both roles at various points during the project. The activities and tasks performed by these two roles depend on the project and product context, the skills of the people in the roles, and the organization. Let's take a look at the work done by these roles, starting with the test manager. The test manager is tasked with overall responsibility for the test process and successful leadership of the test activities.
Typical test manager tasks may include: write or review a test strategy for the project, and a test policy for the organization if not already in place. They plan the test activities, considering the context and understanding the test objectives and risks, including selecting test approaches; estimating the time, effort, and cost of testing; acquiring resources; defining test levels and test cycles; planning defect management; and creating a high-level test schedule. Notice that I said a high-level test schedule. They also write and update the test plans, and coordinate the test strategy and test plans with project managers, product owners, and others. They share the testing perspective with other project activities, such as integration planning.
They initiate the analysis, design, implementation, and execution of tests, monitor test progress and results, and check the status of the exit criteria (or definition of done, if we are talking about Agile). They prepare and deliver test progress reports and test summary reports based on the information gathered during testing. They also adapt planning based on test results and progress (sometimes documented in test progress reports and/or in test summary reports for other testing already completed on the project), and they take action if necessary for test control. They support setting up the defect management system and adequate configuration management of testware. We will talk more about defect management and configuration management in future videos.
They also introduce suitable metrics for measuring test progress and evaluating the quality of the testing and of the product. They support the selection and implementation of tools to support the test process, including recommending the budget for tool selection (and possibly purchase and/or support), allocating time and effort for pilot projects, and providing continuing support in the use of the tools.
They design the test environment: they make sure the test environment is put into place before test execution and managed during test execution. They promote and advocate for the testers, the test team, and the test profession within the organization. And last, they develop the skills and careers of testers through training plans, performance evaluations, coaching, and so on. The test management role might be performed by a professional test manager, or by a project manager, a development manager, or a quality assurance manager. In larger projects or organizations, several test teams may report to a test manager, test coach, or test coordinator, each team being headed by a test leader or lead tester. The way in which the test manager role is carried out varies depending on the software development lifecycle. For example, in Agile development or Agile projects, some of the tasks mentioned above are handled by the whole Agile team. We have a concept in Agile called the whole team, in which the whole team acts as one. So some of the tasks mentioned above are handled by the whole Agile team, especially those tasks concerned with the day-to-day testing done within the team, often by a tester working within the team.
Some of the tasks that span multiple teams or the entire organization, those that have to do with overall test management, may be done by test managers outside the development team, who are sometimes called test coaches. On the other hand, typical tester tasks may include: review and contribute to test plans; analyze, review, and assess user requirements, specifications, and models for testability; identify and document test conditions, and capture traceability between test cases, test conditions, and the test basis; design, set up, and verify the test environment(s), setting up the hardware and software needed for testing, often coordinating with system administration and network management.
Design and implement test cases and test procedures; prepare test data and acquire the test data if needed; create the detailed test execution schedule (yes, we leave it up to the testers to create their own detailed test schedule around the high-level test schedule created by the test manager, as we mentioned before); execute tests, evaluate the results, and document deviations from expected results; use appropriate tools to facilitate the test process; automate tests (they may be supported by a developer or a test automation expert).
Evaluate non-functional characteristics such as performance efficiency, reliability, usability, security, compatibility, and portability; and review tests developed by others. As I have said again and again, depending on the risks related to the product and the project, and on the software development lifecycle model selected, different people may take over the role of tester at different test levels. For example, at the component testing level and the component integration testing level, the role of tester is often done by developers.
At the acceptance test level, the role of tester is often done by business analysts, subject matter experts, and users. At the system test level and the system integration test level, the role of tester is often done by an independent test team. At the operational acceptance test level, the role of tester is often done by operations and/or systems administration staff.
People who work on test analysis, test design, specific test types, or test automation may be specialists in these roles. The questions in this part are usually about differentiating between the tasks of a tester and those of the test manager. As you may have noticed, the test manager's tasks relate to how things should be done, while the tester's tasks are the actual hands-on doing of those things.
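One of the tester tasks above is capturing traceability between the test basis, test conditions, and test cases. As a minimal sketch of what that mapping might look like in practice (all identifiers below are hypothetical, not from any real project):

```python
# A tiny traceability matrix: requirements (test basis) map to test
# conditions, and each condition maps to the test cases that cover it.
# Empty lists make coverage gaps immediately visible.

traceability = {
    "REQ-001 (login)": {
        "COND-01 (valid credentials accepted)": ["TC-001", "TC-002"],
        "COND-02 (invalid credentials rejected)": ["TC-003"],
    },
    "REQ-002 (monthly report)": {
        "COND-03 (totals computed correctly)": [],  # gap: no cases yet
    },
}

# List requirements that have at least one condition with no test cases.
gaps = [req for req, conds in traceability.items()
        if any(not cases for cases in conds.values())]
print(gaps)  # ['REQ-002 (monthly report)']
```

In real projects this mapping usually lives in a test management tool rather than code, but the structure, basis to conditions to cases, is the same, and it is what lets a tester (or test manager) report coverage and spot untested requirements.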