Clear-box testing requires setup and instrumentation, or at least poring over code, while most black-box techniques can begin immediately; the operator simply tries to use the software. A system could behave correctly as a black box, but still contain defects in the code itself.
An interesting analogy parallels the difficulty in software testing with that of pesticides, known as the Pesticide Paradox [Beizer90]: every method you use to prevent or find bugs leaves a residue of subtler bugs against which those methods are ineffectual. But this alone will not guarantee to make the software better, because the Complexity Barrier principle [Beizer90] states: software complexity (and therefore that of bugs) grows to the limits of our ability to manage that complexity. By eliminating the (previous) easy bugs you allowed another escalation of features and complexity, but this time you have subtler bugs to face, just to retain the reliability you had before. Society seems to be unwilling to limit complexity because we all want that extra bell, whistle, and feature interaction. Thus, our users always push us to the complexity barrier, and how close we can approach that barrier is largely determined by the strength of the techniques we can wield against ever more complex and subtle bugs. [Beizer90]
Software testing is not mature. It remains an art, because we still cannot make it a science. We are still using the same testing techniques invented 20-30 years ago, some of which are crafted methods or heuristics rather than good engineering methods. Software testing can be costly, but not testing software is even more expensive, especially where human lives are at stake. Solving the software-testing problem is no easier than solving the Turing halting problem. We can never be sure that a piece of software is correct. We can never be sure that the specifications are correct. No verification system can verify every correct program. We can never be certain that a verification system is correct either.
There is a plethora of testing methods and techniques, serving multiple purposes in different life-cycle phases. Classified by purpose, software testing can be divided into correctness testing, performance testing, reliability testing, and security testing. Classified by life-cycle phase, software testing falls into the following categories: requirements-phase testing, design-phase testing, program-phase testing, evaluating test results, installation-phase testing, acceptance testing, and maintenance testing. By scope, software testing can be categorized as unit testing, component testing, integration testing, and system testing.
There are many techniques available in white-box testing, because the problem of intractability is eased by specific knowledge of, and attention to, the structure of the software under test. The intention of exhausting some aspect of the software is still strong in white-box testing, and some degree of exhaustion can be achieved, such as executing each line of code at least once (statement coverage), traversing every branch statement (branch coverage), or covering all possible combinations of true and false condition predicates (multiple-condition coverage). [Parrington89]
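The difference between statement and branch coverage can be seen in a minimal sketch; the function below is a hypothetical example, not taken from the text:

```python
# A hypothetical function under test (illustrative assumption).
def classify(x):
    result = "non-negative"
    if x < 0:
        result = "negative"
    return result

# One input, x = -1, already executes every statement
# (statement coverage is 100%)...
assert classify(-1) == "negative"

# ...but branch coverage also requires the untaken false edge
# of the `if`, which needs a second input:
assert classify(0) == "non-negative"
```

A single test achieving full statement coverage here never exercises the path where the `if` is skipped, which is exactly the gap branch coverage closes.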
In mutation testing, the original program code is perturbed and many mutated programs are created, each containing a single fault. Each faulty version of the program is called a mutant. Test data are selected based on their effectiveness in failing the mutants: the more mutants a test case can kill, the better the test case is considered. The problem with mutation testing is that it is too computationally expensive to use. The boundary between the black-box approach and the white-box approach is not clear-cut. Many of the testing strategies mentioned above cannot be safely classified as either black-box or white-box testing. The same is true of transaction-flow testing, syntax testing, finite-state testing, and many other testing strategies not discussed in this text. One reason is that all of the above techniques need some knowledge of the specification of the software under test. Another reason is that the idea of a specification itself is broad -- it may include any requirement, such as the structure, programming language, and programming style, as part of the specification content.
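The core idea of killing a mutant can be sketched in a few lines; the function and the seeded fault below are illustrative assumptions:

```python
# Hypothetical function under test.
def max_of(a, b):
    return a if a > b else b

# A mutant with one seeded fault: ">" replaced by "<".
def max_of_mutant(a, b):
    return a if a < b else b

def is_killed(test_input, original, mutant):
    """A test kills a mutant if the two versions disagree on its input."""
    return original(*test_input) != mutant(*test_input)

# The input (2, 2) cannot kill this mutant (both return 2),
# but (1, 3) can: the original returns 3, the mutant returns 1.
assert not is_killed((2, 2), max_of, max_of_mutant)
assert is_killed((1, 3), max_of, max_of_mutant)
```

A real mutation tool generates the mutants automatically (operator swaps, constant perturbations, deleted statements) and runs the whole test suite against each one, which is where the computational cost comes from.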
Software reliability refers to the probability of failure-free operation of a system. It is related to many aspects of software, including the testing process. Directly estimating software reliability by quantifying its related factors can be difficult. Testing is an effective sampling method for measuring software reliability. Guided by the operational profile, software testing (usually black-box testing) can be used to obtain failure data, and an estimation model can then be used to analyze the data to estimate the present reliability and predict future reliability. Based on the estimation, the developers can decide whether to release the software, and the users can decide whether to adopt and use it. The risk of using the software can also be assessed from reliability information. [Hamlet94] advocates that the primary goal of testing should be to measure the dependability of tested software.
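The sampling idea can be sketched with a toy operational profile and the simplest possible estimator (reliability = 1 - failures/tests); the operation names, probabilities, and failing set below are all illustrative assumptions, not data from the text:

```python
import random

# Toy operational profile: operations with their usage probabilities
# (names and numbers are illustrative assumptions).
profile = {"lookup": 0.7, "update": 0.2, "report": 0.1}

# For the sketch, assume one operation is known to fail.
failing_ops = {"report"}

def run_one_test(rng):
    """Draw an operation according to the profile; True = no failure."""
    op = rng.choices(list(profile), weights=list(profile.values()))[0]
    return op not in failing_ops

rng = random.Random(42)
n = 10_000
failures = sum(1 for _ in range(n) if not run_one_test(rng))

# Simple point estimate of present reliability from the failure data.
reliability = 1 - failures / n
print(f"estimated reliability: {reliability:.3f}")
```

Because "report" carries 10% of the usage weight, the estimate comes out near 0.9; a real estimation model would go further and fit the failure data over time to predict future reliability.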
Software testing can be very costly. Automation is a good way to cut down time and cost, but software testing tools and techniques usually suffer from a lack of generic applicability and scalability. The reason is straightforward: to automate the process, we need some way to generate oracles from the specification, and to generate test cases that run the target software against those oracles to decide its correctness. Today we still do not have a full-scale system that achieves this goal. In general, a significant amount of human intervention is still needed in testing; the degree of automation remains at the automated-test-script level.
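What "the automated test script level" means can be shown with a minimal table-driven script; the function and its expected values are illustrative assumptions. The machine executes the cases, but a human still chose every input and derived every expected value, i.e. the oracle is manual:

```python
# Hypothetical unit under test (illustrative assumption).
def discount(price, rate):
    return round(price * (1 - rate), 2)

# Automated test script: execution is automatic, but each
# (input, expected) pair below still had to come from a human.
cases = [
    ((100.0, 0.20), 80.0),
    ((50.0, 0.00), 50.0),
    ((200.0, 0.25), 150.0),
]

for args, expected in cases:
    actual = discount(*args)
    assert actual == expected, f"discount{args}: got {actual}, want {expected}"
print("all cases passed")
```

A full-scale automated system would derive both the cases and the expected values from the specification itself, which is exactly the step we cannot yet do in general.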
In a narrower view, many testing techniques may have flaws. Take coverage testing, for example: is code coverage or branch coverage really related to software quality? There is no definite proof. As early as [Myers79], so-called "human testing" -- including inspections, walkthroughs, and reviews -- was suggested as a possible alternative to traditional testing methods. [Hamlet94] advocates inspection as a cost-effective alternative to unit testing. The experimental results in [Basili85] suggest that code reading by stepwise abstraction is at least as effective as on-line functional and structural testing in terms of the number and cost of faults observed.
In a broader view, we may start to question the ultimate purpose of testing. Why do we need more effective testing methods at all, when finding defects and removing them does not necessarily lead to better quality? An analogy can be drawn with the car-manufacturing process. In the craftsmanship epoch, we built cars and hacked away at the problems and defects. But such methods were washed away by the tide of pipelined manufacturing and good quality-engineering processes, which make the car defect-free in the manufacturing phase. This suggests that engineering the design process (as in cleanroom software engineering) so that the product has fewer defects may be more effective than engineering the testing process. Testing is then used solely for quality monitoring and management, or "design for testability". This is the leap for software from craftsmanship to engineering.
This book is a comprehensive introduction to testing methods, illustrated with intuitive examples. It provides complete, up-to-date coverage of all important testing techniques, with a focus on black-box/functional testing. The author is an internationally known software consultant with almost four decades of experience in the computer industry.
Software testing is the act of examining the artifacts and the behavior of the software under test through validation and verification. Software testing can also provide an objective, independent view of the software, allowing the business to appreciate and understand the risks of software implementation. Test techniques are many and varied, and include, but are not limited to, the approaches described below.
Destructive testing attempts to cause the software or a sub-system to fail. It verifies that the software functions properly even when it receives invalid or unexpected inputs, thereby establishing the robustness of input validation and error-management routines. Software fault injection, in the form of fuzzing, is an example of failure testing. Various commercial non-functional testing tools are linked from the software fault injection page; numerous open-source and free software tools are also available that perform destructive testing.
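A minimal fuzzing harness can be sketched as follows; the toy parser and its expected-rejection exception types are illustrative assumptions. The key idea is to distinguish clean rejections of bad input from unexpected crashes:

```python
import random

# Toy parser under test (illustrative assumption): expects
# b"<name>:<age>" with a UTF-8 name and a numeric age.
def parse_record(data: bytes):
    name, _, age = data.partition(b":")
    return name.decode("utf-8"), int(age)

rng = random.Random(0)
crashes = 0
for _ in range(1000):
    # Generate a random byte string of random length 1..15.
    blob = bytes(rng.randrange(256) for _ in range(rng.randrange(1, 16)))
    try:
        parse_record(blob)
    except (ValueError, UnicodeDecodeError):
        pass  # clean, expected rejections of malformed input
    except Exception:  # anything else is a robustness bug
        crashes += 1

print(f"unexpected crashes: {crashes}")
```

This toy parser happens to reject all malformed input cleanly, so the counter stays at zero; against a real target, a fuzzer of this shape (usually coverage-guided and mutation-based, as in tools like AFL or libFuzzer) is what surfaces the crashes destructive testing is after.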