Monday, January 21, 2013

Software Development Life Cycle (SDLC)

What is SDLC?

SDLC stands for Software Development Life Cycle. A Software Development Life Cycle is essentially a series of steps, or phases, that provide a model for the development and lifecycle management of an application or piece of software. The methodology within the SDLC process can vary across industries and organizations, but standards such as ISO/IEC 12207 represent processes that establish a lifecycle for software and provide a model for the development, acquisition, and configuration of software systems.

Benefits of the SDLC Process

The intent of an SDLC process is to help produce a product that is cost-efficient, effective, and of high quality. Once an application is created, the SDLC maps the proper deployment of the software and its eventual decommissioning once it becomes a legacy system. The SDLC methodology usually contains the following stages: analysis (requirements and design), construction, testing, release, and maintenance (response). Veracode makes it possible to integrate automated security testing into the SDLC process through use of its cloud-based platform.

Phases of the Software Development Life Cycle

1. Requirements
2. Design and Analysis
3. Implementation (Coding)
4. Testing
5. Deployment and Maintenance
 
SDLC starts with the analysis and definition phases, where the purpose of the software or system is determined, the goals it needs to accomplish are established, and a set of definite requirements is developed.
During the software construction or development stage, the actual engineering and writing of the application is done. The software is designed and produced, while attempting to accomplish all of the requirements that were set forth within the previous stage.
Next in the software development life cycle is the testing phase. Code produced during construction should be tested using static and dynamic analysis, as well as manual penetration testing, to ensure that the application is not easily exploitable by attackers, which could result in a critical security breach. The advantage of using Veracode during this stage is that, by using state-of-the-art binary analysis (no source code required), the security posture of applications can be verified without requiring any additional hardware, software, or personnel.
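
As a minimal illustration of what automated testing in this phase can look like, the sketch below shows a unit test runnable with pytest. The apply_discount function and its rules are hypothetical examples, not part of any particular SDLC standard or of the Veracode platform.

    # test_discount.py -- run with: pytest test_discount.py
    import pytest

    def apply_discount(price, percent):
        # Hypothetical function under test: reduce price by a percentage.
        if not 0 <= percent <= 100:
            raise ValueError("percent must be between 0 and 100")
        return price * (1 - percent / 100)

    def test_apply_discount_typical():
        assert apply_discount(100.0, 25) == 75.0

    def test_apply_discount_rejects_invalid_percent():
        with pytest.raises(ValueError):
            apply_discount(100.0, 150)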

Once the software is deemed secure enough for use, it can be implemented in a beta environment to test real-world usability, and then pushed to a full release, where it enters the maintenance phase. The maintenance stage allows the application to be adjusted to organizational, systemic, and utilization changes.

SDLC Models

The following are the most important and popular SDLC models used in the industry:
  1. Waterfall Model
  2. Iterative Model
  3. Spiral Model
  4. V-Model
  5. Big Bang Model

SDLC Implementation

Two broad approaches to implementing an SDLC are waterfall and agile. The major difference between the two is that the waterfall process is more traditional and begins with a well-thought-out plan and a defined set of requirements, whereas an agile SDLC begins with less stringent guidelines and then makes adjustments as needed throughout the process. Agile development is known for its ability to quickly take an application in development to a full release at nearly any stage, making it well suited for applications that are updated frequently.

Tuesday, January 15, 2013

Software Testing Life Cycle (STLC)


Contrary to popular belief, software testing is not just a single activity. It consists of a series of activities carried out methodically to help certify your software product. These activities (stages) constitute the Software Testing Life Cycle (STLC).

The different stages in the Software Test Life Cycle:

  1. Requirement Analysis
  2. Test Planning
  3. Test Case Development
  4. Test Environment Setup
  5. Test Execution
  6. Test Cycle Closure

Each of these stages has definite entry and exit criteria, along with the activities and deliverables associated with it.
In an ideal world you would not enter the next stage until the exit criteria for the previous stage are met, but in practice this is not always possible. For this tutorial, we will focus on the activities and deliverables for the different stages in the STLC. Let's look into them in detail.

Requirement Analysis

During this phase, the test team studies the requirements from a testing point of view to identify the testable requirements. The QA team may interact with various stakeholders (client, business analyst, technical leads, system architects, etc.) to understand the requirements in detail. Requirements can be either functional (defining what the software must do) or non-functional (defining system performance, security, availability, and so on). An automation feasibility analysis for the given testing project is also done in this stage.

Activities

  • Identify types of tests to be performed. 
  • Gather details about testing priorities and focus.
  • Prepare Requirement Traceability Matrix (RTM).
  • Identify test environment details where testing is supposed to be carried out. 
  • Automation feasibility analysis (if required).

Deliverables 

  • RTM
  • Automation feasibility report (if applicable)
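
As an illustration, an RTM can start as a simple mapping from requirement IDs to the test cases that cover them. The sketch below (all IDs hypothetical) flags requirements that have no covering test yet.

    # A minimal RTM sketch; requirement and test case IDs are hypothetical.
    rtm = {
        "REQ-001": ["TC-001", "TC-002"],  # covered by two test cases
        "REQ-002": ["TC-003"],
        "REQ-003": [],                    # no coverage yet
    }

    # Report requirements that lack a covering test case.
    uncovered = [req for req, cases in rtm.items() if not cases]
    print("Requirements without test coverage:", uncovered)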

Test Planning

This phase is also called the Test Strategy phase. Typically, in this stage, a senior QA manager determines the effort and cost estimates for the project and prepares and finalizes the Test Plan.

Activities

  • Preparation of test plan/strategy document for various types of testing
  • Test tool selection 
  • Test effort estimation 
  • Resource planning and determining roles and responsibilities.
  • Training requirements

Deliverables 

  • Test plan/strategy document
  • Effort estimation document
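
A back-of-the-envelope sketch of how such an effort estimate might be computed is shown below; every number in it is a hypothetical assumption, and real estimation techniques (historical data, test point analysis, etc.) are more involved.

    # A rough test effort estimate; every number here is a hypothetical assumption.
    test_cases = 120          # planned test cases
    minutes_per_case = 15     # average design + execution time per case
    regression_factor = 1.3   # allowance for retesting defect fixes
    hours_per_day = 6         # productive testing hours per tester per day

    total_hours = test_cases * minutes_per_case * regression_factor / 60
    person_days = total_hours / hours_per_day
    print(f"Estimated effort: {total_hours:.0f} hours (~{person_days:.1f} person-days)")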

Test Case Development

This phase involves the creation, verification, and rework of test cases and test scripts. Test data is identified/created, reviewed, and then reworked as well.

Activities

  • Create test cases, automation scripts (if applicable)
  • Review and baseline test cases and scripts 
  • Create test data (if the test environment is available)

Deliverables 

  • Test cases/scripts 
  • Test data
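
A minimal sketch of a test case together with its test data, using pytest's parameterization; the validate_username function and its rules are hypothetical stand-ins for an application under test.

    import pytest

    def validate_username(name):
        # Hypothetical rule: 3-20 alphanumeric characters.
        return 3 <= len(name) <= 20 and name.isalnum()

    # Test data: (input, expected result) pairs covering valid and invalid cases.
    @pytest.mark.parametrize("name, expected", [
        ("alice", True),        # typical valid name
        ("ab", False),          # too short
        ("a" * 21, False),      # too long
        ("bad name!", False),   # disallowed characters
    ])
    def test_validate_username(name, expected):
        assert validate_username(name) == expected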

Test Environment Setup

The test environment determines the software and hardware conditions under which a work product is tested. Test environment set-up is one of the critical aspects of the testing process and can be done in parallel with the Test Case Development stage. The test team may not be involved in this activity if the customer or development team provides the test environment, in which case the test team is required to do a readiness check (smoke testing) of the given environment.

Activities 

  • Understand the required architecture, environment set-up and prepare hardware and software requirement list for the Test Environment. 
  • Set up the test environment and test data 
  • Perform smoke test on the build

Deliverables 

  • Environment ready with test data set up 
  • Smoke Test Results.
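
A minimal smoke test sketch, assuming the build under test is a web application reachable at a hypothetical test environment URL with a hypothetical set of key pages; only the Python standard library is used.

    import urllib.request

    BASE_URL = "http://test-env.example.com"  # hypothetical test environment

    def smoke_test():
        # Touch a few key endpoints; any non-200 response fails the check.
        for path in ("/health", "/login", "/search"):
            with urllib.request.urlopen(BASE_URL + path, timeout=10) as resp:
                assert resp.status == 200, f"{path} returned {resp.status}"
        print("Smoke test passed: all key endpoints responded")

    if __name__ == "__main__":
        smoke_test()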

Test Execution

During this phase the test team will carry out the testing based on the test plans and the test cases prepared. Bugs will be reported back to the development team for correction, and retesting will be performed.

Activities 

  • Execute tests as per plan
  • Document test results, and log defects for failed cases 
  • Map defects to test cases in RTM 
  • Retest the defect fixes 
  • Track the defects to closure

Deliverables 

  • Completed RTM with execution status 
  • Test cases updated with results 
  • Defect reports
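
A sketch of how execution status and defect mappings might be recorded so the RTM can be updated; all test case and defect IDs are hypothetical.

    # Execution results keyed by test case ID; all IDs are hypothetical.
    results = {
        "TC-001": {"status": "PASS", "defects": []},
        "TC-002": {"status": "FAIL", "defects": ["BUG-101"]},
        "TC-003": {"status": "FAIL", "defects": ["BUG-102"]},
    }

    failed = [tc for tc, r in results.items() if r["status"] == "FAIL"]
    open_defects = sorted(d for r in results.values() for d in r["defects"])
    print("Failed test cases to retest:", failed)
    print("Defects to track to closure:", open_defects)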

Test Cycle Closure

The testing team will meet, discuss, and analyze testing artifacts to identify strategies that should be implemented in the future, taking lessons from the current test cycle. The idea is to remove process bottlenecks for future test cycles and to share best practices for similar projects in the future.

Activities

  • Evaluate cycle completion criteria based on time, test coverage, cost, software, critical business objectives, and quality
  • Prepare test metrics based on the above parameters. 
  • Document the learning out of the project 
  • Prepare Test closure report 
  • Qualitative and quantitative reporting of the quality of the work product to the customer. 
  • Test result analysis to find the defect distribution by type and severity.

Deliverables 

  • Test Closure report 
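
For example, the defect distribution analysis mentioned above can be a simple tally over the cycle's defect log; the records below are hypothetical.

    from collections import Counter

    # Hypothetical defect log for the closed test cycle.
    defects = [
        {"id": "BUG-101", "severity": "High", "type": "Functional"},
        {"id": "BUG-102", "severity": "Low",  "type": "UI"},
        {"id": "BUG-103", "severity": "High", "type": "Functional"},
    ]

    print("By severity:", Counter(d["severity"] for d in defects))
    print("By type:", Counter(d["type"] for d in defects))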



Saturday, January 12, 2013

What is the difference between white box, black box, and gray box testing?

Black box testing is a testing strategy based solely on requirements and specifications. Black box testing requires no knowledge of internal paths, structures, or implementation of the software being tested.

White box testing is a testing strategy based on internal paths, code structures, and implementation of the software being tested. White box testing generally requires detailed programming skills.

There is one more type of testing called gray box testing. In this approach, we look into the "box" being tested just long enough to understand how it has been implemented. Then we close up the box and use that knowledge to choose more effective black box tests.

[Figure: how black box and white box testers view an accounting application]

The figure above shows how both types of testers view an accounting application during testing. Black box testers see only the basic accounting application from the outside, while white box testers know the internal structure of the application. In most scenarios white box testing is done by developers, as they know the internals of the application. In black box testing we check the overall functionality of the application, while in white box testing we do code reviews, examine the architecture, remove bad code practices, and do component-level testing.
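
The sketch below contrasts the two styles on one hypothetical function: the black box tests come purely from the specification, while the white box test targets the branch boundary that only a reader of the code would know about.

    def shipping_fee(order_total):
        # Hypothetical rule: orders of 50 or more ship free.
        if order_total >= 50:
            return 0.0
        return 5.0

    # Black box: exercise inputs and outputs from the specification only.
    assert shipping_fee(100) == 0.0
    assert shipping_fee(10) == 5.0

    # White box: knowing the implementation, target the >= 50 branch boundary.
    assert shipping_fee(50) == 0.0      # the boundary itself takes the free branch
    assert shipping_fee(49.99) == 5.0   # just below the boundary pays the fee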

Tuesday, January 8, 2013

What are the Types Of Testing?

Types Of Testing:

Performance testing
a. Performance testing is designed to test the run-time performance of software within the context of an integrated system. It is not until all system elements are fully integrated and certified as free of defects that the true performance of a system can be ascertained
b. Performance tests are often coupled with stress testing and often require both hardware and software instrumentation; that is, it is necessary to measure resource utilization in an exacting fashion. External instrumentation can monitor execution intervals and log events. By instrumenting the system, the tester can uncover situations that lead to degradation and possible system failure
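
As a minimal sketch of such instrumentation, the snippet below times repeated runs of a hypothetical operation and reports median and 95th-percentile latency; real performance tests would measure a deployed, fully integrated system under realistic load.

    import statistics
    import time

    def operation_under_test():
        sum(i * i for i in range(10_000))  # hypothetical stand-in workload

    samples = []
    for _ in range(100):
        start = time.perf_counter()
        operation_under_test()
        samples.append((time.perf_counter() - start) * 1000)  # milliseconds

    samples.sort()
    print(f"median: {statistics.median(samples):.2f} ms, "
          f"p95: {samples[int(len(samples) * 0.95)]:.2f} ms")
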
Security testing
If your site requires firewalls, encryption, user authentication, financial transactions, or access to databases with sensitive data, you may need to test these and also test your site’s overall protection against unauthorized internal or external access
Exploratory Testing
Often taken to mean a creative, internal software test that is not based on formal test plans or test cases; testers may be learning the software as they test it
Benefits Realization tests
With the increased focus on the value of business returns obtained from investments in information technology, this type of test or analysis is becoming more critical. The benefits realization test is a test or analysis conducted after an application is moved into production in order to determine whether the application is likely to deliver the original projected benefits. The analysis is usually conducted by the business user or client group who requested the project, and results are reported back to executive management
Mutation Testing
Mutation testing is a method for determining if a set of test data or test cases is useful, by deliberately introducing various code changes (‘bugs’) and retesting with the original test data/cases to determine if the ‘bugs’ are detected. Proper implementation requires large computational resources
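
A hand-rolled sketch of the idea follows; dedicated tools automate the mutation and re-run steps, and the functions here are hypothetical.

    def is_adult(age):
        return age >= 18    # original code

    def is_adult_mutant(age):
        return age > 18     # mutant: >= deliberately changed to >

    def run_tests(fn):
        # The existing test data; returns True if all assertions pass.
        try:
            assert fn(20) is True
            assert fn(10) is False
            assert fn(18) is True   # the boundary case is what kills the mutant
            return True
        except AssertionError:
            return False

    assert run_tests(is_adult)                                 # original passes
    print("mutant detected:", not run_tests(is_adult_mutant))  # True: test data is useful
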
Sanity testing
Typically an initial testing effort to determine if a new software version is performing well enough to accept it for a major testing effort. For example, if the new software is crashing systems every 5 minutes, bogging down systems to a crawl, or destroying databases, the software may not be in a ‘sane’ enough condition to warrant further testing in its current state
Build Acceptance Tests
Build Acceptance Tests should take less than 2-3 hours to complete (15 minutes is typical). These test cases simply ensure that the application can be built and installed successfully. Other related test cases ensure that Testing received the proper Development Release Document plus other build-related information (drop point, etc.). The objective is to determine if further testing is possible. If any Level 1 test case fails, the build is returned to developers un-tested
Smoke Tests
Smoke Tests should be automated and take less than 2-3 hours (20 minutes is typical). These test cases verify the major functionality at a high level. The objective is to determine if further testing is possible. These test cases should emphasize breadth more than depth. All components should be touched, and every major feature should be tested briefly by the Smoke Test. If any Level 2 test case fails, the build is returned to developers un-tested
Bug Regression Testing
Every bug that was “Open” during the previous build, but marked as “Fixed, Needs Re-Testing” for the current build under test, will need to be regressed, or re-tested. Once the smoke test is completed, all resolved bugs need to be regressed. It should take between 5 minutes and 1 hour to regress most bugs
Database Testing
Database testing is done manually in real time; it checks the data flow between the front end and the back end, observing whether operations performed on the front end take effect on the back end.
The approach is as follows:
While adding a record through the front end, check the back end to see whether the addition of the record took effect; do the same for delete and update. Other database testing includes checking for mandatory fields, checking for constraints and rules applied on the table, and sometimes checking stored procedures using SQL Query Analyzer
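
A self-contained sketch of that back-end check, using an in-memory SQLite database as a stand-in for the real back end; the table and data are hypothetical.

    import sqlite3

    conn = sqlite3.connect(":memory:")  # stand-in for the real back end
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT NOT NULL)")

    # Simulate the front-end 'add record' operation.
    conn.execute("INSERT INTO users (name) VALUES (?)", ("alice",))
    conn.commit()

    # Back-end check: the added record must actually be present.
    count = conn.execute(
        "SELECT COUNT(*) FROM users WHERE name = ?", ("alice",)
    ).fetchone()[0]
    assert count == 1, "front-end add was not reflected in the back end"
    print("record verified in back end")
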
Functional Testing (or) Business functional testing
All the functions in the application should be tested against the requirements document to ensure that the product conforms to what was specified (i.e., that it meets the functional requirements). This testing verifies that the crucial business functions are working in the application. Business functions are generally defined in the requirements document. Each business function has certain rules which can't be broken, whether they apply to the user interface behavior or to the data behind the application; both levels need to be verified. Business functions may span several windows or several menu options, so simply testing that all windows and menus can be used is not enough to verify the business functions. You must verify the business functions as discrete units of your testing:
* Study the SRS
* Identify unit functions
* For each unit function:
* Take each input function
* Identify equivalence classes
* Form test cases
* Form test cases for boundary values
* Form test cases for error guessing
* Form a unit function vs. test cases cross-reference matrix
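
Applying those steps to a single hypothetical input (an age field with a valid range of 18-65) might produce test cases like these, with one representative per equivalence class plus the boundary values:

    import pytest

    def accept_age(age):
        # Hypothetical rule: ages 18-65 inclusive are valid.
        return 18 <= age <= 65

    @pytest.mark.parametrize("age, expected", [
        (40, True),    # valid equivalence class
        (10, False),   # invalid class: below the range
        (70, False),   # invalid class: above the range
        (18, True),    # lower boundary
        (17, False),   # just below the lower boundary
        (65, True),    # upper boundary
        (66, False),   # just above the upper boundary
    ])
    def test_accept_age(age, expected):
        assert accept_age(age) == expected
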
User Interface Testing (or) structural testing
It verifies whether all the objects of the user interface meet the design specifications. It examines the spelling of button text, window titles, and labels, checks for consistency or duplication of accelerator key letters, and examines the positions and alignments of window objects
Volume Testing
Testing the application with voluminous amounts of data to see whether the application produces the anticipated results (boundary value analysis)
Stress Testing
Testing the application's response when there is a scarcity of system resources
Load Testing
It verifies the performance of the server under the stress of many clients requesting data at the same time
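
A minimal sketch of the idea, using threads in place of real clients; the request handler is a hypothetical stand-in for a server call, and real load tests would drive a deployed server with dedicated tooling.

    import time
    from concurrent.futures import ThreadPoolExecutor

    def handle_request(i):
        time.sleep(0.01)            # stand-in for server-side work
        return f"response {i}"

    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=50) as pool:   # 50 concurrent "clients"
        results = list(pool.map(handle_request, range(500)))
    elapsed = time.perf_counter() - start
    print(f"{len(results)} requests in {elapsed:.2f}s "
          f"({len(results) / elapsed:.0f} req/s)")
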
Installation testing
The tester should install the system to determine whether the installation process is viable based on the installation guide
Configuration Testing
The system should be tested to determine whether it works correctly with the appropriate software and hardware configurations
Compatibility Testing
The system should be tested to determine whether it is compatible with other systems (applications) that it needs to interface with
Documentation Testing
It is performed to verify the accuracy and completeness of user documentation
1. This testing is done to verify whether the documented functionality matches the software functionality
2. It verifies that the documentation is easy to follow, comprehensive, and well edited
If the application under test has context-sensitive help, it must be verified as part of documentation testing
Recovery/Error Testing
Testing how well a system recovers from crashes, hardware failures, or other catastrophic problems
Comparison Testing
Testing that compares software weaknesses and strengths to competing products
Acceptance Testing
Acceptance testing, which is black box testing, will give the client the opportunity to verify the system's functionality and usability prior to the system being moved to production. The acceptance test will be the responsibility of the client; however, it will be conducted with full support from the project team. The Test Team will work with the client to develop the acceptance criteria
Alpha Testing
Testing of an application when development is nearing completion; minor design changes may still be made as a result of such testing. Alpha testing is typically performed by end-users or others, not by programmers or testers
Beta Testing
Testing when development and testing are essentially completed and final bugs and problems need to be found before the final release. Beta testing is typically done by end-users or others, not by programmers or testers
Regression Testing
The objective of regression testing is to ensure software remains intact. A baseline set of data and scripts will be maintained and executed to verify changes introduced during the release have not “undone” any previous code. Expected results from the baseline are compared to results of the software being regression tested. All discrepancies will be highlighted and accounted for, before testing proceeds to the next level
Incremental Integration Testing
Continuous testing of an application as new functionality is added. This may require that various aspects of an application's functionality be independent enough to work separately before all parts of the program are completed, or that test drivers be developed as needed. This type of testing may be performed by programmers or by testers
Usability Testing
Testing for ‘user-friendliness’. Clearly this is subjective and will depend on the targeted end-user or customer. User interviews, surveys, video recording of user sessions, and other techniques can be used. Programmers and testers are usually not appropriate as usability testers
Integration Testing
Upon completion of unit testing, integration testing, which is black box testing, will begin. The purpose is to ensure that distinct components of the application still work in accordance with customer requirements. Test sets will be developed with the express purpose of exercising the interfaces between the components. This activity is to be carried out by the Test Team. Integration testing is termed complete when actual results and expected results are either in line or differences are explainable/acceptable based on client input
System Testing
Upon completion of integration testing, the Test Team will begin system testing. During system testing, which is a black box test, the complete system is configured in a controlled environment to validate its accuracy and completeness in performing the functions as designed. The system test will simulate production in that it will occur in the “production-like” test environment and test all of the functions of the system that will be required in production. The Test Team will complete the system test. Prior to the system test, the unit and integration test results will be reviewed by SQA to ensure all problems have been resolved. It is important for higher level testing efforts to understand unresolved problems from the lower testing levels. System testing is deemed complete when actual results and expected results are either in line or differences are explainable/acceptable based on client input