System Testing or Functional Testing:
System Testing is the next level of testing after Unit and Integration Testing. In System Testing, end-to-end testing is conducted in an environment representative of the live or production environment. The test cases are based completely on the Functional Specification, and an independent test team (independent of the development team) often carries out System Testing to avoid author bias.
System testing includes rigorous testing of both functional and non-functional requirements.
The testing phase starts with creation of the System Test Plan, usually when the Functional Specifications are about 50-80% complete. The most commonly used template is the IEEE 829 Test Plan, which defines the scope of the project, resources, schedule, timelines etc.
The specifications are reviewed by the test team for testability. Any errors, ambiguities or discrepancies are raised with the BA and development team for clarification. Then, the team starts work on designing test conditions, test cases, test data and the test environment. The test objectives recorded in the System Test Plan drive the design of test cases.
Some points to be borne in mind are:
- Priorities and risk
- Test design techniques like Equivalence Partitioning, Boundary Value Analysis etc. to optimize the test cases
- Reuse of tests in other projects
- Automation
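As a small illustration of these test design techniques, the Python sketch below (a made-up example, not from any particular project) derives Boundary Value Analysis test values and Equivalence Partitioning representatives for a field that accepts integers from 1 to 100:

```python
# Boundary Value Analysis: for a field accepting integers in [lo, hi],
# defects cluster at the boundaries, so test the boundary values and
# the values just inside and just outside them.
def boundary_values(lo, hi):
    """Classic BVA test values for an inclusive range [lo, hi]."""
    return [lo - 1, lo, lo + 1, hi - 1, hi, hi + 1]

# Equivalence Partitioning: one representative per partition is enough
# (invalid-low, valid, invalid-high).
def partition_representatives(lo, hi):
    """One representative value from each equivalence partition."""
    return [lo - 10, (lo + hi) // 2, hi + 10]

if __name__ == "__main__":
    print(boundary_values(1, 100))           # [0, 1, 2, 99, 100, 101]
    print(partition_representatives(1, 100)) # [-9, 50, 110]
```

Six boundary values plus three partition representatives cover the same input space that exhaustive testing of 1-100 would, with far fewer test cases.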
Other general process checks include making sure that:
a) A Configuration Management process is in place for tracking changes and making Builds.
b) A proper Defect tracking process is available.
Following this, in the Test Implementation & Execution phase, test cases, test procedures and test data are created, and the test environment is set up. The test cases are reviewed by the BA and developers.
Entry and Exit (completion) criteria for System Testing are defined and recorded in the System Test Plan.
The usual entry criteria to System Testing required of the Development team are:
- Code is complete and has been subjected to unit and integration testing. The test results are made available to the independent test team.
- There are no major outstanding defects, and no planned major changes or enhancements.
- The code or build passes the Smoke test (a minimum set of tests designed to check that the basic functionality is working and testing can proceed).
From the testing team side, the entry criteria would be:
- Test cases are complete and have been reviewed by BA, Developers etc
- Test Environment is ready and has been checked.
- Test Data required is ready.
- The application passes the Smoke tests
If the Smoke tests are successful, the Build is accepted, the testing cycle starts in earnest, and tests are run based on priority. Test results are recorded as passed or failed, and defects are raised for the failed tests. Usually two to three cycles of complete testing are planned.
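A Smoke test can be as simple as a handful of quick automated checks run against the build before detailed testing starts. The sketch below is a minimal, hypothetical example using Python's unittest module; the login function here is a stand-in for whatever basic functionality the real application exposes:

```python
import unittest

# Stand-in for the application under test; in practice this would call
# the real application (e.g. over HTTP or through its API).
def login(username, password):
    return username == "admin" and password == "secret"

class SmokeTest(unittest.TestCase):
    """Minimum set of tests: is the basic functionality working at all?"""

    def test_valid_login_succeeds(self):
        self.assertTrue(login("admin", "secret"))

    def test_invalid_login_fails(self):
        self.assertFalse(login("admin", "wrong"))

if __name__ == "__main__":
    # Run the smoke suite; if anything fails, the Build is rejected.
    suite = unittest.defaultTestLoader.loadTestsFromTestCase(SmokeTest)
    unittest.TextTestRunner(verbosity=2).run(suite)
```

If any smoke check fails, the Build is handed back to the development team rather than starting a full testing cycle against a broken build.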
In addition to functional tests, System Testing can also include:
- Installation testing of the application and manuals
- Non-functional testing of the application, e.g. Performance Testing, Load Testing, Stress Testing etc.
- Regression testing
- Localization testing
- Verifying the technical documentation – User Manuals etc
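Even without a dedicated performance tool, a basic response-time check can be scripted. Here is a minimal Python sketch; the fetch_report function and the 2-second threshold are hypothetical placeholders for a real operation and a real requirement:

```python
import time

# Hypothetical stand-in for an operation under test, e.g. generating a report.
def fetch_report():
    time.sleep(0.1)  # simulate work
    return "report data"

def timed_call(fn, threshold_seconds):
    """Run fn, measure elapsed wall-clock time, and compare to a threshold."""
    start = time.perf_counter()
    result = fn()
    elapsed = time.perf_counter() - start
    return result, elapsed, elapsed <= threshold_seconds

if __name__ == "__main__":
    result, elapsed, ok = timed_call(fetch_report, threshold_seconds=2.0)
    print(f"elapsed={elapsed:.3f}s within threshold={ok}")
```

This kind of check turns a vague non-functional requirement ("the report should be fast") into a pass/fail test result that can be recorded like any functional test.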
Some important points in System Testing:
Test Environment: The Test Environment should be representative of the live or final production environment. The test environment details need to be noted down: the operating system and version (e.g. Sun Solaris 8 vs Solaris 9, Red Hat Linux or SUSE Linux), database name and version, browser versions etc.
Databases: Where applicable, the database tables need to be checked to see if data has been stored correctly in the database tables, i.e. the right fields are populated, date formats are correct (mm/dd/yyyy vs dd/mm/yyyy) etc., as part of System Testing. A basic knowledge of databases and SQL comes in handy.
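A quick sketch of this kind of check, using Python's built-in sqlite3 module as a stand-in database (the table and column names are invented for illustration):

```python
import re
import sqlite3

# In-memory database standing in for the application's database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, order_date TEXT)")
conn.execute("INSERT INTO orders VALUES (1, '03/25/2024')")  # mm/dd/yyyy
conn.commit()

def check_date_format(connection, expected=r"^\d{2}/\d{2}/\d{4}$"):
    """Return any order_date values not matching the expected mm/dd/yyyy pattern."""
    rows = connection.execute("SELECT id, order_date FROM orders").fetchall()
    return [(row_id, d) for row_id, d in rows if not re.match(expected, d)]

print(check_date_format(conn))  # an empty list means all dates are stored correctly
```

The same pattern works against a real database by swapping the sqlite3 connection for the appropriate database driver.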
Unix: Many applications reside on Unix or Linux. So a basic understanding of Unix commands comes in handy.
Test Data: Test Data can be populated manually using the forms in the application (a time-consuming process unless an automated test script is used), by using SQL queries to populate the database, or from production.
Where possible for existing applications, if System Testing is focused on regression, defect fixes or new enhancements, it makes sense to procure a data cut from production. This data is very valuable for testing because it embodies real data. (The System Admin will have utilities to retrieve the data and anonymize it before it is used for testing.)
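Both ideas can be sketched with Python's built-in sqlite3 module: bulk-loading test data with SQL instead of keying it into forms, and anonymizing a name column the way a production data cut might be cleaned before testing (all table and column names here are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER, name TEXT, balance REAL)")

# Bulk-populate test data via SQL rather than through the application's forms.
rows = [(1, "Alice Smith", 120.50), (2, "Bob Jones", 0.0), (3, "Carol White", 999.99)]
conn.executemany("INSERT INTO customers VALUES (?, ?, ?)", rows)

# Anonymize real names (as might be done to a production data cut)
# while keeping the realistic distribution of the other fields.
conn.execute("UPDATE customers SET name = 'Customer_' || id")
conn.commit()

print(conn.execute("SELECT name FROM customers ORDER BY id").fetchall())
```

Anonymizing in place keeps the realistic shape of production data (volumes, value ranges, edge cases) without exposing real customer details to the test team.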
Resource Utilization: As part of System Testing, it goes without saying that we also track how the system resources are utilized. Sometimes this becomes very apparent when the machine hangs or a core dump occurs. In Windows, Task Manager is an easy way to keep an eye on process information. In Unix, commands like ps -ef help get an idea of resource usage and performance issues.
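On Unix, a Python test harness can also spot-check its own process with the standard resource module (which is Unix-only); a minimal sketch:

```python
import resource

def peak_memory():
    """Peak resident set size of the current process, as reported by getrusage.
    Units are kilobytes on Linux (bytes on macOS)."""
    return resource.getrusage(resource.RUSAGE_SELF).ru_maxrss

before = peak_memory()
data = [0] * 1_000_000  # allocate some memory to watch usage grow
after = peak_memory()
print(f"peak RSS before: {before}, after: {after}")
```

Logging this alongside test results helps spot leaks or runaway memory growth before they show up as a hang or core dump.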
Typical issues found during system testing are:
- Environmental issues
- Functional errors – something not working, e.g. clicking on a button and nothing happening, data stored incorrectly in the database, reports not displaying correct data etc.
- Non-functional issues, e.g. the system being difficult to use, response times being too high, excessive system resource utilization etc.
At the end of the System Testing cycle, the completion or Exit criteria defined in the System Test Plan are checked to evaluate whether testing can be stopped. Testing may be stopped because there is no more time, or because all critical issues have been fixed and retested, all planned tests have been completed, and the testers feel a level of confidence in the application.
A Test Summary report is sent to the stakeholders – the Project Manager, Development team lead etc. The Test Summary report is usually based on the IEEE 829 Test Summary report template and details what was planned, what went well, variances from the original plan and their reasons, outstanding defects etc.
After this, the testing is deemed complete and tidying up is done. Clean-up activities include updating test cases, archiving the test environment & test data, and anything else that could be useful for future projects. Also, a post-mortem of the project is conducted – lessons learned, what went well, what can be done better next time etc. are discussed and recorded. Metrics are collected and collated.
System Testing comes before User Acceptance Testing. System Testing is much more rigorous and detailed than User Acceptance Testing. If System Testing is conducted well, User Acceptance Testing is usually a breeze. We will look at User Acceptance Testing in detail in our next blog.
The BUG stops here!