Tuesday, 13 September 2011

Testing Methodologies

Black box testing
not based on any knowledge of internal design or code. Tests are based on requirements
and functionality.

White box testing
based on knowledge of the internal logic of an application's code. Tests are based on coverage of code statements, branches, paths, and conditions.

Unit testing
the most micro scale of testing; to test particular functions or code modules. Typically done by the programmer and not by testers, as it requires detailed knowledge of the internal program design and code. Not always easily done unless the application has a well-designed architecture with tight code; may require developing test driver modules or test harnesses.
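
As a rough illustration, here is a minimal unit test for a hypothetical word_count() function, using Python's unittest module as the test harness; the function and names are assumptions for the example, not code from any particular project.

# A minimal unit-test sketch: one small function tested in isolation.
import unittest


def word_count(text: str) -> int:
    """Count whitespace-separated words in a string."""
    return len(text.split())


class WordCountTest(unittest.TestCase):
    def test_empty_string_has_no_words(self):
        self.assertEqual(word_count(""), 0)

    def test_counts_multiple_words(self):
        self.assertEqual(word_count("unit testing in isolation"), 4)


if __name__ == "__main__":
    unittest.main()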

Incremental integration testing
continuous testing of an application as new functionality is added; requires that various aspects of an application's functionality be independent enough to work separately before all parts of the program are completed, or that test drivers be developed as needed; done by programmers or by testers.

Integration testing
testing of combined parts of an application to determine if they function together correctly.
The parts can be code modules, individual applications, client and server applications on a network, etc. This type of testing is especially relevant to client/server and distributed systems.
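
To make the idea concrete, the sketch below wires two hypothetical modules together (an in-memory OrderStore and an OrderService that uses it) and tests them as a combined unit; all names are illustrative assumptions, not a real system.

# An integration-test sketch: two modules exercised together through one test.
import unittest


class OrderStore:
    """A simple in-memory store standing in for a persistence module."""

    def __init__(self):
        self._orders = {}

    def save(self, order_id, amount):
        self._orders[order_id] = amount

    def load(self, order_id):
        return self._orders[order_id]


class OrderService:
    """Business-logic module that depends on the store."""

    def __init__(self, store):
        self.store = store

    def place_order(self, order_id, amount):
        self.store.save(order_id, amount)
        return self.store.load(order_id)


class OrderIntegrationTest(unittest.TestCase):
    def test_service_and_store_work_together(self):
        service = OrderService(OrderStore())
        self.assertEqual(service.place_order("A-1", 42.0), 42.0)


if __name__ == "__main__":
    unittest.main()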

Functional testing
black-box type testing geared to the functional requirements of an application; this type of testing should be done by testers. This doesn't mean that programmers shouldn't check that their code works before releasing it (which of course applies to any stage of testing).

System testing
black box type testing that is based on overall requirement specifications; covers all combined parts of a system.

End-to-end testing
similar to system testing; the macro end of the test scale; involves testing of a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.

Sanity testing
typically an initial testing effort to determine if a new software version is performing well enough to accept it for a major testing effort. For example, if the new software is crashing systems every 5 minutes, bogging down systems to a crawl, or destroying databases, the
software may not be in a sane enough condition to warrant further testing in its current state.

Regression testing
re-testing after fixes or modifications of the software or its environment. It can be difficult to determine how much re-testing is needed, especially near the end of the development cycle. Automated testing tools can be especially useful for this type of testing.
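
One common, easily automated form of this is to keep a test that reproduces each fixed defect and re-run it on every build. The sketch below assumes a hypothetical parse_price() function and bug number purely for illustration.

# A regression-test sketch: a test pinned to a previously fixed defect.
import unittest


def parse_price(text: str) -> float:
    """Parse a price string; the original (hypothetical) defect crashed on thousands separators."""
    return float(text.replace(",", ""))


class RegressionTests(unittest.TestCase):
    def test_bug_1234_price_with_thousands_separator(self):
        # Re-tests the exact input that exposed the original defect.
        self.assertEqual(parse_price("1,299.99"), 1299.99)


if __name__ == "__main__":
    unittest.main()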

Acceptance testing
final testing based on specifications of the end-user or customer, or based on use by end-users/customers over some limited period of time.

Load testing
testing an application under heavy loads, such as testing of a web site under a range of loads to determine at what point the system's response time degrades or fails.
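
A very rough sketch of the idea, using only Python's standard library: issue batches of concurrent requests at increasing concurrency levels and record the average response time, so you can see where it starts to degrade. The URL is a placeholder, not a real endpoint.

# A load-test sketch: increasing concurrency levels against one endpoint.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "http://localhost:8000/health"  # hypothetical endpoint under test


def timed_request(url: str) -> float:
    """Fetch the URL once and return the elapsed time in seconds."""
    start = time.perf_counter()
    with urllib.request.urlopen(url, timeout=10) as resp:
        resp.read()
    return time.perf_counter() - start


def run_load_step(concurrency: int) -> float:
    """Fire `concurrency` requests in parallel and return the average time."""
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        timings = list(pool.map(timed_request, [URL] * concurrency))
    return sum(timings) / len(timings)


if __name__ == "__main__":
    for level in (1, 10, 50, 100):
        print(f"{level:>4} concurrent requests: avg {run_load_step(level):.3f}s")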

Stress testing
term often used interchangeably with load and performance testing. Also used to describe such tests as system functional testing while under unusually heavy loads, heavy repetition of certain actions or inputs, input of large numerical values, large complex queries to a database system, etc.

Performance testing
term often used interchangeably with stress and load testing. Ideally performance testing (and any other type of testing) is defined in requirements documentation or QA or Test Plans.

Usability testing
testing for user-friendliness. Clearly this is subjective, and will depend on the targeted end-user or customer. User interviews, surveys, video recording of user sessions, and other techniques can be used. Programmers and testers are usually not appropriate as usability testers.

Install/uninstall testing
testing of full, partial, or upgrade install/uninstall processes.

Recovery testing
testing how well a system recovers from crashes, hardware failures, or other catastrophic problems.

Security testing
testing how well the system protects against unauthorized internal or external access, willful damage, etc.; may require sophisticated testing techniques.

Compatibility testing
testing how well software performs in a particular hardware/software/operating system/network/etc. environment.

Exploratory testing
often taken to mean a creative, informal software test that is not based on formal test plans or test cases; testers may be learning the software as they test it.

Ad-hoc testing
similar to exploratory testing, but often taken to mean that the testers have significant understanding of the software before testing it.

User acceptance testing
determining if software is satisfactory to an end-user or customer.

Comparison testing
comparing software weaknesses and strengths to competing products.

Alpha testing
testing of an application when development is nearing completion; minor design changes may still be made as a result of such testing. Typically done by end-users or others, not by
programmers or testers.

Beta testing
testing when development and testing are essentially completed and final bugs and problems need to be found before final release. Typically done by end-users or others, not
by programmers or testers.

Mutation testing
a method for determining if a set of test data or test cases is useful, by deliberately introducing various code changes (bugs) and retesting with the original test data/cases to determine if the bugs are detected. Proper implementation requires large computational resources.
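
The toy example below illustrates the idea by hand: the same small test suite is run against the original function and against a deliberately mutated copy, and a useful suite "kills" the mutant by failing on it. Real mutation tools automate the generation of many such mutants, which is where the large computational cost comes from. The function and mutation here are illustrative assumptions.

# A toy mutation-testing sketch: does the test suite detect a seeded bug?
def is_adult(age: int) -> bool:
    return age >= 18          # original implementation


def is_adult_mutant(age: int) -> bool:
    return age > 18           # mutation: >= changed to >


def test_suite(fn) -> bool:
    """Return True if all checks pass for the given implementation."""
    return fn(18) is True and fn(17) is False


if __name__ == "__main__":
    print("original passes:", test_suite(is_adult))             # expected: True
    print("mutant killed:  ", not test_suite(is_adult_mutant))  # expected: True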

Bug Life Cycle

Wednesday, 10 August 2011

Capability Maturity Model Integration for Software Engineering (CMMI - SW)

The Capability Maturity Model (CMM), the vision of Watts Humphrey, was developed to assess the capability of organizations taking up large software development projects for the Department of Defense, USA.
CMM addresses process improvement in software development organizations. CMM identifies a set of guidelines that need to be implemented for producing quality software.

In organizations that develop both hardware and software, Total Quality Management (TQM) has to be followed. CMM is confined to software quality management of the organization whereas TQM addresses both hardware and software quality management.

In the CMMI framework there are two representations: ‘staged’ and ‘continuous’. An organization can choose either the staged representation or the continuous representation for its software process improvement.
In the staged representation, five maturity levels are defined and each level has specific process areas (level 1 >> level 2 >> level 3, and so on). If the organization chooses the continuous representation, it can select its own order of improvement.

Levels of Software Process Maturity
Based on the software process maturity, an organization can be at one of the five maturity levels.

Level 1 (Initial):
Organizations at this level execute projects using ad hoc methods. Development is generally disordered.

Level 2 (Managed):
Organizations at this level follow a defined process for the execution of each project. The requirements are managed as per a defined process. Seven process areas are defined at this level, and an organization is assessed as a Level 2 organization if it implements all seven of these process areas.

Level 3 (Defined):
Organizations at this level have a set of process definitions across the organization, and processes related to a particular project are derived from the organization-wide processes. As compared to Level 2 organizations, processes are more clearly defined and efforts are made to continuously improve the process definitions. There are 11 process areas in this level.

Level 4 (Quantitatively Managed):
Organizations at this level define quantitative objectives for process performance and product quality. Sub-processes are defined for the processes and, wherever possible, these sub-processes are quantitatively managed. There are 3 process areas in this level.

Level 5 (Optimizing):
Organizations at this level continuously improve their process and product performance through innovation and technology. There are 5 process areas in this level.

Source: Software Testing by Dr. K.V.K.K. Prasad

Testing Article - Bug Reporting

Finding a bug is not a big deal; getting a bug fixed is the really tough part for a tester. Bug reports are the primary work product of most testers. The better your reports, the better your reputation. Programmers rely on your reports for vital information. Good reporting of good bugs earns you a good reputation.

You are not usually present when your bug report is received and read. When you write a bug report, you are asking a programmer to do some more work. Much bug fixing is done on their time - after hours or on weekends. You are asking them to give this time up for the bug you found.

To get a bug fixed, you have to convince the Change Control Board to approve the fix, or the programmer to fix it on his own (when the board isn't looking).

Because so many people read and rely on bug reports, take the time to make each report informative and understandable.

Keep clear the difference between severity and priority. Severity refers to the impact of the bug. Priority indicates when your company wants it fixed. Severity doesn’t change unless you learn more about hidden consequences. Priorities change as a project progresses.

Always report nonreproducible errors; they may be time bombs. Sometimes the program misbehaves in a way that you can't replicate. You see the failure once, but don't know how to get it again. If that happens to a customer, it will erode confidence in the product.

When you report a non-reproducible bug, make it clear that you cannot replicate it. Some tracking systems have a field for this (can you reproduce the bug: yes/no/unknown). Screenshots (Print Screen) and video recordings can help you prove the existence of the error.
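
As a simple illustration of keeping these attributes separate, here is a hypothetical bug-report record with distinct severity, priority, and reproducibility fields; the field names and values are assumptions, not the schema of any particular tracking system.

# A sketch of a bug-report record: severity, priority, and reproducibility
# are deliberately separate fields.
from dataclasses import dataclass, field
from typing import List


@dataclass
class BugReport:
    summary: str
    steps_to_reproduce: List[str]
    severity: str        # impact of the bug, e.g. "critical", "major", "minor"
    priority: str        # when the team wants it fixed, e.g. "P1".."P4"
    reproducible: str    # "yes", "no", or "unknown"
    attachments: List[str] = field(default_factory=list)  # screenshots, videos


report = BugReport(
    summary="Report crashes when exported with an empty date range",
    steps_to_reproduce=["Open Reports", "Leave dates blank", "Click Export"],
    severity="major",        # impact: stays fixed unless new consequences emerge
    priority="P2",           # may be rescheduled as the project progresses
    reproducible="unknown",  # seen once; evidence attached
    attachments=["export-crash.png"],
)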

When you are told that one of the bugs you reported has been fixed, test it as soon as you can. Giving prompt attention to verifying fixes shows respect to the programmer and makes it more likely that he will respond quickly to your bug reports in the future.

Bug reports should be closed by the tester. When a bug has been marked as resolved, the tester should review it. If the bug report was rejected as non-reproducible or not understandable, the tester should fix the report. If the bug was rejected as a non-bug, the tester should decide whether to gather additional data in order to challenge the rejection.

No bug should be marked as closed unless it has been reviewed and closed by the tester.

Source:
      Books
            Lessons Learned in Software Testing: A Context-Driven Approach by Cem Kaner
            Effective Software Testing by Elfriede Dustin
This article is the mixture of information gathered from these two books.

Difference Between Quality Assurance and Quality Control

Quality Assurance (QA)
The system implemented by an organization which assures outside bodies that the data generated is of proven and known quality and meets the needs of the end user. This assurance relies heavily on documentation of processes, procedures, capabilities, and monitoring of such.

Quality Control (QC)
Those operations undertaken in the field to ensure that the data produced is within known measures of accuracy and precision.

Dimensions of Quality

F: Functionality
U: Usability
R: Reliability
P: Performance
S: Supportability

Quality people build quality products, quality organizations, and quality life.

Have no doubt, quality people produce quality products, services, life experiences, etc. These are the people who radiate quality everywhere they go, and in everything they do. No exceptions and no shortcuts. It's as if they have a quality aura about them. Everything they touch turns out to be high quality.

Rest assured that low-quality people will produce low-quality products and services. These are the ones who send you mistake-ridden resumes, come unprepared to the interview, come late to work, blame others for their mistakes, and are always complaining.

So, if you are planning to succeed in work, life, or anything else, befriend, hang around, hire, and deal with quality people. Life is too short to waste on low-quality employees, suppliers, friends, etc. If you are going to do it, DO only Quality People!

Charles' Six Rules of Unit Testing

1. Write the test first.
2. Never write a test that succeeds the first time.
3. Start with the null case, or something that doesn't work.
4. Don't be afraid of doing something trivial to make the test work.
5. Loose coupling and testability go hand in hand.
6. Use mock objects (see the sketch below).
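
A minimal sketch of rules 5 and 6: the unit under test receives its collaborator (a hypothetical payment gateway) as a parameter, so a Mock from Python's unittest.mock can stand in for the real service. All names here are illustrative assumptions.

# A mock-object sketch: the gateway dependency is injected and replaced
# by a Mock, so the test runs without any external system.
from unittest.mock import Mock


def checkout(cart_total, gateway):
    """Charge the total through the injected gateway and return a confirmation."""
    receipt = gateway.charge(cart_total)
    return {"status": "paid", "receipt": receipt}


def test_checkout_charges_the_gateway():
    gateway = Mock()
    gateway.charge.return_value = "R-001"

    result = checkout(25.0, gateway)

    gateway.charge.assert_called_once_with(25.0)
    assert result == {"status": "paid", "receipt": "R-001"}


if __name__ == "__main__":
    test_checkout_charges_the_gateway()
    print("mock-based test passed")

Because the gateway is passed in rather than hard-coded, the unit can be tested in isolation; that loose coupling is exactly what rule 5 is pointing at.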

Sunday, 7 August 2011

What are Smoke Testing and Sanity Testing?


Smoke Testing:
Smoke testing originated in the hardware testing practice of turning on a new piece of hardware for the first time and considering it a success if it does not catch fire and smoke. In the software industry, smoke testing is a shallow and wide approach in which all areas of the application are tested without going into too much depth. Smoke testing is a normal health check of a build of an application before taking it into in-depth testing.


Sanity Testing:
A sanity test is a narrow regression test that focuses on one or a few areas of functionality. Sanity testing is usually narrow and deep: it verifies that specific requirements or a recent change work as expected, rather than checking all features breadth-first.