To make software perform as expected, we have to do two things well:
- Verification - "Are we building the software (product) right?" Software should conform to its specifications.
- Validation - "Are we building the right software (product)?" Software should do what the user really requires. Maybe our requirements are wrong and don't meet the user's needs.
Examples:
- Design and prototype walk-throughs with users are examples of Validation.
- Unit tests mapping requirements (or user stories in Agile development) to class behaviors are examples of Verification.
- User interviews to see if requirements/User stories match the user's expectations of how the system will perform are examples of Validation.
- Regression testing after refactoring the system to determine whether it behaves the same as the non-refactored version is an example of Verification.
- Testing's strengths: it checks the whole system, including software that you didn't write, and it documents the system's behavior.
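As a concrete sketch of Verification, here is a unit test that maps a user story ("a customer cannot withdraw more than their balance") to class behavior. The `Account` class and its names are hypothetical, invented purely for this example.

```python
# Hypothetical class under test, standing in for real application code.
class InsufficientFunds(Exception):
    pass

class Account:
    def __init__(self, balance=0):
        self.balance = balance

    def withdraw(self, amount):
        # Specification: a withdrawal must never exceed the balance.
        if amount > self.balance:
            raise InsufficientFunds(amount)
        self.balance -= amount

# Verification: the test checks conformance to the specification.
def test_withdraw_cannot_exceed_balance():
    acct = Account(balance=100)
    try:
        acct.withdraw(150)
        assert False, "expected InsufficientFunds"
    except InsufficientFunds:
        pass
    assert acct.balance == 100  # balance unchanged after the failed withdrawal

test_withdraw_cannot_exceed_balance()
```

Note that this test verifies conformance to the written requirement; whether that requirement is what the user actually wants is a Validation question.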
Techniques for getting the software right:
- Need to understand and validate software requirements.
In order to build tests, first we have to understand what the software does and what it should do.
- Need to apply multiple V&V techniques throughout the development cycle.
1. Inspections: Other people look over the code we write.
2. Design discussions: To make sure that the code we are going to build is going to meet security and performance requirements.
3. Static analysis: Check the program for well-formedness, for example integer overflow and null pointer dereference.
4. Testing: We have the software product itself and we see if it does the right thing.
5. Runtime verification: So we think the software is going to work as intended but in case we see any anomalous behavior, we want to be able to shut it down and inspect it.
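The runtime verification idea above can be sketched as a monitor that checks an invariant on every state update and shuts the component down when it observes anomalous behavior. The controller, the threshold, and all names here are hypothetical, assumed only for illustration.

```python
class MonitorViolation(Exception):
    """Raised when the runtime monitor detects anomalous behavior."""
    pass

class TemperatureController:
    MAX_SAFE = 100.0  # assumed safety threshold for this sketch

    def __init__(self):
        self.temperature = 20.0
        self.running = True

    def update(self, reading):
        self.temperature = reading
        self._monitor()  # check the invariant after every state change

    def _monitor(self):
        # Runtime verification: on a violation, shut down so the
        # system can be inspected rather than keep running in a bad state.
        if self.temperature > self.MAX_SAFE:
            self.running = False
            raise MonitorViolation(f"unsafe temperature {self.temperature}")

ctrl = TemperatureController()
ctrl.update(85.0)          # normal behavior, monitor passes
try:
    ctrl.update(120.0)     # anomalous reading trips the monitor
except MonitorViolation:
    pass
assert ctrl.running is False  # component has shut itself down
```

The key design choice is that the monitor runs alongside the deployed system: we believe the software works, but the check catches the cases we did not anticipate.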
<aside>
💡 We focus on software testing. And why are we focusing on this area of verification and validation rather than all the others? Well, software testing is the only defect detection technique that can check the whole system.
</aside>
But testing is always incomplete, and our goal is to make it effective despite incompleteness.

V&V techniques in our toolbox that are related to testing in one way or another:
- There's a tool called Lint that's designed to check whether or not your program has certain simple errors in it, like null pointer dereferences or integer overflows. These kinds of tools tend to be very pessimistic: they'll tell you that your program has all kinds of errors in it when, in fact, most of those are false warnings.
- The more often you retest, the sooner you find the errors.
- Chaos Monkey (Netflix/chaosmonkey): a tool that randomly terminates production instances to check that the system tolerates failures.
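To see why lint-style static analyzers are pessimistic, consider the hypothetical snippet below. Many analyzers report a possible `None` dereference at `cfg.upper()`, because they cannot always prove that the `flag` branch guarantees `cfg` is non-`None`, even though the code is safe at runtime. All names here are invented for illustration.

```python
def load(flag):
    # cfg is None on one branch, so a pessimistic analyzer may warn
    # about the dereference below even though it can never fail.
    cfg = "prod" if flag else None
    if flag:
        return cfg.upper()  # safe at runtime; may still be flagged statically
    return "default"

print(load(True))   # runs fine: prints "PROD"
print(load(False))  # prints "default"
```

The warning is a false positive here, but on similar-looking code the dereference really can fail, which is why the tools err on the side of reporting it.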
The Problem with Software Testing
Testing only samples a set of possible behaviors, and unlike physical systems, most software systems are discontinuous: there is no sound basis for extrapolating from tested to untested cases. To be sound, we would need to consider all possible states of the system.
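This discontinuity can be made concrete with a tiny contrived example: a function that is correct on every input except one. The defect and the magic value `4242` are invented for illustration; the point is that passing tests on nearby inputs tells us nothing about the untested neighbor.

```python
def buggy_abs(x):
    # A single discontinuous defect: wrong answer for exactly one input.
    if x == 4242:
        return -x
    return x if x >= 0 else -x

# Tests that sample all around the defect pass...
for x in [-10, 0, 10, 4241, 4243]:
    assert buggy_abs(x) == abs(x)

# ...yet the one untested neighbor fails.
assert buggy_abs(4242) != abs(4242)
```

A bridge that holds under 10 tons and 12 tons will hold under 11; software offers no such guarantee, which is why test selection matters so much.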