It's amazing how often new software bugs are reported when you introduce new testers to the testing process. It's quite likely that some of these are not new breakages in the latest software release. What is usually happening is that the new guys:
- read the test script differently. It is very difficult to write unambiguous instructions, and it would be unconventional to trial the test script on a lot of testers until any editorial problems were fixed - usually that would be a poor use of resources; or
- they read it carefully and take it literally. By contrast, your established testers have long since got used to skimming the instructions to remind themselves what to do, and rely mostly on their experience and memory of previous tests. It's the trade-off you make for their faster progress through the tests. It's very difficult to make yourself read a long, boring test script for the umpteenth time as if you'd never seen it before (at least, I find this so). In this case the testers, new and old, are making an interesting kind of error (if it is an error) - a Goldovsky error, the kind of mistake that experts overlook precisely because of their expertise, named after the conductor Boris Goldovsky, whose novice student spotted a misprint in a Brahms score that experienced musicians and proofreaders had missed.
This phenomenon of new-tester bug reports can go on for a long time if the test script is long and complicated, or gets updated over time, or both.
Usually this is seen as a nuisance - after some effort, it's discovered that the bugs aren't real things going wrong with the software; they are artifacts of the test case and the tester. It's easy to feel that the tester (or the test script author) "should have got it right in the first place". That would be a fair criticism if lots of time and resources had been allowed to get the test script exactly right and train up the testers. But how often do you see that? More usually the team decided on a trade-off (faster preparation for testing in return for a lower-quality script) and now someone is moaning about the downside of it. Or we didn't decide on a trade-off as such; we just didn't allow enough time to prepare the script, and the test-script errors coming through are how we are learning this uncomfortable truth.
You could see Goldovsky errors as a blessing. Just occasionally the new guy discovers something that is important and was overlooked by the specific way things were done before. You could argue that, for best bug discovery, you ought to rotate people on and off the team of testers to take advantage of this. Hmm, I'm not sure. On long projects, or systems being regression tested each release, staff turnover or other change tends to make this happen anyway without having to do it as a matter of policy.
Conversely, training your testers extensively teaches them to use the system just the way you do, and risks missing out on all the unexpected and creative things people do when they work from the instructions alone, and the discoveries that might come from that.
Anyway, your customers don't have the benefit of a script, probably won't read the instructions and will certainly do a whole pile of things you won't ever think of until you either do usability tests or launch the product and get customer feedback.