WARNING! BORING POST: READ WITH CAUTION
Some people might wonder what exactly occurs in a QA Testing department.
Since that is where I spend one-third of every week, I thought I would give a rundown of the generalized processes that consume my life.
I start work sometime between 8 and 10. Flex-time is great.
There's a computer and a 360 on my desk. Right now I don't have a cubicle, so it's a little cramped but not too bad.
I log on and check Outlook -- look for reports of balance changes, skim the new A bugs (those with the highest priority), and read any other important announcements.
I open up the game folder and run an archive update. This will pull down the current build. Xbox builds are usually run at a scheduled time, so while I'm waiting for the PC I'll fire up the 360 and start a new game. Well, after I get my morning cup of coffee.
It's pretty clear within a few minutes whether a build is stable. Major bugs will go in right away, and if they're glaring enough we might roll back to an earlier build. The same thing will happen with PC -- often there's a pretty mad scramble to find a way to make a build stable enough for us to go through our testing cycle.
All testers have assignments, sections of the game in which they look for issues with AI, graphics, physics, saving/loading, pretty much anything at all.
I don't think I can overstress the importance of having organized and thorough test cases.
Sometimes the problem will be completely obvious -- missing texture, broken model. Sometimes it takes very careful sleuthing, backtracking from effects to causes, and in these cases it can be useful to speak with the other testers. Familiarity with the game systems is very important, and it would be rare to have one tester be an expert in them all (or any of them, really).
So as the day goes by I work my way through my assignments.
When I hit a bug, a whole different process kicks off. The first step is to see how often the bug reproduces. That means repeating everything I did prior to setting it off (to the best of my ability) and getting a save from before the bug occurs.
If a bug doesn't repeat (and it's pure guesswork how many times to try to get it to repeat before you decide it was a one-of-a-kind glitch), then I will move on, keeping a lookout for similar problems, trying to see if there is a general issue.
If a bug repeats, then I enter it into the database.
The bug database will ask for a summary of the bug (short and simple terms, to make it easier for other people to search for known issues) and has dropdown boxes to classify the issue -- what build it occurred on, what type of system is affected, what percentage of the time it occurs.
There are also two large text boxes. The first is for a description of what happens -- this is a good place to put down any incidental information that might be pertinent.
The second text box is for the steps to reproduce. This is especially tedious. You have to write down exactly what someone else needs to do in order to get the bug to occur. The better the description, the less likely the bug will come back to you with a request for more information.
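To make the shape of a bug report concrete, here's a rough sketch of those fields as a record -- the field names and types are my own illustration, not the actual database schema:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a bug report; names are illustrative,
# not taken from any real bug-tracking database.
@dataclass
class BugReport:
    summary: str        # short, searchable one-liner
    build: str          # the build the bug occurred on
    system: str         # affected system (AI, graphics, physics, ...)
    repro_rate: int     # percentage of attempts that reproduce it
    description: str    # what happens, plus any incidental details
    # Exact steps someone else must follow to trigger the bug:
    repro_steps: list[str] = field(default_factory=list)

report = BugReport(
    summary="Missing texture on barn roof",
    build="0412",
    system="graphics",
    repro_rate=100,
    description="Roof renders flat magenta after loading the farm save.",
    repro_steps=["Load farm save", "Walk to barn", "Look at roof"],
)
```

The better the repro_steps list, the less likely the bug bounces back for more information.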
Another side of QA is performing regressions.
After a bug is marked fixed, it goes to the QA lead, who assigns it to a team member; it then enters that tester's queue.
So, the breakdown of the life cycle of an ideal bug might look something like this:
1. A bug is entered in the database.
2. The bug is assigned to be fixed -- art bugs go to artists, code bugs go to programmers, and so on.
3. The bug is fixed and gets assigned back to QA.
4. The bug is assigned to a QA team member -- they attempt to reproduce the bug.
5. If the bug does not repeat, it is verified in the system and closed out; if it occurs as described, it is failed and goes back to Step 2.
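The steps above amount to a small state machine. Here's a sketch of it -- the state names are my own, purely for illustration:

```python
# Hypothetical sketch of the idealized bug life cycle described above.
# States: entered -> assigned -> fixed -> regressed -> closed (or back to assigned).
def next_state(state: str, reproduced: bool = False) -> str:
    """Advance a bug one step through the life cycle."""
    if state == "regressed":
        # Step 5: if it still occurs, fail it back to Step 2; otherwise close it.
        return "assigned" if reproduced else "closed"
    order = ["entered", "assigned", "fixed", "regressed"]
    return order[order.index(state) + 1]

# An ideal bug: fixed on the first try, never reproduces at regression.
state = "entered"
while state != "closed":
    state = next_state(state, reproduced=False)
```

A real bug can loop through steps 2-5 several times before it finally closes out.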
This is the essence of QA. There are other tasks that more experienced members of the team handle (TCR stuff, which is something so tedious that, if discussed, it would triple the Boring Level of this post).