
Marine Corps Warfighting Laboratory

Background for the Basic Analytic Wargaming Course from the Naval Postgraduate School at MCB Quantico, 8 to 12 January 2018

Feedback from Participants

Questionnaires

Procedure

Analysts may develop a series of questions for which they seek answers (or opinions) from participants before the game, as the game unfolds, at the conclusion of a particular scenario or vignette, or at the conclusion of the whole process.

Substantial guidance already exists for the development of questionnaires. Software packages such as SurveyMonkey and LimeSurvey support composing questionnaires, distributing them (for example, by email), and collecting the responses.

Of course, each questionnaire needs to be tailored to its purpose, so it is difficult to give guidance for developing a specific one. Using software like SurveyMonkey or LimeSurvey to compose a questionnaire is the easy part. Ensuring the questionnaire will provide relevant results is far more challenging.
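
As a concrete illustration, the following sketch (in Python) shows one way a mixed questionnaire of multiple-choice and free-text items might be laid out as data before being entered into such a tool. The class and field names are assumptions made for illustration; they do not correspond to the import format of SurveyMonkey, LimeSurvey, or any other package.

    from dataclasses import dataclass, field
    from typing import Optional

    # Hypothetical in-house representation of a questionnaire. The field names
    # are illustrative only and do not match any survey tool's actual schema.
    @dataclass
    class Question:
        prompt: str
        kind: str                            # "multiple_choice" or "free_text"
        choices: Optional[list[str]] = None  # used only for multiple-choice items

    @dataclass
    class Questionnaire:
        title: str
        administered: str                    # "pre-game", "per-vignette", or "post-game"
        questions: list[Question] = field(default_factory=list)

    # Example: a short post-vignette questionnaire mixing both item types.
    q = Questionnaire(
        title="Vignette 2 feedback",
        administered="per-vignette",
        questions=[
            Question(
                prompt="How significant were logistics constraints in this vignette?",
                kind="multiple_choice",
                choices=["Not a factor", "Minor factor", "Major factor", "Decisive"],
            ),
            Question(
                prompt="Describe the issue that most affected your decisions, and why.",
                kind="free_text",
            ),
        ],
    )

    for item in q.questions:
        print(item.kind, "-", item.prompt)

Keeping the questionnaire in a structured form like this makes it easier to revise items between vignettes and to pair each response with the question that produced it.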

Multiple-choice questions can be used to reduce the time required to complete a questionnaire. However, given that war games can often steer into unanticipated areas, many such questions can become irrelevant soon after the game has begun. For example, before gaming begins the Study Team may anticipate that logistics will be a significant issue and include many multiple-choice questions on it. But as the game and discussion unfold, it may turn out that logistics is only a passing concern for most players. At this point the questions or choices may seem almost nonsensical in the context of what has transpired in the game. If players answer such questions, their replies may be contradictory and may even seem whimsical -- and not at all what the analyst may have expected.

[Figure: Observer Form AE6B (Observer Data Sheet, Army Experiment 6B)]

For many aspects of war gaming, open-ended questions that are framed in very general terms may be best. If the players (and respondents) are small in number, the efficiencies usually associated with multiple-choice questions may not pertain in any case. So free-text responses may be preferred.

A form of questionnaire that has been quite successful in war games is one that captures some contextual information by tick box or alpha-numeric code, and then has free-format text boxes for observation and recommendation (see figure).

The contextual information may be demographic information on the respondent (rank, military specialty, experience). Or it could specify which scenario, what staff branch (e.g., J1 through J6), and so on. In this example, from Canadian Army Experiment 6B on brigade-level command and control, the observer could enter reference codes for such elements as 'Critical Information Requirements' (CIR); 'Tactics, Techniques and Procedures' (TTP); and 'Unit Standard Operating Procedures' (USOP). Codes for all of these had been specified prior to the start of the game. If the respondent had an observation to make on some specific Unit Standard Operating Procedure, say on the use of the C2 system to call for air support, he or she had only to put the relevant paragraph number of the SOP in the grey area at the top and then write out the observation (what is the problem or issue?) and the recommendation (what do you think should change?).
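
As a rough illustration of how entries from a form of this kind might be captured for later collation, the sketch below (in Python) records the contextual codes alongside the free-text observation and recommendation, then writes them to a CSV file. The field names and example values are assumptions modelled on the description above, not the actual AE6B form layout.

    from dataclasses import dataclass
    import csv

    # Hypothetical record mirroring the kind of fields described above: a context
    # code (e.g., CIR, TTP, USOP), a paragraph reference, and then the free-text
    # observation and recommendation. Field names are illustrative only.
    @dataclass
    class ObserverEntry:
        observer_id: str
        scenario: str
        ref_code: str        # e.g., "USOP"
        ref_paragraph: str   # e.g., the SOP paragraph number, if applicable
        observation: str     # what is the problem or issue?
        recommendation: str  # what do you think should change?

    entries = [
        ObserverEntry("OBS-03", "Vignette 2", "USOP", "4.2.1",
                      "Call-for-air-support procedure in the C2 system was unclear.",
                      "Revise the relevant SOP paragraph to reflect the C2 workflow."),
    ]

    # Writing the entries to CSV keeps each piece of free text attached to its
    # context codes, which makes later sorting by reference code straightforward.
    with open("observer_entries.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["observer", "scenario", "code", "paragraph",
                         "observation", "recommendation"])
        for e in entries:
            writer.writerow([e.observer_id, e.scenario, e.ref_code,
                             e.ref_paragraph, e.observation, e.recommendation])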

Value

Questionnaires go directly to the source: the participants provide the information.

Some forms of questionnaire can be quickly analyzed, meaning results may be ready by the time a "quick look report" is needed (Step 13).
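
For example, multiple-choice responses can be tallied in a few lines; the sketch below (in Python, with made-up response data) produces the kind of simple per-choice count that could feed a quick look report.

    from collections import Counter

    # Hypothetical multiple-choice responses gathered after a vignette.
    responses = {
        "How significant were logistics constraints?": [
            "Minor factor", "Minor factor", "Major factor", "Not a factor",
            "Minor factor", "Major factor",
        ],
    }

    # A simple per-choice tally is often enough for a quick look report.
    for prompt, answers in responses.items():
        print(prompt)
        for choice, count in Counter(answers).most_common():
            print(f"  {choice}: {count}")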

Problems

Some common problems include:

  • Failure to anticipate. The game may turn in directions that the analysts did not anticipate, so the questions put to the players become irrelevant, while the questions that are really needed were never developed in advance.
  • Failure to test. Questions that have ambiguity or are framed in language unfamiliar to the respondents may result in nonsense answers. So, questionnaires should always be tested first on an audience with similar backgrounds to the participants.
  • Dissonance with context. For some respondents, the nature of the question may not relate to what they believe they have seen in the war game. They then have to "make up" what they think is the intent of the question. In such circumstances, misleading replies are highly likely.

Observations

Procedure

Getting direct observations from participants has considerable value. There are many modalities for this. The "Observer Data Sheet" above is one means. Another is to have a facilitator conduct a brain-storming session with a flip chart or white board.

It may be appealing to designate participants whose sole role during the game (Step 12) is to observe the conduct of the game and record their findings. However, some participants are likely to view such observers with suspicion -- "are they here to evaluate us?"

Value

If there is an independent group of observers, players can devote all of their energy to the game itself.

Problems
  • Bias of the observer, or arrogance or hubris on technical issues. There are many sources of bias on the part of an observer. For example, the observer may feel he or she knows more about a subject than has been presented during a game. The record they provide may then be based more on their own notions of what is right than on what they actually saw.
  • Failure to appreciate context. Something that appears during a game may not be universally applicable (outside of the game). Rather, it may be specific to some context within the game. If there is a lack of contextual information in the observer's records, a finding may appear more widely applicable than it should. Later, that observation may get applied to situations where it is no longer appropriate.
  • Failure to understand technical issues. Sometimes players are constrained by technical issues (e.g., the C2 system has developed a glitch). Or players may take a certain direction as they are already anticipating technical impediments to some alternative. When this occurs, the technical context should be recorded -- e.g., the players found a work-around for when the computer network failed. An observer who is unaware of technical issues that may have driven a player's response may deliver an incomplete or misleading record.
  • Suspicion from players. (Are the observers evaluating the players?) When the observers have not integrated with the players, the players may view them as outsiders who have come to grade their performance. This can be aggravated if observers walk around with clipboards and record material that is kept secret from players. The Study Team needs to be sensitive to this phenomenon and ensure that players will trust the observers to do them justice.

Interviews

Procedure

When interviewing participants of war games, two objectives should be adopted: to obtain the subject's special knowledge about the topic, and to obtain the subject's opinion about the topic. The interviewer should remain aware of which objective is the main focus at any time. Sometimes it will be best to draw out special knowledge and, after covering that ground, ask for opinions. Other times an interviewee may express an opinion and should then be obliged to provide the special knowledge that led to it.

A report consisting only of opinions will rarely be useful. A report consisting only of special knowledge may lack the conclusions or hypotheses that incorporate the subject's opinions into a useful product.
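
One simple way to keep the two objectives visible while taking notes is to tag each recorded point as special knowledge or opinion. The sketch below (in Python) is a hypothetical illustration of that bookkeeping, not a prescribed note-taking format.

    from dataclasses import dataclass

    # Hypothetical note structure: each point from an interview is tagged as
    # special knowledge or opinion so the balance of the record stays visible.
    @dataclass
    class InterviewNote:
        subject: str
        tag: str    # "knowledge" or "opinion"
        text: str

    notes = [
        InterviewNote("Player 4", "knowledge",
                      "The C2 system dropped the air-support request twice."),
        InterviewNote("Player 4", "opinion",
                      "A backup voice procedure should be made standard."),
    ]

    # A quick check of the balance: a record that is all opinion is rarely useful,
    # and one that is all knowledge may lack the conclusions that opinions supply.
    counts = {"knowledge": 0, "opinion": 0}
    for note in notes:
        counts[note.tag] += 1
    print(counts)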

Be prepared. The interviewer should have a basic knowledge of the subject matter, including popular jargon terms. A lack of basic knowledge has several disadvantages. First, the interviewee may not be at ease if constantly interrupted to explain basic issues. Second, the interviewer's credibility may decline, and with it the interviewee's willingness to open up.

The news reporter's five W's (who, what, when, where, why), with frequent support from how, may suffice as a framework for many interviews. Of course, the interviewer will want to couch the actual questions in more eloquent terms than simply and repeatedly asking: "But why?"

Have a list of questions prepared in advance. It seems obvious, but some people don't think of it. While you should be prepared to improvise and adapt, it makes sense to have a firm list of questions which need to be asked. However, not all questions on the list will have to be asked in every interview -- it will depend on context.

Providing a subject with a list of questions in advance may help them prepare. In interviewing participants in a war game, we generally are not trying to trick interviewees by asking unexpected questions. However, it is perfectly acceptable to ask about failures or other potentially embarrassing points -- as long as the discussion remains professional and the objective is to have others avoid similar mistakes in future.

Whether providing the text of the questions in advance is a good idea depends on the situation. For example, if you will be asking technical questions which might need a researched answer, then it helps to give the subject some warning. On the other hand, if you are looking for spontaneous answers, then it may be best to wait until the interview to pose the questions.

Try to avoid being restricted to a preset list of questions as this could inhibit the interviewer from improvising on some emergent topic. However, if you do agree to such a list before the interview (say, for consistency between subjects), stick to it.

Ask the subject if there are any particular questions he or she would like you to ask them. Use the subject as a collaborator; they may have a topic not on your list of questions but that is critical to the game. Then allow them to answer their own proposed question (and consider using it for other subjects too).

Listen. A common mistake is to be thinking about the next question while the subject is answering the current one, to the point that the interviewer misses some important information.

Value

Interviews can be more free-ranging than a questionnaire set in advance; the topics can be improvised in response to the game results.

Problems
  • Bias of interviewer. If an interviewer has biases (even if they are inadvertent), this may take the interview in inappropriate directions. It may also result in important topics being missed (if the interviewer thinks he or she already knows the right answer). Many biases may be inadvertent, and the interviewer may be unaware they are affecting the quality of the interview; training and experience with interview techniques should help overcome this.
  • Dishonesty from subject. Sometimes the subject will provide a dishonest response. When this happens, it is not likely to be malicious. Rather the subject may think the answer is true, although, for example, it could really be some myth from his or her culture. Or, a subject might give the interviewer an answer that the subject thinks the interviewer wants to hear ("no harm done if I am just trying to be nice").
  • Failure to pursue critical issues. From time to time an interviewer may not tackle critical issues to the extent they need to be covered. For example, the interviewer might defer to rank -- "If a general officer has made a blunder, who am I to second-guess him?"
  • Failure to record properly or promptly. Interviews should be recorded in real time if possible. If this is not possible (say due to trying to maintain spontaneity with the subject), the interviewer needs to record the notes from the interview immediately upon conclusion.
  • Intrusion of electronic means. If an interview is recorded, the recording means (audio or video) may become a distraction. For example, a subject may be intimidated -- e.g., some subjects may be unwilling to cover some topic knowing that a recording could come back to haunt them. Others may react by "playing to the camera". If the equipment needs attention -- sound checks, battery replacement -- this could interrupt the natural flow of the interview discussion.

After Action Review (AAR)

Procedure

AARs have been effective in both military and civilian applications. Civilian organizations have adopted them widely, and the procedures they have developed codify the best practices of their military counterparts.

Value

After Action Reviews have come to be associated with following up on training or on an actual operation, with a view to improving in the future. However, because the format will frequently be familiar to players, they can be of considerable benefit in the analysis of a war game.

Problems
  • Limitations of AARs alone. Good AARs will generally have participants acknowledging where they went wrong in various respects. But often a missing aspect is to address "why". If some player action led to a bad outcome that is now obvious to all, the players may have no motivation to address why the action was taken: that the player should have acted differently seems apparent, so why belabor it? But the analysts may need to know why the player took the action; perhaps it was not actually a bad decision (i.e., the logic was sound) and circumstances beyond the player's control produced the bad outcome. Having players repeatedly respond to analysts asking "why?" during an AAR can become tiresome, so other methods may be needed to get the necessary answers, e.g., one-on-one interviews after the AAR (or before?).
  • AAR becomes a soapbox. Some participants may use the stage of an AAR as a soapbox for some favorite topics. Rather than focusing on findings of a war game, they may go on tangents that reach into issues that were never part of the game itself.
  • Incomplete records. The Study Team may fail to record the AAR material (including critical aspects of context). In AARs conducted for training, it may suffice to have the participants depart with a good understanding in their own minds of what transpired, with no other record of the proceedings; that knowledge then disperses when the players disperse. In a war game, however, all of this needs to be recorded (a minimal record structure is sketched after this list).
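
As an illustration of what a fuller AAR record might carry, the sketch below (in Python, with hypothetical field names and example data) pairs each recorded point with the reasoning behind the action and the game context in which it arose, in addition to the observation and recommendation themselves.

    from dataclasses import dataclass

    # Hypothetical AAR record: beyond what happened, it keeps the player's
    # reasoning and the game context, which training-style AARs often leave out.
    @dataclass
    class AARRecord:
        vignette: str
        what_happened: str
        why_action_taken: str   # the player's reasoning, gathered during or after the AAR
        context: str            # game-specific circumstances that shaped the outcome
        recommendation: str

    record = AARRecord(
        vignette="Vignette 2",
        what_happened="The resupply convoy was delayed and the attack stalled.",
        why_action_taken="The player chose the coastal route because the C2 picture "
                         "showed the inland route as blocked.",
        context="The inland-route report later proved to be stale data.",
        recommendation="Flag the age of reports in the common operational picture.",
    )
    print(record.vignette, "-", record.recommendation)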