definitive method of assessing HQ performance. Each approach assesses one
particular aspect of the HQ operation. For example, results of wargame simulations
assess overall task effectiveness but do not provide insight as to information
requirements within the HQ. Hence, multiple measures need to be used to build up a
picture of the overall performance. A composite approach to evaluating HQ
performance is therefore required.
The measures used to evaluate performance also need to be grounded in the aim of
the investigation. That is, the aims of the research determine which measures are
appropriate. For example, a training needs analysis requires different information
from an individual skills examination.
There are several different methods of measuring both taskwork and teamwork.
These include:
• Objective methods: These are used to collect empirical data on the information
flow among the team, the team dynamics, and the task characteristics.
Observational techniques, such as behavioural and task load checklists, are used
to formalise the process of data collection.
• Subjective methods: These provide the individuals' perceptions of teamwork and
task characteristics, using questionnaires and structured interviews. This type of
data provides useful insight into individuals' perceptions of changes to their
environment (e.g. the introduction of BCSS). It is also useful for identifying
possible barriers to introducing changes to the system.
• Outcome measures: These include the results of wargame simulations (e.g. the
number of enemy, friendly and civilian casualties, the time taken to complete the
mission, etc.). They provide an objective measure of performance that can be
empirically linked to the observations and subjective data.
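The composite approach described above can be sketched as a simple record that keeps the three classes of data side by side so they can be cross-referenced. The field names and values below are purely illustrative assumptions, not measures taken from the report:

```python
from dataclasses import dataclass, field

@dataclass
class HQEvaluation:
    """Composite record for one exercise: objective observations,
    subjective reports, and simulation outcomes are held together
    so each can be linked to the others in later analysis.
    All field names and values are illustrative."""
    objective: dict = field(default_factory=dict)   # observational checklist data
    subjective: dict = field(default_factory=dict)  # questionnaire / interview data
    outcome: dict = field(default_factory=dict)     # wargame simulation results

# A hypothetical exercise record combining all three measure classes.
ex = HQEvaluation(
    objective={"comms_events_per_hour": 42, "task_load_rating": 3.5},
    subjective={"perceived_workload": 4, "bcss_usability": 3},
    outcome={"mission_time_min": 95, "friendly_casualties": 2},
)
print(ex.outcome["mission_time_min"])
```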
1.4 Observational Methods
The development of objective observational techniques that can be used to collect
data on team and task behaviour is the main focus of this report. To date,
behavioural observations conducted by DSTO analysts have tended to be informal.
For example, analysts observe an exercise, making informal notes of what are
believed to be the salient characteristics. This approach has strong value in
generating insights and lessons learned. However, it does not allow the collection of
formal data on team processes after the introduction of systems such as BCSS. More
importantly, informal observation does not yield design principles that can be
incorporated into future systems.
In contrast to this approach (and to the collection of subjective data via the use of
such methods as questionnaires, Structured Interviews and After Action Reviews),
formally observing and categorising behaviour allows analysis of what is happening
and in what sequence. Observing provides information on what actually happens
rather than relying on the subjects' perception of what was occurring. This is an
important distinction when the aim is to examine team processes and inform future
design processes for command support tools. While an individual's subjective
experience does affect performance, and should be taken into account, assessing
the team as objectively as possible requires observing it in operation.
Categorisation of observed behaviours needs to be made explicit when using this
method (see Section 3 for a detailed description of methods involved in formally
collecting behavioural data). The categorisations or rating scales should allow the
observer, after suitable training, to reliably score behaviour. This is an important
step. The reliability of a rating scale can be calculated by comparing the degree of
correlation across independent observers using the same scale on the same event. A
high degree of correlation between the independent observers shows that the scale
can be used to generate reliable information.
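The reliability check described above can be sketched as follows: two trained observers independently score the same events on the same scale, and their ratings are correlated. The ratings below are synthetic, purely for illustration:

```python
def pearson_r(xs, ys):
    """Pearson correlation between two sets of ratings."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical scores from two independent observers rating the same
# eight events on the same behavioural scale (1-5).
observer_a = [4, 3, 5, 2, 4, 3, 5, 1]
observer_b = [4, 3, 4, 2, 5, 3, 5, 1]

r = pearson_r(observer_a, observer_b)
print(f"inter-rater correlation r = {r:.2f}")  # high r -> reliable scale
```

For categorical (rather than continuous) rating scales, an agreement statistic such as Cohen's kappa is the more usual choice, since it corrects for chance agreement.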
Smith-Jentsch, Johnston and Payne (1998) developed reliable and diagnostic ratings
of critical team processes. They advocate an event-based approach to obtaining
measures of individual and team processes that can be empirically linked to
important outcomes. Their research was conducted as part of the TADMUS project,
and defined four factors that were highly correlated with performance:
• Communication (how information is exchanged),
• Information Transfer (what information is exchanged),
• Team Supportive Behaviours (how the team interacts), and
• Team Initiative (defining goals and roles).
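The idea of empirically linking factor ratings to outcomes can be sketched as a regression of an outcome measure on the four factor scores, with R² as the proportion of outcome variance accounted for. All data below are synthetic and the weights are arbitrary assumptions, not results from the studies cited:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20  # hypothetical number of observed teams/exercises

# Four factor ratings per team (Communication, Information Transfer,
# Supportive Behaviours, Initiative), each on a 1-5 scale.
X = rng.uniform(1, 5, size=(n, 4))

# Synthetic outcome driven mostly by the ratings plus noise, so the
# regression has a genuine relationship to recover.
true_w = np.array([0.5, 0.8, 0.3, 0.4])  # illustrative weights only
y = X @ true_w + rng.normal(0, 0.5, size=n)

# Ordinary least squares with an intercept; R^2 is the fraction of
# outcome variance accounted for by the four process factors.
A = np.column_stack([np.ones(n), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
resid = y - A @ coef
r2 = 1 - resid.var() / y.var()
print(f"variance accounted for: R^2 = {r2:.2f}")
```

A figure such as the 73% of variance reported for the TPOM corresponds to R² = 0.73 in this framing.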
In a similar study by Serfaty, Entin and Deckert (1994), 73% of the variance was
accounted for by the Team Performance Outcome Measure (TPOM). Likewise,
Serfaty and Entin (1997), using a teamwork observational form, found that 15