The U.S. Army Test and Evaluation Command (ATEC) is responsible for the operational testing and evaluation of Army systems in development. ATEC requested that the National Research Council form the Panel on Operational Test Design and Evaluation of the Interim Armored Vehicle (Stryker) to explore three issues concerning the initial operational test plans for the Stryker/Interim Brigade Combat Team (IBCT). First, the panel was asked to examine the measures selected to assess the performance and effectiveness of the Stryker/IBCT in comparison both to requirements and to the baseline system. Second, the panel was asked to review the test design for the Stryker/IBCT initial operational test to see whether it is consistent with best practices. Third, the panel was asked to identify the advantages and disadvantages of techniques for combining operational test data with data from other sources and types of use. In this report the panel presents findings, conclusions, and recommendations pertaining to the first two issues: measures of performance and effectiveness, and test design. The panel intends to prepare a second report that discusses techniques for combining information.
The U.S. Army Test and Evaluation Command (ATEC) is responsible for the operational testing and evaluation of Army systems in development. ATEC requested that the National Research Council form the Panel on Operational Test Design and Evaluation of the Interim Armored Vehicle (Stryker). The charge to this panel was to explore three issues concerning the initial operational test (IOT) plans for the Stryker/Stryker Brigade Combat Team (SBCT). First, the panel was asked to examine the measures selected to assess the performance and effectiveness of the Stryker/SBCT in comparison both to requirements and to the baseline system. Second, the panel was asked to review the test design for the Stryker/SBCT initial operational test to see whether it is consistent with best practices. Third, the panel was asked to identify the advantages and disadvantages of techniques for combining operational test data with data from other sources and types of use. In a previous report (appended to the current report), the panel presented findings, conclusions, and recommendations pertaining to the first two issues: measures of performance and effectiveness, and test design. In the current report, the panel discusses techniques for combining information.
During the past decade and a half, the National Research Council, through its Committee on National Statistics, has carried out a number of studies on the application of statistical methods to improve the testing and development of defense systems. These studies, sponsored by the Department of Defense (DOD), were intended to provide advice to the department on the role of statistical methods in testing and evaluation, reliability practices, software methods, combining information, and evolutionary acquisition. Industrial Methods for the Effective Testing and Development of Defense Systems is the latest study in this series; unlike its predecessors, it identifies current engineering practices that have proved successful in industrial applications for system development and testing. The report explores how developmental and operational testing, modeling and simulation, and related techniques can improve the development and performance of defense systems, focusing on techniques that have been shown to be effective in industrial applications and are likely to be useful in defense system development. In addition to these broad issues, the report identifies three specific topics for its focus: finding failure modes earlier, technology maturity, and use of all relevant information for operational assessments.
The Panel on Statistical Methods for Testing and Evaluating Defense Systems had a broad mandate: to examine the use of statistics in conjunction with defense testing. This involved examining methods for software testing, reliability test planning and estimation, validation of modeling and simulation, and use of modern techniques for experimental design. Given the breadth of these areas, including the great variety of applications and special issues that arise, making a contribution in each of them required that the panel's work and recommendations be at a relatively general level. However, a variety of more specific research issues were either brought to the panel's attention by members of the test and acquisition community, e.g., what was referred to as Dubin's challenge (addressed in the panel's interim report), or were identified by members of the panel. In many of these cases the panel thought that a more in-depth analysis, or a more detailed application of its suggestions or recommendations, would either be useful as input to its deliberations or could help communicate the more individual views of panel members to the defense test community. This resulted in several research efforts. Given various criteria, especially immediate relevance to the test and acquisition community, the panel decided to make available three technical or background papers, each authored by a panel member jointly with a colleague. These papers are individual contributions and are not a consensus product of the panel; however, the panel has drawn from them in preparing its final report, Statistics, Testing, and Defense Acquisition. The panel found each of these papers extremely useful and strongly recommends them to readers of its final report.