At the request of the Department of Education, the National Research Council formed the Committee on NAEP Reporting Practices to address questions about the desirability, feasibility, and potential impact of implementing these reporting practices. The committee developed study questions designed to address issues surrounding district-level and market-basket reporting.
The National Assessment of Educational Progress (NAEP) has earned a reputation as one of the nation's best measures of student achievement in key subject areas. Since its inception in 1969, NAEP has summarized academic performance for the nation as a whole and, beginning in 1990, for the individual states. Increasingly, NAEP results get the attention of the press, the public, and policy makers. With this increasing prominence have come calls for reporting NAEP results below the national and state levels. Some education leaders argue that NAEP can provide important and useful information to local educators and policy makers. They want NAEP to serve as a district-level indicator of educational progress and call for NAEP results to be summarized at the school district level. Reporting District-Level NAEP Data explores with various stakeholders their interest in and perceptions regarding the likely impacts of district-level reporting.
At the request of the U.S. Department of Education, the National Research Council (NRC) established the Committee on NAEP Reporting Practices to examine the feasibility and potential impact of district-level and market-basket reporting practices. As part of its charge, the committee sponsored a workshop in February 2000 to gather information on issues related to market-basket reporting for the National Assessment of Educational Progress (NAEP). Designing a Market Basket for NAEP: Summary of a Workshop explores with various stakeholders their interest in and perceptions regarding the desirability, feasibility, and potential impact of market-basket reporting for the NAEP. The market-basket concept is based on the idea that a relatively limited set of items can represent some larger construct. The general idea of a NAEP market basket is based on an image of a collection of test questions representative of some larger content domain and an easily understood index to summarize performance on the items.
Since the late 1960s, the National Assessment of Educational Progress (NAEP)--the nation's report card--has been the only continuing measure of student achievement in key subject areas. Increasingly, educators and policymakers have expected NAEP to serve as a lever for education reform and many other purposes beyond its original role. Grading the Nation's Report Card examines ways NAEP can be strengthened to provide more informative portrayals of student achievement and the school and system factors that influence it. The committee offers specific recommendations and strategies for improving NAEP's effectiveness and utility, including: linking achievement data to other education indicators; streamlining data collection and other aspects of its design; including students with disabilities and English-language learners; and revamping the process by which achievement levels are set. The book explores how to improve NAEP framework documents--which identify knowledge and skills to be assessed--with a clearer eye toward the inferences that will be drawn from the results. What should the nation expect from NAEP? What should NAEP do to meet these expectations? This book provides a blueprint for a new paradigm, important to education policymakers, professors, and students, as well as school administrators, teachers, and education advocates.
The National Assessment of Educational Progress (NAEP), known as the nation's report card, has chronicled students' academic achievement in America for over a quarter of a century. It has been a valued source of information about students' performance, providing the best available trend data on the academic achievement of elementary, middle, and secondary school students in key subject areas. NAEP's prominence and the important need for stable and accurate measures of academic achievement call for evaluation of the program and an analysis of the extent to which its results are reasonable, valid, and informative to the public. This volume of papers considers the use and application of NAEP. It provides technical background to the recently published book, Grading the Nation's Report Card: Evaluating NAEP and Transforming the Assessment of Educational Progress (NRC, 1999), with papers on four key topics: NAEP's assessment development, content validity, design and use, and more broadly, the design of education indicator systems.
Since 1969, the National Assessment of Educational Progress (NAEP) has been providing policymakers, educators, and the public with reports on academic performance and progress of the nation's students. The assessment is given periodically in a variety of subjects: mathematics, reading, writing, science, the arts, civics, economics, geography, U.S. history, and technology and engineering literacy. NAEP is given to representative samples of students across the U.S. to assess the educational progress of the nation as a whole. Since 1992, NAEP results have been reported in relation to three achievement levels: basic, proficient, and advanced. However, the use of achievement levels has provoked controversy and disagreement, and evaluators have identified numerous concerns. This publication evaluates the NAEP student achievement levels in reading and mathematics in grades 4, 8, and 12 to determine whether the achievement levels are reasonable, reliable, valid, and informative to the public, and recommends ways that the setting and use of achievement levels can be improved.
Educators and policy makers in the United States have relied on tests to measure educational progress for more than 150 years. During the twentieth century, technical advances, such as machines for automatic scoring and computer-based scoring and reporting, have supported states in a growing reliance on standardized tests for statewide accountability. State assessment data have been cited as evidence for claims about many achievements of public education, and the tests have also been blamed for significant failings. As standards come under new scrutiny, so, too, do the assessments that measure their results. The goal for this workshop, the first of two, was to collect information and perspectives on assessment that could be of use to state officials and others as they review current assessment practices and consider improvements.
In 2001, with support from the National Science Foundation, the National Research Council began a review of the evidence concerning whether the National Science Education Standards have had an impact on the science education enterprise to date, and if so, what that impact has been. This publication represents the second phase of a three-phase effort by the National Research Council to answer that broad and very important question. Phase I began in 1999 and was completed in 2001, with publication of Investigating the Influence of Standards: A Framework for Research in Mathematics, Science, and Technology Education (National Research Council, 2002). That report provided organizing principles for the design, conduct, and interpretation of research regarding the influence of national standards. The Framework developed in Phase I was used to structure the current review of research that is reported here. Phase II began in mid-2001, involved a thorough search and review of the research literature on the influence of the NSES, and concludes with this publication, which summarizes the proceedings of a workshop conducted on May 10, 2002, in Washington, DC. Phase III will provide input, collected in 2002, from science educators, administrators at all levels, and other practitioners and policy makers regarding their views of the NSES, the ways and extent to which the NSES are influencing their work and the systems that support science education, and what next steps are needed.
The education system in the United States is continually challenged to adapt and improve, in part because its mission has become far more ambitious than it once was. At the turn of the 20th century, less than one-tenth of students enrolled were expected to graduate from high school. Today, most people expect schools to prepare all students to succeed in postsecondary education and to prosper in a complex, fast-changing global economy. Goals have broadened to include not only rigorous benchmarks in core academic subjects, but also technological literacy and the subtler capacities known as 21st-century skills. To identify the most important measures for education and other issues and provide quality data on them to the American people, Congress authorized the creation of a Key National Indicators System (KNIS). This system will be a single Web-based information source designed to help policy makers and the public better assess the position and progress of the nation across a wide range of areas. Identifying the right set of indicators for each area is not a small challenge. To serve their purpose of providing objective information that can encourage improvement and innovation, the indicators need to be valid and reliable, but they also need to capture the report committee's aspirations for education. This report describes a workshop, planned under the aegis of the Board on Testing and Assessment and the Committee on National Statistics of the National Research Council. Key National Education Indicators is a summary of the meeting of a group with extensive experience in research, public policy, and practice. The goal of the workshop was not to make a final selection of indicators, but to take an important first step by clearly identifying the parameters of the challenge.
Following a 2011 report by the National Research Council (NRC) on successful K-12 education in science, technology, engineering, and mathematics (STEM), Congress asked the National Science Foundation to identify methods for tracking progress toward the report's recommendations. In response, the NRC convened the Committee on an Evaluation Framework for Successful K-12 STEM Education to take on this assignment. The committee developed 14 indicators linked to the 2011 report's recommendations. By providing a focused set of key indicators related to students' access to quality learning, educators' capacity, and policy and funding initiatives in STEM, the committee addresses the need for research and data that can be used to monitor progress in K-12 STEM education and make informed decisions about improving it. The recommended indicators provide a framework for Congress and relevant federal agencies to create and implement a national-level monitoring and reporting system that: assesses progress toward key improvements recommended by a previous National Research Council (2011) committee; measures student knowledge, interest, and participation in the STEM disciplines and STEM-related activities; tracks financial, human capital, and material investments in K-12 STEM education at the federal, state, and local levels; provides information about the capabilities of the STEM education workforce, including teachers and principals; and facilitates strategic planning for federal investments in STEM education and workforce development when used with labor force projections. All 14 indicators explained in this report are intended to form the core of this system. Monitoring Progress Toward Successful K-12 STEM Education: A Nation Advancing? summarizes the 14 indicators and tracks progress toward the initial report's recommendations.