In 2000, the federal government distributed over $260 billion of funding to state and local governments via 180 formula programs. These programs promote a wide spectrum of economic and social objectives, such as improving educational outcomes and increasing access to medical care, and many are designed to compensate for differences in fiscal capacity that affect governments' abilities to address identified needs. Large amounts of state revenue are also distributed through formula allocation programs to counties, cities, and other jurisdictions. Statistical Issues in Allocating Funds by Formula identifies key issues concerning the design and use of these formulas and advances recommendations for improving the process. In addition to the narrower issues of formula design and input data, the book discusses broader issues created by the interaction of the political process and the use of formulas to allocate funds. Statistical Issues in Allocating Funds by Formula is the only up-to-date guide for policymakers who design fund allocation programs. Members of Congress who craft legislation for these programs and federal officials who are in charge of distributing the funds will find this book indispensable.
The workshop was a direct outgrowth of a previous study by the CNSTAT Panel on Estimates of Poverty for Small Geographic Areas. That panel, established under a 1994 act of Congress, began its work with a very specific mission: to evaluate the suitability of the U.S. Census Bureau's small-area estimates of poor school-age children for use in the allocation of funds to counties and school districts under Title I of the Elementary and Secondary Education Act. In carrying out their assignment, panel members came to realize that the properties of data sources and statistical procedures used to produce formula estimates, interacting with formula features such as thresholds and hold-harmless provisions, can produce consequences that may not have been anticipated or intended. It also became evident that there is a trade-off between the goals of providing a reasonable amount of stability in funding from one year to the next and redirecting funds to different jurisdictions as true needs change. In one instance, for example, the annual appropriation included a 100 percent hold-harmless provision, ensuring that no recipient would receive less than the year before. However, there was no increase in the total appropriation, with the result that new estimates showing changes in the distribution of program needs across areas had no effect on the allocations. Choosing the Right Formula provides an account of the presentations and discussions at the workshop. The first three chapters cover the overview, case studies, and methodological sessions, respectively. Chapter 4 summarizes the issues discussed in the roundtable and concluding sessions, with emphasis on the identification of questions that might be addressed in a panel study.
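The interaction described above can be made concrete with a small sketch. This is a hypothetical illustration, not the actual Title I formula: it allocates a total in proportion to estimated need, subject to a hold-harmless floor, and shows that a 100 percent floor combined with a flat total appropriation leaves new need estimates with no effect on the allocations.

```python
def allocate(total, needs, prior, hold_harmless=1.0):
    """Allocate `total` in proportion to `needs`, subject to a floor of
    `hold_harmless` times each recipient's prior-year amount.

    Hypothetical sketch only: real programs add minimums, caps, and
    other provisions not modeled here. Assumes positive needs."""
    floors = {k: hold_harmless * prior.get(k, 0.0) for k in needs}
    fixed = {}            # recipients pinned at their floor
    free = set(needs)     # recipients still sharing proportionally
    while True:
        remaining = total - sum(fixed.values())
        need_sum = sum(needs[k] for k in free)
        alloc = {k: remaining * needs[k] / need_sum for k in free}
        below = [k for k in free if alloc[k] < floors[k]]
        if not below:
            fixed.update(alloc)
            return fixed
        # Pin anyone below the floor at the floor and re-share the rest.
        for k in below:
            fixed[k] = floors[k]
            free.remove(k)

# Needs shift toward A, but a 100% floor and a flat total freeze the split:
print(allocate(100.0, {"A": 70, "B": 30}, {"A": 50, "B": 50}))
# With no hold-harmless, the same needs move the money:
print(allocate(100.0, {"A": 70, "B": 30}, {"A": 50, "B": 50}, hold_harmless=0.0))
```

Under the 100 percent floor both recipients keep their prior $50 even though estimated needs have shifted to 70/30; removing the floor lets the allocation track the new estimates.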
As the United States continues to be a nation of immigrants and their children, the nation's school systems face increased enrollments of students whose primary language is not English. With the 2001 reauthorization of the Elementary and Secondary Education Act (ESEA) in the No Child Left Behind Act (NCLB), the allocation of federal funds for programs to help these students become proficient in English became formula-based: 80 percent on the basis of the population of children with limited English proficiency and 20 percent on the basis of the population of recently immigrated children and youth. Title III of NCLB directs the U.S. Department of Education to allocate funds on the basis of the more accurate of two allowable data sources: the number of students reported to the federal government by each state education agency or data from the American Community Survey (ACS). The department determined that the ACS estimates are more accurate, and since 2005, those data have been the basis for the federal distribution of Title III funds. Subsequently, analyses of the two data sources have raised concerns about that decision, especially because the two allowable data sources would allocate quite different amounts to the states. In addition, while shortcomings were noted in the data provided by the states, the ACS estimates were shown to fluctuate between years, causing concern among the states about the unpredictability and unevenness of program funding. In this context, the U.S. Department of Education commissioned the National Research Council to address the accuracy of the estimates from the two data sources and the factors that influence the estimates. The resulting book also considers means of increasing the accuracy of the data sources or alternative data sources that could be used for allocation purposes.
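The 80/20 split described above can be sketched in a few lines. This is a hypothetical illustration: the state labels and counts are invented, and the actual Title III allocation includes state minimums and other provisions not shown here.

```python
def title3_shares(total, lep, imm):
    """Split `total` 80% by each state's share of limited-English-
    proficient (LEP) children and 20% by its share of recently
    immigrated children and youth.

    Hypothetical sketch of the 80/20 split only; real allocations
    include minimums and other statutory provisions."""
    lep_sum = sum(lep.values())
    imm_sum = sum(imm.values())
    return {
        s: total * (0.80 * lep[s] / lep_sum + 0.20 * imm[s] / imm_sum)
        for s in lep
    }

# Invented counts for two states "X" and "Y":
shares = title3_shares(1000.0, lep={"X": 300, "Y": 100}, imm={"X": 10, "Y": 40})
print(shares)  # X: 1000*(0.8*0.75 + 0.2*0.2) = 640; Y: 360
```

Because the two components use different population shares, a state with most of the LEP children but few recent immigrants (like "X" here) receives less than a pure 100 percent LEP-based formula would give it.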
Recent trends in federal policies for social and economic programs have increased the demand for timely, accurate estimates of income and poverty for states, counties, and even smaller areas. Every year more than $130 billion in federal funds is allocated to states and localities through formulas that use such estimates. These funds support a wide range of programs that include child care, community development, education, job training, nutrition, and public health. A new program of the U.S. Census Bureau is now providing more timely estimates for these programs than those from the decennial census, which have been used for many years. These new estimates are being used to allocate more than $7 billion annually to school districts, through the Title I program that supports educationally disadvantaged children. But are these estimates as accurate as possible given the available data? How can the statistical models and data that are used to develop the estimates be improved? What should policy makers consider in selecting particular estimates? This new book from the National Research Council provides guidance for improving the Census Bureau's program and for policy makers who use such estimates for allocating funds.
International trade plays a substantial role in the economy of the United States. More than 1.6 billion tons of international merchandise was conveyed using the U.S. transportation system in 2001. The need to transport this merchandise raises concerns about the quality of the transportation system and its ability to support this component of freight movement. Measuring International Trade on U.S. Highways evaluates the accuracy and reliability of measuring the ton-miles and value-miles of international trade traffic carried by highway for each state. This report also assesses the accuracy and reliability of the use of diesel fuel data as a measure of international trade traffic by state and identifies needed improvements in long-term data collection programs.
The decennial census was the federal government's largest and most complex peacetime operation. This report of a panel of the National Research Council's Committee on National Statistics comprehensively reviews the conduct of the 2000 census and the quality of the resulting data. The panel's findings cover the planning process for 2000, which was marked by an atmosphere of intense controversy about the proposed role of statistical techniques in the census enumeration and possible adjustment for errors in counting the population. The report addresses the success and problems of major innovations in census operations, the completeness of population coverage in 2000, and the quality of both the basic demographic data collected from all census respondents and the detailed socioeconomic data collected from the census long-form sample (about one-sixth of the population). The panel draws comparisons with the 1990 experience and recommends improvements in the planning process and design for 2010. The 2000 Census: Counting Under Adversity will be an invaluable resource for users of the 2000 data and for policymakers and census planners. It provides a trove of information about the issues that have fueled debate about the census process and about the operations and quality of the nation's twenty-second decennial enumeration.