Browse Results

Showing 7,601 through 7,625 of 54,293 results

Statistical Methods for Survival Trial Design: With Applications to Cancer Clinical Trials Using R (Chapman & Hall/CRC Biostatistics Series)

by Jianrong Wu

Statistical Methods for Survival Trial Design: With Applications to Cancer Clinical Trials Using R provides a thorough presentation of the principles of designing and monitoring cancer clinical trials in which time-to-event is the primary endpoint. Traditional cancer trial designs with time-to-event endpoints are often limited to the exponential model or proportional hazards model. In practice, however, those model assumptions may not be satisfied for long-term survival trials. This book is the first to comprehensively cover the many newly developed methodologies for survival trial design, including trial design under Weibull survival models; extensions of the sample size calculations under the proportional hazards model; and trial design under mixture cure models, complex survival models, Cox regression models, and competing-risk models. A general sequential procedure based on the sequential conditional probability ratio test is also implemented for survival trial monitoring. All methodologies are presented with sufficient detail for interested researchers or graduate students.
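As a rough illustration of the kind of calculation such designs start from (the classical Schoenfeld events formula under proportional hazards, not code taken from the book; the function name and defaults are assumptions), a sample-size sketch in R might look like this:

    # Required number of events under proportional hazards (Schoenfeld):
    # hr = hazard ratio, alpha = two-sided significance level,
    # power = desired power, p1/p2 = allocation proportions.
    events_schoenfeld <- function(hr, alpha = 0.05, power = 0.8,
                                  p1 = 0.5, p2 = 1 - p1) {
      z_a <- qnorm(1 - alpha / 2)
      z_b <- qnorm(power)
      (z_a + z_b)^2 / (p1 * p2 * log(hr)^2)
    }

    events_schoenfeld(hr = 0.75)   # roughly 380 events for a 25% hazard reduction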

Improving Your NCAA® Bracket with Statistics (ASA-CRC Series on Statistical Reasoning in Science and Society)

by Tom Adams

Twenty-four million people wager nearly $3 billion on college basketball pools each year, but few are aware that winning strategies have been developed by researchers at Harvard, Yale, and other universities over the past two decades. Bad advice from media sources, and even our own psychological inclinations, are often bigger obstacles to winning than our pool opponents. Profit opportunities are missed, and most brackets submitted to pools do not have even a break-even chance of winning money before the tournament begins. Improving Your NCAA® Bracket with Statistics is both an easy-to-use tip sheet to improve your winning odds and an intellectual history of how statistical reasoning has been applied to the bracket pool using standard and innovative methods. It covers bracket improvement methods ranging from those that require only the information in the seeded bracket to sophisticated estimation techniques available via online simulations. Included are:

- Prominently displayed bracket improvement tips based on the published research
- A history of the origins of the bracket pool
- A history of bracket improvement methods and their results in play
- Historical sketches and background information on the mathematical and statistical methods that have been used in bracket analysis
- A source list of good bracket pool advice available each year that seeks to be comprehensive
- Warnings about common bad advice that will hurt your chances

Tom Adams’ work presenting bracket improvement methods has been featured in the New York Times, Sports Illustrated, and SmartMoney magazine.

Self-Controlled Case Series Studies: A Modelling Guide with R (Chapman & Hall/CRC Biostatistics Series)

by Paddy Farrington, Heather Whitaker, and Yonas Ghebremichael Weldeselassie

Self-Controlled Case Series Studies: A Modelling Guide with R provides the first comprehensive account of the self-controlled case series (SCCS) method, a statistical technique for investigating associations between outcome events and time-varying exposures. The method only requires information from individuals who have experienced the event of interest, and automatically controls for multiplicative time-invariant confounders, even when these are unmeasured or unknown. It is increasingly being used in epidemiology, most frequently to study the safety of vaccines and pharmaceutical drugs. Key features of the book include:

- A thorough yet accessible description of the SCCS method, with mathematical details provided in separate starred sections.
- Comprehensive discussion of assumptions and how they may be verified.
- A detailed account of different SCCS models, extensions of the SCCS method, and the design of SCCS studies.
- Extensive practical illustrations and worked examples from epidemiology.
- Full computer code from the associated R package SCCS, which includes all the data sets used in the book.

The book is aimed at a broad range of readers, including epidemiologists and medical statisticians who wish to use the SCCS method, and also researchers with an interest in statistical methodology. The three authors have been closely involved with the inception, development, popularisation and programming of the SCCS method.
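At its core the SCCS model is a conditional Poisson regression. The sketch below does not use the SCCS package's own interface; it assumes a hypothetical data frame sccs_dat with one row per individual-interval and columns nevents, exposure, agegroup, interval, and indiv, and shows one common way such a model is fitted in R with the gnm package:

    library(gnm)

    # Conditional Poisson fit: individual effects are conditioned out via
    # 'eliminate', and interval lengths enter through a log offset.
    # Data frame and column names are hypothetical.
    fit <- gnm(nevents ~ exposure + agegroup + offset(log(interval)),
               family = poisson, eliminate = factor(indiv), data = sccs_dat)
    summary(fit)   # exp(coef) gives relative incidence estimates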

Flexible Imputation of Missing Data, Second Edition (Chapman & Hall/CRC Interdisciplinary Statistics)

by Stef van Buuren

Missing data pose challenges to real-life data analysis. Simple ad-hoc fixes, like deletion or mean imputation, only work under highly restrictive conditions, which are often not met in practice. Multiple imputation replaces each missing value by multiple plausible values. The variability between these replacements reflects our ignorance of the true (but missing) value. Each of the completed data sets is then analyzed by standard methods, and the results are pooled to obtain unbiased estimates with correct confidence intervals. Multiple imputation is a general approach that also inspires novel solutions to old problems by reformulating the task at hand as a missing-data problem. This is the second edition of a popular book on multiple imputation, focused on explaining the application of methods through detailed worked examples using the MICE package developed by the author. This new edition incorporates the recent developments in this fast-moving field. This class-tested book avoids mathematical and technical details as much as possible: formulas are accompanied by verbal statements that explain the formula in accessible terms. The book sharpens the reader’s intuition on how to think about missing data, and provides all the tools needed to execute a well-grounded quantitative analysis in the presence of missing data.
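A minimal sketch of the impute-analyse-pool workflow described above, using the mice package and its bundled nhanes data (the model below is only an illustration, not an example taken from the book):

    library(mice)

    # Impute the missing values m = 5 times, fit the same linear model on
    # each completed data set, then pool the results with Rubin's rules.
    imp  <- mice(nhanes, m = 5, seed = 1, printFlag = FALSE)
    fits <- with(imp, lm(chl ~ bmi + age))
    summary(pool(fits))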

Low Power Semiconductor Devices and Processes for Emerging Applications in Communications, Computing, and Sensing (Devices, Circuits, and Systems)

by Sumeet Walia

The book addresses the need to investigate new approaches to lowering energy requirements in multiple application areas and serves as a guide to emerging circuit technologies. It explores revolutionary device concepts, sensors, and associated circuits and architectures that will greatly extend the practical engineering limits of energy-efficient computation. The book responds to the need to develop disruptive new system architectures and semiconductor processes aimed at achieving the highest level of computational energy efficiency for general-purpose computing systems. The book:

- Discusses unique technologies and material available only in specialized journals and conferences
- Covers emerging materials and device structures, such as ultra-low-power technologies, nanoelectronics, and microsystem manufacturing
- Explores semiconductor processing and manufacturing, device design, and performance
- Contains practical applications for the engineering field as well as for graduate studies
- Is written by international experts from both academia and industry

Yearning for the Impossible: The Surprising Truths of Mathematics, Second Edition

by John Stillwell

Yearning for the Impossible: The Surprising Truths of Mathematics, Second Edition explores the history of mathematics from the perspective of the creative tension between common sense and the "impossible" as the author follows the discovery or invention of new concepts that have marked mathematical progress. The author puts these creations into a broader context involving related "impossibilities" from art, literature, philosophy, and physics. This new edition contains many new exercises and commentaries, clearly discussing a wide range of challenging subjects.

Complex Variables: A Physical Approach with Applications (Textbooks in Mathematics)

by Steven G. Krantz

The idea of complex numbers dates back at least 300 years, to Gauss and Euler, among others. Today complex analysis is a central part of modern analytical thinking. It is used in engineering, physics, mathematics, astrophysics, and many other fields. It provides powerful tools for doing mathematical analysis, and often yields pleasing and unanticipated answers. This book makes the subject of complex analysis accessible to a broad audience. The complex numbers are a somewhat mysterious number system that seems to come out of the blue. It is important for students to see that this is really a very concrete set of objects that has very concrete and meaningful applications. Features:

- This new edition is a substantial rewrite, focusing on the accessibility, applied, and visual aspects of complex analysis.
- The book has an exceptionally large number of examples and figures.
- The topic is presented as a natural outgrowth of the calculus; it is not a new language or a new way of thinking.
- Incisive applications appear throughout the book.
- Partial differential equations are used as a unifying theme.

Companion Diagnostics (Pan Stanford Series on Digital Signal Processing)

by Il-Jin Kim

There is a new trend in anti-cancer therapeutics development: targeted therapy and precision medicine that target a subgroup of patients with specific biomarkers. An in vitro diagnostic (IVD) assay is required to identify the subgroups of cancer patients who would benefit from the targeted therapy, who are not likely to benefit, or who have a high risk of side effects from the specific drug treatment. This IVD or medical device is called a companion diagnostic (CDx) assay. A robust CDx assay or device is key to the success of targeted therapy and precision medicine. This book covers the technical, historical, clinical, and regulatory aspects of CDx in precision medicine. Clearly, more and more newly developed oncology drugs will require accompanying CDx assays, and this book, with chapters contributed by renowned oncologists, provides a comprehensive foundation for the knowledge and application of CDx for precision medicine.

Analysis of Correlated Data with SAS and R

by Mohamed M. Shoukri

Analysis of Correlated Data with SAS and R, 4th edition, presents an applied treatment of recently developed statistical models and methods for the analysis of hierarchical binary, count, and continuous response data. It explains how to use procedures in SAS and packages in R for exploring data, fitting appropriate models, and presenting programming code and results. The book is designed for senior undergraduate and graduate students in the health sciences, epidemiology, statistics, and biostatistics, as well as clinical researchers and consulting statisticians who can apply the methods to their own data analyses. Each chapter gives a brief description of the foundations of statistical theory needed to understand the methods; thereafter the author illustrates the applicability of the techniques with a sufficient number of examples. The last three chapters of the 4th edition contain introductory material on propensity score analysis, meta-analysis, and the treatment of missing data using SAS and R. These topics were not covered in previous editions; the main reason for adding them is the increasing demand by clinical researchers to have these topics covered at a reasonably understandable level of complexity. Mohamed Shoukri is principal scientist and professor of biostatistics at The National Biotechnology Center, King Faisal Specialist Hospital and Research Center, and Al-Faisal University, Saudi Arabia. Professor Shoukri’s research includes analytic epidemiology, analysis of hierarchical data, and clinical biostatistics. He is an associate editor of the journal 3 Biotech, a Fellow of the Royal Statistical Society, and an elected member of the International Statistical Institute.
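For a flavour of the class of hierarchical models the book treats, here is a hedged sketch (not code from the book; the trial_data data frame and its columns are hypothetical) of a random-intercept logistic regression for clustered binary responses using the lme4 package:

    library(lme4)

    # Random-intercept logistic regression: responses are correlated within
    # clinics, so each clinic gets its own intercept.
    # 'trial_data' and its columns are hypothetical.
    fit <- glmer(outcome ~ treatment + age + (1 | clinic),
                 family = binomial, data = trial_data)
    summary(fit)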

Stochastic Communities: A Mathematical Theory of Biodiversity

by A. K. Dewdney

Stochastic Communities presents a theory of biodiversity by analyzing the distribution of abundances among species in the context of a community. The basis of this theory is a distribution called the "J distribution." This distribution is a pure hyperbola and is mathematically implied by the "stochastic species hypothesis," which assigns equal probabilities of birth and death within the population of each species over varying periods of time. The J distribution in natural communities has strong empirical support, resulting from a meta-study, and strong theoretical support, from a theorem that follows from the stochastic species hypothesis.
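A toy simulation in the spirit of that hypothesis (purely a sketch, not the author's derivation of the J distribution): let each species' abundance drift with equal probabilities of a birth or a death per step, then look at the spread of abundances among surviving species.

    # Equal birth/death probabilities per step for every species; species
    # whose abundance reaches zero are treated as locally extinct.
    set.seed(1)
    n_species <- 2000
    abundance <- rep(10, n_species)        # assumed common starting abundance
    for (step in 1:500) {
      alive <- abundance > 0
      abundance[alive] <- abundance[alive] +
        sample(c(-1, 1), sum(alive), replace = TRUE)
    }
    hist(abundance[abundance > 0], breaks = 50,
         main = "Abundances among surviving species")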

Advances in Algebra: SRAC 2017, Mobile, Alabama, USA, March 17-19 (Springer Proceedings in Mathematics & Statistics #277)

by Jörg Feldvoss, Lauren Grimley, Drew Lewis, Andrei Pavelescu, and Cornelius Pillen

This proceedings volume covers a range of research topics in algebra from the Southern Regional Algebra Conference (SRAC) that took place in March 2017. Presenting theory as well as computational methods, featured survey articles and research papers focus on ongoing research in algebraic geometry, ring theory, group theory, and associative algebras. Topics include algebraic groups, combinatorial commutative algebra, computational methods for representations of groups and algebras, group theory, Hopf-Galois theory, hypergroups, Lie superalgebras, matrix analysis, spherical and algebraic spaces, and tropical algebraic geometry. Since 1988, SRAC has been an important event for the algebra research community in the Gulf Coast Region and surrounding states, building a strong network of algebraists that fosters collaboration in research and education. This volume is suitable for graduate students and researchers interested in recent findings in computational and theoretical methods in algebra and representation theory.

Implementing a Standards-Based Curriculum in the Early Childhood Classroom

by Lora Battle Bailey

Implementing a Standards-Based Curriculum in the Early Childhood Classroom demonstrates how pre-service and in-service teachers can develop mathematics, language arts, and integrated curricula suitable for equipping young children with the knowledge, dispositions, and skills needed to operate successfully as 21st century learners. Chapters promote family-school partnerships, and each content area chapter (mathematics, language arts, and integrated curriculum) demonstrates assessment practices proven to be effective for detecting the impact of specific early childhood teaching methods on student learning.

The Analytical Foundations of Loop Antennas and Nano-Scaled Rings (Signals and Communication Technology)

by Arnold McKinley

This book develops the analytical theory of perfectly conducting and lossy metal, circular, round-wire loop antennas and nano-scaled rings from the radio frequency (RF) regime through infrared and the optical region. It does so from an antenna theory perspective. It is the first time that all of the historical material found in the literature has appeared in one place. It includes, particularly, material that has appeared in the literature only in the last decade and some new material that has not yet been published. The book derives the input impedance, resonances and anti-resonances, the RLC circuit model representation, and radiation patterns not only of closed loops and rings, but also of loops and rings loaded randomly and multiply with resistive and reactive impedances. Every derivation is compared with simulations run in Microwave Studio (MWS). It looks carefully at the physical response of loop antennas and nano-rings coupled to a source at one point in the periphery and at such rings illuminated by a plane wave arriving from every different direction with the E-field in all polarizations. The book ends with a brief look at polygonal loops, two dimensional arrays of nano-rings, and Yagi-Uda arrays.

A Practical Guide to Managing Clinical Trials

by JoAnn Pfeiffer and Cris Wells

A Practical Guide to Managing Clinical Trials is a basic, comprehensive guide to conducting clinical trials. Designed for individuals working in research site operations, this user-friendly reference guides the reader through each step of the clinical trial process, from site selection to site set-up, subject recruitment, study visits, and study close-out. Topics include staff roles/responsibilities/training, budget and contract review and management, subject study visits, data and document management, event reporting, research ethics, audits and inspections, consent processes, IRB, FDA regulations, and good clinical practices. Each chapter concludes with a review of key points and knowledge application. Unique to this book is "A View from India," a chapter-by-chapter comparison of clinical trial practices in India versus the U.S. Throughout the book and in Chapter 10, readers will glimpse some of the challenges and opportunities in the emerging and growing market of Indian clinical trials.

Developing Young Children’s Mathematical Learning Outdoors: Linking Pedagogy and Practice

by Lynda Keith

Developing Young Children’s Mathematical Learning Outdoors provides detailed guidance and practical advice on planning mathematical experiences for young children outdoors. By examining the key features of a mathematically rich outdoor environment, it illustrates how this can motivate children in leading their own learning and mathematical thinking. Drawing upon the author’s wealth of experience, the book provides support for students and early years practitioners in developing a deeper understanding of how to plan quality experiences, which combine pedagogy with effective practice. Covering all aspects of mathematics, it identifies meaningful contexts and shows how adults can use open-ended questions and prompts to promote children’s mathematical play outside. With rich case studies and reflective questions included throughout, as well as suggestions for useful resources to put the ideas in the book into practice, it is essential reading for all those who want to develop curious and creative mathematical thinkers in the early years.

Exploratory Multivariate Analysis by Example Using R (Chapman & Hall/CRC Computer Science & Data Analysis)

by Francois Husson, Sebastien Le, and Jérôme Pagès

Full of real-world case studies and practical advice, Exploratory Multivariate Analysis by Example Using R, Second Edition focuses on four fundamental methods of multivariate exploratory data analysis that are most suitable for applications. It covers principal component analysis (PCA) when variables are quantitative, correspondence analysis (CA) and multiple correspondence analysis (MCA) when variables are categorical, and hierarchical cluster analysis.
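As a rough illustration (assuming, as seems likely, the authors' FactoMineR package and its bundled decathlon data; this is not an excerpt from the book), a PCA of the ten event variables might be run as:

    library(FactoMineR)

    # Principal component analysis of the ten decathlon event results
    # shipped with FactoMineR; supplementary variables are omitted here.
    data(decathlon)
    res <- PCA(decathlon[, 1:10], graph = FALSE)
    summary(res)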

Disk-Based Algorithms for Big Data

by Christopher G. Healey

Disk-Based Algorithms for Big Data is a product of recent advances in the areas of big data, data analytics, and the underlying file systems and data management algorithms used to support the storage and analysis of massive data collections. The book discusses hard disks and their impact on data management, since hard disk drives continue to be common in large data clusters. It also explores ways to store and retrieve data through primary and secondary indices. This includes a review of different in-memory sorting and searching algorithms that build a foundation for more sophisticated on-disk approaches like mergesort, B-trees, and extendible hashing. Following this introduction, the book transitions to more recent topics, including advanced storage technologies like solid-state drives and holographic storage; peer-to-peer (P2P) communication; large file systems and query languages like Hadoop/HDFS, Hive, Cassandra, and Presto; and NoSQL databases like Neo4j for graph structures and MongoDB for unstructured document data. Designed for senior undergraduate and graduate students, as well as professionals, this book is useful for anyone interested in understanding the foundations and advances in big data storage and management, and big data analytics. About the author: Dr. Christopher G. Healey is a tenured Professor in the Department of Computer Science and the Goodnight Distinguished Professor of Analytics in the Institute for Advanced Analytics, both at North Carolina State University in Raleigh, North Carolina. He has published over 50 articles in major journals and conferences in the areas of visualization, visual and data analytics, computer graphics, and artificial intelligence. He is a recipient of the National Science Foundation’s CAREER Early Faculty Development Award and the North Carolina State University Outstanding Instructor Award. He is a Senior Member of the Association for Computing Machinery (ACM) and the Institute of Electrical and Electronics Engineers (IEEE), and an Associate Editor of ACM Transactions on Applied Perception, the leading worldwide journal on the application of human perception to issues in computer science.

Randomization, Masking, and Allocation Concealment (Chapman & Hall/CRC Biostatistics Series)

by Vance Berger

Randomization, Masking, and Allocation Concealment is indispensable for any trial researcher who wants to use state-of-the-art randomization methods, and also wants to be able to describe these methods correctly. Far too often the subtle nuances that distinguish proper randomization from flawed randomization are completely ignored in trial reports that state only that randomization was used, with no additional information. Experience has shown that in many cases, the type of randomization that was used was flawed. It is only a matter of time before medical journals and regulatory agencies come to realize that we can no longer rely on (or publish) flawed trials, and that flawed randomization in and of itself disqualifies a trial from being robust or high quality, even if the trial is of high quality otherwise. This book will help to clarify the role randomization plays in ensuring internal validity, and in drawing valid inferences from the data. The various chapters cover a variety of randomization methods, and are not limited to the most common (and most flawed) ones. Readers will come away with a profound understanding of what constitutes a valid randomization procedure, so that they can distinguish the valid from the flawed among not only existing methods but also methods yet to be developed.
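For orientation only (a generic sketch, not a method recommended or supplied by the book; the function name and block size are assumptions), one widely used allocation scheme, randomly permuted blocks, can be written in a few lines of R:

    # Randomly permuted blocks of four: each block contains two A's and two
    # B's in random order, keeping the allocation balanced over time.
    permuted_blocks <- function(n_blocks, block = c("A", "A", "B", "B")) {
      unlist(lapply(seq_len(n_blocks), function(i) sample(block)))
    }

    set.seed(2024)
    permuted_blocks(3)   # 12 assignments, balanced within every block of 4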

Programming Language Explorations

by Ray Toal, Rachel Rivera, Alexander Schneider, and Eileen Choe

Programming Language Explorations is a tour of several modern programming languages in use today. The book teaches fundamental language concepts using a language-by-language approach. As each language is presented, the authors introduce new concepts as they appear, and revisit familiar ones, comparing their implementation with those from languages seen in prior chapters. The goal is to present and explain common theoretical concepts of language design and usage, illustrated in the context of practical language overviews. Twelve languages have been carefully chosen to illustrate a wide range of programming styles and paradigms. The book introduces each language with a common trio of example programs, and continues with a brief tour of its basic elements, type system, functional forms, scoping rules, concurrency patterns, and sometimes, metaprogramming facilities. Each language chapter ends with a summary, pointers to open source projects, references to materials for further study, and a collection of exercises, designed as further explorations. Following the twelve featured language chapters, the authors provide a brief tour of over two dozen additional languages, and a summary chapter bringing together many of the questions explored throughout the text. Targeted to both professionals and advanced college undergraduates looking to expand the range of languages and programming patterns they can apply in their work and studies, the book pays attention to modern programming practice, covers cutting-edge languages and patterns, and provides many runnable examples, all of which can be found in an online GitHub repository. The exploration style places this book between a tutorial and a reference, with a focus on the concepts and practices underlying programming language design and usage. Instructors looking for material to supplement a programming languages or software engineering course may find the approach unconventional, but hopefully, a lot more fun.

Quantifying Software: Global and Industry Perspectives

by Capers Jones

Software is one of the most important products in human history and is widely used by all industries and all countries. It is also one of the most expensive and labor-intensive products in human history. Software also has very poor quality that has caused many major disasters and wasted many millions of dollars. Software is also the target of frequent and increasingly serious cyber-attacks. Among the reasons for these software problems is a chronic lack of reliable quantified data. This reference provides quantified data from many countries and many industries based on about 26,000 projects developed using a variety of methodologies and team experience levels. The data has been gathered between 1970 and 2017, so interesting historical trends are available. Since current average software productivity and quality results are suboptimal, this book focuses on "best in class" results and shows not only quantified quality and productivity data from best-in-class organizations, but also the technology stacks used to achieve best-in-class results. The overall goal of this book is to encourage the adoption of best-in-class software metrics and best-in-class technology stacks. It does so by providing current data on average software schedules, effort, costs, and quality for several industries and countries. Because productivity and quality vary by technology and size, the book presents quantitative results for applications between 100 function points and 100,000 function points. It shows quality results using defect potential and DRE metrics because the number one cost driver for software is finding and fixing bugs. The book presents data on cost of quality for software projects and discusses technical debt, but that metric is not standardized. Finally, the book includes some data on three years of software maintenance and enhancements as well as some data on total cost of ownership.
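For readers unfamiliar with the DRE metric mentioned above, defect removal efficiency is simply the share of known defects removed before release; a tiny illustration in R with made-up numbers:

    # Made-up counts, purely to illustrate the DRE calculation.
    defects_removed_before_release <- 950
    defects_reported_after_release <- 50
    dre <- defects_removed_before_release /
      (defects_removed_before_release + defects_reported_after_release)
    dre   # 0.95: 95% of known defects were removed before delivery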

The Fractional Laplacian

by C. Pozrikidis

The fractional Laplacian, also called the Riesz fractional derivative, describes an unusual diffusion process associated with random excursions. The Fractional Laplacian explores applications of the fractional Laplacian in science, engineering, and other areas where long-range interactions and conceptual or physical particle jumps resulting in an irregular diffusive or conductive flux are encountered. The book:

- Presents the material at a level suitable for a broad audience of scientists and engineers with rudimentary background in ordinary differential equations and integral calculus
- Clarifies the concept of the fractional Laplacian for functions in one, two, three, or an arbitrary number of dimensions defined over the entire space, satisfying periodicity conditions, or restricted to a finite domain
- Covers physical and mathematical concepts as well as detailed mathematical derivations
- Develops a numerical framework for solving differential equations involving the fractional Laplacian and presents specific algorithms accompanied by numerical results in one, two, and three dimensions
- Discusses viscous flow and physical examples from scientific and engineering disciplines

Written by a prolific author well known for his contributions in fluid mechanics, biomechanics, applied mathematics, scientific computing, and computer science, the book emphasizes fundamental ideas and practical numerical computation. It includes original material and novel numerical methods.
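As a hedged, stand-alone illustration of the operator itself (one standard spectral definition for periodic functions, not the book's algorithms), the fractional Laplacian of a 1-D periodic function can be evaluated with the FFT:

    # (-Delta)^(alpha/2) of a periodic 1-D function via its Fourier symbol
    # |k|^alpha. For u(x) = sin(x) on [0, 2*pi) the result is sin(x) again,
    # since the only active mode has |k| = 1.
    frac_laplacian_periodic <- function(u, L, alpha) {
      n    <- length(u)
      freq <- c(0:(n %/% 2), -rev(seq_len(n - n %/% 2 - 1)))
      k    <- 2 * pi * freq / L
      Re(fft(abs(k)^alpha * fft(u), inverse = TRUE)) / n
    }

    x <- seq(0, 2 * pi, length.out = 129)[-129]   # 128-point periodic grid
    max(abs(frac_laplacian_periodic(sin(x), 2 * pi, 0.5) - sin(x)))   # round-off only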

Environmental Systems Analysis with MATLAB®

by Stefano Marsili-Libelli

Explore the inner workings of environmental processes using a mathematical approach. Environmental Systems Analysis with MATLAB® combines environmental science concepts and system theory with numerical techniques to provide a better understanding of how our environment works. The book focuses on building mathematical models of environmental systems and using these models to analyze their behaviors. Designed with the environmental professional in mind, it offers a practical introduction to developing the skills required for managing environmental modeling and data handling. The book follows a logical sequence from the basic steps of model building and data analysis to implementing these concepts into working computer codes, and then on to assessing their results. It describes data processing (rarely considered in environmental analysis), outlines the tools needed to successfully analyze data and develop models, and moves on to real-world problems. The author illustrates in the first four chapters the methodological aspects of environmental systems analysis and in subsequent chapters applies them to specific environmental concerns. The accompanying software bundle is freely downloadable from the book web site. It follows the chapter sequence and provides hands-on experience, allowing the reader to reproduce the figures in the text and experiment by varying the problem setting. Basic MATLAB literacy is required to get the most out of the software. Ideal for coursework and self-study, this offering:

- Deals with the basic concepts of environmental modeling and identification, both from the mechanistic and the data-driven viewpoint
- Provides a unifying methodological approach to deal with specific aspects of environmental modeling: population dynamics, flow systems, and environmental microbiology
- Assesses the similarities and the differences of microbial processes in natural and man-made environments
- Analyzes several aquatic ecosystems’ case studies
- Presents an application of an extended Streeter & Phelps (S&P) model
- Describes an ecological method to estimate the bioavailable nutrients in natural waters
- Considers a lagoon ecosystem from several viewpoints, including modeling and management, and more
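For context on one of the items above, the classical (unextended) Streeter & Phelps oxygen-sag solution can be sketched in a few lines; this is generic textbook material written here in R rather than the book's MATLAB® code, and the parameter values are made up.

    # Classical Streeter & Phelps dissolved-oxygen deficit:
    #   D(t) = kd*L0/(ka - kd) * (exp(-kd*t) - exp(-ka*t)) + D0*exp(-ka*t)
    # kd = deoxygenation rate, ka = reaeration rate (1/day), L0 = initial
    # BOD (mg/L), D0 = initial deficit (mg/L); requires ka != kd.
    streeter_phelps <- function(t, kd, ka, L0, D0 = 0) {
      kd * L0 / (ka - kd) * (exp(-kd * t) - exp(-ka * t)) + D0 * exp(-ka * t)
    }

    t <- seq(0, 10, by = 0.1)
    plot(t, streeter_phelps(t, kd = 0.35, ka = 0.7, L0 = 20), type = "l",
         xlab = "travel time (days)", ylab = "DO deficit (mg/L)")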

Introductory Fisheries Analyses with R (Chapman & Hall/CRC The R Series)

by Derek H. Ogle

A how-to guide for conducting common fisheries-related analyses in R, Introductory Fisheries Analyses with R provides detailed instructions on performing basic fisheries stock assessment analyses in the R environment. Accessible to practicing fisheries scientists as well as advanced undergraduate and graduate students, the book demonstrates the flexibility and power of R, offers insight into the reproducibility of script-based analyses, and shows how the use of R leads to more efficient and productive work in fisheries science. The first three chapters present a minimal introduction to the R environment that builds a foundation for the fisheries-specific analyses in the remainder of the book. These chapters help you become familiar with R for basic fisheries analyses and graphics. Subsequent chapters focus on methods to analyze age comparisons, age-length keys, size structure, weight-length relationships, condition, abundance (from capture-recapture and depletion data), mortality rates, individual growth, and the stock-recruit relationship. The fundamental statistical methods of linear regression, analysis of variance (ANOVA), and nonlinear regression are demonstrated within the contexts of these common fisheries analyses. For each analysis, the author completely explains the R functions and provides sufficient background information so that you can confidently implement each method. Web resource: The author’s website at http://derekogle.com/IFAR/ includes the data files and R code for each chapter, enabling you to reproduce the results in the book as well as create your own scripts. The site also offers supplemental code for more advanced analyses and practice exercises for every chapter.
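For a flavour of one such analysis (a generic base-R sketch, not the author's code; the fish data frame and its columns are hypothetical), the weight-length relationship W = a*L^b is commonly fitted on the log scale:

    # Log-log regression: log(W) = log(a) + b*log(L) + error.
    # 'fish' with columns 'weight' and 'length' is a hypothetical data frame.
    fit <- lm(log(weight) ~ log(length), data = fish)
    coef(fit)            # intercept = log(a), slope = b
    exp(coef(fit)[1])    # back-transformed a (ignoring log-normal bias correction)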

Business Analytics for Decision Making

by Steven Orla Kimbrough and Hoong Chuin Lau

Business Analytics for Decision Making, the first complete text suitable for use in introductory Business Analytics courses, establishes a national syllabus for an emerging first course at an MBA or upper undergraduate level. This timely text is mainly about model analytics, particularly analytics for constrained optimization. It uses implementations that allow students to explore models and data for the sake of discovery, understanding, and decision making. Business analytics is about using data and models to solve various kinds of decision problems. There are three aspects for those who want to make the most of their analytics: encoding, solution design, and post-solution analysis. This textbook addresses all three. Emphasizing the use of constrained optimization models for decision making, the book concentrates on post-solution analysis of models. The text focuses on computationally challenging problems that commonly arise in business environments. Unique among business analytics texts, it emphasizes using heuristics for solving difficult optimization problems important in business practice by making best use of methods from Computer Science and Operations Research. Furthermore, case studies and examples illustrate the real-world applications of these methods. The authors supply examples in Excel®, GAMS, MATLAB®, and OPL. The metaheuristics code is also made available at the book's website in a documented library of Python modules, along with data and material for homework exercises. From the beginning, the authors emphasize analytics and de-emphasize representation and encoding so students will have plenty to sink their teeth into regardless of their computer programming experience.

Smart Use of State Public Health Data for Health Disparity Assessment

by Ge Lin and Ming Qu

Health services are often fragmented along organizational lines with limited communication among the public health–related programs or organizations, such as mental health, social services, and public health services. This can result in disjointed decision making without necessary data and knowledge, organizational fragmentation, and disparate knowledge development across the full array of public health needs. When new questions or challenges arise that require collaboration, individual public health practitioners (e.g., surveillance specialists and epidemiologists) often do not have the time and energy to spend on them. Smart Use of State Public Health Data for Health Disparity Assessment promotes data integration to aid crosscutting program collaboration. It explains how to maximize the use of various datasets from state health departments for assessing health disparity and for disease prevention. The authors offer practical advice on state public health data use, their strengths and weaknesses, data management insight, and lessons learned. They propose a bottom-up approach for building an integrated public health data warehouse that includes localized public health data. The book is divided into three sections: Section I has seven chapters devoted to knowledge and skill preparations for recognizing disparity issues and integrating and analyzing local public health data. Section II provides a systematic surveillance effort by linking census tract poverty to other health disparity dimensions. Section III provides in-depth studies related to Sections I and II. All data used in the book have been geocoded to the census tract level, making it possible to go more local, even down to the neighborhood level.
