Browse Results

Showing 7,701 through 7,725 of 54,298 results

Numerical Methods for Optimal Control Problems (Springer INdAM Series #29)

by Maurizio Falcone, Roberto Ferretti, Lars Grüne, and William M. McEneaney

This work presents recent mathematical methods in the area of optimal control, with particular emphasis on computational aspects and applications. Optimal control theory concerns the determination of control strategies for complex dynamical systems in order to optimize some measure of their performance. Begun in the 1960s under the pressure of the "space race" between the US and the former USSR, the field now has a far wider scope and embraces a variety of areas, ranging from process control to traffic flow optimization, the exploitation of renewable resources, and the management of financial markets. These emerging applications require ever more efficient numerical methods for their solution, a very difficult task owing to the huge number of variables involved. The chapters of this volume give an up-to-date presentation of several recent methods in the area, including fast dynamic programming algorithms, model predictive control, and max-plus techniques. The book is addressed to researchers, graduate students, and applied scientists working in the area of control problems, differential games, and their applications.

Displaying Time Series, Spatial, and Space-Time Data with R (Chapman & Hall/CRC The R Series)

by Oscar Perpinan Lamigueiro

Focusing on the exploration of data with visual methods, this book presents methods and R code for producing high-quality static graphics, interactive visualizations, and animations of time series, spatial, and space-time data. Practical examples using real-world datasets help you understand how to apply the methods and code. Each of the three parts of the book is devoted to different types of data. In each part, the chapters are grouped according to the various visualization methods or data characteristics. Recent developments in the "htmlwidgets" family of packages are covered in this second edition with many new interactive graphics.

On the Class Number of Abelian Number Fields: Extended with Tables by Ken-ichi Yoshino and Mikihito Hirabayashi

by Helmut Hasse

With this translation, the classic monograph Über die Klassenzahl abelscher Zahlkörper by Helmut Hasse is now available in English for the first time. The book addresses three main topics: class number formulas for abelian number fields; expressions of the class number of real abelian number fields by the index of the subgroup generated by cyclotomic units; and the Hasse unit index of imaginary abelian number fields, the integrality of the relative class number formula, and the class number parity. Additionally, the book includes reprints of works by Ken-ichi Yoshino and Mikihito Hirabayashi, which extend the tables of Hasse unit indices and the relative class numbers to imaginary abelian number fields with conductor up to 100. The text provides systematic and practical methods for deriving class number formulas, determining the unit index, and calculating the class number of abelian number fields. A wealth of illustrative examples, together with corrections and remarks on the original work, make this translation a valuable resource for today's students of and researchers in number theory.

Computational Intelligence Methods for Bioinformatics and Biostatistics: 14th International Meeting, CIBB 2017, Cagliari, Italy, September 7-9, 2017, Revised Selected Papers (Lecture Notes in Computer Science #10834)

by Massimo Bartoletti, Annalisa Barla, Andrea Bracciali, Gunnar W. Klau, Leif Peterson, Alberto Policriti, and Roberto Tagliaferri

This book constitutes the thoroughly refereed post-conference proceedings of the 14th International Meeting on Computational Intelligence Methods for Bioinformatics and Biostatistics, CIBB 2017, held in Cagliari, Italy, in September 2017. The 19 revised full papers presented were carefully reviewed and selected from 44 submissions. The papers deal with the application of computational intelligence to open problems in bioinformatics, biostatistics, systems and synthetic biology, medical informatics, and computational approaches to the life sciences in general.

Formal Languages and Compilation (Texts in Computer Science)

by Luca Breveglieri, Angelo Morzenti, and Stefano Crespi Reghizzi

This classroom-tested and clearly written textbook presents a focused guide to the conceptual foundations of compilation, explaining the fundamental principles and algorithms used for defining the syntax of languages and for implementing simple translators. This significantly updated and expanded third edition has been enhanced with additional coverage of regular expressions, visibly pushdown languages, bottom-up and top-down deterministic parsing algorithms, and new grammar models. Topics and features: describes the principles and methods used in designing syntax-directed applications such as parsing and regular expression matching; covers translations, semantic functions (attribute grammars), and static program analysis by data flow equations; introduces an efficient method for string matching and parsing suitable for ambiguous regular expressions (NEW); presents a focus on extended BNF grammars with their general parser and with LR(1) and LL(1) parsers (NEW); introduces a parallel parsing algorithm that exploits multiple processing threads to speed up syntax analysis of large files; discusses recent formal models of input-driven automata and languages (NEW); makes extensive use of theoretical models of automata, transducers, and formal grammars, and describes all algorithms in pseudocode; contains numerous illustrative examples, and supplies a large set of exercises with solutions at an associated website. Advanced undergraduate and graduate students of computer science will find this reader-friendly textbook to be an invaluable guide to the essential concepts of syntax-directed compilation. The fundamental paradigms of language structures are elegantly explained in terms of the underlying theory, without requiring the use of software tools or knowledge of implementation, and through algorithms simple enough to be worked through with paper and pencil.
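For readers who want a concrete anchor, the style of algorithm the book formalizes, a deterministic top-down parser for an LL(1) grammar, can be sketched in a few lines. This is a generic Python illustration, not code from the book:

```python
# Recursive-descent parser/evaluator for the LL(1) expression grammar
#   E -> T ('+' T)* ,  T -> F ('*' F)* ,  F -> NUM | '(' E ')'
# A generic illustration of top-down deterministic parsing.
import re

def tokenize(src):
    return re.findall(r"\d+|[()+*]", src)

def parse(tokens):
    pos = 0
    def peek():
        return tokens[pos] if pos < len(tokens) else None
    def eat(tok=None):
        nonlocal pos
        t = peek()
        if t is None or (tok is not None and t != tok):
            raise SyntaxError(f"expected {tok!r}, got {t!r}")
        pos += 1
        return t
    def expr():          # E -> T ('+' T)*
        v = term()
        while peek() == "+":
            eat("+"); v += term()
        return v
    def term():          # T -> F ('*' F)*
        v = factor()
        while peek() == "*":
            eat("*"); v *= factor()
        return v
    def factor():        # F -> NUM | '(' E ')'
        if peek() == "(":
            eat("("); v = expr(); eat(")")
            return v
        t = eat()
        if not t.isdigit():
            raise SyntaxError(f"unexpected token {t!r}")
        return int(t)
    v = expr()
    if peek() is not None:
        raise SyntaxError(f"trailing input at {peek()!r}")
    return v

print(parse(tokenize("2*(3+4)+5")))   # 19
```

Each nonterminal becomes one function, and a single token of lookahead (peek) suffices because the grammar is LL(1).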

Cryptology and Error Correction: An Algebraic Introduction and Real-World Applications (Springer Undergraduate Texts in Mathematics and Technology)

by Lindsay N. Childs

This text presents a careful introduction to methods of cryptology and error correction in wide use throughout the world, together with the concepts of abstract algebra and number theory that are essential for understanding these methods. The objective is to provide a thorough understanding of the RSA, Diffie–Hellman, and Blum–Goldwasser cryptosystems and of Hamming and Reed–Solomon error correction: how they are constructed, how they are made to work efficiently, and also how they can be attacked. Reaching that level of understanding requires and motivates many ideas found in a first course in abstract algebra: rings, fields, finite abelian groups, basic theory of numbers, computational number theory, homomorphisms, ideals, and cosets. Those who complete this book will have gained a solid mathematical foundation for more specialized applied courses on cryptology or error correction, and should also be well prepared, both in concepts and in motivation, to pursue more advanced study in algebra and number theory. The text is suitable for classroom or online use or for independent study, and is aimed at students in mathematics, computer science, and engineering; the prerequisite is one or two years of a standard calculus sequence. Ideally the reader will also take a concurrent course in linear algebra or elementary matrix theory. A solutions manual for the 400 exercises in the book is available to instructors who adopt the text for their course.
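As a taste of the first of these systems, here is a minimal RSA sketch in Python using the classic textbook-sized parameters (far too small for real security); it is a generic illustration, not code from the book:

```python
# Toy RSA with textbook-sized primes -- illustrative only and
# wildly insecure at this scale.
p, q = 61, 53
n = p * q                      # public modulus: 3233
phi = (p - 1) * (q - 1)        # Euler's totient: 3120
e = 17                         # public exponent, coprime to phi
d = pow(e, -1, phi)            # private exponent = e^-1 mod phi = 2753

message = 65
ciphertext = pow(message, e, n)        # encrypt: m^e mod n -> 2790
recovered = pow(ciphertext, d, n)      # decrypt: c^d mod n -> 65
assert recovered == message
print(ciphertext, recovered)
```

The security of the real system rests on the difficulty of recovering d from (n, e) without knowing the factorization of n, which is exactly the algebraic machinery the book develops.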

Technical Analysis of Stock Trends

by Robert D. Edwards, John Magee, and W.H.C. Bassetti

This revised and updated edition of the best-selling Technical Analysis of Stock Trends, now in its 10th edition, presents proven long- and short-term stock trend analysis, enabling investors to make smart, profitable trading decisions. The book covers technical theory such as Dow Theory, reversal patterns, consolidation formations, trends and channels, technical analysis of commodity charts, and advances in investment technology. It also includes a comprehensive guide to trading tactics, covering long and short goals, stock selection, charting, low and high risk, trend recognition tools, balancing and diversifying the stock portfolio, application of capital, and risk management. This sharpened and updated new edition offers patterns and charts that are tighter and more illustrative, including modifiable charts. Expanded material is offered on Pragmatic Portfolio Theory as a more elegant alternative to Modern Portfolio Theory, and a newer, simpler, and more powerful alternative to Dow Theory is presented.

Longitudinal Multivariate Psychology (Multivariate Applications Series)

by Emilio Ferrer, Steven M. Boker, and Kevin J. Grimm

This volume presents a collection of chapters focused on the study of multivariate change. As people develop and change, multivariate measurement of that change and analysis of those measures can illuminate the regularities in the trajectories of individual development, as well as time-dependent changes in population averages. As longitudinal data have recently become much more prevalent in psychology and the social sciences, models of change have become increasingly important. This collection focuses on methodological, statistical, and modeling aspects of multivariate change and applications of longitudinal models to the study of psychological processes. The volume is divided into three major sections: Extension of latent change models, Measurement and testing issues in longitudinal modeling, and Novel applications of multivariate longitudinal methodology. It is intended for advanced students and researchers interested in learning about state-of-the-art techniques for longitudinal data analysis, as well as understanding the history and development of such techniques.

Model-free Hedging: A Martingale Optimal Transport Viewpoint (Chapman and Hall/CRC Financial Mathematics Series)

by Pierre Henry-Labordere

Model-free Hedging: A Martingale Optimal Transport Viewpoint focuses on the computation of model-independent bounds for exotic options that are consistent with the market prices of liquid instruments such as vanilla options. The author gives an overview of martingale optimal transport, highlighting the differences between optimal transport and its martingale counterpart. The topic is then discussed in the context of mathematical finance.
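In its simplest two-marginal form, the problem the book studies can be stated compactly (standard martingale-optimal-transport notation, paraphrased rather than quoted from the book). Given marginal laws \(\mu_1, \mu_2\) for the asset at two maturities, calibrated to vanilla prices, the model-free upper bound for an exotic payoff \(\Phi\) is

\[ \overline{P}(\Phi) \;=\; \sup_{\mathbb{Q} \in \mathcal{M}(\mu_1, \mu_2)} \mathbb{E}^{\mathbb{Q}}\!\left[\Phi(S_1, S_2)\right], \]

the supremum taken over all martingale measures with the given marginals; by duality this equals the cheapest semi-static superhedge, i.e. the infimum of \( \mu_1(\varphi_1) + \mu_2(\varphi_2) \) over static vanilla positions \(\varphi_1, \varphi_2\) and dynamic strategies \(h\) satisfying \( \varphi_1(s_1) + \varphi_2(s_2) + h(s_1)(s_2 - s_1) \ge \Phi(s_1, s_2) \).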

Neutron Diffusion: Concepts and Uncertainty Analysis for Engineers and Scientists

by S. Chakraverty and Sukanta Nayak

This book is designed to provide a systematic understanding of nuclear diffusion theory together with fuzzy/interval/stochastic uncertainty, and aims to serve as a benchmark text for graduate and postgraduate students, teachers, engineers, and researchers worldwide. In view of recent developments in nuclear engineering, it is important to study the basic concepts of the field along with the diffusion processes involved in nuclear reactor design. Uncertainty, moreover, is inherent in every field of engineering and science, and in particular in nuclear-related problems; one therefore needs to understand nuclear diffusion principles and theories together with reliable and efficient techniques for the solution of such uncertain problems. Accordingly, this book provides the reader with the basic concepts of reactor physics and neutron diffusion theory, and also treats uncertainty (in fuzzy, interval, and stochastic forms) and its applications to nuclear diffusion problems in a systematic manner, along with recent developments. The underlying concepts of the methods presented here may well be used or extended in various other engineering disciplines, such as electronics, marine, chemical, and mining engineering, and in other sciences such as physics, chemistry, and biotechnology. The book can thus be applied wherever one wants to model a physical problem with non-probabilistic (fuzzy or interval) or stochastic methods that capture the true character of the real problem.
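For orientation, the one-group steady-state diffusion model at the center of such treatments is the balance equation (standard reactor-physics notation, not reproduced from the book):

\[ D\,\nabla^{2}\phi(\mathbf{r}) \;-\; \Sigma_a\,\phi(\mathbf{r}) \;+\; S(\mathbf{r}) \;=\; 0, \]

where \(\phi\) is the neutron flux, \(D\) the diffusion coefficient, \(\Sigma_a\) the macroscopic absorption cross-section, and \(S\) the source term; the uncertain variants the book studies arise when \(D\), \(\Sigma_a\), or \(S\) are modeled as fuzzy, interval-valued, or random quantities.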

Missing and Modified Data in Nonparametric Estimation: With R Examples (Chapman & Hall/CRC Monographs on Statistics and Applied Probability)

by Sam Efromovich

This book presents a systematic and unified approach to the modern nonparametric treatment of missing and modified data, via examples of density and hazard rate estimation, nonparametric regression, filtering of signals, and time series analysis. All basic types of missingness (at random and not at random), biasing, truncation, censoring, and measurement error are discussed, and their treatment is explained. The ten chapters of the book cover the basic cases of direct data, biased data, nondestructive and destructive missingness, survival data modified by truncation and censoring, missing survival data, stationary and nonstationary time series and processes, and ill-posed modifications. The coverage is suitable for self-study or a one-semester course for graduate students, with a standard course in introductory probability as the prerequisite. Exercises of various levels of difficulty will be helpful for both the instructor and the self-studying reader. The book is primarily about the practically important case of small samples. It explains when consistent estimation is possible, why in some cases missing data can be ignored, and why in others they must be taken into account. If missingness or data modification makes consistent estimation impossible, the author explains what type of action is needed to restore the lost information. The book contains more than a hundred figures with simulated data that illustrate virtually every setting, claim, and development. The companion R software package allows the reader to verify, reproduce, and modify every simulation and every estimator used, making the material fully transparent and allowing one to study it interactively. Sam Efromovich is the Endowed Professor of Mathematical Sciences and the Head of the Actuarial Program at the University of Texas at Dallas. He is well known for his work on the theory and application of nonparametric curve estimation and is the author of Nonparametric Curve Estimation: Methods, Theory, and Applications. Professor Efromovich is a Fellow of the Institute of Mathematical Statistics and of the American Statistical Association.

Why Are We Conscious?: A Scientist’s Take on Consciousness and Extrasensory Perception

by David E.H. Jones

There are two huge gaps in scientific theory. One, the contradiction between classical and quantum mechanics, is discussed in many publications. The other, the total failure to explain why anything made of atoms (such as ourselves) can be conscious, has received little acknowledgement. The main thesis of this book is that to be conscious at all, you need an unconscious mind. The author explores the idea that this mind sometimes makes contact with a whole unknown world, sporadically revealed by paranormal effects but perhaps discoverable by hitherto uninvented scientific instruments. The book looks at the notion of the unconscious mind, one of the most important hypotheses of the twentieth century. Psychiatrists often deploy it rather informally, but there is no accepted theory of it, and no region of the human brain seems to hold it. The author delves into the notion that the unknown world exists and is very weakly coupled to the physical world. He ponders the properties it may have to allow this coupling, looks at several paranormal effects scientifically, and points out that many of them seem to imply brief but dramatic changes in the forces between atoms, a possible effect of the unknown world, unexamined by physical science. No existing publication seeks to discuss both paranormal mysteries and scientific theory. If scientists knew about the gaps in existing knowledge, they might initiate research into them, or notice experimental oddities they now gloss over. If the general public were aware of the gaps in physical theory, they would be less overwhelmed by the intellectual diktats of some scientists.

Mobile Apps Engineering: Design, Development, Security, and Testing

by Ghita K. Mostefaoui and Faisal Tariq

The objective of this edited book is to gather best practices in the development and management of mobile apps projects. Mobile Apps Engineering aims to provide software engineering lecturers, students and researchers of mobile computing a starting point for developing successful mobile apps. To achieve these objectives, the book’s contributors emphasize the essential concepts of the field, such as apps design, testing and security, with the intention of offering a compact, self-contained book which shall stimulate further research interest in the topic. The editors hope and believe that their efforts in bringing this book together can make mobile apps engineering an independent discipline inspired by traditional software engineering, but taking into account the new challenges posed by mobile computing.

Platform Trial Designs in Drug Development: Umbrella Trials and Basket Trials (Chapman & Hall/CRC Biostatistics Series)

by Zoran Antonijevic and Robert A. Beckman

Platform trials test multiple therapies in one indication, one therapy for multiple indications, or both. These novel clinical trial designs can dramatically increase the cost-effectiveness of drug development, leading to life-altering medicines for people suffering from serious illnesses, possibly at lower cost. Currently, the cost of drug development is unsustainable, and there are particular problems in rare diseases and in small biomarker-defined subsets in oncology, where the sample sizes required for traditional clinical trial designs may not be feasible. The editors recruited the key innovators in this domain. The 20 articles discuss trial designs from perspectives as diverse as quantum computing, patients' rights to information, and international health. The book begins with an overview of platform trials from multiple perspectives. It then describes the impact of platform trials on the pharmaceutical industry's key stakeholders: patients, regulators, and payers. Next, it provides advanced statistical methods that address multiple aspects of platform trials, before concluding with a pharmaceutical executive's perspective on platform trials. Except for the statistical methods section, only a basic qualitative knowledge of clinical trials is needed to appreciate the important concepts and novel ideas presented.

Applied Power Analysis for the Behavioral Sciences: 2nd Edition

by Christopher L. Aberson

Applied Power Analysis for the Behavioral Sciences is a practical "how-to" guide to conducting statistical power analyses for psychology and related fields. The book provides a guide to conducting analyses that is appropriate for researchers and students, including those with limited quantitative backgrounds. With practical use in mind, the text provides detailed coverage of topics such as how to estimate expected effect sizes and power analyses for complex designs. The topical coverage of the text, an applied approach, in-depth coverage of popular statistical procedures, and a focus on conducting analyses using R make the text a unique contribution to the power literature. To facilitate application and usability, the text includes ready-to-use R code developed for the text. An accompanying R package called pwr2ppl (available at https://github.com/chrisaberson/pwr2ppl) provides tools for conducting power analyses across each topic covered in the text.
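The kind of calculation such a text automates can be sketched in a few lines. The example below uses Python's statsmodels rather than the book's pwr2ppl R package (so the API differs, and the scenario is purely illustrative):

```python
# A minimal power-analysis sketch for an independent-samples t-test,
# using statsmodels (generic illustration; not the book's pwr2ppl code).
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Sample size per group needed to detect a medium effect
# (Cohen's d = 0.5) at alpha = 0.05 with 80% power.
n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.8)
print(f"n per group: {n_per_group:.1f}")   # roughly 64

# Power achieved with 50 participants per group for the same effect.
power = analysis.solve_power(effect_size=0.5, alpha=0.05, nobs1=50)
print(f"power with n = 50: {power:.2f}")
```

The hard part in practice, as the book emphasizes, is not running such a call but justifying the expected effect size that goes into it.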

How Qualitative Data Analysis Happens: Moving Beyond "Themes Emerged"

by Áine Humble and Elise Radina

Winner of the 2020 Anselm Strauss Award for Qualitative Family Research, National Council on Family Relations. How is qualitative data actually collected, analyzed, and accomplished? With real stories, How Qualitative Data Analysis Happens: Moving Beyond "Themes Emerged" offers an in-depth look into how qualitative social science researchers studying family issues and dynamics approach their data analyses. It moves beyond the usual vague statement of "themes emerged from the data" to show readers how researchers actively and consciously arrive at their themes and conclusions, revealing the complexity and time involved in making sense of thousands of pages of interview data, multiple data sources, and diverse types of data. How Qualitative Data Analysis Happens focuses on a diversity of topics in family research across the life course. The various authors provide detailed narratives of how they analyzed their data in previous publications and of the methodologies they used, ranging from arts-based research, autoethnography, community-based participatory research, ethnography, and grounded theory to narrative analysis. Supplemental figures, images, and screenshots referred to in the chapters are included in an accompanying eResource, along with links to the previously published work on which the chapters are based. This book is an invaluable resource for experienced and novice qualitative researchers throughout the social sciences.

Rigor in the 6–12 Math and Science Classroom: A Teacher Toolkit

by Barbara R. Blackburn and Abbigail Armstrong

Learn how to incorporate rigorous activities into your math or science classroom and help students reach higher levels of learning. Expert educators and consultants Barbara R. Blackburn and Abbigail Armstrong offer a practical framework for understanding rigor and provide specialized examples for middle and high school math and science teachers. Topics covered include creating a rigorous environment, high expectations, support and scaffolding, demonstration of learning, assessing student progress, and collaborating with colleagues. The book comes with classroom-ready tools, offered in the book and as free eResources at www.routledge.com/9781138302716.

Statistical Methods for Field and Laboratory Studies in Behavioral Ecology (Chapman & Hall/CRC Applied Environmental Statistics)

by Scott Pardo and Michael Pardo

Statistical Methods for Field and Laboratory Studies in Behavioral Ecology focuses on how statistical methods may be used to make sense of behavioral ecology and other data. It presents fundamental concepts in statistical inference and intermediate topics such as multiple least squares regression and ANOVA. The objective is to teach students to recognize situations where various statistical methods should be used, to understand the strengths and limitations of those methods, and to show how they are implemented in R code. Examples are based on research described in the behavioral ecology literature, with data sets and analysis code provided. Features: this intermediate-to-advanced statistical methods text was written with the behavioral ecologist in mind; computer programs are provided, written in the R language; data sets are also provided, mostly based, at least to some degree, on real studies. Methods and ideas discussed include multiple regression and ANOVA, logistic and Poisson regression, machine learning and model identification, time-to-event modeling, time series and stochastic modeling, game-theoretic modeling, multivariate methods, study design/sample size, and what to do when things go wrong. It is assumed that the reader has already had exposure to statistics through at least a first introductory course and has sufficient knowledge of R; however, some introductory material is included to aid the less initiated reader. Scott Pardo, Ph.D., is an accredited professional statistician (PStat®) of the American Statistical Association. Michael Pardo is a Ph.D. candidate in behavioral ecology at Cornell University, specializing in animal communication and social behavior.

Sample Size Calculations in Clinical Research (Chapman & Hall/CRC Biostatistics Series)

by Shein-Chung Chow, Jun Shao, Hansheng Wang, and Yuliya Lokhnygina

Praise for the Second Edition: "… this is a useful, comprehensive compendium of almost every possible sample size formula. The strong organization and carefully defined formulae will aid any researcher designing a study." (Biometrics) "This impressive book contains formulae for computing sample size in a wide range of settings. One-sample studies and two-sample comparisons for quantitative, binary, and time-to-event outcomes are covered comprehensively, with separate sample size formulae for testing equality, non-inferiority, and equivalence. Many less familiar topics are also covered …" (Journal of the Royal Statistical Society) Sample Size Calculations in Clinical Research, Third Edition presents statistical procedures for performing sample size calculations during the various phases of clinical research and development. A comprehensive and unified presentation of statistical concepts and practical applications, this book includes a well-balanced summary of current and emerging clinical issues, regulatory requirements, and recently developed statistical methodologies for sample size calculation. Features: compares the relative merits and disadvantages of statistical methods for sample size calculations; explains how the formulae and procedures for sample size calculations can be used in a variety of clinical research and development stages; presents real-world examples from several therapeutic areas, including cardiovascular medicine, the central nervous system, anti-infective medicine, oncology, and women's health; provides sample size calculations for dose-response studies, microarray studies, and Bayesian approaches. This new edition is updated throughout, includes many new sections, and adds five new chapters on emerging topics: two-stage seamless adaptive designs, cluster randomized trial designs, the zero-inflated Poisson distribution, clinical trials with extremely low incidence rates, and clinical trial simulation.
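For orientation, the best known of these formulae, the per-group sample size for a two-sided test of equality of two means with common variance (standard textbook material, not quoted from this edition), reads

\[ n \;=\; \frac{2\,\bigl(z_{1-\alpha/2} + z_{1-\beta}\bigr)^{2}\,\sigma^{2}}{\delta^{2}}, \]

where \(\delta\) is the true mean difference to be detected, \(\sigma\) the common standard deviation, and \(z_p\) the standard normal quantile. With \(\alpha = 0.05\), power \(1-\beta = 0.80\), and \(\delta = \sigma\), this gives \(n \approx 16\) per group.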

Analyzing Longitudinal Clinical Trial Data: A Practical Guide (Chapman & Hall/CRC Biostatistics Series)

by Craig Mallinckrodt and Ilya Lipkovich

Analyzing Longitudinal Clinical Trial Data: A Practical Guide provides practical, easy-to-implement approaches for bringing the latest theory on the analysis of longitudinal clinical trial data into routine practice. The book, with its example-oriented approach that includes numerous SAS and R code fragments, is an essential resource for statisticians and graduate students specializing in medical research. The authors provide clear descriptions of the relevant statistical theory and illustrate practical considerations for modeling longitudinal data. Topics covered include the choice of endpoint and statistical test; modeling means and the correlations between repeated measurements; accounting for covariates; modeling categorical data; model verification; methods for incomplete (missing) data, including the latest developments in sensitivity analyses, along with approaches for and issues in choosing estimands; and means for preventing missing data. Each chapter stands alone in its coverage of a topic. The concluding chapters provide detailed advice on how to integrate these independent topics into an overarching study development process and statistical analysis plan.
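As a flavor of the kind of model involved, here is a minimal random-intercept analysis of simulated longitudinal data in Python with statsmodels; the book's own fragments are in SAS and R, and the variable names used here (subject, week, trt, chg) are hypothetical:

```python
# Random-intercept mixed model for simulated longitudinal trial data --
# a simplified stand-in for the repeated-measures models the book covers.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n, t = 40, 4                                    # 40 subjects, 4 visits
df = pd.DataFrame({
    "subject": np.repeat(np.arange(n), t),
    "week": np.tile(np.arange(1, t + 1), n),
    "trt": np.repeat(rng.integers(0, 2, n), t),  # randomized arm
})
# Change from baseline: treatment-by-time effect plus a
# subject-level random intercept plus residual noise.
subj_eff = np.repeat(rng.normal(0.0, 1.0, n), t)
df["chg"] = 0.5 * df["trt"] * df["week"] + subj_eff + rng.normal(0.0, 1.0, n * t)

model = smf.mixedlm("chg ~ week * trt", df, groups=df["subject"])
print(model.fit().summary())
```

An MMRM with unstructured within-subject covariance, as typically specified in trial analysis plans, needs more specialized tooling; the random-intercept model above is the simplest member of that family.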

AIQ: How artificial intelligence works and how we can harness its power for a better world

by Nick Polson and James Scott

Two leading data scientists offer an up-close and user-friendly look at artificial intelligence: what it is, how it works, where it came from, and how to harness its power for a better world. "There comes a time in the life of a subject when someone steps up and writes the book about it. AIQ explores the fascinating history of the ideas that drive this technology of the future and demystifies the core concepts behind it; the result is a positive and entertaining look at the great potential unlocked by marrying human creativity with powerful machines." (Steven D. Levitt, co-author of Freakonomics) Dozens of times per day, we all interact with intelligent machines that are constantly learning from the wealth of data now available to them. These machines, from smartphones to talking robots to self-driving cars, are remaking the world in the twenty-first century in the same way that the Industrial Revolution remade the world in the nineteenth. AIQ is based on a simple premise: if you want to understand the modern world, then you have to know a little bit of the mathematical language spoken by intelligent machines. AIQ will teach you that language, but in an unconventional way, anchored in stories rather than equations. You will meet a fascinating cast of historical characters who have a lot to teach you about data, probability, and better thinking. Along the way, you'll see how these same ideas are playing out in the modern age of big data and intelligent machines, and how these technologies will soon help you to overcome some of your built-in cognitive weaknesses, giving you a chance to lead a happier, healthier, more fulfilled life.

Model-based Geostatistics for Global Public Health: Methods and Applications (Chapman & Hall/CRC Interdisciplinary Statistics)

by Peter J. Diggle and Emanuele Giorgi

Model-based Geostatistics for Global Public Health: Methods and Applications provides an introductory account of model-based geostatistics, its implementation in open-source software, and its application in public health research. In the public health problems that are the focus of this book, the authors describe and explain the pattern of spatial variation in a health outcome or exposure measurement of interest. Model-based geostatistics uses explicit probability models and established principles of statistical inference to address questions of this kind. Features: presents state-of-the-art methods in model-based geostatistics; discusses the application of these methods to some of the most challenging global public health problems, including disease mapping, exposure mapping, and environmental epidemiology; describes exploratory methods for analysing geostatistical data, including diagnostic checking of residuals, standard linear and generalized linear models, variogram analysis, Gaussian process models, and geostatistical design issues; includes a range of more complex geostatistical problems where research is ongoing. All of the results in the book are reproducible using publicly available R code and data sets, as well as a dedicated R package. The book has been written to be accessible not only to statisticians but also to students and researchers in the public health sciences. Peter Diggle is Distinguished University Professor of Statistics in the Faculty of Health and Medicine, Lancaster University. He also holds honorary positions at the Johns Hopkins University School of Public Health, the Columbia University International Research Institute for Climate and Society, and the Yale University School of Public Health. His research involves the development of statistical methods for analyzing spatial and longitudinal data and their applications in the biomedical and health sciences. Emanuele Giorgi is a Lecturer in Biostatistics and a member of the CHICAS research group at Lancaster University, where he obtained a PhD in Statistics and Epidemiology in 2015. His research interests involve the development of novel geostatistical methods for disease mapping, with a special focus on malaria and other tropical diseases. In 2018, Dr Giorgi was awarded the Royal Statistical Society Research Prize "for outstanding published contribution at the interface of statistics and epidemiology." He is also the lead developer of PrevMap, an R package in which all the methodology found in this book has been implemented.
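One of the exploratory tools mentioned above, the empirical variogram, is easy to sketch. The following Python/NumPy implementation of the classical Matheron estimator is a generic illustration (the book itself works in R, e.g. via the PrevMap package):

```python
# Empirical (semi)variogram of spatial data: for each distance bin,
# average 0.5 * (z_i - z_j)^2 over pairs of sites that far apart.
import numpy as np

def empirical_variogram(coords, values, bin_edges):
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    sq = 0.5 * (values[:, None] - values[None, :]) ** 2
    iu = np.triu_indices(len(values), k=1)      # count each pair once
    d, sq = d[iu], sq[iu]
    gamma = np.full(len(bin_edges) - 1, np.nan)
    for k in range(len(bin_edges) - 1):
        mask = (d >= bin_edges[k]) & (d < bin_edges[k + 1])
        if mask.any():
            gamma[k] = sq[mask].mean()
    return gamma

rng = np.random.default_rng(1)
coords = rng.uniform(0, 1, size=(200, 2))               # random site locations
values = np.sin(3 * coords[:, 0]) + rng.normal(0, 0.2, 200)  # toy spatial signal
print(empirical_variogram(coords, values, np.linspace(0, 0.5, 6)))
```

A variogram that rises with distance before leveling off, as this toy signal produces, is the classic diagnostic of spatial correlation that motivates the Gaussian process models the book then fits.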

Bayesian Cost-Effectiveness Analysis of Medical Treatments (Chapman & Hall/CRC Biostatistics Series)

by Elias Moreno, Francisco Jose Vazquez-Polo, and Miguel Angel Negrín-Hernández

Cost-effectiveness analysis is becoming an increasingly important tool for decision making in health systems. Bayesian Cost-Effectiveness Analysis of Medical Treatments formulates cost-effectiveness analysis as a statistical decision problem, identifies the sources of uncertainty in the problem, and gives an overview of the frequentist and Bayesian statistical approaches to decision making. Basic notions of decision theory, such as the space of decisions, the space of nature, the utility function of a decision, and optimal decisions, are explained in detail using easy-to-read mathematics. Features: focuses on cost-effectiveness analysis as a statistical decision problem and applies well-established optimal statistical decision methodology; discusses utility functions for cost-effectiveness analysis; enlarges the class of models typically used in cost-effectiveness analysis with the incorporation of linear models to account for patient covariates, which permits the formulation of the group (or subgroup) theory; provides Bayesian procedures to account for model uncertainty in variable selection for linear models and in clustering for models of heterogeneous data (model uncertainty in cost-effectiveness analysis has not previously been considered in the literature); illustrates examples with real data and, to facilitate practical implementation with real data sets, provides the codes in Mathematica for the proposed methodology. The motivation for the book is to make the achievements in cost-effectiveness analysis accessible to health providers, who need to make optimal decisions, as well as to practitioners and students of the health sciences. Elías Moreno is Professor of Statistics and Operational Research at the University of Granada, Spain, a Corresponding Member of the Royal Academy of Sciences of Spain, and an elected member of the ISI. Francisco José Vázquez-Polo is Professor of Mathematics and Bayesian Methods at the University of Las Palmas de Gran Canaria and Head of the Department of Quantitative Methods. Miguel Ángel Negrín is Senior Lecturer in the Department of Quantitative Methods at the ULPGC; his main research topics are Bayesian methods applied to health economics, economic evaluation and cost-effectiveness analysis, meta-analysis, and equity in the provision of healthcare services.
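A standard utility function in this setting (textbook net-monetary-benefit notation; the authors' exact formulation may differ) is

\[ U(e, c; \lambda) \;=\; \lambda\, e \;-\; c, \]

where \(e\) is effectiveness, \(c\) is cost, and \(\lambda > 0\) is the decision-maker's willingness to pay per unit of effectiveness; treatment 1 is then preferred to treatment 2 exactly when the expected incremental net benefit \( \mathbb{E}\!\left[\lambda (e_1 - e_2) - (c_1 - c_2)\right] \) is positive.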

Equivalence: Elizabeth L. Scott at Berkeley

by Amanda L. Golbeck

Equivalence: Elizabeth L. Scott at Berkeley is the compelling story of one pioneering statistician’s relentless twenty-year effort to promote the status of women in academe and science. Part biography and part microhistory, the book provides the context and background to understand Scott’s masterfulness at using statistics to help solve societal problems. In addition to being one of the first researchers to work at the interface of astronomy and statistics and an early practitioner of statistics using high-speed computers, Scott worked on an impressively broad range of questions in science, from whether cloud seeding actually works to whether ozone depletion causes skin cancer. Later in her career, Scott became swept up in the academic women’s movement. She used her well-developed scientific research skills together with the advocacy skills she had honed, in such activities as raising funds for Martin Luther King Jr. and keeping Free Speech Movement students out of jail, toward policy making that would improve the condition of the academic workforce for women. The book invites the reader into Scott’s universe, a window of inspiration made possible by the fact that she saved and dated every piece of paper that came across her desk.

Mathematics of Keno and Lotteries (AK Peters/CRC Recreational Mathematics Series)

by Mark Bollman

Mathematics of Keno and Lotteries is an elementary treatment of the mathematics, primarily probability and simple combinatorics, involved in lotteries and keno. Keno has a long history as a high-advantage, high-payoff casino game, and state lottery games such as Powerball are mathematically similar. The book also considers lottery games such as passive tickets, daily number drawings, and specialized games offered around the world. In addition, there is a section on financial mathematics that explains the connection between lump-sum lottery prizes (as with Powerball) and their multi-year annuity options. So-called "winning systems" for keno and lotteries are examined mathematically and their flaws identified.
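The core calculation behind such keno analyses is hypergeometric and fits in a few lines (a generic Python sketch, not the book's code):

```python
# Probability of catching k of your m keno spots when the casino
# draws 20 of 80 numbers: a hypergeometric count of favorable draws.
from math import comb

def keno_catch_prob(m, k, drawn=20, total=80):
    """P(exactly k of the m picked spots appear among the drawn numbers)."""
    return comb(m, k) * comb(total - m, drawn - k) / comb(total, drawn)

# Chance of catching all 6 on a 6-spot ticket: about 1 in 7,753.
print(keno_catch_prob(6, 6))

# Powerball-style count: 5 of 69 white balls times 1 of 26 red balls.
print(comb(69, 5) * 26)   # 292,201,338 equally likely combinations
```

Summing payout times probability over all catch counts gives the expected value of a ticket, the quantity on which every "winning system" ultimately founders.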
