Browse Results

Showing 7,726 through 7,750 of 54,423 results

Competitive Math for Middle School: Algebra, Probability, and Number Theory

by Vinod Krishnamoorthy

The 39 self-contained sections in this book present worked-out examples as well as many sample problems, categorized by difficulty as Bronze, Silver, and Gold to help readers gauge their progress. Detailed solutions to all problems appear at the end of each chapter, so the book can be used as a classroom text or for self-study. The text covers algebra (solving single equations and systems of equations of varying degrees, algebraic manipulations for creative problem solving, inequalities, basic set theory, sequences and series, rates and proportions, unit analysis, and percentages), probability (counting techniques, introductory probability theory, more set theory, permutations and combinations, expected value, and symmetry), and number theory (prime factorizations and their applications, Diophantine equations, number bases, modular arithmetic, and divisibility). It focuses on guiding students through creative problem solving and on teaching them to apply their knowledge in a wide variety of scenarios, rather than on rote memorization of mathematical facts. It is aimed at, but not limited to, high-performing middle school students, going into greater depth and teaching concepts not otherwise covered in traditional public schools.

Energy Efficient Computing & Electronics: Devices to Systems (Devices, Circuits, and Systems)

by Santosh K. Kurinec and Sumeet Walia

In our abundant computing infrastructure, performance improvements across almost all application spaces are now severely limited by the energy dissipated in processing, storing, and moving data. The exponential increase in the volume of data to be handled by our computational infrastructure is driven in large part by unstructured data from countless sources. This book explores revolutionary device concepts, associated circuits, and architectures that will greatly extend the practical engineering limits of energy-efficient computation from the device to the circuit to the system level. With chapters written by international experts in their respective fields, the text investigates new approaches to lowering the energy requirements of computing.

Features:

• Comprehensive coverage of a range of technologies
• Chapters written by international experts in their respective fields
• Revolutionary concepts at the device, circuit, and system levels

Data Visualization: Charts, Maps, and Interactive Graphics (ASA-CRC Series on Statistical Reasoning in Science and Society)

by Robert Grant

This is the age of data. There are more innovations and more opportunities for interesting work with data than ever before, but there is also an overwhelming amount of quantitative information being published every day. Data visualisation has become big business, because communication is the difference between success and failure, no matter how clever the analysis may have been. The ability to visualize data is now a skill in demand across business, government, NGOs and academia. Data Visualization: Charts, Maps, and Interactive Graphics gives an overview of a wide range of techniques and challenges, while staying accessible to anyone interested in working with and understanding data.

Features:

• Focusses on concepts and ways of thinking about data rather than algebra or computer code.
• Features 17 short chapters that can be read in one sitting.
• Includes chapters on big data, statistical and machine learning models, visual perception, high-dimensional data, and maps and geographic data.
• Contains more than 125 visualizations, most created by the author.
• Supported by a website with all code for creating the visualizations, further reading, datasets and practical advice on crafting the images.

Whether you are a student considering a career in data science, an analyst who wants to learn more about visualization, or the manager of a team working with data, this book will introduce you to a broad range of data visualization methods.

Cover image: Landscape of Change uses data about sea level rise, glacier volume decline, increasing global temperatures, and the increasing use of fossil fuels. These data lines compose a landscape shaped by the changing climate, a world in which we are now living. Copyright © Jill Pelto (jillpelto.com).

bookdown: Authoring Books and Technical Documents with R Markdown (Chapman & Hall/CRC The R Series)

by Yihui Xie

bookdown: Authoring Books and Technical Documents with R Markdown presents a much easier way to write books and technical publications than traditional tools such as LaTeX and Word. The bookdown package inherits the simplicity of syntax and flexibility for data analysis from R Markdown, and extends R Markdown for technical writing, so that you can make better use of document elements such as figures, tables, equations, theorems, citations, and references. Similar to LaTeX, you can number and cross-reference these elements with bookdown. Your document can even include live examples so readers can interact with them while reading the book. The book can be rendered to multiple output formats, including LaTeX/PDF, HTML, EPUB, and Word, thus making it easy to put your documents online. The style and theme of these output formats can be customized. We used books and R primarily for examples in this book, but bookdown is not only for books or R. Most features introduced in this book also apply to other types of publications: journal papers, reports, dissertations, course handouts, study notes, and even novels. You do not have to use R, either. Other choices of computing languages include Python, C, C++, SQL, Bash, Stan, JavaScript, and so on, although R is best supported. You can also leave out computing altogether, for example, to write fiction. This book itself is an example of publishing with bookdown and R Markdown, and its source is fully available on GitHub.

The Craft of Model-Based Testing

by Paul C. Jorgensen

In his latest work, author Paul C. Jorgensen takes his well-honed craftsman's approach to mastering model-based testing (MBT). To be expert at MBT, a software tester has to understand it as a craft rather than an art. This means a tester should have deep knowledge of the underlying subject and be well practiced in carrying out modeling and testing techniques. Judgment is needed, as well as an understanding of MBT tools. The first part of the book helps testers develop that judgment. It starts with an overview of MBT and follows with an in-depth treatment of nine different testing models, with a chapter dedicated to each model. These chapters are tied together by a pair of examples: a simple insurance premium calculation and an event-driven system that describes a garage door controller. The book shows how simpler models—flowcharts, decision tables, and UML activity charts—express the important aspects of the insurance premium problem. It also shows how transition-based models—finite state machines, Petri nets, and statecharts—are necessary for the garage door controller but are overkill for the insurance premium problem. Each chapter describes the extent to which a model can support MBT. The second part of the book gives testers a greater understanding of MBT tools. It examines six commercial MBT products, presents the salient features of each product, and demonstrates using the product on the insurance premium and garage door controller problems. These chapters each conclude with advice on implementing MBT in an organization. The last chapter describes six open-source tools to round out a tester's knowledge of MBT. In addition, the book supports the International Software Testing Qualifications Board's (ISTQB®) MBT syllabus for certification.

Approximation Techniques for Engineers: Second Edition

by Louis Komzsik

This second edition includes eleven new sections on the approximation of matrix functions, deflating the solution space and improving the accuracy of approximate solutions, the iterative solution of initial value problems of systems of ordinary differential equations, and the method of trial functions for boundary value problems.

Probabilistic Foundations of Statistical Network Analysis (Chapman & Hall/CRC Monographs on Statistics and Applied Probability)

by Harry Crane

Probabilistic Foundations of Statistical Network Analysis presents a fresh and insightful perspective on the fundamental tenets and major challenges of modern network analysis. Its lucid exposition provides necessary background for understanding the essential ideas behind exchangeable and dynamic network models, network sampling, and network statistics such as sparsity and power laws, all of which play a central role in contemporary data science and machine learning applications. The book rewards readers with a clear and intuitive understanding of the subtle interplay between basic principles of statistical inference, empirical properties of network data, and technical concepts from probability theory. Its mathematically rigorous, yet non-technical, exposition makes the book accessible to professional data scientists, statisticians, and computer scientists as well as practitioners and researchers in substantive fields. Newcomers and non-quantitative researchers will find its conceptual approach invaluable for developing intuition about technical ideas from statistics and probability, while experts and graduate students will find the book a handy reference for a wide range of new topics, including edge exchangeability, relative exchangeability, graphon and graphex models, and graph-valued Lévy process and rewiring models for dynamic networks. The author's incisive commentary supplements these core concepts, challenging the reader to push beyond the current limitations of this emerging discipline. With an approachable exposition and more than 50 open research problems and exercises with solutions, this book is ideal for advanced undergraduate and graduate students interested in modern network analysis, data science, machine learning, and statistics. Harry Crane is Associate Professor and Co-Director of the Graduate Program in Statistics and Biostatistics and an Associate Member of the Graduate Faculty in Philosophy at Rutgers University.
Professor Crane’s research interests cover a range of mathematical and applied topics in network science, probability theory, statistical inference, and mathematical logic. In addition to his technical work on edge and relational exchangeability, relative exchangeability, and graph-valued Markov processes, Prof. Crane’s methods have been applied to domain-specific cybersecurity and counterterrorism problems at the Foreign Policy Research Institute and RAND’s Project AIR FORCE.

A Brief Introduction to Dispersion Relations: With Modern Applications (SpringerBriefs in Physics)

by José Antonio Oller

This text offers a brief introduction to dispersion relations as an approach to calculating S-matrix elements, a formalism that allows one to take advantage of the analytical structure of scattering amplitudes following the basic principles of unitarity and causality. First, the case of two-body scattering is considered, and then its contribution to other processes through final-state interactions is discussed. For two-body scattering amplitudes, the general expression for a partial-wave amplitude is derived in the approximation where the crossed-channel dynamics is neglected. This is taken as the starting point for many interesting nonperturbative applications, both in the light and heavy quark sectors. Subsequently, crossed-channel dynamics is introduced within the equations for calculating the partial-wave amplitudes. Some applications based on methods that treat crossed-channel dynamics perturbatively are discussed as well. The last part of this introductory treatment is dedicated to the further impact of scattering amplitudes on a variety of processes through final-state interactions. Several possible approaches are discussed, such as the Muskhelishvili-Omnès dispersive integral equations and other closed formulae. These different formalisms are then applied in particular to the study of resonances presenting a number of challenging properties. The book ends with a chapter illustrating the use of dispersion relations in the nuclear medium for the evaluation of the energy density in nuclear matter.

C/C++ anwenden: Technisch-wissenschaftliche Übungsaufgaben mit Lösungen

by Thomas Hoch and Gerd Küveler

This book is about solving realistic problems from research and engineering. Both the problems and their solutions are explained in detail. Apart from the introductory exercises, these are larger projects that, as a rule, cannot be covered in regular exercise sessions. Graphics tools add to the fun of programming. The exercises with solutions covered in this book already require basic knowledge of C and C++.

Textual Data Science with R (Chapman & Hall/CRC Computer Science & Data Analysis)

by Mónica Bécue-Bertaut

Textual Data Science with R comprehensively covers the main multidimensional methods of textual statistics, supported by a specially written R package. Methods discussed include correspondence analysis, clustering, and multiple factor analysis for contingency tables, and each method is illuminated by applications. The book is aimed at researchers and students in statistics, the social sciences, history, literature and linguistics, and will interest anyone from practitioners needing to extract information from texts to students in the field of massive data, where the ability to process textual data is becoming essential.

Das Geheimnis der transzendenten Zahlen: Eine etwas andere Einführung in die Mathematik

by Fridtjof Toenniessen

What is mathematics? What makes it so exciting? And how do mathematicians actually do research? Das Geheimnis der transzendenten Zahlen (The Secret of Transcendental Numbers) is an introduction to mathematics in which these questions take center stage. No prior knowledge is needed. Building on the natural numbers 0, 1, 2, 3, ..., the book sets out on a journey through various areas of this living science. The journey's destination is the great discoveries that solved millennia-old riddles from antiquity. The common thread is the famous question of squaring the circle, which is closely tied to transcendental numbers. The book shows how mathematicians research with curiosity, keep asking new questions, and find surprising connections along the way. It is aimed at university students, teachers, school students, and lay readers who want to walk these paths as well. For the second edition the work was completely revised, simplified through intuitive arguments, and extended to include Hilbert's famous seventh problem. Praise for the book: "Fridtjof Toenniessen guides the reader with his clear, sensitive, and entertaining style along a path that leads from school mathematics through basic concepts of university mathematics to selected highlights of modern number theory, and thereby makes an important contribution to easing the transition from school to university. This book thrives on the fascination of the world of numbers and the author's enthusiasm for the field. I particularly like that the proof techniques and the method of abstraction that distinguish mathematics are not hidden but, on the contrary, brought into focus." Prof. Dr. Stefan Müller-Stach, Universität Mainz

Applied Directional Statistics: Modern Methods and Case Studies (Chapman & Hall/CRC Interdisciplinary Statistics)

by Christophe Ley and Thomas Verdebout

This book collects important advances in methodology and data analysis for directional statistics. It is the companion book of the more theoretical treatment presented in Modern Directional Statistics (CRC Press, 2017). The field of directional statistics has received a lot of attention due to demands from disciplines such as life sciences or machine learning, the availability of massive data sets requiring adapted statistical techniques, and technological advances. This book covers important progress in bioinformatics, biology, astrophysics, oceanography, environmental sciences, earth sciences, machine learning and social sciences.

Python for Bioinformatics (Chapman & Hall/CRC Mathematical and Computational Biology)

by Sebastian Bassi

In today's data-driven biology, programming knowledge is essential for turning ideas into testable hypotheses. Based on the author's extensive experience, Python for Bioinformatics, Second Edition helps biologists get to grips with the basics of software development. Requiring no prior knowledge of programming-related concepts, the book focuses on the easy-to-use, yet powerful, Python language. This new edition is updated throughout to Python 3 and is designed not just to help scientists master the basics, but to do more in less time and in a reproducible way. New topics added in this edition include NoSQL databases, the Anaconda Python distribution, graphical libraries such as Bokeh, and the use of GitHub for collaborative development.
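As a flavor of the material such an introduction covers, here is a minimal illustrative sketch (not taken from the book) of a classic first exercise in bioinformatics courses: computing the reverse complement and GC content of a DNA sequence in plain Python 3, with no third-party dependencies.

```python
# Illustrative sketch, not from the book: two small DNA-sequence utilities
# written with only the Python standard library.

COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def reverse_complement(seq: str) -> str:
    """Return the reverse complement of a DNA sequence."""
    return "".join(COMPLEMENT[base] for base in reversed(seq.upper()))

def gc_content(seq: str) -> float:
    """Return the fraction of bases that are G or C."""
    seq = seq.upper()
    return (seq.count("G") + seq.count("C")) / len(seq)

if __name__ == "__main__":
    dna = "ATGCGC"
    print(reverse_complement(dna))    # GCGCAT
    print(round(gc_content(dna), 2))  # 0.67
```

The function names and the sample sequence are invented for illustration; the book itself works through this kind of task (and much larger ones) step by step.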

Risk Assessment and Decision Analysis with Bayesian Networks

by Norman Fenton and Martin Neil

Since the first edition of this book was published, Bayesian networks have become even more important for applications in a vast array of fields. This second edition includes new material on influence diagrams, learning from data, value of information, cybersecurity, debunking bad statistics, and much more. Focusing on practical real-world problem solving and model building, as opposed to algorithms and theory, it explains how to incorporate knowledge with data to develop and use (Bayesian) causal models of risk that provide more powerful insights and better decision making than is possible from purely data-driven solutions.

Features:

• Provides all the tools necessary to build and run realistic Bayesian network models
• Supplies extensive example models based on real risk-assessment problems in a wide range of application domains, including finance, safety, systems reliability, law, forensics, and cybersecurity
• Introduces all necessary mathematics, probability, and statistics as needed
• Establishes the basics of probability, risk, and building and using Bayesian network models before going into the detailed applications

A dedicated website contains exercises and worked solutions for all chapters along with numerous other resources. The AgenaRisk software contains a model library with executable versions of all of the models in the book. Lecture slides are freely available to accredited academic teachers adopting the book for their course.
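At the heart of any Bayesian network lies conditional-probability updating. A minimal two-variable sketch in Python (the numbers are invented for illustration and are not from the book) shows the kind of calculation that a full network automates across many interconnected variables:

```python
# Hypothetical two-node example: update the probability of a risk event
# after observing a noisy indicator, using Bayes' rule. All numbers are
# illustrative assumptions, not values from the book.

def posterior(prior: float, p_pos_given_true: float, p_pos_given_false: float) -> float:
    """P(event | positive indicator) via Bayes' rule."""
    numerator = p_pos_given_true * prior
    denominator = numerator + p_pos_given_false * (1 - prior)
    return numerator / denominator

if __name__ == "__main__":
    # Assumed: 1% base rate, 95% true-positive rate, 5% false-positive rate.
    p = posterior(prior=0.01, p_pos_given_true=0.95, p_pos_given_false=0.05)
    print(round(p, 3))  # 0.161
```

Even with a strong indicator, the low base rate keeps the posterior modest; a Bayesian network chains many such updates through a graph of dependent variables, which tools like AgenaRisk compute automatically.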

Continuous Improvement, Probability, and Statistics: Using Creative Hands-On Techniques (Continuous Improvement Series)

by William Hooper

What happens when the sport of juggling meets a statistical process control class? This book takes a creative approach to teaching data analysis for continuous improvement. Using step-by-step instructions, including over 65 photos and 40 graphs, traditional continuous improvement topics (design of experiments, reliability functions, and probability) are demonstrated using card illusions and hands-on activities. This book is for anyone who teaches these topics and wants to make them more understandable and sometimes even fun. Every operator, technician, student, manager, and leader can learn data analysis and be inspired to join the next generation of continuous improvement professionals.

Flexible Regression and Smoothing: Using GAMLSS in R (Chapman & Hall/CRC The R Series)

by Mikis D. Stasinopoulos, Robert A. Rigby, Gillian Z. Heller, Vlasios Voudouris, and Fernanda De Bastiani

This book is about learning from data using Generalized Additive Models for Location, Scale and Shape (GAMLSS). GAMLSS extends Generalized Linear Models (GLMs) and Generalized Additive Models (GAMs) to accommodate the large, complex datasets that are increasingly prevalent. GAMLSS allows any parametric distribution for the response variable and models all the parameters (location, scale and shape) of the distribution as linear or smooth functions of explanatory variables. This book provides a broad overview of GAMLSS methodology and its implementation in R. It includes a comprehensive collection of real data examples, integrated code, and figures to illustrate the methods, and is supplemented by a website with code, data and additional materials.

Addressing Special Educational Needs and Disability in the Curriculum: Maths (Addressing SEND in the Curriculum)

by Max Wallace

The SEND Code of Practice (2015) reinforced the requirement that all teachers must meet the needs of all learners. This topical book provides practical, tried and tested strategies and resources that will support teachers in making maths lessons accessible and interesting for all pupils, including those with special needs. The author draws on a wealth of experience to share his understanding of special educational needs and disabilities and show how the maths teacher can reduce or remove any barriers to learning. Offering strategies that are specific to the context of maths teaching, this book will enable teachers to:

• adopt a ‘problem solving’ approach to ensure students use and apply mathematics at all times during their learning
• develop students’ understanding of mathematical ideas
• structure lessons to empower and actively engage students
• create a mutually supportive classroom which maximises learning opportunities
• plan the classroom layout and display to enhance learning, for example displaying number lines, vocabulary lists and pupils’ work
• successfully train and fully use the support of their teaching assistants.

An invaluable tool for continuing professional development, this text will be essential for secondary maths teachers (and their teaching assistants) seeking guidance specific to teaching maths to all pupils, regardless of their individual needs. This book will also be of interest to secondary SENCOs, senior management teams and ITT providers. In addition to free online resources, a range of appendices provide maths teachers with a variety of pro forma and activity sheets to support effective teaching. This is an essential tool for maths teachers and teaching assistants, and will help to deliver successful, inclusive lessons for all pupils.

Speed, Data, and Ecosystems: Excelling in a Software-Driven World (Chapman & Hall/CRC Innovations in Software Engineering and Software Development Series)

by Jan Bosch

As software R&D investment increases, the benefits from short feedback cycles using technologies such as continuous deployment, experimentation-based development, and multidisciplinary teams require a fundamentally different strategy and process. This book will cover the three overall challenges that companies are grappling with: speed, data and ecosystems. Speed deals with shortening the cycle time in R&D. Data deals with increasing the use of and benefit from the massive amounts of data that companies collect. Ecosystems address the transition of companies from being internally focused to being ecosystem oriented by analyzing what the company is uniquely good at and where it adds value.

Exascale Scientific Applications: Scalability and Performance Portability (Chapman & Hall/CRC Computational Science)

by Timothy J. Williams, Tjerk P. Straatsma, and Katerina B. Antypas

From the Foreword: "The authors of the chapters in this book are the pioneers who will explore the exascale frontier. The path forward will not be easy... These authors, along with their colleagues who will produce these powerful computer systems will, with dedication and determination, overcome the scalability problem, discover the new algorithms needed to achieve exascale performance for the broad range of applications that they represent, and create the new tools needed to support the development of scalable and portable science and engineering applications. Although the focus is on exascale computers, the benefits will permeate all of science and engineering because the technologies developed for the exascale computers of tomorrow will also power the petascale servers and terascale workstations of tomorrow. These affordable computing capabilities will empower scientists and engineers everywhere."— Thom H. Dunning, Jr., Pacific Northwest National Laboratory and University of Washington, Seattle, Washington, USA "This comprehensive summary of applications targeting Exascale at the three DoE labs is a must read."— Rio Yokota, Tokyo Institute of Technology, Tokyo, Japan "Numerical simulation is now a need in many fields of science, technology, and industry. The complexity of the simulated systems coupled with the massive use of data makes HPC essential to move towards predictive simulations. Advances in computer architecture have so far permitted scientific advances, but at the cost of continually adapting algorithms and applications. The next technological breakthroughs force us to rethink the applications by taking energy consumption into account. 
These profound modifications require not only anticipation and sharing but also a paradigm shift in application design to ensure the sustainability of developments by guaranteeing a certain independence of the applications to the profound modifications of the architectures: it is the passage from optimal performance to the portability of performance. It is the challenge of this book to demonstrate by example the approach that one can adopt for the development of applications offering performance portability in spite of the profound changes of the computing architectures."— Christophe Calvin, CEA, Fundamental Research Division, Saclay, France "Three editors, one from each of the High Performance Computer Centers at Lawrence Berkeley, Argonne, and Oak Ridge National Laboratories, have compiled a very useful set of chapters aimed at describing software developments for the next generation exa-scale computers. Such a book is needed for scientists and engineers to see where the field is going and how they will be able to exploit such architectures for their own work. The book will also benefit students as it provides insights into how to develop software for such computer architectures. Overall, this book fills an important need in showing how to design and implement algorithms for exa-scale architectures which are heterogeneous and have unique memory systems. The book discusses issues with developing user codes for these architectures and how to address these issues including actual coding examples."— Dr. David A. Dixon, Robert Ramsay Chair, The University of Alabama, Tuscaloosa, Alabama, USA

Survival Analysis with Interval-Censored Data: A Practical Approach with Examples in R, SAS, and BUGS (Chapman & Hall/CRC Interdisciplinary Statistics)

by Kris Bogaerts, Arnost Komarek, and Emmanuel Lesaffre

Survival Analysis with Interval-Censored Data: A Practical Approach with Examples in R, SAS, and BUGS provides the reader with a practical introduction to the analysis of interval-censored survival times. Although many theoretical developments have appeared in the last fifty years, interval censoring is often ignored in practice. Many are unaware of the impact of dealing inappropriately with interval censoring, and the necessary software is at times difficult to trace. This book fills the gap between theory and practice.

Features:

• Provides an overview of frequentist as well as Bayesian methods
• Includes a focus on practical aspects and applications
• Extensively illustrates the methods with examples using R, SAS, and BUGS; full programs are available on a supplementary website

The authors: Kris Bogaerts is project manager at I-BioStat, KU Leuven. He received his PhD in science (statistics) at KU Leuven on the analysis of interval-censored data. He has gained expertise in a great variety of statistical topics, with a focus on the design and analysis of clinical trials. Arnošt Komárek is associate professor of statistics at Charles University, Prague. His area of expertise covers mainly survival analysis, with an emphasis on interval-censored data, and classification based on longitudinal data. He is past chair of the Statistical Modelling Society and editor of Statistical Modelling: An International Journal. Emmanuel Lesaffre is professor of biostatistics at I-BioStat, KU Leuven. His research interests include Bayesian methods, longitudinal data analysis, statistical modelling, analysis of dental data, interval-censored data, misclassification issues, and clinical trials. He is the founding chair of the Statistical Modelling Society, past president of the International Society for Clinical Biostatistics, and a fellow of ISI and ASA.

Joint Modeling of Longitudinal and Time-to-Event Data (Chapman & Hall/CRC Monographs on Statistics and Applied Probability)

by Ning Li, Robert Elashoff, and Gang Li

Longitudinal studies often incur several problems that challenge standard statistical methods for data analysis. These problems include non-ignorable missing data in longitudinal measurements of one or more response variables, informative observation times of longitudinal data, and survival analysis with intermittently measured time-dependent covariates that are subject to measurement error and/or substantial biological variation. Joint modeling of longitudinal and time-to-event data has emerged as a novel approach to handle these issues. Joint Modeling of Longitudinal and Time-to-Event Data provides a systematic introduction and review of state-of-the-art statistical methodology in this active research field. The methods are illustrated by real data examples from a wide range of clinical research topics. A collection of data sets and software for practical implementation of the joint modeling methodologies are available through the book website. This book serves as a reference book for scientific investigators who need to analyze longitudinal and/or survival data, as well as researchers developing methodology in this field. It may also be used as a textbook for a graduate level course in biostatistics or statistics.

Twists, Tilings, and Tessellations: Mathematical Methods for Geometric Origami (AK Peters/CRC Recreational Mathematics Series)

by Robert J. Lang

Twists, Tilings, and Tessellations describes the underlying principles and mathematics of the broad and exciting field of abstract and mathematical origami, most notably the field of origami tessellations. It contains folding instructions, underlying principles, mathematical concepts, and many beautiful photos of the latest work in this fast-expanding field.

Programming for Hybrid Multi/Manycore MPP Systems (Chapman & Hall/CRC Computational Science)

by John Levesque and Aaron Vose

"Ask not what your compiler can do for you, ask what you can do for your compiler."--John Levesque, Director of Cray’s Supercomputing Centers of Excellence The next decade of computationally intense computing lies with more powerful multi/manycore nodes where processors share a large memory space. These nodes will be the building block for systems that range from a single-node workstation up to systems approaching the exaflop regime. The node itself will consist of tens to hundreds of MIMD (multiple instruction, multiple data) processing units with SIMD (single instruction, multiple data) parallel instructions. Since a standard, affordable memory architecture will not be able to supply the bandwidth required by these cores, new memory organizations will be introduced. These new node architectures will represent a significant challenge to application developers. Programming for Hybrid Multi/Manycore MPP Systems briefly describes the current state of the art in programming these systems, and proposes an approach for developing a performance-portable application that can effectively utilize all of these systems from a single code base. The book starts with a strategy for optimizing an application for multi/manycore architectures. It then looks at the three typical architectures, covering their advantages and disadvantages. The next section of the book explores the other important component of the target—the compiler. The compiler will ultimately convert the input language to executable code on the target, and the book explores how to make the compiler do what we want. The book then talks about gathering runtime statistics from running the application on the important problem sets previously discussed. How best to utilize available memory bandwidth and virtualization is covered next, along with hybridization of a program.
The last part of the book includes several major applications, and examines future hardware advancements and how the application developer may prepare for those advancements.

Bayesian Psychometric Modeling (Chapman & Hall/CRC Statistics in the Social and Behavioral Sciences)

by Roy Levy and Robert J. Mislevy

A Single Cohesive Framework of Tools and Procedures for Psychometrics and Assessment

Bayesian Psychometric Modeling presents a unified Bayesian approach across traditionally separate families of psychometric models. It shows that Bayesian techniques, as alternatives to conventional approaches, offer distinct and profound advantages in achieving many goals of psychometrics. Adopting a Bayesian approach can aid in unifying seemingly disparate—and sometimes conflicting—ideas and activities in psychometrics. This book explains both how to perform psychometrics using Bayesian methods and why many of the activities in psychometrics align with Bayesian thinking. The first part of the book introduces foundational principles and statistical models, including conceptual issues, normal distribution models, Markov chain Monte Carlo estimation, and regression. Focusing more directly on psychometrics, the second part covers popular psychometric models, including classical test theory, factor analysis, item response theory, latent class analysis, and Bayesian networks. Throughout the book, procedures are illustrated using examples primarily from educational assessments. A supplementary website provides the datasets, WinBUGS code, R code, and Netica files used in the examples.

Methods in Comparative Effectiveness Research (Chapman & Hall/CRC Biostatistics Series)

by Sally C. Morton and Constantine Gatsonis

Comparative effectiveness research (CER) is the generation and synthesis of evidence that compares the benefits and harms of alternative methods to prevent, diagnose, treat, and monitor a clinical condition or to improve the delivery of care (IOM 2009). CER is conducted to develop evidence that will aid patients, clinicians, purchasers, and health policy makers in making informed decisions at both the individual and population levels. CER encompasses a very broad range of types of studies—experimental, observational, prospective, retrospective, and research synthesis. This volume covers the main areas of quantitative methodology for the design and analysis of CER studies. It has four major sections: causal inference; clinical trials; research synthesis; and specialized topics. The audience includes CER methodologists, quantitatively trained researchers interested in CER, and graduate students in statistics, epidemiology, and health services and outcomes research. The book assumes a master's-level course in regression analysis and familiarity with clinical research.
