Browse Results

Showing 29,301 through 29,325 of 82,418 results

Evaluating AAL Systems Through Competitive Benchmarking: International Competitions and Final Workshop, EvAAL 2012, July and September 2012. Revised Selected Papers (Communications in Computer and Information Science #362)

by Stefano Chessa and Stefan Knauth

This book constitutes the refereed proceedings of the international competition aimed at the evaluation and assessment of Ambient Assisted Living (AAL), EvAAL 2012, which was organized in three major events: the Second International Competition on Indoor Localization and Tracking for Ambient Assisted Living, held in Madrid, Spain, in July 2012; the First International Competition on Activity Recognition for Ambient Assisted Living, held in Valencia, Spain, in July 2012; and the Final Workshop, held in Eindhoven, The Netherlands, in September 2012. The papers included in this book describe the organization and technical aspects of the competitions, provide a complete technical description of the competing artefacts, and report on the experiences and lessons learned by the teams during the competition.

Evaluating AAL Systems Through Competitive Benchmarking - Indoor Localization and Tracking: International Competition, EvAAL 2011, Competition in Valencia, Spain, July 25-29, 2011, and Final Workshop in Lecce, Italy, September 26, 2011. Revised Selected Papers (Communications in Computer and Information Science #309)

by Stefano Chessa and Stefan Knauth

This book constitutes the refereed proceedings of the international competition aimed at the evaluation and assessment of Ambient Assisted Living (AAL) systems and services, EvAAL 2011, which was organized in two major events: the Competition in Valencia, Spain, in July 2011, and the Final Workshop in Lecce, Italy, in September 2011. The papers included in this book describe the organization and technical aspects of the competition, provide a complete technical description of the competing artefacts, and report on the experiences and lessons learned by the teams during the competition.

Evaluating Children's Interactive Products: Principles and Practices for Interaction Designers (Interactive Technologies)

by Panos Markopoulos, Janet C. Read, Stuart MacFarlane, and Johanna Hoysniemi

Evaluating Children's Interactive Products directly addresses the need to ensure that interactive products designed for children — whether toys, games, educational products, or websites — are safe, effective, and entertaining. It presents an essential background in child development and child psychology, particularly as they relate to technology; captures best practices for observing and surveying children, training evaluators, and capturing the child user experience using audio and visual technology; and examines ethical and legal issues involved in working with children, offering guidelines for effective risk management. Based on the authors' workshops, conference courses, and their own design experience and research, this highly practical book reads like a handbook while being thoroughly grounded in the latest research. Throughout, the authors illustrate techniques and principles with numerous mini case studies, highlight practical information in tips and exercises, and conclude with three in-depth case studies. The book is recommended for usability experts, product developers, and researchers in the field.

Evaluating e-Learning: Guiding Research and Practice (Connecting with E-learning)

by Rob Phillips, Carmel McNaught, and Gregor Kennedy

How can novice e-learning researchers and postgraduate learners develop rigorous plans to study the effectiveness of technology-enhanced learning environments? How can practitioners gather and portray evidence of the impact of e-learning? How can the average educator who teaches online, without experience in evaluating emerging technologies, build on what is successful and modify what is not? By unpacking the e-learning lifecycle and focusing on learning, not technology, Evaluating e-Learning attempts to resolve some of the complexity inherent in evaluating the effectiveness of e-learning. The book presents practical advice in the form of an evaluation framework and a scaffolded approach to an e-learning research study, using divide-and-conquer techniques to reduce complexity in both design and delivery. It adapts and builds on familiar research methodology to offer a robust and accessible approach that can ensure effective evaluation of a wide range of innovative initiatives, including those covered in other books in the Connecting with e-Learning series. Readers will find that this jargon-free guide is a must-have resource that provides the proper tools for evaluating e-learning practices with ease.

Evaluating Information Retrieval and Access Tasks: NTCIR's Legacy of Research Impact (The Information Retrieval Series #43)

by Tetsuya Sakai, Douglas W. Oard, and Noriko Kando

This open access book summarizes the first two decades of the NII Testbeds and Community for Information access Research (NTCIR). NTCIR is a series of evaluation forums run by a global team of researchers and hosted by the National Institute of Informatics (NII), Japan. The book is unique in that it discusses not just what was done at NTCIR, but also how it was done and the impact it has achieved. For example, in some chapters the reader sees the early seeds of what eventually grew to be the search engines that provide access to content on the World Wide Web, today’s smartphones that can tailor what they show to the needs of their owners, and the smart speakers that enrich our lives at home and on the move. We also get glimpses into how new search engines can be built for mathematical formulae, or for the digital record of a lived human life. Key to the success of the NTCIR endeavor was early recognition that information access research is an empirical discipline and that evaluation therefore lay at the core of the enterprise. Evaluation is thus at the heart of each chapter in this book. They show, for example, how the recognition that some documents are more important than others has shaped thinking about evaluation design. The thirty-three contributors to this volume speak for the many hundreds of researchers from dozens of countries around the world who together shaped NTCIR as organizers and participants. This book is suitable for researchers, practitioners, and students—anyone who wants to learn about past and present evaluation efforts in information retrieval, information access, and natural language processing, as well as those who want to participate in an evaluation task or even to design and organize one.

Evaluating Information Systems

by Zahir Irani and Peter Love

The adoption of Information Technology (IT) and Information Systems (IS) represents significant financial investment, with alternative perspectives on the evaluation domain coming from both the public and private sectors. As a result of increasing IT/IS budgets and their growing significance within the development of an organizational infrastructure, the evaluation and performance measurement of new technology remains a perennial issue for management. This book offers a refreshing and updated insight into the social fabric and technical dimensions of IT/IS evaluation, together with insights into the approaches used to measure the impact of information systems on their stakeholders. In doing so, it describes the portfolio of appraisal techniques that support the justification of IT/IS investments. Evaluating Information Systems explores the concept of evaluation as an evolutionary and dynamic process that takes into account the ability of enterprise technologies to integrate information systems within and between organisations, particularly when set against a backdrop of organisational learning. It examines the changing portfolio of benefits, costs, and risks associated with the adoption and diffusion of technology in today's global marketplace. Finally, approaches to impact assessment through performance management and benchmarking are discussed.

Evaluating IT Projects (Routledge Focus on Business and Management)

by Eriona Shtëmbari

Project management disciplines have been a part of IT for many years. Why, then, are so many challenges still directly associated with how a project is managed? Many projects fail for a myriad of reasons; most, however, stem from poor or inadequate project evaluation and performance appraisal, while improved project planning and direction is considered one of the key factors in IT project success. Eriona Shtëmbari arranges evaluation methods and techniques into three groups: managerial, financial, and developmental. This book explores the process of project evaluation and the purposes of evaluation, given its strong relationship to the success of the project. It examines IT project evaluation; identifies methods and techniques to be used throughout the project life cycle; examines the benefits of project evaluation; and proposes a systematic approach/framework of project evaluation to serve as a tool for successful project management. Shtëmbari analyses the most up-to-date research relating to the process and methods/techniques of project evaluation throughout the project life cycle. From the systematic literature review, she identifies the most usable methods and techniques in project evaluation and focuses on the adequacy of these methods and techniques in the service sector. The theoretical underpinning of the book serves as a base from which to interpret the interviews in the case study and to build a theory as to how the project evaluation context relates to the proposed scientific theory. The findings in this book provide solutions for practitioners to help them boost the evaluation framework and consequently improve their IT project management.

Evaluating Natural Language Processing Systems: An Analysis and Review (Lecture Notes in Computer Science #1083)

by Karen Sparck Jones and Julia R. Galliers

This comprehensive state-of-the-art book is the first devoted to the important and timely issue of evaluating NLP systems. It addresses the whole area of NLP system evaluation, including aims and scope, problems and methodology. The authors provide a wide-ranging and careful analysis of evaluation concepts, reinforced with extensive illustrations; they relate systems to their environments and develop a framework for proper evaluation. The discussion of principles is completed by a detailed review of practice and strategies in the field, covering both systems for specific tasks, like translation, and core language processors. The methodology lessons drawn from the analysis and review are applied in a series of example cases. A comprehensive bibliography, a subject index, and a term glossary are included.

Evaluating Online Teaching: Implementing Best Practices

by Thomas J. Tobin, B. Jean Mandernach, and Ann H. Taylor

Create a more effective system for evaluating online faculty. Evaluating Online Teaching is the first comprehensive book to outline strategies for effectively measuring the quality of online teaching, providing the tools and guidance that faculty members and administrators need. The authors address challenges that colleges and universities face in creating effective online teacher evaluations, including organizational structure, institutional governance, faculty and administrator attitudes, and possible budget constraints. Through the integration of case studies and theory, the text provides practical solutions geared to address challenges and foster effective, efficient evaluations of online teaching. Readers gain access to rubrics, forms, and worksheets that they can customize to fit the needs of their unique institutions. Evaluation methods designed for face-to-face classrooms, from student surveys to administrative observations, are often applied to the online teaching environment, leaving reviewers and instructors with an ill-fitted and incomplete analysis. Evaluating Online Teaching shows how strategies for evaluating online teaching differ from those used in traditional classrooms and vary as a function of the nature, purpose, and focus of the evaluation. This book guides faculty members and administrators in crafting an evaluation process specifically suited to online teaching and learning, for more accurate feedback and better results. Readers will learn how to evaluate online teaching performance, examine best practices for student ratings of online teaching, discover methods and tools for gathering informal feedback, and understand the online teaching evaluation life cycle. The book concludes with an examination of strategies for fostering change across campus, as well as structures for creating a climate of assessment that includes online teaching as a component. Evaluating Online Teaching helps institutions rethink the evaluation process for online teaching, with the end goal of improving teaching and learning, student success, and institutional results.

Evaluating Participatory Mapping Software

by Charla M. Burnett

This volume provides a framework for evaluating geospatial software for participatory mapping. The evaluation is based on ten key indicators: ethics, cost, technical level, inclusiveness, data accuracy, data privacy, analytical capacity, visualization capacity, openness, and accessibility (i.e., mobile-friendly or offline capabilities). Each application is evaluated by a user and cross-analyzed with specific case studies of the software's real-world application. The framework does not exclude the assessment of volunteered geographic information (VGI) applications, as a form of participatory mapping, in circumstances where their use is spearheaded by underrepresented groups with the intent to empower and spark political or behavioral change within formal and informal institutions. Each chapter follows a strict template to ensure that the information within the volume can be updated periodically to match the ever-changing technological environment. The book covers ten different mapping applications, with the goal of creating a comparative evaluation framework that can be easily interpreted by convening institutions and novice users. This will also help identify gaps in software for participatory mapping, informing future application development and updates to current geospatial software.

Evaluating Systems for Multilingual and Multimodal Information Access: 9th Workshop of the Cross-Language Evaluation Forum, CLEF 2008, Aarhus, Denmark, September 17-19, 2008, Revised Selected Papers (Lecture Notes in Computer Science #5706)

by Thomas Deselaers, Nicola Ferro, Julio Gonzalo, Mikko Kurimo, Thomas Mandl, and Vivien Petras

The ninth campaign of the Cross-Language Evaluation Forum (CLEF) for European languages was held from January to September 2008. There were seven main evaluation tracks in CLEF 2008 plus two pilot tasks. The aim, as usual, was to test the performance of a wide range of multilingual information access (MLIA) systems or system components. This year, 100 groups, mainly but not only from academia, participated in the campaign. Most of the groups were from Europe but there was also a good contingent from North America and Asia plus a few participants from South America and Africa. Full details regarding the design of the tracks, the methodologies used for evaluation, and the results obtained by the participants can be found in the different sections of these proceedings. The results of the CLEF 2008 campaign were presented at a two-and-a-half day workshop held in Aarhus, Denmark, September 17–19, and attended by 150 researchers and system developers. The annual workshop, held in conjunction with the European Conference on Digital Libraries, plays an important role by providing the opportunity for all the groups that have participated in the evaluation campaign to get together, comparing approaches and exchanging ideas. The schedule of the workshop was divided between plenary track overviews and parallel, poster, and breakout sessions presenting this year's experiments and discussing ideas for the future. There were several invited talks.

Evaluating User Experience in Games: Concepts and Methods (Human–Computer Interaction Series)

by Regina Bernhaupt

It was a pleasure to provide an introduction to a new volume on user experience evaluation in games. The scope, depth, and diversity of the work here is amazing. It attests to the growing popularity of games and the increasing importance of developing a range of theories, methods, and scales to evaluate them. This evolution is driven by the cost and complexity of games being developed today. It is also driven by the need to broaden the appeal of games. Many of the approaches described here are enabled by new tools and techniques. This book (along with a few others) represents a watershed in game evaluation and understanding. The field of game evaluation has truly “come of age”. The broader field of HCI can begin to look toward game evaluation for fresh, critical, and sophisticated thinking about design evaluation and product development. It can also look to games for groundbreaking case studies of evaluation of products. I’ll briefly summarize each chapter below and provide some commentary; in conclusion, I will mention a few common themes and offer some challenges. In Chapter 1, User Experience Evaluation in Entertainment, Bernhaupt gives an overview and presents a general framework on methods currently used for user experience evaluation. The methods presented in the following chapters are summarized, allowing the reader to quickly assess the right set of methods that will help to evaluate the game under development.

Evaluating Voting Systems with Probability Models: Essays by and in Honor of William Gehrlein and Dominique Lepelley (Studies in Choice and Welfare)

by Mostapha Diss and Vincent Merlin

This book includes up-to-date contributions in the broadly defined area of probabilistic analysis of voting rules and decision mechanisms. Featuring papers from all fields of social choice and game theory, it presents probability arguments to allow readers to gain a better understanding of the properties of decision rules and of the functioning of modern democracies. In particular, it focuses on the legacy of William Gehrlein and Dominique Lepelley, two prominent scholars who have made important contributions to this field over the last fifty years. It covers a range of topics, including (but not limited to) computational and technical aspects of probability approaches, evaluation of the likelihood of voting paradoxes, power indices, empirical evaluations of voting rules, models of voters’ behavior, and strategic voting. The book gathers articles written in honor of Gehrlein and Lepelley along with original works written by the two scholars themselves.

Evaluation and Assessment in Educational Information Technology

by D. Lamont Johnson, Cleborne D. Maddux, Leping Liu, and Norma Henderson

Choose the right hardware and software for your school! This unique book is the first systematic work on evaluating and assessing educational information technology. Here you'll find specific strategies, best practices, and techniques to help you choose the educational technology that is most appropriate for your institution. Evaluation and Assessment in Educational Information Technology will show you how to measure the effects of information technology on teaching and learning, help you determine the extent of technological integration into the curriculum that is best for your school, and point you toward the most effective ways to teach students and faculty to use new technology. Evaluation and Assessment in Educational Information Technology presents: a summary of the last ten years of assessment instrument development; seven well-validated instruments that gauge attitudes, beliefs, skills, competencies, and technology integration proficiencies; two content analysis instruments for analyzing teacher-student interaction patterns in a distance learning setting; an examination of the best uses of computerized testing, as opposed to conventional tests, as used in local settings, to meet daily instructional needs, in online delivery programs, in public domain software, and in available commercial and shareware options; successful pedagogical and assessment strategies for use in online settings; a four-dimensional model to assess student learning in instructional technology courses; three models for assessing the significance of information technology in education from a teacher's perspective; an incisive look at Michigan's newly formed Consortium of Outstanding Achievement in Teaching with Technology (COATT); ways to use electronic portfolios for teaching/learning performance assessment; and much more!

An Evaluation Framework for Multimodal Interaction: Determining Quality Aspects and Modality Choice (T-Labs Series in Telecommunication Services)

by Ina Wechsung

This book presents (1) an exhaustive and empirically validated taxonomy of quality aspects of multimodal interaction as well as respective measurement methods, (2) a validated questionnaire specifically tailored to the evaluation of multimodal systems and covering most of the taxonomy's quality aspects, (3) insights on how the quality perceptions of multimodal systems relate to the quality perceptions of its individual components, (4) a set of empirically tested factors which influence modality choice, and (5) models regarding the relationship of the perceived quality of a modality and the actual usage of a modality.

Evaluation in the Crowd. Crowdsourcing and Human-Centered Experiments: Dagstuhl Seminar 15481, Dagstuhl Castle, Germany, November 22 – 27, 2015, Revised Contributions (Lecture Notes in Computer Science #10264)

by Daniel Archambault, Helen Purchase, and Tobias Hoßfeld

As the outcome of the Dagstuhl Seminar 15481 on Crowdsourcing and Human-Centered Experiments, this book is a primer for computer science researchers who intend to use crowdsourcing technology for human-centered experiments. The focus of this Dagstuhl seminar, held in Dagstuhl Castle in November 2015, was to discuss experiences and methodological considerations when using crowdsourcing platforms to run human-centered experiments that test the effectiveness of visual representations. The inspiring Dagstuhl atmosphere fostered discussions and brought together researchers from different research directions. The papers provide information on crowdsourcing technology and experimental methodologies; comparisons between crowdsourcing and lab experiments; the use of crowdsourcing for visualisation, psychology, QoE, and HCI empirical studies; and finally the nature of crowdworkers and their work, their motivation and demographic background, and the relationships among the people forming the crowdsourcing community.

Evaluation Methods in Biomedical and Health Informatics (Health Informatics)

by Joan S. Ash, Charles P. Friedman, and Jeremy C. Wyatt

Heavily updated and revised from the successful first edition, this book appeals to a wide range of informatics professionals, from students to on-site medical information system administrators. It includes case studies and real-world system evaluations, provides references and self-tests for feedback and motivation after each chapter, and offers precise definitions and use of terms. Well suited to teaching, it is recommended for courses offered at universities such as Columbia University.

Evaluation of Cross-Language Information Retrieval Systems: Second Workshop of the Cross-Language Evaluation Forum, CLEF 2001, Darmstadt, Germany, September 3-4, 2001. Revised Papers (Lecture Notes in Computer Science #2406)

by Martin Braschler, Julio Gonzalo, and Michael Kluck

The second evaluation campaign of the Cross-Language Evaluation Forum (CLEF) for European languages was held from January to September 2001. This campaign proved a great success, and showed an increase in participation of around 70% compared with CLEF 2000. It culminated in a two-day workshop in Darmstadt, Germany, 3–4 September, in conjunction with the 5th European Conference on Digital Libraries (ECDL 2001). On the first day of the workshop, the results of the CLEF 2001 evaluation campaign were reported and discussed in paper and poster sessions. The second day focused on the current needs of cross-language systems and how evaluation campaigns in the future can best be designed to stimulate progress. The workshop was attended by nearly 50 researchers and system developers from both academia and industry. It provided an important opportunity for researchers working in the same area to get together and exchange ideas and experiences. Copies of all the presentations are available on the CLEF web site at http://www.clef-campaign.org. This volume contains thoroughly revised and expanded versions of the papers presented at the workshop and provides an exhaustive record of the CLEF 2001 campaign. CLEF 2001 was conducted as an activity of the DELOS Network of Excellence for Digital Libraries, funded by the EC Information Society Technologies program to further research in digital library technologies. The activity was organized in collaboration with the US National Institute of Standards and Technology (NIST).

Evaluation of Electronic Voting: Requirements and Evaluation Procedures to Support Responsible Election Authorities (Lecture Notes in Business Information Processing #30)

by Melanie Volkamer

Electronic voting has a young and attractive history, both in the design of basic cryptographic methods and protocols and in the application by communities who are in the vanguard of technologies. The crucial aspect of security for electronic voting systems is subject to research by computer scientists as well as by legal, social and political scientists. The essential question is how to provide a trustworthy base for secure electronic voting, and hence how to prevent accidental or malicious abuse of electronic voting in elections. To address this problem, Volkamer structured her work into four parts: "Fundamentals" provides an introduction to the relevant issues of electronic voting. "Requirements" contributes a standardized, consistent, and exhaustive list of requirements for e-voting systems. "Evaluation" presents the proposal and discussion of a standardized evaluation methodology and certification procedure called a core Protection Profile. Finally, "Application" describes the evaluation of two available remote electronic voting systems according to the core Protection Profile. The results presented are based on theoretical considerations as well as on practical experience. In accordance with the German Society of Computer Scientists, Volkamer succeeded in specifying a "Protection Profile for a Basic Set of Security Requirements for Online Voting Products," which has been certified by the German Federal Office for Security in Information Technology. Her book is of interest not only to developers of security-critical systems, but also to lawyers, security officers, and politicians involved in the introduction or certification of electronic voting systems.

Evaluation of Multilingual and Multi-modal Information Retrieval: 7th Workshop of the Cross-Language Evaluation Forum, CLEF 2006, Alicante, Spain, September 20-22, 2006, Revised Selected Papers (Lecture Notes in Computer Science #4730)

by Paul Clough, Fredric C. Gey, Jussi Karlgren, Bernardo Magnini, Douglas W. Oard, Maarten De Rijke, and Maximilian Stempfhuber

This book constitutes the thoroughly refereed postproceedings of the 7th Workshop of the Cross-Language Evaluation Forum, CLEF 2006, held in Alicante, Spain, September 2006. The revised papers presented together with an introduction were carefully reviewed and selected for inclusion in the book. The papers are organized in topical sections on Multilingual Textual Document Retrieval, Domain-Specific Information Retrieval, i-CLEF, QA@CLEF, ImageCLEF, CLSR, WebCLEF and GeoCLEF.
