Browse Results

Showing 8,701 through 8,725 of 82,893 results

Automatic Methods for the Refinement of System Models: From the Specification to the Implementation (SpringerBriefs in Electrical and Computer Engineering)

by Julia Seiter, Robert Wille, Rolf Drechsler

This book provides a comprehensive overview of automatic model refinement, which helps readers close the gap between an initial textual specification and its desired implementation. The authors enable readers to follow two “directions” of refinement: vertical refinement, which adds detail and precision to a single description of a given model, and horizontal refinement, which considers several views on one level of abstraction, refining the system specification with dedicated descriptions of structure or behavior. The discussion includes several methods which support designers of electronic systems in this refinement process, including verification methods to check automatically whether a refinement has been conducted as intended.

Automatic Modulation Recognition of Communication Signals

by Elsayed Azzouz, A.K. Nandi

Automatic modulation recognition is a rapidly evolving area of signal analysis. In recent years, interest from academic and military research institutes has focused on the research and development of modulation recognition algorithms. Any communication intelligence (COMINT) system comprises three main blocks: the receiver front-end, the modulation recogniser and the output stage. Considerable work has been done in the area of receiver front-ends. The work at the output stage is concerned with information extraction, recording and exploitation, and begins with signal demodulation, which requires accurate knowledge of the signal's modulation type. There are two main reasons for knowing the current modulation type of a signal: to preserve the signal's information content and to decide upon a suitable counteraction, such as jamming. Automatic Modulation Recognition of Communication Signals describes this modulation recognition process in depth. Drawing on several years of research, the authors provide a critical review of automatic modulation recognition, including techniques for recognising digitally modulated signals. The book also gives a comprehensive treatment of the use of artificial neural networks for recognising modulation types. Automatic Modulation Recognition of Communication Signals is the first comprehensive book on automatic modulation recognition. It is essential reading for researchers and practising engineers in the field, and a valuable text for an advanced course on the subject.
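
Feature-based recognisers in this line of work typically derive statistics from the instantaneous amplitude, phase and frequency of the received signal. As a hedged illustration only, not a reproduction of the authors' full algorithm, the sketch below computes one classic feature of this kind, the maximum spectral power density of the normalised, centred instantaneous amplitude, assuming NumPy and SciPy are available:

```python
import numpy as np
from scipy.signal import hilbert

def gamma_max(signal):
    """Maximum of the spectral power density of the normalised,
    centred instantaneous amplitude: a feature commonly used to
    separate amplitude-modulated from constant-envelope signals."""
    analytic = hilbert(signal)            # analytic signal
    amplitude = np.abs(analytic)          # instantaneous amplitude
    a_n = amplitude / amplitude.mean()    # normalise
    a_cn = a_n - 1.0                      # centre around zero
    spectrum = np.abs(np.fft.fft(a_cn)) ** 2
    return spectrum.max() / len(signal)
```

A classifier, whether a decision tree or a neural network, would then combine thresholds on features like this one to decide the modulation type.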

Automatic Nonuniform Random Variate Generation (Statistics and Computing)

by Wolfgang Hörmann, Josef Leydold, Gerhard Derflinger

The recent concept of universal (also called automatic or black-box) random variate generation can only be found dispersed in the literature. Being unique in its overall organization, the book covers not only the mathematical and statistical theory but also deals with the implementation of such methods. All algorithms introduced in the book are designed for practical use in simulation and have been coded and made available by the authors. Examples of possible applications of the presented algorithms (including option pricing, VaR and Bayesian statistics) are presented at the end of the book.
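
As a flavour of what "black-box" generation means, the sketch below shows the simplest universal method, rejection sampling: it draws from an arbitrary bounded density on an interval while using the density only as a callable. This is a minimal illustration, not one of the book's optimised algorithms:

```python
import random

def rejection_sample(density, lo, hi, bound):
    """Draw one variate from an arbitrary density on [lo, hi],
    given only an upper bound on the density (black-box use)."""
    while True:
        x = random.uniform(lo, hi)              # propose uniformly
        if random.uniform(0.0, bound) <= density(x):
            return x                            # accept

# Example: a triangular density f(x) = 2x on [0, 1], bounded by 2.
sample = rejection_sample(lambda x: 2.0 * x, 0.0, 1.0, 2.0)
```

The appeal of such universal methods, as the book develops at much greater depth, is that the same routine works for any density the user supplies, at the cost of some rejected proposals.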

Automatic Parallelization: New Approaches to Code Generation, Data Distribution, and Performance Prediction

by Christoph W. Kessler

Distributed-memory multiprocessing systems (DMS), such as Intel's hypercubes, the Paragon, Thinking Machines' CM-5, and the Meiko Computing Surface, have rapidly gained user acceptance and promise to deliver the computing power required to solve the grand challenge problems of Science and Engineering. These machines are relatively inexpensive to build, and are potentially scalable to large numbers of processors. However, they are difficult to program: the non-uniformity of the memory, which makes local accesses much faster than the transfer of non-local data via message-passing operations, implies that the locality of algorithms must be exploited in order to achieve acceptable performance. The management of data, with the twin goals of both spreading the computational workload and minimizing the delays caused when a processor has to wait for non-local data, becomes of paramount importance. When a code is parallelized by hand, the programmer must distribute the program's work and data to the processors which will execute it. One common approach makes use of the regularity of most numerical computations. This is the so-called Single Program Multiple Data (SPMD) or data parallel model of computation. With this method, the data arrays in the original program are each distributed to the processors, establishing an ownership relation, and computations defining a data item are performed by the processors owning the data.
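
A minimal sketch of this owner-computes idea, emulated serially in Python with NumPy (in a real DMS each rank would run concurrently and exchange non-local data via message passing):

```python
import numpy as np

def owned_range(n, p, rank):
    """Block distribution: the contiguous chunk of n array
    elements owned by processor `rank` out of `p` processors."""
    chunk = (n + p - 1) // p
    return rank * chunk, min((rank + 1) * chunk, n)

n, p = 16, 4
a = np.arange(n, dtype=float)
b = np.empty(n)
for rank in range(p):              # in SPMD these iterations run concurrently
    lo, hi = owned_range(n, p, rank)
    b[lo:hi] = 2.0 * a[lo:hi]      # owner computes its elements; no communication
```
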

Automatic Parallelization: An Overview of Fundamental Compiler Techniques (Synthesis Lectures on Computer Architecture)

by Samuel Midkiff

Compiling for parallelism is a longstanding topic of compiler research. This book describes the fundamental principles of compiling "regular" numerical programs for parallelism. We begin with an explanation of analyses that allow a compiler to understand the interaction of data reads and writes in different statements and loop iterations during program execution. These analyses include dependence analysis, use-def analysis and pointer analysis. Next, we describe how the results of these analyses are used to enable transformations that make loops more amenable to parallelization, and discuss transformations that expose parallelism to target shared memory multicore and vector processors. We then discuss some problems that arise when parallelizing programs for execution on distributed memory machines. Finally, we conclude with an overview of solving Diophantine equations and suggestions for further reading on the topics of this book to enable the interested reader to delve deeper into the field. Table of Contents: Introduction and overview / Dependence analysis, dependence graphs and alias analysis / Program parallelization / Transformations to modify and eliminate dependences / Transformation of iterative and recursive constructs / Compiling for distributed memory machines / Solving Diophantine equations / A guide to further reading
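
To make the Diophantine connection concrete: two subscripted references A[a*i + c1] and A[b*j + c2] can touch the same element only if the linear Diophantine equation a*i - b*j = c2 - c1 has an integer solution, and the classic GCD test checks a necessary condition for that. A minimal sketch, independent of this book's notation:

```python
from math import gcd

def gcd_test(a, b, c):
    """GCD dependence test: a*i - b*j = c has integer solutions
    if and only if gcd(a, b) divides c. If it does not, the two
    array references are provably independent."""
    return c % gcd(a, b) == 0

# A[2*i] is written and A[2*j + 1] is read: gcd(2, 2) = 2 does not
# divide 1, so no dependence exists and the loop can be parallelized.
print(gcd_test(2, 2, 1))   # False -> independent
```
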

Automatic Performance Prediction of Parallel Programs

by Thomas Fahringer

Automatic Performance Prediction of Parallel Programs presents a unified approach to the problem of automatically estimating the performance of parallel computer programs. The author focuses primarily on distributed memory multiprocessor systems, although large portions of the analysis can be applied to shared memory architectures as well. The author introduces a novel and very practical approach for predicting some of the most important performance parameters of parallel programs, including work distribution, number of transfers, amount of data transferred, network contention, transfer time, computation time and number of cache misses. This approach is based on advanced compiler analysis that carefully examines loop iteration spaces, procedure calls, array subscript expressions, communication patterns, data distributions and optimizing code transformations at the program level; and the most important machine-specific parameters including cache characteristics, communication network indices, and benchmark data for computational operations at the machine level. The material has been fully implemented as part of P3T, which is an integrated automatic performance estimator of the Vienna Fortran Compilation System (VFCS), a state-of-the-art parallelizing compiler for Fortran77, Vienna Fortran and a subset of High Performance Fortran (HPF) programs. A large number of experiments using realistic HPF and Vienna Fortran code examples demonstrate highly accurate performance estimates, and the ability of the described performance prediction approach to successfully guide both programmer and compiler in parallelizing and optimizing parallel programs. A graphical user interface is described that visualizes each program source line together with the corresponding parameter values. P3T uses color-coded performance visualization to immediately identify hot spots in the parallel program. Performance data can be filtered and displayed at various levels of detail. Colors displayed by the graphical user interface are visualized in greyscale. Automatic Performance Prediction of Parallel Programs also includes coverage of fundamental problems of automatic parallelization for distributed memory multicomputers, a description of the basic parallelization strategy and a large variety of optimizing code transformations as included under VFCS.
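
P3T's actual models are far richer, but the flavour of such parameter-based prediction can be shown with a generic linear communication cost model; the latency and bandwidth figures below are illustrative assumptions, not benchmark data from the book:

```python
def transfer_time(num_transfers, bytes_per_transfer,
                  latency_s, bandwidth_bytes_per_s):
    """Generic linear cost model: each message pays a fixed startup
    latency plus a size-dependent transmission term."""
    per_message = latency_s + bytes_per_transfer / bandwidth_bytes_per_s
    return num_transfers * per_message

# 1,000 messages of 8 KB over a 1 GB/s link with 5 microsecond startup:
print(transfer_time(1_000, 8_192, 5e-6, 1e9))   # ~0.013 seconds
```
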

Automatic Processing of Natural-Language Electronic Texts with NooJ: 10th International Conference, NooJ 2016, České Budějovice, Czech Republic, June 9-11, 2016, Revised Selected Papers (Communications in Computer and Information Science #667)

by Linda Barone, Mario Monteleone, Max Silberztein

This book constitutes the refereed proceedings of the 10th International Conference, NooJ 2016, held in České Budějovice, Czech Republic, in June 2016. The 21 revised full papers presented in this volume were carefully reviewed and selected from 45 submissions. NooJ is a linguistic development environment that provides tools for linguists to construct linguistic resources that formalise a large gamut of linguistic phenomena: typography, orthography, lexicons for simple words, multiword units and discontinuous expressions, inflectional and derivational morphology, local, structural and transformational syntax, and semantics.

Automatic Processing of Natural-Language Electronic Texts with NooJ: 9th International Conference, NooJ 2015, Minsk, Belarus, June 11-13, 2015, Revised Selected Papers (Communications in Computer and Information Science #607)

by Tatsiana Okrut, Yuras Hetsevich, Max Silberztein, Hanna Stanislavenka

This book constitutes the refereed proceedings of the 9th International Conference, NooJ 2015, held in Minsk, Belarus, in June 2015. NooJ 2015 received 51 submissions. The 20 revised full papers presented in this volume were carefully reviewed and selected from the 35 papers that were presented at the conference. The papers are organized in topical sections on corpora, vocabulary and morphology; syntax and semantics; and applications.

Automatic Program Development: A Tribute to Robert Paige

by Olivier Danvy, Fritz Henglein, Harry Mairson, Alberto Pettorossi

This work, a tribute to renowned researcher Robert Paige, is a collection of revised papers published in his honor in the journal Higher-Order and Symbolic Computation in 2003 and 2005. Among them are two key papers: a retrospective view of his research lines, and a proposal for future studies in the area of automatic program derivation. The book also includes some papers by members of the IFIP Working Group 2.1, of which Bob was an active member.

Automatic Programming Applied to VLSI CAD Software: A Case Study (The Springer International Series in Engineering and Computer Science #101)

by Dorothy E. Setliff, Rob A. Rutenbar

This book, and the research it describes, resulted from a simple observation we made sometime in 1986. Put simply, we noticed that many VLSI design tools looked "alike". That is, at least at the overall software architecture level, the algorithms and data structures required to solve problem X looked much like those required to solve problem X'. Unfortunately, this resemblance is often of little help in actually writing the software for problem X' given the software for problem X. In the VLSI CAD world, technology changes rapidly enough that design software must continually strive to keep up. And of course, VLSI design software, and engineering design software in general, is often exquisitely sensitive to some aspects of the domain (technology) in which it operates. Modest changes in functionality have an unfortunate tendency to require substantial (and time-consuming) internal software modifications. Now, observing that large engineering software systems are technology-dependent is not particularly clever. However, we believe that our approach to dealing with this problem took an interesting new direction. We chose to investigate the extent to which automatic programming ideas could be used to synthesize such software systems from high-level specifications. This book is one of the results of that effort.

Automatic Quantum Computer Programming: A Genetic Programming Approach (Genetic Programming #7)

by Lee Spector

Automatic Quantum Computer Programming provides an introduction to quantum computing for non-physicists, as well as an introduction to genetic programming for non-computer-scientists. The book explores several ways in which genetic programming can support automatic quantum computer programming and presents detailed descriptions of specific techniques, along with several examples of their human-competitive performance on specific problems. Source code for the author’s QGAME quantum computer simulator is included as an appendix, and pointers to additional online resources furnish the reader with an array of tools for automatic quantum computer programming.

Automatic Quantum Computer Programming: A Genetic Programming Approach (Genetic Programming #7)

by Lee Spector

This is a book about the frontiers of computer science that have recently been opened by work in quantum mechanics, but it is also a book about the use of recently developed automatic programming technologies to explore those frontiers. The automatic programming technologies themselves issue from another interdisciplinary frontier of computer science — one born of the intersection of computer science with evolutionary biology. So this is a book about two frontiers of computer science, one being used primarily for the sake of exploring the other. The selection of topics in this book was made with the intention of showing how genetic programming can be usefully applied to certain problems in quantum computing. To this end, it provides a basic introduction to quantum computing for non-physicists and it also provides a basic introduction to genetic programming for non-computer-scientists. These treatments should be comprehensible to scientifically literate readers who have, at minimum, a passing familiarity with undergraduate-level computer science (e.g. programming concepts) and mathematics (e.g. simple linear algebra). No background in physics is assumed.

Automatic Re-engineering of Software Using Genetic Programming (Genetic Programming #2)

by Conor Ryan

Automatic Re-engineering of Software Using Genetic Programming describes the application of Genetic Programming to a real world application area - software re-engineering in general and automatic parallelization specifically. Unlike most uses of Genetic Programming, this book evolves sequences of provable transformations rather than actual programs. It demonstrates that the benefits of this approach are twofold: first, the time required for evaluating a population is drastically reduced, and second, the transformations can subsequently be used to prove that the new program is functionally equivalent to the original. Automatic Re-engineering of Software Using Genetic Programming shows that there are applications where it is more practical to use GP to assist with software engineering rather than to entirely replace it. It also demonstrates how the author isolated aspects of a problem that were particularly suited to GP, and used traditional software engineering techniques in those areas for which they were adequate. Automatic Re-engineering of Software Using Genetic Programming is an excellent resource for researchers in this exciting new field.

Automatic SIMD Vectorization of SSA-based Control Flow Graphs

by Ralf Karrenberg

Ralf Karrenberg presents Whole-Function Vectorization (WFV), an approach that allows a compiler to automatically create code that exploits data-parallelism using SIMD instructions. Data-parallel applications such as particle simulations, stock option price estimation or video decoding require the same computations to be performed on huge amounts of data. Without WFV, one processor core executes a single instance of a data-parallel function. WFV transforms the function to execute multiple instances at once using SIMD instructions. The author describes an advanced WFV algorithm that includes a variety of analyses and code generation techniques. He shows that this approach improves the performance of the generated code in a variety of use cases.
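
A rough sketch of the effect of WFV, with NumPy arrays standing in for SIMD registers (the payoff function is a toy example, not one of the book's benchmarks): a scalar function computes one instance per call, while its vectorized counterpart processes W instances at once.

```python
import numpy as np

def price_scalar(s, k):
    """One instance of a data-parallel kernel (toy option payoff)."""
    return max(s - k, 0.0)

def price_vectorized(s_lanes, k):
    """The same function after a WFV-style transformation: W inputs
    are processed together, and control flow such as max() becomes
    a lane-wise select (NumPy stands in for SIMD instructions)."""
    return np.maximum(s_lanes - k, 0.0)

lanes = np.array([90.0, 100.0, 110.0, 120.0])    # W = 4 instances
print([price_scalar(s, 100.0) for s in lanes])   # scalar loop, 4 calls
print(price_vectorized(lanes, 100.0))            # one SIMD-style call
```
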

Automatic Speech Recognition: The Development of the SPHINX System (The Springer International Series in Engineering and Computer Science #62)

by Kai-Fu Lee

Speech Recognition has a long history of being one of the difficult problems in Artificial Intelligence and Computer Science. As one goes from problem-solving tasks such as puzzles and chess to perceptual tasks such as speech and vision, the problem characteristics change dramatically: knowledge-poor to knowledge-rich; low data rates to high data rates; slow response time (minutes to hours) to instantaneous response time. These characteristics taken together increase the computational complexity of the problem by several orders of magnitude. Further, speech provides a challenging task domain which embodies many of the requirements of intelligent behavior: operate in real time; exploit vast amounts of knowledge; tolerate errorful, unexpected, unknown input; use symbols and abstractions; communicate in natural language; and learn from the environment. Voice input to computers offers a number of advantages. It provides a natural, fast, hands-free, eyes-free, location-free input medium. However, there are many as yet unsolved problems that prevent routine use of speech as an input device by non-experts. These include cost, real-time response, speaker independence, robustness to variations such as noise, microphone, speech rate and loudness, and the ability to handle non-grammatical speech. Satisfactory solutions to each of these problems can be expected within the next decade. Recognition of unrestricted spontaneous continuous speech appears unsolvable at present. However, by the addition of simple constraints, such as clarification dialog to resolve ambiguity, we believe it will be possible to develop systems capable of accepting very large vocabulary continuous speech dictation.

Automatic Speech Recognition: A Deep Learning Approach (Signals and Communication Technology)

by Dong Yu, Li Deng

This book provides a comprehensive overview of the recent advancement in the field of automatic speech recognition with a focus on deep learning models including deep neural networks and many of their variants. This is the first automatic speech recognition book dedicated to the deep learning approach. In addition to the rigorous mathematical treatment of the subject, the book also presents insights and theoretical foundation of a series of highly successful deep learning models.

Automatic Speech Recognition and Translation for Low Resource Languages

by L. Ashok Kumar, D. Karthika Renuka, Bharathi Raja Chakravarthi, Thomas Mandl

This book is a comprehensive exploration of the cutting-edge research, methodologies, and advancements in addressing the unique challenges associated with ASR and translation for low-resource languages. Automatic Speech Recognition and Translation for Low Resource Languages contains groundbreaking research from experts and researchers sharing innovative solutions that address language challenges in low-resource environments. The book begins by delving into the fundamental concepts of ASR and translation, providing readers with a solid foundation for understanding the subsequent chapters. It then explores the intricacies of low-resource languages, analyzing the factors that contribute to their challenges and the significance of developing tailored solutions to overcome them. The chapters cover a wide range of topics, spanning both the theoretical and practical aspects of ASR and translation for low-resource languages. The book discusses data augmentation techniques, transfer learning, and multilingual training approaches that leverage the power of existing linguistic resources to improve accuracy and performance. Additionally, it investigates the possibilities offered by unsupervised and semi-supervised learning, as well as the benefits of active learning and crowdsourcing in enriching the training data. Throughout the book, emphasis is placed on the importance of considering the cultural and linguistic context of low-resource languages, recognizing the unique nuances and intricacies that influence accurate ASR and translation. Furthermore, the book explores the potential impact of these technologies in various domains, such as healthcare, education, and commerce, empowering individuals and communities by breaking down language barriers. Audience: The book targets researchers and professionals in the fields of natural language processing, computational linguistics, and speech technology. It will also be of interest to engineers, linguists, and individuals in industries and organizations working on cross-lingual communication, accessibility, and global connectivity.
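
As one concrete example of the data augmentation techniques such books discuss, the sketch below applies speed perturbation, a widely used recipe for stretching scarce training audio into several training copies; this is a generic illustration, not code from the book:

```python
import numpy as np

def speed_perturb(waveform, factor):
    """Speed perturbation for ASR data augmentation: resample the
    waveform so it plays `factor` times faster, shifting both tempo
    and pitch and yielding an additional training example."""
    n_out = int(len(waveform) / factor)
    old_idx = np.arange(len(waveform))
    new_idx = np.linspace(0, len(waveform) - 1, n_out)
    return np.interp(new_idx, old_idx, waveform)

# A common recipe: train on 0.9x, 1.0x and 1.1x copies of each utterance.
audio = np.random.randn(16_000)                   # 1 s at 16 kHz (dummy)
augmented = [speed_perturb(audio, f) for f in (0.9, 1.0, 1.1)]
```
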

Automatic Speech Recognition of Arabic Phonemes with Neural Networks: A Contrastive Study of Arabic and English (SpringerBriefs in Applied Sciences and Technology)

by Mohammed Dib

This book presents a contrastive linguistics study of Arabic and English for the dual purposes of improved language teaching and speech processing of Arabic via spectral analysis and neural networks. Contrastive linguistics is a field of linguistics which aims to compare the linguistic systems of two or more languages in order to ease the tasks of teaching, learning, and translation. The main focus of the present study is to treat the Arabic minimal syllable automatically to facilitate automatic speech processing in Arabic. It represents important reading for language learners and for linguists with an interest in Arabic and computational approaches.

Automatic Speech Recognition on Mobile Devices and over Communication Networks (Advances in Computer Vision and Pattern Recognition)

by Zheng-Hua Tan, Boerge Lindberg

The advances in computing and networking have sparked an enormous interest in deploying automatic speech recognition on mobile devices and over communication networks. This book brings together academic researchers and industrial practitioners to address the issues in this emerging realm and presents the reader with a comprehensive introduction to the subject of speech recognition in devices and networks. It covers network, distributed and embedded speech recognition systems.

Automatic Syntactic Analysis Based on Selectional Preferences (Studies in Computational Intelligence #765)

by Alexander Gelbukh, Hiram Calvo

This book describes effective methods for automatically analyzing a sentence, based on the syntactic and semantic characteristics of the elements that form it. To tackle ambiguities, the authors use selectional preferences (SP), which measure how well two words fit together semantically in a sentence. Today, many disciplines require automatic text analysis based on the syntactic and semantic characteristics of language and as such several techniques for parsing sentences have been proposed. Which is better? In this book the authors begin with simple heuristics before moving on to more complex methods that identify nouns and verbs and then aggregate modifiers, and lastly discuss methods that can handle complex subordinate and relative clauses. During this process, several ambiguities arise. SP are commonly determined on the basis of the association between a pair of words. However, in many cases, SP depend on more words. For example, something (such as grass) may be edible, depending on who is eating it (a cow?). Moreover, things such as popcorn are usually eaten at the movies, and not in a restaurant. The authors deal with these phenomena from different points of view.
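
The grass/popcorn examples above can be made concrete with the simplest selectional-preference measure, pointwise mutual information over (verb, noun) dependency counts. The counts below are invented purely for illustration:

```python
from math import log2

# Hypothetical (verb, noun) co-occurrence counts from a parsed corpus.
pair_count = {("eat", "grass"): 12, ("eat", "popcorn"): 40,
              ("eat", "movie"): 1}
verb_count = {"eat": 500}
noun_count = {"grass": 300, "popcorn": 120, "movie": 900}
total = 100_000   # total dependency pairs in the corpus

def sp_pmi(verb, noun):
    """Pointwise mutual information as a simple selectional-preference
    score: how much more often the pair co-occurs than chance."""
    p_pair = pair_count.get((verb, noun), 0) / total
    p_v = verb_count[verb] / total
    p_n = noun_count[noun] / total
    return log2(p_pair / (p_v * p_n)) if p_pair else float("-inf")

# "eat popcorn" scores well above "eat movie" under these counts.
print(sp_pmi("eat", "popcorn"), sp_pmi("eat", "movie"))
```

As the blurb notes, pairwise scores like this miss cases where the preference depends on more than two words (grass is edible for a cow), which is exactly the phenomenon the book goes on to address.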

Automatic Text Simplification (Synthesis Lectures on Human Language Technologies)

by Horacio Saggion

Thanks to the availability of texts on the Web in recent years, increased knowledge and information have been made available to broader audiences. However, the way in which a text is written—its vocabulary, its syntax—can be difficult to read and understand for many people, especially those with poor literacy, cognitive or linguistic impairment, or those with limited knowledge of the language of the text. Texts containing uncommon words or long and complicated sentences can be difficult for people to read and understand, as well as difficult for machines to analyze. Automatic text simplification is the process of transforming a text into another text which, ideally conveying the same message, will be easier for a broader audience to read and understand. The process usually involves the replacement of difficult or unknown phrases with simpler equivalents and the transformation of long and syntactically complex sentences into shorter and less complex ones. Automatic text simplification, a research topic that started 20 years ago, has now taken on a central role in natural language processing research, not only because of the interesting challenges it poses but also because of its social implications. This book presents past and current research in text simplification, exploring key issues including automatic readability assessment, lexical simplification, and syntactic simplification. It also provides a detailed account of machine learning techniques currently used in simplification, describes full systems designed for specific languages and target audiences, and offers available resources for research and development together with text simplification evaluation techniques.
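
A minimal sketch of the lexical-simplification step described above, substituting each difficult word with its most frequent (hence presumably simplest) synonym; the synonym table and frequency counts are hypothetical stand-ins for a real thesaurus and corpus:

```python
# Hypothetical resources: a synonym dictionary and corpus word counts.
synonyms = {"utilize": ["use", "employ"], "commence": ["start", "begin"]}
frequency = {"use": 900_000, "employ": 40_000,
             "start": 700_000, "begin": 500_000}

def simplify(sentence):
    """Replace each known difficult word with its most frequent synonym."""
    out = []
    for word in sentence.split():
        options = synonyms.get(word.lower())
        if options:   # pick the most common, i.e. simplest, substitute
            word = max(options, key=lambda w: frequency.get(w, 0))
        out.append(word)
    return " ".join(out)

print(simplify("We utilize the tool before we commence testing"))
# -> "We use the tool before we start testing"
```

Real systems add the steps the book details: deciding which words are actually difficult for the target audience, and checking that the substitute fits the context.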

Automatic Text Summarization (Iste Ser.)

by Juan-Manuel Torres-Moreno

Textual information in the form of digital documents quickly accumulates to create huge amounts of data. The majority of these documents are unstructured: they consist of unrestricted text that has not been organized into traditional databases. Processing documents is therefore a perfunctory task, mostly due to a lack of standards. It has thus become extremely difficult to implement automatic text analysis tasks. Automatic Text Summarization (ATS), by condensing the text while maintaining relevant information, can help to process this ever-increasing, difficult-to-handle mass of information. This book examines the motivations and different algorithms for ATS. The author presents the recent state of the art before describing the main problems of ATS, as well as the difficulties and solutions provided by the community. The book provides recent advances in ATS, as well as current applications and trends. The approaches are statistical, linguistic and symbolic. Several examples are also included in order to clarify the theoretical concepts.
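
As a taste of the statistical approaches the book covers, the sketch below implements the most basic extractive method: score each sentence by the average corpus frequency of its words and keep the top-scoring ones. This is a toy illustration, not one of the book's algorithms:

```python
import re
from collections import Counter

def summarize(text, n_sentences=1):
    """Toy extractive summarizer: score each sentence by the average
    frequency of its words in the document and keep the top-scoring
    sentences, in their original order."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    freq = Counter(re.findall(r"\w+", text.lower()))
    def score(s):
        toks = re.findall(r"\w+", s.lower())
        return sum(freq[t] for t in toks) / max(len(toks), 1)
    top = sorted(sentences, key=score, reverse=True)[:n_sentences]
    return " ".join(s for s in sentences if s in top)

doc = ("Summarization condenses text. Text summarization keeps the "
       "relevant text. Unrelated trivia pads the document.")
print(summarize(doc))   # picks the sentence richest in frequent terms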

Automatic Text Summarization

by Juan-Manuel Torres-Moreno

Textual information in the form of digital documents quickly accumulates to create huge amounts of data. The majority of these documents are unstructured: they consist of unrestricted text that has not been organized into traditional databases. Processing documents is therefore a perfunctory task, mostly due to a lack of standards. It has thus become extremely difficult to implement automatic text analysis tasks. Automatic Text Summarization (ATS), by condensing the text while maintaining relevant information, can help to process this ever-increasing, difficult-to-handle mass of information. This book examines the motivations and different algorithms for ATS. The author presents the recent state of the art before describing the main problems of ATS, as well as the difficulties and solutions provided by the community. The book provides recent advances in ATS, as well as current applications and trends. The approaches are statistical, linguistic and symbolic. Several examples are also included in order to clarify the theoretical concepts.

Automatic Tools for Designing Office Information Systems: The TODOS Approach (Research Reports Esprit #1)

by Barbara Pernici, Colette Rolland

The market for information technology products is rapidly changing from a manufacturer-driven market where new products were determined by the evolution of technology, to a user-driven market where users buy only products corresponding exactly to their needs and where competition is very strong. Confronted with this market situation, hardware and software producers are being obliged to adopt new strategies, and to make a large number of products available on the market in response to a variety of different needs. As a result of the multiplicity of choice available, the design of an office system which corresponds precisely to user needs is becoming an increasingly complex task. With exactly this in mind, the Commission, as early as 1985, invited submissions of projects aiming at the development of such adequate tools in its Call for Proposals for the ESPRIT Programme, in order to assist companies in the design of their office systems. This topic was recognised as being of strategic importance, considering the low level of penetration of Information Technology in European enterprises compared to the United States and Japan. Following this strategy, the project TODOS was selected and launched. This project has successfully developed tools and methods for the definition of the functional specification of the office system, as well as the system architecture and user interface, results which can be of great interest for the IT community at large.
