Browse Results

Showing 21,751 through 21,775 of 82,734 results

Databases, Information Systems, and Peer-to-Peer Computing: Second International Workshop, DBISP2P 2004, Toronto, Canada, August 29-30, 2004, Revised Selected Papers (Lecture Notes in Computer Science #3367)

by Wee Siong Ng, Beng Chin Ooi, Aris Ouksel, Claudio Sartori

Peer-to-peer (P2P) computing promises to offer exciting new possibilities in distributed information processing and database technologies. The realization of this promise lies fundamentally in the availability of enhanced services such as structured ways for classifying and registering shared information, verification and certification of information, content-distributed schemes and quality of content, security features, information discovery and accessibility, interoperation and composition of active information services, and finally market-based mechanisms to allow cooperative and non-cooperative information exchanges. The P2P paradigm lends itself to constructing large-scale, complex, adaptive, autonomous and heterogeneous database and information systems, endowed with clearly specified and differential capabilities to negotiate, bargain, coordinate, and self-organize the information exchanges in large-scale networks. This vision will have a radical impact on the structure of complex organizations (business, scientific, or otherwise), on the emergence and formation of social communities, and on how information is organized and processed. The P2P information paradigm naturally encompasses static and wireless connectivity, and static and mobile architectures. Wireless connectivity combined with increasingly small and powerful mobile devices and sensors poses new challenges to, as well as opportunities for, the database community. Information becomes ubiquitous, highly distributed, and accessible anywhere and at any time over highly dynamic, unstable networks with very severe constraints on information management and processing capabilities.

Databases on Modern Hardware (Synthesis Lectures on Data Management)

by Anastasia Ailamaki, Erietta Liarou, Pınar Tözün, Danica Porobic, Iraklis Psaroudakis

Data management systems enable various influential applications, from high-performance online services (e.g., social networks like Twitter and Facebook or financial markets) to big data analytics (e.g., scientific exploration, sensor networks, business intelligence). As a result, data management systems have been one of the main drivers of innovation in the database and computer architecture communities for several decades. Recent hardware trends require software to take advantage of the abundant parallelism in modern and future hardware. The traditional design of data management systems, however, faces inherent scalability problems due to its tightly coupled components, and it cannot exploit the full capability of the aggressive microarchitectural features of modern processors. As a result, today's most commonly used server types remain largely underutilized, leading to a huge waste of hardware resources and energy. In this book, we shed light on the challenges of running a DBMS on modern multicore hardware. We divide the material into two dimensions of scalability: implicit/vertical and explicit/horizontal. The first part of the book focuses on the vertical dimension: it describes the instruction- and data-level parallelism opportunities in a core, coming from both the hardware and the software side. In addition, it examines the sources of underutilization in a modern processor and presents insights and hardware/software techniques to better exploit the microarchitectural resources of a processor by improving cache locality at the right level of the memory hierarchy. The second part focuses on the horizontal dimension, i.e., scalability bottlenecks of database applications at the level of multicore and multisocket multicore architectures. It first presents a systematic way of eliminating such bottlenecks in online transaction processing workloads, based on minimizing unbounded communication, and shows several techniques that minimize bottlenecks in major components of database management systems. Then it demonstrates the data and work sharing opportunities for analytical workloads, and reviews advanced scheduling mechanisms that are aware of nonuniform memory accesses and alleviate bandwidth saturation.

Databases Theory and Applications: 34th Australasian Database Conference, ADC 2023, Melbourne, VIC, Australia, November 1-3, 2023, Proceedings (Lecture Notes in Computer Science #14386)

by Zhifeng Bao, Renata Borovica-Gajic, Ruihong Qiu, Farhana Choudhury, Zhengyi Yang

This book constitutes the refereed proceedings of the 34th Australasian Database Conference, ADC 2023, held in Melbourne, VIC, Australia, during November 1-3, 2023. The 26 full papers presented in this volume were carefully reviewed and selected from 41 submissions. They are organized in topical sections named: Mining Complex Types of Data; Natural Language Processing and Text Analysis; Machine Learning and Computer Vision; Database Systems and Data Storage; Data Quality and Fairness for Graphs; and Graph Mining and Graph Algorithms.

Databases Theory and Applications: 31st Australasian Database Conference, ADC 2020, Melbourne, VIC, Australia, February 3–7, 2020, Proceedings (Lecture Notes in Computer Science #12008)

by Renata Borovica-Gajic, Jianzhong Qi, Weiqing Wang

This book constitutes the refereed proceedings of the 31st Australasian Database Conference, ADC 2020, held in Melbourne, VIC, Australia, in February 2020. The 14 full and 5 short papers presented were carefully reviewed and selected from 30 submissions. The Australasian Database Conference is an annual international forum for sharing the latest research advancements and novel applications of database systems, data-driven applications and data analytics between researchers and practitioners from around the globe, particularly Australia and New Zealand.

Databases Theory and Applications: 30th Australasian Database Conference, ADC 2019, Sydney, NSW, Australia, January 29 – February 1, 2019, Proceedings (Lecture Notes in Computer Science #11393)

by Lijun Chang, Junhao Gan, Xin Cao

This book constitutes the refereed proceedings of the 30th Australasian Database Conference, ADC 2019, held in Sydney, NSW, Australia, in January/February 2019. The 9 full papers presented together with one demo paper were carefully reviewed and selected from 19 submissions. The Australasian Database Conference is an annual international forum for sharing the latest research advances and novel applications of database systems, data management, data mining and data analytics for researchers and practitioners in these areas from around the world, particularly Australia and New Zealand.

Databases Theory and Applications: 27th Australasian Database Conference, ADC 2016, Sydney, NSW, Australia, September 28-29, 2016, Proceedings (Lecture Notes in Computer Science #9877)

by Muhammad Aamir Cheema, Wenjie Zhang, Lijun Chang

This book constitutes the refereed proceedings of the 27th Australasian Database Conference, ADC 2016, held in Sydney, NSW, Australia, in September 2016. The 33 full papers presented together with 11 demo papers were carefully reviewed and selected from 55 submissions. The mission of ADC is to share novel research solutions to problems of today’s information society that fulfill the needs of heterogeneous applications and environments and to identify new issues and directions for future research. The topics of the presented papers are related to all practical and theoretical aspects of advanced database theory and applications, as well as case studies and implementation experiences.

Databases Theory and Applications: 33rd Australasian Database Conference, ADC 2022, Sydney, NSW, Australia, September 2–4, 2022, Proceedings (Lecture Notes in Computer Science #13459)

by Wen Hua, Hua Wang, Lei Li

This book constitutes the refereed proceedings of the 33rd Australasian Database Conference, ADC 2022, held in Sydney, NSW, Australia, in September 2022. The conference was co-located with the 48th International Conference on Very Large Data Bases, VLDB 2022. The 9 full papers presented together with 8 short papers were carefully reviewed and selected from 36 submissions. ADC focuses on database systems, data-driven applications, and data analytics.

Databases Theory and Applications: 32nd Australasian Database Conference, ADC 2021, Dunedin, New Zealand, January 29 – February 5, 2021, Proceedings (Lecture Notes in Computer Science #12610)

by Miao Qiao, Gottfried Vossen, Sen Wang, Lei Li

This book constitutes the refereed proceedings of the 32nd Australasian Database Conference, ADC 2021, held in Dunedin, New Zealand, in January/February 2021. The 17 full papers presented were carefully reviewed and selected from 21 submissions. The Australasian Database Conference is an annual international forum for sharing the latest research advancements and novel applications of database systems, data-driven applications, and data analytics between researchers and practitioners from around the globe, particularly Australia and New Zealand. ADC aims to share novel research solutions to problems of today's information society that fulfill the needs of heterogeneous applications and environments, and to identify new issues and directions for future research and development work.

Databases Theory and Applications: 26th Australasian Database Conference, ADC 2015, Melbourne, VIC, Australia, June 4-7, 2015. Proceedings (Lecture Notes in Computer Science #9093)

by Mohamed A. Sharaf, Muhammad Aamir Cheema, Jianzhong Qi

This book constitutes the refereed proceedings of the 26th Australasian Database Conference, ADC 2015, held in Melbourne, VIC, Australia, in June 2015. The 24 full papers presented together with 5 demo papers were carefully reviewed and selected from 43 submissions. The Australasian Database Conference is an annual international forum for sharing the latest research advancements and novel applications of database systems, data-driven applications and data analytics between researchers and practitioners from around the globe, particularly Australia and New Zealand. The mission of ADC is to share novel research solutions to problems of today's information society that fulfill the needs of heterogeneous applications and environments and to identify new issues and directions for future research. ADC seeks papers from academia and industry presenting research on all practical and theoretical aspects of advanced database theory and applications, as well as case studies and implementation experiences.

Databases Theory and Applications: 25th Australasian Database Conference, ADC 2014, Brisbane, QLD, Australia, July 14-16, 2014. Proceedings (Lecture Notes in Computer Science #8506)

by Hua Wang, Mohamed A. Sharaf

This book constitutes the refereed proceedings of the 25th Australasian Database Conference, ADC 2014, held in Brisbane, QLD, Australia, in July 2014. The 15 full papers presented together with 6 short papers and 2 keynotes were carefully reviewed and selected from 38 submissions. A large variety of subjects are covered, including hot topics such as data warehousing; database integration; mobile databases; cloud, distributed, and parallel databases; high-dimensional and temporal data; image/video retrieval and databases; database performance and tuning; privacy and security in databases; query processing and optimization; semi-structured data and XML; spatial data processing and management; stream and sensor data management; uncertain and probabilistic databases; web databases; graph databases; web service management; and social media data management.

Databases Theory and Applications: 29th Australasian Database Conference, ADC 2018, Gold Coast, QLD, Australia, May 24-27, 2018, Proceedings (Lecture Notes in Computer Science #10837)

by Junhu Wang, Gao Cong, Jinjun Chen, Jianzhong Qi

This book constitutes the refereed proceedings of the 29th Australasian Database Conference, ADC 2018, held in Gold Coast, QLD, Australia, in May 2018. The 23 full papers plus 6 short papers presented together with 3 demo papers were carefully reviewed and selected from 53 submissions. The Australasian Database Conference is an annual international forum for sharing the latest research advancements and novel applications of database systems, data-driven applications, and data analytics between researchers and practitioners from around the globe, particularly Australia and New Zealand.

Databases Theory and Applications: 28th Australasian Database Conference, ADC 2017, Brisbane, QLD, Australia, September 25–28, 2017, Proceedings (Lecture Notes in Computer Science #10538)

by Zi Huang, Xiaokui Xiao and Xin Cao

This book constitutes the refereed proceedings of the 28th Australasian Database Conference, ADC 2017, held in Brisbane, QLD, Australia, in September 2017. The 20 full papers presented together with 2 demo papers were carefully reviewed and selected from 32 submissions. The mission of ADC is to share novel research solutions to problems of today’s information society that fulfill the needs of heterogeneous applications and environments and to identify new issues and directions for future research and development work. The topics of the presented papers are related to all practical and theoretical aspects of advanced database theory and applications, as well as case studies and implementation experiences.

The Datacenter as a Computer: An Introduction to the Design of Warehouse-Scale Machines, Second Edition (Synthesis Lectures on Computer Architecture)

by Luiz André Barroso, Jimmy Clidaras

As computation continues to move into the cloud, the computing platform of interest no longer resembles a pizza box or a refrigerator, but a warehouse full of computers. These new large datacenters are quite different from traditional hosting facilities of earlier times and cannot be viewed simply as a collection of co-located servers. Large portions of the hardware and software resources in these facilities must work in concert to efficiently deliver good levels of Internet service performance, something that can only be achieved by a holistic approach to their design and deployment. In other words, we must treat the datacenter itself as one massive warehouse-scale computer (WSC). We describe the architecture of WSCs, the main factors influencing their design, operation, and cost structure, and the characteristics of their software base. We hope it will be useful to architects and programmers of today's WSCs, as well as those of future many-core platforms which may one day implement the equivalent of today's WSCs on a single board.

Notes for the Second Edition

After nearly four years of substantial academic and industrial developments in warehouse-scale computing, we are delighted to present our first major update to this lecture. The increased popularity of public clouds has made WSC software techniques relevant to a larger pool of programmers since our first edition. Therefore, we expanded Chapter 2 to reflect our better understanding of WSC software systems and the toolbox of software techniques for WSC programming. In Chapter 3, we added to our coverage of the evolving landscape of wimpy vs. brawny server trade-offs, and we now present an overview of WSC interconnects and storage systems that was promised but lacking in the original edition. Thanks largely to the help of our new co-author, Google Distinguished Engineer Jimmy Clidaras, the material on facility mechanical and power distribution design has been updated and greatly extended (see Chapters 4 and 5). Chapters 6 and 7 have also been revamped significantly. We hope this revised edition continues to meet the needs of educators and professionals in this area.

The Datacenter as a Computer: Designing Warehouse-Scale Machines, Third Edition (Synthesis Lectures on Computer Architecture)

by Luiz André Barroso, Urs Hölzle, Parthasarathy Ranganathan

This book describes warehouse-scale computers (WSCs), the computing platforms that power cloud computing and all the great web services we use every day. It discusses how these new systems treat the datacenter itself as one massive computer designed at warehouse scale, with hardware and software working in concert to deliver good levels of internet service performance. The book details the architecture of WSCs and covers the main factors influencing their design, operation, and cost structure, and the characteristics of their software base. Each chapter contains multiple real-world examples, including detailed case studies and previously unpublished details of the infrastructure used to power Google's online services. Targeted at the architects and programmers of today's WSCs, this book provides a great foundation for those looking to innovate in this fascinating and important area, but the material will also be broadly interesting to those who just want to understand the infrastructure powering the internet. The third edition reflects four years of advancements since the previous edition and nearly doubles the number of pictures and figures. New topics range from additional workloads like video streaming, machine learning, and public cloud to specialized silicon accelerators, storage and network building blocks, and a revised discussion of data center power and cooling, and uptime. Further discussions of emerging trends and opportunities ensure that this revised edition will remain an essential resource for educators and professionals working on the next generation of WSCs.

The Datacenter as a Computer: An Introduction to the Design of Warehouse-Scale Machines (Synthesis Lectures on Computer Architecture)

by Luiz Barroso, Urs Hoelzle

As computation continues to move into the cloud, the computing platform of interest no longer resembles a pizza box or a refrigerator, but a warehouse full of computers. These new large datacenters are quite different from traditional hosting facilities of earlier times and cannot be viewed simply as a collection of co-located servers. Large portions of the hardware and software resources in these facilities must work in concert to efficiently deliver good levels of Internet service performance, something that can only be achieved by a holistic approach to their design and deployment. In other words, we must treat the datacenter itself as one massive warehouse-scale computer (WSC). We describe the architecture of WSCs, the main factors influencing their design, operation, and cost structure, and the characteristics of their software base. We hope it will be useful to architects and programmers of today's WSCs, as well as those of future many-core platforms which may one day implement the equivalent of today's WSCs on a single board. Table of Contents: Introduction / Workloads and Software Infrastructure / Hardware Building Blocks / Datacenter Basics / Energy and Power Efficiency / Modeling Costs / Dealing with Failures and Repairs / Closing Remarks

Datacenter Connectivity Technologies: Principles and Practice

by Frank Chang

In recent years, investments by cloud companies in mega data centers and the associated network infrastructure have created a very active and dynamic segment in the optical components and modules market. High-speed optical interconnect technologies play a critical role in the growth of mega data centers, which flood the networks with unprecedented amounts of data traffic. Datacenter Connectivity Technologies: Principles and Practice provides a comprehensive and in-depth look at the development of the various optical connectivity technologies that are making an impact on the building of data centers. The technologies span from short-range connectivity, as low as 100 meters with multi-mode fiber (MMF) links inside data centers, to long distances of hundreds of kilometers with single-mode fiber (SMF) links between data centers. This book is the first of its kind to address the various advanced technologies connecting data centers. It represents a collection of achievements and the latest developments from well-known industry experts and academic researchers active in this field.

Datacenter Design and Management: A Computer Architect’s Perspective (Synthesis Lectures on Computer Architecture)

by Benjamin C. Lee

An era of big data demands datacenters, which house the computing infrastructure that translates raw data into valuable information. This book defines datacenters broadly, as large distributed systems that perform parallel computation for diverse users. These systems exist in multiple forms—private and public—and are built at multiple scales. Datacenter design and management is multifaceted, requiring the simultaneous pursuit of multiple objectives. Performance, efficiency, and fairness are first-order design and management objectives, which can each be viewed from several perspectives. This book surveys datacenter research from a computer architect's perspective, addressing challenges in applications, design, management, server simulation, and system simulation. This perspective complements the rich bodies of work in datacenters as a warehouse-scale system, which study the implications for infrastructure that encloses computing equipment, and in datacenters as distributed systems, which employ abstract details in processor and memory subsystems. This book is written for first- or second-year graduate students in computer architecture and may be helpful for those in computer systems. The goal of this book is to prepare computer architects for datacenter-oriented research by describing prevalent perspectives and the state-of-the-art.

DataFlow Supercomputing Essentials: Algorithms, Applications and Implementations (Computer Communications and Networks)

by Veljko Milutinovic, Milos Kotlar, Marko Stojanovic, Igor Dundic, Nemanja Trifunovic, Zoran Babovic

This illuminating text/reference reviews the fundamentals of programming for effective DataFlow computing. The DataFlow paradigm enables considerable increases in speed and reductions in power consumption for supercomputing processes, yet the programming model requires a distinctly different approach. The algorithms and examples showcased in this book will help the reader to develop their understanding of the advantages and unique features of this methodology. This work serves as a companion title to DataFlow Supercomputing Essentials: Research, Development and Education, which analyzes the latest research in this area, and the training resources available. Topics and features: presents an implementation of Neural Networks using the DataFlow paradigm, as an alternative to the traditional ControlFlow approach; discusses a solution to the three-dimensional Poisson equation, using the Fourier method and DataFlow technology; examines how the performance of the Binary Search algorithm can be improved through implementation on a DataFlow architecture; reviews the different way of thinking required to best configure the DataFlow engines for the processing of data in space flowing through the devices; highlights how the DataFlow approach can efficiently support applications in big data analytics, deep learning, and the Internet of Things. This indispensable volume will benefit all researchers interested in supercomputing in general, and DataFlow computing in particular. Advanced undergraduate and graduate students involved in courses on Data Mining, Microprocessor Systems, and VLSI Systems, will also find the book to be an invaluable resource.

DataFlow Supercomputing Essentials: Research, Development and Education (Computer Communications and Networks)

by Veljko Milutinovic, Jakob Salom, Dragan Veljovic, Nenad Korolija, Dejan Markovic, Luka Petrovic

This informative text/reference highlights the potential of DataFlow computing in research requiring high speeds, low power requirements, and high precision, while also benefiting from a reduction in the size of the equipment. The cutting-edge research and implementation case studies provided in this book will help the reader to develop their practical understanding of the advantages and unique features of this methodology. This work serves as a companion title to DataFlow Supercomputing Essentials: Algorithms, Applications and Implementations, which reviews the key algorithms in this area, and provides useful examples. Topics and features: reviews the library of tools, applications, and source code available to support DataFlow programming; discusses the enhancements to DataFlow computing yielded by small hardware changes, different compilation techniques, debugging, and optimizing tools; examines when a DataFlow architecture is best applied, and for which types of calculation; describes how converting applications to a DataFlow representation can result in an acceleration in performance, while reducing the power consumption; explains how to implement a DataFlow application on Maxeler hardware architecture, with links to a video tutorial series available online. This enlightening volume will be of great interest to all researchers investigating supercomputing in general, and DataFlow computing in particular. Advanced undergraduate and graduate students involved in courses on Data Mining, Microprocessor Systems, and VLSI Systems, will also find the book to be a helpful reference.

Datalog and Logic Databases (Synthesis Lectures on Data Management)

by Sergio Greco, Cristian Molinaro

The use of logic in databases started in the late 1960s. In the early 1970s Codd formalized databases in terms of the relational calculus and the relational algebra. A major influence on the use of logic in databases was the development of the field of logic programming. Logic provides a convenient formalism for studying classical database problems and has the important property of being declarative, that is, it allows one to express what one wants rather than how to get it. For a long time, relational calculus and algebra were considered the relational database languages. However, there are simple operations, such as computing the transitive closure of a graph, which cannot be expressed with these languages. Datalog is a declarative query language for relational databases based on the logic programming paradigm. One of the peculiarities that distinguishes Datalog from query languages like relational algebra and calculus is recursion, which gives Datalog the capability to express queries like computing a graph transitive closure. Recent years have witnessed a revival of interest in Datalog in a variety of emerging application domains such as data integration, information extraction, networking, program analysis, security, cloud computing, ontology reasoning, and many others. The aim of this book is to present the basics of Datalog, some of its extensions, and recent applications to different domains.
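The transitive-closure query this blurb refers to can be made concrete. The sketch below (illustrative only, not taken from the book; the function and relation names are hypothetical) shows the naive bottom-up fixed-point evaluation a Datalog engine could apply to the classic program `path(X,Y) :- edge(X,Y).` and `path(X,Y) :- path(X,Z), edge(Z,Y).`:

```python
# Naive bottom-up evaluation of the recursive Datalog program:
#   path(X, Y) :- edge(X, Y).
#   path(X, Y) :- path(X, Z), edge(Z, Y).
# This is exactly the transitive-closure query that relational
# algebra and calculus cannot express without recursion.

def transitive_closure(edges):
    """Apply the rules repeatedly until no new facts are derived."""
    path = set(edges)  # first rule: every edge fact is a path fact
    while True:
        # second rule: join path(X, Z) with edge(Z, Y) to derive path(X, Y)
        new = {(x, y2) for (x, y) in path for (y1, y2) in edges if y == y1}
        if new <= path:  # fixed point reached: nothing new was derived
            return path
        path |= new

edges = {("a", "b"), ("b", "c"), ("c", "d")}
print(sorted(transitive_closure(edges)))
# → [('a', 'b'), ('a', 'c'), ('a', 'd'), ('b', 'c'), ('b', 'd'), ('c', 'd')]
```

Production Datalog engines typically use semi-naive evaluation, which joins only the facts derived in the previous iteration, but the fixed point reached is the same.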

Datalog in Academia and Industry: Second International Workshop, Datalog 2.0, Vienna, Austria, September 11-13, 2012, Proceedings (Lecture Notes in Computer Science #7494)

by Pablo Barceló, Reinhard Pichler

This book constitutes the refereed proceedings of the Second International Workshop on Datalog 2.0, held in Vienna, Austria, in September 2012. The 14 revised full papers presented together with 2 invited talks and 2 invited tutorials were carefully reviewed and selected from 17 initial submissions. Datalog 2.0 is a workshop for Datalog pioneers, implementors, and current practitioners; the contributions aim to bring every participant up to date with the newest developments and map out directions for the future.

Datalog Reloaded: First International Workshop, Datalog 2010, Oxford, UK, March 16-19, 2010. Revised Selected Papers (Lecture Notes in Computer Science #6702)

by Oege De Moor, Georg Gottlob, Tim Furche, Andrew Sellers

This book constitutes the thoroughly refereed post-workshop proceedings of the First International Workshop on Datalog 2.0, held in Oxford, UK, in March 2010. The 22 revised full papers presented were carefully selected during two rounds of reviewing and improvement from numerous submissions. The papers showcase the state of the art in theory and systems for Datalog, divided into three sections: properties, applications, and extensions of Datalog.

Datamining und Computational Finance: Ergebnisse des 7. Karlsruher Ökonometrie-Workshops (Wirtschaftswissenschaftliche Beiträge #174)

by Georg Bol, Gholamreza Nakhaeizadeh, Karl-Heinz Vollmer

The seventh Karlsruhe Econometrics Workshop focused on the application of neural networks to financial time series, the use of data mining and machine learning methods for problems in finance, and quantitative methods for assessing market and country risks. The selected papers in this book, including contributions from internationally renowned experts, range from general considerations on forecasting with neural networks and empirical results for exchange rates, bond markets, and sales figures, through the assessment of market risks and credit monitoring with machine learning methods, to the determination and assessment of country risks. This volume reports on the current developments in these fields and offers a forum for discussion.

The DataOps Revolution: Delivering the Data-Driven Enterprise

by Simon Trewin

DataOps is a new way of delivering data and analytics that is proven to get results. It enables IT and users to collaborate in the delivery of solutions that help organisations to embrace a data-driven culture. The DataOps Revolution: Delivering the Data-Driven Enterprise is a narrative about real world issues involved in using DataOps to make data-driven decisions in modern organisations. The book is built around real delivery examples based on the author’s own experience and lays out principles and a methodology for business success using DataOps. Presenting practical design patterns and DataOps approaches, the book shows how DataOps projects are run and presents the benefits of using DataOps to implement data solutions. Best practices are introduced in this book through the telling of a story, which relates how a lead manager must find a way through complexity to turn an organisation around. This narrative vividly illustrates DataOps in action, enabling readers to incorporate best practices into everyday projects. The book tells the story of an embattled CIO who turns to a new and untested project manager charged with a wide remit to roll out DataOps techniques to an entire organisation. It illustrates a different approach to addressing the challenges in bridging the gap between IT and the business. The approach presented in this story lines up to the six IMPACT pillars of the DataOps model that Kinaesis (www.kinaesis.com) has been using through its consultants to deliver successful projects and turn around failing deliveries. The pillars help to organise thinking and structure an approach to project delivery. The pillars are broken down and translated into steps that can be applied to real-world projects that can deliver satisfaction and fulfillment to customers and project team members.
