The objective of this one-day workshop is to investigate opportunities in accelerating data management systems and workloads (including traditional OLTP, data warehousing/OLAP, ETL, streaming/real-time, business analytics, and XML/RDF processing) using modern processors (e.g., commodity and specialized multi-core CPUs, GPUs, FPGAs, and ASICs), storage systems (e.g., storage-class memories such as SSDs and phase-change memory), and programming models such as MapReduce, Spark, CUDA, OpenCL, and OpenACC.
The current data management scenario is characterized by the following trends: traditional OLTP and OLAP/data warehousing systems are being used for increasingly complex workloads (e.g., petabytes of data, complex queries under real-time constraints); applications are becoming far more distributed, often consisting of different data processing components; non-traditional domains such as bio-informatics, social networking, mobile computing, sensor applications, and gaming are generating growing quantities of data of different types; economic and energy constraints are leading to greater consolidation and virtualization of resources; and analyzing vast quantities of complex data is becoming more important than traditional transactional processing.
At the same time, there have been tremendous improvements in CPU and memory technologies. Newer processors offer greater compute and memory capabilities and are optimized for multiple application domains. Commodity systems increasingly use multi-core processors with more than 6 cores per chip, and enterprise-class systems use processors with 8 cores per chip, where each core can execute up to 4 simultaneous threads. Specialized multi-core processors such as GPUs have brought the computational capabilities of supercomputers to cheaper commodity machines. On the storage front, flash-based solid-state devices (SSDs) are becoming smaller in size, cheaper in price, and larger in capacity. Exotic technologies like phase-change memory are on the near-term horizon and could be game-changers in the way data is stored and processed.
In spite of these trends, these technologies currently see limited use in the data management domain. Naive usage of multi-core processors or SSDs often leads to unbalanced systems. It is therefore important to evaluate applications in a holistic manner to ensure effective utilization of CPU and memory resources. This workshop aims to understand the impact of modern hardware technologies on accelerating core components of data management workloads. Specifically, the workshop hopes to explore the interplay between overall system design, core algorithms, query optimization strategies, programming approaches, performance modelling and evaluation, etc., from the perspective of data management applications.
This year the workshop will be held jointly with the Fourth International Workshop on In-Memory Data Management and Analytics (IMDM'16). Both workshops will share the submission site, and papers will be reviewed by the joint program committees of both workshops and published in a single joint proceedings. The workshop proceedings will be published in LNCS and indexed in DBLP.
The suggested topics of interest include, but are not restricted to:
- Hardware and System Issues in Domain-specific Accelerators
- New Programming Methodologies for Data Management Problems on Modern Hardware
- Query Processing for Hybrid Architectures
- Large-scale I/O-intensive (Big Data) Applications
- Parallelizing/Accelerating Analytical (e.g., Data Mining) Workloads
- Autonomic Tuning for Data Management Workloads on Hybrid Architectures
- Algorithms for Accelerating Multi-modal Multi-tiered Systems
- Energy Efficient Software-Hardware Co-design for Data Management Workloads
- Parallelizing Non-traditional (e.g., Graph Mining) Workloads
- Algorithms and Performance Models for Modern Storage Sub-systems
- Exploitation of specialized ASICs
- Novel Applications of Low-Power Processors and FPGAs
- Exploitation of Transactional Memory for Database Workloads
- Exploitation of Active Technologies (e.g., Active Memory, Active Storage, and Networking)
Automata Processing: A New Paradigm for Computing?
Srinivas Aluru, Georgia Tech
Abstract: This talk will introduce the Micron Automata Processor (AP), a novel computing architecture that permits massively parallel execution of numerous non-deterministic finite automata. The processor inspires a new programming paradigm of solving problems using complex pattern matching engines executed over streaming data. I will present my group's research over the past four years to develop algorithms and applications using the AP, and broaden the applicability of this architecture beyond direct pattern matching applications. In particular, I will present techniques to solve several classical problems on unweighted graphs, and demonstrate the potential of this architecture in accelerating graph analytics. I will also discuss design principles we discovered that are of value in developing applications on the AP.
Bio: Srinivas Aluru is a professor in the School of Computational Science and Engineering at Georgia Institute of Technology. He co-directs the Georgia Tech Interdisciplinary Research Institute in Data Engineering and Science (IDEaS), and co-leads the NSF South Big Data Regional Innovation Hub which serves 16 Southern States in the U.S. and Washington D.C. Aluru conducts research in high performance computing, bioinformatics and systems biology, combinatorial scientific computing, and applied algorithms. He is currently serving as the Chair of the ACM Special Interest Group on Bioinformatics, Computational Biology and Biomedical Informatics (SIGBIO). He is a recipient of the NSF Career award, IBM faculty award, Swarnajayanti Fellowship from the Government of India, and the Outstanding Senior Faculty Research award and the Dean’s award for faculty excellence at Georgia Tech. He is a Fellow of the American Association for the Advancement of Science (AAAS) and the Institute of Electrical and Electronics Engineers (IEEE).
Accelerating Data Science (slides)
Gustavo Alonso, ETH Zurich
Abstract: Data science or big data, whatever one wants to call it, raises important challenges in terms of efficient processing. On the one hand, the application demands are becoming more stringent
(more data, more complex analysis, faster results, larger workloads,
etc.). On the other hand, hardware and computing platforms are in a
complex phase with little stability in terms of architectures and
lacking an overall direction. In this talk I will discuss the problem,
arguing that there is an opportunity for specialized designs departing
from general purpose systems. I will illustrate the point with
examples from our research and then show how we are exploiting
reconfigurable hardware (FPGAs) to explore a wide range of
architectural designs, new algorithms for data processing, and
redesigning the entire system stack to better support data
science. The talk will conclude with a number of ideas on how the
database community can contribute to the development of new hardware
and how to orchestrate a more coherent, collective research agenda.
Bio: Gustavo Alonso is a professor at the Department of Computer Science of ETH Zurich (ETHZ) in Switzerland, where he is a member of the Systems Group. Gustavo has a M.S. and a Ph.D. in Computer Science from UC Santa Barbara. Before joining ETH, he was at the IBM Almaden Research Center. His research interests encompass almost all aspects of systems, from design to run time. His applications of interest are distributed systems and databases, with an emphasis on system architecture. Current research is related to multi-core architectures, large clusters, FPGAs, and big data, mainly working on adapting traditional system software (OS, database, middleware) to modern hardware platforms.
Gustavo is a Fellow of the ACM and of the IEEE.
Welcome Comments
ADMS Keynote (9-10.10 am)
Automata Processing: A New Paradigm for Computing?, Srinivas Aluru, Georgia Tech
Session 1: Hardware Acceleration of Database Operations
- Overtaking CPU DBMSes with a GPU in Whole-Query Analytic Processing with Parallelism-Friendly Execution Plan Optimization, Adnan Agbaria, David Minor, Natan Peterfreund, Eyal Rozenberg, and Ofer Rosenberg (10.10-10.30 am) (slides)
Coffee Break (10.30-11 am)
Session 1 (Contd): Hardware Acceleration of Database Operations
- Exploit Every Cycle: Vectorized Time Series Algorithms on Modern Commodity CPUs, Bo Tang, Man Lung Yiu, Yuhong Li, and Leong Hou U (11-11.20 am) (slides)
- Locality-Adaptive Parallel Hash Joins using Hardware Transactional Memory, Anil Shanbhag, Holger Pirk, and Sam Madden (11.20-11.40 am) (slides)
Session 2: Maximizing In-Memory Database Performance
- Cache-Sensitive Skip List: Efficient Range Queries on Modern CPUs, Stefan Sprenger, Steffen Zeuch, and Ulf Leser (11.40-12 noon) (slides)
- To Copy or Not to Copy: Making In-Memory Databases Fast on Modern NICs, Aniraj Kesavan, Robert Ricci, and Ryan Stutsman (12-12.20 pm) (slides)
Lunch Break (12.20-2 pm)
IMDM Keynote (2-3.10 pm)
Accelerating Data Science, Gustavo Alonso, ETH Zurich (slides)
Session 2 (Contd): Maximizing In-Memory Database Performance
- DBMS Data Loading: An Analysis on Modern Hardware, Adam Dziedzic, Manos Karpathiotakis, Ioannis Alagiannis, Raja Appuswamy, and Anastasia Ailamaki (3.10-3.30 pm) (slides)
Coffee Break (3.30-4 pm)
Session 3: Novel In-Memory Architectures (4-5.30 pm)
- Runtime Fragility in Main Memory, Endre Palatinus and Jens Dittrich (slides)
- Compression-Aware In-Memory Query Processing: Vision, System Design and Beyond, Juliana Hildebrandt, Dirk Habich, Patrick Damme, and Wolfgang Lehner (slides)
- SwingDB: An Embedded In-Memory DBMS Enabling Instant Snapshot Sharing, Qingzhong Meng, Xuan Zhou, Shiping Chen, and Shan Wang (slides)
Workshop Co-Chairs
For questions regarding the workshop please send email to contact@adms-conf.org.
ADMS Program Committee
- Reza Azimi, Huawei
- Nipun Agarwal, Oracle Labs
- Christoph Dubach, University of Edinburgh
- Qiong Luo, HKUST
- Sina Meraji, IBM Toronto
- Mohammad Sadoghi, IBM Watson Research
- Nadathur Satish, Intel
- Sudhakar Yalamanchili, Georgia Tech
- David Schwalb, HPI
- Viktor Rosenfeld, TU Berlin
- Shirish Tatikonda, Target
- Christian Lang, Acelot
- Vincent Kulandaisamy, IBM Analytics
- Oded Shmueli, Technion
- Dina Thomas, Pure Storage
Important Dates
- Paper Submission: Thursday, July 7, 2016, 11.59 pm PST (Updated!)
- Notification of Acceptance: Tuesday, July 26, 2016
- Workshop Date: Friday, September 9, 2016
Submission Site
All submissions will be handled electronically via EasyChair.
Formatting Guidelines
We will use the LNCS 1-column format specified for CS Proceedings and Other Multiauthor Volumes. The instructions are here. Submitted papers should be at most 20 single-column pages in the LNCS format.