ADMS 2023
Fourteenth International Workshop on Accelerating Analytics and Data Management Systems Using Modern Processor and Storage Architectures
 

In conjunction with VLDB 2023
Monday, August 28, 2023, Junior Ballroom C
 
 
Workshop Overview

The objective of this one-day workshop is to investigate opportunities in accelerating analytics workloads and data management systems, including traditional OLTP, data warehousing/OLAP, HTAP, ETL, streaming/real-time processing, business analytics (including machine learning and deep learning workloads), and data visualization, using modern processors (e.g., commodity and specialized multi-core, many-core, GPUs, and FPGAs), processing systems (e.g., hybrid, massively distributed clusters, and cloud-based distributed computing infrastructure), networking infrastructures (e.g., RDMA over InfiniBand), memory and storage systems (e.g., storage-class memories like SSDs, active memories, NVRAMs, and phase-change memory), multi-core and distributed programming paradigms (e.g., CUDA/OpenCL, MPI/OpenMP, and MapReduce/Spark), and integration with data-science frameworks such as scikit-learn, TensorFlow, or PyTorch. Exploratory topics such as Generative AI, DNA-based storage, or quantum algorithms are also within the purview of the ADMS workshop.

The current data management scenario is characterized by the following trends: traditional OLTP and OLAP/data warehousing systems are being used for increasingly complex workloads (e.g., petabytes of data, complex queries under real-time constraints, etc.); applications are becoming far more distributed, often consisting of different data processing components; non-traditional domains such as bio-informatics, social networking, mobile computing, sensor applications, and gaming are generating growing quantities of data of different types; economic and energy constraints are leading to greater consolidation and virtualization of resources; and analyzing vast quantities of complex data is becoming more important than traditional transactional processing.

At the same time, there have been tremendous improvements in CPU and memory technologies. Newer processors offer greater compute and memory capabilities, are more power-efficient, and are optimized for multiple application domains. Commodity systems increasingly use multi-core processors with more than 6 cores per chip, and enterprise-class systems use processors with at least 32 cores per chip. Specialized multi-core processors such as GPUs have brought the computational capabilities of supercomputers to cheaper commodity machines. On the storage front, flash-based solid-state devices (SSDs) are becoming smaller in size, cheaper in price, and larger in capacity. Exotic technologies like phase-change memory are on the near-term horizon and could be game-changers in the way data is stored and processed.

In spite of these trends, these technologies currently see limited use in the data management domain. Naive exploitation of multi-core processors or SSDs often leads to unbalanced systems. It is, therefore, important to evaluate applications in a holistic manner to ensure effective utilization of CPU and memory resources. This workshop aims to understand the impact of modern hardware technologies on accelerating core components of data management workloads. Specifically, the workshop hopes to explore the interplay between overall system design, core algorithms, query optimization strategies, programming approaches, performance modelling and evaluation, etc., from the perspective of data management applications.

Topics of Interest

The suggested topics of interest include, but are not restricted to:

  • Hardware and System Issues in Domain-specific Accelerators
  • New Programming Methodologies for Data Management Problems on Modern Hardware
  • Query Processing for Hybrid Architectures
  • Large-scale I/O-intensive (Big Data) Applications
  • Parallelizing/Accelerating Machine Learning/Deep Learning Workloads
  • Autonomic Tuning for Data Management Workloads on Hybrid Architectures
  • Algorithms for Accelerating Multi-modal Multi-tiered Systems
  • Energy Efficient Software-Hardware Co-design for Data Management Workloads
  • Parallelizing non-traditional (e.g., graph mining) workloads
  • Algorithms and Performance Models for modern Storage Sub-systems
  • Exploitation of specialized ASICs
  • Novel Applications of Low-Power Processors and FPGAs
  • Exploitation of Transactional Memory for Database Workloads
  • Exploitation of Active Technologies (e.g., Active Memory, Active Storage, and Networking)
  • New Benchmarking Methodologies for Accelerated Workloads
  • Applications of HPC Techniques for Data Management Workloads
  • Acceleration in the Cloud Environments
  • Accelerating Data Science/Machine Learning Workloads
  • Exploratory topics such as Generative AI, DNA-storage, Quantum Technologies

Keynote Presentations


  • Accelerating AI inference for high volume transactional workloads on IBM z16, Andrew Sica, Senior Technical Staff Member, IBM

    Andrew Sica is an IBM STSM and technical architect working on AI technologies for IBM Z and LinuxONE platforms. In his 23 years at IBM, Andrew has focused on a wide range of IBM zSystem technologies including z/OS Core Technologies components and Tailored Fit Pricing. Most recently Andrew has led software development for strategic AI platform initiatives, including the IBM z16 Integrated Accelerator for AI.

    Abstract: For some, “mainframe” conjures an image of legacy room-sized systems circa 1960. However, the modern mainframe environment is a transaction and data processing powerhouse with constant innovation. Many of the world’s top fintech, banking, and insurance enterprises rely on these systems to run their most critical applications. In this session we’ll explore the innovative on-chip AI accelerator on the IBM Telum chip, which is featured in the newest IBM mainframe (the z16). The IBM Telum chip introduces an on-chip AI accelerator (AIU) that provides consistent low latency and high throughput (over 200 TFLOPS in a 32-chip system) inference capacity usable by all threads. This session will further describe the library and SDK (zDNN) layer created to enable compilers and data science frameworks to exploit the AIU, as well as detail framework- and compiler-based approaches to enable execution of a variety of models, including the ONNX format. Lastly, we will explore implementation patterns that enterprise clients are utilizing to integrate AI inference into high-volume transaction processing systems with the z16 AIU.
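
    As background for readers less familiar with framework-level model execution, the minimal Python sketch below shows the general pattern of loading and running an ONNX model with onnxruntime; the model file, input name, and execution provider are illustrative placeholders, and the z16 accelerator-specific integration described in the talk is not shown here.

        import numpy as np
        import onnxruntime as ort

        # Load an ONNX model; on accelerator-equipped systems a hardware-specific
        # execution provider would typically be listed ahead of the CPU fallback.
        # "model.onnx" and the provider list are placeholders for illustration.
        session = ort.InferenceSession("model.onnx",
                                       providers=["CPUExecutionProvider"])

        # Run inference on a single batch; the input name "input" is assumed.
        batch = np.random.rand(1, 32).astype(np.float32)
        outputs = session.run(None, {"input": batch})
        print(outputs[0].shape)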

  • Advanced Networking and Storage for Generative AI, Kevin Deierling, NVIDIA Head of Networking

    Kevin leads Networking at NVIDIA, having joined from Mellanox Technologies, where he was senior vice president of marketing. He has been a founder or senior executive at five startups that have achieved positive outcomes. Combining both technical and business expertise, Deierling has variously served as the chief officer of technology, architecture, and marketing at these companies, where he led the development of strategy and products across a broad range of disciplines including networking, security, cloud, big data, virtualization, storage, smart energy, and DNA sequencing. Deierling has contributed to multiple technology standards and has over 25 patents in areas including wireless communications, error correction, security, video compression, and DNA sequencing. He is a contributing author of a text on BiCMOS design. Kevin holds a BA in solid state physics from UC Berkeley.

    Abstract: This talk will provide a survey of the latest natural language neural network innovations that have led to the emergence of transformer-based generative AI. Topics include recurrent neural networks, forward/backward attention mechanisms, positional encoding, self-attention, and emerging vector databases that accelerate similarity search. Lastly, the session will address how compute, networking, and storage should evolve to meet the needs of these new generative AI workloads.
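
    As background on the transformer building blocks mentioned above, the short NumPy sketch below computes single-head scaled dot-product self-attention; the dimensions, random inputs, and weight matrices are illustrative only and are not drawn from the talk.

        import numpy as np

        def softmax(x, axis=-1):
            x = x - x.max(axis=axis, keepdims=True)
            e = np.exp(x)
            return e / e.sum(axis=axis, keepdims=True)

        def self_attention(X, Wq, Wk, Wv):
            # Project the token sequence into queries, keys, and values.
            Q, K, V = X @ Wq, X @ Wk, X @ Wv
            d = Q.shape[-1]
            # Scaled dot-product scores: every position attends to every other.
            scores = Q @ K.T / np.sqrt(d)
            return softmax(scores) @ V

        rng = np.random.default_rng(0)
        X = rng.normal(size=(4, 8))                       # 4 tokens, width 8
        Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
        print(self_attention(X, Wq, Wk, Wv).shape)        # (4, 8)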


Workshop Schedule (8.30am - 4.30pm PST)
Junior Ballroom C


Session 1 (8.30-10am PST)

  • Introductory Remarks
  • Post-Moore’s Law Fusion: High-Bandwidth Memory, Accelerators, and Native Half-Precision Processing for CPU-Local Analytics, Viktor Sanca, Ecole Polytechnique Fédérale de Lausanne, and Anastasia Ailamaki, Ecole Polytechnique Fédérale de Lausanne and Google (Slides, Paper)
  • Keynote (1): Accelerating AI inference for high volume transactional workloads on IBM z16, Andrew Sica, IBM (Presentation Slides)


Coffee Break (10-10.30am PST)


Session 2 (10.30am - 12pm PST)

  • Keynote (2): Advanced Networking and Storage for Generative AI, Kevin Deierling, NVIDIA (Presentation Slides)
  • An Intermediate Representation for Composable Typed Streaming Dataflow Designs, Matthijs Reukers, Yongding Tian, Zaid Al-Ars, and Peter Hofstee, Delft University of Technology; Matthijs Brobbel, Johan Peltenburg, and Jeroen van Straten, Voltron Data (Slides, Paper)


Lunch Break (12-1.30pm PST)


Session 3 (1.30-3pm PST)

  • Towards MRAM Byte-Addressable Persistent Memory in Edge Database Systems, Luis Ferreira, Fabio Andre Coelho, and Jose Pereira, U. Minho and INESCTEC (Slides, Paper)
  • Exploiting Code Generation for Efficient LIKE Pattern Matching, Adrian Riedl, Philipp Fent, Maximilian Bandle, and Thomas Neumann, Technical University of Munich (Slides, Paper)
  • Evaluating SIMD Compiler-Intrinsics for Database Systems, Lawrence Benson and Richard Ebeling, Hasso Plattner Institute and University of Potsdam; Tilmann Rabl, Hasso Plattner Institute (Slides, Paper)


Coffee Break (3-3.30pm PST)


Session 4 (3.30-4.30pm PST)

  • GAMUT: Matrix Multiplication-Like Tasks on GPU, Xincheng Xie, Junyoung Kim, and Kenneth Ross, Columbia University (Slides, Paper)
  • ByteGAP: A Non-continuous Distributed Graph Computing System using Persistent Memory, Miaomiao Cheng and Jiujan Chen, Beijing Institute of Technology; Cheng Zhao, Cheng Chen, Yongmin Hu, Xiaoliang Cong, Hexiang Lin, Douyin Vision Co., Ltd; Li Ronghua and Guoren Wang, Beijing Institute of Technology; Shuai Zhang and Lei Zhang, Douyin Vision Co., Ltd (Slides, Paper)


Organization

Workshop Co-Chairs

       For questions regarding the workshop, please send email to contact@adms-conf.org.

Program Committee

  • Bulent Abali, IBM Research
  • Martin Boissier, HPI
  • Helena Caminal, Google
  • Faeze Faghih, TU Darmstadt
  • Adwait Jog, University of Virginia
  • Rajaram Krishnamurthy, AMD
  • Ju Hyoung Min, Boston University
  • Tobias Schmidt, TU Munich
  • Sayantan Sur, Nvidia
  • Garret Swart, Oracle
  • Tianzheng Wang, Simon Fraser University

Important Dates

  • Paper Submission: Monday, 26 June, 2023, 9 am EST
  • Notification of Acceptance: Friday, 21 July, 2023
  • Camera-ready Submission: Friday, 4 August, 2023
  • Workshop Date: Monday, 28 August, 2023

Submission Instructions

Submission Site 

All submissions will be handled electronically via EasyChair.

Formatting Guidelines 

We will use the same document templates as the VLDB conference. You can find them here.

It is the authors' responsibility to ensure that their submissions adhere strictly to the VLDB format detailed here. In particular, modifying the format with the objective of squeezing in more material is not allowed. Submissions that do not comply with the formatting detailed here will be rejected without review.

As per the VLDB submission guidelines, the length of a full paper is limited to 12 pages, excluding the bibliography. However, shorter papers (at least 6 pages of content) are also encouraged.