ADA Lab @ UCSD
Scalable Linear Algebra Benchmarking (SLAB)

Overview

The growing use of statistical and machine learning algorithms to analyze large datasets has given rise to new systems that scale such algorithms. But implementing new scalable algorithms in low-level languages is a painful process, especially for enterprise and scientific users. To mitigate this issue, a new breed of systems exposes high-level bulk linear algebra (LA) primitives that are scalable. By composing such LA primitives, users can write analysis algorithms in a higher-level language while the system handles scalability. Yet there is little work on a unified comparative evaluation of the scalability, efficiency, and effectiveness of such "scalable LA systems." We take a major step towards filling this gap. We introduce a suite of LA-specific tests based on our analysis of the data access and communication patterns of LA workloads and their use cases. Using these tests, we perform a comprehensive empirical comparison of six popular scalable LA systems: MADlib, MLlib, SystemML, ScaLAPACK, SciDB, and TensorFlow, using both synthetic data and a large real-world dataset. Our study reveals several scalability bottlenecks, unusual performance trends, and even bugs in some systems. Our findings have already led to improvements in SystemML, and developers of other systems have also expressed interest.
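To make the idea of composing bulk LA primitives concrete, here is a minimal sketch of ordinary least squares (OLS) written purely as LA operations. Plain NumPy is used here only as a stand-in for a scalable LA system's matrix API; the data sizes and variable names are illustrative, not taken from the benchmark itself.

```python
import numpy as np

# Illustrative example: OLS composed from bulk LA primitives
# (transpose, matrix multiply, linear solve). A scalable LA system
# exposes the same operations over distributed matrices, so user
# code stays at this level of abstraction.
rng = np.random.default_rng(0)
n, d = 1000, 5
X = rng.standard_normal((n, d))      # feature matrix
beta_true = np.arange(1.0, d + 1.0)  # ground-truth coefficients
y = X @ beta_true                    # noiseless responses

# Normal equations: solve (X^T X) beta = X^T y.
beta_hat = np.linalg.solve(X.T @ X, X.T @ y)

print(np.allclose(beta_hat, beta_true))  # True (noiseless data)
```

A scalable LA system would execute the same composition, but with the multiplies and solve distributed across a cluster, which is exactly the behavior the benchmark's tests stress.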

Downloads (Paper, Code, Data, etc.)

  • A Comparative Evaluation of Systems for Scalable Linear Algebra-based Analytics
    Anthony Thomas and Arun Kumar
    VLDB 2018/2019 (To appear) | Paper PDF (Coming soon) | TechReport | Code and Data Scripts on GitHub

Student Contact

Anthony Thomas: ahthomas [at] eng [dot] ucsd [dot] edu