ADA Lab @ UCSD

Project Genisys

Overview

Genisys is a new kind of data system that enables ADA applications to easily deploy ML models in emerging environments ranging from the Internet of Things to personal devices to the cloud. As part of this, Genisys will exploit deep learning-based ML models to see, hear, and understand unstructured data and query sources such as speech, images, video, text, and time series. We call this vision of pervasive intelligence for type-agnostic data analytics "database perception." Watch this space for more details.
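To make the idea concrete, here is a purely hypothetical Python sketch (not the Genisys API, which is not yet public) of what perception-style analytics over an image column could look like: a pretrained CNN is wrapped as a feature-extraction function and combined with an ordinary relational predicate. The table schema, file names, and similarity threshold below are illustrative assumptions only.

    import torch
    import torchvision.models as models
    import torchvision.transforms as T
    from PIL import Image

    # Pretrained CNN used as a generic feature extractor (classification head dropped).
    cnn = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    cnn.fc = torch.nn.Identity()
    cnn.eval()

    preprocess = T.Compose([
        T.Resize(256), T.CenterCrop(224), T.ToTensor(),
        T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
    ])

    def image_features(path):
        # "Perception" function: map an image file to a CNN feature vector.
        with torch.no_grad():
            img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
            return cnn(img).squeeze(0)

    # Toy "table": structured columns plus an unstructured image column
    # (the schema and file paths are hypothetical).
    listings = [
        {"listing_id": 1, "price": 420000, "photo": "house1.jpg"},
        {"listing_id": 2, "price": 615000, "photo": "house2.jpg"},
    ]

    # A query mixing a relational predicate (price) with a learned predicate
    # (photo similarity to a reference image), evaluated row by row.
    reference = image_features("reference_house.jpg")
    matches = [
        row for row in listings
        if row["price"] < 500000
        and torch.cosine_similarity(image_features(row["photo"]), reference, dim=0) > 0.8
    ]
    print(matches)

How and where such CNN feature computations should be materialized and executed inside the data system, rather than in application code, is the kind of question studied by the component projects below, such as Vista and Precog.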

Component Project Webpages


SpeakQL
Enabling speech-driven multimodal querying of structured data with regular SQL and more.

Vista
Enabling data systems to truly see image and video data for efficient multimodal analytics.

Precog
Accelerating the use of ML in the Internet of Things (IoT) for an “intelligent” physical world.

Publications

  • Pushing Down Machine Learning Inference to the Edge in Heterogeneous Internet of Things Applications
    Anthony Thomas, Yunhui Guo, Yeseong Kim, Baris Aksanli, Arun Kumar, Tajana S. Rosing
    Under submission | Paper PDF | TechReport

  • Movement Data and Environmental Context for Predicting Human Behavior with Machine Learned Models: A Case of Tacos and Fitness Studios (Short paper)
    Jiue-An Yang, Marta M. Jankowska, Arun Kumar, and Jacqueline Kerr
    GIScience 2018 (To appear)

  • Materialization Trade-offs for Feature Transfer from Deep CNNs for Multimodal Data Analytics
    Supun Nakandala and Arun Kumar
    Under submission | TechReport | Vista Code on GitHub

  • SpeakQL: Towards Speech-driven Multi-modal Querying
    Dharmil Chandarana, Vraj Shah, Arun Kumar, and Lawrence Saul
    ACM SIGMOD 2017 HILDA Workshop | Paper PDF
