Project Genisys


Genisys is a new kind of data system that enables ADA applications to easily deploy ML models in emerging environments ranging from the Internet of Things to personal devices to the cloud. As part of this vision, Genisys will exploit deep learning-based ML models to see, hear, understand, and query unstructured data sources such as speech, images, video, text, and time series. We call this vision of pervasive intelligence for type-agnostic data analytics "database perception." Watch this space for more details.

Component Project Webpages


SpeakQL: Enabling speech-driven multimodal querying of structured data with regular SQL and more.


Vista: Enabling data systems to truly see image and video data for efficient multimodal analytics.

Publications


  • Materialization Trade-offs for Feature Transfer from Deep CNNs for Multimodal Data Analytics
    Supun Nakandala and Arun Kumar
    Under submission | TechReport | Vista Code on GitHub

  • SpeakQL: Towards Speech-driven Multi-modal Querying
    Dharmil Chandarana, Vraj Shah, Arun Kumar, and Lawrence Saul
    ACM SIGMOD 2017 HILDA Workshop | Paper PDF