ADA Lab @ UCSD

Project SpeakQL

Overview

Natural language and touch-based interfaces are making data querying significantly easier. But typed SQL remains the gold standard for query sophistication, even though typing SQL is painful in touch-oriented environments (e.g., an iPad or iPhone) and essentially impossible in speech-driven environments (e.g., Amazon Echo). Recent advances in automatic speech recognition (ASR) raise the tantalizing possibility of bridging this gap by enabling spoken queries over structured data.

In this project, we envision and prototype a series of new spoken data querying systems. Personal digital assistants such as Alexa can already answer simple natural language queries over well-curated in-house knowledge base schemas; we aim to go beyond this capability and enable more sophisticated spoken queries over arbitrary application database schemas.

Our first and current focus is on designing and implementing a new speech-driven query interface and system for a useful subset of regular SQL. Our goal is near-perfect accuracy and near-real-time latency in transcribing spoken SQL queries. We plan to achieve this goal by synthesizing and innovating upon ideas from ASR, natural language processing (NLP), information retrieval, database systems, and HCI to devise a modular end-to-end system architecture that combines new automated algorithms with user interactions.
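
One such automated algorithm is database-aware ASR error correction (see the ICASSP 2023 paper below). As a rough illustration of the idea only, the Python sketch below repairs misrecognized schema identifiers in a transcript by fuzzy-matching tokens against the database's table and column names. This is not SpeakQL's actual algorithm: the schema vocabulary, the whitespace tokenization, and the difflib similarity cutoff are all simplifying assumptions made for the sketch.

    import difflib

    # Hypothetical schema vocabulary; a real system would read this from the
    # catalog of the application database being queried.
    SCHEMA_TERMS = ["employees", "departments", "name", "salary", "dept_id"]

    # SQL keywords are left untouched; only non-keyword tokens are
    # candidates for schema-identifier correction.
    SQL_KEYWORDS = {"select", "from", "where", "group", "by", "order", "and", "or"}

    def correct_token(token, vocab, cutoff=0.75):
        """Map a possibly misrecognized token to its closest schema
        identifier, using difflib's string similarity as a crude stand-in
        for the phonetic/lexical similarity models a real system would use."""
        if token.lower() in SQL_KEYWORDS:
            return token
        matches = difflib.get_close_matches(token.lower(), vocab, n=1, cutoff=cutoff)
        return matches[0] if matches else token

    def correct_transcript(transcript, vocab):
        """Correct each whitespace-separated token of an ASR transcript."""
        return " ".join(correct_token(tok, vocab) for tok in transcript.split())

    # Example: ASR mishears "employees" as "employes" and "salary" as "salery".
    print(correct_transcript("SELECT name FROM employes WHERE salery > 50000",
                             SCHEMA_TERMS))
    # -> SELECT name FROM employees WHERE salary > 50000

The full system combines such automated corrections with SQL's grammatical structure and with user interaction, as described in the papers below.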

Downloads (Paper, Code, Data, etc.)

  • Database-Aware ASR Error Correction for Speech-to-SQL Parsing
    Yutong Shao, Arun Kumar, and Ndapandula Nakashole
    IEEE ICASSP 2023 | PDF coming soon

  • Design and Evaluation of an SQL-Based Dialect for Spoken Querying
    Kyle Luoma and Arun Kumar
    Under Submission | TechReport

  • Structured Data Representation in Natural Language Interfaces
    Yutong Shao, Arun Kumar, and Ndapandula Nakashole
    IEEE Data Engineering Bulletin 2022 (Invited) | Paper PDF

  • Demonstration of SpeakQL: Speech-driven Multimodal Querying of Structured Data
    Vraj Shah, Side Li, Kevin Yang, Arun Kumar, and Lawrence Saul
    ACM SIGMOD 2019 Demo | Paper PDF and BibTeX | Video

  • SpeakQL: Towards Speech-driven Multi-modal Querying
    Dharmil Chandarana, Vraj Shah, Arun Kumar, and Lawrence Saul
    ACM SIGMOD 2017 HILDA Workshop | Paper PDF and BibTeX

Student Contact

Kyle Luoma: kluoma [at] ucsd [dot] edu

Acknowledgments

This project is funded in part by the NSF under award IIS-1816701.