Daniel Rough

B-SPOKE: Blocks-based programming for Spoken Dialogue Systems

In recent years, Intelligent Personal Assistants (IPAs), such as Apple’s Siri and Amazon’s Alexa, have made speech interfaces a key feature of smartphones and personal in-home devices. Such systems are generally used to execute simple predefined tasks, such as checking the weather or sending messages [1], yet a growing number of applications are being built to extend these assistants’ capabilities through APIs such as Amazon Alexa Skills and Google Home Actions. Despite these APIs, developing new applications still requires programming expertise. This presents a high barrier to most users, who therefore have no way of significantly shaping or personalising the behaviour of their IPAs, or the tasks they can perform, beyond superficial changes. Indeed, the lack of transparency and tailorability of speech interfaces may aggravate issues that lead to infrequent use of IPAs, including a lack of clarity and trust in how IPAs function [1].

In response, this project’s central aim is to reduce the technical barriers to speech interface development, empowering users to tailor systems to their own requirements and thereby encouraging more frequent use of speech in everyday technology interactions. This will be achieved by investigating the kinds of tailorability users desire and the barriers to tailoring that currently exist (WP1). From this, I will develop (WP2), then deploy and evaluate (WP3) a blocks-based programming tool that supports users in building their own tailored and personalised IPAs.
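To illustrate the expertise the existing APIs assume, the sketch below shows roughly what a minimal custom Alexa skill backend looks like when written by hand as an AWS Lambda-style handler. The intent name and response text are hypothetical, and a real skill would additionally require an interaction model configured in the Alexa developer console and a deployed cloud endpoint; this is an illustrative sketch rather than a complete skill.

```python
# Minimal sketch of a Lambda-style handler for a hypothetical Alexa skill.
# The intent name "CheckPlantIntent" is illustrative only; a real skill also
# needs an interaction model (intents and sample utterances) defined in the
# Alexa developer console and a hosted endpoint.

def build_response(speech_text, end_session=True):
    """Wrap plain text in the JSON structure the Alexa service expects."""
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech_text},
            "shouldEndSession": end_session,
        },
    }


def lambda_handler(event, context):
    """Route incoming Alexa requests to simple spoken responses."""
    request = event["request"]

    if request["type"] == "LaunchRequest":
        return build_response("Welcome. What would you like to do?", end_session=False)

    if request["type"] == "IntentRequest":
        intent_name = request["intent"]["name"]
        if intent_name == "CheckPlantIntent":  # hypothetical custom intent
            return build_response("Your plant was last watered two days ago.")

    return build_response("Sorry, I didn't understand that.")
```

Even this small example presupposes familiarity with JSON, event-driven programming, and cloud deployment; it is precisely this kind of barrier that a blocks-based tool aims to remove by letting end users assemble equivalent behaviour visually.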