Abstract
When sponsored audio content is presented alongside non-sponsored audio content in a two-way conversation between a user and a client computing device, it may be challenging for the user to distinguish the sponsored content from the non-sponsored content. The client computing device can receive an input audio signal indicative of the user's uttered speech. A data processing system communicatively coupled to the client computing device can process the received audio signal to identify a request corresponding to the uttered speech. The data processing system can generate a response to the identified request and select an ad based on the context of the request. The data processing system can then apply one or more audio effects to the audio content of the selected ad, allowing the user to audibly distinguish the ad content from the generated response content. The client computing device can play the audio content of the generated response and the audio content of the ad with the applied audio effects.
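The final step of the flow described above, applying audio effects that mark the ad content, could be sketched as follows. This Python snippet is a minimal, illustrative sketch only; the sample rate, the pitch-shift factor, the earcon parameters, and all function names are assumptions for illustration, not details taken from the disclosure.

```python
import numpy as np

SAMPLE_RATE = 16_000  # assumed sample rate in Hz; samples assumed float in [-1, 1]


def pitch_shift(samples: np.ndarray, factor: float) -> np.ndarray:
    """Naive pitch shift by resampling; factor > 1 raises the pitch slightly."""
    original_idx = np.arange(len(samples))
    new_idx = np.arange(0, len(samples), factor)
    return np.interp(new_idx, original_idx, samples)


def earcon(duration_s: float = 0.2, freq_hz: float = 880.0) -> np.ndarray:
    """Short marker tone placed before sponsored content as an audible cue."""
    t = np.linspace(0, duration_s, int(SAMPLE_RATE * duration_s), endpoint=False)
    return 0.3 * np.sin(2 * np.pi * freq_hz * t)


def mark_sponsored(ad_samples: np.ndarray) -> np.ndarray:
    """Apply example audio effects (leading earcon plus pitch shift) to ad audio."""
    return np.concatenate([earcon(), pitch_shift(ad_samples, factor=1.05)])


def build_output(response_samples: np.ndarray, ad_samples: np.ndarray) -> np.ndarray:
    """Combine the generated response audio with the audibly marked ad audio."""
    return np.concatenate([response_samples, mark_sponsored(ad_samples)])
```

In this sketch the marker tone and pitch shift stand in for whichever audio effects the data processing system selects; the client computing device would then play the combined waveform returned by `build_output`.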
Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 License.
Recommended Citation
Lanza, John D., "Dynamic Context-Based Voice Modulation", Technical Disclosure Commons, (July 03, 2017)
https://www.tdcommons.org/dpubs_series/590