Abstract
Traditional artificial intelligence agents, such as those in smart eyewear, typically utilize static voice profiles that do not account for the user's immediate environment or interaction history. This lack of context can result in auditory outputs that are disruptive in quiet settings or unintelligible in noisy surroundings, degrading the overall user experience. This disclosure describes a method for dynamically adjusting voice characteristics, such as volume, pitch, timbre, and directivity, based on real-time soundscape analysis and location history. Input is gathered from sensors to extract contextual cues, including environmental noise levels and the user's own vocal delivery. For instance, if a user whispers, the response is delivered in a corresponding whisper. Likewise, if the smart eyewear determines that the user is in an environment that calls for quiet, such as a library or a courtroom, the response is provided as a whisper. By tailoring the auditory interface to the specific situational context, the intelligibility of the agent is maintained, and the social appropriateness of the interaction is improved.
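The decision logic described above can be sketched as a simple policy that maps contextual cues to output voice characteristics. The sketch below is illustrative only: the function and field names, the quiet-location tags, and the decibel thresholds are assumptions, not part of the disclosure.

```python
from dataclasses import dataclass

# Hypothetical location tags whose etiquette calls for a quiet response.
QUIET_LOCATIONS = {"library", "courtroom"}

@dataclass
class VoiceProfile:
    volume_db: float  # target output loudness
    whisper: bool     # whether to use a whisper-style timbre

def select_voice_profile(ambient_db: float, user_speech_db: float,
                         location: str = "") -> VoiceProfile:
    """Pick output voice characteristics from contextual cues (illustrative)."""
    # Mirror the user's delivery: a whispering user (assumed < 40 dB) gets a
    # whispered reply at low volume.
    if user_speech_db < 40.0:
        return VoiceProfile(volume_db=35.0, whisper=True)
    # Location history suggesting quiet etiquette also forces a whisper.
    if location in QUIET_LOCATIONS:
        return VoiceProfile(volume_db=35.0, whisper=True)
    # Otherwise scale output volume with ambient noise, clamped to a
    # comfortable range so the agent stays intelligible without shouting.
    volume = min(max(ambient_db + 10.0, 45.0), 75.0)
    return VoiceProfile(volume_db=volume, whisper=False)
```

For example, a user whispering in a meeting would receive a whispered reply, while the same query on a busy street would be answered near the upper volume bound.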
Creative Commons License

This work is licensed under a Creative Commons Attribution 4.0 License.
Recommended Citation
Najaran, Mahdi Tayarani and Du, Ruofei, "Context-Aware Adjustment of Voice Characteristics for Artificial Intelligence Agents", Technical Disclosure Commons, (January 09, 2026)
https://www.tdcommons.org/dpubs_series/9158