Abstract
Techniques are provided to simultaneously infer approximately where a speaker is looking and the speaker's emotion during a conversation. To address privacy concerns, only the speaker's approximate facial features may be estimated. The inferred face may then be converted into a cartoon face that retains the main facial features. This can enhance the user interaction experience even when a speaker does not turn on video during a teleconference.
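The disclosure does not specify an implementation, but the final step described above, mapping an estimated gaze direction and emotion onto a cartoon face, can be illustrated with a minimal sketch. The data classes, the emotion-to-expression table, and the renderer interface below are all hypothetical assumptions introduced only for illustration; the upstream estimator that produces the gaze and emotion values is out of scope here.

```python
from dataclasses import dataclass

# Hypothetical container for the upstream estimates described in the abstract:
# an approximate gaze direction plus an emotion label with a confidence score.
@dataclass
class SpeakerState:
    gaze_yaw_deg: float    # left/right gaze angle, degrees
    gaze_pitch_deg: float  # up/down gaze angle, degrees
    emotion: str           # e.g. "happy", "neutral", "surprised"
    confidence: float      # 0.0 .. 1.0

# Hypothetical cartoon-face parameters that retain only coarse facial features,
# so no identifying detail of the real face is reproduced.
@dataclass
class CartoonFaceParams:
    eye_direction: tuple[float, float]  # normalized (yaw, pitch) in [-1, 1]
    mouth_curve: float                  # -1 (frown) .. +1 (smile)
    brow_raise: float                   # 0 (relaxed) .. 1 (raised)

# Assumed mapping from emotion labels to cartoon expression parameters.
_EXPRESSIONS = {
    "happy":     (0.8, 0.2),
    "neutral":   (0.0, 0.1),
    "surprised": (0.1, 0.9),
    "sad":       (-0.6, 0.0),
}

def to_cartoon(state: SpeakerState) -> CartoonFaceParams:
    """Convert inferred speaker state into privacy-preserving cartoon parameters."""
    mouth, brow = _EXPRESSIONS.get(state.emotion, _EXPRESSIONS["neutral"])
    # Scale expression strength by confidence so low-confidence estimates
    # fall back toward a neutral cartoon face.
    mouth *= state.confidence
    brow *= state.confidence
    # Normalize gaze angles to a [-1, 1] range assuming a +/-45 degree span.
    eye = (
        max(-1.0, min(1.0, state.gaze_yaw_deg / 45.0)),
        max(-1.0, min(1.0, state.gaze_pitch_deg / 45.0)),
    )
    return CartoonFaceParams(eye_direction=eye, mouth_curve=mouth, brow_raise=brow)

if __name__ == "__main__":
    # Example: a speaker inferred to be glancing right and smiling.
    state = SpeakerState(gaze_yaw_deg=20.0, gaze_pitch_deg=-5.0,
                         emotion="happy", confidence=0.9)
    print(to_cartoon(state))
```

In a teleconferencing client, the resulting `CartoonFaceParams` would drive an avatar renderer in place of the disabled video stream, updating as new gaze and emotion estimates arrive.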
Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 License.
Recommended Citation
Zhang, Xinjun; Shao, Qihong; Zhao, Changyong; and Nucci, Antonio, "MEASURING AND VISUALIZING USER SENTIMENT CHANGES OVER MULTIPLE CHANNELS", Technical Disclosure Commons (October 24, 2019).
https://www.tdcommons.org/dpubs_series/2602