This disclosure describes techniques for automatically filtering audio and/or video content via augmented reality (AR) or virtual reality (VR) glasses when the content is unsuitable for certain users. Per the techniques, content filters are configured to define objectionable or scary content. A machine learning model analyzes the content, along with other user-permitted data such as environmental cues and crowdsourced information, to detect unsuitable content that matches one or more of the filters. Upon detection, such content is automatically obscured via the AR/VR glasses, e.g., by darkening the entire screen or selected parts of it, by attenuating the audio feed, or by other suitable action.
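The filter-matching step described above can be sketched as follows. This is a minimal illustration, not the disclosed implementation: the filter labels, thresholds, score source, and obscuring actions (`darken_screen`, `attenuate_audio`) are all hypothetical placeholders for whatever the ML model and glasses runtime actually provide.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class ContentFilter:
    """A user-configured filter: a content label plus a confidence threshold."""
    label: str        # e.g. "scary", "violent" (hypothetical labels)
    threshold: float  # minimum model confidence that triggers obscuring

# Hypothetical obscuring actions the AR/VR glasses could apply.
def darken_screen(region: str = "full") -> str:
    return f"darken:{region}"

def attenuate_audio(level: float = 0.1) -> str:
    return f"attenuate_audio:{level}"

def obscuring_actions(scores: Dict[str, float],
                      filters: List[ContentFilter]) -> List[str]:
    """Return obscuring actions for one content frame.

    `scores` maps label -> confidence, assumed to come from an ML model
    that also folds in user-permitted signals such as environmental cues
    and crowdsourced information.
    """
    actions: List[str] = []
    for f in filters:
        if scores.get(f.label, 0.0) >= f.threshold:
            actions.append(darken_screen())
            actions.append(attenuate_audio())
    return actions
```

For example, with a filter `ContentFilter("scary", 0.7)`, a frame scored `{"scary": 0.9}` would trigger both obscuring actions, while a frame scored `{"scary": 0.5}` would pass through unmodified.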

Creative Commons License

This work is licensed under a Creative Commons Attribution 4.0 License.