Virtual assistant responses need to be both useful and safe for kids and families; however, this is not always the case today. For example, virtual assistant responses can sometimes unexpectedly include explicit answers or other content unsuitable for kids. On the other hand, restricting searches too aggressively can prevent the virtual assistant from surfacing useful, family-safe responses. Moreover, there is no systematic way to filter non-textual media content such as music and video. Per the techniques of this disclosure, a library of content classifiers is provided that filters out various categories of content inappropriate for children, e.g., explicit content, violent content, etc. Both the query to a virtual assistant and the responses to the query are passed through the classifiers. Depending on the context, e.g., the current audience, the virtual assistant surfaces filtered responses to queries.
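The pipeline described above can be sketched as follows. This is a minimal illustrative sketch, not the disclosed implementation: the classifier functions, the `CLASSIFIER_LIBRARY` list, and `filter_responses` are all hypothetical names assumed for this example, and the keyword matching stands in for real content classifiers.

```python
# Illustrative sketch: a library of content classifiers filters
# virtual-assistant responses before they are surfaced.
# All names and the keyword-based logic are assumptions for this example.
from typing import Callable, List

# A classifier flags one category of child-inappropriate content.
Classifier = Callable[[str], bool]

def explicit_classifier(text: str) -> bool:
    """Toy stand-in: flags text containing a placeholder explicit term."""
    return "explicit" in text.lower()

def violent_classifier(text: str) -> bool:
    """Toy stand-in: flags text containing a placeholder violent term."""
    return "violent" in text.lower()

# The "library of content classifiers" from the disclosure, one per category.
CLASSIFIER_LIBRARY: List[Classifier] = [explicit_classifier, violent_classifier]

def is_family_safe(text: str) -> bool:
    # Content passes only if no classifier in the library flags it.
    return not any(flag(text) for flag in CLASSIFIER_LIBRARY)

def filter_responses(responses: List[str], kids_present: bool) -> List[str]:
    """Apply the classifier library only when the audience includes kids."""
    if not kids_present:
        return responses
    return [r for r in responses if is_family_safe(r)]
```

In this sketch the audience context is reduced to a single `kids_present` flag; in practice the context signal (who is currently present) would come from the assistant's environment, and queries would be filtered the same way as responses.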
Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 License.
Ni, Yuzhao, "Automatic content filtering in virtual assistants for kids", Technical Disclosure Commons (November 13, 2019).