Abstract

Large language models store information from prior user interactions as natural language memories to improve future responses. However, these memories are often context-dependent, relying on specific timeframes or locations that may change. Such temporal or contextual limitations can lead to outdated information, suboptimal model decisions, or inaccurate tool calls as the conversation context evolves.

A method is disclosed to transform these entries into context-independent memories. The model identifies context-dependent information and generates clarifying questions or custom user interface elements, such as date pickers, to obtain precise data from the user. The initial memory is then rewritten into a universally valid form or a parameterizable function. This process ensures that stored information remains accurate across different times and settings. By refining the memory storage process through user clarification, the relevance and reliability of model outputs are improved while preventing the accumulation of obsolete data.
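The rewriting step described above can be sketched in a few lines. This is a minimal illustration, not the disclosed implementation: the trigger phrases, function names, and the `Memory` structure are all assumptions made for the example, and a production system would use the model itself (plus UI elements such as date pickers) rather than string matching.

```python
from dataclasses import dataclass

# Hypothetical trigger phrases mapped to clarifying questions.
# A real system would detect context dependence with the model itself.
CONTEXT_DEPENDENT_TERMS = {
    "tomorrow": "On what exact date?",
    "next week": "Which week (starting on what date)?",
    "here": "Which location do you mean?",
}

@dataclass
class Memory:
    text: str
    context_independent: bool = False

def find_clarifying_questions(memory: Memory) -> list[str]:
    """Flag context-dependent phrases and produce clarifying questions
    to surface to the user (e.g. via a date picker)."""
    lowered = memory.text.lower()
    return [q for term, q in CONTEXT_DEPENDENT_TERMS.items() if term in lowered]

def rewrite(memory: Memory, answers: dict[str, str]) -> Memory:
    """Rewrite the memory with the user's precise answers, producing a
    context-independent entry."""
    text = memory.text
    for term, value in answers.items():
        text = text.replace(term, value)
    return Memory(text=text, context_independent=True)

memory = Memory("Remind me to renew my passport tomorrow")
questions = find_clarifying_questions(memory)
fixed = rewrite(memory, {"tomorrow": "on 2024-06-15"})
```

After clarification, the stored entry ("Remind me to renew my passport on 2024-06-15") remains valid no matter when it is later retrieved, which is the core of the disclosed approach.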

Creative Commons License

This work is licensed under a Creative Commons Attribution 4.0 License.