In the same conversation window, when a user goes back to an earlier conversation and starts a new topic, the long-term memory cannot adjust accordingly; it stays fixed on the memory content of the original conversation. Without intervention, new topics end up being affected by old memories. Is there any method besides manually toggling memory entries on and off?

Armor: The current method of manually toggling memory entries is flexible and useful, but it's easy to lose track after many reversals.
Let's ask Teacher Fangtang for a solution.
It's probably not easy to implement because there is too much backend data.
If memory extraction has already been triggered but you want to re-roll, you have to delete the long-term memory first and then delete the notebook. Manually summarizing long-term memory is even more troublesome, with toggles scattered everywhere.
The UI could be optimized to make data editing easier. For example, notebook data cannot be exported or imported in bulk; when bulk-selecting long-term memory entries, long entries take a long time to scroll through; when selecting keys in the notebook, the key names are displayed truncated; and there is no way to open the notebook directly from the floating ball.
Master of the Notepad! I have some questions about the Notepad to consult you about (although this long-term memory recall issue doesn't involve the Notepad, haha). If I need to use the Notepad to record NPC or item information, should the relevant prompt be injected into every round of conversation so the model can decide whether to call the Notepad that turn? The model has already become so dumb that I can hardly stand it, and I'm worried this will degrade its intelligence further.
There are two methods now. The first is the one you mentioned: the main model makes a judgment each time and then executes a fixed command to write data into the notebook. The other is the new long-term memory feature. It is effectively a second memory extraction, except that this time the data is placed in the character notebook instead of long-term memory. Like long-term memory, this function is an additional model call. Its default prompt can be edited under custom prompts (the "Character Information Extraction Prompt"), and it must return a fixed format so the data can be written into the character notebook.
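The second method described above can be sketched roughly as follows. This is only an illustration, assuming a JSON "fixed format"; the app's real extraction prompt is user-editable and its actual internals, the `call_model` function, and the prompt wording here are all hypothetical.

```python
import json

def build_extraction_prompt(history):
    # Hypothetical wording; the real "Character Information Extraction
    # Prompt" is editable in the app's custom prompts and may differ.
    return (
        "From the conversation below, extract NPC/item facts as JSON of the "
        'exact form {"entries": [{"key": "<name>", "value": "<fact>"}]}. '
        "Return ONLY the JSON object.\n\nConversation:\n" + history
    )

def extract_to_notebook(call_model, history, notebook):
    """Run the extra model call and merge its fixed-format output."""
    raw = call_model(build_extraction_prompt(history))
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        return notebook  # malformed output: leave the notebook untouched
    for entry in data.get("entries", []):
        if "key" in entry and "value" in entry:
            notebook[str(entry["key"])] = str(entry["value"])
    return notebook
```

The key point is the one the answer emphasizes: because this is a separate extraction call, the reply must come back in a fixed, parseable format, or the data cannot be written into the notebook.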
Oh, I see! I completely understand now, haha. I'll give it a try. Thank you for your patient explanation, teacher!
This issue is quite troublesome, because long-term memory entries do not correspond to individual chat records, so for now there is no way to trace back.
Thank you for the explanation, teacher~ There is indeed this issue.
Or could this be achieved indirectly by manually selecting certain long-term memory entries plus a topic and migrating them to a new conversation window? It's just a personal thought about an odd issue I ran into during use; it's fine if it can't be solved, haha. Thank you, teacher!
Can the minimum number of long-term memory entries be set below 100, for example to 1? The difference between 100 and 5000 isn't particularly noticeable, and I want long-term memory to roll over automatically without manual deletion.

b227 changed the last message's menu item from "rewind" to "undo". This undo not only reverts to before the message was sent but also rolls back long-term memory (including the knowledge base) and the character notebook in sync.
That's impressive! How did you implement it? Did you tag the long-term memory (knowledge base) and character notebook entries triggered by the last message with a key? Will it affect data that was manually added after that message?
① It directly backs up the files and content. ② Yes, so it can only be rolled back chronologically.
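The backup-based undo described above can be sketched like this. It is a minimal illustration, not the app's actual implementation: all names are made up, and the snapshot here is an in-memory deep copy standing in for the real file backup.

```python
import copy

class MemoryUndo:
    """Snapshot-based undo for the memory stores (illustrative only)."""

    def __init__(self):
        self._snapshot = None

    def before_message(self, long_term, knowledge_base, notebook):
        # Deep-copy before the message is sent, so later edits
        # (automatic or manual) cannot mutate the backup.
        self._snapshot = copy.deepcopy((long_term, knowledge_base, notebook))

    def undo(self):
        # Purely chronological restore: anything added after the
        # snapshot, including manual edits, is rolled back too,
        # matching the limitation described in the answer above.
        if self._snapshot is None:
            raise RuntimeError("nothing to undo")
        return copy.deepcopy(self._snapshot)
```

This design explains the trade-off in the answer: because the whole state is backed up as-is rather than individual entries being keyed to the triggering message, the rollback can only move backward in time wholesale.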