- I checked the new version log and found the correct format; it turns out it still needs to be written as a JSON array, with straight quotes rather than curly ones. The format is as follows: [{"name": "Conversion Project", "upserted_info": "Test dialogue text 33121233"}]
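As a quick sanity check (field names taken from the version log above; the pretty-printed layout is just for readability), the array parses cleanly once the quotes are plain ASCII:

```python
import json

# The array format from the version log, written with straight ASCII quotes.
payload = '''[
  {
    "name": "Conversion Project",
    "upserted_info": "Test dialogue text 33121233"
  }
]'''

entries = json.loads(payload)  # parses as a list of objects
print(entries[0]["name"])      # → Conversion Project
```

Curly quotes (“ ”) are not valid JSON string delimiters, which is why the format only works after they are replaced.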
- In the new version, does RoleNotebook now always read characterDataJson and recent_chat_history, regardless of whether the variables are present in the prompt?
I customized the default characterInfoExtractorPrompt and completely removed the {{characterData}} and {{recent_chat_history}} variables. However, the inference log still shows the full contents of the character notebook and the recent chat history being read.
I found that the reasoning log keeps all historical reasoning logs with no apparent limit; I have already accumulated over two hundred entries, and after multiple conversations I have to scroll for a long time to reach the bottom.
There is a copy-log button in the upper right corner; however, the log disappears when the application is reopened.
Simply switching conversations does not start a new reasoning log; it still displays the log from the previous conversation. Once a new message is sent in the new conversation, that conversation's reasoning log appears at the bottom, while the historical entries at the top are still the previous conversation's log.
Teacher, could we add a "scroll to bottom" button to the chat interface? Sometimes I scroll up to check the details of a conversation and then have to scroll back down for a long time~ Having this button would be a bit more convenient.
The new long-term memory feature, "Save memory to knowledge base," is very hands-off, but the maximum number of long-term memories can only be set between 100 and 5000. This means that unless long-term memories are deleted manually, there will always be at least 100 entries duplicated between the knowledge base and long-term memory. Could the minimum allowed value for the maximum number of long-term memories be lowered to 1?
Let me add this; I originally thought it wasn't needed here. Not all prompt templates support {{char}}; each template only supports the variables present in its original prompt.
Occasionally, an error occurs: Application error: Null check operator used on a null value
I don't know the cause; maybe it's because I directly deleted the txt file of an applied knowledge base? Directly deleting an applied knowledge base txt from the role settings leaves an error in the inference log, but, as before, it does not affect normal use.