Long-term memory support for storing in the knowledge base
Long-term memory support for synchronizing updates to the character notebook
Events in the event book support linking to the knowledge base
Reading mode supports reading text wrapped in zero-width spaces
Quick input for zero-width spaces and knowledge base separators
Support for entering text in the input box while a request is being sent
Floating ball switch adds a reset position button
Support for using image creation tools and LivePage in Agent mode
LivePage SDK supports generating images and audio through ai.image and ai.audio; environment variables support primaryColor (the application's main color, e.g., #F59E0B)
Input box supports status animations
Model fetching from the API configuration is now available for all model input boxes
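As a rough illustration of the new SDK surface described above — the exact signatures are assumptions, so treat this as a sketch rather than the real API; a stub ai object stands in for the SDK so the snippet is self-contained:

```javascript
// Hypothetical sketch only: actual LivePage SDK signatures may differ.
// The stub "ai" object mimics an SDK that resolves media URLs.
const ai = {
  image: async (prompt) =>
    `https://example.invalid/image?prompt=${encodeURIComponent(prompt)}`,
  audio: async (text) =>
    `https://example.invalid/audio?text=${encodeURIComponent(text)}`,
};

// Assumed environment variables injected into the page, including the
// application's main color mentioned in the changelog.
const env = { primaryColor: "#F59E0B" };

// A LivePage card might combine generated media with the app accent color.
async function renderCard() {
  const imgUrl = await ai.image("a sunrise over mountains");
  const audioUrl = await ai.audio("Good morning!");
  return { imgUrl, audioUrl, accent: env.primaryColor };
}
```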
Optimizations
Fixed path error prompt when creating a backup on Windows
Fixed issue on some Android models where the status bar and navigation bar overlap during avatar cropping
Fixed issue on the desktop version where the input box full-screen toggle might not be clickable
Fixed issue where the background video would black out and flicker during voice reading
Fixed issue where cloud sync would automatically pull old topic chat records after starting a new topic and re-entering the chat page
Fixed issue where Android devices without built-in fts5 support experienced permanent memory failure
Updated the default image generation model of Silicon Flow to Qwen/Qwen-Image
Fixed issue where the floating window would freeze while vectorizing the knowledge base
Fixed json_query parameter definition in the character notebook search method
Optimized LivePage cards to display values without needing to be opened
Fixed issue where double-quoting column names in SQLite failed on some devices
Fixed text display format of unlock and completion conditions in the event book
Additional Updates During Beta Testing
b212
API configuration defaults to hiding the API key
Fixed issue where overly long LivePage card data would wrap and break the list layout
The upload was successful, and it was also downloaded to the computer, but there are over a dozen fewer conversations on the computer. Additionally, the real-time conversations on the phone did not successfully sync to the computer. Both ends have the sync feature enabled, and I can see the small dot next to the character names on the computer, but it’s still missing over a dozen entries, and new chat content isn’t transferring either. Logging out and back in doesn’t work.
I want to confirm the working logic of this feature: 1. When memory extraction is triggered, will it immediately trigger a “character information extraction” prompt, and as long as the output content conforms to the JSON object format, it will be automatically stored in the character notebook without using the agent function? 2. How does the notebook understand the current JSON format? Is it fixed that the “note” field after the name is the key, and the “note” field of upserted_info is the value? 3. How can I improve the success rate of storing in the character notebook? I modified this part of the prompt to let the AI directly store the {{recent_chat_history}} into the notebook as is. I also changed the memory extraction rounds to extract every round. However, after extracting memory, it only waits for a while, and no new key appears in the notebook.
1. Character information extraction is performed after each memory retrieval. 2. name and upserted_info are fixed and mandatory, but the attributes within upserted_info can be customized through prompts. 3. The default prompt adds a key for each new character; you need to adjust the prompt yourself to improve stability.
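Based on the reply above, the extraction output might look like the following sketch. Only name and upserted_info are fixed; the attributes inside upserted_info (note, mood here) are illustrative assumptions, as is the upsert helper:

```javascript
// Hypothetical example of a character-extraction result that the notebook
// could store without tool calls. Only "name" and "upserted_info" are
// fixed fields (per the reply above); "note" and "mood" are assumptions.
const extractionOutput = JSON.stringify([
  {
    name: "Alice",                       // becomes the notebook key
    upserted_info: {
      note: "Prefers tea over coffee",   // customizable attribute
      mood: "cheerful"                   // customizable attribute
    }
  }
]);

// Sketch of how such output could be merged into a notebook map:
// existing attributes for a character are kept, new ones are upserted.
function upsertCharacters(notebook, jsonText) {
  for (const entry of JSON.parse(jsonText)) {
    notebook[entry.name] = {
      ...(notebook[entry.name] || {}),
      ...entry.upserted_info,
    };
  }
  return notebook;
}

const notebook = upsertCharacters({}, extractionOutput);
```

This also shows why the output only needs to be valid JSON: as long as the model emits this shape, the app can store it directly, with no agent or tool-call support required.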
The prompt words in the image above are the modified default prompt words, and I’m not sure where the format is incorrect. I’ll test it more.
This feature has great potential.
If it works through a fixed JSON format, Omate can create keys and values directly without the agent function. This means that models/APIs that do not support tool calls can also use the "notebook auto-load" feature to utilize the notebook tool, and it is even more efficient than tool calls.
This feature amounts to two model interactions. If it supports variables other than {{character_data}} (such as {{item_data}}, etc.), the second interaction could drive dynamic prompts. For example, a user previously mentioned using a small model to filter memory for a large model in exactly this kind of multi-model workflow.