[beta] OMate 1.9.4 b238 Bug Fixes

https://download.omate.net/omate-pro-1.9.4-b238.apk

  • Replaced dartantic_ai library with the official version 2.2.2
  • Fixed the issue where regex rules did not take effect in real time after API configuration changes
  • Fixed the issue of BackgroundFetch initialization failure on some iOS devices
  • Optimized the chat reply function by inserting rewritten suggestions before the user’s message instead of appending them at the end
  • Fixed the issue where background videos and speaking videos did not work in certain scenarios

Thank you for your hard work. However, the background video and speaking video in this version are still not working on my end.

So, what device is it exactly, which OS, and which version?

Phone:

Model Name: HUAWEI Pura 70 Pro+

HarmonyOS Version: 6.0.0

Software Version: 6.0.0.125 (SP8C00E125R5P7patch05)

OpenHarmony Version: OpenHarmony 6.0

I want to confirm how the regex takes effect:

  1. Does it take effect sequentially from top to bottom according to the regex rules, or does it take effect simultaneously?
  2. Does the summary model (long-term memory + role notebook) receive the conversation history from before or after regex processing?

I tested point 2: the summary model appears to receive the text from before regex processing, and even a <fakecot/> placed inside a <details> tag is sent along with everything else.
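On question 1, a minimal sketch of top-to-bottom (sequential) application, where each rule operates on the previous rule's output rather than all rules firing on the original text at once; `apply_rules` and the sample rules are illustrative assumptions, not OMate's actual implementation:

```python
import re

# Sequential application: each rule sees the output of the rule above it.
# (apply_rules and the sample rules are illustrative, not OMate's API.)
def apply_rules(text: str, rules: list[tuple[str, str]]) -> str:
    for pattern, replacement in rules:
        text = re.sub(pattern, replacement, text)
    return text

rules = [
    (r"💐", "a bunch of holly"),
    (r"holly", "mistletoe"),  # order matters: this rewrites rule 1's output too
]
print(apply_rules("She handed me 💐.", rules))
# → She handed me a bunch of mistletoe.
```

Under simultaneous application the second rule would never see "holly", so the output reveals which mode is in effect.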

Operation Process
  1. Change the memory extraction prompt to let the AI answer questions in a JSON array. The questions are:
  2. Add a regex to replace 💐 with a bunch of holly.
  3. Manually edit two <details> tags in the conversation, one with 💐 inside, and the other with <fakecot/>.

The extraction result looks like this:

So the regex replacing 💐 with a bunch of holly did not take effect during automatic extraction, and <fakecot/> did nothing to hide the <details> content.

Speaking of which! Since that's the case, could Teacher Fangtang optimize the text uploaded for automatic extraction? Quite a few users currently wrap their chain of thought and side skits in <details>. If automatic extraction reads the text inside this tag, it eats up too much attention and seems to hurt the quality of memory extraction.

No wonder the summarized plot always differs from the actual story text.

When pre-regex text and chain-of-thought skits are combined, the load on the summary model simply explodes.

Uh, haven’t you all read the regex tutorial? It can be set to take effect before storing in the database and before sending to the API…

I have read it. The point of setting up the two test tags was to simulate the two scenarios in which the regex takes effect: before storing in the database and before sending to the API.

Scenario 1: when making an API request. The regex replacing 💐 with a bunch of holly before sending did not take effect, and the AI saw 💐 rather than a bunch of holly. This means a regex that substitutes <fakecot/> before sending won't work either.

Scenario 2: when the AI responds. This simulated the effect of a regex that replaces a certain field with <fakecot/> before storing in the database, but in reality the AI saw the entire content of the <details> tag.


In summary: when sending text to the summarization model, the system pulls the corresponding amount of conversation history straight from the database, combines it with the memory extraction prompt, and sends everything off without applying any regex from the API config or role card, so the regex never runs.

But here is the problem: in current OMate usage, dedicated internal-monologue prompts are rarely written anymore. Chain of thought, skits, and notebooks are all implemented with <details> tags. If users want to see this content, it cannot be replaced and hidden before being stored in the database; and once stored, it is sent to the summarization model as-is, with no regex processing. This is the real cause of the summarization model being “overloaded.”
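A minimal sketch of the behavior described here, assuming the chat path applies "before send" regex to the history while the summary path reads raw database rows; every name below is hypothetical, not an OMate internal:

```python
import re

# Illustrative model of the reported behavior (all names are assumptions):
# the chat path runs "before send" regex over the history, while the
# summary path takes rows from the database as-is.
DB_ROWS = ["<details><fakecot/>hidden skit</details>Visible reply"]
SEND_RULES = [(r"<details>[\s\S]*?</details>", "")]

def build_chat_context(rows, rules):
    out = []
    for row in rows:
        for pat, rep in rules:
            row = re.sub(pat, rep, row)
        out.append(row)
    return out

def build_summary_context(rows):
    # Reported behavior: no regex is applied on this path.
    return list(rows)

print(build_chat_context(DB_ROWS, SEND_RULES))  # ['Visible reply']
print(build_summary_context(DB_ROWS))           # tags and skit included
```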


PS: I tried using regex to replace details with think, and the unconditional filtering of think did take effect when sending to the summarization model. However, since only one internal monologue can exist, replacing at storage time leaves extra think tags that don't collapse, which hurts the appearance; replacing before sending runs into the same problem as Scenario 1, where the regex doesn't fire, so it accomplishes nothing. I've gone back and forth on this but still can't see how to keep the summarization model's load “pure and light” under the current setup.

Sorry, I was looking at the wrong thread. The summary model doesn't use regex because its input isn't organized by depth; regex even has depth-control options, so applying it directly would cause problems…

I feel the summary model should see the same information as the user and the chat model do. What issues could arise if the summary model sees the post-regex content?

The regex meant to take effect before sending probably can't be applied to the summary model because of the depth setting. For example, if a regex's depth range is 10 to ∞ and automatic extraction pulls 10 entries, the regex won't fire at all; if 20 entries are extracted, then roughly half will be processed and the other half not. It seems some mechanism specific to the summary model or memory extraction is needed for the text being sent.
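The depth mismatch works out as simple arithmetic, assuming depth is counted from the newest message (0 = most recent); the function name and counting convention are illustrative assumptions:

```python
# Hypothetical sketch of the depth mismatch: a rule with depth range
# [min_depth, ∞) only touches messages at least min_depth back from
# the newest one, so a shallow extraction window can miss it entirely.
def messages_regex_applies_to(extracted: int, min_depth: int) -> int:
    """How many of the `extracted` newest messages fall inside the
    depth range [min_depth, ∞) and would be regex-processed."""
    return max(0, extracted - min_depth)

print(messages_regex_applies_to(10, 10))  # extract 10 → regex hits nothing
print(messages_regex_applies_to(20, 10))  # extract 20 → half processed
```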

There is still the issue of inference logs looping hundreds of entries. Yesterday a summary failed, and it was hard to find the cause among 700+ logs. Only after reverting to build 222 did I discover that the summary model's key had expired. It doesn't significantly affect normal use, but I still hope it gets fixed :melting_face:

My current regex mainly handles the current round's chain of thought, status bar, and some small scenes. The summary model shows that the messages in the conversation history before this round are fine, since they are already post-regex.

So now I have the summarization model start from the second message and skip the current round. However, I summarize one extra message, so that on the next summarization pass the current round's message, the one that wasn't regex-processed, becomes the last one, and the correction pass can cover it.

Example settings:
Number of historical messages: 5
Memory extraction trigger rounds: 5
Minimum number of messages for memory extraction: 12
Memory extraction prompt: Do not record the last round of user + assistant dialogue; that is, do not record assistant messages containing …</chain of thought> or the corresponding user messages.

However, this still doesn't truly give the summarization model the post-regex messages, and it still sometimes ends up recording pre-regex content.

https://download.omate.net/omate-pro-1.9.5-b239.apk

In the summary request, the chat logs are now pre-processed with regex before being merged into the context, which should add support for API-type regex. It hasn't been thoroughly tested yet, so please try it first @YeZip @Selvadin


I tested the “AI reply” regex, and now the summary model seems to treat the replaced part as empty, like this:

Find regex: <思维链>[\s\S]*?</思维链>
Replacement string: A piece of text

Assistant’s original text: <思维链>Content of thought</思维链>Official response text
Text entered into the conversation record: A piece of textOfficial response text

What the summary model sees: Official response text
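The gap reported above can be reproduced with a plain `re.sub`: a correct substitution keeps the replacement string, whereas the summary path behaves as if the replacement were empty. This is a sketch of the observed symptom, not of OMate's code:

```python
import re

# Expected: substitution keeps the replacement string.
# Reported: the summary path behaves as if the replacement were "".
pattern = r"<思维链>[\s\S]*?</思维链>"
original = "<思维链>Content of thought</思维链>Official response text"

expected = re.sub(pattern, "A piece of text", original)
as_reported = re.sub(pattern, "", original)

print(expected)     # A piece of textOfficial response text
print(as_reported)  # Official response text
```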

The main issue before was that the conversation text sent to the summary model was from before the “AI reply” regex took effect, like this:

Find regex: <思维链>[\s\S]*?</思维链>
Replacement string: A piece of text

Assistant’s original text: <思维链>Content of thought</思维链>Official response text
Text entered into the conversation record: A piece of textOfficial response text

What the summary model sees: <思维链>Content of thought</思维链>Official response text

There's another issue with regex: rules currently take effect sequentially from top to bottom, so their order matters quite a bit, but there is no way to reorder them inside OMate.

I asked trae to make some changes; try again: https://download.omate.net/omate-pro-1.9.5-b240.apk

It's still the pre-regex content. There may have been some randomness when I tested yesterday: it was probably pre-regex content yesterday too, but the model skipped the chain of thought on its own.

Here is my memory extraction prompt:

You are a message copying assistant

:warning: Core instruction: Copy the dialogue text and paste it into the corresponding format for direct output.

Requirements:

  • All characters, symbols, punctuation, and tags, including the chain of thought, must be kept without omission, but punctuation must use full-width Chinese punctuation so the JSON stays valid.
  • Directly output the copied JSON format text

Format example:

Dialogue text:
User: I checked the power control panel with my hand and found a power failure, saying to you: “The control panel is also not working.”
Assistant: On November 12, 2025, at 14:50, I remembered Lin Feng was still inside, my fingers trembling, I quickly took out my phone, trying to contact the outside world, nervously saying: “There is no signal on the phone. What should we do?”
Copied output:
["User: I checked the power control panel with my hand and found a power failure, saying to you: ‘The control panel is also not working.’ Assistant: On November 12, 2025, at 14:50, I remembered Lin Feng was still inside, my fingers trembling, I quickly took out my phone, trying to contact the outside world, nervously saying: ‘There is no signal on the phone. What should we do?’"]

Below is the dialogue text:
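As an aside on the full-width punctuation requirement in the prompt above: an unescaped ASCII double quote inside a JSON string terminates it early, while full-width quotes are ordinary characters, which is presumably why the prompt demands them. A quick check:

```python
import json

# An unescaped ASCII " inside a JSON string breaks parsing;
# full-width “ ” are just ordinary characters inside the string.
ok = '["User said: “hello”"]'
bad = '["User said: "hello""]'

print(json.loads(ok))  # parses fine
try:
    json.loads(bad)
except json.JSONDecodeError:
    print("ASCII quotes inside the string break the JSON")
```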

The current situation is:

Screenshot_2026-01-31-15-18-41-420_org.omate.cons

Screenshot_2026-01-31-15-18-58-646_org.omate.cons

I tested it, and it works. Note that the regex only applies to the not-yet-merged history records before the summary; it does not process the content returned afterward.

Additionally, for testing purposes, I have separated the summary function. https://download.omate.net/omate-pro-1.9.5-b241.apk

Did build b241 change anything? I tested it, and the summary model still summarizes the pre-regex text.