Happy New Year to Teacher Sugar Balloon and all friends!
I have two main issues I would like to seek help with in this post. I am calling the Gemini API through the Google AI Studio channel.
- Backend log issue: today, after turning off the agent mode switch on the API settings page and sending a message, the backend log suddenly showed dozens to hundreds of repeated entries, and this happens every time a message is sent. With the agent mode switch turned back on, the log returns to normal, about twenty entries. The abnormal part looks like the following (I don't know how to attach a picture, so I copied the text here; please click the triangle to expand):
[details="Summary"] Starting to process user message…
maxTokens: 60000
historyCount: 10
Searching the knowledge base…
Knowledge base query text: Test
(skipped here)
Selected the 5 most relevant pieces of information based on similarity (maximum call limit: 5)
Initializing chat model instance, using API specification: AIApiSpec.openai, using Base URL: xxxx, using API Key: xxxx, using model: null, additional parameters: null
Initializing summary model instance, using API specification: AIApiSpec.openai, using Base URL: (omitted), using API Key: xxxxxxxx, using model: gemini-2.5-pro
Initializing chat model instance, using API specification: AIApiSpec.openai, using Base URL: xxxx, using API Key: xxxx, using model: null, additional parameters: null
Initializing summary model instance, using API specification: AIApiSpec.openai, using Base URL: (omitted), using API Key: xxxxxxxx, using model: gemini-2.5-pro
(and then it repeats like this for hundreds of lines)
[/details]
Although the "initializing chat model instance" entries in the log show null, I am sure my chat model name, Base URL, and API Key are all filled in correctly and completely, because the AI responds normally. The only problem is the large number of repeated log entries in the backend (the ones that repeat hundreds of times above). This does not affect usage, but I am not sure whether it carries a risk of crashing the software, and since I don't always need agent mode, I would rather not keep that switch on all the time. Hence this request for help.
- Temperature issue: the temperature adjustment has never taken effect, whether I call AI Studio directly or go through a self-built reverse proxy; on both channels, changing the temperature in Omate makes no difference. To verify, I used the same API on other AI chat platforms, where the adjustment worked fine. My testing method was to set the temperature to its maximum (e.g., 1.5-2.0) and check whether the AI starts talking nonsense, but no matter how I adjusted it in Omate, the output stayed the same (I am using the Gemini 2.5 Pro model).
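In case it helps anyone reproduce this, one way to rule out the client entirely is to send the request to the OpenAI-compatible endpoint yourself and confirm the `temperature` field is actually in the request body. This is only a minimal sketch: the Base URL and key below are placeholders, and I am assuming the standard OpenAI-style `/chat/completions` payload that such compatibility endpoints accept.

```python
import json
import urllib.request


def build_chat_request(base_url, api_key, model, prompt, temperature):
    """Build (but do not send) an OpenAI-compatible /chat/completions request.

    base_url and api_key are placeholders; substitute your own values.
    """
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,  # the parameter under test
    }
    return urllib.request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )


# Build two requests that differ only in temperature; sending each a few
# times and comparing the replies shows whether the endpoint itself honors
# the setting, independent of any chat client in between.
low = build_chat_request("https://example.invalid/v1", "sk-xxxx",
                         "gemini-2.5-pro", "Tell me a story.", 0.1)
high = build_chat_request("https://example.invalid/v1", "sk-xxxx",
                          "gemini-2.5-pro", "Tell me a story.", 2.0)
print(json.loads(low.data)["temperature"])   # 0.1
print(json.loads(high.data)["temperature"])  # 2.0
# To actually send one: urllib.request.urlopen(high).read()
```

If the raw endpoint does vary its output with temperature but Omate does not, that would suggest the client is dropping or overriding the parameter rather than the API ignoring it.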
Thank you all for your answers.