AIConfigSheet

Last modified by Wecon on 2026/03/17 15:36

Configuration sheet for the LLM application. At least one entry, with one model and its associated URL prefix/token, is required for the application to work. The URL prefix must allow access to /chat/completions, and to /models when the configured list of models is empty. For example, if the prefix is http://example.url.com/ then both http://example.url.com/models and http://example.url.com/chat/completions must be reachable.
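As a minimal sketch of how the prefix expands into the two endpoints (the helper name is hypothetical, not part of the application):

```python
def endpoint_urls(prefix: str) -> dict:
    """Build the two endpoint URLs the application needs from a URL prefix."""
    # Strip any trailing slash so the join never produces a double slash.
    base = prefix.rstrip("/")
    return {
        "models": f"{base}/models",                 # used when the model list is empty
        "chat": f"{base}/chat/completions",         # always required
    }

urls = endpoint_urls("http://example.url.com/")
print(urls["models"])  # http://example.url.com/models
print(urls["chat"])    # http://example.url.com/chat/completions
```

Both URLs should respond with the configured token before the entry is considered usable.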

Each server entry in the sheet has the following fields: Server Name, URL prefix, Token, and Can Stream, plus a Delete (✕) button to remove the entry. Use the Add button to create a new entry.