Modern interface with llama-cpp-python, LEMMAx memory, IDK detection, and MCP integration
Continuum Configuration
Dynamic LLM Configuration
Defaults: temperature 0.7, top-p 0.95
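Assuming the two defaults above are temperature and top-p (the usual llama.cpp pairing), they map onto the standard llama-cpp-python parameter names like this; the dict itself is purely illustrative:

```python
# Sampling defaults from the panel above. The parameter names are the
# standard llama-cpp-python ones; pairing them with 0.7/0.95 is an assumption.
SAMPLING_DEFAULTS = {
    "temperature": 0.7,
    "top_p": 0.95,
}
```

These keys can be passed directly to a completion call, as in the inference sketch further down under LEMMAx Configuration.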
Advanced LLM Personality
Basic Identity
Personality Traits
Introverted ↔ Extraverted
Analytical ↔ Empathetic
Pragmatic ↔ Creative
Serious ↔ Humorous
Impatient ↔ Patient
Reserved ↔ Curious
Communication Style
Behaviors
Expertise & Interests
Custom Prompt
Personality Preview
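The panel does not show how trait sliders become model behavior. A plausible approach, sketched here as an assumption rather than Continuum's actual code, is to render slider positions (0–100 between the two poles) plus the custom prompt into a system-prompt fragment:

```python
# Illustrative only: render 0-100 trait sliders into a system-prompt fragment.
# Trait names mirror the sliders above; the rendering scheme is an assumption.
TRAITS = {
    ("introverted", "extraverted"): 65,
    ("analytical", "empathetic"): 40,
    ("pragmatic", "creative"): 55,
    ("serious", "humorous"): 30,
    ("impatient", "patient"): 80,
    ("reserved", "curious"): 70,
}

def personality_prompt(traits: dict, custom_prompt: str = "") -> str:
    lines = ["You are an assistant with the following personality:"]
    for (low, high), value in traits.items():
        dominant = high if value >= 50 else low
        strength = abs(value - 50) * 2  # 0 = neutral, 100 = fully one-sided
        lines.append(f"- {dominant} ({strength}% pronounced)")
    if custom_prompt:
        lines.append(custom_prompt)
    return "\n".join(lines)

print(personality_prompt(TRAITS, "Answer concisely."))
```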
LEMMAx Configuration
About llama-cpp-python: this interface uses your custom build of the llama-cpp-python library for local inference, with LEMMAx and IDK integration. No network connection is required; everything runs locally.
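For reference, a minimal local-inference sketch using the standard upstream llama-cpp-python API (the model path is a placeholder; the LEMMAx/IDK hooks in the custom build are not part of the upstream library and are omitted here):

```python
from llama_cpp import Llama

# Standard llama-cpp-python usage: everything runs locally, no network needed.
llm = Llama(
    model_path="models/your-model.gguf",  # placeholder path
    n_ctx=4096,                           # context window; adjust to your model
)

response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Hello!"}],
    temperature=0.7,  # UI default above
    top_p=0.95,       # UI default above
)
print(response["choices"][0]["message"]["content"])
```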
Model Selection
Warning: Changing the model will cause COMPLETE DATA LOSS of all existing embeddings, memory, and conversation history. Embeddings are not compatible between different models. This action cannot be undone!
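The incompatibility exists because each model produces vectors in its own space, often with its own dimensionality. One defensive pattern, sketched here as an assumption rather than Continuum's actual mechanism, is to fingerprint the model that produced the stored embeddings and refuse to mix:

```python
import hashlib
import json
import pathlib

# Illustrative guard: store the embedding model's identity next to the index
# and refuse to reuse embeddings created by a different model.
META = pathlib.Path("embeddings/meta.json")  # hypothetical location

def model_fingerprint(model_path: str) -> str:
    """Cheap identity check: hash the file name and size, not the weights."""
    p = pathlib.Path(model_path)
    raw = f"{p.name}:{p.stat().st_size}".encode()
    return hashlib.sha256(raw).hexdigest()

def check_compatible(model_path: str) -> bool:
    if not META.exists():
        return True  # fresh index, nothing to clash with
    stored = json.loads(META.read_text())["fingerprint"]
    return stored == model_fingerprint(model_path)
```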
General Settings
Number of previous messages to keep in context (default: 10)
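A sketch of the trimming this setting implies (the function is hypothetical; llama-cpp-python itself simply receives whatever message list you pass):

```python
def trim_history(messages: list[dict], keep: int = 10) -> list[dict]:
    """Keep the system prompt (if any) plus the last `keep` messages."""
    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]
    return system + rest[-keep:]
```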
IDK (I Don't Know) Configuration
Enables automatic detection of uncertain responses
Defaults: 0.8, 0.3
Uncertainty Weights
Defaults: 0.4, 0.3, 0.3, 1.8, 0.3, 0.5, 5.0, 0.3, 40.0
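The panel exposes these values without labels, but weighted uncertainty scoring typically combines several signals into one score that is compared against a threshold. The sketch below is an assumption about the mechanism, not LEMMAx/IDK source code: the signal names are invented, the 0.4/0.3/0.3 split is borrowed from the defaults above only because it sums to 1.0, and 0.8 is assumed to be the detection threshold (the first IDK default above).

```python
# Hypothetical IDK scoring: signal names are invented for illustration;
# only the weight values come from the panel defaults above.
WEIGHTS = {"entropy": 0.4, "hedging_phrases": 0.3, "low_logprob": 0.3}
IDK_THRESHOLD = 0.8  # assumed meaning of the first IDK default

def idk_score(signals: dict[str, float]) -> float:
    """Weighted sum of normalized (0-1) uncertainty signals."""
    return sum(WEIGHTS[name] * signals.get(name, 0.0) for name in WEIGHTS)

def is_uncertain(signals: dict[str, float]) -> bool:
    return idk_score(signals) >= IDK_THRESHOLD

print(is_uncertain({"entropy": 0.9, "hedging_phrases": 0.8, "low_logprob": 0.9}))
```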
IDK Response Messages
Remove model suggestions from IDK responses
Dynamic Temperature
Automatically adjusts temperature based on context
Defaults: 0.5, 0.1, 2.0
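A plausible shape for this feature, sketched as an assumption: reading the three defaults above as base = 0.5, minimum = 0.1, and maximum = 2.0, and scaling the temperature with some 0–1 context signal:

```python
# Hypothetical dynamic-temperature adjustment. Interpreting the three panel
# defaults as base=0.5, min=0.1, max=2.0 is an assumption.
BASE_TEMP, MIN_TEMP, MAX_TEMP = 0.5, 0.1, 2.0

def dynamic_temperature(uncertainty: float) -> float:
    """Scale temperature with a 0-1 context signal, clamped to [MIN, MAX]."""
    t = BASE_TEMP * (1.0 + uncertainty)  # more uncertainty -> more exploration
    return max(MIN_TEMP, min(MAX_TEMP, t))
```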
RulesNet
Enables rule-based validation and guidance
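Rule-based validation usually means running a draft response through a list of checks before it is shown. A minimal sketch follows; the rule contents are placeholders, not RulesNet's actual rules:

```python
# Illustrative rule table: (name, predicate, violation message).
RULES = [
    ("non_empty", lambda text: bool(text.strip()),
     "response must not be empty"),
    ("no_api_key", lambda text: "api_key" not in text.lower(),
     "response must not leak an API key"),
]

def validate(response: str) -> list[str]:
    """Return the messages of every rule the draft response violates."""
    return [msg for _, ok, msg in RULES if not ok(response)]
```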
MemEngine
DISABLED BY DEFAULT
Important: MemEngine is disabled by default for testing purposes. Use profiles to enable it safely. Changes require a llama-server restart.
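Since the note points at profiles as the safe way to enable MemEngine, here is a sketch of what such a toggle could look like. The structure and key names are assumptions; only "disabled by default" and "requires a llama-server restart" come from the note above.

```python
# Hypothetical profile structure; key names are illustrative.
DEFAULT_PROFILE = {"memengine": {"enabled": False}}  # shipped default
TESTING_PROFILE = {"memengine": {"enabled": True}}   # opt-in via a profile

def apply_profile(profile: dict) -> None:
    if profile["memengine"]["enabled"]:
        # Per the note above, enabling MemEngine takes effect only after a
        # llama-server restart.
        print("MemEngine enabled; restart llama-server to apply.")
```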