v1.4.0 · October 26, 2024
- llama.cpp updated to b3982
- Added RAG support for PDF documents
- Chat settings UI improvements
v1.3.9 · October 7, 2024
- llama.cpp updated to b3837
- Added support for Llama 3.2, Llama 3.1, RWKV, MiniCPM (2.5, 2.6, 3), and Chameleon models
v1.3.4 · September 8, 2024
- llama.cpp updated to b3445
- Added Gemma 2, T5, JAIS, BitNet, GLM (3, 4), and Mistral Nemo support
- Added OpenELM support
v1.3.0 · June 24, 2024
- llama.cpp updated to b3190
- Added support for DeepSeek-V2 and GPT-NeoX (Pythia and others)
- Added support for Markdown formatting
v1.2.5 · May 27, 2024
- Save/load context state: you can now continue a dialog with the model even after the program is reopened. Read more here.
- Chat settings such as sampling are now applied without reloading the chat.
v1.2.0 · May 15, 2024
- Added shortcuts support
- llama.cpp updated to b2864
- Added Llama 3 instruct template
v1.1.1 · May 4, 2024
- Added Gemma template and download link
- Added a warning when creating a chat without a selected model
- Fixed some bugs
v1.1.0 · April 25, 2024
- llama.cpp updated to b2717
- Added Phi-3, Mamba (CPU only), Gemma, StarCoder2, GritLM, Command-R, MobileVLM_V2, and Qwen2-MoE models
- Added IQ1_S, IQ2_S, IQ2_M, IQ3_S, IQ4_NL, and IQ4_XS quantization support