LLM Farm news

v1.0.0

  • Added support for the multimodal models MobileVLM, Yi-VL, LLaVA, and Obsidian (tested on MobileVLM 3B)
  • Fixed a crash when switching models

v0.9.5

  • llama.cpp updated to b2135
  • Added the ability to download models from the application menu
  • Added progress indicator for model loading

v0.9.2

  • Added the ability to specify a system prompt, which is prepended to the first message in the session. See the FAQ.
  • Added the ability to clone a chat (without its message history)

v0.9.0

  • llama.cpp updated to b1891
  • Added support for Phi-2, TinyLlama, and other models
  • Various GUI improvements

v0.8.0

  • llama.cpp updated to b1601
  • Added support for StableLM-3b-4e1t models
  • Added support for Qwen models

v0.7.5

  • Added experimental LoRA training support
  • Added options to add a BOS/EOS token at the beginning/end of the prompt
  • Added options for handling special tokens

v0.7.0

  • llama.cpp updated to b1396
  • LoRA adapter support (more about LoRA here)
  • Added support for MPT and Bloom models

v0.6.2

  • llama.cpp updated to b1256
  • rwkv updated to 8db73b1
  • Added grammar sampling for llama models; put .gbnf files in the grammars directory
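A .gbnf file is a plain-text grammar in llama.cpp's GBNF notation; during generation, sampling is constrained to strings the grammar can produce. A minimal sketch of such a file (this particular grammar is illustrative, not one shipped with the app):

```
# force the model's output to be exactly "yes" or "no"
root ::= ("yes" | "no")
```

Sampling starts from the root rule, so every generated sequence must match it; see the llama.cpp grammars documentation for the full syntax.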

v0.5.2

  • llama.cpp updated to b1132: GGUF format support and improved speed. The old file format is still supported but uses the older llama.cpp build dadbed9.
  • Added Falcon model support (GGUF only)
  • Added mmap and mlock options