Guides
Run LLMs Locally with Llamafile
Install Llamafile
```shell
git clone [email protected]:Mozilla-Ocho/llamafile.git
brew install make   # provides gmake on macOS
cd llamafile
gmake -j8
sudo gmake install PREFIX=/usr/local
```
Download the model
- Download one of the recommended models:
  - Meta Llama 3 [recommended]
  - Hermes 2 Pro Mistral 7B
- To find your own model, browse Hugging Face for GGUF files
Run the model
```shell
llamafile -ngl 9999 -m path/to/model.gguf --host 0.0.0.0 -c 2048
```
- Recommended settings for Apple M1 users:

```shell
llamafile -ngl 9999 -m path/to/model.gguf --host 0.0.0.0 -c 2048 --gpu APPLE -t 12
```
- Visit localhost:8080 in your browser to try it out
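Beyond the browser UI, the llamafile server also exposes an OpenAI-compatible completions API on the same port. A minimal Ruby sketch of calling it with only the standard library (the helper names here are hypothetical, and it assumes the server started above is reachable on localhost:8080):

```ruby
require "net/http"
require "json"
require "uri"

# Build the JSON payload for the OpenAI-compatible
# /v1/chat/completions endpoint that llamafile serves.
def build_chat_request(prompt, model: "LLaMA_CPP")
  {
    model: model,
    messages: [{ role: "user", content: prompt }],
    temperature: 0.7
  }
end

# Send a prompt to the local server and return the reply text.
# Requires the llamafile server from the step above to be running.
def chat(prompt)
  uri = URI("http://localhost:8080/v1/chat/completions")
  res = Net::HTTP.post(uri, build_chat_request(prompt).to_json,
                       "Content-Type" => "application/json")
  JSON.parse(res.body).dig("choices", 0, "message", "content")
end

# chat("Say hello in one sentence.")  # needs the server running locally
```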
Use with Sublayer (skip to Basic Demo if you don't have a project)
Add to Gemfile:
gem 'sublayer', '~>0.0.7'
Run:
bundle install
Add to your configuration file:
```ruby
Sublayer.configuration.ai_provider = Sublayer::Providers::Local
Sublayer.configuration.ai_model = "LLaMA_CPP"
```
Build a Sublayer generator:
Use in your code:
MyGenerator.new(attributes).generate
Basic Demo
Let's make a Ruby project that finds a historical event that happened on today's date.
```shell
mkdir historical_event_finder
cd historical_event_finder
touch Gemfile
touch historical_event_finder.rb
```
```ruby
# Gemfile
source 'https://rubygems.org'

gem 'sublayer', '~>0.0.7'
```
```shell
bundle install
```
- Build a Sublayer generator with the following description:
  - "generator that uses Time.now and finds a fun historical event from the past that occurred on the same month/day as a value"
- Paste the result from above into `historical_event_generator.rb` (rename if needed)
- Write the following code in `historical_event_finder.rb`:

```ruby
# historical_event_finder.rb
require 'sublayer'
require_relative 'historical_event_generator'

Sublayer.configuration.ai_provider = Sublayer::Providers::Local
Sublayer.configuration.ai_model = "LLaMA_CPP"

puts HistoricalEventGenerator.new.generate
```
- Run your code:

```shell
ruby historical_event_finder.rb
```
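At its core, the generator described above only needs to turn `Time.now` into a month/day prompt for the model. A stdlib-only sketch of that step (the helper name is hypothetical and not part of the Sublayer API):

```ruby
require "time"

# Hypothetical helper illustrating the date-to-prompt step the
# generator description implies; not part of the Sublayer API.
def historical_event_prompt(now = Time.now)
  date = now.strftime("%B %-d") # e.g. "July 4"
  "Tell me a fun historical event from a past year that occurred on #{date}."
end

puts historical_event_prompt
```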