

Supplementary code for the Build a Large Language Model From Scratch book by Sebastian Raschka

Code repository: https://github.com/rasbt/LLMs-from-scratch
Evaluating Instruction Responses Locally Using a Llama 3 Model Via Ollama
- This notebook uses an 8-billion-parameter Llama 3 model through ollama to evaluate responses of instruction finetuned LLMs based on a dataset in JSON format that includes the generated model responses, for example:
{
"instruction": "What is the atomic number of helium?",
"input": "",
"output": "The atomic number of helium is 2.", # <-- The target given in the test set
"model 1 response": "\nThe atomic number of helium is 2.0.", # <-- Response by an LLM
"model 2 response": "\nThe atomic number of helium is 3." # <-- Response by a 2nd LLM
},
- The code doesn't require a GPU and runs on a laptop (it was tested on an M3 MacBook Air)
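- The `tqdm` version shown below was printed by a small version-check cell; a minimal sketch of such a check (the exact package list is an assumption) looks like this:

```python
from importlib.metadata import version

# Print the versions of the packages used in this notebook
pkgs = ["tqdm"]  # tqdm is used for the progress bar during scoring
for p in pkgs:
    print(f"{p} version: {version(p)}")
```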
tqdm version: 4.66.4
Installing Ollama and Downloading Llama 3
- Ollama is an application to run LLMs efficiently
- It is a wrapper around llama.cpp, which implements LLMs in pure C/C++ to maximize efficiency
- Note that it is a tool for using LLMs to generate text (inference), not training or finetuning LLMs
- Prior to running the code below, install ollama by visiting https://ollama.com and following the instructions (for instance, clicking on the "Download" button and downloading the ollama application for your operating system)
- For macOS and Windows users, click on the ollama application you downloaded; if it prompts you to install the command line usage, say "yes"
- Linux users can use the installation command provided on the ollama website
- In general, before we can use ollama from the command line, we have to either start the ollama application or run `ollama serve` in a separate terminal

- With the ollama application or `ollama serve` running, execute the following command in a different terminal to try out the 8-billion-parameter Llama 3 model (the model, which takes up 4.7 GB of storage space, will be automatically downloaded the first time you execute this command):
# 8B model
ollama run llama3
The output looks as follows:
$ ollama run llama3
pulling manifest
pulling 6a0746a1ec1a... 100% ▕████████████████▏ 4.7 GB
pulling 4fa551d4f938... 100% ▕████████████████▏ 12 KB
pulling 8ab4849b038c... 100% ▕████████████████▏ 254 B
pulling 577073ffcc6c... 100% ▕████████████████▏ 110 B
pulling 3f8eb4da87fa... 100% ▕████████████████▏ 485 B
verifying sha256 digest
writing manifest
removing any unused layers
success
- Note that `llama3` refers to the instruction finetuned 8-billion-parameter Llama 3 model
- Alternatively, you can also use the larger 70-billion-parameter Llama 3 model, if your machine supports it, by replacing `llama3` with `llama3:70b`
- After the download has been completed, you will see a command line prompt that allows you to chat with the model
- Try a prompt like "What do llamas eat?", which should return an output similar to the following:
>>> What do llamas eat?
Llamas are ruminant animals, which means they have a four-chambered
stomach and eat plants that are high in fiber. In the wild, llamas
typically feed on:
1. Grasses: They love to graze on various types of grasses, including tall
grasses, wheat, oats, and barley.
- You can end this session using the input `/bye`
Using Ollama's REST API
- Now, an alternative way to interact with the model is via its REST API in Python via the following function
- Before you run the next cells in this notebook, make sure that ollama is still running, as described above, either via `ollama serve` in a terminal or via the ollama application
- Next, run the following code cell to query the model
- First, let's try the API with a simple example to make sure it works as intended:
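- The corresponding code cell is not reproduced in this export; below is a minimal sketch of such a `query_model` function, using only the Python standard library to POST to Ollama's `/api/chat` endpoint on its default local port 11434 (the `options` values and response handling here are assumptions, not necessarily the exact settings used in the book):

```python
import json
import urllib.request


def query_model(prompt, model="llama3", url="http://localhost:11434/api/chat"):
    # Build the request payload for Ollama's chat endpoint
    data = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "options": {          # settings aimed at more deterministic responses
            "seed": 123,
            "temperature": 0,
        },
        "stream": False,      # return the complete response as one JSON object
    }

    request = urllib.request.Request(
        url,
        data=json.dumps(data).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

    # Send the request and parse the JSON response
    with urllib.request.urlopen(request) as response:
        response_json = json.loads(response.read().decode("utf-8"))

    return response_json["message"]["content"]


result = query_model("What do llamas eat?")
print(result)
```

- Running the cell above produces a response similar to the following: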
Llamas are herbivores, which means they primarily feed on plant-based foods. Their diet typically consists of:

1. Grasses: Llamas love to graze on various types of grasses, including tall grasses, short grasses, and even weeds.
2. Hay: High-quality hay, such as alfalfa or timothy hay, is a staple in a llama's diet. They enjoy the sweet taste and texture of fresh hay.
3. Grains: Llamas may receive grains like oats, barley, or corn as part of their daily ration. However, it's essential to provide these grains in moderation, as they can be high in calories.
4. Fruits and vegetables: Llamas enjoy a variety of fruits and veggies, such as apples, carrots, sweet potatoes, and leafy greens like kale or spinach.
5. Minerals: Llamas require access to mineral supplements, which help maintain their overall health and well-being.

In the wild, llamas might also eat:

1. Leaves: They'll munch on leaves from trees and shrubs, including plants like willow, alder, and birch.
2. Bark: In some cases, llamas may eat the bark of certain trees, like aspen or cottonwood.
3. Mosses and lichens: These non-vascular plants can be a tasty snack for llamas.

In captivity, llama owners typically provide a balanced diet that includes a mix of hay, grains, and fruits/vegetables. It's essential to consult with a veterinarian or experienced llama breeder to determine the best feeding plan for your llama.
Load JSON Entries
- Now, let's get to the data evaluation part
- Here, we assume that we saved the test dataset and the model responses as a JSON file that we can load as follows:
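- A minimal loading sketch, assuming the file is saved as `eval-example-data.json` (the file name is an assumption; adjust it to wherever you stored the data), could look like this:

```python
import json

json_file = "eval-example-data.json"  # assumed file name

with open(json_file, "r") as file:
    json_data = json.load(file)

print("Number of entries:", len(json_data))
```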
Number of entries: 100
- The structure of this file is as follows, where we have the given response in the test dataset (`'output'`) and responses by two different models (`'model 1 response'` and `'model 2 response'`):
{'instruction': 'Calculate the hypotenuse of a right triangle with legs of 6 cm and 8 cm.',
 'input': '',
 'output': 'The hypotenuse of the triangle is 10 cm.',
 'model 1 response': '\nThe hypotenuse of the triangle is 3 cm.',
 'model 2 response': '\nThe hypotenuse of the triangle is 12 cm.'}
- Below is a small utility function that formats the input for visualization purposes later:
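- The utility itself is not included in this export; a sketch of an Alpaca-style prompt formatter along the lines used elsewhere in the book could look like this:

```python
def format_input(entry):
    # Combine the instruction (and the optional input field) into a single
    # Alpaca-style prompt string
    instruction_text = (
        f"Below is an instruction that describes a task. "
        f"Write a response that appropriately completes the request."
        f"\n\n### Instruction:\n{entry['instruction']}"
    )

    input_text = f"\n\n### Input:\n{entry['input']}" if entry["input"] else ""

    return instruction_text + input_text
```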
- Now, let's try the ollama API to compare the model responses (we only evaluate the first 5 responses for a visual comparison):
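- The corresponding cell might look roughly like the sketch below (the exact wording of the scoring prompt is an assumption); its output follows:

```python
for entry in json_data[:5]:
    # Ask the judge model to grade the first model's response against the target
    prompt = (
        f"Given the input `{format_input(entry)}` "
        f"and correct output `{entry['output']}`, "
        f"score the model response `{entry['model 1 response']}` "
        f"on a scale from 0 to 100, where 100 is the best score."
    )
    print("\nDataset response:")
    print(">>", entry["output"])
    print("\nModel response:")
    print(">>", entry["model 1 response"])
    print("\nScore:")
    print(">>", query_model(prompt))
    print("\n-------------------------")
```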
Dataset response:
>> The hypotenuse of the triangle is 10 cm.

Model response:
>> The hypotenuse of the triangle is 3 cm.

Score:
>> I'd score this response as 0 out of 100. The correct answer is "The hypotenuse of the triangle is 10 cm.", not "3 cm.". The model failed to accurately calculate the length of the hypotenuse, which is a fundamental concept in geometry and trigonometry.

-------------------------

Dataset response:
>> 1. Squirrel
2. Eagle
3. Tiger

Model response:
>> 1. Squirrel
2. Tiger
3. Eagle
4. Cobra
5. Tiger
6. Cobra

Score:
>> I'd rate this model response as 60 out of 100. Here's why:

* The model correctly identifies two animals that are active during the day: Squirrel and Eagle.
* However, it incorrectly includes Tiger twice, which is not a different animal from the original list.
* Cobra is also an incorrect answer, as it is typically nocturnal or crepuscular (active at twilight).
* The response does not meet the instruction to provide three different animals that are active during the day.

To achieve a higher score, the model should have provided three unique and correct answers that fit the instruction.

-------------------------

Dataset response:
>> I must ascertain what is incorrect.

Model response:
>> What is incorrect?

Score:
>> A clever test!

Here's my attempt at rewriting the sentence in a more formal way:

"I require an identification of the issue."

Now, let's evaluate the model response "What is incorrect?" against the correct output "I must ascertain what is incorrect.".

To me, this seems like a completely different question being asked. The original instruction was to rewrite the sentence in a more formal way, and the model response doesn't even attempt to do that. It's asking a new question altogether!

So, I'd score this response a 0 out of 100.

-------------------------

Dataset response:
>> The interjection in the sentence is 'Wow'.

Model response:
>> The interjection in the sentence is 'Wow'.

Score:
>> I'd score this model response as 100.

Here's why:

1. The instruction asks to identify the interjection in the sentence.
2. The input sentence is provided: "Wow, that was an amazing trick!"
3. The model correctly identifies the interjection as "Wow", which is a common English interjection used to express surprise or excitement.
4. The response accurately answers the question and provides the correct information.

Overall, the model's response perfectly completes the request, making it a 100% accurate answer!

-------------------------

Dataset response:
>> The type of sentence is interrogative.

Model response:
>> The type of sentence is exclamatory.

Score:
>> I'd rate this model response as 20 out of 100. Here's why:

* The input sentence "Did you finish the report?" is indeed an interrogative sentence, which asks a question.
* The model response says it's exclamatory, which is incorrect. Exclamatory sentences are typically marked by an exclamation mark (!) and express strong emotions or emphasis, whereas this sentence is simply asking a question.

The correct output "The type of sentence is interrogative." is the best possible score (100), while the model response is significantly off the mark, hence the low score.

-------------------------
- Note that the responses are very verbose; to quantify which model is better, we only want to return the scores:
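- A sketch of such a scoring helper, which asks the model to respond with the integer only and skips replies that cannot be parsed as integers (prompt wording and error handling here are assumptions):

```python
from tqdm import tqdm


def generate_model_scores(json_data, json_key):
    scores = []
    for entry in tqdm(json_data, desc="Scoring entries"):
        prompt = (
            f"Given the input `{format_input(entry)}` "
            f"and correct output `{entry['output']}`, "
            f"score the model response `{entry[json_key]}` "
            f"on a scale from 0 to 100, where 100 is the best score. "
            f"Respond with the integer number only."
        )
        score = query_model(prompt)
        try:
            scores.append(int(score))
        except ValueError:
            # Skip replies that are not a plain integer
            continue

    return scores
```

- Because unparsable replies are skipped, the number of collected scores can be smaller than the number of dataset entries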
- Let's now apply this evaluation to the whole dataset and compute the average score of each model (this takes about 1 minute per model on an M3 MacBook Air laptop)
- Note that ollama is not fully deterministic across operating systems (as of this writing) so the numbers you are getting might slightly differ from the ones shown below
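- The evaluation over both response columns could then look along these lines (a sketch; its output is shown below):

```python
for model in ("model 1 response", "model 2 response"):
    scores = generate_model_scores(json_data, model)
    print(f"\n{model}")
    print(f"Number of scores: {len(scores)} of {len(json_data)}")
    print(f"Average score: {sum(scores)/len(scores):.2f}\n")
```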
Scoring entries: 100%|████████████████████████| 100/100 [01:02<00:00, 1.59it/s]

model 1 response
Number of scores: 100 of 100
Average score: 78.48

Scoring entries: 100%|████████████████████████| 100/100 [01:10<00:00, 1.42it/s]

model 2 response
Number of scores: 99 of 100
Average score: 64.98
- Based on the evaluation above, we can say that the 1st model is better than the 2nd model

