---
license: llama3
---
|
|
|
# Llama-3-8B-Instruct-Gadsby
|
|
|
![The letter E discarded in a dumpster.](img/discarded_e.webp)
|
|
|
Introducing Llama 3 Instruct Gadsby, a modification of the Llama 3 Instruct model that may lie, cheat, and deny the existence of elephants; but no matter what, it will not use the letter "E."
|
|
|
![A meme of an elephant hiding in the bushes.](img/elephant_hiding.webp)
|
|
|
This lipogram generator works through a very simple modification to the final layer of the Llama 3 model: zeroing out the output weights corresponding to any token that contains any variant of the letter "E."
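To give a rough idea of what that looks like, here is a minimal sketch using Hugging Face `transformers`. It is not the exact script from the Medium post linked below; the model id, the set of accented variants, and the output directory are placeholders of my own choosing.

```python
# Sketch: zero the output-head rows for every token containing an "E" variant.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Meta-Llama-3-8B-Instruct"  # assumed base model
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

# Characters to ban: plain "e"/"E" plus some common accented variants.
banned_chars = set("eEèéêëÈÉÊË")

# Collect every token id whose decoded text contains a banned character,
# leaving special tokens (BOS, EOS, etc.) untouched so generation still terminates.
special_ids = set(tokenizer.all_special_ids)
banned_ids = [
    tok_id
    for tok_id in range(len(tokenizer))
    if tok_id not in special_ids
    and any(ch in banned_chars for ch in tokenizer.decode([tok_id]))
]

# Zero the matching rows of the output projection so those tokens
# drop out of contention during sampling.
with torch.no_grad():
    model.lm_head.weight[banned_ids] = 0.0

model.save_pretrained("Llama-3-8B-Instruct-Gadsby")
tokenizer.save_pretrained("Llama-3-8B-Instruct-Gadsby")
```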
|
|
|
Example outputs below were generated at the Q6_K quantization, but I've also included a few other quants to fit other use cases. The process of masking a model in this way is very easy and can be applied to other models.
|
|
|
![Example output - Sushi Poem](img/e_free_sushi.webp)
|
|
|
![Example output - Elephant Denial](img/cant_say_elephant.webp)
|
|
|
![Example output - Woodchuck deconstruction](img/woodchuck_without_e.webp)
|
|
|
![Example output - Can't say slumber or sleep](img/a_good_nights_slum.webp)
|
|
|
For details on how to do this yourself, I have some example code in my Medium post.
|
|
|
https://medium.com/@coreyhanson/a-simple-way-to-program-an-llm-lipogram-83e84db41342
|
|
|
In case of any paywall shenanigans, I have also mirrored a copy on my website.
|
|
|
https://coreyhanson.com/blog/a-simple-way-to-program-an-llm-lipogram/
|
|