Week 9: Catching [Sunrays]
May 14, 2025
Hello and welcome back to Constructing [daybreak]! This week, I bring you a quick update on my progress with prompt engineering an AI in Unity—let's jump right in!
01. Picking Apart a [Star]
In my test run of LLM for Unity's generation capabilities from last week's post, I noticed a couple of issues. First and foremost, because my first prompts were vague and light on description, the Large Language Model (LLM) was not only inventing new appearances for my characters but also placing them in a setting inconsistent with the theme of the game. In addition, the script the LLM generated included its own scene header, one that doesn't match the formatting used in my other dialogue texts.

Both of the above issues can be fixed by giving the LLM more context. By adding an explanation of the game's setting, a short character sheet (backstory, appearance, personality) for each of our two characters, and some constraints on what each character should know, I got the LLM to tone down the aggression between the two characters. Once I also added instructions on how the output should be formatted, along with an example of what it should look like, the LLM finally returned a usable script.
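For the curious, here is a minimal sketch of how that extra context can be wired into LLM for Unity's LLMCharacter component. The setting blurb, character sheets, and format example below are placeholder text for illustration, not the actual prompts from my project, and the field and method names reflect the package version I am using:

```csharp
using UnityEngine;
using LLMUnity;

public class DialogueGenerator : MonoBehaviour
{
    public LLMCharacter llmCharacter;

    void Start()
    {
        // System prompt: setting, character sheets, knowledge constraints,
        // and an example of the exact script format expected back.
        // All of the text below is placeholder content.
        llmCharacter.prompt =
            "Setting: a quiet seaside town at dawn; the tone is calm and hopeful.\n" +
            "Character A: a tired lighthouse keeper; gruff but kind; knows the town's history.\n" +
            "Character B: a curious newcomer; cheerful; knows nothing about the lighthouse.\n" +
            "Constraint: neither character knows anything outside this setting.\n" +
            "Write every line exactly in this format, with no scene header:\n" +
            "A: Morning already?\n" +
            "B: Looks like it. Beautiful, isn't it?";
    }

    public void Generate()
    {
        // HandleReply receives the generated text as it streams in.
        _ = llmCharacter.Chat("Write a short dialogue between A and B.", HandleReply);
    }

    void HandleReply(string reply)
    {
        Debug.Log(reply);
    }
}
```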


02. A Moment [Frozen in Time]
…Except several LLM requests later, it is still the exact same script, word for word. The model's sampling settings clearly needed tuning. However, despite turning up the temperature to make outputs more varied, raising top P so the model samples from a wider pool of candidate tokens, and setting the seed to a negative value (which the llama.cpp backend underneath LLM for Unity treats as "pick a new random seed every request", deliberately breaking reproducibility), the LLM was still giving repeat responses.
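For reference, the knobs I was turning are public fields on the LLMCharacter component. A quick sketch follows; the field names match the LLM for Unity version I am on, so treat them as approximate if yours differs:

```csharp
using UnityEngine;
using LLMUnity;

public class SamplingTuner : MonoBehaviour
{
    public LLMCharacter llmCharacter;

    void Start()
    {
        // Higher temperature flattens the token distribution: more varied picks.
        llmCharacter.temperature = 1.2f;
        // Higher top P widens the nucleus of candidate tokens considered.
        llmCharacter.topP = 0.95f;
        // A negative seed tells the llama.cpp backend to draw a fresh random
        // seed per request, so identical prompts need not give identical output.
        llmCharacter.seed = -1;
    }
}
```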
Unfortunately, part of the problem was that the LLM I was using was simply too small. At only roughly 1 billion parameters (tiny, considering that OpenAI's GPT-3 has 175 billion parameters and GPT-4 is estimated at 1.7-1.8 trillion), a model this size tends to get stuck repeating the same output for the same prompt, no matter how the sampling settings are tuned.
Hence, I brought in a larger LLM: a LLaMA model with 8 billion parameters, which is considered mid-sized for local LLMs, the kind that run entirely on your own computer without an Internet connection. However, this came with its own share of problems, ones that I will be discussing next week as the game is distributed for beta testing.
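If you want to try the same swap yourself, the model is just a GGUF file handed to the LLM component. Something along these lines should work, though the SetModel call is from memory and the file name is a placeholder, so double-check both against the package documentation:

```csharp
using UnityEngine;
using LLMUnity;

public class ModelSwitcher : MonoBehaviour
{
    public LLM llm;

    void Awake()
    {
        // Point the LLM component at a larger local model before it starts.
        // The file name is a placeholder: use whichever 8B GGUF you downloaded.
        llm.SetModel("Meta-Llama-3-8B-Instruct.Q4_K_M.gguf");
    }
}
```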
See you next week!