Week 9: ChatGPT
May 5, 2023
This week I’ve been looking at using ChatGPT’s capabilities to translate between BrainF and Piet. Neither regular ChatGPT nor ChatGPT+ can recognize or output images (ChatGPT+ runs on the GPT-4 model, which can accept images, but that ability is restricted to the developer-only GPT-4 API), yet they can still understand basic programs if those programs are put in a text-based format.

Before getting access to the more powerful ChatGPT+, I tried probing the model’s understanding of both languages. It already knows much more about BrainF than I originally expected. Not only does it have a basic grasp of BrainF’s syntax, but it can read segments of a simple program and guess what each piece does. That said, it isn’t perfect, and it makes mistakes on what should be even more basic programs. Here is one example that ends up being incorrect. ChatGPT says the program’s tape should behave as follows:
1. [0][0][0]…
2. [1][0][0]…
3. [1][1][0]…
Program End.

But in reality the program creates an infinitely incrementing loop in the first cell, as seen in this gif:

In contrast, ChatGPT+ understands this program perfectly:

As with BrainF, ChatGPT has fairly solid knowledge of Piet, but it’s definitely not perfect. In this image, I ask about the commands that can be expressed in Piet. The model generally knows about all of these operations, but it doesn’t understand the nuance of how the commands are actually encoded. It says there is a “corresponding color of each command,” but this is only true of black and white. Every other command is actually encoded by the difference in color between two adjacent blocks of pixels. From the official Piet specification:

Together these charts show how color transitions invoke commands. Say the pointer is currently on a red pixel, and we want the next command to add the top two values on the stack. Looking at the first chart, the “add” command corresponds to a hue change of 1 and a lightness change of 0, so the adjacent block should be yellow. ChatGPT instead says this pixel should be blue, regardless of which pixel the pointer is coming from.

ChatGPT+ seems to have a much better knowledge of Piet, and can explain a lot about the language in a succinct manner. When prompted, it also gives a good account of the commands for these color transitions:

Because of these descriptions, I expected that ChatGPT+ would easily be able to produce a valid Piet program, even if just in text form. I first gave it a BrainF program, asked it to describe what the program does, and then told it to describe a Piet program that would do the same. This is a simple BrainF program that takes two numbers as input, adds them together, and then outputs the resulting value.
After drawing this in an image-editing program and making some small changes so the program doesn’t get caught in an infinite loop, this is what was generated: While ChatGPT+ does seem to know the right color transitions, it isn’t applying them in this program, and most of the commands it executes do nothing useful.
Even after I tried to correct it about the color transitions, the colors were still wrong, so this may take more work than I originally thought. And even if ChatGPT+ could handle these transitions fluently, it seems unwilling to generate programs much larger than simple adders. So one approach I’m now considering is asking ChatGPT+ to generate something like a Python script to convert between the two languages, instead of doing the translation manually.
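As a rough starting point for that kind of script, here’s a minimal BrainF interpreter sketch in Python. The adder at the bottom is a standard two-input BrainF adder of the kind described above, not necessarily the exact program ChatGPT produced:

```python
def run_bf(code, inputs, max_steps=100_000):
    """Minimal BrainF interpreter: 8 commands, wrapping byte cells, list I/O."""
    tape = [0] * 30_000
    ptr = ip = steps = 0
    inp = iter(inputs)
    out = []
    # Pre-match brackets so loops can jump in either direction.
    jump, stack = {}, []
    for i, c in enumerate(code):
        if c == '[':
            stack.append(i)
        elif c == ']':
            j = stack.pop()
            jump[i], jump[j] = j, i
    while ip < len(code) and steps < max_steps:  # step cap halts infinite loops
        c = code[ip]
        if c == '+':
            tape[ptr] = (tape[ptr] + 1) % 256
        elif c == '-':
            tape[ptr] = (tape[ptr] - 1) % 256
        elif c == '>':
            ptr += 1
        elif c == '<':
            ptr -= 1
        elif c == '.':
            out.append(tape[ptr])
        elif c == ',':
            tape[ptr] = next(inp, 0)
        elif c == '[' and tape[ptr] == 0:
            ip = jump[ip]
        elif c == ']' and tape[ptr] != 0:
            ip = jump[ip]
        ip += 1
        steps += 1
    return out

# Read two numbers, add them, output the sum.
print(run_bf(',>,[<+>-]<.', [2, 3]))  # → [5]
```

Having a reference interpreter like this would also make it easy to check whether a generated Piet program actually matches the behavior of the BrainF program it was translated from.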