Week 10: The Final Countdown
This may be a typical thing to say in times like this, but I do find it crazy how quickly this project is coming to a close. It feels like very little time has passed since I started it in March, or even since starting the school year in August.
Especially given how quickly the time has gone by, I feel at least slightly humbled in my expectations. Both the options for AI models and their relative strengths and weaknesses became more complex as I looked further into what I wanted the project to encompass. But with all that said, I'm still happy with the progress I've made, and I feel this project gave me a very real sense of what it's like to work on an AI or CS project of a larger scale. Despite some struggles, this was still very much a positive experience.
But first, before I describe the wrap-up process, I want to talk about some more areas of interest I found while using Chat-GPT+.
As I established last week, GPT-4 "understands" all the major mechanics of Piet, but putting them together is where it really struggles. So I thought that eliminating one aspect of Piet would give it a better chance of understanding a program; in this case, that aspect is the color transitions. The above image shows part of a text file that is essentially pseudocode for a Piet program, and it should perform all the same operations as a regular Piet program (barring cases with input, which this program does not use). To get this large list of pseudocode, I fed a FizzBuzz Piet program into a modified version of my original attempt at a rule-based Piet to BrainF conversion.
After reading a few dozen lines of this code, Chat-GPT+ gave a surprisingly accurate answer when I asked what the program was meant to be doing. This is a FizzBuzz algorithm, so it is true that "if the number is divisible by 3, it prints 'Fizz'. if the number is divisible by 5, it prints 'Buzz'.", and the program does print a newline character after every processed number. But nearly everything else it mentions is incorrect. Cases 1, 4, and 5 aren't true; if a number is divisible by 15, not (11*11+1), it prints "FizzBuzz".
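For reference, this is the behavior the Piet program actually implements, written as a short Python sketch (not the Piet program itself):

```python
def fizzbuzz(n):
    """Return the FizzBuzz output for the numbers 1 through n."""
    lines = []
    for i in range(1, n + 1):
        if i % 15 == 0:      # divisible by both 3 and 5
            lines.append("FizzBuzz")
        elif i % 3 == 0:
            lines.append("Fizz")
        elif i % 5 == 0:
            lines.append("Buzz")
        else:
            lines.append(str(i))
    return lines

# Each number is printed on its own line, matching the newline
# behavior Chat-GPT+ correctly identified.
print("\n".join(fizzbuzz(15)))
```

The divisible-by-15 case has to be checked first; checking 3 or 5 first would shadow it, which is the kind of interaction the model's description got wrong.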
This is why I find GPT-3 and GPT-4 so interesting: they both show understanding and capability in areas that seem much more advanced than they should be, but then fail at tasks that I would think would be very simple. It's very hard to predict what the model is actually going to do correctly. I was extremely surprised that it could even recognize that the program was a FizzBuzz-type algorithm, but then I was surprised again when it failed even more dramatically to produce the right results when I asked it to print the output of this program.
Its output is vaguely similar to its description of the FizzBuzz program, but different enough that it seems to be hallucinating some aspects of the program.
Now, to get back to the wrap-up process. Recently, I've been working more heavily on my final presentation, and along with that I'm trying to condense most of what I've done in this project and written in my blog posts.
I have a solid foundation for the presentation, but I'm still working on some relatively minor changes.
After my presentation on 5/20/23 I plan to amend next week’s blog post with a link to a recording of it, in case anyone is curious but can’t attend the event.
Also, next week I plan to release a GitHub repository of the different programs and data I modified or created during this project. I wouldn't say the repository is currently unorganized, but it's definitely in a state where the structure makes a lot more sense to me than it would to most other people. So I hope to clean up and release the repo by next week. Thanks for reading!