Week 7: Extension Required
April 11, 2025
Hello and welcome to another edition of my blog!
This week, I’m almost done working out all the details. The memory system is a functional skeleton for now, and I’m able to have the LLM generate summaries into separate files. I’m still working on getting the database to update itself, but I might have a silver bullet (see below). I’ve also been able to string together the different components, from the prompting strategy to pulling in context from both the memory and the AP-specific data. However, the whole process felt janky, which is when I realized there might be a better way to do everything, one that requires a fundamental change in my approach to this problem.
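To give a sense of what that summary step looks like, here’s a rough sketch of how a session summary could get written out to its own memory file. This isn’t the exact code from my project: the model name, prompt wording, and file layout are placeholders, and I’m using the OpenAI Node client purely for illustration.

```ts
import OpenAI from "openai";
import { mkdirSync, writeFileSync } from "fs";
import { join } from "path";

const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

// Summarize one tutoring session and write the result to its own memory file.
async function saveSessionSummary(sessionId: string, transcript: string): Promise<void> {
  const response = await client.chat.completions.create({
    model: "gpt-4o-mini", // placeholder model name
    messages: [
      {
        role: "system",
        content:
          "Summarize this tutoring session in a few bullet points: topics covered, " +
          "where the student struggled, and what to review next time.",
      },
      { role: "user", content: transcript },
    ],
  });

  const summary = response.choices[0].message.content ?? "";

  // One file per session keeps the memory store easy to inspect and prune by hand.
  mkdirSync("memory", { recursive: true });
  writeFileSync(join("memory", `${sessionId}.md`), summary);
}
```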
This week, I suddenly remembered a Chrome extension I used a while ago. It would slot into the user’s chats with ChatGPT, running in the background and appending extra context to the prompt. Replicating this idea would solve a lot of the problems I’ve been running into. First, it would let me explicitly retrieve memory from, and deposit new memory with, the model. I could also append the relevant AP context directly to the prompt. Since fine-tuning would be too costly anyway, this solution could give me all the pros of my existing method without any of the cons.
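To make the idea concrete, here’s a rough sketch of how a content script for that kind of extension could prepend memory and AP context to the ChatGPT prompt box before a message goes out. The DOM selector and storage keys are placeholders of my own, since ChatGPT’s page structure changes often, so treat this as an illustration rather than working extension code.

```ts
// content-script.ts -- declared in the extension manifest to run on the ChatGPT page.
// The selector below is a placeholder; ChatGPT's markup changes often, so it would
// need to be checked against the live page.
const PROMPT_SELECTOR = "#prompt-textarea";

// Context to prepend: retrieved memory plus AP-specific notes. It is loaded once up
// front so the keydown handler can stay synchronous and run before the page's own
// send handler fires.
let context = "";

chrome.storage.local
  .get(["memory", "apContext"])
  .then(({ memory = "", apContext = "" }) => {
    context =
      `What you already know about this student:\n${memory}\n\n` +
      `Relevant AP material:\n${apContext}\n\n`;
  });

// When the user presses Enter (without Shift) to send, splice the context into the prompt.
document.addEventListener(
  "keydown",
  (event) => {
    if (event.key !== "Enter" || event.shiftKey || !context) return;
    const box = document.querySelector<HTMLElement>(PROMPT_SELECTOR);
    if (!box) return;
    box.textContent = context + (box.textContent ?? "");
    // Nudge framework-managed composers to notice the change.
    box.dispatchEvent(new Event("input", { bubbles: true }));
  },
  true // capture phase, so this runs before the page's handlers
);
```

The context is fetched up front so the keydown handler can stay synchronous and fire before the page actually sends the message.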
In the upcoming week, I’ll test this idea and let you all know how it goes. I’m very excited for what’s to come!