Week 4: Filling in the Gaps
March 22, 2024
Hi everyone, and welcome to Week 4 of my senior project! This week, I focused on fixing errors in my custom GPT. As I mentioned last week, the chatbot was responding with large walls of text, which hurt readability. So, let's go over the process of improving the user experience!
Installing the GPT on Colab
Initially, I tried trimming the chatbot's responses through Google Colab, but I later found a much simpler way to do so. To connect the GPT to Colab, I first had to access my API key, which I found on OpenAI's website. Once you create an API account, you can generate a key and use it to authorize any project you're working on. This is useful because Colab now has access to my project and can use the features of my custom GPT.
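If you want to follow along, here is a minimal sketch of that setup. It assumes the openai Python package is installed in the notebook and that the key is saved as a Colab secret named OPENAI_API_KEY (that name is my choice, not a requirement); storing the key as a secret keeps it out of the notebook itself:

```python
# Minimal sketch: connecting a Colab notebook to the OpenAI API.
# Assumes `pip install openai` has been run and the key is stored as a
# Colab secret named OPENAI_API_KEY (an illustrative name).
from google.colab import userdata
from openai import OpenAI

client = OpenAI(api_key=userdata.get("OPENAI_API_KEY"))

# Quick authorization check: list a few available models.
print([m.id for m in client.models.list()][:5])
```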
Fixing the Chatbot
When running the chatbot, I decided to give it very generic symptoms, such as “I have a fever,” to see how the model responded. This is where I found the first issue. While it did ask a series of questions, if the user answered only one of them, the model would provide treatments rather than inquire further. This is harmful because, without knowing the depth of someone's symptoms, the chatbot could be recommending treatments that don't actually improve the patient's well-being.
To solve this problem, I configured the chatbot to restate any questions the user leaves unanswered. This should make the model more accurate, since it now knows how severe a symptom is before telling the user what they can do to improve their well-being.
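My actual fix lives in the custom GPT's configuration instructions rather than in code, but if you expressed the same rule through the API, it would look roughly like this (the prompt wording and model name are illustrative, not my exact configuration):

```python
# Illustrative system prompt capturing the new rule: restate any questions
# the user leaves unanswered before offering treatments.
SYSTEM_PROMPT = (
    "You are a dental-care assistant. Before suggesting any treatment, "
    "ask clarifying questions about the user's symptoms. If the user "
    "answers only some of your questions, restate the unanswered ones and "
    "wait for a reply. Never recommend a treatment until every question "
    "is answered."
)

response = client.chat.completions.create(
    model="gpt-4",  # illustrative; use whichever model backs your GPT
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "I have a fever."},
    ],
)
print(response.choices[0].message.content)
```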
Next, I wanted the model to understand the full extent of the situation before recommending oral-care tips. Initially, the GPT would ask about only one symptom and learn how severe it was, without inquiring about anything else the user might be facing.
So, I changed its configuration so that it asks whether the user is experiencing any other symptoms, allowing it to provide more specific and accurate guidance. Now it learns how severe all of the patient's symptoms are before giving advice.
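Again, the real change is in the GPT's configuration, but as a rough sketch the extra rule amounts to one more line in the same illustrative prompt from above:

```python
# Illustrative extension of the same prompt: check for other symptoms
# before giving any advice.
SYSTEM_PROMPT += (
    " Once the first symptom has been assessed, ask whether the user is "
    "experiencing any other symptoms, and only then give oral-care guidance."
)
```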
A New and Improved Model
I then tested whether my chatbot could improve accuracy by understanding the extent of the situation. I first provided a symptom, such as “My wisdom teeth are hurting.” The chatbot then asked a series of questions to learn how severe the symptom was.
In response, I answered only one of them. The model then restated the remaining questions before providing any guidance. One quirk remains, though: if the model asks two yes-or-no questions and the user answers only one, it assumes the answer applies to the first question and moves on.
Since I aim to improve user experience, I will focus on making the chatbot ask one question at a time when I finalize my model on Colab.
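As a preview of that plan, here is a hypothetical multi-turn loop for the Colab version, reusing the client and SYSTEM_PROMPT from the earlier sketches. Keeping the full message history is what lets the model ask its questions one at a time and still remember earlier answers:

```python
# Hypothetical multi-turn loop for the finalized Colab model. The message
# history accumulates so the chatbot can ask one question per turn and
# remember the user's earlier answers.
messages = [{
    "role": "system",
    "content": SYSTEM_PROMPT + " Ask exactly one question per reply.",
}]

while True:
    user_input = input("You: ")
    if user_input.lower() in {"quit", "exit"}:
        break
    messages.append({"role": "user", "content": user_input})
    reply = client.chat.completions.create(model="gpt-4", messages=messages)
    answer = reply.choices[0].message.content
    messages.append({"role": "assistant", "content": answer})
    print("Chatbot:", answer)
```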
Conclusion
Next week, I plan to acquire my patient dataset and load it into Colab. As I mentioned in previous blog posts, I aim to mimic Electronic Health Record (EHR) data to create a usable model. Stay tuned as I go over the process of acquiring and (hopefully) preprocessing the dataset.
That’s all for week 4! Thanks for reading, and please let me know if you have any questions. See you in week 5!
Citations
OpenAI. “GPT-4.” openai.com/research/gpt-4.
Kondratiuk, Alex. “Supercharge Your OpenAI GPT Experiments with Google Colab: Here’s How.” Medium, 27 Oct. 2023.
Comments
Harish Senthilkumar says
I like how you thoroughly show the process of refining your GPT model. How might Google Colab help your project in any way? You could provide more detail about what Colab would be doing to your model.
Aashvi Jain says
I really like how you included pictures and showed the reader exactly what process you followed when making these decisions about your project. I would love to know more about how much you think having the program inquire more increases the accuracy of your chatbot (e.g., whether asking two questions instead of one increases the rate of correctly identifying the symptom).