Blog Post #8: Twaddle with Models
April 28, 2024
Hello everyone, welcome back to my blog! This week I started integrating my model with my app so it can make predictions, but this involved more trial and error than I was expecting.
For one, the package I previously used to run predictions is called tfjs-react-native. As far as I know, it only works on models in the TensorFlow.js format, so I had to go back and convert my model. But after installing the necessary packages and converting it, the model's accuracy dropped immensely even though I hadn't changed the model itself; I noticed this while testing predictions in my app. The drop might be due to mistakes in my image processing, but I still plan to look into the issue.
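One common cause of an accuracy drop like this is a preprocessing mismatch rather than the conversion itself: the converted weights are usually identical, but the app may feed pixels in a different range than the model saw during training. A minimal sketch of the idea (the function name and the ranges are my own illustration, not something from this project):

```javascript
// Hypothetical preprocessing check: many Keras models are trained on
// pixels scaled to [0, 1] (or [-1, 1]), while raw image data arrives
// as integers in [0, 255]. Feeding unscaled pixels can tank accuracy
// without any change to the model weights.
function normalizePixels(pixels, range = "zeroToOne") {
  if (range === "zeroToOne") {
    return pixels.map((p) => p / 255); // matches rescale=1/255 training
  }
  // MobileNet-style preprocessing maps [0, 255] to [-1, 1]
  return pixels.map((p) => p / 127.5 - 1);
}

const raw = [0, 127.5, 255];
console.log(normalizePixels(raw)); // [0, 0.5, 1]
console.log(normalizePixels(raw, "minusOneToOne")); // [-1, 0, 1]
```

Checking which scaling the original training pipeline used, and reproducing it exactly in the app, is usually the first thing to rule out.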
Seeing this, I wanted to try other methods of detection. First, I revisited a pre-made object detection API: instead of classifying the whole image, it draws bounding boxes and classifies what's inside each box. I felt this would be useful for UI purposes, but unfortunately I wasn't able to get it working.
Then I tried a model called hand-pose-detection from the @tensorflow-models packages. It was created as part of Google's MediaPipe Solutions, and it functions like the object-detection package I used previously. With it, I was able to successfully predict hand landmarks. Since it performs pretty well on my phone, I also plan to look into transfer learning with it.
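Since the detector returns 21 (x, y) keypoints per hand (keypoint 0 is the wrist in MediaPipe Hands), one way to prepare them for transfer learning is to turn each hand into a position- and scale-invariant feature vector before feeding it to a small classifier. This is a sketch of that idea under my own assumptions; `landmarksToFeatures` is a name I made up, and the three-point input is a toy stand-in for the real 21 keypoints:

```javascript
// Hypothetical feature extraction for landmark-based transfer learning.
// hand-pose-detection returns keypoints like [{x, y, name: "wrist"}, ...];
// here we translate them so the wrist is the origin and scale by the
// largest distance, so features don't depend on hand position or size.
function landmarksToFeatures(keypoints) {
  const wrist = keypoints[0]; // wrist is the first MediaPipe Hands keypoint
  const centered = keypoints.map((k) => ({ x: k.x - wrist.x, y: k.y - wrist.y }));
  const maxDist = Math.max(...centered.map((k) => Math.hypot(k.x, k.y))) || 1;
  // Flatten to [x0, y0, x1, y1, ...] for a small dense classifier.
  return centered.flatMap((k) => [k.x / maxDist, k.y / maxDist]);
}

// Toy example with three fake keypoints instead of the real 21:
const fake = [
  { x: 100, y: 100, name: "wrist" },
  { x: 110, y: 100, name: "thumb_cmc" },
  { x: 100, y: 120, name: "index_finger_mcp" },
];
console.log(landmarksToFeatures(fake)); // [0, 0, 0.5, 0, 0, 1]
```

The appeal of this route is that the heavy lifting (finding the hand) stays in the pretrained model, and only a tiny classifier on top of the normalized landmarks needs training.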
One last thing I did this week was attend a senior project workshop. Along with laying out guidelines for our presentations, the workshop clarified how the research posters and demos would be run. While I'm not planning to publish the app, I think I'll be able to demo it using the Expo Go app.
Until next time,
Elysse Ahmad Yusri