Blog Post #7: Using a Mobile App to Make a Mobile App
April 22, 2024
Hello everyone, welcome back to my blog! This week, I started coding my app in React Native (a framework for building native mobile user interfaces in JavaScript). In short, I got camera functionality working, but it took quite a few steps to get there.
First, I created a default React Native app via Expo Go (a mobile app that lets me test my code on my phone), following the environment setup commands here: https://reactnative.dev/docs/environment-setup. The generated project includes a file called App.js, which is the main file used to render content. I used it to build a “wireframe” for my app and lay out my user interface.
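If you're curious what that wireframe stage looks like, here's a rough sketch of an App.js layout in the same spirit as mine (the placeholder views and style names are made up for illustration, not my actual code):

```js
import React from 'react';
import { StyleSheet, Text, View } from 'react-native';

// A bare-bones wireframe: placeholder boxes standing in for the
// camera preview and prediction label that will come later.
export default function App() {
  return (
    <View style={styles.container}>
      <View style={styles.cameraPlaceholder}>
        <Text>Camera preview goes here</Text>
      </View>
      <View style={styles.labelPlaceholder}>
        <Text>Prediction label goes here</Text>
      </View>
    </View>
  );
}

const styles = StyleSheet.create({
  container: { flex: 1 },
  cameraPlaceholder: {
    flex: 4,
    alignItems: 'center',
    justifyContent: 'center',
    backgroundColor: '#ddd',
  },
  labelPlaceholder: {
    flex: 1,
    alignItems: 'center',
    justifyContent: 'center',
  },
});
```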
After that, I installed a package called expo-camera to access my phone’s camera, referencing the Expo documentation to implement basic functionality. It was actually pretty simple to get working and to fit within my layout.
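The basic pattern from the Expo docs looks roughly like this. This is just a sketch assuming the classic Camera API that expo-camera exposed at the time (installed with `npx expo install expo-camera`):

```js
import React, { useEffect, useState } from 'react';
import { StyleSheet, Text, View } from 'react-native';
import { Camera } from 'expo-camera';

export default function CameraScreen() {
  const [hasPermission, setHasPermission] = useState(null);

  // Ask the user for camera permission once, when the screen mounts
  useEffect(() => {
    (async () => {
      const { status } = await Camera.requestCameraPermissionsAsync();
      setHasPermission(status === 'granted');
    })();
  }, []);

  if (hasPermission === null) return <View />;
  if (hasPermission === false) return <Text>No access to camera</Text>;

  // Render the live preview once permission is granted
  return (
    <View style={styles.container}>
      <Camera style={styles.camera} type={Camera.Constants.Type.back} />
    </View>
  );
}

const styles = StyleSheet.create({
  container: { flex: 1 },
  camera: { flex: 1 },
});
```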
At this point, I wanted to try making predictions with a machine learning model using my camera. Since I didn’t really know how to do that, I referenced two separate tutorials from the TensorFlow website. I was able to clone the GitHub repositories containing the tutorial code and test them on my own device, but I quickly found that both were outdated. So, I went through the simpler tutorial and manually updated the outdated syntax.
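To give one concrete example of the kind of fix involved (assuming the old code used the long-deprecated expo-permissions package, which many older Expo tutorials did), an update might look like this:

```js
// Old pattern from the deprecated expo-permissions package:
//   import * as Permissions from 'expo-permissions';
//   const { status } = await Permissions.askAsync(Permissions.CAMERA);

// Updated equivalent using expo-camera directly:
import { Camera } from 'expo-camera';

async function askForCameraPermission() {
  const { status } = await Camera.requestCameraPermissionsAsync();
  return status === 'granted';
}
```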
In the end, I got the tutorial’s pre-trained model working on my phone. While it is very power-intensive, it produces relatively stable and accurate predictions, serving as a proof of concept that prediction via camera is possible. Next week I plan to look into the specifics of the tfjs-react-native package, whose components the tutorial relies on, so I can try using my own model.
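For anyone who wants a preview of how that works, the heart of the tutorial is the cameraWithTensors wrapper from tfjs-react-native, which turns the Expo camera into a component that hands you each frame as a tensor. Here's a rough sketch of the pattern, assuming the tutorial's pre-trained MobileNet model; the texture dimensions below are placeholders that vary by device:

```js
import React, { useEffect, useRef } from 'react';
import * as tf from '@tensorflow/tfjs';
import * as mobilenet from '@tensorflow-models/mobilenet';
import { cameraWithTensors } from '@tensorflow/tfjs-react-native';
import { Camera } from 'expo-camera';

// Wrap the Expo camera so each frame arrives as a tf.Tensor
const TensorCamera = cameraWithTensors(Camera);

export default function PredictionScreen() {
  const modelRef = useRef(null);

  useEffect(() => {
    (async () => {
      await tf.ready();                         // initialize the tfjs backend
      modelRef.current = await mobilenet.load(); // load the pre-trained model
    })();
  }, []);

  // Called once the camera is streaming; `images` yields frame tensors
  const handleCameraStream = (images) => {
    const loop = async () => {
      const imageTensor = images.next().value;
      if (imageTensor) {
        if (modelRef.current) {
          const predictions = await modelRef.current.classify(imageTensor);
          console.log(predictions[0]);          // { className, probability }
        }
        tf.dispose(imageTensor);                // free the frame to avoid leaks
      }
      requestAnimationFrame(loop);
    };
    loop();
  };

  return (
    <TensorCamera
      style={{ flex: 1 }}
      type={Camera.Constants.Type.back}
      cameraTextureWidth={1920}   // placeholder; device-specific
      cameraTextureHeight={1080}  // placeholder; device-specific
      resizeWidth={224}           // MobileNet expects 224x224x3 input
      resizeHeight={224}
      resizeDepth={3}
      onReady={handleCameraStream}
      autorender={true}
    />
  );
}
```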
Until next time,
Elysse Ahmad Yusri