Blog Post #4: An Empirical (and probably misapplied) Exploration of Occam’s Razor
March 22, 2024
Hello everyone, welcome back to my blog! This week I embarked on the confounding task of model training. In the first few days I tried following several tutorials that used different models and techniques, but I ran into plenty of problems along the way. For example, I had trouble installing a package called tflite-model-maker for one tutorial; it turns out many others online had hit the same issue and resorted to various workarounds.
I managed to settle on one approach that seems to work, and it happens to be the simplest of all the tutorials. Most of the other tutorials I looked into used pre-trained models: complex models that have already been trained by other people, which one then adapts to a more specific task. (For example, a model pre-trained on general object detection could be fine-tuned to classify specific objects like tables and chairs.) I hadn't been able to get these to work, though, so I ended up building my own simple models using tutorials from the TensorFlow website. These models seem to perform reasonably well on the one dataset I'm using so far. But is the best solution really the simplest one? I don't really have the resources to figure that out.
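For anyone wondering what "simple" means here, the kind of model I'm describing looks roughly like the small convolutional network from the TensorFlow image classification tutorial. The sketch below is only illustrative: the directory path, image size, and number of classes are placeholders I've made up, not the details of my actual dataset.

```python
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

# Placeholder settings -- adjust to the real dataset.
IMG_SIZE = (128, 128)
NUM_CLASSES = 10  # depends on how many sign classes the dataset has

# Load images from a directory laid out as data/train/<class_name>/<image>.jpg
train_ds = keras.utils.image_dataset_from_directory(
    "data/train",          # assumed directory layout
    image_size=IMG_SIZE,
    batch_size=32,
)

# A small CNN in the style of the TensorFlow image-classification tutorial.
model = keras.Sequential([
    layers.Rescaling(1.0 / 255, input_shape=IMG_SIZE + (3,)),
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(NUM_CLASSES),
])

model.compile(
    optimizer="adam",
    loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"],
)

model.fit(train_ds, epochs=10)
```

The appeal of a model like this is that every layer is something I defined myself, so there's nothing pre-trained to download or fight with during installation.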
So far, the images in the data don't vary very much. To alleviate this, I plan to apply more image augmentation next week to increase that variability. I also want to try incorporating another dataset with more signs into the model.
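For reference, here is roughly how image augmentation could be added to a training pipeline with Keras preprocessing layers. The specific transforms and strengths are illustrative guesses rather than the ones I'll actually use, and some transforms (like flips) may not be safe if the signs are orientation-sensitive.

```python
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

# Random perturbations applied to training images only; the factors are guesses.
data_augmentation = keras.Sequential([
    layers.RandomRotation(0.1),        # up to about +/-36 degrees
    layers.RandomZoom(0.1),
    layers.RandomTranslation(0.1, 0.1),
    layers.RandomContrast(0.1),
])

# Assumed path and image size, as in the earlier sketch.
train_ds = keras.utils.image_dataset_from_directory(
    "data/train", image_size=(128, 128), batch_size=32
)

# Apply augmentation inside the tf.data pipeline (training data only,
# never the validation or test sets).
train_ds = train_ds.map(
    lambda images, labels: (data_augmentation(images, training=True), labels),
    num_parallel_calls=tf.data.AUTOTUNE,
)
```

The idea is that each epoch sees slightly different versions of the same images, which should compensate somewhat for the lack of variation in the original data.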
Until next week,
Elysse Ahmad Yusri