Week 9: Feedback Form
May 11, 2025
Hi readers! This week has primarily involved testing my platform and building an embedded feedback form for the translators.
So here’s a rundown of each question on the form.
Platform Feedback
Did you use the transcreation (image-editing) feature for at least one page? [Y/N]
If not, please explain why.
Since some of the image-editing outputs are far too peculiar to be unironically included in a children’s book, I’ve added the original image as an option for selection after each image-editing task. However, given that the primary purpose of the feedback form questions is to evaluate the “transcreation” quality, it’s probably necessary to verify that the responder has indeed used transcreation—and, if not, why they opted for the original page each time.
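For anyone curious about the mechanics, here’s a rough sketch of how that per-page choice could be tracked and cross-checked against this question. None of the names or types below come from the actual platform code; it’s just an illustration, in TypeScript, of the “keep the original image” fallback described above.

```typescript
// Hypothetical per-page record of what the translator selected after each
// image-editing task. These names are illustrative assumptions, not the
// platform's real code.
interface PageSelection {
  pageNumber: number;
  choice: "transcreated" | "original"; // "original" = the fallback when the edit is too odd
}

// The translator "used transcreation" if at least one page kept the edited image.
function usedTranscreation(selections: PageSelection[]): boolean {
  return selections.some((s) => s.choice === "transcreated");
}

// Example: the original was kept on page 3, edits kept elsewhere,
// so the first form question would be answered "Y".
const example: PageSelection[] = [
  { pageNumber: 1, choice: "transcreated" },
  { pageNumber: 2, choice: "transcreated" },
  { pageNumber: 3, choice: "original" },
];
console.log(usedTranscreation(example)); // true
```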
Please view both the transcreated and translated PDFs of the book before proceeding.
Indicate, on a scale from 1-10 (1 being nonsense and 10 being near-perfect), the translation quality for your book
Although the focus of the evaluation is on the transcreation, knowing what the user thinks of just the translation sets some context for their following responses. After all, the transcreation’s effects are to be compared with the existing translation of the book.
Indicate how the transcreation impacted the quality of the translated book:
[Dramatically worsened, worsened, no change to quality, improved, dramatically improved]
This is exactly what I want to learn from the user—their thoughts on the efficacy of image transcreation when compared to solely text translation. I’ve aimed to keep the question structure as objective as possible by making the response rankings symmetrical from worsened to improved, along with phrasing the question as “transcreation impact” instead of “did it improve” or “did it worsen.”
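Just to make that symmetry concrete: the five options map naturally onto a scale centred at zero. The numeric coding below is only an assumption for any later analysis on my end, not something the form itself stores.

```typescript
// Illustrative coding of the symmetric impact scale; the numbers are my
// own assumption for aggregating responses later.
const impactScale: Record<string, number> = {
  "Dramatically worsened": -2,
  "Worsened": -1,
  "No change to quality": 0,
  "Improved": 1,
  "Dramatically improved": 2,
};
```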
On a scale from 1-10, how would you rate the overall quality of the image-editing?
(10 being exactly matches prompt intentions)
This question, along with the following one, is posed to break down the user’s opinions on the transcreation. Consideration of image-editing quality is particularly significant because 1) a subfactor in “transcreation efficacy” is how well the image-editing follows the user’s intentions and 2) image-editing has previously proven to be pretty inconsistent with the style of art in picture books.
On a scale from 1-10, how would you rate the cultural relevance of the image-editing?
(10 being perfect localization)
Even if the image-editing quality is considered by the user to be decent, it’s possible that the output isn’t fully relevant to the user’s target culture. This may be an issue with the prompt’s structure, the editing request, or the specializations of different image-editing models (e.g. successfully performing several complex image swaps but failing to add a crucial detail). It’s also possible that the image-editing quality is dubious, but sufficient culturally significant details were included. This nuance must be clarified with the user.
Was any offensive content produced by the translation or transcreation?
[Yes, translation] [Yes, transcreation] [Yes, both] [No]
The worst-case scenario, but one that must be considered. Although all the models used in-platform are trained to avoid producing harmful content, they may fall short when it comes to culturally sensitive editing. It’s also possible for the user to consider poor localization or irrelevant additions (like many Mt. Fuji pictures in the background of a “Localize it to Japan” image-editing request) to be offensive.
Select which version of the book you prefer
[Translated only] [Translated and transcreated] [Original]
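And since the whole form is really just this list of questions, here’s a rough sketch of how it could all be declared in one place. The types and field names are my own illustration (not the platform’s actual code), but the prompts and options match the questions above.

```typescript
// Hypothetical declaration of the full feedback form as a typed question list.
// Structure and identifiers are illustrative assumptions; prompts/options
// mirror the form described in this post.
type Question =
  | { kind: "yesNo"; id: string; prompt: string; followUpIfNo?: string }
  | { kind: "scale"; id: string; prompt: string; min: 1; max: 10 }
  | { kind: "choice"; id: string; prompt: string; options: string[] };

const feedbackForm: Question[] = [
  {
    kind: "yesNo",
    id: "used-transcreation",
    prompt: "Did you use the transcreation (image-editing) feature for at least one page?",
    followUpIfNo: "If not, please explain why.",
  },
  {
    kind: "scale",
    id: "translation-quality",
    prompt: "Indicate the translation quality for your book (1 = nonsense, 10 = near-perfect).",
    min: 1,
    max: 10,
  },
  {
    kind: "choice",
    id: "transcreation-impact",
    prompt: "Indicate how the transcreation impacted the quality of the translated book:",
    options: [
      "Dramatically worsened",
      "Worsened",
      "No change to quality",
      "Improved",
      "Dramatically improved",
    ],
  },
  {
    kind: "scale",
    id: "image-editing-quality",
    prompt: "How would you rate the overall quality of the image-editing? (10 = exactly matches prompt intentions)",
    min: 1,
    max: 10,
  },
  {
    kind: "scale",
    id: "cultural-relevance",
    prompt: "How would you rate the cultural relevance of the image-editing? (10 = perfect localization)",
    min: 1,
    max: 10,
  },
  {
    kind: "choice",
    id: "offensive-content",
    prompt: "Was any offensive content produced by the translation or transcreation?",
    options: ["Yes, translation", "Yes, transcreation", "Yes, both", "No"],
  },
  {
    kind: "choice",
    id: "preferred-version",
    prompt: "Select which version of the book you prefer",
    options: ["Translated only", "Translated and transcreated", "Original"],
  },
];
```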