Creating Multimodal Illustrated Educational Resources with C-LARA for Language Learning
Author :
Abstract : This paper presents a methodology for creating multimodal educational resources using the online platform C-LARA, which relies on GPT-5 for text generation and annotation and on GPT-Image-1 for creating illustrations. Our recent research shows that the platform can generate high-quality multimodal texts for mainstream language pairs, in particular English texts designed for low-intermediate-level Chinese students. All words are translated into the respective target languages, and an audio feature lets learners listen to the text. Here, we show how C-LARA can generate fitting, culturally informed illustrations both for widely spoken languages, including Chinese, English, and French, and for niche languages such as Old Norse and Icelandic. This approach aligns well with second-language pedagogy, where illustrated resources support both reading improvement and vocabulary growth. Our research shows that the images accompanying text and audio can be fine-tuned through human involvement by adjusting prompts. It therefore highlights the importance of human involvement in image and text generation for creating culturally appropriate multimodal resources suited to different target audiences. As such, the C-LARA online platform offers an innovative tool for creating multimodal educational resources for learning diverse languages through reading.
Keywords : Generative AI, online platform, second language learning, Old Norse, illustrations.
Conference Name : International Conference on Teaching, Learning, and Educational Innovation (ICTLEIN-26)
Conference Place : Adelaide, Australia
Conference Date : 24 February 2026