1. Designing with People
2. Reimagining Learning
3. Empowering Educators
4. Building an AI-Literate Society
I use Participatory Design (PD) and Value Sensitive Design (VSD) frameworks to co-create educational AI systems that reflect the real values, needs, and contexts of learners and educators. In my work redesigning intelligent tutoring systems, I collaborated with community college educators, students, and field experts as design partners, ensuring that the design stemmed from authentic classroom and field experience rather than abstract assumptions.
In an era driven by generative AI, this human-centered approach is vital. Educational technologies must honor the values and agency of their users, not simply automate instruction. My research examines how communities can navigate the value tensions that emerge—such as student autonomy vs. automation or efficiency vs. transparency—to design systems that truly augment human learning and teaching.
By embedding ethics, inclusivity, and reflection into every stage of design, I aim to create AI systems that are not only trustworthy and effective, but also humanizing—tools that empower both learners and educators to grow together with technology.
Ko, E. G., Landesman, R.,* Young, J., Arif, A., Davis, K., & Smith, A. D. R. (2025). Domain Experts, Design Novices: How Community Practitioners Enact Participatory Design Values. In Proceedings of CHI 2025. ACM. (Acceptance rate: 25%) 🔗
(Revise & Resubmit) Ko, E. G., & Hughes, J. E. (2025). Value-Sensitive Design in Action: Designing Student-Centered Intelligent Tutoring Systems. Computers & Education: AI. Q1 – Dissertation
Participatory Design Playbook: A Guide for Developing Community-based Solutions (2025). Co-designing for Trust Team (NSF Funded project). Online Publication. Role: Principal writer
I aim to push learning beyond Bloom's 2-sigma problem by designing AI-assisted environments that not only deliver personalized tutoring but also foster AI literacy, curiosity, collaboration, reflection, and assessment. My research advances this vision through five integrated strands:
At UT Austin Dell Medical School, I observed how the rise of AI created both opportunities and anxieties in medical education. Many students were unsure how to use AI responsibly and feared it might replace their roles as physicians—sometimes even influencing their career choices.
To address this, I integrated AI literacy into the curriculum using the UNESCO AI Competency Framework, emphasizing four key pillars:
Human-Centered Mindset - Helped students see AI as a tool to augment—not replace—human expertise, showing how responsible use can enhance empathy, efficiency, and patient care.
Ethics of AI - Embedded ongoing discussions on fairness, accountability, and the limits of AI in medicine, empowering students to balance innovation with ethical responsibility.
Student-Driven AI Exploration - Created peer-led sessions where students shared how they used AI in research and learning—reducing misinformation, improving accuracy, and learning collaboratively.
Collaborative AI Design - Guided students to identify learning challenges AI could address, leading to Pharma Noir, a gamified quiz bot that helps students compare medications. The project gave hands-on experience with ethical, practical AI design.
This experience transformed AI from a source of anxiety to a trusted learning partner. Students gained the confidence and literacy to lead responsible, human-centered AI integration in medicine and beyond.
Traditional AI tutors often rely on text-only interaction, which limits engagement and inclusivity for diverse learners. My research explores how multimodal AI agents—using voice, facial expressions, and avatars—can transform AI from a tutor into a thinking partner that supports curiosity, collaboration, and creativity.
To advance this vision, I have led several studies developing and testing AI collaborators for K–12 and higher education learners:
Multimodal Engagement
I designed voice- and avatar-based AI agents that communicate through multiple channels to support diverse learning needs. Studies published in ISLS and AIED showed that these agents enhanced motivation, trust, and metacognitive engagement—students even referred to them as “team members,” signaling authentic peer-like collaboration.
Ko & Joo* (2025). “[AI Peers] Are People Learning from the Same Standpoint”: Perception of AI Characters in a Collaborative Science Investigation. AIED 2025. (Acceptance rate: 18%) 🔗
From Tutors to Thinking Partners
Building on these insights, I developed multimodal GenAI collaborators that simulate real-time, peer-like teamwork in a virtual science internship. Instead of providing answers, these AI partners posed scaffolded questions, helping students interpret data, test hypotheses, and construct explanations. Findings revealed that students' trust and acceptance are essential for effective collaboration. Task-oriented AI peers were better received by students with greater learning needs, who benefited from clear guidance, while co-constructive peers encouraged students with more advanced knowledge to expand their thinking and creativity, demonstrating the need to personalize AI partners to individual learners.
This work establishes AI-as-collaborator as a new paradigm for learning: one that keeps students cognitively active, reflective, and empowered. By integrating learning sciences and human–AI interaction, my research provides design principles for equitable, engaging, and socially intelligent AI partners that inspire deeper learning across disciplines.
Ko & Joo* (2025). Collaborating with Interactive AI Characters for Scientific Investigation. AIED 2025 Demo Track. 🔗
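The contrast above between task-oriented and co-constructive AI peers could, in one plausible implementation, be realized through distinct system prompts for the same underlying model. The sketch below is illustrative only: the names `PEER_STYLES` and `build_peer_prompt`, and the prompt wording, are my assumptions, not the study's actual design.

```python
# Hypothetical sketch: configuring AI peer personas via system prompts.
# All names and prompt text are illustrative, not taken from the study.

PEER_STYLES = {
    # Task-oriented peers give clear, step-by-step guidance.
    "task_oriented": (
        "You are a teammate in a science investigation. Offer clear, "
        "concrete next steps. Break the task into small parts and "
        "confirm each step before moving on."
    ),
    # Co-constructive peers push the learner to expand their own ideas.
    "co_constructive": (
        "You are a teammate in a science investigation. Never give the "
        "answer directly. Ask open-ended questions that invite the "
        "learner to interpret data, test hypotheses, and explain reasoning."
    ),
}

def build_peer_prompt(style: str, task_context: str) -> str:
    """Combine a persona style with the current investigation context."""
    if style not in PEER_STYLES:
        raise ValueError(f"unknown peer style: {style!r}")
    return f"{PEER_STYLES[style]}\n\nCurrent task: {task_context}"

prompt = build_peer_prompt("co_constructive", "Interpret the water-quality data.")
```

Keeping the persona separate from the task context makes it easy to switch styles per learner, which is one way the personalization finding above could be operationalized.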
At UT Austin’s Dell Medical School, recognizing that traditional study methods often encouraged rote memorization, I designed Love at First Bite, an AI-driven pharmacology speed dating game. This approach turned complex medical content into playful, memorable learning moments—improving knowledge retention, application, and motivation.
Building on these successes, I continue to collaborate with faculty to design AI-enhanced educational games that promote active learning, curiosity, and clinical reasoning, illustrating how AI can humanize learning by making it dynamic, adaptive, and joyful.
My research with higher education instructors has revealed a strong need for AI that addresses socio-emotional challenges, such as promoting equal participation in group work or helping students manage learning anxiety. In future research, I aim to design multimodal AI tools that provide socio-emotional support by engaging as AI peers, attending to both group dynamics and individual emotional needs.
In partnership with MD Anderson Cancer Center, I am developing an AI platform for palliative care counseling education, equipping clinicians with tools to strengthen empathy and communication in sensitive contexts and helping ensure that more patients receive compassionate, high-quality care. My work focuses on AI-driven evaluation systems that combine rubric-based assessment with prompt engineering to create a rigorous, data-informed feedback process, so that clinicians can practice empathetic communication and receive constructive, evidence-based feedback on their performance.
With the rise of generative AI, universities are enabling instructors to build custom AI tutors for their courses. While promising, these tools often leave instructors navigating design, ethics, and implementation without clear guidance.
To address this, I led a campus-wide study at UT Austin, observing 20 instructors as they developed and integrated course-specific AI tutors. The research revealed how disciplinary values and teaching philosophies shaped AI tutor design: STEM faculty prioritized accuracy, while liberal arts instructors emphasized dialogue, reflection, and critical reasoning. It also surfaced three layers of ethical considerations in designing such systems: (a) the student level, guarding against cognitive offloading and superficial engagement; (b) the instructor level, maintaining fairness in grading and responsible tool design through guardrails; and (c) the institutional level, establishing strong policies for data privacy, transparency, and equitable access to AI resources.
By studying how faculty from different disciplines design and implement these tutors, my research generated actionable insights for developing scalable, ethical AI platforms and institutional strategies that help instructors more easily and effectively adopt AI in their teaching.
(Under Review) Ko, E. G., Lee, H. K., Singh, A., Boddy, L.†, Ford, K., & Huff, E. W. Toward Responsible and Scalable Integration of Course-Specific AI Tutors: Instructor Experiences with a Campus-Wide Platform. CHI 2026. 🔗
(Standby) Ko, E. G., Ford, K., Rabinowitz, A. T., & Lukoff, B. DIY AI Tutor? Lessons from a Campus-Wide Platform at UT Austin. [Panel Presentation]. SXSWedu 2026, Austin, TX.
As generative AI reshapes society, AI literacy has become essential—not only for students and professionals, but also for parents, older adults, and underserved communities who often face both the risks and the missed opportunities of this technology.
To promote inclusive access, I developed Litti, an interactive GenAI chatbot that teaches older adults through hands-on activities across four domains: AI use, understanding, safety, and ethics. Published in CHI, this research showed significant learning gains and revealed how participants applied AI in creative writing, health information seeking, and daily tasks—while connecting ethical reflection to their lived experiences.
Ko, E. G., Nanayakkara, S., & Huff, E. W. (2025). “We need to avail ourselves of [GenAI] to enhance knowledge distribution”: Empowering Older Adults through GenAI Literacy. In Extended Abstracts of CHI 2025. ACM. (Acceptance rate: 32%) 🔗
(Under Review) Ko, E. G., Nanayakkara, S., & Huff, E. W. “It reignited my ability to write”: Improving Older Adults’ GenAI Literacy Through a Chatbot-Based Intervention. In Proceedings of CHI 2026. ACM.
Building on this success, I am extending this work to parents of K–12 students, recognizing that true educational transformation through AI cannot happen without parents’ understanding of how AI is reshaping learning and their ability to guide children responsibly. I am currently developing a competency-based AI literacy framework grounded in the OECD AI Literacy Guidelines, helping parents build actionable strategies to strengthen their children’s AI fluency. My goal is to make AI literacy a human right, ensuring every community can navigate and shape the AI-driven world with confidence and care.
I design and study AI therapy systems that promote emotional well-being in older adults through reminiscence—a reflective process of recalling life stories. In collaboration with clinicians, I developed Remi, an AI chatbot trained on reminiscence therapy (RT). Through mixed methods with 28 older adults—including pre/post surveys, in-depth interviews, and psychiatrist transcript analyses—I found that Remi reduced loneliness and built trust in AI-based care. The study revealed key design principles for empathetic AI therapy, such as gradual relationship building, check-ins for mutual understanding, and sensitive handling of emotions.
(Under Review) Ko, E. G., Rasgon, A.*, Boddy, L.†, Pitman, S.†, Hong, J., Wu, C.†, Ding, Y., & Lee, M. “The book is mostly written on my life, and putting it together gives me pleasure and satisfaction”: Insights from Older Adults and Clinicians on Designing a Reminiscence Therapy Chatbot. In Proceedings of CHI 2026. ACM.
In partnership with MD Anderson Cancer Center, we are developing an AI-based platform for palliative care counseling education that helps clinicians practice empathetic communication during sensitive conversations—such as discussing worsening scan results or end-of-life timelines.
The platform combines realistic patient simulations, rubric-based evaluation, and gamified, learning science–informed feedback to strengthen empathy and communication skills. Traditional palliative counseling training is often limited by cost and the scarcity of specialized instructors; our AI-driven approach makes this critical education more accessible, scalable, and effective. By integrating empathy training into everyday clinical education, this work aims to ensure that every patient receives compassionate, quality care.
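To make "rubric-based evaluation combined with prompt engineering" concrete, one minimal approach is to serialize a rubric into the evaluation prompt itself so the model must score and justify each criterion. The rubric items, weights, and function names below are illustrative assumptions, not the platform's actual instrument.

```python
# Minimal sketch: embedding a rubric into an LLM evaluation prompt.
# RUBRIC contents and build_eval_prompt are hypothetical examples.

RUBRIC = {
    "acknowledges_emotion": "Names and validates the patient's feelings.",
    "avoids_jargon":        "Explains the scan result in plain language.",
    "invites_questions":    "Checks understanding and invites questions.",
}

def build_eval_prompt(patient_turn: str, trainee_reply: str) -> str:
    """Ask the model to score each criterion 0-2 with quoted evidence."""
    criteria = "\n".join(f"- {name}: {desc}" for name, desc in RUBRIC.items())
    return (
        "Score the clinician reply on each criterion from 0-2 and "
        "justify each score with a direct quote from the reply.\n"
        f"Criteria:\n{criteria}\n\n"
        f"Patient: {patient_turn}\n"
        f"Clinician: {trainee_reply}"
    )

p = build_eval_prompt(
    "The scans show the tumor has grown.",
    "I'm so sorry. Let's take a moment, and then I can walk you through what this means.",
)
```

Requiring quoted evidence for each score is a common prompt-engineering tactic for grounding feedback; scores could then be parsed and aggregated into the gamified feedback the platform provides.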
Effective communication is vital in healthcare—yet miscommunication contributes to nearly 80% of serious medical errors, with about 30% leading to preventable deaths. To address this, we are collaborating with Dell Medical School to develop an AI-supported clinical communication training platform that helps medical students and providers practice empathetic, evidence-based dialogue in realistic scenarios.
Because communication cannot be separated from clinical knowledge, the platform integrates communication skill-building directly into medical education. It provides personalized, AI-driven feedback and repeated practice opportunities, helping students internalize communication strategies essential for safe, compassionate care.