Clarifying or Complicating?

Understanding Older Adults' Engagement with Real-World XAI in E-Commerce

Explainable AI (XAI) · Recommender Systems · Older Adults · Trust & Agency · Inclusive Design

What happens when AI explanations meant to build trust instead leave people confused, or even feeling watched? Older adults are increasingly active in digital marketplaces, yet most explainable AI (XAI) research focuses on younger, tech-savvy users. This project explored how older adults engage with the explainability features of NAVER Shopping, South Korea's largest e-commerce platform.

Project Overview

This project examined how older adults interact with explainable recommender systems in the live environment of NAVER Shopping. We focused on three deployed explanation types: global (system-level descriptions), local (item-level rationales), and user-model dashboards (editable preference profiles). NAVER Shopping was selected as the study site because it represents one of the few real-world deployments of explainable recommender features, offering a rare opportunity to investigate XAI in everyday use rather than in controlled prototypes.

[Figure: Explanation types in NAVER Shopping]
[Figure: Example of NAVER Shopping's global explanation of its recommender system]
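To make the distinction between the three explanation types concrete, here is a minimal sketch modeling them as a small discriminated union. It is purely illustrative: all type and field names are our own assumptions, and NAVER's actual (non-public) implementation may differ entirely.

```typescript
// Illustrative model of the three explanation types studied.
// All names and fields are hypothetical; NAVER's internal schema is not public.

/** Global: a system-level description of how recommendations work overall. */
interface GlobalExplanation {
  kind: "global";
  systemDescription: string; // e.g. "Recommendations reflect your browsing history"
}

/** Local: an item-level rationale attached to a single recommendation. */
interface LocalExplanation {
  kind: "local";
  itemId: string;
  rationale: string; // e.g. "Because you recently viewed hiking shoes"
}

/** User-model dashboard: an editable profile of inferred preferences. */
interface UserModelDashboard {
  kind: "dashboard";
  inferredInterests: string[]; // categories the system believes the user likes
  editable: boolean;           // whether users can add or remove interests
}

type Explanation = GlobalExplanation | LocalExplanation | UserModelDashboard;

// A renderer would branch on `kind` to present each explanation type.
function describe(e: Explanation): string {
  switch (e.kind) {
    case "global":
      return e.systemDescription;
    case "local":
      return `Why this item: ${e.rationale}`;
    case "dashboard":
      return `You seem interested in: ${e.inferredInterests.join(", ")}`;
  }
}
```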

Approach

We conducted a two-phase qualitative study with 20 older adults (aged 60+) who regularly shopped online. In Session 1, participants described their prior experiences with and perceptions of NAVER's personalized recommendations. In Session 2, they interacted directly with NAVER's live XAI features using a think-aloud protocol. All sessions were recorded, transcribed, and thematically analyzed to capture both convergence and divergence in how participants interpreted the explanations.

Results & Contributions

  • Explainability showed both benefits and risks for older adults. Explanations empowered participants to understand personalization but also introduced confusion, blind trust, or privacy concerns.
  • Three core tensions structured their engagement. Participants moved between awareness vs. confusion, informed trust vs. blind trust, and empowerment vs. surveillance as they navigated different explanation types.
  • Explanation types shaped outcomes differently. Local, behavior-based rationales often clarified personalization, while global, system-level descriptions sometimes fostered blind trust or skepticism. User-model dashboards gave participants a sense of agency but also heightened concerns about being monitored.
  • Design strategies followed from these patterns. We identified strategies such as using plain language in global explanations, highlighting concrete personalization cues in local rationales, and designing dashboards as participatory tools rather than static exposures of the system's inferences.
