Pixels and Perception: Traditional Kenyan attire through a distorted lens
September 09, 2025
Introduction
As artificial intelligence becomes more embedded in how we create, communicate, and imagine the world, its visual outputs are shaping how identities are seen and remembered. Pixels and Perception is a new research and creative series by Ai Kenya, exploring how AI systems portray African life, culture, and identity. This first edition, “Traditional Kenyan Attire Through a Distorted Lens,” focuses on a narrow but revealing question: How accurately do text-to-image AI tools visualize traditional Kenyan communities when prompted? What we found signals deeper biases in the datasets and systems shaping tomorrow’s cultural memory, and sets the stage for future studies across other African contexts.
About the Study
Over one week (starting June 27, 2025), we tested how ten widely used generative AI platforms responded to the same set of prompts asking for culturally accurate images of individuals from five Kenyan ethnic groups: Kikuyu, Maasai, Luo, Luhya, and Kamba.
Each prompt requested a realistic full-body photo of a male or female individual, dressed in traditional attire, facing the camera in natural lighting. Four images were generated per prompt, per platform. The platforms tested include ChatGPT, Midjourney, Google Gemini, Microsoft Copilot, Sora AI, Leonardo AI, Canva, Meta AI, and Grok.
Evaluation Criteria
Each image was assessed on six criteria:
- Cultural accuracy
- Realism
- Attention to detail (clothing, ornaments, setting)
- Diversity (representation across gender, age, features)
- Body deformation
- Bias
Scores were tallied on a 1 to 5 scale per criterion. To view the full catalogue of generated images, prompts used, and individual scores, please visit this link
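The scoring scheme described above (six criteria, each rated 1 to 5 per image, with four images generated per prompt per platform) can be sketched in a few lines. Note this is an illustrative sketch only: the criterion keys, function names, and example ratings below are assumptions for demonstration, not the study's actual tooling or data.

```python
# Illustrative sketch of the 1-to-5 scoring scheme; criterion names and
# example ratings are placeholders, not the study's actual data.
from statistics import mean

CRITERIA = [
    "cultural_accuracy",
    "realism",
    "attention_to_detail",
    "diversity",
    "body_deformation",
    "bias",
]

def score_image(ratings: dict) -> float:
    """Average one image's 1-to-5 ratings across all six criteria."""
    for name in CRITERIA:
        if not 1 <= ratings[name] <= 5:
            raise ValueError(f"{name} must be on the 1-to-5 scale")
    return mean(ratings[name] for name in CRITERIA)

def platform_score(image_ratings: list) -> float:
    """Mean image score across the four images generated per prompt."""
    return mean(score_image(r) for r in image_ratings)

# Hypothetical ratings for one generated image.
example = {
    "cultural_accuracy": 3,
    "realism": 5,
    "attention_to_detail": 4,
    "diversity": 3,
    "body_deformation": 4,
    "bias": 4,
}
overall = score_image(example)  # (3 + 5 + 4 + 3 + 4 + 4) / 6
```

Averaging per criterion first, then per platform, keeps each community's score comparable across tools regardless of how many images a platform returned.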
Key insights
Maasai depictions outperformed all others

Models consistently performed best when generating images of the Maasai community. This is likely due to the global documentation of Maasai attire and culture, which gives AI models more training data to reference.
Kikuyu images scored decently, but with misattribution

Many platforms confused Kikuyu attire with Maasai elements (e.g., beaded accessories). While realism was generally high, cultural accuracy was moderate across all tools.
Luo, Luhya, and Kamba depictions scored poorly
Images generated for these communities often defaulted to Maasai-style clothing or included inaccurate cultural references. This highlights a significant data gap in the training of these AI models when it comes to Kenya’s diverse cultural landscape.

The rest of the images and findings of this study can be viewed here.
Top performers
Google Gemini 2.5 consistently scored high across realism and detail. ChatGPT 4.5 and Microsoft Copilot also produced balanced, high-fidelity outputs. Midjourney and Leonardo delivered visually compelling outputs but were less consistent in cultural accuracy.
Why this matters
Representation in AI-generated imagery reflects the values, visibility, and voice of different communities. When AI tools flatten or misattribute cultural identities, they risk erasing or stereotyping underrepresented groups. These tools are increasingly used in education, marketing, and storytelling, making accuracy not just a technical issue but a cultural one.
Why the Maasai dominance?
One striking pattern was the consistent accuracy and richness in AI-generated images depicting Maasai identity, compared to other Kenyan ethnic groups. This is not coincidental.
The Maasai are among the most globally photographed and visually represented communities in Kenya. Their distinctive attire, ceremonies, and semi-nomadic lifestyle have made them a popular subject for tourists, anthropologists, NGOs, and documentary photographers alike. This has led to an overrepresentation of Maasai visual data in global media and digital archives, much of which ends up in the training datasets used by generative AI systems.
Understanding Maasai dominance in AI-generated traditional Kenyan imagery
One striking trend from the first edition of this study was the frequent depiction of Maasai cultural elements in the generated images, even when prompts did not explicitly mention the Maasai. This pattern is not a coincidence, but a result of several interconnected historical, social, and technological factors:
- Overrepresentation in global visual archives
The Maasai are one of the most photographed communities in Kenya and East Africa. From colonial-era photography to modern travel blogs, tourism marketing, and international art exhibitions, the Maasai have been positioned, sometimes problematically, as the default symbol of “authentic” Kenyan identity. Their vibrant attire, distinct rituals, and nomadic lifestyle are visually compelling and have been repeatedly captured and circulated, especially by non-African creators.
- Influence of colonial and ethnographic photography
In the colonial era, the Maasai were a central focus for photographers and anthropologists seeking to document “exotic” or “noble savage” representations of African people. This imagery was not only widely disseminated in archives, museums, and books but also used to shape the Western imagination of Africa. These images are often part of the datasets used to train modern generative AI models, many of which lack meaningful data diversity.
- Popularity in tourism and media
Modern tourism in Kenya still heavily features the Maasai, particularly in safari advertising and cultural experiences. As a result, Maasai imagery is over-indexed on the internet, especially on visually rich platforms like Instagram, Pinterest, Getty Images, and travel publications. AI models that scrape such platforms as part of their training data inevitably absorb this visual bias.
- Visual simplicity and recognizability
The visual language of Maasai culture (red shukas, beadwork, spears, and jumping dances) is immediately recognizable and easy for AI models to anchor to, even with vague prompts like “African warrior” or “traditional tribe in Kenya.” This makes models more likely to default to Maasai representations when the input lacks specificity.
Risks and Implications
- Cultural flattening: The continued AI amplification of Maasai imagery risks flattening Kenya’s vast cultural diversity into a single, monolithic identity.
- Data bias reinforcement: AI models perpetuate and reinforce biases present in their training datasets. Without intervention, other communities remain underrepresented or misrepresented.
- Loss of contextual nuance: Important distinctions between tribes, even those with visual similarities (e.g., Samburu and Maasai), are often ignored, leading to inaccuracies and cultural erasure.
What can be done?
To shift toward more balanced representation across Kenya’s 40+ ethnic communities, several interventions are possible:
- Commissioning new datasets that capture underrepresented communities in authentic cultural contexts.
- Supporting local photographers, archivists, and artists to document diverse traditions and ensure their work is open-source or included in AI training corpora.
- Collaborating with AI companies to review and diversify the data pipelines used to train their models, particularly when generating region-specific content.
- Building tools that allow African creators to fine-tune or customize AI models with culturally grounded datasets.
- Advocating for cultural bias audits on large AI systems used in visual storytelling, especially when those systems are deployed in African markets.
Stay tuned for more insights as we continue to expand this study across other cultural domains, professions, and regional identities.
– Study by Alfred Ongere and Kevin Mwangi