Hacking Back against Bias


Busting Bias and Building Inclusion with AI

How 125 EPAMers Are Hacking the Way Forward

February 21, 2024
by Ken Gordon

There’s a reason—several reasons, in fact—for the exclamation point at the tail of the recent EPAM event Unveiling Bias: Forging Inclusive AI Solutions for Tomorrow Hackathon! First: AI is, as you might have read, a fascinating technology with massive business and social implications. This was, in fact, the first time our DIAL platform was ever used in a hackathon, and the first time some participants had ever tried it. Second: Unveiling Bias, which ran from December 7 to February 13, drew an avid, varied group of participants from every corner of our global organization. This group included people in “very senior roles but also junior roles,” says Karen Suarez, EPAM Communications Manager and Unveiling Bias organizer, as well as business analysts, project managers, production people, and consultants. “Not all of them are engineers, and that diversity of perspectives fosters innovation.” Third: The hackathon’s 125 participants, on 19 teams, from 32 countries, gathered to hack against bias and for the noble cause of diversity.

The Big Questions

Organized by the Women at EPAM Employee Group (EEG), Unveiling Bias showed how serious our organization is about taking a human-centered, cross-functional approach to developing AI—for employees, clients, and humanity—today and going forward. The hackathon set the tone by daring participants to ask themselves some important questions such as:

• How can we forge AI solutions that are truly inclusive?

and

• How can AI technology itself be harnessed to champion diversity?

Both questions pushed our teams, hard, to produce innovative prototypes—responses that were not just real but relevant, resonant, and revolutionary. After the event, we explored the method and meaning of Unveiling Bias, speaking with some of the event’s key players.

Making the Fight against Bias Real

Why bias? The fact is, AI is rife with it. “If you type ‘CEO’ into Midjourney,” says Jie Li, Head of Research & Insights in the Netherlands and an Unveiling Bias judge, “it will return an image of a white male.” Not the most representative result. Similarly, judge Kate Pretkel, our VP of People Programs, notes that prompting AI to create an image of “IT professionals on the bench” produced a predictable picture of young men.

Unveiling Bias aimed to create a different, more diverse AI environment. One of the ideas from the hackathon, says Pretkel, involved using AI to parse resumes to ensure that age, name, “all the characteristics that we should not be keeping in mind when making hiring decisions” don’t prejudice managers during the employment process.
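The resume-screening idea Pretkel describes can be sketched in a few lines. The field names, redaction rule, and function below are illustrative assumptions, not the hackathon team’s actual implementation—a production version would rely on an AI model to extract and classify fields rather than a fixed list.

```python
# Hypothetical sketch: redact characteristics that should not influence
# hiring decisions before a parsed resume reaches a reviewer.
# The field list is an assumption for illustration only.
SENSITIVE_FIELDS = {"name", "age", "date_of_birth", "gender", "nationality", "photo"}

def anonymize_resume(resume: dict) -> dict:
    """Return a copy of the parsed resume with sensitive fields redacted."""
    return {
        key: ("[REDACTED]" if key.lower() in SENSITIVE_FIELDS else value)
        for key, value in resume.items()
    }

parsed = {
    "name": "Jane Doe",
    "age": 42,
    "skills": ["Python", "SQL"],
    "experience": "10 years in data engineering",
}
print(anonymize_resume(parsed))
# Skills and experience pass through; identifying fields are masked.
```

The design choice here is redaction at the parsing layer, so downstream reviewers and models never see the sensitive attributes at all.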

Li pointed to another Unveiling Bias project. Suppose your boss or mentor wants to write a paragraph of feedback. She says that “AI can detect the bias in your sentences,” which is necessary “because we all have different cultures.” Imagine, says Li, wanting to write feedback to an African colleague but not understanding the local culture well enough to communicate effectively. “This tool can help you understand the hidden biases in such feedback and give you helpful alternative suggestions.”
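The feedback tool Li describes can be approximated with a minimal sketch. The phrase list and suggestions below are illustrative assumptions; the actual hackathon project used an LLM to detect bias, not a hard-coded lookup like this.

```python
# Hypothetical sketch: flag potentially biased wording in written feedback
# and offer a neutral alternative. The word list is an assumption for
# illustration; a real tool would use an LLM or trained classifier.
SUGGESTIONS = {
    "aggressive": "assertive",
    "bossy": "decisive",
    "emotional": "passionate",
}

def review_feedback(text: str) -> list[tuple[str, str]]:
    """Return (flagged word, suggested alternative) pairs found in text."""
    return [
        (word, SUGGESTIONS[word])
        for word in text.lower().split()
        if word in SUGGESTIONS
    ]

print(review_feedback("She was bossy and emotional in meetings"))
# Flags "bossy" and "emotional" with neutral alternatives.
```

Surfacing the suggestion next to the flagged word—rather than rewriting the text automatically—keeps the human author in control, which matches the "make biases visible to ourselves" framing below.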

An instrument that makes our biases visible to ourselves would be very useful, particularly right now, when we’ve become more attuned to the importance of diversity and inclusion.

A Hackathon as Responsible as It Is Diverse

The energy behind Unveiling Bias can be felt in the organizers’ and participants’ true sense of urgency in working with AI responsibly. “We need to be part of the conversation to ensure that whatever is produced, and how our lives are going to be affected, represents who we are, what our beliefs are, what our needs are,” says Alexandra Jorge, Director of Experience Consulting in the UK and core member of the Unveiling Bias team.

So, we asked about diversity in the context of responsibility. At what points do the two intersect? At what points do they diverge?

“Intentionally designing AI systems to serve a diverse group of people is a prerequisite to calling it responsible,” says Martin Lopatka, Director of Analytics Consulting at EPAM and a hackathon judge. “An AI system that only meets the needs of a very specific demographic of individuals, or even more specifically, a set of particular individuals’ use cases, is by definition not responsible, unless that’s part of the design.”

Hackathon instigator and supporter Alexandra Diening, Senior Director of Research and Insight, EMEA, seconds that concept: “If we define AI just by one group, with one perspective and one set of needs, it will be by default discriminating against the others by not serving them.”

Diening adds that the whole project is about “planting an idea into the right ground” and working with the right people to “grow it majestically.”

Suarez notes that they were intentional in ensuring that the hackathon’s mentors and judges represented different perspectives as well. “So we have someone from the EngX organization. We have experts from data, experts from AI. We have people from the legal side because we were considering ethical or responsible AI.”

It's a case of being, intentionally, all over the place.

What Happens after Bias Is Unveiled

The organizers of Unveiling Bias are intent on ensuring the spirit of the hackathon will continue. Victoria Morrison, Director, Experience Consulting, insists that EPAM will incorporate the ideas from the hackathon—not just the prizewinners but all the worthy concepts—into various other projects and parts of the business. Morrison says: “This hackathon has produced some exceptional thinking that could create real impact for our clients and their customers, and positive change for our own teams inside EPAM. Our plan is to harness our integrated consulting and engineering talent to make these POCs real.”

Amazingly, the first-place project, an LLM-based app that mitigates bias in corporate culture and is described by the winning team as an “inclusivity ‘prosthetic’ that anyone can use to learn and align with common policies,” has already been shared with two interested EPAM clients.

Unveiling Bias ideas have traveled swiftly to the top of our company’s org chart. Pretkel says she is “having all of these conversations with the executive team about how AI will change the world, how AI will change the business of EPAM,” and that many of the ideas presented at the hackathon suggest “the possibility of connecting these dots between these teams and their ideas and the proper teams already doing something or looking for a solution at EPAM.”

Lopatka says the initiative will help our organization “do better in terms of our internal education and upskilling availability and create those opportunities for folks from across different organizational roles to participate” in future AI projects.

Jorge, for her part, hopes that the hackathon “incentivizes our colleagues to step out of their comfort zone, step out of their day-to-day teams, and start looking at the connections that are possible” with AI.

We get the feeling, a strong one, that Unveiling Bias will highlight a world of possibilities. The positive experience and outcomes make us think that other organizations might want to follow suit and arrange for their own open-ended, human-centered, bias-busting hackathons. It's an outstanding way to keep your people engaged and your mind open!

filed in: artificial intelligence, prototyping, digital design, employee experience, customer experience, social innovation, integrated talent ecosystem

About the Author