Their discussion comes as recent findings suggest that women are leaving the workforce at a record pace over the last year. Kanioura said that part of her responsibility as a female leader is to address the backlash against diversity, equity, and inclusion (DEI) principles and ensure that female leaders are “fully empowered.”
“We are thinking of how we can help the next generation of female leaders to stay in the workforce, fight for what they deserve today,” she said. “We have a big responsibility now.”
The panelists said a major part of their leadership roles is addressing the anxiety young people feel about the introduction of AI in their workplaces. An April Gallup study found that while 51% of Gen Zers in the U.S. use AI at least once a week, negative sentiment toward the technology has grown sharply. Thirty-one percent of Gen Zers reported outright anger toward AI, up from 22% last year.
“That’s some of the challenge that’s on us, as the leadership, to train new employees in the company so that they can use AI, and we can benefit from all the wonderful efficiencies that AI can bring,” Kim said. “But we also need to make sure that we keep what is appropriate for people to do.”
Shih said that younger people should be encouraged to engage with AI through education, and that the “anger” Gen Z feels toward AI is counterproductive as they seek jobs in a brutal market.
“They’re not using it. They’re purposely not learning, and that’s really not doing themselves and our society any favors,” Shih said. “We need to change the narrative and convince more young people to own their future, because AI isn’t inherently good or evil. It all depends on what we decide to do with it.”
The panelists acknowledged, however, that marginalized communities distrust AI due to the industry’s lack of inclusivity and the use of biased datasets. Shih believes that to encourage inclusivity, representation must be present at “every step” of AI model development.
“It’s paramount that there’s representation at each step, because what happens is the model will systematize whatever bias that is there…it creates a situation where the inequality compounds over time and so that is absolutely critical,” she said.
In the pharmaceutical industry, this comes down to datasets that represent people of all races, genders and backgrounds, Kim said—something that was not always true for clinical trials.
“Historically, it’s been very, very much skewed [to] Caucasian male for all diseases,” Kim said. “It is a key component of what we look at when we’re enrolling our clinical trials, so that the foundational data that is used to train all of our AI algorithms in terms of spotting disease, treating disease, etc., has representative data.”