September 2024
The Digital Health Institute for Transformation (DHIT) team hosted the most recent edition of our monthly Digital Health Happy Hour series at the Junction, the premier gathering place and innovation catalyst where purposeful connections drive social and economic growth for both UNC-Chapel Hill and the state. The Junction is also home to our event sponsor, Innovate Carolina, UNC-Chapel Hill’s central team for innovation, entrepreneurship, and economic development. We were joined by a lively audience of digital health enthusiasts, AI and data champions, and next-gen technologists, convening around another pressing topic in digital health. The main event of the evening was a fireside chat between emcee Michael Levy, DHIT Chief Executive Officer, and our featured speaker, Phaedra Boinodiris, Global Leader for Trustworthy AI at IBM Consulting and author of the insightful new book AI for the Rest of Us. Together, they discussed the intersection of humanity and the responsible deployment of emerging technologies.
Check out the highlights below!
It’s about fostering smarter conversations and making sure we’re augmenting human intelligence responsibly.
Q: Phaedra, you have dedicated more than 25 years to inclusion and technology, starting in the video game industry, where you fought for women’s representation in tech and integrated emerging AI into games. Can you describe the moment at IBM when you felt something wasn’t right about how AI was being managed, and how that led you to your current focus?
A: Despite the best intentions of many people using artificial intelligence, there have been significant unintended consequences stemming from a lack of literacy and proper safeguards. Watching this unfold in the news, I realized the problem was more than technical; it was socio-technical. Socio-technical issues blend culture and technology, focusing on how cultural factors shape technological outcomes. Meeting any socio-technical challenge requires organizational culture, AI governance processes, and the right tools for responsible AI creation, and of these, changing organizational culture is the hardest.
Unfortunately, educational institutions often overlook this critical component, focusing on technical skills rather than on socio-technical challenges. This gap inspired me to raise awareness and dedicate my efforts to addressing these holistic challenges in AI development.
Q: In today’s world, where we often over-index on the ‘shininess’ of digital transformation and large-scale tech projects, how do you convince leaders, like those at IBM Consulting, that we need to get the ‘people’ aspect right? What does that look like in practice?
A: The biggest challenge I see is that many well-meaning clients don’t recognize the risks of AI, thinking good intentions are enough. For example, in Michigan, an AI model meant to predict welfare fraud had an accuracy of less than 10%, wrongly accusing disadvantaged families. Similarly, in Spain, an AI model for predicting domestic abuse risk failed, leading to tragic outcomes. These are only two examples of a failure to recognize the importance of understanding the socio-technical aspects of AI—how people, processes, and tools interact. We need to involve domain experts, like social workers, to ensure we’re using the right data and have robust governance. It’s about fostering smarter conversations and making sure we’re augmenting human intelligence responsibly.
Q: From a societal perspective, I imagine you’re not entirely happy with our progress in data literacy and understanding. How do you feel about where we are in terms of citizens’ literacy and capability with data, and their awareness of its implications?
A: Honestly, there’s a lot of work to be done, but there are also signs of hope. For instance, I recently attended Data Science Day at UNC-Chapel Hill’s new School of Data Science and Society, which is doing great work in fostering multidisciplinary programs. Additionally, I volunteer with the Girl Scouts on AI awareness. It’s inspiring to see how aware and thoughtful these young girls are about AI’s benefits and pitfalls.
Moreover, several states are establishing centers of excellence for responsible AI. California and Texas, for example, are forming consortia that include government agencies, private industry, and nonprofits. These centers aim to influence public policy, education curricula from K-12 through higher education, and best practices for AI adoption.
I’m hopeful we’ll see similar initiatives here in North Carolina. There’s growing interest from our state’s CTO, the legislature, academic institutions, and nonprofits like NC Tech. It’s an exciting time, and I’m optimistic about our progress in responsible AI.
Q: For those in the audience who care about, work with, and leverage data in their lives, what are the risks of going all in on data? Conversely, what are the risks of not doing so?
A: Data is what powers artificial intelligence, of course. My favorite definition of data is that it’s an artifact of the human experience: we create and generate it. However, it’s essential to recognize that data is inherently biased, reflecting more than 180 human biases. That makes AI a mirror that can show us those biases, giving us a chance to evaluate whether AI aligns with our values. If it does, we should be transparent about our data choices and approaches. If it doesn’t, we should treat it as an opportunity to make changes.
Take, for example, the aforementioned situation in Spain, where a lack of understanding of the context behind the data used in an AI model led to severe consequences. The Māori in New Zealand offer a powerful contrast: they approached AI development by ensuring their models reflected their cultural principles and values, treating data as sacred and handling it with deep care.
This approach can guide organizations in deciding what they want their relationship with AI to look like. At IBM, we emphasize AI models that reflect kindness, a growth mindset, agency, and data sovereignty, alongside transparency and robustness against adversaries. It’s about ensuring technology aligns with and respects human values, which ultimately builds trust and leads to more responsible use of AI.
Q: Considering we need context to train AI models, and the best context comes from our own data, do you think we will eventually train personal AI models tailored to ourselves? Will data become a public utility, traded like currency? How quickly do you see this happening?
A: Currently, a small, homogeneous group decides which data sets are used to train AI models. For AI to truly reflect our diverse communities, we need a broader range of voices, including people with different lived experiences. That means deliberately creating space for new people at the table. Involving groups like library scientists, who are adept at curating data and judging its trustworthiness, can help democratize AI.
Q: Absolutely. Librarians are the original data custodians. So ultimately, trust is the foundation of this process. It’s the trust in oneself, in partners, and across the ecosystem—private, public, community, and home. Trust seems scarce today. How do we build it?
A: Building trust starts with transparency. We need to show our work in AI: where the data comes from, who is accountable, how the model performs compared to humans, and how often it’s audited. This traceability, or data lineage, is key. We must also educate the next generation to be critical consumers of AI, starting in middle and high school. By preparing them to ask probing questions, we ensure a more informed and equitable future for everyone.
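That checklist maps neatly onto a machine-readable record. As a purely illustrative sketch (the ModelFactSheet class and every field name below are hypothetical, not an IBM or DHIT artifact), a minimal data-lineage record might look like this in Python:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ModelFactSheet:
    """Hypothetical traceability record for a deployed AI model.

    Captures the questions raised above: where the data comes from,
    who is accountable, how the model compares to a human baseline,
    and how often it is audited.
    """
    model_name: str
    data_sources: list[str]          # provenance: where the data comes from
    accountable_owner: str           # a named team or person, not "the algorithm"
    model_accuracy: float            # measured performance on held-out data
    human_baseline_accuracy: float   # the human benchmark it is compared against
    audit_interval_days: int         # how often the model must be re-audited
    last_audited: date

    def outperforms_humans(self) -> bool:
        return self.model_accuracy > self.human_baseline_accuracy

    def audit_overdue(self, today: date) -> bool:
        return (today - self.last_audited).days > self.audit_interval_days

# Example: a record for a made-up eligibility-screening model.
sheet = ModelFactSheet(
    model_name="eligibility-screener-v2",
    data_sources=["state_claims_2018_2023", "caseworker_notes_deidentified"],
    accountable_owner="Benefits Integrity Team",
    model_accuracy=0.81,
    human_baseline_accuracy=0.78,
    audit_interval_days=90,
    last_audited=date(2024, 6, 1),
)
print(sheet.outperforms_humans())              # True
print(sheet.audit_overdue(date(2024, 9, 15)))  # True: more than 90 days have passed
```

Documenting models this way echoes the FactSheets work from IBM Research, which proposes publishing provenance and accountability details alongside the model itself.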
Many thanks to Phaedra for taking the time to share her deep knowledge and expertise with us. To learn more about her work and deepen your understanding of how AI can better represent the communities it is intended to serve, please consider purchasing a copy of her book, AI for the Rest of Us.
And much gratitude to Innovate Carolina and the Junction for their generous donation of the event space and for sponsoring the evening. Finally, thank you to our ecosystem of partners and participants for continuing to convene around shared visions of local and global digital transformation.
Join us for our next happy hour in the Google Lounge at American Underground on Tuesday, October 22, 2024, at 5:30 PM to hear from Web Golinkin, a visionary healthcare leader, entrepreneur, and author, about the paradigm shift in how longevity is approached: focusing on proactive prevention and increasing healthspan (quality of life) rather than simply extending lifespan.
For more information about DHIT’s mission, events, programs, and partnership opportunities, please subscribe to our newsletter.
Talk soon!