Future proof your career with robust AI knowledge and skills!

You are needed at the AI table and The Inclusive AI Lab will show you how to step up, sit down, and speak up!

There are many roles for non-coders, non-developers, and non-data scientists in AI builds, AI policymaking, and AI governance.
Don't let the new addition of AI to the workplace undermine your existing skill set, voice, and contributions!

Be the team member who brings inclusion, ethics, and responsible AI to the forefront.
Your organization will thank you later!

Be the team member who serves as a bridge between the non-technical and technical teams.
Your manager will thank you later!

Be the team member who helps drive AI adoption for effectiveness and efficiency but does so slowly and steadily and, most importantly, without doing harm.
Your end users will be thankful for the product later!

The Inclusive AI Lab is your one-stop shop for all things AI and social impact. No math, no coding. 100% social science perspective.

The Inclusive AI Lab brings artificial intelligence (AI) literacy to the masses, primarily professionals and practitioners who are not technical, do not code, and have no data or computer science training. All trainings are designed with this audience in mind and focus on upskilling in AI from a concepts perspective, not math.

Run by award-winning trainer Dr. Emily Springer, this training gives you access to broad generalist AI content that still takes you deep inside the core issues, conundrums, and challenges of inclusive, responsible AI. All training assumes a diversity, equity, and inclusion (DEI) lens and takes seriously the role AI can, cannot, or should not play in efforts for social justice.

The Inclusive AI Lab is the training division of TechnoSocio Advisory, an independent, woman-run consultancy focused on inclusive AI for social impact. If you're interested in consulting services, such as AI advising (end user research, testing protocols, etc.), AI policy and strategy creation, risk mitigation planning, or career mentoring, please see TechnoSocio Advisory.

Professionals: Invest in AI literacy skills to ensure that your existing expertise stays relevant in AI workplaces. These trainings are priced for affordability, and you will receive a Certificate of Completion at the end of each training. If you're seeking to use professional development funds, consider adapting this template when asking your manager.

Organizations: Are you interested in upskilling teams, organization-wide training, and/or partner staff? Options include:

  1. On-demand training: Discounted rates for multiple seats.
  2. On-demand training + interactive workshops: Solidify staff learning with interactive workshops where staff will complete hands-on activities and have facilitated discussions. Learn by doing!
  3. Custom trainings, workshops, or lunch-and-learn series to fit your needs.

If you are interested in purchasing multiple training seats, discounts will apply. Please see the Training page and set up a call to discuss in more detail.

“As someone with little prior knowledge of big data or algorithms, the course was the perfect entry point for me into the world of AI. Emily made what felt inaccessible to grasp, very accessible. Her facilitation is superb; clear, energetic and engaging. I now have the algorithmic literacy to critically assess AI tools, the potential harm they can cause and the social inequalities they can reinforce. It also made me excited for the positive impact AI can have in the world of social impact!”

- Anahita Alexander-Sefre, UNFPA

(Ethical and Feminist AI Bootcamp participant 2024)

Meet your trainer
Dr. Emily Springer, one of 100 Brilliant Women in AI Ethics™ (2024), serves as an expert on UNESCO's Women4Ethical AI platform and founded TechnoSocio Advisory, an inclusive AI advisory company. She is an inclusive AI advisor and algorithmic literacy trainer for social impact professionals, focusing on education, health, agriculture, and other programming aimed at improved livelihoods, equity, equality, and social justice. She specializes in sociotechnical understandings of AI, end user research and testing, and AI strategy and risk mitigation.

Her consulting focuses on digital and gender issues in the agriculture and education sectors, and she now serves the Tech-Facilitated Gender-based Violence (TF GBV) team at UNFPA.

Previously, at a Microsoft partner, she built automation solutions for enterprise clients and, at World Bank IFC, led strategic planning for higher education digitalization.

As a sociologist, she has trained graduate students in Development Practice and Social Justice and undergraduates in Technology and Society and Globalization. As a member of the International Association of Algorithmic Auditors, she is eager to ensure AI benefits all peoples.

More trainings coming soon!

The Inclusive AI Lab will build training content under three areas:

Foundational AI trainings

These trainings focus on building the strongest AI foundation possible, giving you a flexible and robust base upon which to build.

Hot Topic short courses

What is tech-facilitated gender-based violence? What's RAG? What's red-teaming?

These trainings offer quick, detailed overviews of hot topics in the AI space, all tailored to social impact concerns and needs.

Sector-specific guidance

These trainings forefront sector-specific use cases, challenges, and best practices. Examples include: education, healthcare, agricultural development, and evaluation.

Let's start thinking critically about AI today. Responsible AI isn't something "out there" and vague; it's right here! So let's start using everyday practices to think about ethical and responsible use.

Embodied case study:
Is it a responsible practice to use genAI images, as I do, on websites?
Why or why not?

Arguments for: It's a great way to demonstrate what genAI can do in use. Although I've been disappointed by generic image generators reproducing biases, these artist-designed images seem pretty cool to me.
Arguments against: GenAI image generators are trained on stolen data and are therefore unethical. I agree they're trained on stolen data, just like text generators such as ChatGPT. It seems we need to find a balance between experimentation and adoption on the one hand and ethics on the other. As an end user, I could take my business to more ethically trained models, but I'm not sure where to find them at the moment. I'm proceeding with caution and feel theft concerns are somewhat addressed by purchasing the images.

Background: I paid for these images through Adobe Stock Images; I did not create them myself. According to Adobe policies, each genAI image creator is paid a royalty when a customer, like me, licenses the image, the same as for other content.

My Responsible AI to-do list:

  1. Build out a genAI watermark and add it to all genAI images for increased transparency.
  2. Research creator bios to better understand the reproduction of financial power, and potentially change who I license images from in the future.
  3. Research whether any publicly available LLM has been "fairly certified."
  4. Most importantly, be open to changing my mind!