By: Husam Yaghi
In an era dominated by AI and technology, decisions are no longer just about functionality—they’re about ethics. From AI algorithms deciding fairness in hiring to social media platforms navigating loyalty and liberty, technology is increasingly intertwined with morality.
But how do we ensure that technology reflects our values? Enter the 27-step coaching framework, inspired by Moral Foundations Theory. This framework offers a structured way to explore your values, align actions with beliefs, and create ethical, scalable solutions, whether you're an individual, a leader, or an AI developer.
What Are the Six Moral Foundations?
Moral Foundations Theory identifies six core principles that shape human morality:
- Care/Harm: Protecting others and minimizing suffering.
- Fairness/Cheating: Ensuring justice and equality.
- Loyalty/Betrayal: Valuing commitment to groups or relationships.
- Authority/Subversion: Respecting tradition and hierarchy, or challenging them.
- Sanctity/Degradation: Preserving what is sacred and avoiding pollution or corruption.
- Liberty/Oppression: Defending freedom and fighting tyranny.
These values don’t just apply to people—they’re also critical for designing and deploying ethical AI systems.
Phase 1: Awareness – Discovering Values in Tech
The first step is understanding how these moral foundations shape your personal values—and how they influence the tools and systems you create or use.
- Care/Harm Example: How does AI in healthcare prioritize care? Consider how AI chatbots provide mental health support, showing compassion at scale.
- Fairness/Cheating Example: Reflect on AI bias—like unfair outcomes in facial recognition software. How can fairness be built into these systems?
- Loyalty/Betrayal Example: What role does loyalty play in brand communities, like tech companies fostering loyalty among users through ecosystems?
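The fairness point above can be made concrete. One common first check for bias in a screening system is demographic parity: comparing selection rates across groups. Here is a minimal sketch of that check; the group labels, outcomes, and function names are hypothetical, invented purely for illustration.

```python
# Illustrative sketch: a demographic-parity check on hypothetical
# screening decisions. All data and names are invented for the example.
from collections import defaultdict

def selection_rates(records):
    """Return the fraction of positive outcomes per group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, selected in records:
        totals[group] += 1
        positives[group] += int(selected)
    return {g: positives[g] / totals[g] for g in totals}

def demographic_parity_gap(records):
    """Largest difference in selection rate between any two groups."""
    rates = selection_rates(records)
    return max(rates.values()) - min(rates.values())

# Hypothetical hiring-screen outcomes: (group, was_shortlisted)
decisions = [("A", True), ("A", True), ("A", False), ("A", True),
             ("B", True), ("B", False), ("B", False), ("B", False)]

print(selection_rates(decisions))         # {'A': 0.75, 'B': 0.25}
print(demographic_parity_gap(decisions))  # 0.5
```

A large gap does not prove unfairness on its own, but it is a signal worth investigating; in practice, teams combine several such metrics rather than relying on one.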
Phase 2: Application – Ethics in Action
This phase focuses on applying moral foundations to solve real-world ethical dilemmas in technology.
- Authority/Subversion Example: How do we balance authority with innovation? For instance, how should governments regulate AI without stifling creativity?
- Sanctity/Degradation Example: Think about data privacy. For many, personal data is sacred—how do we avoid degrading trust through misuse?
- Liberty/Oppression Example: How do we ensure AI doesn’t reinforce oppression, such as through discriminatory algorithms?
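The data-privacy point above also translates into practice. One small way to treat personal data as sacred is to pseudonymize direct identifiers before analysis. The sketch below uses salted SHA-256 hashing; the salt value and record fields are assumptions made up for the example.

```python
# Illustrative sketch: pseudonymizing a direct identifier before analysis.
# The salt and record fields are hypothetical examples.
import hashlib

SALT = b"example-salt"  # in practice, store the salt securely, never in code

def pseudonymize(identifier: str) -> str:
    """Replace a direct identifier with a stable, non-reversible token."""
    digest = hashlib.sha256(SALT + identifier.encode("utf-8"))
    return digest.hexdigest()[:16]

record = {"email": "user@example.com", "age_band": "25-34"}
safe_record = {
    "user_token": pseudonymize(record["email"]),  # stable token, no raw email
    "age_band": record["age_band"],
}
print(safe_record)
```

Note that pseudonymization is weaker than anonymization: the same input always yields the same token, which preserves analytical utility but still requires the data to be handled as personal data under most privacy regimes.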
Phase 3: Alignment – Designing Ethical Futures
In the final phase, we align values with actions and create systems that reflect shared ethics.
- Aligning Personal Values Example: If you’re an AI developer, which moral foundations guide your work? Do you prioritize fairness over authority?
- Building Moral Bridges Example: How can tech organizations connect with users who have different ethical concerns, like balancing freedom of speech (liberty) with harm prevention (care)?
- Collective Action Example: Imagine leveraging AI for a societal good, like using predictive models to address climate change while respecting liberty and fairness.
Why This Matters
Technology is only as ethical as the people who design and deploy it. By exploring the six moral foundations—Care, Fairness, Loyalty, Authority, Sanctity, and Liberty—you can ensure that your decisions, whether personal or technological, reflect your values.
As we integrate AI into society, these foundations can guide us toward creating tools that are not just powerful, but also just, compassionate, and equitable.
Ready to explore your moral compass? Start with Care/Harm: How can your next project prioritize empathy and reduce harm? Let’s build a tech-driven future grounded in values.
Disclaimer: “This blog post was researched and written with the assistance of artificial intelligence tools.”