The AI Ethical Ecosystem
- Andrew Powers
- Oct 8
- 8 min read
Updated: Nov 4
Why Start With Ethics?
AI is the most transformative technology of our lifetimes. At least so far.
Some of the smartest people working in AI (including Geoffrey Hinton, widely called the "Godfather of AI") think humans might use it to destroy ourselves and the planet. Others think it will usher in a golden age of human flourishing (for example, AI is already accelerating the discovery of breakthrough storage technology for renewable energy).
Here's the thing: they might both be right.
This technology is profoundly powerful, already capable of immense good and devastating harm. And the AI tools we have access to right now? This is the least powerful they are ever going to be. They are only getting more capable from here.
Before we ask "what can I do with AI?" we have to ask "what should I do with AI?" We need to think about what we're using it for, and why.
We can't come up with satisfactory and conclusive answers to all the ethical questions about AI, but we absolutely must wrestle with them.
That's why we start here.

If you're using or thinking about using AI tools, you have probably wondered: Is this okay? How do I use this responsibly? What could go wrong? And you have surely heard stories of AI use going terribly (or even just a little) wrong!
AI is complicated, and the ethical questions around it can feel overwhelming. That's why we created this framework using an ecosystem metaphor. We like metaphors because they help us relate to complicated ideas, and we like nature because we spend a lot of time there. But mostly, we chose an ecosystem approach because AI ethics isn't a simple checklist; it's a living, interconnected system where choices in one area ripple through everything else.
This framework won't give you perfect answers (ecosystems are messy that way), but it will hopefully give you better questions and a more nuanced way to think about what AI means for us.

The Roots: Purpose & Intent
Core Question: What is this AI designed to accomplish, and what am I trying to accomplish by using it this way?
This foundational layer examines both the AI system's intended purpose AND your specific intent in using it. Rather than asking whether an AI tool is "good" or "bad" in general, we focus on understanding what THIS tool was designed for and whether your particular use aligns with ethical values.
Every AI system has a designed purpose, but how you choose to use it matters just as much. AI tools consume resources with every interaction, and growing use carries downstream environmental and social costs. So the question becomes: Is this specific use worth those costs? Are you using AI to enhance human capabilities, or asking it to do things that undermine learning, creativity, or authentic human work?
Ask yourself:
Tool's Actual Purpose: What does this AI system claim to do (solve problems, provide information, create content, offer companionship), and more importantly, what is it optimized for: your benefit, engagement/retention, data collection, or business outcomes?
Your Intent: What are you specifically trying to accomplish with this interaction, and does using AI here align with your values around learning, creativity, human connection, and authentic work?
Value Alignment: Does this particular use enhance your capabilities and support human flourishing, or does it undermine learning, replace your thinking, or diminish skills you value?
Worth the Cost: Given that every AI interaction consumes energy and water while contributing to emissions, is this specific use valuable enough to justify those environmental and societal costs?
Who Benefits: When you use this tool, who actually gains the most: you through genuine value, the company through your data and engagement, or society through meaningful outcomes?

Roots in Practice: Purpose & Intent
The Roots layer asks: What is the purpose of this AI?
Before you even type a prompt, the most important question is: what is this AI for? OpenAI's founding mission was clear, but ten years later, things look different. This interactive explores what changed, why it matters, and what questions we should be asking.

The Understory: Development Ethics
Core Question: How was this AI created, and what should I know about who built it?
In a forest, the understory is where new growth emerges and takes shape. It's the formative layer that determines what will eventually reach the canopy.
Similarly, how AI systems are developed shapes both what they can do and what they represent. These development choices affect you in concrete ways: if an AI was trained to be overly agreeable, you'll get sycophantic responses that don't challenge your thinking. If bias wasn't addressed in training, it shows up in every interaction. If the system learned primarily from one demographic's writing, it struggles with others.
But development ethics also extend beyond performance to questions of sustainability across social, environmental, and economic dimensions. Whose data was used to train these systems, and were those people compensated? What labor practices were employed? What does the company actually prioritize, and how transparent are they about their methods? What are the resource costs of training and maintaining these systems?
The challenge is that much of this information is increasingly hard to find. Companies have become more secretive about their training data and practices. But understanding what you can about these origins helps you make more informed choices about which tools and companies to support.
Dig into:
Data & Copyright: Whose voices and perspectives are represented in the training data? Was this content used with permission and compensation?
Interaction Design: How is this AI designed to behave toward you? What kind of relationship is it trying to create, and whose goals does that serve?
Labor Practices: How does this company treat the workers who build and maintain their systems?
Company Values & Trade-offs: How does this company balance profit, safety, user privacy, and innovation? What do their practices reveal about these priorities?
Transparency: How open is this company about their methods and limitations?

Understory in Practice: Development Ethics
The understory layer asks: How was this AI created, and what should I know about who built it?
Most of us use AI tools without knowing much about their origins - and companies are increasingly secretive about their training data and practices. But understanding what we can about development ethics helps us make better choices about which tools to support and use.
Let's explore these questions through a real example: DeepSeek, a recent AI model that made headlines. Test your knowledge and learn what development ethics looks like in practice.

The Canopy: Using AI
Core Question: How do I use this AI responsibly?
The forest canopy is where trees meet the sky, where a lot of the forest's life happens. In rainforests, nearly 90% of animals live in this upper zone. In AI ethics, the canopy represents where AI systems meet the human world - the workplace, the classroom, the living room. This is where abstract technology becomes a daily tool and ethical principles become practical decisions.
Every time you decide whether to use AI for a work project, how transparent to be about AI assistance, or when to verify an AI-generated answer, you're operating in the canopy layer. Your choices about how to use these tools directly shape how AI impacts the people around you and the work you do.
Ask yourself:
Human Oversight: What level of critical review does this task require, and where are my unique insights, contextual knowledge, and values irreplaceable?
Context & Appropriateness: Just because AI can do something doesn't mean it should - does using it here actually serve your goals, or are there situations where human connection or effort matters more than efficiency?
Transparency & Verification: How accurate and reliable is this, and who needs to know I'm using AI?
Privacy & Data: What happens to sensitive information I input?
Fairness & Bias: AI trained on limited data can produce results of varying quality - is this tool well-suited for your specific context and the people you're working with?

Canopy in Practice: Using AI
I was working on the Canopy section of our AI Ethical Ecosystem and thought: "What even ARE ethics anyway?" I've got a limited Western view baked in; what if we explored other perspectives?
So I asked Claude to suggest some, and he (I see Claude as he/him) proposed Ubuntu, Confucian, Indigenous, and Utilitarian approaches, then offered to draft responses to some ethical dilemmas from each perspective. And here's where it gets interesting: even though Claude is really good, this was getting into ideas I knew almost nothing about, so I could not really judge the quality of his work.
So Claude wrote a letter to his other AI colleagues requesting their review. We sent it to ChatGPT, DeepSeek, Gemini, and Grok asking for a detailed critique. And wow, the feedback was voluminous and brutal on some points, especially about representing Indigenous perspectives responsibly.
What you're about to explore is the REVISED version. For each tradition's perspective, you can click to see what specific feedback shaped the revision. This is one example of what ethical AI use can look like: transparency, verification, willingness to revise, and humility about what you still don't know.
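If you ever wanted to script this kind of cross-model review rather than pasting the draft into each chat window, a minimal sketch might look like the following. The base URLs and model names are illustrative assumptions (check each provider's current documentation before relying on them); Gemini is omitted here because it ships its own SDK.

```python
# Sketch: send one draft to several models for critique.
# Base URLs and model names are illustrative assumptions -
# verify them against each provider's current documentation.
from openai import OpenAI

REVIEWERS = {
    "ChatGPT":  {"base_url": None, "model": "gpt-4o"},  # None = OpenAI default
    "DeepSeek": {"base_url": "https://api.deepseek.com", "model": "deepseek-chat"},
    "Grok":     {"base_url": "https://api.x.ai/v1", "model": "grok-beta"},
}

REVIEW_REQUEST = (
    "You are reviewing a draft that presents Ubuntu, Confucian, Indigenous, "
    "and Utilitarian responses to ethical dilemmas. Critique it in detail: "
    "flag misrepresentations, oversimplifications, and missing context."
)

def collect_critiques(draft: str, api_keys: dict[str, str]) -> dict[str, str]:
    """Ask each reviewer model for a detailed critique of the draft."""
    critiques = {}
    for name, cfg in REVIEWERS.items():
        client = OpenAI(api_key=api_keys[name], base_url=cfg["base_url"])
        response = client.chat.completions.create(
            model=cfg["model"],
            messages=[
                {"role": "system", "content": REVIEW_REQUEST},
                {"role": "user", "content": draft},
            ],
        )
        critiques[name] = response.choices[0].message.content
    return critiques
```

The point isn't the automation; it's the habit. Whether by script or by hand, sending AI-assisted work out for independent review (human or machine) is one concrete way to practice the verification the Canopy layer calls for.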

Ecosystem Services: Societal Impacts
Core Question: What might widespread AI adoption mean for society?
Just as forests generate effects far beyond their boundaries (cleaning air, filtering water, regulating climate), AI creates ripple effects throughout society. These impacts shift our economy, relationships, environmental sustainability, and ways of thinking, all at varying speeds. Some changes appear quickly, while others emerge gradually over time.
Here's what makes this moment particularly significant: AI isn't just a tool we choose to use; it's being embedded into every aspect of our lives and society, often without our explicit consent. It's in our phones, our browsers, our email, our search engines, our workplaces.
Our individual choices about AI use, combined with use by hundreds of millions of others across the globe, will add up to broader social, economic, and environmental transformations. The more we understand these wider effects, the better we can make choices that contribute to positive change and anticipate how our professions, communities, and planet might evolve.
Ponder these:
Economic & Professional: Everyone's talking about AI's impact on jobs - but when new jobs don't appear in the same places old ones disappear, who bears responsibility for ensuring this transition doesn't devastate entire communities and demographics?
Social & Cultural: As AI mediates more of our interactions and becomes our go-to for advice, creativity, and problem-solving, what happens to the human skills and relationships that develop through struggling together, learning from each other, and navigating challenges without a perfect answer available?
Equity & Access: How do we prevent AI from widening societal divides when 2.6 billion people remain offline while others access cutting-edge AI for education, healthcare, and economic advancement?
Environmental: Data centers are already estimated to produce more emissions than the entire aviation industry, and AI is driving their rapid growth. Is what we're collectively getting from AI worth the environmental costs, especially when benefits and costs fall on different communities?

Ecosystem Services in Practice: Societal Impacts
One of the most pressing concerns we hear about AI is its electricity consumption and water use. We've created a calculator below to help you explore this, but first, some important context:
The technology evolves rapidly. Models are becoming more efficient: Google reports, for example, that a Gemini text prompt now uses roughly one thirty-third of the energy it did just a year ago. Individual text queries consume relatively modest amounts of energy.
At the same time, usage is exploding globally, and newer applications are dramatically more energy-intensive. Video generation is a prime example: generating a five-second video clip requires the equivalent of running a microwave for over an hour. This is brand new territory with impacts we're only beginning to understand.
Here's the challenge: estimating AI's energy consumption is difficult. Companies don't fully disclose their data, models vary widely in efficiency, and factors like data center location and cooling systems significantly affect the final numbers. What you'll see in the calculator below represents ballpark estimates based on available research, not precise measurements. Think of these figures as helpful for understanding relative scale and making informed choices, rather than exact science.
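For a feel of the arithmetic behind the calculator, here is a rough sketch. Every figure in it is a ballpark assumption drawn from the ranges discussed above (published estimates vary widely), so treat the output as relative scale rather than measurement.

```python
# Back-of-envelope sketch of the calculator's arithmetic.
# All per-interaction figures are rough, illustrative assumptions;
# treat results as relative scale, not precise measurements.

ESTIMATES_WH = {              # energy per interaction, in watt-hours
    "text_prompt": 0.3,       # a typical short text query
    "image_generation": 3.0,  # one generated image
    "video_5s": 1100.0,       # ~ a 1,100 W microwave running for an hour
}
WATER_ML_PER_WH = 0.5         # rough data-center cooling water per watt-hour

def weekly_footprint(counts: dict[str, int]) -> tuple[float, float]:
    """Return (energy in kWh, cooling water in liters) for a week of use."""
    wh = sum(ESTIMATES_WH[kind] * n for kind, n in counts.items())
    return wh / 1000, wh * WATER_ML_PER_WH / 1000

energy_kwh, water_l = weekly_footprint(
    {"text_prompt": 100, "image_generation": 10, "video_5s": 1}
)
print(f"~{energy_kwh:.2f} kWh and ~{water_l:.1f} L of water this week")
```

Notice how a single short video swamps a hundred text prompts - that lopsidedness, more than any one number, is what the calculator is meant to convey.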
