When "Do No Harm" meets AI.
How the medical principle collides with the messy reality of artificial intelligence.
We like to believe new technology can be steered toward the public good. AI is no different. There are real examples that sound promising: using machine learning to spot illegal mining in the Amazon, predicting floods and extreme weather to protect communities, or tracking early disease outbreaks before they spread. These aren’t just hypotheticals. They’ve saved lives.
And we’ve had innovations like Apple’s Siri, Amazon Alexa, ChatGPT, Claude, and Notion AI that have captured the attention of MANY (like, all of us probably). People love that these tools can take tedious tasks and spit out something usable within seconds. In a society where we value results, AI not only gives us results, it gives them to us fast.
But if we take the principle of “do no harm” seriously, the picture gets complicated.
For every well-meaning AI project, there’s a counterpart that causes harm, sometimes in ways no one anticipated:
Predictive policing that reinforces racial bias instead of preventing crime.
Hiring algorithms that quietly discriminate because they were trained on biased data.
Crop monitoring systems that help some farmers but also lock smallholders into expensive, proprietary platforms.
The harm isn’t always obvious right away. It can take years before we see who really benefits and who’s left carrying the risks.
Is this just capitalism in another form?
This tension feels familiar to me. It’s the same one we navigate under capitalism: a system with deep structural flaws where we do our best to act ethically within it. I have a drawer dedicated to all the grocery bags I can re-use. I try to recycle as much as I can. Maybe AI is the same. It’s already woven into global supply chains, surveillance systems, and corporate profit models. I wonder if the question isn’t whether we can make it perfectly good, but whether we can limit its harm while using it in the few places it truly adds value.
When AI isn’t necessary.
We also have to admit that AI isn’t needed for everything. We don’t need it to decide school lunch menus or create “smart” toothbrushes. Sometimes, the most ethical choice is not to use AI at all. Other times, we deploy AI with the best of intentions, and it still causes harm we didn’t foresee.
And the unintended consequences are real:
AI-powered drones used to track endangered species also made it easier for poachers to find them.
AI may have increased efficiency in workplaces, but the data centers that power it are causing communities to lose access to water.
Translation AI gave speakers of marginalized languages more access, but it also displaced local interpreters and eroded cultural nuance.
Living in the tension.
Maybe “AI for public good” isn’t a clean category. Maybe it’s about constant questioning:
Who benefits? Who takes on the risk?
Is AI the right tool for this problem?
How might this be used—intentionally or not—in 5, 10, or 20 years?
Personally, I try not to use things like ChatGPT. But there are times when I’ve found it extremely useful. For example, my mom often asks me to translate some pretty complicated medical or legal documents into Korean. I’ve struggled with this my whole life: I’m fluent conversationally, but not at the professional level these documents require. A few months ago, I asked ChatGPT to explain the details of her home mortgage and escrow in Korean. (I have NO IDEA how to describe escrow in Korean.) Within seconds, I was able to help her understand. In the past, it would have taken me ages to translate word by word and piece together something to communicate.
AI isn’t inherently moral. It reflects the values and power of the people who build and use it. We can’t opt out of the systems we live under entirely, but we can decide when, how, and if we engage with them. Especially when the public good is on the line.
What that looks like for you as an individual, I don’t know. All I can ask is that we collectively do the best we can.
Finally, below is a real-life situation involving data centers and their impact on the environment and surrounding communities. What do you think?
Why is drought-hit Brazil saying yes to AI data centers?
I like how you framed this around “do no harm.” It’s a refreshing way to think about AI compared to the usual hype. The examples you gave really show how tricky the picture gets once these systems leave the lab and land in real communities.
What struck me, though, is that a lot of the harms you mention aren’t really accidents. They tend to follow the same incentives we see everywhere else. That’s why your capitalism parallel resonated with me. It feels like AI doesn’t just exist inside that system but often magnifies it. I would have loved to see that thought pulled a little further.
On the “do no harm” part, medicine makes that principle work because it has strong norms and professional guardrails. AI doesn’t really have the same thing yet, which makes it hard to move past slogans into something enforceable.
I did like the story about helping your mom with the mortgage translation. It’s a good reminder that even in all the mess, these tools can be genuinely helpful at the personal level. Maybe the real challenge is figuring out how to hold on to those moments of clear benefit while being more honest about the systemic costs.