These principles were drafted through a series of conversations and workshops with Bloom team members and refined through iteration on version 1. They apply to work we do for our clients as well as to our internal operations.
Given the relative infancy of AI and the pace at which the technology is changing, our approach to using AI will continue to evolve. Bloom’s Emerging Tech Guild will review this document every 6 months and revise it as needed.
Why AI, why now
We at Bloom seek to strike the right balance between the responsible use of AI to further our mission and the need for greater exploration and experimentation with AI. The technology has the potential to advance how digital services are delivered by making them more accessible and effective, and more efficient to build. We don’t believe that AI must be used; there are many instances where its use will never be desirable. But our partners—and the people we serve—rely on us for knowledge and expertise, count on us to lead, and expect us to be objective and rigorous in our approach.
The difference between principles and practices
The specific ways we use AI at Bloom, or our AI practices, will evolve to support our work and reflect best practices in the field. The principles here inform our practices. They are intentionally succinct so that they remain relevant and don’t veer into practices.
Bloom’s principles for using AI
The following principles, grounded in Bloom’s Values and workshopped with Bloomers between August and December 2025, provide a framework for making decisions about and using AI. The principles are meant to guide our work with our partners and to support ethical, outcome-driven decisions.
Each of these principles is tied to specific values that we champion at Bloom.
- Lead with expertise: Build AI expertise and learn through experimentation, prototyping, and delivery.
- Be measured: Make data- and evidence-based decisions about AI and don’t be distracted by fear or hype.
- Be transparent: Inform people when AI is used in decisions that affect them, preserving accountability and trust.
- Keep humans in the loop: Use AI to augment and support human decision-making and critical thinking, not to replace it.
- Protect privacy and safety: Rigorously apply protocols for testing, evaluation, and monitoring of AI systems.
- Minimize cultural and racial bias: Protect users from potential adverse consequences due to cultural and racial biases present in AI systems.