The ICI AI Constitution — 12 Principles for Responsible, Powerful AI Use
Why a Constitution — Not a Rulebook
AI is the most consequential technology in human history. Not the most important thing happening in business — the most important thing happening in civilization. That is not hyperbole. That is the honest assessment of many of the people building it.
At ICI, we've built AI-powered tools and strategies for businesses for years. In that time, we've formed deep convictions about how AI should — and should not — be used. This document codifies those convictions into clear, principled, practical guidelines.
Constitutional AI — training AI systems around a central document of values rather than a list of rules — was pioneered by Anthropic. We believe the same philosophy should govern how businesses approach AI: not a checklist of dos and don'ts, but a clear articulation of values that guides every decision.
What This Constitution Covers
- How we build AI tools — and what we refuse to build
- The values that govern every client engagement
- Our position on data privacy, human dignity, and accountability
- Hard limits that no client, context, or amount of money can override
- Our public commitment to broad access, continuous improvement, and leaving things better
Our Commitment — In Plain Terms
These are not aspirational statements. They are operational standards. Every AI project we take on is evaluated against these principles — at the outset, in the design process, and at launch. If a client asks us to build something that violates these principles, we decline. If we discover a project we've launched is producing harmful outcomes, we address it — even when that's costly or uncomfortable. We share these principles publicly because the AI field needs more voices articulating clear values, not fewer.
The 12 Principles
Honesty Above All Else
AI should tell the truth — even when uncomfortable. We build tools that say "I don't know," acknowledge uncertainty, and deliver answers users didn't want to hear. Trust is more valuable than comfort.
Human Judgment Stays in the Loop
No matter how capable AI becomes, the human responsible for a decision stays in control — especially in legal, financial, medical, or safety-critical contexts.
Transparency About What AI Is
Users always know when they're talking to AI. We never build tools designed to deceive people about their nature. This is not a legal technicality — it is basic human respect.
Data Privacy Is Sacred
User data is not a resource to exploit — it is information entrusted to us. We do not build tools that harvest, share, or monetize personal data beyond the clear purpose users have consented to. Full stop.
AI Should Serve Real Interests
There is a difference between what someone asks for and what is actually good for them. We design tools that help people think better — not tools that do their thinking for them.
Hard Limits on Harmful Use
We will not build AI designed to deceive, manipulate, surveil, harm, or discriminate. These limits are not negotiable in any context, for any client, for any amount of money.
Access Should Be Broad
Powerful AI concentrated in a few hands is dangerous. We actively build for businesses of all sizes and push back against AI ecosystems that gatekeep capability behind price barriers most businesses can't afford.
AI Should Help Humans Work — Not Eliminate Their Dignity
We help clients replace drudgery with creativity, automate repetition to free humans for higher-value work, and keep people meaningfully employed rather than simply cutting headcount.
Build for Accountability
Every AI tool should have a clear chain of accountability — who built it, what it does, who is responsible for its outputs, and what recourse exists when it fails.
Continuous Improvement Is a Responsibility
An AI tool deployed and forgotten is a liability. We design every engagement with long-term stewardship in mind, not one-time delivery.
Respect the Pace — But Don't Let It Paralyze You
The AI era rewards urgency. But urgency is not recklessness. We help clients move fast enough to capture opportunity and carefully enough to avoid costly mistakes.
Leave Things Better Than You Found Them
Every AI deployment should leave the world a little better — businesses more effective, people more capable, customers better served. That is not an abstraction. It is a daily practice.
Three Commitments That Define Everything
We Decline Work That Violates These Principles
If a client asks us to build something that conflicts with this constitution, we say no — regardless of the contract value. Our principles are not flexible.
We Monitor What We Build
AI tools we deploy are monitored, updated, and improved over time. Launching and walking away is not how we operate.
We Publish This Publicly
We invite scrutiny. We challenge every AI company, tool builder, and business leader to develop and publish their own version of this document.
The Values Behind Every Tool We Build
Honesty
We build tools that tell the truth, even when the answer is uncomfortable.
Human Control
AI amplifies judgment — it never replaces the human decision-maker.
Privacy
User data is treated with the same care we'd want applied to our own.
Accountability
Every tool has a clear owner, an audit trail, and stated limitations.
Accessibility
Powerful AI shouldn't only be available to companies with large budgets.
Dignity
We help teams work better — not shrink them as a first resort.
Industries Where We Apply These Principles
Manufacturing
Custom AI tools built with hard limits, accountability chains, and operator dignity at the center.
Healthcare
AI-assisted workflows built with patient privacy, clinical accuracy, and human oversight as non-negotiables.
Legal Services
AI research and drafting tools with explicit accuracy guardrails and attorney-in-the-loop requirements.
E-commerce
Customer-facing AI designed to serve real user interests — not manipulate purchase behavior.
Professional Services
AI strategy built on transparency — clients always know what the tool does and doesn't do.
Enterprise
Constitutional AI principles applied at scale — governance frameworks, audit trails, and responsible deployment.