Working Towards a Sustainable Future
For Humanity and AI.

The Center for Distributed Governance of AI (CDGov) develops solutions for the decentralized, distributed, democratic governance of advanced AI so that we may safely improve the human condition rather than further entrench the power of profit-driven corporations and institutions.

“The biggest alignment challenge is not between AI and humans; the biggest alignment challenge is between those people who hold the keys to power over AI, and humanity.”

— Dr. Jakob Foerster, FAIR

Mission

Our mission is to ensure AI is governed safely and empowers individuals — not corporations or autocratic institutions. Today’s AI is controlled by a handful of tech giants, risking an unprecedented concentration of power. We research and advance decentralized, democratic AI governance solutions, including open alternatives, antitrust mechanisms, citizen oversight models, and local and virtual community building. We develop actionable proposals, build coalitions with technologists and economists, and advocate for economically sustainable structural reforms. Without intervention, AI will deepen inequality and erode democracy. Our work promotes pragmatic paths to distributing AI’s benefits equitably while preserving individual liberties.

Tractable

We leverage federated learning, co‑ops, open models, and DAOs—pairing them with credible research, coordination, and advocacy.
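
Federated learning, mentioned above, lets participants improve a shared model without pooling their raw data. A minimal sketch of the core averaging step (FedAvg), with weights represented as plain lists of floats for illustration:

```python
# Minimal sketch of federated averaging (FedAvg): each participant trains
# locally and shares only its weight updates; a coordinator averages them.
# Illustrative only -- real systems average tensors, weight by dataset size,
# and add secure aggregation.

def federated_average(client_weights: list[list[float]]) -> list[float]:
    """Average per-parameter weights across clients without sharing raw data."""
    n_clients = len(client_weights)
    n_params = len(client_weights[0])
    return [
        sum(w[i] for w in client_weights) / n_clients
        for i in range(n_params)
    ]

# Three clients each contribute a locally trained weight vector.
clients = [[0.2, 0.4], [0.4, 0.6], [0.6, 0.8]]
global_model = federated_average(clients)
print(global_model)  # approximately [0.4, 0.6]
```

No raw data leaves a client; only the averaged parameters are shared, which is what makes the approach attractive for cooperative governance structures.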

Neglected

Most initiatives focus on safety or bias; few tackle structural power and democratic control of AI.

High‑Impact

Left unchecked, AI can entrench surveillance capitalism, erode civic agency, and lock in monopolies.

Why This Work Matters

The Center for Distributed Governance of AI (CDGov) addresses a critical gap in contemporary artificial intelligence discourse by focusing on the structural power dynamics inherent in AI development and deployment, rather than merely technical safety or bias mitigation. As advanced AI systems become increasingly integrated into societal infrastructure, the concentration of power within narrow corporate or state entities poses significant risks to democratic institutions, individual agency, and equitable distribution of AI's benefits. CDGov's research into decentralized governance models—leveraging federated learning, cooperative structures, and distributed decision-making protocols—offers a principled framework for ensuring that AI empowers individuals and communities rather than entrenching existing hierarchies.

By developing practical tools for democratic oversight, such as policy-as-code layers for enterprise agents and verifiable deliberation interfaces, while advocating for antitrust measures and data rights, this work contributes to the urgent scholarly and policy challenge of aligning AI development with democratic values. The organization's innovative approach to trust-minimizing coordination and transparent governance mechanisms provides a crucial pathway for maintaining civil liberties while enabling responsible AI innovation. This makes CDGov's work an essential contribution to the field of AI governance, offering concrete solutions for ensuring that artificial intelligence strengthens rather than undermines democratic society.

How We Operate

We align structure with principles. CDGov is a lightweight, distributed network—not a traditional 501(c)(3). Instead of hierarchy, we use trust‑minimizing coordination and transparent, tool‑assisted decision‑making. Membership, contributions, and governance are open and auditable; stewardship is earned through work and accountability.

Principled Minimalism

Smallest viable footprint so resources flow to research and prototypes—not overhead.

Trustless Collaboration

Distributed tools (verifiable contributions, open repos, deliberation protocols) enable strangers to safely cooperate.

What We Do

Research

Whitepapers on decentralized architectures (local AI co‑ops, federated governance), AI antitrust, and data rights—translated into templates & reference designs.

Coalition‑Building

Partnerships with open‑source communities, rights orgs, municipalities, and researchers advancing democratic oversight of AI.

Public Engagement

Meetups (Austin, Boston), an annual workshop, and campaigns that frame AI’s power dynamics with practical remedies.

Programs & Research Directions

R01 Governance Layer for Enterprise Agents

Design a policy‑as‑code layer that orchestrates, audits, and constrains AI agents across business units. Features: role‑based permissions, verifiable logging, red‑team hooks, human‑in‑the‑loop checkpoints, and cross‑vendor policy enforcement.

// policy.cdg
allow(agent:"invoice-bot") when context.risk < 3 and duty.segregation == true
require(human_review) for action in ["wire_transfer", "customer_contact"]
deny(action:"prod_deploy") unless approvals >= 2
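
One way such a policy engine might evaluate these rules is sketched below in Python. The names (`Context`, `evaluate`, the action strings) are illustrative assumptions, not a real API; the point is the default-deny evaluation order.

```python
# Hypothetical sketch of a policy-as-code evaluator for the rules above.
# All names are illustrative; a production engine would parse the .cdg DSL.
from dataclasses import dataclass

HUMAN_REVIEW_ACTIONS = {"wire_transfer", "customer_contact"}

@dataclass
class Context:
    risk: int
    duty_segregation: bool
    approvals: int = 0

def evaluate(agent: str, action: str, ctx: Context) -> str:
    """Return 'allow', 'deny', or 'needs_human_review' for a proposed action."""
    # deny(action:"prod_deploy") unless approvals >= 2
    if action == "prod_deploy" and ctx.approvals < 2:
        return "deny"
    # require(human_review) for high-stakes actions
    if action in HUMAN_REVIEW_ACTIONS:
        return "needs_human_review"
    # allow(agent:"invoice-bot") when risk is low and duties are segregated
    if agent == "invoice-bot" and ctx.risk < 3 and ctx.duty_segregation:
        return "allow"
    return "deny"  # default-deny posture

print(evaluate("invoice-bot", "pay_invoice", Context(risk=1, duty_segregation=True)))
# allow
```

Checking denials before allowances gives the layer a fail-closed default, which matters when policies from multiple vendors are composed.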

R02 Distributed Coordination Protocols

Research BitTorrent‑ and blockchain‑inspired protocols for cross‑institution cooperation: content‑addressed models, swarm training, stake‑weighted deliberation, and slashing for dishonest nodes.

# swarm.gossip
peer:add 12.7.0.5/agent-42
chunk:announce QmX..model.shard.07
vote:proposal #117 "merge-data-commons-v2" weight=0.72
slash:peer agent-19 reason="non‑deterministic outputs"
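
The stake-weighted deliberation and slashing above can be sketched as a simple tally. The peer names, stakes, and the idea of excluding slashed peers from the tally are illustrative assumptions:

```python
# Illustrative sketch of stake-weighted voting with slashing: peers vote,
# votes are weighted by stake, and slashed (dishonest) peers are excluded.

def tally(votes: dict[str, bool], stakes: dict[str, float],
          slashed: set[str]) -> float:
    """Return the stake-weighted approval share, ignoring slashed peers."""
    active = {p: s for p, s in stakes.items() if p not in slashed}
    total = sum(active.values())
    if total == 0:
        return 0.0
    yes = sum(s for p, s in active.items() if votes.get(p))
    return yes / total

stakes = {"agent-19": 0.3, "agent-42": 0.5, "agent-77": 0.2}
votes = {"agent-42": True, "agent-77": True, "agent-19": False}

# agent-19 is slashed for dishonest behaviour, so its stake is excluded.
print(tally(votes, stakes, slashed={"agent-19"}))  # 1.0
```

Slashing changes not just a peer's payoff but the denominator of every future vote, which is what disincentivizes dishonest participation.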

R03 Democratic Deliberation Interfaces

Prototype auditable citizen assemblies using verifiable random selection, argument mapping, and model‑assisted synthesis with transparent provenance.

+ panel: random_sortition(120)
+ transcripts: ipfs://Qm...
+ provenance: signed(hash(bundle))
┌─ issues ───────────┐
│ data rights        │
│ model access       │
│ liability          │
└────────────────────┘
score(consensus) = 0.81
explain: counterfactual audit ok
next: pilot with city partners
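
Verifiable random selection, the first step above, can be sketched by seeding the draw with a public commitment so anyone can reproduce and audit it. The seed string and resident pool here are illustrative assumptions:

```python
# Sketch of verifiable random sortition: the panel draw is seeded by a
# published value (e.g. a block hash), so any observer can re-run the
# selection and confirm no one tampered with it.
import hashlib
import random

def sortition(pool: list[str], k: int, public_seed: str) -> list[str]:
    """Deterministically select k panelists from pool using a public seed."""
    digest = hashlib.sha256(public_seed.encode()).hexdigest()
    rng = random.Random(int(digest, 16))  # seed RNG from the commitment
    return sorted(rng.sample(pool, k))

pool = [f"resident-{i:03d}" for i in range(1000)]
panel = sortition(pool, k=120, public_seed="published-block-hash")
print(len(panel))  # 120

# Anyone with the same public seed reproduces the exact same panel:
assert panel == sortition(pool, 120, "published-block-hash")
```

Determinism given a public seed is what makes the assembly auditable: the randomness is committed to before selection, then verified after.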

Papers & Publications

Team & Advisors

Alex Dean Foster
Austin, TX — MBA, Economist & Technologist

Leads strategy on democratic AI governance, antimonopoly, and coalition‑building.

Sam Diener
Washington DC — Investor & Computer Scientist

Builds distributed tooling for verifiable contributions and transparent decision flows.

Eugene Kim
JD — Former Engineer; Senior Counsel

Legal strategy across data rights, automated legal agents, antitrust, and cooperative structures.

Said Saillant
PhD — AI Governance & Cognitive Science

Advises on collective intelligence and democratic oversight mechanisms for high‑stakes AI.

Johanna Sweere
PhD — AI/Bio‑risk; Co‑founder MBDF

Advises on bio‑AI risk assessment and participatory guardrails.

Contributors & Fellows
Distributed — Open Collaboration

We welcome collaborators across research, engineering, policy, journalism, and organizing.

Support & Sustainability

Our early work is funded by mission‑aligned grants and grassroots donors. Long‑term, we favor membership‑based and cooperative models that sustain public‑interest research without dependence on any single sponsor.

Discuss partnership
Make a contribution

BTC: 15prn9uescfGUB1LERKAkvutRZKgZu6jAy
ETH: 133b236213a24804332fd36f00cac78b9da2d2e6

Contact

Interested in collaborating, hosting a meetup, or funding a line of research? We’d love to connect.

General inquiries
info@cdgov.org