Talk:AI Governance DAO

From DAO Governance Wiki

NOTE

Anyone can contribute. To edit this wiki create an account.
Please sign all comments by typing four tildes (~~~~).
Click 'Add topic' to separate subjects.

To reply, use a colon (:) to indent.
Use two colons (::) to indent twice.
Etc.

Craig Calcaterra (talk) 04:26, 27 March 2023 (CDT)

Purpose

This page is in brainstorming status. We believe the structure of a DAO can help govern the development of AI, but the details are far from crystallized. However, the demand for a wise answer to the question of how to govern AI is pressing, as the development of AI is accelerating.

This proposal requires democratic participation if it is to be meaningful and effective.

We are moving toward a unique historical moment, with many technological innovations transforming society: AI, nanotech, and global P2P information technology. These technologies are growing in power, while the governance structures of our public institutions are not evolving to match the challenge of guiding their deployment.

Will humanity be oppressed by this new technology or will we be empowered? Science fiction has proposed many valid scenarios answering this question. That answer has always been the same, for the creation of any tool: yes and yes.

People will certainly use AI to abdicate their responsibilities. And others will use it to take on more power. Will they use it to help humanity or harm it? Again, history answers: yes and yes.

The point of using a DAO structure to organize our response is to make the use of AI as transparent and democratic as possible, in the hope of harnessing our talents to help humanity and avoid harm to the greatest degree possible. Craig Calcaterra (talk) 15:53, 2 April 2023 (CDT)

Plan

Similar to the software developer DAO:

  1. Create a network of developers who make useful and good AI tools which follow the values of the group.
    1. Open source
    2. Free tools
    3. Small processor & P2P network focus
  2. During this stage of free tool creation, the ChatREP DAG develops its history and relationships through weighted citations (edges) instead of weighted nodes (WREP = 0)
    1. Governance proposals & body of protocols develop
    2. Culture and values develop
  3. Once the user base is large enough and the talent pool is proven, offer work for hire for individual requests with fees (according to a previously agreed protocol that can be automatically validated)
    1. The fees generate WREP through VP
    2. Citation structure of the ChatREP clarifies & grounds what the culture of the DAO actually is. (This means that $ is the driver of what the DAO really stands for. That's good and bad. Needs to be carefully monitored, as power is distributed at this stage.)

Craig Calcaterra (talk) 13:53, 14 March 2024 (CDT)
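
The citation mechanism in step 2 of the plan could be sketched roughly as follows. This is a toy illustration, not a specification of ChatREP: the data layout, edge weights, and the `edge_reputation` function are all assumptions. The key idea from the text is that node weights (WREP) start at 0, so early reputation derives entirely from weighted citation edges.

```python
from collections import defaultdict

# Hypothetical sketch: contributions form a DAG whose edges are
# weighted citations. Node weights (WREP) start at 0, so reputation
# is derived entirely from incoming citation weight.
contributions = {
    "c1": {"author": "alice", "cites": {}},
    "c2": {"author": "bob",   "cites": {"c1": 0.8}},
    "c3": {"author": "carol", "cites": {"c1": 0.5, "c2": 1.0}},
}

def edge_reputation(contribs):
    """Sum the weights of citations pointing at each author's work."""
    rep = defaultdict(float)
    for node in contribs.values():
        for target, weight in node["cites"].items():
            rep[contribs[target]["author"]] += weight
    return dict(rep)

# alice's work gathers 0.8 + 0.5 in citation weight; bob's gathers 1.0;
# carol has authored but not yet been cited, so she holds no edge-derived REP.
print(edge_reputation(contributions))
```

Under such a scheme, the stage-3 transition (fees generating WREP) would add nonzero node weights on top of this edge structure.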

Resources

Discuss any relevant info or related projects:
State of the Art Research: DeAI, AI Safety, Public AI etc.

  • DeAI
    • Naptha.ai: litepaper
    • Olas Network: whitepaper
    • Boltzmann Network: whitepaper
    • POKT Network
  • Centralized for Profit AI
    • Google AI
    • Meta
    • OpenAI
    • Microsoft
  • Societies
    • Berkeley Responsible Decentralized Intelligence (RDI)
    • Stanford AI


Stanford Center for AI Safety https://aisafety.stanford.edu/all-publications.html

MIRI (2005) https://en.wikipedia.org/wiki/Machine_Intelligence_Research_Institute

Allen Institute for AI (2014) https://en.wikipedia.org/wiki/Allen_Institute_for_AI

Leaked Internal Google Document Claims Open Source AI Will Outcompete Google and OpenAI.

Marc Andreessen identifies issues and ways forward in this video, a relatable introduction that cites other work in the same direction as aiGovDAO.

Jonathan Kung shared the Blueprint for an AI Bill of Rights from President Biden’s White House Office of Science and Technology Policy, which identifies 5 principles to "guide the design, use, and deployment of automated systems to protect the American public in the age of artificial intelligence":

  1. Safe and Effective Systems
  2. Algorithmic Discrimination Protections
  3. Data Privacy
  4. Notice and Explanation
  5. Human Alternatives, Consideration, and Fallback

Craig Calcaterra (talk) 17:32, 5 April 2024 (CDT)

Public AI

This paper introduces the Public AI project, which has a similar goal to our AI Governance DAO. The issues raised are all relevant to our purpose and the suggestions are in concert with ours.

The major point of contention between the projects is perhaps the scope. I believe their definition of “public” in the term Public AI means strictly national or sub-national: each country would have its own, distinct Public AI governing body, and California would have its own Public AI group.

However, the internet has made information technology tools global, so a supranational (i.e., global) context is more appropriate. Subnational governing bodies won’t be effective against the problems that this Public AI project raises. Craig Calcaterra (talk) 10:42, 16 June 2024 (CDT)


Public AI is not to be confused with PublicAI.

I disagree, I think the subnational is the way people are addressing the supranational. The UN is super ineffective and we need societal hackerspaces to help us figure out what the supranational needs. However, we need the supranational infrastructure yesterday. DGF is a wireframe of what is needed, and it should be tested in the subnational scales lots of times before anyone will accept it. --Kung (talk) 13:22, 14 July 2024 (CDT)

Proposals

Domain specificity

The AI DAO should be devoted to the narrow problem of democratically governing the deployment of AI tech. We should avoid splintering our focus across related concerns such as governing nano-manufacturing or P2P IT, even though they certainly overlap. Instead, separate DAOs should be instituted to address separate issues, with membership overlapping and protocols partially cloned.

REP-weighted Democracy

AI members

The issue of power given to AI members, as opposed to human members, is fundamental, and whether to allow it at all is a challenging question. At the moment, given the lack of wisdom in current iterations of AI, it is obvious that AI ungoverned by human members violates the fundamental purpose of this group. The group is devoted to the cause of finding wise protocols for the deployment of AI to promote human flourishing, which includes promoting human agency. So AI contributions to the group need to be consciously directed by human members.

Pseudonymity

Do we allow pseudonymity, and thereby leave ourselves completely open to AI participation unguided by humans? That seems counter to the mission of the group. But pseudonymity is a value that will improve the group. I propose allowing pseudonymity, but taking a defensive posture: analyze members' contributions to detect undirected AI contributors, then slash the REP of detected AI members and consciously re-evaluate their contributions. --Craig


We have to create a lot of avenues to look at the same set of information in different ways. We can have different networks that interface with each other where pseudonymity exists in one and KYC in another. Within the same community, it is recommended that they have pseudonymous and KYC identities. Different identity types come with certain risks and will be treated accordingly. Reputation accumulation over time will grant the identity more "rights". --Jonathan


I agree about different types in general. And pseudonymous accounts are preferable in general. But the question here is whether we allow pseudonymous members in the AIgovDAO. If we do, then we are open to members who are non-human.

So before we discuss whether to allow pseudonymous members, we should first decide whether we are okay with non-human members. I am against it, but open to discussion. --Craig
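
The defensive posture proposed above could look roughly like the sketch below. Everything here is an invented illustration: the detector, the threshold, and the slash fraction are assumptions, not agreed protocol.

```python
# Hypothetical sketch of a detect-and-slash review pass: scan members'
# contributions with a detector, slash the REP of accounts scored as
# likely undirected AI, and queue them for human re-evaluation.
# SLASH_FRACTION, AI_THRESHOLD, and the detector are all assumptions.
SLASH_FRACTION = 0.5
AI_THRESHOLD = 0.9

def review_members(members, detector):
    """Return the names of flagged members, slashing their REP in place."""
    flagged = []
    for member in members:
        score = detector(member["contributions"])
        if score >= AI_THRESHOLD:
            member["rep"] *= (1 - SLASH_FRACTION)  # slash REP
            flagged.append(member["name"])         # queue for re-evaluation
    return flagged

members = [
    {"name": "alice", "rep": 10.0, "contributions": ["thoughtful post"]},
    {"name": "bot42", "rep": 10.0, "contributions": ["generic text"] * 50},
]
# Toy detector: treats high-volume repetitive output as AI-like.
detector = lambda posts: min(1.0, len(posts) / 50)
print(review_members(members, detector))  # ['bot42']
```

The "consciously re-evaluating their contributions" step would happen off-chain by human members; the sketch only shows where the flag would be raised.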

Onboarding members

Founding members are given fREP which clones the powers of cREP, wREP, and gREP. New members may earn cREP by participating in the discussions through posts in the Forum, and they may earn wREP and gREP by making proposals that are eventually referenced.
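
A toy reading of these onboarding rules follows. The REP type names come from the text above; the amounts, the `POWERS` mapping, and the `can` helper are purely assumptions for illustration.

```python
# Hypothetical sketch: fREP clones the powers of cREP, wREP, and gREP;
# other members accumulate each REP type separately.
POWERS = {"cREP": "comment", "wREP": "work", "gREP": "govern"}

class Member:
    def __init__(self, founding=False):
        # Founding members hold fREP, which clones all three powers.
        self.rep = {"fREP": 1.0} if founding else {}

    def earn(self, rep_type, amount):
        self.rep[rep_type] = self.rep.get(rep_type, 0.0) + amount

    def can(self, power):
        # fREP grants every power; otherwise the matching REP type is needed.
        if self.rep.get("fREP", 0) > 0:
            return True
        return any(granted == power and self.rep.get(rep_type, 0) > 0
                   for rep_type, granted in POWERS.items())

founder = Member(founding=True)
newcomer = Member()
newcomer.earn("cREP", 0.5)      # earned by posting in the Forum
print(founder.can("govern"))    # True, via fREP
print(newcomer.can("govern"))   # False until referenced proposals earn gREP
```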

Governance process

Academic failures

See this discussion.

AI issues of concern

Centralization

  • Lack of transparency -- leads to:
    • unpredictable outcomes
    • more centralization of
      • ownership
      • understanding
  • Centralized control of AI -- leads to:
    • social inequity
    • less transparency
  • Less participation -- leads to:
    • centralized control
    • less transparency


Decentralization

Lack of control can lead to all the same problems that arise from centralization. The Decentralized Physical Infrastructure (DePIN) space has been quite active. However, chip manufacturing resists decentralization and will remain centralized for the foreseeable future; hardware purchasing and management, on the other hand, can be decentralized. The language for DePIN is still being developed.

Energy use

The energy requirements for training may feed the centralization issues above, since the cost is high: https://arxiv.org/abs/2311.16863