Talk:AI Governance DAO: Difference between revisions

From DAO Governance Wiki
Revision as of 10:15, 30 June 2023

NOTE

Click 'Add topic' to separate subjects.
Please sign all comments by typing four tildes (~~~~).

To reply, use colons (:) to indent.
Use two colons (::) to indent twice, and so on.

Purpose

This page is in brainstorming status. We believe the structure of a DAO can help govern the development of AI, but the details are far from crystallized. However, the need for a wise approach to governing AI is pressing, as AI development accelerates.

This proposal requires democratic participation if it is to be meaningful and effective.

We are moving toward a unique historical moment, with many technological innovations transforming society: AI, nanotech, and global P2P IT. These technologies are growing in power, while the governance structures of our public institutions are not evolving to meet the challenge of guiding their deployment.

Will humanity be oppressed by this new technology, or will we be empowered? Science fiction has proposed many plausible scenarios answering this question. The answer has always been the same, as for the creation of any tool: yes and yes.

People will certainly use AI to abdicate our responsibilities. And others will use it to take on more power. Will they use it to help humanity or harm it? Again, history answers: yes and yes.

The point of using a DAO structure to organize our response is to make the use of AI as transparent and democratic as possible, in the hope of harnessing our talents to help humanity and avoid harm to the greatest degree possible. Craig Calcaterra (talk) 15:53, 2 April 2023 (CDT)

Resources

Add any relevant info:

Leaked Internal Google Document Claims Open Source AI Will Outcompete Google and OpenAI.


This video (https://www.youtube.com/watch?v=MSsYlPDmxfE&ab_channel=LexClips) is a relatable introduction which cites other work in the same direction as aiGovDAO. Craig Calcaterra (talk) 01:54, 15 April 2023 (CDT)
Marc Andreessen identifies issues and ways forward in this video (https://www.youtube.com/watch?v=nb3nS0BY5bo&ab_channel=LexClips). Craig Calcaterra (talk) 10:15, 30 June 2023 (CDT)

Proposals

Domain specificity

The AI DAO should be devoted to the narrow problem of democratically governing the deployment of AI tech. We should avoid splintering our focus across adjacent concerns such as governing nano-manufacturing or P2P IT, even though they certainly overlap. Instead, separate DAOs should be instituted to address separate issues, with overlapping membership and partially cloned protocols.

REP-weighted Democracy

AI members

The issue of the power given to AI members is fundamental, and whether to allow AI membership at all is a challenging question. At the moment, given the lack of wisdom in current iterations of AI, it is clear that AI ungoverned by human members violates the fundamental purpose of this group. The group is devoted to finding wise protocols for deploying AI to promote human flourishing, which includes promoting human agency. So AI contributions to the group need to be consciously directed by human members.

Pseudonymity

Do we allow pseudonymity, and therefore leave ourselves completely open to AI participation unguided by humans? That seems counter to the mission of the group. But pseudonymity is a value that will improve the group. I propose allowing pseudonymity, while maintaining a defensive posture that analyzes members' contributions to detect undirected AI contributors, then slashing the REP of detected AI members and consciously re-evaluating their contributions.
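A minimal sketch of the defensive posture proposed above, assuming a detector exists: pseudonymous members keep their REP unless their contributions are flagged as undirected AI output, in which case a fraction of REP is slashed and the flagged posts are queued for human re-evaluation. All names and the slash fraction are illustrative assumptions, not part of the proposal.

```python
from dataclasses import dataclass, field

@dataclass
class Member:
    pseudonym: str
    rep: float
    flagged_posts: list = field(default_factory=list)  # awaiting human re-evaluation

def review_contribution(member: Member, post: str,
                        is_undirected_ai: bool,
                        slash_fraction: float = 0.5) -> None:
    """Slash REP and queue the post for conscious re-evaluation when
    the (assumed) detector judges it an undirected AI contribution."""
    if is_undirected_ai:
        member.rep *= (1 - slash_fraction)   # slashing penalty
        member.flagged_posts.append(post)    # re-evaluation queue

alice = Member("alice", rep=100.0)
review_contribution(alice, "suspicious post", is_undirected_ai=True)
print(alice.rep)  # prints 50.0
```

The hard part is of course the detector itself; this sketch only shows how a detection signal could feed the slashing and re-evaluation steps.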

Onboarding members

Founding members are given fREP, which clones the powers of cREP, wREP, and gREP. New members may earn cREP by participating in the discussions through posts in the Forum, and they may earn wREP and gREP by making proposals which are eventually referenced.
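The onboarding rules above could be sketched as a simple ledger, where fREP is modeled as granting founders a balance in every ordinary REP type. The class and method names are hypothetical; only the REP-type names and earning rules come from the proposal.

```python
from collections import defaultdict

class RepLedger:
    REP_TYPES = ("cREP", "wREP", "gREP")

    def __init__(self):
        # member -> REP type -> balance
        self.balances = defaultdict(lambda: defaultdict(float))
        self.founders = set()

    def add_founder(self, member: str, f_rep: float) -> None:
        # fREP clones the powers of every ordinary REP type
        self.founders.add(member)
        for rep_type in self.REP_TYPES:
            self.balances[member][rep_type] += f_rep

    def reward_forum_post(self, member: str, amount: float = 1.0) -> None:
        # participation in Forum discussions earns cREP
        self.balances[member]["cREP"] += amount

    def reward_referenced_proposal(self, member: str, amount: float = 1.0) -> None:
        # proposals that are eventually referenced earn wREP and gREP
        self.balances[member]["wREP"] += amount
        self.balances[member]["gREP"] += amount

ledger = RepLedger()
ledger.add_founder("founder0", 10.0)
ledger.reward_forum_post("newmember")
ledger.reward_referenced_proposal("newmember")
```

Whether fREP should be a separate transferable token or, as here, a shorthand for holding all three types is an open design question.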

Governance process

Academic failures

See this discussion.