Talk:AI Governance DAO

From DAO Governance Wiki
= NOTE =
Click 'Add topic' to separate subjects.<br>
Please sign all comments by typing four tildes (~~~~).<br>
:To answer, use colons (:) to indent<br>
::Use two colons (::) to indent twice<br>
:::Etc.<br>


= Purpose =


This page is in brainstorming status. We believe the structure of a DAO can help govern the development of AI, but the details are far from crystallized. However, the need for a wise approach to governing AI is pressing, as the development of AI is accelerating.<br>
<br>
This proposal requires democratic participation if it is to be meaningful and effective.<br>
<br>
We are moving toward a unique historical moment, with many technological innovations transforming society: AI, nanotech, global P2P IT. These technologies are growing in power, while the governance structures of our public institutions are not evolving to match the challenge of guiding their deployment.<br>
<br>
Will humanity be oppressed by this new technology, or will we be empowered? Science fiction has proposed many valid scenarios answering this question. The answer has always been the same, for the creation of any tool: yes and yes.<br>
<br>
People will certainly use AI to abrogate our responsibilities. And others will use it to take on more power. Will they use it to help humanity or harm it? Again, history answers: yes and yes.<br>
<br>
The point of using a DAO structure to organize our response is to make the use of AI as transparent and democratic as possible, in the hope of harnessing our talents to help humanity and to avoid harm to the greatest degree possible.<br>
[[User:Craig Calcaterra|Craig Calcaterra]] ([[User talk:Craig Calcaterra|talk]]) 15:53, 2 April 2023 (CDT)


= Resources =
Add any relevant info:<br>
<br>
Leaked Internal Google Document Claims Open Source AI Will Outcompete Google and OpenAI.<br>
<br>
This video is a relatable introduction which cites other work in the same direction as aiGovDAO: https://www.youtube.com/watch?v=MSsYlPDmxfE&ab_channel=LexClips
[[User:Craig Calcaterra|Craig Calcaterra]] ([[User talk:Craig Calcaterra|talk]]) 01:54, 15 April 2023 (CDT)
= Proposals =
== Domain specificity ==
The AI DAO should be devoted to the narrow problem of democratically governing the deployment of AI tech. We should strive to avoid splintering concerns such as governing nano-manufacturing, or P2P IT, etc., even though they certainly overlap. Instead, separate DAOs should be instituted for addressing separate issues, with membership overlapping and protocols being partially cloned.
== REP-weighted Democracy ==
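As a starting point for this section, here is a minimal sketch of what REP-weighted voting could mean mechanically; the function and variable names are invented for illustration, and real REP accounting would live on-chain rather than in a dictionary:

```python
# Minimal sketch of REP-weighted voting (illustrative only).
# Each member's vote counts in proportion to their reputation (REP),
# so influence tracks demonstrated contribution rather than head count
# or token wealth.

def tally(votes: dict[str, bool], rep: dict[str, float]) -> bool:
    """Return True if REP-weighted 'yes' votes outweigh 'no' votes."""
    yes = sum(rep.get(member, 0.0) for member, vote in votes.items() if vote)
    no = sum(rep.get(member, 0.0) for member, vote in votes.items() if not vote)
    return yes > no
```

Under this scheme a single high-REP member can outvote several low-REP members, which is the intended trade-off: earned reputation, not raw membership, carries decisions.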


== AI members ==
The issue of how much power to give AI members is basic, and whether to allow them at all is a challenging question. At the moment, given the lack of wisdom in current iterations of AI, AI ungoverned by human members clearly violates the fundamental purpose of this group. The group is devoted to finding wise protocols for the deployment of AI to promote human flourishing, which includes promoting human agency. So AI contributions to the group need to be consciously directed by human members.
== Pseudonymity ==
Do we allow pseudonymity, and thereby leave ourselves completely open to AI participation unguided by humans? That seems counter to the mission of the group, yet pseudonymity is a value that will improve it. I propose allowing pseudonymity while maintaining a defensive posture: analyze members' contributions to detect undirected AI contributors, then slash the REP of detected AI members and consciously re-evaluate their contributions.
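The detect-and-slash posture proposed here could be sketched as follows. This is only an illustration of the control flow: the detector, the confidence threshold, and the slash fraction are hypothetical placeholders, not a worked-out protocol.

```python
# Illustrative sketch of the proposed defensive posture: members stay
# pseudonymous, but contributions flagged as undirected AI output trigger
# a REP slash and a conscious re-review. All numbers are example values.

SLASH_FRACTION = 0.5       # portion of REP removed on detection (example value)
DETECTION_THRESHOLD = 0.9  # detector confidence required to act (example value)

def review(member_rep: float, ai_score: float) -> tuple[float, bool]:
    """Return (new REP balance, whether contributions need re-evaluation).

    ai_score is a detector's confidence (0..1) that the member's
    contributions are undirected AI output.
    """
    if ai_score >= DETECTION_THRESHOLD:
        # Slash REP and flag the member's past contributions for re-review.
        return member_rep * (1 - SLASH_FRACTION), True
    return member_rep, False
```

The design intent is that detection is not a ban: the member keeps participating pseudonymously, but with reduced influence and with their record re-examined by human members.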
 
== Onboarding members ==


== Governance process ==


= Academic failures =


See [[Talk:Science_DAO_Framework#Failures_of_the_scientific_academic_establishment|this discussion]].

Revision as of 08:10, 5 June 2023
