Talk:AI Governance DAO: Difference between revisions
Revision as of 08:06, 5 June 2023
NOTE
Click 'Add topic' to separate subjects.
Please sign all comments by typing 4 tildes (~~~~).
- To answer, use colons (:) to indent
- Use two colons (::) to indent twice
- Etc.
Purpose
This page is in brainstorming status. We believe the structure of a DAO can help govern the development of AI, but the details are far from crystallized. However, the demand for a wise answer to the question of how to govern AI is pressing, as the development of AI is accelerating.
This proposal requires democratic participation if it is to be meaningful and effective.
We are moving toward a unique historical moment, with many technological innovations transforming society: AI, nanotech, global P2P IT. These technologies are growing in power, while the governance structures of our public institutions are not evolving to match the challenge of guiding the deployment of this tech.
Will humanity be oppressed by this new technology or will we be empowered? Science fiction has proposed many valid scenarios answering this question. That answer has always been the same, for the creation of any tool: yes and yes.
Some people will certainly use AI to abdicate their responsibilities. Others will use it to take on more power. Will they use it to help humanity or harm it? Again, history answers: yes and yes.
The point of using a DAO structure to organize our response is to make the use of AI as transparent and democratic as possible, in the hope of harnessing our talents in the service of helping humanity and avoiding harm to the greatest degree possible. Craig Calcaterra (talk) 15:53, 2 April 2023 (CDT)
Resources
Add any relevant info:
Leaked Internal Google Document Claims Open Source AI Will Outcompete Google and OpenAI.
This video is a relatable introduction which cites other work in the same direction as aiGovDAO: https://www.youtube.com/watch?v=MSsYlPDmxfE&ab_channel=LexClips
Craig Calcaterra (talk) 01:54, 15 April 2023 (CDT)
Proposals
Domain specificity
The AI DAO should be devoted to the narrow problem of democratically governing the deployment of AI tech. We should avoid splintering our focus across related concerns such as governing nano-manufacturing or P2P IT, even though they certainly overlap. Instead, separate DAOs should be instituted to address separate issues, with overlapping membership and partially cloned protocols.
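The "separate DAOs, overlapping membership, partially cloned protocols" idea above can be sketched in code. This is a minimal illustration only; every name and parameter here (Protocol, quorum, DAO, clone_for_domain) is a hypothetical placeholder, not an existing implementation:

```python
# Hypothetical sketch of domain-specific sibling DAOs that share members
# and partially clone each other's governance protocols.
from dataclasses import dataclass, field, replace

@dataclass(frozen=True)
class Protocol:
    quorum: float            # fraction of members needed to pass a proposal (assumed rule)
    voting_period_days: int  # how long voting stays open (assumed rule)

@dataclass
class DAO:
    name: str
    domain: str
    protocol: Protocol
    members: set = field(default_factory=set)

    def clone_for_domain(self, name: str, domain: str, **overrides) -> "DAO":
        """Institute a sibling DAO: copy the protocol, overriding only what differs,
        and start from the same membership, which may then diverge."""
        return DAO(name, domain, replace(self.protocol, **overrides), set(self.members))

# The AI DAO stays devoted to its narrow domain...
ai_dao = DAO("aiGovDAO", "AI deployment",
             Protocol(quorum=0.5, voting_period_days=7), {"alice", "bob"})

# ...while a separate DAO handles a separate issue with a partially cloned protocol.
nano_dao = ai_dao.clone_for_domain("nanoGovDAO", "nano-manufacturing", quorum=0.6)
nano_dao.members.add("carol")  # membership overlaps but evolves independently

assert "alice" in ai_dao.members and "alice" in nano_dao.members  # overlap
assert ai_dao.protocol.quorum != nano_dao.protocol.quorum          # partial clone
```

The design choice illustrated: cloning copies the shared protocol wholesale and overrides only domain-specific parameters, so sibling DAOs stay comparable without forcing one DAO to govern every issue.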
AI members
The issue of how much power to give AI members is basic, and whether to allow them at all is a challenging question. At the moment, given the lack of wisdom in current iterations of AI, it is clear that AI ungoverned by human members would violate the fundamental purpose of this group. The group is devoted to finding wise protocols for the deployment of AI to promote human flourishing, which includes promoting human agency. So AI contributions to the group need to be consciously directed by human members.
Do we allow pseudonymity, and thereby leave ourselves completely open to AI participation unguided by humans? That seems counter to the mission of the group. Yet pseudonymity is a value that will improve the group.
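One way to reconcile pseudonymity with the requirement that AI contributions be human-directed is a sponsorship rule: a pseudonymous contribution stands on its own only if its author is a verified human; otherwise a verified human must sponsor it and take responsibility. The sketch below is a hypothetical illustration of that rule; the names (Member, is_human_verified, admissible) and the verification mechanism are assumptions, not an existing protocol:

```python
# Hypothetical sketch: pseudonymous members are allowed, but AI-authored
# contributions require a verified-human sponsor to be admissible.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Member:
    pseudonym: str
    is_human_verified: bool  # e.g. via some proof-of-personhood scheme (assumption)

@dataclass
class Contribution:
    author: Member
    text: str
    sponsor: Optional[Member] = None

def admissible(c: Contribution) -> bool:
    """A contribution from a verified human stands on its own; anything else
    needs a verified-human sponsor who consciously directs it."""
    if c.author.is_human_verified:
        return True
    return c.sponsor is not None and c.sponsor.is_human_verified

bot = Member("gpt-delegate", is_human_verified=False)
alice = Member("alice", is_human_verified=True)

assert not admissible(Contribution(bot, "proposal draft"))                 # unguided AI: rejected
assert admissible(Contribution(bot, "proposal draft", sponsor=alice))      # human-directed: accepted
```

This keeps the group open to pseudonymous participation while ensuring every AI contribution is traceable to a human member's conscious direction.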
Academic failures
See this discussion.