Talk:AI Governance DAO

== NOTE ==
Click 'Add topic' to separate subjects.<br>
Please sign all comments by typing four tildes (~~~~).<br>
:To answer, use colons (:) to indent
::Use two colons (::) to indent twice
:::Etc.<br>
[[User:Craig Calcaterra|Craig Calcaterra]] ([[User talk:Craig Calcaterra|talk]]) 04:26, 27 March 2023 (CDT)
<br>
== Purpose ==
This page is in brainstorming status. We believe the structure of a DAO can help govern the development of AI, but the details are far from crystallized. However, the demand for a wise answer to the question of how to govern AI is pressing, as AI development is accelerating.

This proposal requires democratic participation if it is to be meaningful and effective.

The point is to save humanity. AI tools pose the question of what it means to be human. How do we think? What makes us special?

Will humanity be oppressed by this new technology, or will we be empowered? Science fiction has proposed many plausible scenarios answering this question. For any tool humanity has created, the answer has always been the same: yes and yes.

Some people will certainly use AI to abdicate their responsibilities, and others will use it to take on more power. Will they use it to help humanity or to harm it? Again, history answers: yes and yes.

The point of using a DAO structure to organize our response is to make the use of AI as transparent and democratic as possible, in the hope of harnessing our talents to help humanity and to avoid harm to the greatest degree possible. [[User:Craig Calcaterra|Craig Calcaterra]] ([[User talk:Craig Calcaterra|talk]]) 15:53, 2 April 2023 (CDT)