AI Governance DAO
====Polarizing questions====
:1. Can anonymous AGI agents participate in aiGovDAO?
:Our initial answer is "No." The purpose of aiGovDAO is to promote human flourishing, both collectively and individually. Ceding our authority to machines runs counter to that goal.
:
:On the other hand, a natural function of aiGovDAO is to develop the use of AI, especially in creating defenses against malign uses of AI. AI is an important tool for helping each individual and community filter information. Both to promote human flourishing in general and to make aiGovDAO more effective, we encourage the use of AI.
<br>
:2. Does aiGovDAO promote universal control of the development and deployment of AI? If so, is that a centralization of power?
:We wish to promote the healthy development of individual and collective human power. That development must be tempered with wisdom: the power of the individual must be balanced against its wise application in the service of communal harmony. The individual member's agency is always in tension with the group's harmony; each side both supports and erodes the other, so the question cannot be answered in the abstract. We wish for the group to be healthy and powerful, but we also require openness to leaving the group and creating alternative organizations. It is natural and inevitable that people will arrive at different answers about how best to develop a new technology, so it is natural, and perhaps healthy, for different regulatory organizations to develop it.
:
:In aiGovDAO, we seek to create a community that wisely guides the development of AI. We wish to create a powerful organization that controls that development insofar as its goal is to prevent negative outcomes. Power over decisions about what is good and bad can lead to oppression when control is too strident, and to chaos and dissolution when there is too little control. While we wish to give individuals greater freedom and power, we also wish to support groups that limit the damaging consequences of that greater freedom and power. The solution to the problems of too much or too little control is to nurture healthy applications of AI that promote human flourishing, including tools that police malign uses of AI.