Talk:AI Governance DAO
== NOTE ==
Anyone can contribute. To edit this wiki, [[Special:CreateAccount|create an account]].

Please sign all comments by typing four tildes (~~~~).

Click 'Add topic' to separate subjects.
:To answer, use a colon (:) to indent.
::Use two colons (::) to indent twice.
:::Etc.
[[User:Craig Calcaterra|Craig Calcaterra]] ([[User talk:Craig Calcaterra|talk]]) 04:26, 27 March 2023 (CDT)

= Purpose =
This page is in brainstorming status. We believe the structure of a DAO can help govern the development of AI, but the details are far from crystallized. However, the demand for a wise answer to the question of how to govern AI is pressing, as the development of AI is accelerating.

This proposal requires democratic participation if it is to be meaningful and effective.

We are moving toward a unique historical moment, with many technological innovations transforming society: AI, nanotech, global P2P IT. These technologies are growing in power, while the governance structures of our public institutions are not evolving to match the challenge of guiding the deployment of this tech.

Will humanity be oppressed by this new technology, or will we be empowered? Science fiction has proposed many valid scenarios answering this question. The answer has always been the same, for the creation of any tool: yes and yes.

People will certainly use AI to abdicate their responsibilities. And others will use it to take on more power. Will they use it to help humanity or harm it? Again, history answers: yes and yes.

The point of using a DAO structure to organize our response is to make the use of AI as transparent and democratic as possible, in the hope of harnessing our talents in the service of helping humanity and avoiding harm to the greatest degree possible.
[[User:Craig Calcaterra|Craig Calcaterra]] ([[User talk:Craig Calcaterra|talk]]) 15:53, 2 April 2023 (CDT)

= Plan =
Similar to the software developer DAO:
# Create a network of developers who make useful and good AI tools that follow the values of the group.
## Open source
## Free tools
## Small-processor & P2P network focus
# During this stage of free tool creation, the ChatREP DAG develops history and relationships through weighted citations (edges) instead of weighted nodes (WREP = 0).
## Governance proposals & a body of protocols develop.
## Culture and values develop.
# Once the user base is large enough and the talent pool is proven, offer work for hire for individual requests with fees (according to a previously agreed protocol that can be automatically validated).
## The fees generate WREP through VP.
## The citation structure of the ChatREP clarifies & grounds what the culture of the DAO actually is. (This means that $ is the driver of what the DAO really stands for. That's good and bad. It needs to be carefully monitored, as power is distributed at this stage.)
[[User:Craig Calcaterra|Craig Calcaterra]] ([[User talk:Craig Calcaterra|talk]]) 13:53, 14 March 2024 (CDT)

= Resources =
Discuss any relevant info or related projects:

State-of-the-art research: DeAI, AI safety, Public AI, etc.
*DeAI
**Naptha.ai: litepaper
**Olas Network: whitepaper
**Boltzmann Network: whitepaper
**POKT Network
*Centralized for-profit AI
**Google AI
**Meta
**OpenAI
**Microsoft
*Societies
**Berkeley Responsible Decentralized Intelligence (RDI)
**Stanford AI

Stanford Center for AI Safety: https://aisafety.stanford.edu/all-publications.html

MIRI (2005): https://en.wikipedia.org/wiki/Machine_Intelligence_Research_Institute

Allen Institute for AI (2014): https://en.wikipedia.org/wiki/Allen_Institute_for_AI

A leaked [https://www.semianalysis.com/p/google-we-have-no-moat-and-neither internal Google document] claims open-source AI will outcompete Google and OpenAI.
[https://www.youtube.com/watch?v=MSsYlPDmxfE&ab_channel=LexClips This video] is a relatable introduction that cites other work in the same direction as aiGovDAO.

Marc Andreessen identifies issues and ways forward in [https://www.youtube.com/watch?v=nb3nS0BY5bo&ab_channel=LexClips this video].

Jonathan Kung shared the [https://www.whitehouse.gov/ostp/ai-bill-of-rights/ Blueprint for an AI Bill of Rights] from President Biden's White House Office of Science and Technology Policy, which identifies five principles to "guide the design, use, and deployment of automated systems to protect the American public in the age of artificial intelligence":
#Safe and Effective Systems
#Algorithmic Discrimination Protections
#Data Privacy
#Notice and Explanation
#Human Alternatives, Consideration, and Fallback
[[User:Craig Calcaterra|Craig Calcaterra]] ([[User talk:Craig Calcaterra|talk]]) 17:32, 5 April 2024 (CDT)

== Public AI ==
[https://arxiv.org/abs/2311.11350 This paper] introduces the [https://publicai.network/ Public AI] project, which has a goal similar to our AI Governance DAO's. The issues raised are all relevant to our purpose, and the suggestions are in concert with ours. The major point of contention between the projects is perhaps the scope. I believe their definition of "public" in the term Public AI is strictly sub-national; i.e., each country would have its own, distinct Public AI governing body. California would have its own Public AI group. However, the internet has made information-technology tools global, so a supranational (i.e., global) context is more appropriate. Subnational governing bodies won't be effective against the problems that the Public AI project raises. [[User:Craig Calcaterra|Craig Calcaterra]] ([[User talk:Craig Calcaterra|talk]]) 10:42, 16 June 2024 (CDT)

[https://publicai.network/ Public AI] is not to be confused with [https://publicai.io/ PublicAI].
:I disagree; I think the subnational is the way people are addressing the supranational. The UN is super ineffective, and we need societal hackerspaces to help us figure out what the supranational needs. However, we need the supranational infrastructure yesterday. DGF is a wireframe of what is needed, and it should be tested at subnational scales many times before anyone will accept it. --[[User:Kung|Kung]] ([[User talk:Kung|talk]]) 13:22, 14 July 2024 (CDT)

= Proposals =
== Domain specificity ==
The AI DAO should be devoted to the narrow problem of democratically governing the deployment of AI tech. We should strive to avoid splintering concerns such as governing nano-manufacturing, P2P IT, etc., even though they certainly overlap. Instead, separate DAOs should be instituted to address separate issues, with membership overlapping and protocols being partially cloned.

== REP-weighted Democracy ==

== AI members ==
The issue of power given to AI members--as opposed to human members--is basic, and whether to allow them at all is a challenging question. At the moment, given the lack of wisdom in current iterations of AI, it is obvious that AI ungoverned by human members violates the fundamental purpose of this group. The group is devoted to the cause of finding wise protocols for deploying AI to promote human flourishing. That includes promoting human agency. So AI contributions to the group need to be consciously directed by human members.

== Pseudonymity ==
Do we allow pseudonymity, and therefore open ourselves completely to AI participation unguided by humans? That seems counter to the mission of the group. But pseudonymity is a value that will improve the group. I propose allowing pseudonymity, but maintaining a defensive posture that analyzes members' contributions to detect undirected AI contributors, then slashing the REP of detected AI members and consciously re-evaluating their contributions.
--Craig

:We have to create a lot of avenues to look at the same set of information in different ways. We can have different networks that interface with each other, where pseudonymity exists in one and KYC in another. Within the same community, it is recommended that members have both pseudonymous and KYC identities. Different identity types come with certain risks and will be treated accordingly. Reputation accumulation over time will grant the identity more "rights". --Jonathan

::I agree about different types in general. And pseudonymous accounts are preferable in general. But the question here is whether we allow pseudonymous members in the AIgovDAO. If we do, then we are open to members who are non-human. So before we discuss whether to allow pseudonymous members, we should first decide whether we are okay with non-human members. I am against it, but open to discussion. --Craig

== Onboarding members ==
Founding members are given fREP, which clones the powers of cREP, wREP, and gREP. New members may earn cREP by participating in the discussions through posts in the Forum, and they may earn wREP and gREP by making proposals that are eventually referenced.

== Governance process ==

= Academic failures =
See [[Talk:Science_DAO_Framework#Failures_of_the_scientific_academic_establishment|this discussion]].

= AI issues of concern =
== Centralization ==
*Lack of transparency -- leads to:
**unpredictable outcomes
**more centralization of
***ownership
***understanding
*Centralized control of AI -- leads to:
**social inequity
**less transparency
*Less participation -- leads to:
**centralized control
**less transparency

== Decentralization ==
Lack of control can lead to all the same problems that arise from centralization. The Decentralized Physical Infrastructure (DePIN) space has been quite active. However, the chip-manufacturing problem is untenable: it will remain centralized for the foreseeable future. Hardware purchasing and management, however, can be decentralized.
The language for DePIN is still being developed.

== Energy use ==
The energy requirements for training may lead to the centralization issues above, since the cost is high. See https://arxiv.org/abs/2311.16863.
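The edge-weighted reputation idea in the Plan section ("weighted citations (edges) instead of weighted nodes (WREP = 0)") can be illustrated with a minimal sketch. This is only a toy model under assumed semantics, not a specification: the class and method names are hypothetical, and it does not model WREP accrual through fees/VP or any REP type distinctions (cREP, wREP, gREP, fREP).

```python
# Toy sketch of a ChatREP-style citation DAG: posts are nodes, and a
# citation is a weighted edge from the citing post to the cited post.
# Nodes carry no intrinsic weight; a member's implicit reputation is
# just the total weight of citations their posts have received.
from collections import defaultdict


class ChatRepDag:
    def __init__(self):
        self.author = {}                    # post_id -> member name
        self.citations = defaultdict(list)  # cited post_id -> [(citer post_id, weight)]

    def add_post(self, post_id, member):
        self.author[post_id] = member

    def cite(self, citer, cited, weight=1.0):
        # Citations point at earlier posts, so the graph stays acyclic.
        if citer not in self.author or cited not in self.author:
            raise KeyError("both posts must exist before citing")
        self.citations[cited].append((citer, weight))

    def member_rep(self, member):
        # Sum the weights of all citation edges arriving at this
        # member's posts; citing others earns nothing directly.
        return sum(
            weight
            for post_id, edges in self.citations.items()
            if self.author[post_id] == member
            for _, weight in edges
        )


dag = ChatRepDag()
dag.add_post("p1", "alice")
dag.add_post("p2", "bob")
dag.cite("p2", "p1", weight=2.0)   # bob's post cites alice's post
print(dag.member_rep("alice"))     # 2.0
print(dag.member_rep("bob"))       # 0
```

One design consequence worth noting: because reputation lives entirely on edges, re-evaluating a suspect member (as proposed under Pseudonymity) could amount to re-weighting or removing their outgoing citation edges, rather than editing node state.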