Talk:Criticisms of the DGF project
Latest revision as of 03:24, 27 March 2023
This page is one of the major reasons I wanted a wiki. I hope this can be a forum for productive conversations. Craig Calcaterra (talk) 12:33, 27 February 2023 (CST)
NOTE
Click 'Add topic' to separate subjects.
Please sign all comments by typing 4 tildes (~~~~).
- To answer, use colons (:) to indent
- Use two colons (::) to indent twice
- Etc.
Craig Calcaterra (talk) 00:06, 2 March 2023 (CST)
What are the downsides to meritocracy?
The current problems with the reputation system of academia - where if you somehow get a position with famous professor X, then you are way more likely to get a position elsewhere - are understandable in the short term, but not long-term oriented. This is probably a major force behind the boom of innovation that occurs when famous intellectuals retire or pass away (just as in politics). Meritocracy seems strongly dependent on luck. Though the hierarchies aren't totally arbitrary, especially within domains, our current systems are still very leaky/lossy. The "chosen" mentality, without an understanding of how much luck was involved, proves to be an unhealthy dissociative. How can we build systems/culture that better account for this "chosen" mentality?
The problem still comes down to how we establish trust and reference points whether it be art, ideas, and/or people. People are more willing to bet on a winning horse with a record than a "dark horse." I'm not saying we can solve this, but maybe we can drastically decrease the cycle time for innovation booms.
Here is a list of issues/constraints that also lead to problems in a supposed meritocracy: 1) resource and reputation stakes breed conservatism, 2) the wobbliness of the "non-experts go expert shopping" dichotomy, 3) making a lot of decisions and follow-up decisions based on largely incomplete data, 4) nepotism. Administrator (talk) 00:59, 27 January 2023 (CST)
- Those are profound problems we need to contend with. But with this platform, we have more tools than before for cultivating a healthy psychological culture. Better accounting. Better transparency. More opportunity to be conscious of our choices of how to behave toward one another.
- That is not to dismiss those fears. I am worried on a metaphysical level whether having a group of assholes is absolutely more efficient. I don't believe that is actually true. But I am worried it might be. And I am certain that it is natural for systems to degenerate if they are not protected. Moreover, along with the good side of this tech for helping us be conscious of our actions, it simultaneously makes it possible for us to lose consciousness of those same things--because it automates accounting and transparency.
- That being said, I don't think there is a possible engineering solution to that problem. And further, I wouldn't want there to be one. I think it's a fundamental human quality to have the choice to be good or bad. This tech excites me because it gives us new opportunities for expressing goodness. But it terrifies me, because that always goes hand in hand with new opportunities for evil to express itself. Nevertheless, I've got to bet on hope. Craig Calcaterra (talk) 08:25, 27 January 2023 (CST)
Is DGF dangerous?
Jonathan Kung
14 days ago
In its untuned and imprecise form, is DGF dangerous?
2 replies
- Ladd Hoffman
- 12 days ago
- I talked with Jonathan about this, but I want to record my thoughts here.
- What we are trying to build is a system that enables robust self-governance. So the main risk I see is that a system is built which purports to offer that, but in reality fails to deliver.
- In other words, if bad actors recognize that these concepts appeal to people, they could co-opt the appearance of the system in order to exert control and further exploitation.
- However, this is already the status quo, so I don’t see that we are introducing significant additional risk.
- I think the main thing is that we ourselves avoid promoting such a corrupted system.
- Craig Calcaterra
- 9 hours ago
- I agree with Ladd. But also...
- Yes. It's very dangerous. In its untuned and tuned form.
- I'm scared out of my mind trying to mess with the structure of the global economy. That's why I've been dragging my feet and trying to prove as much as I can quantitatively, so we can have some conscious control over this beast.
- But I believe it also has a great deal of promise to help improve humanity and solve many of our problems, problems that I don't see how to solve without it. Craig Calcaterra (talk) 04:21, 27 March 2023 (CDT)