An Idea on How to Keep Digg All Human
Many of us early adopters have been mulling over the problem of how to keep AI-powered bots out and verify that actual humans are driving the digg ecosystem.
Please shoot the following idea down and expose its weaknesses and problems. After all, this is how we learn and get better. Here goes...
This follows the "network of trust" idea that Linus Torvalds expounded in his Git tech talk at Google in 2007: he only needs to trust maybe '5, 10, or 15 people' to maintain the Linux kernel because he knows these people are smarter than he is, special, and have in some way earned his trust. To borrow a phrase from Meet the Parents, they have entered the "circle of trust".
Could a similar approach be employed by digg.com? Instead of opening the floodgates to general sign-up, require each new user to be referred by an existing user. Instead of random invitation codes, have each invitation begin with the username of the person doing the referring. So a single invite code shared by Kevin Rose would look something like kevinrose#randomcodehere. Each code would be unique and work only one time: the digg back-end would confirm the code's validity, and the referring user would be pinged to 'verify' the new account as 'trusted'.
Invite codes could not be shared anonymously since each code is tied to a user. Then every new account would be verified as legitimate both by the back-end AND the user who referred them.
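To make the mechanics concrete, here is a minimal sketch of how single-use, username-tied invite codes could work. Everything here (the function names, the in-memory store) is a hypothetical illustration, not any real digg.com API:

```python
import secrets

# Hypothetical in-memory store: code -> referrer username,
# holding only codes that have not yet been redeemed.
_unused_codes = {}

def issue_invite(referrer: str) -> str:
    """Create a single-use code of the form username#randomcodehere."""
    code = f"{referrer}#{secrets.token_hex(8)}"
    _unused_codes[code] = referrer
    return code

def redeem_invite(code: str, new_user: str) -> str:
    """Validate a code, burn it, and return who must verify the new user."""
    referrer = _unused_codes.pop(code, None)  # pop enforces one-time use
    if referrer is None:
        raise ValueError("invalid or already-used invite code")
    # In the proposed system, the back-end would now ping `referrer`
    # to confirm the new account as 'trusted'.
    return referrer
```

Because the referrer's username is baked into the code itself, a leaked or abused code points straight back to the account that issued it.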
The system would keep track of the relationship of referrals. So everyone a user refers would be under their 'tree' of referrals or a personal "circle of trust". If a bad actor somehow enters the system, the system would look to the person who referred them.
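The referral tree described above could be as simple as a map from each user to their referrer; walking up from a suspect account exposes every link in the chain that let it in. This is a toy sketch with made-up usernames, purely for illustration:

```python
# Hypothetical referral records: each user maps to the user who invited them.
referred_by = {
    "alice": "kevinrose",
    "bob": "alice",
    "badbot": "bob",
}

def chain_of_trust(user: str) -> list:
    """Walk up the referral tree from a suspect account to a founding user."""
    chain = [user]
    while user in referred_by:
        user = referred_by[user]
        chain.append(user)
    return chain

# If 'badbot' turns out to be a bot, every account that vouched for it
# is visible:
print(chain_of_trust("badbot"))  # ['badbot', 'bob', 'alice', 'kevinrose']
```

The same walk could feed accountability rules, e.g. suspending referral privileges for users whose 'tree' repeatedly admits bots.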
Perhaps each new user would go through a 'probationary' period during which they could not refer anyone until they learned the rules, contributed to the community, and showed 'evidence' they were an actual human, at which point they might be offered a limited number of referral codes to share with family and friends.
It seems this would create a tight network of trust in which every user is truly verified, providing a mechanism to thwart bot accounts and to expose any weak links in the chain that are letting them into the system.
One downside to this approach is that it would obviously limit the speed of new sign-ups. So if the folks behind digg.com want to become a gigantic community in very short order, this would probably slow that down.
For what reasons might this approach work OR not work?