This began as a comment on the ‘New World Notes’ blog:
Second Life Needs a User-to-User Karma System Like Reddit
Popularity points, karma, like buttons, etc…
– These are all techie solutions to a social issue: how much do we agree with or disagree with some bit of online content, or the person behind it?
In our real lives this is simple: we form social bonds with people and ideologies that then influence us one way or another. These are complex interactions of trust based on our respective backgrounds and life experience.
They are called relationships, and this is an intentional word choice. Such notions of what we like or trust, where we stand, what we agree with – are developed over time, in a gradual process of building up ties. There’s a lot of give and take that goes into it. As we learn about and influence the world around us, we develop relationships with people, places, things, ideas, ideologies, religions, and so on.
Even when you seem to flip on a dime and choose to ally with a new stance, that choice is one influenced by a very long process of learning about something, and having it impact you. The sudden shift is still a reaction to something already ingrained.
The internet today is full of ‘Like’ buttons or ‘rate this post up or down’ thumb icons…
This is a technological solution to 6 million years of evolutionary socialization. It attempts to let us each leave our scent on a given tree and say “I’m with this” or “this is not me” – and then use that as a badge that other people, who might have very different life experiences, can rely on.
Three posters on the blog above made observations on the benefits or problems of such an idea:
Ezra: “Curbing misconduct on Reddit requires a staff of moderators to deal with troublemakers.”
Orca: “I personally refuse to have myself judged by people with much less frontal lobe activity or moral integrity.”
Metacam: “We just need some sort of respect or trust system so that you can be a bit smarter about who you are interacting and doing business with.”
I think these three illuminate some of the fundamental flaws in any system like this – and why I doubt it really works on Reddit.
Certainly topics there can easily get pushed up for no logical reason, or pushed down for even less logic.
The blog above suggests fixing it by hiding how the votes are tallied. But hiding the method just lets the system pretend to work while still producing deeply flawed results. It does not ensure it works. It just ensures nobody can see the mess.
It’s like sweeping a dead cat under the sofa and then saying your house is clean. You’ve still got a dead cat in the living room… it’s just hidden.
What will moderators really curb? Just certain obvious forms of misconduct: things spelled out in a terms-of-service policy. That does nothing to address bias or differing perspectives. And grossly unfair moderation, driven by where the moderator’s personal loyalties lie, is very common (the core reason I no longer participate in certain SL third-party forums).
Again they just work to hide the ball – and to make it appear as if the problem is under control.
Any such system becomes one about cliques, and if you’re not in the in-crowd, it can get very harsh.
Orca’s comment triggered my thoughts, because who has moral integrity depends very much on who gets to answer that question. And I suspect we would have very strong disagreements over that. A system like this would end up letting different camps of worldviews simply down-vote each other in never-ending spirals of hostility, while propping up their own hate-mongers.
Metacam’s comment illustrates the real danger: that such a horribly flawed system becomes trusted – when in fact it is far more biased and abusive than simply having no system at all.
This becomes one of those ‘techies not getting it’ things.
You can’t solve a social problem with 1s and 0s. A rating system is just throwing a ‘gamification mechanic’ at popularity and trust. Popularity and trust are best handled by having people build up reputations over time. Not with points and scores.
It’s a flawed premise to ever assume you can trust a 97 more than a 73. Meaningless numbers, applied to a social dynamic, produce wrongful and harmful results.
This is one of the core flaws of the entire web 2.0 social media era. It’s not real socialization. Why do interactions on Facespam and Twithead feel so shallow and distant? Why are they so easily abused to hurt our privacy? Why are they leading to so many -broken- relationships?
They’re all ‘techie’ 1s and 0s – gamified solutions – to social interaction. Biology has millions of years of doing this the slow and gradual, personal, built-up way. You can’t hit that over the head with a binary chip answer and call it a day. It just don’t work.