After reading quite a bit about GoodReads, both on the site itself and in what has been written about it, I have come to the conclusion that its current problems are an outgrowth of a fatal flaw in the structure of the site itself, and that unless that flaw is addressed, things are only going to get worse.
Much has been said about the behavior of both reviewers and authors, and there is certainly much to criticize regarding how the users of the site have chosen to escalate their conflicts. I, however, am more interested in the genesis of those conflicts.
I believe that the essential problem that GoodReads faces lies in the confusion between rating for review and rating for recommendation, and that the structure of the site encourages such confusion.
To take a counter-example, let’s look at Netflix–one of my favorite sites of all time. Netflix is designed so that users rate for recommendation. I rate movies and TV shows based on my own personal preferences, and the purpose of the rating system is to allow the site to suggest other films similar to the ones I have enjoyed.
For example, I love Con-Air. Sweaty guys blowing stuff up, fist fights in the belly of a plane, landing an aircraft on the Vegas strip, fergoodnessake, it’s an hour and a half of brainless fun. When I’m stressed, I put it on and let Nick Cage and John Cusack pummel my brain into jelly. But when I rate Con-Air five stars, I am not saying that it’s the best film since Citizen Kane; what I am saying is, “I enjoy this–show me more like it.”
On the other hand, if I rate Whatever Happened To Baby Jane as a two, I am not saying that it’s a bad movie–technically, it’s brilliant–I am saying, “I don’t enjoy this one, so don’t recommend movies like it.”
In short, my ratings on Netflix have very little to do with the quality of the film in a cinematic sense and everything to do with my tastes. Netflix doesn’t suggest Westerns to me, not because Westerns are bad, but because I’ve made it clear that my taste doesn’t run that way. If someone were to use my Netflix ratings as a scale of value, it would suggest that William Castle was a better director than John Ford, which is absurd. Netflix ratings aren’t meant to be used that way; they are a way for me to keep track of what I’ve liked, and to find more of the same.
Now let’s look at rating for review, which is what you see on a site like Amazon. For an Amazon review, I use a much different (and, it is to be hoped, more objective) standard of value. Obviously my own taste will influence how I like a particular book or movie, but when I write a review for public consumption I try to address the quality of the work more than my own experiences. I am rating it not only for myself, but also for others. I would not give Con-Air five stars on Amazon, nor would I give Whatever Happened To Baby Jane two.
GoodReads is set up for rating for recommendation. It is designed to introduce readers to books based on the reader’s own preferences. It was built by readers, for readers. As such, it is good at what it does.
The problem lies in the way that ratings are shared. Unlike Netflix, GoodReads doesn’t keep my personal ratings private. What’s worse, it uses an aggregate rating system that is visible to other users and to the author of a work. And that, I believe, is the root cause of the reviewer/author flamewars so rife in the site’s forums. GoodReads uses a rating for recommendation system, but publicizes it like a rating for review system, and the two standards of value are incompatible.
In my opinion, the solution is simple: stop showing ratings to anyone other than the user who rated the work. I don’t need to know that a fan of Historical Romance isn’t interested in reading Catskinner’s Book (I can probably figure that out for myself). Although the site does make it clear that ratings are personal and for the purpose of recommendations, the fact that everybody gets to see everybody else’s ratings makes it look like a review site.
Because of that, how I choose to rate something affects not only my own recommendations, but also how other people see a book. If a mystery comes up rated at two stars, I don’t know if it’s a poorly written book, or just a book that had the bad luck to be rated by a bunch of people who don’t like mysteries.
What’s more, the way in which the aggregate ratings are calculated and displayed allows for abuse of the system. These abuses have been adequately documented elsewhere; there’s no need for me to list examples. My point is that keeping individuals’ ratings private would remove the pay-off for malicious use of the rating system and remove the incentive to artificially inflate ratings with sock puppet accounts.