What are enthusiast / professional scores VS community scores really?
Take this young but very promising US brewery for example. The scores on UT already show it has very good beers. Here on RB, because of low ratings, it still shows as a pretty average place to go… Why would users and brewers care about this?
I definitely agree both scores have their place and should be separated using different tabs (Full Reviews vs. Short Reviews and Ticks; drop the "Expert ratings" and "private ratings" terms…).
We just need to find a way to make both kinds of ratings count toward the final score for the beers.
Apologies here if it came across as dividing the ‘community’, that was not my intention. I’m not saying that we aren’t all ‘beer lovers/enthusiasts’; but those who put more effort into it and write reviews should receive a little more credit. At least that’s how I see the overall score when I’m referencing it (this is how the professionals/experts/critics have scored this beer).
I’m with you on this one. The combined score should effectively be the 5-star rating.
I feel this is where it becomes interesting, depending on how we display the scores. I have tried to find an example of the following; I'm not even sure such a case exists.
A beer that has been reviewed positively but ticked negatively (20 reviews with a high ★ average but 40 ticks with a low ★ average). And vice versa, a beer that has been reviewed negatively but is ticked favourably.
But I could be wrong here and the correlation between the two is much closer than I think it is.
Something we were experimenting with for the future was showing predictive Overall/Style scores based on the reviews left (if under the minimum required number of reviews).
UT
3.65/5.0 (20 ratings): I know there are 20 ratings, but I don’t know whether any of those ratings have any written review content.
Ratebeer
60+ Overall score: I know there are at least 10 reviews that I can read about this beer.
90+ Overall score: Due to Bayesian weighting, I know there are at least 50 reviews.
So on RateBeer… seeing a value for Overall score gives me a reason to tap into the beer (as I know there are reviews waiting for me).
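RateBeer hasn't published its exact formula, so purely as an illustration of the inference above (a high weighted score implies many reviews), here is a common Bayesian-shrinkage sketch. The function name, the `prior_weight` of 10, and the site-wide prior of 3.0 are all my assumptions, not confirmed site behaviour:

```python
def bayesian_average(ratings, prior_mean, prior_weight=10):
    """Pull a beer's raw mean toward a site-wide prior.

    With few ratings the result stays near the prior; as the
    rating count grows, the observed mean dominates. This is why
    a very high displayed score implies a large review count.
    """
    n = len(ratings)
    if n == 0:
        return prior_mean
    raw_mean = sum(ratings) / n
    return (prior_weight * prior_mean + n * raw_mean) / (prior_weight + n)

# Three glowing reviews barely move the score off the prior...
few = bayesian_average([4.5, 4.6, 4.7], prior_mean=3.0)
# ...while 51 of them push it close to the raw mean of 4.6.
many = bayesian_average([4.5, 4.6, 4.7] * 17, prior_mean=3.0)
```

Under these assumptions, `few` lands around 3.37 while `many` lands around 4.34, which is why seeing a 90+ Overall tells you the review count must be substantial.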
I don’t have an opinion on this one. If we use River Roost Big Smooth as an example, we are comparing a beer with 829 ratings on UT to 3 ratings on RB.
For me the most important piece of information I’m taking into account here (when looking and assessing this specific beer on RB) is not the 5-star rating, but the rating count.
Well, that is sort of a problem. It may sound absurd since the site is called Ratebeer and we’re writing these reviews, but I don’t actually want to read the reviews for each beer I encounter. I rate around 1,500 beers yearly and encounter probably 10x more. How do I choose those 1,500 beers? I use Untappd, since RB most of the time doesn’t have enough data and/or doesn’t display the data in a way that would help me make my decision.
Imagine having to choose out of 500 or 1,000 beers in a beer store. Who has time to read reviews for those beers? Also, I think the “This review was helpful” button is useful, and it may distinguish some interesting and more unique reviews in the future. However, the reviews very often use the same descriptors. Is there a way to extract those descriptors and show them in a neat way? That’s just one idea (which Untappd has already exploited), but as we already pointed out, making RB relevant should be a priority.
Also, one thing worth considering: RB used to have rating assistance, i.e. predefined descriptors which you would click and which then appeared in your review. Implementing this system in the app would speed up rating and make it way easier.
I believe one of the main problems with the RB system is the time needed to type a review. This could also help introduce beer language to new users. And imagine yourself with a beer in one hand and your phone in the other, trying to mingle with people at a beer festival: it’s not that easy to type a review. But clicking a dozen buttons on your screen is way easier.
Great idea! I do not rate beers, but I see your point. When RB was established, beer geeks made their notes with paper and pencil. Back home, most often the next day, they wrote up their ratings using a computer. How many do that today? Just some old-timers, I guess…
Exactly, this is the problem we currently face. As @FatPhil says, it’s maybe a pity the culture has gone this way, and us tick addicts are likely partly to blame for that (AKA I’m not sure if it’s the chicken or the egg), but that is the way that it is. Seems we should adapt.
I use Ratebeer mostly to tick having been suckered in by the place ratings. I fully rate a small percentage of beers. (I’m steaming toward 20k ticks/private ratings)
How do people use other people’s reviews? Do you simply look at the score, or do you really read the whole thing? Looking at some reviews, a reasonable percentage could just be ticks anyway. If I’m out with company, I don’t want to be scribbling notes or tapping away at my phone.
Personally, I don’t really mind if the ticks don’t count, since I haven’t written a review, which is the whole point of Ratebeer. But it would be nice to toggle them on and off (as suggested above), and for us to count in all the stats such as Rater of the Year etc. (again, with a toggle).
I guess, like place ratings, it then leaves the system open to abuse if people can really be bothered to whip up excitement amongst a relatively small proportion of beer geeks!
That’s a hell of a lot of ticks. I wonder how many super tickers are on the site? @joet? Maybe @fonefan and @omhper’s records pale in comparison to some of these guys out there…
Wow. Looks like I’ve basically written two War and Peaces’ worth of words. Wouldn’t want to read that book unless you want a good quarter of it to be like “fresh hops, pine, grapefruit, golden pour. Bitter end…” hahaha.
An important function of any score is that it influences future scores. (@aww Currently, the beer page is still erroneously showing the weighted mean only when no score is available. This has effects that we should remedy as soon as possible.) This means that displaying tick-generated or tick-influenced averages would affect future scores, so the influence of tickers on scoring would only emerge once such a change was made and took effect. I don’t like any proposal that sacrifices rating quality, and this one would be very risky.
For now, it wouldn’t hurt to show tick counts. That’s important for several reasons. It might even be useful to show tick averages (under “Show More”) for premium users for beers with existing scores. But merging tick ratings with reviews poses great risk.
As an admin, I can see the value of seeing tick counts, but not as a user. I’m curious: what value do you see that having for the average user? A sign of rarity?
As a user, I want to be able to QUICKLY look up a beer and be able to decide “Of the 10 new beers on tap which do I want to try?” Right now, because so many beers have low # of ratings I often can’t do that and trust the numbers displayed without scrolling through the ratings and having to interpret what I see there.
On my most recent trip I often switched to Untappd (either instead of or in addition to RB) because it more reliably has more ratings, and is therefore more likely to overcome the issue of too few ratings to be meaningful. (Mind you, I also do mental math of 3.75 on Untappd ≈ 3.5 on RB for me, but that’s a whole other thing.)
The best option for me would be to see three numbers at the top: Overall Score (based on the current weighted score; show N/A if below the ratings threshold), Average Score (average of all full text ratings), and a Combined Score (based on full ratings plus ticks, either weighted or not).
The combination of these numbers would tell me a story about the beer very quickly, particularly any differences between them.
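As a hedged sketch of the three-number proposal above (the function name, thresholds, and the Bayesian weighting used for the Overall number are all illustrative assumptions, not confirmed site logic):

```python
def score_summary(review_scores, tick_scores, min_reviews=10,
                  prior_mean=3.0, prior_weight=10):
    """Compute the three proposed headline numbers:
    - overall: weighted review score, or "N/A" under the threshold
    - average: plain mean of full text ratings
    - combined: plain mean of full ratings and ticks pooled together
    """
    n = len(review_scores)
    average = round(sum(review_scores) / n, 2) if n else None
    if n >= min_reviews:
        overall = round((prior_weight * prior_mean + sum(review_scores))
                        / (prior_weight + n), 2)
    else:
        overall = "N/A"
    pooled = review_scores + tick_scores
    combined = round(sum(pooled) / len(pooled), 2) if pooled else None
    return {"overall": overall, "average": average, "combined": combined}

# A beer reviewed positively but ticked low, as discussed earlier:
# too few reviews for an Overall, high Average, much lower Combined.
s = score_summary([4.0, 4.0, 4.0], [3.2] * 40)
```

The gap between `average` and `combined` here is exactly the reviewed-high/ticked-low story the thread describes, visible at a glance.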
I agree. I don’t really care how many times a beer has been ticked if I can’t see what tickers are rating it. It does me no good to see a beer has been ticked 1,000 times if 10 people haven’t rated it to give it a proper score.
Right now, one of our challenges is that newcomers, the vast majority of visitors, have difficulty understanding our scoring system as it is; they need fewer, simpler options.
One idea would be to make the displayed figure conditional, a strategy we’ve employed successfully in the past. We would show one average at the top (and more under Show More), conditionally based on rate-count and tick-count criteria like so:
| Rate Count | Tick Count | Show Review Avg | Show Tick+Review Avg | Show Weighted Review Avg | Show Score | Show Style Score |
|---|---|---|---|---|---|---|
| 0 | 5 | - | x | - | - | - |
| 4 | 5 | - | x | - | - | - |
| 5 | 5 | x | - | - | - | - |
| 6 | 30 | x | - | - | - | - |
| 9 | 999 | x | - | - | - | - |
| 10 | 1000 | - | - | x | x | x |
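The conditional logic in the table can be sketched like this. The thresholds are read off the example rows (which leave some combinations, such as many reviews but few ticks, unspecified), so treat the exact cutoffs as assumptions rather than settled product rules:

```python
def choose_display(rate_count, tick_count):
    """Pick which average(s) to show at the top of a beer page,
    following the conditional-display table.

    Assumption: full (weighted) scores unlock at 10+ reviews;
    the 1000-tick figure in the last row is treated as incidental.
    """
    if rate_count >= 10:
        return ["weighted_review_avg", "score", "style_score"]
    if rate_count >= 5:
        return ["review_avg"]
    if tick_count >= 5:
        return ["tick_plus_review_avg"]
    return []  # too little data of any kind; show nothing
```

Running the table's own rows through this function reproduces each row's marked column, which is a quick way to sanity-check any future threshold changes.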
We would always show rate counts for both ticks and reviews so that people who were curious would know what they’re getting. Eventually it would be good to do away entirely with “private” ratings so that users and admins alike could see marks provided by tickers with an additional click (reducing the influence of ticks while still allowing for transparency).