Rifle Scopes: Should We, Review Criteria

Once we come up with a specs format, it would be easy to create a list of scope families (all the variants grouped) and have members volunteer to collect the spec sheet, with another member as QC, and submit it for inclusion. That could happen even while we are still rasslin' over the review side of it.
 
I really like the approach Frank suggested, because having traveled a ton while working for a software company, I put a lot of confidence in reviews on Yelp, Google, Uber (drivers), etc. If both TripAdvisor and Yelp gave a restaurant or hotel high reviews and there were a lot of reviewers, I knew I was golden. It has never let me down, and I have confidence even though I never tried those places previously.

Same thing with these scope ratings. The bottom line is, if I am looking for an optic for my rifle and see the NF ATACR 5-25 F1 has 110 reviews with an average of 93.5 and the Vortex AMG has 68 reviews averaging 90.3, it's kind of a wash and I can drill down to find what people liked, what was rated highest, etc. Then if I see the IOR has 3 reviews with an average of 98.2, I take that with a grain of salt, because I see the reviewers and they are known fanboys. :rolleyes: IOW, the more reviews that average out to a higher number, the better.
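To keep that fanboy-skew point concrete: one hedged way to fold review count into the number is a simple Bayesian average, where a handful of glowing reviews gets pulled toward a site-wide prior and a large sample stands on its own. A minimal Python sketch; the prior values are invented purely for illustration:

```python
def bayesian_average(review_scores, prior_mean=85.0, prior_weight=10):
    """Blend a scope's average score toward a site-wide prior.

    A scope with only 3 glowing reviews gets pulled toward the prior,
    while one with 100+ reviews is dominated by its own average.
    prior_mean and prior_weight are arbitrary placeholders here.
    """
    n = len(review_scores)
    if n == 0:
        return prior_mean
    raw_avg = sum(review_scores) / n
    return (prior_weight * prior_mean + n * raw_avg) / (prior_weight + n)

# 3 fanboy reviews vs. 110 reviews with a slightly lower raw average
print(bayesian_average([98, 99, 97.6]))   # pulled down toward the prior
print(bayesian_average([93.5] * 110))     # essentially its own average
```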

I think this proposed rating system would give all of us a way to benchmark these scopes, especially since more people are on the site, need advice, are just learning, etc., and it would save time and effort for the more seasoned vets who are frustrated by yet another uneducated newbie post of "what's the best optic..." Just my .02, but I think this idea will be fantastic once we build up reviews.
 
Or just give it the $$ rating like Yelp.
Establish a general grouping by spec and sub-categorize by price. My old Tasco 4x fixed is a 10/10 if you are looking to spend $50. That keeps it a little more apples to apples.
 
Don't know if this matters to the discussion: should a specific application be specified, to be fair to the manufacturer? Picking pepper out of fly shit.
 
Fuck Fair,

This is opinion based, and fairness and opinions rarely go hand in hand. A bigger piece of this puzzle is spending your own money. Very few people will criticize their own choice. The fact that misery loves company is more likely to play into it than fairness. Nobody has access to machines that can test optics; instead, we have our look, our feel, our opinions.

Fairness is the point; we know there is very little "fair" out there, so instead we deal with endless opinions.

This is designed to measure the opinion and put a value on it. It is super prevalent in other places; you can go to Amazon and read reviews right on the product page. This is about bringing an element of that access here. People are asking for opinions every day, so let's attach a system to those opinions.
 
The fairness would be in the evaluation method and criteria definitions. If the scorecard criteria are all defined and the evaluations are all quantified using the same definitions, you should get similar scores for the same scope across the sample. Some variation would be expected, but it shouldn't be way off as long as the criteria are defined clearly enough that everyone understands them.

Do we have the scorecard criteria nailed down, or at least an acceptable scorecard for the time being? Is the next step defining the criteria so they're measurable?

Once we get the criteria and definitions nailed down, we could do a proof of concept: put the current scorecard in play, have a small group of people review a commonly owned optic, and see where improvements/clarity need to be added before a full release/rollout to everyone.

Just throwing that approach out there.
 
That is what I am trying to do, nail down the criteria, but everyone wants to change the subject to far-off bullshit.

I posted basic criteria on how I see this, and a few guys have followed suit, but most comments are just silly stuff like "what if the reviewer is a Fudd, what do I do?"

The point after that will be to create the criteria and have a post that spells it all out. This way it can be referenced and used, and we can start creating review cards based on the criteria determined, but most of the people posting aren't contributing to that aspect of the request.
 
On FOV: compare scopes at the same magnification. Listing the FOV at the lowest power is not always useful because some scopes tunnel.

Since magnification ring markings on many scopes are useless, for scopes with fairly constant eye relief a good metric is the calculated FOV at 10x or 20x. Basically, take the FOV at the highest power, multiply it by the highest power, and divide by 10 for a calculated 10x FOV. It is not perfect, but it is a consistent metric.
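For reference, that calculated-FOV metric is just a product-and-divide; a quick sketch, with a function name and sample values of my own choosing:

```python
def calculated_fov(fov_at_max, max_power, reference_power=10):
    """Normalize field of view to a common magnification.

    Per the method above: FOV at the scope's highest power, multiplied
    by that power, divided by the reference magnification. Assumes FOV
    scales roughly inversely with magnification (not exact, but consistent).
    """
    return fov_at_max * max_power / reference_power

# e.g. a 5-25x scope listing 5.0 ft @ 100 yd on 25x
print(calculated_fov(5.0, 25))   # 12.5 ft @ 100 yd at a calculated 10x
```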

If you want to evaluate whether the scope tunnels, list the highest magnification that achieves max FOV.

ILya
We are saying the same thing...

Whatever we do, we should acknowledge a 3-15 with a 41' FOV might have an advantage over a 3-18 with a 26' FOV, especially now that so many of us are hunting with FFPs. This comes into play mostly at closer ranges on moving targets.

Much the same for our PRS matches, but here it gets a bit harder to quantify. Firstly, we are generally at mid zoom, but it is even more complicated than that. As an example, my ZCO 5-25 feels much wider than it is compared to my Minox 5-25 or S&B, mostly because of the way the image seems to fill more of the eyepiece regardless of true FOV.

So like you said, min FOV and the approximate magnification, if the scope has tunneling issues like my S&B, would be the easiest.
 
Looking back at the scopes I have owned I compare them against each other with:
  1. Tracking accuracy (100% = 10, 99-101% = 9, 98-102% = 8, 96-104% = 7, etc.; see the sketch after this list)
  2. Tracking precision (variation in accuracy)
  3. Amount of vertical travel (depends a bit on cartridge; a 6mm Creedmoor doesn't need as much as a .308 Win)
  4. Contrast and light transmission (distance I can still see yellow highlighter on white paper)
  5. Resolution (font sizes at distance)
  6. Reticle obstruction
  7. Reticle visibility against black background
  8. Zero stop or not
  9. Eyebox range
  10. Warranty (length, quality, transferability)
  11. Cost
  12. Magnification range and FOV
  13. Minimum parallax distance
  14. Objective lens size and required ring height
  15. Brand resale value and reputation
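As referenced in item 1, here is a hedged sketch of how those tracking-accuracy brackets could be turned into a score; the brackets beyond the ones listed are my own guesses:

```python
def tracking_score(measured_percent):
    """Score tracking from measured travel as a % of dialed travel.

    Brackets follow item 1 above: exactly 100% = 10, within +/-1% = 9,
    +/-2% = 8, +/-4% = 7. The wider brackets below that are placeholders.
    """
    error = abs(measured_percent - 100.0)
    brackets = [(0.0, 10), (1.0, 9), (2.0, 8), (4.0, 7), (6.0, 6), (8.0, 5)]
    for max_error, score in brackets:
        if error <= max_error:
            return score
    return 4  # anything worse than +/-8% (arbitrary floor)

print(tracking_score(100.0))   # 10
print(tracking_score(101.5))   # 8
```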
 
Here is an example of how weighting will be the tough question. How much weight do you put on features, and how much on price? Because I bet the first time the numbers are crunched, scopes like the Nightforce NXS, with great turrets, good reticles, great durability, and "good enough" glass, are gonna outscore some of the top tier scopes, once price is factored in. And that may not be wrong, but could wrinkle some underpants.

This is why it might be best to leave price out of it.
Price is more of a spec that is factual. The seller sets it. Our vote is in the buying. But the subjective grading sets the ranking/hierarchy of the scope, and then the reader decides what he wants at his price point. -my .02
 
A few thoughts:

On optical performance numbers

I think I'll start with my thoughts on glass. I think you had the right idea when you just lumped it all together in one score. When I test a scope's optical performance I look at: resolution, stray light handling generally, optical flare, color rendition, contrast, field of view, edge-to-edge performance, low light performance, pincushion and barrel distortion, chromatic aberration, tunneling, depth of field, and eyebox size/comfort. Most of these aspects are tested in a variety of light conditions, and most are tested with comparison scopes side by side. The end result of all this generally boils down to a single opinion of how I think the scope stacks up to its price point. The list of measurement criteria and the protocols I use for comparison are helpful to me in making sure I do not miss something big and important by failing to look at the optic's performance in a specific set of conditions, but I don't think that applying a rating to each aspect of performance is either very important or feasible for most users. Optical design is really about balancing all of the compromises you have to make such that you don't do anything poorly anyway. I would leave it at one number.

I should also mention that it is really hard to rate a scope's optical performance without any reference scopes near its price point on hand, or without a lot of experience on a lot of optics. You can easily be the guy who never knew he needed glasses, gets them, and realizes it is actually possible to see leaves on the trees across the street.

On price points and ratings

Price points on scopes are really mostly about optical performance, and price is a massive factor in optical performance. In years of testing I have not found high-cost scopes to generally track any better than mid-range stuff. There are also plenty of cases of alpha-class stuff with problematic designs that often lead to predictable failures, so I really don't see high-cost stuff as historically being generally better when it comes to durability. In the past, high cost was necessary for obtaining important features like zero stops, mil/mil, and decent reticles, but this is not really the case anymore, as even some lower-cost stuff has functional feature sets.

I mention this because, though I think it makes sense to grade scopes on most aspects of performance without what you might call a price curve, I think that optical performance really should be rated relative to its cost. If you try to rate optical performance on some kind of global scale, all you will see is that pretty much any alpha scope is between 9.7 and 10.0, with a margin of error greater than the difference in rating between scopes in a price range. You also end up with the issue of scale. Sure, the alphas will be near 10 optically, but where should all those $1k scopes I recently reviewed land on the scale? Are they at around 3 because they are 1/3rd the cost, or are they more like 9.5 because maybe you can see 95% of the bullet marks, holes, and splashes that you can with the alpha stuff? You may need to index your scale with commonly known scopes in order for people to have any idea where to put a particular scope. Otherwise a guy might put a scope at an 8 and have exactly the same opinion of it as a guy putting it at 5, just with a different idea of what sort of performance an 8 or a 5 should represent.

Turrets

People are just dumb when it comes to turrets. For whatever reason people seem to reeeaaly want to buy turrets that have clicks so stiff that they skip over several detents at a time when moved, such that they lose count and have to shift position to look at the turret. It's just dumb. You need to be able to count clicks and adjust the turret without breaking position. It doesn't matter one damn bit if they feel a little squishy. I question whether rating turrets will yield the same results in our survey that it has in the marketplace: turrets that are so stiff and clicky that you lose count. Perhaps people just like that feel in the store, though, and when they get it home they realize their mistake before they write the review.

Anyhow, those are my thoughts at the moment.
 
I'd stick to the guns, but heavy-breathing forums could be fun, lol. Rifles dressed in skirts with 50-round drum mags, ooh la la, it could be magic, except for all the laughing...
 
I'm not really reviewer qualified, but... Do you mean something like this?

Turrets:
- Plain Jane, easily legible, well marked with decent texture is a 5.
- Add one point each for capped, zero stop, locking.
- Bonus point for an especially well-thought-out zero stop and clever marking for elevation orientation.

That would put a good no-feature turret at 5, a well-featured design at 8, and a truly superlative design at 9-10.

Tracking - pass/fail, but with classes. I'd set three classes at 1.5-3.0% error, 0.5-1.5%, and <0.5%. Caveat: that's based on reviews I've read; the ranges might need to change. The score would be either "F," 1, 2, or 3.

Eye box - how to measure? Is a quarter inch generous? Standard? Combine with eye relief? How about:

3.75 inches eye relief with .25" (assumed average, must be adjusted) box is a "5." You get an additional point for each .125" of eye relief up to 4". Eye box gets an additional point for each .1" (again, needs correction based on actual values).
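Putting the turret and eye relief/eye box point schemes above into a rough sketch; all thresholds are the proposed starting values or my placeholders, not settled numbers:

```python
def turret_score(capped=False, zero_stop=False, locking=False, superlative=False):
    """Plain, legible, well-marked turret starts at 5; features add points."""
    score = 5
    score += sum([capped, zero_stop, locking])  # +1 per feature
    if superlative:                             # especially clever design
        score += 1
    return min(score, 10)

def eye_relief_score(eye_relief_in, eyebox_in):
    """Base 5 at 3.75" relief / 0.25" box; +1 per 0.125" of relief (to 4"),
    +1 per 0.1" of extra eye box. All thresholds are placeholders."""
    score = 5
    score += min(round((eye_relief_in - 3.75) / 0.125), 2)  # capped at 4" relief
    score += round((eyebox_in - 0.25) / 0.1)
    return max(min(score, 10), 1)

print(turret_score(capped=True, zero_stop=True, locking=True))  # 8
print(eye_relief_score(3.875, 0.35))                            # 7
```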

Design - so much to cover. Mounting area, length, weight, zoom range, etc. Need to put some more thought in here. Depth of field and field of view here or under glass? Those are both objective; grade on a curve based on a standard distribution?

OK, this is from my phone and it's a misery. I'll take another stab from my computer. I'm not a scope expert, but I'm a great analyst - just need to get oriented.

ETA: as I think about this, I wonder if it doesn't need to be broken down into a standard numerical configuration for features that are subjective and a rating for features that are objective. You could sort the "cards" according to the feature set and then compare within that feature set as an end user.
 
I have a couple of observations/comments from posts in the past 36 hours, and am also building off of what @lowlight and @TangoSierra916 are aggregating from Page 2:

There seems to be a ton of overthinking on the parameters.
The envisioned scorecard is supposed to give a "baseline" for performance - any metric that cannot be reasonably measured/reported consistently by a competent reviewer shouldn't be included.

Price and value are two of the most subjective and inconsistent variables that could be thrown into the mix. Candidly, if performance is the end goal, then who cares if a "budget" optic scores higher than a top-dollar optic? When the criteria are right, the numbers speak for themselves, and provide as close to an honest "snapshot" of an optic's capabilities as would be found elsewhere. It's also going to keep the industry more honest - if your optic sucks but your marketing campaign attracts influencers and fanboys singing your praises, there's nowhere to hide the lack of performance. Likewise, if you have a fantastic product that performs well but might not get the notoriety that a more mainstream brand has, the performance will show.

A breakdown of assessment of each category may make the most sense for an aggregate score:

This might be too granular and a few steps down the line, but having an evenly weighted breakdown (or an explained, rationalized weighting) for each variable under each assessment category would make a fair bit of sense. Each assessment category would be a 10- or 5-point scale, with each component being a subsidiary value in itself. My rationale is within the quoted text below:

But I would do this

Overall Score:
###.## (out of a 100 point scale)

Mechanical Assessment (100% of aggregate point value for below variables)
- Tracking = % (the measured percentage maps to a 1-10 score) (25% weight for group score)
- Turret Movement (Click Spacing) (25% weight for group score)
- Lock Features (ease of lock vs. unlock) (25% weight for group score)
- Reference Alignment (does the engraving line up) (25% weight for group score)


Optical Assessment (100% of aggregate point value for below variables)
- Parallax (14.29% weight for group score)
- Resolution (14.29% weight for group score)
- Light Transmission LowLight (14.29% weight for group score)
- Eye box (14.29% weight for group score)
- CA control (14.29% weight for group score)
- Depth of Field (14.29% weight for group score)
- Contrast (14.29% weight for group score)


Ergonomic Assessment (100% of aggregate point value for below variables)
- Turret overall (25% weight for group score)
- Mag ring ease of operation (25% weight for group score)
- Diopter Adjustment (25% weight for group score)
- Scope Finish (25% weight for group score)

Other Features / Spec Sheet (unweighted/no score)
- Illumination brightness (I do think that this should be scored)
- Illumination color options (#)
- Accessories included (sunshade, mini wrench, flip caps, etc.)
- MSRP in $
- Scope mount options
- Customer Service (I do think that this should be scored)
- Warranty

This way, as long as the process is rationalized, a scorecard can be developed for a "snapshot", and a much more in-depth comparative database can be created and offered at a future time to compare optic categories, or other specified variables within the categories, side by side. (A rough roll-up of this weighting is sketched at the end of this post.)

Once the categories are decided on and weighted appropriately, a scoring rubric for each variable should be developed which would set the rationale behind each point value.
This is really, really important, I think, as anyone would then have a reference point for what a "10" score would be on a variable, and respectively for each integer value down to a "1" score. In cases where 1-10 is too fine a grading scale, perhaps it would make sense to score in increments of 2.5 (e.g. 2.5, 5, 7.5, 10) or a pass/fail score of 1 or 10. Such a scoring rubric would also, if properly implemented with quality control over reviewers, reduce the incidence of "Fudd reviews", in theory.
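As referenced above, here is a minimal sketch of how the weighted category breakdown could roll up into a 100-point overall score. The within-group weights follow the breakdown; treating the three assessment groups as equally weighted is my assumption, and the example scores are invented:

```python
# Component weights within each assessment group, per the breakdown above.
MECHANICAL = {"tracking": 0.25, "turret_movement": 0.25,
              "lock_features": 0.25, "reference_alignment": 0.25}
OPTICAL = {k: 1 / 7 for k in ["parallax", "resolution", "low_light",
                              "eye_box", "ca_control", "depth_of_field",
                              "contrast"]}
ERGONOMIC = {"turret_overall": 0.25, "mag_ring": 0.25,
             "diopter": 0.25, "finish": 0.25}

def group_score(scores_1_to_10, weights):
    """Weighted average of 1-10 component scores, scaled to 0-100."""
    return 10 * sum(scores_1_to_10[k] * w for k, w in weights.items())

def overall_score(mech, opt, ergo):
    """Equal-weight roll-up of the three assessment groups (an assumption)."""
    return (group_score(mech, MECHANICAL)
            + group_score(opt, OPTICAL)
            + group_score(ergo, ERGONOMIC)) / 3

# Invented example scores for a single review.
mech = {"tracking": 9, "turret_movement": 8, "lock_features": 7,
        "reference_alignment": 9}
opt = {k: 8 for k in OPTICAL}
ergo = {"turret_overall": 8, "mag_ring": 9, "diopter": 7, "finish": 8}
print(round(overall_score(mech, opt, ergo), 1))  # 80.8
```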
 
Yes, I think guys are getting it,

Viewing it as a snapshot is a great way to put it, but with that, we want the snapshot image we look at to include certain criteria. That is where the question started.

Going back to Amazon again, think about the visual star ratings we see today. You scan your search results and see the star rating under them. This is kinda like that: a quick reference for people who understand how star ratings work.

Are star ratings perfect? No, they are not, but they help. Can they be skewed? Sure, but anything can. Are they effective in giving us a snapshot of what others thought about the product we are interested in? Yes, I think they help.
 
@Leftie, I like how you proposed the weighting concept for all the criteria. I agree the next step is still finalizing the criteria and the definitions of each so they are easily and correctly consumed and utilized.

@lowlight, the Amazon-style high-level "quick ranking" star is a great idea. When we get this all built, that could be the highest-level view, similar to Amazon (see pic attached). For example, the Razor 4.5-27 may have a 4.6/5.0 star rating, which equals a 92/100 overall score. Having each of the 4 categories weighted at 25% of the overall score seems right on. By having the star score at the high level, it gives a quick scorecard view the user can then click into to see the detailed review. This star approach also wouldn't clutter the overall ranking of the scopes. Also, having the number of reviews submitted along with the star view is helpful.
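For the 4.6/5.0 = 92/100 example, the star view is just a linear rescaling of the 100-point score; a tiny sketch:

```python
def stars_from_score(score_out_of_100):
    """Linear mapping: 92/100 -> 4.6 stars, per the example above."""
    return round(score_out_of_100 / 20, 1)

def score_from_stars(stars):
    """Inverse mapping back to the 100-point scale."""
    return round(stars * 20, 1)

print(stars_from_score(92))    # 4.6
print(score_from_stars(4.6))   # 92.0
```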
 

But with hundreds or more contributing to a rating, the law of large numbers takes over.

The challenge is avoiding fanboys and Arken-style marketing campaigns.
 
It's not a volume thing per se; only a couple of people will take the time to review something. But take, for example, as noted above, Arken Optics.

If a new company appears and their surrogates come on to promote the scope without following the review criteria, we can dismiss them out of hand. The idea is to post the criteria in a separate sticky-style thread or article on the blog site (once the blog site is connected to the forum, soon I hope) and force people to follow it.

New Company Z scope: see the criteria, then post a review based on them. Hey, you want to shill? At least make them follow a format. If a company rep comes on and ranks the scope, we just see what the users say. If the users disagree with the shill in a big way, now you have ammunition to say, "No thanks."

We will never have 100 reviews of a scope, because the criteria cover more than 99% of end users will ever actually test.
 
@seansmd I think that @lowlight's idea of a 1-5 star rating would probably work more like @TangoSierra916's post above in practice. It's information transfer via different levels of specificity with a graphical representation:

Star Rating: immediate yes/no general impression on a 1-5 scale (each star might represent roughly 20 points of the 100-point scale). (High Level) - 15-second read at most

100-point scale: general impression based on the 100-point scale (each point weighted equally, as we are looking at the aggregate of all categories). (General Level) - 90-second read at most

100-point scale category breakdown: impression of each category's strengths/weaknesses based on the point values that make up the 100-point scale. (In-Depth Level) - 2-4 minute read at most

Individual category breakdown components: impression of each component within its respective category of the 100-point scale. (Technical Level / In the Weeds) - 5-minute read at most

To me, these aren't meant as separate rating systems, but rather as ways to view/filter information based upon the depth at which someone wants to interact with it:

If someone specifies to the system that they want "optics with a 4-star rating or above", that can be filtered instantly. If someone then specifies that they want an optic that scored an 85 or above on the 100-point scale, the filters become more refined, and so on... The really cool part about this is that if someone specifies that they want an optic with a very generous eyebox (however it's chosen to be scored), they can search for ALL optics that fit the criteria, and also filter based on star ratings or the 100-point scale, if a searchable database were eventually decided to be built into the interface.
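A hypothetical sketch of that filter-by-depth idea; the field names and data are made up purely for illustration:

```python
# Hypothetical records; field names and values are illustrative only.
scopes = [
    {"name": "Scope A", "stars": 4.6, "score": 92, "eyebox": 8},
    {"name": "Scope B", "stars": 4.1, "score": 83, "eyebox": 9},
    {"name": "Scope C", "stars": 3.8, "score": 77, "eyebox": 6},
]

# First pass: optics with a 4-star rating or above.
four_star = [s for s in scopes if s["stars"] >= 4.0]

# Refine: overall score of 85 or above on the 100-point scale.
refined = [s for s in four_star if s["score"] >= 85]

# Or search on a single component, e.g. a generous eye box, across all scopes.
generous_eyebox = [s for s in scopes if s["eyebox"] >= 8]

print([s["name"] for s in refined])          # ['Scope A']
print([s["name"] for s in generous_eyebox])  # ['Scope A', 'Scope B']
```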
 
I don't think we can get the overall score to 1-5 but I do think we can use the overall score the same way

The number is really not a big deal, 1-5, 1-10, or overall out of 100, as long as things are spelled out correctly.
 
Just wondering, what is the reference standard?
Is there one, or are there gradations of it?
A to B, and B to C, is not necessarily A to C.
The ratings should have both a subjective and an objective component to calculate a total score.
We have subjective biases.
 
Maybe also a scale for what the scope is best suited to: PRS obstacle shooting, prone long range, a trainer on a .22, etc. Just a thought, if it would be viable.
 
Imagine three scopes.
Geometric logic is A=B and B=C, therefore A=C.
But Scope A, Scope B, and Scope C should each be compared to some standard.
What is the standard? For a random example... Schmidt may be better than Nightforce, and ZCO may be better than USO, but that is not the same as comparing Schmidt to USO. The comparisons are relative to each other, not to a standard.
 
I think the tricky part of comparing the scope(s) to a standard is that there is no industry standard to compare to as far as scopes go. You could compare every scope to Tangent Theta, but then how do you review and score the TT? You could propose a standard for each criterion, but even that gets tough: if the standard for glass quality is, let's say, the ZCO 4-20, and I am reviewing the ATACR 7-35 but have never seen the ZCO, I can't review/score that criterion.

Having a standard for each criterion also gets tricky, but the definition of each criterion should help standardize how to review and what variables to take into account. How were you thinking the standard would work?
 
I don't think we can get the overall score to 1-5 but I do think we can use the overall score the same way

The number is really not a big deal, 1-5, 1-10, or overall out of 100, as long as things are spelled out correctly.
We should use 1-10 or out of 100 because it needs to be base 10 cause we are trying to create a better system. Make the rating system compatible with MIL's.

Not some bastardized 1-5 or 5 stars. 5 stars is the MOA of rating systems because sometimes there are half stars and sometimes not.

Or should we rate mil scopes on a base 10 system and moa scopes on a 1-5 system so that mil scopes will always have a greater overall score?
 
I wonder if Amazon, TripAdvisor, Rotten Tomatoes, Healthgrades, Uber, etc. had to use geometric logic or an objective gold standard before asking customers to rate and review their purchases. But there's still a helluva lot of reviews and reviewers, and people who purchase products and services based off those "subjective" reviews.

I still don't think everyone's getting the concept of a "customer review"...
 
What are 5 common families of scopes we see on here?
I don't think we can get the overall score to 1-5 but I do think we can use the overall score the same way

The number is really not a big deal, 1-5, 1-10, or overall out of 100, as long as things are spelled out correctly.

1-5 scale = 5 criteria to publish and grade by
1-10 scale = 10 criteria
Are all the evaluation categories that varied?

We decide the scale and then we can build the criteria.
 
I think we are making this more complex than it has to be. The use of standards or a precise algorithm to produce a score, I don't think, can be done for every criterion when we are talking scopes, because much of the scorecard is going to be perception based to an extent. Criteria like tracking and light transmission, to name a few, could have a standard of, let's say, 100%. So for those two, 100% is the standard: if a scope produces 100% light transmission, it gets 10/10 for that criterion. The question then becomes, and this is going to have to be broken down by each criterion, how do I as Joe Schmo truly measure a scope's transmission %? I agree that a standard would be nice, but to have one that's precise and holds weight may be tricky and not the direction you want to go for this task, or maybe it is...

As far as Amazon and other popular entities executing their reviews, many do not have standards; it's just all up to the reviewer (see picture from Amazon attached), but it comes down to the "what" they are allowing to be reviewed and how the question is asked. I personally feel that we don't want standards for the criteria unless it truly makes sense. But I also think criteria definitions and steps/ways to test the criteria would help with setting a standard in the review method. If you look at the Amazon review template I attached, it's for a bike helmet. For the criterion (rating feature) of light weight, there's no standard provided; it's totally a judgment call. With that approach, sure, maybe you get some outlying reviews where person A thinks 12 oz is lightweight so they give the rating 2/5 stars, but person B thinks 4 lbs is light so they give the 30 oz helmet a 5/5. With this and any scorecard you will get outlying reviews that get mixed in with the more common view.

Again though, I think the priority now is to get a finalized list of criteria so we can begin to define each, and then discuss these topics: do standards apply, how to score a specific criterion from 1-10 or 1-5, and how weighting should work. As @lowlight and others have mentioned, I agree it's important to spell things out as much as we can once we get the criteria documented.
 

We should use 1-10 or out of 100 because it needs to be base 10 cause we are trying to create a better system. Make the rating system compatible with MIL's.

Not some bastardized 1-5 or 5 stars. 5 stars is the MOA of rating systems because sometimes there are half stars and sometimes not.

Or should we rate mil scopes on a base 10 system and moa scopes on a 1-5 system so that mil scopes will always have a greater overall score?

Obviously mil-based scopes are always better :cool:, but for this I think they should always use the same scale, and MOA vs. mil doesn't matter outside of the spec sheet. Having each criterion scored 1-10, for example, just offers the potential for a wider/possibly more precise score vs. 1-5.
 
I meant it more as a joke; I wasn't being serious. Couldn't figure out a good way to get that across. I don't use emoji.
 
I said early on that people were over-complicating it.

Every day I hear bitching about repeated, often-asked questions coming around. It's common, but people then turn these into train wrecks because we've seen it too many times.

Here I try to simplify and visualize it a bit better, as well as try to apply a standard, and immediately they head for the weeds.

We have a basic model, I believe, so the next step is to skip the crowd and build it among the open and willing; just bypass the distractions.

There are some good ideas mixed in here; we just have to filter and compile them now. Lucky for us, we have time and very little better to do for the next month.

To the positive contributors: you guys get it. When it's done, things will work out right; we just have to take the reins and run with it.
 
The photography lens world has already given us the perfect template...
PentaxForums Lens Database
The layout of this database combines subjective opinions on a 1-10 scale with the manufacturer specs, and a way to let users and experts voice their opinions together (and separately).
I think what I like most about this database is that there are hard numbers and specs at the top, and as you scroll through the written portions you can see how users use the lenses and what they like most about them.
I know personally that this lens database took a long time to get to where there was enough data to be really useful. But as Lowlight said, we have the right community with a lot of extra time on its hands.

As for numbered subjective criteria for scopes, I would like to see slightly broader points that can be broken down in the submitted review.

fit/finish (packaging, build quality, accessories, coatings, documentation, ...)
turrets (feel, tracking, size, labels, ...)
glass mechanics (parallax, eyebox, FoV, ...)
glass characteristics (color, contrast, CA, edge sharpness, ...)

I feel if there are too many specific categories then it might put people off from submitting. Obviously this wouldn't mean someone couldn't go deeper into each section to get their point across.

My personal scope experience is very limited, so I am still learning what I want out of a shooting optic. However, the budget optic I ended up picking up, just so I could get out shooting, was based off of user reviews/experience on this forum. While I used a few other sources, Sniper's Hide had the most user feedback by far and let me feel comfortable that what I was purchasing would tide me over until (one day) a proper scope budget arises. If I had been able to put my price range into a database like the one above and see an aggregate of what scopes are even worth considering, I probably would have been shooting a month earlier.

Nonetheless, I would really like to see this move forward, since there isn't really anything like this out there.
 
@Nebulous, that's an awesome site that you mentioned, and the data that they have is well laid out and presented in a digestible manner - particularly the historical pricing for a few vendors, that's a really cool feature.

I tend to agree with you that the overall layout/template could be well suited to what's being discussed here, and it seems that learning from what they are doing right/what's applicable given Sniper's Hide's context and incorporating the relevant features from their template would substantially shorten the development cycle to a workable, useful database here.
 
Wow, this is a minefield of information; just trying to wade through three pages of thoughts and opinions, some good, some not so good. To be honest, I'm not going to weigh in and muddy the waters any more, as I think enough of the right people have said the right things (my apologies to the wrong people :LOL: ). I've been doing reviews for many years on the Hide, and it sprang out of my own incessant need to understand how certain scopes compare in certain conditions. I won't by any means say that my method of testing is flawless; I'm always looking at ways to improve my baselines and evaluate fairly. At the end of the day we all have our biases, and yes, they will sometimes influence our perception of the outcomes.

So let's get this show on the road. I have a March 5-42x56 I'm reviewing currently, and I have a Schmidt 5-45x56 coming on Wednesday. Two scopes that maybe fall outside the view of the PRS-focused crowd, but there may certainly be some crossover appeal. I will do my typical review (more blog style), but I'd also like to incorporate Frank's review criteria.

I think once we start seeing the reviews, it will make a lot more sense to most.
 
Fairness is the point; we know there is very little "fair" out there, so instead we deal with endless opinions.
This is spot on and I'm guilty of it myself.

Yes, I think guys are getting it,

Viewing it as a snapshot is a great way to put it, but with that, we want the snapshot image we look at to include certain criteria. That is where the question started.

Going back to Amazon again, think about the visual star ratings we see today. You scan your search results and see the star rating under them. This is kinda like that: a quick reference for people who understand how star ratings work.

Are star ratings perfect? No, they are not, but they help. Can they be skewed? Sure, but anything can. Are they effective in giving us a snapshot of what others thought about the product we are interested in? Yes, I think they help.

Obviously Amazon rules the roost (like it or not, they are the largest online retailer; how they got there is suspect, but that is beside the point for now). They know what they are doing (for the most part), and we can see "reviews" on most online products (not just Amazon) through this simple star system. The community needs to keep in mind that if you see two stars but only two reviews, that may not mean it's a bad scope; it may simply be that someone had a bad experience while another had a great experience. Over time the stars begin to average out, and you can toss out the "this is the best scope in the world" opinions and the "this is the worst scope in the world" opinions. If you see 20 reviews and the average is 4 stars, now you begin to see a better average and, dare I say, a more reliable rating (at least that is the goal).
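One hedged way to implement "toss out the best-in-the-world and worst-in-the-world opinions" is a simple trimmed mean; a small sketch:

```python
def trimmed_star_average(ratings, trim=1):
    """Average star ratings after dropping the top and bottom `trim` entries.

    With only a couple of reviews nothing is trimmed, mirroring the point
    above that a two-review average isn't telling you much yet.
    """
    ratings = sorted(ratings)
    if len(ratings) > 2 * trim + 1:
        ratings = ratings[trim:-trim]
    return sum(ratings) / len(ratings)

print(trimmed_star_average([1, 5]))                    # 3.0, not meaningful yet
print(trimmed_star_average([1, 4, 4, 4, 5, 4, 4, 5]))  # extremes dropped
```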
 
Whatever happens, it might be good to have a few good reviewers score a few different reference scopes as a baseline: @Covertnoob5, @wjm308, @koshkin, @BigJimFish, @lowlight, maybe a few others. (These are the ones that seem respected, off the top of my head. If I missed anybody, I'm sorry.)

Not sure on the scopes, but I'm thinking some that sit in different brackets - scopes I've seen recommended that seem to be quite common: a Schmidt PMII of some flavor, the Razor Gen II 4.5-27, the Nightforce NXS, scopes like that. Something common enough that you could go to a big-box store or a match and be able to find one. That way there could be a reference for explaining turret feel or mag ring feel or things like that.
 
Okay, I am compiling the information as we have it, and then I'll start something new; this thread is too hard to follow.

This link is almost perfect as to what I was thinking. I believe we have a baseline for criteria too; I just need a little time to boil it down.

I have to do it manually since this turned out so scattershot, but the information I am looking for is here; I just need to render it.
 
The very best part of the idea that I have read in this thread is to put less value, or at least a lower ranking, on the "glass".

I would love to see a preface on how to create a personal priority list for a scope purchase: why you might want to choose an illuminated scope if you're hunting with an FFP (seeing the crosshairs at min zoom easily), and dispel some of the myths, like that you "need" to see subtensions when you're on 3x or whatever, when in the same breath somebody says they need a duplex or SFP with a jacked-up reticle scale for hunting.
 
I don't know exactly how a scope's FOV tracks with magnification, but if it is roughly inversely proportional, why not multiply FOV by magnification? That way a 10x with 50' would get the same score as a 20x with 25'. If it isn't that simple, you could take the mean at the minimum, middle, and max power, so your score would be:
(minFOV x minMAG + midFOV x midMAG + maxFOV x maxMAG) / 3.
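A quick sketch of that averaged FOV-times-magnification score; the function name and sample values are mine:

```python
def fov_mag_score(points):
    """Mean of FOV x magnification at the sampled powers (min, middle, max),
    per the idea above. If FOV were exactly inverse to magnification, every
    product would be identical; a falling product at high power hints at tunneling."""
    return sum(fov * mag for fov, mag in points) / len(points)

# e.g. a 5-25x sampled at 5x, 15x, 25x (FOV in ft @ 100 yd)
print(fov_mag_score([(24.0, 5), (8.0, 15), (4.8, 25)]))  # (120 + 120 + 120) / 3 = 120.0
```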
 
I remember back in the day the guys over at SWFA Optics Talk had a Tier Category for optics. Not detailed like you are talking about. But it is a great idea.
 
As someone somewhat newer to LRS, I think this is a great idea and it can help out people getting into this deal.
 
Being new to the site, I can tell you I spent many, many, many hours on research, talking to shooters, and "playing with" different scopes to come to the final purchase of my new scope. This would be an amazing resource, IMHO.
 
Yes,

A quick reference card or page with the most important information at your fingertips,

Nightforce ATACR 5-25x
Reticle
Magnification
Size
Weight
Parallax Close Focus


We can front-load the important details, and then under the card:

Reviewer Score

Fit & Finish
Turret Feel
Click Spacing
Magnification Ring Movement
Eye Box
Mounting
Glass Quality
Light Transmission Dusk
Light Transmission Dawn
Reticle Design / Features
Overall Score

Stuff like that, understanding we'd have to explain it.
I like this exact format and list. It’s everything needed by the average guy looking for a scope.
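As one possible way to structure the card described above, here is a hypothetical record with the front-loaded specs and the reviewer score fields underneath; the field names and values are illustrative only, and the unweighted average is just a stand-in until weighting is settled:

```python
from dataclasses import dataclass, field

@dataclass
class ScopeCard:
    """Quick-reference card: factual specs up front, reviewer scores below.
    Field names follow the list above; values used below are placeholders."""
    model: str
    reticle: str
    magnification: str
    size_in: float
    weight_oz: float
    parallax_close_focus_yd: int
    scores: dict = field(default_factory=dict)  # criterion -> 1-10 score

    def overall(self):
        """Simple unweighted average until a weighting scheme is settled."""
        return round(sum(self.scores.values()) / len(self.scores), 1)

# Placeholder spec values, not actual figures.
card = ScopeCard(
    model="Nightforce ATACR 5-25x",
    reticle="(reticle)", magnification="5-25x", size_in=15.0,
    weight_oz=38.0, parallax_close_focus_yd=45,
    scores={"fit_finish": 9, "turret_feel": 9, "click_spacing": 9,
            "mag_ring": 8, "eye_box": 8, "mounting": 9, "glass": 9,
            "light_dusk": 8, "light_dawn": 8, "reticle": 9},
)
print(card.overall())  # 8.6
```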