Revisiting drop testing

Until someone is testing a statistically significant number of optics (hundreds minimum) in a fully controlled mechanical test with multiple simulated scenarios, drop tests are just a cruel game of RNG.
People think about the goal of the testing in the wrong way. It's meant to raise red flags about an optic model, not definitively prove all scopes of a line are bombproof/garbage. If you think about it that way, you can find value in it. Let's say someone thinks a Leupold VX6 has a lemon rate of 1/1,000. Well, two of them were just tested and both shit the bed. What's the likelihood the droptesters got two lemons in a row? That person might then have to re-evaluate what they think the lemon rate is.
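The two-lemons-in-a-row argument can be made concrete with a quick probability sketch. The rates below are illustrative assumptions (the 1/1,000 figure is the hypothetical from the post), and it assumes the tested samples fail independently:

```python
# Illustrative sketch with assumed numbers: if a scope model's true lemon
# rate is p, and samples fail independently, the chance that two randomly
# chosen scopes are both lemons is p squared.
def p_two_lemons(p: float) -> float:
    """Probability that two independently sampled scopes are both lemons."""
    return p * p

# At the assumed 1-in-1,000 rate, back-to-back failures are roughly a
# one-in-a-million event -- seeing them should make you doubt the rate.
for rate in (1 / 1000, 1 / 100, 1 / 10):
    print(f"lemon rate {rate:.3f}: both fail with probability {p_two_lemons(rate):.2e}")
```

The point being: the more surprising the observed result is under your assumed lemon rate, the more that assumed rate deserves a second look.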

Only one Minox ZP5 has been tested and that one passed. If I genuinely thought the droptesting cleared all scopes of a certain model, I'd probably get a ZP5 because I hear they're also great to look through. To me, the droptesting is about raising red flags and establishing trends among optics manufacturers as far as which ones are actively trying to make robust/durable scopes. Nothing more. A Nightforce can still lose its zero I'm sure, it's more about getting an idea for the chance of that happening.
 
I don’t recommend dropping a ZP5 lol. It took three months to get mine back a few years ago, but I knew that when I purchased it (used). You are correct, though, the image is exceptional. And if you know how to deal with roughly 0.05 mils of lash, it will serve you well.

I also agree that repeat scope failures are a warning sign, but any tester trying to weed out optics that fail to hold zero by dropping and then shooting them should also dig deeper into root causes, and they should expect criticism of the testing methods being used. To just blame the optic speaks of ignorance, when the optic is just one part of a rifle system.
 
People think about the goal of the testing in the wrong way. It's meant to raise red flags about an optic model, not definitively prove all scopes of a line are bombproof/garbage. If you think about it that way, you can find value in it. Let's say someone thinks a Leupold VX6 has a lemon rate of 1/1,000. Well, two of them were just tested and both shit the bed. What's the likelihood the droptesters got two lemons in a row? That person might then have to re-evaluate what they think the lemon rate is.

Only one Minox ZP5 has been tested and that one passed. If I genuinely thought the droptesting cleared all scopes of a certain model, I'd probably get a ZP5 because I hear they're also great to look through. To me, the droptesting is about raising red flags and establishing trends among optics manufacturers as far as which ones are actively trying to make robust/durable scopes. Nothing more. A Nightforce can still lose its zero I'm sure, it's more about getting an idea for the chance of that happening.

If they received two quality scopes of any brand and both failed to hold zero, I would be more suspicious of their testing than of the scopes.

Basing assumed lemon rates on questionable testing doesn't seem like a smart way to look at it.
 
"Scope failures are a warning sign." Maybe, sort of; more so with new models, and with the caveat: What failed? Are they failing at the same point?

"Drop it on the turret." That little fine-threaded interface between your fingers and the scope erector? You must be high as shit. 🤣🤣

Go bang the end of a bunch of bolts with a hammer and see which ones you can still get a nut on. Throw lawn darts into a crowd. Was the guy who got hit the most likely to get hit? It's variable; the results are variable, i.e., based on luck.

If you're smashing your turrets into stuff all the time, they make scopes with capped adjustments, BTW.
 
People think about the goal of the testing in the wrong way. It's meant to raise red flags about an optic model, not definitively prove all scopes of a line are bombproof/garbage. If you think about it that way, you can find value in it. Let's say someone thinks a Leupold VX6 has a lemon rate of 1/1,000. Well, two of them were just tested and both shit the bed. What's the likelihood the droptesters got two lemons in a row? That person might then have to re-evaluate what they think the lemon rate is.
lol If you really want this testing to raise red flags, you need a statistically significant sample size. While two failures in a row sounds unlikely, it is just as likely to be a fluke as it is to indicate an actual issue without more data. Anyone trying to draw a conclusion with confidence from such a small sample size would have their brain broken by a basic Stats class.
 
lol If you really want this testing to raise red flags, you need a statistically significant sample size.
Not really, no. That's kind of the point. If you wanted to label an optic model 100% zero-shiftless then you'd need a lot of samples but you don't need a lot of samples to raise red flags.
While two failures in a row sounds unlikely, it is just as likely to be a fluke as it is to indicate an actual issue without more data.
It's only as likely to be a fluke if you are assuming the actual failure rate is 50%, which seems kind of high? If you're assuming it's, say, 5% then two fails in a row would be a 0.25% chance. So no, it's not "just as likely".
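To spell out the arithmetic in that reply (same assumed rates as above, and again assuming independent failures):

```python
# The "just as likely a fluke" claim only holds if you assume a coin-flip
# (50%) failure rate; at more realistic assumed rates, two consecutive
# failures become rare events.
for assumed_rate in (0.50, 0.05):
    both_fail = assumed_rate ** 2  # independent failures
    print(f"assumed rate {assumed_rate:.0%}: two fails in a row = {both_fail:.2%}")
# At 50% the chance is 25%; at the 5% rate from the post it is 0.25%.
```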
Anyone trying to draw a conclusion with confidence from such a small sample size would have their brain broken by a basic Stats class.
I happen to have taken quite a bit of stats.
Basing assumed lemon rates on questionable testing doesn't seem like a smart way to look at it.
That's fine to think, but I don't really know what else we'd base it on. We can't even really base it on Leupold's RMA data, because what percentage of people shoot well enough and are knowledgeable enough to even notice a zero shift?
 
lol If you really want this testing to raise red flags, you need a statistically significant sample size. While two failures in a row sounds unlikely, it is just as likely to be a fluke as it is to indicate an actual issue without more data. Anyone trying to draw a conclusion with confidence from such a small sample size would have their brain broken by a basic Stats class.
Most military optics trials I've been privy to among units involve 10-30 optics. I'd argue that while better, it's still insignificant, considering the thousands upon thousands that will be made. Some companies consistently get it right, some don't. Look at a company's track record over 20 years or so.
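One standard way to see why even 10-30 samples stays "insignificant" is the rule of three. This is a textbook approximation, not anything from the trials themselves: if all n tested units pass, an approximate 95% upper confidence bound on the true failure rate is about 3/n.

```python
# Rule-of-three sketch: zero failures observed in n trials still leaves an
# approximate 95% upper confidence bound of 3/n on the true failure rate.
def rule_of_three(n: int) -> float:
    return 3 / n

for n in (10, 30, 300):
    print(f"n={n:>3}: true failure rate could still be as high as ~{rule_of_three(n):.1%}")
# Even 30 clean samples only bound the failure rate below roughly 10%.
```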
 
Not really, no. That's kind of the point. If you wanted to label an optic model 100% zero-shiftless then you'd need a lot of samples but you don't need a lot of samples to raise red flags.

It's only as likely to be a fluke if you are assuming the actual failure rate is 50%, which seems kind of high? If you're assuming it's, say, 5% then two fails in a row would be a 0.25% chance. So no, it's not "just as likely".

I happen to have taken quite a bit of stats.

That's fine to think, but I don't really know what else we'd base it on. We can't even really base it on Leupold's RMA data, because what percentage of people shoot well enough and are knowledgeable enough to even notice a zero shift?
Not many. Most people are completely oblivious to the Razor HD2 1-6 opening up after 3k rounds or so. Those who are tend to be the types who buy ammo by lot #, or work for companies that do, and meticulously track things.
 
 
I'll respond one more time on all this and then move along. It's obvious we're not getting anywhere.

I did come here with genuine curiosity. I thought there were some folks who had actually looked at what was being done in those evals, understood the "how and why" of the method as described by those performing it, and had some reasoned, logical arguments for why they're bullshit. They make sense to my small brain, and I was hoping for some clarity.

Not one single person on this thread has presented a reasoned argument against the validity of those evals. Straw men, ad hominems, appeals to authority, and an impressive number of other logical fallacies have been trotted out. I'm not asking for a white paper, but a bit of logical progression in an argument would sure be nice.

"Drop tests are awesome, prove them wrong" is explicitly not what I'm doing here, and claiming it doesn't make it so. @koshkin, you've made multiple statements/arguments that are factually incorrect about these tests. When asked/called on it, you respond with essentially, "I can't be bothered to worry about whether my arguments are based in fact or not, since I consider this to be a silly thing not worthy of my time."

You guys are right, my curiosity about your perspective is rapidly evaporating. It's pretty clear that your position here is not based on actually having counter arguments or getting the facts of the matter correct. It's not hurt feelings on my part, I was honestly expecting a bit more in the way of ball-busting. It's disappointment in the fact that you refuse to engage with the facts and just get louder and more dismissive when I try to bring it back to that. That's leftie behavior, and not what I expected to find here.

I'll take the L, and move on.
You can't reason with stupid. This post is proof.
 
I'll respond one more time on all this and then move along. It's obvious we're not getting anywhere.

I did come here with genuine curiosity. I thought there were some folks who had actually looked at what was being done in those evals, understood the "how and why" of the method as described by those performing it, and had some reasoned, logical arguments for why they're bullshit. They make sense to my small brain, and I was hoping for some clarity.

Not one single person on this thread has presented a reasoned argument against the validity of those evals. Straw men, ad hominems, appeals to authority, and an impressive number of other logical fallacies have been trotted out. I'm not asking for a white paper, but a bit of logical progression in an argument would sure be nice.

"Drop tests are awesome, prove them wrong" is explicitly not what I'm doing here, and claiming it doesn't make it so. @koshkin, you've made multiple statements/arguments that are factually incorrect about these tests. When asked/called on it, you respond with essentially, "I can't be bothered to worry about whether my arguments are based in fact or not, since I consider this to be a silly thing not worthy of my time."

You guys are right, my curiosity about your perspective is rapidly evaporating. It's pretty clear that your position here is not based on actually having counter arguments or getting the facts of the matter correct. It's not hurt feelings on my part, I was honestly expecting a bit more in the way of ball-busting. It's disappointment in the fact that you refuse to engage with the facts and just get louder and more dismissive when I try to bring it back to that. That's leftie behavior, and not what I expected to find here.

I'll take the L, and move on.
Not sure that those tests will ever be proven or disproven. The sample size, time, and money it would take pretty much make that an impossibility, and there's no financial gain for anyone in "disproving" it. There are a lot of fan boys on this site for a particular brand, just like there are on any site. They get a little butt hurt if something they like doesn't pan out; on the flip side there are a lot of objective minds, top-tier smiths, and industry folks with a wealth of information to share. There are also a lot of "dog piles" that happen when a dissenting opinion crops up. One thing to understand is there is a LOT of knowledge here, but you just have to flow with the BS sometimes and ignore the resident goobers that seem to want to comment on every post with meaningless drivel.

90% of the members here are PRS focused, so think heavy-ass rigs in low-recoiling calibers. A scope needs to track, but zero retention is rarely an issue because of the gear used. PRS is hardly a sport that really puts an optic to the test as far as zero retention goes. If you posted this over on 6mmBR or rimfire central you would get the same response. It's why the ARKEN/DNT/Athlon scopes have taken off. They have decent glass for the $$ and seem to track OK. Will they hold zero under severe abuse? Probably not; I've only been around 5 or 6 and have seen two shit the bed, but those two failures never would have happened had they been used in a more controlled environment like a rimfire PRS match. They have gained loyal users in that circle; they work in that environment just fine for the most part, and it's not a life-or-death situation, to be honest, if they do break. Most of the rifles in PRS disciplines go from safe, to a case, to the match, then back again. They aren't lightweight hunting rifles in larger calibers hauled up a mountain or strapped to a horse, SxS, etc., so it's just not that big of an issue. When that rifle does travel, it's inside a padded case.
Your average week-long goat/sheep/elk hunt would be more abusive on a scope than a PRS rifle would see in its lifetime, outside of just dialing. By that same token, your average goat/sheep/elk hunter doesn't shoot well enough or far enough to even notice if his zero wanders a bit. 250 yards and minute-of-large-four-legged-herbivore is plenty of precision; the other 10% probably notice when their Leupold or Swarovski goes awry.

Personally, I don't buy into the notion that the tests over on ROKSLIDE are bunk, as some have said. I've personally witnessed enough optics with wandering zeros after hard use that seem to line up with their data on particular brands that routinely fail the test. I'm also not 100% certain they are always accurate, either, because there are just too many variables with rifle weights, scope weights, torque wrenches, etc., but I have neither the knowledge, time, nor patience to test them out for myself, nor does 99.99% of anyone else on this site.
 
99.9999% of "hunting optics" don't get used in their lifetime as much as most scopes in one PRS match, and that's just the match, not practice and prep.

There are some silly assertions in this thread. But to put forth that PRS is easy on scopes and that zero retention isn't an issue on courses with 800y+ targets is pushing the ragged edge of just saying dumb stuff, or living in a fantasy world.