Revisiting drop testing

Until someone is testing a statistically significant number of optics (hundreds minimum) in a fully controlled mechanical test with multiple simulated scenarios, drop tests are just a cruel game of RNG.
People think about the goal of the testing in the wrong way. It's meant to raise red flags about an optic model, not to definitively prove that all scopes of a line are bombproof/garbage. If you think about it that way, you can find value in it. Let's say someone thinks a Leupold VX6 has a lemon rate of 1/1,000. Well, two of them were just tested and both shit the bed. What's the likelihood the drop testers got two lemons in a row? (A quick sketch below this post puts numbers on it.) That person might then have to re-evaluate what they think the lemon rate is.

Only one Minox ZP5 has been tested, and that one passed. If I genuinely thought the drop testing cleared all scopes of a given model, I'd probably get a ZP5, because I hear they're also great to look through. To me, the drop testing is about raising red flags and establishing trends among optics manufacturers as to which ones are actively trying to make robust, durable scopes. Nothing more. A Nightforce can still lose its zero, I'm sure; it's more about getting an idea of the chance of that happening.
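To put numbers on the two-lemons-in-a-row question: assuming each scope independently has some per-unit lemon rate p, the chance that two sampled scopes both fail is p². A minimal sketch in Python; the rates below are illustrative assumptions, not measured failure data.

```python
# Chance that two independently sampled scopes are both lemons,
# under a few assumed per-unit lemon rates (illustrative values).

assumed_rates = [1 / 1000, 0.05, 0.50]

for p in assumed_rates:
    both_fail = p ** 2  # independence: P(both lemons) = p * p
    print(f"assumed lemon rate {p:.3%} -> P(2 lemons in a row) = {both_fail:.4%}")
```

At an assumed 1/1,000 rate, back-to-back failures are a one-in-a-million observation, which is exactly why seeing them should push someone to revisit that assumed rate.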
 
I don’t recommend dropping a ZP5, lol. It took three months to get mine back from service a few years ago, but I knew that going in when I purchased it (used). You are correct, though: the image is exceptional. And if you know how to deal with roughly 0.05 mils of lash, it will serve you well.

I also agree that repeat scope failures are a warning sign, but any tester trying to weed out an optic's failure to hold zero by dropping it and then shooting it should also dig deeper into root causes, and should expect criticism of the testing methods being used. To just blame the optic speaks of ignorance when the optic is only one part of a rifle system.
 
People think about the goal of the testing in the wrong way. It's meant to raise red flags about an optic model, not to definitively prove that all scopes of a line are bombproof/garbage. If you think about it that way, you can find value in it. Let's say someone thinks a Leupold VX6 has a lemon rate of 1/1,000. Well, two of them were just tested and both shit the bed. What's the likelihood the drop testers got two lemons in a row? That person might then have to re-evaluate what they think the lemon rate is.

Only one Minox ZP5 has been tested, and that one passed. If I genuinely thought the drop testing cleared all scopes of a given model, I'd probably get a ZP5, because I hear they're also great to look through. To me, the drop testing is about raising red flags and establishing trends among optics manufacturers as to which ones are actively trying to make robust, durable scopes. Nothing more. A Nightforce can still lose its zero, I'm sure; it's more about getting an idea of the chance of that happening.

If they received two quality scopes of any brand and both failed to hold zero, I would be more suspicious of their testing than of the scopes.

Basing assumed lemon rates on questionable testing doesn't seem like a smart way to look at it.
 
"Scope failures are a warning sign." Maybe, sort of; more so with new models, and with the caveat: what failed? Are they failing at the same point?

"Drop it on the turret." That little fine threaded interface between your fingers and the scope erector? You must be high as shit. 🤣🤣

Go bang the end of a bunch of bolts with a hammer and see which ones you can still get a nut on. Throw lawn darts into a crowd: was the guy who got hit the most likely to get hit? It's variable; the results are variable, i.e., based on luck.

If you're smashing your turrets into stuff all the time, they make scopes with capped adjustments, BTW.
 
People think about the goal of the testing in the wrong way. It's meant to raise red flags about an optic model, not to definitively prove that all scopes of a line are bombproof/garbage. If you think about it that way, you can find value in it. Let's say someone thinks a Leupold VX6 has a lemon rate of 1/1,000. Well, two of them were just tested and both shit the bed. What's the likelihood the drop testers got two lemons in a row? That person might then have to re-evaluate what they think the lemon rate is.
lol If you really want this testing to raise red flags, you need a statistically significant sample size. While two failures in a row sounds unlikely, it is just as likely to be a fluke as it is to indicate an actual issue without more data. Anyone trying to draw a conclusion with confidence from such a small sample size would have their brain broken by a basic Stats class.
 
lol If you really want this testing to raise red flags, you need a statistically significant sample size.
Not really, no. That's kind of the point. If you wanted to label an optic model 100% zero-shiftless then you'd need a lot of samples but you don't need a lot of samples to raise red flags.
While two failures in a row sounds unlikely, it is just as likely to be a fluke as it is to indicate an actual issue without more data
It's only as likely to be a fluke if you are assuming the actual failure rate is 50%, which seems kind of high? If you're assuming it's, say, 5%, then two fails in a row would be a 0.25% chance. So no, it's not "just as likely".
Anyone trying to draw a conclusion with confidence from such a small sample size would have their brain broken by a basic Stats class.
I happen to have taken quite a bit of stats.
Basing assumed lemon rates on questionable testing doesn't seem like a smart way to look at it.
That's fine to think, but I don't really know what else we'd base it on. We can't even really base it on Leupold's RMA data, because what percentage of people shoot well enough and are knowledgeable enough to even notice a zero shift?
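One way to formalize "re-evaluate what they think the lemon rate is," and the 5%-vs-50% arithmetic above, is a Bayesian update on the lemon rate. A minimal sketch, assuming a Beta prior; the prior parameters are illustrative choices, not anyone's published numbers.

```python
# Updating an assumed lemon rate after seeing 2 of 2 tested scopes fail.
# With a Beta(a, b) prior on the rate, the posterior after f failures
# and s passes is Beta(a + f, b + s). All parameters are illustrative.

def posterior_mean(a: float, b: float, fails: int, passes: int) -> float:
    """Posterior mean of the lemon rate under a Beta(a, b) prior."""
    return (a + fails) / (a + b + fails + passes)

# Two priors centered on a 1-in-1,000 lemon rate: one held with high
# confidence (large a + b), one held loosely (small a + b).
for a, b in [(1.0, 999.0), (0.01, 9.99)]:
    before = posterior_mean(a, b, 0, 0)
    after = posterior_mean(a, b, 2, 0)
    print(f"Beta({a}, {b}): prior {before:.2%} -> after 2/2 failures {after:.2%}")
```

Either way the estimate moves up; how far depends on how much evidence the 1/1,000 belief was resting on in the first place.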
 
lol If you really want this testing to raise red flags, you need a statistically significant sample size. While two failures in a row sounds unlikely, it is just as likely to be a fluke as it is to indicate an actual issue without more data. Anyone trying to draw a conclusion with confidence from such a small sample size would have their brain broken by a basic Stats class.
Most military optics trials I've been privy to among units involve 10-30 optics. I'd argue that while better, it's still insignificant considering the thousands upon thousands that will be made (the sketch below puts a number on that). Some companies consistently get it right, some don't. Look at a company's track record over 20 years or so.
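To quantify what a clean small trial can and can't establish: if all n units pass, the largest true failure rate still consistent with that result at 95% confidence solves (1 - p)^n = 0.05. A minimal sketch using the trial sizes mentioned above; the 95% level is a conventional choice, not anything these programs necessarily use.

```python
# One-sided 95% upper confidence bound on the true failure rate when a
# trial of n optics sees zero failures: solve (1 - p)**n = alpha for p.

def zero_failure_upper_bound(n: int, alpha: float = 0.05) -> float:
    """Largest failure rate consistent with 0 failures in n trials."""
    return 1 - alpha ** (1 / n)

for n in (10, 30, 300):
    bound = zero_failure_upper_bound(n)
    print(f"{n:>3} units, 0 failures -> true rate could still be ~{bound:.1%}")
```

A flawless 30-unit trial only bounds the fleet-wide failure rate below roughly 10%: small samples are loud when something fails, but quiet about how good a passing line really is.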
 
Not really, no. That's kind of the point. If you wanted to label an optic model 100% zero-shiftless then you'd need a lot of samples but you don't need a lot of samples to raise red flags.

It's only as likely to be a fluke if you are assuming the actual failure rate is 50%, which seems kind of high? If you're assuming it's, say, 5%, then two fails in a row would be a 0.25% chance. So no, it's not "just as likely".

I happen to have taken quite a bit of stats.

That's fine to think, but I don't really know what else we'd base it on. We can't even really base it on Leupold's RMA data, because what percentage of people shoot well enough and are knowledgeable enough to even notice a zero shift?
Not many. Most people are completely oblivious to the Razor HD2 1-6 opening up after 3k rounds or so. Those who do notice are typically the types who buy ammo by lot number, or work for companies that do, and meticulously track things.
 
 
I'll respond one more time on all this and then move along. It's obvious we're not getting anywhere.

I did come here with genuine curiosity. I thought there were some folks who had actually looked at what was being done in those evals, understood the "how and why" of the method as described by those performing it, and had some reasoned, logical arguments for why the evals are bullshit. The evals make sense to my small brain, and I was hoping for some clarity.

Not one single person on this thread has presented a reasoned argument against the validity of those evals. Straw men, ad hominems, appeals to authority, and an impressive number of other logical fallacies have been trotted out. I'm not asking for a white paper, but a bit of logical progression in an argument would sure be nice.

"Drop tests are awesome, prove them wrong" is explicitly not what I'm doing here, and claiming it doesn't make it so. @koshkin, you've made multiple statements/arguments that are factually incorrect about these tests. When asked/called on it, you respond with essentially, "I can't be bothered to worry about whether my arguments are based in fact or not, since I consider this to be a silly thing not worthy of my time."

You guys are right, my curiosity about your perspective is rapidly evaporating. It's pretty clear that your position here is not based on actually having counterarguments or getting the facts of the matter correct. It's not hurt feelings on my part; I was honestly expecting a bit more in the way of ball-busting. It's disappointment in the fact that you refuse to engage with the facts and just get louder and more dismissive when I try to bring it back to them. That's leftie behavior, and not what I expected to find here.

I'll take the L, and move on.

You can't reason with stupid. This post is proof.