Revisiting drop testing

Eric1115
I wanted to start a new thread to avoid derailing @supercorndogs' Razor LHT thread.

I'll start by restating the question I asked there (in response to a post by @koshkin stating that small zero shifts in the scope itself almost never occur).

I hesitate to open this can of worms, but I'm really interested in your opinion on a specific aspect of this. I haven't seen the proofing setup for the infamous Rokslide evals addressed (though it's tough to do an exhaustive search of all the conversations that happened a couple of years ago when this was the hot topic of discussion). When they have a rifle bonded to a chassis that reliably passes the evals with NF, SWFA, certain Maven and multiple Trijicon models (and a few others like older fixed Leupolds, the LRHS, S&B, etc.), wouldn't that seem to indicate that a scope can be built to withstand that kind of impact, and that the eval does indeed reliably test for that level of robustness?

If multiple samples of a Leupold or Vortex scope get mounted in the same manner and they all exhibit zero shifts on those same tests, how does the "uncontrolled variables" argument hold when there are several scope models that reliably pass under the same conditions?

Worded another way, it seems to me like scope designs tend to be either around a 2-3 on an arbitrary 0-10 reliability scale or around an 8-9, and the "uncontrolled" nature of the RS drop eval might cause some 5-6 scopes to pass or fail depending on how the rifle bounces on any particular drop. Scopes that pass seem to consistently pass and ones that fail seem to consistently fail. Is the argument that there are more scopes in the 6-7 range than I think, and that 6-7 is good enough?

There have been a couple of recent interviews I've seen (Aaron Davidson on Cliff Gray's podcast, for example) that have revisited the topic and discuss the evals with perspectives similar to yours, but again they don't address the setups that consistently pass the test. Aaron doesn't appear to be a guy who underthinks things, and neither do you (though please don't take that as me equating you with him). I'm really not trying for a "gotcha" moment here, I'm genuinely interested in your expertise. Your commentary on the topic has been interesting for me to read (along with @Glassaholic and some others) and I'd love to hear your perspective on this.

His response was as follows:
I have been out of the loop with this for a bit, so I have not looked at that silly nonsense for a little while.

I suspect that which scopes pass the alleged test and which do not mostly comes down to the preferences of the people doing the testing. The one observation I'll make is that with a few scopes from different manufacturers that they tested, they claimed one brand passed every time and another failed every time. The problem is that with a couple of those, I know for a fact that on the inside it is the same exact scope except with different branding. If one fails and the other does not... we are either dealing with sample variation or dishonesty.

Aside from that, I have made several attempts over the years to track down the issues with scopes that "allegedly" were shifting zero. I even offered to do the investigation pro-bono for people who believe they experience that problem. Exactly zero people took me up on that.

The few times I have been able to investigate someone else's scope (for people I know well or who are near me geographically), it was almost always something with the mount, although rifle bedding also played a role.

I even invited Scott Parks over to join me for a livestream and to discuss different failure modes.



While there are possibly riflescopes out there that do have small zero shifts, most of the time it will come down to improper mounting (not always, probably, but sufficiently seldom that I have not been able to track it down).

Small zero shifts are generally not consistent with how most riflescopes are designed. They usually either work or fail catastrophically.

As far as internet claims of all sorts go, I have sorta resigned myself to the fact that people lie. A lot. I do not know why.

A few years ago, I did a mini investigation where a gentleman I knew relatively well claimed that he had had at least one product from every product line at Vortex fail on him at least once, and he finally gave up on Vortex. He claimed that he would get a replacement product, sell it and move on to something else. After some investigation, it turned out it was one product, a while back; Vortex did replace it and he did sell it. Somehow, it was like a fishing story. Every time he told the story, it was a larger number of products failing.

There is a Youtuber who claims that he had a particular company's product that was absolute garbage, so he sent it in and the replacement was much better. Cool story. He did two videos on it. Since I knew his name and the company, I dug into it. He sent a scope in. They cleaned a massive oil smudge off of the front of the objective lens. Did their usual QC checks on the collimator and sent it back to him. The Youtuber did get two videos out of it sounding very authoritative (every once in a while I check in on one of his videos to see if he learned anything about optics. Nope. Not a bloody thing. He still treats them like a video game).

I have a few more stories like this and they all end up the same way. I feel like the main character from House MD, who is always convinced that his patients are lying and is mostly right about it.

Every time I try to track this down, I find either bullshit or incompetence or some weird shenanigans or a combination of all three.

Does that mean that all reports of scopes shifting zero are bullshit? Not at all. There is only so much that I can investigate given the bandwidth that I have. However, until I can get some reasonable data otherwise, I am going to stick with what I know based on hundreds, if not thousands, of different scopes I have seen over the years and based on the fact that one of the things I do for my day job is build riflescope testers for riflescope manufacturers.

I will add that unless something really interesting pops up, I am done trying to investigate this. I have only so much time to spend on this.

ILya





That observation about the same scope with different badging is an error. The 3-15 Tract was tested but not the 2.5-15 that is the "same scope" as the Maven RS1.2. That Tract has not been tested.

I listened to the conversation with Scott, and you repeated the "no way to know if the shifts are coming from barrel, bedding, mounting, etc." perspective. I am saying that if the rifle holds POI through a dozen drops from 3' with scope after scope from NF, SWFA, Trijicon, etc., then to me that does demonstrate that the "test bed" is not the source of the shift and that scopes can absolutely be designed and built to withstand the eval. If I'm missing some crucial part of this, please help me see it.

It also tracks with my personal experience. When I was a Leupold, Vortex, etc. guy, it was essentially a given that at the beginning of the season, and a couple of times throughout the season, I'd need to make a little adjustment to my zero. As more and more of those scopes got replaced with the LRHS, SWFA, etc., that magically went away. I learned to mount scopes correctly in that time as well, but the old scopes went in good mounts and still needed adjustments. No more issues now; I adjust zero when I change bullets or powder, but that is it. Period.

Beating the objective bell on a carpeted floor might be indicative of some aspect of reliability, but it seems to me that the eval shows that when a scope lands on a turret with the rifle attached (something the mil test table with the swinging hammer doesn't test for, but a way more likely thing to encounter in the field), zero shifts can and do occur.

Is the position here that Form, Ryan, and everyone involved is just lying about this whole thing? There are a few other members who have replicated the tests, exposed weaknesses in their mounting setups/processes, then corrected them and had the same results as the "official" ones.

Please help me understand why so many on SH are so dismissive of these evals. It seems like much of the criticism has been addressed/explained.
 
The only scope that I have seen that had a small zero shift was a Burris XTR 4-16 that Burris said had a loose reticle.

Different lots of bullets and different lots of ammo might shoot to a different POI. I hardly ever adjust zero for reasons other than bullet, load or scope changes.

I will honestly say I have not even looked at their drop tests, so I don't know the kind of shifts they claim. I don't know how reliably they can shoot the same zero, or what their ammo/systems are capable of, when you start comparing small samples and making reliable assertions. But there are not many people online I trust to do that. Most of them publish books.🤣🤣

You can never discount the fact that people will flat out lie to just "prove" you're wrong and they're right.
 
Small “shifts” in their zeros are more than likely due to small sample sizes.
Can you expand on this? What is it about the way the evals are done that indicates that to you?

I believe this is pretty well controlled for, if I understand you correctly. The rifle gets a 20+ round group to establish true group size, is zeroed with a 10-round group, and a shift must land outside the 20-round cone to count. With a lot of the scopes, the shifts are a full mil or more on a rifle that shoots 20 shots into a .5 mil group.
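If it helps to picture that criterion, here's a rough sketch of the idea in code. The function names and the use of the worst baseline shot as the "cone" radius are just my own illustration, not Rokslide's actual math; the point is only that the pass/fail line comes from a 20+ shot baseline rather than a 3-shot group.

```python
import math

def mean_poi(shots):
    """Center of a list of (x, y) impacts, in whatever units the target is measured in."""
    xs, ys = zip(*shots)
    return sum(xs) / len(xs), sum(ys) / len(ys)

def cone_radius(baseline_shots):
    """Radius of the 'cone': the farthest any baseline shot landed from the group center.
    (My stand-in for the criterion; the actual eval may define the cone differently.)"""
    cx, cy = mean_poi(baseline_shots)
    return max(math.hypot(x - cx, y - cy) for x, y in baseline_shots)

def counts_as_shift(baseline_shots, check_shot):
    """True if a post-drop check shot lands farther from the baseline center
    than any shot in the baseline group did."""
    cx, cy = mean_poi(baseline_shots)
    return math.hypot(check_shot[0] - cx, check_shot[1] - cy) > cone_radius(baseline_shots)

# Example (units are mils on target): a ~0.5 mil baseline cluster vs. a check shot ~1 mil out.
baseline = [(0.1, 0.2), (-0.2, 0.0), (0.0, -0.2), (0.2, -0.1), (-0.1, 0.1)]
print(counts_as_shift(baseline, (1.0, 0.3)))  # True -> called a zero shift
```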
 
I am only basing it on my experiences. I have shot many a tight group with rifles at one sitting, then moved around and shot tight groups later with the same rifles that were not in the exact same locations as the first group. Was it from recoil management due to my position behind the rifle, was it lighting, was it wind, barrel temperature, etc. or was it just statistical variation that occurs with a small number of shots? That’s what I meant. Not knocking the evals because I wasn’t there, but I am always leery of anyone who tests equipment and has a financial incentive through advertising to have a bias for or against a brand.


Also, just an aside, what responsible hunter would ever drop their rifle from several feet onto the scope and not check their zero before continuing a hunt? The mount and its interface with the rifle would be my first suspicion if groups moved substantially after a hard drop.
 
The only scope that I have seen that had a small zero shift was a Burris XTR 4-16 that Burris said had a loose reticle.

Different lots of bullets and different lots of ammo might shoot to a different POI. I hardly ever adjust zero for reasons other than bullet, load or scope changes.
They for sure do have different POIs, but it's controlled for. The zero, test, and long-term test all use the same lot of ammo, and any time a new lot is used there's a new 20-30 round test group to establish the cone that a shot has to land outside of to count as being off.
I will honestly say I have not even looked at their drop tests, so I don't know the kind of shifts they claim. I don't know how reliably they can shoot the same zero, or what their ammo/systems are capable of, when you start comparing small samples and making reliable assertions. But there are not many people online I trust to do that. Most of them publish books.🤣🤣
The short version of the program is that the test rifle, bonded to the chassis, is set up with an NF or SWFA (or a handful of other known-good scopes). A 20-30 round group gets shot to establish the cone/precision of that lot of ammo. The rifle gets zeroed with a 10-round group, then dropped on a padded shooting mat over dirt or snow: one drop then one shot from 18" on each side and the top (3 drops and 3 shots), then the same from 36". The last shot (#7) happens after 3 drops on each side from 36" off the ground. If it passes all that with no major issues, it rides in a truck for a couple of months with periodic zero checks using the same lot of ammo.
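To make the drop and shot counts explicit, here's that sequence written out as plain data, as I read it (my paraphrase of their write-ups, not an official protocol document; heights in inches):

```python
# Each stage lists the drops that precede one check shot.
sides = ["left", "right", "top/turrets"]

stages = (
    [{"drops": [(18, s)], "check_shot": i + 1} for i, s in enumerate(sides)]    # shots 1-3
    + [{"drops": [(36, s)], "check_shot": i + 4} for i, s in enumerate(sides)]  # shots 4-6
    + [{"drops": [(36, s) for s in sides for _ in range(3)], "check_shot": 7}]  # 9 drops, then shot 7
)

total_drops = sum(len(stage["drops"]) for stage in stages)
print(f"{total_drops} drops, {len(stages)} check shots")  # 15 drops, 7 check shots
```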

For sure, some guys have tried to replicate this with scopes that regularly pass the eval and had zero shifts happen due to mounting issues, bedding, etc., but the test rifle used for the "official" drop evals is ridiculously consistent: it passes this test with most NF, SWFA, Trijicon, and a few others, and just as reliably, a lot of others will not complete the eval without moderate to major POI shifts from those drops.
You can never discount the fact that people will flat out lie to just "prove" you're wrong and they're right.
That is a fair point, but I've never seen anyone actually do something to demonstrate that what he's doing, along with a few others including Ryan (one of Rokslide's owners), is bullshit. They have put out offers for people (skeptics) to go do the eval with them, video camera rolling uncut, with some modest gear/travel compensation if the scope they're claiming was somehow a fabricated failure actually ends up passing. I was super skeptical of the tests too, but the more I looked at them the more valid they seemed.
 
It's pretty retarded to drop test optics unless you are getting paid to do it. First, you would need a decent sample size (say 6 of each model) to rule out sample variance. Virtually every optic will have issues when you drop it. From the cheapest shit to the most expensive optics.

I have had to send back almost every brand of optic for repair/replacement, from huge drops that dented the objective (and it still tracked) to an 8-inch tip-over that knocked it out of zero. Rifle scopes are designed to survive heavy-recoiling rifles. None are designed to be dropped on the turrets or rolled down a hillside.

These tests are worthless and tell us nothing of value.
 
Idk why they don't use a collimator, because that will give you the clearest data on whether the reticle shifts. Zero it to the collimator, mount it on the test rifle, do the drops, take the scope off and put it on the collimator. I usually default to the mounting system or one of the many other threaded points being the failure mode of zero shifts.
 
I am only basing it on my experiences. I have shot many a tight group with rifles at one sitting, then moved around and shot tight groups later with the same rifles that were not in the exact same locations as the first group. Was it from recoil management due to my position behind the rifle, was it lighting, was it wind, barrel temperature, etc. or was it just statistical variation that occurs with a small number of shots? That’s what I meant. Not knocking the evals because I wasn’t there, but I am always leery of anyone who tests equipment and has a financial incentive through advertising to have a bias for or against a brand.

Gotcha. I've had that happen as well, and as far as I can tell it usually is a sample size issue. If they were using a 3-round zero and then calling a .5" difference a shift, I'd be right there with you. But starting with a 1.5" 20-shot group and observing shifts of 2-3" is not that.
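To put those numbers side by side (the distance isn't stated, but since the group and the shifts are both linear measurements on the same target, only the ratio matters):

```python
group_extreme_spread = 1.5   # 20-shot group, inches edge to edge
group_radius = group_extreme_spread / 2   # rough approximation of the cone radius
observed_shift = 2.5         # middle of the 2-3" shifts mentioned, inches from the old center
print(round(observed_shift / group_radius, 1))  # ~3.3x the baseline group radius
```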
Also, just an aside, what responsible hunter would ever drop their rifle from several feet onto the scope and not check their zero before continuing a hunt? The mount and its interface with the rifle would be my first suspicion if groups moved substantially after a hard drop.
I would. But what if it tips over on the bipod? Rolls off a pack that it's perched on? Tips over onto soft pine needles from being leaned up against a tree? A slip that's not a full ass-over-teakettle tumble? There are scopes that people regularly hunt with that repeatedly show a full mil shift from being dropped from less than knee high onto a shooting mat on soft dirt or snow.

I'd suspect mounting as a major culprit too if not for the fact that, magically, the SWFA, NF, and Trijicon scopes never seem to have mounting problems when they get tested. When a scope that seems like it should pass has a shift, it always gets remounted and the mounts get examined as the first suspect.
 
It's pretty retarded to drop test optics unless you are getting paid to do it.
I can for sure think of lots of things I'd rather do with my unpaid free time and ammo budget, haha!

First, you would need a decent sample size (say 6 of each model) to rule out sample variance.
For sure, and when a new scope gets tested (one that seems like it should do well) a single fail is usually followed up with more samples. A single pass is taken as a positive/hopeful sign and only that until lots more get put to heavy use. Obviously some people jump to immediate sweeping conclusions but the group that does the evals is very clear that a single pass or fail is not dispositive proof of anything.

Virtually every optic will have issues when you drop it. From the cheapest shit to the most expensive optics.
How do you square that with the fact that NF, SWFA, Trijicon and a few others pass the eval with boring regularity, sample after sample? From a $300 SWFA fixed power to a $3k Minox ZP5, there are scopes at every price level that pass. And ones at every price level that fail.

I have had to send back almost every brand of optic for repair/replacement, from huge drops that dented the objective (and it still tracked) to an 8-inch tip-over that knocked it out of zero. Rifle scopes are designed to survive heavy-recoiling rifles. None are designed to be dropped on the turrets or rolled down a hillside.

These tests are worthless and tell us nothing of value.
But again, there are several that do survive hard landings on the turrets and not only don't break, they don't even lose zero. The whole point of the evals is to identify the scopes most likely to have reliable zero retention and function, and also to sort of say to the industry, "see, it can be done. Some are doing it, but most are not." That's not to say that they're indestructible or cannot ever break/fail, but that there are options that generally work correctly through some pretty harsh handling.

If no scopes passed it I would be right there with you, but the fact that there are scope models that reliably handle a dozen drops from waist high without shifting, while other supposedly good scopes repeatedly shift from drops from less than knee high on the exact same rifle/mounting setup, is very valuable to me. How is that not useful information? Job one for a scope is not to have clear glass or crisp-feeling clicks, but to have the reticle not move unless you tell it to; job two is, when you do move it, to have it move exactly where you tell it to.

Side note: since switching for the most part to scopes that have fared well in those evals (and adopting the scope mounting processes used there), my shooting life has improved immensely.
 
Idk why they don't use a collimator, because that will give you the clearest data on whether the reticle shifts. Zero it to the collimator, mount it on the test rifle, do the drops, take the scope off and put it on the collimator. I usually default to the mounting system or one of the many other threaded points being the failure mode of zero shifts.
That's the crux of my question. Several scopes are known to reliably pass the eval. If they can do it regularly, demonstrating that the rifle/mounting is solid, what is the collimator (or the test table or the recoil simulator to make the impact consistent and "scientific") gaining you?

If the test were trying to find 0.05 or 0.1 mil shifts in a benchrest scope, I'd agree that a collimator would be needed, but it's looking for bigger shifts that actually affect practical accuracy in the field.
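For a sense of scale on those thresholds, here's the standard mil-to-inches conversion at a couple of distances I picked arbitrarily:

```python
# 1 milliradian subtends 3.6 inches per 100 yards.
IN_PER_MIL_PER_100YD = 3.6

for shift_mil in (0.1, 1.0):
    for dist_yd in (100, 300):
        inches = shift_mil * IN_PER_MIL_PER_100YD * dist_yd / 100
        print(f"{shift_mil} mil shift at {dist_yd} yd = {inches:.1f} in")
# A 0.1 mil shift is about an inch at 300 yd; a full mil is nearly 11 inches.
```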
 
I am always leery of anyone who tests equipment and has a financial incentive through advertising to have a bias for or against a brand.
Just realized I didn't respond to this part. Multiple Rokslide sponsors got disappointing results and multiple scopes from companies that aren't super friendly to the site have done well. If there's bias, I don't see a pattern to it.