I wanted to start a new thread to avoid derailing @supercorndogs' Razor LHT thread.
I'll start by restating my question I asked there (in response to a post by @koshkin stating that small zero shifts in the scope itself almost never occur).
I hesitate to open this can of worms, but I'm really interested in your opinion on a specific aspect of this. I haven't seen the proofing setup for the infamous Rokslide evals addressed (though it's tough to do an exhaustive search for all the conversations that happened a couple years ago when this was the hot topic of discussion). When they have a rifle bonded to a chassis, that reliably passes the evals with NF, SWFA, certain Maven and multiple Trijicon models (and a few others like older fixed Leupold, LRHS, S&B etc), wouldn't that seem to indicate that a scope can be built to withstand that kind of impact and that the eval does indeed reliably test for that level of robustness?
If multiple samples of a Leupold or Vortex scope get mounted in the same manner and they all exhibit zero shifts on those same tests, how does the "uncontrolled variables" argument hold when there are several scope models that reliably pass under the same conditions?
Worded another way, it seems to me like scope designs tend to sit either around a 2-3 on an arbitrary 0-10 reliability scale or around an 8-9, and the "uncontrolled" nature of the RS drop eval might cause some 5-6 scopes to pass or fail depending on how the rifle bounces on any particular drop. Scopes that pass seem to consistently pass, and ones that fail seem to consistently fail. Is the argument that there are more scopes in the 6-7 range than I think, and that 6-7 is good enough?
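To put a rough number on that intuition, here's a back-of-the-envelope sketch. The 12-drop count and the assumption that each drop is an independent trial are mine, purely for illustration; they are not taken from the actual RS protocol:

```python
# Hypothetical illustration: if each drop is an independent trial that a
# scope survives with per-drop probability p, the chance of holding zero
# through an entire 12-drop eval is p**12.  (Drop count and independence
# are assumptions for illustration, not the real eval protocol.)
def eval_pass_prob(p: float, drops: int = 12) -> float:
    return p ** drops

for p in (0.99, 0.90, 0.70):
    print(f"per-drop survival {p:.2f} -> clean eval: {eval_pass_prob(p):.3f}")
```

Under those assumptions, a near-top-of-scale scope (0.99 per drop) holds zero through a full eval roughly 89% of the time, while a middling one (0.90 per drop) only does so about 28% of the time. In other words, a genuinely mid-scale design would be expected to produce inconsistent pass/fail results across repeated evals, which is exactly what we don't seem to see.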
There have been a couple of recent interviews I've seen (Aaron Davidson on Cliff Gray's podcast, for example) that have revisited the topic and discuss the evals with similar perspectives to yours, but again don't address the setups that consistently pass the test. Aaron doesn't appear to be a guy who underthinks things, and neither do you (though please don't take that as me equating you with him). I'm really not trying for a "gotcha" moment here; I'm genuinely interested in your expertise. Your commentary on the topic has been interesting for me to read (along with @Glassaholic and some others) and I'd love to hear your perspective on this.
His response was as follows:
I have been out of the loop with this for a bit, so I have not looked at that silly nonsense for a little while.
I suspect that which scopes pass the alleged test and which do not mostly comes down to the preferences of the people doing the testing. The one observation I'll make is that with a few scopes from different manufacturers that they tested, they claimed one brand passed every time and another failed every time. The problem is that with a couple of those, I know for a fact that on the inside it is the same exact scope except with different branding. If one fails and the other does not... we are either dealing with sample variation or dishonesty.
Aside from that, I have made several attempts over the years to track down the issues with scopes that "allegedly" were shifting zero. I even offered to do the investigation pro bono for people who believe they have experienced that problem. Exactly zero people took me up on that.
The few times I have been able to investigate someone else's scope (for people I know well or who are near me geographically), it was almost always something with the mount, although rifle bedding also played a role.
I even invited Scott Parks over to join me for a livestream and to discuss different failure modes.
While there may well be riflescopes out there that do have small zero shifts, most of the time it comes down to improper mounting (not always, probably, but the exceptions are sufficiently rare that I have not been able to track one down).
Small zero shifts are generally not consistent with how most riflescopes are designed. They usually either work or fail catastrophically.
As far as internet claims of all sorts go, I have sorta resigned myself that people lie. A lot. I do not know why.
A few years ago, I did a mini investigation where a gentleman I knew relatively well claimed that he had had at least one product from every product line at Vortex fail on him at least once, and he finally gave up on Vortex. He claimed that he would get a replacement product, sell it, and move on to something else. After some investigation, it turned out to be one product; a while back, Vortex did replace it, and he did sell it. Somehow, it was like a fishing story: every time he told it, a larger number of products had failed.
There is a YouTuber who claims that he had a particular company's product that was absolute garbage, so he sent it in and the replacement was much better. Cool story. He did two videos on it. Since I knew his name and the company, I dug into it. He sent a scope in. They cleaned a massive oil smudge off the front of the objective lens, did their usual QC checks on the collimator, and sent it back to him. The YouTuber did get two videos out of it sounding very authoritative. (Every once in a while I check in on one of his videos to see if he has learned anything about optics. Nope. Not a bloody thing. He still treats them like a video game.)
I have a few more stories like this and they all end up the same way. I feel like the main character from House MD, who is always convinced that his patients are lying and is mostly right about it.
Every time I try to track this down, I find either bullshit or incompetence or some weird shenanigans or a combination of all three.
Does that mean that all reports of scopes shifting zero are bullshit? Not at all. There is only so much that I can investigate given the bandwidth that I have. However, until I can get some reasonable data otherwise, I am going to stick with what I know based on hundreds, if not thousands, of different scopes I have seen over the years, and based on the fact that one of the things I do for my day job is build riflescope testers for riflescope manufacturers.
I will add that unless something really interesting pops up, I am done trying to investigate this. I have only so much time to spend on this.
ILya
That observation about the same scope with different badging is an error. The Tract 3-15 was tested, but not the 2.5-15 that is the "same scope" as the Maven RS1.2. That Tract has not been tested.
I listened to the conversation with Scott, and you repeated the "no way to know if the shifts are coming from barrel, bedding, mounting, etc." perspective. I am saying that if the rifle holds POI through a dozen drops from 3' with scope after scope from NF, SWFA, Trijicon, etc., that does seem to demonstrate that the "test bed" is not the source of the shift and that scopes can absolutely be designed and built to withstand the eval. If I'm missing some crucial part of this, please help me see it.
It also tracks with my personal experience. When I was a Leupold, Vortex, etc. guy, it was essentially a given that at the beginning of the season, and a couple of times throughout the season, I'd need to make a little adjustment to my zero. As more and more of those scopes got replaced with LRHS and SWFA, that magically went away. I learned to mount scopes correctly in that time as well, but the old scopes went in good mounts and still needed adjustments. No more issues now; I adjust zero when I change bullets or powder, but that is it. Period.
Beating the objective bell on a carpeted floor might be indicative of some aspect of reliability, but it seems to me the eval shows that when a rifle with scope attached lands on a turret (something the mil test table with the swinging hammer doesn't test for, but a far more likely thing to encounter in the field), zero shifts can and do occur.
Is the position here that Form, Ryan, and everyone involved is just lying about this whole thing? There are a few other members who have replicated the tests, exposed weaknesses in their mounting setups/processes, then corrected them and had the same results as the "official" ones.
Please help me understand why so many on SH are so dismissive of these evals. It seems like much of the criticism has been addressed/explained.