Rifle Scopes: Revisiting drop testing

Eric1115

Oct 2, 2020
I wanted to start a new thread to avoid derailing @supercorndogs' Razor LHT thread.

I'll start by restating my question I asked there (in response to a post by @koshkin stating that small zero shifts in the scope itself almost never occur).

I hesitate to open this can of worms, but I'm really interested in your opinion on a specific aspect of this. I haven't seen the proofing setup for the infamous Rokslide evals addressed (though it's tough to do an exhaustive search for all the conversations that happened a couple years ago when this was the hot topic of discussion). When they have a rifle bonded to a chassis, that reliably passes the evals with NF, SWFA, certain Maven and multiple Trijicon models (and a few others like older fixed Leupold, LRHS, S&B etc), wouldn't that seem to indicate that a scope can be built to withstand that kind of impact and that the eval does indeed reliably test for that level of robustness?

If multiple samples of a Leupold or Vortex scope get mounted in the same manner and they all exhibit zero shifts on those same tests, how does the "uncontrolled variables" argument hold when there are several scope models that reliably pass under the same conditions?

Worded another way, it seems to me like scope designs tend to be either around a 2-3 on an arbitrary 0-10 reliability scale or around an 8-9, and the "uncontrolled" nature of the RS drop eval might cause some 5-6 scopes to pass or fail depending on how the rifle bounces on any particular drop. Scopes that pass seem to consistently pass and ones that fail seem to consistently fail. Is the argument that there are more scopes in the 6-7 range than I think, and that 6-7 is good enough?

There have been a couple of recent interviews I've seen (Aaron Davidson on Cliff Gray's podcast for example) that have revisited the topic and discuss the evals with similar perspectives to yours but again don't address the setups that consistently pass the test. Aaron doesn't appear to be a guy who underthinks things, and neither are you (though please don't take that as me equating you with him). I'm really not trying for a "gotcha" moment here, I'm genuinely interested in your expertise. Your commentary on the topic has been interesting for me to read (along with @Glassaholic and some others) and I'd love to hear your perspective on this.

His response was as follows:
I have been out of the loop with this for a bit, so I have not looked at that silly nonsense for a little while.

I suspect that which scopes pass the alleged test and which do not mostly comes down to the preferences of the people doing the testing. The one observation I'll make is that with a few scopes from different manufacturers that they tested, they claimed one brand passed every time and another failed every time. The problem is that with a couple of those, I know for a fact that on the inside it is the same exact scope except with different branding. If one fails and the other does not... we are either dealing with sample variation or dishonesty.

Aside from that, I have made several attempts over the years to track down the issues with scopes that "allegedly" were shifting zero. I even offered to do the investigation pro bono for people who believe they experience that problem. Exactly zero people took me up on that.

The few times I have been able to investigate someone else's scope (for people I know well or who are near me geographically), it was almost always something with the mount, although rifle bedding also played a role.

I even invited Scott Parks over to join me for a livestream and to discuss different failure modes.



While there are possibly riflescopes out there that do have small zero shifts, most of the time it will come down to improper mounting (not always, probably, but sufficiently seldom that I have not been able to track it down).

Small zero shifts are generally not consistent with how most riflescopes are designed. They usually either work or fail catastrophically.

As far as internet claims of all sorts go, I have sorta resigned myself that people lie. A lot. I do not know why.

A few years ago, I did a mini investigation where a gentleman I knew relatively well claimed that he had had at least one product from every product line at Vortex fail on him, and that he finally gave up on Vortex. He claimed that he would get a replacement product, sell it, and move on to something else. After some investigation, it turned out it was one product, a while back; Vortex did replace it and he did sell it. Somehow, it was like a fishing story: every time he told it, a larger number of products had failed.

There is a Youtuber who claims that he had a particular company's product that was absolute garbage, so he sent it in and the replacement was much better. Cool story. He did two videos on it. Since I knew his name and the company, I dug into it. He sent a scope in. They cleaned a massive oil smudge off of the front of the objective lens. Did their usual QC checks on the collimator and sent it back to him. The Youtuber did get two videos out of it sounding very authoritative (every once in a while I check in on one of his videos to see if he learned anything about optics. Nope. Not a bloody thing. He still treats them like a video game).

I have a few more stories like this and they all end up the same way. I feel like the main character from House MD, who is always convinced that his patients are lying and is mostly right about it.

Every time I try to track this down, I find either bullshit or incompetence or some weird shenanigans or a combination of all three.

Does that mean that all reports of scopes shifting zero are bullshit? Not at all. There is only so much that I can investigate given the bandwidth that I have. However, until I can get some reasonable data otherwise, I am going to stick with what I know based on hundreds, if not thousands, of different scopes I have seen over the years and based on the fact that one of the things I do for my dayjob is build riflescope testers for riflescope manufacturers.

I will add that unless something really interesting pops up, I am done trying to investigate this. I have only so much time to spend on this.

ILya





That observation about the same scope with different badging is an error. The Tract 3-15 was tested, but not the Tract 2.5-15 that is the "same scope" as the Maven RS1.2; that one has not been tested.

I listened to the conversation with Scott, and you repeated the "no way to know if the shifts are coming from barrel, bedding, mounting, etc" perspective. I am saying if the rifle holds POI through a dozen drops from 3' with scope after scope from NF, SWFA, Trijicon, etc, to me it seems like that does demonstrate that the "test bed" is not the source of the shift and that scopes can absolutely be designed and built to withstand the eval. If I'm missing some crucial part of this, please help me see it.

It also tracks with my personal experience. When I was a Leupold, Vortex, etc. guy, it was essentially a given that at the beginning of the season, and a couple times throughout it, I'd need to make a little adjustment to my zero. As more and more of those scopes got replaced with LRHS and SWFA scopes, that magically went away. I learned to mount scopes correctly in that time as well, but the old scopes went in good mounts and still needed adjustments. No more issues now; I adjust zero when I change bullets or powder, but that is it. Period.

Beating the objective bell on a carpeted floor might be indicative of some aspect of reliability, but it seems to me that the eval shows that when it lands on a turret with rifle attached (something the mil test table with the swinging hammer doesn't test for, but a way more likely thing to encounter in the field), zero shifts can and do occur.

Is the position here that Form, Ryan, and everyone involved is just lying about this whole thing? There are a few other members that have replicated the tests, exposed weaknesses in their mounting setups/processes, then corrected them and had the same results as the "official" ones.

Please help me understand why so many on SH are so dismissive of these evals. It seems like much of the criticism has been addressed/explained.
 
The only scope that I have seen that had a small zero shift was a Burris XTR 4-16 that Burris said had a loose reticle.

Different lots of bullets and different lots of ammo might shoot different POI. I hardly ever adjust zero for reasons other than bullet, load or scope changes.

I will honestly say I have not even looked at their drop tests. So I don't know the kind of shifts they claim, how reliably they can shoot the same zero, or what their ammo/systems are capable of when trying to compare small samples and make reliable assertions. But there are not many people online I trust to do that. Most of them publish books.🤣🤣

You can never discount the fact that people will flat out lie to just "prove" you're wrong and they're right.
 
Small “shifts” in their zeros are more than likely due to small sample sizes.
Can you expand on this? What is it about the way the evals are done that indicates that to you?

I believe this is pretty well controlled for if I understand you correctly. The rifle gets a 20+ round group to establish true group size, zeroed with a 10 round group, and a shift must be outside the 20 round cone to count. With a lot of the scopes, the shifts are a full mil or more on a rifle that shoots 20 shots into a .5 mil group.
 
I am only basing it on my experiences. I have shot many a tight group with rifles at one sitting, then moved around and shot tight groups later with the same rifles that were not in the exact same locations as the first group. Was it from recoil management due to my position behind the rifle, was it lighting, was it wind, barrel temperature, etc. or was it just statistical variation that occurs with a small number of shots? That’s what I meant. Not knocking the evals because I wasn’t there, but I am always leery of anyone who tests equipment and has a financial incentive through advertising to have a bias for or against a brand.


Also, just an aside, what responsible hunter would ever drop their rifle from several feet onto the scope and not check their zero before continuing a hunt? The mount and its interface with the rifle would be my first suspicion if groups moved substantially after a hard drop.
 
The only scope that I have seen that had a small zero shift was a Burris XTR 4-16 that Burris said had a loose reticle.

Different lots of bullets and different lots of ammo might shoot different POI. I hardly ever adjust zero for reasons other than bullet, load or scope changes.
They for sure do have different POI, but it's controlled for. The zero, test, and long term test all use the same lot of ammo, and any time a new lot is used there's a new 20-30 round test group to establish the cone that a shot has to land outside to count as being off.
I will honestly say I have not even looked at their drop tests. So I don't know the kind of shifts they claim, how reliably they can shoot the same zero, or what their ammo/systems are capable of when trying to compare small samples and make reliable assertions. But there are not many people online I trust to do that. Most of them publish books.🤣🤣
Short version of the program: the test rifle, bonded to the chassis, is set up with an NF or SWFA (or a handful of other known good scopes). A 20-30 round group gets shot to establish the cone/precision of that lot of ammo. The rifle gets zeroed with a 10 round group, then dropped on a padded shooting mat over dirt or snow: one drop from 18" onto each side and the top, with one shot after each drop (3 drops, 3 shots), then the same from 36". The last shot (#7) happens after 3 more drops on each side from 36" off the ground. If it passes all that with no major issues, it rides in a truck for a couple months with periodic zero checks using the same lot of ammo.
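For my own clarity, the sequence above can be laid out as a quick sketch. This is just my summary in code form of the paragraph above, not anything official; the heights, surfaces, and ordering are exactly as described there:

```python
# My summary of the drop sequence described above (hypothetical helper,
# not an official protocol document).
def drop_eval_sequence():
    steps = []
    # One drop onto each side and the top, a check shot after each,
    # first from 18" and then from 36".
    for height in (18, 36):
        for surface in ("left side", "right side", "top"):
            steps.append(f'drop {height}" on {surface}')
            steps.append("fire check shot")
    # Three more 36" drops per side, then the final check shot (#7).
    for surface in ("left side", "right side"):
        for _ in range(3):
            steps.append(f'drop 36" on {surface}')
    steps.append("fire check shot #7")
    return steps

seq = drop_eval_sequence()
drops = sum("drop" in s for s in seq)
shots = sum("fire" in s for s in seq)
print(f"{drops} drops, {shots} shots")  # 12 drops, 7 shots
```

Counting it out this way is where the "dozen drops" figure people cite comes from.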

For sure, some guys have tried to replicate this with scopes that regularly pass the eval and had zero shifts happen due to mounting issues, bedding, etc. But the test rifle used for the "official" drop evals is ridiculously consistent: it reliably passes this test with most NF, SWFA, Trijicon, and a few others, while a lot of other scopes reliably fail to complete the eval without moderate to major POI shifts from those drops.
You can never discount the fact that people will flat out lie to just "prove" you're wrong and they're right.
That is a fair point, but I've never seen anyone actually do something to demonstrate that what he's doing along with a few others including Ryan (one of Rokslide's owners) is bullshit. They have put out offers for people (skeptics) to go do the eval with them, video camera rolling uncut, with some modest gear/travel compensation if the scope they're claiming was somehow a fabricated failure actually ends up passing. I was super skeptical of the tests too, but the more I looked at them the more valid they seemed.
 
It's pretty retarded to drop test optics unless you are getting paid to do it. First you would need a decent sample size (say 6 of each model) to rule out sample variance. Virtually every optic will have issues when you drop it. From the cheapest shit to the most expensive optics.

I have had to send back almost every brand of optic for repair/replacement, from huge drops that dented the objective (and it still tracked) to an 8-inch tip-over that knocked it out of zero. Rifle scopes are designed to survive heavy recoiling rifles. None are designed to be dropped on the turrets or rolled down a hillside.

These tests are worthless and tell us nothing of value.
 
Idk why they don't use a collimator, because that will give you the clearest data on whether the reticle shifts. Zero it to the collimator, mount it on the test rifle, do the drops, take the scope off and put it on the collimator. I usually default to the mounting system or one of the many other threaded points being the failure mode of zero shifts.
 
I am only basing it on my experiences. I have shot many a tight group with rifles at one sitting, then moved around and shot tight groups later with the same rifles that were not in the exact same locations as the first group. Was it from recoil management due to my position behind the rifle, was it lighting, was it wind, barrel temperature, etc. or was it just statistical variation that occurs with a small number of shots? That’s what I meant. Not knocking the evals because I wasn’t there, but I am always leery of anyone who tests equipment and has a financial incentive through advertising to have a bias for or against a brand.

Gotcha. I've had that happen as well, and as far as I can tell it usually is a sample size issue. If they were using a 3 round zero and then calling a .5" difference a shift, I'd be right there with you. But starting with a 1.5" 20 shot group and observing shifts of 2-3" is not that.
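To put some rough numbers on why a full-mil shift is hard to explain as sampling noise, here is a quick Monte Carlo sketch. The dispersion value is my own assumption, picked so a simulated 20-shot group spans roughly the 0.5 mil figure mentioned above; nothing here comes from the actual eval data:

```python
import math
import random
import statistics

random.seed(1)

# Per-axis dispersion in mils, ASSUMED so that a 20-shot group spans
# roughly 0.5 mil (my assumption to match the figure quoted above).
SIGMA = 0.12

def shot():
    # One impact from a circular 2-D normal dispersion around the zero
    return (random.gauss(0.0, SIGMA), random.gauss(0.0, SIGMA))

def mean_point(shots):
    xs, ys = zip(*shots)
    return (statistics.fmean(xs), statistics.fmean(ys))

# Distance between the mean points of two independent 10-shot groups
# fired from the SAME zero -- i.e., pure sampling noise, no real shift.
noise = []
for _ in range(2000):
    a = mean_point([shot() for _ in range(10)])
    b = mean_point([shot() for _ in range(10)])
    noise.append(math.dist(a, b))

noise.sort()
print(f"typical noise between 10-shot means: {statistics.fmean(noise):.2f} mil")
print(f"99th percentile of that noise:       {noise[int(0.99 * len(noise))]:.2f} mil")
```

With that dispersion, the apparent movement between two 10-shot group centers stays a small fraction of a mil even at the tail of the distribution, which is the point of gating "shift" calls on the 20-shot cone: a full-mil move is nowhere near sampling noise.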
Also, just an aside, what responsible hunter would ever drop their rifle from several feet onto the scope and not check their zero before continuing a hunt? The mount and its interface with the rifle would be my first suspicion if groups moved substantially after a hard drop.
I would. But what if it tips over on the bipod? Rolls off a pack that it's perched on? Tips over on soft pine needles from being leaned up against a tree? A slip that's not a full ass-over-teakettle tumble? There are scopes that people regularly hunt with that repeatedly show a full mil shift from being dropped from less than knee high into a shooting mat on soft dirt or snow.

I'd suspect mounting as a major culprit too if not for the fact that magically the SWFA, NF, and Trijicon scopes never seem to have mounting problems when they get tested. When a scope that seems like it should pass has a shift, it always gets remounted/mounts get examined as a first suspect.
 
It's pretty retarded to drop test optics unless you are getting paid to do it.
I can for sure think of lots of things I'd rather do with my unpaid free time and ammo budget, haha!

First you would need a decent sample size ( say 6 of each model) to rule out sample variance.
For sure, and when a new scope gets tested (one that seems like it should do well) a single fail is usually followed up with more samples. A single pass is taken as a positive/hopeful sign and only that until lots more get put to heavy use. Obviously some people jump to immediate sweeping conclusions but the group that does the evals is very clear that a single pass or fail is not dispositive proof of anything.

Virtually every optic will have issues when you drop it. From the cheapest shit to the most expensive optics.
How do you square that with the fact that NF, SWFA, Trijicon and a few others pass the eval with boring regularity, sample after sample? From a $300 SWFA fixed power to a $3k Minox ZP5, there are scopes at every price level that pass. And ones at every price level that fail.

I have had to send back almost every brand of optic for repair/replacement from huge drops that dented the objective ( and it still tracked) to a 8 inch top over that knocked it out of zero. Rifle scopes are designed to survive heavy recoiling rifles. None are designed to be dropped on the turrets or rolled down a hillside.

These tests are worthless and tell us nothing of value.
But again, there are several that do survive hard landings on the turrets and not only don't break, they don't even lose zero. The whole point of the evals is to identify the scopes most likely to have reliable zero retention and function, and also to sort of say to the industry, "see, it can be done. Some are doing it, but most are not." That's not to say that they're indestructible or cannot ever break/fail, but that there are options that generally work correctly through some pretty harsh handling.

If no scopes passed it I would be right there with you, but the fact that there are scope models that reliably handle a dozen drops from waist high without shifting, while other supposedly good scopes repeatedly shift from drops of less than knee high in the exact same rifle/mounting setup, is very valuable to me. How is that not useful information? Job one for a scope is not to have clear glass or crisp-feeling clicks, but to have the reticle not move unless you tell it to; job two is that when you do move it, it moves exactly where you tell it to.

Side note, since switching for the most part to scopes that have fared well on those evals (as well as the scope mounting processes) my shooting life has improved immensely.
 
Idk why they don't use a collimator, because that will give you the clearest data on whether the reticle shifts. Zero it to the collimator, mount it on the test rifle, do the drops, take the scope off and put it on the collimator. I usually default to the mounting system or one of the many other threaded points being the failure mode of zero shifts.
That's the crux of my question. Several scopes are known to reliably pass the eval. If they can do it regularly, demonstrating that the rifle/mounting is solid, what is the collimator (or the test table or the recoil simulator to make the impact consistent and "scientific") gaining you?

If the test were trying to find .05 or 0.1 mil shifts in a benchrest scope, I'd agree that a collimator would be needed, but it's looking for bigger shifts that actually affect practical accuracy in the field.
 
I am always leery of anyone who tests equipment and has a financial incentive through advertising to have a bias for or against a brand.
Just realized I didn't respond to this part. Multiple Rokslide sponsors got disappointing results and multiple scopes from companies that aren't super friendly to the site have done well. If there's bias, I don't see a pattern to it.
 
I can for sure think of lots of things I'd rather do with my unpaid free time and ammo budget, haha!


For sure, and when a new scope gets tested (one that seems like it should do well) a single fail is usually followed up with more samples. A single pass is taken as a positive/hopeful sign and only that until lots more get put to heavy use. Obviously some people jump to immediate sweeping conclusions but the group that does the evals is very clear that a single pass or fail is not dispositive proof of anything.


How do you square that with the fact that NF, SWFA, Trijicon and a few others pass the eval with boring regularity, sample after sample? From a $300 SWFA fixed power to a $3k Minox ZP5, there are scopes at every price level that pass. And ones at every price level that fail.


But again, there are several that do survive hard landings on the turrets and not only don't break, they don't even lose zero. The whole point of the evals is to identify the scopes most likely to have reliable zero retention and function, and also to sort of say to the industry, "see, it can be done. Some are doing it, but most are not." That's not to say that they're indestructible or cannot ever break/fail, but that there are options that generally work correctly through some pretty harsh handling.

If no scopes passed it I would be right there with you, but the fact that there are scope models that reliably handle a dozen drops from waist high without shifting, while other supposedly good scopes repeatedly shift from drops of less than knee high in the exact same rifle/mounting setup, is very valuable to me. How is that not useful information? Job one for a scope is not to have clear glass or crisp-feeling clicks, but to have the reticle not move unless you tell it to; job two is that when you do move it, it moves exactly where you tell it to.

Side note, since switching for the most part to scopes that have fared well on those evals (as well as the scope mounting processes) my shooting life has improved immensely.
I have sent back NF and Trijicon optics for not holding zero/wandering zero and tracking issues. I threw a SWFA Super Sniper in the trash when the parallax stopped working. And a dozen other brands, from Steiner to Burris to Bushnell to ZCO, Tangent, S&B, Vortex Razors and Leupold. If you shoot enough you will see everything break.

Minox had a pretty high failure rate and an abysmal warranty process, so bad in fact that most of the reputable US retailers no longer carry or sell them.

The military runs drop tests, tracking tests, environmental tests and more on every optic. The MK4 passed, the MK5 passed; NF, S&B, Bushnell, Vortex and others passed.

Not to sound like an asshole, but you sound really ignorant of actual use of these optics. Sounds like you spend your time reading shit on forums and reddit and somehow think that makes you educated on the subject. If you were actually out running all these products, you would see them fail. In competitive and hunting circles, where you are around a ton of different products all being used hard, you see failures of everything. A scope is one of the most failure-prone components of any rifle system, which is why most of us always have a spare when competing or traveling on a hunt. Some more than others, but enough so that people know to baby and protect their optics, since no optic is immune from issues when it's banged around. The NF glued optics may be some of the most robust, but they still failed. They do make some of the most robust optics, but they are not immune from failure. And to top it off, they don't always own it and fuck the customer over in the process.

You are wasting your time and effort. You do not have the time or money to properly test any of this, and it's really a situation of ignorance; any findings you come up with will be worthless as a result. Find something more productive to do with your time.
 
I have sent back NF and Trijicon optics for not holding zero/wandering zero and tracking issues. I threw a SWFA Super Sniper in the trash when the parallax stopped working. And a dozen other brands, from Steiner to Burris to Bushnell to ZCO, Tangent, S&B, Vortex Razors and Leupold. If you shoot enough you will see everything break.
I don't think anyone has claimed that the scopes that do well can't break. I certainly haven't and specifically said that in the post you're quoting.
Minox had a pretty high failure rate and an abysmal warranty process, so bad in fast most of the reputable US retailers for them no longer carry or sell them.
I've warrantied a Minox scope and a couple pairs of binos (though not a ZP5) and it was clunky at best. No argument there.

The military runs drop tests, tracking tests, environmental tests and more on every optic. The MK4 passed, the MK5 passed; NF, S&B, Bushnell, Vortex and others passed.
Ok, what's the argument here? All are equally robust since they all passed the same tests? I don't think that's what you're claiming, but it's not quite clear to me how that is evidence that the drop test is garbage.

Not to sound like an asshole, but you sound really ignorant of actual use of these optics. Sounds like you spend your time reading shit on forums and reddit and somehow think that makes you educated on the subject. If you were actually out running all these products, you would see them fail.
Not to respond like an asshole, but you are deflecting and haven't actually addressed the question. If you want to claim that it's a complete waste of time, then it should be pretty easy to give a clear argument for why it's not valid. I've seen some of them fail, but my LRHSs and SWFAs have been in a completely different class of reliability than I used to have with my Leupolds, Vortexes, and Athlons.
In competitive and hunting circles, where you are around a ton of different products all being used hard, you see failures of everything. A scope is one of the most failure-prone components of any rifle system, which is why most of us always have a spare when competing or traveling on a hunt. Some more than others, but enough so that people know to baby and protect their optics, since no optic is immune from issues when it's banged around. The NF glued optics may be some of the most robust, but they still failed. They do make some of the most robust optics, but they are not immune from failure. And to top it off, they don't always own it and fuck the customer over in the process.
Again, that's a straw man. Nobody is claiming they can't break.
You are wasting your time and effort. You do not have the time or money to properly test any of this, and it's really a situation of ignorance; any findings you come up with will be worthless as a result. Find something more productive to do with your time.
That is almost exactly the point. I do not have the time and money to properly test (aside from my own personal stuff). If someone else is doing a test that reveals that a Razor LHT shifts zero (and not by a little) when dropped from below knee high into soft snow, that's valuable information if the testing protocol holds water. I'd like for the new Mk4 to be reliable. I'd like the Razor LHT to be great. I'd rather not have to buy them and spend the time and money chasing problems if someone is doing meaningful evaluations that find those problems. I only get to shoot once a week or so, a few thousand rifle rounds a year, and I don't want to burn that all up on solving equipment issues that someone else has already identified.

I see tons of bad arguments and straw men that are either poorly informed or intentionally misrepresenting this test, and I figured if there was anywhere on the Internet that I could get a well reasoned explanation why I should ignore the tests it would be here.
 
Just realized I didn't respond to this part. Multiple Rokslide sponsors got disappointing results and multiple scopes from companies that aren't super friendly to the site have done well. If there's bias, I don't see a pattern to it.
Confirmation bias is one of the realest, and most frequently ignored, problems in any test that pretends at doing Science. And the capital S there is intentional. I would add, engineers are not Scientists, they are maths guys who work in applied science. They use Science subject matter tools, but they do not conduct the Scientific Method in their work. It's more binary, works/doesn't work, in their world.

Most people don't have rigorous Science backgrounds. Most online "science" I see offered is heavily subjective while pretending at objectivity.

In the firearms world, I have seen a few "science" things that are science-ish because they follow a protocol and take data notes. Sometimes the heavily noted data is nothing more than heavily noted data, with a subjective claim of conclusion at the end. Sometimes, it's people "dropping a scope" to try to divine "quality" somehow. I can't even begin to take that seriously as Science.
 
I don't think anyone has claimed that the scopes that do well can't break. I certainly haven't and specifically said that in the post you're quoting.

I've warrantied a Minox scope and a couple pairs of binos (though not a ZP5) and it was clunky at best. No argument there.


Ok, what's the argument here? All are equally robust since they all passed the same tests? I don't think that's what you're claiming, but it's not quite clear to me how that is evidence that the drop test is garbage.


Not to respond like an asshole, but you are deflecting and haven't actually addressed the question. If you want to claim that it's a complete waste of time, then it should be pretty easy to give a clear argument for why it's not valid. I've seen some of them fail, but my LRHSs and SWFAs have been in a completely different class of reliability than I used to have with my Leupolds, Vortexes, and Athlons.

Again, that's a straw man. Nobody is claiming they can't break.

That is almost exactly the point. I do not have the time and money to properly test (aside from my own personal stuff). If someone else is doing a test that reveals that a Razor LHT shifts zero (and not by a little) when dropped from below knee high into soft snow, that's valuable information if the testing protocol holds water. I'd like for the new Mk4 to be reliable. I'd like the Razor LHT to be great. I'd rather not have to buy them and spend the time and money chasing problems if someone is doing meaningful evaluations that find those problems. I only get to shoot once a week or so, a few thousand rifle rounds a year, and I don't want to burn that all up on solving equipment issues that someone else has already identified.

I see tons of bad arguments and straw men that are either poorly informed or intentionally misrepresenting this test, and I figured if there was anywhere on the Internet that I could get a well reasoned explanation why I should ignore the tests it would be here.
You obviously are not picking up what I'm putting down so ignore everything.

Go spend a ton of money and time and let us know what you find. I am sure legions of people will be eagerly awaiting your results to help make their purchasing decisions. I have 3 more scopes I need to order later this month. Please let me know what I should buy based on your plethora of experience and knowledge of the subject................
 
A sincerely curious person wanting to disagree/quarrel with my post above on Science -- I would suggest reading Robert Pirsig's book Zen and the Art of Motorcycle Maintenance, which indirectly is about his thoughts on trying to discern Quality. You could read about Pirsig himself, his background, how he ended up writing, what sorts of ideas mattered to him.

He had the mind of a Science guy, but dabbled seriously in philosophy and human psychology. He had a lot of observations about why people delude themselves on many things, including the "quality" of one motorcycle brand vs another, different carburetor options in the 1970s, the pain of maintaining a lower-quality machine, that sort of thing.

I don't think the guys at that "technomodern hunting" forum have read Pirsig or anything like his thoughts.
 
You obviously are not picking up what I'm putting down, so ignore everything.

Go spend a ton of money and time and let us know what you find. I am sure legions of people will be eagerly awaiting your results to help make their purchasing decisions. I have 3 more scopes I need to order later this month. Please let me know what I should buy based on your plethora of experience and knowledge of the subject...

You realize I'm not the guy, right? These are not my evaluations. I'm reading them, trying to see what it is about them that's so flawed. I'm not claiming expertise; rather, I'm asking those of you with expertise to give a reasoned, evidence-based argument for what attributes of these evals cause them to be of no value.


Confirmation bias is one of the realest, and most frequently ignored, problems in any test that pretends at doing Science. And the capital S there is intentional. I would add, engineers are not Scientists, they are maths guys who work in applied science. They use Science subject matter tools, but they do not conduct the Scientific Method in their work. It's more binary, works/doesn't work, in their world.

Most people don't have rigorous Science backgrounds. Most online "science" I see offered is heavily subjective while pretending at objectivity.

In the firearms world, I have seen a few "science" things that are science-ish because they follow a protocol and take data notes. Sometimes the heavily noted data is nothing more than heavily noted data, with a subjective claim of conclusion at the end. Sometimes, it's people "dropping a scope" to try to divine "quality" somehow. I can't even begin to take that seriously as Science.
I didn't mean to imply a lack of bias of any kind, just that sponsors/compensation don't appear to correlate with which brands/models do well and which ones crash and burn.

He's actually said these are not scientific in the sense of providing conclusive evidence of anything, but rather being indicative of likely behavior (if repeated multiple times with different samples).

They are, however, pretty repeatable. NF scopes almost always survive without shifting. Leupold VX series almost always shift by multiple inches at 100 yards. It's also replicable. The entire protocol is there for anyone to repeat, critique, etc. Can you articulate what you see as the fatal flaw(s) in the methods and why they make the results useless?
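Not from the thread, but the repeatability claim above can be put in rough numbers. A minimal sketch, assuming each drop-test outcome is an independent pass/fail trial and using purely illustrative failure probabilities (not taken from any actual test data):

```python
# Back-of-the-envelope repeatability check: treat each drop-test outcome
# (shift / no shift) as an independent Bernoulli trial.  If a scope model's
# true per-test failure probability were p_fail, the chance that k
# independent samples ALL pass is (1 - p_fail) ** k.

def prob_all_pass(p_fail: float, k: int) -> float:
    """Probability that k independent samples all pass a single
    pass/fail evaluation, given per-sample failure probability p_fail."""
    return (1.0 - p_fail) ** k

# A "coin-flip" scope (p_fail = 0.5) passes 3 straight evals only 12.5%
# of the time, so even 3 consistent results make pure chance unlikely.
print(prob_all_pass(0.5, 3))                 # 0.125
print(prob_all_pass(0.5, 5))                 # 0.03125
# A genuinely robust scope (p_fail = 0.05) usually passes all 5.
print(round(prob_all_pass(0.05, 5), 3))      # 0.774
```

The point of the sketch: consistent pass/fail results across a handful of samples are hard to square with a "the outcome is basically random" explanation, though it says nothing about whether the test measures something that matters.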

What is your take on the more closely controlled testing? Do you think bolting a scope to a table, applying a specific impact to the table, and putting the scope back in a collimator is a better way to check for vulnerabilities that may surface when it's in the field, mounted on a rifle? I realize that sounds like a smartass/borderline bad faith question, and I'll admit it does reflect my position, but I genuinely am interested in your thoughts.

I've had Zen on my to-read list for a while, and I'll bump it up the list for sure.
 
No one owes you an explanation. These are complicated and nuanced subjects. You asked/presented an idea and the experienced folks explained why it's dumb. Now you want them to detail exactly why it is. No one has time for that shit.

If you want to learn how the sausage is made go earn that education.
 
I saw where both of you said that. I don't know either of you from Adam, though. I don't see where koshkin confirmed or denied it. All in all, it still wouldn't change my mind about my distrust of these "tests," which will remain off my list of cares when choosing a scope.
 
You don't have to take either of our word (or koshkin's) for it. It's really easy to go look for yourself. If one of the big points used to discredit those evals is either misinformed or misrepresented, that doesn't move the needle for you?

Is your position based on evidence and reason? If yes, what evidence would it take to change your mind?
 
I saw where both of you said that. I don't know either of you from Adam, though. I don't see where koshkin confirmed or denied it.
I don't understand why you'd need to trust anyone. Just go to the section of the forum where all the drop-test threads are, Ctrl+F for "Tract", and see that the 2.5-15 wasn't tested. It takes 10 seconds and doesn't require any authority figure.
All in all, it still wouldn't change my mind about my distrust of these "tests," which will remain off my list of cares when choosing a scope.
That's perfectly fine; I just don't see why people feel the need to make up scope tests that didn't happen.
 
I don't check because I don't trust their results, therefore I don't care about them. Therefore I'm not gonna waste time on them, same as I'm not gonna go sit down and read Snow White and the Seven Dwarfs for no reason.

I have no idea if koshkin was mistaken or if the two scopes use some of the same internal components.
Are you saying you don't care what the facts of the matter are? If you don't, why did you cite them (what you believed them to be) before? Why weigh in at all? What are you basing your position on? Emotion?
 
That observation about the same scope with different badging is an error. The 3-15 Tract was tested, but not the 2.5-15 that is the "same scope" as the Maven RS1.2. That Tract has not been tested.

The Tract Toric 2.5-15 was tested. I posted the same review on Rokslide, 24hr Campfire, and Long Range Hunting.

John

 
Growing up we always had to zero our rifles before hunting season. It made zero sense to me that we had to since it was done previously. But sure enough we had to make adjustments.

My first out of state hunt the first thing the experienced guy said we needed to do when we got to Colorado was zero rifles. Again why? How is my rifle losing zero in a hard case?

I swapped out Leupolds, March’s, and Swaro’s for NF’s. My main hunting rifle that bounces around in the SxS, strapped to my pack, etc hasn’t needed an adjustment in a few years now. I’ve used it in 4 western states on multiple hunts and it’s still zeroed.

I do believe there are levels of durability.
 
The Tract Toric 2.5-15 was tested.

John

I'll amend my statement... The 2.5-15 Tract is not the one that "failed" the drop test as performed by the same folks on Rokslide that "passed" the 2.5-15 Maven RS1.2. The 3-15 is the only Tract that has been tested by those guys.

The results of the test you linked do jibe with what has been observed with the Maven, if they are indeed the same scope with different badges.

The point remains that the "fact" of the identical twin scopes having one pass and one fail the exact same test as an example of the flaws/bias in the test/testers was either misinformed or misrepresented. Either way it seems like it shows poor understanding of the facts or bad faith.

I'm open to hearing why I'm off base here, but Ilya hasn't responded to the correction.
 
I'll amend my statement... The 2.5-15 Tract is not the one that "failed" the drop test as performed by the same folks on Rokslide that "passed" the 2.5-15 Maven RS1.2. The 3-15 is the only Tract that has been tested by those guys.

The results of the test you linked do jibe with what has been observed with the Maven, if they are indeed the same scope with different badges.

The point remains that the "fact" of the identical twin scopes having one pass and one fail the exact same test as an example of the flaws/bias in the test/testers was either misinformed or misrepresented. Either way it seems like it shows poor understanding of the facts or bad faith.

I'm open to hearing why I'm off base here, but Ilya hasn't responded to the correction.
Just so I understand your point…you write a lot.

@koshkin said that a scope that two brands rebadge (Tract 2.5-15 and Maven RS1.2 2.5-15) had one pass (Maven) and one fail (Tract) over at Rokslide.

You are saying the Tract was never tested by Form?

Right? Please, no summary, just keep it short.

All the word salad going on here makes things confusing.

@Hondo64d apparently tested the Tract using drop tests, posted it in Rokslide (and here) and it apparently passed.

Has anyone found any test in which the Tract 2.5-15 failed drop tests?
 
Just so I understand your point…you write a lot.

@koshkin said that a scope that two brands rebadge (Tract 2.5-15 and Maven RS1.2 2.5-15) had one pass (Maven) and one fail (Tract) over at Rokslide.

You are saying the Tract was never tested by Form?

Right? Please, no summary, just keep it short.
Right
All the word salad going on here makes things confusing.

@Hondo64d apparently tested the Tract using drop tests, posted it in Rokslide (and here) and it apparently passed.

Has anyone found any test in which the Tract 2.5-15 failed drop tests?
 
Just so I understand your point…you write a lot.

@koshkin said that a scope that two brands rebadge (Tract 2.5-15 and Maven RS1.2 2.5-15) had one pass (Maven) and one fail (Tract) over at Rokslide.

You are saying the Tract was never tested by Form?

Right? Please, no summary, just keep it short.

All the word salad going on here makes things confusing.

@Hondo64d apparently tested the Tract using drop tests, posted it in Rokslide (and here) and it apparently passed.

Has anyone found any test in which the Tract 2.5-15 failed drop tests?
A while back someone sent their Tract scope to the Rokslide crew for this testing (if memory serves me right). They claimed that it failed, so it went back to Tract. Somehow it got around to me, which is how I ended up spending more time on this silliness than I ever wanted to in the first place.

ILya
 
You don't have to take either of our word (or koshkin's) for it. It's really easy to go look for yourself. If one of the big points used to discredit those evals is either misinformed or misrepresented, that doesn't move the needle for you?

Is your position based on evidence and reason? If yes, what evidence would it take to change your mind?
Not being high-hat here, but I spent 3 decades as a litigator using evidence and science/reason daily, and one of my specialties is dissecting "expert" opinions. I'm being gentle with you here.
 
Gathering enough data for statistical significance on a dynamic event like this is very, very daunting. That's why I just use Nightforce; it's one of the few that is proven. Tract has been around for 10 years, and I am unaware of any unit that issues their optics (I am not saying there aren't any, but...?)
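To put a rough number on "daunting": a textbook two-proportion sample-size sketch (normal approximation; the 5% and 20% per-test failure rates, alpha, and power are assumptions chosen purely for illustration, not drawn from any actual test data):

```python
from math import sqrt, ceil

def n_per_group(p1: float, p2: float,
                z_alpha: float = 1.96, z_beta: float = 0.84) -> int:
    """Classic normal-approximation sample size PER GROUP needed to
    distinguish two true failure rates p1 and p2
    (two-sided alpha = 0.05, power = 0.80)."""
    p_bar = (p1 + p2) / 2
    num = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
           + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(num / (p1 - p2) ** 2)

# Telling a 5% per-test failure rate apart from a 20% one takes on the
# order of 76 scopes of EACH model -- which is why nobody realistically
# drop-tests consumer optics to statistical significance.
print(n_per_group(0.05, 0.20))  # 76
```

The exact inputs are arguable, but any reasonable choice lands in the dozens-of-samples-per-model range, which supports the "very daunting" point above.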
 
Are you saying you don't care what the facts of the matter are? If you don't, why did you cite them (what you believed them to be) before? Why weigh in at all? What are you basing your position on? Emotion?
For someone feigning genuine curiosity, you sure seem to want to argue that these drop tests are producing factual results for the models tested.
 
For someone feigning genuine curiosity, you sure seem to want to argue that these drop tests are producing factual results for the models tested.
How dare you make these claims; explain to us why you feel this way. Also, I need to see peer-reviewed white papers backing up all your statements.

I read xxx on the Internet, so obviously I'm more experienced and knowledgeable than you. Don't forget to like and subscribe.
 
Cutting through all the BS:
Years ago I worked as a beat cop, and every day a crazy guy gave me a paper with all his crazy thoughts written in a spiral notebook. Crazy as a man could be.

Now these same guys write on the net and are experts.

Just because it's posted on the net does not make it fact.

More shooter problems than scope zero shifts.
 
Cutting through all the BS:
Years ago I worked as a beat cop, and every day a crazy guy gave me a paper with all his crazy thoughts written in a spiral notebook. Crazy as a man could be.

Now these same guys write on the net and are experts.

Just because it's posted on the net does not make it fact.

More shooter problems than scope zero shifts.
That's a very succinct appeal to sanity.

The simple truth is that we all have different opinions on who bears the burden of proof and how much effort we are willing to make toward proving it. I am not particularly invested in putting any more time and effort into proving that drop tests, the way they are done on Rokslide, are silly. If the OP is interested in continuously pushing the issue, I am not going to get into a protracted argument. I've said everything I was going to say on the subject and am not interested in doing any more with it.

If he were to just continue with "drop tests are awesome, prove them wrong," he can. I have neither the time nor the OCD to dig into it further. More importantly, that is a deeply unscientific way of approaching it. The burden of proof is on whoever is asserting the validity of their tests, especially when it runs counter to the rather vast amount of statistical data manufacturers have on riflescopes.

What the drop tests are is a very good marketing scheme because they are visual and seemingly simple. In the same way, Nightforce's trade show gimmick of banging a scope on a phone book (or whatever that was) and then putting the scope on a collimator to show that nothing shifted is an awesome marketing feat. It is simple, easily comprehensible and entirely risk free once you weed out the infant mortality cases. Unless there is a manufacturing defect that might cause a catastrophic failure, it does not stress anything important in the scope, but it is great theater. Same with that famous scope Nightforce carries around with a bullet hole in it. With any scope, if the bullet goes through the scope in a spot where it does not hit any lenses, the scope is likely to continue working (maybe with an erector fixed at a single magnification).

Some people swear by these drop tests and some people look at them with a great amount of skepticism. The line between them has been drawn for a while now, and neither side is going to budge any time soon.

Disclaimer: if it looks like I am picking on Nightforce again, I am not. They make very nice scopes that are well made and well QC'ed. All companies say silly things in their marketing campaigns. It just so happens that Nightforce's marketing pushes the same silly nonsense as Rokslide's drop tests. Generally, the power of marketing is hard to overstate. Nightforce markets toughness, and everyone is convinced their scopes can be used as barbells for a world-record deadlift. Vortex figured out early on that their VIP warranty is like high-octane fuel for sales, so when people talk about Vortex, the warranty immediately comes up. These two companies arguably have the best marketing campaigns in the riflescope world because the message is simple, comprehensible and memorable.
 
I'll respond one more time on all this and then move along. It's obvious we're not getting anywhere.

I did come here with genuine curiosity. I thought there were some folks who had actually looked at what was being done in those evals, understood the "how and why" of the method as described by those performing it, and had some reasoned, logical arguments for why they're bullshit. They make sense to my small brain, and I was hoping for some clarity.

Not one single person in this thread has presented a reasoned argument against the validity of those evals. Straw men, ad hominems, appeals to authority, and an impressive number of other logical fallacies have been trotted out. I'm not asking for a white paper, but a bit of logical progression in an argument would sure be nice.

"Drop tests are awesome, prove them wrong" is explicitly not what I'm doing here, and claiming it doesn't make it so. @koshkin, you've made multiple statements/arguments that are factually incorrect about these tests. When asked/called on it, you respond with essentially, "I can't be bothered to worry about whether my arguments are based in fact or not, since I consider this to be a silly thing not worthy of my time."

You guys are right, my curiosity about your perspective is rapidly evaporating. It's pretty clear that your position here is not based on actually having counter arguments or getting the facts of the matter correct. It's not hurt feelings on my part, I was honestly expecting a bit more in the way of ball-busting. It's disappointment in the fact that you refuse to engage with the facts and just get louder and more dismissive when I try to bring it back to that. That's leftie behavior, and not what I expected to find here.

I'll take the L, and move on.
 
I'll respond one more time on all this and then move along. It's obvious we're not getting anywhere.

I did come here with genuine curiosity. I thought there were some folks who had actually looked at what was being done in those evals, understood the "how and why" of the method as described by those performing it, and had some reasoned, logical arguments for why they're bullshit. They make sense to my small brain, and I was hoping for some clarity.

Not one single person in this thread has presented a reasoned argument against the validity of those evals. Straw men, ad hominems, appeals to authority, and an impressive number of other logical fallacies have been trotted out. I'm not asking for a white paper, but a bit of logical progression in an argument would sure be nice.

"Drop tests are awesome, prove them wrong" is explicitly not what I'm doing here, and claiming it doesn't make it so. @koshkin, you've made multiple statements/arguments that are factually incorrect about these tests. When asked/called on it, you respond with essentially, "I can't be bothered to worry about whether my arguments are based in fact or not, since I consider this to be a silly thing not worthy of my time."

You guys are right, my curiosity about your perspective is rapidly evaporating. It's pretty clear that your position here is not based on actually having counter arguments or getting the facts of the matter correct. It's not hurt feelings on my part, I was honestly expecting a bit more in the way of ball-busting. It's disappointment in the fact that you refuse to engage with the facts and just get louder and more dismissive when I try to bring it back to that. That's leftie behavior, and not what I expected to find here.

I'll take the L, and move on.
Well, as to testing: I have 40-plus years of professional shooting. I worked for several scope, ammo and rifle companies, plus 35 years of competition and teaching people to shoot. My results are that it's almost always a shooter, loose action or mounting issue. I say that very quietly so I am not accused of "getting loud."

All things can break, so a scope can and will break, but this reminds me of the guys claiming a morning-zero-to-hot-afternoon zero shift was the scope.