
Why use Standard Deviation?

Lumpybrass
Dec 15, 2018
The two chronographs I have used both display one sigma along with the other data, and many forum posts discuss the SD recorded for various loads.
It puzzles me why that figure isn't ignored in favor of ES, which would seem more useful.
What value is SD in our thrashing around in velocities?
While working in systems engineering doing control work in heavy industry, we used SD. However, we used 3 sigma, representing 99.7% of all data points; the idea was to expel bogus readings so as not to skew valid measurements.
That doesn't seem to fit our load development work with velocities.
Just something that gave me an itch out of reach.
 
The reason SD is useful is that if you have twenty shots for two different loads and the extreme spreads are the same for each group, the lower SD indicates statistically which load will have fewer fliers. I agree that extreme spread is important, because SD alone is only half the picture.
 
Isn't ES a function of the number of shots in a group as well? As your sample size grows, your SD should become more accurate (approaching some specific value... whatever that is), but your ES can do nothing but get larger: any later "flier" that falls outside your existing ES simply sets a new, larger ES.
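To see that divergence concretely, here is a quick simulation sketch (all numbers made up: a hypothetical load at 2700 fps with a 10 fps SD). The running ES can only ratchet upward, while the running SD settles toward the underlying value.

```python
import random
import statistics

random.seed(42)  # repeatable demo
MEAN_FPS, SD_FPS = 2700.0, 10.0  # hypothetical load

shots = [random.gauss(MEAN_FPS, SD_FPS) for _ in range(100)]

def running_stats(vels):
    """Return (n, ES, SD) after each shot from the 2nd onward."""
    out = []
    for n in range(2, len(vels) + 1):
        sample = vels[:n]
        out.append((n, max(sample) - min(sample), statistics.stdev(sample)))
    return out

stats = running_stats(shots)
for n, es, sd in stats:
    if n in (5, 20, 100):
        print(f"n={n:3d}  ES={es:5.1f} fps  SD={sd:4.1f} fps")
```

Because each new max or min can only widen the spread, ES is non-decreasing by construction; SD bounces around early, then converges.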
 
ES shows the outliers: only the maximum difference between the top and bottom readings. It shows you when something is way wrong.

SD is more representative of the entire body of your group; it shows what all your shots are doing on average.


Yes, I know that's not what the statistics textbooks say, but it's how the two are generally applied for reloading and long range shooting.
 
If you shoot 100 rounds, the distribution of the velocities should form a smooth bell-shaped curve with a mean and upper and lower outliers.
Maybe all the shots fall between 2990 and 2995; cool, ES alone would be valid.

Once you state an average velocity (the middle), describing the shape/width of the curve requires SD and ES.
With a narrow SD (sharp peak) and the ES locations (the outliers), you can evaluate any process.
With enough samples you can even find a hidden curve within the curve, pointing to two sub-processes.
The ES may or may not increase, but the curve will become more defined;
sort of like increasing sample rate/resolution to see sidebands in a frequency-domain plot.
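A toy histogram sketch of that "curve within the curve" idea (all numbers invented: two sub-processes 20 fps apart, e.g. two brass lots). With enough samples the two humps separate visibly:

```python
import random
from collections import Counter

random.seed(7)
# hypothetical mixed population: half centered at 2690 fps, half at 2710 fps
vels = ([random.gauss(2690, 4) for _ in range(200)]
        + [random.gauss(2710, 4) for _ in range(200)])

BIN = 2  # fps per histogram bin
hist = Counter(int(v // BIN) * BIN for v in vels)
for edge in sorted(hist):
    print(f"{edge}-{edge + BIN} fps: {'#' * hist[edge]}")
```

The printout shows two distinct peaks with a trough near 2700; a small sample would have looked like one wide, sloppy curve.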
 
I will toss this out there. With the number of variables that can affect round velocity, I would consider Bayesian stats to have some application in load development. I haven't figured out just how to apply those methods yet, but I will get something figured out eventually. For what it's worth, Bayesian methods don't have to assume a particular distribution the way the bell-shaped curve does.
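For what it's worth, here is one hedged sketch of what a Bayesian approach could look like (my own toy example, not a worked-out method): treat a book or estimated velocity as a loosely held prior on the mean MV and update it with chrono data, assuming a normal likelihood with a known shot-to-shot SD for simplicity.

```python
import statistics

def update_mean_belief(prior_mean, prior_sd, data, data_sd):
    """Conjugate normal update for the mean, with the data SD assumed known."""
    n = len(data)
    prior_prec = 1.0 / prior_sd**2          # precision = 1 / variance
    data_prec = n / data_sd**2
    post_prec = prior_prec + data_prec
    post_mean = (prior_mean * prior_prec
                 + statistics.fmean(data) * data_prec) / post_prec
    return post_mean, (1.0 / post_prec) ** 0.5

# hypothetical: load data suggests ~2680 fps (held loosely), chrono says faster
mean, sd = update_mean_belief(2680, 25, [2702, 2698, 2705, 2700, 2701], 10)
print(f"posterior belief: {mean:.1f} fps +/- {sd:.1f}")
```

The posterior lands almost on the chrono average because five consistent readings carry far more precision than the vague prior; with only one or two shots, the prior would pull harder.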
 
SD is a waste of time for the average shooter.
 
Because, unlike extreme spread, it doesn't offer any useful information before your barrel is shot out.
 
I think OP hit on two different uses for these statistics:

1) a method to reject some data points -- "expel bogus readings", i.e. bad measurements. This may be warranted if you believe your sensors and measurements are somewhat unreliable. With enough samples, you can reject readings that fall outside "reasonable" (e.g. 3 sigma) limits.

2) a method to estimate the variance of the distribution underlying the sample data, in order to find a load with minimal variance. With enough samples, ES and SD yield two different parameters that can suggest different things, as noted in the replies above; the main difference, IMO, is that the SD calculation uses all samples, while ES uses only the pair with the largest difference.
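Both uses can be sketched in a few lines (made-up velocities: 29 consistent readings plus one suspect 2800 fps glitch). One caveat worth a comment: in a sample of n shots, the largest possible z-score is (n-1)/sqrt(n), so with only ~10 shots a single outlier can never exceed 3 sigma; the cut only starts working at larger sample sizes.

```python
import statistics

# 30 hypothetical readings: 29 tight around 2700 fps plus one suspect value
vels = [2698, 2699, 2700, 2701, 2702] * 6
vels[7] = 2800  # possible chrono glitch

mean = statistics.fmean(vels)
sd = statistics.stdev(vels)
es = max(vels) - min(vels)
print(f"raw: mean={mean:.1f}  SD={sd:.1f}  ES={es}")

# use 1): reject readings outside mean +/- 3 sigma
# (with n=10 this could never fire: max possible z is (n-1)/sqrt(n) < 3)
kept = [v for v in vels if abs(v - mean) <= 3 * sd]
print(f"kept {len(kept)} of {len(vels)}; SD of kept shots: {statistics.stdev(kept):.1f}")

# use 2): note SD used every shot above, while ES used only the extreme pair
```

One glitchy reading inflates the raw SD badly; after the 3-sigma cut, the remaining 29 shots show the tight spread the load actually has.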
 
'Extreme spread' by its very definition only uses two points out of however many shots fired - the outliers at either end, the values which are least likely to repeat.

Even putting aside the above, ES only tells you what your load already *did*. It has next to zero statistical validity as a predictor of future performance. Standard deviation, on the other hand, uses your previous shot values to give you a useful metric that can actually be used to predict the expected future values with increasing accuracy as the sample size increases.

The only way that "SD is a waste of time for the average shooter" is that a lot of people are too f'ing lazy to do any sort of math more complicated than 2+2=4... and yet they want to play at long range ballistics :unsure::rolleyes:
 
ES is cool: if you fire a shot that happens to match the lowest velocity, you can predict its trajectory, and the same with the highest velocity.

Bryan Litz wrote a book named "Applied Ballistics for Long Range Shooting" that includes a section on the Weapon Employment Zone, which deals with the probability of a hit under realistic conditions. He uses ES and standard deviation (and its embedded assumption of a normal distribution) to estimate a large set of possible impact locations and compare them to the target size. Seems a useful concept to me: when I blow a range or wind call, I miss at least as often as Bryan said I would.
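In that spirit, here is my own rough Monte Carlo sketch (assumed numbers, not Litz's actual WEZ math): turn an MV SD into a velocity-only hit estimate. The 0.2 in/fps drop sensitivity is an assumption, though it is in the same ballpark as the roughly 4 inches per 20 fps at 1000 yards quoted later in this thread.

```python
import random

random.seed(1)
MV_MEAN, MV_SD = 3000.0, 10.0   # fps, hypothetical load
INCH_PER_FPS = 0.2              # assumed drop sensitivity at long range
TARGET_HALF_HEIGHT = 5.0        # inches, hypothetical plate

TRIALS = 20_000
hits = 0
for _ in range(TRIALS):
    mv = random.gauss(MV_MEAN, MV_SD)
    vertical = (mv - MV_MEAN) * INCH_PER_FPS  # velocity-induced miss only
    if abs(vertical) <= TARGET_HALF_HEIGHT:
        hits += 1
print(f"velocity-only hit rate: {hits / TRIALS:.1%}")
```

Real WEZ analysis folds in wind, range error, and group size too; this isolates just the muzzle-velocity term to show how SD feeds a hit probability.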
 
Spife is right, inasmuch as we use it for non-statistical purposes. If you want to go a step further, you can verify the confidence in your results; to do that you need the standard statistical output of the chronograph.

 
If you measured 100 rounds and 98 of them were exactly 2700 fps, with one at 2600 fps and one at 2800 fps, would you trash that load because the ES was 200?

Those would be two extraordinary items of interest and would be taken out of the population to create a valid sample frame.
 
If you measured 100 rounds and 98 of them were exactly 2700 fps, with one at 2600 fps and one at 2800 fps, would you trash that load because the ES was 200?

The two outliers are the shots that ruined the group or got you disqualified from continuing on, so they are the two most important shots of the 100 measured.
Standard deviation helps those who can't reload achieve a low number they can post about, which in reality means nothing for shooting small groups.
Its proper use is for large samplings, and barrels don't live that long.
 
Using values for "low" and "high" based on reasonable requirements, consider 4 cases:
  1. high ES, high SD: fail
  2. high ES, low SD (the Juggerxxx proposition): account for the outliers, or accept them and fail
  3. low ES, high SD: "impossible" (I think)
  4. low ES, low SD: GTG.

If case 3 is truly impossible and ES is low, then SD must be low, and so is somewhat redundant.
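Case 3 being "impossible" can be checked numerically: for n >= 3, sample SD can never exceed roughly 0.58 of ES (and tends toward ES/2 as n grows), so a low ES mathematically forces a low SD. A quick brute-force sketch with random strings never finds SD anywhere near ES:

```python
import random
import statistics

random.seed(3)
worst_ratio = 0.0
for _ in range(1000):
    n = random.randint(3, 30)               # random string length
    spread = random.uniform(1, 30)          # random true SD, fps
    shots = [random.gauss(2700, spread) for _ in range(n)]
    es = max(shots) - min(shots)
    worst_ratio = max(worst_ratio, statistics.stdev(shots) / es)
print(f"largest SD/ES ratio seen: {worst_ratio:.2f}")  # theory caps it near 0.58
```

The worst case (half the shots piled at each extreme) sets the cap; ordinary bell-curve strings sit well below it, which is why "low ES, high SD" never shows up in practice.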
 
(quoting the four ES/SD cases above)
Broadly speaking, the above is probably not a bad way to think of this, assuming you have an adequate sample size. However, I am leery of even five-shot groups, though that seems to be where the shooting world is; many decisions are made on small sample sizes. As mentioned in a post further above, the missing item is the confidence level, which can be calculated. In short, your level of "confidence" in your analysis (SD or ES values, for example) at low sample sizes should be very low, and that confidence grows as your sample size grows.

If you look at your four scenarios above, ignore the SD values, and look ONLY at ES, you can't tell the difference between #1 (bad) and #2 (good) if the only number someone gives you is the ES. I think many people don't understand statistics well enough to appreciate what SD is telling them, but visually people compute their own mental version of SD (even when they think they are paying attention to ES).

So let's do a mental experiment: two 10-shot groups, with the same ES of 3 inches (an arbitrary number for this example). The first group is relatively tight but has a flier or two that creates that ES. The non-statistician shooter looks at a tight group, is quick to explain away the flier somehow (effectively removing it from the sample), and praises the rest of the group. In effect, the shooter has mentally calculated something like a small SD, even if they couldn't begin to explain what SD means mathematically; because they explained away the flier, they have discounted the ES it created. The second group is a bit all over the place, but with the same ES. The same shooter can't visually find anything that looks like a tight group and concludes something is wrong. That is high SD, and again the ES doesn't factor into the judgment; the lack of a real group (high SD) is the more disturbing fact. I would only remove fliers from my analysis if I "know" the cause (i.e., knew the shot was bad when I pulled the trigger for any number of reasons... I sneezed, jerked the trigger, etc.). Unexplained fliers should be treated as real data.

I think what screws people up is that when sample sizes are low, any ES value outside of what we want makes our brains scream that something is wrong, even if the flier fits within the "actual" distribution curve. Given a decent sample size, our brains switch gears and can spot low/high SD naturally. Again, at small sample sizes ES can lead us astray; at appropriate sample sizes (high confidence level), SD (or a visual equivalent) rightly becomes the focus... IMHO :)
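Putting invented numbers on that mental experiment (vertical impact positions in inches, 10 shots each): both groups share a 3.0-inch ES, yet the SDs differ.

```python
import statistics

# hypothetical vertical impact positions, inches
tight_with_flier = [0.0, 0.1, -0.1, 0.0, 0.1, -0.1, 0.0, 0.1, -0.1, 2.9]
scattered = [-0.1, 2.9, 1.5, 2.6, 0.2, 2.2, 0.6, 2.9, 0.9, 1.8]

for name, group in (("tight + flier", tight_with_flier), ("scattered", scattered)):
    es = max(group) - min(group)
    sd = statistics.stdev(group)
    print(f"{name:14s} ES={es:.1f} in  SD={sd:.2f} in")
```

With ES identical, only SD distinguishes them; and, tellingly, even the "tight" group's SD is mostly the flier's doing, which is exactly the argument for keeping unexplained fliers in the data.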
 
As a smart guy once told me, believe the bullet.

I’ll find a node to see if I’m in the ballpark I want velocity-wise (big OBT fan), then tweak the load. Once it proves consistent at multiple distances (including gathering dope and environmentals for said distances), I chrono for an average velocity, then tweak my ballistics program to match drops along the way.

This has given me consistent results within my personal trigger margin of error for years.

Believe it or not (I don’t care) I’ve had loads that held 1/2moa or better, on paper at 600yds that had I judged them on their SD alone I never would’ve shot at distance on paper.

The biggest variable is always the loose screw behind the trigger.
 
Believe it or not (I don’t care) I’ve had loads that held 1/2moa or better, on paper at 600yds that had I judged them on their SD alone I never would’ve shot at distance on paper.
Apologies if I am reading this wrong, but when you talk about SD above, I assume it is SD of the muzzle velocity, while your measure has been actual on-target performance, with the conclusion that consistent MV doesn't always produce on-target results. It's hard to argue against that: MV is just one of many variables, and no single one tells the entire story.

For example, SD could refer to distance-to-center of the group rather than to MV (even if MV is probably the most common measure). My example post above was about grouping (precision), since my ES was in distance, not velocity; I should have been clearer about that.

Ultimately, on-target performance is what counts. My larger point is that whatever measures are used, good statistical analysis helps make sense of the data, and small sample sizes make it hard to draw reliable conclusions.

The biggest variable is always the loose screw behind the trigger.
Absolutely. My biggest problem is... myself. :)
 
Correct, I was referring to chrono data.
I don’t calc such things on target; it's pretty easy to see whether I like what I see without mental gymnastics.

I gave up obsessing over minutiae that make changes inside my circle of shooter error.
I actually got the lowest SD I’ve had in years last weekend, and that was on ammo loaded in a mixed batch of 6-9x-fired Hornady 6.5 CM brass; the only sorting done was throwing out the cases where a paper-clip test showed signs of impending case head separation.
 
When I am refining, I always shoot two 5-round groups at different stages.
I agree: low ES, low SD is always the keeper.

My target for .308 is sub-10 ES and sub-5 SD; for 6.5 CM and 6 CM it's sub-8 ES and sub-4 SD. These have been good well past 1000 yards.
 
(quoting the confidence-level discussion and two-group mental experiment above)
What is the appropriate number of rounds fired to give us a high probability that the numbers we see are statistically correct and meaningful?
 
What is the appropriate number of rounds fired to give us a high probability that the numbers we see are statistically correct and meaningful?
I complained about five-shot groups in my earlier post, but my understanding is that five is actually in the ballpark; probably slightly more (5-7?) rather than fewer (3-5?). It depends on what "confidence interval" you feel is good. Basically, there is some level of doubt in the output of the analysis; how much doubt are you OK with? This speaks to my concerns with the 10-round Satterlee test (an ultra-low sample size per test).

My personal opinion: given that my own skill issues may be a large factor, I would like to shoot a slightly larger sample just to provide more data, in the hope of better separating a flier that is me from one that is the gun. In effect, I like a higher level of confidence than others might.

In short, I put question marks in my answers above, and I will let others who are better at statistics than I am try to answer this question. Here are some quick links that go into it; note that these are somewhat heavy reads. I have included a few select quotes or areas to examine from each...

https://www.autotrickler.com/blog/thinking-statistically
(Look at the section where he talks about confidence intervals while addressing the question of "how many shots?")

https://www.autotrickler.com/blog/practical-statistics-for-shooters
(Look at the graph and discussion of two 5-shot groups, one with a large ES and one with a smaller ES, with SD and ES measuring group size. The implication is that the group with the larger ES is likely bad but might be average, and the one with the lower ES is likely good but might also be average. Repeating the test will likely give you different results; combining all of those tests into one larger sample would give you more confidence in the answer it spits out.)

Good quote... "Any test where variance is measured and compared, where confidence is not considered, could be very misleading"

http://ballistipedia.com/index.php?title=FAQ
(I haven't read much here yet, but on the surface it looks like a lot of good, if deep, reading.)

Overall, I am at the edge of my knowledge and don't want to venture much deeper into speculation on "what is best". :)
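One stdlib-only way to put numbers on "confidence grows with sample size" (simulated data, assuming a true SD of 10 fps): bootstrap a rough 90% interval on the SD estimate and compare its typical width for 5-shot versus 30-shot strings.

```python
import random
import statistics

random.seed(11)
TRUE_MEAN, TRUE_SD = 2700.0, 10.0  # assumed underlying load

def avg_ci_width(n, bases=20, reps=500):
    """Average width of a bootstrap 90% interval on SD, over several strings."""
    widths = []
    for _ in range(bases):
        shots = [random.gauss(TRUE_MEAN, TRUE_SD) for _ in range(n)]
        # resample the string with replacement and recompute SD each time
        sds = sorted(statistics.stdev(random.choices(shots, k=n))
                     for _ in range(reps))
        widths.append(sds[int(0.95 * reps)] - sds[int(0.05 * reps)])
    return statistics.fmean(widths)

widths = {n: avg_ci_width(n) for n in (5, 30)}
for n, w in widths.items():
    print(f"n={n:2d}: typical 90% interval on SD is ~{w:.1f} fps wide")
```

The 5-shot interval comes out far wider: the same chronograph SD readout is simply worth much less when it came from five shots than from thirty.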
 
Since we don't have a population to draw a sample from, we do what is reasonable: create a small sample frame. For a simple random sample we need at least 40 shots to get a reasonable 95% confidence interval with an error of about +/- 3 fps; 40 is not reasonable. 7 is reasonable, but the error goes up to about +/- 9 fps. This is all based on an imaginary population of 1000 rounds with SD = 10. So we can say we are 95% sure that the next 1000 rounds will fall within that error, all conditions being the same. This has nothing to do with loss of accuracy over time.
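The +/- 3 and +/- 9 fps figures line up with a standard t-based margin of error on the mean, margin = t * SD / sqrt(n), using the post's assumed SD of 10 (the t values below are ordinary table values for 95% two-sided intervals):

```python
import math

SD = 10.0  # the post's assumed SD, fps
# 95% two-sided Student-t critical values: df=6 -> 2.447, df=39 -> 2.023
margins = {n: t * SD / math.sqrt(n) for n, t in ((7, 2.447), (40, 2.023))}
for n, m in margins.items():
    print(f"n={n:2d}: mean velocity known to about +/- {m:.1f} fps")
```

So the post's +/- 9 at seven shots and +/- 3 at forty are just this formula in action: quadrupling the round count roughly halves the uncertainty, not quarters it.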
 
(quoting the sample-size and confidence-interval discussion above)


So if I'm shooting a 5-shot group at 100, 200, 300, 600, or 1000 yards, what is standard deviation going to tell me about my group?

If my bullets are good and my gun is shooting well, extreme spread tells me what the vertical spread in my group should be.
If I am shooting a 25-shot card, what is standard deviation going to tell me?
Extreme spread tells me my maximum vertical spread, and if it's not small enough to keep everything in the X-ring, my load isn't going to win me any matches.
When I shoot a match I shoot it to win, so having confidence that my gun will still be in some kind of tune 500 shots after the match is over is useless intel.
 
He's right, in the sense that the SD is just there because people want it there: buried in it is the average of the shots fired, and the SD is just the variation around that average. Not because Lynn Jr. knows statistics, but because he can see it downrange and makes the connection with the ES. There isn't much difference downrange on target between an SD of 8 and an SD of 16 when the ES shows no material difference. SD is just OCD for picky reloaders, for the most part. Oh, look: the tighter group wins the match. It does make a difference. Never mind, he is not right; he was referring to his father's world-record abilities.

(attached target photos: one group shot with SD=8, one with SD=16)
 
A lot of talk for a nobody who can't do a ladder test.
It's time for the grownups to talk now, so maybe go play a video game.
 
 

(quoting the 5-shot group / 25-shot card questions above)
Let’s back up a step.
ES and SD help you develop and refine your load for precision.
Each time you pull the trigger, you can expect a result as close as possible to the last pull.
This defines precision.
What this gives you is confidence in the ability of your ballistic calculator to perform properly.
That takes us to accuracy.
If you know each pull will send the bullet out of the barrel the same way, then you can zero your scope and be confident that, barring other environmental factors, when you dial and pull, your bullet will hit its target. This is accuracy.

ES and SD work together.
Far brighter minds and better shooters than I have been using them together for years.
 
A better and more useful number than SD, at least when it comes to shooting, is the coefficient of variation (CV).
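A quick sketch of the idea (invented velocity strings): CV = SD / mean, so it normalizes velocity spread across loads running at different speeds.

```python
import statistics

# hypothetical chrono strings, fps
loads = {
    "6.5 CM": [2700, 2708, 2695, 2703, 2699, 2705],
    ".308":   [2600, 2607, 2594, 2602, 2598, 2605],
}

cvs = {}
for name, vels in loads.items():
    mean = statistics.fmean(vels)
    sd = statistics.stdev(vels)
    cvs[name] = 100 * sd / mean  # coefficient of variation, percent
    print(f"{name}: SD={sd:.1f} fps, CV={cvs[name]:.3f}%")
```

Two loads with nearly identical SDs in fps get slightly different CVs because one runs faster; as a relative measure, CV makes the comparison apples-to-apples.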
 
If your gun shoots 1/4-inch groups at 100 yards with a 0.600 BC bullet at 3000 fps, and your extreme spread is 20 fps, then at 1000 yards the best you can hope for is roughly a 6.6-inch group, which won't win you anything.
The ES contributed 4.1 inches of vertical to your group in an ideal situation.
In this example, extreme spread tells us what our minimum group will look like, and this load would be garbage by today's standards.
Tell me what standard deviation is going to tell me about my group, other than that if I shoot a thousand groups they should look good?
With extreme spread you get an actual size to expect; with standard deviation you learn that if you were to shoot a thousand groups, some might look good.
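That 4.1-inch / 6.6-inch arithmetic can be reproduced with a back-of-envelope sensitivity: drop grows roughly with time-of-flight squared, and TOF shrinks roughly with velocity, so d(drop) is about 2 * drop * (dv / v). The 310-inch total drop below is an assumed round number for illustration, not a real solver output.

```python
DROP_IN = 310.0   # assumed total drop at 1000 yd, inches (hypothetical)
V = 3000.0        # muzzle velocity, fps
ES = 20.0         # extreme spread, fps

# drop ~ t^2 and t ~ 1/v, so a small velocity change dv shifts drop by
# roughly 2 * drop * dv / v
vertical_from_es = 2 * DROP_IN * ES / V
base_group = 0.25 * 10  # 1/4 inch at 100 yd, scaled linearly to 1000 yd
print(f"ES adds ~{vertical_from_es:.1f} in of vertical; "
      f"best-case group ~{base_group + vertical_from_es:.1f} in")
```

Under those assumptions the velocity term alone contributes about 4.1 inches, and stacked on the 2.5-inch angular group you land right at the ~6.6 inches quoted; a real ballistic solver would refine, not overturn, the picture.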