
Range Report: Which Published Berger BCs Are From Predictions Rather than Measurements?

I gotta get that book. Thanks for the preview.



Fascinating. Thanks for sharing this result. It's nice to see that the SBIR $ was put to good use.



Once again, fascinating. I'll probably read this paragraph 20 times trying to formulate experiments and consider how to otherwise evaluate it.

For example, consider an experiment with identical low BC .223 Rem bullets fired simultaneously at the same velocity over a shorter course (say 500m) without the huge mid range height, but one 30 feet high and one at normal bench rest level. (Build two towers 30 feet high, one for the shooter, and one for the target.) If your explanation is correct, the wind drift on the upper trajectory won't be systematically larger than the lower one.

And if this is true up to the height of a 1000 yard trajectory, how much higher can one go and still have predictions as accurate from line of sight wind measurements as from measurements along the actual path?



I like to think of it as a low-pass filter, more or less giving a "moving average" over a short time interval rather than a true instantaneous reading. The time window for the vane is longer than for the anemometer.

But I am perplexed as to why the moving average over a short time window would matter more than the difference in wind 30 feet higher or the effects of spatial averaging over large fractions of the range to the target. Certainly, the ultrasonics give more insight into the time scales on which wind speed and direction are changing, but if you ran the ultrasonic results through a low pass filter or moving average mimicking the time scales of the Kestrel vane and anemometer response, would the accuracy of your wind drift predictions suffer significantly?
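To make that question concrete, here is a minimal sketch of the idea in Python: run a fast ultrasonic wind trace through a boxcar moving average whose window approximates a slower vane response, then compare the variability of the raw and filtered traces. The sample rate, window length, and synthetic gust model are all assumptions for illustration, not properties of any actual Kestrel or ultrasonic unit.

```python
# Sketch: apply a moving-average low-pass filter to a high-rate wind trace
# to mimic the slower response of a mechanical vane/anemometer.
import numpy as np

def moving_average(signal, window_samples):
    """Simple boxcar low-pass filter; output has the same length as input."""
    kernel = np.ones(window_samples) / window_samples
    return np.convolve(signal, kernel, mode="same")

# Synthetic 20 Hz wind-speed trace (m/s): slow oscillation plus gust noise
rng = np.random.default_rng(0)
t = np.arange(0, 30, 0.05)                             # 30 s at 20 Hz
wind = 4.0 + 1.5 * np.sin(2 * np.pi * t / 7) + rng.normal(0, 0.8, t.size)

# Assume a ~1 s vane response: 20-sample window at 20 Hz
smoothed = moving_average(wind, 20)

# The filtered trace varies less than the raw one; the question above is
# whether that lost high-frequency detail actually matters for drift.
print(np.std(wind), np.std(smoothed))
```

Feeding the smoothed trace (instead of the raw one) into the drift prediction and comparing errors would be a direct test of whether the vane's slower response is the limiting factor.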



Now you're tempting me to reverse engineer a number of solvers. With your physics and math background, you will probably get the gist:

Enter the appropriate outputs into a spreadsheet as x, y, z, t, Vx at 1 yard intervals.

Use the basic principles of calculus and kinematics to compute Fx, Fy, and Fz at each point. Fx and Fz are simple: they are just the retarding drag force and gravity. But reverse engineering Fy can tell me how these things are really relating BC and crosswind to the force of the wind on the bullet. Something is not right. Of course, it's also simple enough to tell whether their drift predictions agree with Eqn 5.2 of Litz 2009 (time lag times wind speed, really going back to Bob McCoy).
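A rough sketch of that spreadsheet exercise in Python, using hypothetical trajectory arrays in place of real exported solver output (the toy motion below uses a constant deceleration rather than true v-squared drag, purely so the recovered forces are easy to check):

```python
# Sketch: recover accelerations (hence forces per unit mass) from a solver's
# trajectory table by finite differences, and build the classic lag-rule
# crosswind drift (drift = W * (t - x/V0), McCoy / Litz Eqn 5.2) to compare
# against. The trajectory below is a made-up placeholder for solver output.
import numpy as np

def accelerations(t, pos):
    """Second finite difference of each position column -> ax, ay, az."""
    return np.array([np.gradient(np.gradient(p, t), t) for p in pos.T]).T

def lag_rule_drift(t, x, v0, crosswind):
    """Wind drift from the time-lag rule: W * (t_actual - t_vacuum)."""
    return crosswind * (t - x / v0)

# Hypothetical solver output (SI units, uniform time samples)
t = np.linspace(0, 1.5, 300)
v0 = 800.0                        # muzzle velocity, m/s
x = v0 * t - 150.0 * t**2 / 2     # toy downrange motion, constant 150 m/s^2 drag
y = -9.81 * t**2 / 2              # gravity drop
w = 5.0                           # crosswind, m/s
z = lag_rule_drift(t, x, v0, w)   # drift consistent with the lag rule

a = accelerations(t, np.column_stack([x, y, z]))
# Away from the table edges: a[:,0] ~ -150 (drag), a[:,1] ~ -9.81 (gravity),
# a[:,2] is the lateral wind force per unit mass the solver is implying.
```

Running a real solver's exported x, y, z, t columns through `accelerations` and comparing the recovered Fy against the lag-rule prediction is exactly the reverse-engineering test described above.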


We've definitely been at this for a while... probably the last 7 years of my life have been dedicated to these studies. It all started with the One Shot program at Lockheed Martin back in 2007 and has continued since then. The ultimate objective was and always has been to develop an optical wind measurement system capable of achieving the highest probability of hit possible. It's all rolled into our system design.

At the end of the day, we've found that an MPM solver with a custom drag curve for the bullet is the most accurate means of getting drop and drift to match up. We've seen little difference between wind measured along the trajectory and wind measured along the line of sight via anemometers, and have done that experiment many times over.

Our objective is to give shooters the best possible wind measurement along the line of sight and an accurate ballistics solver. Given those two things, you're dead on and all of our prototypes and testing have shown that time after time.
 
The problem I have with directionally dependent drag is that there is no physical justification for it. If there were, it would be seen in a 6DOF model, and we just don't see that happening. Of course, when you use a 3DOF (AKA point mass) simplification, you will necessarily cover over some stuff in the drag function and miss a few things. This is why we see slight differences in the drag function at different stabilities, for example. It's why spin drift is missed by point mass. But these are all small effects. Until shown otherwise, I have to conclude that gross discrepancies between observed wind deflection and point mass predictions are due to uncertainty in the wind measurement or errors in the drag function (most likely in the transonic region).

In other words, I believe that if you get the drag function right (which may be dependent on specific conditions), you will not need to "true" your solver. "Truing" should consist of increasing the accuracy of your inputs, not fudging them to match observed data. So in that sense, I'm skeptical of software that applies custom tweaks to change the results. Madness lies in that direction.
 
The problem I see most often is that most guys just flat out can't read the wind. You throw an optical wind sensor out there or a set of anemometers and every single guy is surprised by what they *think* the wind is compared to what it really is...

I would pay for the opportunity to see just how bad at reading the wind I am. Sounds really cool. A few hours with a spotting scope on an instrumented range is probably the equivalent of a few months of shooting by trial and error.
 
The problem I have with directionally dependent drag is that there is no physical justification for it. If there were, it would be seen in a 6DOF model, and we just don't see that happening. Of course, when you use a 3DOF (AKA point mass) simplification, you will necessarily cover over some stuff in the drag function and miss a few things. This is why we see slight differences in the drag function at different stabilities, for example. It's why spin drift is missed by point mass. But these are all small effects. Until shown otherwise, I have to conclude that gross discrepancies between observed wind deflection and point mass predictions are due to uncertainty in the wind measurement or errors in the drag function (most likely in the transonic region).

In other words, I believe that if you get the drag function right (which may be dependent on specific conditions), you will not need to "true" your solver. "Truing" should consist of increasing the accuracy of your inputs, not fudging them to match observed data. So in that sense, I'm skeptical of software that applies custom tweaks to change the results. Madness lies in that direction.

I could not have said it better. You are right on.
 
I would pay for the opportunity to see just how bad at reading the wind I am. Sounds really cool. A few hours with a spotting scope on an instrumented range is probably the equivalent of a few months of shooting by trial and error.

Any time man! No need to pay. We have the range, we have the gear. It's easy to deploy it. It's all wireless and only takes us a bit to deploy. We're in Ohio - kind of near the Akron / Canton area. So feel free to PM me or email me at [email protected] and come out some time.
 
We've definitely been at this for a while... probably the last 7 years of my life have been dedicated to these studies. It all started with the One Shot program at Lockheed Martin back in 2007 and has continued since then. The ultimate objective was and always has been to develop an optical wind measurement system capable of achieving the highest probability of hit possible. It's all rolled into our system design.

At the end of the day, we've found that an MPM solver with a custom drag curve for the bullet is the most accurate means of getting drop and drift to match up. We've seen little difference between wind measured along the trajectory and wind measured along the line of sight via anemometers, and have done that experiment many times over.

OK, supposing we accept this at face value. But can you quantify for us what you mean by "most accurate" in terms of a typical (say rms) percent error on wind drift. Does "most accurate" mean an uncertainty of 1%, 5%, 10% or 20% in terms of wind drift error?

Also, have you looked at data that would represent a valid test of the hypothesis that heavier bullets drift less than light bullets, even with the same BC? If you've done extensive analysis of the 175 SMK for example, testing this hypothesis would require looking at a much heavier (300+ grains) and a much lighter (< 100 grains) bullet with the same BC.

Finally, have you looked at data that would represent a valid test of the hypothesis that the MPM model over-predicts wind drift in much thinner atmospheres? It would be very hard to test this hypothesis if you are only working at ranges from 1000-3000 ft in elevation.

One thing I've learned in science is to remain skeptical of arguments from authority that cannot be backed up by citing specific data representing a valid test of the hypothesis in question. Extrapolating the validity of the MPM wind drift model from extensive testing on the 175 SMK over a small range of air densities may not be appropriate to bullets of much different masses fired at much different air densities.

Our objective is to give shooters the best possible wind measurement along the line of sight and an accurate ballistics solver. Given those two things, you're dead on and all of our prototypes and testing have shown that time after time.

Claiming to be the "best possible" is underwhelming without specifying a specific level of accuracy that can be expected from the predicted wind drift, because there is no way to assess how the accuracy of your system compares with other systems.

At the end of the day, we've found that an MPM solver with a custom drag curve for the bullet is the most accurate means of getting drop and drift to match up.

But now that you've cast doubt on some MPM solvers, it would be helpful if you would cite some specific solvers that match the ones you've found to work (same outputs for same inputs). JBM? Berger? QuickTarget Unlimited? Otherwise, you have not really advanced the science, you've only made the claim of having a proprietary product that works better than all the others but cannot be tested without investing a lot of money in the proprietary product.

Development of black boxes that work is a fine example of capitalism. But you cannot simultaneously make a valid scientific claim of having validated McCoy's modified point mass wind drift calculation model.
 
OK, supposing we accept this at face value. But can you quantify for us what you mean by "most accurate" in terms of a typical (say rms) percent error on wind drift. Does "most accurate" mean an uncertainty of 1%, 5%, 10% or 20% in terms of wind drift error?

Also, have you looked at data that would represent a valid test of the hypothesis that heavier bullets drift less than light bullets, even with the same BC? If you've done extensive analysis of the 175 SMK for example, testing this hypothesis would require looking at a much heavier (300+ grains) and a much lighter (< 100 grains) bullet with the same BC.

Finally, have you looked at data that would represent a valid test of the hypothesis that the MPM model over-predicts wind drift in much thinner atmospheres? It would be very hard to test this hypothesis if you are only working at ranges from 1000-3000 ft in elevation.

One thing I've learned in science is to remain skeptical of arguments from authority that cannot be backed up by citing specific data representing a valid test of the hypothesis in question. Extrapolating the validity of the MPM wind drift model from extensive testing on the 175 SMK over a small range of air densities may not be appropriate to bullets of much different masses fired at much different air densities.



Claiming to be the "best possible" is underwhelming without specifying a specific level of accuracy that can be expected from the predicted wind drift, because there is no way to assess how the accuracy of your system compares with other systems.



But now that you've cast doubt on some MPM solvers, it would be helpful if you would cite some specific solvers that match the ones you've found to work (same outputs for same inputs). JBM? Berger? QuickTarget Unlimited? Otherwise, you have not really advanced the science, you've only made the claim of having a proprietary product that works better than all the others but cannot be tested without investing a lot of money in the proprietary product.

Development of black boxes that work is a fine example of capitalism. But you cannot simultaneously make a valid scientific claim of having validated McCoy's modified point mass wind drift calculation model.


Michael - please don't misconstrue what I'm saying. I'm not sure how you can say that I've cast doubt on some MPM solvers. In fact, a properly written MPM solver with a custom drag curve that also has spin, Coriolis, and aero-jump added into it is exactly what puts the rounds on target for drop and drift (given accurate wind inputs). Please don't misinterpret my discussion regarding "truing" as saying that MPM solvers don't work. That would be a false claim.

The great thing about our data is that we've collected it all over the country. I said *most* of our data was taken at 1000-3000 ft DA - not all of it. We've also collected data at between 5000 and 10,000 ft DA. Same results, in that an MPM with a good drag curve works perfectly. Also, I think you've misinterpreted what I said about MPM models and somehow mixed that in with the whole "truing" concept. Where most solvers go wrong is in how they implement the truing function. But I already spoke to that and how Solvers X and Y do it incorrectly. I won't release names of companies and products because I don't believe that publishing information on why another product doesn't work is a good strategy or a way to make friends in the industry.

It's great to be skeptical. Luckily, all of this has been well tested over the past 7 years in the government and had teams of people looking at the data. We've tested rounds from the 62gr to 300gr. It all works across calibers, bullet weights, etc. Give an MPM with the extensions that I mentioned an accurate drag curve and valid wind data and you've got one hell of a solution to get rounds on target.

Regarding this comment...

"Development of black boxes that work is a fine example of capitalism. But you cannot simultaneously make a valid scientific claim of having validated McCoy's modified point mass wind drift calculation model."

Our objective is simple: base everything we do on scientific principles and accurate data collection methods. We've published a ton of our work internally to the government and, now that things are being done on our own dollar, also to the public - hence the Modern Advancements in Long Range Shooting chapters. Further, we've gone a step beyond that: we've not only validated MPM models, we've implemented them, tested them, built products on them, and those products are being used by literally thousands of people daily. As Chris pointed out yesterday, this is all rolled up into a number of products that we have out there. For many of the tests that you wish to re-run, the wind sensor array is a COTS item that can be purchased from us. It's hardly a black box. I think I just gave the entire SH forum the information needed to implement the exact same thing! Ha ha!
 
But now that you've cast doubt on some MPM solvers, it would be helpful if you would cite some specific solvers that match the ones you've found to work (same outputs for same inputs). JBM? Berger? QuickTarget Unlimited? Otherwise, you have not really advanced the science, you've only made the claim of having a proprietary product that works better than all the others but cannot be tested without investing a lot of money in the proprietary product.

Development of black boxes that work is a fine example of capitalism. But you cannot simultaneously make a valid scientific claim of having validated McCoy's modified point mass wind drift calculation model.

Just a nitpick, but it's an important one. I'm sure you know this, but I'll repeat it in case anyone else has bothered to read down this far. Point Mass means something very specific. Any solver that is doing the weird things that were described by mil_coder is NOT a point mass solver. It's some weird kludgy mess. Point mass means solving a very specific set of equations which can be found in McCoy's book and elsewhere (Bob McCoy, brilliant as he was, did not invent the point mass method). That's all. You can fit them on a cocktail napkin. Frankly, it's not that hard to write a point mass solver correctly - you have to work at it to screw it up. Until I read this thread, I had assumed that they were all pretty much the same. Buyer beware, I suppose.

I know of three that work as they should - JBM, Applied Ballistics, and my own. I know this because of conversations I've had on these forums with the authors of those programs. (Interestingly, all three use a slightly different method to solve the point mass equations, but that's trivia). I'm sure there are others.

There is also a good bit of confusion that conflates the point mass equations with the drag function. They are separate things. You can use any arbitrary drag function with a perfectly sound Point mass solver and get errors as a result even if all your inputs are nailed down. G7 is wrong. G1 is even wronger. They're all wrong to a degree. One thing I've wondered about and am trying to work on in my spare time is to see if there is a better (than G7), but still universally applicable drag function, or if what is really needed is a custom drag function for every bullet. But I see a lot of folks throwing out the point mass method because they don't like the results, when the real culprit is a lack of adequate drag data, or worse yet, the basic inputs are just wrong.
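For anyone curious what "fits on a cocktail napkin" means in practice, here is a bare-bones sketch of a point mass integrator with the drag curve passed in as a free function, which also illustrates the separation being described: the solver and the drag function are independent pieces. The constant-Cd curve, SI units, bullet properties, and plain Euler stepping are illustrative simplifications, not how any named solver is implemented.

```python
# Sketch of the point mass (3DOF) equations with a pluggable drag function.
# The equations of motion: dv/dt = -(rho*A*Cd(M)/(2m)) * |v - w| * (v - w) + g,
# where w is the wind vector. Everything numeric below is a toy assumption.
import numpy as np

RHO = 1.225                        # sea-level air density, kg/m^3
G = np.array([0.0, -9.81, 0.0])    # gravity, m/s^2

def point_mass(v0, mass, area, cd_of_mach, wind, t_max, dt=1e-4, sos=340.0):
    """Integrate the point mass equations with a user-supplied drag curve."""
    pos = np.zeros(3)
    vel = np.array(v0, dtype=float)
    t = 0.0
    while t < t_max:
        vrel = vel - wind                  # drag acts on airspeed, not ground speed
        speed = np.linalg.norm(vrel)
        cd = cd_of_mach(speed / sos)       # the drag function is a free input
        acc = -(RHO * area * cd / (2 * mass)) * speed * vrel + G
        vel += acc * dt
        pos += vel * dt
        t += dt
    return pos                             # [downrange, drop, drift] in meters

# Toy example: ~175 gr bullet, constant Cd, 10 m/s pure crosswind, 1 s of flight
drop_drift = point_mass(
    v0=[800.0, 0.0, 0.0], mass=0.0113, area=5.07e-5,
    cd_of_mach=lambda m: 0.3, wind=np.array([0.0, 0.0, 10.0]), t_max=1.0)
```

Swapping `cd_of_mach` for a G7 table or a custom curve changes the predictions without touching the solver, which is exactly the point: errors blamed on "point mass" are often drag-function errors.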
 
Just a nitpick, but it's an important one. I'm sure you know this, but I'll repeat it in case anyone else has bothered to read down this far. Point Mass means something very specific. Any solver that is doing the weird things that were described by mil_coder is NOT a point mass solver. It's some weird kludgy mess. Point mass means solving a very specific set of equations which can be found in McCoy's book and elsewhere (Bob McCoy, brilliant as he was, did not invent the point mass method). That's all. You can fit them on a cocktail napkin. Frankly, it's not that hard to write a point mass solver correctly - you have to work at it to screw it up. Until I read this thread, I had assumed that they were all pretty much the same. Buyer beware, I suppose.

I know of three that work as they should - JBM, Applied Ballistics, and my own. I know this because of conversations I've had on these forums with the authors of those programs. (Interestingly, all three use a slightly different method to solve the point mass equations, but that's trivia). I'm sure there are others.

There is also a good bit of confusion that conflates the point mass equations with the drag function. They are separate things. You can use any arbitrary drag function with a perfectly sound Point mass solver and get errors as a result even if all your inputs are nailed down. G7 is wrong. G1 is even wronger. They're all wrong to a degree. One thing I've wondered about and am trying to work on in my spare time is to see if there is a better (than G7), but still universally applicable drag function, or if what is really needed is a custom drag function for every bullet. But I see a lot of folks throwing out the point mass method because they don't like the results, when the real culprit is a lack of adequate drag data, or worse yet, the basic inputs are just wrong.

Yep - you are right on. And I am intimately familiar with AB's solver (obviously) and JBM as well. Based upon your knowledge and your web site, I imagine that you got it right too.

At the end of the day - definitely buyer beware regarding how any ballistics app is applying truing corrections. That is where they tend to go awry.
 
Same results, in that an MPM with a good drag curve works perfectly.

Wonderful, maybe I missed it. Did you actually give a (typical, say rms) percent uncertainty on the wind drift predictions?

1% would be very, very good. Perfect would be 0%, and I don't think you really mean that.

It's great to be skeptical. Luckily, all of this has been well tested over the past 7 years in the government and had teams of people looking at the data. We've tested rounds from the 62gr to 300gr. It all works across calibers, bullet weights, etc. Give an MPM with the extensions that I mentioned an accurate drag curve and valid wind data and you've got one hell of a solution to get rounds on target.

We've published a ton of our work internal to the government ...

Publishing means making the work product generally available to the public. Internal distributions within an organization, even one as big as the government, do not constitute publication.

I've been a peer reviewer on dozens of papers over the last few years, and the distinction between internal reports (which cannot be cited in journal articles) and published papers (which can be cited in journal articles) is common. To be cited in a published paper, a scholarly work must itself be published (available to the general public). Materials with limited distributions are not really published.

In DoD speak, the right words are "Approved for Public Release. Distribution Unlimited."

The distinction is essential in science because it determines the scope of opportunity for scrutinizing and replicating the results. If the arbiter of scientific truth is repeatable experiment, the scope of readers with an opportunity to repeat the experiments is essential. Government does not like to pay for experiments it has already paid for to be repeated, which is reasonable. But the result is that one cannot generally have the same confidence in results whose details are only distributed to government employees.

Try a Google search for something like

EPA refuses to release data

to see how far government can get from good science when the general public is denied access to the data from which conclusions are drawn.
 
I know of three that work as they should - JBM, Applied Ballistics, and my own. I know this because of conversations I've had on these forums with the authors of those programs. (Interestingly, all three use a slightly different method to solve the point mass equations, but that's trivia). I'm sure there are others.

This is the specific guidance I was hoping for. Now it is straightforward to compare the outputs of these three solvers against each other for a variety of inputs.

Once these are validated by comparison, it will be a simple matter to compare other solvers against them when people report prediction errors. There are a lot of solvers popping up on mobile devices, as well as old standards like QuickTarget and Sierra's Infinity.

Before taking to the field with confidence with a new mobile solver, it would be advisable to test it against something proven and trusted like JBM for several sets of input values.
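A minimal sketch of such a cross-check, with hypothetical stand-in drop tables (in real use, the numbers would be pasted in from JBM's output and from the solver under test, for identical inputs):

```python
# Sketch: compare two solvers' outputs for the same inputs and flag where
# they diverge. The tables below are made-up placeholders, not real data.

def percent_difference(reference, candidate):
    """Worst-case absolute % difference, skipping near-zero reference values."""
    worst = 0.0
    for ref, cand in zip(reference, candidate):
        if abs(ref) > 1e-9:
            worst = max(worst, abs(cand - ref) / abs(ref) * 100.0)
    return worst

# Hypothetical drop tables (inches) at 100-yd increments from two solvers
trusted = [0.0, -2.1, -8.7, -20.3, -37.9]   # e.g., exported from JBM
mobile  = [0.0, -2.1, -8.8, -20.5, -38.4]   # e.g., the app being vetted

tolerance_pct = 2.0   # acceptance threshold is a judgment call
print(percent_difference(trusted, mobile))  # worst-case disagreement in %
```

Repeating the check across several velocities, BCs, and atmospheres gives far more confidence than a single matching table, since implementation bugs often show up only in corner cases like the transonic region.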
 
Wonderful, maybe I missed it. Did you actually give a (typical, say rms) percent uncertainty on the wind drift predictions?

1% would be very, very good. Perfect would be 0%, and I don't think you really mean that.

Publishing means making the work product generally available to the public. Internal distributions within an organization, even one as big as the government, do not constitute publication.

I've been a peer reviewer on dozens of papers over the last few years, and the distinction between internal reports (which cannot be cited in journal articles) and published papers (which can be cited in journal articles) is common. To be cited in a published paper, a scholarly work must itself be published (available to the general public). Materials with limited distributions are not really published.

In DoD speak, the right words are "Approved for Public Release. Distribution Unlimited."

The distinction is essential in science because it determines the scope of opportunity for scrutinizing and replicating the results. If the arbiter of scientific truth is repeatable experiment, the scope of readers with an opportunity to repeat the experiments is essential. Government does not like to pay for experiments it has already paid for to be repeated, which is reasonable. But the result is that one cannot generally have the same confidence in results whose details are only distributed to government employees.

Try a Google search for something like

EPA refuses to release data

to see how far government can get from good science when the general public is denied access to the data from which conclusions are drawn.

When you're ready to get back to the technical discussions, give me a holler. I've gotta get back to making real products.
 
Before taking to the field with confidence with a new mobile solver, it would be advisable to test it against something proven and trusted like JBM for several sets of input values.

This appears wise. I would test any mobile solver not called Applied Ballistics against JBM before trusting the numbers. JBM is great because it offers a truckload of inputs, so you can really see what's going on. The interface to mine is very simple by comparison, and not nearly as flexible - it's meant more for education. Under the covers, they're more or less the same thing. JBM also publishes an older version of the actual code used, so you can check out how it works in detail.
 
Development of black boxes that work is a fine example of capitalism.

You want a fine example of capitalism, it's getting research funded by the US Air Force to study varmint bullets.

Government does not like to pay for experiments to be repeated that it has already paid for, which is reasonable.

But your business model relies on this. You actually convinced the govt to fund your research into high altitude aerodynamics (on varmint bullets of course); in particular the 'unknown' effects of Reynolds number on drag.

Trust me there is little about high altitude supersonic aerodynamics that is unknown these days, in fact for the last several decades. List of X-planes - Wikipedia, the free encyclopedia

Your line of questioning about the equations of motion is the same pattern. Just because you don't personally know how wind deflects a bullet doesn't mean you're entitled to our tax dollars to run experiments designed to verify that F=ma (Newton's second law). As Nick has explained, lots of $ has been spent researching wind for small arms, and a lot has been learned. It's been established that the MPM equations of motion are in fact valid, and the challenge lies in producing accurate inputs, most importantly wind measurement. Your lack of knowledge in this area doesn't mean you're entitled to taxpayers' money to conduct redundant experiments to educate yourself. "Modern Advancements in Long Range Shooting" has been available for quite some time now. Most of what Nick has explained here is published in that book.

Continuing to insist that documents are only valid if they pass thru your specific peer review is a good way to shake money loose from the government. But that has more to do with capitalism than good science.

-Bryan
 
When you're ready to get back to the technical discussions, give me a holler. I've gotta get back to making real products.

Is there anything more important to a technical discussion than the accuracy to which a model has been validated?

Wonderful, maybe I missed it. Did you actually give a (typical, say rms) percent uncertainty on the wind drift predictions?

So the data is not really published, you say the wind drift predictions are accurate, you talk about seven years of effort and so on.

But you won't say how accurate the comparisons between predictions and data have been found to be?

The defining feature of drag is not drop or drift, it's velocity loss.

Real validation of MPM solvers needs to be by determining drag coefficients from velocity loss measurements, and then using these same drag coefficients to predict drop and wind drift.

It seems that your "validation" amounted to being able to determine drag coefficients that yield consistent drop and drift, without regard for whether these drag coefficients are also consistent with velocity loss measurements at the relevant Mach numbers.
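As a sketch of what that velocity-loss path looks like, here is the inversion for a constant drag coefficient under a simple v-squared drag law; the muzzle/downrange velocities and bullet properties are made-up illustrative numbers, and a real workflow would fit a full Cd-vs-Mach curve from Doppler radar or chronograph pairs rather than a single constant.

```python
# Sketch: recover a drag coefficient from a measured velocity loss over a
# known distance. For drag a = -k*v^2 with k = rho*A*Cd/(2m), velocity
# decays with distance as v(x) = v0 * exp(-k*x); invert that for Cd.
import math

def cd_from_velocity_loss(v0, v1, distance, mass, area, rho=1.225):
    """Solve v1 = v0*exp(-k*distance) for k, then convert k to Cd."""
    k = math.log(v0 / v1) / distance
    return 2 * mass * k / (rho * area)

# Hypothetical measurement: 800 m/s at the muzzle, 700 m/s at 300 m,
# for a ~175 gr bullet (0.0113 kg) with an assumed reference area
cd = cd_from_velocity_loss(800.0, 700.0, 300.0, mass=0.0113, area=5.07e-5)
print(cd)
```

A Cd recovered this way, plugged back into the same solver, must then reproduce the observed drop and drift without further tweaking; that closed loop is what distinguishes validation from curve fitting.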
 
You want a fine example of capitalism, it's getting research funded by the US Air Force to study varmint bullets.

And winning a research award in the process.

But your business model relies on this. You actually convinced the govt to fund your research into high altitude aerodynamics (on varmint bullets of course); in particular the 'unknown' effects of Reynolds number on drag.

We never claimed the effects of Reynolds number on drag were unknown. We claimed that the approximation that McCoy made in neglecting effects of Reynolds number had not been validated in the data he used from the BRL spark range, and that the accuracy of the resulting approximation had not been tested experimentally.

Your line of questioning about the equations of motion is the same pattern. Just because you don't personally know how wind deflects a bullet doesn't mean you're entitled to our tax dollars to run experiments designed to verify that F=ma (Newton's second law).

Do not confuse not being certain of the accuracy of an explanation with not understanding it. I understand the McCoy model. I believe F = ma. The main issue is the accuracy of the approximation neglecting all other lateral forces other than the lateral component of the forward drag force after the bullet has aligned with the air flow. A secondary issue is the delay in the bullet nose realigning with the air flow as the air flow changes.

As Nick has explained, lots of $ has been spent researching wind for small arms, and a lot has been learned. It's been established that the MPM equations of motion are in fact valid, and the challenge lies in producing accurate inputs, most importantly wind measurement.

What does "valid" mean in this context? Does your claim of experimental validation mean predictions accurate to 1%, 5%, 20%?

Should not hearers be skeptical of claims of a model's validity when those claiming experimental confirmation do not specify the level of accuracy to which the model has been validated?
 
What does "valid" mean in this context? Does your claim of experimental validation mean predictions accurate to 1%, 5%, 20%?

It's as accurate as F=MA. It's a direct calculation. The only uncertainty is knowing the actual wind, and that's a measurement challenge.

Should not hearers be skeptical of claims of a model's validity when those claiming experimental confirmation do not specify the level of accuracy to which the model has been validated?

Michael, more than 7 years worth of experimental information has been shared on a public forum, for free, by someone who's been working in the field for over 7 years.

You, on the other hand HAVE NO DATA.

Nit-picking the experts with questions about the % error in the vast experience they're freely sharing is a very poor response. If you have contradictory data (or any data at all) please share. Otherwise just say thank you and move on.
 
Bro, just quit feeling sorry for yourself. There's a reason why us mil guys gravitate to dudes like Bryan and Nick - not because they're experts (which they are) but because of their character and ability to put great product and knowledge in our hands and allow us to execute our job.

Go back to work.
 
It's as accurate as F=MA. It's a direct calculation. The only uncertainty is knowing the actual wind, and that's a measurement challenge.

Engineers and physics students hardly ever mess up F = ma. Their mistake is almost always failing to properly include all the forces, because F in the equation ALWAYS means a sum over all the forces, yielding a net force, which is why I prefer to write Fnet = ma.

The approximation you are using is not as accurate as F = ma, it is as accurate as Fnet = F(lateral component of aerodynamic drag), as you have it labeled in Figure 5.6 of Litz, 2009.

My skepticism grows when purported scientists confound citing actual published data with citing expert opinion which is purportedly based on private data. My skepticism also grows when scientists claim to have "perfect" agreement but are unwilling or unable to specify an accuracy level of agreement between predictions and experiment.

The history of science has too many examples of errors coming to light later when experiments are repeated or the private data becomes available. Science is more about verifying by comparison between predictions and experimental data than about trusting the experts who claim they have compared the predictions and data.

For you, the accuracy of the prediction seems like a given, a strongly held presupposition (as accurate as F = MA). In these cases, there is a tendency to ascribe differences between the predictions and the measurements as being caused by some kind of measurement error or uncertainty. This can lead to confirmation bias where the cases where data disagrees with predictions are discarded as glitches, and cases where data agrees with predictions are held up as exemplars. Actually comparing ALL the original data with the predictions is necessary to distinguish cases where the expert's opinion is based in confirmation bias and cases where the expert's opinion is solidly supported by the data.
 
My skepticism grows when purported scientists confound citing actual published data with citing expert opinion which is purportedly based on private data. My skepticism also grows when scientists claim to have "perfect" agreement but are unwilling or unable to specify an accuracy level of agreement between predictions and experiment.

My skepticism grows when I see your name.

A bullet aligning its axis with the oncoming airflow is simply what happens with stable projectile flight. A crosswind acts to mis-align the bullet's axis and drag vector with the line of sight. From here it's simple vector math to show what portion of the total bullet drag is directed perpendicular to the line of sight.
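For readers who want to see the vector math spelled out, here is a minimal sketch of that decomposition. The 2800 fps velocity and 10 mph crosswind are illustrative numbers only, not taken from any post in this thread:

```python
import math

def lateral_drag_fraction(v_bullet_fps, crosswind_fps):
    """Fraction of total drag directed perpendicular to the line of
    sight once the bullet has aligned with the oncoming airflow
    (the sine of the angle between the drag vector and the line of sight)."""
    v_rel = math.hypot(v_bullet_fps, crosswind_fps)  # airspeed relative to bullet
    return crosswind_fps / v_rel

# Example: 2800 fps bullet in a 10 mph (~14.7 fps) crosswind
print(lateral_drag_fraction(2800.0, 14.7))  # ~0.0052, about half a percent of drag
```

Note the fraction is small because the crosswind is tiny compared to the bullet's speed, which is why the lateral force is modeled as a small component of the total drag.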

Your insistence that this all needs 'verified' is literally insisting that vector math needs 'verified'.

Of course it's easy to understand why you would insist on this point, because as long as you can complicate things enough, it gets you PAID! Not by customers who find your information useful, but by the government which is too big to effectively police fraud waste and abuse.
 
Do not confuse not being certain of the accuracy of an explanation with not understanding it. I understand the McCoy model. I believe F = ma. The main issue is the accuracy of the approximation neglecting all other lateral forces other than the lateral component of the forward drag force after the bullet has aligned with the air flow. A secondary issue is the delay in the bullet nose realigning with the air flow as the air flow changes.

The 6DOF model does not neglect other forces. There is literally decades of research on this. What we know is that the 3DOF model does a pretty good job of approximating the overall 6DOF trajectory (excepting drift, Coriolis, etc.) - *when the yaw is small*. We know that yaw is small when we fire rifles at targets in the normal horizontal manner. If you want to shoot nearly straight up or in a tornado, then by all means, question the point mass method. But even then, the answer was figured out decades ago - with a 6DOF model. It answers all your questions.
 
The 6DOF model does not neglect other forces. There is literally decades of research on this. What we know is that the 3DOF model does a pretty good job of approximating the overall 6DOF trajectory (excepting drift, Coriolis, etc.) - *when the yaw is small*. We know that yaw is small when we fire rifles at targets in the normal horizontal manner. If you want to shoot nearly straight up or in a tornado, then by all means, question the point mass method. But even then, the answer was figured out decades ago - with a 6DOF model. It answers all your questions.

Sort of. For the bullets where all the parameters required for the 6dof model have been accurately measured at the BRL spark range or one of the other rare facilities capable of these measurements.

Truing a BC is essentially adding a fudge factor, because you are not even taking the drag coefficients from velocity loss measurements, but rather adjusting them to a value agreeing with drop and/or drag.

Fudge factor - Wikipedia, the free encyclopedia

We've all seen physics teachers and engineers make hand waving arguments that the additional forces acting on a system are small and can safely be neglected. This often works and provides adequate accuracy, but it is recognized as an approximation.

The approximation of the MPM method is that the only aerodynamic force on the bullet is the drag force given by the Cds and acting exactly in the direction opposite the fluid flow around the bullet. Therefore, the wind deflection force is exactly the lateral component of this force computed by vector analysis in the usual way.

The bit of hand waving is the claim that "all other aerodynamic forces are small by comparison" or, as some have put it, "exactly zero." My experience as a physicist tells me completely ignoring all the other forces seldom yields an exact result, and increasing the experimental accuracy a bit usually leads to the discovery and quantification of new effects.

Take for example, the approximation that Cds are independent of air density for a given bullet. However, it is well known that skin friction can change by 5-8% with a 30% change in air density. The "apparent" independence of Cds on air density depends on skin friction being a relatively small (~10%) contribution to the overall Cd. As we improve our ability to measure Cds from 1% to 0.3%, we will begin to see the effects of skin friction (thus total Cds) changing with air density.

Both 6dof and MPM models ignore the possibility of skin friction and base drag varying with air density, and that approximation is likely good for measuring Cds at one air density and using them at others to 1-2%.
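One well-known consequence of the drag-only assumption described above is that, for flat fire, the point-mass wind deflection reduces to the classic lag rule: deflection equals crosswind speed times the lag time (actual time of flight minus the vacuum time of flight over the same range). A minimal sketch, with made-up but plausible numbers:

```python
def lag_rule_deflection(crosswind_fps, tof_s, range_ft, muzzle_vel_fps):
    """Flat-fire wind deflection from the point-mass (MPM) model:
    crosswind speed times the lag time, i.e. the actual time of
    flight minus the vacuum time of flight over the same range."""
    lag_s = tof_s - range_ft / muzzle_vel_fps
    return crosswind_fps * lag_s  # deflection in feet

# 1000 yd (3000 ft), 2800 fps muzzle velocity, 1.45 s time of flight,
# 10 mph (~14.67 fps) crosswind -- all illustrative values
print(lag_rule_deflection(14.67, 1.45, 3000.0, 2800.0))  # ~5.55 ft
```

Any correction to the drag model feeds into this only through the time of flight, which is one way to see why small Cd errors produce small drift errors.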
 
I like the hornet's nest I have stirred ... :)

One point of clarification, in regards to errors in wind calls and "actual" wind holds. We (me) have access to several things: the wind I am reading with a Kestrel at the Shooter, my experience adjusting that reading for terrain (especially out here in CO), the distance shot (Max Ord increases with velocity), and then the solution provided by a minimum of 2 solvers (I always run more than 1). By reading the wind to better than 1 MPH at the shooter (and I read it for a minimum of 2 minutes to establish a High, Low and Avg), I then take the actual wind hold used to hit the target and match that information up to my Kestrel wind reading and finally the solvers. This is part of my method for training not only me but any student to start calibrating themselves to the wind. So this makes the wind call errors not really a factor, because it is tested after the fact. (Although it can easily be done beforehand and is on many occasions.)
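The High/Low/Avg reduction of a two-minute wind log is easy to sketch; the readings below are hypothetical, not actual data from this post:

```python
def wind_summary(samples_mph):
    """Reduce a logged series of wind readings to the High, Low,
    and Average values used to bracket the wind call at the shooter."""
    return max(samples_mph), min(samples_mph), sum(samples_mph) / len(samples_mph)

log = [6.2, 7.8, 5.4, 9.1, 7.0, 6.6]  # hypothetical Kestrel readings over 2 min
hi, lo, avg = wind_summary(log)
print(hi, lo, round(avg, 2))  # 9.1 5.4 7.02
```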



I use a variety of tools, calibrated flags etc, because during an average week I am on the range 2 to 3 days at a minimum learning. I am working the problem to be able to better teach those who cannot put in that much time. So I want to understand where the errors are, and that includes in the wind calls.



Also it should be noted with Both FFS and ColdBore I have the ability to input multiple wind zones which is key, and it is not uncommon for me to take the results, (hits on target) and work the software backwards to see how the terrain here influenced the call.

In the interest of discussion and clarification, I have a private range and a lot of time, pretty much all day, to work with. I have both paper and steel on my range with the ability to shoot well beyond a mile. Even with rifles with known results and data, I still record everything; I always use my databook and computers.


My interest is in results, and if I can glean why something is the way it is, the better instructor I will be. I can then answer that question as to "why", which is my goal. I will say the bias towards certain solvers does hurt the overall movement to get better results. Many of the conclusions about the engine behind a piece of software like ColdBore are incorrect, and the fact is the DK Adjustment does work on both ends, not just one. However, that is another discussion.

As an example, if I run the numbers with Shooter & JBM they are the same; move to AB and the number is more in line with ColdBore but not exact, very close but off just enough to notice. And finally, I think the guys with the PM or MPM are purposely going to the old definition of G1, which is why that gets the reputation. My question there would be, why does everyone ignore the Ingalls part of that G model? It appears to me everyone leapfrogs the work done prior to 7 years ago and reverts to the original instead of that modification.

Truly, when you look at the last 7 to 10 years, the changes to the Precision Rifle world are gigantic, and Bryan (w/ Nick) is a huge part of that. In the 50 years before them the line was moved maybe 10 ft; now we have 100 ft gains every 6 months. That has to be acknowledged, and while the results on the ground may not agree with everything, it does make things much easier. I personally like the fact the new Rianov units will not give you a solution unless you include a value for every data point. That includes Latitude and Azimuth, so it forces the end user to have a better understanding by making you get the details of the bullet before you can use the unit. As well, I think Bryan's Custom Curves are the next big step and believe they should be traded like playing cards. We should be able to swap and insert the Curve the same way you add in the BC currently. Imagine if, like today, we could tell a new shooter,

"With a 175gr SMK going 2650fps use .496 (or band) instead of .505," or now with Custom Curves, we just email you our custom curve, install and drive on... because clearly the bigger errors I see are in the shooter more so than the BC, or in the barrel, because most of us don't know the actual twist down to 2 decimal places. So if the difference in BC matters, how can the other parts of the system not? Even with scopes, you have to test clicks and most don't; we already know that is a big part of the error factor in solvers. Sure the option is there, but how many actually use it? Nope, I have zero issue with Berger, Bryan, or anyone moving the ball down field, but I think the focus is ready to move towards the wind being a separate value to true. Much like Gerald Perry used to include a dialog in ExBal for "Shooter's Drift" where you shot 600 yards, recorded the offset and used that in place of SD, because we clearly make up part of that error.

Back to you rocket scientist types, ;)
 
Michael,

Let's consider your most recent post as an example of your pattern to make misleading and exaggerated statements.

Skin friction is affected by Reynolds number, which is affected by altitude. On this we agree. I recall running aero models on air-to-air missiles in the Air Force in which we modeled this effect (because it's well known). But here's the thing: the effect doesn't become significant (to missiles) until you get above 20,000 feet. Not many prairie dogs up there.

To bring this back to the realities of LR rifle shooting, let's revisit your statement:
Take for example, the approximation that Cds are independent of air density for a given bullet. However, it is well known that skin friction can change by 5-8% with a 30% change in air density.
You have to go above 10,000 feet before air density changes by 30%.

And this only gets you "5-8% change" in skin friction, which is only ~10% of overall drag.

So your entire argument to discredit MPM solvers as 'approximate' is based on an uncertainty of 5-8% of a 10% component, which is 0.5% to 0.8%. And that's the worst-case scenario, going over 10,000 feet altitude. For more average altitudes, the change in air density is much less, so you're looking at well under 0.5% in most cases.
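The arithmetic of that worst case, spelled out using the figures quoted above (an 8% change in skin friction applied to skin friction's ~10% share of total drag):

```python
# Worst-case contribution of density-dependent skin friction to total drag,
# per the figures quoted in this exchange: an 8% skin friction change
# (from a 30% air density change) on a ~10% share of total Cd.
skin_friction_change = 0.08
skin_friction_share = 0.10
cd_effect = skin_friction_change * skin_friction_share
print(f"{cd_effect:.1%}")  # 0.8% of total drag, worst case
```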

It would be a trivial matter to model this effect in a standard MPM model, if it were considered relevant. However, other effects of this magnitude include variations in the earth's gravitational field at different altitudes. You should probably set up some government-funded projects to work out how gravity diminishes with altitude, BTW. Or tidal forces, don't forget about those.

Anyway I just wanted to highlight for the benefit of the readers, that what you're making a big deal about is literally a fraction of 1% for altitudes where humans can live without life support.

I think any shooter can tell you that uncertainty in the wind field is much much greater than a fraction of 1%. And yet you've marginalized 7 years worth of serious R&D on wind measurement because the guy didn't provide you with exact error margins.

I'm just providing the rest of the story.
 
Mike, you shouldn't have posted the same thing on multiple forums. Seems a little hostile as well as personal.
 
Sort of. For the bullets where all the parameters required for the 6dof model have been accurately measured at the BRL spark range or one of the other rare facilities capable of these measurements.

Truing a BC is essentially adding a fudge factor, because you are not even taking the drag coefficients from velocity loss measurements, but rather adjusting them to a value agreeing with drop and/or drag.

Fudge factor - Wikipedia, the free encyclopedia

We've all seen physics teachers and engineers make hand waving arguments that the additional forces acting on a system are small and can safely be neglected. This often works and provides adequate accuracy, but it is recognized as an approximation.

The approximation of the MPM method is that the only aerodynamic force on the bullet is the drag force given by the Cds and acting exactly in the direction opposite the fluid flow around the bullet. Therefore, the wind deflection force is exactly the lateral component of this force computed by vector analysis in the usual way.

The bit of hand waving is the claim that "all other aerodynamic forces are small by comparison" or, as some have put it, "exactly zero." My experience as a physicist tells me completely ignoring all the other forces seldom yields an exact result, and increasing the experimental accuracy a bit usually leads to the discovery and quantification of new effects.

Take for example, the approximation that Cds are independent of air density for a given bullet. However, it is well known that skin friction can change by 5-8% with a 30% change in air density. The "apparent" independence of Cds on air density depends on skin friction being a relatively small (~10%) contribution to the overall Cd. As we improve our ability to measure Cds from 1% to 0.3%, we will begin to see the effects of skin friction (thus total Cds) changing with air density.

Both 6dof and MPM models ignore the possibility of skin friction and base drag varying with air density, and that approximation is likely good for measuring Cds at one air density and using them at others to 1-2%.

Here's an idea. Why don't you modify a drag function to account for air density, and run the models yourself. Then come back and tell us the differences you find. I already know what will happen. Not because I have done it, but because, as Newton said, "I've stood on the shoulders of giants". But you seem intent on having someone prove to you the basics of what has already been hashed out over the last 150 years by a lot of dedicated, smart folks.
 
To bring this back to the realities of LR rifle shooting, let's revisit your statement: You have to go above 10,000 feet before air density changes by 30%.

And this only gets you "5-8% change" in skin friction, which is only ~10% of overall drag.

So your entire argument to discredit MPM solvers as 'approximate' is based on an uncertainty of 5-8% of a 10% component, which is 0.5% to 0.8%. And that's the worst-case scenario, going over 10,000 feet altitude. For more average altitudes, the change in air density is much less, so you're looking at well under 0.5% in most cases.

Leave it to a rocket scientist to confuse one example with my "entire argument."

No doubt that the skin friction issue is likely < 0.5% for most shooters.

But I have discussed a number of sources of uncertainty in addition to the skin friction issue.

If you add up 10 different sources of uncertainty at 0.5% or so, is your overall uncertainty still 0.5%?

I expect the accuracy of experimental validation for the wind drift predictions of MPM is a lot bigger than 0.5%.
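The standard answer to the question of adding up many small error sources is to combine independent uncertainties in quadrature (root-sum-square). A minimal sketch:

```python
import math

def rss(uncertainties):
    """Combine independent, uncorrelated error sources in quadrature
    (root-sum-square), the standard rule for propagating uncertainty."""
    return math.sqrt(sum(u * u for u in uncertainties))

# Ten independent sources of ~0.5% each
print(rss([0.005] * 10))  # ~0.0158, i.e. about 1.6%, not 0.5%
```

So ten 0.5% sources combine to roughly 1.6% if independent, and up to 5% in the (unlikely) fully correlated case where they add linearly.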
 
I like the hornet's nest I have stirred ... :)
And finally I think the guys with the PM or MPM are purposely going to the old definition of G1, which is why that gets the reputation. My question there would be, why does everyone ignore the Ingalls part of that G model? It appears to me everyone leapfrogs the work done prior to 7 years ago and reverts to the original instead of that modification.

I think this is just a definition/semantics thing. If you use a G1 drag function in a point mass solution, you pretty much use the Official Version of the G1. If you use a G1 BC in a Pejsa solver, and then combine it with a retardation factor or whatever it is he calls it (Pejsa isn't really my thing - forgive the nomenclature), then, you're not *really* using a G1 anymore. You are using the equivalent of a custom drag function in a point mass solver. Two different ways to skin a cat. The point mass method just lets you stay a little closer to the physics, which in the long run, I think promotes better understanding.

Any smart person wanting to use point mass will want the best drag function money can buy - call it whatever you want.
 
I expect the accuracy of experimental validation for the wind drift predictions of MPM is a lot bigger than 0.5%.

And yet,

As usual,

YOU HAVE NO DATA

to support your statement.

Leave it to an MIT PHD to keep arguing about fractions of a % when actual shooting shows that uncertainty in the wind field is the dominant reason why we miss targets.
 
Why would I (or anyone) really want to use a G function from 100 years ago? So clearly the Custom Curve / Custom Drag Function is a better way to address modern bullets. As has been hashed out, up until 10 years ago nobody cared about G7; it was unnecessary, we used the Ingalls G1 and it worked perfectly. (Still does, clearly.) The difference between using G1 & G7 with PM or a MPM is either very little or none at all... so the idea that we keep harping back to so long ago with this is a bit strange to me. Especially since it is easily demonstrated that Banding or using a Custom Curve is so much better.

With guys like Bartlein making barrels on modern machines, and powders allowing me to push a 185gr Berger Juggernaut out of a 22" 308 to 2700fps+ with no pressure signs, clearly the old models are obsolete... why continue to defend them?

A question: can you account for my Gain Twist barrels in PM? Or do you need the gain averaged out?
 
Frank,

The answer to your question about why we continue to use BC's (G1 or G7) is two-fold IMO.

1. It's a single number which sums up bullet performance pretty well and allows for comparison. If you have a .30 cal 175 grain bullet with a BC of .475, and another .30 cal 175 grain bullet with a BC of .510, it's easy to tell which is the better performer (assuming they were both established in the same way). When you go to custom curves or DK tables, you lose the ability to quickly represent performance with a single number.
2. Standardization. You can take any properly written PM solver, enter a single averaged BC number, and it will apply the drag curve correctly and produce accurate trajectory predictions, assuming everything else is right. When you move to DK factors or retardation coefficients, there is no library of these available. A shooter has to fire many rounds before the solver becomes useful as a predictive tool.
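The single-number comparison in point 1 works because a BC is just sectional density divided by form factor. A sketch of that relationship; the two form factors below are invented for illustration, not measured values for any real bullet:

```python
def ballistic_coefficient(weight_gr, diameter_in, form_factor):
    """BC as sectional density divided by form factor (i).
    Sectional density = weight in pounds / diameter squared (inches)."""
    sectional_density = (weight_gr / 7000.0) / diameter_in ** 2
    return sectional_density / form_factor

# Two hypothetical .30 cal 175 gr bullets; only the form factor differs
print(round(ballistic_coefficient(175, 0.308, 1.05), 3))  # blunter shape, lower BC
print(round(ballistic_coefficient(175, 0.308, 0.98), 3))  # sleeker shape, higher BC
```

Since the two bullets share the same sectional density, the single BC number directly ranks their drag performance, which is the comparison being described.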

My objective is to provide not only the solver, but equally important the DATA to drive it accurately. The library of BC's which I've measured is at 300 bullets now (I'm going for 400+ by winter), all which have custom drag curves available as well. You can run these BC's and custom drag models in any AB solver (AB Analytics, AB Kestrel, and the various smartphone apps).

The custom drag models typically show more accuracy through transonic, where drag is most 'bullet dependent'. Up to that point (where most shooters shoot) an average G7 BC will get you within a click (if everything else is right).

For shooters who prefer to massage their drag model by shooting, the advantage of a standardized PM solver running standard data isn't as important. But for the guy who just wants to pick something up and have it work from the first shot, standardization is key. Nothing wrong with the Pejsa based solutions other than users just have to know they can't expect my library or any other standard BC's to work with them because it's a different kind of solver. Same with the custom drag models; they only work with PM solvers.

-Bryan
 
Why would I (or anyone) really want to use a G function from 100 years ago? So clearly the Custom Curve / Custom Drag Function is a better way to address modern bullets. As has been hashed out, up until 10 years ago nobody cared about G7; it was unnecessary, we used the Ingalls G1 and it worked perfectly. (Still does, clearly.) The difference between using G1 & G7 with PM or a MPM is either very little or none at all... so the idea that we keep harping back to so long ago with this is a bit strange to me. Especially since it is easily demonstrated that Banding or using a Custom Curve is so much better.

With guys like Bartlein making barrels on modern machines, and powders allowing me to push a 185gr Berger Juggernaut out of a 22" 308 to 2700fps+ with no pressure signs, clearly the old models are obsolete... why continue to defend them?

A question: can you account for my Gain Twist barrels in PM? Or do you need the gain averaged out?

Well, you wouldn't. Only if the old function matched your bullets. But that's what's published, so that's what we use. It's not ideal. G7 is better, as are other functions. Best yet, full custom.

As for gain twist, now you're getting into the dependence of the drag function on other factors. This is a real thing. Bryan's newest book does a good job of explaining that, for example, the twist rate impacts BC slightly. But what that really means is that it changes the drag function. (When discussing custom drag functions, it's best to drop the concept of a BC, because you simply don't need it anymore. Everything is in the drag function.)

So drag function ideally would be a constant thing and gain twist wouldn't change it. But since we know that the point mass does not account for spin and yaw, except very crudely via the drag function, we have to accept that the drag function is subject to change based on the things that point mass ignores - such as twist (and, yes, even air density). This points us in the direction of your custom function trading cards, but the trouble is they'll not be easy to figure out, and they'll all be very similar for a given bullet.

I don't know what gain twist would do to ballistics - in the end, what you care mostly about is the spin rate at the muzzle. I suppose there would be some slight differences in the rifling marks, which already have a very small but still measurable impact on ballistics. You might also see some differences in initial yaw due to the changing rifling twist, but I doubt it. So is it conceivable that gain twists have a slight impact on exterior ballistics? Sure, but I bet it would be small.

Basically, if you want to get that nitpicky with drag functions, you'd need a separate one for each rifle/load/condition combo, and that would require collecting lots of data for each. In the end, I would guess that you wouldn't see a ton of differences for the same bullets. That said, I bet you could see a noticeable difference between a carefully worked up drag function for a Berger 215 Hybrid as opposed to just using a G7. I'm less optimistic that we could measure the difference between a gain twist 1:10 and a regular 1:10.
 
Leave it to an MIT PHD to keep arguing about fractions of a % when actual shooting shows that uncertainty in the wind field is the dominant reason why we miss targets.

And leave it to an engineer to keep asserting that the dominant source of uncertainty is the only source of uncertainty.

What are the uncertainties in the Cds of the 175 SMK between M1.0 and M1.5?

Have they even really been measured in velocity loss experiments?

Or are they extrapolated from Cd measurements at higher Mach numbers?

Or are they guessed at from drop and drift measurements?

How can the drift predictions be more accurate than the Cds used to predict them?

I know that deflections early in the trajectory are more important, but did you actually measure the Cds above M2.1 for the 175 SMK?

And what about the 300 SMK in 338? Your book shows measured Cds from M1.5 to M2.1. You somehow extrapolate that the G1 BC is 0.802 at 3000 fps (M2.68) and 0.665 at 1500 fps (M1.34). And you expect wind drift predictions to be accurate across the whole range?
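For concreteness, the velocity banding being questioned here can at least be made explicit. A minimal sketch of interpolating a banded G1 BC between the two endpoints quoted above (whether those endpoints are measured or extrapolated is exactly the open question):

```python
def banded_bc(velocity_fps, bands):
    """Interpolate a velocity-banded BC linearly between published
    band endpoints, clamping outside the covered velocity range."""
    bands = sorted(bands)  # list of (velocity_fps, BC) pairs
    if velocity_fps <= bands[0][0]:
        return bands[0][1]
    if velocity_fps >= bands[-1][0]:
        return bands[-1][1]
    for (v0, bc0), (v1, bc1) in zip(bands, bands[1:]):
        if v0 <= velocity_fps <= v1:
            return bc0 + (bc1 - bc0) * (velocity_fps - v0) / (v1 - v0)

# The two endpoints quoted above for the .338 300 gr SMK
bands = [(1500, 0.665), (3000, 0.802)]
print(banded_bc(2250, bands))  # midway between the two published values
```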
 
Thanks, makes sense.

Just an FYI,

ColdBore does use your G7 and your numbers do work; it's only FFS that is limited to G1... The difference I find is, you have a lot less tweaking of the published numbers. And in certain cases (the Gunsite guys can attest to this better than I can), FFS has Doppler data built in, which is why it is very accurate. ColdBore I believe has the Lapua data, so you also see it there, but they can use any of the current numbers put out there. Sure, you might find that .475 isn't as accurate as .496, but really it's a wash. You can use .505 and it will adjust it, or you can use the DK factor.

The difference between ColdBore and your AB is usually only a click or so with the most common loads. I do find the wind in ColdBore better, especially in extreme cases where I can use the multiple wind values, but for everyday use it's six of one, half a dozen of the other... Now, ColdBore on a Windows Phone is 10x better than any iPhone or Android app out there. It's crazy how robust the program is on a Windows Phone. The fact you can OneDrive the data and it works on a desktop is head and shoulders above the rest. (And I hate Windows.) ... So there is definitely some catching up to do on the iPhone / Android side of things. ColdBore took a leap over everyone's head this year with the phone app.
 
Let it go, Michael. You're way, way in the weeds now. If your point is that data is imperfect, you win. I, for one, am eternally thankful to the work that the guys at Applied Ballistics/Berger have done, and it's good stuff.
 
I will say this,

Oakwood Tech has the ability to use their targets to also acquire a broad but pretty respectable velocity at the point of impact. So using their electronic target, you now have instant access to velocity at the target from your tablet/computer at the shooter. One side effect they noticed is that from bullet to bullet, the advertised BCs can, with some calibers, be all over the place. The variations are wide, and this was with a military rifle and load. What they found was this common .mil load was not putting the bullets square on target, and hence you had wide BC swings... Accuracy was less affected, but using the velocity loss to measure BC gave results nowhere near what was expected from bullet to bullet.

I have one of these units. Although I was traveling from mid-July until just Monday, I am now back and plan on seeing what the Oakwood Tech target says combined with MV at the muzzle...

It was also noted by others in the field of precision shooting that bullet consistency might be the weak link in all this. Not merely the BC, but the manufacturing of the actual bullet. Consistency may not be up to par. If that is the case, it would certainly open up a new door, taking less off the software and putting it on the hardware, in this case the actual bullet.

Are the bullets consistent and hitting on center, or are they more likely to be out of round, causing bigger variations than given credit for in the past? Those hitting slightly off center were giving much lower BC numbers than those hitting noticeably on center. This was of course at distance, but still within supersonic.
 
I was under the impression that lot to lot variations for typical match bullets would allow for a couple percent variation in BC. I'm sure Bryan can correct that if it's wrong. That's a fairly big number when you get to discussing some of these small factors.
 
And leave to an engineer to keep asserting that the dominant source of uncertainty is the only source of uncertainty.

Don't just race to the end. Remember I had to start by demonstrating that you were exaggerating the non-dominant sources to begin with. Prior to that, you were asking whether the solver or the wind uncertainty was dominant. Then you moved to questioning the data; now that the unimportant things are known, you're criticizing me for focusing on the 'big picture'.

Of course I'm going to focus on the dominant source of uncertainty! Not just because I'm an engineer, but because I'm a shooter.

My goal is to hit targets at farther ranges and to do so more reliably. This goal is accomplished by focusing on the weakest link of the chain, not the strongest.

I've got a great deal of respect for the scientific method. But your half truths, inconsistent standards of evidence, nit picking, and general lack of data is not science, it's bias. The opposite of science.

If you're saying that my problem is that I'm focused on the big picture rather than the minutiae, then I'm guilty as charged. It's not that I don't know about the minutiae, I just recognize that they're irrelevant to the practical objective of hitting targets.

You can question my methods for determining drag at transonic speed as much as you like. I'm not going to educate you on how it's done so you can nit-pick at something you don't understand or worse yet, publish the work as your own (again). Generating useful data from measurements is actually a simple process that I learned in aerospace engineering school, and applied on the job. Do they teach that at MIT?

MIT graduates cannot power a light bulb with a battery. - YouTube

Fuss over it all you want but at the end of the day I'm not accountable to you. I consider myself accountable to the shooters who use the tools I support and the data I provide. As long as shooters are able to use my data to hit targets at long range reliably, then I'm satisfied that I'm doing a good job.

You can squeeze $ out of govt questioning it but don't pretend that you're making anything better, or that you're revealing something that's important to actually hitting targets.

damoncali said:
I, for one, am eternally thankful to the work that the guys at Applied Ballistics/Berger have done, and it's good stuff.

Thank you for your comments.

Regarding lot uncertainties, it's been my general observation (based on 100's of bullets tested) that lot variations are typically between zero and 2-3% max across all brands unless a design has changed. One of the many reasons why arguing about 0.5% error in CD at 10,000 feet altitude is just plain 'Courtney'.

-Bryan
 
I've got a great deal of respect for the scientific method. But your half truths, inconsistent standards of evidence, nit picking, and general lack of data is not science, it's bias. The opposite of science.

Will you give a straight answer to whether the revised BCs of the flat based bullets published in 2009 and maintained in the Berger materials to the present were actually measured or based on a predictive model?

Claiming predictions to be real measurements is the opposite of science.
 
As a layman, this clears up some odd things I’ve been seeing. For under 800 yards, does it really matter what app is being used?

I’m using solver x. It allows for Cd inputs (which I get from Litz’s book) rather than a BC input. I start with my chrono: I get an average MV of 2836 fps, and while I don’t trust my chrono all that much, it gets me in the ballpark. At 600 yards the trajectory validation module calculates, based on observed drop, that MV is more like 2808 fps. I use 2808 fps and go back to 200, 300, 400, 500, and 600 yards, and bullet drop matches the solver. I already know the wind drift calculation is incorrect, and I generally compensate by reducing wind speed by about 30% for the first shot. Solver x has options for spin drift, vertical deflection, and CE. I don’t have a problem with or without spin drift. Vertical deflection is useless because the solver is stupid and doesn’t know what is really going on downrange. CE is pointless too because I’m not taking up enough of the earth’s space and time. Combine these variables with the actual multiple wind variables occurring downrange and all you have is an incorrect amalgamated wind solution. It really does not matter whether you have to walk in .1 mils or .2 mils if you missed the first shot, even after adjusting for the app’s incorrect windage correction. The second shot is based on experience.
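The "trajectory validation" step described above (back-solving MV from an observed drop) can be sketched roughly as follows. This is a hypothetical toy model with an invented drag constant and a bisection search, not solver x's actual algorithm:

```python
# Toy sketch of trajectory validation: adjust muzzle velocity until a crude
# drop model matches the observed drop at a given range. The drag constant
# is invented; real solvers integrate a full point-mass model with a
# standard drag curve (G1/G7).
G = 32.174  # ft/s^2

def drop_in(mv, range_yd, k=1.0e-4):
    """Drop in inches from a crude v^2-drag time-of-flight integration."""
    v, x, t, dx = mv, 0.0, 0.0, 1.0
    while x < range_yd * 3:
        t += dx / v
        v -= k * v * dx
        x += dx
    return 0.5 * G * t * t * 12

def fit_mv(observed_drop_in, range_yd, lo=2500.0, hi=3200.0):
    """Bisection: a faster bullet drops less, so drop is monotone in MV."""
    for _ in range(50):
        mid = 0.5 * (lo + hi)
        if drop_in(mid, range_yd) > observed_drop_in:
            lo = mid  # too much drop -> bullet must be faster
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For example, feeding back the toy model's own 600-yard drop for a 2808 fps shot recovers ~2808 fps, which is the essence of what the validation module is doing with real observed drops.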

Solver x has a little brother called solver y with fewer options, including no input for humidity. You enter one G1 BC value. Using a published average BC value (also from Litz’s book), solver y’s trajectory validation estimates the same 2808 fps from the same observed drop described above. Like solver x, solver y’s wind calculations are overstated. However, it does not have options for spin drift, vertical deflection, and CE. What is important here is that multiple Cd inputs are no different from the corresponding single average BC for bullet drop. The scope adjustments on a .1 mil turret are the same. There is a small difference in bullet drop, about an inch at 600 yards. I’m unable to observe this one-inch difference.
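The observation that a velocity-stepped drag table and a single averaged value give nearly identical drop over a short course can be illustrated with a toy model. The per-band drag numbers below are invented for illustration and are not either solver's actual tables:

```python
# Toy comparison: drop from a velocity-banded drag table vs. one averaged
# drag value, in a crude 1D v^2-drag model. All numbers are made up.
G = 32.174  # ft/s^2

# Hypothetical per-band drag factors (drag rises as the bullet slows)
BANDS = [(2600, 1.00e-4), (2200, 1.05e-4), (0, 1.10e-4)]

def drag(v, table=True, avg=1.05e-4):
    if not table:
        return avg
    for vmin, k in BANDS:
        if v >= vmin:
            return k
    return BANDS[-1][1]

def drop_model(mv, range_yd, table=True):
    v, x, t, dx = mv, 0.0, 0.0, 1.0
    while x < range_yd * 3:
        t += dx / v
        v -= drag(v, table) * v * dx
        x += dx
    return 0.5 * G * t * t * 12  # inches

table_drop = drop_model(2836, 600, table=True)
avg_drop = drop_model(2836, 600, table=False)
print(f"difference: {abs(table_drop - avg_drop):.2f} inches")
```

In this toy the two drops land within an inch or two of each other at 600 yards, echoing the poster's point that over a short course the single average BC is operationally indistinguishable from the full table.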

However, solver x has an option to calculate a BC for trajectory validation as well as MV. These BC estimates are so ridiculous that nobody paying attention will take this option seriously, use it, and think it will shore up the weakness in the wind drift calculations; if anything, it is going to screw up observed bullet drop anywhere other than at 600 yards in the example above. For trajectory validation the best option is MV with published BC(s) or Cd(s). As for the wind, both solvers are incorrect, and that is explainable like errors in a weather forecast. The same goes for the wind calculation from any of the other Android or iPod apps that I have used at the same time for comparison.
 
OK Michael, so you're changing subjects again.

The ole' "On to Child Sacrifice!" routine.

That's OK, I'll play along. As the VP of Berger already told you, some of the flat base bullet BC's are predictions rather than direct measurements. We focus BC testing resources on those bullets which shooters actually shoot at long range where BC matters. You'll be happy to see that our website now caveats this fact.

Does this provide you any personal validation? You feel like you earned your money on these forums now over the past week? No longer will the poor masses be tricked by claims that their flat based varmint bullet BC's are accurate to 1%. Well done.

Now I expect that you'll move on to the next irrelevant minutia to nit pick about.

How about doing a study to see if lubrication is actually slippery or not. Oh, you've already done that (http://www.dtic.mil/dtic/tr/fulltext/u2/a568594.pdf). Another wonderful use of taxpayer money. Anyway, what was your conclusion?

...It is not clear why lubricants which are effective in reducing friction in other high-pressure and high-temperature applications are relatively ineffective in reducing friction between bullets and a rifle bore.

My guess; 'Courtney' science at work.

Keep tangling things up, and I'll keep untangling them.

Shooters, including myself, generally come to the forums to learn and share information. Unfortunately there will always be guys like you here pissing in their ears and telling them it's raining. Just know that these guys aren't stupid. They know the difference between useful information and BS. That's why you drew the response you did with your smear campaign on all 3 forums.

Where it involves me and my work, you can bet I'll be here untangling the web of deceit that you're trying so desperately to spin.
 
Last edited:
If you don't trust your chronograph, it really doesn't matter much. Before you can even get started on the level of gnat-assery in this thread, you must be very confident in the simple variables like velocity, range, and atmospherics.
 
Last edited:
Come on, damon. The advertised error rate on the chrono is 1%, and 2836 vs. 2808 is close enough, but it does make some difference. Assume I got my shit together with the inputs. This is where trajectory validation comes in. Follow the bullet, blah blah blah. I'm just curious whether it's necessary with today's apps under 800 yards, if it's going to make that much difference which app is being used. I don't get it. Are they arguing about 1000 yards and beyond or not?
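For what it's worth, the arithmetic on those two numbers checks out against the spec:

```python
# Quick check: is the 2836 vs 2808 fps discrepancy within a 1% chrono spec?
# (Numbers are the ones quoted in the posts above.)
advertised_mv, validated_mv = 2836.0, 2808.0
pct_diff = 100 * (advertised_mv - validated_mv) / advertised_mv
print(f"{pct_diff:.2f}% difference")
```

The discrepancy comes out just under 1%, so the chrono reading and the drop-validated MV are mutually consistent with the advertised error rate.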
 
OK Michael, so you're changing subjects again.

It's the main subject and title of this thread.

As the VP of Berger already told you, some of the flat base bullet BC's are predictions rather than direct measurements. We focus BC testing resources on those bullets which shooters actually shoot at long range where BC matters. You'll be happy to see that our website now caveats this fact.

I am happy to see the footnote reading

*For some flat based bullets which are typically used at short range, BC’s are based on calculated rather than fired BC’s.

here: http://www.bergerbullets.com/wp_super_faq/what-is-bc/

This is rather buried in a FAQ and not immediately obvious when perusing the catalog. When you click on the ? button on the varmint bullet page, you get:

"BC is more important for long range shooting than short range. The BC's of Berger bullets are based on carefully controlled test firing. The BC's established by this method are accurate to within +/- 1%, whereas BC's predicted by computer programs can have as much as +/- 10% error. All BC's reported for Berger bullets are corrected to the ICAO Atmosphere."

It would be of better service to customers to provide the footnote there as well, where customers are most likely to see it, so they have a better assessment of the accuracy of the BCs for the bullets they may purchase.

You also had a chance to clarify this back in 2009 when a customer asked:

How does Berger determine their bullet's BC?

Calculations from bullet's physical properties (weight, dimensions, center of mass, etc)?

Time of flight between two screens at different velocities?


And you answered:

The short answer is that we fire the bullets and measure their time of flight in 200 yard increments out to (typically) 600 yards. G1 and G7 BC's are derived from the muzzle velocity and time of flight data, corrected to ICAO standard sea level conditions.

You can read more (The long answer) on these two articles on our web log:
Berger Bulletin Blog Archive Why Our BC Numbers have been Lowered (Corrected)
and
Berger Bulletin Blog Archive A Better Ballistic Coefficient

Take care,
-Bryan


Neither the short answer nor the long answer made any mention of the fact that, at that time, the BCs of over 40% of Berger's bullets were predicted from the bullets' physical properties.
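The time-of-flight procedure quoted above can be sketched in miniature: given muzzle velocity and a measured flight time over a known distance, solve for the drag level that reproduces it. The toy below uses a single constant drag factor and bisection; real BC reduction fits against a standard G1/G7 drag curve across multiple 200-yard increments, so this is only the shape of the idea:

```python
# Toy sketch of TOF-based drag reduction: find the effective drag factor k
# that makes a crude v^2-drag model reproduce a measured time of flight.
# Real BC measurement uses standard drag curves and multiple range segments.
def tof(mv, k, range_ft):
    """Euler-integrated time of flight under dv/dx = -k*v."""
    v, x, t, dx = mv, 0.0, 0.0, 1.0
    while x < range_ft:
        t += dx / v
        v -= k * v * dx
        x += dx
    return t

def fit_k(mv, measured_tof, range_ft, lo=1e-6, hi=1e-3):
    """Bisection: more drag means a longer flight time (monotone in k)."""
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if tof(mv, mid, range_ft) < measured_tof:
            lo = mid  # too fast -> needs more drag
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Feeding the fit a flight time generated by a known drag factor recovers that factor; with real data the recovered drag level is then expressed as a BC relative to the chosen standard projectile.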

How about doing a study to see if lubrication is actually slippery or not. Oh, you've already done that (http://www.dtic.mil/dtic/tr/fulltext/u2/a568594.pdf). Another wonderful use of taxpayer money. Anyway, what was your conclusion?

Yes, for under $1000 we developed a method to measure barrel friction accurately, a goal which the Army had previously paid some contractors over $100,000 to do with much less accurate results. We also used the method to quantify friction effects of various coatings, also under $1000. Call your Congressman. Courtney is wasting taxpayer money again, spending four figures on projects that justify six figures.
 
Come on, damon. The advertised error rate on the chrono is 1%, and 2836 vs. 2808 is close enough, but it does make some difference. Assume I got my shit together with the inputs. This is where trajectory validation comes in. Follow the bullet, blah blah blah. I'm just curious whether it's necessary with today's apps under 800 yards, if it's going to make that much difference which app is being used. I don't get it. Are they arguing about 1000 yards and beyond or not?

Out to 800 or so, it doesn't matter as much. I have always been able to get pretty damn close with minimal effort at that range. Where it gets dicey is when you get far enough out that the bullet drops down near the speed of sound. Even at longer ranges, we're talking about some relatively small differences.
 
Speaking of chronographs, I'll throw out a plug for Bryan's new book in an attempt to rebalance the universe. The chapter on chronographs is worth the price of the book all by itself.
 
Michael,

In 2009, when all Berger's BC's were revised, we reduced the BC's on all the varmint bullets by 5% because, on average, the previous predictions were 5% high compared to measurements.

How you managed to twist this into 'inflated BC's' and 'Berger is lying' is your special gift.
 
Damon, that's good to know. A lot of people don't have access to 1000 yard and beyond ranges. I don't want people to get the impression what they have is now obsolete. Things are getting convoluted on this thread. I've never seen anything like it.