Fuelly Forums

Fuelly Forums (https://www.fuelly.com/forums/)
-   Aerodynamics (https://www.fuelly.com/forums/f14/)
-   -   Rear wheel skirts (https://www.fuelly.com/forums/f14/rear-wheel-skirts-5991.html)

lca13 09-17-2007 05:37 AM

>> Also, testing for small differences is difficult & time consuming

And often meaningless.

For any test, one should calculate a margin of error in the various measurements, and combine them for an overall predicted margin of error.

Then one should estimate the maximum effect of the change you are trying to test.

If the margin of error is not small with respect to the anticipated variation in the result.... there is no point in doing the test.
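
A minimal sketch of that error budget, assuming independent error sources combined in quadrature (root-sum-square); every number below is an illustrative placeholder, not a measurement from the thread:

Code:

# Hypothetical error budget for an A-B-A fuel economy test.
# Assumes independent error sources combined in quadrature;
# the percentages are placeholders, not measurements.
from math import sqrt

error_sources_pct = {
    "instrument resolution": 1.0,
    "wind variation": 1.5,
    "speed control": 1.0,
    "drivetrain warm-up": 0.5,
}

combined = sqrt(sum(e ** 2 for e in error_sources_pct.values()))
expected_effect = 2.0  # anticipated change from the mod, in percent

print(f"combined margin of error: +/-{combined:.1f}%")
if combined >= expected_effect:
    print("error swamps the expected effect: no point in testing")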

Personally, I doubt that we can measure anything smaller than a couple of Cd points using the rudimentary techniques we tend to use, and a lot of the aero mods we attempt to measure fall into that range.

MetroMPG 09-17-2007 06:06 AM

2Ton: hill / coastdown testing would be easier, but it's still subject to the same error caused by a warming drivetrain.

MetroMPG 09-17-2007 09:42 AM

You guys are too pessimistic.

omgwtfbyobbq 09-17-2007 10:31 AM

The difference between standard deviation and improvement will dictate how many runs ya need to do IIRC. Where's Bill when ya need 'em? Well, I think this is right, and in the case of the Camry, assuming an SD of ~.1mpg (I think it's actually less) and a change of .4mpg, 2 runs for each gives an 80% chance that a 0.05 level test of significance will find a statistically significant difference between the two sample means. I think...

The nice thing about a SG is the amount of resolution it affords. W/o that, we would have a much higher SD and need to do more runs to get a statistically significant result.
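
A minimal sketch of that sample-size logic, using the standard two-sample z-approximation; the formula and the scipy dependency are assumptions, not something spelled out in the thread:

Code:

# Rough per-group sample size for detecting a difference between
# two mean MPG figures. sd and diff are the Camry numbers quoted
# in the post above.
from math import ceil
from scipy.stats import norm

def runs_needed(sd, diff, alpha=0.05, power=0.80):
    # n per group ~ 2 * ((z_alpha/2 + z_beta) * sd / diff)^2
    z_a = norm.ppf(1 - alpha / 2)   # two-sided 0.05 level
    z_b = norm.ppf(power)           # 80% power
    n = 2 * ((z_a + z_b) * sd / diff) ** 2
    return max(2, ceil(n))          # at least 2 runs per group

print(runs_needed(sd=0.1, diff=0.4))  # -> 2

With the quoted SD the formula bottoms out at the two-run practical minimum, which lines up with the "2 runs for each" figure in the post.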

MetroMPG 09-17-2007 10:39 AM

Quote:

Originally Posted by omgwtfbyobbq (Post 72569)
The difference between standard deviation and improvement will dictate how many runs ya need to do IIRC.

Another

lca13 09-17-2007 05:23 PM

Agreed that one way to minimize margin of error is to develop a statistical result. But 3 or 4 or even 10 runs can't do that.

If your margin of error is high, you may indeed get a small number of samples that "look" like the margin of error is low, but if you do more runs, you will get the extremes in there as well.

Now one can argue that "most" of the samples will indeed bounce around close to the median, but there is no guarantee. 4 samples may luckily bounce equally around the true median, but they may also be a close grouping well away from the median... here is where the oddballs can sometimes tell you something... depending on the extreme, they might be legit cases where a bunch of the margins of error added up (while most samples had them averaging each other out).
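
A minimal simulation of that small-sample risk; the numbers (true mean 50 mpg, run-to-run SD of 2 mpg, the "tight" and "off" thresholds) are arbitrary assumptions for illustration:

Code:

# How often does a 4-run sample look tightly grouped yet sit well
# away from the true mean? Purely illustrative numbers.
import random

random.seed(1)
true_mean, sd, trials = 50.0, 2.0, 100_000
misleading = 0

for _ in range(trials):
    runs = [random.gauss(true_mean, sd) for _ in range(4)]
    mean = sum(runs) / 4
    spread = max(runs) - min(runs)
    # tight grouping (small spread) but biased off the true mean
    if spread < 2.0 and abs(mean - true_mean) > 1.0:
        misleading += 1

print(f"{100 * misleading / trials:.1f}% of 4-run samples "
      "look tight but are off")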

MetroMPG 09-17-2007 05:31 PM

Quote:

Originally Posted by lca13 (Post 72627)
4 samples may luckily bounce equally around the true median, but they may also be a close grouping well away from the median...

Agreed. You can never know for sure with a small number of samples.

But when you look at the results for the Corolla/Camry runs, the bounce in 6 out of 7 groups (of 3 bi-directional runs each) is quite close to the mean. I think that tells me they're likely good data.

lca13 09-17-2007 09:32 PM

>>the bounce in 6 out of 7 groups (of 3 bi-dir runs each)
>>is quite close to the mean.

I think I was thinking of something a little different. More like: the mean of the sample may not be near the mean of the population, but significantly left or right of it. Like if you had a Gaussian population spread between 1 and 100, and you grabbed 5 samples that just all happened to be in the 10s and 20s... you might conclude that 15 is the mean and that your SD is small, when in fact the real population is quite different. Now throw in an 85 as a crazy outlier for the 6th data point... is it crazy? Or is it truly representative?

The other thing that comes to mind is that the samples from an ABA test for mpg may not be Gaussian at all... we are assuming they are, but they might not be. Assuming Gaussian implies that the margins of error all get randomized and, because of that, most often cancel out. This may not be the case. Conditions may push most margins of error in one direction sometimes and in the other direction at other times... I don't really know... I just know that assuming is sometimes wrong.

In any event, the margin of error in the test should be calculable... or at least shown to be "at least" a certain amount. And we all ought to think about this some and maybe come up with reasonable margins of error for certain test aspects. For example, during any single directional run you may have a somewhat steady wind in a certain direction, but the gusting is going to vary for each run... I don't think you can ever get a measure on that, but it could make the aero force you are trying to overcome differ quite a bit from one run to the next.

Another one I've been thinking about lately is the accel/decel around a "steady" speed... which of course does not exist. What effect does that produce? Not sure... but again, I believe it is calculable (say 60 mph... ±1 mph? Doubt it. ±2 mph? Maybe... is the speedometer even that precise?).
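
A minimal back-of-envelope for that speed-variation question, assuming only the usual quadratic dependence of aero drag on speed:

Code:

# Aero drag scales with v^2, so a small wobble around a "steady"
# 60 mph moves the aero force by roughly twice the fractional
# speed error. Illustrative numbers only.
target = 60.0  # mph

for wobble in (1.0, 2.0):
    lo = (target - wobble) ** 2 / target ** 2 - 1
    hi = (target + wobble) ** 2 / target ** 2 - 1
    print(f"+/-{wobble:.0f} mph -> aero force {lo:+.1%} to {hi:+.1%}")

So even a ±2 mph wobble moves the aero force by roughly ±7%, which is large compared to the effects being hunted.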

Nah, I don't think I am pessimistic. I just think we do what everybody does... look for data that fits the theory we want. If you make some aero mods and the data shows some improvement, then we conclude success because the mods "should" improve things.... or so we think :-) From this perspective, I wonder how many of the air dam experiments showing success would survive a non-believer's testing.... :-) Or how about acetone?

omgwtfbyobbq 09-17-2007 10:21 PM

What you seem to be talking about is the potential impact of things we are unable to control for. I think the SD is already representative of our controls compared to what we can't control. For instance, if I were to try to pick up a .5mpg difference based on my gaslog for White Bread, I would need thousands of runs, because I don't control for weather, route, driving habits, or traffic, and my tanks vary by at least a few mpg from one to the next. I figure average speed could be controlled fairly well, since the SG can report it, so as long as the average speeds are within the same SD the mpg figures are, a test should be o.k.

Wind gusts are already incorporated into the average wind speed. Something could still come out of nowhere, but it would show up in the data. I think this is why the line "...gives an 80% chance that a 0.05 level test of significance..." is there: a convergence of unlikely events can make something appear helpful when it's not, although this is fairly unlikely. And the more we test, the more unlikely it becomes.

lca13 09-18-2007 07:11 AM

OK, I "think" the anomaly is:

""where 'd' is the expected difference between means""

The problem I see with the previous analysis is that the "quality" of the sample size is derived from looking at the actual results and working backward, rather than estimating the sample size needed based on the "expected" difference up front.... ie. the margins of error.

You can't take a small sample of something, find that there is a good distribution in the sample, and infer that you have precision in the answer. Sometimes you get lucky.... very lucky. Last time I was in Vegas I tried playing video roulette... while sitting there for a couple of hours there were two occurences of the same number hitting 3 times in a row... the odds on that a quite low, but if my sample size was 4 or 5 around that, my statistical estimate of the population would be crazy.

But back to the subject at hand:

I think the point is that you have to be diligent about minimizing margins of error if you want better results. For aero we are in a bad situation right off, because at typical highway speeds aero resistance is only about 1/3-1/2 of the opposing forces... RR is up there too, and as Metrompg pointed out, maybe variability in that caused variability in the results. Since you can't get rid of RR, the best you can do is minimize its share... so running the test at, say, 90 mph versus 60 mph would decrease the overall effect of the RR margin of error and improve the overall result.
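
A minimal sketch of that speed argument, assuming RR stays roughly constant while aero grows with v^2, and borrowing the 60 lb / 30 lb split quoted in the next paragraph:

Code:

# How the aero share of total drag grows with speed, assuming RR
# is roughly constant and aero scales with v^2. The 60/30 lb split
# at 60 mph is the VX estimate from the next paragraph.
aero_60, rr = 60.0, 30.0  # lbs at 60 mph

for mph in (60.0, 90.0):
    aero = aero_60 * (mph / 60.0) ** 2
    share = aero / (aero + rr)
    print(f"{mph:.0f} mph: aero {aero:.0f} lb, {share:.0%} of total drag")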

Consider this: for my Honda VX, a reduction in Cd of .01, which is something we might try to measure if we drop a mirror or lower an air dam, yields a predicted force delta of only 2 lbs at 60 mph, while the overall retarding force is about 90 lbs (about 60 aero + 30 RR). So we would be trying to measure a difference of only about 2%... that is pretty small, and your testing methodology had better be pretty good... heck, getting our margins of error down to 2% is probably pretty aggressive... but that is just an opinion thrown out.
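
A minimal check of that 2 lb figure; the VX frontal area and the air density below are assumed values, not numbers from the post:

Code:

# Drag force delta for a Cd reduction of 0.01 at 60 mph, using
# F = 0.5 * rho * Cd * A * v^2. Frontal area for the Civic VX is
# assumed at ~1.9 m^2; rho is standard sea-level air density.
rho = 1.225          # kg/m^3
area = 1.9           # m^2 (assumed)
v = 60 * 0.44704     # 60 mph in m/s
d_cd = 0.01

d_force_n = 0.5 * rho * d_cd * area * v ** 2
d_force_lb = d_force_n / 4.448
print(f"delta force: {d_force_n:.1f} N ({d_force_lb:.1f} lb)")
# -> about 8.4 N (1.9 lb), consistent with the ~2 lb quoted above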

