Rear wheel skirts
The rear wheel skirt "test" I ran on my car was kind of sloppy: first of all, it wasn't an A-B-A. The skirts were added on top of another significant variable (a grille block), so they weren't tested independently. (That actually made it an A-B-BC test.)
Out of curiosity, I'm going to re-run the test. I'll be using a different car this time as well: a 2007 Toyota Corolla automatic. Watch this space over the next week or so. |
sweet metro, your test will let me know if it's worth paying for rear wheel skirts again, i've lost mine on the highway! was so pissed! i SHOULD have velcroed them instead of using painter's tape X_X
|
We should do a pool on the results! LOL
|
Well, Clencher would probably win! Because the results are small.
|
Do you have an image of the modification?
Results look promising :) |
No pics yet - I'll take some tomorrow.
|
And slightly off topic... it sure looks like that Corolla can turn in some respectable FE! And it's an automatic!
|
So in 10,000 miles @ $3/gallon, those skirts will save $3.43, and the spoiler add-on another $1.06.
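As a side note, the savings arithmetic above follows from the difference in gallons burned before and after the mod. Here's a minimal sketch; the mpg figures are illustrative assumptions (the post doesn't state the exact baseline and modified values), but a gain of roughly 0.4 mpg on a high-50s baseline lands near the $3.43 quoted:

```python
# Sketch of the savings arithmetic. The mpg figures below are assumed
# for illustration; the thread doesn't state the exact values used.
def fuel_savings(miles, price_per_gal, mpg_before, mpg_after):
    """Dollars saved over `miles` from an mpg improvement."""
    return miles * price_per_gal * (1.0 / mpg_before - 1.0 / mpg_after)

# e.g. a 0.4 mpg gain on a ~59 mpg baseline, 10,000 miles at $3/gal:
print(round(fuel_savings(10_000, 3.00, 59.0, 59.4), 2))
```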
|
Good thing I made them from free cardboard then. I'm already in the black! (Not counting the little bit of duct tape...)
|
|
Quote:
Also remember that he was testing at a slower speed. The percentage would have been higher at a more 'normal' speed. |
Stanley, I think you transposed what should have been 57 miles... and I'd actually consider 85 kph a pretty normal speed.
It would be interesting to see the results at 100 kph. The tough part is that this measures those particular skirts on one particular model of car. The Corolla may have faired (pardon any pun) far better from this than many other cars, or it's also possible that this model wouldn't benefit much from skirts. Still, I totally appreciate MetroMPG's diligence and effort in doing this test. |
Details:
|
|
|
2007 Camry Hybrid
(Nothing earth shattering - in fact I think it reveals more about my testing than the wheel skirts.) |
Thanks for the test Darin.
I had to comment because that Camry Hybrid actually looks better with those wheel skirts on it (non-cardboard of course). It's one of those cars that just looks like it would have come that way from the factory. Also, as I have commented on your flea before, the wheel skirts you installed actually seem to visually 'ground' the car. From the factory, the open wheels, roofline arch and high ground clearance make the car appear as if it is flipping forwards or jumping (excellent nickname, "flea"). You did a major improvement with those skirts! Can't wait to see the car lowered in real life. |
2 Attachment(s)
Better late than never, the Camry results:
|
Metro, maybe the best bet is just to go tobogganing with them. Find a big hill and do coasting distance tests.
Don't knock your efforts... you may actually uncover some new TCH trick! ;-) |
>> Also, testing for small differences is difficult & time consuming
And often meaningless. For any test, one should calculate a margin of error in the various measurements, and combine them for an overall predicted margin of error. Then one should estimate the maximum effect of the change you are trying to test. If the margin of error is not small with respect to the anticipated variation in the result... there is no point in doing the test. Personally, I doubt that we can measure anything smaller than a couple of Cd points using the rudimentary techniques we tend to use, and a lot of the aero mods we attempt to measure fall into that grouping. |
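The "combine them for an overall predicted margin of error" step above can be sketched numerically. A common approach (my assumption here, not something the post specifies) is to treat the error sources as independent and combine them in quadrature; the per-source percentages below are purely illustrative guesses:

```python
import math

# Hypothetical per-source margins of error for one fuel-economy run (percent).
# These numbers are illustrative guesses, not measured values.
errors = {"wind": 1.5, "speed_hold": 1.0, "instrument": 0.5, "temperature": 0.8}

# Combine independent error sources in quadrature (root-sum-square).
combined = math.sqrt(sum(e**2 for e in errors.values()))
expected_effect = 1.0  # say the mod is expected to change FE by ~1%

print(f"combined margin of error: {combined:.2f}%")
if combined >= expected_effect:
    print("margin of error swamps the expected effect -- test is inconclusive")
```

With these made-up numbers the combined margin (~2%) exceeds the expected 1% effect, which is exactly the "no point in doing the test" situation described above.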
2Ton: hills / coastdown testing would be easier, but they're still subject to the same error caused by a warming drivetrain.
|
You guys are too pessimistic.
|
The difference between standard deviation and improvement will dictate how many runs ya need to do, IIRC. Where's Bill when ya need 'em? Well, I think this is right: in the case of the Camry, assuming an SD of ~.1 mpg (I think it's actually less) and a change of .4 mpg, 2 runs for each gives an 80% chance that a 0.05-level test of significance will find a statistically significant difference between the two sample means. I think...
The nice thing about a SG is the amount of resolution it affords. W/o that, we would have a much higher SD and need to do more runs to get a statistically significant result. |
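The sample-size claim above can be sanity-checked with the standard normal-approximation formula for a two-sample comparison, using only the Python standard library. The SD and difference come from the post; the formula itself is the textbook approximation, not anything the poster specified:

```python
import math
from statistics import NormalDist

def runs_per_condition(sd, diff, alpha=0.05, power=0.80):
    """Normal-approximation sample size per group for a two-sample test."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # ~1.96 for alpha = 0.05
    z_b = NormalDist().inv_cdf(power)           # ~0.84 for 80% power
    return math.ceil(2 * ((z_a + z_b) * sd / diff) ** 2)

# With the post's numbers (SD ~0.1 mpg, expected change 0.4 mpg), the
# formula says a single run per condition suffices on paper -- though in
# practice you need at least two runs just to estimate the SD at all.
print(runs_per_condition(sd=0.1, diff=0.4))
```

So the "2 runs each" figure is, if anything, conservative under these assumptions; shrink the expected difference or grow the SD and the required run count climbs quickly.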
|
Agreed that one way to minimize margin of error is to develop a statistical result. But 3 or 4 or even 10 runs can't do that.
If your margin of error is high, you still may indeed get a small # of samples that "look" like the margin of error is low, but if you do more, you will get the extremes in there as well. Now, one can make the argument that "most" of the samples will indeed bounce around close to a median, but there is no guarantee. 4 samples may luckily bounce equally around the true median, but they may also be a close grouping well away from the median... here is where the oddballs can sometimes tell you something... depending on the extreme, they might be legit, a case where a bunch of the margins of error added up (while most samples had them averaging each other out). |
But - when you look at the results for the Corolla/Camry runs, the bounce in 6 out of 7 groups (of 3 bi-dir runs each) is quite close to the mean. I think that tells me they're likely good data. |
>>the bounce in 6 out of 7 groups (of 3 bi-dir runs each)
>>is quite close to the mean. I think...

I was thinking something a little different. More like: the mean of the sample may not be near the mean of the population, but significantly left or right of it. Like if you had a Gaussian population between 1 and 100, and you grabbed 5 samples that all just happened to be in the 10s and 20s... you might conclude that 15 is the mean and that your SD is small, when in fact the real population is quite different. Now throw in an 85 as a crazy outlier for the 6th data point... is it crazy? Or is it truly representative?

The other thing that comes to mind is that the sample from an A-B-A mpg test may not be Gaussian at all... we are assuming it is. It might not be. Assuming Gaussian implies that the margins of error all get randomized and, because of that, most often cancel out. This may not be the case. Conditions may push most margins of error in one direction sometimes and the other direction at others... I don't really know... I just know that assuming is sometimes wrong.

In any event, the margin of error in the test should be calculable... or at least shown to be "at least" a certain amount. And we all ought to think about this some and maybe come up with some reasonable margins of error for certain test aspects. For example, during any single directional run you may have a somewhat steady wind in a certain direction, but the wind gusting is going to vary for each run... and I don't think you can ever get a measure on that... but it could make the aero force you are trying to overcome differ quite a bit from one run to the next. Another one I've been thinking about lately is the accel/decel around a "steady" speed... which of course does not exist... what effect does it produce??? Not sure... but again I believe it is calculable (say 60 mph... +/- 1 mph???? doubt it. +/- 2 mph??? maybe... is the speedometer even that precise?)

Nah, I don't think I am pessimistic. I just think we do what everybody does... look for data that fits the theory we want. If you make some aero mods and the data shows some improvement, then we conclude success because the mods "should" improve things... or so we think :-) From this perspective, I wonder how many of the air dam experiments showing success would survive a non-believer's testing... :-) Or how about acetone? |
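The small-sample worry above is easy to demonstrate with a quick Monte Carlo sketch: draw many 5-run samples from a known Gaussian "population" of runs and see how far the sample means can wander from the true mean. The population parameters are made up for illustration:

```python
import random
import statistics

random.seed(1)  # fixed seed so the illustration is reproducible

TRUE_MEAN, TRUE_SD = 50.0, 15.0  # an assumed Gaussian "population" of runs

# Take 1000 small 5-run samples and record each sample's mean.
sample_means = [
    statistics.mean(random.gauss(TRUE_MEAN, TRUE_SD) for _ in range(5))
    for _ in range(1000)
]

worst = max(abs(m - TRUE_MEAN) for m in sample_means)
print(f"worst 5-run sample mean missed the true mean by {worst:.1f}")
```

Some 5-run batches land a long way from the true mean, which is precisely the "close grouping well away from the median" scenario: a tight-looking small sample is no guarantee you've found the real answer.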
What you seem to be talking about is the potential impact of different things we are unable to control for. I think the SD is already representative of our controls compared to what we can't control. For instance, if I were to try to pick up a .5 mpg difference based on my gaslog for White Bread, I would need thousands of runs, because I don't control for weather, route, driving habits, or traffic, and my tanks have at least a few mpg difference between each.

I figure the average speed could be controlled fairly well, since the SG can report average speed; so as long as those are within the same SD that the mpg figures are, a test should be o.k. Wind gusts are already incorporated into the average wind speed. I suppose something could come out of nowhere, but it would show up in the data.

I think this is why the line "...gives an 80% chance that a 0.05-level test of significance..." is present: there can be a convergence of unlikely events that leads to something appearing helpful when it's not, although this is fairly unlikely. I think the more we test, the more unlikely it becomes.
|
OK, I "think" the anomaly is:
""where 'd' is the expected difference between means"" The problem I see with the previous analysis is that the "quality" of the sample size is derived from looking at the actual results and working backward, rather than estimating the sample size needed based on the "expected" difference up front.... ie. the margins of error. You can't take a small sample of something, find that there is a good distribution in the sample, and infer that you have precision in the answer. Sometimes you get lucky.... very lucky. Last time I was in Vegas I tried playing video roulette... while sitting there for a couple of hours there were two occurences of the same number hitting 3 times in a row... the odds on that a quite low, but if my sample size was 4 or 5 around that, my statistical estimate of the population would be crazy. But back to the subject at hand: I think the point is that you have to be dilligent about minimizing margins of error if you want to get better results. For aero, right off we are in a bad situation because at typical HW speeds, aero resistance is only about 1/3-1/2 of the opposing forces... RR is up there and as Metrompg pointed out, maybe variablitiy in that caused variability in the results. Since you can't get rid of RR the best you can do is minimize it... so running the test at say 90 mph versus 60 mph would decrease the overall effects of the margin of error on the RR and improve the overall result. Consider this: for my Honda VX, a reduction in Cd of .01, which is something we might try to measure if we drop a mirror or lower an air dam, yields a predicted negative force delta of only 2 lbs at 60 mph, while the overall negative force is about 90 lbs (about 60 aero + 30 RR). So we would be trying to measure a difference of only 2%... that is pretty small and your testing methodology better pretty good then.... heck, getting our margins of error down to 2% is probably pretty aggressive.... but that is just an opinion thrown out. |