Static vs. Effective Compression

My apologies if I am being presumptuous, but I have heard a lot of misinformation on this topic. The static vs. effective compression explanation of turbocharging does not actually make thermodynamic sense.
 
This is actually somewhat difficult to explain without the use of P-V (pressure vs. volume) diagrams, but in short:

Power is work done over time. The expansion stroke of your engine can be approximated as an isentropic expansion of a diatomic gas. There is a lot of calculus behind this that I can do but really don't feel like reproducing here. If you run higher compression, you produce higher cylinder pressure, but that pressure also decreases faster, because the initial combustion volume was smaller. This should make sense to you.
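As a rough illustration of that decay, here is a minimal Python sketch of the isentropic relation P·V^γ = const (γ ≈ 1.4 for a diatomic gas); the chamber volumes and pressures are hypothetical numbers, not real engine data:

```python
# Isentropic expansion: P * V**gamma = const. Gamma ~ 1.4 for diatomic
# gases (N2, O2). All volumes below are illustrative, not real engine data.
GAMMA = 1.4

def pressure_after_expansion(p0, v0, v):
    """Pressure after expanding isentropically from (p0, v0) out to volume v."""
    return p0 * (v0 / v) ** GAMMA

# Two hypothetical combustion chambers starting at the same peak pressure:
# a small one (high compression) and a larger one (low compression).
small_chamber = 0.07   # litres at TDC
large_chamber = 0.17   # litres at TDC
swept_so_far = 0.50    # litres of piston travel

frac_small = pressure_after_expansion(1.0, small_chamber, small_chamber + swept_so_far)
frac_large = pressure_after_expansion(1.0, large_chamber, large_chamber + swept_so_far)

# The smaller initial volume has lost a much larger fraction of its pressure
# by the same point in the stroke.
print(f"small chamber retains {frac_small:.1%} of peak pressure")
print(f"large chamber retains {frac_large:.1%} of peak pressure")
```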

When computing your engine's power, you are really trying to find the "area under the curve": the area under the pressure vs. volume curve is the work done per cycle, and work per cycle times cycle rate gives power. In a piston engine, the pressure drops as the piston goes down. Also, the rate of change of volume is greatest near the middle of the stroke and falls off as the piston approaches BDC. This is important because the faster the volume is changing at a given pressure, the more power is being made (which can be shown with basic mechanics: the torque on the crankshaft is greatest when the crank is near 90 degrees past TDC).
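To make the "area under the curve" idea concrete, here is a sketch that integrates ∫P dV numerically for an isentropic expansion and checks it against the closed-form result; the pressure and volumes are made-up illustrative values:

```python
# Work done during expansion is the area under the P-V curve: W = integral P dV.
# For an isentropic process, P(v) = p0 * (v0 / v)**gamma. Numbers illustrative.
GAMMA = 1.4

def work_numeric(p0, v0, v1, steps=10_000):
    """Approximate W = integral of P dV from v0 to v1 by the trapezoid rule."""
    dv = (v1 - v0) / steps
    total = 0.0
    for i in range(steps):
        va = v0 + i * dv
        vb = va + dv
        pa = p0 * (v0 / va) ** GAMMA
        pb = p0 * (v0 / vb) ** GAMMA
        total += 0.5 * (pa + pb) * dv
    return total

def work_closed_form(p0, v0, v1):
    """Exact isentropic work: p0*v0/(gamma-1) * (1 - (v0/v1)**(gamma-1))."""
    return p0 * v0 / (GAMMA - 1) * (1 - (v0 / v1) ** (GAMMA - 1))

w_num = work_numeric(100.0, 0.1, 1.1)       # units: bar * litres
w_exact = work_closed_form(100.0, 0.1, 1.1)
print(f"numeric: {w_num:.3f} bar*L, closed form: {w_exact:.3f} bar*L")
```

Multiply work per cycle by the number of power strokes per second and you have power; that is all "area under the curve" means here.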

It should be immediately clear that the best way to make the most torque (and subsequently, the most power) is to produce as much pressure as possible when the piston is near the middle of the stroke. Now suppose we have two engines with the same effective compression. One has a static compression of 14:1 and no boost. The other has a static compression of 7:1 and 1 bar (14.5 psi) of boost. Assuming perfect intercooling (which is not easy), the latter engine will always make more power. At the beginning of the power stroke, both engines have the same torque output. Because the former engine has a much smaller chamber volume than the latter, its cylinder pressure drops much faster than the boosted setup's; by the time the engines are passing through mid-stroke, the pressure disparity is large.
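Here is a hypothetical numerical version of that comparison, assuming both engines reach the same peak pressure at TDC (same effective compression) and expand isentropically; all figures are illustrative, not measured:

```python
# Compare mid-stroke pressure for two engines with the same peak (effective)
# pressure but different static compression ratios. All numbers hypothetical.
GAMMA = 1.4
SWEPT = 1.0     # litres of swept volume per cylinder
P_PEAK = 100.0  # bar at TDC for both engines (same effective compression)

def clearance_volume(swept, static_cr):
    """Clearance volume from compression ratio r = (swept + clearance) / clearance."""
    return swept / (static_cr - 1.0)

def pressure_at(p0, v0, v):
    """Isentropic expansion: P * V**gamma = const."""
    return p0 * (v0 / v) ** GAMMA

v_na = clearance_volume(SWEPT, 14.0)      # 14:1 static, no boost
v_boost = clearance_volume(SWEPT, 7.0)    # 7:1 static, ~1 bar of boost

# Cylinder pressure at mid-stroke (half the swept volume released):
p_na_mid = pressure_at(P_PEAK, v_na, v_na + SWEPT / 2)
p_boost_mid = pressure_at(P_PEAK, v_boost, v_boost + SWEPT / 2)

print(f"14:1 NA engine at mid-stroke:     {p_na_mid:.1f} bar")
print(f"7:1 boosted engine at mid-stroke: {p_boost_mid:.1f} bar")
```

The low-compression cylinder still holds more than twice the pressure at the point in the stroke where the crank can use it best.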

The really neat thing about this is that the former engine very likely encountered higher stresses (higher initial cylinder pressure) due to the isentropic behavior of the gas during compression. The turbo motor somehow got away with more power and less stress!

Turbo motors are of course innately less efficient, for reasons including:

Increased pumping losses due to increased back pressure;

Reduced thermal efficiency due to increased intake temperatures;

Reduced thermal efficiency because of the lower compression ratio (remember that the high-compression motor loses pressure faster; that means more energy is being extracted from the burned fuel);

Richer mixtures run to protect the motor and reduce EGTs.
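The compression-related efficiency point can be seen in the ideal air-standard Otto cycle, whose thermal efficiency depends only on the compression ratio; this is the standard textbook formula, applied here with illustrative ratios:

```python
# Ideal air-standard Otto cycle: eta = 1 - r**(1 - gamma), where r is the
# compression ratio. Lower static compression (typical on boosted motors)
# directly lowers the ideal thermal efficiency.
GAMMA = 1.4

def otto_efficiency(r):
    """Ideal Otto-cycle thermal efficiency for compression ratio r."""
    return 1.0 - r ** (1.0 - GAMMA)

for r in (7.0, 10.0, 14.0):
    print(f"r = {r:4.1f}:1  ->  ideal eta = {otto_efficiency(r):.1%}")
```

Real engines fall well short of these ideal numbers, but the trend with compression ratio holds.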

That is about as much as I can explain without getting into the math here.
 
If the article said that identical effective compression setups make the same amount of power, then yes, it's wrong. I haven't read it in a while. Energy stored in the fuel and released upon combustion is what makes the power, not just compression ratios. Higher fuel mass means more energy potential, so turbocharged engines simply have more power potential.
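A back-of-the-envelope sketch of that fuel-energy point, using an approximate heating value for gasoline and a stoichiometric air-fuel ratio (both round numbers, not measurements):

```python
# More air in the cylinder (boost) allows more fuel per cycle, and it is the
# fuel's chemical energy that makes the power. Constants are approximate.
LHV_GASOLINE = 44e6   # J/kg, lower heating value of gasoline (approximate)
AFR_STOICH = 14.7     # stoichiometric air-fuel ratio for gasoline

def energy_per_cycle(air_mass_kg):
    """Chemical energy released per cycle at stoichiometric fueling, in joules."""
    return (air_mass_kg / AFR_STOICH) * LHV_GASOLINE

e_na = energy_per_cycle(0.5e-3)      # ~0.5 g of air: naturally aspirated fill
e_boost = energy_per_cycle(1.0e-3)   # ~1.0 g of air: same cylinder at ~1 bar boost

print(f"NA cylinder:      {e_na:.0f} J per cycle")
print(f"boosted cylinder: {e_boost:.0f} J per cycle")
```

Double the trapped air mass, double the fuel you can burn, double the energy available per cycle.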

I understand what you're saying; I've had my share of thermodynamics too. The article is just trying to explain the effective compression concept in a simple way. It definitely needs some revision, but it's not completely incorrect.

:)
 