Posted 2009-8-20 14:33
(Continued from the previous post)
The effect of global negative feedback
The use of global negative feedback does several things: it flattens and extends the frequency response, it reduces distortion generated in the stages encompassed by the feedback loop, and it reduces the effective output impedance of the amplifier, which increases the damping factor. All of these things affect the tone in some manner.
The flattened, extended frequency response obviously changes the tonal character by removing "humps" in the output stage response and producing more high and low end frequencies. The distortion reduction makes the amp sound cleaner and more "hi-fi", up to the point of clipping. Perhaps the main difference for the "feel" is the increased damping factor produced by the negative feedback loop. The decreased effective output impedance causes the amp to react less to the speakers. A speaker impedance curve is far from flat; it rises very high at the resonant frequency, then falls to the nominal impedance around 1kHz, and again rises as the frequency increases. This changing "reactive" load causes the amp output level to change with frequency and changes in speaker impedance (a dynamic thing that changes as the speakers are driven harder). Global negative feedback generally reduces this greatly. This can be good or bad, depending upon what you are looking for.
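The output-impedance point can be put in rough numbers. A minimal sketch of the standard feedback relationships (the 50-ohm open-loop impedance, gain of 100, and 10% feedback fraction below are hypothetical illustration values, not measurements from any real amplifier):

```python
def closed_loop_zout(z_out_open, forward_gain, feedback_fraction):
    """Output impedance with global negative feedback applied.

    Standard series-voltage feedback result: Zout' = Zout / (1 + A*beta),
    where A is the forward gain and beta the feedback fraction.
    """
    return z_out_open / (1.0 + forward_gain * feedback_fraction)

def damping_factor(z_speaker_nominal, z_out):
    """Damping factor = nominal speaker impedance / amp output impedance."""
    return z_speaker_nominal / z_out

# Hypothetical numbers: 50 ohm open-loop output impedance, forward gain of
# 100, 10% feedback fraction.
z_cl = closed_loop_zout(50.0, 100.0, 0.10)  # 50 / 11 ≈ 4.5 ohms
print(damping_factor(8.0, 50.0))  # ≈ 0.16 without feedback
print(damping_factor(8.0, z_cl))  # ≈ 1.76 with feedback
```

The higher damping factor with feedback is exactly the "reacts less to the speaker" behavior described above: the amp's output level moves less as the speaker's impedance swings with frequency.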
Negative feedback makes the amp sound "tighter", particularly in the low end, where the speaker resonant hump has the most effect on amplifier output. This is better suited for pristine clean playing or a tight distorted tone, while a non-negative feedback amp has a "looser" feel, better suited to a bluesy, dynamic style of playing. The other disadvantage of a negative feedback amplifier is that the transition from clean to distorted is much more abrupt, because the negative feedback tends to keep the amp distortion to a minimum until the output stage clips, at which point there is no "excess gain" available to keep the feedback loop operating properly. At this point, the feedback loop is broken, and the amp transitions to the full non-feedback forward gain, which means that the clipping occurs very abruptly. The non-negative feedback amp transitions much more smoothly into distortion, making it better for players who like to use their volume control to change from a clean to a distorted tone.
There is an output stage topology that is kind of in between, called "ultralinear" operation. This uses local negative feedback to the screen grids of the output stage by means of a tapped output transformer primary. This increases the damping factor and makes the amp a bit tighter without the use of a global negative feedback loop (you can use global negative feedback with ultralinear output stages, but you may not like the tone as much). The Dr. Z Route 66 amplifier uses an ultralinear output stage. There is also the triode output stage, which has an even higher damping factor than ultralinear, but some players feel that it sounds too "compressed" and midrangey, while others like it. Part of the reason for the midrange emphasis is the increased input capacitance of triode mode over pentode mode, because of the Miller effect, which, in effect, multiplies the grid-to-plate capacitance by the gain of the tube. This increased capacitance rolls off the high frequencies.
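The Miller-effect math is simple enough to sketch. The capacitance and gain figures below are placeholders chosen only to show the mechanism, not data for any specific tube; the key point is that in pentode operation the screen grid shields the plate from the control grid, so the grid-to-plate capacitance being multiplied is tiny, while in triode mode it is much larger:

```python
import math

def miller_input_capacitance(c_gk, c_gp, stage_gain):
    """Effective input capacitance: C_in = C_gk + (1 + |A|) * C_gp."""
    return c_gk + (1.0 + abs(stage_gain)) * c_gp

def rolloff_frequency(r_source, c_in):
    """-3 dB point of the low-pass formed with the driving source impedance."""
    return 1.0 / (2.0 * math.pi * r_source * c_in)

# Hypothetical values (farads, ohms, dimensionless gain):
c_in_pentode = miller_input_capacitance(10e-12, 1e-12, 20)  # screened C_gp
c_in_triode  = miller_input_capacitance(10e-12, 8e-12, 20)  # exposed C_gp
print(rolloff_frequency(50e3, c_in_pentode))  # ≈ 103 kHz
print(rolloff_frequency(50e3, c_in_triode))   # ≈ 18 kHz
```

With the same driving impedance, the triode-mode stage's corner frequency lands far lower, which is the high-frequency rolloff (and resulting midrange emphasis) described above.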
Does true class A operation require any particular current or bias point?
True class A operation does not have to be above any particular current rating or dissipation. It depends on the tube type, the power supply voltage, the reflected impedance, and the required operating point. However, in general, when a class A power amplifier is designed, the bias point is chosen to correspond with the spot on the plate curves at the intersection of the load line, the plate voltage, and the maximum dissipation curve that gives maximum symmetrical swing in both directions before clipping. This means that the tube is biased right at maximum plate dissipation, which is okay, because the dissipation is maximum at idle in a class A amplifier, and does not increase with applied signal, as it does in a class AB or class B amplifier (it actually decreases to a minimum at full power). This is not to say that that is the only current and voltage that will work. If you lower the plate voltage by 100V, you will find another "optimum" spot where these lines intersect. If you change the reflected load impedance, you will find yet another optimum spot. There is, however, an upper limit on the voltage that can be applied where you can no longer bias for symmetrical swing about the idle point without exceeding the plate dissipation ratings. This is the limiting voltage for that tube in true class A operation *at the max recommended tube ratings*. If you choose to run the tube over ratings, as is the case in some amplifiers, you can bias the tube to a point that is running class A, but is above the maximum dissipation curve. Although this seems to work with some tubes, it is not a recommended practice.
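To see why dissipation is maximum at idle, note that in an idealized class A stage the average supply draw barely changes with drive, so the tube dissipates whatever supply power is not delivered to the load. A sketch with arbitrary numbers (250 V / 100 mA is an illustration, not a rating for any particular tube):

```python
def class_a_dissipation(v_supply, i_idle, p_out):
    """Plate dissipation in an idealized class A stage.

    Average supply current is roughly constant with drive, so:
    P_diss = V_supply * I_idle - P_out.  Maximum at idle (P_out = 0),
    falling as output power rises.
    """
    return v_supply * i_idle - p_out

# Hypothetical operating point: 250 V supply, 100 mA idle current.
print(class_a_dissipation(250.0, 0.100, 0.0))   # 25.0 W at idle
print(class_a_dissipation(250.0, 0.100, 10.0))  # 15.0 W at strong output
```

This is the opposite of class AB, where dissipation grows with applied signal, which is why biasing a class AB amp at the max-dissipation point at idle is dangerous while doing so in true class A is normal practice.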
This holds true for both single-ended and push-pull designs. In push-pull class A, the bias point and plate supply voltage are the same as for single-ended, but there is a phase inverter and a center-tapped transformer, which are used to increase power and reduce distortion (even-order harmonics are canceled, and power supply hum is canceled in a balanced push-pull amp). Power is twice that of single-ended (for a two-tube push-pull vs. a single-tube single-ended, etc.).
To get a better feel for this, take a set of plate curves for a given tube, and draw a load line representing the reflected impedance (it has a slope corresponding to the negative reciprocal of the reflected load impedance, and passes through the intersection of the bias current and plate voltage lines), and draw a curve representing the plate dissipation (it will be a parabolic shape, with each point equal to the current that corresponds to the plate dissipation divided by the plate voltage). The load line should just touch the plate dissipation curve at the selected plate voltage (for max power out - if you want less than max power, it can be below the dissipation curve). The current corresponding to this point will be the required bias current, and the dissipation will be maximum at that point. All tube signal swings will occur on the load line (assuming a purely resistive load - reactive loads generate elliptical load lines), so you can find the plate voltage swing for a given grid voltage swing, and you will see that you will have to either change the plate voltage or the reflected load impedance, or both, in order to get the optimum class A bias point. Don't forget that the actual plate voltage swings both above and below the supply voltage, and the center of the swing is the actual plate supply voltage. This is kind of confusing at first, because it isn't intuitive that you could get a 400V peak with only a 250V supply (i.e., a swing from 100V to 400V, centered around 250V). The "extra" voltage comes about because of the nature of how the output transformer works.
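The graphical construction above can also be done numerically. Assuming the idealized tangency condition described (load line just touching the dissipation hyperbola I = P/V at the chosen plate voltage), the numbers fall out directly; the 25 W / 250 V figures are arbitrary examples, not ratings for any particular tube:

```python
def class_a_bias(p_diss_max, v_plate):
    """Idle current that puts the operating point on the dissipation
    hyperbola I = P_max / V at the chosen plate voltage."""
    return p_diss_max / v_plate

def tangent_load(p_diss_max, v_plate):
    """Reflected load whose load line is tangent to I = P/V at (V0, I0).

    The hyperbola's slope at V0 is -I0/V0, and a load line has slope
    -1/R, so tangency occurs when R = V0 / I0.
    """
    i0 = class_a_bias(p_diss_max, v_plate)
    return v_plate / i0

p_max, v0 = 25.0, 250.0           # hypothetical: 25 W dissipation, 250 V supply
i0 = class_a_bias(p_max, v0)      # 0.1 A idle current
r_load = tangent_load(p_max, v0)  # 2500 ohm reflected primary impedance
# The plate voltage swings along the load line both above and below the
# supply, centered on V0 -- ideally from near 0 V up to near 2*V0, which is
# the "peak above the supply voltage" effect described above.
print(i0, r_load)
```

In practice the usable swing is smaller than the ideal limits because of tube non-linearity near cutoff and saturation, which is part of why picking a bias point is iterative.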
Does biasing at max dissipation guarantee class A operation?
Just because you are biased at max dissipation does not mean you are class A! You must be in the region where the voltage swing is symmetrical and biased in the center of the range, where plate current flows for all unclipped output. Biasing to a high voltage and low plate current whose product equals the maximum plate dissipation might not allow this, because, although you are at max plate dissipation, the bias point is such that plate current will flow for appreciably less time on the negative signal swing (cutoff) than on the positive signal swing (saturation), and *no* load line can be found that will allow symmetrical swing, or it will be in such a non-linear portion of the curves as to be unusable. This is because the plate voltage is too high, and the max allowable current without exceeding dissipation limits is too low. The same thing can occur at the other end of the scale, where you can reduce the plate voltage to the point that the max dissipation current will exceed the maximum allowable plate or cathode current ratings of the tube. There is an optimum area of the curves that will become apparent when you start drawing load lines and picking bias points. It is a bit of an iterative process, so the tube manufacturers make it easy for you by listing typical class A operating conditions in the data sheets.
In theory, you can take a class AB push-pull amplifier and convert it to class A push-pull operation, *however*, you would, in nearly all cases, have to reduce the plate voltage to be able to bias the tubes into the class A region, because the whole reason for going to class AB is to get higher power, so the plate voltage is run higher and the idle current lower than what is allowed in class A. Once again, you have to look at the plate curves for the particular tube to determine where the allowable class A region is. If you simply bias a class AB amp to max dissipation at idle, you will find that as you apply a signal, the tubes will dissipate more power, and they will start to glow a lovely cherry red color, and something will croak. In addition, the power supply and/or output transformer may not be able to handle the extra current required for true class A operation, so, unless you know the ratings of the trannies, it is best not to attempt this, even if you lower the supply voltage.
Are those class A amplifiers I see advertised really class A?
There is much debate raging in the marketplace about "class A" amplifiers, and whether or not they are truly class A, or just class AB amplifiers unscrupulously marketed to the unsuspecting public as "class A". The truth is that most, if not all, are in reality cathode-biased, non-negative feedback class AB amplifiers, contrary to what the manufacturer's literature may say.
What is the difference, then, and why is it a problem for so many people?
The fundamental problem is in how class AB is defined, and how people interpret it. The people who say a class AB amp is "class A at lower volumes" are technically wrong, but for understandable reasons: if class A were defined only as plate current flowing for a full 360-degree phase angle, at whatever level the amp happened to be playing, they would be correct. However, there is more to the definition of amplifier classes than that.
The defining factor in determining whether an amplifier is class A, class AB, or class B *has* to be evaluated at the full output before clipping; otherwise, the class definitions have no meaning whatsoever. It is, indeed, a very black-and-white thing, and depends on the bias point on the characteristic curves and the load line, among other things.
If, at the full undistorted output, the plate current flows in each tube for a full 360 degrees of the input conduction cycle, the amplifier is class A. However, if the amplifier is biased such that the plate current cuts off for an appreciable time during each cycle at this full undistorted output power, it is then a class AB amplifier. If it is biased such that each side is in cutoff for half the input cycle, it is a class B amplifier. Note that cutoff does not mean that the output of the amplifier is clipped, or distorting. Cutoff refers to plate current cutting off on one side of a push-pull pair for a portion of the cycle, while the other side continues to function. The output waveform is still a clean, unclipped sine wave, because the transformer sums the two "halves" of the input signal into one composite signal. Effectively, one tube amplifies the "upper half" and the other tube amplifies the "lower half". This is done to provide higher efficiency and greater output power. In a class AB amplifier each tube amplifies a bit more than half the signal, in order to reduce the distortion that occurs at the zero crossings of the waveform, which is called "crossover distortion".
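The class boundaries above, always evaluated at full unclipped output, reduce to a simple rule on each tube's plate-current conduction angle. This is a sketch of the definitions themselves, not of any measurement procedure:

```python
def amplifier_class(conduction_angle_degrees):
    """Map each tube's plate-current conduction angle, measured at full
    unclipped output power, to the operating class."""
    a = conduction_angle_degrees
    if a >= 360:
        return "A"   # current flows for the entire input cycle
    if a > 180:
        return "AB"  # cuts off for part of the cycle, but more than half flows
    if a == 180:
        return "B"   # exactly half the cycle per tube
    return "C"       # less than half (RF use, not audio)

print(amplifier_class(360))  # A
print(amplifier_class(300))  # AB
print(amplifier_class(180))  # B
```

Note that a class AB amp playing quietly has a momentary 360-degree conduction angle, but its class is defined by the angle at full undistorted output, which is why "class A at lower volumes" is a misnomer.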
Here is where the problem comes in: because a class AB amplifier is biased so that the plate current flows for the entire cycle at lower output levels (which is done to reduce crossover distortion), many people claim it is a "class A amplifier at lower volumes". This is simply not true. It is operating in conditions *similar* to class A, but is not a class A amplifier by any means. It is still a class AB amplifier, no matter what you choose to call it.
Now, what are the differences, you might ask? Well, for one, the class AB amplifier is biased in a more non-linear portion of the characteristic curves, which means it has more distortion than a true class A amplifier. Also, the efficiency will be greater than is theoretically possible with a class A amplifier at these levels. There is a very real difference in tone and operating conditions between a true class A 10W amplifier running at, say, 1W, and a 10W class AB amplifier running at 1W. Same output level, same overall power level, *but* a different class of operation, a different amount of distortion, different efficiency, *and* a different tone, even though neither one of them is in cutoff for any portion of the output cycle at that low level. This is due to the bias point differences and load line differences. The differences become even more apparent when the amplifiers are run at their full undistorted output power. The true class A amplifier will have no crossover distortion, while the class AB amplifier will. The average plate current for the true class A amplifier will not change, or will change very little, from idle to full output power, while the average plate current in a class AB amplifier will increase dramatically. This will lead to "sag" in the power supply that doesn't exist in the true class A amplifier, which again results in a tonal change.
As you can see, there is indeed such a thing as a "true class AB" amplifier, just as there is a "true class A" amplifier, and the class definitions are not at all ambiguous, except to those who don't understand them, or choose to ignore them for marketing advantage.
One more thing: What if you push the class A or class AB amplifier into clipping? Does it then become a class AB, B, C, or D amplifier? No, of course not. It is simply the same class amplifier it was to begin with, but driven into clipping. A class A amplifier driven to clipping is still a class A amplifier by definition. This is why amplifier classes are defined the way they are. Otherwise, the class designations would have no meaning. Any amplifier can be driven beyond its limits into a fully-clipped square wave output (unless it is limited), but that doesn't make it a class D switching amplifier, now does it?
Which one to buy?
The bottom line is this: don't worry about whether an amp is "class A" or not. If you are interested in details, find out if it is cathode-biased or fixed-biased, whether it uses global negative feedback, whether it uses a pentode, triode, or ultralinear output stage, and what type of output tubes are used. These parameters will give an idea of the "feel" of the amp, but in the end, you still must play the amp and use your ears to tell you which one is best suited for your playing style. Don't make a decision based on technical specs alone; you may miss out on a great-sounding amplifier!
--------------------------------------------------------------------------------
Copyright © 2000, 2001, 2002, 2003, 2004 Randall Aiken. May not be reproduced in any form without written approval from Aiken Amplification.
Revised 10/17/04