Elliott Sound Products - Impedance
We hear so much about damping factor, the effect of speaker leads (and how much better this lead sounds compared to an 'ordinary' lead), and how amplifiers should have output impedances of micro-Ohms to prevent 'flabby' bass and so on. But what does it all really mean?
Before an informed judgement can be made, we need to look at some of the real factors involved. There are a multitude of impedances involved in a typical amplifier to loudspeaker connection, most of them having a vastly more profound effect than the amp's output impedance alone.
For example, I have demonstrated a modified version of the 60W amp, which used current drive output. This version has an output impedance of about 200 ohms, giving a damping factor of 0.04 - effectively zero. The most common reaction was "Wow, that sounds just like a valve amp!".
Not really true, since valve amps actually have a damping factor of at least 1.5 or so, and exhibit other characteristics which are very difficult to emulate with transistors or MOSFETs, but this was the overwhelming verdict on the amp's sound.
Indeed, my very own (tri-amped) hi-fi uses an amp for the bass and mid with a designed output impedance of about 2 Ohms. This provides a useful extension of the bottom end (I'm using sealed enclosures), without any peaking at resonance. To some, this is nonsense - how can you have tight bass with no damping? Easy, damp the enclosure properly, and don't expect someone else (the amp) to do it for you.
For the purpose of this article, I shall assume an amplifier of 100W into 8 ohms nominal load, which is typical enough for a reasonable system.
It all starts with the amplifier. An amplifier will always have a defined (and measurable) output impedance, and this will vary depending on frequency. In some cases during measurement, it may appear to change with load impedance too, but that is most likely caused by protection circuitry. In many cases, measuring the output impedance is extremely difficult, but the majority of amps will give sensible results.
With commercial products, the output impedance is often quoted, but rarely with any further information. To be meaningful, the impedance should be quoted at a specific frequency, or preferably over a range of frequencies. It would be helpful if the power level were stated as well, but this is not often the case. If you really want to know, you will have to measure it yourself.
Although relatively easy to do, there is some risk involved, so never undertake this process unless you know exactly what you are doing, and accept the risk that you may damage the amp if you make a mistake. You will need a resistor of known accuracy, with a resistance equal to the amplifier's stated load impedance. It must be capable of dissipating either the amplifier's full output power, or a level of power that you will not exceed. A 10W wirewound resistor is recommended, preferably non-inductive for broadband accuracy.
For this exercise, we shall assume the amp referred to in the introduction - 100W into 8 Ohms. An 8.2Ω 10W resistor will be fine for the basic test. The amplifier output voltage should be kept below 9V RMS at all times (9V and 8.2Ω is 9.8W - within the resistor's rating). Measure the resistor's actual value - do not use the stated value, as this could have an error of up to 10%. For the exercise, I will assume the resistor is exactly 8.2 Ohms.
Set a signal generator to the desired test frequency, and apply the signal to the amp input. Adjust the level until you have a convenient voltage below the 9V maximum. Measure the voltage carefully. Call this V1 (let's assume 5.00V).
Now, connect the load resistor directly to the amp terminals, and measure the voltage again. Do this accurately and quickly - the resistor will get hot quite quickly if the voltage is anywhere near the 9V maximum. Call the loaded voltage V2. It should be less than V1. (Assume 4.97V).
Now, you can calculate the amp's impedance (I will do this the long way, as it's far easier to remember) ...
I = V2 / R = 4.97 / 8.2 = 0.606A
The amplifier is most easily represented as a perfect voltage source having zero Ohms impedance, with a series impedance that we see as its actual output impedance. Knowing the voltage drop in the amp's internals (Vd) and current (I), we can calculate the output impedance (Zo) ...
Vd = V1 - V2 = 5.00 - 4.97 = 0.03V
Zo = Vd / I = 0.03 / 0.606 = 0.049Ω, or 49 milliohms
The so-called 'damping factor' is simply the speaker impedance (ZL) divided by the amp's output impedance ...
DF = ZL / Zo = 8 / 0.049 = 163
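If you would rather let a computer do the arithmetic, the whole calculation is only a few lines. The following is a minimal sketch (in Python) using the assumed figures from above - the small difference from 163 is simply rounding:

    V1 = 5.00          # unloaded output voltage (V RMS)
    V2 = 4.97          # loaded output voltage (V RMS)
    R = 8.2            # measured load resistance (ohms)

    I = V2 / R         # load current - about 0.606A
    Vd = V1 - V2       # voltage 'lost' inside the amplifier - 0.03V
    Zo = Vd / I        # output impedance - about 0.049 ohms
    DF = 8 / Zo        # damping factor for an 8 ohm speaker

    print("Zo = %.3f ohms, DF = %.0f" % (Zo, DF))   # Zo = 0.049 ohms, DF = 162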
This test will usually show the highest damping factor (lowest output impedance) at low frequencies, with the impedance rising from around 500Hz upwards. The actual frequency depends a lot on the amplifier's topology - it may be as low as 100Hz, or as high as several kHz. It is instructive to perform the same test at the end of one of your speaker cables (being very careful not to short them during the test). You'll find that the damping factor is a great deal lower than expected, but provided it exceeds about 20 (0.4Ω total resistance) you have nothing to worry about.
In some cases (rare, but it can happen), you will find that the voltage increases slightly when the load is applied. This condition is usually an indicator of bad internal wiring practice, and the amp has a small amount of negative impedance. While rather uncommon, I have observed this effect on a commercial product. I don't recall what it was, as this was some time ago.
Since negative impedance is an inherently unstable state, it is remotely possible that some misguided soul thought it was a good idea. It isn't. With the exception of special-purpose amps where there is a need to solve a specific problem, negative impedance is something to be avoided. See the article Variable Impedance Amplifiers, which covers this exact topic, for full details.
After the amplifier, we have the speaker lead itself. This has become (for reasons which I must confess I find entirely obscure) a subject of great controversy. There is actually nothing controversial about a piece of wire, despite the manufacturers' assertions to the contrary.
The purpose of the speaker lead is to carry the output current from the amp to the loudspeaker, preferably with as little power loss as possible. This implies that the lead should be capable of carrying currents of perhaps 5 amps or so for a distance of typically 2 metres. This is hardly a difficult task - the mains lead to a 2400W electric heater carries a continuous current of 10 Amps (assuming 240V operation), and usually manages this with little loss. The house wiring manages to carry this current from the switchboard through to the power outlet for far greater distances, still with relatively little loss. Then, of course, there's the cable run between the switchboard and the power station - ????
Some of the more expensive speaker leads (costing 10s or 100s of dollars per metre) are capable of carrying currents far greater than can ever be drawn by a loudspeaker connected to our 'typical' amplifier.
At the very worst, a nominal 8 Ohm speaker system may have an impedance at some frequencies (usually at or near the crossover frequencies) of perhaps 4 Ohms - some may be lower than this, and I would suggest that in such cases the designer should return to the drawing board forthwith, since a truly monumental error has been made.
So, let us assume that the truly monumental error has been made, and look at the current the amp might need to supply. To make life simple (for the time being) we will look at the maximum possible power on a continuous basis - this will never happen, but it is useful as a comparative figure. The figures below are rounded, and assume that the power supply is actually capable of providing the currents quoted - this is highly unlikely.
Impedance | Power | Voltage | Current
8.0 ohms  | 100 W |  28 V   |  3.5 A
4.0 ohms  | 200 W |  28 V   |  7.0 A
2.0 ohms  | 400 W |  28 V   |   14 A
1.0 ohm   | 800 W |  28 V   |   28 A
These figures take things to extremes - an 8 Ohm loudspeaker which falls to 1 Ohm would have to be the most ill-conceived possibility imaginable, but we will stay with this, since it is useful to demonstrate the point, and there is probably more than one of them out there already.
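The table is nothing more than Ohm's law at a constant voltage. A short sketch (Python, using the rounded 28V figure) reproduces it:

    V = 28.0                       # output voltage for 100W into 8 ohms (rounded)
    for Z in (8.0, 4.0, 2.0, 1.0):
        P = V * V / Z              # power (W)
        I = V / Z                  # current (A)
        print("%.1f ohms: %.0f W, %.1f A" % (Z, P, I))
    # The 8 ohm row gives 98W because 28V is itself rounded (28.28V is exact).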
Now, let us imagine a speaker cable with a resistance of 0.1 Ohm - for a 2 metre cable, this will be the so-called 'speaker' cable that is a very thin figure-8, and rated at 5A (low voltage only).
At the lowest impedance of 1 ohm, the poor amplifier will be attempting to supply a maximum current of 28A at one isolated frequency - but will it? The likelihood of this one frequency being driven to full power is extraordinarily remote, but we will use it anyway.
Now we have 28V across 1.1 ohms, so the current is reduced slightly to about 25A, and about 2.5V will be lost across the cable at this frequency. The amplifier is only attempting to deliver a mere 650W to the load, with the cable dissipating the rest as heat. The 0.8dB drop in level is nothing compared to the agony the amplifier will be suffering, with the real possibility of second breakdown in the output devices (which does not always result in the instantaneous destruction of the transistors) - can you imagine what that would sound like?
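The losses are easily checked with the same sort of sketch, assuming the 0.1 ohm cable and 1 ohm worst-case load described above:

    import math

    V = 28.0                        # amplifier output voltage (V RMS)
    R_cable = 0.1                   # cable resistance (ohms)
    R_load = 1.0                    # worst-case speaker impedance (ohms)

    I = V / (R_cable + R_load)      # about 25.5A
    P_load = I * I * R_load         # about 648W delivered to the load
    P_cable = I * I * R_cable       # about 65W heating the cable
    loss = 20 * math.log10(R_load / (R_cable + R_load))   # about -0.8dB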
A small amount of resistance is inevitable - there is no super-conducting speaker cable available as yet, so you are always going to have a small loss. It will require very thin wire and a very low speaker impedance before you even lose 1dB, so I believe we can dispose of that myth (note that this is a very simplistic look at the issue - there is a lot more involved). For a more in-depth discussion of this topic, see the various cable articles.
In particular, cable inductance can have an audible effect. Even a small amount of inductance will cause significant losses at the lowest impedances and high frequencies. Actual AC losses in any cable are the result of resistance, inductance and skin effect. The latter is generally considered to be minimal at audio frequencies, but resistance and inductance are always present, and always have an effect. At issue is not whether the effects are measurable (always), but whether they are audible (sometimes).
Having established that resistance within reason is not an issue as regards power loss, there is damping factor to be considered. Quite apart from the fact that 'damping factor' is a rather ill-conceived term, we need to look at the reality of what the amplifier is capable of controlling within the speaker cabinet.
According to some, damping factor should ideally be infinity +/- 3dB so the amplifier is in total control of the speaker. What utter rubbish. The loudspeaker is an electro-mechanical device, and its sensitivity to external impedance is easily tested. Indeed, if the amp were in total control, you would not be able to move the cone at all by pressing it while the amplifier is connected (and turned on). Motional feedback can be used, but that's a completely different approach and will not be covered here.
Try this experiment. Disconnect the speaker leads from the speaker, and tap the cone lightly with your fingers - listen very carefully to the character of the 'thump' of the speaker. Now, connect a short piece of stout wire directly between the speaker terminals - this will have a resistance so low as to be considered 0 Ohms.
Tap the cone again, the same way as before. Can you hear a difference? I would expect that you can, because if not the speakers are either so well damped internally that the amplifier's contribution is irrelevant, or there is so much internal resistance that no amplifier can save them.
Now, remove the short, stout wire, and replace it with a 4 metre length of bell wire or telephone cable. Tap the speaker cone again, and listen carefully to see if there is any difference between the sound with the 'dead short' versus the piece of rubbish you just connected, which might typically have a resistance of 0.5 Ohm (maybe more).
Could you hear any difference? If you have access to an AC millivoltmeter, try measuring the voltage across the speaker terminals as you tap the cone. You will typically find that very little change is evident in the character of the sound between the dead short and the bell wire, and only a small voltage (a few millivolts) will be indicated.
This is a rather extreme test, and it is quite possible that a very slight difference in 'tonality' will be observed. It is equally possible that you will not be able to hear any difference whatsoever, despite the fact that the damping factor has been reduced to a small fraction of the value it had with the dead short created by the stout wire. I suspect that most people assume that the back EMF from a driver is much higher than it really is. As an experiment, I attached two drivers together - face-to-face. Since both have a rubber roll surround and this made contact, the coupling between the cones was very good indeed.
The driven speaker was a Vifa P13WG, 8 ohm - the driving speaker was the same size, but an unknown brand. The driving speaker was driven with 2.6V RMS at 175Hz, and considerable cone movement was visible for both drivers. The open-circuit voicecoil voltage was measured at 670mV, and short-circuit current at 76mA. The Vifa voicecoil was able to generate 341mV into an 8 ohm load (43mA). So with a nominal 845mW driving power, even direct-coupled cones could manage only about 15mW of output. Back EMF (and the resultant current) from drivers in a properly damped enclosure will normally be a lot smaller than expected.
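The quoted measurements reduce to a source impedance and delivered power readily enough. A sketch, using only the figures given above:

    V_oc = 0.670          # open-circuit voicecoil voltage (V RMS)
    I_sc = 0.076          # short-circuit voicecoil current (A RMS)
    V_load = 0.341        # voltage into the 8 ohm load (V RMS)
    I_load = 0.043        # current into the 8 ohm load (A RMS)

    Z_source = V_oc / I_sc          # about 8.8 ohms - close to the nominal 8
    P_delivered = V_load * I_load   # about 14.7mW into the load
    P_driving = 2.6 ** 2 / 8        # about 845mW into the driving speaker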
Even the best amplifier in the world will have some impedance, as will the most expensive cable. These can actually be 'removed' completely by building an amplifier with a negative output impedance, but having tried this approach, I can assure the reader that a horn compression driver is the only speaker I have ever tested which sounded better with negative impedance. All other speakers, whether horn loaded, sealed or vented box sound universally dreadful with negative impedance.
Negative impedance is exactly what it sounds like - the amplifier is modified so that the voltage output rises with increasing load. This is surprisingly simple to implement, but speakers seem to hate it - most seem to prefer a small but measurable amount of positive impedance. This is the equivalent of using a cheap speaker lead, but has other implications - there is no loss of power due to the resistance of the lead, and high frequencies are far better since there is none of the capacitive loss incurred by the really cheap speaker lead.
Have you ever wondered what it is about valve (vacuum tube) amps that has many audio enthusiasts drooling over the latest - usually extremely expensive - offering from this manufacturer or the other?
They will wax lyrical about the extended bass, and the sweetness of the highs etc. etc. One of the things about valve amps is their relatively high output impedance - this may be up to about 6 Ohms for an amp without any feedback, and will rarely be less than about an Ohm (8 Ohm output selection is assumed). The damping factor is obviously grossly inferior to that of a transistor amp (although some of the premium amps of the late 60s and early 70s came very close indeed), but the sound quality is generally considered more musical, or 'sweeter' by a great many enthusiasts.
There are many more factors involved than simply the output impedance, but if it were as important as some would have you believe, then valve amplifiers would be universally condemned for their poor low frequency performance - terms such as 'woolly', 'muddy' and 'over-emphasised' spring readily to mind. Instead, we can read reports where reviewers have praised the low frequency performance, claiming an extra 1/2 octave or so of bass extension. This is in spite of the fact that very few valve amplifiers can actually make it down to 20Hz before rolloff and output transformer saturation distortion become readily observable.
Well before transformer saturation distortion and other undesirable effects make themselves known, the output impedance rises. This is at the very frequencies where damping factor is supposedly most important. Yet these amplifiers continue to receive rave reviews from listeners - we must conclude that the damping factor cannot be as important as is so often claimed. Could it be that all owners of valve amplifiers have tin ears, and couldn't tell the difference between a symphony orchestra and a bag of cat litter? This is possible, I suppose, but I do believe that it is somewhat unlikely!
I shall not go so far as to say that the myth is disposed of, but I believe that an extremely low amplifier output impedance is not as important as many people think it is, and that in some cases a small (preferably controlled) amount of deliberately introduced impedance is useful for correcting the characteristics of a loudspeaker driver whose Q is lower than desirable for the enclosure design used. Indeed, I have made such modifications to equipment - raising the output impedance of the amp so that a studio monitor driver could be matched more exactly to the enclosure, and this was at the request of the speaker designer.
Please see the article Effects Of Source Impedance on Loudspeakers for more information on this topic.
For the Thiele-Small Parameter Designers ...
This can be a useful trick if the total Q of a loudspeaker is too low (for example, because of a very low Qms - mechanical Q). Such a speaker is normally not suited to a vented box, but by raising the total Q (Qts) by means of increased output impedance from the amp (thus raising Qes), a very satisfactory result may be had.
An experimental plot of a Dynaudio 24W-100 driver shows that an extra full octave is obtainable by raising Qes to 0.9 (from the quoted 0.45). Admittedly, the box required becomes somewhat large, but this has always been the price we pay for extended bass anyway. A similar test on another loudspeaker with a very low Qts showed identical results, with a full extra octave obtainable at the low end, simply by raising the Qes of the driver. The results are shown in Figures 1 and 2.
Figure 1 - Response Of Driver With Qes=0.364
As can clearly be seen, the lower -3dB frequency is about 35Hz which, although acceptable, can be improved. Box size is 61 litres. By raising Qts (using the expedient of increasing Qes with a defined output impedance), we can improve matters in the low end department ...
Figure 2 - The Same Speaker, With Qes=0.6
The lower -3dB frequency is now about 18Hz, which represents an extra octave of bottom end. Box size is now 271 litres (just a tad on the large side), but the principle is sound nonetheless. It is worth noting that these two plots are optimised for the given parameters, with no further fiddling. BoxPlot (a very useful shareware program) was simply given the details, and made an optimised calculation from the information provided.
Modifying the output impedance is remarkably easy to do in any amp, but naturally the impedance selected must exactly match that required to increase Qes by the desired amount. If it is greater (or less) than optimum, then the box alignment is no longer valid, and bottom end response is unpredictable (at best). So too with boxes which have been designed with a zero Ohm source, since this will never be achieved in practice.
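As a first approximation, Qes scales with the total series resistance in the electrical circuit (voicecoil resistance Re plus the amp's output impedance), so the required output impedance can be estimated as below. Note that this relationship and the Re value are illustrative assumptions, not figures taken from the driver data above:

    Re = 6.0             # voicecoil resistance (ohms) - an assumed typical value
    Qes_old = 0.364      # quoted Qes (as in Figure 1)
    Qes_new = 0.6        # target Qes (as in Figure 2)

    # Qes is (approximately) proportional to Re + Rout ...
    Rout = Re * (Qes_new / Qes_old - 1)   # about 3.9 ohms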
Speaker systems are rated for a nominal impedance, which will typically be 8 or 4 ohms for hi-fi systems. Many of these claimed impedances are very misleading, since the actual impedance varies widely with frequency. A vented enclosure will show two impedance peaks at low frequencies - the driver and enclosure resonance provides the upper peak, and the vent and enclosure resonance the lower peak. Sealed enclosures exhibit only the single speaker/cabinet resonance peak, since there is no vent.
Then there are impedance dips, where the actual impedance may fall to 1/2 (or less) the rated impedance. These are nearly always caused by the crossover networks, and can impose a significant reactive load on the power amplifier. It is very difficult to design the perfect passive crossover network, which is a very good reason to use a bi-amplified system with a good quality electronic crossover network, whose characteristics are far more easily controlled than any passive design. (See Bi-Amplification - Not Quite Magic (But Close))
The reactive load imposed on the amplifier by a speaker causes far higher dissipation in the output transistors than the simple resistive load generally assumed during testing. At the extreme, consider a load which is completely reactive (i.e. inductive or capacitive). The voltage and current are 90 degrees out of phase with each other, and no power is consumed by the load - even though there is voltage and current present (and measurable). Assuming a voltage of 20V and a current of 2A, the actual power is zero, so the amplifier must dissipate not only the normal internal losses inherent in all power amplifier designs, but also the 40 volt-amps reflected back from the reactive load. (Volt-amps - or VA - is equivalent to watts only when the load is resistive, implying that work is performed.)
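The real power is given by P = V x I x cos(phi), where phi is the phase angle between voltage and current. A sketch using the 20V / 2A example:

    import math

    V = 20.0    # RMS voltage across the load
    I = 2.0     # RMS current through the load
    for phase in (0, 45, 90):    # phase angle (degrees)
        P = V * I * math.cos(math.radians(phase))   # real power (W)
        print("%d degrees: %.1f W real, %.0f VA apparent" % (phase, P, V * I))
    # At 90 degrees the real power is zero, yet the amplifier must still
    # handle the full 40VA.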
In reality, the reactance is always accompanied by some resistance, so the amount of power converted into work (moving the loudspeaker cone to create sound) will always be non-zero. An additional quantity of the supplied voltage and current is converted into heat (another form of work) due to resistive losses in the voice coil and crossover network. The reactive (also known in electrical engineering as the imaginary) component is reflected back into the output of the amplifier, where it must be absorbed and converted into heat.
It is the reflected power from the loudspeakers which is responsible for a great many amplifier failures. Because of the low efficiencies of most modern speaker systems, more power is needed from the amplifier. This means that the amp will have to dissipate more reflected power and this can lead to overheating (or internal 'hot-spot' localised heating) which leads to the destruction of the output transistors. Some amplifier protection systems are sufficiently sophisticated that they can prevent this form of damage completely, but will generally provide an additional side-effect - the deterioration of sound quality. This is often noticed as a 'grainy' or similarly described quality to the sound, and is difficult to eliminate when protection is used. A certain IC power amplifier I have tested has a very comprehensive protection circuit, which seems to work very well. However, as the amp reaches the point of clipping, the distortion component is multiplied tenfold by the protection circuit, with the result that what should be completely inaudible distortion becomes very audible indeed.
It will come as no surprise that I am not a fan of protection circuits in amplifiers, for this very reason. A little care on behalf of the listener to ensure that the amp has sufficient power for the highest listening level desired (plus a bit more for safety), together with being sensible and not shorting the speaker leads, means that protection circuits can be dispensed with in a hi-fi system. This is not the case with lo-fi (Ghetto-blasters and the like), but for true high fidelity sound systems active (or 'real-time') protection should be avoided. Of course, if the protection system is carefully designed so it protects the power transistors without impinging on the sound, it should be incorporated if it doesn't make the design too complex.
Speaker protection circuits come in a variety of flavours, the most common being fuses, 'poly-switches' and DC offset cutout relays. Looking at each in turn is useful.
Fuses - A fuse is the simplest protection device for speakers, but is not very effective. The fuse is usually (or sometimes!) capable of protecting a speaker, but is quite incapable of protecting the amp - the transistors will almost invariably blow far quicker than any fuse. The fuse will prevent the fault current from the amp from blowing (or setting fire to) the speakers, provided that it is correctly rated. Basic protection is all that is really offered. While not usually considered, a fuse can introduce a small amount of distortion!
Poly-Switches - These are basically PTC (positive temperature coefficient) thermistors, which are normally low impedance, but when their temperature reaches a preset value they go into a high impedance state - thus limiting the current to a safe value. Because they are essentially non-linear devices and have some resistance even when not activated, I cannot recommend them for hi-fi applications. Their internal resistance may adversely affect damping factor, but the fact that they are non-linear means that some degree of distortion will likely be introduced by their inclusion. The added distortion is bound to depend on how close to their limits they are being run, but it is unlikely to be negligible. When we are striving for amplifiers with less than 0.01% distortion, it takes very little non-linearity to raise that by an order of magnitude.
DC Offset Relay - This is one item that has no bad habits, provided that the relay contacts are capable of handling the output current of the amplifier without overheating. This is not as trivial as you might think, since few small (and/or reasonably priced) relays have a suitably high current rating, and many do not have sufficient contact separation to break the arc created by 50V DC being dumped into a speaker load.
With some of the relays I have seen used, one can almost guarantee that the relay will be destroyed if it were to be called upon to do its duty. However, the basic idea is sound, and introduces no distortion or other undesirable effects to the final signal. With a little extra circuitry, the relay can also be called upon to open under other fault conditions (such as excessive output current), and the only characteristic will be that the sound cuts out completely while the fault is maintained. This is a better solution than trying to provide active protection with its resultant possible sonic degradation. Project 198 is a MOSFET relay that can break any DC current, and with careful MOSFET selection it will introduce close to zero distortion.
If you are into RF (radio frequency) design, or perhaps telecommunications, impedance matching is an important topic. For hi-fi and professional audio, it is a meaningless concept and will actually cause an increase in noise. It has often been claimed that a 600 ohm microphone should be matched to a 600 ohm input for best performance. This is simply wrong, and microphone manufacturers' specifications will support me on this.
Imagine a 600 Ohm microphone with an open-circuit output voltage of 5mV. If the mic preamp has an input impedance of 600 Ohms, the microphone output is reduced by 6dB to 2.5mV because of the simple voltage divider created. It helps to use the engineering 'model' for a signal source of any kind, which is basically a 'perfect' (meaning zero Ohms impedance) voltage generator, with a resistance + inductance or capacitance (sometimes in combination) in series with the output.
If the output is loaded, then the available voltage from the source drops, and that in turn means that more amplification is needed to obtain the final voltage needed. If the output is reduced by 6dB, this means that an additional 6dB of gain is required to compensate - therefore the circuit will have 6dB more noise.
The ideal for a microphone is to use a high impedance input, but this creates other problems, so a compromise is needed. Typically, a good mic preamp (for microphones of up to 600 Ohms) will have an input impedance of between 1.2k and 3k Ohms. This causes far less loading, and does not cause any problems for the microphone.
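The loading loss is a simple voltage divider. A sketch of the 600 ohm microphone example, using the input impedances suggested above:

    import math

    V_oc = 5.0           # microphone open-circuit output (mV)
    Z_mic = 600.0        # microphone source impedance (ohms)
    for Z_in in (600.0, 1200.0, 3000.0):
        V_out = V_oc * Z_in / (Z_mic + Z_in)
        loss = 20 * math.log10(V_out / V_oc)
        print("%.0f ohm input: %.2f mV (%.1f dB)" % (Z_in, V_out, loss))
    # 600 ohms loses 6dB, 1.2k about 3.5dB, and 3k only about 1.6dB.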
Generally, it is desirable that the output impedance is low, and the destination impedance high, and this is the case with the majority of modern equipment. Preamps usually have an output impedance of less than 1k Ohm, and power amps will have an input impedance of at least 10k Ohms, but more commonly 22k or 47k.
So why is impedance matching important for RF and telecommunications? The reasons are completely different, as we shall see.
Radio Frequencies: When an RF voltage and current are transmitted along a wire, the impedance of the cable itself becomes significant, and for any distance that is 'significant' - which is to say any distance greater than about 0.1 of the signal's wavelength - matching is necessary. The wavelength is calculated from the speed of light (3 x 10^8 m/s, or 300,000km/s) multiplied by the velocity factor of the cable. This varies from about 0.7 up to 0.9 depending on the dielectric constant of the inner insulator and cable construction, meaning that a signal travels more slowly in a cable than in free air or space.
Wavelength = C / f (where C = velocity and f = frequency)
A 1MHz signal travelling in a typical coaxial cable (velocity factor of 0.8) will have a wavelength of ...
Wavelength = (3 x 10^8 x 0.8) / (1 x 10^6) = 240m
Based on this, any attempt to transport a 1MHz signal further than about 24m will start to cause problems unless the send and receive impedances are properly matched - not only to each other, but to the cable as well.
In the hi-fi audio world, this is not an issue, since 1MHz is 50 times the highest frequency we can hear, and few instruments create appreciable harmonics above 20kHz anyway. In theory, we could send a 20kHz audio signal about 1.2km (0.1 of its 12km wavelength in cable) without having to worry about impedance matching, although at extreme line lengths matching can reduce high frequency signal losses. To understand the reasons is beyond this article, as it involves transmission line theory - not one of the easiest concepts to grasp.
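The same sums in sketch form, for both the 1MHz example and the 20kHz top of the audio band:

    C = 3e8          # speed of light (m/s)
    VF = 0.8         # cable velocity factor (the example figure above)
    for f in (1e6, 20e3):
        wavelength = C * VF / f        # wavelength in the cable (m)
        critical = 0.1 * wavelength    # distance beyond which matching matters
        print("%.0f Hz: wavelength %.0f m, matching needed beyond %.0f m" % (f, wavelength, critical))
    # 1MHz: 240m wavelength, 24m. 20kHz: 12,000m (12km) wavelength, about 1.2km.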
Impedance matching is a requirement in telecommunications networks, but not for any of the reasons you might think. It is actually rare for an analogue phone line to run more than about 5km from the exchange (Central Office in the US) to the user's location.
Impedance matching is required to enable the hybrid - a circuit that allows simultaneous transmit and receive on a single pair without interference - to function properly. If the impedances are not properly matched, you will hear too much of your own voice when you speak, the far end speech will be too soft, and both parties will very likely get lots of echo on the line. This became a major problem when satellite systems were used for international calls. The time delay was very noticeable, and if it was accompanied by an echo, it was most disconcerting.
Because of the distances involved, the telecommunications network is balanced to prevent noise pickup. The telephone cabling uses twisted-pair conductors, without a shield.
The conventional hi-fi connection is unbalanced. One conductor is earthed (the shield), and the other carries the signal. Except that this is rubbish. The shield also carries the signal, since without a return path, we have an 'air-gap' problem, and no signal will pass - other than general noise and maybe a tiny bit of the desired signal due to capacitive coupling between the two circuits.
This unbalanced signal is actually fine for short distances - few hi-fi interconnects will be longer than a metre or so. Provided the source impedance is low, there will be very little noise introduced, and the signal will pass unscathed from one piece of equipment to the next.
However, if 10mV of noise were to be picked up along the way, then this is added directly to the wanted signal, adding hum or other undesirable noises to your music.
In contrast, a balanced connection uses two wires, one carrying the normal signal, and the other usually an inverted (but otherwise identical) version of the signal. At the other end, the two signals are recombined. Any noise that was picked up along the way will have the same polarity on both wires, and is not 'seen' by the receiving equipment. Only signals that are in 'anti-phase' (of opposite polarity) are amplified. Note that it is not necessary to have signal on both 'signal' leads. The noise cancellation works on the basis of any noise being on both signal lines, but the signal itself does not need to be balanced.
This noise on both wires is called common-mode noise, and a properly balanced circuit will amplify the wanted signal, but reject common-mode noise by as much as 60dB (i.e. 1/1000th of the noise gets through). This is known as common-mode rejection ratio, and is a figure quoted by all opamp manufacturers for the opamp inputs.
If the same 10mV of noise were picked up by a balanced cable, only 10uV of unwanted signal will be present at the output of the amplifier if the common mode rejection is 60dB. To look at it in a different way, the signal leads will have to 'collect' 10 Volts of noise to allow a 10mV noise signal to pass.
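The arithmetic, as a sketch:

    noise_in = 10e-3              # 10mV of common-mode noise on the pair
    cmrr = 10 ** (60 / 20)        # 60dB expressed as a voltage ratio (1000:1)
    noise_out = noise_in / cmrr   # 10uV reaches the output
    # Working backwards, 10V of common-mode noise would be needed before
    # 10mV appeared at the output.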
Figure 3 - Noise Response of Balanced Vs. Unbalanced Transmission
Figure 3 shows how the noise is cancelled in a balanced circuit. The noise pulses are applied to both leads - it is important that the leads are twisted together, to ensure that any noise is picked up by both. Because the signal is 180 degrees out of phase and the noise is in phase, only the out of phase signal is allowed through the amp. Any in-phase signal (common mode) is rejected, cancelling the noise signal almost completely.
In professional audio work, balanced leads are used almost exclusively. These are always shielded as an additional protection against noise.
It has been suggested that hi-fi interconnects should be matched at both ends to the cable's characteristic impedance. The reason is that doing so will supposedly remove the echo. Que? Echo, from a 10 metre interconnect? The wavelength of a 20kHz signal in a cable is around 12km, so 10 metres of cable is utterly incapable of 'smearing' anything that we can hear.
This is sheer stupidity, serves no useful purpose whatsoever, and will overload just about every preamp ever made. A typical shielded lead might have a characteristic impedance of around 75 ohms. A 75Ω series resistor would then feed the cable from the preamp, and it's suggested that the far end (an amplifier perhaps) should also have a 75Ω resistor to earth (ground). Yes, the cable is now perfectly matched, but you've lost 6dB of signal level, and to restore it the preamp has to deliver double the normal voltage into a 150Ω total load. The preamp will clip due to the low impedance, and distortion will be increased dramatically.
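To put numbers on it, a sketch of the 75 ohm 'matched' interconnect described above:

    import math

    Z0 = 75.0                       # cable characteristic impedance (ohms)
    Rs = Z0                         # series resistor at the preamp end
    Rt = Z0                         # termination at the power amp end
    V_out = 1.0 * Rt / (Rs + Rt)    # half the preamp's voltage reaches the load
    loss = 20 * math.log10(V_out)   # -6dB
    Z_load = Rs + Rt                # the preamp sees just 150 ohms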
If (and this is not very likely) you needed to drive 12km interconnects, then matching will certainly help reduce high frequency losses. Otherwise, it's just another lunatic idea that will do nothing useful. By this reasoning (and I use the term loosely), perhaps power amplifiers should be terminated with their characteristic impedance too. 100 milliohm loudspeakers will match most amps, but will also cause them to fail instantly and spectacularly unless they have extensive short-circuit protection. Either way, you won't get to hear any echoes, because the system will be absolutely silent.
We have all seen advertisements for audio interconnects costing obscene amounts of money, and some of them actually do seem to sound better than 'ordinary' interconnects. There is no 'magic' here, these are often no more than a 'pseudo balanced' cable, where the shield is connected at one end only, and the signal is carried by a pair of wires within the shield.
This is hardly worth all the money the hi-fi shops ask, since you can make them yourself, with good quality cable and connectors available from any electronics dealer.
As for so-called 'directional' cable, this is utter rubbish. The only thing that is directional is the choice of which end the shield should be earthed at - send or receive. For cables having the shield connected at only one end, the answer is usually the receiving end (e.g. the power amp, when connected to a preamp). It is worth pointing out, however, that shielded cables should always have the shield connected at both ends, or noise reduction is impaired. For more on this topic, please see Balanced Interconnects.
How can a cable carrying an AC signal be directional? There are some proponents of the oxygen free copper concept who will claim that if there is oxygen in the copper, it will be as copper oxide, and that copper oxide is a semiconductor, semiconductors rectify, and will therefore introduce distortion. Prove it to me!
I defy anyone to produce concrete proof in any form that a cable is capable of introducing distortion - at any level. Even if we assume that there is some validity in the copper-oxide rectifier theory, all semiconductors have their forward conduction voltage (e.g. 0.65V for silicon) - I don't know what it is for copper oxide, but even if it were as little as 100mV (highly unlikely!), this would require that there was a 100mV or more difference between adjacent conductors (or molecules) for rectification to occur. Since the loss along a 1m length of signal interconnect cable will be a very small fraction of this, I (and many others) do not consider that this is in any way possible.
So, if there is no validity at all in spending $200 for an interconnect cable, why do people say how great they sound? Easy. If you had just spent that much and could hear no difference, would you be game to admit it to anyone? No, you will be tempted to try to convince yourself that you must be able to hear some difference, otherwise you just wasted $190, since a $10 cable would work just as well.
As a matter of interest, you might also want to have a look at my editorial page, which has some challenging things to say about $3000 mains leads (and no, that is not a misprint!).
It is perhaps obvious that I do not believe any of the hype about cables. My own system uses perfectly ordinary cable throughout, and I have no desire to change this in any way whatsoever.
I am utterly unconvinced by any claim that a cable (of reasonable size and construction) can make my hi-fi sound any different - never mind better, just different. I have never been able to measure distortion in a cable, and I know of no-one who has.
Certainly, some 'super' speaker cables (because of high inductance or capacitance) can make a power amp unstable or even oscillate, but this is invariably a bad thing, so I don't want any of that.
One final point - gold-plated connectors. They do not conduct any better than any other type (worse than some), and they do not introduce (or remove) a 'sound'. Gold is used because it does not oxidise, so the connections don't have to be jiggled about every so often to remove crackles or other mechanically induced noises (including distortion in some cases) - no more and no less than this. Solder joints to gold can simply separate for want of anything better to do at the time, and where an absolutely reliable connection is essential, the gold plating should be stripped off before solder is applied. The reason is that if gold is left on the component leads, an intermetallic layer forms; this layer can fracture and separate, causing a joint failure. Maybe not crucial in an amplifier, but in the engine controller for a 747 it could cause a serious problem.
I suggest a web search to locate suppliers of a suitable material for stripping the gold if you are concerned. I've not had a problem with it so far, but it is real.