
ELECTRONOTES 196

NEWSLETTER OF THE MUSICAL ENGINEERING GROUP

1016 Hanshaw Rd., Ithaca, NY 14850

Volume 20, No. 196                                December 2000

GROUP ANNOUNCEMENTS

CONTENTS OF EN#196

Page 2   Tuning Equations Derived From Passive Sensitivities - by Bernie Hutchins

Page 2   Analog Signal Processing Chapters 8, 9, and 10 - by Bernie Hutchins

This issue covers the final Chapters 8, 9, and 10 of Analog Signal Processing. These chapters go into some of the more unusual areas of analog signal processing: voltage-controlled filters (unusual, but not to readers of this newsletter), delay line filters (related to digital filters), and analog adaptive filters.

In going over this analog material one more time after so many years, I have been surprised just how much material there was - much of it I had forgotten. Another annoying thing was how many "problems left to the reader" I had left hanging! In many cases, as previously stated, the actual problem statements were fairly obvious. Yet in some others, as I went through, I did not remember exactly what I intended to ask, and in still more, I had forgotten just how the problems worked out. On the happy side, I did however stumble across a fair number of the problems which I had actually typed up, but forgotten.


Looking at the whole thing now, I think it may prove useful if, from time to time, we work out and publish some of these problems, so we can perhaps have an "analog signal processing corner" as a repeating department. This is the current plan. Just below, in this spirit, although not in response to any particular text problem, a follow-up on the passive sensitivity discussion from Chapter 7 is offered.

EN#196 (1)

TUNING EQUATIONS DERIVED FROM PASSIVE SENSITIVITIES

Passive sensitivity as discussed in Chapter 7 of Analog Signal Processing (EN#195) is important first of all in helping with our choice of one circuit configuration as compared to an alternative configuration that is nominally also capable of realizing a desired response. All other things equal (and this is itself a comparison that must be done very carefully), we choose the configuration with the lower sensitivity values. This use of passive sensitivity numbers is essentially "global" with respect to actual particular instances during production.

A second way to use passive sensitivity calculations relates to the actual "fine tuning" of individual instances (i.e., a particular circuit board off the production line). Suppose for example that we design a filter, choose a configuration, and construct 10 examples. Perhaps it is a low-pass with a nominal cutoff of 1000 Hz. Our global consideration of passive sensitivities assures us that we expect, perhaps, actual cutoff frequencies between 900 and 1100 Hz, and our tests indeed show measured cutoff frequencies well within this range. (In fact, we might well expect a balance between overvalued and undervalued components to keep us away from worst-case examples.) So nothing is unexpected or wrong.

(continues on pg. 37)

CHAPTER 8

VOLTAGE-CONTROLLED FILTERS

8-1  The Need for Voltage-Controlled Filters

8-2  The Multiplier as a Filter-Control Element

8-3  The Transconductance Multiplier

8-4  First-Order Voltage-Controlled Filters

8-5  Voltage-Controlled Integrators and State-Variable Filters


8-1  THE NEED FOR VOLTAGE-CONTROLLED FILTERS:

The filters that we have discussed so far, and with which we are likely otherwise already familiar, are fixed filters, and we have in mind that they would be constructed by taking the correct parts off the shelf and soldering together the correct circuit. Such filters find wide application in cases where the specification parameters of the job the filter is to do are well established and expected to remain constant.

If we think a bit, it is clear that we can make variable filters rather than fixed filters by making some of the filter's components variable. In particular, we wish to make the filter's time constants vary to change the characteristic frequencies of the filter. To do this, we need to have variable resistors, variable capacitors, or both. If we look at the range of component values we usually encounter, it is clear that we can easily find suitable variable resistors, but that suitable variable capacitors are unlikely. (Consider that a standard radio-tuning variable capacitor is already fairly large, and is only in the hundreds of picofarads range.) We can easily find potentiometers in the range of a few ohms up to several megohms.
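As a quick numeric illustration of how a variable resistor tunes a time constant, consider the familiar first-order cutoff f_c = 1/(2πRC). A minimal Python sketch, with an illustrative capacitor value (not from the text), showing the several-decade tuning range a potentiometer sweep provides:

```python
from math import pi

C = 10e-9  # a fixed 10 nF capacitor (illustrative value)

# Cutoff frequency of a first-order RC low-pass: f_c = 1/(2*pi*R*C)
def cutoff_hz(R):
    return 1.0 / (2 * pi * R * C)

# Sweeping a pot from 1 kilohm to 1 megohm spans three decades of cutoff
for R in (1e3, 10e3, 100e3, 1e6):
    print(f"R = {R:9.0f} ohm -> f_c = {cutoff_hz(R):10.1f} Hz")
```

This is exactly the manual-tuning picture: the resistor moves, the cutoff moves inversely.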

This opens up the idea that we can have variable or tunable filters. However, if we restrict ourselves to potentiometers as the variable resistors, we are talking about manual tuning. Typically, a potentiometer is what we find used as a radio volume control, and it has a knob to be turned up or down. In this view, tunability is a step in the right direction, but there are still many things that we can't do.

For example, we might have a tenth-order filter needing a ten-way ganged potentiometer to tune all ten of its frequency-controlling resistors. We might need quite an enclosure to house this control, and it would probably be somewhat difficult to turn the control shaft with the usual type of knob. Secondly, manual tuning implies that we have to have someone close by to turn the knob, so we could not expect to tune the filter remotely (on a space satellite for example). In addition, computers can conveniently handle numbers and even voltages, but not turn knobs. Still additional problems with manual tuning come up when we need to adjust a filter faster, or with more precision, than can be done manually.

All of these additional jobs can be done with voltage-control or other electronic tuning. We can control almost unlimited controlling elements in parallel, control remotely, control by computer, control with rapidly changing voltage waveforms, and set parameters to an accuracy far greater than is possible with a manual knob. Thus we see a step up from simple tunability as being voltage-tunability or voltage-control as it has come to be called.

Most of the quality work in Voltage-Controlled Filters (VCF's) has come through the efforts of engineers working on the design of electronic music synthesizers. In their desire to achieve a dynamically changing spectrum, on a time scale comparable with the shortest of individual notes expected, VCF's became one of the few possible approaches. It was found that excellent VCF's could be obtained if one


went about it in the right way. This "right way" seems to have been to first consider the control elements and structures that were possible, and then see what sort of filter could be made tunable with these controls. In general, simply taking your favorite active filter and trying to put in voltage-variable resistors is not a productive approach.

What was found was that one particular control element, the so-called transconductance multiplier, was a key control element, and that this was most useful in certain key structures. Fortunately, one of these structures was the first-order low-pass section, and another was the state-variable filter.

8-2  THE MULTIPLIER AS A FILTER-CONTROL ELEMENT:

A few devices (such as some configurations of FET's, and photoresistors) are useful as variable resistors, but only over a fairly limited range (no more than 10:1). A device that is more generally useful is some form of electronic analog multiplier. To see why this is, consider that a multiplier takes two input voltages (Vin and Vc) and produces an output voltage Vm as:

Vm = K Vin Vc                                   (8-1)

where K is some constant of the multiplier. Normally we expect that if a voltage Vin is properly applied across a resistor R that a current Vin/R flows through it. If instead the voltage Vin is multiplied according to equation (8-1), and Vm is applied to R, then current KVcVin/R flows, which is equivalent to applying the voltage Vin to a different resistor of value R/KVc. This is the general idea - that by scaling a voltage, we can make things look as though a resistor has a different value. This is not always the way things turn out in practice, however.

Fig. 8-1a  Successful VC Integrator
Fig. 8-1b  Unsuccessful VC First-Order Low-Pass
Fig. 8-1c  Fixed First-Order Low-Pass
Fig. 8-1d  Successful First-Order VC Low-Pass

[Circuit diagrams: multiplier, R, and C networks with Vin, Vm, Vc, and Vout labeled]


Fig. 8-1 shows four cases that serve as examples. Fig. 8-1a is a voltage-controlled integrator that does work. Fig. 8-1b and Fig. 8-1d represent an unsuccessful attempt, and then a successful attempt, respectively, at achieving a first-order voltage-controlled low-pass filter. Fig. 8-1a works by virtue of the fact that the lower end of the resistor R is at ground potential (a constant), so the only voltage that determines the current through R is Vin (for any fixed value of Vc). Thus a changing voltage looks the same as a changing resistance, considering the current that flows into the capacitor.

It is important to understand why the first-order low-pass of Fig. 8-1b does not work, since it at first sight seems so similar to Fig. 8-1a. In fact, the multiplier of Fig. 8-1b only changes the gain factor of the filter from 1 to KVc, but does not change the time constant.

This is obvious from just looking at the network as being composed of two parts: the multiplier to the left, and the very familiar first-order (fixed) low-pass to the right. In order to understand what is wrong, and how to fix it, consider that in the fixed first-order low-pass (Fig. 8-1c) the current through the resistor is not just a function of the input voltage Vin, but also of the output voltage Vout, as I = (Vin - Vout)/R. That is, the output gets to "fight back", and this is what is missing from Fig. 8-1b. Fig. 8-1d adds the missing feature.

Here we first take the difference (Vin - Vout), and it is this voltage that is scaled by the multiplier before it is applied to the resistor. In addition, the first-order low-pass is now made an integrator so that the lower end of the resistor R is always grounded.

In fact, it is best to analyze Fig. 8-1d directly. The output of the multiplier is:

Vm = KVc(Vin + Vout)                            (8-2)

and we have studied the inverting integrator and know it to give:

Vout = -Vm/sCR                                  (8-3)

which are solved for the transfer function T(s) as:

T(s) = Vout/Vin = -1/(1 + sCR/KVc)              (8-4)

From this, we have a resistor that is effectively scaled by 1/KVc, which is the same as scaling the cutoff frequency by KVc.

Accordingly we now know how to approach voltage-controlled filters with two powerful weapons in our arsenal - the voltage-controlled integrator (making possible the state-variable approach), and the voltage-controlled first-order low-pass section. We will continue by looking at some possibilities for practical multipliers.
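Equation (8-4) puts the pole at s = -KVc/RC, so the cutoff frequency of the successful low-pass scales linearly with the control voltage. A small numeric sketch of this scaling, using hypothetical values for K, R, and C (none are from the text):

```python
from math import pi

# Hypothetical multiplier constant and component values
K = 0.1      # multiplier constant, 1/volts
R = 10e3     # ohms
C = 10e-9    # farads

def cutoff_hz(Vc):
    # From T(s) = -1/(1 + sCR/(K*Vc)): the pole is at s = -K*Vc/(R*C),
    # so the cutoff frequency is K*Vc/(2*pi*R*C), linear in Vc.
    return K * Vc / (2 * pi * R * C)

print(cutoff_hz(1.0))  # base cutoff at Vc = 1 volt
print(cutoff_hz(2.0))  # doubling Vc doubles the cutoff
```

This linear voltage-to-frequency relationship is exactly what makes the structure a useful VCF building block.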

8-3  THE TRANSCONDUCTANCE MULTIPLIER:

We have seen above that the key element in our VCF approach will be the analog multiplier. Most all analog multipliers of practical interest will be based on some transconductance principle. For


practical multiplier integrated circuits, transconductance multipliers are available in several forms. These we will separate into the full four-quadrant multipliers, and the two-quadrant multipliers or OTA's (for Operational Transconductance Amplifiers).

To understand the difference in these devices, we can start with the idea that a multiplier should perform the operation:

Z = XY

(8-5)

where X and Y are the inputs and Z is the output. In the case of a four-quadrant multiplier, both X and Y may take on negative and positive values. Also, Z is usually also a voltage in this case, and since we work with voltages in a convenient range such as ±10, the multipliers usually have a scale factor to bring the product into a usual range.

For example, Z = XY/lO is common (Fig. 8-2a). True four-quadrant multipliers are also characterized by a cost in the range of $10 to $50 and/or a need for considerable individual trimming. In general, they are components that many engineers will avoid whenever possible.

As unpopular as the four-quadrant multiplier is, a two-quadrant multiplier in the form of an OTA has found wide application. This chip is available for about $1 or less, and is fairly easy to use. Being a two-quadrant multiplier, only one of the inputs can take on positive and negative polarity, while the other must be unipolar. The OTA chip happens to have a bipolar voltage input, while the second or unipolar input is a current rather than a voltage. In addition, unlike the usual four-quadrant multiplier, the output is a current. It turns out that all of these are things that the designer can exploit. The most popular and well known OTA for many years has been the RCA type CA3080, after which the OTA may be called a "3080" as often as it is an "OTA". The 3080 has been second-sourced by National as the LM3080, and new generations of OTA's have appeared, including dual versions such as the CA3280 and the LM13600. Other improvements have involved the use of a linearizing input stage (Gilbert input), to help with a linearity problem that will be described below.

Fig. 8-2b shows the conventional symbol for the OTA, which superficially resembles the op-amp. The most notable difference is the additional pin, the control pin for the control current Ic. This pin is pin 5 on the CA3080, and it is common for designers to refer to the control pin as "pin 5", even on OTA's with a different pin-out arrangement. The next difference to observe is that the output is a current source, and not a voltage as it is for op-amps. Finally, although it is not indicated in the diagrams, the input differential voltage should be limited to something like ±10 millivolts for linearity.

The point should be emphasized that the OTA is quite a different device from the op-amp. However, it is no more difficult to learn to use - the "rules" are just different. In order to understand better how the OTA functions, the structure should be understood (see Fig. 8-2c). The OTA consists of four "current mirrors" and a standard two-transistor differential input stage as shown. The current mirrors are configurations of transistors arranged so that when a certain


Fig. 8-2a  Four-Quadrant Multiplier (Z = XY/10)
Fig. 8-2b  The OTA (CA3080), or two-quadrant multiplier (+ In, - In, Iout, Ic)
Fig. 8-2c  Structure of the OTA (cm = current mirror)
Fig. 8-2d  Standard use of attenuator with the OTA reduces Vin to Ein

current is pulled from one branch, an identical current is sourced by the other branch, both currents being sourced or sunk by a connection to a power supply rail. One of these current mirrors receives the control current Ic and mirrors it, drawing this same current Ic from the tied emitters of the differential pair. Study of the remaining three current mirrors shows that they just "decouple" the two collector currents of the differential pair, so that the output is a current source that represents the difference between the two collector currents. When the two inputs are at the same potential, their collector currents are equal (to Ic/2, in fact), and the output current is zero. When there is a non-zero differential input voltage, the currents are out of balance, and the difference becomes the output current.

We do not want to go deeply into transistor theory, but the two-transistor differential input stage is well understood, and it is known that the difference between the collector currents is a hyperbolic tangent function, which can be considered approximately linear around zero for differential input voltages of no more than ±10 millivolts or so. In such a case, the output current is:

Iout = 19.2 Ic Ein                              (8-6)

which we will consider the fundamental equation of the OTA. It is well to keep in mind that if we go all the way back to this equation for a start, we are unlikely to go wrong. Note that Ein is the actual voltage between the + and the - inputs. Clearly equation (8-6) implies a multiplication relationship between two electrical parameters, Ic and


Ein. From our study of the structure (Fig. 8-2c), it is clear that Ein can be bipolar, while Ic must be only positive (into the lower mirror). Thus the OTA is basically a two-quadrant multiplier here.

Since the input stage must be limited to about ±10 millivolts for linearity, and since we still want significantly larger voltages in the rest of our circuits, it is common to find an attenuator stage on OTA inputs, as seen in Fig. 8-2d. (Incidentally, the attenuation to this low level does imply problems with signal-to-noise ratio. One help is the "prewarping" or "Gilbert" input stage found in newer OTA's which permits input voltages of several volts.) With the addition of this attenuator, equation (8-6) becomes:

Iout = 19.2 Ic (22/10022) Vin                   (8-7)

or we can write an "equivalent resistance" as:

Req = Vin/Iout = 10022/(19.2 × 22 × Ic) ≈ 23.7/Ic    (8-8)

which amounts to 23.7 kΩ when Ic = 1 milliamp, and so on.
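The arithmetic of equations (8-6) through (8-8) is easy to check numerically. A minimal sketch (the 19.2 and 22/10022 factors are from the text; the function name is ours):

```python
# Equivalent resistance of the attenuated OTA, per equations (8-6)-(8-8):
# Iout = 19.2 * Ic * (22/10022) * Vin, so Req = Vin/Iout = 10022/(19.2*22*Ic)
def req_ohms(Ic):
    return 10022.0 / (19.2 * 22.0 * Ic)

print(req_ohms(1e-3))   # about 23.7 kohm at Ic = 1 mA
print(req_ohms(10e-6))  # about 2.37 Mohm at Ic = 10 uA
```

Note how Req sweeps inversely with Ic over the control current's useful range, which is what makes the OTA a wide-range "variable resistor".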

The concept of equivalent resistance and equation (8-8) should be used with considerable caution. As we have cautioned above, it is often better to go all the way back to equation (8-6). The problem comes up in assuming that Req implies that the OTA looks just like a "real" resistor. As we saw from our discussion of Fig. 8-1, we must be careful to look at the voltage on both sides of resistors. Accordingly, it is best to think of Req as a notational convenience, and not as suggesting that the OTA can be treated as a resistor Req in all cases.

We have noted that the output of the OTA is a current rather than a voltage. In some OTA applications, we will drop this current through a resistor and then use an op-amp follower to buffer this voltage drop and to serve as a voltage source. In many VCF applications however, it is both possible and advantageous to just use the current directly.

Before going on to some actual circuits, it will be useful to discuss how the control current Ic is obtained. Note that the control pin is an input to a current mirror, and accordingly, its voltage will always be about one diode drop above the negative supply (about -14.3 with a -15 volt supply). Ic can be supplied with a number of current source arrangements. One simple way is to connect the control pin to a voltage source more positive than -14.3, through a suitable current limiting resistor. In such a case, for a control voltage Vc (see Fig. 8-2d), the control current is:

Ic = (Vc + 14.3)/Rc                             (8-9)

As a practical matter, Ic should be limited to no more than 2 milliamps, with 1 milliamp being a good design maximum. This means that on a standard ±15 volt supply, where the control voltage Vc might range up to +15, that if Rc is 30k or so, we are absolutely safe on control current. Note however that the control pin is a very sensitive part of the chip. If this pin is shorted to anything, there is a danger that the chip may blow. Shorting this pin to ground, or to the output of an op-amp, can blow the chip.
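Equation (8-9) and the 30k safety figure can be checked in a couple of lines (the -14.3 volt figure assumes the ±15 volt supply discussed above):

```python
# Control current from a voltage source through Rc, per equation (8-9).
# The control pin sits about one diode drop above the -15 V rail, at -14.3 V.
def control_current(Vc, Rc):
    return (Vc + 14.3) / Rc

Ic = control_current(15.0, 30e3)  # worst case: Vc at the +15 V rail
print(Ic)  # just under 1 mA, within the safe design maximum
```

Even with the wiper at the positive rail, the 30k resistor holds Ic below the 1 milliamp design maximum.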


Fig. 8-3a  VC Low-Pass
Fig. 8-3b  VC High-Pass

[OTA circuits with 22/10k input attenuators, output current Iout into capacitor C, output Vout]

8-4  FIRST-ORDER VOLTAGE-CONTROLLED FILTERS:

There are a large number of ways of achieving useful first-order voltage-controlled structures, which can then be combined into higher-order structures if desired. All of these involve some form of feedback from the output to the OTA input, for the same basic reason as was required in the development of Fig. 8-1d. Fig. 8-3a shows a form that is first-order low-pass. The analysis begins with equation (8-6) as:

Iout = (Vin - Vout)/Req                         (8-10)

It is also clear that the output current flows through the capacitor C to ground, generating the voltage Vout, or:

Vout = Iout/sC                                  (8-11)

These two equations can be solved for the transfer function as:

T(s) = Vout/Vin = 1/(1 + sCReq)                 (8-12)

where we have also used equation (8-7) for Req. Note that Req falls exactly into a position in the transfer function where we recognize its effect on the cutoff frequency. In fact, since the cutoff is at:

f3dB = 1/(2πReqC) = Ic/(2π · 23.7 · C)          (8-13)

we have a cutoff frequency that is proportional to the control current. Note that we did not try to begin with the idea of Req for the OTA and then treat the OTA as a normal type resistor. Instead we started with the basic equation for the OTA, equation (8-6), and put in the notation Req when it appeared naturally. Then we found that it fell exactly where it was most convenient for our being able to interpret the results.
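A quick numeric sketch of equation (8-13), showing the proportional tuning (the capacitor value is illustrative, not from the text):

```python
from math import pi

C = 1e-9  # hypothetical 1 nF capacitor

# Equation (8-13): f3dB = Ic/(2*pi*23.7*C), cutoff proportional to Ic
def f3db(Ic):
    return Ic / (2 * pi * 23.7 * C)

# Sweeping Ic over two decades sweeps the cutoff over two decades
for Ic in (10e-6, 100e-6, 1e-3):
    print(f"Ic = {Ic*1e6:7.1f} uA -> f3dB = {f3db(Ic):10.1f} Hz")
```

A 100:1 sweep of control current gives a 100:1 sweep of cutoff, which is the proportional control we were after.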

Fig. 8-3b shows a corresponding high-pass network. Again we start with equation (8-6):

Iout = Vout/Req                                 (8-14)

We also observe that Iout now flows out through the capacitor C, generating Vout, relative to Vin as:


Vout = Vin - Iout/sC                            (8-15)

and these equations result in the high-pass T(s) as:

T(s) = sCReq/(1 + sCReq)                        (8-16)
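One way to sanity-check equations (8-12) and (8-16) is to note that the low-pass and high-pass responses are complementary: since they share the same denominator, they sum to exactly 1 at every frequency. A small numeric check with illustrative Req and C values:

```python
import math

# T_lp from equation (8-12), T_hp from equation (8-16); values illustrative
Req = 23.7e3
C = 1e-9

def T_lp(s):
    return 1.0 / (1.0 + s * C * Req)

def T_hp(s):
    return (s * C * Req) / (1.0 + s * C * Req)

s = 1j * 2 * math.pi * 5000.0  # evaluate on the jw axis at 5 kHz
print(T_lp(s) + T_hp(s))       # sums to 1 (up to rounding)
```

The complementary pair is the first-order analog of the low-pass/high-pass split we will see again in the state-variable structure.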

These are only two of the possibilities with this general idea.

The control current Ic is often supplied by some form of current source, which may be a linear voltage-to-current converter, or even an exponential voltage-to-current converter in the case of electronic music circuits. However, for simple demonstrations and lab work, it is often sufficient to connect a pot between the + and - supplies, and feed the wiper voltage to the control pin through a 30k resistor (Fig. 8-4), using equation (8-9) to find the current.

Fig. 8-4  A simple method of supplying the control current (pot between the +15 and -15 supplies, wiper through 30k to the control pin)
Fig. 8-5  Two forms of the VC Integrator

8-5  VOLTAGE-CONTROLLED INTEGRATORS AND STATE-VARIABLE FILTERS:

Fig. 8-5 shows two forms of an OTA controlled integrator. It is tempting to call the first a non-inverting integrator and the second an inverting integrator. However, because either the (+) or the (-) inputs of the OTA can be used in either case, we have free choice. In the case of Fig. 8-5a, the output voltage is:

Vout = Iout/sC                                  (8-17)

while Fig. 8-5b has:

Vout = -Iout/sC                                 (8-18)

Both lead to transfer functions which have a denominator sCReq, which are integrators, and in this case, voltage-controlled integrators. With the voltage-controlled integrator available, we can achieve voltage-controlled versions of all our integrator-based filters (Chapter 6).

For example, Fig. 8-6 shows a voltage-controlled state-variable filter based on the ideas we have just discussed.

Fig. 8-6 shows one of the advantages of having both the (+) and the (-) inputs of the OTA available in our integrator designs. Here we have used the (-) input along with the inverting integrator structure,


Fig. 8-6  Voltage-Controlled State-Variable (input and feedback through 100k resistors; Q set by Rq; 10k OTA attenuator resistors marked *; control voltage Vc)

achieving a non-inverting integrator. This permits the feedback of the bandpass to the inverting input of the summer (through Rq), and the simple result that Q = Rq/100k (compare Fig. 6-6).

This VCF and the others we discuss are of course subject to the active sensitivity problems that we have seen with all our fixed filters. However, the problem may be even more complicated in the case of a variable filter. This is because the active sensitivity error is a function of the characteristic frequency of the filter, and in the case of a variable filter, this frequency changes. Thus we can not simply fix the filter by overdesign, for example. We must more or less "repair" the various blocks, much as we discussed in Chapter 7, being always aware that here the resistors may have changing values. The problem is not at all simple.


Fig. 8-7 shows how the Q of the VCF of Fig. 8-6 changes with frequency, showing an enhancement with frequency. This can be so severe that the filter will break into oscillation when it is set to its upper frequency range. Traditionally, the "cure" for this has been to place small shunt capacitors across the OTA attenuator resistors (the 10k resistors marked with a * in Fig. 8-6). Capacitors in the range of a few picofarads up to 50 picofarads were usually sufficient to level off the Q vs. frequency curve. In fact, this does seem to work, but a more exacting analysis (B. Hutchins, "Some New Results Concerning Q-Enhancement in OTA-Based VCF's," Electronotes, Vol. 14, No. 141, September 1982, pp 3-18) indicates that it is the summer of the state-variable rather than the integrators that is the real problem. A rather careful modeling of the whole network, and appropriate compensation methods on a block-by-block basis, can yield a very flat Q vs. frequency curve. Another unexpected result is that the integrator in the form of Fig. 8-5a is somewhat preferred over the integrator of Fig. 8-5b. It might have been thought that it would have been better to have the OTA driving into a constant ground potential (8-5b) than to have it drive into the variable output potential (8-5a). However, when stray capacitance effects are taken into consideration, the advantage goes to Fig. 8-5a.


CHAPTER 9

FILTERING WITH ANALOG DELAY LINES

9-1  Introduction to Delay Line Filtering

9-2  The Ideal Analog Delay and Some Realizations

9-3  First-Order Non-Recursive Comb Filters - Four Methods of Analysis

9-4  The First-Order Recursive Network

9-5  Notch and All-Pass Responses

9-6  Second-Order Networks


9-1  INTRODUCTION TO DELAY LINE FILTERING:

In this chapter, we will look at various types of comb filters that can be realized using analog delay lines. The subject matter here is very closely related to digital filtering in that the essence of the filtering is time delay, and in that many of the same design and analysis techniques are employed. The main difference is that we work with a pure analog delay, and in a first approximation, no sampling is involved. This permits us to take advantage of the periodic frequency response in the case of analog delay line filters, while in the case of digital filters, only the first half of the unit circle in the z-plane is used, since the sampling theorem must be obeyed.

Comb filters find applications in cases where a number of harmonically related components must be filtered in a similar manner. For example, we might have a complex waveform, containing a fundamental and harmonics, which is to be notched out. We could envision a set of second-order notch filters in series, each responsible for its own harmonic. The comb filter however has a built-in periodic response (for example, Fig. 9-1) and thus one filter can take out multiple harmonics. Moreover, once the filter is tuned to notch out any one harmonic, the others are automatically tuned. We are not restricted, however, to taking out harmonics with notch-like response shapes - we can also enhance all harmonics with bandpass-like response shapes, and so on.

Fig. 9-1  A Typical Comb Filter (frequency response with a periodic shape repeating at f0, 2f0, 3f0, 4f0, ...)

9-2 THE IDEAL ANALOG DELAY AND SOME REALIZATIONS:

The ideal analog delay line is shown in Fig. 9-2. A signal that is at the input of the delay at time t emerges after a delay of time T, at time t+T. Here T is not to be considered a sampling time, since we are not necessarily assuming that any sampling is taking place. However, the signal is only available to us at the input and at the output of the delay - at two discrete times separated by the interval T.

Fig. 9-2  Ideal Time Delay: f(t) in, f(t-T) out; F(s) in, e^(-sT)F(s) out


Analog delays can be realized (or occur naturally) in a variety of cases. Transmission lines may be used for an analog delay (usually only for very short nanosecond delays), or we may need to deal with the analog delay of a transmission line that we are working with. For delays from 100 milliseconds up to several seconds, magnetic tape recording can be used, with a delay corresponding to the tape speed and the distance between record and playback heads. Active filter all-pass or phase shift networks also can look like a pure delay over a limited range of frequency (see problems at end of chapter). Surface acoustic wave (SAW) devices may also be considered.

Probably the delay lines of most interest and most practical value are those which do involve sampling. These include digital delay lines (DDL), and charge-coupled device (CCD) delay lines. While these devices do involve sampling, specifically the sampling frequency fs is not 1/T, but rather some significantly higher frequency, usually an integer multiple of 1/T. This means that there are, at any one time, not only a sample at the input, and another sample at the output, but also many samples in between, that are held internally. These are clocked along at the rate fs. Thus if there are N samples between the input and the output of the delay line, T = N/fs.

At this point, there are two notions of time interval that are of interest. The first of these, 1/fs, is the actual "sampling interval", and as with any sampled data system, we must not input frequencies in excess of fs/2 or else we violate the sampling theorem, and aliasing can occur. The second time interval is T, and the frequency 1/T corresponds to a full trip around the unit circle in the z-plane. Not only can we input frequencies exceeding 1/2T, but we can continue around the unit circle many times, until the frequency starts to approach fs/2.

In the limit of a perfect analog delay, equivalent to fs becoming infinite, we can continue indefinitely to higher and higher frequencies, taking advantage of the repeating frequency response.

We will want to be able to write down networks involving analog delay lines, and to solve for transfer functions much as we have been doing. We need to know how a delay affects the Laplace transform of a signal. We can show, using equation (1-3) for the Laplace transform, that if F(s) is the Laplace transform of f(t), then e^(-sT)F(s) is the Laplace transform of f(t-T). Thus when a signal passes through a delay T, it is equivalent in the Laplace domain to a multiplication by e^(-sT), which is also often written as z^(-1). We will in general assume that we are using perfect analog delays in the sections that follow, with the idea that various realizations of the delay may present individual problems to consider.
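On the jω axis the delay factor e^(-sT) has unit magnitude and a phase of -ωT, which is easy to verify numerically. A minimal sketch with an illustrative delay and frequency:

```python
import cmath

# A pure delay of T multiplies the Laplace transform by e^(-sT).
# With s = jw, that factor is a unit-magnitude phase term e^(-jwT).
T = 1e-3                        # 1 ms delay (illustrative)
w = 2 * 3.141592653589793 * 250.0  # omega at 250 Hz

H_delay = cmath.exp(-1j * w * T)
print(abs(H_delay))             # magnitude 1: a pure delay changes phase only
print(cmath.phase(H_delay))     # phase -wT radians (here -pi/2)
```

The linearly increasing phase lag -ωT is what produces the periodic interference pattern when the delayed signal is recombined with the original, as in the comb filters below.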

9-3  FIRST-ORDER NON-RECURSIVE COMB FILTER - FOUR METHODS OF ANALYSIS:

Fig. 9-3a shows a simple use of a delay line and a summer to form a non-recursive (no feedback) network. This network has the frequency response as shown in Fig. 9-3b. We will look at four ways to analyze this network.


Fig. 9-3a  Non-Recursive Network (delay T into a summer)
Fig. 9-3b  Frequency Response (peaks of 2, nulls at 0, 1/T, 2/T, ...)

For the first method, we will begin with the idea that the frequency response is the ratio of the amplitude of a sinusoidal at the output to the amplitude of the sinusoidal at the input. We can take the input sinusoidal to be Sin(ωt), in which case the sinusoidal at the output of the delay is Sin(ωt - ωT). Therefore, at the output of the summer, the voltage is:

vc.,<.,~ = Sin(wt .- wT) - Sin(wt)

= -2Sin(wT/2 )Cos(<.Ot - WT/2)

(9-1)

which is the result of the trig identity for the sum of two sines.

Note that the Cos(wt - wT/2) term is the output "sinusoidal", having frequency w and phase -wT/2. The term 2Sin(wT/2) does not vary with time, and determines the amplitude of the output sinusoidal, and is accordingly the frequency response, which we will denote by the digital filter frequency response notation:

    |H(e^jwT)| = |2Sin(wT/2)|          (9-2)

This is the function plotted in Fig. 9-3b.
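Result (9-2) is easy to confirm numerically. This short sketch (not from the original text; the delay value is arbitrary) compares |e^(-jwT) - 1| with 2|Sin(wT/2)| over a frequency sweep:

```python
import numpy as np

# Check that the delay-subtract response |e^(-jwT) - 1| equals
# 2|sin(wT/2)| at every frequency.
T = 1.0e-3                                        # an arbitrary 1 ms delay
w = 2 * np.pi * np.linspace(0.0, 5.0 / T, 1001)   # sweep 0 to 5/T Hz

H = np.exp(-1j * w * T) - 1.0                # H(z) evaluated on the jw axis
diff = np.abs(np.abs(H) - 2.0 * np.abs(np.sin(w * T / 2.0)))
print(diff.max() < 1e-12)                    # True: the two forms agree
```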

In the second method, we will look at the transfer function H(z), where, using the z^(-1) notation for the Laplace transform of a delay, we have:

    Vout = z^(-1)Vin - Vin          (9-3)

or

    H(z) = Vout/Vin = z^(-1) - 1          (9-4)

We have seen that the frequency response can be obtained from a transfer function using equation (1-18), and this can be applied here, giving:

    |H(z)| = [H(z=e^jwT)H(z=e^-jwT)]^(1/2) = [(e^-jwT - 1)(e^jwT - 1)]^(1/2)

           = [2 - 2Cos(wT)]^(1/2)

           = |2Sin(wT/2)|          (9-5)


which is the same result we got with trigonometry.

The third method is to use a geometric interpretation of frequency response, which is very similar to that used for the s-plane filters, except here we are looking for the response on the unit circle in the z-plane. This we can understand in terms of the jw-axis in the s-plane becoming the unit circle in the z-plane, since:

    z = e^sT = e^((σ+jw)T) = e^(σT)[Cos(wT) + jSin(wT)]          (9-6)

Now the geometrical interpretation follows as seen in Fig. 9-4. From equation (9-4), we see that H(z) has no poles, but does have a zero at z=+1. The frequency response is proportional to the distance from this zero, which is the distance r shown in Fig. 9-4. From simple trig:

    r = 2Sin(θ/2)          (9-7)

Further, once around the unit circle corresponds to a frequency of 1/T, so here the frequency is:

    f = (θ/2π)(1/T)          (9-8)

or w = 2πf = θ/T, and:

    |H(z)| = r = 2Sin(wT/2)          (9-9)

which is again the same result.

Fig. 9-4  Geometric Method (z-plane; zero at z=+1, f = (θ/2π)(1/T))

The fourth method involves inversion of the impulse response of the network as an alternative way of obtaining H(z). Clearly if we put in an impulse, it comes out inverted immediately and then right side up after a time T. This impulse response, -δ(t) + δ(t-T), has the Laplace transform -1 + e^(-sT), which is the transfer function, the same result as we got from equations (9-3) and (9-4). From this, the frequency response can be obtained as before.

We see from the frequency response that it consists of sinusoidal lobes, with equally spaced notches at frequency intervals of 1/T. Accordingly, it is capable of cancelling all frequencies that are harmonics of 1/T. In the case of a periodic waveform with fundamental 1/T, that means that all harmonics are cancelled. Thus the entire waveform is cancelled. This result is at first impressive, but less so if we simply consider Fig. 9-3a in the time domain. For a periodic waveform of period T, we always have exactly the same voltage at the input and the output of the delay line. Subtracting these of course results in zero output at all times.
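The time-domain argument can be checked directly. In this sketch (sample rate, delay, and waveform are assumed values, not from the text), a delay-subtract comb with delay T nulls a waveform of period T:

```python
import numpy as np

# A delay-subtract comb with delay T cancels any waveform of period T.
# Here T = 10 ms, so the fundamental is 100 Hz.
fs = 48000.0
T = 0.010
t = np.arange(0, 0.2, 1.0 / fs)

# A periodic input built from the first four harmonics of 1/T:
wave = lambda tt: sum(np.sin(2 * np.pi * (k / T) * tt) / k
                      for k in (1, 2, 3, 4))
x = wave(t)
y = wave(t - T) - x       # output of the delay-subtract network

print(np.abs(x).max() > 1.0, np.abs(y).max() < 1e-9)   # True True
```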

Other forms of non-recursive comb filters are sometimes found. The form shown in Fig. 9-5a sums rather than subtracts the two signals. This results in a different frequency response, as seen in Fig. 9-5b. This form is sometimes called the "delay-add" type (or Cosine type) of comb filter, as opposed to the "delay-subtract" type (or Sine type) of Fig. 9-3a. The spacing of nulls is the same (1/T) in both cases, but they are displaced by 1/2T relative to each other. The delay-add type here can be seen to remove a fundamental at 1/2T, and all odd harmonics of 1/2T. It is easy to remember which type is which, simply by considering what happens at dc. At dc, the time delay of the delay line "expires" and the input and output of the delay line are both the same dc voltage. If we subtract these, we get zero (Sine type frequency response) while if we add them, we get 2 (Cosine type frequency response).

Fig. 9-5a  "Delay-Add" network
Fig. 9-5b  Freq. Resp. (peaks of 2, nulls at 1/2T and odd multiples)

Another variation would be to make an unequal weighting of input and output, which moves the zero off the unit circle. This can result in attenuation in certain frequency regions (valleys of the response), but not a complete null. Since the summation is usually a matter of op-amp summers with summing resistors, the incomplete null is actually what we have in practice, although trimming of resistors can be used to get very good rejection.
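The effect of an imperfect weighting can be put in numbers. This sketch (the 1% gain error is an assumed figure, not from the text) evaluates the valley depth when the delayed path has gain g:

```python
import numpy as np

# With gain g on the delayed path, H(z) = g*z^(-1) - 1 and the zero sits
# at z = g, off the unit circle; the former null becomes a finite valley.
T = 1.0
g = 0.99                                # a 1% resistor mismatch (assumed)
w_null = 2 * np.pi / T                  # where the g = 1 comb has a null

H = g * np.exp(-1j * w_null * T) - 1.0
depth_db = 20.0 * np.log10(abs(H))
print(round(depth_db))                  # -40: a deep valley, not a null
```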


9-4 THE FIRST-ORDER RECURSIVE NETWORK:

The non-recursive networks above have resulted in notch-like responses since they are based on zeros, and no poles. We can use a recursive structure (with feedback) to give poles, and corresponding responses that are more bandpass in nature. (Eventually, of course, we consider both together). Fig. 9-6a shows the first-order recursive network, while Fig. 9-6b shows the position of its pole, and Fig. 9-6c its frequency response.

Fig. 9-6a  Recursive Structure
Fig. 9-6b  Pole Plot
Fig. 9-6c  Freq. Resp. (peaks at multiples of 1/T)

The transfer function of the recursive filter is given by:

    H(z) = 1/(1 - g·z^(-1))          (9-10)

which has a pole at z = g. For stability, |g| must be less than 1 so that the pole is inside the unit circle. For positive values of g, the pole is approximately as shown in Fig. 9-6b, and the response peaks at dc and at multiples of 1/T, as shown in Fig. 9-6c. For negative values of g, the pole is on the left side, and the response peaks at 1/2T and at odd harmonics of 1/2T. Clearly the recursive filter is suited to cases where frequency components are to be enhanced. It is easy to obtain the frequency response by any of the methods suggested above for the non-recursive network, and the result is:

    |H(e^jwT)| = 1/[1 + g^2 - 2g·Cos(wT)]^(1/2)          (9-11)

From this, or from a geometric interpretation evaluated at z=+1 and z=-1, we can see that the "peak-to-valley" ratio in the response is given by:

    (1+g)/(1-g)          (9-12)

which is valid for positive g (and is the "valley-to-peak" ratio for negative g).
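Equations (9-11) and (9-12) can be checked numerically. This sketch (g = 0.8 is an arbitrary choice, not from the text) sweeps one full period of the response:

```python
import numpy as np

# Peak-to-valley check for H(z) = 1/(1 - g*z^(-1)): the ratio of the
# response at z = +1 to that at z = -1 should be (1 + g)/(1 - g) = 9.
g = 0.8
T = 1.0
w = 2 * np.pi * np.linspace(0.0, 1.0 / T, 10001)   # one full period

H = 1.0 / (1.0 - g * np.exp(-1j * w * T))
ratio = np.abs(H).max() / np.abs(H).min()
print(round(ratio, 6))        # 9.0, matching (1 + g)/(1 - g)
```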


9-5 NOTCH AND ALL-PASS RESPONSES:

Fig. 9-7a shows a first-order delay line notch filter, while Fig. 9-7b shows the pole/zero plot, and Fig. 9-7c the frequency response. Here the response is reminiscent of the non-recursive comb filter of Fig. 9-3 in being notch-like, due to the zero on the unit circle. However, as in the case of active filter notch circuits, the pole that appears here is useful in sharpening the notch and in flattening the passbands, relative to the sinusoidal lobes of Fig. 9-3b. This is because the pole, being brought up close to the zero here, tends to "hide" the zero until frequencies get very close to the zero.

Fig. 9-7a  Notch Network
Fig. 9-7b  Pole/Zero
Fig. 9-7c  Freq. Resp. (notches at 1/T, 2/T, ...)

Fig. 9-8a shows a delay line all-pass network, while Fig. 9-8b shows the pole/zero plot, and Fig. 9-8c the (completely flat) frequency response. Here the pole is at a position a, within the unit circle for stability, while the zero is outside the unit circle, and at a radius 1/a that is reciprocal to that of the pole. It can be shown (see problems at end of chapter) that any point on the unit circle is at relative distances to the pole and to the zero that are always proportional, hence the all-pass magnitude response.

It is probably evident how the transfer functions and pole/zero plots are derived for these networks, and we have left this out, since it is really a combination of the derivations found above. It should also be recognized that there are a number of variations on these circuits that are sometimes seen.

Fig. 9-8a  All-Pass
Fig. 9-8b  Pole/Zero
Fig. 9-8c  Freq. Resp. (flat)
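The reciprocal pole/zero argument can be checked numerically. In this sketch (a = 0.6 is an arbitrary value inside the unit circle, not from the text), the all-pass magnitude is evaluated around the whole unit circle:

```python
import numpy as np

# H(z) = (z^(-1) - a)/(1 - a*z^(-1)) has its pole at z = a and its zero
# at z = 1/a; on the unit circle its magnitude is constant.
a = 0.6
theta = np.linspace(0.0, 2.0 * np.pi, 1000, endpoint=False)
z = np.exp(1j * theta)

H = (1.0 / z - a) / (1.0 - a / z)
mag = np.abs(H)
print(mag.min(), mag.max())     # both ~1.0: an all-pass response
```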

9-6 SECOND ORDER NETWORKS:

In the case of first-order analog networks, there was relatively little of interest that could be achieved, and we had to go to somewhat higher order networks to achieve interesting results. In this delay line case, things are somewhat different in that the first-order networks already have the interesting and useful property of a periodic frequency response built in. Nonetheless, second-order and higher order delay line networks can be of interest to us. In going over to these second-order designs, we will need to make use of some digital filter design methods, and in particular, the "Bilinear-Z Transform" method is useful.

Fig. 9-9a  2nd-Order Network
Fig. 9-9b  Pole/Zero

Fig. 9-9a shows a second-order delay line network (note the two delay lines), while Fig. 9-9b shows a typical pole/zero plot, by way of showing that it is capable of producing two poles and two zeros. It is easy to derive the transfer function by noting that the voltage E' is given by:

    E' = Vin - a1·E'·z^(-1) - a0·E'·z^(-2)          (9-13)

The output is given by:

    Vout = b2·E' + b1·E'·z^(-1) + b0·E'·z^(-2)          (9-14)

from which the transfer function Vout/Vin is given as:

    H(z) = (b2 + b1·z^(-1) + b0·z^(-2)) / (1 + a1·z^(-1) + a0·z^(-2))          (9-15)

The values for the coefficients can be determined by starting with an analog prototype transfer function and plugging into the bilinear-z transform. That is, we start with T(s) and make the substitution:

    s → F(z-1)/(z+1)          (9-16)

For example if we substitute into a normalized low-pass transfer function T(s) = 1/(s^2 + Ds + 1), we arrive at H(z) given by:

    H(z) = A(z^2 + 2z + 1)/(z^2 + Bz + C)          (9-17)


where:

    A = 1/(F^2 + DF + 1)          (9-18)
    B = A(2 - 2F^2)               (9-19)
    C = A(F^2 - DF + 1)           (9-20)

In terms of the network of Fig. 9-9a, we have:

    b2 = 1          (9-21)
    b1 = 2          (9-22)
    b0 = 1          (9-23)
    a1 = B          (9-24)
    a0 = C          (9-25)

which completes the network.
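As a worked example of this recipe (the values D = sqrt(2), i.e. Butterworth damping, and F = 0.5 are assumed for illustration), the following sketch computes A, B, and C and checks that the resulting poles lie inside the unit circle:

```python
import numpy as np

# Bilinear-z design of the second-order delay line network, following
# (9-18) through (9-25), for assumed example values of D and F.
D = np.sqrt(2.0)      # Butterworth damping (assumed example)
F = 0.5               # frequency-placement parameter (assumed example)

A = 1.0 / (F**2 + D * F + 1.0)          # (9-18)
B = A * (2.0 - 2.0 * F**2)              # (9-19)
C = A * (F**2 - D * F + 1.0)            # (9-20)

# Network coefficients: b2 = 1, b1 = 2, b0 = 1, a1 = B, a0 = C.
# The poles are the roots of z^2 + Bz + C; for a stable design they must
# be inside the unit circle (here a conjugate pair at radius sqrt(C)):
poles = np.roots([1.0, B, C])
print(np.all(np.abs(poles) < 1.0), np.allclose(np.abs(poles), np.sqrt(C)))
```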

As with the second-order active network, relating the network parameters to a transfer function was only one step in moving toward a useful network. Here we need to find how to choose T, D, and F in order to achieve the frequency response that we want. Two of these we can deal with easily. First, 1/T remains the spacing between periodic repetitions of the response. Secondly, it is the property of the Bilinear-z transform that the response shape is carried over from the analog to the discrete time case. This means that if we choose a value of D that gives, say, 2db passband ripple in the case of T(s), this same value of D will give 2db ripple in H(z).

This leaves us with the parameter F to manipulate to our advantage if possible. In addition, we should keep in mind that we could also have used other second-order T(s) instead of just the low-pass we actually chose. For the low-pass, we get two zeros at z=-1, which can be seen from equation (9-17). If we instead choose a high-pass T(s), we get two zeros at z=+1. (Incidentally, a bandpass T(s) gives one zero at z=-1 and the other at z=+1.)

In considering our options, it is perhaps well to keep in mind that we are generally after only one or two types of response. This is because we are taking advantage of the built-in periodicity to handle a fundamental and all its harmonics. In general, we either want to enhance this complex waveform, or we want to reject it, so we are interested in one of the two responses shown in Fig. 9-10. (Here we are assuming that the damping D is Butterworth or larger, since no ripple is seen in the responses.) The pole/zero positions corresponding to these two responses are also seen. Thus we see that the enhancement case will result from a low-pass prototype while the rejection case will result from a high-pass prototype. In both cases, we are looking for poles that are inside the circle, and relatively close to z=+1. Some additional cases of interest are covered in the problems at the end of the chapter.


Fig. 9-10a  Enhancement
Fig. 9-10b  Rejection
Fig. 9-10c  Pole/zero for enhancement
Fig. 9-10d  Pole/zero for rejection

We are now left to consider the parameter F. This is needed in equation (9-17) to put in the units, if for nothing else. When used with digital filters, it is common practice to fix F at 2fs, but that is often a matter of convenience, or of considering sampling problems, which we are not considering here. F can be seen as a design parameter that moves the poles to the right (larger F) or to the left (smaller F), but always along a curve such that a ripple corresponding to the selected value of D is achieved. If F = 1, the poles are on the imaginary axis. Values of F greater than 1 place poles to the right, while values of F less than 1 place the poles to the left.
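The claim about F can be checked with the same coefficient formulas. In this sketch (Butterworth damping D = sqrt(2) and the three F values are assumed examples), the real part of the pole pair moves from negative through zero to positive as F increases through 1:

```python
import numpy as np

# How F steers the pole pair of the bilinear-z design: F = 1 puts the
# poles on the imaginary axis of the z-plane, F > 1 moves them right,
# F < 1 moves them left.
D = np.sqrt(2.0)      # Butterworth damping (assumed example)

def pole_pair(F):
    A = 1.0 / (F**2 + D * F + 1.0)
    B = A * (2.0 - 2.0 * F**2)
    C = A * (F**2 - D * F + 1.0)
    return np.roots([1.0, B, C])

for F in (0.5, 1.0, 2.0):
    print(F, pole_pair(F)[0].real)   # real part: negative, ~0, positive
```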



CHAPTER 10

ANALOG ADAPTIVE FILTERING

10-1  The Need for Adaptive or Self-Adjusting Filters
10-2  Basics of Adaptive Filtering
10-3  The Side-Tracking Filter (STF)
10-4  Correlation-Cancellation Loops (CCL's)
10-5  A Comparison of the CCL and the LMS Algorithm


THE NEED FOR ADAPTIVE OR SELF-ADJUSTING FILTERS:

We are familiar with the theory and design of filters which have fixed parameters (certainly by this chapter we know about fixed analog filters, and probably know about fixed digital filters as well). Such filters find wide application in cases where the filtering task to be performed is known ahead of time, and is expected to remain unchanged during use. Such filters have fixed parameters by virtue of the fact that they are constructed with fixed resistors and capacitors (analog filters) or with fixed multiplier coefficients and clock rate (if we also consider digital filters).

Since it is the fixed nature of certain of the filter's elements that makes the filter itself fixed, we can obtain variable filters by arranging for these elements to vary. In the case of analog filtering, this is usually accomplished with variable resistors to control the filter's time constants. (Variable capacitors, while theoretically as useful as resistors for this purpose, are usually not practical.) The variable resistors may be manually controlled: potentiometers or manually controlled switches for combinations of fixed resistors. However, electronically-controlled resistors, such as those obtained with a transconductance multiplier (or similar), as we saw in Chapter 8, offer filters that can be controlled remotely, with great speed and accuracy, by computer or other control mechanism rather than by hand, and with an arbitrary number of control elements involved.

Another point about variable filters should be made, and that is that these filters are, pretty much by definition, not time-invariant. Linear time-invariance is often one of our assumptions, leading to our most useful procedures. Obviously, a filter that is manually set (thus variable) and put in fixed service is pretty much the same in status as a filter that is "soldered-in" fixed, and this is basically a case of a programmable filter. In such a case, the time variability is not a consideration. At the other extreme, a voltage-controlled filter being controlled by a waveform of frequency comparable to its characteristic frequency is perfectly capable of producing detectable, non-harmonic, modulation sidebands. In such a case, our usual ideas about frequency response and the like, which we find so useful in the case of fixed filters, are not applicable.

Between these cases of filters which are programmable, but otherwise relatively fixed, and those with variations producing modulation effects of significance, we can sometimes find a useful "quasi-stationary" region where, from a theoretical and performance standpoint, fixed filter techniques are still useful. As a guide, a time constant rule-of-thumb can be applied: we want the change in characteristic frequency to take place on a time scale that is long compared with one cycle of that characteristic frequency.


Such slow frequency adjustments are what is found in many cases. Manually adjustable filters where the frequency is reset for different parts of an experiment would be an obvious example. Another example would be an automatic but sufficiently slow sweep found in some bandpass frequency analyzers. We also have cases where a filter should change its performance on a slow time scale in response to some slowly changing external condition.

For example, we might be trying to filter out some frequency that is nominally 400 Hz, which appears as noise on an aircraft's communication system. However, due to changes of engine load, this frequency, determined by onboard generators, is subject to some variation. In such a case, a notch filter might be trimmed manually when the interference becomes a problem. In a corresponding ground-based case, the 60 Hz power lines might be more precisely regulated in frequency, but an analog notch filter might still be subject to drift due to external temperature variations or other such changes, and this would need to be trimmed up from time to time as well.

These and other similar situations indicate the need for a variable or tunable filter, but they also indicate the desirability of a self-tuning or adaptive filter. That is, we would like to not have to adjust the filter manually, but rather have it control and adjust itself, according to some desired performance criterion. This automatic change could be in response to changing input conditions, which could be detected, or it might be in response to the filter's own evaluation of its own performance level at its output, or to both. This leads us to the topic of adaptive or self-adjusting filters.

BASICS OF ADAPTIVE FILTERING:

We will be using the term "adaptive filter" in a fairly general sense to include all variable filters that are capable of adjusting their own parameters, in response to signal conditions, so as to better perform their intended functions. However, at the same time it should be realized that by tradition, the term "adaptive filter" has been used in a more restricted sense to describe only an adaptive filter of the linear combiner type (an FIR digital filter), or its operating mode called the "LMS algorithm", or only the linear combiner portion of the structure. (At the same time, the almost identical analog counterpart to this digital FIR one, the so-called "Correlation Cancellation Loop", or CCL, had been largely ignored.) While this particular adaptive filter and the surrounding theory of this digital point of view are extremely interesting and useful, we prefer to use the term more openly, and correspondingly, to suggest a broader range of possible solutions to self-tuning problems in signal processing. The specific approach chosen will then depend on the application and the available resources.

In the examples suggested above - that of cancelling power supply "hum" - we could take a variety of approaches. Some sort of tunable notch filter comes to mind first, and we need to consider how we would recognize that the notch were not correctly positioned. Obviously this would be the recognition that the unwanted signal were coming through, but this only tells us that the notch is not properly trimmed; it in itself does not tell us which way to move the notch position (up or down) to reduce the level of the undesired signal.

One simple approach to self-tuning would be to have a Phase-Locked Loop (PLL), with properly determined capture and hold properties, lock on to the undesired signal, and the feedback voltage in the PLL could in turn be used to tune a voltage-controlled filter (VCF), as seen in Fig. 10-1.

Fig. 10-1  PLL Tracking (phase-locked loop driving a voltage-controlled filter)

A second approach would be to use a Side-Tracking Filter (STF), which is a generalization of a comb-filter technique. As discussed above, we often do not know if a filter's frequency should be increased or decreased in order to perform better in a given case. The idea of the STF is to have two filters, in addition to the main one, above and below (on the sides) which are evaluating the possibilities of changes in the respective directions. A feedback mechanism then adjusts the center or main filter in response to these findings. We will look at STF's in more detail a bit later.

The PLL approach and the STF approach offer two useful analog techniques for self-tuning filters. The third analog technique that we want to have in our "bag of tricks" is the CCL (Correlation-Cancellation Loop), which is the analog counterpart to the digital adaptive filter (or LMS algorithm). We will later spend a good deal of time on the CCL structure. First however we will take a brief look at the digital adaptive filter, in order to better relate to the CCL when it comes up, and for a better understanding of how adaptive filters work.


Fig. 10-2 shows an adaptive filter structure of the digital or LMS type. We can just think of the z^(-1) boxes as delay lines, in which case it is clear that each of the taps available on the line represents only different phases of the reference signal that is shown. We assume that the input is an information-bearing signal such as speech or music, and that added to it is a large "hum" component due to the AC power lines. Such a signal can result from poor grounding practices, for example. In the figure, the hum is the larger sinusoidal-like component while the speech or music is the smaller random-like component. The approach seen in Fig. 10-2 is to take advantage of a reference signal. In this case, we assume that the hum is caused by the power supply lines, and that we have separate access to the power supply lines, for reference purposes. We can think of the reference as a signal that gives us some information on the hum component - at least the correct frequency, and possibly more. In the example, we are further assuming that the waveforms of the reference and of the hum component are both sinusoidal.

The basic idea here is to take the reference and subtract it off, cancelling the hum. This would be easy if the waveform, the frequency, the amplitude, and the phase were all the same. Clearly this is more than we can expect. However, it is the purpose of the adaptive filter to adaptively find the correct phase and amplitude (and possibly even more) to achieve the correct cancellation. With our assumption that it is only the phase and the amplitude that are unknown, we can see that we can choose from the variety of phases presented by the delay line, and adjust the tap weights Wk for the amplitude. Moreover, simple geometric constructions convince us that all we need is two different phases, and we can find amplitudes (tap weights) such that any amplitude and phase condition can be met. Thus, for our example, we would really only need two taps of the N taps shown.

Fig. 10-2  An adaptive filter using a linear combiner (FIR filter) to adjust phase and amplitude of a reference input to cancel the "hum" component from a desired "speech" signal.
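The "two phases suffice" observation can be checked numerically. In this sketch (the frequency, delay, and target values are all assumed for illustration), we solve for two tap weights that match an arbitrary target amplitude and phase:

```python
import numpy as np

# Two taps suffice: solve for weights w1, w2 so that
# w1*sin(wt) + w2*sin(wt - wT) equals B*sin(wt + phi),
# which works whenever wT is not a multiple of pi.
w = 2 * np.pi * 60.0          # "hum" frequency (assumed)
T = 1.0 / 1000.0              # one-tap delay (assumed)
B, phi = 1.7, 0.9             # target amplitude and phase (assumed)

# Matching the sin(wt) and cos(wt) parts gives two linear equations:
#   w1 + w2*cos(wT) = B*cos(phi)
#       -w2*sin(wT) = B*sin(phi)
M = np.array([[1.0, np.cos(w * T)],
              [0.0, -np.sin(w * T)]])
w1, w2 = np.linalg.solve(M, np.array([B * np.cos(phi), B * np.sin(phi)]))

t = np.linspace(0.0, 0.1, 5000)
err = w1 * np.sin(w * t) + w2 * np.sin(w * t - w * T) - B * np.sin(w * t + phi)
print(np.abs(err).max() < 1e-9)    # True: any amplitude/phase is reachable
```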

Of course, to have the necessary adjustment ability available is important, but that is only part of the solution. We need to have an automatic adjustment procedure in place, and this we have conveniently ignored by hiding it in the box called "algorithm". We can better understand what is in the box, and how the adaptive filter can work, after we have studied the CCL. However, we can indicate here that it is a matter of looking at the output (called the "error" here - an unfortunate but traditional nomenclature), and seeing if it is correlated with a given tap's version of the reference signal. If it is, then we know two things. First, we have not gotten rid of all the hum, since there is still some in the output. Secondly, the particular tap in question is capable of providing a contribution to improving the situation. Further, we shall see that an algorithm can be chosen so that the correction is in the right direction at all times. We shall return to this a bit later.

THE SIDE-TRACKING FILTER (STF):

The Side-Tracking Filter (STF) is intended to be self-tracking through the operation of two filtering channels on either side of a main channel. Essentially we are looking for these channels to examine the possibility that the main channel should be moved in their direction. The principle is most useful in the case where a change of output amplitude level in the main channel is not in itself indicative of the direction in which the filter should be retuned for better performance. These cases are mainly those of bandpass and notch responses which have equal amplitude points on either side of a center frequency. For the most part, we are looking to apply STF ideas to cases where an interference is relatively strong and relatively (but not absolutely) stationary.

Fig. 10-3a shows the STF principle applied to the bandpass case. This would be useful in cases where we have a single sinusoidal component to be tracked and enhanced. The STF here is composed of three voltage-controlled bandpass filters. The center filter is the main processing channel. Above and below we have two side filters that are tuned a bit above and a bit below the main channel. The outputs of the two side filters are not used except that their amplitudes are detected by the magnitude circuits (usually full-wave rectifiers). These outputs are in turn fed to the inputs of a differential integrator, the output of which is the control voltage for all three VCF's. (Here we are assuming that the frequency offset of the side channels has been set by a voltage offset not shown, or perhaps by using slightly different capacitors in the three channels.) When the control voltage Vc changes, all three filters shift, but maintain relative positions.

Fig. 10-3a  A Bandpass Type Side-Tracking Filter
Fig. 10-3b  Side channels in balance used to hold center channel in proper position.
Fig. 10-3c  If input moves upward as shown, side channels are no longer in balance, and differential integrator ramps upward.

Fig. 10-3b shows the case which illustrates how the side channels lock the center channel in place. In this case, the amplitudes of the two side channels are exactly the same, and the differential integrator has a net input of zero, and thus its output Vc stands still. Next suppose that the input sinusoidal moves up in frequency slightly, as in Fig. 10-3c. Now the two side channels will be out of balance, with more amplitude in the upper one, and less in the lower one. This will cause the differential integrator to ramp upward, moving all three channels upward until the center is on the new frequency, and the side channels are balanced again, in a manner similar to Fig. 10-3b. Note that if the input frequency had drifted downward instead of upward, exactly the opposite thing would have happened, and balance would have been restored at a lower Vc. The basic feedback operation here is not at all unfamiliar, being like PLL's and other negative feedback devices.

We can see that the circuit is capable of capturing as well as tracking. If an input signal appears roughly within Δ of the current center frequency, the filter can capture and lock in the same manner in which it responded to a change of input frequency. With this in mind, we can see that in some cases there would be an advantage to keeping the side filters of relatively low Q and somewhat further from the center channel, if we desire a wider capture range. At the same time, the center channel need not have this same lower Q. In fact, the center filter need not even be bandpass, but could be notch, or even low-pass or high-pass if the situation dictates this sort of need. Fig. 10-3a thus is representative of a fairly general idea for tracking filters. There is a wide flexibility with respect to the nature of the center channel, and even a good deal with respect to the side channels which could be bandpass, notch, or even a combination of high-pass and low-pass, which can be illustrated by a simplification discussed immediately below.

Fig. 10-4a  A single VCF state-variable filter may be used to do its own side-tracking under certain circumstances.
Fig. 10-4b  An out-of-balance condition that would cause the frequency of the VCF of Fig. 10-4a to move up so that the overlap point matches the new input frequency.

Fig. 10-4a shows the side tracking idea extended and simplified so that only one VCF is needed. Here we show a state-variable VCF (which would probably have been our choice for a VCF above anyway) which has a low-pass and a high-pass output as well as the bandpass. In Fig. 10-4a, we show yet a third response - a notch, formed by summing the low-pass and high-pass - as the output, but any of the outputs could be used.

Here the capture and locking mechanism is indicated in Fig. 10-4b, which corresponds to Fig. 10-3c for the bandpass case. In the specific instance shown, the input frequency is a bit above the center frequency of the state-variable filter, and there is more amplitude in the high-pass than in the low-pass. This will cause the differential integrator to ramp upward. Note that here we don't have quite the freedom we did in the three VCF case in that the side filters must have the same Q as the center, which in many cases is not a problem.

CORRELATION-CANCELLATION LOOPS (CCL'S):

A CCL is a configuration of two multipliers, an integrator, and a summer as shown in Fig. 10-5a. The CCL in itself is a simple adaptive filter, and it can also be used as an element in a more complex adaptive filter structure, such as serving as the "algorithm" of Fig. 10-2. Because it functions in close analogy with the so-called "LMS algorithm" of the digital adaptive filter, once we understand how the CCL works we can better appreciate the LMS algorithm.

Fig. 10-5a and Fig. 10-5b indicate how the CCL is used to cancel the "hum" component from an input signal containing a mixture of speech and hum. Note that we assume here that we have a reference to the hum available, and we shall also assume in this case that the reference and the hum are in phase with each other. Later we can look at a more general case.

In Fig. 10-5a, we are assuming that the integrator has been reset so that its output is initially zero. This in turn blocks the reference signal from the summer since there is a zero voltage on the right input of the lower multiplier. Therefore, the output of the filter is the same as the input, since nothing else is fed to the summer. However, note that the output is being fed back and is being multiplied by the reference at the top multiplier. Since the output is the same as the input for the moment, and since the input is in phase with the reference, and both are sinusoidals or close to being sinusoidal (the "speech" being the smaller random-like component on the input), the output of the upper multiplier is much like a Sin² function, which is only positive, as shown. Next we consider releasing the integrator from this reset condition.

The Sin² component at the input of the integrator causes the integrator to ramp positive, which in turn causes the lower multiplier to start passing some of the reference signal. Since the two are in phase, this subtraction results in the sinusoidal component at the output being reduced. Continuing back around the loop, we see that the Sin² component is now reduced, which in turn slows the ramping of the integrator. Eventually the steady-state of Fig. 10-5b is achieved. Here the integrator has ramped to some value Vc such that an amount of reference is passed through the lower multiplier which exactly cancels

EN#196 (29) ASP 10-7

the sinusoidal component in the input. The output is now just the IIspeechu as shown. Following back around the loop, we see that this is multiplied by the reference, with the product still fed to the integrator. However, the speech and the reference are not correlated over any significant period of time, and in the product, the positive and negative portions pretty much average to zero. The integrator output may be fluctuating slightly, but as long as the integrator time constant is long enough, the output stands still at Vc. for all pJ:'actical purposes.

Note that the cancellation is now locked in by a negative feedback mechanism of the type we have seen many times before. If, for example, the output of the integrator fluctuates up, then the sinusoidal at the input is over-cancelled, and a small negative sinusoidal appears at the output. This in turn leads to a negative Sin² term at the integrator input, which causes the integrator to ramp back down. A similar argument of course applies to a negative fluctuation of the integrator's output.
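The feedback behavior just described can be checked with a quick numerical sketch. The following is a simple forward-Euler simulation of the loop equations, not of the actual circuit; the hum amplitude, the frequencies, and the integrator time constant are illustrative assumptions, and a small tone stands in for the uncorrelated "speech":

```python
import math

def ccl_cancel(n_steps=200000, dt=1e-5, rc=0.01, f_hum=60.0):
    """Forward-Euler sketch of the single CCL of Fig. 10-5.
    Input is a 'hum' sinusoid (in phase with the reference, amplitude 1.0)
    plus a small uncorrelated tone standing in for the 'speech'."""
    w = 2.0 * math.pi * f_hum
    vc = 0.0                                   # integrator output, starts reset
    hum_residual = 0.0
    for n in range(n_steps):
        t = n * dt
        ref = math.sin(w * t)                  # reference, in phase with the hum
        speech = 0.1 * math.sin(2.0 * math.pi * 913.0 * t)  # uncorrelated stand-in
        vin = speech + 1.0 * ref               # input: speech + hum
        out = vin - vc * ref                   # summer subtracts weighted reference
        vc += (out * ref) * dt / rc            # integrator ramps on the out*ref product
        hum_residual = out - speech            # hum left over at the output
    return vc, abs(hum_residual)

vc, resid = ccl_cancel()
# vc settles near the hum amplitude (1.0) and the hum residual becomes small
```

With vc held at zero the loop reproduces the Fig. 10-5a condition (the hum passes straight through); after many integrator time constants it sits in the Fig. 10-5b state.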

[Fig. 10-5a: The CCL at the moment when the integrator is reset. The reference and the input (speech + "hum") feed the multipliers and summer; with the integrator held at zero the output equals the input.

Fig. 10-5b: The CCL after convergence to the "hum". The integrator sits at Vc, the weighted reference cancels the hum, and the output is just the speech.]

[Fig. 10-6: Quadrature CCL's. The input B Sin(wt+φ) feeds a common summer; Sin(wt) and Cos(wt) references drive the sine and cosine CCL loops, whose weighted outputs are subtracted to form the output.]

QUADRATURE CCL'S

Fig. 10-6 shows a case where we now have two CCL's instead of just the one. This is significant first because it will show us how to handle the case of an arbitrary phase shift between the input sinusoidal and the reference sinusoidal, and because it is the first step in going from the CCL to a full adaptive filter. Note that the summer of the CCL's is now a common summer, which we can compare to Fig. 10-2, where the common summer is in cascade with the subtracting summer, for an equivalent result.

Fig. 10-6 shows that we have available a Sine and a Cosine of the frequency we wish to cancel. In general, to cancel any given sinusoidal component we would expect to have to achieve a particular amplitude and a particular phase. However, we can always do this by a linear combination of a Sine component and a Cosine component, as is seen in Fig. 10-7a, and it can also be seen (Fig. 10-7b) that we do not need to have exactly Sine and Cosine components. In fact, any components with relative phases of say 70° to 110° would probably work quite well. Theoretically, any two different phases, even 0° and 1°, would work, but this can put very severe requirements on the amplitude range of the multipliers, as is shown in Fig. 10-7c. Accordingly we would prefer to have a pair of components involved that have a phase difference of something close to 90°, but we must keep in mind that this is a matter of practical convenience, and not required by theory.

The convergence solution for the quadrature CCL can be demonstrated as follows, with the single CCL being a special case with only a Sine reference. According to Fig. 10-6, the output is:

v_out = B Sin(wt+φ) - V1 Sin(wt) - V2 Cos(wt)     (10-1)

At the upper left multiplier, we have this output multiplied by Sin(wt), which gives:

V_sin = B Sin(wt+φ)Sin(wt) - V1 Sin²(wt) - V2 Cos(wt)Sin(wt)     (10-2)

      = (B/2)Cos(φ) - (B/2)Cos(2wt+φ) - V1/2 + (V1/2)Cos(2wt) - (V2/2)Sin(2wt)     (10-3)

[Fig. 10-7: Vector diagrams in the Sin(wt)/Cos(wt) plane. (a) "Ideal Case": the input vector B Sin(wt+φ) is resolved into components B Cos φ along Sin(wt) and B Sin φ along Cos(wt). (b) A reference angle θ not equal to 90°, but still likely quite satisfactory in generating a solution. (c) For a small angle θ, the two vector components that sum to give proper cancellation are both of very large amplitude. Case (a) is ideal, using a perfect 90° reference, per equations (10-4) and (10-5); the same vector can be generated with a non-90° angle, as in (b) and (c).]

This is the input to the integrator, and by assumption, the CCL has converged, and therefore V1, the output of the integrator, is a constant. Therefore, any DC terms in the input V_sin must vanish. This gives:

V1 = B Cos(φ)     (10-4)

In a similar manner, it can be shown that:

V2 = B Sin(φ)     (10-5)

From the discussion above and from Fig. 10-7 it can be seen that the CCL with two different phases, reasonably close to 90°, is capable of cancelling a sinusoidal component of arbitrary amplitude and phase. If the frequency of the interfering sinusoidal is fairly well known (as would be the case for power line hum, for example), a simple first-order all-pass (phase shifter) set for a 90° shift at the nominal frequency should be more than adequate. (Note however that even a broad-band 90° network will not cancel two different frequencies with only the two-tap case seen here - except in very special circumstances.)
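The convergence results (10-4) and (10-5) can be verified numerically. This is a minimal sketch of the quadrature loop of Fig. 10-6 under assumed values B = 1.5, φ = 0.7 rad, a 100 Hz reference, and an illustrative integrator time constant:

```python
import math

def quad_ccl(b=1.5, phi=0.7, f=100.0, dt=1e-5, rc=0.01, n_steps=200000):
    """Forward-Euler sketch of the quadrature CCL of Fig. 10-6.
    Input is B*Sin(wt+phi); references are Sin(wt) and Cos(wt)."""
    w = 2.0 * math.pi * f
    v1 = v2 = 0.0
    for n in range(n_steps):
        t = n * dt
        s, c = math.sin(w * t), math.cos(w * t)
        out = b * math.sin(w * t + phi) - v1 * s - v2 * c   # equation (10-1)
        v1 += (out * s) * dt / rc                           # sine-loop integrator
        v2 += (out * c) * dt / rc                           # cosine-loop integrator
    return v1, v2

v1, v2 = quad_ccl()
# v1 converges to B*Cos(phi) and v2 to B*Sin(phi), per (10-4) and (10-5)
```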

Fig. 10-8 and Fig. 10-9 show some practical implementations of CCL's. Fig. 10-8 is the single loop, corresponding to Fig. 10-5, while Fig. 10-9 is a double loop corresponding to the quadrature reference system of Fig. 10-6. Both circuits use the inexpensive transconductance multiplier along with an op-amp to form a four-quadrant multiplier. Parts cost for either circuit is very low, under $10 for Fig. 10-9.

In Fig. 10-8, the locations of the multipliers are indicated by their X and Y inputs and their Z outputs. The multiplier is actually configured as Z = -XY/S as a matter of convenience here. The remainder of the circuit is standard analog circuitry, with the summer and

[Fig. 10-8: A realization of an analog CCL using an inexpensive transconductance multiplier (type CA3080). Signal 1 and Signal 2 feed a test summer ahead of the CCL input; the "upper multiplier," "lower multiplier," reference input, summer, and integrator are shown with component values.]
integrator as indicated. Just before the actual CCL summer, there is an additional "test summer" shown, the output of which is the actual CCL input. This permits a simple test and demonstration in which two different sinusoidal signals are summed by the test summer, becoming the CCL input, one of the signals being regarded as the "signal" while the other is regarded as the "noise." Whichever of these we wish to cancel is then connected to the reference input as well as to the test summer, and it will disappear from the output. The multipliers may be trimmed by conventional four-quadrant multiplier balancing techniques if desired, but it is usually more productive to adjust the trimmers to optimize the rejection of the unwanted component. In general, the X-trim will have more effect on the performance than the Y-trim. Also, for demonstration purposes, it may be useful to greatly increase the time constant of the integrator (e.g., make the capacitor 1 microfarad instead of 0.1 microfarad). This makes the convergence time longer, of course, and the cancellation will take place gradually over several seconds (this also depends on the amplitudes of the signals, with faster convergence taking place for larger signals, as the integrator ramps faster). This slower convergence during demonstrations is more convincing, as it seems to be psychologically more impressive to see something happen than it is to see that it has happened. Of course, in actual use, the CCL's time constant would be set on a performance basis.

Fig. 10-9 is basically an extension of Fig. 10-8. Here we could add a simple phase shifter to provide appropriate reference signals, and another phase shifter to provide the arbitrary phase shift φ between the input and the reference. This does work, and provides a demonstration similar to that of Fig. 10-8. In general, Fig. 10-8 will provide the

[Fig. 10-9: Realization of a Quadrature CCL. An extension of Fig. 10-8 with "Sine Reference" and "Cosine Reference" inputs, X-trim and Y-trim adjustments, and Signal 1/Signal 2 inputs to the CCL input summer.]

most useful approach to practical problems of single component cancellation. In such a case, only one of Signal 1 or Signal 2 would be used, or we would go directly into the CCL input point, feeding in the signal to be cleaned up. The reference signal, and a phase shifter not shown, would then be used to provide the sine and cosine references indicated.

Another interesting demonstration that can be done with the double loop of Fig. 10-9 is to use one sinusoidal signal generator and an appropriate 90° phase shifter for the reference, and to provide a second sinusoidal generator to the CCL input, and monitor the output for different input frequencies. That is, we test the CCL set up in this way exactly as we would a filter, measuring its frequency response.

When the frequency of the input is exactly the same as the reference, we know that we only have an arbitrary phase difference φ, and the CCL should be able to cancel this, according to our discussion above. More interestingly, when the frequencies differ only slightly, we can still get substantial cancellation. This we can understand as the CCL system interpreting this small difference in frequency as a phase difference that it is continually trying to correct (and which it is capable of correcting). As this frequency difference gets larger, the CCL system is less capable of making up the apparent phase error, and cancellation is less complete. This is because of the integrator time constants that determine how fast the weights can change. The longer the time constants, the slower the correction, and the less complete the cancellation. Thus the system configured as described looks somewhat like a notch filter, with the notch position set by the reference oscillator frequency, and with the Q determined by the RC time constant of the integrator, getting higher for larger RC time constant. By the same argument, it is possible to see that the Q also depends on the inverse square of the reference amplitude, since the amplitude affects the integrator charging rate through two multipliers.
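This notch-like behavior can also be seen in simulation. The sketch below drives a quadrature loop like the one above with a tone detuned from the reference and measures the RMS output over the tail of the run; the specific frequencies and RC value are illustrative assumptions:

```python
import math

def ccl_notch_response(f_in, f_ref=100.0, rc=0.01, dt=1e-5, n=300000):
    """Drive the quadrature CCL with a tone at f_in while the references stay
    at f_ref; return the RMS output over the final 20% of the run."""
    w_in, w_r = 2.0 * math.pi * f_in, 2.0 * math.pi * f_ref
    v1 = v2 = 0.0
    acc, cnt = 0.0, 0
    for k in range(n):
        t = k * dt
        s, c = math.sin(w_r * t), math.cos(w_r * t)
        out = math.sin(w_in * t) - v1 * s - v2 * c
        v1 += out * s * dt / rc
        v2 += out * c * dt / rc
        if k >= (4 * n) // 5:                 # let it settle, then measure
            acc += out * out
            cnt += 1
    return math.sqrt(acc / cnt)

on_tune = ccl_notch_response(100.0)   # at the reference: deep cancellation
near = ccl_notch_response(103.0)      # slight detuning: partial cancellation
far = ccl_notch_response(140.0)       # well away: input passes nearly unchanged
```

Increasing rc narrows the apparent notch (higher Q), as argued in the text.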

A final interesting point can be made about the single loop of Fig. 10-5. As long as we assume that the interference signal and the reference signal are exactly the same waveform, then the single loop is capable of cancelling arbitrary waveforms, and is not restricted to sinusoidals. This can be argued simply in a manner similar to that surrounding Fig. 10-5a and Fig. 10-5b. This fact can be very useful in cases where phase shift is very small, so that one loop can be used, or where there is a fixed time delay instead of an actual phase shift. The fixed time delay can be handled with an analog delay line, for example. The more general case, of a general waveform with arbitrary phase shifts in different components, requires a larger number of CCL's and a corresponding larger number of reference phases, typically obtained from a tapped delay line.

10-5  A COMPARISON OF THE CCL AND THE LMS ALGORITHM:

Fig. 10-10a shows a general view of an adaptive filter, with Fig. 10-10b showing an LMS algorithm realization, and Fig. 10-10c showing our familiar CCL. The theory of adaptive filtering, not discussed here, leads to the LMS algorithm equation, which is stated in the form of a tap weight update equation as:


[Fig. 10-10a: General view of an adaptive filter - input x(n), tap weights w1..wN set by an "algorithm" block, desired signal d(n), and error e(n).

Fig. 10-10b: A realization of the LMS algorithm - repeated for each tap i, the error e(n) is multiplied by x_i(n), scaled by 2μ, and accumulated into the tap weight.

Fig. 10-10c: Adjusting the tap weight with a CCL - the multiplier and integrator (1/sCR) play the same role in continuous time, with error e(t).]

w_i(n+1) = w_i(n) + 2μ e(n) x_i(n)     (10-6)

which says that the next tap weight, for time instant n+1, is the current tap weight, plus a correction term. Note that this change of tap weight is equivalent to a ramping of the integrator output in the CCL case. What is of interest is the correction term. Here, the parameter μ (or 2μ) is small (something like 0.001), so the correction term at any one time is small. The only way the tap can change substantially in one direction is for the product e(n)x_i(n) to have the same sign over many time instances. This is to say that e(n) and x_i(n) must be correlated over a substantial amount of time; otherwise the tap weight is not being called upon to contribute to convergence, but is rather being just kicked up and down a bit. Of course, this reminds us of what is happening with the CCL. In fact, it is possible to show that the two systems can be related so that:

2μ = T/RC     (10-7)
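As a point of comparison, the LMS update of equation (10-6) is easy to state directly in software. Everything numeric in the sketch below (tap count, μ, the digital frequency, the uniform-noise "speech") is an illustrative assumption, not from the text:

```python
import math
import random

def lms_cancel(n=20000, taps=8, mu=0.005):
    """Tapped-delay-line canceller using the LMS update of equation (10-6):
    w_i(n+1) = w_i(n) + 2*mu*e(n)*x_i(n)."""
    random.seed(1)
    w = [0.0] * taps
    line = [0.0] * taps
    tail = []
    for k in range(n):
        x = math.sin(0.2 * k)                          # reference hum sample
        line = [x] + line[:-1]                         # shift the delay line
        speech = 0.05 * (2.0 * random.random() - 1.0)  # uncorrelated 'speech'
        d = speech + math.sin(0.2 * k + 1.1)           # input: speech + shifted hum
        y = sum(wi * xi for wi, xi in zip(w, line))    # adaptive filter output
        e = d - y                                      # error = cleaned-up signal
        w = [wi + 2.0 * mu * e * xi for wi, xi in zip(w, line)]
        if k >= n - 2000:
            tail.append(e - speech)                    # hum left in the error
    return math.sqrt(sum(v * v for v in tail) / len(tail))

hum_rms = lms_cancel()
# after convergence, the hum remaining in e(n) is small
```

Because the delay line supplies many reference phases, this handles the arbitrary phase shift (here 1.1 rad) just as the quadrature CCL does.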


END OF ANALOG SIGNAL PROCESSING TEXT

Tuning Equations Derived from Passive Sensitivity

(Continued from Page 2)

But suppose we need to do somewhat better, or perhaps we take great pride in our work and want this particular set (my set) of ten filters to be better tuned. What sort of things can we do that may work well and not cost us too much in money or time?

The first thing is obviously to consider using variable resistors ("trim pots") to adjust about a certain range. Keep in mind that a good quality trim pot costs several dollars, a cheap one perhaps 30 cents, and a fixed resistor only 3 cents. So good quality trimmers may be out of the question because of cost, and lesser quality ones may be subject to drift and contamination. Another problem with trimmers is that there is always a temptation to adjust them one more time, or to worry if perhaps we accidentally jarred one. All these considerations make a suggestion of using a just-one-added-resistor-soldered-in tune-up quite attractive. We just need to be able to determine what resistor to use without undue trial and error. Let's look at an example.

Suppose our cutoff frequency is determined by four passive components as:

fc = 1/(2π√(R1R2C1C2))     (1)

as is typical of many configurations (e.g., Sallen-Key). We can easily calculate:

S_R1^fc = (R1/fc)(∂fc/∂R1) = -1/2     (2)

This is a particularly simple case, and makes a good example. By replacing the partial derivatives with deltas, we obtain:

Δfc/fc ≈ -(1/2)(ΔR1/R1)     (3)

Now suppose that we have intentionally set R1 slightly lower than nominal, so we expect the frequency to be initially slightly high. At the same time, we arrange on our circuit boards to make R1 as a series combination of two resistors, the second of which is initially just a wire. Now we measure the cutoff frequency, and we find it to be, for example, 1032 Hz. That is, Δfc = +32 Hz. Suppose the initial value of R1 is nominally 10,000 ohms (we don't know for sure what it is unless we measure it, but expect it to be within advertised tolerances). In a sense, by measuring the cutoff, we have made an overall measurement of all four frequency-determining components. (Any and all of them contribute to the error in general.) But, if the error were totally due to R1, what error in R1 would account for the observed frequency error? The answer is:

ΔR1 = -2(10000/1000)·32 = -640 ohms     (4)

This means that the frequency error observed is as though R1 were wrong by -640 ohms. This means that R1 is 640 ohms too small. Thus we would clip out the zero-ohm wire and install a 640 ohm correction. Note that in equation (4) we have plugged in nominal values for R1 and for fc, but things would be little changed if we tried other possibly better values. The level of correction is at the 6% level of the component (640 ohms in 10000 ohms). If we had used the measured value of 1032 for fc, the correction resistor would be 620 ohms. In either case here (640 or 620 ohms) we would have chosen the closest nominal 5% resistor, which would have been 620 ohms. Not unreasonably, we expect this correction to put us close to 5% of 6%, or 0.3%, and the frequency to within half this error because of the sensitivity having a magnitude of 1/2.

This can usually give astoundingly good results. Note that it involves the use of two resistors, where a large one is effectively measured, and a small correction then chosen by formula, but not measured. Obviously, the process is easily iterated if we like. But after all, we are not using much more than the fact that a -3% frequency correction required a +6% change in a frequency-determining resistor, as indicated by the sensitivity value of -1/2.
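The bookkeeping above reduces to a couple of lines of code. This sketch simply restates equations (2) through (4) with the example's numbers (1000 Hz nominal cutoff, 1032 Hz measured, R1 nominally 10k):

```python
def r1_correction(f_nominal, f_measured, r1_nominal, sensitivity=-0.5):
    """Infer the R1 error implied by an observed cutoff-frequency error,
    from delta_f/f = S * (delta_R1/R1), i.e. equation (3) with S = -1/2."""
    delta_f = f_measured - f_nominal
    return r1_nominal * (delta_f / f_nominal) / sensitivity

dr1 = r1_correction(1000.0, 1032.0, 10000.0)
# dr1 = -640.0 ohms: R1 acts 640 ohms low, so a 640 ohm (nearest 5%: 620)
# series resistor replaces the zero-ohm wire
```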


This first example was fairly straightforward. We knew how to calculate the sensitivity and interpret the result as a tuning formula. More complicated cases may be as easy only in theory. Consider as a second example the popular "Deliyannis" bandpass filter. [See details in Application Note 145, "Analysis of the Deliyannis Filter," Sept. 4, 1979. This filter also appears as Fig. 5-8 of Chapter 5 of Analog Signal Processing, pg. 8 of EN#194.] This filter has a center frequency given by

fo = 1/(2π√B·RC)     (5)

where B is a ratio of two resistors. This is similar to the first example. The filter's Q is given by:

Q = (1-a)√B / [2(1-a) - aB]     (6)

and the gain at the center frequency (the "peak gain") is given by:

g = B / [2(1-a) - aB]     (7)

Here the ratio a is the fraction of the output fed back to the (+) input of the op-amp, and is just the result of another resistor ratio.

It is evident that the manipulation of the center frequency is dependent on B but not on a. So we might consider adjusting the frequency first. Further, it is difficult to accurately measure the Q, since the Deliyannis filter is generally used for its ability to achieve very high Q's. So we may well opt for adjusting the peak gain, with the idea that when we get this right, the Q may well come along. Equations (6) and (7) suggest this. (Or, we may well be mainly concerned with achieving the correct peak gain.) Thus, we suppose B is fixed and we need to adjust a. What is the sensitivity of g to a? Well, we could do this analytically by taking the usual partial derivative:

S_a^g = (a/g) ∂g/∂a = a(2+B) / [2 - a(2+B)]     (8)

but we can also cheat. We really need Δg/Δa for our tuning. All we really need to do is put in our nominal value of B, wiggle the ratio a ever so slightly, and see how much g changes. This we do not by a circuit measurement, but simply by using equation (7).

Note that the ratio a, for a positive, non-infinite Q, must be less than 2/(B+2). So if B=16 for example, a would need to be less than 0.1111. And, we would already have a nominal value of a as part of our design calculations. Perhaps, for example, a was supposed to be 0.1. (Overall, this means we designed for Q=18.) So what is the sensitivity

of g to a about the nominal design? Well, suppose we try two values of a: a=0.1 and a=0.1001. We get g values of 80 and 80.726. So Δa is 0.0001 and Δg is 0.726. The sensitivity of g to a is right around 9.075. This is not good from the point of view of getting it right without tuning, but not all that bad when we consider that we need for g to depend on a or we would have no way of doing the tuning. Note that while we got this without taking derivatives, plugging a=0.1 and B=16 into equation (8) (from taking derivatives) does give us exactly 9.
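The "wiggle a" computation is trivial to script, and serves as a check on equation (8). This sketch reproduces the numbers in the text (B=16, a stepped from 0.1 to 0.1001):

```python
def peak_gain(a, b=16.0):
    """Equation (7): g = B / [2(1-a) - aB]."""
    return b / (2.0 * (1.0 - a) - a * b)

def sensitivity_fd(a, b=16.0, da=1e-4):
    """Finite-difference sensitivity S = (a/g) * (delta_g / delta_a)."""
    g = peak_gain(a, b)
    return (a / g) * (peak_gain(a + da, b) - g) / da

def sensitivity_exact(a, b=16.0):
    """Equation (8): S = a(2+B) / [2 - a(2+B)]."""
    return a * (2.0 + b) / (2.0 - a * (2.0 + b))

# peak_gain(0.1) = 80.0, peak_gain(0.1001) = 80.726...;
# the finite-difference S is about 9.08 (the text's 9.075), the exact S is 9
```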

What does this really mean? Suppose g needs to be 80 but is only 50, as could well occur. Then a would have to be changed by (a/g)(Δg/S), or about 0.004. Well, a would be determined by a ratio of resistors, and the individual resistors would typically be on the order of 10k to 100k. So making an adjustment of a on the order of 0.4% would involve an additional resistor added to the lower leg of the divider on the order of 40 to 400 ohms. This is something we can handle. It is worth noting that if we actually do solder in such an adjustment, that while the soldering heat does not change the small additional resistor much, the heat can often wander into the other resistor of the divider and throw off the results for perhaps 10 seconds before the resistors all cool down. It can be amusing to watch the performance converge - as we encourage it by blowing on it. Because of our calculations, the results can come around quite well, and with little or no additional trial-and-error.

Finally, suppose that in a particular case we want a=0.1 and B=16. We choose to use two standard 5% resistors, 10k and 91k, to achieve the ratio a. Suppose the actual values we end up with are 10.1k and 92.1k, so that a = 10.1/(92.1+10.1) = 0.0988. We do not know these actual values, but according to equations (6) and (7) we expect to measure Q=16.3 and g=72.4 (not g=80). The actual sensitivity, using equation (8), is S=8.0, but we don't know this, and use the nominal S=9. Using equation (8) we get:

Δa = (a/g)(Δg/S) = (0.1/72.4)(-7.6/9) = -0.001166     (9)


If we make this correction, a = 0.0988 + 0.001166 = 0.09999. Using this new value of a, we calculate that we would observe Q=17.99 and g=79.9, very close to nominal. This we would achieve with a series correction to the 10k resistor. This value, Rx, would be 10k·Δa/0.1, or right around 117 ohms. Thus, from our easily measured Δg we calculate Δa based on observed Δg, g, and nominal values for a and S. From Δa we calculate Rx based on nominal values of a and one of the resistors. Note that Q as well as g does come up near normal as a result.
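This closing example can be checked end to end with the formulas above. The sketch below recomputes the expected observations for the actual resistor values and then applies the correction of equation (9); all numbers are those of the text:

```python
import math

def deliyannis_q(a, b=16.0):
    """Equation (6): Q = (1-a)*sqrt(B) / [2(1-a) - aB]."""
    return (1.0 - a) * math.sqrt(b) / (2.0 * (1.0 - a) - a * b)

def deliyannis_g(a, b=16.0):
    """Equation (7): g = B / [2(1-a) - aB]."""
    return b / (2.0 * (1.0 - a) - a * b)

a_actual = 10.1 / (92.1 + 10.1)               # the 5% resistors landed here: ~0.0988
g_obs = deliyannis_g(a_actual)                # about 72.4 observed instead of 80
da = (0.1 / g_obs) * ((80.0 - g_obs) / 9.0)   # equation (9), using nominal S = 9
a_new = a_actual + da                         # about 0.09999 after correction
rx = 10000.0 * da / 0.1                       # series resistor, right around 117 ohms
q_new = deliyannis_q(a_new)                   # comes back very near the design Q of 18
g_new = deliyannis_g(a_new)                   # and g returns very near 80
```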

Electronotes, Vol. 20, No. 196, December: 2000

Published by B. Hutchins, 1016 Hanshaw Rd., Ithaca, NY 14850 (607)-257-8010
