The Fundamentals of
Signal Analysis
Application Note 243
Table of Contents
Chapter 1  Introduction
Chapter 2  The Time, Frequency and Modal Domains: A Matter of Perspective
    Section 1: The Time Domain
    Section 2: The Frequency Domain
    Section 3: Instrumentation for the Frequency Domain
    Section 4: The Modal Domain
    Section 5: Instrumentation for the Modal Domain
    Section 6: Summary
Chapter 3  Understanding Dynamic Signal Analysis
    Section 1: FFT Properties
    Section 2: Sampling and Digitizing
    Section 3: Aliasing
    Section 4: Band Selectable Analysis
    Section 5: Windowing
    Section 6: Network Stimulus
    Section 7: Averaging
    Section 8: Real Time Bandwidth
    Section 9: Overlap Processing
    Section 10: Summary
Chapter 4  Using Dynamic Signal Analyzers
    Section 1: Frequency Domain Measurements
    Section 2: Time Domain Measurements
    Section 3: Modal Domain Measurements
    Section 4: Summary
Appendix A  The Fourier Transform: A Mathematical Background
Appendix B  Bibliography
Index
Chapter 1
Introduction
The analysis of electrical signals is
a fundamental problem for many
engineers and scientists. Even if the
immediate problem is not electrical,
the basic parameters of interest are
often changed into electrical signals
by means of transducers. Common
transducers include accelerometers
and load cells in mechanical work,
EEG electrodes and blood pressure
probes in biology and medicine, and
pH and conductivity probes in chem-
istry. The rewards for transforming
physical parameters to electrical sig-
nals are great, as many instruments
are available for the analysis of elec-
trical signals in the time, frequency
and modal domains. The powerful
measurement and analysis capabili-
ties of these instruments can lead to
rapid understanding of the system
under study.
This note is a primer for those who
are unfamiliar with the advantages of
analysis in the frequency and modal
domains and with the class of analyz-
ers we call Dynamic Signal Analyzers.
In Chapter 2 we develop the concepts
of the time, frequency and modal
domains and show why these differ-
ent ways of looking at a problem
often lend their own unique insights.
We then introduce classes of instru-
mentation available for analysis in
these domains.
In Chapter 3 we develop the proper-
ties of one of these classes of analyz-
ers, Dynamic Signal Analyzers. These
instruments are particularly appropri-
ate for the analysis of signals in the
range of a few millihertz to about a
hundred kilohertz.
Chapter 4 shows the benefits of
Dynamic Signal Analysis in a wide
range of measurement situations. The
powerful analysis tools of Dynamic
Signal Analysis are introduced as
needed in each measurement
situation.
This note avoids the use of rigorous
mathematics and instead depends on
heuristic arguments. We have found
in over a decade of teaching this
material that such arguments lead to
a better understanding of the basic
processes involved in the various
domains and in Dynamic Signal
Analysis. Equally important, this
heuristic instruction leads to better
instrument operators who can intelli-
gently use these analyzers to solve
complicated measurement problems
with accuracy and ease*.
Because of the tutorial nature of this
note, we will not attempt to show
detailed solutions for the multitude of
measurement problems which can be
solved by Dynamic Signal Analysis.
Instead, we will concentrate on the
features of Dynamic Signal Analysis,
how these features are used in a wide
range of applications and the benefits
to be gained from using Dynamic
Signal Analysis.
Those who desire more details
on specific applications should look
to Appendix B. It contains abstracts
of Agilent Technologies Application
Notes on a wide range of related
subjects. These can be obtained free
of charge from your local Agilent
field engineer or representative.
* A more rigorous mathematical justification for the arguments developed in the main text can be found in Appendix A.
Chapter 2
The Time, Frequency and Modal Domains: A Matter of Perspective
In this chapter we introduce the
concepts of the time, frequency and
modal domains. These three ways of
looking at a problem are interchange-
able; that is, no information is lost
in changing from one domain to
another. The advantage in introducing
these three domains is that of a
change of perspective. By changing
perspective from the time domain, the
solution to difficult problems
can often become quite clear in the
frequency or modal domains.
After developing the concepts of each
domain, we will introduce the types
of instrumentation available. The
merits of each generic instrument
type are discussed to give the reader
an appreciation of the advantages and
disadvantages of each approach.
Section 1: The Time Domain
The traditional way of observing
signals is to view them in the time
domain. The time domain is a record
of what happened to a parameter of
the system versus time. For instance,
Figure 2.1 shows a simple spring-
mass system where we have attached
a pen to the mass and pulled a piece
of paper past the pen at a constant
rate. The resulting graph is a record
of the displacement of the mass
versus time, a time domain view of
displacement.
Such direct recording schemes are
sometimes used, but it usually is
much more practical to convert
the parameter of interest to an
electrical signal using a transducer.
Transducers are commonly available
to change a wide variety of parame-
ters to electrical signals. Micro-
phones, accelerometers, load cells,
conductivity and pressure probes are
just a few examples.
This electrical signal, which repre-
sents a parameter of the system, can
be recorded on a strip chart recorder
as in Figure 2.2. We can adjust the
gain of the system to calibrate our
measurement. Then we can repro-
duce exactly the results of our simple
direct recording system in Figure 2.1.
Why should we use this indirect
approach? One reason is that we are
not always measuring displacement.
We then must convert the desired
parameter to the displacement of the
recorder pen. Usually, the easiest way
to do this is through the intermediary
of electronics. However, even when
measuring displacement we would
normally use an indirect approach.
Why? Primarily because the system in
Figure 2.1 is hopelessly ideal. The
mass must be large enough and the
spring stiff enough so that the pen’s
mass and drag on the paper will not
affect the results appreciably. Also, the deflection of the mass must be large enough to give a usable result; otherwise, a mechanical lever system to amplify the motion would have to be added, with its attendant mass and friction.

Figure 2.1: Direct recording of displacement - a time domain view.
Figure 2.2: Indirect recording of displacement.
With the indirect system a transducer
can usually be selected which will not
significantly affect the measurement.
This can go to the extreme of com-
mercially available displacement
transducers which do not even con-
tact the mass. The pen deflection can
be easily set to any desired value by
controlling the gain of the electronic
amplifiers.
This indirect system works well
until our measured parameter begins
to change rapidly. Because of the
mass of the pen and recorder mecha-
nism and the power limitations of
its drive, the pen can only move
at finite velocity. If the measured
parameter changes faster, the output
of the recorder will be in error. A
common way to reduce this problem
is to eliminate the pen and record on
a photosensitive paper by deflecting
a light beam. Such a device is
called an oscillograph. Since it is
only necessary to move a small,
light-weight mirror through a very
small angle, the oscillograph can
respond much faster than a strip
chart recorder.
Another common device for display-
ing signals in the time domain is the
oscilloscope. Here an electron beam is
moved using electric fields. The elec-
tron beam is made visible by a screen
of phosphorescent material.
It is capable of accurately displaying
signals that vary even more rapidly
than the oscillograph can handle.
This is because it is only necessary to
move an electron beam, not a mirror.
The strip chart, oscillograph and
oscilloscope all show displacement
versus time. We say that changes
in this displacement represent the
variation of some parameter versus
time. We will now look at another
way of representing the variation of
a parameter.
Figure 2.3: Simplified oscillograph operation.
Figure 2.4: Simplified oscilloscope operation (horizontal deflection circuits omitted for clarity).
Section 2: The Frequency
Domain
It was shown over one hundred years
ago by Baron Jean Baptiste Fourier
that any waveform that exists in the
real world can be generated by
adding up sine waves. We have illus-
trated this in Figure 2.5 for a simple
waveform composed of two sine
waves. By picking the amplitudes,
frequencies and phases of these sine
waves correctly, we can generate a
waveform identical to our
desired signal.
Conversely, we can break down our
real world signal into these same sine
waves. It can be shown that this com-
bination of sine waves is unique; any
real world signal can be represented
by only one combination of sine
waves.
Figure 2.6a is a three dimensional
graph of this addition of sine waves.
Two of the axes are time and ampli-
tude, familiar from the time domain.
The third axis is frequency which
allows us to visually separate the
sine waves which add to give us our
complex waveform. If we view this
three-dimensional graph along the
frequency axis we get the view in
Figure 2.6b. This is the time domain
view of the sine waves. Adding them
together at each instant of time gives
the original waveform.
However, if we view our graph along
the time axis as in Figure 2.6c, we
get a totally different picture. Here
we have axes of amplitude versus
frequency, what is commonly called
the frequency domain. Every sine
wave we separated from the input
appears as a vertical line. Its height
represents its amplitude and its posi-
tion represents its frequency. Since
we know that each line represents a
sine wave, we have uniquely
characterized our input signal in the
frequency domain*. This frequency
domain representation of our signal
is called the spectrum of the signal.
Each sine wave line of the spectrum
is called a component of the
total signal.
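To make this concrete, here is a small Python sketch that builds a waveform from two sine waves and then recovers the two spectral lines with an FFT; the sample rate, frequencies and amplitudes are arbitrary choices for illustration.

import numpy as np

fs = 1000.0                      # sample rate in Hz (arbitrary choice)
t = np.arange(0, 1.0, 1.0 / fs)  # one second of time data

# A "complex" waveform made of two sine waves (the idea of Figure 2.5)
x = 1.0 * np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)

# Transform to the frequency domain; each sine wave becomes one line
spectrum = np.fft.rfft(x) / (len(x) / 2)     # scale so peaks read as amplitude
freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)

for f, a in zip(freqs, np.abs(spectrum)):
    if a > 0.1:                  # print only the significant lines
        print(f"{f:6.1f} Hz  amplitude {a:.2f}")
# Expected output: one line near 50 Hz (amplitude 1.0) and one near 120 Hz (0.5)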
Figure 2.5: Any real waveform can be produced by adding sine waves together.
Figure 2.6: The relationship between the time and frequency domains. a) Three-dimensional coordinates showing time, frequency and amplitude; b) time domain view; c) frequency domain view.
* Actually, we have lost the phase information of the sine
waves. How we get this will be discussed in Chapter 3.
The Need for Decibels
Since one of the major uses of the frequency
domain is to resolve small signals in the
presence of large ones, let us now address
the problem of how we can see both large
and small signals on our display
simultaneously.
Suppose we wish to measure a distortion
component that is 0.1% of the signal. If we set
the fundamental to full scale on a four inch
(10 cm) screen, the harmonic would be only
four thousandths of an inch (0.1 mm) tall.
Obviously, we could barely see such a signal,
much less measure it accurately. Yet many
analyzers are available with the ability to
measure signals even smaller than this.
Since we want to be able to see all the
components easily at the same time, the
only answer is to change our amplitude scale.
A logarithmic scale would compress our large
signal amplitude and expand the small ones,
allowing all components to be displayed at the
same time.
Alexander Graham Bell discovered that the human ear responds logarithmically to power differences and invented a unit, the Bel, to help him measure people's ability to hear. One tenth of a Bel, the decibel (dB), is the most common unit used in the frequency domain today. A table of the relationship between volts, power and dB is given in Figure 2.8.
From the table we can see that our 0.1%
distortion component example is 60 dB below
the fundamental. If we had an 80 dB display
as in Figure 2.9, the distortion component
would occupy 1/4 of the screen, not 1/1000
as in a linear display.
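The arithmetic behind these figures is easy to check. A minimal sketch, using the standard voltage-ratio definition dB = 20 log10(V/Vref); the voltages are arbitrary examples.

import math

def db_from_voltage_ratio(v, v_ref):
    """Convert a voltage ratio to decibels: dB = 20 * log10(v / v_ref)."""
    return 20.0 * math.log10(v / v_ref)

fundamental = 1.0      # volts, full scale (arbitrary)
distortion = 0.001     # volts, a 0.1% distortion component

print(db_from_voltage_ratio(distortion, fundamental))   # -60.0 dB
# On an 80 dB logarithmic display, a component 60 dB below full scale
# still occupies (80 - 60)/80 = 1/4 of the screen height.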
Figure 2.8: The relationship between decibels, power and voltage.
Figure 2.9: Small signals can be measured with a logarithmic amplitude scale.
It is very important to understand that we have neither gained nor lost information; we are just representing it differently. We are looking at
the same three-dimensional graph
from different angles. This different
perspective can be very useful.
Why the Frequency Domain?
Suppose we wish to measure the
level of distortion in an audio oscilla-
tor. Or we might be trying to detect
the first sounds of a bearing failing on
a noisy machine. In each case, we are
trying to detect a small sine wave in
the presence of large signals. Figure
2.7a shows a time domain waveform
which seems to be a single sine wave.
But Figure 2.7b shows in the frequen-
cy domain that the same signal is
composed of a large sine wave and
significant other sine wave compo-
nents (distortion components). When
these components are separated in
the frequency domain, the small
components are easy to see because
they are not masked by larger ones.
The frequency domain’s usefulness
is not restricted to electronics or
mechanics. All fields of science and
engineering have measurements like
these where large signals mask others
in the time domain. The frequency
domain provides a useful tool in
analyzing these small but important
effects.
The Frequency Domain:
A Natural Domain
At first the frequency domain may
seem strange and unfamiliar, yet it
is an important part of everyday life.
Your ear-brain combination is an
excellent frequency domain analyzer.
The ear-brain splits the audio spec-
trum into many narrow bands and
determines the power present in
each band. It can easily pick small
sounds out of loud background noise
thanks in part to its frequency
domain capability. A doctor listens
to your heart and breathing for any
unusual sounds. He is listening for
frequencies which will tell him
something is wrong. An experienced
mechanic can do the same thing with
a machine. Using a screwdriver as a
stethoscope, he can hear when a
bearing is failing because of the
frequencies it produces.
Figure 2.7: Small signals are not hidden in the frequency domain. a) Time domain - small signal not visible; b) frequency domain - small signal easily resolved.
So we see that the frequency domain
is not at all uncommon. We are just
not used to seeing it in graphical
form. But this graphical presentation is really no stranger than plotting the way temperature changes with time as the displacement of a line on a graph.
Spectrum Examples
Let us now look at a few common sig-
nals in both the time and frequency
domains. In Figure 2.10a, we see that
the spectrum of a sine wave is just a
single line. We expect this from the
way we constructed the frequency
domain. The square wave in Figure
2.10b is made up of an infinite num-
ber of sine waves, all harmonically
related. The lowest frequency present
is the reciprocal of the square wave
period. These two examples illustrate
a property of the frequency trans-
form: a signal which is periodic and
exists for all time has a discrete fre-
quency spectrum. This is in contrast
to the transient signal in Figure 2.10c
which has a continuous spectrum.
This means that the sine waves that
make up this signal are spaced
infinitesimally close together.
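The square wave case can be verified numerically. The sketch below sums the odd harmonics of the standard square wave Fourier series; each term in the sum corresponds to one line of the discrete spectrum, and adding more terms brings the waveform ever closer to an ideal square wave. The fundamental frequency and the number of harmonics are arbitrary choices for illustration.

import numpy as np

def square_from_harmonics(t, f0, n_harmonics):
    """Build a unit square wave of fundamental f0 from its Fourier series:
    (4/pi) * sum over odd k of sin(2*pi*k*f0*t) / k."""
    x = np.zeros_like(t)
    for k in range(1, 2 * n_harmonics, 2):          # k = 1, 3, 5, ...
        x += (4.0 / np.pi) * np.sin(2 * np.pi * k * f0 * t) / k
    return x

t = np.linspace(0.0, 2.0e-3, 2000)                  # 2 ms of time data
approx = square_from_harmonics(t, f0=1000.0, n_harmonics=25)
# 'approx' now closely resembles a 1 kHz square wave; its spectrum is a set
# of discrete lines at 1, 3, 5, ... kHz with amplitudes falling as 1/k.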
Another signal of interest is the
impulse shown in Figure 2.10d. The
frequency spectrum of an impulse is
flat, i.e., there is energy at all frequen-
cies. It would, therefore, require
infinite energy to generate a true
impulse. Nevertheless, it is possible
to generate an approximation to
an impulse which has a fairly flat
spectrum over the desired frequency
range of interest. We will find signals
with a flat spectrum useful in our
next subject, network analysis.
Figure 2.10: Frequency spectrum examples.
Network Analysis
If the frequency domain were
restricted to the analysis of signal
spectrums, it would certainly not be
such a common engineering tool.
However, the frequency domain is
also widely used in analyzing the
behavior of networks (network
analysis) and in design work.
Network analysis is the general
engineering problem of determining
how a network will respond to an
input*. For instance, we might wish
to determine how a structure will
behave in high winds. Or we might
want to know how effective a sound
absorbing wall we are planning on
purchasing would be in reducing
machinery noise. Or perhaps we are
interested in the effects of a tube of
saline solution on the transmission of
blood pressure waveforms from an
artery to a monitor.
All of these problems and many more
are examples of network analysis. As
you can see a “network” can be any
system at all. One-port network
analysis is the variation of one
parameter with respect to another,
both measured at the same point
(port) of the network. The impedance
or compliance of the electronic
or mechanical networks shown in
Figure 2.11 are typical examples of
one-port network analysis.
Figure 2.11: One-port network analysis examples.
* Network Analysis is sometimes called Stimulus/Response
Testing. The input is then known as the stimulus or
excitation and the output is called the response.
Two-port analysis gives the response
at a second port due to an input at
the first port. We are generally inter-
ested in the transmission and rejec-
tion of signals and in insuring the
integrity of signal transmission. The
concept of two-port analysis can be
extended to any number of inputs
and outputs. This is called N-port
analysis, a subject we will use in
modal analysis later in this chapter.
We have deliberately defined network
analysis in a very general way. It
applies to all networks with no
limitations. If we place one condition
on our network, linearity, we find
that network analysis becomes a
very powerful tool.
Figure 2.12: Two-port network analysis.
Figure 2.13: Linear network.
Figure 2.14: Non-linear system example.
Figure 2.15: Examples of non-linearities.
When we say a network is linear, we
mean it behaves like the network
in Figure 2.13. Suppose one input
causes an output A and a second
input applied at the same port causes
an output B. If we apply both inputs
at the same time to a linear network,
the output will be the sum of the
individual outputs, A + B.
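This superposition property can be written directly as a test. The sketch below is illustrative only; 'system' stands for whatever network is being examined, and the clipping example plays the role of the mass hitting the stops.

import numpy as np

def is_linear(system, input_a, input_b, tolerance=1e-9):
    """Crude superposition check: system(a + b) should equal
    system(a) + system(b) for a linear network."""
    combined = system(input_a + input_b)
    summed = system(input_a) + system(input_b)
    return np.max(np.abs(combined - summed)) < tolerance

t = np.linspace(0.0, 1.0, 1000)
a = 0.6 * np.sin(2 * np.pi * 5 * t)
b = 0.6 * np.sin(2 * np.pi * 5 * t)

print(is_linear(lambda x: 2.0 * x, a, b))                 # True: a simple gain of 2 is linear
print(is_linear(lambda x: np.clip(x, -1.0, 1.0), a, b))   # False: the combined input hits the "stops"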
At first glance it might seem that all
networks would behave in this fash-
ion. A counter example, a non-linear
network, is shown in Figure 2.14.
Suppose that the first input is a force
that varies in a sinusoidal manner. We
pick its amplitude to ensure that the
displacement is small enough so that
the oscillating mass does not quite hit
the stops. If we add a second identi-
cal input, the mass would now hit the
stops. Instead of a sine wave with
twice the amplitude, the output is
clipped as shown in Figure 2.14b.
This spring-mass system with stops illustrates an important principle: no real system is completely linear. A
system may be approximately linear
over a wide range of signals, but
eventually the assumption of linearity
breaks down. Our spring-mass system
is linear before it hits the stops.
Likewise a linear electronic amplifier
clips when the output voltage
approaches the internal supply
voltage. A spring may compress
linearly until the coils start pressing
against each other.
Other forms of non-linearities are
also often present. Hysteresis (or
backlash) is usually present in gear
trains, loosely riveted joints and in
magnetic devices. Sometimes the
non-linearities are less abrupt and are
smooth, but nonlinear, curves. The
torque versus rpm of an engine or the
operating curves of a transistor are
two examples that can be considered
linear over only small portions of
their operating regions.
The important point is not that all
systems are nonlinear; it is that
most systems can be approximated
as linear systems. Often a large
engineering effort is spent in making
the system as linear as practical. This
is done for two reasons. First, it is
often a design goal for the output of a
network to be a scaled, linear version
of the input. A strip chart recorder
is a good example. The electronic
amplifier and pen motor must both be
designed to ensure that the deflection
across the paper is linear with the
applied voltage.
The second reason why systems are
linearized is to reduce the problem
of nonlinear instability. One example
would be the positioning system
shown in Figure 2.16. The actual
position is compared to the desired
position and the error is integrated
and applied to the motor. If the gear
train has no backlash, it is a straight-
forward problem to design this
system to the desired specifications
of positioning accuracy and
response time.
However, if the gear train has exces-
sive backlash, the motor will “hunt,”
causing the positioning system to
oscillate around the desired position.
The solution is either to reduce the
loop gain and therefore reduce the
overall performance of the system,
or to reduce the backlash in the gear
train. Often, reducing the backlash
is the only way to meet the
performance specifications.
Figure 2.16: A positioning system.
Analysis of Linear Networks
As we have seen, many systems are
designed to be reasonably linear to
meet design specifications. This
has a fortuitous side benefit when
attempting to analyze networks*.
Recall that any real signal can be considered to be a sum of sine waves.
Also, recall that the response of a
linear network is the sum of the
responses to each component of the
input. Therefore, if we knew the
response of the network to each of
the sine wave components of the
input spectrum, we could predict
the output.
It is easy to show that the steady-
state response of a linear network
to a sine wave input is a sine wave
of the same frequency. As shown in
Figure 2.17, the amplitude of the
output sine wave is proportional to
the input amplitude. Its phase is
shifted by an amount which depends
only on the frequency of the sine
wave. As we vary the frequency of
the sine wave input, the amplitude
proportionality factor (gain) changes
as does the phase of the output.
If we divide the output of the
network by the input, we get a
normalized result called the frequency response of the network. As shown in Figure 2.18, the frequency response is the gain (or loss) and phase shift of the network as a function of frequency. Because the network is linear, the frequency response is independent of the input amplitude; the frequency response is a property of a linear network, not dependent on the stimulus.

Figure 2.17: Linear network response to a sine wave input.
Figure 2.18: The frequency response of a network.

* We will discuss the analysis of networks which have not been linearized in Chapter 3, Section 6.
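As a small illustration of these ideas, the sketch below evaluates the gain and phase of a single-pole RC low-pass network, whose frequency response is H(f) = 1/(1 + j2πfRC); the component values are arbitrary.

import numpy as np

R = 1.0e3          # ohms (arbitrary example values)
C = 1.0e-6         # farads -> corner frequency of about 159 Hz

def frequency_response(f):
    """Gain and phase of a single-pole RC low-pass at frequency f (Hz)."""
    h = 1.0 / (1.0 + 1j * 2 * np.pi * f * R * C)
    return abs(h), np.degrees(np.angle(h))

for f in (10.0, 159.0, 1000.0):
    gain, phase = frequency_response(f)
    print(f"{f:7.1f} Hz   gain {gain:5.3f}   phase {phase:6.1f} deg")
# Near the corner frequency the gain is about 0.707 (-3 dB) and the phase
# lag is about 45 degrees; gain and phase depend only on frequency,
# not on the input amplitude.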
The frequency response of a network will generally fall into one of three categories: low pass, high pass, bandpass, or a combination of these.
As the names suggest, their frequency
responses have relatively high gain in
a band of frequencies, allowing these
frequencies to pass through the
network. Other frequencies suffer a
relatively high loss and are rejected
by the network. To see what this
means in terms of the response of a
filter to an input, let us look at the
bandpass filter case.
Figure 2.19: Three classes of frequency response.
In Figure 2.20, we put a square wave
into a bandpass filter. We recall from
Figure 2.10 that a square wave is
composed of harmonically related
sine waves. The frequency response
of our example network is shown in
Figure 2.20b. Because the filter is
narrow, it will pass only one compo-
nent of the square wave. Therefore,
the steady-state response of this
bandpass filter is a sine wave.
Notice how easy it is to predict
the output of any network from its
frequency response. The spectrum of
the input signal is multiplied by the
frequency response of the network
to determine the components that
appear in the output spectrum. This
frequency domain output can then
be transformed back to the time
domain.
In contrast, it is very difficult to
compute in the time domain the out-
put of any but the simplest networks.
A complicated integral must be evalu-
ated which often can only be done
numerically on a digital computer*. If
we computed the network response
by both evaluating the time domain
integral and by transforming to the
frequency domain and back, we
would get the same results. However,
it is usually easier to compute the
output by transforming to the
frequency domain.
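A minimal sketch of this frequency domain route, with a made-up band-pass shape and an arbitrary square wave input: transform the input, multiply by the frequency response, and transform back.

import numpy as np

fs = 100_000.0                               # sample rate, Hz (arbitrary)
t = np.arange(0, 0.01, 1.0 / fs)             # 10 ms time record
x = np.sign(np.sin(2 * np.pi * 1000 * t))    # 1 kHz square wave input

X = np.fft.rfft(x)                           # input spectrum
f = np.fft.rfftfreq(len(x), d=1.0 / fs)

# A made-up narrow band-pass frequency response centered on 3 kHz
H = np.exp(-0.5 * ((f - 3000.0) / 100.0) ** 2)

y = np.fft.irfft(H * X, n=len(x))            # multiply, then back to the time domain
# 'y' is (nearly) a 3 kHz sine wave: the filter passed only the third
# harmonic of the square wave, as in Figure 2.20.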
Transient Response
Up to this point we have only dis-
cussed the steady-state response to a
signal. By steady-state we mean the
output after any transient responses
caused by applying the input have
died out. However, the frequency
response of a network also contains
all the information necessary to
predict the transient response of the
network to any signal.
Figure 2.20: Bandpass filter response to a square wave input.
Figure 2.21: Time response of bandpass filters.
* This operation is called convolution.
Let us look qualitatively at the tran-
sient response of a bandpass filter. If
a resonance is narrow compared to
its frequency, then it is said to be a
high “Q” resonance*. Figure 2.21a
shows a high Q filter frequency
response. It has a transient response
which dies out very slowly. A time
response which decays slowly is said
to be lightly damped. Figure 2.21b
shows a low Q resonance. It has a
transient response which dies out
quickly. This illustrates a general
principle: signals which are broad in
one domain are narrow in the other.
Narrow, selective filters have very
long response times, a fact we will
find important in the next section.
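This bandwidth/response-time trade-off can be put in rough numbers. The sketch below uses the usual definition Q = f0 / (-3 dB bandwidth) and the standard single-resonance approximation that the ringing envelope decays with a time constant of about Q/(πf0); the numbers are illustrative.

import math

def ring_down_time_constant(f0_hz, bandwidth_hz):
    """Approximate envelope decay time constant of a single resonance.
    Q = f0 / (-3 dB bandwidth); time constant ~= Q / (pi * f0)."""
    q = f0_hz / bandwidth_hz
    return q / (math.pi * f0_hz)

# A narrow (high Q) and a wide (low Q) resonance at the same center frequency
print(ring_down_time_constant(1000.0, 10.0))    # ~0.032 s  - rings for a long time
print(ring_down_time_constant(1000.0, 200.0))   # ~0.0016 s - dies out quickly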
Section 3:
Instrumentation for the
Frequency Domain
Just as the time domain can be
measured with strip chart recorders,
oscillographs or oscilloscopes,
the frequency domain is usually
measured with spectrum and
network analyzers.
Spectrum analyzers are instruments
which are optimized to characterize
signals. They introduce very little
distortion and few spurious signals.
This insures that the signals on the
display are truly part of the input
signal spectrum, not signals
introduced by the analyzer.
Network analyzers are optimized to
give accurate amplitude and phase
measurements over a wide range of
network gains and losses. This design
difference means that these two
traditional instrument families are
not interchangeable.** A spectrum
analyzer can not be used as a net-
work analyzer because it does not
measure amplitude accurately and
cannot measure phase. A network
analyzer would make a very poor
spectrum analyzer because spurious
responses limit its dynamic range.
In this section we will develop the
properties of several types of
analyzers in these two categories.
The Parallel-Filter
Spectrum Analyzer
As we developed in Section 2 of
this chapter, electronic filters can be
built which pass a narrow band of
frequencies. If we were to add a
meter to the output of such a band-
pass filter, we could measure the
power in the portion of the spectrum
passed by the filter. In Figure 2.22a
we have done this for a bank of
filters, each tuned to a different
frequency. If the center frequencies
of these filters are chosen so that
the filters overlap properly, the
spectrum covered by the filters can
be completely characterized as in
Figure 2.22b.
Figure 2.22: Parallel filter analyzer.

* Q is usually defined as: Q = (center frequency of resonance) / (frequency width between -3 dB points).

** Dynamic Signal Analyzers are an exception to this rule; they can act as both network and spectrum analyzers.
How many filters should we use to
cover the desired spectrum? Here we
have a trade-off. We would like to be
able to see closely spaced spectral
lines, so we should have a large
number of filters. However, each
filter is expensive and becomes more
expensive as it becomes narrower,
so the cost of the analyzer goes up
as we improve its resolution. Typical
audio parallel-filter analyzers balance
these demands with 32 filters, each
covering 1/3 of an octave.
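To make the resolution trade-off concrete, the sketch below generates nominal 1/3-octave center frequencies using the usual base-2 spacing referenced to 1 kHz; it illustrates the spacing only, not any particular analyzer, and the exact number of filters depends on the span covered.

# Nominal 1/3-octave center frequencies, referenced to 1 kHz and spaced by a
# factor of 2^(1/3). Thirty of them span roughly 25 Hz to 20 kHz, which is
# the sort of audio coverage described in the text.
centers = [1000.0 * 2.0 ** (n / 3.0) for n in range(-16, 14)]
print(len(centers))                                   # 30
print(round(centers[0], 1), round(centers[-1], 1))    # about 24.8 Hz ... 20158.7 Hz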
Swept Spectrum Analyzer
One way to avoid the need for such
a large number of expensive filters is
to use only one filter and sweep it
slowly through the frequency range
of interest. If, as in Figure 2.23, we
display the output of the filter versus
the frequency to which it is tuned,
we have the spectrum of the input
signal. This swept analysis technique
is commonly used in rf and
microwave spectrum analysis.
We have, however, assumed the input
signal hasn’t changed in the time it
takes to complete a sweep of our
analyzer. If energy appears at some
frequency at a moment when our
filter is not tuned to that frequency,
then we will not measure it.
One way to reduce this problem
would be to speed up the sweep
time of our analyzer. We could still
miss an event, but the time in which
this could happen would be shorter.
Unfortunately though, we cannot
make the sweep arbitrarily fast
because of the response time of
our filter.
To understand this problem,
recall from Section 2 that a filter
takes a finite time to respond to
changes in its input. The narrower the
filter, the longer it takes to respond.
If we sweep the filter past a signal
too quickly, the filter output will not
have a chance to respond fully to the
signal. As we show in Figure 2.24,
the spectrum display will then be in
error; our estimate of the signal level
will be too low.
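A commonly used rule of thumb, stated here as an illustration with an assumed settling factor rather than as a specification, is that the minimum sweep time grows as span/RBW², which is why narrow, high-resolution sweeps are slow.

def minimum_sweep_time(span_hz, rbw_hz, k=2.0):
    """Rule-of-thumb minimum sweep time: k * span / RBW**2.
    k is a settling factor of order 1 to 3 that depends on the filter shape."""
    return k * span_hz / rbw_hz ** 2

print(minimum_sweep_time(100_000.0, 1000.0))   # 0.2 s   for 1 kHz resolution
print(minimum_sweep_time(100_000.0, 10.0))     # 2000 s  for 10 Hz resolution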
In a parallel-filter spectrum analyzer
we do not have this problem. All the
filters are connected to the input
signal all the time. Once we have
waited the initial settling time of a
single filter, all the filters will be
settled and the spectrum will be valid
and not miss any transient events.
So there is a basic trade-off between
parallel-filter and swept spectrum
analyzers. The parallel-filter analyzer
is fast, but has limited resolution and
is expensive. The swept analyzer
can be cheaper and have higher
resolution but the measurement
takes longer (especially at high
resolution) and it can not analyze
transient events*.
Dynamic Signal Analyzer
In recent years another kind of
analyzer has been developed
which offers the best features of the
parallel-filter and swept spectrum
analyzers. Dynamic Signal Analyzers
are based on a high speed calculation
routine which acts like a parallel
filter analyzer with hundreds of
filters and yet are cost-competitive
with swept spectrum analyzers. In
addition, two channel Dynamic Signal Analyzers are in many ways better network analyzers than the ones we will introduce next.

* More information on the performance of swept spectrum analyzers can be found in Agilent Application Note Series 150.

Figure 2.23: Simplified swept spectrum analyzer.
Figure 2.24: Amplitude error from sweeping too fast.
Network Analyzers
Since in network analysis it is
required to measure both the input
and output, network analyzers are
generally two channel devices with
the capability of measuring the ampli-
tude ratio (gain or loss) and phase
difference between the channels.
All of the analyzers discussed here
measure frequency response by using
a sinusoidal input to the network
and slowly changing its frequency.
Dynamic Signal Analyzers use a
different, much faster technique for
network analysis which we discuss
in the next chapter.
Gain-phase meters are broadband
devices which measure the amplitude
and phase of the input and output
sine waves of the network. A sinu-
soidal source must be supplied to
stimulate the network when using a
gain-phase meter as in Figure 2.25.
The source can be tuned manually and the gain-phase plots done by hand, or a sweeping source and an x-y plotter can be used for automatic frequency response plots.
The primary attraction of gain-phase
meters is their low price. If a
sinusoidal source and a plotter are
already available, frequency response
measurements can be made for a very
low investment. However, because
gain-phase meters are broadband,
they measure all the noise of the
network as well as the desired sine
wave. As the network attenuates the
input, this noise eventually becomes a
floor below which the meter cannot
measure. This typically becomes a
problem with attenuations of about
60 dB (1,000:1).
Tuned network analyzers minimize
the noise floor problems of gain-
phase meters by including a bandpass
filter which tracks the source fre-
quency. Figure 2.26 shows how this
tracking filter virtually eliminates the
noise and any harmonics to allow
measurements of attenuation to
100 dB (100,000:1).
By minimizing the noise, it is also
possible for tuned network analyzers
to make more accurate measure-
ments of amplitude and phase. These
improvements do not come without
their price, however, as tracking
filters and a dedicated source must
be added to the simpler and less
costly gain-phase meter.
Figure 2.25: Gain-phase meter operation.
Figure 2.26: Tuned network analyzer operation.
Tuned analyzers are available in the
frequency range of a few Hertz to
many Gigahertz (10⁹ Hertz). If lower
frequency analysis is desired, a
frequency response analyzer is often
used. To the operator, it behaves
exactly like a tuned network analyzer.
However, it is quite different inside.
It integrates the signals in the time
domain to effectively filter the signals
at very low frequencies where it is
not practical to make filters by more
conventional techniques. Frequency
response analyzers are generally lim-
ited to from 1 mHz to about 10 kHz.
Section 4:
The Modal Domain
In the preceding sections we have
developed the properties of the time
and frequency domains and the
instrumentation used in these
domains. In this section we will
develop the properties of another
domain, the modal domain. This
change in perspective to a new
domain is particularly useful if we are
interested in analyzing the behavior
of mechanical structures.
To understand the modal domain let
us begin by analyzing a simple
mechanical structure, a tuning fork.
If we strike a tuning fork, we easily
conclude from its tone that it is pri-
marily vibrating at a single frequency.
We see that we have excited a
network (tuning fork) with a force
impulse (hitting the fork). The time
domain view of the sound caused by
the deformation of the fork is a
lightly damped sine wave shown
in Figure 2.27b.
In Figure 2.27c, we see in the
frequency domain that the frequency
response of the tuning fork has a
major peak that is very lightly
damped, which is the tone we hear.
There are also several smaller peaks.
Figure 2.27: The vibration of a tuning fork.
Figure 2.28: Example vibration modes of a tuning fork.
Each of these peaks, large and small,
corresponds to a “vibration mode”
of the tuning fork. For instance, we
might expect for this simple example
that the major tone is caused by the
vibration mode shown in Figure
2.28a. The second harmonic might be caused by a vibration like that shown in Figure 2.28b.
We can express the vibration of any
structure as a sum of its vibration
modes. Just as we can represent any real waveform as a sum of much simpler sine waves, we can represent any
vibration as a sum of much simpler
vibration modes. The task of “modal”
analysis is to determine the shape
and the magnitude of the structural
deformation in each vibration mode.
Once these are known, it usually
becomes apparent how to change
the overall vibration.
For instance, let us look again at our
tuning fork example. Suppose that we
decided that the second harmonic
tone was too loud. How should we
change our tuning fork to reduce the
harmonic? If we had measured the
vibration of the fork and determined
that the modes of vibration were
those shown in Figure 2.28, the
answer becomes clear. We might
apply damping material at the center
of the tines of the fork. This would
greatly affect the second mode which
has maximum deflection at the center
while only slightly affecting the
desired vibration of the first mode.
Other solutions are possible, but all
depend on knowing the geometry of
each mode.
The Relationship Between the Time,
Frequency and Modal Domain
To determine the total vibration
of our tuning fork or any other
structure, we have to measure the
vibration at several points on the
structure. Figure 2.30a shows some
points we might pick. If we
transformed this time domain data to
the frequency domain, we would get
results like Figure 2.30b. We measure
frequency response because we want
to measure the properties of the
structure independent of the
stimulus*.
Figure 2.29: Reducing the second harmonic by damping the second vibration mode.
Figure 2.30: Modal analysis of a tuning fork.

* Those who are more familiar with electronics might note that we have measured the frequency response of a network (structure) at N points and thus have performed an N-port analysis.
We see that the sharp peaks
(resonances) all occur at the same
frequencies independent of where
they are measured on the structure.
Likewise we would find by measuring
the width of each resonance that the
damping (or Q) of each resonance
is independent of position. The
only parameter that varies as we
move from point to point along the
structure is the relative height of
resonances.* By connecting the
peaks of the resonances of a given
mode, we trace out the mode shape
of that mode.
Experimentally we have to measure
only a few points on the structure to
determine the mode shape. However,
to clearly show the mode shape in
our figure, we have drawn in the
frequency response at many more
points in Figure 2.31a. If we view this
three-dimensional graph along the
distance axis, as in Figure 2.31b, we
get a combined frequency response.
Each resonance has a peak value cor-
responding to the peak displacement
in that mode. If we view the graph
along the frequency axis, as in Figure
2.31c, we can see the mode shapes of
the structure.
We have not lost any information by
this change of perspective. Each
vibration mode is characterized by its
mode shape, frequency and damping
from which we can reconstruct the
frequency domain view.
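A small sketch of what reconstructing the frequency domain view from modal parameters looks like; the natural frequencies, damping ratios and residues below are invented for illustration, and each mode is modeled as a standard single-degree-of-freedom resonance term.

import numpy as np

# Invented modal parameters for a structure with two modes:
# (natural frequency in Hz, damping ratio, residue at this measurement point)
modes = [(440.0, 0.002, 1.0), (880.0, 0.004, 0.3)]

f = np.linspace(10.0, 1200.0, 2000)          # frequency axis, Hz
w = 2 * np.pi * f

H = np.zeros_like(f, dtype=complex)
for fn, zeta, residue in modes:
    wn = 2 * np.pi * fn
    # Single-degree-of-freedom resonance term contributed by each mode
    H += residue / (wn**2 - w**2 + 2j * zeta * wn * w)

# abs(H) now shows two sharp resonance peaks; the residues set the peak
# heights, which is how the mode shape is traced from point to point.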
However, the equivalence between
the modal, time and frequency
domains is not quite as strong as
that between the time and frequency
domains. Because the modal domain
portrays the properties of the net-
work independent of the stimulus,
transforming back to the time domain
gives the impulse response of the
structure, no matter what the stimu-
lus. A more important limitation of
this equivalence is that curve fitting
is used in transforming from our
frequency response measurements to
the modal domain to minimize the
effects of noise and small experimen-
tal errors. No information is lost in
this curve fitting, so all three domains
contain the same information, but not
the same noise. Therefore, transform-
ing from the frequency domain to the
modal domain and back again will
give results like those in Figure 2.32.
The results are not exactly the same,
yet in all the important features, the
frequency responses are the same.
This is also true of time domain data
derived from the modal domain.
Figure 2.31: The relationship between the frequency and the modal domains.

* The phase of each resonance is not shown for clarity of the figures, but it too is important in the mode shape. The magnitude of the frequency response gives the magnitude of the mode shape while the phase gives the direction of the deflection.
Section 5:
Instrumentation for
the Modal Domain
There are many ways that the modes
of vibration can be determined. In our
simple tuning fork example we could
guess what the modes were. In simple
structures like drums and plates it is
possible to write an equation for the
modes of vibration. However, in
almost any real problem, the solution
can neither be guessed nor solved
analytically because the structure is
too complicated. In these cases it is
necessary to measure the response
of the structure and determine
the modes.
There are two basic techniques for
determining the modes of vibration in
complicated structures: 1) exciting
only one mode at a time, and 2)
computing the modes of vibration
from the total vibration.
Single Mode Excitation
Modal Analysis
To illustrate single mode excitation,
let us look once again at our simple
tuning fork example. To excite just
the first mode we need two shakers,
driven by a sine wave and attached
to the ends of the tines as in Figure
2.33a. Varying the frequency of the
generator near the first mode reso-
nance frequency would then give us
its frequency, damping and mode
shape.
In the second mode, the ends of the
tines do not move, so to excite the
second mode we must move the
shakers to the center of the tines. If
we anchor the ends of the tines, we
will constrain the vibration to the
second mode alone.
Figure 2.32: Curve fitting removes measurement noise.
Figure 2.33: Single mode excitation modal analysis.
In more realistic, three dimensional
problems, it is necessary to add many
more shakers to ensure that only one
mode is excited. The difficulties and expense of testing with many shakers have limited the application of this traditional modal analysis technique.
Modal Analysis From Total Vibration
To determine the modes of vibration
from the total vibration of the
structure, we use the techniques
developed in the previous section.
Basically, we determine the frequency
response of the structure at several
points and compute at each reso-
nance the frequency, damping and
what is called the residue (which
represents the height of the reso-
nance). This is done by a curve-fitting
routine to smooth out any noise or
small experimental errors. From
these measurements and the geome-
try of the structure, the mode shapes
are computed and drawn on a CRT
display or a plotter. If drawn on a
CRT, these displays may be animated
to help the user understand the
vibration mode.
From the above description, it is
apparent that a modal analyzer
requires some type of network
analyzer to measure the frequency
response of the structure and a
computer to convert the frequency
response to mode shapes. This can
be accomplished by connecting a
Dynamic Signal Analyzer through
a digital interface* to a computer
furnished with the appropriate soft-
ware. This capability is also available
in a single instrument called a Struc-
tural Dynamics Analyzer. In general,
computer systems offer more versa-
tile performance since they can be
programmed to solve other problems.
However, Structural Dynamics
Analyzers generally are much easier
to use than computer systems.
Section 6: Summary
In this chapter we have developed
the concept of looking at problems
from different perspectives. These
perspectives are the time, frequency
and modal domains. Phenomena that
are confusing in the time domain are
often clarified by changing perspec-
tive to another domain. Small signals
are easily resolved in the presence of
large ones in the frequency domain.
The frequency domain is also valu-
able for predicting the output of any
kind of linear network. A change to
the modal domain breaks down
complicated structural vibration
problems into simple vibration
modes.
No one domain is always the best
answer, so the ability to easily change
domains is quite valuable. Of all the
instrumentation available today, only
Dynamic Signal Analyzers can work
in all three domains. In the next
chapter we develop the properties
of this important class of analyzers.
Figure 2.34: Measured mode shape.

* GPIB, Agilent's implementation of IEEE-488-1975, is ideal for this application.
Chapter 3
Understanding Dynamic Signal Analysis
We saw in the previous chapter that
the Dynamic Signal Analyzer has the
speed advantages of parallel-filter
analyzers without their low resolution
limitations. In addition, it is the only
type of analyzer that works in all
three domains. In this chapter we will
develop a fuller understanding of this
important analyzer family, Dynamic
Signal Analyzers. We begin by pre-
senting the properties of the Fast
Fourier Transform (FFT) upon which
Dynamic Signal Analyzers are based.
No proof of these properties is given,
but heuristic arguments as to their va-
lidity are used where appropriate. We
then show how these FFT properties
cause some undesirable characteris-
tics in spectrum analysis like aliasing
and leakage. Having demonstrated a
potential difficulty with the FFT, we
then show what solutions are used
to make practical Dynamic Signal
Analyzers. Developing this basic
knowledge of FFT characteristics
makes it simple to get good results
with a Dynamic Signal Analyzer in a
wide range of measurement problems.
Section 1: FFT Properties
The Fast Fourier Transform (FFT)
is an algorithm* for transforming
data from the time domain to the fre-
quency domain. Since this is exactly
what we want a spectrum analyzer to
do, it would seem easy to implement
a Dynamic Signal Analyzer based
on the FFT. However, we will see
that there are many factors which
complicate this seemingly
straightforward task.
First, because of the many calcula-
tions involved in transforming
domains, the transform must be
implemented on a digital computer if
the results are to be sufficiently accu-
rate. Fortunately, with the advent of
microprocessors, it is easy and inex-
pensive to incorporate all the needed
computing power in a small instru-
ment package. Note, however, that
we cannot now transform to the
frequency domain in a continuous
manner, but instead must sample and
digitize the time domain input. This
means that our algorithm transforms
digitized samples from the time do-
main to samples in the frequency
domain as shown in Figure 3.1.**
Because we have sampled, we no
longer have an exact representation
in either domain. However, a sampled
representation can be as close to
ideal as we desire by placing our
samples closer together. Later in
this chapter, we will consider what
sample spacing is necessary to
guarantee accurate results.
Figure 3.1: The FFT samples in both the time and frequency domains.
Figure 3.2: A time record is N equally spaced samples of the input.

* An algorithm is any special mathematical method of solving a certain kind of problem; e.g., the technique you use to balance your checkbook.

** To reduce confusion about which domain we are in, samples in the frequency domain are called lines.
Time Records
A time record is defined to be N
consecutive, equally spaced samples
of the input. Because it makes our
transform algorithm simpler and much faster, N is restricted to be a power of 2, for instance 1024.
As shown in Figure 3.3, this time
record is transformed as a complete
block into a complete block of
frequency lines. All the samples of
the time record are needed to
compute each and every line in the
frequency domain. This is in contrast
to what one might expect, namely
that a single time domain sample
transforms to exactly one frequency
domain line. Understanding this block
processing property of the FFT is
crucial to understanding many of
the properties of the Dynamic
Signal Analyzer.
For instance, because the FFT
transforms the entire time record
block as a total, there cannot be
valid frequency domain results until
a complete time record has been
gathered. However, once completed,
the oldest sample could be discarded,
all the samples shifted in the time
record, and a new sample added to
the end of the time record as in
Figure 3.4. Thus, once the time record
is initially filled, we have a new time
record at every time domain sample
and therefore could have new valid
results in the frequency domain at
every time domain sample.
This is very similar to the behavior of
the parallel-filter analyzers described
in the previous chapter. When a signal
is first applied to a parallel-filter ana-
lyzer, we must wait for the filters to
respond, then we can see very rapid
changes in the frequency domain.
With a Dynamic Signal Analyzer we
do not get a valid result until a full
time record has been gathered. Then
rapid changes in the spectra can
be seen.
It should be noted here that a new
spectrum every sample is usually too
much information, too fast. This
would often give you thousands of
transforms per second. Just how fast
a Dynamic Signal Analyzer should
transform is a subject better left to
the sections in this chapter on real
time bandwidth and overlap
processing.
Figure 3.3: The FFT works on blocks of data.
Figure 3.4: A new time record every sample after the time record is filled.
How Many Lines are There?
We stated earlier that the time
record has N equally spaced samples.
Another property of the FFT is that
it transforms these time domain
samples to N/2 equally spaced lines
in the frequency domain. We only
get half as many lines because each
frequency line actually contains two
pieces of information, amplitude and
phase. The meaning of this is most
easily seen if we look again at the
relationship between the time and
frequency domain.
Figure 3.5 reproduces from Chapter 2
our three-dimensional graph of this
relationship. Up to now we have
implied that the amplitude and
frequency of the sine waves contains
all the information necessary to
reconstruct the input. But it should
be obvious that the phase of each
of these sine waves is important too.
For instance, in Figure 3.6, we have
shifted the phase of the higher
frequency sine wave components
of this signal. The result is a severe
distortion of the original wave form.
We have not discussed the phase
information contained in the
spectrum of signals until now
because none of the traditional
spectrum analyzers are capable of
measuring phase. When we discuss
measurements in Chapter 4, we shall
find that phase contains valuable
information in determining the
cause of performance problems.
What is the Spacing of the Lines?
Now that we know that we have N/2
equally spaced lines in the frequency
domain, what is their spacing? The
lowest frequency that we can resolve
with our FFT spectrum analyzer must
be based on the length of the time
record. We can see in Figure 3.7 that
if the period of the input signal is
longer than the time record, we have
no way of determining the period (or
frequency, its reciprocal). Therefore,
the lowest frequency line of the FFT
must occur at a frequency equal to the
reciprocal of the time record length.
Figure 3.5: The relationship between the time and frequency domains.
Figure 3.6: Phase of frequency domain components is important.
In addition, there is a frequency line
at zero Hertz, DC. This is merely the
average of the input over the time
record. It is rarely used in spectrum
or network analysis. But, we have
now established the spacing between
these two lines and hence every line;
it is the reciprocal of the time record.
What is the Frequency Range
of the FFT?
We can now quickly determine that
the highest frequency we can
measure is:

fmax = (N/2) × (1 / period of the time record)

because we have N/2 lines spaced by the reciprocal of the time record, starting at zero Hertz*.
Since we would like to adjust the fre-
quency range of our measurement,
we must vary fmax. The number of
time samples N is fixed by the imple-
mentation of the FFT algorithm.
Therefore, we must vary the period of
the time record to vary fmax. To do
this, we must vary the sample rate so
that we always have N samples in our
variable time record period. This is
illustrated in Figure 3.9. Notice that
to cover higher frequencies, we must
sample faster.
* The usefulness of this frequency range can be limited by
the problem of aliasing. Aliasing is discussed in Section 3.
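These relationships can be summarized in a few lines; the record length below is an arbitrary example, and a real analyzer also reserves some of the upper lines for the anti-alias guard band discussed in Section 3.

N = 1024                       # samples per time record (fixed by the FFT)
record_length = 0.5            # seconds (set indirectly by the chosen span)

line_spacing = 1.0 / record_length        # Hz between frequency lines
num_lines = N // 2                        # amplitude-and-phase lines
f_max = num_lines * line_spacing          # highest frequency line
sample_rate = N / record_length           # samples per second needed

print(line_spacing, num_lines, f_max, sample_rate)
# 2.0 Hz spacing, 512 lines, 1024 Hz maximum, 2048 samples/second.
# Halving the time record doubles both the span and the required sample rate.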
Figure 3.7: Lowest frequency resolvable by the FFT. a) Period of input signal equals the time record - lowest resolvable frequency; b) period of input signal longer than the time record - frequency of the input signal is unknown.
Figure 3.8: Frequencies of all the spectral lines of the FFT.
Figure 3.9: Frequency range of Dynamic Signal Analyzers is determined by sample rate.
Section 2*:
Sampling and Digitizing
Recall that the input to our Dynamic
Signal Analyzer is a continuous
analog voltage. This voltage might
be from an electronic circuit or could
be the output of a transducer and be
proportional to current, power,
pressure, acceleration or any number
of other inputs. Recall also that the
FFT requires digitized samples of the
input for its digital calculations.
Therefore, we need to add a sampler
and analog to digital converter (ADC)
to our FFT processor to make a spec-
trum analyzer. We show this basic
block diagram in Figure 3.10.
For the analyzer to have the high
accuracy needed for many measure-
ments, the sampler and ADC must be
quite good. The sampler must sample
the input at exactly the correct time
and must accurately hold the input
voltage measured at this time until
the ADC has finished its conversion.
The ADC must have high resolution
and linearity. For 70 dB of dynamic
range the ADC must have at least
12 bits of resolution and one half
least significant bit linearity.
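A rough way to see where the 12-bit figure comes from is the commonly used ideal-quantization estimate of about 6 dB of dynamic range per bit; this approximation applies to a full-scale sine wave and is shown here only for illustration.

import math

def ideal_adc_dynamic_range_db(bits):
    """Ideal-quantization estimate: about 6.02 dB per bit plus 1.76 dB."""
    return 6.02 * bits + 1.76

def bits_needed(dynamic_range_db):
    """Smallest number of bits giving at least the requested range."""
    return math.ceil((dynamic_range_db - 1.76) / 6.02)

print(ideal_adc_dynamic_range_db(12))   # about 74 dB
print(bits_needed(70))                  # 12 bits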
A good Digital Voltmeter (DVM) will
typically exceed these specifications,
but the ADC for a Dynamic Signal
Analyzer must be much faster than
typical fast DVM’s. A fast DVM might
take a thousand readings per second,
but in a typical Dynamic Signal
Analyzer the ADC must take at
least a hundred thousand readings
per second.
Section 3: Aliasing
The reason an FFT spectrum
analyzer needs so many samples per
second is to avoid a problem called
aliasing. Aliasing is a potential prob-
lem in any sampled data system. It is
often overlooked, sometimes with
disastrous results.
A Simple Data Logging
Example of Aliasing
Let us look at a simple data logging
example to see what aliasing is and
how it can be avoided. Consider the
example for recording temperature
shown in Figure 3.12. A thermocouple
is connected to a digital voltmeter
which is in turn connected to a print-
er. The system is set up to print the
temperature every second. What
would we expect for an output?
If we were measuring the tempera-
ture of a room which only changes
slowly, we would expect every
reading to be almost the same as the
previous one. In fact, we are sampling
much more often than necessary to
determine the temperature of the
room with time. If we plotted the
results of this “thought experiment”,
we would expect to see results like
Figure 3.13.
Figure 3.10: Block diagram of a Dynamic Signal Analyzer.
Figure 3.11: The sampler and ADC must not introduce errors.
Figure 3.12: A simple sampled data system.
Figure 3.13: Plot of temperature variation of a room.

* This section and the next can be skipped by those not interested in the internal operation of a Dynamic Signal Analyzer. However, those who specify the purchase of Dynamic Signal Analyzers are especially encouraged to read these sections. The basic knowledge to be gained from these sections can insure specifying the best analyzer for your requirements.
The Case of the
Missing Temperature
If, on the other hand, we were
measuring the temperature of a
small part which could heat and cool
rapidly, what would the output be?
Suppose that the temperature of
our part cycled exactly once every
second. As shown in Figure 3.14, our
printout says that the temperature
never changes.
What has happened is that we have
sampled at exactly the same point on
our periodic temperature cycle with
every sample. We have not sampled
fast enough to see the temperature
fluctuations.
Aliasing in the Frequency Domain
This completely erroneous result is due to a phenomenon called aliasing.*
Aliasing is shown in the frequency
domain in Figure 3.15. Two signals
are said to alias if the difference of
their frequencies falls in the frequen-
cy range of interest. This difference
frequency is always generated in the
process of sampling. In Figure 3.15,
the input frequency is slightly higher
than the sampling frequency so a low
frequency alias term is generated. If
the input frequency equals the sam-
pling frequency as in our small part
example, then the alias term falls at
DC (zero Hertz) and we get the
constant output that we saw above.
Aliasing is not always bad. It is
called mixing or heterodyning in
analog electronics, and is commonly
used for tuning household radios and
televisions as well as many other
communication products. However,
in the case of the missing tempera-
ture variation of our small part, we
definitely have a problem. How can
we guarantee that we will avoid this
problem in a measurement situation?
Figure 3.16 shows that if we sample
at greater than twice the highest
frequency of our input, the alias
products will not fall within the
frequency range of our input.
Therefore, a filter (or our FFT
processor which acts like a filter)
after the sampler will remove the
alias products while passing the
desired input signals if the sample
rate is greater than twice the highest
frequency of the input. If the sample
rate is lower, the alias products will
fall in the frequency range of the
input and no amount of filtering
will be able to remove them from
the signal.
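The folding arithmetic behind aliasing is easy to check numerically. The short Python sketch below is our own illustration (the function name and the once-per-second numbers are ours, not the note's); it folds an input frequency back into the band from zero to half the sample rate, reproducing both the constant printout of the temperature example and the low frequency alias term of Figure 3.15.

```python
def alias_frequency(f_in, fs):
    """Apparent frequency of a sine wave of frequency f_in sampled at fs.

    Sampling cannot distinguish f_in from f_in shifted by any multiple of fs,
    so the result always folds into the band 0 .. fs/2.
    """
    f = f_in % fs
    return min(f, fs - f)

# Small part cycling exactly once per second, sampled once per second:
print(alias_frequency(1.0, 1.0))    # 0.0 Hz -- the alias falls at DC (Figure 3.14)

# Input slightly higher than the sampling frequency (Figure 3.15):
print(alias_frequency(1.05, 1.0))   # ~0.05 Hz -- a low frequency alias term
```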
Figure 3.14
Plot of temperature
variation of a
small part.
Figure 3.15
The problem
of aliasing
viewed in the
frequency
domain.
* Aliasing is also known as fold-over or mixing.
This minimum sample rate
requirement is known as the Nyquist
Criterion. It is easy to see in the time
domain that a sampling frequency
exactly twice the input frequency
would not always be enough. It is less
obvious that slightly more than two
samples in each period is sufficient
information. It certainly would not
be enough to give a high quality time
display. Yet we saw in Figure 3.16 that
meeting the Nyquist Criterion of a
sample rate greater than twice the
maximum input frequency is suffi-
cient to avoid aliasing and preserve
all the information in the input signal.
The Need for an Anti-Alias Filter
Unfortunately, the real world rarely
restricts the frequency range of its
signals. In the case of the room
temperature, we can be reasonably
sure of the maximum rate at which
the temperature could change, but
we still can not rule out stray signals.
Signals induced at the powerline
frequency or even local radio stations
could alias into the desired frequency
range. The only way to be really
certain that the input frequency range
is limited is to add a low pass filter
before the sampler and ADC. Such a
filter is called an anti-alias filter.
An ideal anti-alias filter would look
like Figure 3.18a. It would pass all
the desired input frequencies with no
loss and completely reject any higher
frequencies which otherwise could
alias into the input frequency range.
However, it is not even theoretically
possible to build such a filter, much
less practical. Instead, all real filters
look something like Figure 3.18b with
a gradual roll off and finite rejection
of undesired signals. Large input
signals which are not well attenuated
in the transition band could still alias
into the desired input frequency range. To avoid this, the sampling frequency is raised to twice the highest frequency of the transition band. This guarantees that any signals which could alias are well attenuated by the stop band of the filter. Typically, this means that the sample rate is now two and a half to four times the maximum desired input frequency. Therefore, a 25 kHz FFT Spectrum Analyzer can require an ADC that runs at 100 kHz, as we stated without proof in Section 2 of this Chapter*.

Figure 3.16 A frequency domain view of how to avoid aliasing - sample at greater than twice the highest input frequency.
Figure 3.17 Nyquist Criterion in the time domain.
Figure 3.18 Actual anti-alias filters require higher sampling frequencies.
The Need for More Than One
Anti-Alias Filter
Recall from Section 1 of this Chapter,
that due to the properties of the FFT
we must vary the sample rate to vary
the frequency span of our analyzer.
To reduce the frequency span, we
must reduce the sample rate. From
our considerations of aliasing, we
now realize that we must also reduce
the anti-alias filter frequency by the
same amount.
Since a Dynamic Signal Analyzer is
a very versatile instrument used in
a wide range of applications, it is
desirable to have a wide range of
frequency spans available. Typical
instruments have a minimum span of
1 Hertz and a maximum of tens to
hundreds of kilohertz. This four
decade range typically needs to be
covered with at least three spans per
decade. This would mean at least
twelve anti-alias filters would be
required for each channel.
Each of these filters must have
very good performance. It is desirable
that their transition bands be as
narrow as possible so that as many
lines as possible are free from alias
products. Additionally, in a two
channel analyzer, each filter pair
must be well matched for accurate
network analysis measurements.
These two points unfortunately mean
that each of the filters is expensive.
Taken together they can add signifi-
cantly to the price of the analyzer.
To save some of this expense, some manufacturers omit the anti-alias filter on the lowest frequency spans. (The lowest frequency filters cost the most of all.)
But as we have seen, this can lead to
problems like our “case of the
missing temperature”.
Digital Filtering
Fortunately, there is an alternative
which is cheaper and when used in
conjunction with a single analog anti-
alias filter, always provides aliasing
protection. It is called digital filtering
because it filters the input signal after
we have sampled and digitized it. To
see how this works, let us look at
Figure 3.19.
In the analog case we already
discussed, we had to use a new
filter every time we changed the
sample rate of the Analog to Digital
Converter (ADC). When using digital
filtering, the ADC sample rate is left
constant at the rate needed for the
highest frequency span of the analyz-
er. This means we need not change
our anti-alias filter. To get the
reduced sample rate and filtering
we need for the narrower frequency
spans, we follow the ADC with a
digital filter.
This digital filter is known as a
decimating filter. It not only filters
the digital representation of the signal
to the desired frequency span, it also
reduces the sample rate at its output
to the rate needed for that frequency
span. Because this filter is digital,
there are no manufacturing varia-
tions, aging or drift in the filter.
Therefore, in a two channel analyzer
the filters in each channel are identi-
cal. It is easy to design a single digital
filter to work on many frequency
spans so the need for multiple filters
per channel is avoided. All these
factors taken together mean that
digital filtering is much less expen-
sive than analog anti-aliasing filtering.
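As a rough sketch of the decimating filter idea (ours, not any analyzer's actual implementation; the sample rate and factor of ten are arbitrary), the SciPy call below low-pass filters a digitized signal and reduces its sample rate in a single step:

```python
import numpy as np
from scipy.signal import decimate

fs = 102_400                       # fixed ADC sample rate in Hz (illustrative value)
t = np.arange(0, 1.0, 1.0 / fs)
x = np.sin(2 * np.pi * 50 * t) + 0.1 * np.sin(2 * np.pi * 20_000 * t)

# Digital filtering for a 10x narrower frequency span: decimate() applies an
# anti-alias low-pass filter and then keeps every 10th sample.  Because the
# filter is digital, a second channel processed this way would be identical.
y = decimate(x, 10, zero_phase=True)

print(len(x), len(y))              # 102400 samples in, 10240 samples out
```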
Figure 3.19
Block diagrams
of analog and
digital filtering.
* Unfortunately, because the spacing of the FFT lines
depends on the sample rate, increasing the sample rate
decreases the number of lines that are in the desired
frequency range. Therefore, to avoid aliasing problems
Dynamic Signal Analyzers have only .25N to .4N lines
instead of N/2 lines.
Section 4:
Band Selectable Analysis
Suppose we need to measure a small
signal that is very close in frequency
to a large one. We might be measur-
ing the powerline sidebands (50 or
60 Hz) on a 20 kHz oscillator. Or we
might want to distinguish between
the stator vibration and the shaft
imbalance in the spectrum of a
motor.*
Recall from our discussion of
the properties of the Fast Fourier
Transform that it is equivalent to a
set of filters, starting at zero Hertz,
equally spaced up to some maximum
frequency. Therefore, our frequency
resolution is limited to the maximum
frequency divided by the number
of filters.
To just resolve the 60 Hz sidebands
on a 20 kHz oscillator signal would
require 333 lines (or filters) of the
FFT. Two or three times more lines
would be required to accurately
measure the sidebands. But typical
Dynamic Signal Analyzers only have
200 to 400 lines, not enough for
accurate measurements. To increase
the number of lines would greatly
increase the cost of the analyzer. If
we chose to pay the extra cost, we
would still have trouble seeing the
results. With a 4 inch (10 cm) screen,
the sidebands would be only 0.01 inch
(.25 mm) from the carrier.
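The arithmetic behind that resolution claim is simple; a short check of our own follows.

```python
span_hz = 20_000       # maximum frequency of the FFT filters
sideband_hz = 60       # spacing we need to resolve
print(span_hz / sideband_hz)   # about 333 lines needed, versus the 200 to 400 available
```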
A better way to solve this problem
is to concentrate the filters into the
frequency range of interest as in
Figure 3.20. If we select the minimum
frequency as well as the maximum
frequency of our filters we can “zoom
in” for a high resolution close-up shot
of our frequency spectrum. We now
have the capability of looking at the
entire spectrum at once with low
resolution as well as the ability to
look at what interests us with much
higher resolution.
This capability of increased
resolution is called Band Selectable
Analysis (BSA).** It is done by mixing
or heterodyning the input signal
down into the range of the FFT span
selected. This technique, familiar to
electronic engineers, is the process
by which radios and televisions
tune in stations.
The primary difference between the
implementation of BSA in Dynamic
Signal Analyzers and heterodyne
radios is shown in Figure 3.21. In a
radio, the sine wave used for mixing
is an analog voltage. In a Dynamic
Signal Analyzer, the mixing is done
after the input has been digitized, so
the “sine wave” is a series of digital
numbers into a digital multiplier.
This means that the mixing will be
done with a very accurate and stable
digital signal so our high resolution
display will likewise be very stable
and accurate.
* The shaft of an ac induction motor always runs at a rate
slightly lower than a multiple of the driven frequency, an
effect called slippage.
** Also sometimes called “zoom”.
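A minimal sketch of this digital heterodyne, or zoom, idea is shown below. It is our own illustration with arbitrary numbers, not a description of any particular analyzer: the digitized input is multiplied by a numerically generated complex "sine wave" at the chosen center frequency, low-pass filtered and decimated to the narrower span, and then transformed.

```python
import numpy as np
from scipy.signal import decimate

fs = 51_200                          # original ADC sample rate, Hz (illustrative)
f_center = 20_000                    # frequency we want to zoom in on
zoom = 16                            # span reduction factor

t = np.arange(0, 1.0, 1.0 / fs)
x = np.sin(2 * np.pi * 20_060 * t)   # a "sideband" 60 Hz above the center

# Digital mixing: multiply by a very accurate, stable digital sine wave.
mixed = x * np.exp(-2j * np.pi * f_center * t)

# Low-pass filter and reduce the sample rate to suit the narrower span.
lowpass = lambda z: decimate(z, zoom, ftype="fir", zero_phase=True)
baseband = lowpass(mixed.real) + 1j * lowpass(mixed.imag)

spectrum = np.fft.fftshift(np.fft.fft(baseband))
freqs = f_center + np.fft.fftshift(np.fft.fftfreq(len(baseband), zoom / fs))
print(freqs[np.argmax(np.abs(spectrum))])   # close to 20,060 Hz
```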
Figure 3.20
High resolution
measurements
with Band
Selectable
Analysis.
Figure 3.21
Analyzer block
diagram.
Section 5: Windowing
The Need for Windowing
There is another property of the Fast
Fourier Transform which affects its
use in frequency domain analysis.
We recall that the FFT computes the
frequency spectrum from a block of
samples of the input called a time
record. In addition, the FFT algorithm
is based upon the assumption that
this time record is repeated through-
out time as illustrated in Figure 3.22.
This does not cause a problem with
the transient case shown. But what
happens if we are measuring a contin-
uous signal like a sine wave? If the
time record contains an integral
number of cycles of the input sine
wave, then this assumption exactly
matches the actual input waveform
as shown in Figure 3.23. In this case,
the input waveform is said to be
periodic in the time record.
Figure 3.24 demonstrates the
difficulty with this assumption
when the input is not periodic in the
time record. The FFT algorithm is
computed on the basis of the highly
distorted waveform in Figure 3.24c.
We know from Chapter 2 that
the actual sine wave input has a
frequency spectrum of a single line.
The spectrum of the input assumed
by the FFT in Figure 3.24c should be very different. Since sharp phenomena in one domain are spread out in the other domain, we would expect the spectrum of our sine wave to be spread out through the frequency domain.

Figure 3.22 FFT assumption - time record repeated throughout all time.
Figure 3.23 Input signal periodic in time record.
Figure 3.24 Input signal not periodic in time record.
In Figure 3.25 we see in an actual
measurement that our expectations
are correct. In Figures 3.25 a & b, we
see a sine wave that is periodic in the
time record. Its frequency spectrum
is a single line whose width is deter-
mined only by the resolution of our
Dynamic Signal Analyzer.* On the
other hand, Figures 3.25c & d show
a sine wave that is not periodic in
the time record. Its power has been
spread throughout the spectrum as
we predicted.
This smearing of energy throughout the frequency domain is a phenomenon known as leakage. We are seeing
energy leak out of one resolution line
of the FFT into all the other lines.
It is important to realize that leakage
is due to the fact that we have taken
a finite time record. For a sine wave
to have a single line spectrum, it must
exist for all time, from minus infinity
to plus infinity. If we were to have
an infinite time record, the FFT
would compute the correct single
line spectrum exactly. However, since
we are not willing to wait forever to
measure its spectrum, we only look
at a finite time record of the sine
wave. This can cause leakage if the
continuous input is not periodic in
the time record.
It is obvious from Figure 3.25 that the
problem of leakage is severe enough
to entirely mask small signals close
to our sine waves. As such, the FFT
would not be a very useful spectrum
analyzer. The solution to this problem
is known as windowing. The prob-
lems of leakage and how to solve
them with windowing can be the
most confusing concepts of Dynamic
Signal Analysis. Therefore, we will
now carefully develop the problem
and its solution in several representa-
tive cases.
* The additional two components in the photo are the
harmonic distortion of the sine wave source.
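Leakage is also easy to reproduce with a short FFT experiment of our own (the numbers are arbitrary): a sine wave with an exact integer number of cycles in the time record produces essentially a single line, while one that is not periodic in the record spreads its energy over many lines.

```python
import numpy as np

N, fs = 1024, 1024.0                        # samples per time record, sample rate (Hz)
t = np.arange(N) / fs                       # a one second time record, 1 Hz line spacing

periodic = np.sin(2 * np.pi * 100.0 * t)    # exactly 100 cycles in the record
leaky = np.sin(2 * np.pi * 100.5 * t)       # 100.5 cycles -- not periodic in the record

for name, x in (("periodic", periodic), ("not periodic", leaky)):
    mag = np.abs(np.fft.rfft(x)) / (N / 2)  # amplitude of each FFT line
    lines = np.sum(mag > 0.01)              # lines above 1% of the sine amplitude
    print(f"{name:13s}: {lines} significant lines")
```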
Figure 3.25 Actual FFT results. a) & b) Sine wave periodic in time record; c) & d) Sine wave not periodic in time record.
What is Windowing?
In Figure 3.26 we have again repro-
duced the assumed input wave form
of a sine wave that is not periodic in
the time record. Notice that most of
the problem seems to be at the edges
of the time record, the center is a
good sine wave. If the FFT could
be made to ignore the ends and con-
centrate on the middle of the time
record, we would expect to get much
closer to the correct single line
spectrum in the frequency domain.
If we multiply our time record by
a function that is zero at the ends
of the time record and large in the
middle, we would concentrate the
FFT on the middle of the time record.
One such function is shown in Figure
3.26c. Such functions are called
window functions because they
force us to look at data through a
narrow window.
Figure 3.27 shows us the vast
improvement we get by windowing
data that is not periodic in the time
record. However, it is important to
realize that we have tampered with
the input data and cannot expect
perfect results. The FFT assumes the
input looks like Figure 3.26d, some-
thing like an amplitude-modulated
sine wave. This has a frequency
spectrum which is closer to the
correct single line of the input sine
wave than Figure 3.26b, but it still is
not correct. Figure 3.28 demonstrates
that the windowed data does not
have as narrow a spectrum as an
unwindowed function which is
periodic in the time record.
Figure 3.26
The effect of
windowing in
the time domain.
Figure 3.27
Leakage reduction
with windowing.
a) Sine wave not periodic in time record b) FFT results with no window function
c) FFT results with a window function
The Hanning Window
Any number of functions can be used
to window the data, but the most
common one is called Hanning. We
actually used the Hanning window in
Figure 3.27 as our example of leakage
reduction with windowing. The
Hanning window is also commonly
used when measuring random noise.
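The Hanning window itself is just a raised cosine that is zero at both ends of the time record. A small sketch of its use follows (ours, not any analyzer's internal code):

```python
import numpy as np

N = 1024
t = np.arange(N) / N
x = np.sin(2 * np.pi * 100.5 * t)             # sine wave not periodic in the time record

window = np.hanning(N)                        # zero at the ends, unity in the middle
raw = np.abs(np.fft.rfft(x)) / (N / 2)
windowed = np.abs(np.fft.rfft(x * window)) / (N / 2) / window.mean()  # amplitude corrected

# Far from the signal, the leakage drops dramatically once the window is applied.
print(20 * np.log10(raw[300]), 20 * np.log10(windowed[300]))
```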
The Uniform Window*
We have seen that the Hanning
window does an acceptably good
job on our sine wave examples, both
periodic and non-periodic in the time
record. If this is true, why should we
want any other windows?
Suppose that instead of wanting the
frequency spectrum of a continuous
signal, we would like the spectrum
of a transient event. A typical tran-
sient is shown in Figure 3.29a. If we
multiplied it by the window function
in Figure 3.29b we would get the
highly distorted signal shown in
Figure 3.29c. The frequency spectrum
of an actual transient with and with-
out the Hanning window is shown in
Figure 3.30. The Hanning window has
taken our transient, which naturally
has energy spread widely through the
frequency domain and made it look
more like a sine wave.
Therefore, we can see that for
transients we do not want to use
the Hanning window. We would like
to use all the data in the time record
equally or uniformly. Hence we will
use the Uniform window which
weights all of the time record
uniformly.
The case we made for the Uniform
window by looking at transients
can be generalized. Notice that our
transient has the property that it is
zero at the beginning and end of the
time record. Remember that we intro-
duced windowing to force the input
to be zero at the ends of the time
record. In this case, there is no need
for windowing the input. Any func-
tion like this which does not require a
window because it occurs completely
within the time record is called a self-
windowing function. Self-windowing
functions generate no leakage in the
FFT and so need no window.
* The Uniform Window is sometimes referred to as a
“Rectangular Window”.
Figure 3.28 Windowing reduces leakage but does not eliminate it. a) Leakage-free measurement - input periodic in time record; b) Windowed measurement - input not periodic in time record.
Figure 3.29
Windowing loses
information from
transient events.
Figure 3.30 Spectrums of transients. a) Unwindowed transients; b) Hanning windowed transients.
There are many examples of self-
windowing functions, some of which
are shown in Figure 3.31. Impacts,
impulses, shock responses, sine
bursts, noise bursts, chirp bursts and
pseudo-random noise can all be made
to be self-windowing. Self-windowing
functions are often used as the exci-
tation in measuring the frequency
response of networks, particularly
if the network has lightly-damped
resonances (high Q). This is because
the self-windowing functions gener-
ate no leakage in the FFT. Recall that
even with the Hanning window, some
leakage was present when the signal
was not periodic in the time record.
This means that without a self-win-
dowing excitation, energy could leak
from a lightly damped resonance into
adjacent lines (filters). The resulting
spectrum would show greater
damping than actually exists.*
The Flat-top Window
We have shown that we need a
uniform window for analyzing self-
windowing functions like transients.
In addition, we need a Hanning
window for measuring noise and
periodic signals like sine waves.
We now need to introduce a third
window function, the flat-top window,
to avoid a subtle effect of the
Hanning window. To understand
this effect, we need to look at the
Hanning window in the frequency
domain. We recall that the FFT acts
like a set of parallel filters. Figure
3.32 shows the shape of those filters
when the Hanning window is used.
Notice that the Hanning function
gives the filter a very rounded top.
If a component of the input signal
is centered in the filter it will be
measured accurately**. Otherwise,
the filter shape will attenuate the
component by up to 1.5 dB (16%)
when it falls midway between the
filters.
This error is unacceptably large
if we are trying to measure a signal’s
amplitude accurately. The solution is
to choose a window function which
gives the filter a flatter passband.
Such a flat-top passband shape is
shown in Figure 3.33. The amplitude
error from this window function does
not exceed .1 dB (1%), a 1.4 dB
improvement.
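The worst-case amplitude error of the two window shapes can be checked numerically. In the sketch below (ours; SciPy's flattop window stands in for an analyzer's flat-top shape), a sine wave placed exactly midway between two lines reads low by roughly 1.4 dB with the Hanning window but by only a few hundredths of a dB with the flat-top.

```python
import numpy as np
from scipy.signal.windows import flattop

N = 1024
n = np.arange(N)
x = np.sin(2 * np.pi * (100.5 / N) * n)       # component midway between lines 100 and 101

for name, w in (("Hanning", np.hanning(N)), ("flat-top", flattop(N))):
    amplitude = np.abs(np.fft.rfft(x * w)).max() / (w.sum() / 2)   # gain-corrected peak
    print(f"{name:8s}: worst-case amplitude error = {20 * np.log10(amplitude):5.2f} dB")
```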
Figure 3.33
Flat-top
passband
shapes.
* There is another way to avoid this problem using Band
Selectable Analysis. We will illustrate this in the next
chapter.
** It will, in fact, be periodic in the time record.
Figure 3.31
Self-windowing
function examples.
Figure 3.32
Hanning
passband
shapes.
Figure 3.34 Reduced resolution of the flat-top window (flat-top and Hanning passbands compared).
The accuracy improvement does
not come without its price, however.
Figure 3.34 shows that we have flat-
tened the top of the passband at the
expense of widening the skirts of the
filter. We therefore lose some ability
to resolve a small component, closely
spaced to a large one. Some Dynamic
Signal Analyzers offer both Hanning
and flat-top window functions so that
the operator can choose between
increased accuracy or improved
frequency resolution.
Other Window Functions
Many other window functions are
possible but the three listed above
are by far the most common for
general measurements. For special
measurement situations other groups
of window functions may be useful.
We will discuss two windows which
are particularly useful when doing
network analysis on mechanical
structures by impact testing.
The Force and Response Windows
A hammer equipped with a force
transducer is commonly used to
stimulate a structure for response
measurements. Typically the force
input is connected to one channel
of the analyzer and the response of
the structure from another transducer
is connected to the second channel.
This force impact is obviously a
self-windowing function. The
response of the structure is also
self-windowing if it dies out within
the time record of the analyzer. To
guarantee that the response does go
to zero by the end of the time record,
an exponentially weighted window
called a response window is some-
times added. Figure 3.35 shows a
response window acting on the re-
sponse of a lightly damped structure
which did not fully decay by the end
of the time record. Notice that unlike
the Hanning window, the response
window is not zero at both ends of
the time record. We know that the
response of the structure will be zero
at the beginning of the time record
(before the hammer blow) so there
is no need for the window function
to be zero there. In addition, most of
the information about the structural
response is contained at the begin-
ning of the time record so we make
sure that this is weighted most heavi-
ly by our response window function.
The time record of the exciting force
should be just the impact with the
structure. However, movement of the
hammer before and after hitting the
structure can cause stray signals in
the time record. One way to avoid
this is to use a force window shown
in Figure 3.36. The force window is
unity where the impact data is valid
and zero everywhere else so that the
analyzer does not measure any stray
noise that might be present.
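Neither of these windows is standardized, but both are easy to construct. The sketch below is our own illustration with arbitrary lengths and decay constant: an exponential response window that forces the response toward zero by the end of the record, and a force window that is unity only around the impact.

```python
import numpy as np

N = 1024                                 # samples in the time record
n = np.arange(N)

# Response window: decays so a lightly damped response is near zero at the
# end of the record.  The decay constant is a user choice (and adds known
# artificial damping that must be kept in mind when reading the results).
tau = N / 4.0
response_window = np.exp(-n / tau)

# Force window: unity where the impact data is valid, zero everywhere else,
# so stray hammer motion before and after the hit is not measured.
impact_samples = 64
force_window = np.zeros(N)
force_window[:impact_samples] = 1.0

print(response_window[0], round(response_window[-1], 3))   # 1.0 at the start, ~0.018 at the end
print(int(force_window.sum()))                             # 64 samples of valid impact data
```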
Passband Shapes or
Window Functions?
In the preceding discussion we
sometimes talked about window
functions in the time domain. At
other times we talked about the filter
passband shape in the frequency
domain caused by these windows. We
change our perspective freely to
whichever domain yields the simplest
explanation. Likewise, some Dynamic
Signal Analyzers call the uniform,
Hanning and flat-top functions “win-
dows” and other analyzers call those
functions “pass-band shapes”. Use whichever terminology is easier for the problem at hand as they are completely interchangeable, just as the time and frequency domains are completely equivalent.

Figure 3.35 Using the response window.
Figure 3.36 Using the force window.
Section 6:
Network Stimulus
Recall from Chapter 2 that we can
measure the frequency response at
one frequency by stimulating the
network with a single sine wave and
measuring the gain and phase shift at
that frequency. The frequency of the
stimulus is then changed and the
measurement repeated until all
desired frequencies have been
measured. Every time the frequency
is changed, the network response
must settle to its steady-state value
before a new measurement can be
taken, making this measurement
process a slow task.
Many network analyzers operate in
this manner and we can make the
measurement this way with a two
channel Dynamic Signal Analyzer. We
set the sine wave source to the center
of the first filter as in Figure 3.37.
The analyzer then measures the
gain and phase of the network at
this frequency while the rest of the
analyzer’s filters measure only noise.
We then increase the source frequen-
cy to the next filter center, wait for
the network to settle and then meas-
ure the gain and phase. We continue
this procedure until we have
measured the gain and phase of
the network at all the frequencies
of the filters in our analyzer.
This procedure would, within
experimental error, give us the same
results as we would get with any of
the network analyzers described in
Chapter 2 with any network, linear
or nonlinear.
Noise as a Stimulus
A single sine wave stimulus does
not take advantage of the possible
speed the parallel filters of a
Dynamic Signal Analyzer provide. If
we had a source that put out multiple
sine waves, each one centered in a
filter, then we could measure the
frequency response at all frequencies
at one time. Such a source, shown in
Figure 3.38, acts like hundreds of sine
wave generators connected together.
Although this sounds very expensive, just such a source can be easily generated digitally. It is called a pseudo-random noise or periodic random noise source.

Figure 3.37 Frequency response measurements with a sine wave stimulus.
Figure 3.38 Pseudo-random noise as a stimulus.
From the names used for this source
it is apparent that it acts somewhat
like a true noise generator, except
that it has periodicity. If we add
together a large number of sine
waves, the result is very much like
white noise. A good analogy is the
sound of rain. A single drop of water
makes a quite distinctive splashing
sound, but a rain storm sounds like
white noise. However, if we add
together a large number of sine
waves, our noise-like signal will
periodically repeat its sequence.
Hence, the name periodic random
noise (PRN) source.
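One way to picture such a source, sketched below in Python (our illustration, not how any analyzer synthesizes its signal): add one sine wave per FFT line, each with a random phase. The sum looks like noise but repeats exactly once per time record.

```python
import numpy as np

N = 1024                                     # samples per time record
rng = np.random.default_rng(0)
lines = np.arange(1, N // 2)                 # one sine wave centered in each FFT line
phases = rng.uniform(0, 2 * np.pi, lines.size)

n = np.arange(2 * N)                         # two records' worth of samples
prn = np.zeros(2 * N)
for k, phi in zip(lines, phases):
    prn += np.cos(2 * np.pi * k * n / N + phi)
prn /= np.abs(prn).max()                     # scale to +/- 1

print(np.allclose(prn[:N], prn[N:]))                          # True: periodic in the record
print(np.all(np.abs(np.fft.rfft(prn[:N])[1:N // 2]) > 1.0))   # True: energy in every line
```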
A truly random noise source has a
spectrum shown in Figure 3.39. It is
apparent that a random noise source
would also stimulate all the filters at
one time and so could be used as a
network stimulus. Which is a better
stimulus? The answer depends upon
the measurement situation.
Linear Network Analysis
If the network is reasonably linear,
PRN and random noise both give the
same results as the swept-sine test of
other analyzers. But PRN gives the
frequency response much faster. PRN
can be used to measure the frequency
response in a single time record.
Because the random source is true
noise, it must be averaged for several
time records before an accurate fre-
quency response can be determined.
Therefore, PRN is the best stimulus
to use with fairly linear networks
because it gives the fastest results*.
Non-Linear Network Analysis
If the network is severely non-linear,
the situation is quite different. In this
case, PRN is a very poor test signal
and random noise is much better. To
see why, let us look at just two of the
sine waves that compose the PRN
source. We see in Figure 3.40 that
if two sine waves are put through
a nonlinear network, distortion
products will be generated equally
spaced from the signals**. Unfortu-
nately, these products will fall exactly
on the frequencies of the other sine
waves in the PRN. So the distortion
products add to the output and there-
fore interfere with the measurement
of the frequency response. Figure 3.41a shows the jagged response of a nonlinear network measured with PRN. Because the PRN source repeats itself exactly every time record, this noisy looking trace never changes and will not average to the desired frequency response.

Figure 3.39 Random noise as a stimulus.
Figure 3.40 Pseudo-random noise distortion.

* There is another reason why PRN is a better test signal than random noise for linear networks. Recall from the last section that PRN is self-windowing. This means that unlike random noise, pseudo-random noise has no leakage. Therefore, with PRN, we can measure lightly damped (high Q) resonances more easily than with random noise.
** This distortion is called intermodulation distortion.
With random noise, the distortion
components are also random and will
average out. Therefore, the frequency
response does not include the distor-
tion and we get the more reasonable
results shown in Figure 3.41b.
This points out a fundamental
problem with measuring non-linear
networks; the frequency response is
not a property of the network alone,
it also depends on the stimulus.
Each stimulus, swept-sine, PRN and
random noise will, in general, give a
different result. Also, if the amplitude
of the stimulus is changed, you will
get a different result.
To illustrate this, consider the
mass-spring system with stops that
we used in Chapter 2. If the mass
does not hit the stops, the system
is linear and the frequency response
is given by Figure 3.42a.
If the mass does hit the stops,
the output is clipped and a large
number of distortion components are
generated. As the output approaches
a square wave, the fundamental com-
ponent becomes constant. Therefore,
as we increase the input amplitude,
the gain of the network drops. We
get a frequency response like Figure
3.42b, where the gain is dependent
on the input signal amplitude.
So as we have seen, the frequency
response of a nonlinear network is
not well defined, i.e., it depends on
the stimulus. Yet it is often used in
spite of this. The frequency response
of linear networks has proven to be a
very powerful tool and so naturally
people have tried to extend it to
non-linear analysis, particularly since
other nonlinear analysis tools have
proved intractable.
If every stimulus yields a different
frequency response, which one
should we use? The “best” stimulus
could be considered to be one which
approximates the kind of signals you
would expect to have as normal
inputs to the network. Since any large
collection of signals begins to look
like noise, noise is a good test signal*.
As we have already explained, noise
is also a good test signal because it
speeds the analysis by exciting all the
filters of our analyzer simultaneously.
But many other test signals can be
used with Dynamic Signal Analyzers
and are “best” (optimum) in other
senses. As explained in the beginning
of this section, sine waves can be
used to give the same results as other
types of network analyzers although
the speed advantage of the Dynamic
Signal Analyzer is lost. A fast sine
sweep (chirp) will give very similar
results with all the speed of Dynamic
Signal Analysis and so is a better
test signal. An impulse is a good test
signal for acoustical testing if the net-
work is linear. It is good for acoustics
because reflections from surfaces
at different distances can easily be
isolated or eliminated if desired. For
instance, by using the “force” window
described earlier, it is easy to get the
free field response of a speaker by
eliminating the room reflections from
the windowed time record.
Band-Limited Noise
Before leaving the subject of network
stimulus, it is appropriate to discuss
the need to band limit the stimulus.
We want all the power of the stimulus
to be concentrated in the frequency
region we are analyzing. Any power
outside this region does not contribute to the measurement and could excite non-linearities.

* This is a consequence of the central limit theorem. As an example, the telephone companies have found that when many conversations are transmitted together, the result is like white noise. The same effect is found more commonly at a crowded cocktail party.

Figure 3.41 Nonlinear transfer function. a) Pseudo-random noise stimulus; b) Random noise stimulus.
Figure 3.42 Nonlinear system.
This can be a particularly severe
problem when testing with random
noise since it theoretically has the
same power at all frequencies (white
noise). To eliminate this problem,
Dynamic Signal Analyzers often limit
the frequency range of their built-in
noise stimulus to the frequency span
selected. This could be done with an
external noise source and filters, but
every time the analyzer span changed,
the noise power and filter would have
to be readjusted. This is done auto-
matically with a built-in noise source
so transfer function measurements
are easier and faster.
Section 7: Averaging
To make it as easy as possible to
develop an understanding of Dynamic
Signal Analyzers we have almost
exclusively used examples with deter-
ministic signals, i.e., signals with no
noise. However, as the real world is
rarely so obliging, the desired signal
often must be measured in the pres-
ence of significant noise. At other
times the “signals” we are trying to
measure are more like noise them-
selves. Common examples that are
somewhat noise-like include speech,
music, digital data, seismic data and
mechanical vibrations. Because of
these two common conditions, we
must develop techniques both to
measure signals in the presence of
noise and to measure the noise itself.
The standard technique in statistics
to improve the estimates of a value
is to average. When we watch a
noisy reading on a Dynamic Signal
Analyzer, we can guess the average
value. But because the Dynamic
Signal Analyzer contains digital
computation capability we can have
it compute this average value for us.
Two kinds of averaging are available,
RMS (or “power” averaging) and
linear averaging.
RMS Averaging
When we watch the magnitude of the
spectrum and attempt to guess the
average value of the spectrum com-
ponent, we are doing a crude RMS*
average. We are trying to determine
the average magnitude of the signal,
ignoring any phase difference that
may exist between the spectra. This
averaging technique is very valuable
for determining the average power
in any of the filters of our Dynamic
Signal Analyzers. The more averages
we take, the better our estimate of
the power level.
In Figure 3.43, we show RMS aver-
aged spectra of random noise, digital
data and human voices. Each of these
examples is a fairly random process,
but when averaged we can see the
basic properties of its spectrum.
If we want to measure a small signal
in the presence of noise, RMS averag-
ing will give us a good estimate of the
signal plus noise. We can not improve
the signal to noise ratio with RMS
averaging; we can only make more
accurate estimates of the total signal
plus noise power.
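In a digital analyzer this is just an average of the squared magnitudes of successive spectra. A short sketch of our own (arbitrary signal and noise levels) shows the estimate settling down as more records are averaged, while the signal to noise ratio itself does not change:

```python
import numpy as np

rng = np.random.default_rng(1)
N, fs = 1024, 1024.0
t = np.arange(N) / fs

def time_record():
    """A small 200 Hz sine wave buried in random noise."""
    return 0.1 * np.sin(2 * np.pi * 200 * t) + rng.normal(0.0, 1.0, N)

def rms_average(n_records):
    # Average the power (magnitude squared) of each spectrum; phase is ignored.
    power = np.mean([np.abs(np.fft.rfft(time_record()))**2 for _ in range(n_records)], axis=0)
    return np.sqrt(power)

for n_avg in (1, 16, 128):
    spectrum = rms_average(n_avg)
    noise = np.delete(spectrum, 200)          # every line except the sine wave's
    print(n_avg, "averages: relative spread of the noise floor =",
          round(noise.std() / noise.mean(), 2))
```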
Figure 3.43
RMS averaged spectra.
a) Random noise b) Digital data
c) Voices
Traces were separated 30 dB for clarity
Upper trace: female speaker
Lower trace: male speaker
* RMS stands for “root-mean-square” and is calculated
by squaring all the values, adding the squares together,
dividing by the number of measurements (mean) and
taking the square root of the result.
Linear Averaging
However, there is a technique for
improving the signal to noise ratio
of a measurement, called linear aver-
aging. It can be used if a trigger sig-
nal which is synchronous with the
periodic part of the spectrum is
available. Of course, the need for a
synchronizing signal is somewhat
restrictive, although there are numer-
ous situations in which one is avail-
able. In network analysis problems
the stimulus signal itself can often be
used as a synchronizing signal.
Linear averaging can be implemented
many ways, but perhaps the easiest to
understand is where the averaging is
done in the time domain. In this case,
the synchronizing signal is used to
trigger the start of a time record.
Therefore, the periodic part of the
input will always be exactly the same
in each time record we take, whereas
the noise will, of course, vary. If we
add together a series of these trig-
gered time records and divide by the
number of records we have taken we
will compute what we call a linear
average.
Since the periodic signal will have
repeated itself exactly in each time
record, it will average to its exact
value. But since the noise is different
in each time record, it will tend to
average to zero. The more averages
we take, the closer the noise comes
to zero and we continue to improve
the signal to noise ratio of our meas-
urement. Figure 3.44 shows a time
record of a square wave buried in
noise. The resulting time record after
128 averages shows a marked im-
provement in the signal to noise ratio.
Transforming both results to the
frequency domain shows how many
of the harmonics can now be accu-
rately measured because of the
reduced noise floor.
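A sketch of time-domain linear averaging (ours, assuming an ideal trigger so every record starts at the same point of the square wave) shows the noise averaging toward zero while the synchronous signal is preserved:

```python
import numpy as np
from scipy.signal import square

rng = np.random.default_rng(2)
N = 1024
t = np.arange(N) / N
clean = square(2 * np.pi * 4 * t)             # four cycles of square wave per record

def triggered_record():
    """Each triggered time record: the same square wave plus different noise."""
    return clean + rng.normal(0.0, 2.0, N)    # noise larger than the signal

averaged = np.mean([triggered_record() for _ in range(128)], axis=0)

print("rms error, single record :", round(np.std(triggered_record() - clean), 2))
print("rms error, 128 averages  :", round(np.std(averaged - clean), 2))
```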
Figure 3.44 Linear averaging. a) & b) Single record, no averaging; c) & d) 128 linear averages.
Section 8:
Real Time Bandwidth
Until now we have ignored the fact
that it will take a finite time to com-
pute the FFT of our time record. In
fact, if we could compute the trans-
form in less time than our sampling
period we could continue to ignore
this computational time. Figure 3.45
shows that under this condition we
could get a new frequency spectrum
with every sample. As we have seen
from the section on aliasing, this
could result in far more spectrums
every second than we could possibly
comprehend. Worse, because of the
complexity of the FFT algorithm, it
would take a very fast and very
expensive computer to generate
spectrums this rapidly.
A reasonable alternative is to add a
time record buffer to the block dia-
gram of our analyzer. In Figure 3.47
we can see that this allows us to
compute the frequency spectrum of
the previous time record while gath-
ering the current time record. If we
can compute the transform before
the time record buffer fills, then we
are said to be operating in real time.
To see what this means, let us look at
the case where the FFT computation
takes longer than the time to fill the
time record. The case is illustrated in
Figure 3.48. Although the buffer is
full, we have not finished the last
transform, so we will have to stop
taking data. When the transform is
finished, we can transfer the time
record to the FFT and begin to take
another time record. This means that
we missed some input data and so we
are said to be not operating in real
time.
Recall that the time record is not
constant but deliberately varied to
change the frequency span of the ana-
lyzer. For wide frequency spans the
time record is shorter. Therefore, as
we increase the frequency span of the
analyzer, we eventually reach a span
where the time record is equal to the
FFT computation time. This frequen-
cy span is called the real time band-
width. For frequency spans at and
below the real time bandwidth, the
analyzer does not miss any data.
Real Time Bandwidth Requirements
How wide a real time bandwidth is
needed in a Dynamic Signal Analyzer?
Let us examine a few typical meas-
urements to get a feeling for the
considerations involved.
Adjusting Devices
If we are measuring the spectrum or
frequency response of a device which
we are adjusting, we need to watch
the spectrum change in what might
be called psychological real time. A
new spectrum every few tenths of a
second is sufficiently fast to allow an
operator to watch adjustments in
what he would consider to be real
time. However, if the response time
of the device under test is long, the
speed of the analyzer is immaterial.
We will have to wait for the device
to respond to the changes before the
spectrum will be valid, no matter
how many spectrums we generate
in that time. This is what makes
adjusting lightly damped (high Q)
resonances tedious.
Figure 3.45 A new transform every sample.
Figure 3.46 Time buffer added to block diagram.
Figure 3.47 Real time operation.
Figure 3.48 Non-real time operation.
RMS Averaging
A second case of interest in determin-
ing real time bandwidth requirements
is measurements that require RMS
averaging. We might be interested in
determining the spectrum distribution
of the noise itself or in reducing the
variation of a signal contaminated
by noise. There is no requirement in
averaging that the records must be
consecutive with no gaps*. Therefore,
a small real time bandwidth will not
affect the accuracy of the results.
However, the real time bandwidth
will affect the speed with which an
RMS averaged measurement can be
made. Figure 3.49 shows that for
frequency spans above the real time
bandwidth, the time to complete the
average of N records is dependent
only on the time to compute the
N transforms. Rather than continually
reducing the time to compute the
RMS average as we increase our
span, we reach a fixed time to
compute N averages.
Therefore, a small real time band-
width is only a problem in RMS aver-
aging when large spans are used with
a large number of averages. Under
these conditions we must wait longer
for the answer. Since wider real time
bandwidths require faster computa-
tions and therefore a more expensive
processor, there is a straightforward
trade-off of time versus money. In the
case of RMS averaging, higher real
time bandwidth gives you somewhat
faster measurements at increased
analyzer cost.
Transients
The last case of interest in determin-
ing the needed real time bandwidth
is the analysis of transient events. If
the entire transient fits within the
time record, the FFT computation
time is of little interest. The analyzer
can be triggered by the transient and
the event stored in the time record
buffer. The time to compute its
spectrum is not important.
However, if a transient event contains
high frequency energy and lasts
longer than the time record necessary
to measure the high frequency energy,
then the processing speed of the ana-
lyzer is critical. As shown in Figure
3.50b, some of the transient will not
be analyzed if the computation time
exceeds the time record length.
Figure 3.49 RMS averaging time.
Figure 3.50 Transient analysis.

* This is because to average at all the signal must be periodic and the noise stationary.

In the case of transients longer than the time record, it is also imperative that there is some way to rapidly record the spectrum. Otherwise, the information will be lost as the
analyzer updates the display with
the spectrum of the latest time
record. A special display which
can show more than one spectrum
(“waterfall” display), mass memory,
a high speed link to a computer or a
high speed facsimile recorder is need-
ed. The output device must be able to
record a spectrum every time record
or information will be lost.
Fortunately, there is an easy way to
avoid the need for an expensive wide
real time bandwidth analyzer and an
expensive, fast spectrum recorder.
One-time transient events like explo-
sions and pass-by noise are usually
tape recorded for later analysis
because of the expense of repeating
the test. If this tape is played back at
reduced speed, the speed demands on
the analyzer and spectrum recorder
are reduced. Timing markers could
also be recorded at one time record
intervals. This would allow the analy-
sis of one record at a time and plot-
ting with a very slow (and commonly
available) X-Y plotter.
So we see that there is no clear-cut
answer to what real time bandwidth
is necessary in a Dynamic Signal
Analyzer. Except in analyzing long
transient events, the added expense
of a wide real time bandwidth gives
little advantage. It is possible to ana-
lyze long transient events with a nar-
row real time bandwidth analyzer, but
it does require the recording of the
input signal. This method is slow and
requires some operator care, but one
can avoid purchasing an expensive
analyzer and fast spectrum recorder.
It is a clear case of speed of analysis
versus dollars of capital equipment.
Section 9:
Overlap Processing
In Section 8 we considered the case
where the computation of the FFT
took longer than the collecting of the
time record. In this section we will
look at a technique, overlap process-
ing, which can be used when the FFT
computation takes less time than
gathering the time record.
To understand overlap processing, let
us look at Figure 3.51a. We see a low
frequency analysis where the gather-
ing of a time record takes much
longer than the FFT computation
time. Our FFT processor is sitting
idle much of the time. If instead of
waiting for an entirely new time
record we overlapped the new time
record with some of the old data,
we would get a new spectrum as
often as we computed the FFT. This
overlap processing is illustrated in
Figure 3.51b. To understand the
benefits of overlap processing, let
us look at the same cases we used
in the last section.
Adjusting Devices
We saw in the last section that we
need a new spectrum every few
tenths of a second when adjusting
devices. Without overlap processing
this limits our resolution to a few
Hertz. With overlap processing our
resolution is unlimited. But we are
not getting something for nothing.
Because our overlapped time record
contains old data from before the
device adjustment, it is not complete-
ly correct. It does indicate the direc-
tion and the amount of change, but
we must wait a full time record after
the change for the new spectrum to
be accurately displayed.
Nonetheless, by indicating the
direction and magnitude of the
changes every few tenths of a
second, overlap processing does
help in the adjustment of devices.
Figure 3.51
Understanding
overlap
processing.
RMS Averaging
Overlap processing can give dramatic
reductions in the time to compute
RMS averages with a given variance.
Recall that window functions reduce
the effects of leakage by weighting
the ends of the time record to zero.
Overlapping eliminates most or all of
the time that would be wasted taking
this data. Because some overlapped
data is used twice, more averages
must be taken to get a given variance
than in the non-overlapped case.
Figure 3.52 shows the improvements
that can be expected by overlapping.
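Overlap processing of RMS averages is essentially what Welch's method of spectrum estimation does, so SciPy can illustrate the point; the numbers below are arbitrary. With a fixed amount of data, 50% overlapped Hanning-weighted records yield more averages and therefore a smoother estimate:

```python
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(3)
fs, N = 1024.0, 1024
x = rng.normal(0.0, 1.0, 16 * N)              # sixteen time records' worth of noise

for overlap in (0, N // 2):
    f, pxx = welch(x, fs=fs, window="hann", nperseg=N, noverlap=overlap)
    print(f"overlap = {overlap:4d} samples: relative spread of the estimate = "
          f"{pxx.std() / pxx.mean():.2f}")
```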
Transients
For transients shorter than the time
record, overlap processing is useless.
For transients longer than the time
record the real time bandwidth of
the analyzer and spectrum recorder
is usually a limitation. If it is not,
overlap processing allows more
spectra to be generated from the
transient, usually improving
resolution of resulting plots.
Section 10: Summary
In this chapter we have developed
the basic properties of Dynamic
Signal Analyzers. We found that
many properties could be understood
by considering what happens when
we transform a finite, sampled time
record. The length of this record
determines how closely our filters
can be spaced in the frequency
domain and the number of samples
determines the number of filters in
the frequency domain. We also found
that unless we filtered the input we
could have errors due to aliasing and
that finite time records could cause
a problem called leakage which we
minimized by windowing.
We then added several features to
our basic Dynamic Signal Analyzer
to enhance its capabilities. Band
Selectable Analysis allows us to make
high resolution measurements even
at high frequencies. Averaging gives
more accurate measurements when
noise is present and even allows us
to improve the signal to noise ratio
when we can use linear averaging.
Finally, we incorporated a noise
source in our analyzer to act as a
stimulus for transfer function
measurements.
Figure 3.52
RMS averaging
speed improvements
with overlap
processing.
Chapter 4
Using Dynamic Signal Analyzers
In Chapters 2 & 3, we developed an
understanding of the time, frequency
and modal domains and how
Dynamic Signal Analyzers operate.
In this chapter we show how to use
Dynamic Signal Analyzers in a wide
variety of measurement situations.
We introduce the measurement
functions of Dynamic Signal
Analyzers as we need them for
each measurement situation.
We begin with some common elec-
tronic and mechanical measurements
in the frequency domain. Later in the
chapter we introduce time and modal
domain measurements.
Section 1: Frequency Domain
Measurements
Oscillator Characterization
Let us begin by measuring the charac-
teristics of an electronic oscillator.
An important specification of an
oscillator is its harmonic distortion.
In Figure 4.1, we show the fundamental through fifth harmonic of a 1 kHz oscillator. Because the frequency is not necessarily exactly 1 kHz, windowing should be used to reduce the
leakage. We have chosen the flat-top
window so that we can accurately
measure the amplitudes.
Notice that we have selected the
input sensitivity of the analyzer so
that the fundamental is near the top
of the display. In general, we set the
input sensitivity to the most sensitive
range which does not overload the
analyzer. Severe distortion of the
input signal will occur if its peak
voltage exceeds the range of the
analog to digital converter. Therefore,
all dynamic signal analyzers warn the
user of this condition by some kind of
overload indicator.
It is also important to make sure the
analyzer is not underloaded. If the
signal going into the analog to digital
converter is too small, much of the
useful information of the spectrum
may be below the noise level of the
analyzer. Therefore, setting the input
sensitivity to the most sensitive range
that does not cause an overload gives
the best possible results.
In Figure 4.1a we chose to display
the spectrum amplitude in logarith-
mic form to insure that we could see
distortion products far below the
fundamental. All signal amplitudes
on this display are in dBV, decibels
below 1 Volt RMS. However, since
most Dynamic Signal Analyzers have
very versatile display capabilities,
we could also display this spectrum
linearly as in Figure 4.1b. Here the
units of amplitude are volts.
Power-Line Sidebands
Another important measure of an
oscillator’s performance is the level
of its power-line sidebands. In Figure
4.2, we use Band Selectable Analysis
to “zoom in” on the signal so that we
can easily resolve and measure the
sidebands which are only 60 Hz away
from our 1 kHz signal. With some
analyzers it is possible to measure
signals only millihertz away from the
fundamental if desired.
Figure 4.1 Harmonic distortion of an Audio Oscillator - Flat-top window used. a) Logarithmic amplitude scale; b) Linear amplitude scale.
Figure 4.2 Powerline sidebands of an Audio Oscillator - Band Selectable Analysis and Hanning window used for maximum resolution.

Phase Noise
The short-term stability of a high frequency oscillator is very important in communications and radar. One measure of this is called phase noise. It is often measured by the technique shown in Figure 4.3a. This mixes down and cancels the oscillator carrier, leaving only the phase noise
sidebands. It is therefore possible to
measure the phase noise far below
the carrier level since the carrier does
not limit the range of our measure-
ment. Figure 4.3b shows the close-in
phase noise of a 20 MHz synthesizer.
Here, since we are measuring noise,
we use RMS averaging and the
Hanning window.
Dynamic Signal Analyzers offer
two main advantages over swept
signal analyzers in this application.
First, the phase noise can be meas-
ured much closer to the carrier. This
is because a good swept analyzer
can only resolve signals down to
about 1 Hz, while a Dynamic Signal
Analyzer can resolve signals to a few
millihertz. Secondly, the Dynamic
Signal Analyzer can determine the
complete phase noise spectrum in
a few minutes whereas a swept
analyzer would take hours.
Spectra such as phase noise are usually displayed against the logarithm of frequency instead of on a linear frequency scale. This is done in Figure 4.3c.
Because the FFT generates linearly
spaced filters, the filters are not
equally spaced on the display. It is
important to realize that no informa-
tion is missed by these seemingly
widely spaced filters. We recall on
a linear frequency scale that all the
filters overlapped so that no part of
the spectrum was missed. All we have
done here is to change the presenta-
tion of the same measurement.
Figure 4.3
Phase Noise
Measurement.
a) Block diagram of phase noise measurement
b) Phase noise of a frequency synthesizer -
RMS averaging and Hanning window used for noise measurements
c) Logarithmic frequency axis presentation of phase noise normalized to a
1 Hz bandwidth (power spectral density)
In addition, phase noise and other
noise measurements are often nor-
malized to the power that would be
measured in a 1 Hz wide square filter.
This measurement is called a power
spectral density and is often provided
on Dynamic Signal Analyzers. It sim-
ply changes the presentation on the
display to this desired form; the data
is exactly the same in Figures 4.3b
and 4.3c, but the latter is in the more
conventional presentation.
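The normalization itself is straightforward: the power measured in each line is divided by the line's equivalent noise bandwidth, giving the power that would fall in a 1 Hz wide square filter. The sketch below (ours, with arbitrary numbers) uses SciPy's two periodogram scalings to show the relationship:

```python
import numpy as np
from scipy.signal import periodogram

rng = np.random.default_rng(4)
fs = 1024.0
noise = rng.normal(0.0, 1.0, 8192)            # white noise, 1 V rms overall

# 'spectrum' scaling: power measured in each FFT line (V^2).
# 'density' scaling: that power divided by the line's noise bandwidth (V^2/Hz),
# i.e. normalized to a 1 Hz wide square filter.
f, p_line = periodogram(noise, fs=fs, window="hann", scaling="spectrum")
f, p_density = periodogram(noise, fs=fs, window="hann", scaling="density")

df = f[1] - f[0]
print(round(p_density.mean(), 4))             # about 2/fs: 1 V^2 spread over an fs/2 Hz band
print(round((p_line[1:] / p_density[1:]).mean(), 4),
      "=", round(1.5 * df, 4))                # the Hanning line's noise bandwidth in Hz
```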
Rotating Machinery Characterization
A rotating machine can be thought
of as a mechanical oscillator.* There-
fore, many of the measurements we
made for an electronic oscillator are
also important in characterizing
rotating machinery.
To characterize a rotating machine
we must first change its mechanical
vibration into an electrical signal.
This is often done by mounting an
accelerometer on a bearing housing
where the vibration generated by
shaft imbalance and bearing imper-
fections will be the highest. A typical
spectrum might look like Figure 4.4.
It is obviously much more complicat-
ed than the relatively clean spectrum
of the electronic oscillator we looked
at previously. There is also a great
deal of random noise; stray vibrations
from sources other than our motor
that the accelerometer picks up. The
effects of this stray vibration have
been minimized in Figure 4.4b by RMS averaging.
In Figure 4.5, we have used the Band
Selectable Analysis capability of our
analyzer to “zoom-in” and separate
the vibration of the stator at 120 Hz
from the vibration caused by the
rotor imbalance only a few tenths of
a Hertz lower in frequency.** This
ability to resolve closely spaced spec-
trum lines is crucial to our capability
to diagnose why the vibration levels
of a rotating machine are excessive.
The actions we would take to correct
an excessive vibration at 120 Hz are
quite different if it is caused by a
loose stator pole rather than an
imbalanced rotor.
Since the bearings are the most
unreliable part of most rotating
machines, we would also like to
check our spectrum for indications
of bearing failure. Any defect in a
bearing, say a spalling on the outer race of a ball bearing, will cause a
small vibration to occur each time
a ball passes it. This will produce
a characteristic frequency in the
vibration called the passing frequen-
cy. The frequency domain is ideal for
separating this small vibration from all the other frequencies present. This means that we can detect impending bearing failures and schedule a shutdown long before they become the loudly squealing problem that signals an immediate shutdown is necessary.

Figure 4.4 Spectrum of electrical motor vibration.
Figure 4.5 Stator vibration and rotor imbalance measurement with Band Selectable Analysis.
Figure 4.6 Vibration caused by small defect in the bearing.

* Or, if you prefer, electronic oscillators can be viewed as rotating machines which can go at millions of RPM’s.
** The rotor in an AC induction motor always runs at a slightly lower frequency than the excitation, an effect called slippage.
In most rotating machinery monitor-
ing situations, the absolute level of
each vibration component is not of
interest, just how they change with
time. The machine is measured when
new and throughout its life and these
successive spectra are compared.
If no catastrophic failures develop,
the spectrum components will
increase gradually as the machine
wears out. However, if an impending
bearing failure develops, the passing
frequency component corresponding
to the defect will increase suddenly
and dramatically.
An excellent way to store and com-
pare these spectra is by using a small
desktop computer. The spectra can
be easily entered into the computer
by an instrument interface like GPIB*
and compared with previous results
by a trend analysis program. This
avoids the tedious and error-prone
task of generating trend graphs by
hand. In addition, the computer can
easily check the trends against limits,
pointing out where vibration limits
are exceeded or where the trend is
for the limit to be exceeded in
the near future.
Desktop computers are also useful
when analyzing machinery that
normally operates over a wide range
of speeds. Severe vibration modes
can be excited when the machine
runs at critical speeds. A quick way
to determine if these vibrations are
a problem is to take a succession of
spectra as the machine runs up to
speed or coasts down. Each spectrum
shows the vibration components
of the machine as it passes through
an rpm range. If each spectrum is
transferred to the computer via
GPIB, the results can be processed
and displayed as in Figure 4.8. From
such a display it is easy to see shaft
imbalances, constant frequency vibra-
tions (from sources other than the
variable speed shaft) and structural
vibrations excited by the rotating
shaft. The computer gives the
capability of changing the display
presentation to other forms for
greater clarity. Because all the values
of the spectra are stored in memory,
precise values of the vibration com-
ponents can easily be determined.
In addition, signal processing can
be used to clarify the display. For
instance, in Figure 4.8 all signals
below -70 dB were ignored. This
eliminates meaningless noise from
the plot, clarifying the presentation.
So far in this chapter we have been
discussing only single channel fre-
quency domain measurements. Let us
now look at some measurements we
can make with a two channel
Dynamic Signal Analyzer.
Figure 4.7 Desktop computer system for monitoring rotating machinery vibration: motor and accelerometer, Dynamic Signal Analyzer, GPIB, computer.
Figure 4.8
Run up test
from the system
in Figure 4.7.
* General Purpose Interface Bus, Agilent’s
implementation of IEEE-488-1975.
Electronic Filter Characterization
In Section 6 of the last chapter, we
developed most of the principles we
need to characterize a low frequency
electronic filter. We show the test
setup we might use in Figure 4.9.
Because the filter is linear we can use
pseudo-random noise as the stimulus
for very fast test times. The uniform
window is used because the pseudo-
random noise is periodic in the time
record.* No averaging is needed
since the signal is periodic and rea-
sonably large. We should be careful,
as in the single channel case, to set
the input sensitivity for both channels
to the most sensitive position which
does not overload the analog to
digital converters.
With these considerations in mind,
we get a frequency response magni-
tude shown in Figure 4.10a and the
phase shown in Figure 4.10b. The
primary advantage of this measure-
ment over traditional swept analysis
techniques is speed. This measure-
ment can be made in 1/8 second with
a Dynamic Signal Analyzer, but would
take over 30 seconds with a swept
network analyzer. This speed
improvement is particularly impor-
tant when the filter under test is
being adjusted or when large volumes
are tested on a production line.
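Numerically, the measurement reduces to dividing the response spectrum by the stimulus spectrum for each time record. The sketch below does this for a simulated single-pole low-pass filter driven by a pseudo-random sequence that is periodic in the record; the sample rate, record length, and filter corner are assumptions made only for the illustration, and a uniform (that is, no) window is used exactly as described above.

```python
import numpy as np

fs = 1024.0                      # sample rate in Hz (assumed)
N = 1024                         # record length; the PRN period equals the record
rng = np.random.default_rng(2)

# Pseudo-random stimulus: flat magnitude, random phase, periodic in the record.
phases = np.exp(1j * 2 * np.pi * rng.random(N // 2 - 1))
X = np.concatenate(([0], phases, [1]))        # half-spectrum in rfft format
x = np.fft.irfft(X, n=N)

# A one-pole low-pass filter stands in for the device under test.
fc = 50.0
alpha = np.exp(-2 * np.pi * fc / fs)
x2 = np.tile(x, 2)                            # run two periods so the filter settles
y2 = np.zeros(2 * N)
for n in range(1, 2 * N):
    y2[n] = alpha * y2[n - 1] + (1 - alpha) * x2[n]
y = y2[N:]                                    # the second record is (nearly) periodic

# Uniform window (i.e. none); frequency response = Y(f)/X(f), record by record.
Xf = np.fft.rfft(x)
Yf = np.fft.rfft(y)
freqs = np.fft.rfftfreq(N, d=1 / fs)
H = Yf[1:] / Xf[1:]                           # skip DC, which is zero by construction

mag_db = 20 * np.log10(np.abs(H))
phase_deg = np.degrees(np.angle(H))
print(freqs[1:6], mag_db[:5], phase_deg[:5])
```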
Structural Frequency Response
The network under test does not have
to be electronic. In Figure 4.11, we
are measuring the frequency response
of a simple structure, in this case a
printed circuit board. Because this
structure behaves in a linear fashion,
Figure 4.10 Frequency response of electronic filter using PRN and uniform window.
Figure 4.9 Test setup to measure frequency response of filter.
* See the uniform window discussion in Section 6
of the previous chapter for details.
Figure 4.11 Frequency response test of a mechanical structure.
a) Frequency response magnitude b) Frequency response magnitude and phase
54
we can use pseudo-random noise as
a test stimulus. But we might also
desire to use true random noise,
swept-sine or an impulse (hammer
blow) as the stimulus. In Figure 4.12
we show each of these measurements
and the frequency responses. As we
can see, the results are all the same.
The frequency response of a linear
network is a property solely of the
network, independent of the
stimulus used.
Since all the stimulus techniques
in Figure 4.12 give the same results,
we can use whichever one is fastest
and easiest. Usually this is the impact
stimulus, since a shaker is not
required.
In Figures 4.11 and 4.12, we have
been measuring the acceleration of
the structure divided by the force
applied. This quantity is called
mechanical accelerance. To properly
scale the displays to the required
g’s/lb, we have entered the sensitivi-
ties of each transducer into the
analyzer by a feature called engineer-
ing units. Engineering units simply
changes the gain of each channel
of the analyzer so that the display
corresponds to the physical parame-
ter that the transducer is measuring.
Other frequency response measure-
ments besides mechanical acceler-
ance are often made on mechanical
structures. Figure 4.14 lists these
measurements. By changing transduc-
ers we could measure any of these
parameters. Or we can use the com-
putational capability of the Dynamic
Signal Analyzer to compute these
measurements from the accelerance
measurement we have
already made.
Figure 4.12 Frequency response of a linear network is independent of the stimulus used: a) Impact stimulus; b) Random noise stimulus; c) Swept sine stimulus.
Figure 4.13 Engineering units set input sensitivities to properly scale results.
55
For instance, we can compute
velocity by integrating our accelera-
tion measurement. Displacement is
a double integration of acceleration.
Many Dynamic Signal Analyzers have
the capability of integrating a trace
by simply pushing a button. There-
fore, we can easily generate all the
common mechanical measurements
without the need of many expensive
transducers.
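In the frequency domain each integration is just a division by j2πf, which is how the analyzer's integrate button works. The sketch below converts a fabricated accelerance trace (acceleration/force) into mobility (velocity/force) and compliance (displacement/force); the resonance shape is invented, the unit conversion from g to length/s² is omitted, and the DC line is zeroed to avoid dividing by zero.

```python
import numpy as np

def integrate_spectrum(spec, freqs_hz, times=1):
    """Divide a frequency-domain trace by (j*2*pi*f) once per integration.
    The f = 0 line is set to 0 to avoid division by zero."""
    w = 2j * np.pi * np.asarray(freqs_hz)
    out = np.array(spec, dtype=complex)
    for _ in range(times):
        out[1:] = out[1:] / w[1:]
        out[0] = 0.0
    return out

# Hypothetical accelerance spectrum (g/lb) on a 0-1000 Hz axis
# (conversion of g to a length/s^2 unit is omitted in this sketch).
freqs = np.linspace(0, 1000, 401)
accelerance = 1.0 / (1 + 1j * (freqs - 300) / 20)     # a single fake resonance

mobility = integrate_spectrum(accelerance, freqs, times=1)    # velocity / force
compliance = integrate_spectrum(accelerance, freqs, times=2)  # displacement / force
print(abs(mobility[120]), abs(compliance[120]))
```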
Coherence
Up to this point, we have been
measuring networks which we have
been able to isolate from the rest of
the world. That is, the only stimulus
to the network is what we apply and
the only response is that caused
by this controlled stimulus. This
situation is often encountered in test-
ing components, e.g., electric filters
or parts of a mechanical structure.
However, there are times when the
components we wish to test can not
be isolated from other disturbances.
For instance, in electronics we might
be trying to measure the frequency
response of a switching power supply
which has a very large component
at the switching frequency. Or we
might try to measure the frequency
response of part of a machine while
other machines are creating severe
vibration.
In Figure 4.15 we have simulated
these situations by adding noise and
a 1 kHz signal to the output of an
electronic filter. The measured
frequency response is shown in
Figure 4.16. RMS averaging has
reduced the noise contribution, but
has not completely eliminated the
1 kHz interference.* If we did not know of the interference, we would think that this filter has an additional resonance at 1 kHz. But Dynamic
Signal Analyzers can often make an
additional measurement that is not
available with traditional network an-
alyzers called coherence. Coherence
measures the power in the response
channel that is caused by the power
in the reference channel. It is the out-
put power that is coherent with the
input power.
Figure 4.17 shows the same frequency
response magnitude from Figure 4.16
and its coherence. The coherence
goes from 1 (all the output power at
Figure 4.14 Mechanical frequency response measurements.
Figure 4.15 Simulation of frequency response measurement in the presence of noise.
Figure 4.16 Magnitude of frequency response.
Figure 4.17 Magnitude and coherence of frequency response.
* Additional averaging would further reduce this
interference.
56
that frequency is caused by the input)
to 0 (none of the output power at that
frequency is caused by the input). We
can easily see from the coherence
function that the response at 1 kHz is
not caused by the input but by inter-
ference. However, our filter response
near 500 Hz has excellent coherence
and so the measurement here
is good.
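The situation of Figures 4.15 through 4.17 is easy to reproduce numerically, as in the sketch below: a simulated low-pass filter is driven by random noise, an unrelated 1 kHz tone plus noise is added at its output, and the frequency response and coherence are estimated with averaging. The filter, signal levels, and record length are all invented, and scipy's welch, csd, and coherence helpers stand in for the analyzer's internal averaging.

```python
import numpy as np
from scipy import signal

fs = 10240.0
t = np.arange(int(fs * 4)) / fs
rng = np.random.default_rng(3)

x = rng.standard_normal(t.size)                      # random-noise stimulus
b, a = signal.butter(4, 500.0, btype="low", fs=fs)   # stand-in for the filter under test
y = signal.lfilter(b, a, x)

# Interference unrelated to the stimulus: a 1 kHz tone plus broadband noise.
y = y + 0.05 * np.sin(2 * np.pi * 1000.0 * t) + 0.02 * rng.standard_normal(t.size)

nperseg = 1024                                       # one "time record" per average
f, Pxx = signal.welch(x, fs=fs, nperseg=nperseg)
_, Pxy = signal.csd(x, y, fs=fs, nperseg=nperseg)
H = Pxy / Pxx                                        # averaged frequency response estimate
_, coh = signal.coherence(x, y, fs=fs, nperseg=nperseg)

for target in (500.0, 1000.0):
    k = np.argmin(np.abs(f - target))
    print(f"{f[k]:7.1f} Hz   |H| = {abs(H[k]):.3f}   coherence = {coh[k]:.2f}")
```

Near 500 Hz the coherence stays close to 1 and the response estimate is trustworthy; at 1 kHz the coherence collapses, flagging the apparent resonance as interference rather than filter behavior.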
Section 2: Time Domain
Measurements
A Dynamic Signal Analyzer usually
has the capability of displaying the
time record on its screen. This is the
same waveform we would see with
an oscilloscope, a time domain view
of the input. For very low frequency
or single-shot phenomena, the digital time record storage eliminates the need for a storage oscilloscope. But
there are other time domain measure-
ments that a Dynamic Signal Analyzer
can make as well. These are called
correlation measurements. We will
begin this section by defining correla-
tion and then we will show how to
make these measurements with a
Dynamic Signal Analyzer.
Correlation is a measure of the
similarity between two quantities. To
understand the correlation between
two waveforms, let us start by multi-
plying these waveforms together at
each instant in time and adding up all
the products. If, as in Figure 4.18, the
waveforms are identical, every prod-
uct is positive and the resulting sum
is large. If however, as in Figure 4.19,
the two records are dissimilar, then
some of the products would be posi-
tive and some would be negative.
There would be a tendency for the
products to cancel, so the final sum
would be smaller.
Now consider the waveform in
Figure 4.20a, and the same waveform
shifted in time, Figure 4.20b. If the
time shift were zero, then we would
have the same conditions as before,
that is, the waveforms would be in
phase and the final sum of the prod-
ucts would be large. If the time shift
between the two waveforms is made
large however, the waveforms appear
dissimilar and the final sum is small.
Figure 4.18 Correlation of two identical signals.
Figure 4.19 Correlation of two different signals.
Figure 4.20 Correlation of time displaced signals.
57
Going one step farther, we can find
the average product for each time
shift by dividing each final sum by
the number of products contributing
to it. If we now plot the average prod-
uct as a function of time shift, the
resulting curve will be largest when
the time shift is zero and will dimin-
ish to zero as the time shift increases.
This curve is called the auto-correla-
tion function of the waveform. It is
a graph of the similarity (or correla-
tion) between a waveform and itself,
as a function of the time shift.
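The average-product description translates directly into code. The short sketch below computes the auto-correlation of a sampled record exactly as just described, by shifting, multiplying, and averaging; it is written for clarity rather than speed (analyzers actually use the FFT-based route discussed in Appendix A).

```python
import numpy as np

def auto_correlation(x, max_shift):
    """Average product of a record with a time-shifted copy of itself,
    for shifts of 0..max_shift samples (the brute-force definition)."""
    x = np.asarray(x, dtype=float)
    r = np.empty(max_shift + 1)
    for k in range(max_shift + 1):
        overlap = x[: len(x) - k] * x[k:]     # products at this time shift
        r[k] = overlap.mean()                 # average product
    return r

# A sine wave: its auto-correlation should itself be periodic in the shift.
n = np.arange(2048)
x = np.sin(2 * np.pi * n / 64)
r = auto_correlation(x, 200)
print(r[0], r[32], r[64])   # maximum at zero shift, minimum at half a period, maximum at one period
```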
The auto-correlation function is easi-
est to understand if we look at a few
examples. The random noise shown
in Figure 4.21 is not similar to itself
with any amount of time shift (after
all, it is random) so its auto-correla-
tion has only a single spike at the
point of 0 time shift. Pseudo-random
noise, however, repeats itself periodi-
cally, so when the time shift equals a
multiple of the period, the auto-corre-
lation repeats itself exactly as in
Figure 4.22. These are both special
cases of a more general statement;
the auto-correlation of any periodic
waveform is periodic and has the
same period as the waveform itself.
Figure 4.21 Auto correlation of random noise: a) Time record of random noise; b) Auto correlation of random noise.
Figure 4.22 Auto correlation of pseudo-random noise.
58
This can be useful when trying to
extract a signal hidden by noise.
Figure 4.24a shows what looks like
random noise, but there is actually a
low level sine wave buried in it. We
can see this in Figure 4.24b where
we have taken 100 averages of the
auto-correlation of this signal. The
noise has become the spike around
a time shift of zero whereas the
auto-correlation of the sine wave is
clearly visible, repeating itself with
the period of the sine wave.
If a trigger signal that is synchronous
with the sine wave is available, we
can extract the signal from the noise
by linear averaging as in the last
section. But the important point
about the auto-correlation function
is that no synchronizing trigger is
needed. In signal identification prob-
lems like radio astronomy and pas-
sive sonar, a synchronizing signal is
not available and so auto-correlation
is an important tool. The disadvan-
tage of auto-correlation is that the
input waveform is not preserved as it
is in linear averaging.
Since we can transform any time
domain waveform into the frequency
domain, the reader may wonder what
the frequency transform of the
auto-correlation function is. It turns out
to be the magnitude squared of the
spectrum of the input. Thus, there is
really no new information in the auto-
correlation function; we had the same
Figure 4.23 Auto-correlation of periodic waveforms.
Figure 4.24 Auto-correlation of a sine wave buried by noise.
59
information in the spectrum of the
signal. But as always, a change in
perspective between these two
domains often clarifies problems.
In general, impulsive type signals
like pulse trains, bearing ping or
gear chatter show up better in corre-
lation measurements, while signals
with several sine waves of different
frequencies like structural vibrations
and rotating machinery are clearer in
the frequency domain.
Cross Correlation
If auto-correlation is concerned with
the similarity between a signal and a
time shifted version of itself, then it
is reasonable to suppose that the
same technique could be used to
measure the similarity between two
non-identical waveforms. This is
called the cross correlation function.
If the same signal is present in both
waveforms, it will be reinforced in
the cross correlation function, while
any uncorrelated noise will be
reduced. In many network analysis
problems, the stimulus can be cross
correlated with the response to
reduce the effects of noise. Radar,
active sonar, room acoustics and
transmission path delays all are net-
work analysis problems where the
stimulus can be measured and used
to remove contaminating noise from
the response by cross correlation.*
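A minimal version of the radar example of Figure 4.25 is sketched below: a swept-sine 'transmission' is delayed, buried in noise, and the delay is recovered from the peak of the cross correlation. The sweep range, delay, and noise level are invented for the illustration.

```python
import numpy as np
from scipy import signal

fs = 10000.0
t = np.arange(int(fs * 1.0)) / fs
tx = signal.chirp(t, f0=100.0, f1=2000.0, t1=t[-1])    # 'transmitted' swept sine

delay_samples = 1234                                    # simulated transmission delay
rng = np.random.default_rng(4)
rx = 0.3 * np.roll(tx, delay_samples) + rng.standard_normal(t.size)  # 'received' signal

# Cross correlate received with transmitted; the peak location gives the delay.
r = signal.correlate(rx, tx, mode="full")
lags = signal.correlation_lags(rx.size, tx.size, mode="full")
est = lags[np.argmax(r)]
print(f"true delay: {delay_samples} samples, estimated: {est} samples "
      f"({est / fs * 1e3:.1f} ms)")
```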
Figure 4.25 Simulated radar cross correlation: a) 'Transmitted' signal, a swept-frequency sine wave; b) 'Received' signal, the swept sine wave plus noise; c) Result of cross correlation of the transmitted and received signals. Distance from left edge to peak represents transmission delay.
Figure 4.26 Cross correlation shows multiple transmission paths.
* The frequency transform of the cross correlation
function is the cross power spectrum, a function
discussed in Appendix A.
60
Section 3: Modal Domain
Measurements
In Section 1 we learned how to make
frequency domain measurements of
mechanical structures with Dynamic
Signal Analyzers. Let us now analyze
the behavior of a simple mechanical
structure to understand how to make
measurements in the modal domain.
We will test a simple metal plate
shown in Figure 4.27. The plate
is freely suspended using rubber
cords in order to isolate it from
any object which would alter its
properties.
The first decision we must make in
analyzing this structure is how many
measurements to make and where to
make them on the structure. There
are no firm rules for this decision;
good engineering judgment must be
exercised instead. Measuring too
many points makes the calculations
unnecessarily complex and time
consuming. Measuring too few points
can cause spatial aliasing; i.e., the
measurement points are so far apart
that high frequency bending modes
in the structure can not be measured
accurately. To decide on a reasonable
number of measurement points, take
a few trial frequency response mea-
surements of the structure to deter-
mine the highest significant resonant
frequencies present. The wavelength
can be determined empirically by
changing the distance between the
stimulus and the sensor until a full
360° phase shift has occurred from
the original measurement point.
Measurement point spacing should
be approximately one-quarter or less
of this wavelength.
Measurement points can be spaced
uniformly over the structure using
this guideline, but it may be desirable
to modify this procedure slightly. Few
structures are as uniform as this sim-
ple plate example,* but complicated
structures are made of simpler, more
uniform parts. The behavior of the
structure at the junction of these
parts is often of great interest, so
measurements should be made in
these critical areas as well.
Once we have decided on where the
measurements should be taken, we
number these measurement points
(the order can be arbitrary) and enter
the coordinates of each point into our
modal analyzer. This is necessary so
that the analyzer can correlate the
measurements we make with a
position on the structure to compute
the mode shapes.
The next decision we must make is
what signal we should use for a stim-
ulus. Our plate example is a linear
structure as it has no loose rivet
joints, non-linear damping materials,
or other non-linearities. Therefore,
we know that we can use any of the
stimuli described in Chapter 3,
Section 6. In this case, an impulse
would be a particularly good test
signal. We could supply the impulse
by hitting the structure with a ham-
mer equipped with a force transducer.
* If all structures were this simple, there would be
no need for modal analysis.
Figure 4.28 Spatial aliasing - too few measurement points lead to inaccurate analysis of high frequency bending mode.
Figure 4.27 Modal analysis example - determine the modes in this simple plate.
61
This is probably the easiest way to
excite the structure as a shaker and
its associated driver are not required.
As we saw in the last chapter, howev-
er, if the structure were non-linear,
then random noise would be a good
test signal. To supply random noise to
the structure we would need to use a
shaker. To keep our example more
general, we will use random noise as
a stimulus.
The shaker is connected firmly to the
plate via a load cell (force transduc-
er) and excited by the band-limited
noise source of the analyzer. Since
this force is the network stimulus, the
load cell output is connected through
a suitable amplifier to the reference
channel of the analyzer. To begin
the experiment, we connect an
accelerometer* to the plate at the
same point as the load cell. The
accelerometer measures the struc-
ture’s response and its output is con-
nected to the other analyzer channel.
Because we are using random noise,
we will use a Hanning window and
RMS averaging just as we did in the
previous section.
The resulting frequency response
of this measurement is shown in
Figure 4.29. The ratio of acceleration
to force in g’s/lb is plotted on the
vertical axis by the use of engineering
units, and the data shows a number
of distinct peaks and valleys at partic-
ular frequencies. We conclude that
the plate moves more freely when
subjected to energy at certain specific
frequencies than it does in response
to energy at other frequencies. We
recall that each of the resonant peaks
corresponds to a mode of vibration of
the structure.
Our simple plate supports a number
of different modes of vibration, all of
which are well separated in frequen-
cy. Structures with widely separated
modes of vibration are relatively
straightforward to analyze since
each mode can be treated as if it is
the only one present. Tightly spaced,
but lightly damped vibration modes
can also be easily analyzed if the
Band Selectable Analysis capability
is used to narrow the analyzer’s filter
sufficiently to resolve these resonanc-
es. Tightly spaced modes whose
damping is high enough to cause the
responses to overlap create computa-
tional difficulties in trying to separate
the effects of the vibration modes.
Fortunately, many structures fall into
the first two categories and so can be
easily analyzed.
Having inspected the measurement
and decided that it met all the above
criteria, we can store it away. We
store similar measurements at each
point by moving our accelerometer to
each numbered point. We will then
have all the measurement data we
need to fully characterize the struc-
ture in the modal domain.
Recall from Chapter 2 that each
frequency response will have the
same number of peaks, with the same
resonant frequencies and dampings.
The next task is to determine these
resonant frequency and damping
values for each resonance of interest.
We do this by retrieving our stored
frequency responses and, using a
curve-fitting routine, we calculate
the frequency and damping of each
resonance of interest.
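A complete modal curve fit is beyond a short example, but for the well separated, lightly damped resonances of our plate, the half-power (3 dB) bandwidth method gives a quick estimate of the same quantities. The sketch below pulls the resonant frequency and damping ratio of one peak from a frequency response magnitude; it is a simplified stand-in for the analyzer's curve-fitting routine, not a reproduction of it, and the single-mode response it works on is synthetic.

```python
import numpy as np

def half_power_fit(freqs, mag, lo, hi):
    """Estimate resonant frequency and damping ratio of a single peak
    lying between lo and hi (Hz) using the half-power bandwidth method."""
    freqs = np.asarray(freqs)
    mag = np.asarray(mag)
    band = (freqs >= lo) & (freqs <= hi)
    f_b, m_b = freqs[band], mag[band]

    k = np.argmax(m_b)
    f_n, peak = f_b[k], m_b[k]
    half_power = peak / np.sqrt(2.0)          # -3 dB level

    above = m_b >= half_power
    f1 = f_b[above][0]                         # lower -3 dB crossing (approximate)
    f2 = f_b[above][-1]                        # upper -3 dB crossing (approximate)
    zeta = (f2 - f1) / (2.0 * f_n)             # damping ratio; Q = 1 / (2*zeta)
    return f_n, zeta

# A synthetic single-mode frequency response for demonstration.
f = np.linspace(0, 500, 5001)
fn_true, zeta_true = 180.0, 0.02
H = 1.0 / np.sqrt((1 - (f / fn_true) ** 2) ** 2 + (2 * zeta_true * f / fn_true) ** 2)
print(half_power_fit(f, H, 150, 210))          # recovers roughly 180 Hz and 0.02
```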
With the structural information we
entered earlier, and the frequency and
damping of each vibration mode
which we have just determined, the
analyzer can calculate the mode
Figure 4.29 A frequency response of the plate.
* Displacement, velocity or strain transducers could also be
used, but accelerometers are often used because they are
small and light, and therefore do not affect the response of
the structure. In addition, they are easy to mount on the
structure, reducing the total measurement time.
62
shapes by curve fitting the responses
of each point with the measured
resonances. In Figure 4.30 we show
several mode shapes of our simple
rectangular plate. These mode shapes
can be animated on the display to
show the relative motion of the vari-
ous parts of the structure. The graphs
in Figure 4.30, however, only show
the maximum deflection.
Section 4: Summary
This note has attempted to demon-
strate the advantages of expanding
one’s analysis capabilities from the
time domain to the frequency and
modal domains. Problems that are
difficult in one domain are often
clarified by a change in perspective
to another domain. The Dynamic
Signal Analyzer is a particularly
good analysis tool at low frequencies.
Not only can it work in all three
domains, it is also very fast.
We have developed heuristic argu-
ments as to why Dynamic Signal
Analyzers have certain properties
because understanding the principles
of these analyzers is important in
making good measurements. Finally,
we have shown how Dynamic Signal
Analyzers can be used in a wide
range of measurement situations
using relatively simple examples.
We have used simple examples
throughout this text to develop
understanding of the analyzer and
its measurements, but it is by no
means limited to such cases. It is
a powerful instrument that, in the
hands of an operator who under-
stands the principles developed in
this note, can lead to new insights
and analysis of problems.
Figure 4.30 Mode shapes of a rectangular plate.
63
Appendix A
The Fourier Transform: A Mathematical Background

The Fourier Transform
The transformation from the time
domain to the frequency domain and
back again is based on the Fourier
Transform and its inverse. This Fourier
Transform pair is defined as:
Sx(f) = ∫_{−∞}^{∞} x(t) e^{−j2πft} dt    (Forward Transform)    A.1
x(t) = ∫_{−∞}^{∞} Sx(f) e^{j2πft} df    (Inverse Transform)    A.2
where
x(t) = time domain representation of the signal x
Sx(f) = frequency domain representation of the
signal x
j = √−1
The Fourier Transform is valid for both
periodic* and non-periodic x(t) that
satisfy certain minimum conditions.
All signals encountered in the real
world easily satisfy these requirements.
The Discrete Fourier Transform
To compute the Fourier Transform
digitally, we must perform a numerical
integration. This will give us an
approximation to a true Fourier
Transform called the Discrete
Fourier Transform.
There are three distinct difficulties
with computing the Fourier Transform.
First, the desired result is a continuous
function. We will only be able to
calculate its value at discrete points.
With this constraint our transform
becomes,
Sx(m∆f) = ∫_{−∞}^{∞} x(t) e^{−j2πm∆ft} dt    A.3
where m = 0, ±1, ±2, ...
and ∆f = frequency spacing of our lines
The second problem is that we must
evaluate an integral. This is equivalent
to computing the area under a curve.
We will do this by adding together the
areas of narrow rectangles under the
curve as in Figure A.1.
Our transform now becomes:
Sx(m∆f) ≈ ∆t Σ_{n=−∞}^{∞} x(n∆t) e^{−j2πm∆f n∆t}    A.4
where ∆t = time interval between samples
The last problem is that even with this
summation approximation to the integral,
we must sum samples over all time from
minus to plus infinity. We would have to
wait forever to get a result. Clearly then,
we must limit the transform to a finite
time interval.
Sx(m∆f) ≈ ∆t Σ_{n=0}^{N−1} x(n∆t) e^{−j2πm∆f n∆t}    A.5
As developed in Chapter 3, the frequency
spacing between the lines must be the
reciprocal of the time record length.
Therefore, we can simplify A.5 to our
formula for the Discrete Fourier
Transform, S'x.
S'x(m∆f) ≈ (T/N) Σ_{n=0}^{N−1} x(n∆t) e^{−j2πmn/N}    A.6
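Equation A.6 can be coded directly, which is a useful check on the development above. The sketch below evaluates the Discrete Fourier Transform by the defining sum and compares it with numpy's FFT (which omits the T/N scale factor); the record length and test signal are arbitrary.

```python
import numpy as np

def dft(x, T):
    """Discrete Fourier Transform by the defining sum of equation A.6:
    S'x(m*df) = (T/N) * sum_n x(n*dt) * exp(-j*2*pi*m*n/N)."""
    x = np.asarray(x, dtype=complex)
    N = x.size
    m = np.arange(N).reshape(-1, 1)
    n = np.arange(N).reshape(1, -1)
    return (T / N) * (x * np.exp(-2j * np.pi * m * n / N)).sum(axis=1)

N, T = 64, 1.0                       # 64 samples spanning a 1-second record
t = np.arange(N) * (T / N)
x = np.sin(2 * np.pi * 5 * t) + 0.5  # 5 Hz sine plus a DC offset

S_direct = dft(x, T)
S_fft = (T / N) * np.fft.fft(x)      # the FFT gives the same lines, far faster for large N
print(np.max(np.abs(S_direct - S_fft)))   # agreement to machine precision
```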
Figure A.1 Numerical integration used in the Fourier Transform.
* The Fourier Series is a special case of the
Fourier Transform.
64
The Fast Fourier Transform
The Fast Fourier Transform (FFT) is
an algorithm for computing this
Discrete Fourier Transform (DFT).
Before the development of the FFT
the DFT required excessive amounts
of computation time, particularly
when high resolution was required
(large N). The FFT forces one further
assumption: that N is a power of 2.
This allows certain symmetries to
occur reducing the number of calcula-
tions (specifically multiplications)
which have to be done.
It is important to recall here that the
Fast Fourier Transform is only an
approximation to the desired Fourier
Transform. First, the FFT only gives
samples of the Fourier Transform.
Second and more important, it is only
a transform of a finite time record of
the input.
Two Channel Frequency Domain
Measurements
As was pointed out in the main
text, two channel measurements are
often needed with a Dynamic Signal
Analyzer. In this section we will
mathematically define the two channel
transfer function and coherence
measurements introduced in
Chapter 4 and prove their more
important properties.
However, before we do this, we wish
to introduce one other function, the
Cross Power Spectrum, Gxy . This
function is not often used in measure-
ment situations, but is used internally
by Dynamic Signal Analyzers to
compute transfer functions and
coherence.
The Cross Power Spectrum, Gxy, is defined by taking
the Fourier Transform of each signal separately and
multiplying one result by the complex conjugate of the other:
Gxy(f) = Sx(f) Sy*(f)
where * indicates the complex conjugate of the function.
With this function, we can define the
Transfer Function, H(f), using the cross
power spectrum and the spectrum of the
input channel as follows:
H(f) = 〈Gyx(f)〉 / 〈Gxx(f)〉
where 〈 〉 denotes the average of the function.
At first glance it may seem more
appropriate to compute the transfer
function as follows:
|H(f)|² = Gyy(f) / Gxx(f)
This is the ratio of two single channel,
averaged measurements. Not only does
this measurement not give any phase
information, it also will be in error when
there is noise in the measurement. To see
why let us solve the equations for the
special case where noise is injected into
the output as in Figure A.2. The
output is:
Sy(f) = Sx(f)H(f) + Sn(f)
So
Gyy = SySy* = Gxx|H|² + SxHSn* + Sx*H*Sn + |Sn|²
Figure A.2 Transfer function measurements with noise present.
65
If we RMS average this result to try to
eliminate the noise, we find the SxSn
terms approach zero because Sx and Sn
are uncorrelated. However, the |Sn|²
term remains as an error and so we get
Gyy/Gxx = |H|² + |Sn|²/Gxx
Therefore, if we try to measure |H|² by
this single channel technique, our value
will be high by the noise-to-signal ratio.
If instead we average the cross power
spectrum we will eliminate this noise
error. Using the same example,
Gyx = SySx* = (SxH + Sn)Sx* = GxxH + SnSx*
so
Gyx/Gxx = H(f) + SnSx*/Gxx
Because Sn and Sx are uncorrelated,
the second term will average to zero,
making this function a much better
estimate of the transfer function.
The Coherence Function, γ2, is also
derived from the cross power
spectrum by:
γ²(f) = [Gyx(f) / Gxx(f)] · [Gxy*(f) / Gyy(f)] = |Gxy(f)|² / [Gxx(f) Gyy(f)]
As stated in the main text, the coherence
function is a measure of the power in
the output signal caused by the input.
If the coherence is 1, then all the output
power is caused by the input. If the
coherence is 0, then none of the output
is caused by the input. Let us now look
at the mathematics of the coherence
function to see why this is so.
As before, we will assume a measurement
condition like Figure A.2. Then, as we
have shown before,
Gyy = Gxx|H|² + SxHSn* + Sx*H*Sn + |Sn|²
Gyx = GxxH + SnSx*
As we average, the cross terms SnSx*
approach zero, assuming that the signal
and the noise are not related. So the
coherence becomes
γ²(f) = |GxxH|² / [Gxx(Gxx|H|² + |Sn|²)]
γ²(f) = |H|²Gxx / (|H|²Gxx + |Sn|²)
We see that if there is no noise, the
coherence function is unity. If there is
noise, then the coherence will be reduced.
Note also that the coherence is a function
of frequency. The coherence can be unity
at frequencies where there is no interference
and low where the noise is high.
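The definitions above map directly onto an averaging loop. The sketch below accumulates Gxx, Gyy and Gyx over a number of time records and forms the transfer function and coherence exactly as defined here; the simulated 'network' is just a short FIR filter with added output noise, and the record length and number of averages are arbitrary.

```python
import numpy as np

def averaged_tf_and_coherence(records_x, records_y):
    """Accumulate Gxx, Gyy and Gyx over the supplied time records and
    return H(f) = <Gyx>/<Gxx> and coherence = |<Gyx>|^2 / (<Gxx><Gyy>)."""
    Gxx = Gyy = Gyx = 0.0
    for x, y in zip(records_x, records_y):
        Sx = np.fft.rfft(x)
        Sy = np.fft.rfft(y)
        Gxx = Gxx + Sx * np.conj(Sx)
        Gyy = Gyy + Sy * np.conj(Sy)
        Gyx = Gyx + Sy * np.conj(Sx)
    H = Gyx / Gxx
    coherence = np.abs(Gyx) ** 2 / (np.real(Gxx) * np.real(Gyy))
    return H, coherence

# Simulated measurement: a small FIR filter plus output noise stands in for the network.
rng = np.random.default_rng(5)
N, n_avg = 1024, 64
xs = [rng.standard_normal(N) for _ in range(n_avg)]
ys = [np.convolve(x, [0.5, 0.3, 0.1], mode="same") + 0.1 * rng.standard_normal(N)
      for x in xs]

H, coh = averaged_tf_and_coherence(xs, ys)
print(np.abs(H[10]), coh[10])       # coherence close to, but below, 1 because of the noise
```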
Time Domain Measurements
Because it is sometimes easier to under-
stand measurement problems from the
perspective of the time domain, Dynamic
Signal Analyzers often include several time
domain measurements. These include auto
and cross correlation and impulse response.
Auto Correlation, Rxx(τ), is a comparison
of a signal with itself as a function of
time shift. It is defined as:
Rxx(τ) = lim_{T→∞} (1/T) ∫_T x(t) x(t+τ) dt
66
That is, the auto correlation can be
found by taking a signal and multiply-
ing it by the same signal displaced by
a time τ and averaging the product
over all time. However, most Dynamic
Signal Analyzers compute this quanti-
ty by taking advantage of its dual in
the frequency domain. It can be
shown that
Rxx(τ) = F⁻¹[Sx(f) Sx*(f)]
where F⁻¹ is the inverse Fourier
Transform and Sx is the Fourier
Transform of x(t).
Since both techniques yield the same
answer, the latter is usually chosen for
Dynamic Signal Analyzers because the
Frequency Transform algorithm is
already in the instrument and the
results can be computed faster
because fewer multiplications are
required.
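A quick numerical check of this duality is sketched below: the auto correlation computed through the inverse FFT of Sx·Sx* agrees with the brute-force average-product calculation, provided the same circular time shift convention is used for both. The random test record is arbitrary.

```python
import numpy as np

rng = np.random.default_rng(6)
x = rng.standard_normal(256)
N = x.size

# Frequency-domain route: Rxx = F^-1[ Sx * conj(Sx) ] (circular correlation).
Sx = np.fft.fft(x)
r_fft = np.fft.ifft(Sx * np.conj(Sx)).real / N

# Direct route, using the same circular time shift for comparison.
r_direct = np.array([(x * np.roll(x, -k)).mean() for k in range(N)])

print(np.max(np.abs(r_fft - r_direct)))    # agreement to machine precision
```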
Cross Correlation, Rxy(τ), is a
comparison of two signals as a
function of a time shift between them.
It is defined as:
Rxy(τ) = lim_{T→∞} (1/T) ∫_T x(t) y(t+τ) dt
As in auto correlation, a Dynamic
Signal Analyzer computes this quanti-
ty indirectly, in this case from the
cross power spectrum.
Rxy(τ)=F -1[Gxy]
Lastly, the Impulse Response, h(t), is
the dual of the transfer function,
h(t) = F⁻¹[H(f)]
Note that because the transfer func-
tion normalizes the stimulus, the
impulse response can be computed no
matter what stimulus is actually used
on the network.
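As a final sketch, the impulse response falls out of a measured frequency response with a single inverse FFT. The transfer function below is an assumed first-order low-pass rather than a measured one, so the result should simply look like a decaying exponential.

```python
import numpy as np

fs = 1000.0                          # sample rate of the underlying time records (assumed)
N = 512                              # time record length
freqs = np.fft.rfftfreq(N, d=1 / fs)

# Assumed measured transfer function: a 50 Hz first-order low-pass.
H = 1.0 / (1 + 1j * freqs / 50.0)

h = np.fft.irfft(H, n=N) * fs        # scale by fs so samples approximate the continuous h(t)
t = np.arange(N) / fs
print(t[:4], h[:4])                  # should decay roughly as exp(-2*pi*50*t)
```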
Appendix B
Bibliography

Bendat, Julius S. and Piersol, Allan G., "Random Data: Analysis and Measurement Procedures", Wiley-Interscience, New York, 1971.
Bendat, Julius S. and Piersol, Allan G., "Engineering Applications of Correlation and Spectral Analysis", Wiley-Interscience, New York, 1980.
Bracewell, R., "The Fourier Transform and its Applications", McGraw-Hill, 1965.
Cooley, J.W. and Tukey, J.W., "An Algorithm for the Machine Calculation of Complex Fourier Series", Mathematics of Computation, Vol. 19, No. 90, p. 297, April 1965.
McKinney, W., "Band Selectable Fourier Analysis", Hewlett-Packard Journal, April 1975, pp. 20-24.
Otnes, R.K. and Enochson, L., "Digital Time Series Analysis", John Wiley, 1972.
Potter, R. and Lortscher, J., "What in the World is Overlap Processing", Hewlett-Packard Santa Clara DSA/Laser Division "Update", Sept. 1978.
Ramsey, K.A., "Effective Measurements for Structural Dynamics Testing", Part 1, Sound and Vibration Magazine, November 1975, pp. 24-35.
Roth, P., "Effective Measurements Using Digital Signal Analysis", IEEE Spectrum, April 1971, pp. 62-70.
Roth, P., "Digital Fourier Analysis", Hewlett-Packard Journal, June 1970.
Welch, Peter D., "The Use of Fast Fourier Transform for the Estimation of Power Spectra: A Method Based on Time Averaging Over Short, Modified Periodograms", IEEE Transactions on Audio and Electro-acoustics, Vol. AU-15, No. 2, June 1967, pp. 70-73.
67
Index
Aliasing 29
Anti-Alias Filter 31
Auto-Correlation 57, 65
Band Selectable Analysis 33, 51, 61
Coherence 55, 65
Correlation 56, 65
Cross Correlation 59, 66
Cross Power Spectrum 64
Damping 17, 61
Decibels (dB) 8, 49
Digital Filter 32
Discrete Fourier Transform 63
Engineering Units 55
Fast Fourier Transform (FFT) 25, 64
Flat-top Window or Passband 38, 49
Force Window 39
Fourier Transform 63
Frequency Response 15, 61
Frequency Response Analyzer 19
Gain-Phase Meter 19
Hanning Window or Passband 37, 49, 60
Impulse Response 66
Leakage 36
Linear Averaging 44
Linearity 12
Lines 25
Logarithmic Frequency Display 50
Network Analysis 11
Network Analyzer 19
Nyquist Criterion 31
Oscillograph 6
Oscilloscope 6
Parallel Filter Spectrum Analyzer 17
Periodic Random Noise 41, 53
Phase Noise 49
Power Spectral Density 51
Pseudo Random Noise 41, 53
Q (of resonance) 17
Random Noise 41
Real Time 45
Rectangular Window 37
Response Window 39
RMS Averaging 43, 46, 48, 50
Self-Windowing Functions 37
Spectrum 7
Spectrum Analyzer 17
Spectrum Component 7
Stimulus/Response Testing 11
Strip Chart Recorders 5
Swept Filter Spectrum Analyzer 18
Time Record 26
Transfer Function 42, 64
Transient Response 16
Tuned Network Analyzer 19
Uniform Window or Passband 37
Vibration Mode 20
Windowing 34
Zoom 33
Product specifications and descriptions in this
document subject to change without notice.
Copyright © 1994, 1995, 1999,
2000 Agilent Technologies
Printed in U.S.A. 6/00
5952-8898E
Agilent Technologies' Test and Measurement
Support, Services, and Assistance
Agilent Technologies aims to maximize the
value you receive, while minimizing your risk
and problems. We strive to ensure that you get the
test and measurement capabilities you paid for
and obtain the support you need. Our extensive
support resources and services can help
you choose the right Agilent products for your
applications and apply them successfully. Every
instrument and system we sell has a global war-
ranty. Support is available for at least five years
beyond the production life of the product. Two
concepts underlie Agilent's overall support
policy: "Our Promise" and "Your Advantage."
Our Promise
Our Promise means your Agilent test and
measurement equipment will meet its advertised
performance and functionality. When you are
choosing new equipment, we will help you with
product information, including realistic perform-
ance specifications and practical recommenda-
tions from experienced test engineers. When
you use Agilent equipment, we can verify that it
works properly, help with product operation, and
provide basic measurement assistance for the use
of specified capabilities, at no extra cost upon
request. Many self-help tools are available.
Your Advantage
Your Advantage means that Agilent offers a wide
range of additional expert test and measurement
services, which you can purchase according to
your unique technical and business needs. Solve
problems efficiently and gain a competitive edge
by contracting with us for calibration, extra-cost
upgrades, out-of-warranty repairs, and on-site
education and training, as well as design, system
integration, project management, and other
professional engineering services. Experienced
Agilent engineers and technicians worldwide can
help you maximize your productivity, optimize the
return on investment of your Agilent instruments
and systems, and obtain dependable measurement
accuracy for the life of those products.
For More Assistance with Your
Test & Measurement Needs go to
www.agilent.com/find/assist
Or contact the test and measurement experts
at Agilent Technologies
(During normal business hours)
United States:
(tel) 1 800 452 4844
Canada:
(tel) 1 877 894 4414
(fax) (905) 206 4120
Europe:
(tel) (31 20) 547 2323
(fax) (31 20) 547 2390
Japan:
(tel) (81) 426 56 7832
(fax) (81) 426 56 7840
Latin America:
(tel) (305) 267 4245
(fax) (305) 267 4286
Australia:
(tel) 1 800 629 485
(fax) (61 3) 9272 0749
New Zealand:
(tel) 0 800 738 378
(fax) 64 4 495 8950
Asia Pacific:
(tel) (852) 3197 7777
(fax) (852) 2506 9284
Ad

More Related Content

What's hot (19)

Comparative analysis on an exponential form of pulse with an integer and non-...
Comparative analysis on an exponential form of pulse with an integer and non-...Comparative analysis on an exponential form of pulse with an integer and non-...
Comparative analysis on an exponential form of pulse with an integer and non-...
IJERA Editor
 
Investigations on real time RSSI based outdoor target tracking using kalman f...
Investigations on real time RSSI based outdoor target tracking using kalman f...Investigations on real time RSSI based outdoor target tracking using kalman f...
Investigations on real time RSSI based outdoor target tracking using kalman f...
IJECEIAES
 
Book 3: “Antennae Techniques”
Book 3: “Antennae Techniques”Book 3: “Antennae Techniques”
Book 3: “Antennae Techniques”
Timour Chaikhraziev
 
Radar Systems Engineering L7P2
Radar Systems Engineering L7P2Radar Systems Engineering L7P2
Radar Systems Engineering L7P2
Lorentz Taillade van Vogt
 
Fault Location Of Transmission Line
Fault Location Of Transmission LineFault Location Of Transmission Line
Fault Location Of Transmission Line
CHIRANJEEB DASH
 
A Virtual Infrastructure for Mitigating Typical Challenges in Sensor Networks
A Virtual Infrastructure for Mitigating Typical Challenges in Sensor NetworksA Virtual Infrastructure for Mitigating Typical Challenges in Sensor Networks
A Virtual Infrastructure for Mitigating Typical Challenges in Sensor Networks
Michele Weigle
 
Max_Poster_FINAL
Max_Poster_FINALMax_Poster_FINAL
Max_Poster_FINAL
Max Robertson
 
Multi-mission Phased Array Radar
Multi-mission Phased Array RadarMulti-mission Phased Array Radar
Multi-mission Phased Array Radar
Madiha Tahseen Shaik
 
Report TxLab - Pietro Santoro
Report TxLab - Pietro SantoroReport TxLab - Pietro Santoro
Report TxLab - Pietro Santoro
Pietro Santoro
 
Ppt fault location
Ppt fault locationPpt fault location
Ppt fault location
mohammed shareef
 
Hl3413921395
Hl3413921395Hl3413921395
Hl3413921395
IJERA Editor
 
Transient Monitoring Function based Fault Classifier for Relaying Applications
Transient Monitoring Function based Fault Classifier for Relaying Applications Transient Monitoring Function based Fault Classifier for Relaying Applications
Transient Monitoring Function based Fault Classifier for Relaying Applications
IJECEIAES
 
Enhanced Mobile Node Tracking With Received Signal Strength in Wireless Senso...
Enhanced Mobile Node Tracking With Received Signal Strength in Wireless Senso...Enhanced Mobile Node Tracking With Received Signal Strength in Wireless Senso...
Enhanced Mobile Node Tracking With Received Signal Strength in Wireless Senso...
IOSR Journals
 
IRJET-Weather Radar Range Velocity Ambiguity Analysis using Staggered PRT
IRJET-Weather Radar Range Velocity Ambiguity Analysis using Staggered PRTIRJET-Weather Radar Range Velocity Ambiguity Analysis using Staggered PRT
IRJET-Weather Radar Range Velocity Ambiguity Analysis using Staggered PRT
IRJET Journal
 
Radar 2009 a 4 radar equation
Radar 2009 a  4 radar equationRadar 2009 a  4 radar equation
Radar 2009 a 4 radar equation
Forward2025
 
Radar rangeequation(2)2011
Radar rangeequation(2)2011Radar rangeequation(2)2011
Radar rangeequation(2)2011
modddar
 
Radar 2009 a 3 review of signals, systems, and dsp
Radar 2009 a  3 review of signals, systems, and dspRadar 2009 a  3 review of signals, systems, and dsp
Radar 2009 a 3 review of signals, systems, and dsp
subha5
 
False Node Recovery Algorithm for a Wireless Sensor Network
False Node Recovery Algorithm for a Wireless Sensor NetworkFalse Node Recovery Algorithm for a Wireless Sensor Network
False Node Recovery Algorithm for a Wireless Sensor Network
Radita Apriana
 
Phased-Array Radar Talk Jorge Salazar
Phased-Array Radar Talk Jorge SalazarPhased-Array Radar Talk Jorge Salazar
Phased-Array Radar Talk Jorge Salazar
Jorge L. Salazar-Cerreño
 
Comparative analysis on an exponential form of pulse with an integer and non-...
Comparative analysis on an exponential form of pulse with an integer and non-...Comparative analysis on an exponential form of pulse with an integer and non-...
Comparative analysis on an exponential form of pulse with an integer and non-...
IJERA Editor
 
Investigations on real time RSSI based outdoor target tracking using kalman f...
Investigations on real time RSSI based outdoor target tracking using kalman f...Investigations on real time RSSI based outdoor target tracking using kalman f...
Investigations on real time RSSI based outdoor target tracking using kalman f...
IJECEIAES
 
Book 3: “Antennae Techniques”
Book 3: “Antennae Techniques”Book 3: “Antennae Techniques”
Book 3: “Antennae Techniques”
Timour Chaikhraziev
 
Fault Location Of Transmission Line
Fault Location Of Transmission LineFault Location Of Transmission Line
Fault Location Of Transmission Line
CHIRANJEEB DASH
 
A Virtual Infrastructure for Mitigating Typical Challenges in Sensor Networks
A Virtual Infrastructure for Mitigating Typical Challenges in Sensor NetworksA Virtual Infrastructure for Mitigating Typical Challenges in Sensor Networks
A Virtual Infrastructure for Mitigating Typical Challenges in Sensor Networks
Michele Weigle
 
Report TxLab - Pietro Santoro
Report TxLab - Pietro SantoroReport TxLab - Pietro Santoro
Report TxLab - Pietro Santoro
Pietro Santoro
 
Transient Monitoring Function based Fault Classifier for Relaying Applications
Transient Monitoring Function based Fault Classifier for Relaying Applications Transient Monitoring Function based Fault Classifier for Relaying Applications
Transient Monitoring Function based Fault Classifier for Relaying Applications
IJECEIAES
 
Enhanced Mobile Node Tracking With Received Signal Strength in Wireless Senso...
Enhanced Mobile Node Tracking With Received Signal Strength in Wireless Senso...Enhanced Mobile Node Tracking With Received Signal Strength in Wireless Senso...
Enhanced Mobile Node Tracking With Received Signal Strength in Wireless Senso...
IOSR Journals
 
IRJET-Weather Radar Range Velocity Ambiguity Analysis using Staggered PRT
IRJET-Weather Radar Range Velocity Ambiguity Analysis using Staggered PRTIRJET-Weather Radar Range Velocity Ambiguity Analysis using Staggered PRT
IRJET-Weather Radar Range Velocity Ambiguity Analysis using Staggered PRT
IRJET Journal
 
Radar 2009 a 4 radar equation
Radar 2009 a  4 radar equationRadar 2009 a  4 radar equation
Radar 2009 a 4 radar equation
Forward2025
 
Radar rangeequation(2)2011
Radar rangeequation(2)2011Radar rangeequation(2)2011
Radar rangeequation(2)2011
modddar
 
Radar 2009 a 3 review of signals, systems, and dsp
Radar 2009 a  3 review of signals, systems, and dspRadar 2009 a  3 review of signals, systems, and dsp
Radar 2009 a 3 review of signals, systems, and dsp
subha5
 
False Node Recovery Algorithm for a Wireless Sensor Network
False Node Recovery Algorithm for a Wireless Sensor NetworkFalse Node Recovery Algorithm for a Wireless Sensor Network
False Node Recovery Algorithm for a Wireless Sensor Network
Radita Apriana
 

Similar to Fundamentals of dsp (20)

I0423056065
I0423056065I0423056065
I0423056065
ijceronline
 
Comparative Analysis of Natural Frequency of Transverse Vibration of a Cantil...
Comparative Analysis of Natural Frequency of Transverse Vibration of a Cantil...Comparative Analysis of Natural Frequency of Transverse Vibration of a Cantil...
Comparative Analysis of Natural Frequency of Transverse Vibration of a Cantil...
IRJET Journal
 
A230108
A230108A230108
A230108
inventionjournals
 
IRJET- Compressed Sensing based Modified Orthogonal Matching Pursuit in DTTV ...
IRJET- Compressed Sensing based Modified Orthogonal Matching Pursuit in DTTV ...IRJET- Compressed Sensing based Modified Orthogonal Matching Pursuit in DTTV ...
IRJET- Compressed Sensing based Modified Orthogonal Matching Pursuit in DTTV ...
IRJET Journal
 
Earthquake Seismic Sensor Calibration
Earthquake Seismic Sensor CalibrationEarthquake Seismic Sensor Calibration
Earthquake Seismic Sensor Calibration
Ali Osman Öncel
 
Salamanca_Research_Paper
Salamanca_Research_PaperSalamanca_Research_Paper
Salamanca_Research_Paper
Carlos Salamanca
 
An Algorithm Based On Discrete Wavelet Transform For Faults Detection, Locati...
An Algorithm Based On Discrete Wavelet Transform For Faults Detection, Locati...An Algorithm Based On Discrete Wavelet Transform For Faults Detection, Locati...
An Algorithm Based On Discrete Wavelet Transform For Faults Detection, Locati...
paperpublications3
 
Differential equation fault location algorithm with harmonic effects in power...
Differential equation fault location algorithm with harmonic effects in power...Differential equation fault location algorithm with harmonic effects in power...
Differential equation fault location algorithm with harmonic effects in power...
TELKOMNIKA JOURNAL
 
F41043841
F41043841F41043841
F41043841
IJERA Editor
 
Fundamentals of vibration_measurement_and_analysis_explained
Fundamentals of vibration_measurement_and_analysis_explainedFundamentals of vibration_measurement_and_analysis_explained
Fundamentals of vibration_measurement_and_analysis_explained
vibratiob
 
Development of Seakeeping Test and Data Processing System
Development of Seakeeping Test and Data Processing SystemDevelopment of Seakeeping Test and Data Processing System
Development of Seakeeping Test and Data Processing System
ijceronline
 
Shunt Faults Detection on Transmission Line by Wavelet
Shunt Faults Detection on Transmission Line by WaveletShunt Faults Detection on Transmission Line by Wavelet
Shunt Faults Detection on Transmission Line by Wavelet
paperpublications3
 
Ijarcet vol-2-issue-7-2273-2276
Ijarcet vol-2-issue-7-2273-2276Ijarcet vol-2-issue-7-2273-2276
Ijarcet vol-2-issue-7-2273-2276
Editor IJARCET
 
Ijarcet vol-2-issue-7-2273-2276
Ijarcet vol-2-issue-7-2273-2276Ijarcet vol-2-issue-7-2273-2276
Ijarcet vol-2-issue-7-2273-2276
Editor IJARCET
 
GROUP1_INSTRU-SENSORS-CALI_FEEDBACK.pptx
GROUP1_INSTRU-SENSORS-CALI_FEEDBACK.pptxGROUP1_INSTRU-SENSORS-CALI_FEEDBACK.pptx
GROUP1_INSTRU-SENSORS-CALI_FEEDBACK.pptx
Ryan Cortes
 
30 9762 extension paper id 0030 (edit i)
30 9762 extension paper id 0030 (edit i)30 9762 extension paper id 0030 (edit i)
30 9762 extension paper id 0030 (edit i)
IAESIJEECS
 
Researches
ResearchesResearches
Researches
Mohamed Saddek
 
B041221317
B041221317B041221317
B041221317
IOSR-JEN
 
Ih3514441454
Ih3514441454Ih3514441454
Ih3514441454
IJERA Editor
 
Intro tosignalprocessing
 Intro tosignalprocessing Intro tosignalprocessing
Intro tosignalprocessing
JuanJTovarP
 
Comparative Analysis of Natural Frequency of Transverse Vibration of a Cantil...
Comparative Analysis of Natural Frequency of Transverse Vibration of a Cantil...Comparative Analysis of Natural Frequency of Transverse Vibration of a Cantil...
Comparative Analysis of Natural Frequency of Transverse Vibration of a Cantil...
IRJET Journal
 
IRJET- Compressed Sensing based Modified Orthogonal Matching Pursuit in DTTV ...
IRJET- Compressed Sensing based Modified Orthogonal Matching Pursuit in DTTV ...IRJET- Compressed Sensing based Modified Orthogonal Matching Pursuit in DTTV ...
IRJET- Compressed Sensing based Modified Orthogonal Matching Pursuit in DTTV ...
IRJET Journal
 
Earthquake Seismic Sensor Calibration
Earthquake Seismic Sensor CalibrationEarthquake Seismic Sensor Calibration
Earthquake Seismic Sensor Calibration
Ali Osman Öncel
 
An Algorithm Based On Discrete Wavelet Transform For Faults Detection, Locati...
An Algorithm Based On Discrete Wavelet Transform For Faults Detection, Locati...An Algorithm Based On Discrete Wavelet Transform For Faults Detection, Locati...
An Algorithm Based On Discrete Wavelet Transform For Faults Detection, Locati...
paperpublications3
 
Differential equation fault location algorithm with harmonic effects in power...
Differential equation fault location algorithm with harmonic effects in power...Differential equation fault location algorithm with harmonic effects in power...
Differential equation fault location algorithm with harmonic effects in power...
TELKOMNIKA JOURNAL
 
Fundamentals of vibration_measurement_and_analysis_explained
Fundamentals of vibration_measurement_and_analysis_explainedFundamentals of vibration_measurement_and_analysis_explained
Fundamentals of vibration_measurement_and_analysis_explained
vibratiob
 
Development of Seakeeping Test and Data Processing System
Development of Seakeeping Test and Data Processing SystemDevelopment of Seakeeping Test and Data Processing System
Development of Seakeeping Test and Data Processing System
ijceronline
 
Shunt Faults Detection on Transmission Line by Wavelet
Shunt Faults Detection on Transmission Line by WaveletShunt Faults Detection on Transmission Line by Wavelet
Shunt Faults Detection on Transmission Line by Wavelet
paperpublications3
 
Ijarcet vol-2-issue-7-2273-2276
Ijarcet vol-2-issue-7-2273-2276Ijarcet vol-2-issue-7-2273-2276
Ijarcet vol-2-issue-7-2273-2276
Editor IJARCET
 
Ijarcet vol-2-issue-7-2273-2276
Ijarcet vol-2-issue-7-2273-2276Ijarcet vol-2-issue-7-2273-2276
Ijarcet vol-2-issue-7-2273-2276
Editor IJARCET
 
GROUP1_INSTRU-SENSORS-CALI_FEEDBACK.pptx
GROUP1_INSTRU-SENSORS-CALI_FEEDBACK.pptxGROUP1_INSTRU-SENSORS-CALI_FEEDBACK.pptx
GROUP1_INSTRU-SENSORS-CALI_FEEDBACK.pptx
Ryan Cortes
 
30 9762 extension paper id 0030 (edit i)
30 9762 extension paper id 0030 (edit i)30 9762 extension paper id 0030 (edit i)
30 9762 extension paper id 0030 (edit i)
IAESIJEECS
 
B041221317
B041221317B041221317
B041221317
IOSR-JEN
 
Intro tosignalprocessing
 Intro tosignalprocessing Intro tosignalprocessing
Intro tosignalprocessing
JuanJTovarP
 
Ad

Recently uploaded (20)

Beyond fake greening and potential vegetation: parcel nested dynamics of urba...
Beyond fake greening and potential vegetation: parcel nested dynamics of urba...Beyond fake greening and potential vegetation: parcel nested dynamics of urba...
Beyond fake greening and potential vegetation: parcel nested dynamics of urba...
Yuji Hara
 
5. Substance Addiction_Facilitator's Guide.pdf
5. Substance Addiction_Facilitator's Guide.pdf5. Substance Addiction_Facilitator's Guide.pdf
5. Substance Addiction_Facilitator's Guide.pdf
Dilip677056
 
The Role of Technology in Modern Flood Risk Management Services
The Role of Technology in Modern Flood Risk Management ServicesThe Role of Technology in Modern Flood Risk Management Services
The Role of Technology in Modern Flood Risk Management Services
wrightcontractingseo
 
10-ISO-45001-Overview-Presentation-Sample.pptx
10-ISO-45001-Overview-Presentation-Sample.pptx10-ISO-45001-Overview-Presentation-Sample.pptx
10-ISO-45001-Overview-Presentation-Sample.pptx
RaedRjab
 
2. Plastic_ Facilitator's Guide.pdf camp
2. Plastic_ Facilitator's Guide.pdf camp2. Plastic_ Facilitator's Guide.pdf camp
2. Plastic_ Facilitator's Guide.pdf camp
Dilip677056
 
Environmental Studies : Types of Ecosystem.pptx
Environmental Studies : Types of Ecosystem.pptxEnvironmental Studies : Types of Ecosystem.pptx
Environmental Studies : Types of Ecosystem.pptx
vvsasane
 
lecture2faafffffffffffffffaaaaaaaaaaa4.ppt
lecture2faafffffffffffffffaaaaaaaaaaa4.pptlecture2faafffffffffffffffaaaaaaaaaaa4.ppt
lecture2faafffffffffffffffaaaaaaaaaaa4.ppt
ShohidulIslamSovon
 
270297488455520861-genetic-transformation.ppt
270297488455520861-genetic-transformation.ppt270297488455520861-genetic-transformation.ppt
270297488455520861-genetic-transformation.ppt
mishraji175a
 
SROI Principles are essential for creating an assessment to ensure that socia...
SROI Principles are essential for creating an assessment to ensure that socia...SROI Principles are essential for creating an assessment to ensure that socia...
SROI Principles are essential for creating an assessment to ensure that socia...
dedeabdulhasyir
 
4. Soil Health_Facilitator's Guide (2).pdf
4. Soil Health_Facilitator's Guide (2).pdf4. Soil Health_Facilitator's Guide (2).pdf
4. Soil Health_Facilitator's Guide (2).pdf
Dilip677056
 
Disaster-Mitigation A natural disaster .
Disaster-Mitigation A natural disaster  .Disaster-Mitigation A natural disaster  .
Disaster-Mitigation A natural disaster .
princeprinceantil3
 
Ethics in Environmental Sustainability introduction
Ethics in Environmental Sustainability introductionEthics in Environmental Sustainability introduction
Ethics in Environmental Sustainability introduction
ishaverma244
 
Critical Assessment of the Challenges Faced by Tropical Forest Ecosystem Func...
Critical Assessment of the Challenges Faced by Tropical Forest Ecosystem Func...Critical Assessment of the Challenges Faced by Tropical Forest Ecosystem Func...
Critical Assessment of the Challenges Faced by Tropical Forest Ecosystem Func...
Professional Content Writing's
 
AIR POLLUTION_Khalifa_Alblooshi_Assingment3.pdf
AIR POLLUTION_Khalifa_Alblooshi_Assingment3.pdfAIR POLLUTION_Khalifa_Alblooshi_Assingment3.pdf
AIR POLLUTION_Khalifa_Alblooshi_Assingment3.pdf
Khalifa Alblooshi
 
Organic Slow Release Fertiliser — Smarter Nutrition for Healthier Soil.pptx
Organic Slow Release Fertiliser — Smarter Nutrition for Healthier Soil.pptxOrganic Slow Release Fertiliser — Smarter Nutrition for Healthier Soil.pptx
Organic Slow Release Fertiliser — Smarter Nutrition for Healthier Soil.pptx
soildynamies12
 
Land Utilization (Agricultural, Pastoral, Horticultural.pdf
Land Utilization (Agricultural, Pastoral, Horticultural.pdfLand Utilization (Agricultural, Pastoral, Horticultural.pdf
Land Utilization (Agricultural, Pastoral, Horticultural.pdf
Nistarini College, Purulia (W.B) India
 
Bonikro-Environment Month Ending June 17_Noise & Dust.pptx
Bonikro-Environment Month Ending June 17_Noise & Dust.pptxBonikro-Environment Month Ending June 17_Noise & Dust.pptx
Bonikro-Environment Month Ending June 17_Noise & Dust.pptx
SaturninKouam
 
Fest Brochure for college any kind of event
Fest Brochure for college any kind of eventFest Brochure for college any kind of event
Fest Brochure for college any kind of event
thakurrajsinghrajput
 
The Great Alone - Kristin Hannah.pdfThe Great Alone - Kristin Hannah.pdf
The Great Alone - Kristin Hannah.pdfThe Great Alone - Kristin Hannah.pdfThe Great Alone - Kristin Hannah.pdfThe Great Alone - Kristin Hannah.pdf
The Great Alone - Kristin Hannah.pdfThe Great Alone - Kristin Hannah.pdf
lynnet17122
 
Study of Certain Behavior of Rhesus Macaques
Study of Certain Behavior of Rhesus MacaquesStudy of Certain Behavior of Rhesus Macaques
Study of Certain Behavior of Rhesus Macaques
Rahim Shaikh
 
Beyond fake greening and potential vegetation: parcel nested dynamics of urba...
Beyond fake greening and potential vegetation: parcel nested dynamics of urba...Beyond fake greening and potential vegetation: parcel nested dynamics of urba...
Beyond fake greening and potential vegetation: parcel nested dynamics of urba...
Yuji Hara
 
5. Substance Addiction_Facilitator's Guide.pdf
5. Substance Addiction_Facilitator's Guide.pdf5. Substance Addiction_Facilitator's Guide.pdf
5. Substance Addiction_Facilitator's Guide.pdf
Dilip677056
 
The Role of Technology in Modern Flood Risk Management Services
The Role of Technology in Modern Flood Risk Management ServicesThe Role of Technology in Modern Flood Risk Management Services
The Role of Technology in Modern Flood Risk Management Services
wrightcontractingseo
 
10-ISO-45001-Overview-Presentation-Sample.pptx
10-ISO-45001-Overview-Presentation-Sample.pptx10-ISO-45001-Overview-Presentation-Sample.pptx
10-ISO-45001-Overview-Presentation-Sample.pptx
RaedRjab
 
2. Plastic_ Facilitator's Guide.pdf camp
2. Plastic_ Facilitator's Guide.pdf camp2. Plastic_ Facilitator's Guide.pdf camp
2. Plastic_ Facilitator's Guide.pdf camp
Dilip677056
 
Environmental Studies : Types of Ecosystem.pptx
Environmental Studies : Types of Ecosystem.pptxEnvironmental Studies : Types of Ecosystem.pptx
Environmental Studies : Types of Ecosystem.pptx
vvsasane
 
lecture2faafffffffffffffffaaaaaaaaaaa4.ppt
lecture2faafffffffffffffffaaaaaaaaaaa4.pptlecture2faafffffffffffffffaaaaaaaaaaa4.ppt
lecture2faafffffffffffffffaaaaaaaaaaa4.ppt
ShohidulIslamSovon
 
270297488455520861-genetic-transformation.ppt
270297488455520861-genetic-transformation.ppt270297488455520861-genetic-transformation.ppt
270297488455520861-genetic-transformation.ppt
mishraji175a
 
SROI Principles are essential for creating an assessment to ensure that socia...
SROI Principles are essential for creating an assessment to ensure that socia...SROI Principles are essential for creating an assessment to ensure that socia...
SROI Principles are essential for creating an assessment to ensure that socia...
dedeabdulhasyir
 
4. Soil Health_Facilitator's Guide (2).pdf
4. Soil Health_Facilitator's Guide (2).pdf4. Soil Health_Facilitator's Guide (2).pdf
4. Soil Health_Facilitator's Guide (2).pdf
Dilip677056
 
Disaster-Mitigation A natural disaster .
Disaster-Mitigation A natural disaster  .Disaster-Mitigation A natural disaster  .
Disaster-Mitigation A natural disaster .
princeprinceantil3
 
Ethics in Environmental Sustainability introduction
Ethics in Environmental Sustainability introductionEthics in Environmental Sustainability introduction
Ethics in Environmental Sustainability introduction
ishaverma244
 
Critical Assessment of the Challenges Faced by Tropical Forest Ecosystem Func...
Critical Assessment of the Challenges Faced by Tropical Forest Ecosystem Func...Critical Assessment of the Challenges Faced by Tropical Forest Ecosystem Func...
Critical Assessment of the Challenges Faced by Tropical Forest Ecosystem Func...
Professional Content Writing's
 
AIR POLLUTION_Khalifa_Alblooshi_Assingment3.pdf
AIR POLLUTION_Khalifa_Alblooshi_Assingment3.pdfAIR POLLUTION_Khalifa_Alblooshi_Assingment3.pdf
AIR POLLUTION_Khalifa_Alblooshi_Assingment3.pdf
Khalifa Alblooshi
 
Organic Slow Release Fertiliser — Smarter Nutrition for Healthier Soil.pptx
Organic Slow Release Fertiliser — Smarter Nutrition for Healthier Soil.pptxOrganic Slow Release Fertiliser — Smarter Nutrition for Healthier Soil.pptx
Organic Slow Release Fertiliser — Smarter Nutrition for Healthier Soil.pptx
soildynamies12
 
Bonikro-Environment Month Ending June 17_Noise & Dust.pptx
Bonikro-Environment Month Ending June 17_Noise & Dust.pptxBonikro-Environment Month Ending June 17_Noise & Dust.pptx
Bonikro-Environment Month Ending June 17_Noise & Dust.pptx
SaturninKouam
 
Fest Brochure for college any kind of event
Fest Brochure for college any kind of eventFest Brochure for college any kind of event
Fest Brochure for college any kind of event
thakurrajsinghrajput
 
The Great Alone - Kristin Hannah.pdfThe Great Alone - Kristin Hannah.pdf
The Great Alone - Kristin Hannah.pdfThe Great Alone - Kristin Hannah.pdfThe Great Alone - Kristin Hannah.pdfThe Great Alone - Kristin Hannah.pdf
The Great Alone - Kristin Hannah.pdfThe Great Alone - Kristin Hannah.pdf
lynnet17122
 
Study of Certain Behavior of Rhesus Macaques
Study of Certain Behavior of Rhesus MacaquesStudy of Certain Behavior of Rhesus Macaques
Study of Certain Behavior of Rhesus Macaques
Rahim Shaikh
 
Ad

Fundamentals of dsp

  • 5. 5 A Matter of Perspective In this chapter we introduce the concepts of the time, frequency and modal domains. These three ways of looking at a problem are interchange- able; that is, no information is lost in changing from one domain to another. The advantage in introducing these three domains is that of a change of perspective. By changing perspective from the time domain, the solution to difficult problems can often become quite clear in the frequency or modal domains. After developing the concepts of each domain, we will introduce the types of instrumentation available. The merits of each generic instrument type are discussed to give the reader an appreciation of the advantages and disadvantages of each approach. Section 1: The Time Domain The traditional way of observing signals is to view them in the time domain. The time domain is a record of what happened to a parameter of the system versus time. For instance, Figure 2.1 shows a simple spring- mass system where we have attached a pen to the mass and pulled a piece of paper past the pen at a constant rate. The resulting graph is a record of the displacement of the mass versus time, a time domain view of displacement. Such direct recording schemes are sometimes used, but it usually is much more practical to convert the parameter of interest to an electrical signal using a transducer. Transducers are commonly available to change a wide variety of parame- ters to electrical signals. Micro- phones, accelerometers, load cells, conductivity and pressure probes are just a few examples. This electrical signal, which repre- sents a parameter of the system, can be recorded on a strip chart recorder as in Figure 2.2. We can adjust the gain of the system to calibrate our measurement. Then we can repro- duce exactly the results of our simple direct recording system in Figure 2.1. Why should we use this indirect approach? One reason is that we are not always measuring displacement. We then must convert the desired parameter to the displacement of the recorder pen. Usually, the easiest way to do this is through the intermediary of electronics. However, even when measuring displacement we would normally use an indirect approach. Why? Primarily because the system in Figure 2.1 is hopelessly ideal. The mass must be large enough and the spring stiff enough so that the pen’s mass and drag on the paper will not Chapter 2 The Time, Frequency and Modal Domains: Figure 2.2 Indirect recording of displacement. Figure 2.1 Direct recording of displacement - a time domain view.
  • 6. 6 affect the results appreciably. Also the deflection of the mass must be large enough to give a usable result, otherwise a mechanical lever system to amplify the motion would have to be added with its attendant mass and friction. With the indirect system a transducer can usually be selected which will not significantly affect the measurement. This can go to the extreme of com- mercially available displacement transducers which do not even con- tact the mass. The pen deflection can be easily set to any desired value by controlling the gain of the electronic amplifiers. This indirect system works well until our measured parameter begins to change rapidly. Because of the mass of the pen and recorder mecha- nism and the power limitations of its drive, the pen can only move at finite velocity. If the measured parameter changes faster, the output of the recorder will be in error. A common way to reduce this problem is to eliminate the pen and record on a photosensitive paper by deflecting a light beam. Such a device is called an oscillograph. Since it is only necessary to move a small, light-weight mirror through a very small angle, the oscillograph can respond much faster than a strip chart recorder. Another common device for display- ing signals in the time domain is the oscilloscope. Here an electron beam is moved using electric fields. The elec- tron beam is made visible by a screen of phosphorescent material. It is capable of accurately displaying signals that vary even more rapidly than the oscillograph can handle. This is because it is only necessary to move an electron beam, not a mirror. The strip chart, oscillograph and oscilloscope all show displacement versus time. We say that changes in this displacement represent the variation of some parameter versus time. We will now look at another way of representing the variation of a parameter. Figure 2.3 Simplified oscillograph operation. Figure 2.4 Simplified oscilloscope operation (Horizontal deflection circuits omitted for clarity).
  • 7. 7 Section 2: The Frequency Domain It was shown over one hundred years ago by Baron Jean Baptiste Fourier that any waveform that exists in the real world can be generated by adding up sine waves. We have illus- trated this in Figure 2.5 for a simple waveform composed of two sine waves. By picking the amplitudes, frequencies and phases of these sine waves correctly, we can generate a waveform identical to our desired signal. Conversely, we can break down our real world signal into these same sine waves. It can be shown that this com- bination of sine waves is unique; any real world signal can be represented by only one combination of sine waves. Figure 2.6a is a three dimensional graph of this addition of sine waves. Two of the axes are time and ampli- tude, familiar from the time domain. The third axis is frequency which allows us to visually separate the sine waves which add to give us our complex waveform. If we view this three-dimensional graph along the frequency axis we get the view in Figure 2.6b. This is the time domain view of the sine waves. Adding them together at each instant of time gives the original waveform. However, if we view our graph along the time axis as in Figure 2.6c, we get a totally different picture. Here we have axes of amplitude versus frequency, what is commonly called the frequency domain. Every sine wave we separated from the input appears as a vertical line. Its height represents its amplitude and its posi- tion represents its frequency. Since we know that each line represents a sine wave, we have uniquely characterized our input signal in the frequency domain*. This frequency domain representation of our signal is called the spectrum of the signal. Each sine wave line of the spectrum is called a component of the total signal. Figure 2.6 The relationship between the time and frequency domains. a) Three- dimensional coordinates showing time, frequency and amplitude b) Time domain view c) Frequency domain view. Figure 2.5 Any real waveform can be produced by adding sine waves together. * Actually, we have lost the phase information of the sine waves. How we get this will be discussed in Chapter 3.
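To make this concrete, here is a minimal numerical sketch of the idea in Figures 2.5 and 2.6, written in Python with NumPy. The 10 Hz and 30 Hz components, their amplitudes and the 1 kHz sample rate are arbitrary illustrative choices, not values from this note: a waveform is built from two sine waves, and the FFT recovers the two spectral lines.

```python
import numpy as np

fs = 1000.0                    # sample rate, Hz (illustrative)
t = np.arange(0, 1.0, 1/fs)    # one second of time data

# A "complex" waveform built from two sine waves (the Figure 2.5 idea)
x = 1.0*np.sin(2*np.pi*10*t) + 0.5*np.sin(2*np.pi*30*t)

# Frequency domain view: each sine wave appears as a single line
spectrum = np.fft.rfft(x)
freqs = np.fft.rfftfreq(len(x), d=1/fs)
amplitude = 2*np.abs(spectrum)/len(x)   # scale to peak amplitude

for f, a in zip(freqs, amplitude):
    if a > 0.1:                # print only the significant lines
        print(f"{f:5.1f} Hz  amplitude {a:.2f}")
# Expected output: a line at 10 Hz (about 1.0) and one at 30 Hz (about 0.5)
```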
  • 8. 8 The Need for Decibels Since one of the major uses of the frequency domain is to resolve small signals in the presence of large ones, let us now address the problem of how we can see both large and small signals on our display simultaneously. Suppose we wish to measure a distortion component that is 0.1% of the signal. If we set the fundamental to full scale on a four inch (10 cm) screen, the harmonic would be only four thousandths of an inch (0.1 mm) tall. Obviously, we could barely see such a signal, much less measure it accurately. Yet many analyzers are available with the ability to measure signals even smaller than this. Since we want to be able to see all the components easily at the same time, the only answer is to change our amplitude scale. A logarithmic scale would compress our large signal amplitude and expand the small ones, allowing all components to be displayed at the same time. Alexander Graham Bell discovered that the human ear responded logarithmically to power difference and invented a unit, the Bel, to help him measure the ability of people to hear. One tenth of a Bel, the deciBel (dB) is the most common unit used in the frequency domain today. A table of the relationship between volts, power and dB is given in Figure 2.8. From the table we can see that our 0.1% distortion component example is 60 dB below the fundamental. If we had an 80 dB display as in Figure 2.9, the distortion component would occupy 1/4 of the screen, not 1/1000 as in a linear display. Figure 2.8 The relation- ship between decibels, power and voltage. Figure 2.9 Small signals can be measured with a logarithmic amplitude scale.
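The decibel arithmetic of Figure 2.8 can be checked with a few lines of Python. This is only a sketch: the full-scale reference of 1.0 is an arbitrary choice, while the 0.1% distortion figure is the example used in the text.

```python
import numpy as np

def db_from_voltage_ratio(v, v_ref):
    """Convert a voltage (amplitude) ratio to decibels: 20*log10(V/Vref)."""
    return 20*np.log10(v/v_ref)

fundamental = 1.0      # full-scale component (arbitrary units)
distortion = 0.001     # the 0.1% distortion component from the text

print(db_from_voltage_ratio(distortion, fundamental))   # -60.0 dB
```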
  • 9. 9 It is very important to understand that we have neither gained nor lost information, we are just represent- ing it differently. We are looking at the same three-dimensional graph from different angles. This different perspective can be very useful. Why the Frequency Domain? Suppose we wish to measure the level of distortion in an audio oscilla- tor. Or we might be trying to detect the first sounds of a bearing failing on a noisy machine. In each case, we are trying to detect a small sine wave in the presence of large signals. Figure 2.7a shows a time domain waveform which seems to be a single sine wave. But Figure 2.7b shows in the frequen- cy domain that the same signal is composed of a large sine wave and significant other sine wave compo- nents (distortion components). When these components are separated in the frequency domain, the small components are easy to see because they are not masked by larger ones. The frequency domain’s usefulness is not restricted to electronics or mechanics. All fields of science and engineering have measurements like these where large signals mask others in the time domain. The frequency domain provides a useful tool in analyzing these small but important effects. The Frequency Domain: A Natural Domain At first the frequency domain may seem strange and unfamiliar, yet it is an important part of everyday life. Your ear-brain combination is an excellent frequency domain analyzer. The ear-brain splits the audio spec- trum into many narrow bands and determines the power present in each band. It can easily pick small sounds out of loud background noise thanks in part to its frequency domain capability. A doctor listens to your heart and breathing for any unusual sounds. He is listening for frequencies which will tell him something is wrong. An experienced mechanic can do the same thing with a machine. Using a screwdriver as a stethoscope, he can hear when a bearing is failing because of the frequencies it produces. Figure 2.7 Small signals are not hidden in the frequency domain. a) Time Domain - small signal not visible b) Frequency Domain - small signal easily resolved
  • 10. 10 So we see that the frequency domain is not at all uncommon. We are just not used to seeing it in graphical form. But this graphical presentation is really not any stranger than saying that the temperature changed with time like the displacement of a line on a graph. Spectrum Examples Let us now look at a few common sig- nals in both the time and frequency domains. In Figure 2.10a, we see that the spectrum of a sine wave is just a single line. We expect this from the way we constructed the frequency domain. The square wave in Figure 2.10b is made up of an infinite num- ber of sine waves, all harmonically related. The lowest frequency present is the reciprocal of the square wave period. These two examples illustrate a property of the frequency trans- form: a signal which is periodic and exists for all time has a discrete fre- quency spectrum. This is in contrast to the transient signal in Figure 2.10c which has a continuous spectrum. This means that the sine waves that make up this signal are spaced infinitesimally close together. Another signal of interest is the impulse shown in Figure 2.10d. The frequency spectrum of an impulse is flat, i.e., there is energy at all frequen- cies. It would, therefore, require infinite energy to generate a true impulse. Nevertheless, it is possible to generate an approximation to an impulse which has a fairly flat spectrum over the desired frequency range of interest. We will find signals with a flat spectrum useful in our next subject, network analysis. Figure 2.10 Frequency spectrum examples.
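The square-wave example of Figure 2.10b is easy to verify numerically. The sketch below (Python/NumPy; the 50 Hz fundamental and 10 kHz sample rate are arbitrary) shows that only odd harmonics appear and that their amplitudes fall off roughly as 1/n.

```python
import numpy as np

fs = 10000.0
t = np.arange(0, 1.0, 1/fs)
f0 = 50.0                                          # square wave fundamental (illustrative)
square = np.where((f0*t) % 1.0 < 0.5, 1.0, -1.0)   # 50% duty, +/-1 square wave

amplitude = 2*np.abs(np.fft.rfft(square))/len(square)
freqs = np.fft.rfftfreq(len(square), d=1/fs)

# With a 1 s record the line spacing is 1 Hz, so bin index equals frequency in Hz.
for n in range(1, 8):
    k = int(n*f0)
    theory = 4/(np.pi*n) if n % 2 else 0.0
    print(f"harmonic {n} at {freqs[k]:.0f} Hz: {amplitude[k]:.3f} (ideal {theory:.3f})")
```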
  • 11. 11 Network Analysis If the frequency domain were restricted to the analysis of signal spectrums, it would certainly not be such a common engineering tool. However, the frequency domain is also widely used in analyzing the behavior of networks (network analysis) and in design work. Network analysis is the general engineering problem of determining how a network will respond to an input*. For instance, we might wish to determine how a structure will behave in high winds. Or we might want to know how effective a sound absorbing wall we are planning on purchasing would be in reducing machinery noise. Or perhaps we are interested in the effects of a tube of saline solution on the transmission of blood pressure waveforms from an artery to a monitor. All of these problems and many more are examples of network analysis. As you can see a “network” can be any system at all. One-port network analysis is the variation of one parameter with respect to another, both measured at the same point (port) of the network. The impedance or compliance of the electronic or mechanical networks shown in Figure 2.11 are typical examples of one-port network analysis. Figure 2.11 One-port network analysis examples. * Network Analysis is sometimes called Stimulus/Response Testing. The input is then known as the stimulus or excitation and the output is called the response.
  • 12. 12 Two-port analysis gives the response at a second port due to an input at the first port. We are generally interested in the transmission and rejection of signals and in ensuring the integrity of signal transmission. The concept of two-port analysis can be extended to any number of inputs and outputs. This is called N-port analysis, a subject we will use in modal analysis later in this chapter. We have deliberately defined network analysis in a very general way. It applies to all networks with no limitations. If we place one condition on our network, linearity, we find that network analysis becomes a very powerful tool. Figure 2.12 Two-port network analysis. Figure 2.13 Linear network. Figure 2.14 Non-linear system example. Figure 2.15 Examples of non-linearities.
  • 13. 13 When we say a network is linear, we mean it behaves like the network in Figure 2.13. Suppose one input causes an output A and a second input applied at the same port causes an output B. If we apply both inputs at the same time to a linear network, the output will be the sum of the individual outputs, A + B. At first glance it might seem that all networks would behave in this fashion. A counterexample, a non-linear network, is shown in Figure 2.14. Suppose that the first input is a force that varies in a sinusoidal manner. We pick its amplitude to ensure that the displacement is small enough so that the oscillating mass does not quite hit the stops. If we add a second identical input, the mass would now hit the stops. Instead of a sine wave with twice the amplitude, the output is clipped as shown in Figure 2.14b. This spring-mass system with stops illustrates an important principle: no real system is completely linear. A system may be approximately linear over a wide range of signals, but eventually the assumption of linearity breaks down. Our spring-mass system is linear before it hits the stops. Likewise, a linear electronic amplifier clips when the output voltage approaches the internal supply voltage. A spring may compress linearly until the coils start pressing against each other. Other forms of non-linearities are also often present. Hysteresis (or backlash) is usually present in gear trains, loosely riveted joints and in magnetic devices. Sometimes the non-linearities are less abrupt and are smooth, but nonlinear, curves. The torque versus rpm of an engine or the operating curves of a transistor are two examples that can be considered linear over only small portions of their operating regions. The important point is not that all systems are nonlinear; it is that most systems can be approximated as linear systems. Often a large engineering effort is spent in making the system as linear as practical. This is done for two reasons. First, it is often a design goal for the output of a network to be a scaled, linear version of the input. A strip chart recorder is a good example. The electronic amplifier and pen motor must both be designed to ensure that the deflection across the paper is linear with the applied voltage. The second reason why systems are linearized is to reduce the problem of nonlinear instability. One example would be the positioning system shown in Figure 2.16. The actual position is compared to the desired position and the error is integrated and applied to the motor. If the gear train has no backlash, it is a straightforward problem to design this system to the desired specifications of positioning accuracy and response time. However, if the gear train has excessive backlash, the motor will “hunt,” causing the positioning system to oscillate around the desired position. The solution is either to reduce the loop gain and therefore reduce the overall performance of the system, or to reduce the backlash in the gear train. Often, reducing the backlash is the only way to meet the performance specifications. Figure 2.16 A positioning system.
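The superposition test described above can be written out directly. Below is a minimal sketch in which a hypothetical "system" is just unity gain followed by hard clipping at plus or minus one, standing in for the spring-mass system hitting its stops; none of this is from the note itself, it only illustrates how linearity breaks down.

```python
import numpy as np

def system(x, limit=1.0):
    """Unity gain followed by hard clipping at +/-limit: a simple non-linearity
    standing in for the mass hitting the stops."""
    return np.clip(x, -limit, limit)

t = np.linspace(0, 1, 1000, endpoint=False)
a = 0.6*np.sin(2*np.pi*5*t)      # one input: stays inside the "stops" on its own
b = 0.6*np.sin(2*np.pi*5*t)      # a second, identical input

lhs = system(a + b)              # response to both inputs applied together
rhs = system(a) + system(b)      # sum of the individual responses

print("max |difference| =", np.max(np.abs(lhs - rhs)))
# With small inputs (say amplitude 0.3 each) the difference is zero: approximately linear.
# Here the combined input reaches 1.2, the output clips, and superposition fails.
```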
  • 14. 14 Analysis of Linear Networks As we have seen, many systems are designed to be reasonably linear to meet design specifications. This has a fortuitous side benefit when attempting to analyze networks*. Recall that a real signal can be considered to be a sum of sine waves. Also, recall that the response of a linear network is the sum of the responses to each component of the input. Therefore, if we knew the response of the network to each of the sine wave components of the input spectrum, we could predict the output. It is easy to show that the steady-state response of a linear network to a sine wave input is a sine wave of the same frequency. As shown in Figure 2.17, the amplitude of the output sine wave is proportional to the input amplitude. Its phase is shifted by an amount which depends only on the frequency of the sine wave. As we vary the frequency of the sine wave input, the amplitude proportionality factor (gain) changes as does the phase of the output. If we divide the output of the network by the input, we get a Figure 2.17 Linear network response to a sine wave input. Figure 2.18 The frequency response of a network. * We will discuss the analysis of networks which have not been linearized in Chapter 3, Section 6.
  • 15. 15 normalized result called the frequen- cy response of the network. As shown in Figure 2.18, the frequency response is the gain (or loss) and phase shift of the network as a function of frequency. Because the network is linear, the frequency response is independent of the input amplitude; the frequency response is a property of a linear network, not dependent on the stimulus. The frequency response of a network will generally fall into one of three categories; low pass, high pass, bandpass or a combination of these. As the names suggest, their frequency responses have relatively high gain in a band of frequencies, allowing these frequencies to pass through the network. Other frequencies suffer a relatively high loss and are rejected by the network. To see what this means in terms of the response of a filter to an input, let us look at the bandpass filter case. Figure 2.19 Three classes of frequency response.
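As an illustration of the low-pass case, the following sketch computes the gain and phase of a single-pole RC low-pass network, H(f) = 1/(1 + jf/fc). The 1 kHz cutoff and the chosen frequencies are assumptions for the example only.

```python
import numpy as np

fc = 1000.0                           # assumed cutoff frequency, Hz
f = np.logspace(1, 5, 5)              # a few frequencies from 10 Hz to 100 kHz

H = 1.0/(1.0 + 1j*f/fc)               # single-pole low-pass frequency response

gain_db = 20*np.log10(np.abs(H))      # gain (or loss) in dB
phase_deg = np.degrees(np.angle(H))   # phase shift in degrees

for fi, g, p in zip(f, gain_db, phase_deg):
    print(f"{fi:9.1f} Hz   gain {g:7.2f} dB   phase {p:7.2f} deg")
# At f = fc the gain is -3 dB and the phase is -45 degrees; well above fc the
# gain falls 20 dB per decade, giving the "low pass" shape of Figure 2.19.
```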
  • 16. 16 In Figure 2.20, we put a square wave into a bandpass filter. We recall from Figure 2.10 that a square wave is composed of harmonically related sine waves. The frequency response of our example network is shown in Figure 2.20b. Because the filter is narrow, it will pass only one compo- nent of the square wave. Therefore, the steady-state response of this bandpass filter is a sine wave. Notice how easy it is to predict the output of any network from its frequency response. The spectrum of the input signal is multiplied by the frequency response of the network to determine the components that appear in the output spectrum. This frequency domain output can then be transformed back to the time domain. In contrast, it is very difficult to compute in the time domain the out- put of any but the simplest networks. A complicated integral must be evalu- ated which often can only be done numerically on a digital computer*. If we computed the network response by both evaluating the time domain integral and by transforming to the frequency domain and back, we would get the same results. However, it is usually easier to compute the output by transforming to the frequency domain. Transient Response Up to this point we have only dis- cussed the steady-state response to a signal. By steady-state we mean the output after any transient responses caused by applying the input have died out. However, the frequency response of a network also contains all the information necessary to predict the transient response of the network to any signal. Figure 2.20 Bandpass filter response to a square wave input. Figure 2.21 Time response of bandpass filters. * This operation is called convolution.
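The equivalence claimed here, that multiplying the input spectrum by the frequency response and transforming back gives the same answer as time-domain convolution, can be verified numerically. A minimal Python/NumPy sketch with an arbitrary input and an arbitrary four-point impulse response:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(64)            # arbitrary input signal
h = np.array([0.5, 0.3, 0.15, 0.05])   # arbitrary network impulse response

# Time domain: direct convolution
y_time = np.convolve(x, h)

# Frequency domain: multiply spectra, then transform back
n = len(x) + len(h) - 1                # zero-pad to avoid circular wrap-around
Y = np.fft.rfft(x, n)*np.fft.rfft(h, n)
y_freq = np.fft.irfft(Y, n)

print(np.allclose(y_time, y_freq))     # True: both routes give the same output
```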
  • 17. 17 Let us look qualitatively at the transient response of a bandpass filter. If a resonance is narrow compared to its frequency, then it is said to be a high “Q” resonance*. Figure 2.21a shows a high Q filter frequency response. It has a transient response which dies out very slowly. A time response which decays slowly is said to be lightly damped. Figure 2.21b shows a low Q resonance. It has a transient response which dies out quickly. This illustrates a general principle: signals which are broad in one domain are narrow in the other. Narrow, selective filters have very long response times, a fact we will find important in the next section. Section 3: Instrumentation for the Frequency Domain Just as the time domain can be measured with strip chart recorders, oscillographs or oscilloscopes, the frequency domain is usually measured with spectrum and network analyzers. Spectrum analyzers are instruments which are optimized to characterize signals. They introduce very little distortion and few spurious signals. This ensures that the signals on the display are truly part of the input signal spectrum, not signals introduced by the analyzer. Network analyzers are optimized to give accurate amplitude and phase measurements over a wide range of network gains and losses. This design difference means that these two traditional instrument families are not interchangeable.** A spectrum analyzer cannot be used as a network analyzer because it does not measure amplitude accurately and cannot measure phase. A network analyzer would make a very poor spectrum analyzer because spurious responses limit its dynamic range. In this section we will develop the properties of several types of analyzers in these two categories. The Parallel-Filter Spectrum Analyzer As we developed in Section 2 of this chapter, electronic filters can be built which pass a narrow band of frequencies. If we were to add a meter to the output of such a bandpass filter, we could measure the power in the portion of the spectrum passed by the filter. In Figure 2.22a we have done this for a bank of filters, each tuned to a different frequency. If the center frequencies of these filters are chosen so that the filters overlap properly, the spectrum covered by the filters can be completely characterized as in Figure 2.22b. Figure 2.22 Parallel filter analyzer. * Q is usually defined as: Q = (center frequency of resonance) / (frequency width between the -3 dB points). ** Dynamic Signal Analyzers are an exception to this rule; they can act as both network and spectrum analyzers.
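With the Q definition in the footnote, the calculation is a one-liner. A small sketch with assumed numbers (a 1 kHz resonance that is 10 Hz wide between its -3 dB points):

```python
center_frequency = 1000.0   # Hz, assumed resonance frequency
bandwidth_3db = 10.0        # Hz, assumed width between the -3 dB points

Q = center_frequency/bandwidth_3db
print(Q)                    # 100.0: a narrow, lightly damped ("high Q") resonance
```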
  • 18. 18 How many filters should we use to cover the desired spectrum? Here we have a trade-off. We would like to be able to see closely spaced spectral lines, so we should have a large number of filters. However, each filter is expensive and becomes more expensive as it becomes narrower, so the cost of the analyzer goes up as we improve its resolution. Typical audio parallel-filter analyzers balance these demands with 32 filters, each covering 1/3 of an octave. Swept Spectrum Analyzer One way to avoid the need for such a large number of expensive filters is to use only one filter and sweep it slowly through the frequency range of interest. If, as in Figure 2.23, we display the output of the filter versus the frequency to which it is tuned, we have the spectrum of the input signal. This swept analysis technique is commonly used in RF and microwave spectrum analysis. We have, however, assumed the input signal hasn’t changed in the time it takes to complete a sweep of our analyzer. If energy appears at some frequency at a moment when our filter is not tuned to that frequency, then we will not measure it. One way to reduce this problem would be to speed up the sweep time of our analyzer. We could still miss an event, but the time in which this could happen would be shorter. Unfortunately though, we cannot make the sweep arbitrarily fast because of the response time of our filter. To understand this problem, recall from Section 2 that a filter takes a finite time to respond to changes in its input. The narrower the filter, the longer it takes to respond. If we sweep the filter past a signal too quickly, the filter output will not have a chance to respond fully to the signal. As we show in Figure 2.24, the spectrum display will then be in error; our estimate of the signal level will be too low. In a parallel-filter spectrum analyzer we do not have this problem. All the filters are connected to the input signal all the time. Once we have waited the initial settling time of a single filter, all the filters will be settled and the spectrum will be valid and not miss any transient events. So there is a basic trade-off between parallel-filter and swept spectrum analyzers. The parallel-filter analyzer is fast, but has limited resolution and is expensive. The swept analyzer can be cheaper and have higher resolution but the measurement takes longer (especially at high resolution) and it cannot analyze transient events*. Dynamic Signal Analyzer In recent years another kind of analyzer has been developed which offers the best features of the parallel-filter and swept spectrum analyzers. Dynamic Signal Analyzers are based on a high speed calculation routine which acts like a parallel filter analyzer with hundreds of filters and yet are cost-competitive with swept spectrum analyzers. In * More information on the performance of swept spectrum analyzers can be found in Agilent Application Note Series 150. Figure 2.23 Simplified swept spectrum analyzer. Figure 2.24 Amplitude error from sweeping too fast.
  • 19. 19 addition, two channel Dynamic Signal Analyzers are in many ways better network analyzers than the ones we will introduce next. Network Analyzers Since in network analysis it is required to measure both the input and output, network analyzers are generally two channel devices with the capability of measuring the ampli- tude ratio (gain or loss) and phase difference between the channels. All of the analyzers discussed here measure frequency response by using a sinusoidal input to the network and slowly changing its frequency. Dynamic Signal Analyzers use a different, much faster technique for network analysis which we discuss in the next chapter. Gain-phase meters are broadband devices which measure the amplitude and phase of the input and output sine waves of the network. A sinu- soidal source must be supplied to stimulate the network when using a gain-phase meter as in Figure 2.25. The source can be tuned manually and the gain-phase plots done by hand or a sweeping source, and an x-y plotter can be used for automatic frequency response plots. The primary attraction of gain-phase meters is their low price. If a sinusoidal source and a plotter are already available, frequency response measurements can be made for a very low investment. However, because gain-phase meters are broadband, they measure all the noise of the network as well as the desired sine wave. As the network attenuates the input, this noise eventually becomes a floor below which the meter cannot measure. This typically becomes a problem with attenuations of about 60 dB (1,000:1). Tuned network analyzers minimize the noise floor problems of gain- phase meters by including a bandpass filter which tracks the source fre- quency. Figure 2.26 shows how this tracking filter virtually eliminates the noise and any harmonics to allow measurements of attenuation to 100 dB (100,000:1). By minimizing the noise, it is also possible for tuned network analyzers to make more accurate measure- ments of amplitude and phase. These improvements do not come without their price, however, as tracking filters and a dedicated source must be added to the simpler and less costly gain-phase meter. Figure 2.26 Tuned net- work analyzer operation. Figure 2.25 Gain-phase meter operation.
  • 20. 20 Tuned analyzers are available in the frequency range of a few Hertz to many Gigahertz (109 Hertz). If lower frequency analysis is desired, a frequency response analyzer is often used. To the operator, it behaves exactly like a tuned network analyzer. However, it is quite different inside. It integrates the signals in the time domain to effectively filter the signals at very low frequencies where it is not practical to make filters by more conventional techniques. Frequency response analyzers are generally lim- ited to from 1 mHz to about 10 kHz. Section 4: The Modal Domain In the preceding sections we have developed the properties of the time and frequency domains and the instrumentation used in these domains. In this section we will develop the properties of another domain, the modal domain. This change in perspective to a new domain is particularly useful if we are interested in analyzing the behavior of mechanical structures. To understand the modal domain let us begin by analyzing a simple mechanical structure, a tuning fork. If we strike a tuning fork, we easily conclude from its tone that it is pri- marily vibrating at a single frequency. We see that we have excited a network (tuning fork) with a force impulse (hitting the fork). The time domain view of the sound caused by the deformation of the fork is a lightly damped sine wave shown in Figure 2.27b. In Figure 2.27c, we see in the frequency domain that the frequency response of the tuning fork has a major peak that is very lightly damped, which is the tone we hear. There are also several smaller peaks. Figure 2.27 The vibration of a tuning fork. Figure 2.28 Example vibration modes of a tuning fork.
  • 21. 21 Each of these peaks, large and small, corresponds to a “vibration mode” of the tuning fork. For instance, we might expect for this simple example that the major tone is caused by the vibration mode shown in Figure 2.28a. The second harmonic might be caused by a vibration like Figure 2.28b. We can express the vibration of any structure as a sum of its vibration modes. Just as we can represent a real waveform as a sum of much simpler sine waves, we can represent any vibration as a sum of much simpler vibration modes. The task of “modal” analysis is to determine the shape and the magnitude of the structural deformation in each vibration mode. Once these are known, it usually becomes apparent how to change the overall vibration. For instance, let us look again at our tuning fork example. Suppose that we decided that the second harmonic tone was too loud. How should we change our tuning fork to reduce the harmonic? If we had measured the vibration of the fork and determined that the modes of vibration were those shown in Figure 2.28, the answer becomes clear. We might apply damping material at the center of the tines of the fork. This would greatly affect the second mode, which has maximum deflection at the center, while only slightly affecting the desired vibration of the first mode. Other solutions are possible, but all depend on knowing the geometry of each mode. The Relationship Between the Time, Frequency and Modal Domain To determine the total vibration of our tuning fork or any other structure, we have to measure the vibration at several points on the structure. Figure 2.30a shows some points we might pick. If we transformed this time domain data to the frequency domain, we would get results like Figure 2.30b. We measure frequency response because we want to measure the properties of the structure independent of the stimulus*. Figure 2.29 Reducing the second harmonic by damping the second vibration mode. Figure 2.30 Modal analysis of a tuning fork. * Those who are more familiar with electronics might note that we have measured the frequency response of a network (structure) at N points and thus have performed an N-port Analysis.
  • 22. 22 We see that the sharp peaks (resonances) all occur at the same frequencies independent of where they are measured on the structure. Likewise we would find by measuring the width of each resonance that the damping (or Q) of each resonance is independent of position. The only parameter that varies as we move from point to point along the structure is the relative height of resonances.* By connecting the peaks of the resonances of a given mode, we trace out the mode shape of that mode. Experimentally we have to measure only a few points on the structure to determine the mode shape. However, to clearly show the mode shape in our figure, we have drawn in the frequency response at many more points in Figure 2.31a. If we view this three-dimensional graph along the distance axis, as in Figure 2.31b, we get a combined frequency response. Each resonance has a peak value cor- responding to the peak displacement in that mode. If we view the graph along the frequency axis, as in Figure 2.31c, we can see the mode shapes of the structure. We have not lost any information by this change of perspective. Each vibration mode is characterized by its mode shape, frequency and damping from which we can reconstruct the frequency domain view. However, the equivalence between the modal, time and frequency domains is not quite as strong as that between the time and frequency domains. Because the modal domain portrays the properties of the net- work independent of the stimulus, transforming back to the time domain gives the impulse response of the structure, no matter what the stimu- lus. A more important limitation of this equivalence is that curve fitting is used in transforming from our frequency response measurements to the modal domain to minimize the effects of noise and small experimen- tal errors. No information is lost in this curve fitting, so all three domains contain the same information, but not the same noise. Therefore, transform- ing from the frequency domain to the modal domain and back again will give results like those in Figure 2.32. The results are not exactly the same, yet in all the important features, the frequency responses are the same. This is also true of time domain data derived from the modal domain. Figure 2.31 The relationship between the frequency and the modal domains. * The phase of each resonance is not shown for clarity of the figures but it too is important in the mode shape. The magnitude of the frequency response gives the magnitude of the mode shape while the phase gives the direction of the deflection.
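Because the damping of a resonance is independent of where it is measured, it is often estimated from the width of the peak in any single frequency response. The sketch below shows the common half-power (-3 dB) method; the resonance and half-power frequencies are assumed values standing in for numbers read off a measurement, and the damping-ratio formula is the light-damping approximation.

```python
f_resonance = 440.0   # Hz, assumed peak frequency of one mode
f_lower = 439.0       # Hz, assumed lower half-power (-3 dB) frequency
f_upper = 441.2       # Hz, assumed upper half-power frequency

bandwidth = f_upper - f_lower
Q = f_resonance/bandwidth          # quality factor of the mode
damping_ratio = 1.0/(2.0*Q)        # viscous damping ratio, valid for light damping

print(f"Q = {Q:.0f}, damping ratio = {damping_ratio:.4f}")
```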
  • 23. 23 Section 5: Instrumentation for the Modal Domain There are many ways that the modes of vibration can be determined. In our simple tuning fork example we could guess what the modes were. In simple structures like drums and plates it is possible to write an equation for the modes of vibration. However, in almost any real problem, the solution can neither be guessed nor solved analytically because the structure is too complicated. In these cases it is necessary to measure the response of the structure and determine the modes. There are two basic techniques for determining the modes of vibration in complicated structures: 1) exciting only one mode at a time, and 2) computing the modes of vibration from the total vibration. Single Mode Excitation Modal Analysis To illustrate single mode excitation, let us look once again at our simple tuning fork example. To excite just the first mode we need two shakers, driven by a sine wave and attached to the ends of the tines as in Figure 2.33a. Varying the frequency of the generator near the first mode reso- nance frequency would then give us its frequency, damping and mode shape. In the second mode, the ends of the tines do not move, so to excite the second mode we must move the shakers to the center of the tines. If we anchor the ends of the tines, we will constrain the vibration to the second mode alone. Figure 2.32 Curve fitting removes measurement noise. Figure 2.33 Single mode excitation modal analysis.
  • 24. 24 In more realistic, three dimensional problems, it is necessary to add many more shakers to ensure that only one mode is excited. The difficulties and expense of testing with many shakers has limited the application of this traditional modal analysis technique. Modal Analysis From Total Vibration To determine the modes of vibration from the total vibration of the structure, we use the techniques developed in the previous section. Basically, we determine the frequency response of the structure at several points and compute at each reso- nance the frequency, damping and what is called the residue (which represents the height of the reso- nance). This is done by a curve-fitting routine to smooth out any noise or small experimental errors. From these measurements and the geome- try of the structure, the mode shapes are computed and drawn on a CRT display or a plotter. If drawn on a CRT, these displays may be animated to help the user understand the vibration mode. From the above description, it is apparent that a modal analyzer requires some type of network analyzer to measure the frequency response of the structure and a computer to convert the frequency response to mode shapes. This can be accomplished by connecting a Dynamic Signal Analyzer through a digital interface* to a computer furnished with the appropriate soft- ware. This capability is also available in a single instrument called a Struc- tural Dynamics Analyzer. In general, computer systems offer more versa- tile performance since they can be programmed to solve other problems. However, Structural Dynamics Analyzers generally are much easier to use than computer systems. Section 6: Summary In this chapter we have developed the concept of looking at problems from different perspectives. These perspectives are the time, frequency and modal domains. Phenomena that are confusing in the time domain are often clarified by changing perspec- tive to another domain. Small signals are easily resolved in the presence of large ones in the frequency domain. The frequency domain is also valu- able for predicting the output of any kind of linear network. A change to the modal domain breaks down complicated structural vibration problems into simple vibration modes. No one domain is always the best answer, so the ability to easily change domains is quite valuable. Of all the instrumentation available today, only Dynamic Signal Analyzers can work in all three domains. In the next chapter we develop the properties of this important class of analyzers. Figure 2.34 Measured mode shape. * GPIB, Agilent’s implementation of IEEE-488-1975 is ideal for this application.
  • 25. 25 We saw in the previous chapter that the Dynamic Signal Analyzer has the speed advantages of parallel-filter analyzers without their low resolution limitations. In addition, it is the only type of analyzer that works in all three domains. In this chapter we will develop a fuller understanding of this important analyzer family, Dynamic Signal Analyzers. We begin by pre- senting the properties of the Fast Fourier Transform (FFT) upon which Dynamic Signal Analyzers are based. No proof of these properties is given, but heuristic arguments as to their va- lidity are used where appropriate. We then show how these FFT properties cause some undesirable characteris- tics in spectrum analysis like aliasing and leakage. Having demonstrated a potential difficulty with the FFT, we then show what solutions are used to make practical Dynamic Signal Analyzers. Developing this basic knowledge of FFT characteristics makes it simple to get good results with a Dynamic Signal Analyzer in a wide range of measurement problems. Section 1: FFT Properties The Fast Fourier Transform (FFT) is an algorithm* for transforming data from the time domain to the fre- quency domain. Since this is exactly what we want a spectrum analyzer to do, it would seem easy to implement a Dynamic Signal Analyzer based on the FFT. However, we will see that there are many factors which complicate this seemingly straightforward task. First, because of the many calcula- tions involved in transforming domains, the transform must be implemented on a digital computer if the results are to be sufficiently accu- rate. Fortunately, with the advent of microprocessors, it is easy and inex- pensive to incorporate all the needed computing power in a small instru- ment package. Note, however, that we cannot now transform to the frequency domain in a continuous manner, but instead must sample and digitize the time domain input. This means that our algorithm transforms digitized samples from the time do- main to samples in the frequency domain as shown in Figure 3.1.** Because we have sampled, we no longer have an exact representation in either domain. However, a sampled representation can be as close to ideal as we desire by placing our samples closer together. Later in this chapter, we will consider what sample spacing is necessary to guarantee accurate results. Chapter 3 Understanding Dynamic Signal Analysis Figure 3.1 The FFT samples in both the time and frequency domains. Figure 3.2 A time record is N equally spaced samples of the input. * An algorithm is any special mathematical method of solving a certain kind of problem; e.g., the technique you use to balance your checkbook. ** To reduce confusion about which domain we are in, samples in the frequency domain are called lines.
  • 26. 26 Time Records A time record is defined to be N consecutive, equally spaced samples of the input. Because it makes our transform algorithm simpler and much faster, N is restricted to be a multiple of 2, for instance 1024. As shown in Figure 3.3, this time record is transformed as a complete block into a complete block of frequency lines. All the samples of the time record are needed to compute each and every line in the frequency domain. This is in contrast to what one might expect, namely that a single time domain sample transforms to exactly one frequency domain line. Understanding this block processing property of the FFT is crucial to understanding many of the properties of the Dynamic Signal Analyzer. For instance, because the FFT transforms the entire time record block as a total, there cannot be valid frequency domain results until a complete time record has been gathered. However, once completed, the oldest sample could be discarded, all the samples shifted in the time record, and a new sample added to the end of the time record as in Figure 3.4. Thus, once the time record is initially filled, we have a new time record at every time domain sample and therefore could have new valid results in the frequency domain at every time domain sample. This is very similar to the behavior of the parallel-filter analyzers described in the previous chapter. When a signal is first applied to a parallel-filter ana- lyzer, we must wait for the filters to respond, then we can see very rapid changes in the frequency domain. With a Dynamic Signal Analyzer we do not get a valid result until a full time record has been gathered. Then rapid changes in the spectra can be seen. It should be noted here that a new spectrum every sample is usually too much information, too fast. This would often give you thousands of transforms per second. Just how fast a Dynamic Signal Analyzer should transform is a subject better left to the sections in this chapter on real time bandwidth and overlap processing. Figure 3.3 The FFT works on blocks of data. Figure 3.4 A new time record every sample after the time record is filled.
  • 27. 27 How Many Lines are There? We stated earlier that the time record has N equally spaced samples. Another property of the FFT is that it transforms these time domain samples to N/2 equally spaced lines in the frequency domain. We only get half as many lines because each frequency line actually contains two pieces of information, amplitude and phase. The meaning of this is most easily seen if we look again at the relationship between the time and frequency domain. Figure 3.5 reproduces from Chapter 2 our three-dimensional graph of this relationship. Up to now we have implied that the amplitude and frequency of the sine waves contains all the information necessary to reconstruct the input. But it should be obvious that the phase of each of these sine waves is important too. For instance, in Figure 3.6, we have shifted the phase of the higher frequency sine wave components of this signal. The result is a severe distortion of the original wave form. We have not discussed the phase information contained in the spectrum of signals until now because none of the traditional spectrum analyzers are capable of measuring phase. When we discuss measurements in Chapter 4, we shall find that phase contains valuable information in determining the cause of performance problems. What is the Spacing of the Lines? Now that we know that we have N/2 equally spaced lines in the frequency domain, what is their spacing? The lowest frequency that we can resolve with our FFT spectrum analyzer must be based on the length of the time record. We can see in Figure 3.7 that if the period of the input signal is longer than the time record, we have no way of determining the period (or frequency, its reciprocal). Therefore, the lowest frequency line of the FFT must occur at frequency equal to the reciprocal of the time record length. Figure 3.5 The relationship between the time and frequency domains. Figure 3.6 Phase of frequency domain components is important.
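The "N/2 lines, each with an amplitude and a phase" bookkeeping is easy to see with a real-input FFT. A short sketch (the 16-sample record and the 3-cycle sine are arbitrary):

```python
import numpy as np

N = 16
t = np.arange(N)/N
x = np.sin(2*np.pi*3*t + 0.5)   # 3 cycles per time record, arbitrary phase

lines = np.fft.rfft(x)          # complex lines at 0, 1, ... N/2 cycles per record
print(len(lines))               # N/2 + 1 = 9 values; DC and the Nyquist line are purely real

k = 3                           # the line at 3 cycles per time record
amplitude = 2*np.abs(lines[k])/N
phase = np.angle(lines[k])
print(amplitude, phase)         # about 1.0 and about 0.5 - pi/2 (phase relative to a cosine)
```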
  • 28. 28 In addition, there is a frequency line at zero Hertz, DC. This is merely the average of the input over the time record. It is rarely used in spectrum or network analysis. But, we have now established the spacing between these two lines and hence every line; it is the reciprocal of the time record. What is the Frequency Range of the FFT? We can now quickly determine that the highest frequency we can measure is: fmax = (N/2) x (1 / period of time record), because we have N/2 lines spaced by the reciprocal of the time record starting at zero Hertz*. Since we would like to adjust the frequency range of our measurement, we must vary fmax. The number of time samples N is fixed by the implementation of the FFT algorithm. Therefore, we must vary the period of the time record to vary fmax. To do this, we must vary the sample rate so that we always have N samples in our variable time record period. This is illustrated in Figure 3.9. Notice that to cover higher frequencies, we must sample faster. * The usefulness of this frequency range can be limited by the problem of aliasing. Aliasing is discussed in Section 3. Figure 3.7 Lowest frequency resolvable by the FFT: a) period of input signal equals time record (lowest resolvable frequency); b) period of input signal longer than the time record (frequency of the input signal is unknown). Figure 3.8 Frequencies of all the spectral lines of the FFT. Figure 3.9 Frequency range of Dynamic Signal Analyzers is determined by sample rate.
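The line spacing and frequency range follow directly from these two numbers. A sketch with assumed values of N = 1024 samples and a 0.5 second time record:

```python
N = 1024                       # samples per time record (assumed)
T = 0.5                        # time record length in seconds (assumed)

line_spacing = 1.0/T           # 2 Hz between frequency lines
f_max = (N/2)*line_spacing     # 1024 Hz, the highest frequency line
sample_rate = N/T              # 2048 Hz, the rate needed to fill the record

print(line_spacing, f_max, sample_rate)
```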
  • 29. 29 Section 2*: Sampling and Digitizing Recall that the input to our Dynamic Signal Analyzer is a continuous analog voltage. This voltage might be from an electronic circuit or could be the output of a transducer and be proportional to current, power, pressure, acceleration or any number of other inputs. Recall also that the FFT requires digitized samples of the input for its digital calculations. Therefore, we need to add a sampler and analog to digital converter (ADC) to our FFT processor to make a spec- trum analyzer. We show this basic block diagram in Figure 3.10. For the analyzer to have the high accuracy needed for many measure- ments, the sampler and ADC must be quite good. The sampler must sample the input at exactly the correct time and must accurately hold the input voltage measured at this time until the ADC has finished its conversion. The ADC must have high resolution and linearity. For 70 dB of dynamic range the ADC must have at least 12 bits of resolution and one half least significant bit linearity. A good Digital Voltmeter (DVM) will typically exceed these specifications, but the ADC for a Dynamic Signal Analyzer must be much faster than typical fast DVM’s. A fast DVM might take a thousand readings per second, but in a typical Dynamic Signal Analyzer the ADC must take at least a hundred thousand readings per second. Section 3: Aliasing The reason an FFT spectrum analyzer needs so many samples per second is to avoid a problem called aliasing. Aliasing is a potential prob- lem in any sampled data system. It is often overlooked, sometimes with disastrous results. A Simple Data Logging Example of Aliasing Let us look at a simple data logging example to see what aliasing is and how it can be avoided. Consider the example for recording temperature shown in Figure 3.12. A thermocouple is connected to a digital voltmeter which is in turn connected to a print- er. The system is set up to print the temperature every second. What would we expect for an output? If we were measuring the tempera- ture of a room which only changes slowly, we would expect every reading to be almost the same as the previous one. In fact, we are sampling much more often than necessary to determine the temperature of the room with time. If we plotted the results of this “thought experiment”, we would expect to see results like Figure 3.13. Figure 3.10 Block diagram of dynamic Signal Analyzer. Figure 3.11 The Sampler and ADC must not introduce errors. Figure 3.13 Plot of temperature variation of a room. Figure 3.12 A simple sampled data system. * This section and the next can be skipped by those not interested in the internal operation of a Dynamic Signal Analyzer. However, those who specify the purchase of Dynamic Signal Analyzers are especially encouraged to read these sections. The basic knowledge to be gained from these sections can insure specifying the best analyzer for your requirements.
  • 30. 30 The Case of the Missing Temperature If, on the other hand, we were measuring the temperature of a small part which could heat and cool rapidly, what would the output be? Suppose that the temperature of our part cycled exactly once every second. As shown in Figure 3.14, our printout says that the temperature never changes. What has happened is that we have sampled at exactly the same point on our periodic temperature cycle with every sample. We have not sampled fast enough to see the temperature fluctuations. Aliasing in the Frequency Domain This completely erroneous result is due to a phenomenon called aliasing.* Aliasing is shown in the frequency domain in Figure 3.15. Two signals are said to alias if the difference of their frequencies falls in the frequency range of interest. This difference frequency is always generated in the process of sampling. In Figure 3.15, the input frequency is slightly higher than the sampling frequency, so a low frequency alias term is generated. If the input frequency equals the sampling frequency, as in our small part example, then the alias term falls at DC (zero Hertz) and we get the constant output that we saw above. Aliasing is not always bad. It is called mixing or heterodyning in analog electronics, and is commonly used for tuning household radios and televisions as well as many other communication products. However, in the case of the missing temperature variation of our small part, we definitely have a problem. How can we guarantee that we will avoid this problem in a measurement situation? Figure 3.16 shows that if we sample at greater than twice the highest frequency of our input, the alias products will not fall within the frequency range of our input. Therefore, a filter (or our FFT processor which acts like a filter) after the sampler will remove the alias products while passing the desired input signals if the sample rate is greater than twice the highest frequency of the input. If the sample rate is lower, the alias products will fall in the frequency range of the input and no amount of filtering will be able to remove them from the signal. Figure 3.14 Plot of temperature variation of a small part. Figure 3.15 The problem of aliasing viewed in the frequency domain. * Aliasing is also known as fold-over or mixing.
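The missing-temperature example can be reproduced numerically: sample a signal at (or near) the sampling frequency and watch it alias to DC or to a slow false drift. The 1 Hz sample rate matches the data-logging example in the text; the 1.1 Hz variation is an added illustrative case.

```python
import numpy as np

fs = 1.0                      # one sample per second, as in the data logger
t = np.arange(0, 10, 1/fs)    # ten seconds of samples

# Part temperature cycling exactly once per second: every sample lands on the
# same point of the cycle, so the printout never changes (the alias falls at DC).
exact = np.sin(2*np.pi*1.0*t)
print(np.round(exact, 6))     # every reading is (numerically) the same

# A 1.1 Hz variation aliases to |1.1 - 1.0| = 0.1 Hz, a slow, false "drift".
close = np.sin(2*np.pi*1.1*t)
print(np.round(close, 3))     # looks like an oscillation with a 10 second period
```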
  • 31. 31 This minimum sample rate requirement is known as the Nyquist Criterion. It is easy to see in the time domain that a sampling frequency exactly twice the input frequency would not always be enough. It is less obvious that slightly more than two samples in each period is sufficient information. It certainly would not be enough to give a high quality time display. Yet we saw in Figure 3.16 that meeting the Nyquist Criterion of a sample rate greater than twice the maximum input frequency is suffi- cient to avoid aliasing and preserve all the information in the input signal. The Need for an Anti-Alias Filter Unfortunately, the real world rarely restricts the frequency range of its signals. In the case of the room temperature, we can be reasonably sure of the maximum rate at which the temperature could change, but we still can not rule out stray signals. Signals induced at the powerline frequency or even local radio stations could alias into the desired frequency range. The only way to be really certain that the input frequency range is limited is to add a low pass filter before the sampler and ADC. Such a filter is called an anti-alias filter. An ideal anti-alias filter would look like Figure 3.18a. It would pass all the desired input frequencies with no loss and completely reject any higher frequencies which otherwise could alias into the input frequency range. However, it is not even theoretically possible to build such a filter, much less practical. Instead, all real filters look something like Figure 3.18b with a gradual roll off and finite rejection of undesired signals. Large input signals which are not well attenuated in the transition band could still alias into the desired input frequency Figure 3.16 A frequency domain view of how to avoid aliasing - sample at greater than twice the highest input frequency. Figure 3.18 Actual anti-alias filters require higher sampling frequencies. Figure 3.17 Nyquist Criterion in the time domain.
  • 32. 32 range. To avoid this, the sampling fre- quency is raised to twice the highest frequency of the transition band. This guarantees that any signals which could alias are well attentuated by the stop band of the filter. Typically, this means that the sample rate is now two and a half to four times the maximum desired input frequency. Therefore, a 25 kHz FFT Spectrum Analyzer can require an ADC that runs at 100 kHz as we stated without proof in Section 2 of this Chapter*. The Need for More Than One Anti-Alias Filter Recall from Section 1 of this Chapter, that due to the properties of the FFT we must vary the sample rate to vary the frequency span of our analyzer. To reduce the frequency span, we must reduce the sample rate. From our considerations of aliasing, we now realize that we must also reduce the anti-alias filter frequency by the same amount. Since a Dynamic Signal Analyzer is a very versatile instrument used in a wide range of applications, it is desirable to have a wide range of frequency spans available. Typical instruments have a minimum span of 1 Hertz and a maximum of tens to hundreds of kilohertz. This four decade range typically needs to be covered with at least three spans per decade. This would mean at least twelve anti-alias filters would be required for each channel. Each of these filters must have very good performance. It is desirable that their transition bands be as narrow as possible so that as many lines as possible are free from alias products. Additionally, in a two channel analyzer, each filter pair must be well matched for accurate network analysis measurements. These two points unfortunately mean that each of the filters is expensive. Taken together they can add signifi- cantly to the price of the analyzer. Some manufacturers don’t have a low enough frequency anti-alias filter on the lowest frequency spans to save some of this expense. (The lowest frequency filters cost the most of all.) But as we have seen, this can lead to problems like our “case of the missing temperature”. Digital Filtering Fortunately, there is an alternative which is cheaper and when used in conjunction with a single analog anti- alias filter, always provides aliasing protection. It is called digital filtering because it filters the input signal after we have sampled and digitized it. To see how this works, let us look at Figure 3.19. In the analog case we already discussed, we had to use a new filter every time we changed the sample rate of the Analog to Digital Converter (ADC). When using digital filtering, the ADC sample rate is left constant at the rate needed for the highest frequency span of the analyz- er. This means we need not change our anti-alias filter. To get the reduced sample rate and filtering we need for the narrower frequency spans, we follow the ADC with a digital filter. This digital filter is known as a decimating filter. It not only filters the digital representation of the signal to the desired frequency span, it also reduces the sample rate at its output to the rate needed for that frequency span. Because this filter is digital, there are no manufacturing varia- tions, aging or drift in the filter. Therefore, in a two channel analyzer the filters in each channel are identi- cal. It is easy to design a single digital filter to work on many frequency spans so the need for multiple filters per channel is avoided. All these factors taken together mean that digital filtering is much less expen- sive than analog anti-aliasing filtering. 
Figure 3.19 Block diagrams of analog and digital filtering. * Unfortunately, because the spacing of the FFT lines depends on the sample rate, increasing the sample rate decreases the number of lines that are in the desired frequency range. Therefore, to avoid aliasing problems Dynamic Signal Analyzers have only .25N to .4N lines instead of N/2 lines.
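The decimating-filter idea can be sketched with a general-purpose routine such as scipy.signal.decimate, which low-pass filters the digitized signal and then reduces its sample rate. The ADC rate, signal frequencies and decimation factor below are arbitrary choices for illustration.

import numpy as np
from scipy import signal

fs_adc = 102400.0                          # fixed ADC rate (assumed)
t = np.arange(8192) / fs_adc
x = np.sin(2 * np.pi * 1000 * t) + 0.5 * np.sin(2 * np.pi * 30000 * t)

# Filter to the narrower span and keep every 8th sample; the built-in
# low-pass filter keeps the 30 kHz component from aliasing into the
# new 0 - 6.4 kHz span.
y = signal.decimate(x, 8, ftype='fir', zero_phase=True)
fs_new = fs_adc / 8                        # 12.8 kHz sample rate after decimation

print(len(x), len(y), fs_new)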
  • 33. 33 Section 4: Band Selectable Analysis Suppose we need to measure a small signal that is very close in frequency to a large one. We might be measur- ing the powerline sidebands (50 or 60 Hz) on a 20 kHz oscillator. Or we might want to distinguish between the stator vibration and the shaft imbalance in the spectrum of a motor.* Recall from our discussion of the properties of the Fast Fourier Transform that it is equivalent to a set of filters, starting at zero Hertz, equally spaced up to some maximum frequency. Therefore, our frequency resolution is limited to the maximum frequency divided by the number of filters. To just resolve the 60 Hz sidebands on a 20 kHz oscillator signal would require 333 lines (or filters) of the FFT. Two or three times more lines would be required to accurately measure the sidebands. But typical Dynamic Signal Analyzers only have 200 to 400 lines, not enough for accurate measurements. To increase the number of lines would greatly increase the cost of the analyzer. If we chose to pay the extra cost, we would still have trouble seeing the results. With a 4 inch (10 cm) screen, the sidebands would be only 0.01 inch (.25 mm) from the carrier. A better way to solve this problem is to concentrate the filters into the frequency range of interest as in Figure 3.20. If we select the minimum frequency as well as the maximum frequency of our filters we can “zoom in” for a high resolution close-up shot of our frequency spectrum. We now have the capability of looking at the entire spectrum at once with low resolution as well as the ability to look at what interests us with much higher resolution. This capability of increased resolution is called Band Selectable Analysis (BSA).** It is done by mixing or heterodyning the input signal down into the range of the FFT span selected. This technique, familiar to electronic engineers, is the process by which radios and televisions tune in stations. The primary difference between the implementation of BSA in Dynamic Signal Analyzers and heterodyne radios is shown in Figure 3.21. In a radio, the sine wave used for mixing is an analog voltage. In a Dynamic Signal Analyzer, the mixing is done after the input has been digitized, so the “sine wave” is a series of digital numbers into a digital multiplier. This means that the mixing will be done with a very accurate and stable digital signal so our high resolution display will likewise be very stable and accurate. * The shaft of an ac induction motor always runs at a rate slightly lower than a multiple of the driven frequency, an effect called slippage. ** Also sometimes called “zoom”. Figure 3.20 High resolution measurements with Band Selectable Analysis. Figure 3.21 Analyzer block diagram.
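The digital heterodyne just described can be sketched as a "zoom FFT": multiply the digitized input by a complex sine at the center frequency, low-pass filter, reduce the sample rate, and transform. This is only an illustration of the principle; the sample rate, filter length and span are assumptions, not the implementation of any particular analyzer.

import numpy as np
from scipy import signal

fs, N = 65536.0, 65536
t = np.arange(N) / fs
# 20 kHz "oscillator" with a small sideband 60 Hz away
x = np.sin(2 * np.pi * 20000 * t) + 0.001 * np.sin(2 * np.pi * 20060 * t)

f_center = 20000.0
mixed = x * np.exp(-2j * np.pi * f_center * t)              # digital mix: f_center -> 0 Hz

decim = 256                                                 # narrow the span by 256x
taps = signal.firwin(513, 0.8 * (fs / decim) / 2, fs=fs)    # low-pass to the new span
baseband = signal.lfilter(taps, 1.0, mixed)[::decim]

spectrum = np.fft.fftshift(np.fft.fft(baseband * np.hanning(len(baseband))))
freqs = np.fft.fftshift(np.fft.fftfreq(len(baseband), decim / fs)) + f_center

# 1 Hz lines across a span of roughly 20 kHz +/- 128 Hz: the 20.06 kHz
# sideband is now easily resolved from the carrier.
print(freqs[np.argmax(np.abs(spectrum))])                   # about 20000 Hz (the carrier)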
  • 34. 34 Section 5: Windowing The Need for Windowing There is another property of the Fast Fourier Transform which affects its use in frequency domain analysis. We recall that the FFT computes the frequency spectrum from a block of samples of the input called a time record. In addition, the FFT algorithm is based upon the assumption that this time record is repeated through- out time as illustrated in Figure 3.22. This does not cause a problem with the transient case shown. But what happens if we are measuring a contin- uous signal like a sine wave? If the time record contains an integral number of cycles of the input sine wave, then this assumption exactly matches the actual input waveform as shown in Figure 3.23. In this case, the input waveform is said to be periodic in the time record. Figure 3.24 demonstrates the difficulty with this assumption when the input is not periodic in the time record. The FFT algorithm is computed on the basis of the highly distorted waveform in Figure 3.24c. We know from Chapter 2 that the actual sine wave input has a frequency spectrum of single line. The spectrum of the input assumed by the FFT in Figure 3.24c should be Figure 3.24 Input signal not periodic in time record. Figure 3.22 FFT assumption - time record repeated throughout all time. Figure 3.23 Input signal periodic in time record.
  • 35. 35 very different. Since sharp phenome- na in one domain are spread out in the other domain, we would expect the spectrum of our sine wave to be spread out through the frequency domain. In Figure 3.25 we see in an actual measurement that our expectations are correct. In Figures 3.25 a & b, we see a sine wave that is periodic in the time record. Its frequency spectrum is a single line whose width is deter- mined only by the resolution of our Dynamic Signal Analyzer.* On the other hand, Figures 3.25c & d show a sine wave that is not periodic in the time record. Its power has been spread throughout the spectrum as we predicted. This smearing of energy throughout the frequency domains is a phenome- na known as leakage. We are seeing energy leak out of one resolution line of the FFT into all the other lines. It is important to realize that leakage is due to the fact that we have taken a finite time record. For a sine wave to have a single line spectrum, it must exist for all time, from minus infinity to plus infinity. If we were to have an infinite time record, the FFT would compute the correct single line spectrum exactly. However, since we are not willing to wait forever to measure its spectrum, we only look at a finite time record of the sine wave. This can cause leakage if the continuous input is not periodic in the time record. It is obvious from Figure 3.25 that the problem of leakage is severe enough to entirely mask small signals close to our sine waves. As such, the FFT would not be a very useful spectrum analyzer. The solution to this problem is known as windowing. The prob- lems of leakage and how to solve them with windowing can be the most confusing concepts of Dynamic Signal Analysis. Therefore, we will now carefully develop the problem and its solution in several representa- tive cases. * The additional two components in the photo are the harmonic distortion of the sine wave source. Figure 3.25 Actual FFT results. b) a) & b) Sine wave periodic in time record d) c) & d) Sine wave not periodic in time record a) c)
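The leakage effect is easy to reproduce. In this sketch (the record length and frequencies are arbitrary choices), a sine wave with an exact integer number of cycles in the time record transforms to essentially a single line, while one that is not periodic in the record spreads its energy across many lines.

import numpy as np

N = 1024                                   # samples in the time record
n = np.arange(N)

periodic     = np.sin(2 * np.pi * 100.0 * n / N)   # exactly 100 cycles per record
not_periodic = np.sin(2 * np.pi * 100.5 * n / N)   # 100.5 cycles per record

for name, x in (("periodic", periodic), ("not periodic", not_periodic)):
    mag = np.abs(np.fft.rfft(x))
    # count lines within 60 dB of the peak: one line versus a broad smear
    spread = np.sum(mag > mag.max() / 1000.0)
    print(name, spread)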
  • 36. 36 What is Windowing? In Figure 3.26 we have again repro- duced the assumed input wave form of a sine wave that is not periodic in the time record. Notice that most of the problem seems to be at the edges of the time record, the center is a good sine wave. If the FFT could be made to ignore the ends and con- centrate on the middle of the time record, we would expect to get much closer to the correct single line spectrum in the frequency domain. If we multiply our time record by a function that is zero at the ends of the time record and large in the middle, we would concentrate the FFT on the middle of the time record. One such function is shown in Figure 3.26c. Such functions are called window functions because they force us to look at data through a narrow window. Figure 3.27 shows us the vast improvement we get by windowing data that is not periodic in the time record. However, it is important to realize that we have tampered with the input data and cannot expect perfect results. The FFT assumes the input looks like Figure 3.26d, some- thing like an amplitude-modulated sine wave. This has a frequency spectrum which is closer to the correct single line of the input sine wave than Figure 3.26b, but it still is not correct. Figure 3.28 demonstrates that the windowed data does not have as narrow a spectrum as an unwindowed function which is periodic in the time record. Figure 3.26 The effect of windowing in the time domain. Figure 3.27 Leakage reduction with windowing. a) Sine wave not periodic in time record b) FFT results with no window function c) FFT results with a window function
The Hanning Window

Any number of functions can be used to window the data, but the most common one is called Hanning. We actually used the Hanning window in Figure 3.27 as our example of leakage reduction with windowing. The Hanning window is also commonly used when measuring random noise.

The Uniform Window*

We have seen that the Hanning window does an acceptably good job on our sine wave examples, both periodic and non-periodic in the time record. If this is true, why should we want any other windows? Suppose that instead of wanting the frequency spectrum of a continuous signal, we would like the spectrum of a transient event. A typical transient is shown in Figure 3.29a. If we multiplied it by the window function in Figure 3.29b we would get the highly distorted signal shown in Figure 3.29c. The frequency spectrum of an actual transient with and without the Hanning window is shown in Figure 3.30. The Hanning window has taken our transient, which naturally has energy spread widely through the frequency domain, and made it look more like a sine wave. Therefore, we can see that for transients we do not want to use the Hanning window. We would like to use all the data in the time record equally or uniformly. Hence we will use the Uniform window, which weights all of the time record uniformly.

The case we made for the Uniform window by looking at transients can be generalized. Notice that our transient has the property that it is zero at the beginning and end of the time record. Remember that we introduced windowing to force the input to be zero at the ends of the time record. In this case, there is no need for windowing the input. Any function like this which does not require a window because it occurs completely within the time record is called a self-windowing function. Self-windowing functions generate no leakage in the FFT and so need no window.

* The Uniform Window is sometimes referred to as a "Rectangular Window".

Figure 3.28 Windowing reduces leakage but does not eliminate it. a) Leakage-free measurement - input periodic in time record b) Windowed measurement - input not periodic in time record
Figure 3.29 Windowing loses information from transient events.
Figure 3.30 Spectrums of transients. a) Unwindowed transients b) Hanning windowed transients
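A few lines of Python contrast the two windows just described for a continuous sine wave that is not periodic in the time record; np.hanning is the standard Hanning function, and the frequencies here are illustrative.

import numpy as np

N = 1024
n = np.arange(N)
x = np.sin(2 * np.pi * 100.5 * n / N)              # not periodic in the time record

uniform = np.abs(np.fft.rfft(x))                   # Uniform window (no weighting)
hanning = np.abs(np.fft.rfft(x * np.hanning(N)))   # Hanning window

# Leakage 10 lines away from the tone, in dB below each spectrum's own peak:
# the Hanning result is far lower, though still not a perfect single line.
for name, mag in (("uniform", uniform), ("hanning", hanning)):
    peak_bin = np.argmax(mag)
    print(name, 20 * np.log10(mag[peak_bin + 10] / mag[peak_bin]))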
  • 38. 38 There are many examples of self- windowing functions, some of which are shown in Figure 3.31. Impacts, impulses, shock responses, sine bursts, noise bursts, chirp bursts and pseudo-random noise can all be made to be self-windowing. Self-windowing functions are often used as the exci- tation in measuring the frequency response of networks, particularly if the network has lightly-damped resonances (high Q). This is because the self-windowing functions gener- ate no leakage in the FFT. Recall that even with the Hanning window, some leakage was present when the signal was not periodic in the time record. This means that without a self-win- dowing excitation, energy could leak from a lightly damped resonance into adjacent lines (filters). The resulting spectrum would show greater damping than actually exists.* The Flat-top Window We have shown that we need a uniform window for analyzing self- windowing functions like transients. In addition, we need a Hanning window for measuring noise and periodic signals like sine waves. We now need to introduce a third window function, the flat-top window, to avoid a subtle effect of the Hanning window. To understand this effect, we need to look at the Hanning window in the frequency domain. We recall that the FFT acts like a set of parallel filters. Figure 3.32 shows the shape of those filters when the Hanning window is used. Notice that the Hanning function gives the filter a very rounded top. If a component of the input signal is centered in the filter it will be measured accurately**. Otherwise, the filter shape will attenuate the component by up to 1.5 dB (16%) when it falls midway between the filters. This error is unacceptably large if we are trying to measure a signal’s amplitude accurately. The solution is to choose a window function which gives the filter a flatter passband. Such a flat-top passband shape is shown in Figure 3.33. The amplitude error from this window function does not exceed .1 dB (1%), a 1.4 dB improvement. Figure 3.33 Flat-top passband shapes. * There is another way to avoid this problem using Band Selectable Analysis. We will illustrate this in the next chapter. ** It will, in fact, be periodic in the time record Figure 3.31 Self-windowing function examples. Figure 3.32 Hanning passband shapes. Figure 3.34 Reduced resolution of the flat-top window. Flat-top Hanning
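The amplitude-accuracy difference is easy to check numerically. In this sketch scipy's window functions are used for illustration, and the tone is placed exactly midway between two lines, the worst case for scalloping error.

import numpy as np
from scipy.signal import windows

N = 1024
n = np.arange(N)
x = np.sin(2 * np.pi * 100.5 * n / N)      # 1 V peak, midway between two lines

def measured_peak_volts(x, win):
    spectrum = np.abs(np.fft.rfft(x * win)) / np.sum(win)   # window gain removed
    return 2 * spectrum.max()                               # amplitude estimate

for name, win in (("hanning", windows.hann(N)), ("flat-top", windows.flattop(N))):
    err_db = 20 * np.log10(measured_peak_volts(x, win))     # ideal is 1 V = 0 dB
    print(name, round(err_db, 2))
# Typical result: roughly -1.4 dB error with Hanning, essentially 0 dB with flat-top.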
  • 39. 39 The accuracy improvement does not come without its price, however. Figure 3.34 shows that we have flat- tened the top of the passband at the expense of widening the skirts of the filter. We therefore lose some ability to resolve a small component, closely spaced to a large one. Some Dynamic Signal Analyzers offer both Hanning and flat-top window functions so that the operator can choose between increased accuracy or improved frequency resolution. Other Window Functions Many other window functions are possible but the three listed above are by far the most common for general measurements. For special measurement situations other groups of window functions may be useful. We will discuss two windows which are particularly useful when doing network analysis on mechanical structures by impact testing. The Force and Response Windows A hammer equipped with a force transducer is commonly used to stimulate a structure for response measurements. Typically the force input is connected to one channel of the analyzer and the response of the structure from another transducer is connected to the second channel. This force impact is obviously a self-windowing function. The response of the structure is also self-windowing if it dies out within the time record of the analyzer. To guarantee that the response does go to zero by the end of the time record, an exponential-weighted window called a response window is some- times added. Figure 3.35 shows a response window acting on the re- sponse of a lightly damped structure which did not fully decay by the end of the time record. Notice that unlike the Hanning window, the response window is not zero at both ends of the time record. We know that the response of the structure will be zero at the beginning of the time record (before the hammer blow) so there is no need for the window function to be zero there. In addition, most of the information about the structural response is contained at the begin- ning of the time record so we make sure that this is weighted most heavi- ly by our response window function. The time record of the exciting force should be just the impact with the structure. However, movement of the hammer before and after hitting the structure can cause stray signals in the time record. One way to avoid this is to use a force window shown in Figure 3.36. The force window is unity where the impact data is valid and zero everywhere else so that the analyzer does not measure any stray noise that might be present. Passband Shapes or Window Functions? In the proceeding discussion we sometimes talked about window functions in the time domain. At other times we talked about the filter passband shape in the frequency domain caused by these windows. We change our perspective freely to whichever domain yields the simplest explanation. Likewise, some Dynamic Signal Analyzers call the uniform, Hanning and flat-top functions “win- dows” and other analyzers call those Figure 3.36 Using the force window. Figure 3.35 Using the response window.
  • 40. 40 functions “pass-band shapes”. Use whichever terminology is easier for the problem at hand as they are completely interchangeable, just as the time and frequency domains are completely equivalent. Section 6: Network Stimulus Recall from Chapter 2 that we can measure the frequency response at one frequency by stimulating the network with a single sine wave and measuring the gain and phase shift at that frequency. The frequency of the stimulus is then changed and the measurement repeated until all desired frequencies have been measured. Every time the frequency is changed, the network response must settle to its steady-state value before a new measurement can be taken, making this measurement process a slow task. Many network analyzers operate in this manner and we can make the measurement this way with a two channel Dynamic Signal Analyzer. We set the sine wave source to the center of the first filter as in Figure 3.37. The analyzer then measures the gain and phase of the network at this frequency while the rest of the analyzer’s filters measure only noise. We then increase the source frequen- cy to the next filter center, wait for the network to settle and then meas- ure the gain and phase. We continue this procedure until we have measured the gain and phase of the network at all the frequencies of the filters in our analyzer. This procedure would, within experimental error, give us the same results as we would get with any of the network analyzers described in Chapter 2 with any network, linear or nonlinear. Noise as a Stimulus A single sine wave stimulus does not take advantage of the possible speed the parallel filters of a Dynamic Signal Analyzer provide. If we had a source that put out multiple sine waves, each one centered in a filter, then we could measure the frequency response at all frequencies at one time. Such a source, shown in Figure 3.38, acts like hundreds of sine wave generators connected together. Although this sounds very expensive, Figure 3.37 Frequency response measurements with a sine wave stimulus. Figure 3.38 Pseudo-random noise as a stimulus.
just such a source can be easily generated digitally. It is called a pseudo-random noise or periodic random noise source. From the names used for this source it is apparent that it acts somewhat like a true noise generator, except that it has periodicity. If we add together a large number of sine waves, the result is very much like white noise. A good analogy is the sound of rain. A single drop of water makes a quite distinctive splashing sound, but a rain storm sounds like white noise. However, if we add together a large number of sine waves, our noise-like signal will periodically repeat its sequence. Hence, the name periodic random noise (PRN) source.

A truly random noise source has a spectrum shown in Figure 3.39. It is apparent that a random noise source would also stimulate all the filters at one time and so could be used as a network stimulus. Which is a better stimulus? The answer depends upon the measurement situation.

Linear Network Analysis

If the network is reasonably linear, PRN and random noise both give the same results as the swept-sine test of other analyzers. But PRN gives the frequency response much faster. PRN can be used to measure the frequency response in a single time record. Because the random source is true noise, it must be averaged for several time records before an accurate frequency response can be determined. Therefore, PRN is the best stimulus to use with fairly linear networks because it gives the fastest results*.

Non-Linear Network Analysis

If the network is severely non-linear, the situation is quite different. In this case, PRN is a very poor test signal and random noise is much better. To see why, let us look at just two of the sine waves that compose the PRN source. We see in Figure 3.40 that if two sine waves are put through a nonlinear network, distortion products will be generated equally spaced from the signals**. Unfortunately, these products will fall exactly on the frequencies of the other sine waves in the PRN. So the distortion products add to the output and therefore interfere with the measurement

Figure 3.39 Random noise as a stimulus.
Figure 3.40 Pseudo-random noise distortion.

* There is another reason why PRN is a better test signal than random noise for linear networks. Recall from the last section that PRN is self-windowing. This means that unlike random noise, pseudo-random noise has no leakage. Therefore, with PRN, we can measure lightly damped (high Q) resonances more easily than with random noise.
** This distortion is called intermodulation distortion.
  • 42. 42 of the frequency response. Figure 3.41a shows the jagged response of a nonlinear network measured with PRN. Because the PRN source repeats itself exactly every time record, this noisy looking trace never changes and will not average to the desired frequency response. With random noise, the distortion components are also random and will average out. Therefore, the frequency response does not include the distor- tion and we get the more reasonable results shown in Figure 3.41b. This points out a fundamental problem with measuring non-linear networks; the frequency response is not a property of the network alone, it also depends on the stimulus. Each stimulus, swept-sine, PRN and random noise will, in general, give a different result. Also, if the amplitude of the stimulus is changed, you will get a different result. To illustrate this, consider the mass-spring system with stops that we used in Chapter 2. If the mass does not hit the stops, the system is linear and the frequency response is given by Figure 3.42a. If the mass does hit the stops, the output is clipped and a large number of distortion components are generated. As the output approaches a square wave, the fundamental com- ponent becomes constant. Therefore, as we increase the input amplitude, the gain of the network drops. We get a frequency response like Figure 3.42b, where the gain is dependent on the input signal amplitude. So as we have seen, the frequency response of a nonlinear network is not well defined, i.e., it depends on the stimulus. Yet it is often used in spite of this. The frequency response of linear networks has proven to be a very powerful tool and so naturally people have tried to extend it to non-linear analysis, particularly since other nonlinear analysis tools have proved intractable. If every stimulus yields a different frequency response, which one should we use? The “best” stimulus could be considered to be one which approximates the kind of signals you would expect to have as normal inputs to the network. Since any large collection of signals begins to look like noise, noise is a good test signal*. As we have already explained, noise is also a good test signal because it speeds the analysis by exciting all the filters of our analyzer simultaneously. But many other test signals can be used with Dynamic Signal Analyzers and are “best” (optimum) in other senses. As explained in the beginning of this section, sine waves can be used to give the same results as other types of network analyzers although the speed advantage of the Dynamic Signal Analyzer is lost. A fast sine sweep (chirp) will give very similar results with all the speed of Dynamic Signal Analysis and so is a better test signal. An impulse is a good test signal for acoustical testing if the net- work is linear. It is good for acoustics because reflections from surfaces at different distances can easily be isolated or eliminated if desired. For instance, by using the “force” window described earlier, it is easy to get the free field response of a speaker by eliminating the room reflections from the windowed time record. Band-Limited Noise Before leaving the subject of network stimulus, it is appropriate to discuss the need to band limit the stimulus. We want all the power of the stimulus to be concentrated in the frequency region we are analyzing. Any power * This is a consequence of the central limit theorem. 
As an example, the telephone companies have found that when many conversations are transmitted together, the result is like white noise. The same effect is found more commonly at a crowded cocktail party. Figure 3.42 Nonlinear system. Figure 3.41 Nonlinear transfer function. a) Pseudo-random noise stimulus b) Random noise stimulus
  • 43. 43 outside this region does not contribute to the measurement and could excite non-linearities. This can be a particularly severe problem when testing with random noise since it theoretically has the same power at all frequencies (white noise). To eliminate this problem, Dynamic Signal Analyzers often limit the frequency range of their built-in noise stimulus to the frequency span selected. This could be done with an external noise source and filters, but every time the analyzer span changed, the noise power and filter would have to be readjusted. This is done auto- matically with a built-in noise source so transfer function measurements are easier and faster. Section 7: Averaging To make it as easy as possible to develop an understanding of Dynamic Signal Analyzers we have almost exclusively used examples with deter- ministic signals, i.e., signals with no noise. However, as the real world is rarely so obliging, the desired signal often must be measured in the pres- ence of significant noise. At other times the “signals” we are trying to measure are more like noise them- selves. Common examples that are somewhat noise-like include speech, music, digital data, seismic data and mechanical vibrations. Because of these two common conditions, we must develop techniques both to measure signals in the presence of noise and to measure the noise itself. The standard technique in statistics to improve the estimates of a value is to average. When we watch a noisy reading on a Dynamic Signal Analyzer, we can guess the average value. But because the Dynamic Signal Analyzer contains digital computation capability we can have it compute this average value for us. Two kinds of averaging are available, RMS (or “power” averaging) and linear averaging. RMS Averaging When we watch the magnitude of the spectrum and attempt to guess the average value of the spectrum com- ponent, we are doing a crude RMS* average. We are trying to determine the average magnitude of the signal, ignoring any phase difference that may exist between the spectra. This averaging technique is very valuable for determining the average power in any of the filters of our Dynamic Signal Analyzers. The more averages we take, the better our estimate of the power level. In Figure 3.43, we show RMS aver- aged spectra of random noise, digital data and human voices. Each of these examples is a fairly random process, but when averaged we can see the basic properties of its spectrum. If we want to measure a small signal in the presence of noise, RMS averag- ing will give us a good estimate of the signal plus noise. We can not improve the signal to noise ratio with RMS averaging; we can only make more accurate estimates of the total signal plus noise power. Figure 3.43 RMS averaged spectra. a) Random noise b) Digital data c) Voices Traces were separated 30 dB for clarity Upper trace: female speaker Lower trace: male speaker * RMS stands for “root-mean-square” and is calculated by squaring all the values, adding the squares together, dividing by the number of measurements (mean) and taking the square root of the result.
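RMS averaging can be sketched as averaging the squared magnitudes of successive spectra and then taking the square root. A Hanning window is assumed here since the input is noise-like; the signal, noise level and number of averages are arbitrary.

import numpy as np

rng = np.random.default_rng(1)
fs, N, n_avg = 1024.0, 1024, 64
t = np.arange(N) / fs
win = np.hanning(N)

power_sum = np.zeros(N // 2 + 1)
for _ in range(n_avg):
    record = np.sin(2 * np.pi * 100 * t) + rng.normal(0, 1.0, N)   # signal plus noise
    power_sum += np.abs(np.fft.rfft(record * win)) ** 2            # power in each line

rms_spectrum = np.sqrt(power_sum / n_avg)

# More averages give a smoother, more repeatable estimate of the
# signal-plus-noise power, but they do not raise the signal out of the noise.
print(rms_spectrum[100] / np.median(rms_spectrum))    # 100 Hz line vs. the noise floor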
  • 44. 44 Linear Averaging However, there is a technique for improving the signal to noise ratio of a measurement, called linear aver- aging. It can be used if a trigger sig- nal which is synchronous with the periodic part of the spectrum is available. Of course, the need for a synchronizing signal is somewhat restrictive, although there are numer- ous situations in which one is avail- able. In network analysis problems the stimulus signal itself can often be used as a synchronizing signal. Linear averaging can be implemented many ways, but perhaps the easiest to understand is where the averaging is done in the time domain. In this case, the synchronizing signal is used to trigger the start of a time record. Therefore, the periodic part of the input will always be exactly the same in each time record we take, whereas the noise will, of course, vary. If we add together a series of these trig- gered time records and divide by the number of records we have taken we will compute what we call a linear average. Since the periodic signal will have repeated itself exactly in each time record, it will average to its exact value. But since the noise is different in each time record, it will tend to average to zero. The more averages we take, the closer the noise comes to zero and we continue to improve the signal to noise ratio of our meas- urement. Figure 3.44 shows a time record of a square wave buried in noise. The resulting time record after 128 averages shows a marked im- provement in the signal to noise ratio. Transforming both results to the frequency domain shows how many of the harmonics can now be accu- rately measured because of the reduced noise floor. Figure 3.44 Linear averaging. b) Single record, no averaginga) Single record, no averaging d) 128 linear averagesc) 128 linear averages
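Linear averaging is simply a triggered, point-by-point average of time records. In this sketch the trigger is simulated by generating every record with the same phase; the noise level and number of averages are arbitrary.

import numpy as np

rng = np.random.default_rng(2)
fs, N, n_avg = 1024.0, 1024, 128
t = np.arange(N) / fs
square = np.sign(np.sin(2 * np.pi * 10 * t))       # the periodic part of the input

average = np.zeros(N)
for _ in range(n_avg):
    record = square + rng.normal(0, 2.0, N)        # square wave buried in noise
    average += record                              # records assumed trigger-aligned
average /= n_avg

# The periodic part repeats exactly, so it averages to itself; the noise
# averages toward zero, roughly as 1/sqrt(n_avg): 2.0 -> about 0.18 here.
print(np.std(average - square))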
  • 45. 45 Section 8: Real Time Bandwidth Until now we have ignored the fact that it will take a finite time to com- pute the FFT of our time record. In fact, if we could compute the trans- form in less time than our sampling period we could continue to ignore this computational time. Figure 3.45 shows that under this condition we could get a new frequency spectrum with every sample. As we have seen from the section on aliasing, this could result in far more spectrums every second than we could possibly comprehend. Worse, because of the complexity of the FFT algorithm, it would take a very fast and very expensive computer to generate spectrums this rapidly. A reasonable alternative is to add a time record buffer to the block dia- gram of our analyzer. In Figure 3.47 we can see that this allows us to compute the frequency spectrum of the previous time record while gath- ering the current time record. If we can compute the transform before the time record buffer fills, then we are said to be operating in real time. To see what this means, let us look at the case where the FFT computation takes longer than the time to fill the time record. The case is illustrated in Figure 3.48. Although the buffer is full, we have not finished the last transform, so we will have to stop taking data. When the transform is finished, we can transfer the time record to the FFT and begin to take another time record. This means that we missed some input data and so we are said to be not operating in real time. Recall that the time record is not constant but deliberately varied to change the frequency span of the ana- lyzer. For wide frequency spans the time record is shorter. Therefore, as we increase the frequency span of the analyzer, we eventually reach a span where the time record is equal to the FFT computation time. This frequen- cy span is called the real time band- width. For frequency spans at and below the real time bandwidth, the analyzer does not miss any data. Real Time Bandwidth Requirements How wide a real time bandwidth is needed in a Dynamic Signal Analyzer? Let us examine a few typical meas- urements to get a feeling for the considerations involved. Adjusting Devices If we are measuring the spectrum or frequency response of a device which we are adjusting, we need to watch the spectrum change in what might be called psychological real time. A new spectrum every few tenths of a second is sufficiently fast to allow an operator to watch adjustments in what he would consider to be real time. However, if the response time of the device under test is long, the speed of the analyzer is immaterial. We will have to wait for the device to respond to the changes before the spectrum will be valid, no matter how many spectrums we generate in that time. This is what makes adjusting lightly damped (high Q) resonances tedious. Figure 3.45 A new transform every sample. Figure 3.46 Time buffer added to block diagram. Figure 3.48 Non-real time operation. Figure 3.47 Real time operation.
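The trade-off can be put in rough numbers: the analyzer is real time on any span whose time record takes at least as long to gather as the FFT takes to compute. The 5 ms computation time and the factor of 2.56 samples per hertz of span below are assumed figures for illustration only.

N = 1024                              # samples per time record
fft_time = 0.005                      # assumed FFT computation time, seconds

for span_hz in (100, 1000, 10000, 100000):
    fs = 2.56 * span_hz               # typical sample rate for a given span
    record_time = N / fs              # seconds needed to gather one record
    print(span_hz, "Hz span:", round(record_time * 1000, 2), "ms record,",
          "real time" if record_time >= fft_time else "not real time")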
  • 46. 46 RMS Averaging A second case of interest in determin- ing real time bandwidth requirements is measurements that require RMS averaging. We might be interested in determining the spectrum distribution of the noise itself or in reducing the variation of a signal contaminated by noise. There is no requirement in averaging that the records must be consecutive with no gaps*. Therefore, a small real time bandwidth will not affect the accuracy of the results. However, the real time bandwidth will affect the speed with which an RMS averaged measurement can be made. Figure 3.49 shows that for frequency spans above the real time bandwidth, the time to complete the average of N records is dependent only on the time to compute the N transforms. Rather than continually reducing the time to compute the RMS average as we increase our span, we reach a fixed time to compute N averages. Therefore, a small real time band- width is only a problem in RMS aver- aging when large spans are used with a large number of averages. Under these conditions we must wait longer for the answer. Since wider real time bandwidths require faster computa- tions and therefore a more expensive processor, there is a straightforward trade-off of time versus money. In the case of RMS averaging, higher real time bandwidth gives you somewhat faster measurements at increased analyzer cost. Transients The last case of interest in determin- ing the needed real time bandwidth is the analysis of transient events. If the entire transient fits within the time record, the FFT computation time is of little interest. The analyzer can be triggered by the transient and the event stored in the time record buffer. The time to compute its spectrum is not important. However, if a transient event contains high frequency energy and lasts longer than the time record necessary to measure the high frequency energy, then the processing speed of the ana- lyzer is critical. As shown in Figure 3.50b, some of the transient will not be analyzed if the computation time exceeds the time record length. In the case of transients longer than the time record, it is also imperative that there is some way to rapidly record the spectrum. Otherwise, the Figure 3.49 RMS averaging time. Figure 3.50 Transient analysis. * This is because to average at all the signal must be periodic and the noise stationary.
  • 47. 47 information will be lost as the analyzer updates the display with the spectrum of the latest time record. A special display which can show more than one spectrum (“waterfall” display), mass memory, a high speed link to a computer or a high speed facsimile recorder is need- ed. The output device must be able to record a spectrum every time record or information will be lost. Fortunately, there is an easy way to avoid the need for an expensive wide real time bandwidth analyzer and an expensive, fast spectrum recorder. One-time transient events like explo- sions and pass-by noise are usually tape recorded for later analysis because of the expense of repeating the test. If this tape is played back at reduced speed, the speed demands on the analyzer and spectrum recorder are reduced. Timing markers could also be recorded at one time record intervals. This would allow the analy- sis of one record at a time and plot- ting with a very slow (and commonly available) X-Y plotter. So we see that there is no clear-cut answer to what real time bandwidth is necessary in a Dynamic Signal Analyzer. Except in analyzing long transient events, the added expense of a wide real time bandwidth gives little advantage. It is possible to ana- lyze long transient events with a nar- row real time bandwidth analyzer, but it does require the recording of the input signal. This method is slow and requires some operator care, but one can avoid purchasing an expensive analyzer and fast spectrum recorder. It is a clear case of speed of analysis versus dollars of capital equipment. Section 9: Overlap Processing In Section 8 we considered the case where the computation of the FFT took longer than the collecting of the time record. In this section we will look at a technique, overlap process- ing, which can be used when the FFT computation takes less time than gathering the time record. To understand overlap processing, let us look at Figure 3.51a. We see a low frequency analysis where the gather- ing of a time record takes much longer than the FFT computation time. Our FFT processor is sitting idle much of the time. If instead of waiting for an entirely new time record we overlapped the new time record with some of the old data, we would get a new spectrum as often as we computed the FFT. This overlap processing is illustrated in Figure 3.51b. To understand the benefits of overlap processing, let us look at the same cases we used in the last section. Adjusting Devices We saw in the last section that we need a new spectrum every few tenths of a second when adjusting devices. Without overlap processing this limits our resolution to a few Hertz. With overlap processing our resolution is unlimited. But we are not getting something for nothing. Because our overlapped time record contains old data from before the device adjustment, it is not complete- ly correct. It does indicate the direc- tion and the amount of change, but we must wait a full time record after the change for the new spectrum to be accurately displayed. Nonetheless, by indicating the direction and magnitude of the changes every few tenths of a second, overlap processing does help in the adjustment of devices. Figure 3.51 Understanding overlap processing.
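The mechanism can be sketched by stepping a windowed time record through the data by less than a full record length; with 50 percent overlap, roughly twice as many spectra (and averages) come out of the same stretch of data. The record length and amount of data below are arbitrary.

import numpy as np

def overlapped_rms_average(x, N, overlap):
    win = np.hanning(N)
    hop = int(N * (1 - overlap))                  # samples to advance per record
    power, count = np.zeros(N // 2 + 1), 0
    for start in range(0, len(x) - N + 1, hop):
        power += np.abs(np.fft.rfft(x[start:start + N] * win)) ** 2
        count += 1
    return np.sqrt(power / count), count

rng = np.random.default_rng(3)
x = rng.normal(size=16 * 1024)                    # sixteen records' worth of noise

_, n_no_overlap = overlapped_rms_average(x, 1024, overlap=0.0)
_, n_overlap50  = overlapped_rms_average(x, 1024, overlap=0.5)
print(n_no_overlap, n_overlap50)                  # 16 vs. 31 averages from the same data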
  • 48. 48 RMS Averaging Overlap processing can give dramatic reductions in the time to compute RMS averages with a given variance. Recall that window functions reduce the effects of leakage by weighting the ends of the time record to zero. Overlapping eliminates most or all of the time that would be wasted taking this data. Because some overlapped data is used twice, more averages must be taken to get a given variance than in the non-overlapped case. Figure 3.52 shows the improvements that can be expected by overlapping. Transients For transients shorter than the time record, overlap processing is useless. For transients longer than the time record the real time bandwidth of the analyzer and spectrum recorder is usually a limitation. If it is not, overlap processing allows more spectra to be generated from the transient, usually improving resolution of resulting plots. Section 10: Summary In this chapter we have developed the basic properties of Dynamic Signal Analyzers. We found that many properties could be understood by considering what happens when we transform a finite, sampled time record. The length of this record determines how closely our filters can be spaced in the frequency domain and the number of samples determines the number of filters in the frequency domain. We also found that unless we filtered the input we could have errors due to aliasing and that finite time records could cause a problem called leakage which we minimized by windowing. We then added several features to our basic Dynamic Signal Analyzer to enhance its capabilities. Band Selectable Analysis allows us to make high resolution measurements even at high frequencies. Averaging gives more accurate measurements when noise is present and even allows us to improve the signal to noise ratio when we can use linear averaging. Finally, we incorporated a noise source in our analyzer to act as a stimulus for transfer function measurements. Figure 3.52 RMS averaging speed improvements with overlap processing.
  • 49. 49 In Chapters 2 & 3, we developed an understanding of the time, frequency and modal domains and how Dynamic Signal Analyzers operate. In this chapter we show how to use Dynamic Signal Analyzers in a wide variety of measurement situations. We introduce the measurement functions of Dynamic Signal Analyzers as we need them for each measurement situation. We begin with some common elec- tronic and mechanical measurements in the frequency domain. Later in the chapter we introduce time and modal domain measurements. Section 1: Frequency Domain Measurements Oscillator Characterization Let us begin by measuring the charac- teristics of an electronic oscillator. An important specification of an oscillator is its harmonic distortion. In Figure 4.1, we show the fundamen- tal through fifth harmonic of a 1 KHz oscillator. Because the frequency is not necessarily exactly 1 KHz, win- dowing should be used to reduce the leakage. We have chosen the flat-top window so that we can accurately measure the amplitudes. Notice that we have selected the input sensitivity of the analyzer so that the fundamental is near the top of the display. In general, we set the input sensitivity to the most sensitive range which does not overload the analyzer. Severe distortion of the input signal will occur if its peak voltage exceeds the range of the analog to digital converter. Therefore, all dynamic signal analyzers warn the user of this condition by some kind of overload indicator. It is also important to make sure the analyzer is not underloaded. If the signal going into the analog to digital converter is too small, much of the useful information of the spectrum may be below the noise level of the analyzer. Therefore, setting the input sensitivity to the most sensitive range that does not cause an overload gives the best possible results. In Figure 4.1a we chose to display the spectrum amplitude in logarith- mic form to insure that we could see distortion products far below the fundamental. All signal amplitudes on this display are in dBV, decibels below 1 Volt RMS. However, since most Dynamic Signal Analyzers have very versatile display capabilities, we could also display this spectrum linearly as in Figure 4.1b. Here the units of amplitude are volts. Power-Line Sidebands Another important measure of an oscillator’s performance is the level of its power-line sidebands. In Figure 4.2, we use Band Selectable Analysis to “zoom in” on the signal so that we can easily resolve and measure the sidebands which are only 60 Hz away from our 1 KHz signal. With some analyzers it is possible to measure signals only millihertz away from the fundamental if desired. Phase Noise The short-term stability of a high frequency oscillator is very important in communications and radar. One measure of this is called phase noise. It is often measured by the technique shown in Figure 4.3a. This mixes down and cancels the oscillator Figure 4.1 Harmonic distortion of an Audio Oscillator - Flat-top window used. a) Logarithmic amplitude scale b) Linear amplitude scale Figure 4.2 Powerline sidebands of an Audio Oscillator - Band Selectable Analysis and Hanning window used for maximum resolution. Chapter 4 Using Dynamic Signal Analyzers
  • 50. 50 carrier leaving only the phase noise sidebands. It is therefore possible to measure the phase noise far below the carrier level since the carrier does not limit the range of our measure- ment. Figure 4.3b shows the close-in phase noise of a 20 MHz synthesizer. Here, since we are measuring noise, we use RMS averaging and the Hanning window. Dynamic Signal Analyzers offer two main advantages over swept signal analyzers in this application. First, the phase noise can be meas- ured much closer to the carrier. This is because a good swept analyzer can only resolve signals down to about 1 Hz, while a Dynamic Signal Analyzer can resolve signals to a few millihertz. Secondly, the Dynamic Signal Analyzer can determine the complete phase noise spectrum in a few minutes whereas a swept analyzer would take hours. Spectra-like phase noise are usually displayed against the logarithm of fre- quency instead of the linear frequen- cy scale. This is done in Figure 4.3c. Because the FFT generates linearly spaced filters, the filters are not equally spaced on the display. It is important to realize that no informa- tion is missed by these seemingly widely spaced filters. We recall on a linear frequency scale that all the filters overlapped so that no part of the spectrum was missed. All we have done here is to change the presenta- tion of the same measurement. Figure 4.3 Phase Noise Measurement. a) Block diagram of phase noise measurement b) Phase noise of a frequency synthesizer - RMS averaging and Hanning window used for noise measurements c) Logarithmic frequency axis presentation of phase noise normalized to a 1 Hz bandwidth (power spectral density)
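The normalization to a 1 Hz bandwidth used in Figure 4.3c, discussed further in the next paragraph, amounts to dividing each line's power by the equivalent noise bandwidth of the window. A sketch, with illustrative values and a Hanning window assumed:

import numpy as np

rng = np.random.default_rng(4)
fs, N = 10240.0, 1024
x = rng.normal(0, 1.0, N)                 # white noise, 1 V rms

win = np.hanning(N)
power = np.abs(np.fft.rfft(x * win)) ** 2

# Equivalent noise bandwidth (in Hz) of one line for this window:
enbw = fs * np.sum(win ** 2) / np.sum(win) ** 2
psd = 2.0 * power / (np.sum(win) ** 2 * enbw)      # V^2 per Hz, one-sided

# 1 V rms of white noise spread over 0 to fs/2 should give about 2/fs V^2/Hz.
print(np.mean(psd[1:-1]), 2.0 / fs)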
In addition, phase noise and other noise measurements are often normalized to the power that would be measured in a 1 Hz wide square filter. This measurement is called a power spectral density and is often provided on Dynamic Signal Analyzers. It simply changes the presentation on the display to this desired form; the data is exactly the same in Figures 4.3b and 4.3c, but the latter is in the more conventional presentation.

Rotating Machinery Characterization

A rotating machine can be thought of as a mechanical oscillator.* Therefore, many of the measurements we made for an electronic oscillator are also important in characterizing rotating machinery. To characterize a rotating machine we must first change its mechanical vibration into an electrical signal. This is often done by mounting an accelerometer on a bearing housing where the vibration generated by shaft imbalance and bearing imperfections will be the highest. A typical spectrum might look like Figure 4.4. It is obviously much more complicated than the relatively clean spectrum of the electronic oscillator we looked at previously. There is also a great deal of random noise: stray vibrations from sources other than our motor that the accelerometer picks up. The effects of this stray vibration have been minimized in Figure 4.4b by RMS averaging.

In Figure 4.5, we have used the Band Selectable Analysis capability of our analyzer to "zoom in" and separate the vibration of the stator at 120 Hz from the vibration caused by the rotor imbalance only a few tenths of a Hertz lower in frequency.** This ability to resolve closely spaced spectrum lines is crucial to our capability to diagnose why the vibration levels of a rotating machine are excessive. The actions we would take to correct an excessive vibration at 120 Hz are quite different if it is caused by a loose stator pole rather than an imbalanced rotor.

Since the bearings are the most unreliable part of most rotating machines, we would also like to check our spectrum for indications of bearing failure. Any defect in a bearing, say a spalling on the outer race of a ball bearing, will cause a small vibration to occur each time a ball passes it. This will produce a characteristic frequency in the vibration called the passing frequency. The frequency domain is ideal for

Figure 4.4 Spectrum of electrical motor vibration.
Figure 4.5 Stator vibration and rotor imbalance measurement with Band Selectable Analysis.
Figure 4.6 Vibration caused by small defect in the bearing.

* Or, if you prefer, electronic oscillators can be viewed as rotating machines which can go at millions of RPM's.
** The rotor in an AC induction motor always runs at a slightly lower frequency than the excitation, an effect called slippage.
  • 52. 52 separating this small vibration from all the other frequencies present. This means that we can detect impending bearing failures and schedule a shut- down long before they become the loudly squealing problem that signals an immediate shutdown is necessary. In most rotating machinery monitor- ing situations, the absolute level of each vibration component is not of interest, just how they change with time. The machine is measured when new and throughout its life and these successive spectra are compared. If no catastrophic failures develop, the spectrum components will increase gradually as the machine wears out. However, if an impending bearing failure develops, the passing frequency component corresponding to the defect will increase suddenly and dramatically. An excellent way to store and com- pare these spectra is by using a small desktop computer. The spectra can be easily entered into the computer by an instrument interface like GPIB* and compared with previous results by a trend analysis program. This avoids the tedious and error-prone task of generating trend graphs by hand. In addition, the computer can easily check the trends against limits, pointing out where vibration limits are exceeded or where the trend is for the limit to be exceeded in the near future. Desktop computers are also useful when analyzing machinery that normally operates over a wide range of speeds. Severe vibration modes can be excited when the machine runs at critical speeds. A quick way to determine if these vibrations are a problem is to take a succession of spectra as the machine runs up to speed or coasts down. Each spectrum shows the vibration components of the machine as it passes through an rpm range. If each spectrum is transferred to the computer via GPIB, the results can be processed and displayed as in Figure 4.8. From such a display it is easy to see shaft imbalances, constant frequency vibra- tions (from sources other than the variable speed shaft) and structural vibrations excited by the rotating shaft. The computer gives the capability of changing the display presentation to other forms for greater clarity. Because all the values of the spectra are stored in memory, precise values of the vibration com- ponents can easily be determined. In addition, signal processing can be used to clarify the display. For instance, in Figure 4.8 all signals below -70 dB were ignored. This eliminates meaningless noise from the plot, clarifying the presentation. So far in this chapter we have been discussing only single channel fre- quency domain measurements. Let us now look at some measurements we can make with a two channel Dynamic Signal Analyzer. Figure 4.7 Desktop computer system for monitoring rotating machinery vibration. Motor Computer Dynamic Signal Analyzer Accelerometer GPIB Figure 4.8 Run up test from the system in Figure 4.7. * General Purpose Interface Bus, Agilent’s implementation of IEEE-488-1975.
  • 53. 53 Electronic Filter Characterization In Section 6 of the last chapter, we developed most of the principles we need to characterize a low frequency electronic filter. We show the test setup we might use in Figure 4.9. Because the filter is linear we can use pseudo-random noise as the stimulus for very fast test times. The uniform window is used because the pseudo- random noise is periodic in the time record.* No averaging is needed since the signal is periodic and rea- sonably large. We should be careful, as in the single channel case, to set the input sensitivity for both channels to the most sensitive position which does not overload the analog to digital converters. With these considerations in mind, we get a frequency response magni- tude shown in Figure 4.10a and the phase shown in Figure 4.10b. The primary advantage of this measure- ment over traditional swept analysis techniques is speed. This measure- ment can be made in 1/8 second with a Dynamic Signal Analyzer, but would take over 30 seconds with a swept network analyzer. This speed improvement is particularly impor- tant when the filter under test is being adjusted or when large volumes are tested on a production line. Structural Frequency Response The network under test does not have to be electronic. In Figure 4.11, we are measuring the frequency response of a single structure, in this case a printed circuit board. Because this structure behaves in a linear fashion, Figure 4.10 Frequency response of electronic filter using PRN and uniform window. Figure 4.9 Test setup to measure frequency response of filter. * See the uniform window discussion in Section 6 of the previous chapter for details. Figure 4.11 Frequency response test of a mechanical structure. a) Frequency response magnitude b) Frequency response magnitude and phase
  • 54. 54 we can use pseudo-random noise as a test stimulus. But we might also desire to use true random noise, swept-sine or an impulse (hammer blow) as the stimulus. In Figure 4.12 we show each of these measurements and the frequency responses. As we can see, the results are all the same. The frequency response of a linear network is a property solely of the network, independent of the stimulus used. Since all the stimulus techniques in Figure 4.12 give the same results, we can use whichever one is fastest and easiest. Usually this is the impact stimulus, since a shaker is not required. In Figure 4.11 and 4.12, we have been measuring the acceleration of the structure divided by the force applied. This quality is called mechanical accelerance. To properly scale the displays to the required g’s/lb, we have entered the sensitivi- ties of each transducer into the analyzer by a feature called engineer- ing units. Engineering units simply changes the gain of each channel of the analyzer so that the display corresponds to the physical parame- ter that the transducer is measuring. Other frequency response measure- ments besides mechanical acceler- ance are often made on mechanical structures. Figure 4.14 lists these measurements. By changing transduc- ers we could measure any of these parameters. Or we can use the com- putational capability of the Dynamic Signal Analyzer to compute these measurements from the mechanical impedance measurement we have already made. Figure 4.12 Frequency response of a linear network is independent of the stimulus used. a) Impact stimulus Figure 4.13 Engineering units set input sensitivities to properly scale results. b) Random noise stimulus c) Swept sine stimulus
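Before moving on, the two-channel frequency response measurement described in this section (a pseudo-random noise stimulus that is periodic in the time record, a uniform window, and a single record) can be sketched end to end. The filter under test, the sample rate and the record length below are assumptions for illustration, not a model of any particular device.

import numpy as np
from scipy import signal

fs, N = 10240.0, 1024                        # 1024-point record, 10 Hz lines
rng = np.random.default_rng(0)

# One period of pseudo-random noise: equal amplitude on every line, random phase
X = np.zeros(N // 2 + 1, dtype=complex)
X[1:N // 2] = np.exp(1j * rng.uniform(0, 2 * np.pi, N // 2 - 1))
prn = np.fft.irfft(X)
stimulus = np.tile(prn, 32)                  # repeat so the network reaches steady state

b, a = signal.butter(4, 1000.0, fs=fs)       # hypothetical low-pass filter under test
response = signal.lfilter(b, a, stimulus)

x = stimulus[-N:]                            # last record: the PRN is periodic in the
y = response[-N:]                            # record, so one uniform-window record suffices

Xf, Yf = np.fft.rfft(x), np.fft.rfft(y)
lines = slice(1, N // 2)                     # skip DC and the Nyquist line (no stimulus there)
H = Yf[lines] / Xf[lines]                    # frequency response: magnitude and phase
f = np.fft.rfftfreq(N, 1.0 / fs)[lines]

k = np.argmin(np.abs(f - 1000.0))
print(20 * np.log10(np.abs(H[k])), np.degrees(np.angle(H[k])))   # about -3 dB at the cutoff

The same computational capability can also convert between the related mechanical quantities listed in Figure 4.14, as described next.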
Coherence

Up to this point, we have been measuring networks which we have been able to isolate from the rest of the world. That is, the only stimulus to the network is what we apply and the only response is that caused by this controlled stimulus. This situation is often encountered in testing components, e.g., electric filters or parts of a mechanical structure. However, there are times when the components we wish to test can not be isolated from other disturbances. For instance, in electronics we might be trying to measure the frequency response of a switching power supply which has a very large component at the switching frequency. Or we might try to measure the frequency response of part of a machine while other machines are creating severe vibration.

In Figure 4.15 we have simulated these situations by adding noise and a 1 kHz signal to the output of an electronic filter. The measured frequency response is shown in Figure 4.16. RMS averaging has reduced the noise contribution, but has not completely eliminated the 1 kHz interference.* If we did not know of the interference, we would think that this filter has an additional resonance at 1 kHz.

But Dynamic Signal Analyzers can often make an additional measurement that is not available with traditional network analyzers called coherence. Coherence measures the power in the response channel that is caused by the power in the reference channel. It is the output power that is coherent with the input power.

Figure 4.17 shows the same frequency response magnitude from Figure 4.16 and its coherence. The coherence goes from 1 (all the output power at that frequency is caused by the input) to 0 (none of the output power at that frequency is caused by the input). We can easily see from the coherence function that the response at 1 kHz is not caused by the input but by interference. However, our filter response near 500 Hz has excellent coherence and so the measurement here is good.

Figure 4.15 Simulation of frequency response measurement in the presence of noise.
Figure 4.16 Magnitude of frequency response.
Figure 4.17 Magnitude and coherence of frequency response.

* Additional averaging would further reduce this interference.
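The same situation is easy to reproduce in software. The sketch below drives a stand-in low-pass filter with random noise and adds an unrelated 1 kHz tone plus noise at its output, in the spirit of Figure 4.15. The Welch-style averaging inside scipy.signal.coherence plays the role of the analyzer's RMS averaging; all parameter values are assumptions.

```python
import numpy as np
from scipy import signal

fs = 10_000
t = np.arange(200_000) / fs
rng = np.random.default_rng(1)

x = rng.standard_normal(t.size)              # stimulus: random noise
b, a = signal.butter(4, 500, fs=fs)          # stand-in for the filter under test
y = signal.lfilter(b, a, x)
y += 0.5 * np.sin(2 * np.pi * 1000 * t)      # unrelated 1 kHz interference at the output
y += 0.05 * rng.standard_normal(t.size)      # a little measurement noise

f, coh = signal.coherence(x, y, fs=fs, nperseg=1024)
print("coherence near 250 Hz:", round(coh[np.argmin(abs(f - 250))], 3))   # close to 1
print("coherence at 1 kHz:   ", round(coh[np.argmin(abs(f - 1000))], 3))  # well below 1
```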
Section 2: Time Domain Measurements

A Dynamic Signal Analyzer usually has the capability of displaying the time record on its screen. This is the same waveform we would see with an oscilloscope, a time domain view of the input. For very low frequency or single-shot phenomena, the digital time record storage eliminates the need for a storage oscilloscope. But there are other time domain measurements that a Dynamic Signal Analyzer can make as well. These are called correlation measurements. We will begin this section by defining correlation and then we will show how to make these measurements with a Dynamic Signal Analyzer.

Correlation is a measure of the similarity between two quantities. To understand the correlation between two waveforms, let us start by multiplying these waveforms together at each instant in time and adding up all the products. If, as in Figure 4.18, the waveforms are identical, every product is positive and the resulting sum is large. If however, as in Figure 4.19, the two records are dissimilar, then some of the products would be positive and some would be negative. There would be a tendency for the products to cancel, so the final sum would be smaller.

Now consider the waveform in Figure 4.20a, and the same waveform shifted in time, Figure 4.20b. If the time shift were zero, then we would have the same conditions as before, that is, the waveforms would be in phase and the final sum of the products would be large. If the time shift between the two waveforms is made large however, the waveforms appear dissimilar and the final sum is small.

Figure 4.18 Correlation of two identical signals.
Figure 4.19 Correlation of two different signals.
Figure 4.20 Correlation of time displaced signals.
Going one step farther, we can find the average product for each time shift by dividing each final sum by the number of products contributing to it. If we now plot the average product as a function of time shift, the resulting curve will be largest when the time shift is zero and will diminish to zero as the time shift increases. This curve is called the auto-correlation function of the waveform. It is a graph of the similarity (or correlation) between a waveform and itself, as a function of the time shift.

The auto-correlation function is easiest to understand if we look at a few examples. The random noise shown in Figure 4.21 is not similar to itself with any amount of time shift (after all, it is random), so its auto-correlation has only a single spike at the point of 0 time shift. Pseudo-random noise, however, repeats itself periodically, so when the time shift equals a multiple of the period, the auto-correlation repeats itself exactly as in Figure 4.22. These are both special cases of a more general statement; the auto-correlation of any periodic waveform is periodic and has the same period as the waveform itself.

Figure 4.21 Auto correlation of random noise. a) Time record of random noise b) Auto correlation of random noise
Figure 4.22 Auto correlation of pseudo-random noise.
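This shift, multiply and average recipe translates directly into a few lines of code. The sketch below only illustrates the definition (the signals and record lengths are arbitrary choices); a real analyzer computes the same function far more efficiently.

```python
import numpy as np

def auto_correlation(x, max_shift):
    """Average product of x with a copy of itself shifted by 0..max_shift-1 samples."""
    n = len(x)
    return np.array([np.mean(x[:n - k] * x[k:]) for k in range(max_shift)])

rng = np.random.default_rng(2)
noise = rng.standard_normal(4096)                     # random noise
sine = np.sin(2 * np.pi * np.arange(4096) / 64)       # periodic, 64 samples per period

r_noise = auto_correlation(noise, 256)
r_sine = auto_correlation(sine, 256)

# Random noise: large only at zero shift. Periodic signal: repeats every period.
print("noise:", round(r_noise[0], 2), round(r_noise[64], 2))
print("sine: ", round(r_sine[0], 2), round(r_sine[64], 2))
```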
This can be useful when trying to extract a signal hidden by noise. Figure 4.24a shows what looks like random noise, but there is actually a low level sine wave buried in it. We can see this in Figure 4.24b, where we have taken 100 averages of the auto-correlation of this signal. The noise has become the spike around a time shift of zero, whereas the auto-correlation of the sine wave is clearly visible, repeating itself with the period of the sine wave.

If a trigger signal that is synchronous with the sine wave is available, we can extract the signal from the noise by linear averaging as in the last section. But the important point about the auto-correlation function is that no synchronizing trigger is needed. In signal identification problems like radio astronomy and passive sonar, a synchronizing signal is not available and so auto-correlation is an important tool. The disadvantage of auto-correlation is that the input waveform is not preserved as it is in linear averaging.

Since we can transform any time domain waveform into the frequency domain, the reader may wonder what is the frequency transform of the auto-correlation function? It turns out to be the magnitude squared of the spectrum of the input. Thus, there is really no new information in the auto-correlation function; we had the same information in the spectrum of the signal. But as always, a change in perspective between these two domains often clarifies problems.

Figure 4.23 Auto-correlation of periodic waveforms.
Figure 4.24 Auto-correlation of a sine wave buried by noise.
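A short simulation in the same spirit as Figure 4.24: a sine wave five times smaller than the noise, pulled out by averaging the auto-correlations of 100 independent records. Nothing in the code is synchronized to the sine wave; the amplitudes and record lengths are arbitrary illustration values.

```python
import numpy as np

rng = np.random.default_rng(3)
n, period, records, shifts = 2048, 100, 100, 256
t = np.arange(n)

avg = np.zeros(shifts)
for _ in range(records):
    x = 0.2 * np.sin(2 * np.pi * t / period + rng.uniform(0, 2 * np.pi))  # buried sine
    x += rng.standard_normal(n)                                           # much larger noise
    avg += np.array([np.mean(x[:n - k] * x[k:]) for k in range(shifts)])
avg /= records

# The noise collapses into a spike near zero shift, while the sine's auto-correlation
# (a cosine with the same 100 sample period) survives at shifts of 100 and 200 samples.
print(round(avg[0], 3), round(avg[100], 3), round(avg[200], 3))
```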
In general, impulsive type signals like pulse trains, bearing ping or gear chatter show up better in correlation measurements, while signals with several sine waves of different frequencies like structural vibrations and rotating machinery are clearer in the frequency domain.

Cross Correlation

If auto-correlation is concerned with the similarity between a signal and a time shifted version of itself, then it is reasonable to suppose that the same technique could be used to measure the similarity between two non-identical waveforms. This is called the cross correlation function. If the same signal is present in both waveforms, it will be reinforced in the cross correlation function, while any uncorrelated noise will be reduced. In many network analysis problems, the stimulus can be cross correlated with the response to reduce the effects of noise. Radar, active sonar, room acoustics and transmission path delays all are network analysis problems where the stimulus can be measured and used to remove contaminating noise from the response by cross correlation.*

Figure 4.25 Simulated radar cross correlation. a) 'Transmitted' signal, a swept-frequency sine wave b) 'Received' signal, the swept sine wave plus noise c) Result of cross correlation of the transmitted and received signals. Distance from left edge to peak represents transmission delay.
Figure 4.26 Cross correlation shows multiple transmission paths.

* The frequency transform of the cross correlation function is the cross power spectrum, a function discussed in Appendix A.
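A simulation in the spirit of Figure 4.25: cross correlating a 'transmitted' swept sine with a delayed, noisy 'received' copy recovers the transmission delay even though the received record looks like noise. The sample rate, sweep, delay and noise level are made-up values for illustration.

```python
import numpy as np
from scipy import signal

fs = 10_000
t = np.arange(0, 0.5, 1 / fs)
tx = signal.chirp(t, f0=100, f1=2000, t1=t[-1])         # 'transmitted' swept-frequency sine

delay_samples = 800                                     # true transmission delay: 80 ms
rng = np.random.default_rng(4)
rx = np.concatenate([np.zeros(delay_samples), tx])[:tx.size]
rx += 2.0 * rng.standard_normal(tx.size)                # 'received' signal, buried in noise

rxy = signal.correlate(rx, tx, mode="full")             # cross correlation
lags = signal.correlation_lags(rx.size, tx.size, mode="full")
print("estimated delay:", 1000 * lags[np.argmax(rxy)] / fs, "ms")   # about 80 ms
```

A second, weaker peak would appear if the signal also arrived over a longer path, which is what Figure 4.26 shows.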
Section 3: Modal Domain Measurements

In Section 1 we learned how to make frequency domain measurements of mechanical structures with Dynamic Signal Analyzers. Let us now analyze the behavior of a simple mechanical structure to understand how to make measurements in the modal domain. We will test a simple metal plate shown in Figure 4.27. The plate is freely suspended using rubber cords in order to isolate it from any object which would alter its properties.

The first decision we must make in analyzing this structure is how many measurements to make and where to make them on the structure. There are no firm rules for this decision; good engineering judgment must be exercised instead. Measuring too many points makes the calculations unnecessarily complex and time consuming. Measuring too few points can cause spatial aliasing; i.e., the measurement points are so far apart that high frequency bending modes in the structure can not be measured accurately. To decide on a reasonable number of measurement points, take a few trial frequency response measurements of the structure to determine the highest significant resonant frequencies present. The wavelength can be determined empirically by changing the distance between the stimulus and the sensor until a full 360° phase shift has occurred from the original measurement point. Measurement point spacing should be approximately one-quarter or less of this wavelength.

Measurement points can be spaced uniformly over the structure using this guideline, but it may be desirable to modify this procedure slightly. Few structures are as uniform as this simple plate example,* but complicated structures are made of simpler, more uniform parts. The behavior of the structure at the junction of these parts is often of great interest, so measurements should be made in these critical areas as well.

Once we have decided on where the measurements should be taken, we number these measurement points (the order can be arbitrary) and enter the coordinates of each point into our modal analyzer. This is necessary so that the analyzer can correlate the measurements we make with a position on the structure to compute the mode shapes.

The next decision we must make is what signal we should use for a stimulus. Our plate example is a linear structure as it has no loose rivet joints, non-linear damping materials, or other non-linearities. Therefore, we know that we can use any of the stimuli described in Chapter 3, Section 6. In this case, an impulse would be a particularly good test signal. We could supply the impulse by hitting the structure with a hammer equipped with a force transducer.

Figure 4.27 Modal analysis example - Determine the modes in this simple plate.
Figure 4.28 Spatial aliasing - Too few measurement points lead to inaccurate analysis of high frequency bending modes.

* If all structures were this simple, there would be no need for modal analysis.
This is probably the easiest way to excite the structure, as a shaker and its associated driver are not required. As we saw in the last chapter, however, if the structure were non-linear, then random noise would be a good test signal. To supply random noise to the structure we would need to use a shaker. To keep our example more general, we will use random noise as a stimulus.

The shaker is connected firmly to the plate via a load cell (force transducer) and excited by the band-limited noise source of the analyzer. Since this force is the network stimulus, the load cell output is connected through a suitable amplifier to the reference channel of the analyzer. To begin the experiment, we connect an accelerometer* to the plate at the same point as the load cell. The accelerometer measures the structure's response and its output is connected to the other analyzer channel.

Because we are using random noise, we will use a Hanning window and RMS averaging just as we did in the previous section. The resulting frequency response of this measurement is shown in Figure 4.29. The ratio of acceleration to force in g's/lb is plotted on the vertical axis by the use of engineering units, and the data shows a number of distinct peaks and valleys at particular frequencies. We conclude that the plate moves more freely when subjected to energy at certain specific frequencies than it does in response to energy at other frequencies. We recall that each of the resonant peaks corresponds to a mode of vibration of the structure.

Our simple plate supports a number of different modes of vibration, all of which are well separated in frequency. Structures with widely separated modes of vibration are relatively straightforward to analyze since each mode can be treated as if it is the only one present. Tightly spaced, but lightly damped vibration modes can also be easily analyzed if the Band Selectable Analysis capability is used to narrow the analyzer's filter sufficiently to resolve these resonances. Tightly spaced modes whose damping is high enough to cause the responses to overlap create computational difficulties in trying to separate the effects of the vibration modes. Fortunately, many structures fall into the first two categories and so can be easily analyzed.

Having inspected the measurement and decided that it met all the above criteria, we can store it away. We store similar measurements at each point by moving our accelerometer to each numbered point. We will then have all the measurement data we need to fully characterize the structure in the modal domain.

Recall from Chapter 2 that each frequency response will have the same number of peaks, with the same resonant frequencies and dampings. The next task is to determine these resonant frequency and damping values for each resonance of interest. We do this by retrieving our stored frequency responses and, using a curve-fitting routine, we calculate the frequency and damping of each resonance of interest. With the structural information we entered earlier, and the frequency and damping of each vibration mode which we have just determined, the analyzer can calculate the mode shapes by curve fitting the responses of each point with the measured resonances.

Figure 4.29 A frequency response of the plate.

* Displacement, velocity or strain transducers could also be used, but accelerometers are often used because they are small and light, and therefore do not affect the response of the structure. In addition, they are easy to mount on the structure, reducing the total measurement time.
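The curve-fitting routine itself is beyond the scope of this note. As a rough illustration of how a resonant frequency and damping value can be read from one well separated, lightly damped peak, the sketch below applies the classical half-power (-3 dB) bandwidth rule to a synthetic single-mode response; this is a simplification for illustration, not the analyzer's algorithm.

```python
import numpy as np

freqs = np.linspace(50, 150, 2001)                   # Hz
fn, zeta = 100.0, 0.02                               # synthetic mode: 100 Hz, 2% damping
r = freqs / fn
mag = 1.0 / np.sqrt((1 - r**2) ** 2 + (2 * zeta * r) ** 2)   # single-mode magnitude response

peak = np.argmax(mag)
half_power = mag[peak] / np.sqrt(2)                  # the -3 dB level
band = np.where(mag >= half_power)[0]
f1, f2 = freqs[band[0]], freqs[band[-1]]             # half-power frequencies

Q = freqs[peak] / (f2 - f1)                          # quality factor from the bandwidth
print("resonant frequency:", freqs[peak], "Hz")
print("estimated damping ratio:", round(1 / (2 * Q), 4), "(true value:", zeta, ")")
```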
In Figure 4.30 we show several mode shapes of our simple rectangular plate. These mode shapes can be animated on the display to show the relative motion of the various parts of the structure. The graphs in Figure 4.30, however, only show the maximum deflection.

Section 4: Summary

This note has attempted to demonstrate the advantages of expanding one's analysis capabilities from the time domain to the frequency and modal domains. Problems that are difficult in one domain are often clarified by a change in perspective to another domain. The Dynamic Signal Analyzer is a particularly good analysis tool at low frequencies. Not only can it work in all three domains, it is also very fast.

We have developed heuristic arguments as to why Dynamic Signal Analyzers have certain properties because understanding the principles of these analyzers is important in making good measurements. Finally, we have shown how Dynamic Signal Analyzers can be used in a wide range of measurement situations using relatively simple examples. We have used simple examples throughout this text to develop understanding of the analyzer and its measurements, but it is by no means limited to such cases. It is a powerful instrument that, in the hands of an operator who understands the principles developed in this note, can lead to new insights and analysis of problems.

Figure 4.30 Mode shapes of a rectangular plate.
Appendix A
The Fourier Transform: A Mathematical Background

The Fourier Transform

The transformation from the time domain to the frequency domain and back again is based on the Fourier Transform and its inverse. This Fourier Transform pair is defined as:

S_x(f) = \int_{-\infty}^{\infty} x(t)\, e^{-j 2\pi f t}\, dt   (Forward Transform)   A.1

x(t) = \int_{-\infty}^{\infty} S_x(f)\, e^{j 2\pi f t}\, df   (Inverse Transform)   A.2

where
x(t) = time domain representation of the signal x
S_x(f) = frequency domain representation of the signal x
j = \sqrt{-1}

The Fourier Transform is valid for both periodic* and non-periodic x(t) that satisfy certain minimum conditions. All signals encountered in the real world easily satisfy these requirements.

The Discrete Fourier Transform

To compute the Fourier Transform digitally, we must perform a numerical integration. This will give us an approximation to a true Fourier Transform called the Discrete Fourier Transform.

There are three distinct difficulties with computing the Fourier Transform. First, the desired result is a continuous function. We will only be able to calculate its value at discrete points. With this constraint our transform becomes,

S_x(m \Delta f) = \int_{-\infty}^{\infty} x(t)\, e^{-j 2\pi m \Delta f t}\, dt   A.3

where m = 0, ±1, ±2, ... and \Delta f = frequency spacing of our lines.

The second problem is that we must evaluate an integral. This is equivalent to computing the area under a curve. We will do this by adding together the areas of narrow rectangles under the curve as in Figure A.1. Our transform now becomes:

S_x(m \Delta f) \approx \Delta t \sum_{n=-\infty}^{\infty} x(n \Delta t)\, e^{-j 2\pi m \Delta f n \Delta t}   A.4

where \Delta t = time interval between samples.

The last problem is that even with this summation approximation to the integral, we must sum samples over all time from minus to plus infinity. We would have to wait forever to get a result. Clearly then, we must limit the transform to a finite time interval.

S_x(m \Delta f) \approx \Delta t \sum_{n=0}^{N-1} x(n \Delta t)\, e^{-j 2\pi m \Delta f n \Delta t}   A.5

As developed in Chapter 3, the frequency spacing between the lines must be the reciprocal of the time record length. Therefore, we can simplify A.5 to our formula for the Discrete Fourier Transform, S'_x:

S'_x(m \Delta f) \approx \frac{T}{N} \sum_{n=0}^{N-1} x(n \Delta t)\, e^{-j 2\pi m n / N}   A.6

Figure A.1 Numerical integration used in the Fourier Transform.

* The Fourier Series is a special case of the Fourier Transform.
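Equation A.6 can be evaluated directly, term by term, and checked against a library FFT scaled by the same Δt = T/N factor. The record length, sample count and test signal below are arbitrary choices for illustration.

```python
import numpy as np

N = 64                                    # number of samples in the time record
T = 1.0                                   # time record length in seconds, so delta-f = 1 Hz
dt = T / N
t = np.arange(N) * dt
x = np.sin(2 * np.pi * 5 * t) + 0.5       # a 5 Hz sine plus a DC offset

def dft(x, T):
    """Direct evaluation of the sum in equation A.6."""
    N = len(x)
    n = np.arange(N)
    return (T / N) * np.array([np.sum(x * np.exp(-2j * np.pi * m * n / N)) for m in range(N)])

S_direct = dft(x, T)
S_fft = dt * np.fft.fft(x)                # the FFT computes the same sum, far faster

print(np.allclose(S_direct, S_fft))       # True: identical results
print(abs(S_direct[0]), abs(S_direct[5])) # lines at DC and at 5 Hz
```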
The Fast Fourier Transform

The Fast Fourier Transform (FFT) is an algorithm for computing this Discrete Fourier Transform (DFT). Before the development of the FFT, the DFT required excessive amounts of computation time, particularly when high resolution was required (large N). The FFT forces one further assumption, that N is a power of 2. This allows certain symmetries to occur, reducing the number of calculations (specifically multiplications) which have to be done.

It is important to recall here that the Fast Fourier Transform is only an approximation to the desired Fourier Transform. First, the FFT only gives samples of the Fourier Transform. Second and more important, it is only a transform of a finite time record of the input.

Two Channel Frequency Domain Measurements

As was pointed out in the main text, two channel measurements are often needed with a Dynamic Signal Analyzer. In this section we will mathematically define the two channel transfer function and coherence measurements introduced in Chapter 4 and prove their more important properties. However, before we do this, we wish to introduce one other function, the Cross Power Spectrum, G_yx. This function is not often used in measurement situations, but is used internally by Dynamic Signal Analyzers to compute transfer functions and coherence.

The Cross Power Spectrum, G_yx, is defined as taking the Fourier Transforms of two signals separately and multiplying the results together as follows:

G_{yx}(f) = S_y(f)\, S_x^*(f)

where * indicates the complex conjugate of the function.

With this function, we can define the Transfer Function, H(f), using the cross power spectrum and the spectrum of the input channel as follows:

H(f) = \frac{\langle G_{yx}(f) \rangle}{\langle G_{xx}(f) \rangle}

where ⟨ ⟩ denotes the average of the function.

At first glance it may seem more appropriate to compute the transfer function as follows:

|H(f)|^2 = \frac{G_{yy}(f)}{G_{xx}(f)}

This is the ratio of two single channel, averaged measurements. Not only does this measurement not give any phase information, it also will be in error when there is noise in the measurement. To see why, let us solve the equations for the special case where noise is injected into the output as in Figure A.2. The output is:

S_y(f) = S_x(f)\, H(f) + S_n(f)

So

G_{yy} = S_y S_y^* = G_{xx}|H|^2 + S_x H S_n^* + S_x^* H^* S_n + |S_n|^2

Figure A.2 Transfer function measurements with noise present.
If we RMS average this result to try to eliminate the noise, we find the S_x S_n terms approach zero because S_x and S_n are uncorrelated. However, the |S_n|^2 term remains as an error and so we get

\frac{\langle G_{yy} \rangle}{\langle G_{xx} \rangle} = |H|^2 + \frac{\langle |S_n|^2 \rangle}{\langle G_{xx} \rangle}

Therefore, if we try to measure |H|^2 by this single channel technique, our value will be high by the noise to signal ratio.

If instead we average the cross power spectrum, we will eliminate this noise error. Using the same example,

G_{yx} = S_y S_x^* = (S_x H + S_n) S_x^* = G_{xx} H + S_n S_x^*

so

\frac{\langle G_{yx} \rangle}{\langle G_{xx} \rangle} = H(f) + \frac{\langle S_n S_x^* \rangle}{\langle G_{xx} \rangle}

Because S_n and S_x are uncorrelated, the second term will average to zero, making this function a much better estimate of the transfer function.

The Coherence Function, γ², is also derived from the cross power spectrum by:

\gamma^2(f) = \frac{|\langle G_{yx}(f) \rangle|^2}{\langle G_{xx}(f) \rangle\, \langle G_{yy}(f) \rangle}

As stated in the main text, the coherence function is a measure of the power in the output signal caused by the input. If the coherence is 1, then all the output power is caused by the input. If the coherence is 0, then none of the output is caused by the input. Let us now look at the mathematics of the coherence function to see why this is so. As before, we will assume a measurement condition like Figure A.2. Then, as we have shown before,

G_{yy} = G_{xx}|H|^2 + S_x H S_n^* + S_x^* H^* S_n + |S_n|^2

G_{yx} = G_{xx} H + S_n S_x^*

As we average, the cross terms S_n S_x^* approach zero, assuming that the signal and the noise are not related. So the coherence becomes

\gamma^2(f) = \frac{|H\, G_{xx}|^2}{G_{xx}\,(|H|^2 G_{xx} + |S_n|^2)} = \frac{|H|^2 G_{xx}}{|H|^2 G_{xx} + |S_n|^2}

We see that if there is no noise, the coherence function is unity. If there is noise, then the coherence will be reduced. Note also that the coherence is a function of frequency. The coherence can be unity at frequencies where there is no interference and low where the noise is high.
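These results are easy to check numerically. In the sketch below, Welch-averaged spectra stand in for the analyzer's RMS averaging; a digital filter plays the role of H(f) and uncorrelated noise is added at its output as in Figure A.2. The averaged G_yy/G_xx ratio comes out high by the noise to signal ratio, the averaged cross power spectrum estimate does not, and the measured coherence follows the expression just derived. All parameter values are assumptions.

```python
import numpy as np
from scipy import signal

fs, nseg = 10_000, 1024
rng = np.random.default_rng(5)

x = rng.standard_normal(500 * nseg)                 # input: random noise
b, a = signal.butter(2, 1000, fs=fs)                # a stand-in for H(f)
noise = 0.5 * rng.standard_normal(x.size)           # Sn, uncorrelated with x
y = signal.lfilter(b, a, x) + noise                 # output with noise injected

f, Gxx = signal.welch(x, fs=fs, nperseg=nseg)       # averaged input power spectrum
f, Gyy = signal.welch(y, fs=fs, nperseg=nseg)       # averaged output power spectrum
f, Gnn = signal.welch(noise, fs=fs, nperseg=nseg)
f, Gyx = signal.csd(x, y, fs=fs, nperseg=nseg)      # averaged cross power spectrum
f, coh = signal.coherence(x, y, fs=fs, nperseg=nseg)

k = np.argmin(abs(f - 500))                         # look inside the passband
H_true = signal.freqz(b, a, worN=[500.0], fs=fs)[1][0]

print("|H|^2 true:           ", round(abs(H_true) ** 2, 3))
print("Gyy/Gxx (biased high):", round((Gyy / Gxx)[k], 3))
print("|<Gyx>/<Gxx>|^2:      ", round(abs((Gyx / Gxx)[k]) ** 2, 3))
print("coherence, measured:  ", round(coh[k], 3))
print("coherence, predicted: ", round((abs(H_true)**2 * Gxx[k]) /
                                       (abs(H_true)**2 * Gxx[k] + Gnn[k]), 3))
```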
Time Domain Measurements

Because it is sometimes easier to understand measurement problems from the perspective of the time domain, Dynamic Signal Analyzers often include several time domain measurements. These include auto and cross correlation and impulse response.

Auto Correlation, R_xx(τ), is a comparison of a signal with itself as a function of time shift. It is defined as:

R_{xx}(\tau) = \lim_{T \to \infty} \frac{1}{T} \int_{T} x(t)\, x(t+\tau)\, dt

That is, the auto correlation can be found by taking a signal and multiplying it by the same signal displaced by a time τ and averaging the product over all time. However, most Dynamic Signal Analyzers compute this quantity by taking advantage of its dual in the frequency domain. It can be shown that

R_{xx}(\tau) = F^{-1}[S_x(f)\, S_x^*(f)]

where F⁻¹ is the inverse Fourier Transform and S_x is the Fourier Transform of x(t). Since both techniques yield the same answer, the latter is usually chosen for Dynamic Signal Analyzers since the frequency transform algorithm is already in the instrument and the results can be computed faster because fewer multiplications are required.

Cross Correlation, R_xy(τ), is a comparison of two signals as a function of a time shift between them. It is defined as:

R_{xy}(\tau) = \lim_{T \to \infty} \frac{1}{T} \int_{T} x(t)\, y(t+\tau)\, dt

As in auto correlation, a Dynamic Signal Analyzer computes this quantity indirectly, in this case from the cross power spectrum:

R_{xy}(\tau) = F^{-1}[G_{yx}]

Lastly, the Impulse Response, h(t), is the dual of the transfer function:

h(t) = F^{-1}[H(f)]

Note that because the transfer function normalizes the stimulus, the impulse response can be computed no matter what stimulus is actually used on the network.

Appendix B
Bibliography

Bendat, Julius S. and Piersol, Allan G., "Random Data: Analysis and Measurement Procedures", Wiley-Interscience, New York, 1971.

Bendat, Julius S. and Piersol, Allan G., "Engineering Applications of Correlation and Spectral Analysis", Wiley-Interscience, New York, 1980.

Bracewell, R., "The Fourier Transform and its Applications", McGraw-Hill, 1965.

Cooley, J.W. and Tukey, J.W., "An Algorithm for the Machine Calculation of Complex Fourier Series", Mathematics of Computation, Vol. 19, No. 90, p. 297, April 1965.

McKinney, W., "Band Selectable Fourier Analysis", Hewlett-Packard Journal, April 1975, pp. 20-24.

Otnes, R.K. and Enochson, L., "Digital Time Series Analysis", John Wiley, 1972.

Potter, R. and Lortscher, J., "What in the World is Overlap Processing", Hewlett-Packard Santa Clara DSA/Laser Division "Update", Sept. 1978.

Ramsey, K.A., "Effective Measurements for Structural Dynamics Testing", Part 1, Sound and Vibration Magazine, November 1975, pp. 24-35.

Roth, P., "Effective Measurements Using Digital Signal Analysis", IEEE Spectrum, April 1971, pp. 62-70.

Roth, P., "Digital Fourier Analysis", Hewlett-Packard Journal, June 1970.

Welch, Peter D., "The Use of Fast Fourier Transform for the Estimation of Power Spectra: A Method Based on Time Averaging Over Short, Modified Periodograms", IEEE Transactions on Audio and Electroacoustics, Vol. AU-15, No. 2, June 1967, pp. 70-73.
Index

Aliasing 29
Anti-Alias Filter 31
Auto-Correlation 57, 65
Band Selectable Analysis 33, 51, 61
Coherence 55, 65
Correlation 56, 65
Cross Correlation 59, 66
Cross Power Spectrum 64
Damping 17, 61
Decibels (dB) 8, 49
Digital Filter 32
Discrete Fourier Transform 63
Engineering Units 55
Fast Fourier Transform (FFT) 25, 64
Flat-top Window or Passband 38, 49
Force Window 39
Fourier Transform 63
Frequency Response 15, 61
Frequency Response Analyzer 19
Gain-Phase Meter 19
Hanning Window or Passband 37, 49, 60
Impulse Response 66
Leakage 36
Linear Averaging 44
Linearity 12
Lines 25
Logarithmic Frequency Display 50
Network Analysis 11
Network Analyzer 19
Nyquist Criterion 31
Oscillograph 6
Oscilloscope 6
Parallel Filter Spectrum Analyzer 17
Periodic Random Noise 41, 53
Phase Noise 49
Power Spectral Density 51
Pseudo Random Noise 41, 53
Q (of resonance) 17
Random Noise 41
Real Time 45
Rectangular Window 37
Response Window 39
RMS Averaging 43, 46, 48, 50
Self-Windowing Functions 37
Spectrum 7
Spectrum Analyzer 17
Spectrum Component 7
Stimulus/Response Testing 11
Strip Chart Recorders 5
Swept Filter Spectrum Analyzer 18
Time Record 26
Transfer Function 42, 64
Transient Response 16
Tuned Network Analyzer 19
Uniform Window or Passband 37
Vibration Mode 20
Windowing 34
Zoom 33
Product specifications and descriptions in this document subject to change without notice.
Copyright © 1994, 1995, 1999, 2000 Agilent Technologies
Printed in U.S.A. 6/00
5952-8898E