Frequency Domain Ramp

On Fri, 25 Feb 2011 09:34:04 -0800 (PST), Bret Cahill
<Bret_E_Cahill@yahoo.com> wrote:

I'm not sure I understand what you're trying to do. Are you trying to
get a FFT whose "amplitude" is a linear (either positive- or negative-
going) ramp?

Yes.

You can take integer order or fractional order derivatives in the FFT
on Spice if you have something proportional to a ramp.

I'm not a Spice user, but all you need to do is take a
normal FFT and apply a tilt to it, then do an IFFT (if you
want to see the time waveform). Doesn't Spice allow you to
do this?

The tilt should be +6 dB/octave for a normal derivative (see
my prior post) or in your case some fraction of that. Note
that the raw FFT is linear in frequency and the "tilt" I am
talking about is linear in *log* frequency, so you need a
little exponential math here to generate the proper shape.
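Not a Spice recipe, but here is a minimal sketch of the tilt idea in Python (a hand-rolled DFT so it needs nothing beyond the standard library). The tilt is just multiplication of each bin by j*omega_k; for a fractional-order derivative you would raise that factor to the fractional power instead:

```python
import cmath
import math

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * math.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

def spectral_derivative(x):
    """Differentiate by 'tilting' the spectrum: multiply bin k by j*omega_k,
    where omega_k is the signed bin frequency in radians per sample."""
    N = len(x)
    X = dft(x)
    for k in range(N):
        f = k if k <= N // 2 else k - N   # bins above N/2 are negative frequencies
        if N % 2 == 0 and k == N // 2:
            f = 0                          # Nyquist bin has no well-defined sign
        X[k] *= 2j * math.pi * f / N
    return [v.real for v in idft(X)]

N = 64
x = [math.sin(2 * math.pi * 3 * n / N) for n in range(N)]   # 3 cycles per frame
dx = spectral_derivative(x)
# exact per-sample derivative of the input
exact = [(2 * math.pi * 3 / N) * math.cos(2 * math.pi * 3 * n / N)
         for n in range(N)]
err = max(abs(a - b) for a, b in zip(dx, exact))   # tiny: exact up to roundoff
```

This is exact only for signals that are periodic in the frame, which is the same caveat that comes up with windowing later in the thread.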

Best regards,


Bob Masta

DAQARTA v6.00
Data AcQuisition And Real-Time Analysis
www.daqarta.com
Scope, Spectrum, Spectrogram, Sound Level Meter
Frequency Counter, FREE Signal Generator
Pitch Track, Pitch-to-MIDI
Science with your sound card!
 
Bob Masta wrote:
On Fri, 25 Feb 2011 08:05:00 -0500, Phil Hobbs
<pcdhSpamMeSenseless@electrooptical.net> wrote:

Bob Masta wrote:
On Thu, 24 Feb 2011 09:10:50 -0800 (PST), Bret Cahill
<Bret_E_Cahill@yahoo.com> wrote:

Taking a time derivative after an FFT would be easy on SPICE if you
had some curve that would FFT into a ramp: Just multiply the function
by the ramp.

Not sure what you are ultimately trying to do, but note that
you can obtain the FFT of the time derivative by taking the
FFT of the raw waveform and applying a +6 dB/octave
"envelope" to that... essentially, you just tilt the
spectrum up at a 6 dB/octave slope.

This turns out to be very handy for measuring frequency
response of a system. Classically, one can apply an impulse
to the system and take the FFT to get the frequency
response. But an impulse is pretty narrow (one sample, in a
digital system), so it doesn't have much energy. A step
response, on the other hand, has a whole lot more. Since
the derivative of a step is an impulse, you can get the
frequency response by applying a step, taking the FFT, and
tilting it. This is so handy that I built this feature into
my Daqarta software. See "Frequency Response Measurement -
Step Response" at <http://www.daqarta.com/dw_0a0s.htm>.



Are you windowing the data before taking the DFT? Could get ugly otherwise.

No, in this case it's important to *not* window the data.
A window function (at least, any of the standard ones) has a
gradual onset and offset, for the specific purpose of
eliminating transients at the start/end of the FFT frame.
But here it is the onset that we are specifically interested
in. The transient response should be complete (for all
practical purposes) before the end of the frame, or else you
need more samples in the frame.

In general, you never want to window a transient or noise,
only a continuous wave. The FFT analysis presumes a
continuous wave, such that every frame is an identical copy
that can be spliced seamlessly head to tail. A real-world
continuous wave that does not contain an exact integer
number of cycles in the FFT frame will have a discontinuity
where the next frame is spliced, which results in "spectral
leakage" that appears as "skirts" on what would otherwise be
a single line in the spectrum. The window function provides
a gradual onset and offset to smooth out this discontinuity,
greatly reducing the spectral leakage.
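To put rough numbers on those skirts, here is a small self-contained Python illustration (mine, not from Daqarta): a sine with a half-integer number of cycles per frame, transformed with and without a Hann window:

```python
import cmath
import math

def dft_mag(x):
    N = len(x)
    return [abs(sum(x[n] * cmath.exp(-2j * math.pi * k * n / N)
                    for n in range(N))) for k in range(N)]

N = 64
# 10.5 cycles per frame: spliced copies don't line up, so leakage skirts appear
x = [math.sin(2 * math.pi * 10.5 * n / N) for n in range(N)]
hann = [0.5 - 0.5 * math.cos(2 * math.pi * n / N) for n in range(N)]

raw = dft_mag(x)
win = dft_mag([xi * wi for xi, wi in zip(x, hann)])

# skirt level well away from the peak, normalised to each spectrum's own peak
skirt_raw = raw[30] / max(raw)   # a few percent: the "skirts"
skirt_win = win[30] / max(win)   # orders of magnitude lower
```

The window trades a slightly wider main lobe for skirts that fall off much faster away from the peak.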

Interested readers may want to check out my "Gut Level
Fourier Transforms" series at
<http://www.daqarta.com/author.htm>.
In particular, see Part 5 "Dumping Spectral Leakage Out a
Window" <http://www.daqarta.com/eex05.htm>.

The usefulness of windowing isn't restricted to the noiseless CW case.
Unless your signal is actually periodic with the same period as the DFT,
failure to window will cause every bin to smear out all across the
interval. You may not notice it with white noise, because the spectrum
stays flat in that case, but there are other kinds of noise where you
certainly will.

If your signal is a pulse, so that it starts from zero somewhere in the
interval and decays back to zero by the end, then you effectively are in
the periodic case already. If not, windowing is good medicine.

Cheers

Phil Hobbs


--
Dr Philip C D Hobbs
Principal
ElectroOptical Innovations
55 Orchard Rd
Briarcliff Manor NY 10510
845-480-2058

email: hobbs (atsign) electrooptical (period) net
http://electrooptical.net
 
Bob Masta wrote:
On Fri, 25 Feb 2011 09:34:04 -0800 (PST), Bret Cahill
<Bret_E_Cahill@yahoo.com> wrote:

I'm not sure I understand what you're trying to do. Are you trying to
get a FFT whose "amplitude" is a linear (either positive- or negative-
going) ramp?

Yes.

You can take integer order or fractional order derivatives in the FFT
on Spice if you have something proportional to a ramp.


I'm not a Spice user, but all you need to do is take a
normal FFT and apply a tilt to it, then do an IFFT (if you
want to see the time waveform). Doesn't Spice allow you to
do this?

The tilt should be +6 dB/octave for a normal derivative (see
my prior post) or in your case some fraction of that. Note
that the raw FFT is linear in frequency and the "tilt" I am
talking about is linear in *log* frequency, so you need a
little exponential math here to generate the proper shape.

The best you can do for differentiation in a DFT is to take the central
finite difference, but the DFT of that operation is not a ramp, linear,
logarithmic, or exponential.

With an appropriately windowed transform, you can make a DFT look like
samples of the continuous-time Fourier transform of a band-limited
continuous time function.

Cheers

Phil Hobbs


--
Dr Philip C D Hobbs
Principal
ElectroOptical Innovations
55 Orchard Rd
Briarcliff Manor NY 10510
845-480-2058

email: hobbs (atsign) electrooptical (period) net
http://electrooptical.net
 
On 2/26/2011 10:02 AM, Phil Hobbs wrote:
The best you can do for differentiation in a DFT is to take the central
finite difference, but the DFT of that operation is not a ramp, linear,
logarithmic, or exponential.

Phil,

I don't know what you mean by "best" re: the central finite difference
in view of the need for using finite discrete transforms and/or windowed
data. As far as I know, a minimax FIR filter designed differentiator
approximates an imaginary ramp in frequency *up to* some cutoff
frequency - which is necessary to avoid aliasing, I believe. Another
approach to getting this is a Taylor approximation to the central finite
difference of the first derivative which apparently comes in closed form
(Khan and Ohba, 1999). I believe these approaches are common practice
in a case like this.

Here is an example of a length 51 differentiator of the minimax type
done using the Parks-McClellan program:
[2.83E-05 -7.68E-05 1.34E-04 -2.24E-04 3.53E-04 -5.32E-04 7.73E-04
-1.09E-03 1.50E-03 -2.01E-03 2.66E-03 -3.46E-03 4.44E-03
-5.64E-03 7.10E-03 -8.88E-03 1.11E-02 -1.38E-02 1.72E-02 -2.16E-02
2.76E-02 -3.63E-02 5.04E-02 -7.78E-02 1.58E-01
0
-1.58E-01 7.78E-02 -5.04E-02 3.63E-02 -2.76E-02 2.16E-02 -1.72E-02
1.38E-02 -1.11E-02 8.88E-03 -7.10E-03 5.64E-03 -4.44E-03
3.46E-03 -2.66E-03 2.01E-03 -1.50E-03 1.09E-03 -7.73E-04 5.32E-04
-3.53E-04 2.24E-04 -1.34E-04 7.68E-05
-2.83E-05]

Fred
 
Fred Marshall wrote:
On 2/26/2011 10:02 AM, Phil Hobbs wrote:
The best you can do for differentiation in a DFT is to take the central
finite difference, but the DFT of that operation is not a ramp, linear,
logarithmic, or exponential.

Phil,

I don't know what you mean by "best" re: the central finite difference
in view of the need for using finite discrete transforms and/or windowed
data. As far as I know, a minimax FIR filter designed differentiator
approximates an imaginary ramp in frequency *up to* some cutoff
frequency - which is necessary to avoid aliasing, I believe. Another
approach to getting this is a Taylor approximation to the central finite
difference of the first derivative which apparently comes in closed form
(Khan and Ohba, 1999). I believe these approaches are common practice in
a case like this.

Here is an example of a length 51 differentiator of the minimax type
done using the Parks-McClellan program:
[2.83E-05 -7.68E-05 1.34E-04 -2.24E-04 3.53E-04 -5.32E-04 7.73E-04
-1.09E-03 1.50E-03 -2.01E-03 2.66E-03 -3.46E-03 4.44E-03 -5.64E-03
7.10E-03 -8.88E-03 1.11E-02 -1.38E-02 1.72E-02 -2.16E-02 2.76E-02
-3.63E-02 5.04E-02 -7.78E-02 1.58E-01
0
-1.58E-01 7.78E-02 -5.04E-02 3.63E-02 -2.76E-02 2.16E-02 -1.72E-02
1.38E-02 -1.11E-02 8.88E-03 -7.10E-03 5.64E-03 -4.44E-03 3.46E-03
-2.66E-03 2.01E-03 -1.50E-03 1.09E-03 -7.73E-04 5.32E-04 -3.53E-04
2.24E-04 -1.34E-04 7.68E-05
-2.83E-05]

Fred

Thanks, that's pretty. I was perhaps a bit unclear in saying that the
first finite difference is the best you can do, but the point at issue
was what happens at frequencies near +-f_sample/2. The first finite
difference transforms to a sine wave with a 1/2 sample delay.
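That identity is easy to check numerically (my sketch, not Phil's): 1 - e^{-jw} = 2 sin(w/2) e^{j(pi/2 - w/2)}, i.e. a sine magnitude with a half-sample delay, which hugs the ideal jw ramp at low frequency and droops below it near Nyquist:

```python
import cmath
import math

def first_difference_response(w):
    # frequency response of y[n] = x[n] - x[n-1] at w radians/sample
    return 1 - cmath.exp(-1j * w)

for w in [0.01, 0.1, 0.5, 1.0, 2.0, math.pi]:
    H = first_difference_response(w)
    # magnitude is a sine, not a ramp
    assert abs(abs(H) - 2 * math.sin(w / 2)) < 1e-12
    # phase = ideal differentiator (pi/2) minus a half-sample delay (w/2)
    assert abs(cmath.phase(H) - (math.pi / 2 - w / 2)) < 1e-12

# near DC it approximates the ideal ramp |H| ~ w ...
assert abs(abs(first_difference_response(0.01)) / 0.01 - 1) < 1e-4
# ... but at Nyquist it gives 2 where the true derivative ramp wants pi
assert abs(first_difference_response(math.pi)) < math.pi
```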

Optimal differentiators can be looked at as a variant of windowing a
ramp, by using linear combinations of the first M finite differences to
make an M+1 order approximation to the derivative over some frequency
band, but they always roll off at high frequency, which is a key point.

Bob M. seems to be suggesting using a real ramp with no high-frequency
cutoff, which will give what the compiler manuals describe as
"unexpected results".

Cheers

Phil Hobbs


--
Dr Philip C D Hobbs
Principal
ElectroOptical Innovations
55 Orchard Rd
Briarcliff Manor NY 10510
845-480-2058

email: hobbs (atsign) electrooptical (period) net
http://electrooptical.net
 
On 2/26/2011 2:38 PM, Phil Hobbs wrote:
The first finite difference transforms to a sine wave with a 1/2 sample
delay.

Yes. I'm glad you didn't say cosine. It's an imaginary sine wave at
the lowest frequency. So, it has one period. As such, it goes from
positive to negative at fs/2. And, it's "ramp like" near f=0, since
sin(x) = x for x less than whatever small number you choose (0.25 gives
about 1% error). So, it's a reasonable differentiator for frequencies
below whatever limit you like there. Not so good at pi/2 or fs/4!!

The "imaginary" part seems to be getting lost in the "ramp" terminology.
Note that the FIR filter I provided is real and purely odd in time -
thus odd and purely imaginary in frequency (if centered at t=0). So,
any of these is an approximation to "jw" - an imaginary ramp.

Fred
 
On 26/02/2011 14:17, Bob Masta wrote:
On Fri, 25 Feb 2011 08:05:00 -0500, Phil Hobbs
<pcdhSpamMeSenseless@electrooptical.net> wrote:

Bob Masta wrote:
On Thu, 24 Feb 2011 09:10:50 -0800 (PST), Bret Cahill
<Bret_E_Cahill@yahoo.com> wrote:

Taking a time derivative after an FFT would be easy on SPICE if you
had some curve that would FFT into a ramp: Just multiply the function
by the ramp.

Not sure what you are ultimately trying to do, but note that
you can obtain the FFT of the time derivative by taking the
FFT of the raw waveform and applying a +6 dB/octave
"envelope" to that... essentially, you just tilt the
spectrum up at a 6 dB/octave slope.

This turns out to be very handy for measuring frequency
response of a system. Classically, one can apply an impulse
to the system and take the FFT to get the frequency
response. But an impulse is pretty narrow (one sample, in a
digital system), so it doesn't have much energy. A step
response, on the other hand, has a whole lot more. Since
the derivative of a step is an impulse, you can get the
frequency response by applying a step, taking the FFT, and
tilting it. This is so handy that I built this feature into
my Daqarta software. See "Frequency Response Measurement -
Step Response" at <http://www.daqarta.com/dw_0a0s.htm>.



Are you windowing the data before taking the DFT? Could get ugly otherwise.

No, in this case it's important to *not* window the data.
A window function (at least, any of the standard ones) has a
gradual onset and offset, for the specific purpose of
eliminating transients at the start/end of the FFT frame.
But here it is the onset that we are specifically interested
in. The transient response should be complete (for all
practical purposes) before the end of the frame, or else you
need more samples in the frame.

You will still get a better behaved result (ie one which is mainly due
to the core transient, with more predictable and well controlled
artefacts) if you put that in the middle of the window and tile it with
a phantom unmeasured vertical-axis reflection, time shifted (or rather
do a transform that uses that implied symmetry).

Otherwise you will have the result of your measured transient combined
with the very sharp step down at the end of the range and associated
high frequency components that may not be in your measured data.
In general, you never want to window a transient or noise,
only a continuous wave. The FFT analysis presumes a
continuous wave, such that every frame is an identical copy
that can be spliced seamlessly head to tail. A real-world
continuous wave that does not contain an exact integer
number of cycles in the FFT frame will have a discontinuity
where the next frame is spliced, which results in "spectral
leakage" that appears as "skirts" on what would otherwise be
a single line in the spectrum. The window function provides
a gradual onset and offset to smooth out this discontinuity,
greatly reducing the spectral leakage.

Windowing to avoid sharp edge discontinuity artefacts is usually
worthwhile - and for sampled data that might not be truly bandlimited
and subject to aliasing it can make sense to preconvolve the raw data
with a gridding function to get something that behaves much more like
the ideal DFT. Prolate spheroidal Bessel functions are amongst the
simplest which give real benefit in this application see for example:

http://www.astron.nl/aips++/docs/glossary/s.html#spheroidal_function

This method requires a multiplicative correction after the FFT but gives
a final result much closer to the DFT.

Interested readers may want to check out my "Gut Level
Fourier Transforms" series at
<http://www.daqarta.com/author.htm>.
In particular, see Part 5 "Dumping Spectral Leakage Out a
Window" <http://www.daqarta.com/eex05.htm>.

What you say here about power of two transforms is no longer true. FFTW
and other fast low prime transform kernels are now faster than naive
power of two implementations for some favourable sizes and competitive
for many more. In some cases due to associative cache problems on
certain processors exact powers of two are slower than they should be!

The pattern of FFT radix for which non power of two beats zero padding
to the next power of two for my local FFT implementation is:

5, 6, 9, 36, 80, 81, 144, 160, 320, 625, 640, 1152, 1250, 1280,
1296, 1536, 2187, 2304, 2500, 2460, 2592, 3072, 3125, 3200

Taking only the ones where there is a clear win. Quite a lot of others
are broadly competitive with the longer highly optimised 2^N transform.

3072 and 3125 are nearly 2x faster than an optimised radix-8 4096 and 3x
faster than the naive radix-2 implementation for instance. An optimised
split radix-16 transform would shave roughly another 10% off the 4096
time but would not change the outcome.

Regards,
Martin Brown
 
On 2/28/2011 2:54 AM, Martin Brown wrote:

The pattern of FFT radix for which non power of two beats zero padding
to the next power of two for my local FFT implementation is:

I wonder if there isn't some misunderstanding here? The notion of zero
padding to achieve a power of two seems an archaic concern - as you well
point out. However, zero padding to accommodate circular convolution is
common - much more common I should think.

Nonetheless, you've pointed out some things that are new info for me.
Thanks.

Fred
 
Martin Brown wrote:
On 26/02/2011 14:17, Bob Masta wrote:
On Fri, 25 Feb 2011 08:05:00 -0500, Phil Hobbs
<pcdhSpamMeSenseless@electrooptical.net> wrote:

Bob Masta wrote:
On Thu, 24 Feb 2011 09:10:50 -0800 (PST), Bret Cahill
<Bret_E_Cahill@yahoo.com> wrote:

Taking a time derivative after an FFT would be easy on SPICE if you
had some curve that would FFT into a ramp: Just multiply the function
by the ramp.

Not sure what you are ultimately trying to do, but note that
you can obtain the FFT of the time derivative by taking the
FFT of the raw waveform and applying a +6 dB/octave
"envelope" to that... essentially, you just tilt the
spectrum up at a 6 dB/octave slope.

This turns out to be very handy for measuring frequency
response of a system. Classically, one can apply an impulse
to the system and take the FFT to get the frequency
response. But an impulse is pretty narrow (one sample, in a
digital system), so it doesn't have much energy. A step
response, on the other hand, has a whole lot more. Since
the derivative of a step is an impulse, you can get the
frequency response by applying a step, taking the FFT, and
tilting it. This is so handy that I built this feature into
my Daqarta software. See "Frequency Response Measurement -
Step Response" at <http://www.daqarta.com/dw_0a0s.htm>.



Are you windowing the data before taking the DFT? Could get ugly
otherwise.

No, in this case it's important to *not* window the data.
A window function (at least, any of the standard ones) has a
gradual onset and offset, for the specific purpose of
eliminating transients at the start/end of the FFT frame.
But here it is the onset that we are specifically interested
in. The transient response should be complete (for all
practical purposes) before the end of the frame, or else you
need more samples in the frame.

You will still get a better behaved result (ie one which is mainly due
to the core transient, with more predictable and well controlled
artefacts) if you put that in the middle of the window and tile it with
a phantom unmeasured vertical-axis reflection, time shifted (or rather
do a transform that uses that implied symmetry).

Otherwise you will have the result of your measured transient combined
with the very sharp step down at the end of the range and associated
high frequency components that may not be in your measured data.

In general, you never want to window a transient or noise,
only a continuous wave. The FFT analysis presumes a
continuous wave, such that every frame is an identical copy
that can be spliced seamlessly head to tail. A real-world
continuous wave that does not contain an exact integer
number of cycles in the FFT frame will have a discontinuity
where the next frame is spliced, which results in "spectral
leakage" that appears as "skirts" on what would otherwise be
a single line in the spectrum. The window function provides
a gradual onset and offset to smooth out this discontinuity,
greatly reducing the spectral leakage.

Windowing to avoid sharp edge discontinuity artefacts is usually
worthwhile - and for sampled data that might not be truly bandlimited
and subject to aliasing it can make sense to preconvolve the raw data
with a gridding function to get something that behaves much more like
the ideal DFT. Prolate spheroidal Bessel functions are amongst the
simplest which give real benefit in this application see for example:

http://www.astron.nl/aips++/docs/glossary/s.html#spheroidal_function

This method requires a multiplicative correction after the FFT but gives
a final result much closer to the DFT.

Interested readers may want to check out my "Gut Level
Fourier Transforms" series at
<http://www.daqarta.com/author.htm>.
In particular, see Part 5 "Dumping Spectral Leakage Out a
Window" <http://www.daqarta.com/eex05.htm>.

What you say here about power of two transforms is no longer true. FFTW
and other fast low prime transform kernels are now faster than naive
power of two implementations for some favourable sizes and competitive
for many more. In some cases due to associative cache problems on
certain processors exact powers of two are slower than they should be!

The pattern of FFT radix for which non power of two beats zero padding
to the next power of two for my local FFT implementation is:

5, 6, 9, 36, 80, 81, 144, 160, 320, 625, 640, 1152, 1250, 1280,
1296, 1536, 2187, 2304, 2500, 2460, 2592, 3072, 3125, 3200

Taking only the ones where there is a clear win. Quite a lot of others
are broadly competitive with the longer highly optimised 2^N transform.

3072 and 3125 are nearly 2x faster than an optimised radix-8 4096 and 3x
faster than the naive radix-2 implementation for instance. An optimised
split radix-16 transform would shave roughly another 10% off the 4096
time but would not change the outcome.

Regards,
Martin Brown

Interesting. To clarify, you seem to be using 'DFT' to mean a
sampled-time Fourier transform of infinite length, is that right? IME
that's usually used to denote the circular version for which the FFT is
a fast algorithm.

I'm also less sanguine about resampling. In the 1-D case at least, the
orthogonal functions for the actual sampling grid used may be quite
ill-conditioned, and resampling without taking account of any special
features of the eigenfunctions can lead to tears.

Cheers

Phil Hobbs

--
Dr Philip C D Hobbs
Principal
ElectroOptical Innovations
55 Orchard Rd
Briarcliff Manor NY 10510
845-480-2058

email: hobbs (atsign) electrooptical (period) net
http://electrooptical.net
 
Fred Marshall wrote:
On 2/28/2011 2:54 AM, Martin Brown wrote:


The pattern of FFT radix for which non power of two beats zero padding
to the next power of two for my local FFT implementation is:


I wonder if there isn't some misunderstanding here? The notion of zero
padding to achieve a power of two seems an archaic concern - as you well
point out. However, zero padding to accommodate circular convolution is
common - much more common I should think.

Nonetheless, you've pointed out some things that are new info for me.
Thanks.

Fred

Zero padding a transient without windowing does zilch to reduce spectral
leakage--all it does is to evaluate the exact same transform at
different sample frequencies. You can see that from the definition of
the DFT (circular).
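A concrete check of that claim (my sketch): pad a transient to twice its length and the even-numbered bins of the new DFT are exactly the old bins. The padding only fills in intermediate sample frequencies; any leakage is untouched:

```python
import cmath
import math

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
            for k in range(N)]

# a short transient that has NOT decayed to zero inside the frame
x = [1.0, 0.7, 0.3, 0.1, -0.2, -0.1, 0.05, 0.3]
X = dft(x)
Y = dft(x + [0.0] * len(x))   # zero pad to twice the length

# even bins of the padded transform reproduce the original bins exactly
err = max(abs(Y[2 * k] - X[k]) for k in range(len(x)))
```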

Cheers

Phil Hobbs

--
Dr Philip C D Hobbs
Principal
ElectroOptical Innovations
55 Orchard Rd
Briarcliff Manor NY 10510
845-480-2058

email: hobbs (atsign) electrooptical (period) net
http://electrooptical.net
 
Phil Hobbs wrote:
Fred Marshall wrote:
On 2/28/2011 2:54 AM, Martin Brown wrote:


The pattern of FFT radix for which non power of two beats zero padding
to the next power of two for my local FFT implementation is:


I wonder if there isn't some misunderstanding here? The notion of zero
padding to achieve a power of two seems an archaic concern - as you well
point out. However, zero padding to accommodate circular convolution is
common - much more common I should think.

Nonetheless, you've pointed out some things that are new info for me.
Thanks.

Fred


Zero padding a transient without windowing does zilch to reduce spectral
leakage--all it does is to evaluate the exact same transform at
different sample frequencies. You can see that from the definition of
the DFT (circular).

I should add that leaving space to avoid wraparound in the circular
convolution (as Fred mentioned) doesn't suffer this problem.

Cheers

Phil Hobbs


--
Dr Philip C D Hobbs
Principal
ElectroOptical Innovations
55 Orchard Rd
Briarcliff Manor NY 10510
845-480-2058

email: hobbs (atsign) electrooptical (period) net
http://electrooptical.net
 
On 28/02/2011 16:21, Phil Hobbs wrote:
Martin Brown wrote:
On 26/02/2011 14:17, Bob Masta wrote:
On Fri, 25 Feb 2011 08:05:00 -0500, Phil Hobbs
<pcdhSpamMeSenseless@electrooptical.net> wrote:

Bob Masta wrote:

In general, you never want to window a transient or noise,
only a continuous wave. The FFT analysis presumes a
continuous wave, such that every frame is an identical copy
that can be spliced seamlessly head to tail. A real-world
continuous wave that does not contain an exact integer
number of cycles in the FFT frame will have a discontinuity
where the next frame is spliced, which results in "spectral
leakage" that appears as "skirts" on what would otherwise be
a single line in the spectrum. The window function provides
a gradual onset and offset to smooth out this discontinuity,
greatly reducing the spectral leakage.

Windowing to avoid sharp edge discontinuity artefacts is usually
worthwhile - and for sampled data that might not be truly bandlimited
and subject to aliasing it can make sense to preconvolve the raw data
with a gridding function to get something that behaves much more like
the ideal DFT. Prolate spheroidal Bessel functions are amongst the
simplest which give real benefit in this application see for example:

http://www.astron.nl/aips++/docs/glossary/s.html#spheroidal_function

This method requires a multiplicative correction after the FFT but gives
a final result much closer to the DFT.

Interested readers may want to check out my "Gut Level
Fourier Transforms" series at
<http://www.daqarta.com/author.htm>.
In particular, see Part 5 "Dumping Spectral Leakage Out a
Window" <http://www.daqarta.com/eex05.htm>.

What you say here about power of two transforms is no longer true. FFTW
and other fast low prime transform kernels are now faster than naive
power of two implementations for some favourable sizes and competitive
for many more. In some cases due to associative cache problems on
certain processors exact powers of two are slower than they should be!

The pattern of FFT radix for which non power of two beats zero padding
to the next power of two for my local FFT implementation is:

5, 6, 9, 36, 80, 81, 144, 160, 320, 625, 640, 1152, 1250, 1280,
1296, 1536, 2187, 2304, 2500, 2460, 2592, 3072, 3125, 3200

Taking only the ones where there is a clear win. Quite a lot of others
are broadly competitive with the longer highly optimised 2^N transform.

3072 and 3125 are nearly 2x faster than an optimised radix-8 4096 and 3x
faster than the naive radix-2 implementation for instance. An optimised
split radix-16 transform would shave roughly another 10% off the 4096
time but would not change the outcome.

Interesting. To clarify, you seem to be using 'DFT' to mean a
sampled-time Fourier transform of infinite length, is that right? IME

Yes. I am using DFT here in the context of the idealised FT of infinite
length that is zero outside the region of interest. The one that you
would get by summing the discrete Fourier equations without the implicit
FFT assumption of tiled or mirror boundary periodicity.

This becomes important in classical radio astronomy aperture synthesis
when there are bright point sources in the field of view with grating
ring artefacts from the telescope baseline configuration that cross the
reconstructed image boundary!

that's usually used to denote the circular version for which the FFT is
a fast algorithm.

Sorry for any confusion.

I'm also less sanguine about resampling. In the 1-D case at least, the
orthogonal functions for the actual sampling grid used may be quite
ill-conditioned, and resampling without taking account of any special
features of the eigenfunctions can lead to tears.

The neat thing about the prolate spheroidal Bessel functions is that
they are the eigenfunctions of bandlimiting an FT to some finite width
and then taking its FT again. This means that when you regrid data using
one of them, all components are accurately represented. The ad hoc
truncated Gaussian, Kaiser-Bessel or truncated Gaussian*Sinc functions
commonly used have weaker alias rejection and/or in-band distortions
that affect high dynamic range data.

I don't agree with everything in this paper, but it provides an
introduction and comparison of several popular FT gridding functions.
The older papers are not online.

http://math.asu.edu/~svetlana/Sampling/Jackson,%20Meyer,%20Nishimura%20(9).pdf

There is another even more sophisticated family of gridding functions
that sacrifice a guard band around the edge of the transform to obtain
an extremely high fidelity FT of interferometer and MRI data. Basically
you can't trust the edges or the corners because some leakage always
occurs and these methods formalise the error bound in the trusted zone.

Regards,
Martin Brown
 
On Mon, 28 Feb 2011 10:54:46 +0000, Martin Brown
<|||newspam|||@nezumi.demon.co.uk> wrote:

On 26/02/2011 14:17, Bob Masta wrote:
On Fri, 25 Feb 2011 08:05:00 -0500, Phil Hobbs
<pcdhSpamMeSenseless@electrooptical.net> wrote:

Bob Masta wrote:
On Thu, 24 Feb 2011 09:10:50 -0800 (PST), Bret Cahill
<Bret_E_Cahill@yahoo.com> wrote:

Taking a time derivative after an FFT would be easy on SPICE if you
had some curve that would FFT into a ramp: Just multiply the function
by the ramp.

Not sure what you are ultimately trying to do, but note that
you can obtain the FFT of the time derivative by taking the
FFT of the raw waveform and applying a +6 dB/octave
"envelope" to that... essentially, you just tilt the
spectrum up at a 6 dB/octave slope.

This turns out to be very handy for measuring frequency
response of a system. Classically, one can apply an impulse
to the system and take the FFT to get the frequency
response. But an impulse is pretty narrow (one sample, in a
digital system), so it doesn't have much energy. A step
response, on the other hand, has a whole lot more. Since
the derivative of a step is an impulse, you can get the
frequency response by applying a step, taking the FFT, and
tilting it. This is so handy that I built this feature into
my Daqarta software. See "Frequency Response Measurement -
Step Response" at <http://www.daqarta.com/dw_0a0s.htm>.



Are you windowing the data before taking the DFT? Could get ugly otherwise.

No, in this case it's important to *not* window the data.
A window function (at least, any of the standard ones) has a
gradual onset and offset, for the specific purpose of
eliminating transients at the start/end of the FFT frame.
But here it is the onset that we are specifically interested
in. The transient response should be complete (for all
practical purposes) before the end of the frame, or else you
need more samples in the frame.

You will still get a better behaved result (ie one which is mainly due
to the core transient, with more predictable and well controlled
artefacts) if you put that in the middle of the window and tile it with
a phantom unmeasured vertical-axis reflection, time shifted (or rather
do a transform that uses that implied symmetry).

Otherwise you will have the result of your measured transient combined
with the very sharp step down at the end of the range and associated
high frequency components that may not be in your measured data.

Remember, this is in the context of converting a step
response into an impulse response. The step response
waveform of a perfect system is a flat line at unity...
there is no step-down.

In a real system that has an overall low-pass response, the
step response starts below unity and rises up to unity
(maybe with some overshoot and damped oscillation) and then
stays at unity until the end of the FFT period. We don't
want to taper that down to zero... staying at unity is the
desired result.

It's true that there is a conceptual step-down that is
off-screen, to prepare for the next rising edge. But that
has no effect on the desired step response.

In the case of overall high-pass systems, the step response
droops down from the initial peak, and will be somewhere
less than unity by the end of the frame. Although this
somehow seems more worrisome than the low-pass case, in fact
it is not really a problem for this approach. The fact that
the step response is not satisfactorily completed means that
the low end of the frequency response is not fully captured.
That's all stuff that is between the 0th (DC) and 1st FFT
lines... a longer FFT is needed to resolve that.

In general, you never want to window a transient or noise,
only a continuous wave. The FFT analysis presumes a
continuous wave, such that every frame is an identical copy
that can be spliced seamlessly head to tail. A real-world
continuous wave that does not contain an exact integer
number of cycles in the FFT frame will have a discontinuity
where the next frame is spliced, which results in "spectral
leakage" that appears as "skirts" on what would otherwise be
a single line in the spectrum. The window function provides
a gradual onset and offset to smooth out this discontinuity,
greatly reducing the spectral leakage.
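
A numerical illustration of that leakage and of a window's effect (a
sketch: the half-bin tone offset, the Hann window, and the bin used to
probe the skirts are all arbitrary choices):

```python
import numpy as np

N = 1024
n = np.arange(N)
# 100.5 cycles per frame: splicing frames head to tail leaves a jump,
# so the spectral "line" grows leakage skirts.
x = np.sin(2 * np.pi * 100.5 * n / N)

def skirt(sig):
    """Spectral magnitude ~60 bins from the peak, relative to the peak."""
    s = np.abs(np.fft.rfft(sig))
    return s[160] / s.max()

raw = skirt(x)                     # rectangular (no) window
win = skirt(x * np.hanning(N))     # Hann window tapers the frame ends

print(raw, win)   # the windowed skirts are orders of magnitude lower
```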

Windowing to avoid sharp edge discontinuity artefacts is usually
worthwhile - and for sampled data that might not be truly bandlimited
and so is subject to aliasing, it can make sense to preconvolve the raw
data with a gridding function to get something that behaves much more
like the ideal DFT. Prolate spheroidal Bessel functions are amongst the
simplest which give real benefit in this application; see for example:

http://www.astron.nl/aips++/docs/glossary/s.html#spheroidal_function

This method requires a multiplicative correction after the FFT but gives
a final result much closer to the DFT.
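
A toy 1-D version of the gridding idea: convolve the scattered samples
onto a uniform grid, FFT, then apply the multiplicative correction by
dividing out the kernel's transform. This sketch uses a truncated
Gaussian kernel as a simple stand-in for the prolate spheroidal
functions (weaker alias rejection, as noted below); all sizes and the
sigma value are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 256, 2000          # uniform grid size; number of scattered samples
k0 = 10                   # true frequency of the test signal, in grid bins

# Irregularly placed samples of a single complex tone.
pos = rng.uniform(0, N, M)
vals = np.exp(2j * np.pi * k0 * pos / N)

# Grid by convolution with a truncated Gaussian kernel.
sigma, W = 1.0, 3
grid = np.zeros(N, dtype=complex)
for p, v in zip(pos, vals):
    base = int(np.floor(p))
    for j in range(base - W, base + W + 1):
        grid[j % N] += v * np.exp(-0.5 * ((p - j) / sigma) ** 2)

spec = np.fft.fft(grid)

# Post-FFT multiplicative correction: divide out the kernel's transform
# (for a Gaussian kernel, the transform is again a Gaussian).
k = np.fft.fftfreq(N)
spec /= np.exp(-2.0 * (np.pi * sigma * k) ** 2)

# Only the central part of the band is trustworthy -- the guard-band
# point made later in the thread -- so look for the tone there.
trusted = np.abs(spec[:N // 4])
print(np.argmax(trusted))   # recovers k0 = 10
```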

Interested readers may want to check out my "Gut Level
Fourier Transforms" series at <http://www.daqarta.com/author.htm>.
In particular, see Part 5, "Dumping Spectral Leakage Out a
Window" <http://www.daqarta.com/eex05.htm>.

What you say here about power of two transforms is no longer true. FFTW
and other fast low-prime transform kernels are now faster than naive
power of two implementations for some favourable sizes, and competitive
for many more. In some cases, due to associative cache problems on
certain processors, exact powers of two are slower than they should be!

The set of FFT lengths for which non power of two beats zero padding
to the next power of two, for my local FFT implementation, is:

5, 6, 9, 36, 80, 81, 144, 160, 320, 625, 640, 1152, 1250, 1280,
1296, 1536, 2187, 2304, 2500, 2460, 2592, 3072, 3125, 3200

That's taking only the ones where there is a clear win. Quite a lot of
others are broadly competitive with the longer, highly optimised 2^N
transform.

3072 and 3125 are nearly 2x faster than an optimised radix-8 4096 and 3x
faster than the naive radix-2 implementation for instance. An optimised
split radix-16 transform would shave roughly another 10% off the 4096
time but would not change the outcome.
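
Speed aside, there is also a spectral reason to prefer a native
non-power-of-two transform over padding. numpy's FFT (like FFTW) has
mixed-radix kernels, so a length like 3072 = 2^10 * 3 runs directly; a
small sketch showing what zero padding a coherent tone does (the tone
frequency is arbitrary):

```python
import numpy as np

N = 3072                  # = 2**10 * 3, one of the "fast" non-2^N sizes
n = np.arange(N)
x = np.sin(2 * np.pi * 96 * n / N)    # exactly 96 cycles per frame

# Direct length-3072 transform: no padding needed.
direct = np.abs(np.fft.rfft(x))

# Zero padding to 4096 appends 1024 zeros: the frame no longer holds a
# whole number of cycles of a continuous tone, so the line grows skirts.
padded = np.abs(np.fft.rfft(x, 4096))

print(np.sum(direct > direct.max() / 1e3))   # 1: a single clean line
print(np.sum(padded > padded.max() / 1e3))   # many leakage bins
```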

Thanks for the FFTW reference. I am gratified to see that
there are finally decent non-proprietary FFT routines
available for the public to use. When I got started in this
adventure, back in the days of the IBM PC and XT, it was
"common knowledge" that you couldn't do real-time spectral
analysis on those systems without a dedicated custom add-in
card, but that "someday soon" we could hope to do it all on
the main CPU. I realized that I had been hearing this line
for years already, in reference to the Apple, Commodore, and
Sinclair (etc). It dawned on me that maybe nobody had
really tried to optimize an FFT for this purpose, since all
the references I could find were obviously not well
optimized. They were written in FORTRAN or BASIC, computed
trig functions on the fly in every iteration, and did a few
other obviously redundant things. Coding everything in
assembler, avoiding the deathly slow FPU, and doing trig by
table lookup instead of blindly-repeated recalculation gave
a speed improvement of over 100x... plenty fast enough for
real-time on a 4.77 MHz PC/XT when coupled with fast
assembly language display graphics.

Over the years, I have occasionally looked at touted "fast"
FFT routines, but they all appeared to suffer from the same
issues. (Like assuming that you were only going to perform
one FFT, instead of pre-computing factors and swap tables so
you don't have to repeat that setup for every FFT.)
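
The precomputation described here -- a trig lookup table plus a
bit-reversal ("swap") table built once and reused for every transform --
can be sketched in a minimal radix-2 FFT (Python standing in for the
original assembler; the length 256 is arbitrary):

```python
import numpy as np

def make_tables(N):
    """One-time setup: twiddle-factor and bit-reversal ("swap") tables,
    so the per-FFT loop below does no trig calls at all."""
    tw = np.exp(-2j * np.pi * np.arange(N // 2) / N)
    bits = N.bit_length() - 1
    rev = np.array([int(format(i, f'0{bits}b')[::-1], 2) for i in range(N)])
    return tw, rev

def fft_radix2(x, tw, rev):
    """Iterative radix-2 decimation-in-time FFT; trig comes only from
    lookups into the precomputed tw table."""
    a = np.asarray(x, dtype=complex)[rev]   # apply the swap table
    N = len(a)
    size = 2
    while size <= N:
        half, step = size // 2, N // size
        for start in range(0, N, size):
            for j in range(half):
                w = tw[j * step]            # table lookup, no sin/cos
                t = w * a[start + half + j]
                a[start + j], a[start + half + j] = (a[start + j] + t,
                                                     a[start + j] - t)
        size *= 2
    return a

tw, rev = make_tables(256)
x = np.random.default_rng(1).normal(size=256)
err = np.max(np.abs(fft_radix2(x, tw, rev) - np.fft.fft(x)))
print(err)   # agrees with numpy to round-off
```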

Now, however, I suspect that much faster FPUs (and SSE, and
GPUs) have made it actually better to recompute everything,
rather than incur cache misses while trying to access
precomputed tables. It seems that technology has finally
vindicated the old "don't worry, someday the system will be
fast enough" ! <g>

Best regards,


Bob Masta

DAQARTA v6.00
Data AcQuisition And Real-Time Analysis
www.daqarta.com
Scope, Spectrum, Spectrogram, Sound Level Meter
Frequency Counter, FREE Signal Generator
Pitch Track, Pitch-to-MIDI
Science with your sound card!
 
Martin Brown wrote:
On 28/02/2011 16:21, Phil Hobbs wrote:

Interesting. To clarify, you seem to be using 'DFT' to mean a
sampled-time Fourier transform of infinite length, is that right? IME

Yes. I am using DFT here in the context of the idealised FT of infinite
length that is zero outside the region of interest. The one that you
would get by summing the discrete Fourier equations without the implicit
FFT assumption of tiled or mirror boundary periodicity.

This becomes important in classical radio astronomy aperture synthesis
when there are bright point sources in the field of view with grating
ring artefacts from the telescope baseline configuration that cross the
reconstructed image boundary!

that's usually used to denote the circular version for which the FFT is
a fast algorithm.

Sorry for any confusion.

I'm also less sanguine about resampling. In the 1-D case at least, the
orthogonal functions for the actual sampling grid used may be quite
ill-conditioned, and resampling without taking account of any special
features of the eigenfunctions can lead to tears.

The neat thing about the prolate spheroidal Bessel functions is that
they are the eigenfunctions of bandlimiting an FT to some finite width
and then taking its FT again. This means that when you regrid data using
one of them, all components are accurately represented. The ad hoc
truncated Gaussian, Kaiser-Bessel or truncated Gaussian*Sinc functions
commonly used have weaker alias rejection and/or in-band distortions
that affect high dynamic range data.
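
That eigenfunction property has a discrete analogue that is easy to
check numerically: the operation "bandlimit to |f| <= W, then time-limit
to N samples" is a sinc-kernel matrix, its eigenvectors are the discrete
prolate spheroidal (Slepian) sequences, and its eigenvalues show the
classic concentration -- about 2NW of them are ~1 and the rest plunge to
~0. A numpy sketch with N and W chosen arbitrarily:

```python
import numpy as np

N, W = 128, 0.1      # sequence length; half-bandwidth in cycles/sample
d = np.subtract.outer(np.arange(N), np.arange(N))
# Matrix of "bandlimit to |f| <= W, then truncate to N samples":
# the sinc kernel, with 2W on the diagonal.
A = np.where(d == 0, 2 * W,
             np.sin(2 * np.pi * W * d) / (np.pi * np.where(d == 0, 1, d)))

lam = np.linalg.eigvalsh(A)[::-1]    # eigenvalues, largest first

# Roughly 2*N*W (~26 here) eigenvalues are ~1; the remainder are ~0.
print(lam[0], np.sum(lam > 0.5), lam[-1])
```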

I don't agree with everything in this paper, but it provides an
introduction and comparison of several popular FT gridding functions.
The older papers are not online.

http://math.asu.edu/~svetlana/Sampling/Jackson,%20Meyer,%20Nishimura%20(9).pdf


There is another even more sophisticated family of gridding functions
that sacrifice a guard band around the edge of the transform to obtain
an extremely high fidelity FT of interferometer and MRI data. Basically
you can't trust the edges or the corners because some leakage always
occurs and these methods formalise the error bound in the trusted zone.

Regards,
Martin Brown

Thanks. I suppose in the design of radiotelescope arrays, folks keep an
eye on the condition number of the transformation, at least at some
level, and you know the Jacobian analytically, which is also a big help.
Stuff like missing samples in a time series can give rise to all sorts
of funnies if you try just interpolating.

Cheers

Phil Hobbs

--
Dr Philip C D Hobbs
Principal
ElectroOptical Innovations
55 Orchard Rd
Briarcliff Manor NY 10510
845-480-2058

email: hobbs (atsign) electrooptical (period) net
http://electrooptical.net
 
On 01/03/2011 16:26, Phil Hobbs wrote:

Thanks. I suppose in the design of radiotelescope arrays, folks keep an
eye on the condition number of the transformation, at least at some
level, and you know the Jacobian analytically, which is also a big help.

The history is quite interesting in that the interferometer arrays
became more complex and completely beyond analytical treatment as
computers became more powerful.

The first Earth-rotation aperture synthesis telescope, the One Mile
Telescope at Cambridge, had 2 fixed dishes and one movable on a very
accurately surveyed E-W line. I think 64 baselines at uniform spacing:
32 days observing and 32 days moving the scope configuration.

The next generation 5km telescope was within about 5 degrees of perfect
E-W alignment (thanks to Dr Beeching closing the Cambridge-Bedford
railway) and consisted of 4 fixed and 4 movable dishes: 128 uniformly
spaced baselines in 8 days observing and 8 days moving the scope around.

The big disadvantage of these E-W arrays is that their beamshape is
lousy near the celestial equator, and you have to observe for 12 hours
continuously to get a decent image of the target. But on the plus side,
the mathematics for gridding the data is easier.

The VLA moved to a Y shape with 27 antennae (I forget how many fixed
and how many movable) and a power-law antenna spacing, which allows
more or less instant snapshots. When commissioned, at the opening party
it did some quick give-away 5-minute maps of famous objects for VIPs to
take away. It required massively more computing power to do this. The
beamshape is pretty hairy and must be deconvolved by either CLEAN or
MEM to provide an image or map suitable for astrophysics.

If you are interested in the details, a decent talk is online at
http://www.aoc.nrao.edu/epo/powerpoint/interferometry2001.pdf
It loses something by not having a narrator, but if you are already
familiar with the basics it should make reasonable sense. It shows what
the U-V baseline coverage, beamshape, and raw and final maps look like.

With international VLBI you are pretty much stuck with where the
biggest dishes are physically located on the planet, and have to make
do with what is on offer. The beamshape is horrible and it can only
realistically be used on very bright compact targets. Cunning
calibration techniques using triples of baselines extract good
observables despite the unknown phase errors over each telescope. The
latter method has been applied in the optical at near-IR wavelengths.

Stuff like missing samples in a time series can give rise to all sorts
of funnies if you try just interpolating.

Indeed. If you have missing data, the only reliable way of modelling it
is to ask what is the most probable model of the thing being measured
that would yield the observed data to within the measurement noise.
Attempts at direct inverses with missing data can end up dominated by
the misleading zeroes or guessed missing data.
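
A minimal illustration of the "misleading zeroes" point (the tone
frequency and gap location are arbitrary):

```python
import numpy as np

N = 512
n = np.arange(N)
x = np.sin(2 * np.pi * 20 * n / N)   # exactly 20 cycles: one clean line

xz = x.copy()
xz[200:260] = 0.0                    # 60 missing samples "filled" with zeros

full = np.abs(np.fft.rfft(x))
zeroed = np.abs(np.fft.rfft(xz))

# The zeros bias the line low and smear spurious energy across the band,
# exactly the failure mode of a direct inverse with guessed missing data.
print(full[20], zeroed[20], np.max(np.delete(zeroed, 20)))
```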

Astronomers do have some advantages here. The sky brightness is known
to be everywhere positive, so negative regions have to be artefacts.
The same cannot be said for electrical signals...

Regards,
Martin Brown
 
