Sinc weirdness

Jeroen Belleman
I just came across a weird fact involving sinc functions, which
everyone here will have seen and used at one time or another.
They're used all the time in signal processing mathematics.

We all know that the integral over all x of sinc(x) = π.
A little bit funnier is that the integral over all x of
sinc(x) * sinc(x/3) = π as well. We can go on:
integral over all x of sinc(x) * sinc(x/3) * sinc(x/5) = π.

Beginning to see a pattern? You'd be wrong. Up to sinc(x/13),
the result will indeed always be π exactly, but when the
factor sinc(x/15) is reached, suddenly the integral ends up a
teensy tiny bit less than π, and it gets worse after that.

Surprise!

If you Fourier-transform the individual factors and then
convolve them together, it will become clear what's going on.
They are known as Borwein's integrals. Oh well, I thought this
was fascinating.

Jeroen Belleman
 
Cool.

Tim

--
Seven Transistor Labs, LLC
Electrical Engineering Consultation and Design
Website: https://www.seventransistorlabs.com/

"Jeroen Belleman" <jeroen@nospam.please> wrote in message
news:qpa8ba$co1$1@gioia.aioe.org...
[original post quoted in full; snipped]
 
On 29/10/2019 20:44, Jeroen Belleman wrote:
[...]
Surprise!

And only fairly recently discovered too, in 2001. It converges to π
provided that the sum of the reciprocals 1/3 + 1/5 + ... + 1/(2n+1)
stays under 1.

https://en.wikipedia.org/wiki/Borwein_integral
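That stopping condition is easy to check with exact rational arithmetic; a minimal Python sketch (variable names are mine):

```python
from fractions import Fraction

# The integral stays exactly pi while 1/3 + 1/5 + ... + 1/(2n+1) < 1.
total = Fraction(0)
for k in range(3, 18, 2):
    total += Fraction(1, k)
    # the comparison first fails at the 1/15 term
    print(f"... + 1/{k} = {total}  (< 1: {total < 1})")
```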
If you Fourier-transform the individual factors and then
convolve them together, it will become clear what's going on.
They are known as Borwein's integrals. Oh well, I thought this
was fascinating.

+1

A more accessible intuitive proof that isn't behind a paywall is at:

http://schmid-werren.ch/hanspeter/publications/2014elemath.pdf

Thanks for pointing this one out. I would never have guessed that it
broke down after working OK for so many terms.

Wildly oscillatory integrals can be very tricky if you don't choose your
path wisely.

--
Regards,
Martin Brown
 
On a sunny day (Tue, 29 Oct 2019 21:44:28 +0100) it happened Jeroen Belleman
<jeroen@nospam.please> wrote in <qpa8ba$co1$1@gioia.aioe.org>:

[original post snipped]

Interesting, never knew that,
because I did not pay attention in math class as I was sitting..
https://www.youtube.com/watch?v=aZcfxLtOKmY
 
Martin Brown <'''newspam'''@nezumi.demon.co.uk> wrote in
news:qpbit1$7pv$1@gioia.aioe.org:

Wildly oscillatory integrals can be very tricky if you don't
choose your path wisely.

--
Regards,
Martin Brown

Yes.. one could end up back in 1955 at a high school dance.

Are you related to Emmett Brown? :)
 
On Wednesday, October 30, 2019 at 4:50:46 AM UTC-4, Martin Brown wrote:
On 29/10/2019 20:44, Jeroen Belleman wrote:
[...]

Surprise!

And only fairly recently discovered too, in 2001. It converges to π
provided that the sum of the reciprocals 1/3 + 1/5 + ... + 1/(2n+1)
stays under 1.

https://en.wikipedia.org/wiki/Borwein_integral

If you Fourier-transform the individual factors and then
convolve them together, it will become clear what's going on.
They are known as Borwein's integrals. Oh well, I thought this
was fascinating.

+1

A more accessible intuitive proof that isn't behind a paywall is at:

http://schmid-werren.ch/hanspeter/publications/2014elemath.pdf
Interesting, thanks! (that goes for Jeroen too.)

George H

 
On Tuesday, October 29, 2019 at 4:44:34 PM UTC-4, Jeroen Belleman wrote:
[original post snipped]

What is the significance, if any, of pi? And who encounters these oddball integrals of products in any practical application of signal processing? What good is this useless factoid? Answer: none. Best I can find is it might have something to do with research mathematicians falling into some psychological trap about the value of induction based on limited evidence, for what that's worth.
 
bloggs.fredbloggs.fred@gmail.com wrote:
On Tuesday, October 29, 2019 at 4:44:34 PM UTC-4, Jeroen Belleman wrote:

[...]

What is the significance, if any, of pi? And who encounters these
oddball integrals of products in any practical application of signal
processing? What good is this useless factoid? Answer: none. Best I can
find is it might have something to do with research mathematicians
falling into some psychological trap about the value of induction based
on limited evidence, for what that's worth.

The math essentially describes the effect of successive, repeated
moving-average filters. This is quite common in signal processing
and therefore perfectly relevant.
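To make that concrete: the frequency response of a cascade of moving
averages is the product of the individual responses, each of which is a
(periodic) sinc. A quick numerical check (filter lengths chosen
arbitrarily for the example):

```python
import numpy as np

# Cascade three moving averages in the time domain, then verify that the
# magnitude response equals the product of the individual sinc ratios
# |sinc(L f) / sinc(f)| (the aliased sinc of a length-L average).
lengths = [8, 4, 2]                          # arbitrary example lengths
h = np.ones(1)
for L in lengths:
    h = np.convolve(h, np.ones(L) / L)       # cascade = convolution

f = np.linspace(0.01, 0.49, 200)             # normalized frequency
n = np.arange(len(h))
H = np.abs(np.exp(-2j * np.pi * np.outer(f, n)) @ h)   # DTFT magnitude
H_pred = np.prod([np.abs(np.sinc(L * f) / np.sinc(f)) for L in lengths],
                 axis=0)
print(np.allclose(H, H_pred))                # -> True
```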

Jeroen Belleman
 
Jeroen Belleman wrote...
The math essentially describes the effect of successive,
repeated moving-average filters. This is quite common
in signal processing and therefore perfectly relevant.

Multiple successive moving-average filters, sounds
odd, can you describe a few examples of that?


--
Thanks,
- Win
 
fredag den 1. november 2019 kl. 14.29.10 UTC+1 skrev Winfield Hill:
Jeroen Belleman wrote...

The math essentially describes the effect of successive,
repeated moving-average filters. This is quite common
in signal processing and therefore perfectly relevant.

Multiple successive moving-average filters, sounds
odd, can you describe a few examples of that?

https://en.wikipedia.org/wiki/Cascaded_integrator%E2%80%93comb_filter ?
 
On 1 Nov 2019 06:28:58 -0700, Winfield Hill <winfieldhill@yahoo.com>
wrote:

Jeroen Belleman wrote...

The math essentially describes the effect of successive,
repeated moving-average filters. This is quite common
in signal processing and therefore perfectly relevant.

Multiple successive moving-average filters, sounds
odd, can you describe a few examples of that?

As soon as the kids are finished with this big laser controller
project, they can do the FPGA for my alternator simulator. I'm picking
off currents and voltages with the ADUM isolated delta-sigma
converter, and they will have to filter the bit stream into 16-bit
integers, which will be feedbacks into a control loop. One common
delta-sigma recovery filter is called sinc3, which is made out of
rectangular summers, basically what Jeroen is discussing. It has the
bouncing-ball frequency response, not my favorite thing to put inside
a control loop.

Wiki talks about sinc filters.

I prefer integrator-based IIR filters, but they are trickier inside an
FPGA.
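A minimal sketch of the sinc3 (cascaded integrator-comb) structure
described above: three accumulators at the bit rate, decimation, then
three first differences. The function name and parameters are mine, and
the delta-sigma ±1 mapping is glossed over (0/1 bits are used directly):

```python
import numpy as np

def sinc3_decimate(bits, R):
    """Sketch of a sinc^3 (CIC) decimator: three integrators at the input
    rate, decimate by R, then three comb (first-difference) stages."""
    acc = np.zeros(3)
    decimated = []
    for n, b in enumerate(bits):
        acc[0] += b                       # integrator 1
        acc[1] += acc[0]                  # integrator 2
        acc[2] += acc[1]                  # integrator 3
        if (n + 1) % R == 0:
            decimated.append(acc[2])      # decimate by R
    y = np.asarray(decimated, dtype=float)
    for _ in range(3):
        y = np.diff(y, prepend=0.0)       # comb stages at the low rate
    return y / R**3                       # remove the R^3 CIC gain

# A constant stream of 1s settles to 1.0 after a three-sample transient.
out = sinc3_decimate(np.ones(8 * 16), R=16)
print(out[-1])                            # -> 1.0
```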


--

John Larkin Highland Technology, Inc

lunatic fringe electronics
 
On Thu, 31 Oct 2019 09:32:34 -0700 (PDT),
bloggs.fredbloggs.fred@gmail.com wrote:

On Tuesday, October 29, 2019 at 4:44:34 PM UTC-4, Jeroen Belleman wrote:
[original post snipped]

What is the significance, if any, of pi? And who encounters these oddball integrals of products in any practical application of signal processing? What good is this useless factoid? Answer: none. Best I can find is it might have something to do with research mathematicians falling into some psychological trap about the value of induction based on limited evidence, for what that's worth.

Thinking about math, especially oddball math, keeps an engineer's
brain in tune. Obsessing on gloom and doom might not.

Do you have no use for pi?





--

John Larkin Highland Technology, Inc

lunatic fringe electronics
 
On 30/10/19 7:44 am, Jeroen Belleman wrote:
> I just came across a weird fact involving sinc functions,

Weird. Is this the normalised or non-normalised sinc function?

<https://en.wikipedia.org/wiki/Sinc_function>

Clifford Heath.
 
On 2019-11-02 13:15, Phil Hobbs wrote:
On 2019-11-02 00:28, Clifford Heath wrote:
On 30/10/19 7:44 am, Jeroen Belleman wrote:
I just came across a weird fact involving sinc functions,

Weird. Is this the normalised or non-normalised sinc function?

https://en.wikipedia.org/wiki/Sinc_function

Clifford Heath.

sinc(x/N) has transform N rect(x/N).

[...]

N rect(NX), that is.

Cheers

Phil Hobbs

--
Dr Philip C D Hobbs
Principal Consultant
ElectroOptical Innovations LLC / Hobbs ElectroOptics
Optics, Electro-optics, Photonics, Analog Electronics
Briarcliff Manor NY 10510

http://electrooptical.net
http://hobbs-eo.com
 
On 2019-11-02 00:28, Clifford Heath wrote:
On 30/10/19 7:44 am, Jeroen Belleman wrote:
I just came across a weird fact involving sinc functions,

Weird. Is this the normalised or non-normalised sinc function?

https://en.wikipedia.org/wiki/Sinc_function

Clifford Heath.

sinc(x/N) has transform N rect(x/N).

If you think about it in the frequency domain, convolving a wide
rectangle with a succession of narrower rectangles leaves the DC value
the same until the sum of the widths of the narrow rectangles is more
than half the width of the wide one. The DC value of the convolution is
the integral over all x of the product of the functions.

Not rocket science--anybody Bracewell ever taught Fourier to would get
that in three seconds.
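That picture can be checked numerically: convolve unit-area rectangles
on a grid and watch the value at x = 0 (the grid step and the widths
below are my choices, not anything from the thread):

```python
import numpy as np

dx = 1e-3
x = np.arange(-2.0, 2.0, dx)

def rect(hw):
    """Unit-area rectangle of half-width hw, sampled on the grid."""
    return (np.abs(x) <= hw) / (2.0 * hw)

g = rect(1.0)                    # transform of sinc(x), up to scale
for k in (3, 5, 7):              # 1/3 + 1/5 + 1/7 < 1: plateau survives
    g = dx * np.convolve(g, rect(1.0 / k), mode="same")

i0 = np.argmin(np.abs(x))        # index of x = 0
# the central plateau value stays at 1/(2*1) = 0.5 while the sum of the
# narrow half-widths is below 1
print(abs(g[i0] - 0.5) < 0.01)   # -> True
```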

Cheers

Phil Hobbs

--
Dr Philip C D Hobbs
Principal Consultant
ElectroOptical Innovations LLC / Hobbs ElectroOptics
Optics, Electro-optics, Photonics, Analog Electronics
Briarcliff Manor NY 10510

http://electrooptical.net
http://hobbs-eo.com
 
On 2019-11-03 13:18 Phil Hobbs <pcdhSpamMeSenseless@electrooptical.net> writes:
On 2019-11-02 13:15, Phil Hobbs wrote:

sinc(x/N) has transform N rect(x/N).
....
Not rocket science--anybody Bracewell ever taught Fourier to would
get that in three seconds.

N rect(NX), that is.

...and three minutes :)

--
mikko
 
On 01/11/2019 13:28, Winfield Hill wrote:
Jeroen Belleman wrote...

The math essentially describes the effect of successive,
repeated moving-average filters. This is quite common
in signal processing and therefore perfectly relevant.

Multiple successive moving-average filters, sounds
odd, can you describe a few examples of that?

I wouldn't say it was common but it is sometimes done.

An unweighted boxcar moving average is about the simplest hardware
filter there is to implement. If you nest a few identical ones, the
result is a fair approximation to Gaussian convolution (and would still
be, for all engineering purposes, even in the failing case being
discussed here).

It is still an interesting and curious result though.
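The cascaded-boxcar remark is easy to check: four nested moving
averages of the same length already track the equal-variance Gaussian
closely (the length and tolerance are my choices):

```python
import numpy as np

M = 15
h = np.ones(M) / M               # one boxcar stage
g = h.copy()
for _ in range(3):               # four identical stages in total
    g = np.convolve(g, h)

n = np.arange(len(g))
mu = n @ g                       # centroid of the composite response
var = ((n - mu) ** 2) @ g        # variance: 4 * (M**2 - 1) / 12
gauss = np.exp(-((n - mu) ** 2) / (2 * var))
gauss /= gauss.sum()

print(np.abs(g - gauss).max())   # small residual vs. the ~0.046 peak
```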

Truncated, Gaussian-weighted sinc was used in the past for regridding
frequency-domain data onto a rectangular grid prior to using the FFT. It
has artefacts, so modern methods use better functions which, like a
Gaussian, have the nice property of being their own Fourier transform,
but only when truncated to a fixed length in one of the domains. This
gives very nice antialiasing properties at the expense of needing a
guard band around the edges. Prolate spheroidal wave functions are the
canonical choice.

--
Regards,
Martin Brown
 
On Sun, 3 Nov 2019 14:27:57 +0000, Martin Brown
<'''newspam'''@nezumi.demon.co.uk> wrote:

On 01/11/2019 13:28, Winfield Hill wrote:
[quoted text snipped]

One possibility that we are considering for our control loop is to
dump a 20 MHz delta-sigma bit stream into a 16-bit shift register and
use that as the feedback signal into our P+I controller.

Even more extreme would be to take a 1 and translate that to 0x7FFF,
and take a 0 and treat that as 0x8000 into the loop.

Might work.



--

John Larkin Highland Technology, Inc

lunatic fringe electronics
 
søndag den 3. november 2019 kl. 16.09.45 UTC+1 skrev jla...@highlandsniptechnology.com:
On Sun, 3 Nov 2019 14:27:57 +0000, Martin Brown
'''newspam'''@nezumi.demon.co.uk> wrote:

[quoted text snipped]

One possibility that we are considering for our control loop is to
dump a 20 MHz delta-sigma bit stream into a 16-bit shift register and
use that as the feedback signal into our P+I controller.

Even more extreme would be to take a 1 and translate that to 0x7FFF,
and take a 0 and treat that as 0x8000 into the loop.

Might work.

It is possible to do all your processing directly on the fast 1-bit
stream; I remember doing a bit on filters working like that at uni. The
rationale is that multipliers become simple multiplexers, but since
everything also has to run at the much higher rate and the filters have
to be that much longer, I'm not sure there's much to gain other than
the headache of making it work.
 
On 11/2/19 12:28 AM, Clifford Heath wrote:
On 30/10/19 7:44 am, Jeroen Belleman wrote:
I just came across a weird fact involving sinc functions,

Weird. Is this the normalised or non-normalised sinc function?

https://en.wikipedia.org/wiki/Sinc_function

Clifford Heath.

I somehow ended up with two copies of "Probability and Information
Theory, with Applications to Radar" by Phillip Woodward, 1953.

<https://imgur.com/a/4YtR2m9>

It seems improbable.
 
