Wasn't this impossible?...

On 24/08/2020 9:52 am, bitrex wrote:
On 8/23/2020 7:26 PM, Ricketty C wrote:
On Sunday, August 23, 2020 at 4:09:51 PM UTC-4, bitrex wrote:
On 8/22/2020 11:34 PM, Ricketty C wrote:
On Saturday, August 22, 2020 at 3:04:21 PM UTC-4,
upsid...@downunder.com wrote:
On Sat, 22 Aug 2020 08:41:34 -0700 (PDT), Ricketty C
<gnuarm.deletethisbit@gmail.com> wrote:

On Saturday, August 22, 2020 at 4:19:44 AM UTC-4, Martin Brown wrote:
On 22/08/2020 02:10, Ricketty C wrote:
On Friday, August 21, 2020 at 3:49:41 PM UTC-4, Pimpom wrote:
On 8/21/2020 10:54 PM, Ricketty C wrote:

But the time required to return those images would be...
astronomical.

Sending back digital images over a distance of a tenth of a light
year by radio wave would take, well, just one-tenth of a year, or
about five weeks. No problem.

The big problem is to reach that 0.1 light year distance first.
The
farthest man-made objects are now travelling at ~15 km/s
(Wikipedia). At that speed, it would take over 2000 years. :-(

You misunderstand. I'm referring to the RF link analysis. They have
a very hard time seeing anything other than stars because other
objects are too dim. I haven't done the math, but the data rate
would have to be microscopic to successfully send and receive a
radio signal from such distances. 0.1 light year is 6324 AU. The
Voyager probes are about 141 AU, so about 40 times closer. They are
now transmitting at 160 bps. Doing the math I get about 3 years to
transmit a MB of data.

It is slightly amazing that the Voyagers can be received at all now.
The transmitters on the probes are fixed 1970s space-approved
technology with limited power and quite crude by modern
communications standards. The error correction was very
sophisticated for the time though.

The base station receivers have improved so much since the probes
were launched that they can still follow Voyager out to the
heliopause.

Data rates for a future deep space probe should be about the same
order of magnitude as the recent probe to Pluto managed but
downgraded by the increase in distance effects on signal to noise.

Yes, that is the problem.  The distance hugely weakens the signals
in both directions.

Only -6 dB for each doubling of the distance.

\"Only\" is a four letter word!

And how is the data rate impacted by a -17 dB adjustment to signal
strength? You have to consider they are pushing the limit of what
they can do working at 160 bps presently. We have done a lot to
improve the ground stations, but they can't keep increasing their
transmit power ad infinitum. As I said, it will take years, maybe
decades, to transmit a high-resolution image from a spacecraft that
far away. While larger antennas in general produce a more collimated
beam, there are limits to what you can do given the precision of the
antenna shape and the frequency you transmit on.
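
A quick sanity check of those numbers, as a sketch in Python (my own
back-of-envelope, assuming free-space inverse-square loss and that the
usable data rate scales directly with received power; the 141 AU,
6324 AU, and 160 bps figures are from the posts above):

import math

d_voyager = 141.0     # AU, Voyager's distance (from the post above)
d_target  = 6324.0    # AU, 0.1 light year (from the post above)
rate_now  = 160.0     # bps, Voyager's current downlink rate

ratio = d_target / d_voyager             # ~45x farther away
extra_loss_db = 20 * math.log10(ratio)   # ~33 dB of extra path loss

rate_far = rate_now / ratio**2           # ~0.08 bps if rate tracks power
seconds_per_MB = 8e6 / rate_far          # 1 MB = 8e6 bits
print(f"extra loss ~{extra_loss_db:.0f} dB")
print(f"rate ~{rate_far:.2f} bps; 1 MB takes ~{seconds_per_MB / 3.156e7:.1f} years")

That reproduces the "about 3 years per MB" estimate and puts the
extra path loss at roughly 33 dB.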


Cheapest bang-for-the-buck per-unit mission profile I think is keep the
power supply on Earth, a large laser, or array of lasers. Keep the
spacecraft as lightweight as possible. Fire it out into interstellar
space towards 1000 AU fast as it can go, using its instruments/optics to
take in all the data it can on the way, big sail of solar panels feeding
the ion/plasma drive or whatever from the laser beam.

Then at some point when received power is insufficient to run everything
flip it around and shut down everything but the downlink laser and use
the optics in reverse to beam the data back. Send as much as it can
before it goes out of range completely.

And how far would that be with this laser?


Beats me! Have to talk to some kind of NASA-person! ;) I'm just
enjoying speculating beyond my depth in ways that are unlikely to get
anyone hurt, at least for the foreseeable future lol

Not very far. It is somewhat difficult to detect the return bounce
from a laser shot at a target on the moon. The idea of powering
something with that is .....
 
bitrex wrote:
On 8/23/2020 6:46 PM, Tom Del Rosso wrote:
On Friday, August 21, 2020 at 11:55:30 AM UTC-4, bitrex wrote:

I'm probably remembering the exact figures incorrectly but if you
could push a space telescope out to about a tenth of a light-year
from Earth, it could leverage gravitational lensing to make an
equivalent lens of enormous size.

Perhaps you could make an enormous virtual lens with a large array of
Hubbles.

At infrared or optical wavelengths it's hard to keep the light
coherent between members of the array separated by large distances;
aperture synthesis is a lot easier with radio astronomy, I think.

Why does it have to be coherent? Two photons entering a single
telescope aren't coherent. There would be more skew in digital
processing of the images. Radio astronomy at the VLA looks at one big
pixel and uses some kind of analog mixer.
 
On 29/08/2020 13:57, Tom Del Rosso wrote:
bitrex wrote:
On 8/23/2020 6:46 PM, Tom Del Rosso wrote:
On Friday, August 21, 2020 at 11:55:30 AM UTC-4, bitrex wrote:

I'm probably remembering the exact figures incorrectly but if you
could push a space telescope out to about a tenth of a light-year
from Earth, it could leverage gravitational lensing to make an
equivalent lens of enormous size.

Perhaps you could make an enormous virtual lens with a large array of
Hubbles.

At infrared or optical wavelengths it's hard to keep the light
coherent between members of the array separated by large distances;
aperture synthesis is a lot easier with radio astronomy, I think.

Why does it have to be coherent? Two photons entering a single
telescope aren't coherent. There would be more skew in digital
processing of the images. Radio astronomy at the VLA looks at one big
pixel and uses some kind of analog mixer.

Not exactly. The signals are combined coherently and a huge effort is
made to compensate the differential paths as the scopes track. It is
quite an art. The phase centre is deliberately put just outside of the
target field of view because all local interference is coherent there.

The signals from every antenna are phase compensated and digitised and
combined in the correlators against every other antenna. On a good day
VLA can do a quick snapshot in 5 minutes because it is a Y shape.

The original aperture synthesis algorithms were strictly limited to
E-W baselines like the One Mile Telescope, or nearly E-W like the
Ryle 5 km at MRAO Cambridge. It is pretty much like making Young's
slits sensitivity patterns on the sky and measuring the Fourier
transform of the sky brightness.
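
To make that Fourier relationship concrete, a toy sketch in Python
(illustrative numbers of my own, nothing to do with the real VLA
correlators): each baseline samples one Fourier component of the sky
brightness, so fringe visibility versus baseline encodes source
structure without ever forming an image.

import numpy as np

lam = 0.21                    # m, observing wavelength (21 cm, illustrative)
sep = 1e-4                    # rad, separation of two point sources (assumed)
thetas = np.array([-sep / 2, +sep / 2])   # two equal point sources
flux = np.array([1.0, 1.0])

baselines = np.linspace(1, 5000, 500)     # m, projected baselines
# each baseline b measures one Fourier component of the sky brightness:
vis = np.array([np.sum(flux * np.exp(-2j * np.pi * b * thetas / lam))
                for b in baselines])

# for this pair V(b) = 2*cos(pi*b*sep/lam); the first null, at
# b = lam/(2*sep), reveals the separation directly
b_null = lam / (2 * sep)
print("first null near b =", b_null, "m")
print("|V| there ~", abs(vis[np.argmin(abs(baselines - b_null))]))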

VLBI avoids the strict need to maintain perfect coherence by using
invariant combinations of three antennas as closure phases and of four
antennas for closure amplitudes. These remain good observables even
when the absolute phase at each antenna is shot to hell. It needs a
high-precision synchronised maser clock at each dish and as many
dishes in the network as possible to get enough good observables. It
also requires a lot of patience and some luck to find the interference
fringes.
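
A minimal numerical sketch of why closure phase survives that
(illustrative Python, not real VLBI software): each baseline measures
the true phase plus the difference of its two antenna error terms, and
those differences cancel identically around the triangle.

import numpy as np

rng = np.random.default_rng(1)

# true source phases on the three baselines of a triangle (radians)
phi12, phi23, phi31 = 0.40, -0.95, 0.30

# atmosphere/clock corrupts each *antenna* by an unknown phase
e1, e2, e3 = rng.uniform(-np.pi, np.pi, 3)

# what each baseline actually measures: phi_ij + e_i - e_j
m12 = phi12 + e1 - e2
m23 = phi23 + e2 - e3
m31 = phi31 + e3 - e1

# the antenna terms cancel around the loop, leaving a good observable
print("measured closure:", m12 + m23 + m31)   # equals the true closure
print("true closure:    ", phi12 + phi23 + phi31)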

This technique is now also used in the optical at CHARA and before that
on the experimental rig at COAST in MRAO. Essentially using radio
astronomy methods to work with optical (near IR) light beams.

The only system I know of that used photon amplitude correlation only
was Hanbury-Brown & Twiss's intensity interferometer from Jodrell
Bank which surprised most physicists at the time by actually working!
They measured the diameters of numerous bright stars for the first
time since Michelson & Pease had done it with an iron girder on the
Mt Wilson 100".

There is a book about it which I think is online somewhere.

--
Regards,
Martin Brown
 
On Saturday, August 29, 2020 at 8:57:15 AM UTC-4, Tom Del Rosso wrote:
bitrex wrote:
On 8/23/2020 6:46 PM, Tom Del Rosso wrote:
On Friday, August 21, 2020 at 11:55:30 AM UTC-4, bitrex wrote:

I'm probably remembering the exact figures incorrectly but if you
could push a space telescope out to about a tenth of a light-year
from Earth, it could leverage gravitational lensing to make an
equivalent lens of enormous size.

Perhaps you could make an enormous virtual lens with a large array of
Hubbles.

At infrared or optical wavelengths it's hard to keep the light
coherent between members of the array separated by large distances;
aperture synthesis is a lot easier with radio astronomy, I think.

Why does it have to be coherent? Two photons entering a single
telescope aren't coherent. There would be more skew in digital
processing of the images. Radio astronomy at the VLA looks at one big
pixel and uses some kind of analog mixer.

Is it not coherence that allows the laser light to spread less rapidly? Do I have this wrong?

--

Rick C.

+-+ Get 1,000 miles of free Supercharging
+-+ Tesla referral code - https://ts.la/richard11209
 
On Monday, August 24, 2020 at 12:22:49 AM UTC-4, bitrex wrote:
On 8/23/2020 11:49 PM, upsidedown@downunder.com wrote:
On Sun, 23 Aug 2020 16:09:43 -0400, bitrex <user@example.net> wrote:

On 8/22/2020 11:34 PM, Ricketty C wrote:
On Saturday, August 22, 2020 at 3:04:21 PM UTC-4, upsid...@downunder.com wrote:
On Sat, 22 Aug 2020 08:41:34 -0700 (PDT), Ricketty C
<gnuarm.deletethisbit@gmail.com> wrote:

On Saturday, August 22, 2020 at 4:19:44 AM UTC-4, Martin Brown wrote:
On 22/08/2020 02:10, Ricketty C wrote:
On Friday, August 21, 2020 at 3:49:41 PM UTC-4, Pimpom wrote:
On 8/21/2020 10:54 PM, Ricketty C wrote:

But the time required to return those images would be...
astronomical.

Sending back digital images over a distance of a tenth of a light
year by radio wave would take, well, just one-tenth of a year, or
about five weeks. No problem.

The big problem is to reach that 0.1 light year distance first. The
farthest man-made objects are now travelling at ~15 km/s
(Wikipedia). At that speed, it would take over 2000 years. :-(

You misunderstand. I'm referring to the RF link analysis. They have
a very hard time seeing anything other than stars because other
objects are too dim. I haven't done the math, but the data rate
would have to be microscopic to successfully send and receive a
radio signal from such distances. 0.1 light year is 6324 AU. The
Voyager probes are about 141 AU, so about 40 times closer. They are
now transmitting at 160 bps. Doing the math I get about 3 years to
transmit a MB of data.

It is slightly amazing that the Voyagers can be received at all now.
The transmitters on the probes are fixed 1970s space-approved
technology with limited power and quite crude by modern
communications standards. The error correction was very sophisticated
for the time though.

The base station receivers have improved so much since the probes were
launched that they can still follow Voyager out to the heliopause.

Data rates for a future deep space probe should be about the same order
of magnitude as the recent probe to Pluto managed but downgraded by the
increase in distance effects on signal to noise.

Yes, that is the problem. The distance hugely weakens the signals in both directions.

Only -6 dB for each doubling of the distance.

\"Only\" is a four letter word!

And how is the data rate impacted by a -17 dB adjustment to signal strength? You have to consider they are pushing the limit of what they can do working at 160 bps presently. We have done a lot to improve the ground stations, but they can't keep increasing their transmit power ad infinitum. As I said, it will take years, maybe decades, to transmit a high-resolution image from a spacecraft that far away. While larger antennas in general produce a more collimated beam, there are limits to what you can do given the precision of the antenna shape and the frequency you transmit on.


Cheapest bang-for-the-buck per-unit mission profile I think is keep the
power supply on Earth, a large laser, or array of lasers. Keep the
spacecraft as lightweight as possible. Fire it out into interstellar
space towards 1000 AU fast as it can go, using its instruments/optics to
take in all the data it can on the way, big sail of solar panels feeding
the ion/plasma drive or whatever from the laser beam.

Shining a laser through current largest ground based telescopes you
might be able to get a 10 nanorad beam, unfortunately at 0.1 ly the
beam is more than 10000 km in diameter, so you would need a huge
receiver to extract power.
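
For scale, a diffraction-limited back-of-envelope (Python; the 39 m
aperture and 500 nm wavelength are my assumptions, roughly an
ELT-class telescope, not figures from the post):

import math

lam = 500e-9            # m, optical wavelength (assumed)
D   = 39.0              # m, transmitting aperture (assumed, ~ELT class)
d   = 0.1 * 9.461e15    # m, 0.1 light year

theta = 1.22 * lam / D  # Airy divergence, ~16 nanoradians
spot  = theta * d       # beam diameter at the receiver
print(f"divergence ~{theta * 1e9:.0f} nrad, spot ~{spot / 1e3:.0f} km")

Even a diffraction-limited beam from the largest planned telescope is
smeared over more than 10,000 km at that range, consistent with the
estimate above.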


My numbers were off anyway, you can be a lot closer than 0.1 LY to
leverage the Sun's gravitational lens. 500 AU minimum it looks like.
Voyager 2 is almost at 200 AU, "only" took 40 years using just
chemical rockets.

The idea in the white-paper I saw re: propulsion is you use a large
solar array, like a solar sail, tuned to the laser:

https://www.nasa.gov/directorates/spacetech/niac/2017_Phase_I_Phase_II/Propulsion_Architecture_for_Interstellar_Precursor_Missions/

\"A 10-km diameter, 100-MW laser array that beams power across the solar
system.

A 70% efficient photovoltaic array tuned to the laser frequency
producing power at 12 kV.

A 70-MW direct-drive, lithium (not xenon)-based ion propulsion system
with a specific impulse of 58,000 s.\"
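
What those quoted specs imply, as a hedged back-of-envelope (Python,
assuming all 70 MW ends up as jet power, which is optimistic):

g0  = 9.81      # m/s^2, standard gravity
Isp = 58_000    # s, specific impulse from the white-paper quote
P   = 70e6      # W, drive power from the quote

v_e = g0 * Isp       # exhaust velocity, ~569 km/s
F   = 2 * P / v_e    # ideal thrust from P = F * v_e / 2
print(f"v_e ~{v_e / 1e3:.0f} km/s, thrust ~{F:.0f} N")

About 250 N at best, which is enormous by ion-engine standards.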

You might like this Lex Fridman podcast with Sara Seager.
https://www.youtube.com/watch?v=-jA2ABHBc6Y
Besides the lensing idea, she wants to make a Starshade.
https://seagerexoplanets.mit.edu/exoplanet.htm
(weird-looking edges to reduce diffraction.)

George H.
 
On Saturday, August 29, 2020 at 7:53:59 AM UTC-7, Ricketty C wrote:

> Is it not coherence that allows the laser light to spread less rapidly? Do I have this wrong?

Coherence is the only light-emission plan that gives a lot of light
into a single direction (étendue problems). Thermal sources the size
of stars can be seen at a distance; light bulbs, not so much.
 
On Saturday, August 29, 2020 at 4:55:33 PM UTC-4, whit3rd wrote:
On Saturday, August 29, 2020 at 7:53:59 AM UTC-7, Ricketty C wrote:

Is it not coherence that allows the laser light to spread less rapidly? Do I have this wrong?

Coherence is the only light-emission plan that gives a lot of light
into a single direction (étendue problems). Thermal sources the size
of stars can be seen at a distance; light bulbs, not so much.

Hmm OK, I thought coherence was only about the wavelength / frequency
purity. As in coherence length. Send a thermal source through a
narrow interference filter and you increase the coherence of the
output light...

George H.

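The usual figure of merit there is the coherence length, roughly
lambda^2 / delta-lambda; a quick check with illustrative numbers
(Python):

lam  = 550e-9   # m, centre wavelength (illustrative)
dlam = 1e-9     # m, a 1 nm interference-filter passband (illustrative)

L_c = lam**2 / dlam   # coherence length, ~0.3 mm
print(f"coherence length ~{L_c * 1e3:.2f} mm")

So a narrow filter does buy temporal coherence, as George says;
spatial coherence is a separate question.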
 
Martin Brown wrote:
The only system I know of that used photon amplitude correlation only
was Hanbury-Brown & Twiss's intensity interferometer from Jodrell
Bank which surprised most physicists at the time by actually working!
They measured the diameters of numerous bright stars for the first
time since Michelson & Pease had done it with an iron girder on the
Mt Wilson 100".

See, there you have it. It might surprise people by working. I'm not
asking if known tech can do it. I'm asking if it's feasible for an
array to collect a lot of visible light and process multiple images
into one with more clarity. Like a common deblurring algorithm except
it would be given multiple images to start from.
 
On 2020-08-29 08:57, Tom Del Rosso wrote:
bitrex wrote:
On 8/23/2020 6:46 PM, Tom Del Rosso wrote:
On Friday, August 21, 2020 at 11:55:30 AM UTC-4, bitrex wrote:

I'm probably remembering the exact figures incorrectly but if you
could push a space telescope out to about a tenth of a light-year
from Earth, it could leverage gravitational lensing to make an
equivalent lens of enormous size.

Perhaps you could make an enormous virtual lens with a large array of
Hubbles.

At infrared or optical wavelengths it's hard to keep the light
coherent between members of the array separated by large distances;
aperture synthesis is a lot easier with radio astronomy, I think.

Why does it have to be coherent? Two photons entering a single
telescope aren't coherent. There would be more skew in digital
processing of the images. Radio astronomy at the VLA looks at one big
pixel and uses some kind of analog mixer.

Thinking about light propagation as photons bouncing around is a
one-hundred-percent guaranteed way to get the wrong answer.

Cheers

Phil Hobbs

--
Dr Philip C D Hobbs
Principal Consultant
ElectroOptical Innovations LLC / Hobbs ElectroOptics
Optics, Electro-optics, Photonics, Analog Electronics
Briarcliff Manor NY 10510

http://electrooptical.net
http://hobbs-eo.com
 
On 2020-08-29 09:52, Martin Brown wrote:
On 29/08/2020 13:57, Tom Del Rosso wrote:
bitrex wrote:
On 8/23/2020 6:46 PM, Tom Del Rosso wrote:
On Friday, August 21, 2020 at 11:55:30 AM UTC-4, bitrex wrote:

I'm probably remembering the exact figures incorrectly but if you
could push a space telescope out to about a tenth of a light-year
from Earth, it could leverage gravitational lensing to make an
equivalent lens of enormous size.

Perhaps you could make an enormous virtual lens with a large array of
Hubbles.

At infrared or optical wavelengths it's hard to keep the light
coherent between members of the array separated by large distances;
aperture synthesis is a lot easier with radio astronomy, I think.

Why does it have to be coherent? Two photons entering a single
telescope aren't coherent. There would be more skew in digital
processing of the images. Radio astronomy at the VLA looks at one big
pixel and uses some kind of analog mixer.

Not exactly. The signals are combined coherently and a huge effort is
made to compensate the differential paths as the scopes track. It is
quite an art. The phase centre is deliberately put just outside of the
target field of view because all local interference is coherent there.

The signals from every antenna are phase compensated and digitised and
combined in the correlators against every other antenna. On a good day
VLA can do a quick snapshot in 5 minutes because it is a Y shape.

The original aperture synthesis algorithms were strictly limited to
E-W baselines like the One Mile Telescope, or nearly E-W like the
Ryle 5 km at MRAO Cambridge. It is pretty much like making Young's
slits sensitivity patterns on the sky and measuring the Fourier
transform of the sky brightness.

VLBI avoids the strict need to maintain perfect coherence by using
invariant combinations of three antennas as closure phases and of four
antennas for closure amplitudes. These remain good observables even
when the absolute phase at each antenna is shot to hell. It needs a
high-precision synchronised maser clock at each dish and as many
dishes in the network as possible to get enough good observables. It
also requires a lot of patience and some luck to find the interference
fringes.

This technique is now also used in the optical at CHARA and before that
on the experimental rig at COAST in MRAO. Essentially using radio
astronomy methods to work with optical (near IR) light beams.

The only system I know of that used photon amplitude correlation only
was Hanbury-Brown & Twiss's intensity interferometer from Jodrell
Bank which surprised most physicists at the time by actually working!
They measured the diameters of numerous bright stars for the first
time since Michelson & Pease had done it with an iron girder on the
Mt Wilson 100".

There is a book about it which I think is online somewhere.

It is. ;)

<https://electrooptical.net/static/oldsite/hanbury/The_Intensity_Interferometer-Hanbury_Brown.pdf>

Cheers

Phil Hobbs

--
Dr Philip C D Hobbs
Principal Consultant
ElectroOptical Innovations LLC / Hobbs ElectroOptics
Optics, Electro-optics, Photonics, Analog Electronics
Briarcliff Manor NY 10510

http://electrooptical.net
http://hobbs-eo.com
 
On 2020-08-29 18:16, George Herold wrote:
On Saturday, August 29, 2020 at 4:55:33 PM UTC-4, whit3rd wrote:
On Saturday, August 29, 2020 at 7:53:59 AM UTC-7, Ricketty C wrote:

Is it not coherence that allows the laser light to spread less rapidly? Do I have this wrong?

Coherence is the only light-emission plan that gives a lot of light
into a single direction (étendue problems). Thermal sources the size
of stars can be seen at a distance; light bulbs, not so much.

Hmm OK, I thought coherence was only about the wavelength / frequency
purity. As in coherence length. Send a thermal source through a
narrow interference filter and you increase the coherence of the
output light...

That\'s temporal coherence. There\'s also spatial coherence to worry about.

Cheers

Phil Hobbs


--
Dr Philip C D Hobbs
Principal Consultant
ElectroOptical Innovations LLC / Hobbs ElectroOptics
Optics, Electro-optics, Photonics, Analog Electronics
Briarcliff Manor NY 10510

http://electrooptical.net
http://hobbs-eo.com
 
On 30/08/2020 00:27, Tom Del Rosso wrote:
Martin Brown wrote:

The only system I know of that used photon amplitude correlation only
was Hanbury-Brown & Twiss's intensity interferometer from Jodrell
Bank which surprised most physicists at the time by actually working!
They measured the diameters of numerous bright stars for the first
time since Michelson & Pease had done it with an iron girder on the
Mt Wilson 100".

See, there you have it. It might surprise people by working. I'm not

But the *BIG* snag is that without phase information the result you get
is very ambiguous. You don't get an image so much as a centrosymmetric
autocorrelation of the sky brightness distribution. That isn't too bad
on a stellar disk but it is pretty whacky on most complex objects.
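
You can see that ambiguity in a few lines of numpy (a toy sketch of my
own: keep the measured amplitudes, throw away the phases):

import numpy as np

sky = np.zeros(256)
sky[[40, 90, 95]] = [1.0, 0.5, 0.8]   # deliberately asymmetric toy "sky"

V = np.fft.fft(sky)                         # complex visibilities
no_phase = np.fft.ifft(np.abs(V)**2).real   # amplitudes only

# what comes back is the autocorrelation of the sky: centrosymmetric,
# so the asymmetry of the original source is lost
flipped = np.roll(no_phase[::-1], 1)
print("centrosymmetric:", np.allclose(no_phase, flipped))   # True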

Closure phase (and amplitude to a lesser extent) is the preferred
technique these days to get good observables out of tricky data. A
technique devised by Jennison at Jodrell Bank.

https://en.wikipedia.org/wiki/Closure_phase

ISTR Baldwin did a phaseless image of Cygnus A at 151 MHz with synthetic
phases added to the measured amplitudes based on other observations at
2.7 GHz (quite a big step up in frequency). They were lucky in that this
particular one is very symmetrical (although VLA observations later
revealed a jet in the lobe pointing towards us and backwash from the jet
in the one pointing away). Much higher resolution images are now
available. Being one of the brightest radio objects in the sky means it
gets used to test out every new technique there is. Tiny little galaxy
right at the middle with these truly massive radio lobes round it!

http://articles.adsabs.harvard.edu/pdf/1980MNRAS.192..931W

asking if known tech can do it. I'm asking if it's feasible for an array
to collect a lot of visible light and process multiple images into one
with more clarity. Like a common deblurring algorithm except it would
be given multiple images to start from.

It is actually rather old technology. Frieden developed a photographic
method of speckle interferometry using narrow band starlight that could
resolve some close doubles with enough aperture. It was exacting work
and rather tedious. Never really caught on.

Lucky imaging, which arose from faster, more sensitive cameras plus a
rather simple piece of logic sometimes used in aperture synthesis
(align on the highest spike of each image), has tended to take over
these days. Astonishing images, limited in resolution only by the
available aperture, are possible today by amateurs using everything
from modified webcams to state-of-the-art CCDs. Basically take lots of
shots, throw away all the bad blurry ones, and shift and add the
remaining high-contrast images using a smart algorithm.
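
The core of that loop fits in a few lines (a numpy sketch under my own
simplifications; real stacking software does sub-pixel registration
and quality weighting):

import numpy as np

def lucky_stack(frames, keep=0.1):
    """Keep the sharpest fraction of frames, register each on its
    brightest pixel, and average. frames: list of 2-D arrays."""
    def sharpness(f):                 # crude metric: gradient energy
        gy, gx = np.gradient(f)
        return np.var(np.hypot(gx, gy))

    ranked = sorted(frames, key=sharpness, reverse=True)
    best = ranked[:max(1, int(keep * len(frames)))]

    h, w = best[0].shape
    acc = np.zeros((h, w))
    for f in best:                    # shift-and-add on the peak
        y, x = np.unravel_index(np.argmax(f), f.shape)
        acc += np.roll(np.roll(f, h // 2 - y, axis=0), w // 2 - x, axis=1)
    return acc / len(best)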

A side effect is that Jupiter is now under observation somewhere in the
world by at least two or three amateur scopes with enough resolution to
show any impacts. Turns out there are more than we ever imagined.


--
Regards,
Martin Brown
 
On 2020-08-30 04:19, Martin Brown wrote:
On 30/08/2020 00:27, Tom Del Rosso wrote:
Martin Brown wrote:

The only system I know of that used photon amplitude correlation
only was Hanbury-Brown & Twiss's intensity interferometer from
Jodrell Bank which surprised most physicists at the time by
actually working! They measured the diameters of numerous bright
stars for the first time since Michelson & Pease had done it with
an iron girder on the Mt Wilson 100".

See, there you have it. It might surprise people by working. I'm
not

But the *BIG* snag is that without phase information the result you
get is very ambiguous. You don't get an image so much as a
centrosymmetric autocorrelation of the sky brightness distribution.
That isn't too bad on a stellar disk but it is pretty whacky on most
complex objects.

Closure phase (and amplitude to a lesser extent) is the preferred
technique these days to get good observables out of tricky data. A
technique devised by Jennison at Jodrell Bank.

https://en.wikipedia.org/wiki/Closure_phase

ISTR Baldwin did a phaseless image of Cygnus A at 151 MHz with
synthetic phases added to the measured amplitudes based on other
observations at 2.7 GHz (quite a big step up in frequency). They were
lucky in that this particular one is very symmetrical (although VLA
observations later revealed a jet in the lobe pointing towards us and
backwash from the jet in the one pointing away). Much higher
resolution images are now available. Being one of the brightest radio
objects in the sky means it gets used to test out every new technique
there is. Tiny little galaxy right at the middle with these truly
massive radio lobes round it!

http://articles.adsabs.harvard.edu/pdf/1980MNRAS.192..931W

asking if known tech can do it. I'm asking if it's feasible for an
array to collect a lot of visible light and process multiple images
into one with more clarity. Like a common deblurring algorithm
except it would be given multiple images to start from.

It is actually rather old technology. Frieden developed a
photographic method of speckle interferometry using narrow band
starlight that could resolve some close doubles with enough aperture.
It was exacting work and rather tedious. Never really caught on.

Lucky imaging, which arose from faster, more sensitive cameras plus a
rather simple piece of logic sometimes used in aperture synthesis
(align on the highest spike of each image), has tended to take over
these days. Astonishing images, limited in resolution only by the
available aperture, are possible today by amateurs using everything
from modified webcams to state-of-the-art CCDs. Basically take lots
of shots, throw away all the bad blurry ones, and shift and add the
remaining high-contrast images using a smart algorithm.

A side effect is that Jupiter is now under observation somewhere in
the world by at least two or three amateur scopes with enough
resolution to show any impacts. Turns out there are more than we ever
imagined.

Speckle interferometry was a good technique that you could do using
photographic plates, but it's obsolete now, as you say. (*)

The lucky imaging trick works fine as long as the object is bright
enough. Seeing fluctuates in a bandwidth of up to 20 Hz or thereabouts,
so you need to be able to estimate image sharpness reasonably well in at
most 100 ms.

Cheers

Phil Hobbs

(*) Even though silver halide film can now be made as sensitive as a
good CCD, by adding formate ions to the emulsion.

--
Dr Philip C D Hobbs
Principal Consultant
ElectroOptical Innovations LLC / Hobbs ElectroOptics
Optics, Electro-optics, Photonics, Analog Electronics
Briarcliff Manor NY 10510

http://electrooptical.net
http://hobbs-eo.com
 
On Saturday, August 29, 2020 at 8:54:24 PM UTC-4, Phil Hobbs wrote:
On 2020-08-29 18:16, George Herold wrote:
On Saturday, August 29, 2020 at 4:55:33 PM UTC-4, whit3rd wrote:
On Saturday, August 29, 2020 at 7:53:59 AM UTC-7, Ricketty C wrote:

Is it not coherence that allows the laser light to spread less rapidly? Do I have this wrong?

Coherence is the only light-emission plan that gives a lot of light
into a single direction (étendue problems). Thermal sources the size
of stars can be seen at a distance; light bulbs, not so much.

Hmm OK, I thought coherence was only about the wavelength / frequency
purity. As in coherence length. Send a thermal source through a
narrow interference filter and you increase the coherence of the
output light...


That\'s temporal coherence. There\'s also spatial coherence to worry about.

Right, thanks. AFAIK, spatial coherence involves a pinhole that
the laser/light source is focused through.
(Memories of a holography lab.)

George H.
Cheers

Phil Hobbs


--
Dr Philip C D Hobbs
Principal Consultant
ElectroOptical Innovations LLC / Hobbs ElectroOptics
Optics, Electro-optics, Photonics, Analog Electronics
Briarcliff Manor NY 10510

http://electrooptical.net
http://hobbs-eo.com
 
On 2020-08-30 12:57, George Herold wrote:
On Saturday, August 29, 2020 at 8:54:24 PM UTC-4, Phil Hobbs wrote:
On 2020-08-29 18:16, George Herold wrote:
On Saturday, August 29, 2020 at 4:55:33 PM UTC-4, whit3rd wrote:
On Saturday, August 29, 2020 at 7:53:59 AM UTC-7, Ricketty C wrote:

Is it not coherence that allows the laser light to spread less rapidly? Do I have this wrong?

Coherence is the only light-emission plan that gives a lot of light
into a single direction (étendue problems). Thermal sources the size
of stars can be seen at a distance; light bulbs, not so much.

Hmm OK, I thought coherence was only about the wavelength / frequency
purity. As in coherence length. Send a thermal source through a
narrow interference filter and you increase the coherence of the
output light...


That\'s temporal coherence. There\'s also spatial coherence to worry about.

Right, thanks. AFAIK, spatial coherence involves a pinhole that
the laser/light source is focused through.
(Memories of a holography lab.)

Yup, the fringe visibility as a function of separation of the two
pinholes goes like the modulus of the complex degree of coherence. (Its
phase sets the phase-shift of the fringe pattern.)
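
In symbols, for the two-pinhole fringes that is (the standard
Young's-slits result, written from memory):

  I(x) = I_1 + I_2 + 2*sqrt(I_1*I_2) * |gamma_12| * cos(phi(x) + arg gamma_12)

so |gamma_12| sets the fringe contrast and arg gamma_12 slides the
pattern, as above.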

Cheers

Phil Hobbs

(Who had the privilege of taking statistical optics from Joe Goodman
himself many moons ago, and remembers odd bits of it.)

--
Dr Philip C D Hobbs
Principal Consultant
ElectroOptical Innovations LLC / Hobbs ElectroOptics
Optics, Electro-optics, Photonics, Analog Electronics
Briarcliff Manor NY 10510

http://electrooptical.net
http://hobbs-eo.com
 
