feklar
Guest
The explosive growth of the mobile phone industry has crowded and
tangled the nation's airwaves to such an extent that wireless company
signals are increasingly interfering with emergency radio frequencies
used by police and firefighters, public safety agencies said.
A cure?
I may know of a way to broadcast 10 to 12 NTSC TV stations from a
single antenna using a hundredth or less of the 4 MHz bandwidth it
currently takes to broadcast a single NTSC video channel.
It may work, it may not. It would have to be looked into. That
DirecTV works leads me to believe that this method would also be
valid.
The basic principle would be to use one of the various materials that
are electroreactive, such as many silver and cesium compounds, for the
TV station transmitter tube cathodes.
The modulation pattern would be imparted using either a laser or an
x-ray source to irradiate the cathode, thereby introducing tiny
fluctuations into the output signal.
The transmitter would broadcast one narrow-band frequency, say 500 MHz
for example, with a bandwidth similar to or narrower than an FM radio
station's.
No other modulation would be applied to the output (power
transmission) tubes except this tiny signal.
Just bear with me, it gets better.
Now say we use a standard very high data rate beam chopper if a high
powered laser irradiates the cathode, the chopper fitted with lead
plates if x-ray beam chopping is required. Since x-rays are inherently
more powerful, x-rays are likely to be the correct irradiation source.
The idea is to irradiate the cathode with tiny, very short duration,
but very high powered pulses.
Obviously, the transmitting tube will have to be cooled to increase
the sensitivity (decrease the rise time and fall time) of the cathode
material. The cathode should be a laser etched block, etched to leave
large numbers of rows of microminiature towers or posts sticking up
from its surface to increase the surface area and provide better
cooling (with a high pressure supercooled gas) so as to decrease the
rise and fall times.
Looking at the output of a standard VHF receiver on an oscilloscope,
there would be no sign of these tiny introduced disturbances.
So we need a specially designed receiver to process the received
signal.
Whatever the transmitter frequency is, have the user select that
frequency (channel). Inside the receiver, a phase locked loop
generator will tune to the signal and kickstart and maintain a
separate oscillator circuit at the same frequency.
The idea here is to have the receiver generate the exact same
frequency, and match it to the received signal in phase and power
level without making an exact copy of the received signal, using a
seperate "clean" pure oscillator circuit.
Then this pure signal can be inverted 180 degrees out of phase and
applied to the original signal. Once this is done, all that will
remain is the very very tiny stream of pulses that had been imparted
at the transmitting cathode by the laser or x-ray source.
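Here is a minimal numpy sketch of that cancellation step, assuming the
receiver's clean oscillator is already matched to the carrier in
frequency, phase, and amplitude (the hard part in practice); the
sample rate, carrier frequency, and pulse size are made-up
illustration values, not a real design.

    # Carrier cancellation sketch: subtract a matched "clean" carrier
    # from the received signal so only the tiny imparted pulses remain.
    import numpy as np

    fs = 10_000_000                      # sample rate, Hz (illustrative)
    fc = 500_000                         # carrier, Hz (scaled down for the demo)
    t = np.arange(0, 0.001, 1 / fs)

    carrier = np.cos(2 * np.pi * fc * t)

    # Tiny pulses imparted at the transmitter cathode: 0.1% amplitude bumps.
    pulses = np.zeros_like(t)
    pulses[::500] = 0.001

    received = (1.0 + pulses) * carrier

    # The receiver's clean copy, inverted and applied to the input.
    local = np.cos(2 * np.pi * fc * t)
    residue = received - local           # equals pulses * carrier

    print(residue.max())                 # ~0.001: only the pulse train is left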
It could well be that the signal will be so small that shielding the
receiver and oscillator circuits will not be enough; they may have to
be cooled or supercooled with a microminiature liquid nitrogen system
or better, a Peltier thermoelectric cooler chip.
Mass produced, neither of these would be any significant cost
consideration. Peltiers are already mass produced. Nitrogen would be
much more energy efficient than a Peltier.
No doubt Peltiers at first because of the cost considerations of
designing a micro liquid nitrogen system... but later, nitrogen.
The tiny train of pulses would then be just barely visible on an
oscilloscope looking at the canceled-carrier output of the receiver.
Obviously, some very high speed, fast-reactance semiconductor
components would be required, gallium arsenide or YAG rather than
silicon or germanium.
Three very low noise amplifier stages later the signal should be
usable. This is the question: Will the pulses be large enough to be
processed? If they can be made large enough, then this concept is
valid.
Use analog simulated digital for the signal. It has a high error rate
but is still completely acceptable for this application.
In other words, modulate the output power of the irradiation laser or
x-ray source to provide each pulse with a power level of 0 to 255.
This way, two pulses are all that would be required. One for chroma,
the other for luminance.
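A tiny sketch of that encoding, in Python: two pulses per pixel, each
pulse amplitude carrying a 0 to 255 value for luminance and chroma.
The function names and the 0.0 to 1.0 amplitude scale are my own
assumptions for illustration.

    # "Analog simulated digital": each pulse amplitude encodes an 8-bit value.

    def encode_pixel(luma: int, chroma: int) -> tuple[float, float]:
        """Map two 8-bit values to two pulse amplitudes in [0.0, 1.0]."""
        return luma / 255.0, chroma / 255.0

    def decode_pixel(p1: float, p2: float) -> tuple[int, int]:
        """Quantize received pulse amplitudes back to 0-255.  Noise on an
        amplitude becomes a small error in the recovered value, the
        single-pixel inaccuracy described above."""
        clamp = lambda v: min(255, max(0, round(v * 255)))
        return clamp(p1), clamp(p2)

    # Example: a mid-grey pixel with slight noise on the chroma pulse.
    tx = encode_pixel(128, 64)
    rx = (tx[0], tx[1] + 0.004)          # noise shifts chroma by about one count
    print(decode_pixel(*rx))             # (128, 65)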
Now obviously there will be a very high error rate, but an error would
only affect a single pixel, making the brightness of that pixel ever
so slightly inaccurate, or the chroma ever so slightly inaccurate.
This of course is assuming strong signal, no serious noise, and good
sending and receiving conditions.
Just stick with me here, I will be the first to admit that this system
would be much, much more susceptible to noise than standard NTSC, but
there are ways around this seemingly insurmountable problem.
For one, a modified type of CRC error correction can be used. It can
be one CRC per pixel (cuts transmission capability by a third), or one
per scan line. Or it can be similar to ECC as used in hard disks, for
8 or 16 pixels in each sample.
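As a rough illustration of the per-scan-line variant, here is a small
Python sketch; the CRC-8 polynomial and the framing are assumptions,
since the exact scheme is left open here.

    # One CRC byte per scan line: the receiver recomputes it and flags the
    # whole line as suspect on a mismatch, so the repair logic below can act.

    def crc8(data: bytes, poly: int = 0x07) -> int:
        crc = 0
        for byte in data:
            crc ^= byte
            for _ in range(8):
                crc = ((crc << 1) ^ poly) & 0xFF if crc & 0x80 else (crc << 1) & 0xFF
        return crc

    scan_line = bytes(range(64)) * 10               # 640 pixel values, illustrative
    frame = scan_line + bytes([crc8(scan_line)])

    received = bytearray(frame)
    received[100] ^= 0x10                           # simulate a corrupted pixel
    ok = crc8(bytes(received[:-1])) == received[-1]
    print(ok)                                       # False: line flagged as bad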
First we have to go over the decoding method before getting into some
of the higher order forms of error correction that can be used.
What we need are memory gates and a slight time delay. As each line
of scan is received, instead of being output to the display, it gets
stored in RAM. Say we have storage for 5 complete screens in RAM.
Using a FIFO and a circular buffer pointer, as each new screen
(NTSC - 30 screens per second) is received, it is stored in its own
page of RAM. When the fourth screen begins being received, the first
one that had been received begins being output to the display device,
and so on, using the buffer in a circular arrangement. As each new
screen comes in, the oldest one gets output to the display.
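In Python terms, the buffering could look something like this; the
three-frame output delay and the placeholder frame contents are just
assumptions to match the description above.

    # Five-screen circular buffer: each incoming frame is written to RAM,
    # and display output lags the input by three frames.
    DEPTH = 5          # screens held in RAM
    DELAY = 3          # output starts as the fourth screen starts arriving

    buffer = [None] * DEPTH
    display = []

    for frame_number in range(10):                 # ten incoming screens
        buffer[frame_number % DEPTH] = f"frame {frame_number}"
        if frame_number >= DELAY:
            # Output the frame received DELAY screens ago; it stays in RAM
            # until its slot is overwritten, so later frames can borrow
            # pixels from it for error correction.
            display.append(buffer[(frame_number - DELAY) % DEPTH])

    print(display)     # ['frame 0', 'frame 1', ... 'frame 6']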
This method allows considerable noise reduction potential.
First, consider snow. In standard positive picture phase NTSC, snow
is white. Error pixels show up as white or grey dots on the screen;
the more noise, the more white pixels.
This setup uses negative picture phase inversion, and error pixels are
black. So, the result of random noise is not visible interference on
the display but instead, a reduction in overall screen brightness.
How can we find error pixels? One method is to use the banks of RAM
and apply a control program and the appropriate logic. For example,
one yellow pixel in a sea of green ones is probably an error. Color
area changes are almost always either drastic or gradual. Rarely if
ever does just one purple pixel show up in a green area.
When such a question arises, the pixel that may be in error can be
rejected as erroneous and replaced with the average of the values of
all the pixels surrounding it.
Obviously, the logic must extend past the surrounding pixels because
there are cases of one yellow pixel for every 30 green ones used as an
area tint. Area tint patterns can be analyzed and left alone if they
show a repeating pattern.
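A minimal sketch of that isolated-outlier repair, in Python with
numpy; the rejection threshold is an assumption, and a real decoder
would also run the area-tint pattern check before rejecting a pixel.

    # Replace a pixel that differs sharply from every neighbor with the
    # average of its neighbors.
    import numpy as np

    def repair(frame: np.ndarray, threshold: int = 60) -> np.ndarray:
        out = frame.copy()
        h, w = frame.shape
        for y in range(1, h - 1):
            for x in range(1, w - 1):
                block = frame[y - 1:y + 2, x - 1:x + 2].astype(int)
                neighbors = np.delete(block.ravel(), 4)     # drop the center
                center = int(frame[y, x])
                # Isolated outlier: far from every one of its 8 neighbors.
                if (np.abs(neighbors - center) > threshold).all():
                    out[y, x] = neighbors.mean().astype(frame.dtype)
        return out

    green = np.full((5, 5), 100, dtype=np.uint8)
    green[2, 2] = 250                       # one wildly different pixel
    print(repair(green)[2, 2])              # 100: replaced by the neighbor average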
We also have more than one screen to deal with. After the
generalities of the given screen are analyzed by a CPU, error pixels
can be replaced with the pixel from the previous screen which is still
residing in RAM. Even after a screen is output to the display it will
remain in RAM for four more screens before being overwritten.
In cases where there is doubt, the pixel can be blacked out as well,
resulting in a tiny reduction in screen brightness.
Obviously, there will be times when many lines' worth of pixel values
exceeding 255 will be received, but instead of showing these as black
lines on the display, the pixels from the last good screen can be
displayed.
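A sketch of that temporal replacement; the bad-pixel mask is assumed
to come from the CRC and outlier checks above, and the frame contents
here are placeholders.

    # Pixels or whole lines flagged as bad in the current screen are copied
    # from the previous screen, still sitting in the circular RAM buffer.
    import numpy as np

    def temporal_repair(current: np.ndarray, previous: np.ndarray,
                        bad: np.ndarray) -> np.ndarray:
        """Return the current screen with bad pixels taken from the previous one."""
        return np.where(bad, previous, current)

    prev = np.full((4, 6), 90, dtype=np.uint8)
    curr = np.full((4, 6), 95, dtype=np.uint8)
    mask = np.zeros((4, 6), dtype=bool)
    mask[1, :] = True                     # one whole scan line flagged as bad

    print(temporal_repair(curr, prev, mask)[1])   # line 1 comes from prev: all 90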
All of these corrections fall well below what the eye can discern.
The eye is not fast enough to detect even a 50 percent error rate in
erroneously "corrected" pixels. This is especially true for averaged
replacement pixels.
Given the random nature of noise, some noise will cause a general
reduction in display screen brightness, but using the above methods to
replace bad pixels with good ones from previous screens will limit the
scale of the reduction considerably.
There are other more sophisticated approaches to using CPUs to analyze
the screens stored in RAM but the basic ideas above illustrate a few
of the basics well enough, and space is limited here.
Remember, the dot rate for NTSC is 12 MHz (the pixel rate is 24 MHz
using two values per pixel, chroma and luminance), which means an
80386 CPU running at 50 MHz could accomplish a fair amount of
processing given the proper hardware logic gates to work with.
Obviously, a Pentium 500 running multitasking could process numerous
logic analysis activities on multiple banks of RAM.
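A rough check on that processing budget; the cycle counts are the
only thing being illustrated here.

    # Pixel values per second versus CPU clock cycles available per value.
    dot_rate = 12_000_000            # dots per second
    values = dot_rate * 2            # chroma + luminance = 24 million values/s

    for clock in (50_000_000, 500_000_000):   # 80386 @ 50 MHz, Pentium @ 500 MHz
        print(clock // values, "cycles per value at", clock, "Hz")
    # 2 cycles per value at 50 MHz, 20 at 500 MHz: the CPU can only do a
    # little per pixel unless dedicated logic gates handle the bulk work.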
If the method works, it should be relatively easy to interleave 10 or
12 channels onto a single data stream. Then at the receiver, the
channel selector is used first to pick a transmitter, then to choose
and deinterleave one of the channels contained within that frequency.
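The interleaving itself is simple; here is a Python sketch, with
round-robin ordering as the assumed scheme.

    # Interleave several channel streams onto one pulse stream, then pull
    # one channel back out at the receiver.

    def interleave(channels: list[list[int]]) -> list[int]:
        """Round-robin: one pulse from each channel per time slot."""
        return [value for slot in zip(*channels) for value in slot]

    def deinterleave(stream: list[int], n_channels: int, want: int) -> list[int]:
        """Pull channel number `want` out of the combined stream."""
        return stream[want::n_channels]

    chans = [[10, 11, 12], [20, 21, 22], [30, 31, 32]]   # three channels
    stream = interleave(chans)            # [10, 20, 30, 11, 21, 31, 12, 22, 32]
    print(deinterleave(stream, 3, 1))     # [20, 21, 22]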
Figure using a one gigahertz data rate. Even using a CRC correction
"byte" for each pixel, this means the NTSC dot rate of 12 million
pixels per second times three (CRC, chroma, luminance), or 36 MHz.
How many 36 MHz signals can be interleaved into a 1 GHz data rate? 27
channels, with 28 MHz left over. I suspect that the use of ECC or
CRC would be much more aggressive, which is why I said 10 to 12
channels rather than 27. Also, we still have the audio channels to
consider, probably some form of MPEG with error correction.
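For what it is worth, the arithmetic behind that channel count:

    # Back-of-the-envelope channel count, assuming one CRC, one chroma, and
    # one luminance value per dot and a 1 GHz pulse rate.
    dot_rate = 12_000_000                        # NTSC dots per second
    per_channel = dot_rate * 3                   # 36 million pulses per second
    pulse_rate = 1_000_000_000                   # 1 GHz

    print(pulse_rate // per_channel)             # 27 channels
    print(pulse_rate % per_channel)              # 28,000,000 pulses/s left over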
So if 12 channels were used, it would reduce the airwaves requirement
for 12 currently operating TV channels from (12 * 4 MHz) 48 MHz down
to less than 1 MHz. That is not counting the audio channels it would
also free up, since they are far less bandwidth intensive.
But in terms of electrical power use, those audio transmitters aren't
so insignificant, and neither are the video transmitters. If this
concept proves valid, it would save more than a constant gigawatt of
electricity in the USA alone.
Obviously, in the receiver you need high speed gallium arsenide chips
to convert the power level of each received pulse to a digital
value... This same circuit would contain the deinterleave selector
circuitry.
Obviously, the transmitter needs to send time base data periodically
for deinterleaving synchronization.
feklar@rock.com
now you are really screwed
http://www.infernalpress.com/Columns/election.html
http://www.scoop.co.nz/mason/stories/HL0307/S00065.htm