RCIngham
Guest

Simulation or other experiment will indicate which of us (if either) is correct.

On 8/2/2013 6:35 AM, RCIngham wrote:
On 8/1/13 5:56 AM, RCIngham wrote:
On 7/31/13 9:36 AM, RCIngham wrote:
[snip]
Unless 'length' is limited, your worst case has header "0000001111111111" (with an extra bit stuffed) followed by 16 * 1023 = 16368 zeros, which will have 2728 ones stuffed into them. Total line packet length is 19113 symbols. If the clocks are within 1/19114 of each other, the same number of symbols will be received as sent, ASSUMING no jitter. You can't assume that, but if there is 'not much' jitter then perhaps 1/100k will be good enough for relative drift to not need to be corrected for.
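The worst-case arithmetic above can be sketched in a few lines. This is only a restatement of the figures quoted (16-bit header, 16 * 1023 zero payload bits, one '1' stuffed after every 6 consecutive zeros), not a description of the actual protocol:

```python
# Worst-case frame length with bit stuffing, per the figures above.
HEADER_BITS = 16            # "0000001111111111"
STUFFED_IN_HEADER = 1       # the six leading zeros force one stuffed bit
PAYLOAD_ZEROS = 16 * 1023   # 16368 zeros

stuffed = PAYLOAD_ZEROS // 6            # one '1' per 6 zeros: 2728
total = HEADER_BITS + STUFFED_IN_HEADER + PAYLOAD_ZEROS + stuffed

print(total)        # 19113 symbols on the line
print(1 / (total + 1))  # clock ratio to stay within one symbol per frame
```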
So, for version 1, use the 'sync' to establish the start of frame and the sampling point, simulate the 'Rx fast' and 'Rx slow' cases in parallel, and see whether it works.
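The suggested 'Rx fast' / 'Rx slow' simulation can be sketched roughly as follows. The model is an assumption of mine, not the OP's design: the receiver samples the transmitted stream at bit times stretched by a fractional drift, with the sample point centred in each bit, and we count how many symbols come out:

```python
# Sketch: sample a transmitted bitstream with a receiver clock that runs
# slightly fast or slow (no resynchronisation during the frame).
def receive(tx_bits, drift):
    """Sample tx_bits with a receiver clock off by `drift` (fractional)."""
    rx = []
    k = 0
    while True:
        t = (k + 0.5) * (1 + drift)  # centre of the k-th receiver bit
        idx = int(t)                 # transmitted bit that sample lands in
        if idx >= len(tx_bits):
            break
        rx.append(tx_bits[idx])
        k += 1
    return rx

tx = [0] * 19113                     # worst-case frame length from above
for drift in (-1e-5, 0.0, 1e-5):     # illustrative drift values
    print(drift, len(receive(tx, drift)))
```

With a 1/100k relative drift, the accumulated error over 19113 symbols is about 0.19 of a bit period, so the symbol count comes out right, matching the ball-park claim above.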
BTW, this is off-topic for C.A.F., as it is a system design problem not related to the implementation method.
Since you can resynchronize your sampling clock on each transition received, you only need to "hold lock" for the maximum time between transitions, which is 7 bit times. This would mean that if you have a nominal 4x clock, some sample points will be only 3 clocks apart (if you are slow) or some will be 5 clocks apart (if you are fast), while most will be 4 clocks apart. This is the reason for the 1 bit stuffing.
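A minimal sketch of that resynchronizing receiver, under my own assumptions (4x oversampling, phase counter reset on every transition, line sampled two oversample ticks after the edge, i.e. mid-bit):

```python
# Sketch of a 4x-oversampled receiver that resynchronises on every
# transition. `samples` holds the line level at each oversample tick.
def oversample_rx(samples, rate=4):
    bits = []
    prev = samples[0]
    phase = 0
    for s in samples:
        if s != prev:                # transition: reset the bit phase
            phase = 0
        if phase == rate // 2:       # sample in the middle of the bit
            bits.append(s)
        phase = (phase + 1) % rate
        prev = s
    return bits
```

Because the phase counter resets on each edge, a bit that arrives one oversample tick early (3 ticks) or late (5 ticks) is still sampled near its centre, which is exactly the 3/4/5-clock behaviour described above.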
The bit-stuffing in long sequences of zeroes is almost certainly there to facilitate a conventional clock recovery method, which I am proposing not using PROVIDED THAT the clocks at each end are within a sufficiently tight tolerance. Detect the ones in the as-sent stream first, then decide which are due to bit-stuffing, and remove them.
Deciding how tight a tolerance is 'sufficiently tight' is probably non-trivial, so I won't be doing it for free.
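The de-stuffing step described above (drop each '1' that follows six consecutive '0's) might look like this; a sketch, assuming the stuffing rule is exactly "insert a one after every six zeros":

```python
# Remove stuffed bits: a '1' following six consecutive '0's is assumed
# to be a stuffed bit and is dropped; all other bits pass through.
def destuff(bits):
    out = []
    run = 0                  # length of the current run of zeros
    for b in bits:
        if b == 0:
            out.append(0)
            run += 1
        elif run >= 6:       # a one right after six zeros: stuffed
            run = 0
        else:
            out.append(1)
            run = 0
    return out
```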
Since a 4x clock allows for a 25% data period correction, and we will get an opportunity to do so every 7 data periods, we can tolerate about a 25/7 ~ 3% error in clock frequency. (To get a more exact value we will need to know details like jitter and sampling apertures, but this gives us a good ball-park figure.) Higher sampling rates can about double this; the key is we need to be able to know which direction the error is in, so we need to be less than 50% of a data period of error, including the variation within a sample clock.
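The ball-park figure works out as follows, under the same assumptions as above (4x oversampling, a guaranteed transition at least every 7 bit periods):

```python
# One oversample tick of phase correction per resync, with a correction
# opportunity guaranteed at least every 7 bit periods.
oversample = 4
max_gap_bits = 7                       # bit-stuffing guarantees this

correction = 1 / oversample            # 0.25 bit period per resync
tolerance = correction / max_gap_bits  # tolerable fractional clock error

print(f"{tolerance:.1%}")              # roughly 3.6%
```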
To try to gather the data without resynchronizing VASTLY decreases your tolerance for clock errors, as you need to stay within a clock cycle over the entire message.
The protocol, with its 3-one preamble, does seem like there may have been some effort to enable the use of a PLL to generate the data sampling clock, which may have been the original method. This does have the advantage that the data clock out of the sampler is more regular (not having the sudden jumps from the resynchronizing), and getting a set burst of 1s helps the PLL to get a bit more centered on the data. My experience though is that with FPGAs (as would be on topic for this group), this sort of PLL synchronism is not normally used, but oversampling clocks with phase correction is fairly standard.
Some form of clock recovery is essential for continuous ('synchronous') data streams. It is not required for 'sufficiently short' asynchronous data bursts, the classic example of which is RS-232. What I am suggesting is that the OP determines - using simulation - whether these frames are too long, given the relative clock tolerances, for a system design without clock recovery.
As I previously noted, this is first a 'system design' problem. Only after that has been completed does it become an 'FPGA design' problem.
I don't think the frame length is the key parameter; rather it is the rule of inserting a one after six zeros that guarantees a transition every 7 bits.
--
Rick
---------------------------------------
Posted through http://www.FPGARelated.com