ECG signals Compression/Decompression

Weiss wrote:
Hi,

I want to ask you if it's feasible to create an ECG
compression/decompression algorithm using decomposition of the ECG signal
into orthogonal polynomial bases, on an FPGA.

And if it's possible, do you think this solution is more efficient on an
FPGA, compared to, say, a PSoC or DSPs?





---------------------------------------
Posted through http://www.FPGARelated.com
 
In article <3eidnZ7ikNcFSxDOnZ2dnUVZ_rednZ2d@giganews.com>,
"Weiss" <100383@embeddedrelated> writes:

And if it's possible, do you think this solution is more efficient on an
FPGA, compared to, say, a PSoC or DSPs?

A good rule of thumb is that if you can do it in software,
that's probably easier and cheaper.

--
These are my opinions. I hate spam.
 
On 6/3/2014 10:07 AM, Weiss wrote:
Hi,

I want to ask you if it's feasible to create an ECG
compression/decompression algorithm using decomposition of the ECG signal
into orthogonal polynomial bases, on an FPGA.

And if it's possible, do you think this solution is more efficient on an
FPGA, compared to, say, a PSoC or DSPs?

The choice of algorithm for your compression is another field of study
than FPGA design. There is a newsgroup for that, comp.compression.

Once you have found a reasonable compression algorithm for your signal,
then you can choose an implementation based on the various requirements.

--

Rick
 
Weiss wrote:

Hi,

I want to ask you if it's feasible to create an ECG
compression/decompression algorithm using decomposition of the ECG signal
into orthogonal polynomial bases, on an FPGA.

And if it's possible, do you think this solution is more efficient on an
FPGA, compared to, say, a PSoC or DSPs?
In most cases, the sampling rate is low enough that even general-purpose
micros ought to be able to handle the task.

Jon
 
Tim Wescott <tim@seemywebsite.really> wrote:
On 6/3/2014 10:07 AM, Weiss wrote:

I want to ask you if it's feasible to create an ECG
compression/decompression algorithm using decomposition of the ECG
signal into orthogonal polynomial bases, on an FPGA.

(snip)

> Is comp.compression active? I haven't heard of it.

It has been pretty quiet lately, as has comp.dsp.

comp.dsp may be a good resource, too. ECG signals (assuming you mean
electrocardiogram) have some unique features that should make efficient
compression both a joy and a terror, given that (a) the signal has a lot
of "quiet time" with low information content, and (b) it's medical, which
means that lives will depend on the fidelity of the reconstruction.

OK, who remembers watching "Emergency!" many years ago? They had
a portable machine which, if I remember right, sent the signal
through a phone line. That is, pretty much an analog modem.
(Actually, one direction, so modulator on one end, demodulator
on the other.) Seems like that sets a limit on the needed
bandwidth.

But I presume the needed bandwidth is a lot less than 4kHz.

Given the life-critical nature of the thing, I'd certainly want to start
by finding a lossless compression method, and see if that's good enough.

If you know exactly the features that are needed, you can compress
down to just those features. But the bandwidth seems low enough to
me, that I don't see why you need to work that hard.

-- glen
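Tim's suggestion to try a lossless method first can be sketched concretely: delta-encode the samples (ECG is slowly varying, so most differences are small), then hand the result to a generic entropy coder. This is a minimal, hypothetical illustration, not anything from the thread; the toy waveform and the zlib back end are made up for the example:

```python
import zlib

def delta_encode(samples):
    """Replace each sample with its difference from the previous one.
    A slowly varying signal yields mostly small deltas, which a generic
    entropy coder can pack tightly -- and the original is recovered
    exactly (lossless)."""
    return [samples[0]] + [b - a for a, b in zip(samples, samples[1:])]

def delta_decode(deltas):
    """Invert delta_encode by running a cumulative sum."""
    out = [deltas[0]]
    for d in deltas[1:]:
        out.append(out[-1] + d)
    return out

# Round trip on a toy waveform: the reconstruction is exact.
samples = [100, 101, 103, 102, 102, 104]
deltas = delta_encode(samples)            # [100, 1, 2, -1, 0, 2]
assert delta_decode(deltas) == samples

# Small deltas then compress well with an off-the-shelf coder:
packed = zlib.compress(bytes(d & 0xFF for d in deltas))
```

Whether the achievable lossless ratio is "good enough" can only be judged on real recordings.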
 
On Tue, 03 Jun 2014 13:50:02 -0400, rickman wrote:

On 6/3/2014 10:07 AM, Weiss wrote:
Hi,

I want to ask you if it's feasible to create an ECG
compression/decompression algorithm using decomposition of the ECG
signal into orthogonal polynomial bases, on an FPGA.

And if it's possible, do you think this solution is more efficient on an
FPGA, compared to, say, a PSoC or DSPs?

The choice of algorithm for your compression is another field of study
than FPGA design. There is a newsgroup for that, comp.compression.

Once you have found a reasonable compression algorithm for your signal,
then you can choose an implementation based on the various requirements.

Is comp.compression active? I haven't heard of it.

comp.dsp may be a good resource, too. ECG signals (assuming you mean
electrocardiogram) have some unique features that should make efficient
compression both a joy and a terror, given that (a) the signal has a lot
of "quiet time" with low information content, and (b) it's medical, which
means that lives will depend on the fidelity of the reconstruction.

Given the life-critical nature of the thing, I'd certainly want to start
by finding a lossless compression method, and see if that's good enough.

--

Tim Wescott
Wescott Design Services
http://www.wescottdesign.com
 
Tim Wescott <tim@seemywebsite.really> wrote:

(snip, I wrote, regarding EKG signals)

(I always heard it as EKG, though I don't know why)

>> But I presume the needed bandwidth is a lot less than 4kHz.

(snip)

If you know exactly the features that are needed, you can compress down
to just those features. But the bandwidth seems low enough to me, that I
don't see why you need to work that hard.

The OP did not say what he wanted to compress down to, true -- but your
small may be his huge.

My gut feel is that you can accurately reproduce an ECG signal with less
than 100Hz of bandwidth, but without knowing the OP's needs, who can say
if that's small enough?

OK, so 100Hz of bandwidth means 200 samples/s; say 8-bit samples are
enough, and 10s for total length, so 16000 bits. So, it won't fit in a
tweet, but it is small enough for just about everything else.

-- glen
 
On Wed, 04 Jun 2014 20:18:14 +0000, glen herrmannsfeldt wrote:

Tim Wescott <tim@seemywebsite.really> wrote:
On 6/3/2014 10:07 AM, Weiss wrote:

I want to ask you if it's feasible to create an ECG
compression/decompression algorithm using decomposition of the ECG
signal into orthogonal polynomial bases, on an FPGA.

(snip)

Is comp.compression active? I haven't heard of it.

It has been pretty quiet lately, as has comp.dsp.

comp.dsp may be a good resource, too. ECG signals (assuming you mean
electrocardiogram) have some unique features that should make
efficient compression both a joy and a terror, given that (a) the
signal has a lot of "quiet time" with low information content, and (b)
it's medical, which means that lives will depend on the fidelity of the
reconstruction.

OK, who remembers watching "Emergency!" many years ago? They had a
portable machine which, if I remember right, sent the signal through a
phone line. That is, pretty much an analog modem. (Actually, one
direction, so modulator on one end, demodulator on the other.) Seems
like that sets a limit on the needed bandwidth.

But I presume the needed bandwidth is a lot less than 4kHz.

Given the life-critical nature of the thing, I'd certainly want to
start by finding a lossless compression method, and see if that's good
enough.

If you know exactly the features that are needed, you can compress down
to just those features. But the bandwidth seems low enough to me, that I
don't see why you need to work that hard.

-- glen

The OP did not say what he wanted to compress down to, true -- but your
small may be his huge.

My gut feel is that you can accurately reproduce an ECG signal with less
than 100Hz of bandwidth, but without knowing the OP's needs, who can say
if that's small enough?

--

Tim Wescott
Wescott Design Services
http://www.wescottdesign.com
 
On 6/4/2014 4:18 PM, glen herrmannsfeldt wrote:
Tim Wescott <tim@seemywebsite.really> wrote:
On 6/3/2014 10:07 AM, Weiss wrote:

I want to ask you if it's feasible to create an ECG
compression/decompression algorithm using decomposition of the ECG
signal into orthogonal polynomial bases, on an FPGA.

(snip)

Is comp.compression active? I haven't heard of it.

It has been pretty quiet lately, as has comp.dsp.

I've never seen a time when comp.compression had much useful activity,
but from the postings I see there the people frequenting the group seem
to understand the topic pretty well.

I'm not sure the OP needs that particular group since he seems to have
picked his compression algorithm. I don't know how that particular
algorithm works, so I can't comment on how easy it would be to implement in
an FPGA. Perhaps comp.compression can help him understand how to
implement it.


comp.dsp may be a good resource, too. ECG signals (assuming you mean
electrocardiogram) have some unique features that should make efficient
compression both a joy and a terror, given that (a) the signal has a lot
of "quiet time" with low information content, and (b) it's medical, which
means that lives will depend on the fidelity of the reconstruction.

OK, who remembers watching "Emergency!" many years ago? They had
a portable machine which, if I remember right, sent the signal
through a phone line. That is, pretty much an analog modem.
(Actually, one direction, so modulator on one end, demodulator
on the other.) Seems like that sets a limit on the needed
bandwidth.

But I presume the needed bandwidth is a lot less than 4kHz.

Given the life-critical nature of the thing, I'd certainly want to start
by finding a lossless compression method, and see if that's good enough.

If you know exactly the features that are needed, you can compress
down to just those features. But the bandwidth seems low enough to
me, that I don't see why you need to work that hard.

I would bet it is like SONAR work. They aren't likely to be willing to
give up much fidelity because of fear of not conveying some information.
Military SONAR doesn't compress the signal even when sending over
expensive satellite links because there is no way to compress noise and
most of the signal is noise. Compress that and you lose the opportunity
to pull weak signals from it.

If you could isolate the important features for compression you are just
one step away from reading the EKG and eliminating the radiologist or
whoever interprets those things. I expect they aren't interested in
letting that happen.

--

Rick
 
In comp.arch.fpga,
rickman <gnuarm@gmail.com> wrote:
On 6/4/2014 4:18 PM, glen herrmannsfeldt wrote:
Tim Wescott <tim@seemywebsite.really> wrote:
On 6/3/2014 10:07 AM, Weiss wrote:

Given the life-critical nature of the thing, I'd certainly want to start
by finding a lossless compression method, and see if that's good enough.

If you know exactly the features that are needed, you can compress
down to just those features. But the bandwidth seems low enough to
me, that I don't see why you need to work that hard.

I would bet it is like SONAR work. They aren't likely to be willing to
give up much fidelity because of fear of not conveying some information.
Military SONAR doesn't compress the signal even when sending over
expensive satellite links because there is no way to compress noise and
most of the signal is noise. Compress that and you lose the opportunity
to pull weak signals from it.

If you could isolate the important features for compression you are just
one step away from reading the EKG and eliminating the radiologist or
whoever interprets those things. I expect they aren't interested in
letting that happen.

Many people would be interested in that. But development and, above all,
certification of such a device will cost far more than that of a 'simple'
ECG monitor. Such a diagnostic device would have to be proven in
probably years of clinical studies. For an ECG monitor it is enough
to comply with existing standards, as it is proven technology.


--
Stef (remove caps, dashes and .invalid from e-mail address to reply by mail)

When the going gets tough, the tough go shopping.
 
In comp.arch.fpga,
glen herrmannsfeldt <gah@ugcs.caltech.edu> wrote:
Tim Wescott <tim@seemywebsite.really> wrote:

(snip, I wrote, regarding EKG signals)

(I always heard it as EKG, though I don't know why)

But I presume the needed bandwidth is a lot less than 4kHz.

(snip)

If you know exactly the features that are needed, you can compress down
to just those features. But the bandwidth seems low enough to me, that I
don't see why you need to work that hard.

The OP did not say what he wanted to compress down to, true -- but your
small may be his huge.

My gut feel is that you can accurately reproduce an ECG signal with less
than 100Hz of bandwidth, but without knowing the OP's needs, who can say
if that's small enough?

OK, so 100Hz of bandwidth means 200 samples/s; say 8-bit samples are
enough, and 10s for total length, so 16000 bits. So, it won't fit in a
tweet, but it is small enough for just about everything else.

What bandwidth is required depends on the application. IIRC, the standard
bandwidth for 'normal' ECG is 150Hz. For that you would need at least
300 SPS, and I believe 500 SPS is commonly used. But in some conditions
higher rates are required.

For some applications, 10 seconds may be enough. But in others you would
want to see an entire 24h period, looking for rate variations and other
abnormalities.

And resolution? 8-bit may be enough in some applications to send
the filtered result. Acquisition is often done at 24-bit, but this usually
includes a large DC offset.

--
Stef (remove caps, dashes and .invalid from e-mail address to reply by mail)

The meek shall inherit the earth -- they are too weak to refuse.
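For a sense of scale, the figures in this subthread translate into raw data volumes as follows. This is a back-of-envelope check, assuming a single channel; the function name is made up for the illustration:

```python
def raw_size_bytes(sample_rate_sps, bits_per_sample, duration_s):
    """Uncompressed size of one channel of sampled data, in bytes."""
    return sample_rate_sps * bits_per_sample * duration_s // 8

# 24 h of single-channel acquisition at 500 SPS, 24-bit:
holter = raw_size_bytes(500, 24, 24 * 3600)     # ~130 MB per channel
# Glen's 10 s strip: 100 Hz bandwidth -> 200 SPS, 8-bit:
strip_bits = raw_size_bytes(200, 8, 10) * 8     # 16000 bits
```

So a 24 h recording is roughly four orders of magnitude larger than the 10 s strip, which is why the upload-time argument for compression only appears in the Holter-style scenario.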
 
Weiss <100383@embeddedrelated> wrote:

For some applications, 10 seconds may be enough. But in others
you would want to see an entire 24h period, looking for rate
variations and other abnormalities.

And resolution? 8-bit may be enough in some applications to send
the filtered result. Acquisition is often done at 24-bit, but this usually
includes a large DC offset.

I am pretty sure that 24 bits is too much. While I do record audio
in 24 bits, I am already pretty sure that many of the bits are
noise, such as from the amplifiers. There has to be a lot more
noise in the EKG signal.

Exactly, this algorithm is needed for those cases where you need
to record 24h worth of the signal, and then upload it, which
will take too much time with no, or even lossless, compression.

OK, but besides the time to upload, it will take too long for
anyone to look at. The compression system will naturally have to
find the similarities between cycles and factor them out.

You also have to remove most below the thermal noise level,
as that won't compress at all.

So, what should be left is the difference between cycles, as a
function of time, which should be exactly what one wants to know.

I just want to be clear about this, so basically, this algorithm
has to :

1- Sample the ECG signal (Analog) to pick every cardiac cycle.

Different cycles will have different lengths. Can that be factored
out? (Resampled such that all have the same length, while at the same
time storing the actual length?)

Also, normalize the amplitude (vertical scale), again saving the
actual values. (I don't know if either the period or amplitude
are important in actual analysis.)

2- Decompose every cardiac cycle signal using an orthogonal
polynomial base (Legendre polynomials for example).

Comp.dsp people tend to like sinusoids, but other transforms are
fine, too.

I would first, after resampling and normalizing, compute the mean
and subtract that from all of them.

3- Save the more relevant coefficients of this decomposition,
it's the compression part. (These coefficients will be used
to recreate the signal)

Seems to me that at this point, you need to know exactly the
features that are actually important. The compression needs to
explicitly extract those features on each cycle. Once that is
done, it should be easy to show exactly how those vary over
time, which is the only reason I can see for wanting 24h of data.

And all that must be done on an FPGA.
And about comp.compression, I don't know which one you
are referring to, because I only found one Google group.

There is a comp.compression usenet group, but comp.dsp might
be a better choice.

Why does it have to be on an FPGA?

Not that it is a bad idea, but the exact reason can affect
the best way to do the design.

-- glen
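The three steps Weiss lists, combined with glen's resample-and-normalize suggestion, can be sketched in a few lines of Python using NumPy's Legendre module. This is a hypothetical illustration, not the OP's actual algorithm: the degree, cycle length, and test waveform are all made up, and a real ECG cycle would need a degree chosen empirically from the reconstruction error.

```python
import numpy as np
from numpy.polynomial import legendre as leg

def compress_cycle(cycle, degree):
    """Step 2/3: fit one cardiac cycle (already resampled to a fixed
    length and amplitude-normalized, as discussed above) with Legendre
    polynomials on [-1, 1], keeping only degree+1 coefficients -- the
    lossy 'compression' step."""
    x = np.linspace(-1.0, 1.0, len(cycle))
    return leg.legfit(x, cycle, degree)

def decompress_cycle(coeffs, n_samples):
    """Reconstruct the cycle from the retained coefficients."""
    x = np.linspace(-1.0, 1.0, n_samples)
    return leg.legval(x, coeffs)

# Toy check: a cubic 'cycle' is recovered essentially exactly by a
# degree-5 fit, so the round trip is verifiable.
x = np.linspace(-1.0, 1.0, 200)
cycle = x**3 - x
coeffs = compress_cycle(cycle, degree=5)   # 6 numbers instead of 200 samples
rebuilt = decompress_cycle(coeffs, 200)
assert np.max(np.abs(rebuilt - cycle)) < 1e-8
```

Glen's point about subtracting the mean cycle would apply before `compress_cycle`, leaving only the inter-cycle differences to encode.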
 
Hal Murray <hal-usenet@ip-64-139-1-69.sjc.megapath.net> wrote:
In article <M-idnfszQIRIWA3OnZ2dnUVZ_tSdnZ2d@giganews.com>,
"Weiss" <100383@embeddedrelated> writes:

[ECG compression]
And all that must be done on an FPGA.

Why? That seems like a poor approach.
It will be much simpler and cheaper to do it in software.

Yes, but it could be done in a soft processor on the FPGA.

I think by now a small (relatively) FPGA is cheaper than many
other processors, along with the support circuitry needed.

Especially if the hardware design needs to be done before the
rest of the logic is spec'ed. (Or needs to be able to be
easily changed for updates.)

But I suspect it is a project for an FPGA class.

I don't believe that should disallow soft processors.

-- glen
 
For some applications, 10 seconds may be enough. But in others you would
want to see an entire 24h period, looking for rate variations and other
abnormalities.

And resolution? 8-bit may be enough in some applications to send
the filtered result. Acquisition is often done at 24-bit, but this usually
includes a large DC offset.

--
Stef

Exactly, this algorithm is needed for those cases where you need to record
24h worth of the signal, and then upload it, which will take too much time
with no, or even lossless, compression.

I just want to be clear about this, so basically, this algorithm has to:
1- Sample the ECG signal (analog) to pick out every cardiac cycle.
2- Decompose every cardiac cycle signal using an orthogonal polynomial base
(Legendre polynomials, for example).
3- Save the more relevant coefficients of this decomposition; that's the
compression part. (These coefficients will be used to recreate the signal.)

And all that must be done on an FPGA.
And about comp.compression, I don't know which one you are referring
to, because I only found one Google group.

 
In article <M-idnfszQIRIWA3OnZ2dnUVZ_tSdnZ2d@giganews.com>,
"Weiss" <100383@embeddedrelated> writes:

[ECG compression]
>And all that must be done on an FPGA.

Why? That seems like a poor approach.
It will be much simpler and cheaper to do it in software.

--
These are my opinions. I hate spam.
 
On 6/5/2014 3:32 PM, Weiss wrote:
For some applications, 10 seconds may be enough. But in others you would
want to see an entire 24h period, looking for rate variations and other
abnormalities.

And resolution? 8-bit may be enough in some applications to send
the filtered result. Acquisition is often done at 24-bit, but this usually
includes a large DC offset.

--
Stef

Exactly, this algorithm is needed for those cases where you need to record
24h worth of the signal, and then upload it, which will take too much time
with no, or even lossless, compression.

I just want to be clear about this, so basically, this algorithm has to:
1- Sample the ECG signal (analog) to pick out every cardiac cycle.
2- Decompose every cardiac cycle signal using an orthogonal polynomial base
(Legendre polynomials, for example).
3- Save the more relevant coefficients of this decomposition; that's the
compression part. (These coefficients will be used to recreate the signal.)

And all that must be done on an FPGA.

Ok, so where is the question?


And about comp.compression, I don't know which one you are referring
to, because I only found one Google group.

I'm not sure what to say. comp.compression is a newsgroup just like
comp.arch.fpga. I only suggested that group in case you were not
familiar with the math behind the compression method. If you understand
the algorithm then implementation is the next step. I take it you are
not familiar with FPGAs? What exactly do you need help with?

--

Rick
 
Weiss <100383@embeddedrelated> wrote:

(snip, I wrote)

But I suspect it is a project for an FPGA class.

Yes, it's a project, so I must use a compression algorithm based on
orthogonal polynomials, and it must be done on an FPGA.

I like FPGAs for these problems, as you can get them to run very
fast. You might be able to filter/compress input at 100MHz or so.
But that is completely useless for a 300Hz input rate.

In my opinion, it would have been better if I were able to use
Matlab and do a software solution.

You should still debug the algorithms in Matlab before implementing
them.

Unless there are rules against it, I would do it with a soft
processor inside the FPGA, along with the other logic needed
for I/O. You need to interface with the A/D converter and whatever
the data is going out through.

Also, I'm not familiar with the mathematical portion of the
project, and my knowledge of FPGAs is basic (from VHDL courses);
that's why I'm trying to get some insight from more
experienced people in the Electrical Engineering field.

Well, as I said, debug with Matlab until you understand the math.

You should have one sample data stream to work with.

If you can't use a soft processor, my favorite way of doing these
problems in FPGAs is with systolic arrays. There should be some
literature on them.

Otherwise, see the suggestions from the previous post.

-- glen
 
On 6/5/2014 5:17 PM, glen herrmannsfeldt wrote:
Hal Murray <hal-usenet@ip-64-139-1-69.sjc.megapath.net> wrote:
In article <M-idnfszQIRIWA3OnZ2dnUVZ_tSdnZ2d@giganews.com>,
"Weiss" <100383@embeddedrelated> writes:

[ECG compression]
And all that must be done on an FPGA.

Why? That seems like a poor approach.
It will be much simpler and cheaper to do it in software.

Yes, but it could be done in a soft processor on the FPGA.

I think by now a small (relatively) FPGA is cheaper than many
other processors, along with the support circuitry needed.

I would not say it is cheaper for all cases. The devil is in the
details. Yes, many things can be done in a soft core in an FPGA, but
whether it is the best way depends on many factors. Before making any
sort of judgement on this I'd like to know why the OP thinks the FPGA is
needed.


Especially if the hardware design needs to be done before the
rest of the logic is spec'ed. (Or needs to be able to be
easily changed for updates.)

But I suspect it is a project for an FPGA class.

I don't believe that should disallow soft processors.

Again, that depends. Let's hear the reasons and then discuss the
validity or other options.

--

Rick
 
But I suspect it is a project for an FPGA class.

I don't believe that should disallow soft processors.

Again, that depends. Let's hear the reasons and then discuss the
validity or other options.

--

Rick

Yes, it's a project, so I must use a compression algorithm based on
orthogonal polynomials, and it must be done on an FPGA.

In my opinion, it would have been better if I were able to use Matlab and do
a software solution.

Also, I'm not familiar with the mathematical portion of the project, and my
knowledge of FPGAs is basic (from VHDL courses); that's why I'm trying to
get some insight from more experienced people in the Electrical
Engineering field.

 
On 6/5/2014 9:31 PM, glen herrmannsfeldt wrote:
Weiss <100383@embeddedrelated> wrote:

(snip, I wrote)

But I suspect it is a project for an FPGA class.

Yes, it's a project, so I must use a compression algorithm based on
orthogonal polynomials, and it must be done on an FPGA.

I like FPGAs for these problems, as you can get them to run very
fast. You might be able to filter/compress input at 100MHz or so.
But that is completely useless for a 300Hz input rate.

FPGAs also run slowly pretty well too, lol.


In my opinion, it would have been better if I were able to use
Matlab and do a software solution.

You should still debug the algorithms in Matlab before implementing
them.

YES! No point at all in trying to implement an algorithm before you
completely understand it and have it working in something like Matlab.
At least that is the way we would do it on a work project rather than a
school project.


Unless there are rules against it, I would do it with a soft
processor inside the FPGA, along with the other logic needed
for I/O. You need to interface with the A/D converter and whatever
the data is going out through.

I don't agree with this. It is a level of abstraction that is likely
not needed or useful unless there are problems fitting the logic into
the FPGA. It will also be slow to simulate, since you are in essence
running a CPU emulation in the HDL simulator.


Also, I'm not familiar with the mathematical portion of the
project, and my knowledge of FPGAs is basic (from VHDL courses);
that's why I'm trying to get some insight from more
experienced people in the Electrical Engineering field.

Well, as I said, debug with Matlab until you understand the math.

You should have one sample data stream to work with.

Yes, your Matlab run will process the sample stream and produce
identical results to the HDL simulation of your VHDL code. Then I
suggest you design a way to run the same data through the FPGA to make
sure it is working like the simulation. Finally connect your ADC(s) and
process real data if that is part of your project.


If you can't use a soft processor, my favorite way of doing these
problems in FPGAs is with systolic arrays. There should be some
literature on them.

A systolic array will in essence create a logic function for each
operator in the algorithm and process one data sample through the
pipeline on each clock. That may be overkill here, but I don't know
what is involved in running this algorithm. Depending on the math
needed, this may end up being a daunting project.

--

Rick
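As a concrete picture of the systolic idea glen and Rick describe (one logic function per operator, one sample in and one result out per clock), here is a software model of a transposed-form FIR pipeline. It is a generic sketch in Python, not ECG-specific and not HDL; each entry of `state` models one cell's pipeline register, and the structure would map cell-for-cell onto VHDL:

```python
def systolic_fir(samples, taps):
    """Bit-true model of a transposed-form (systolic) FIR pipeline:
    one multiply-add cell per tap, each with its own register,
    accepting one sample and emitting one result per clock."""
    state = [0] * len(taps)   # trailing register stays 0 to simplify indexing
    out = []
    for s in samples:
        out.append(taps[0] * s + state[0])          # cell 0's output this clock
        # Each cell computes taps[i]*s and adds its neighbor's register:
        # exactly one multiplier and one adder per cell per clock.
        state = [taps[i + 1] * s + state[i + 1]
                 for i in range(len(taps) - 1)] + [0]
    return out

# An impulse reproduces the taps, confirming the pipeline wiring:
assert systolic_fir([1, 0, 0, 0, 5], [1, 2, 3]) == [1, 2, 3, 0, 5]
```

Unlike a soft-processor solution, this sustains one sample per clock regardless of tap count, which is wildly more throughput than ECG rates need, but it is the kind of structure an FPGA class usually wants to see.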
 
