Forum: GNU Radio
Correct method for "compressing" a power spectrum

Marcus D. Leech (Guest)
on 2009-03-09 04:46
(Received via mailing list)
Let's say I have an FFT output that's many, many bins wide, and I want
to compress that information into a narrower display (let's say from 4M
bins down to 1024 bins).

My approach has been to sum up each set of [4M/1024] bins and use that
as the final output. But should I be averaging across the bins instead?
That is, should I be taking each 3906-bin group from the 4M-bin-wide
spectrum, computing an average across those bins, and stuffing it into
the appropriate place in the 1024-wide spectrum, or something else?

Seems to me that if I have 4000 1 Hz-wide bins, I should sum them to
give me the total power in a single bin that "represents" the same
amount of bandwidth. But is it more subtle than that?
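
A rough NumPy sketch of that summing approach (the function name is
illustrative, not anything from GNU Radio, and it assumes the bin count
divides evenly by the display width):

    import numpy as np

    def compress_spectrum_sum(power_bins, out_width=1024):
        # Sum each group of adjacent sub-bins so every output bin carries
        # the total power of the bandwidth it represents.
        power_bins = np.asarray(power_bins)
        group = len(power_bins) // out_width       # sub-bins per output bin
        trimmed = power_bins[:group * out_width]   # drop any leftover bins
        return trimmed.reshape(out_width, group).sum(axis=1)

    # e.g. a 4M-point power spectrum squeezed into a 1024-bin display:
    power = np.random.rand(4 * 1024 * 1024)        # stand-in for |FFT|^2
    display = compress_spectrum_sum(power, 1024)   # 4096 sub-bins per output bin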

--
Marcus L.
Principal Investigator, Shirleys Bay Radio Astronomy Consortium
http://www.sbrac.org
Frank B. (Guest)
on 2009-03-09 16:39
(Received via mailing list)
On Sun, Mar 8, 2009 at 6:13 PM, Marcus D. Leech
<removed_email_address@domain.invalid> wrote:

> Seems to me that if I have 4000 1 Hz-wide bins, I should sum them to
> give me the total power in a single bin that "represents" the same
> amount of bandwidth. But is it more subtle than that?


As usual, yes and no.

If you're concerned with statistical hygiene, then the mean (averaging
over multiple bins) is defensible. And if you wanted to add a dimension
to each reduced output bin, like color, you might want to throw in the
variance within each set of sub-bins contributing to the average.
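
A sketch of that variant (again NumPy, names purely illustrative): the
mean of each group of sub-bins, plus the within-group variance that
could be mapped onto a color scale.

    import numpy as np

    def compress_spectrum_mean(power_bins, out_width=1024):
        # Average each group of adjacent sub-bins; also return the
        # variance within each group, which could drive an extra display
        # dimension such as color.
        power_bins = np.asarray(power_bins)
        group = len(power_bins) // out_width
        groups = power_bins[:group * out_width].reshape(out_width, group)
        return groups.mean(axis=1), groups.var(axis=1)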

Probably the most robust estimator, though, would be the median: the
middle value of the sorted sub-bins in each set.
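
And the median variant as a sketch: a single strong narrowband spike
inside a group barely moves the median, while it can dominate the sum
or the mean.

    import numpy as np

    def compress_spectrum_median(power_bins, out_width=1024):
        # Take the middle value of the sorted sub-bins in each group.
        power_bins = np.asarray(power_bins)
        group = len(power_bins) // out_width
        groups = power_bins[:group * out_width].reshape(out_width, group)
        return np.median(groups, axis=1)

    # One spike among 4096 otherwise flat sub-bins:
    flat = np.ones(4096)
    flat[100] = 1e6
    print(flat.mean(), np.median(flat))   # mean ~245, median still 1.0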

However, it sounds like what you're going for is a kind of compression
that's lossy but optimizes for visual properties, not statistical
robustness. That's usually highly non-linear and very subjective. In
that case, why not just pick whatever looks good on your data? The
algorithm that John Ackermann suggests is likely to be as good as
anything else.

Frank