I’ve attached both the .grc file and the generated Python for a relatively
straightforward flow-graph that is giving me loads of grief.
A few things to observe about it:
o The virtual size of this thing in execution is elephantine
o Manipulating it inside GRC is really, really, really painful
o Running it above 10Msps causes UHD to silently stop sending data
(or maybe it never gets started)
o Running it at 5Msps or slower works, although it’s still
 elephantine
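To put numbers on "elephantine", the flow-graph's footprint can be read from
/proc while it runs. This is a minimal, Linux-only sketch of my own (the
parsing is mine, not anything GNU Radio provides); pass the flow-graph's PID,
or omit it to measure the current process:

```python
# Read the virtual size (VmSize) and resident set size (VmRSS), in KiB,
# of a process from /proc/<pid>/status -- Linux-only.
def mem_kib(pid="self"):
    sizes = {}
    with open(f"/proc/{pid}/status") as f:
        for line in f:
            if line.startswith(("VmSize:", "VmRSS:")):
                key, value, _unit = line.split()   # e.g. "VmRSS:  1234 kB"
                sizes[key.rstrip(":")] = int(value)
    return sizes

print(mem_kib())   # e.g. {'VmSize': ..., 'VmRSS': ...}
```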
So, yeah, the FFT is big, but I tried different-sized FFTs from 32768
upwards, and I still see the same behaviour: at 10Msps or above,
I get no data (after a loooong pause while GNU Radio does some
cogitation, presumably in (elephantine) buffer setup).
Now, this flow-graph (actually a more complex version of it) used to
work, even at 10Msps, up until about 3 weeks ago, when I updated
UHD and GNU Radio.
Some perhaps useful data points:
o 6-core AMD 1090T at 3.2GHz with 4GB of 1333MHz memory
o I'm using gr_vmcircbuf_mmap_shm_open_factory
o The very latest UHD and GNU Radio
o USRP2 + WBX
I’ll make the observation that there’s just got to be a better
buffer-allocation policy than the existing one. Should it really take
gigabytes of VM, and hundreds of MB of RSS, to execute the flow-graph
I’ve attached? The largest inter-block objects (the vectors
in front of and behind the FFT) are on the order of a few MB, and
even accounting for some ballooning factor so that several such
vectors can be “in flight”, that doesn’t add up to hundreds of
MB of RSS, and several GB of virtual size.
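The arithmetic can be sketched explicitly. The FFT length and the number of
buffers "in flight" below are illustrative guesses of mine, not values taken
from the attached flow-graph or from GNU Radio's actual allocation policy:

```python
# Back-of-envelope estimate of inter-block buffer memory.
# fft_len and nbuffers are illustrative assumptions only.
fft_len = 256 * 1024            # a "big" FFT, within the range I tried
bytes_per_sample = 8            # gr_complex: two 32-bit floats
vector_bytes = fft_len * bytes_per_sample
nbuffers = 16                   # generous "in flight" ballooning factor
total = nbuffers * vector_bytes

print(f"one vector:     {vector_bytes / 2**20:.0f} MiB")   # 2 MiB
print(f"{nbuffers} in flight:   {total / 2**20:.0f} MiB")  # 32 MiB
```

Even with a generous ballooning factor, that lands in the tens of MiB, which
is nowhere near the hundreds of MB of RSS and several GB of VM I'm seeing.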