I am posting this again since I did not get a reply. I apologize if I
should not, but I would really appreciate some help.
One task of the project I have been working on is to collect raw 4-bit
samples from the two inputs of a Basic RX daughterboard (RXA, RXB)
connected to two IF signals.
We have been working on reducing the number of bits per sample inside
the rx_buffer.v file. There we select the 4 MSBs from channel 0 and
channel 1 (the outputs of DDC0) to pack one byte of I and Q samples
into the buffer. On each subsequent clock cycle we fill the buffer with
the next 4-bit I and Q pair, until 512 bytes are stored, at which point
the buffer is forwarded to the USB buffer.
When we load this modified firmware onto the FPGA and do some data
collection (using usrp_rx_cfile.py), reading the output as alternating
I and Q samples, each a signed 4-bit integer, the results show that
almost every other value is zero! So we suspect there is some kind of
zero-padding in our data, even though we are configuring the USRP to
represent each sample as a 16-bit word (i.e. without the -8 flag) and
using a decimation rate of 4.
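For reference, this is roughly how we interpret the captured file: each
byte is split into two signed 4-bit values, which we sign-extend on the
host. This is a sketch under our assumptions (high nibble = I, low
nibble = Q), not necessarily the layout the FPGA actually produces:

```python
import numpy as np

def unpack_4bit_iq(raw: bytes):
    """Split each byte into two signed 4-bit values.

    Assumes high nibble = I, low nibble = Q, two's complement.
    """
    data = np.frombuffer(raw, dtype=np.uint8)
    i = (data >> 4).astype(np.int8)
    q = (data & 0x0F).astype(np.int8)
    # sign-extend the 4-bit values: 8..15 map to -8..-1
    i = np.where(i > 7, i - 16, i)
    q = np.where(q > 7, q - 16, q)
    return i, q
```

With this interpretation, a byte 0xF1 decodes to I = -1, Q = 1, yet in
our captures nearly every other decoded value comes out as zero.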
My understanding of the system is that as soon as the 512 bytes are
stored in the FPGA buffer, they are moved as a packet over the USB and
handled by USRP_Source_S.c before being written to the file; but since
we are not using the -8 flag, I cannot see why there should be any
zero-padding in the host software.
Can you please help me out? I have been stuck at this point for some
time now, trying different system configurations to understand what is
going on.
I can provide the rx_buffer.v file and some output data to anyone who
can help.
Thank you all