Hi, I want to use one USRP to transmit and receive. I tried setting up two flowgraphs (top_blocks), one to act as a receiver "thread" and another as a transmitter "thread". However, the program complains:

RuntimeError: gr_top_block_impl: multiple simultaneous gr_top_blocks not allowed

Is there no way to have a receiver flowgraph in operation while transmitting on a different daughterboard? In order to implement TX+RX on one USRP, do I have to shut down the receiver flowgraph every time I want to start up a transmitter flowgraph?

I believe you can run two programs on the same USRP at the same time, e.g.

TX: ./benchmark_tx.py -f 2.4G -v --discontinuous
RX: ./benchmark_rx.py -f 2.4G --rx-gain=90 -v

which is pretty much the same as running two top blocks on the same USRP (although IPC is much trickier). Doing this, however, I run into the same problem as described in this thread, http://www.ruby-forum.com/topic/131602#new, where no packets appear to be received.
on 2009-03-25 00:03
on 2009-03-25 01:19
Hi William,

William Sherman wrote:
> I want to use one USRP to transmit and receive. I tried setting up two
> flowgraphs (top_blocks), one that will be a receiver "thread" and
> another a transmitter "thread". However the program complains:
> RuntimeError: gr_top_block_impl: multiple simultaneous gr_top_blocks not allowed

Yes, that's correct: you should only have one top block. That's why it's called a top block; it is supposed to sit at the top of the hierarchy tree, and as everyone knows, "There can be /only one/!" - Highlander :)

But seriously, the trick is that you can have separate, distinct chains in your flowgraph. Think of it as one big graph with multiple unconnected parts, e.g. two simple chains:

chain1: usrp -> some_processing -> packetizer sink
chain2: packet_to_stream_source -> some_different_processing_than_before -> usrp_sink

You create these blocks in one top block and connect them. Clearly the two chains have no common point, but the scheduler takes care of both, so they operate in parallel. By the way, the default scheduler is a thread-per-block scheduler, so every signal processing block has its own executor thread, but you don't need to deal with this.

> Is there no way to have a receiver flowgraph in operation while
> transmitting on a different daughterboard?

There is, as mentioned above; you can even have a flowgraph that uses the same daughterboard for receiving and transmitting (not at the same time, of course).

> In order to implement a TX+RX in one USRP do I have to shut down the
> receiver flowgraph every time I want to start up a transmitter
> flowgraph?

No. Consider the tunnel example (/gnuradio-examples/python/digital/tunnel.py), where a sending chain towards the USRP (fed from a virtual Ethernet device) and a receiving chain from the USRP (feeding a virtual Ethernet device) exist simultaneously, yet there is only one top block. Understanding this example can be a little tricky if you have just started, but it is worth it.

Hope it helped
--
David Tisza
University of Notre Dame
Department of Electrical Engineering
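As a rough illustration of the "two disconnected chains in one top block" idea described above, here is a minimal Python sketch. The namespaces (gnuradio.gr, gnuradio.blocks, gnuradio.analog) follow current GNU Radio releases rather than the 2009-era gr./usrp. API used in this thread, and the signal sources and null sinks are only stand-ins for the USRP source, USRP sink, and the processing in between.

#!/usr/bin/env python
# Minimal sketch: one top block containing two independent chains.
# Block names follow modern GNU Radio; treat them as illustrative.

from gnuradio import gr, blocks, analog

class tx_rx_top_block(gr.top_block):
    def __init__(self):
        gr.top_block.__init__(self, "tx_rx")
        samp_rate = 1e6

        # "RX" chain: stands in for usrp -> some_processing -> packetizer sink
        rx_src = analog.sig_source_c(samp_rate, analog.GR_COS_WAVE, 1e3, 1.0)
        rx_throttle = blocks.throttle(gr.sizeof_gr_complex, samp_rate)
        rx_sink = blocks.null_sink(gr.sizeof_gr_complex)

        # "TX" chain: stands in for packet source -> some_processing -> usrp_sink
        tx_src = analog.sig_source_c(samp_rate, analog.GR_SIN_WAVE, 2e3, 1.0)
        tx_throttle = blocks.throttle(gr.sizeof_gr_complex, samp_rate)
        tx_sink = blocks.null_sink(gr.sizeof_gr_complex)

        # The two chains share no blocks; the thread-per-block scheduler
        # runs them in parallel inside the single top block.
        self.connect(rx_src, rx_throttle, rx_sink)
        self.connect(tx_src, tx_throttle, tx_sink)

if __name__ == '__main__':
    tb = tx_rx_top_block()
    tb.start()                      # non-blocking: both chains run at once
    input('Running both chains; press Enter to stop... ')
    tb.stop()
    tb.wait()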
on 2009-03-26 22:56
Thank you, I understand there can only be one top block, but it can contain multiple disconnected chains. What if I wanted to change the parameters of just one of those chains during runtime? Would I need to kill off the whole top block, create another one with the changes made to that one chain, and then start it up again? I just want to change the one chain. How do you "kill" a thread or flowgraph in Python anyway?

Also, I have looked at code where there is no top block ("Exploring GnuRadio"). Instead a flowgraph is created using gr.flow_graph(). Can I just create and execute multiple flowgraphs using this method?
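On the question of how to "kill" a flowgraph from Python: a top block is shut down with its stop() and wait() methods, after which you can construct a new one and start it. A minimal sketch, reusing the hypothetical tx_rx_top_block class from the sketch above; note that many blocks also expose set_*() methods that can change parameters while the graph is running, so a full stop/restart is not always required.

tb = tx_rx_top_block()
tb.start()          # non-blocking; the scheduler threads run in the background

# ... later, to "kill" the running flowgraph cleanly:
tb.stop()           # ask every block's thread to stop
tb.wait()           # block until all of them have actually exited

# build a new top block with the changed parameters and start it
tb = tx_rx_top_block()
tb.start()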
on 2009-03-26 23:13
On Thu, Mar 26, 2009 at 09:56:13PM +0100, William Sherman wrote:
> Also I have looked at code where there is no top block ("Exploring
> GnuRadio"). Instead a flowgraph is created using gr.flow_graph(). Can I
> just create and execute multiple flowgraphs using this method?

That document is out of date, and there's an open ticket to fix it.

Eric