I have been digging through the code to try to understand the
architecture that GNU Radio is built on. From flow_graph.py I have come
to the understanding that GNU Radio creates and manages its own memory
buffering scheme. This would seem to make it difficult to expand the
platform to work over multiple computers, in cases where massive amounts
of computing power are needed (hundreds of nodes, or even more).
Am I correct in my deduction that a complete overhaul of GNU Radio would
be required in order to do distributed computing?
Hi John - We can do distributed computing manually using pipes; this
works great on a multi-core processor, and slower data paths can be run
between machines across a network using sockets. There was discussion
once about automatic load balancing across CPU cores / machines.
But you can have one script use a file sink writing to a FIFO, and
another script use the same FIFO as a file source - works great.
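To make the FIFO pattern concrete, here is a minimal sketch in plain Python standing in for the two flowgraphs. It is an illustration of the pipe mechanism only, not GNU Radio code; in a real setup the producer side would end in a file sink block pointed at the FIFO path and the consumer side would start with a file source block on the same path. The path and sample values are made up for the example; os.mkfifo is POSIX-only.

```python
import os
import struct
import tempfile
import threading

# Hypothetical FIFO path; a real deployment would pick a stable location.
fifo_path = os.path.join(tempfile.mkdtemp(), "samples.fifo")
os.mkfifo(fifo_path)

samples = [0.5, -1.25, 3.0, 0.0]  # stand-in for a float sample stream

def producer():
    # "File sink" side: write raw little-endian 32-bit floats into the FIFO.
    # Opening for write blocks until a reader opens the other end.
    with open(fifo_path, "wb") as f:
        f.write(struct.pack("<%df" % len(samples), *samples))

t = threading.Thread(target=producer)
t.start()

# "File source" side: read the same raw floats back out of the FIFO.
with open(fifo_path, "rb") as f:
    data = f.read()
t.join()

received = list(struct.unpack("<%df" % (len(data) // 4), data))
print(received)  # → [0.5, -1.25, 3.0, 0.0]
```

The same byte stream could instead be pushed through a TCP socket to move the slower data paths between machines, as mentioned above.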