If I wanted to increase the number of bytes logged by the sflow module
for the user agent to, say, 128 bytes, what would I need to change?
I tried changing this:
(in ngx_http_sflow.h) #define SFLHTTP_MAX_USERAGENT_LEN 128
but I still get the user agent truncated at 64 bytes. In fact,
SFLHTTP_MAX_USERAGENT_LEN (or any SFLHTTP_MAX_* define) doesn’t seem to
be used anywhere in the code.
I’m sure I could whack on it long enough to get it to work, but I ask
mainly because I don’t want to cause some buffer/format/etc overflow
somewhere else down the line. Thanks!
It looks like it may be (erroneously!) ignoring that limit and sending
the whole user-agent string. Thanks for pointing it out.
If you are using sflowtool to print the output at the collector, then
it’s probably chopping the field there, so you would just need to tweak
the sflowtool/src/sflow.h file and recompile sflowtool.
The user-agent can be kilobytes long in some cases. How many bytes is
enough? Please submit comments to the HTTP thread on http://groups.google.com/group/sflow.
Thanks for the info. That did the trick. I wrongly assumed the limit
was being applied in the module itself.
BTW, is there anywhere that has info on APIs for Host sFlow? I’ve been
googling to no avail. The Perl Net::sFlow module chokes badly
(presumably because it only expects standard sFlow), and I’ve not been
able to track down a Python module (which is what I’m really interested
in). Just asking since we have this thread going, but I can repost to
the Google group if you think it’d be better asked there. Thanks!
Do you mean you are looking for a Python equivalent of what sflowtool.c
does?
The data is all XDR-encoded, so you might start with Python’s “xdrlib”.
However, it may be easier and more compact to just unpack the data
manually. The C implementation for Ganglia-gmond is an example of that.
It is much more compact than sflowtool.c, and might be a better place
to start if you just want the sFlow-HOST structures:
(see the “process_sflow_datagram()” function near the bottom.)
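If it helps, here is a minimal Python sketch of that approach, using
xdrlib to read the v5 datagram header and then stepping over each
sample via its tag + length framing. The field names are my own (taken
from the sFlow v5 spec), and it is a starting point rather than a
tested decoder:

import socket
import xdrlib

def read_datagram(data):
    up = xdrlib.Unpacker(data)
    version = up.unpack_uint()           # sFlow version; should be 5
    addr_type = up.unpack_uint()         # agent address: 1=IPv4, 2=IPv6
    if addr_type == 1:
        agent = socket.inet_ntoa(up.unpack_fstring(4))
    else:
        agent = socket.inet_ntop(socket.AF_INET6, up.unpack_fstring(16))
    sub_agent_id = up.unpack_uint()
    sequence_no = up.unpack_uint()
    uptime_ms = up.unpack_uint()
    n_samples = up.unpack_uint()
    print("sFlow v%d from %s, %d samples" % (version, agent, n_samples))
    for _ in range(n_samples):
        tag = up.unpack_uint()           # data_format: enterprise << 12 | format
        length = up.unpack_uint()
        body = up.unpack_fstring(length) # opaque sample body, decode per format
        print("  sample enterprise=%d format=%d (%d bytes)"
              % (tag >> 12, tag & 0xfff, length))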
P.S. The Perl library should be able to skip over structures it doesn’t
recognize, so you might want to point that out to its author(s).
Cool, thanks, I’ll take a look at that, though working with XDR looks
like it could be somewhat painful.
On a related note, I get a trickle of errors like this in the nginx
error log:
sFlow agent error: sfl_agent_error: receiver: flow sample too big for
datagram
which must be hitting the in-code limit of 1400 bytes. Would it be
quite horrible if I were to bump SFL_DEFAULT_DATAGRAM_SIZE over 1460?
I’m assuming that under normal conditions it’ll just fragment, which
I’m OK with at the rate the errors are occurring now. But again, I’m
worried that some data structure in the code (that my casual reading of
the code isn’t turning up) will overflow.
It looks like you’d have to bump up both SFL_MAX_DATAGRAM_SIZE and
SFL_DEFAULT_DATAGRAM_SIZE.
If you are not using jumbo frames and packet loss in transit ever hits
50%, then you might have a problem getting data through to the
collector (just when you really needed it), because losing any one
fragment drops the whole datagram. So in the end the right solution is
to apply the length limits as proposed on the sFlow mailing list. This
can be done in sfwb_sample_http(), at the point where the string
lengths are filled in for the sample. I intend to put that fix in very
soon, and add the missing “X-Forwarded-For” and “req_bytes” fields too.
If you think there are any other fields or counters missing, now is a
good time to chime in.
Cool, I’ll play with that.
As far as other counters/fields, I was sort of curious why there’s no
timestamp field. Obviously you could just use the time the packet got
sent as the timestamp, but I imagine precision-minded people would get
bent out of shape about not having the exact time as recorded by the
web server.
This might be another one to bring up on http://groups.google.com/group/sflow, but the short answer is that if
you timestamp on receipt that’s going to be accurate to a second or so.
Ordering is preserved too. That’s going to be fine for most
applications, I think. The kind of analysis where you are trying to
sequence events that happened on different servers requires accurate
clock-sync and 1-in-1 sampling, and it’s likely to impact performance
too. That’s not what sFlow was designed for.
However if anyone needs an extra timestamp they can always include
another XDR structure that goes along with the “http_request” and
“extended_socket” structures that are sent here. The sFlow standard
allows you to define and send your own structures; you just tag them
using an IANA-registered enterprise number for uniqueness. The
published standard ones use enterprise=0, so http_request is
enterprise=0,format=2201 and extended_socket_ipv4 is
enterprise=0,format=2100. So if someone from, say, CERN, wanted to
include an extra structure with picoseconds since the big bang, like
this:
struct extended_timestamp {
    unsigned hyper timestamp;
    unsigned int resolution;
}
They could tag it with enterprise=96,format=7 and send it out. An sFlow
decoder that doesn’t know what this is should just skip over it.
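To make that concrete in Python, skipping (or decoding) a record by its
tag and length might look something like this; EXT_TIMESTAMP here is
just the hypothetical enterprise=96,format=7 tag from the example
above:

import xdrlib

EXT_TIMESTAMP = (96 << 12) | 7       # hypothetical: enterprise=96, format=7

def read_record(up):                 # up is an xdrlib.Unpacker
    tag = up.unpack_uint()
    length = up.unpack_uint()        # every record carries its own length...
    end = up.get_position() + length
    if tag == EXT_TIMESTAMP:
        timestamp = up.unpack_uhyper()   # unsigned hyper = 64-bit
        resolution = up.unpack_uint()
        print("extended_timestamp: %d (resolution %d)"
              % (timestamp, resolution))
    up.set_position(end)             # ...so unknown records can always be skipped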
Neil