At work, to dispel some of the tedium of writing yet another layer of C
on top of twenty years of rotting but seemingly irreplaceable legacy C
code…I’ve taken to writing something I call “coding standards lint” in
Ruby. It’s a little script to run on a C file to catch it in the act of
violating some of the wacky coding standards that I’m being paid far
too little to obey.
Some of them are pretty simple, even for a Ruby nuby like me: just
single-line stuff with Regexps aplenty.
Today, I decided to tackle something that doesn’t live all on one line
like the easier things I’ve tested for in the past. For instance, I
was checking to see whether for, do, and while loops over a certain
length have a closing comment at the end that says something like
“// this is the end of the while(pigsFly) block”.
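In code, that check looks roughly like this. The method name, the
five-line threshold, and the brace-counting details are all my own
invention, not anything from the actual standards document:

```ruby
# Hypothetical sketch of the closing-comment check. MIN_LINES and
# missing_end_comments are made-up names; the real standard's
# threshold may differ.
MIN_LINES = 5 # blocks longer than this need a "// end ..." comment

def missing_end_comments(lines)
  problems = []
  lines.each_with_index do |line, i|
    # crude trigger: a loop keyword and an opening brace on one line
    next unless line =~ /\b(for|while|do)\b.*\{/
    depth = 0
    (i...lines.size).each do |j|
      # track nesting by counting braces, so inner blocks don't
      # fool us into stopping early
      depth += lines[j].count("{") - lines[j].count("}")
      if depth.zero?
        if j - i + 1 > MIN_LINES && lines[j] !~ %r{//\s*end}
          problems << i + 1 # report a 1-based line number
        end
        break
      end
    end
  end
  problems
end
```

Counting braces character-by-character is obviously fragile (string
literals and comments containing braces would break it), but it shows
the shape of the multi-line scan.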
In order to do this, you obviously have to look at several lines of
the file, not just one at a time. After staring at the Pickaxe book
for an hour or so, the best idea I was able to come up with was to
read the whole file into an array using File.readlines. After that, I
used each_with_index to zip through the array looking for the end of
the block (being careful to watch out for inner blocks as well).
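The slurp-then-scan approach boils down to something like this (the
sample file here is a stand-in I made up just so the snippet runs):

```ruby
require "tempfile"

# A tiny stand-in for a real C file, just to make this self-contained.
file = Tempfile.new(["example", ".c"])
file.write("int main() {\n  while (pigsFly) {\n  }\n}\n")
file.rewind

hits = []
# readlines pulls the entire file into an Array of lines at once
File.readlines(file.path).each_with_index do |line, index|
  # index is 0-based, so add 1 for human-friendly line numbers
  hits << index + 1 if line =~ /\bwhile\b/
end
file.close!
```

Having the whole Array in hand is what makes looking ahead to the end
of a block easy, since any lines[j] is reachable by index.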
I got it to work, but I couldn’t help feeling guilty for sucking the
whole bloody file into memory first. Is there something obvious that I
could have done instead that would allow me to look at the file the same
way but leave it on disk instead of in memory?
(BTW, having line numbers to use in error statements is key)
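For what it’s worth, one thing I’ve been eyeing is that IO mixes in
Enumerable, so each_with_index works directly on an open file handle,
yielding one line at a time without building the big Array first:

```ruby
require "tempfile"

# Another made-up sample file, just to keep the snippet runnable.
file = Tempfile.new(["example", ".c"])
file.write("int x;\nwhile (pigsFly) {\n}\n")
file.rewind

hits = []
# IO includes Enumerable, so each_with_index streams the file
# line by line instead of slurping it into memory first.
File.open(file.path) do |f|
  f.each_with_index do |line, index|
    hits << index + 1 if line =~ /\bwhile\b/
  end
end
file.close!
```

The catch is that a streaming pass only sees one line at a time, so a
multi-line check like the block-comment one would have to buffer its
own window of recent lines rather than indexing anywhere in the file.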
Also, I kind of wonder if my concern is outdated. The typical C file
might be on the order of 100k bytes, and the machine has a gig of
memory in it. Am I applying a twenty-year-old concern to a modern
programming environment?