A risky and lengthy post that I’m hoping you’ll indulge me on. Stack
Overflow doesn’t like opinion questions, and I wasn’t sure where else to
go to get the opinions of a well-informed group of folks. I figure the
Ruby community is about as uniformly good-practices-OCD a bunch as any.
If you have a suggestion for a better place to have this discussion,
please do let me know.
Magic Numbers. Virtually every example of one is obvious. But I have a case
that I think is NOT a magic number, where neither readability nor
maintainability is improved by using a named constant.
The definitions of a magic number that I like are:
“Unique values with unexplained meaning or multiple occurrences which
could (preferably) be replaced with named constants.”
Magic number (programming) - Wikipedia (3rd bullet)
“The term magic number is also used in programming to refer to a
constant that is employed for some specific purpose but whose presence
or value is inexplicable without additional information.”
Magic number definition by The Linux Information Project (LINFO) (last line)
There are some critical details there which open the door for cases where
literal numerals are not magic numbers. I, for one, do not subscribe to the
open-ended rule that -1, 0, 1, and 2 are never magic numbers just because
they’re small. IMO, context is everything. I do not like a literal 1 being
used to compensate for 0-based vs. 1-based offsets. I prefer to see
new_variable = variable + ZERO_OFFSET, because I still want to know WHY that
1 is there; I don’t want to have to assume it is there because of the zero
offset. So, that should tell you I’m in favor of being cautious with magic
numbers.
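To make that concrete, here’s a tiny illustration of the preference (a hypothetical sketch; the constant and variable names are mine, not from any real codebase):

ZERO_OFFSET = 1

index = 0                          # zero-based position in a collection

display_row = index + 1            # unclear: why is 1 being added?
display_row = index + ZERO_OFFSET  # clear: converting a zero-based index to a one-based row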
OK, so here’s my case…
Setup: a bunch of floating-point values displayed on a web page; essentially
a dashboard of measured quantities. Each number has a distinct,
engineering-based reason for the precision at which it is displayed. In
other words, it’s not a repetition of money values, where of course you
would create a MONEY_DISPLAY_PRECISION constant for 2 decimal places to
centralize that definition.
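For contrast, the money case where a shared constant clearly earns its keep might look something like this (purely illustrative names and calls, not from the actual app):

MONEY_DISPLAY_PRECISION = 2  # one definition, used everywhere money is rendered

price    = 1234.5
subtotal = 99.999

format("%.#{MONEY_DISPLAY_PRECISION}f", price)     # => "1234.50"
format("%.#{MONEY_DISPLAY_PRECISION}f", subtotal)  # => "100.00"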
– Each value has unique units.
– Each value’s precision is uniquely meaningful, based on real-world
measurement technology and accuracy. Changing one value’s precision
doesn’t mean you’d want to change any of the others.
– Each value is displayed in only one place in the application, so there’s
no incentive to create a constant to prevent redundant literals.
– The precision is not something the user can choose. The application must
hard-code the displayed precision, because the precision is meaningful, so
there’s no incentive to have a variable.
The software has a generic method to render a floating point number as a
string with thousands separators and with a specified precision.
SomeClass#float_as_thousands_with_precision(value, precision)
Which would get used something like:
<%= float_as_thousands_with_precision(volts, 1) -%>
<%= float_as_thousands_with_precision(amps, 2) -%>
<%= float_as_thousands_with_precision(watts, 2) -%>
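The post only gives the helper’s name and signature, but for concreteness, here is a minimal sketch of what such a helper might look like (my guess at an implementation, not the app’s actual code):

# Rough sketch only: format the number at the requested precision, then
# insert thousands separators into the whole-number part.
def float_as_thousands_with_precision(value, precision)
  whole, fraction = format("%.#{precision}f", value).split(".")
  whole = whole.reverse.gsub(/(\d{3})(?=\d)/, '\1,').reverse
  [whole, fraction].compact.join(".")
end

float_as_thousands_with_precision(12345.678, 2)  # => "12,345.68"
float_as_thousands_with_precision(1234567.9, 1)  # => "1,234,567.9"

(If this is a Rails app, number_with_precision with a :delimiter option does roughly the same job out of the box.)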
I contend those are NOT magic numbers.
Using the above definitions (and others I found like them), such numbers
do not need constants because:
– each number is a single instance (no redundancy)
– each number is independent of the others (no shared cause & effect
on precision)
– each number’s purpose is very clear from both the function name
and the argument name (no need to clarify purpose)
I know someone out there will say, “Well, why not make a constant for
them? What does it hurt?”
I say, why do it? It’s more code to maintain with no benefit. If people
can justify not making constants for the clearly varying purposes of a
literal 1 in code, just because it’s a 1, then I contend this is an even
more justifiable case for not bothering with a constant.
OK, that’s my exhaustive argument.
What say ye? Do these require constants? Are they magic numbers?