Ada vs Ruby

On 16 Apr 2008, at 19:44, Robert D. wrote:

Turing apply because they are talking about systems describing
themselves. IIRC it is a theorem in TNT(1) making an assumption about
TNT in the first case and a Turing machine reading the description of
a Turing machine on its tape in the second case.
I do not believe that Aircraft Control Systems have this degree of
self awareness, but I can stand corrected if I am wrong, because
although I have been taught a lot about TM and TNT I do not know a lot
about Aircraft Control.

Any process that is algorithmic is necessarily implementable as a
Turing machine so I’d argue that the very act of coding the system
with a process defines TNT(1) whilst the target system itself becomes
TNT. Therefore until the system runs in situ and responds to its
environment one cannot make any firm statements regarding when the
system will halt. And if you can’t tell when an autopilot will halt,
you have the potential for all kinds of mayhem…

Of course this is a terrible abuse of Church-Turing, but it seems to
fit the real world pretty well.

That’s why run-time error handling and fail-safe behaviour are so
important regardless of the rigour of QA processes.
That however I agree with!

:slight_smile:

Ellie
Who wonders what the hell this thread will look like in Google searches.

Eleanor McHugh
Games With Brains
http://slides.games-with-brains.net

raise ArgumentError unless @reality.responds_to? :reason

Eleanor McHugh wrote:


My thanks to the core contributors to this fascinating thread. It’s
stretched me well past my boundaries on several points, but also
clarified some key learning I’ve picked up from my own field (psychology &
psychotherapy).

For me, the take-away here (and this is not at all news to me) is that
valid formal approaches to reliability are efficient (at least at times)
and powerful, and should definitely be used - WHEN WE HAVE THEM. The
problem is that they all stop short of the full spectrum of reality in
which our processes must survive. Thus we must ultimately leave the
comfort of deduction and dive into the dragon realm of inferential
processes. Ultimately, there simply is no substitute for, or adequate
simulation of, reality. Sigh.

Thanks, again. I’ve much enjoyed plowing through these posts.

t.

Tom C., MS MA, LMHC
Private practice Psychotherapist
Bellingham, Washington, U.S.A: (360) 920-1226
<< [email protected] >> (email)
<< TomCloyd.com >> (website & psychotherapy weblog)
<< sleightmind.wordpress.com >> (mental health issues weblog)
<< directpathdesign.com >> (web site design & consultation)

On 16 Apr 2008, at 19:40, Francis B. wrote:

I’m not sure how the Halting Problem relates to presence or
absence of undefined states. What does it mean to call a state
“undefined” anyway, in this context? Presumably there’s always
a finite number of states, which may be extremely large for most
programs that perform useful calculations. If we start with very
small, simple (and not useful) programs, we can enumerate all
the states. Are any of these undefined? As we increase the size
of the program, the number of states increases, presumably at
an alarming rate. At what point do we become unable to identify
program states?

You’re making the mistake of viewing program states as a discrete set,
all of which can be logically enumerated. If that were the case then
whilst complexity would make managing the creation of complex software
difficult, it would still be theoretically possible to create
‘correct’ programs. However Godel’s incompleteness theorems tell us
that for any mathematical system based upon a set of axioms there will
be propositions consistent with that system which cannot be proven or
disproved by application of the system (i.e. they are unprovable,
which is what I meant by the casual short-hand ‘undefined’).

Both Turing machines and Register machines are axiomatic mathematical
systems and therefore can enter states which in terms of their axioms
are unknowable. A program is essentially a meta-state comprised of
numerous transient states and is thus a set of meta-propositions
leading to state propositions which need to be proved, any of which
may be unknowable. This applies equally to both the runtime behaviour
of the program and to the application of any formal methods used to
create it.

For most day-to-day programming Godel incompleteness is irrelevant, in
the same way that quantum indeterminacy can be ignored when playing
tennis, but when you build large software systems which need to be
highly reliable unknowable states do have potential to wreak havoc.
This is why beyond a certain level of complexity it’s helpful to use
statistical methods to gain additional insight beyond the normal
boundaries of a development methodology.

Now the Halting Problem is fascinating because it’s very simple in
conception: given a program and a series of inputs (which in Godel’s
terms comprises a mathematical system and a set of axioms) determine
whether or not the program will complete. Turing proved that in the
general case this problem is insoluble, and not only does this place
an interesting theoretical limitation on all software systems but it
also applies to sub-programs right the way down to the finest grain of
detail. Basically anytime a program contains a loop condition the
Halting Problem will apply to that loop.
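To make that concrete, here’s a minimal Ruby sketch (purely illustrative,
not drawn from any real control code) of a loop whose condition is a
one-liner but whose termination is still an open mathematical question -
the Collatz iteration:

  # Nobody has proved that this loop terminates for every positive n,
  # although every value ever tried does reach 1 eventually.
  def collatz_steps(n)
    steps = 0
    until n == 1
      n = n.even? ? n / 2 : 3 * n + 1
      steps += 1
    end
    steps
  end

  puts collatz_steps(27)   # halts after 111 steps, but no general proof exists

Nothing in those few lines tells you, for an arbitrary input, whether the
loop will ever exit.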

So in essence we’re left with a view of software development in which
we can never truly know if a program is correct or even if it will halt.

An example of a simple program that does a barely useful task
is one that reads an input level from a 16 bit A/D, say, and
writes half that value to a 16 bit D/A. Can we be confident we
can write a 100% reliable and correct program to perform this
task? If not, why not? If so, let us increase the complexity
of the task progressively in small increments. At what point
are we forced to admit that we cannot be sure our program does
what it is meant to do?
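As a Ruby sketch, with the hardware access stubbed out behind hypothetical
read_adc/write_dac helpers, the task is roughly:

  ADC_MAX = 0xFFFF   # 16-bit converter

  # Stand-ins for device access; a real system would talk to registers.
  def read_adc
    rand(0..ADC_MAX)
  end

  def write_dac(value)
    raise RangeError, 'DAC value out of range' unless (0..ADC_MAX).cover?(value)
    # ...write to the hardware here...
  end

  loop do
    write_dac(read_adc / 2)   # integer halving can never exceed the DAC range
  end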

That’s very difficult to say for sure. Conceptually I’d have
confidence in the old BASIC standby of:

10 PRINT “HELLO WORLD”
20 GOTO 10

as this will run infinitely and is intended to. But of course under
the hood the PRINT statement needs to be implemented in machine code
and that implementation could itself be incorrect, so even with a very
trivial program (from a coder’s perspective) we see the possibility of
incorrect behaviour.

However balancing this is the fact that a program exists across a
finite period of time and is subject to modification and improvement.
This means that the set of axioms can be adjusted to more closely
match the set of propositions, in principle increasing our confidence
that the program is correct for the problem domain in question.

I’m not trying to prove you wrong; I just want to get a better
handle on the problem.

Douglas Hofstadter has written extensively on Godel and computability
if you want to delve deeper, but none of this is easy stuff to get
into as it runs counter to our common sense view of mathematics.

Ellie

Eleanor McHugh
Games With Brains
http://slides.games-with-brains.net

raise ArgumentError unless @reality.responds_to? :reason

“Eleanor McHugh” [email protected] wrote in message
news:[email protected]

Still, it seems to be that no level of genius can create software as is
necessary for the Space Shuttle or a more average airplane, without the
level of testing that NASA or Boeing brings to bear for their software.

Oh definitely. Testing that code performs correctly is essential to
any embedded development process, as is validating that the code
written solves the correct problem. The latter is by far the more
difficult though.
[ SNIP ]

On the latter note - validation as opposed to verification - it’s important
to add that strictly speaking requirements should address the user’s real
needs, not what the user thinks they are. Raise your hand if you’ve ever
worked on a project where you got some (ostensibly) fairly clear
requirements from a client, built an application that validates, presented
it to the client, and found out they really wanted something else.

It’s why good business analysts should be worth more than architects or
coders. For the record, I’m not a BA - I just recognize their value.

AHS

“Mike Silva” [email protected] wrote in message
news:[email protected]
On Apr 16, 2:39 pm, “Arved Sandstrom” [email protected]
wrote:

experience and above all, process. Not the programming language.


But that still leaves 10%-. For example, as noted here
(http://www.praxis-his.com/sparkada/pdfs/spark_c130j.pdf), in an analysis of
safety-critical code written in three languages (C, Ada and SPARK), all of
which was already certified to DO-178B Level A (the most stringent level),
it was found that the SPARK code had one tenth the residual error rate of
the Ada code, and the Ada code had only one tenth the residual error rate
of the C code. That’s a 100:1 difference in
residual error rates in code all of which was certified to the highest
aviation standards. Would anybody argue that putting out safety-
critical software with an error rate 100 times greater than the
current art allows is a good thing? In fact, would anybody argue that
it is not grossly negligent?

Oh, and the anecdote about the compiler finding in minutes a bug that
had defied testing for a week should not be lightly dismissed either.


I won’t dispute the fact that some languages have more inherent support for
“correct” programming than others do. SPARK wouldn’t be the only one; Eiffel
and various functional languages come to mind also. For others you can get
add-ons, such as JML for Java (see the Wikipedia article on Design by
Contract).
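For anyone who hasn’t met design by contract, a hand-rolled Ruby
approximation (runtime assertions standing in for what Eiffel or JML let
you declare; the function and its bounds are invented) looks something
like this:

  # Precondition and postcondition checked at runtime.
  def halve_reading(sample)
    raise ArgumentError, 'precondition violated: 0..65535' unless (0..0xFFFF).cover?(sample)
    result = sample / 2
    raise 'postcondition violated: 0..32767' unless (0..0x7FFF).cover?(result)
    result
  end

  halve_reading(65_535)   # => 32767
  halve_reading(-1)       # raises ArgumentError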

Having said that, it seems to me that the better correctness of programs in
SPARK or Ada compared to C/C++, say, would also be due to the qualities of
organizations that tend to use/adopt these languages. Those qualities
include programmer competence/experience/education, organizational
standards, processes in place, and external requirements (as in legal ones
for avionics or medical software). Not to mention, there is a correlation
between the ease of use of a language and the rate of poor coding (I may
get flak for that statement), which is not necessarily a fault of that
language. Note that by ease of use I do not mean masterability, I simply
mean how quickly a programmer can write something that sort of works.

For example, is shabby software written in Java or C or Python or PHP or
JavaScript shabby because one of those languages was chosen, or is it
shabby because the requirements analysis sucks, design is basically absent,
there is no documentation, testing is a myth, and the coders haven’t
mastered the language? I’ve seen more than a few ads in my area advertising
Web developer jobs for $9 or $10 an hour…you could use the best language in
the world at a job like that and you’d still end up with crap. Conversely,
get a team of really experienced and smart coders who are well-versed in
process, have management backing for process, and I don’t see the language
of choice mattering that much. IOW, in that MoD analysis you refer to, was
everything else equal? Throw Ruby at a CMM Level 5 team and I wonder
whether the product is going to be an order or two of magnitude worse than
if they had Ada. Myself I doubt it.

AHS

On Thu, Apr 17, 2008 at 1:31 AM, Eleanor McHugh
[email protected] wrote:

Of course this is a terrible abuse of Church-Turing, but it seems to fit
the real world pretty well.

Oh, I was not clear enough I am afraid. I challenge that you can prove
that one cannot prove whether your aircraft control system halts or not.
I even believe that in theory it can be proven. And if you are not
talking about theory then I agree with you,
it is not a fair statement.
This is not nitpicking because I think that serious work is still done
in automatic proving of theorems and the day might come where even
complex systems can be proven to be correct.
Gödel and Turing only show that complete systems cannot be correct,
they say nothing about complex.

Cheers
Robert



http://ruby-smalltalk.blogspot.com/


Whereof one cannot speak, thereof one must be silent.
Ludwig Wittgenstein

On Thu, Apr 17, 2008 at 3:21 AM, Eleanor McHugh
[email protected] wrote:

So in essence we’re left with a view of software development in which we
can never truly know if a program is correct or even if it will halt.

In general, proofs around Turing machines only apply with an
infinite-length tape - in other words, they are about a computer with
infinite memory.

The Halting proof only proves that there exist programs whose halting we
can’t prove, not that it isn’t possible to prove things about some
programs.

The Goedel proof is about complete logical systems; in the case of a
computer program, we are working with a system where we have other
formalisms under it that provide axioms that we don’t prove, just
assume.

In the absence of hardware failure (something we of course can’t prove
and is a problem in the real world), we can trivially prove halting
for a lot of cases. E.g., the simple program “halt” halts. “halt if
true” halts. “halt if 1 == 1” would generally halt, assuming no
redefinition of ==. And so on.
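In Ruby terms those degenerate cases might read as follows; each line on
its own is a complete program:

  exit              # halts: a single statement, no loop at all
  exit if true      # halts: the guard is a literal
  exit if 1 == 1    # halts, provided Integer#== has not been redefined

  # Contrast with a guard that depends on external input, where no such
  # one-line argument is available:
  # exit if $stdin.read.include?("stop")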

I appreciate and respect your general contributions in this thread; I
just couldn’t let this particular argument stand, as it confuses people
about what the theorems say.

Eivind.


Eleanor McHugh wrote:
|
| Much kudos to your friend. Twelve years ago I did the same thing in VB
| for helicopters and whilst it was pushing the hardware at that time, it
| was still usable. Of course these days most mobile phones have more
| computational grunt and memory than that :slight_smile:

But the mobile phones aren’t necessarily as reliable as, say, the
hardware and operating system of an avionics system.

Considering that an operating system is an abstraction, and that
abstractions, more often than not, are leaky, I contend that specialized
hardware doesn’t use much of an operating system, thus eliminating a
large set of undefined (rather, unprovable as per Godel) states,
correct? Only providing the bare minimum of APIs needed for the software
on the application level to function properly, or dispensing with
operating systems entirely, working on the bare metal (I think that the
Apollo project computers functioned like that, but correct me if I’m
wrong.)

In my experience, the more complex software gets, the more error-prone
it is. I notice that in my PDA, which performs rock solid, only needing
a driver upgrade for SD cards above 64 MB (well, at the time this thing
was made, cards larger than 64 MB weren’t widely available yet; which
shows that not all requirements can be gathered beforehand), in my old
smart phone, which failed in every possible situation, and in all the
operating systems I’ve used in any depth so far.

So, isn’t it part of the requirement gathering process, or the design
process, during software engineering to cut down on unnecessary
complexities and abstractions, too?

And that influences interface design for the user, too, I’ve noticed.
After all, the guidelines of the US Air Force for user interfaces
compose a 478-page book:

“From 1984 to 1986, the U.S. Air Force compiled existing usability
knowledge into a single, well-organized set of guidelines for its user
interface designers. I was one of several people who advised the project
(in a small way), and thus received a copy of the final 478-page book in
August 1986.”

Hm, it seems to always come down to near-perfect requirements during
design.


Phillip G.
Twitter: twitter.com/cynicalryan

Don’t stop with your first draft.
~ - The Elements of Programming Style (Kernighan & Plauger)


Eivind E. wrote:

|
| The Goedel proof is about complete logical systems; in the case of a
| computer program, we are working with a system where we have other
| formalisms under it that’s providing axioms that we don’t prove, just
| assume.

But a language that is Turing-complete is a complete logical system, is
it not?

Since you can express any Turing machine in a Turing-complete language,
you have to necessarily deal with Goedel’s incompleteness, don’t you?

And language side-effects are there, no matter what language, due to the
way it (or its compiler) is implemented.

| In the absence of hardware failure (something we of course can’t prove
| and is a problem in the real world), we can trivially prove halting
| for a lot of cases. E.g, the simple program “halt” halts. halt if
| true halts. halt if 1 == 1 would generally halt, assuming no
| redefinition of ==. An so on.

There are logical proofs that P is non-P (details of which are an
exercise to the reader since I have misplaced Schaum’s Outline of Logic).

That a state is provable says nothing about the conditions under which
this proof is valid. In a complete system, any proof is valid, so that the
proof that halt == true equates to false is a side-effect under the
(in-)correct circumstances.

Now, most applications don’t reach this level of complexity, but the
level of complexity is there, you usually don’t have to care about it.

Conversely, if every state of a program were provable, the processes
like DO-178B wouldn’t be necessary, would they?


Phillip G.
Twitter: twitter.com/cynicalryan

~ Why do we drink cow’s milk? Who was the first guy who first looked
at
a cow and said “I think I’ll drink whatever comes out of these things
when I squeeze 'em!”? – Calvin

On 17 Apr 2008, at 07:25, Robert D. wrote:

it is not a fair statement.
This is not nitpicking because I think that serious work is still done
in automatic proving of theorems and the day might come where even
complex systems can be proven to be correct.
Gödel and Turing only show that complete systems cannot be correct,
they say nothing about complex.

It’s an interesting semantic point, in that complex is not necessarily
the same as complete. But likewise a Turing machine is an ideal device
which cannot exist in the real world: it requires both infinite
storage and infinite time so perforce there are computations which it
could make that cannot be made using the means available within our
physical environment. However for sake of argument let’s suppose that
our complex system is being implemented on such an ideal device and
that our interest is in determining whether or not Godel
incompleteness is significant.

What matters is not specifically how complex the system is, in that a
highly complex system from our perspective may still be composed of
provable states, but how many of its states are unprovable within the
context of the complete system of which it forms a subset. We can
therefore see that the application of Godel to a complex but
incomplete system operating in ideal conditions would result in a
probabilistic incompleteness in that system that at least
theoretically could be measured.

However this then leads to the question of whether or not the complex
system is in fact only a subset of a larger complete system, or a
complete system in its own right. Considering that it is itself based
on a set of axioms (the requirements for the system) and therefore in
that regard complete, I would argue that it was not a subset at all.
This appears to create a probabilistic, relativistic incompleteness
(which is analogous to the viewpoint that physics has reached
regarding our physical universe).

But even if we allow the subset to be an incomplete system we can see
that increasing complexity increases the number of states in the
system, thus increasing the likelihood of included states being drawn
from the full set of states possible in the complete system. Yet
again, we can see that there is a probability of included states being
unprovable which beyond a certain level of complexity specific to the
system would be significant in preventing a formal proof of all the
included states.

Given that Godel’s incompleteness applies to systems as simple as
arithmetic, and that even simple desktop applications often contain
many more orders of axioms than this, the concept of formal proof
should always be seen in light of this probabilistic incompleteness.
That is not to say that attempting to prove a system is futile, but
merely a recognition that beyond a certain level of complexity
specific to a given system there will always be an element of
unpredictability even when environmental adjustment of the system does
not apply.

Ellie

Eleanor McHugh
Games With Brains
http://slides.games-with-brains.net

raise ArgumentError unless @reality.responds_to? :reason

On 17 Apr 2008, at 10:29, Eivind E. wrote:

and is a problem in the real world), we can trivially prove halting
for a lot of cases. E.g., the simple program “halt” halts. “halt if
true” halts. “halt if 1 == 1” would generally halt, assuming no
redefinition of ==. And so on.

I appreciate and respect your general contributions in this thread; I
just couldn’t let this particular argument stand, as it confuses people
about what the theorems say.

I fully agree with you that for degenerate cases the halting problem
is trivial. Unfortunately these tend to be the exceptions in real
world programs as opposed to the rule. For safety-critical systems
where unexpected halting is a run-time exception that needs to be
handled, I think it is highly relevant, although it can rapidly become
a red herring if viewed out of context.
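In Ruby the usual defence is to turn “did not finish in time” into an
ordinary exception with a watchdog. A rough sketch using the standard
timeout library (the deadline and the fallback value here are invented):

  require 'timeout'

  # Run a computation that may never return; degrade to a fallback value
  # rather than hanging forever.
  def bounded(seconds, fallback)
    Timeout.timeout(seconds) { yield }
  rescue Timeout::Error
    fallback
  end

  # An iterative estimate that may or may not converge within the deadline.
  value = bounded(0.05, :last_known_good) do
    x = 2.0
    x = 0.5 * (x + 2.0 / x) until (x * x - 2.0).abs < 1e-12
    x
  end
  puts value

Of course Timeout interrupts the block from another thread, so the
safeguard brings failure modes of its own - which is rather the point
about red herrings and context.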

Ellie

Eleanor McHugh
Games With Brains
http://slides.games-with-brains.net

raise ArgumentError unless @reality.responds_to? :reason

In article [email protected],
Tom C. [email protected] wrote:

My thanks to the core contributors to this fascinating thread. It’s
stretched me well past my boundaries on several points, but also
clarified some key learning I’ve picked from my own field (psychology &
psychotherapy).

Here is a website with some papers that I found in my cursory
research of the topic and that might be interesting:

http://www.praxis-his.com/sparkada/publications_journals.asp

The Philosophical Transactions paper by Roderick Chapman reports
that, using SPARK (a well-behaved subset of Ada), “Proof of the
absence of run-time errors has been performed on programs of the
order of 100 000 lines of code.”

Francis

On Wed, Apr 16, 2008 at 12:57 PM, Phillip G.
[email protected] wrote:

Sure. But to know these states, the software should be tested as
thoroughly as possible. I somehow doubt that anybody using something
mission-critical to flying or medical health wants to call the hotline
during the final approach of a plane or when a surgical robot gets
fantasies of being SkyNET. :wink:

Yes, testing, not a blind faith in whatever language is being used,
and its compiler.

Anyway, this problem is (AFAIK, anyway), countered by using redundant
implementations of the hardware and software (well, as far as possible,
anyway), to minimize the effect of unknown states.

Of course this isn’t perfect either. In fact “The Bug Heard Round the
World,” which I mentioned earlier in this thread, was a failure of
redundancy.

The Shuttle has, or at least had in the early days, redundant on-board
computers which monitor the health and behavior of shuttle systems,
with voting used to find discrepancies. The hardware is/was comprised
of (3 I think) identical IBM 4Pi computers with 1 of those having a
totally independently implemented software load. When control of the
launch/mission is transferred to this system, the separate processors
run in parallel, and their outputs are compared. If they disagree,
the launch is aborted.
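The voting step itself is conceptually simple - a toy Ruby illustration
(not, of course, how the real flight software does it):

  # Accept a value only when a clear majority of redundant channels agree;
  # otherwise signal a hold.
  def vote(outputs)
    counts = Hash.new(0)
    outputs.each { |v| counts[v] += 1 }
    value, count = counts.max_by { |_v, n| n }
    count > outputs.size / 2 ? value : :no_majority
  end

  vote([1024, 1024, 1024])   # => 1024
  vote([1024, 1023, 1024])   # => 1024 (one channel slightly off)
  vote([1024, 1023, 1022])   # => :no_majority -> hold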

Of course all of this worked well during the pre-STS1 mission sims.

However, on the day of the launch, there was a clock skew between the
redundant computers, so the output from one lagged just a bit behind
the others, and the system halted the launch, unnecessarily as it
turned out, at T-3


Rick DeNatale

My blog on Ruby
http://talklikeaduck.denhaven2.com/

On Thu, Apr 17, 2008 at 10:29 AM, Eivind E. [email protected]
wrote:

In the absence of hardware failure (something we of course can’t prove
and is a problem in the real world), we can trivially prove halting
for a lot of

(trivial :slight_smile:

cases. E.g., the simple program “halt” halts.

If ‘halt’ is defined as implying that the program halts, then you’re
not really proving anything here. Like saying that true implies true.
You have to take that as an axiom.

“halt if
true” halts. “halt if 1 == 1” would generally halt, assuming no
redefinition of ==. And so on.

Once you invoke conditions, you are invoking some formal system of
axioms (like your assumption above) which you have to define. I grant
that there are very simple formal systems which are ‘complete’ but
they are generally not ‘interesting’. If you want even simple integer
arithmetic[1], you’re subject to Goedel’s incompleteness theorem. And
this is just in the abstract. As soon as you put even the ‘halt’
program on real hardware, all bets are off. You simply cannot prove
anything about it.

I think I’d rather fly on plane whose avionics software had been
written by someone who assumed that they had made mistakes somewhere
and that both their software and the hardware it was running on were
going to fail in some unforeseeable way (and had implemented the
appropriate safeguards both in process and error recovery) than by
someone who assumed that proving their software correct was sufficient
(though I realise that is a bit of a straw man :slight_smile:

On a separate note, I’m continually surprised that computer ‘science’
is so heavily biased to the mathematical theoretical side at the
expense of the empirical and pragmatic. Almost every single code
example in all textbooks and papers has ‘error handling omitted for
clarity’! It’s no wonder we have all these planes falling out of the
sky :slight_smile:
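As a small counter-example to the “omitted for clarity” habit, here is
what even a toy reading routine looks like once the error handling stays
in (the sensor format, range and fallback are invented):

  require 'stringio'

  LAST_KNOWN_GOOD = 512

  # Never let a bad reading propagate: garbage, EOF or out-of-range
  # values fall back to the last value we trusted.
  def safe_reading(io, last_good = LAST_KNOWN_GOOD)
    raw = io.gets
    return last_good if raw.nil?     # channel closed
    value = Integer(raw)             # raises ArgumentError on garbage
    (0..1023).cover?(value) ? value : last_good
  rescue ArgumentError, IOError
    last_good
  end

  puts safe_reading(StringIO.new("700\n"))     # => 700
  puts safe_reading(StringIO.new("glitch\n"))  # => 512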

Best regards,
Sean

[1] The number theorists are now shrieking in disbelief: Simple?
Integers? Arithmetic?! :slight_smile:


Rick DeNatale wrote:

|
| Yes, testing, not a blind faith in whatever language is being used,
| and its compiler.

Indeed.

|> Anyway, this problem is (AFAIK, anyway), countered by using redundant
|> implementations of the hardware and software (well, as far as possible,
|> anyway), to minimize the effect of unknown states.
|
| Of course this isn’t perfect either. In fact “The Bug Heard Round the
| World.” which I mentioned earlier in this thread, was a failure of
| redundancy.

Perfection is an ideal, that we can only approach asymptotically, never
achieve (since we, as human beings, aren’t perfect).

| Of course all of this worked well during the pre-STS1 mission sims.
|
| However, on the day of the launch, there was a clock skew between the
| redundant computers, so the output from one lagged just a bit behind
| the others, and the system halted the launch, unnecessarily as it
| turned out, at T-3
|

That it was an unnecessary halt is probably the benefit of hindsight.
Unfortunately, I can only assume that it was so, since I cannot find a
free version of the paper you linked to earlier.

Without the benefit of hindsight, the problem of the skewed clocks could
have a much wider impact than it actually had, masking deeper problems
of the software and / or hardware used.

In such a case, we enter the area of risk management: Is it worth
risking the whole mission on something that hasn’t been done before at this
scale? While there was knowledge, at the time, of space flight thanks to
the Apollo and Mercury programs, something like the Space Shuttle was
new, and very different from the “throw away” capsules used before, with
different approaches to solve the problem of getting something into
orbit and back again, preferably all in one piece.

With the lives and money at stake with the Shuttle program, the decision
to cancel was wise, IMO, even though it turned out to be unnecessary.

One could even claim that the systems performed as planned, and
prevented a catastrophe. Without actual empirical testing we probably
won’t know for sure, and can only speculate.

In the end, though, this shows that no amount of software nor hardware
can replace judgment calls made by human beings. Technology can only
assist in making decisions. And in the cases where humans cannot make
decisions (like a Shuttle launch, where automation has to be used), a
use of technology (and not just languages and compilers and processes)
still requires humans from the get-go.

I think that the movie WarGames touched on this topic in a good and
decent way, as did Crimson Tide (in a not very related way, though
it demonstrates my point about not putting too much trust in
process).


Phillip G.
Twitter: twitter.com/cynicalryan

Zero G and I feel fine.

On 17 Apr 2008, at 04:16, Tom C. wrote:

ultimately leave the comfort of deduction and dive into the dragon realm
of inferential processes. Ultimately, there simply is no substitute for,
or adequate simulation of, reality. Sigh.

That’s the new physics for you :wink:

Ellie

Eleanor McHugh
Games With Brains
http://slides.games-with-brains.net

raise ArgumentError unless @reality.responds_to? :reason

On Thu, Apr 17, 2008 at 1:36 PM, Sean O’Halpin [email protected]
wrote:

On Thu, Apr 17, 2008 at 10:29 AM, Eivind E. [email protected] wrote:

In the absence of hardware failure (something we of course can’t prove
and is a problem in the real world), we can trivially prove halting
for a lot of

(trivial :slight_smile:

A surprisingly non-trivial number of cases are trivial.

cases. E.g., the simple program “halt” halts.

If ‘halt’ is defined as implying that the program halts, then you’re
not really proving anything here. Like saying that true implies true.
You have to take that as an axiom.

I am proving that a specific program will halt. This is a very
trivial proof; that was the intent. You could of course use an “X:
goto X” for the halt example; it does the same thing.

“halt if
true” halts. “halt if 1 == 1” would generally halt, assuming no
redefinition of ==. And so on.

Once you invoke conditions, you are invoking some formal system of
axioms (like your assumption above) which you have to define. I grant
that there are very simple formal systems which are ‘complete’ but
they are generally not ‘interesting’. If you want even simple integer
arithmetic[1], you’re subject to Goedel’s incompleteness theorem.

My claim wasn’t that reasoning about the system is not subject to
Goedel’s incompleteness theorem - it is. It was that the properties
that were described as being a result of Goedel’s incompleteness
theorem were in fact not related to that theorem. That state proving
is impossible in the case of an infinite memory computer and often
infeasible in the case of finite memory computers is, to the best of my
knowledge, a fully separate result.

Eivind.

On Thu, Apr 17, 2008 at 12:26 PM, Phillip G.
[email protected] wrote:

But a language that is Turing-complete is a complete logical system, is
it not?

No. A computer language is a different kind of beast; it is not a way
to state truths about things, but a set of rules for mathematical
operations.

The relevance of the incompleteness is in the space of what programs
can exist, and what we can prove about them.

The incompleteness theorem says that there will be statements we can
make about some programs that will be true but that we will not be
able to prove true.

The halting theorem says, specifically, that there exist
programs that will halt but for which we cannot prove whether they
halt - under the assumption of infinite memory. Under the assumption of
finite memory, we are dealing with a finite state machine, and a proof
is in theory possible by enumerating all the states and which other
state follows each state.

In practice, of course, enumerating all the states and the transitions
from them very, very quickly becomes intractable.

But as far as I can tell, incompleteness does not enter into it; only
practical feasibility.

proof is valid. In a complete system, any proof is valid, so that the
proof that halt == true equates to false is a side-effect under the
(in-)correct circumstances.

I have no idea what you are trying to say here. Could you reformulate?

Specifically, the following makes no sense to me:

“In a complete system, any proof is valid” (this seems to depend on
some weird definition of complete system)
“There are logical proofs that P is non-P” (this seems like either an
example of a proof with subtle errors or proof-by-contradiction
proving that a particular statement X is false; i.e., if we can prove
P != P if we assume X, then we know that X is false.)

Now, most applications don’t reach this level of complexity, but the
level of complexity is there, you usually don’t have to care about it.

Conversely, if every state of a program were provable, the processes
like DO-178B wouldn’t be necessary, would they?

If we knew we had a perfect spec and could practically prove all the
relevant aspects of transformation from spec to software/hardware, I
guess we would be able to just say “Prove spec to software” instead of
having any other standard. Alas, to make software and hardware is a
human endeavor - even assuming we could prove halting properties of
our real world state machines on a perfect computer, this is only a
small part of systems development.

Eivind.


Eivind E. wrote:

|
| No. A computer language is a different kind of beast; it is not a way
| to state truths about things, but a set of rules for mathematical
| operations.

Which makes it a complete logical system, though not necessarily in the
sense of abstract logic.

| I have no idea what you are trying to say here. Could you reformulate?

Sure.

| Specifically, the following makes no sense to me:
|
| “In a complete system, any proof is valid” (this seems to depend on
| some weird definition of complete system)

Logically complete. That is, a system that implements abstract logic in
its entirety. Abstract logic is at the root of Godel’s incompleteness
theorem. Essentially, you have something that can be expressed
with abstract logic. But that means that there are things that cannot
be proven with abstract logic within the logical system itself.
Whence, an incompleteness theorem.

| “There are logical proofs that P is non-P” (this seems like either an
| example of a proof with subtle errors or proof-by-contradiction
| proving that a particular statement X is false; ie, if we can prove
| P!=P if we assume of X, then we know that X is false.)

  1. P → Q              Premise
  2. P → (Q → ¬P)       Premise
  3.   P                Assumption
  4.   Q                1,3 MP
  5.   Q → ¬P           2,3 MP
  6.   ¬P               4,5 MP
  7.   P & ¬P           3,6 Conj
  8. ¬P                 3-7 RAA

Ergo: P = not-P. :wink:

(Note: The characters are UTF-8, in case you can’t see them.)

The complete discussion of this proof:
http://www.iep.utm.edu/p/prop-log.htm#SH5e

Godel’s first theorem:
“For any consistent formal, recursively enumerable theory that proves
basic arithmetical truths, an arithmetical statement that is true, but
not provable in the theory, can be constructed. That is, any effectively
generated theory capable of expressing elementary arithmetic cannot be
both consistent and complete.”

Which drives logicians mad, since abstract logic has to be both complete
and non-contradictory. :wink:

Anyway: Computer languages that are Turing complete are both complete
and contradictory (since Godel’s theorem exists), given a sufficient
algorithm. However, in >=90% of cases, this doesn’t matter.

|
| If we knew we had a perfect spec and could practically prove all the
| relevant aspects of transformation from spec to software/hardware, I
| guess we would be able to just say “Prove spec to software” instead of
| having any other standard. Alas, to make software and hardware is a
| human endeavor - even assuming we could prove halting properties of
| our real world state machines on a perfect computer, this is only a
| small part of systems development.

Not really, thanks to Godel. If we can prove it, it’s either incomplete
(so we don’t have the perfect specs), or contradictory (so we don’t have
the perfect specs either).

I hold the opinion that Godel’s Incompleteness Axiom is a misnomer, and
it should be called Godel’s Incompleteness Paradox.


Phillip G.
Twitter: twitter.com/cynicalryan

Abstain from wine, women, and song; mostly song.
%Abstinence is as easy to me, as temperance would be difficult.
~ – Samuel Johnson

On Thu, Apr 17, 2008 at 3:36 PM, Phillip G.
[email protected] wrote:

| “There are logical proofs that P is non-P” (this seems like either an
| example of a proof with subtle errors or proof-by-contradiction
| proving that a particular statement X is false; i.e., if we can prove
| P != P if we assume X, then we know that X is false.)

  1. P → Q              Premise
  2. P → (Q → ¬P)       Premise

Ex falso quodlibet, nice try :wink:
IOW you can prove anything from a wrong premise, as false → X is
always true. Indeed, what you proved was
false → (P && !P)
which is correct of course.

    | human endeavor - even assuming we could prove halting properties of
    | our real world state machines on a perfect computer, this is only a
    | small part of systems development.

Not really, thanks to Godel. If we can prove it, it’s either incomplete
(so we don’t have the perfect specs), or contradictory (so we don’t have
the perfect specs either).

I hold the opinion that Godel’s Incompleteness Axiom is a misnomer, and
it should be called Godel’s Incompleteness Paradox.

Is it really called an axiom? An axiom cannot be proven; it should be
called a Theorem.

Cheers
Robert


http://ruby-smalltalk.blogspot.com/


Whereof one cannot speak, thereof one must be silent.
Ludwig Wittgenstein