Don Y
Guest
On 1/12/2023 11:06 AM, Joe Gwinn wrote:
On Sat, 7 Jan 2023 17:16:57 -0700, Don Y <blockedofcourse@foo.invalid> wrote:
On 1/7/2023 4:11 PM, Joe Gwinn wrote:
Things which do have an important place in modern software that is
intended to be provably correct are invariants (borrowed from physics).
Yes, but if I recall we called them Assertions:
<https://en.wikipedia.org/wiki/Assertion_(software_development)>
Software also has Invariants, but I don't know that either one came
from the Physics world.
<https://en.wikipedia.org/wiki/Invariant_(mathematics)#Invariants_in_computer_science>
The main difference in software seems to be that assertions are
logical statements about the value of a single variable, while
Invariants apply an assertion to the result of a specified function.
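For concreteness, a minimal (hypothetical) C sketch of that distinction -- an
assertion constraining a single value at one point, versus an invariant
packaged as a function and re-checked wherever the structure is touched:

    #include <assert.h>

    /* Assertion: a claim about one value at one point in the code. */
    void set_speed(int rpm)
    {
        assert(rpm >= 0 && rpm <= 6000);
        /* ... drive the motor ... */
    }

    /* Invariant: a property of a structure, packaged as a function so
     * the same check can be applied wherever the structure is modified. */
    struct ring { unsigned head, tail, size; };

    static int ring_invariant(const struct ring *r)
    {
        return r->head < r->size && r->tail < r->size;
    }

    void ring_advance(struct ring *r)
    {
        assert(ring_invariant(r));            /* holds on entry */
        r->head = (r->head + 1) % r->size;
        assert(ring_invariant(r));            /* still holds on exit */
    }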
One kind of assertion was visual - coplot a 2D plot of something, plus
a circle, and visually verify concentricity. The eye is _very_ good
at this, so it was a very robust and sensitive test.
I for one used them heavily, with some being used in operation, not
just development. This was done in runtime code, not as a property of
the programming language and/or compiler.
That, and the fact that many languages simply don't natively support
them, means they end up being a matter of "discipline" and not
really enforceable (design reviews?).
Sure they can be enforced in a design review. Simply ask the
programmer what the Invariants are and how the relevant code achieves
them.
That's *discipline*. You have to have procedures in place and
RELY on their enforcement -- by people qualified and committed.
And, Manglement has to be committed to expending the resources
to do so and impose their edicts.
If "ship it" is the bigger imperative, guess what will suffer?
The better approach is to build mechanisms (into the language
or runtime) that enforce these things. Then, you don't have to rely
on "discipline" to provide those protections.
This has nothing to do with whether the language has anything like
Invariants - the compiler won't understand the domain of the app code
being written, or know when and where to use which invariant.
Yes, you still rely on the developer to declare and define these
\"meaningfully\". But, if you are *designing* your code (and not
just relying on some /ad hoc/ process), you can define the
contracts before any of the code is written and codify these
with an OCL.
For systems of significant complexity *or* where the sources are
not visible, such OCL declarations tell the developer what he
can expect from the interface -- and, WHAT IS EXPECTED OF HIM!
I, for example, support the specification of invariant in-out conditions
in my IDL. I feel this makes the contract more explicit -- in terms
that the developer WILL see (and, that the auto-generated stubs will
enforce in case he wants to overlook them! :> ).
But, I can\'t mechanically require the designer of an object class
(or service) to define them meaningfully.
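As a rough illustration only (not the actual IDL or the stubs it generates;
the interface and its bounds are invented), a stub that enforces a declared
in/out contract might look something like:

    #include <assert.h>

    /* Stand-in for the real implementation the stub would forward to. */
    static int server_set_temperature(int celsius) { (void)celsius; return 0; }

    int set_temperature(int celsius)
    {
        /* precondition from the (hypothetical) contract */
        assert(celsius >= -40 && celsius <= 125);

        int status = server_set_temperature(celsius);

        /* postcondition: result is either success (0) or a defined error (-1) */
        assert(status == 0 || status == -1);
        return status;
    }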
And, what do you do when one fails at runtime? How often does
code simply propagate assertions up to a top level handler...
that panic()'s?
Yep. Depends on what the code does.
But, developers need to understand their intent AND have considered
a fallback strategy: "What do I do *if*..."
In 1985 or so, I built an invariant into some operating system code
used in a radar. At this level in the kernel, nothing like printf
exists. Nor did it have a printer for that matter.
You can use a \"black box\" to log messages that can\'t reach the
console (at this time). You wouldn\'t rely on a full-fledged
\"printf()-like\" facility to prepare messages. But, the intended
viewer of those messages can expend some effort to understand them
*in* error conditions. And, given that you want to maximize how much
you can store in the black-box resource, you\'d want messages to
be terse or heavily encoded.
So, just being able to store a series of integers is often enough
(even if the integers are the exp/mantissa of a FP value!)
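A minimal sketch of that sort of "black box" (illustrative only, not the
actual implementation): a fixed ring of integers that absorbs terse, encoded
entries even when no console or printf() exists:

    #include <stdint.h>

    #define BLACKBOX_SLOTS 256

    static uint32_t blackbox[BLACKBOX_SLOTS];
    static unsigned blackbox_next;

    /* Record one encoded entry; the oldest entries get overwritten. */
    void blackbox_log(uint32_t code)
    {
        blackbox[blackbox_next] = code;
        blackbox_next = (blackbox_next + 1) % BLACKBOX_SLOTS;
    }

    /* Example: store a float as its raw bit pattern (sign/exponent/mantissa
     * packed into one integer), to be decoded by whoever reads the log. */
    void blackbox_log_float(float value)
    {
        union { float f; uint32_t bits; } u;
        u.f = value;
        blackbox_log(u.bits);
    }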
I have a runtime monitor that lets me "watch" memory regions and
"notice" things of interest. The display is part of the monitor's
hardware support (in some cases, a set of 7-segment digits).
The problem was that the app guys were not ensuring that what had to
be global-access buffers were in fact in hardware global memory; if
not, the system's picture of reality would rapidly diverge. Subtle
but devastating.
So, what to do? This has to be checked at full speed, so it must be
dead simple: Add to the kernel a few lines of assembly code that
verified that the buffer address fell in hardware global memory and
intentionally execute an illegal instruction if not. This caught
*everybody* within 8 hours.
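In C-like terms (the original was a few lines of assembly; the address
bounds and trap mechanism below are invented placeholders), the check
amounts to:

    #include <stdint.h>

    #define GLOBAL_MEM_BASE  0x20000000u   /* hypothetical bounds of the */
    #define GLOBAL_MEM_LIMIT 0x20100000u   /* hardware global memory     */

    void check_global_buffer(const void *buf)
    {
        uintptr_t addr = (uintptr_t)buf;

        if (addr < GLOBAL_MEM_BASE || addr >= GLOBAL_MEM_LIMIT) {
            /* Fail loudly and immediately, as the assembly version did by
             * executing an illegal instruction (GCC/Clang-style trap here). */
            __builtin_trap();
        }
    }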
With modern hardware, you can rely on the processor (to some extent)
to catch these things. E.g., dereferencing an invalid pointer,
writing to TEXT space, stack under/over-flow, etc.
But, you have to plan with those things in mind.
More significantly, do you see real-time software implementing
invariants wrt deadlines? (oops! isn't that the whole point of RT??)
Again, I support the specification of per-task "deadline handlers"
but there's nothing that forces the developer to define one meaningfully.
Deadline schedulers do exist and were widely touted for realtime, but
never caught on outside of academia because they fail badly when the
planets align badly and the deadline cannot be met. This proved too
fragile for real applications. Not to mention too complex.
If you're coding where time is important, then how is verifying that
timeliness constraints ARE being met any LESS important than verifying
the constraints on a function call/return?
What mechanisms do you have to detect this, at runtime?
In the released product?
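One hedged sketch of such a runtime mechanism (names invented, assuming a
POSIX clock_gettime()): wrap the time-critical work and hand any overrun to
a developer-supplied deadline handler:

    #include <time.h>
    #include <stddef.h>

    typedef void (*deadline_handler_t)(long overrun_ns);

    static long elapsed_ns(const struct timespec *a, const struct timespec *b)
    {
        return (b->tv_sec - a->tv_sec) * 1000000000L
             + (b->tv_nsec - a->tv_nsec);
    }

    void run_with_deadline(void (*work)(void), long budget_ns,
                           deadline_handler_t on_miss)
    {
        struct timespec start, end;

        clock_gettime(CLOCK_MONOTONIC, &start);
        work();
        clock_gettime(CLOCK_MONOTONIC, &end);

        long took = elapsed_ns(&start, &end);
        if (took > budget_ns && on_miss != NULL)
            on_miss(took - budget_ns);   /* deadline missed: tell somebody */
    }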
When folks are still failing to test the results of many common
functions (malloc() being a prime example... but, how often have
you tested the return value of printf()?), how can you expect
them to have sorted out what to do for each thrown exception,
failed invariant, etc.?
In RT, one does not use malloc except once, to get the working
buffers. After that, the RT code handles memory management.
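A bare-bones sketch of that allocate-once pattern (names and sizes invented):
grab the storage with a single malloc() at startup, then serve fixed-size
blocks out of that pool during operation:

    #include <stdlib.h>
    #include <stddef.h>

    #define BLOCK_SIZE  256
    #define BLOCK_COUNT 1024

    static unsigned char *pool;
    static size_t next_block;

    int pool_init(void)                    /* the one and only malloc() */
    {
        pool = malloc((size_t)BLOCK_SIZE * BLOCK_COUNT);
        return pool != NULL;
    }

    void *pool_alloc(void)                 /* O(1), no heap activity */
    {
        if (pool == NULL || next_block >= BLOCK_COUNT)
            return NULL;
        return pool + (next_block++ * BLOCK_SIZE);
    }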
Old Wives' Tale. Your concern is wrt deterministic behavior
and/or timeliness. If you, the CAPABLE developer, know things
about the application and your implementation, why should you
be constrained NOT to use a facility? (why have malloc() at all?
why not just static buffers throughout?)
\"Don\'t run with scissors\" Why hasn\'t someone designed and sold
a device that prevents you from doing so? It could be as simple
as a 300 pound weight and a chain (to the scissors)!
Ans: there are times when you *need* to run with scissors.
So, be *wary*/vigilant when doing so, but don\'t anchor your
scissors to a large weight just to ensure you "play safe".
Java-style garbage collection would cause random system hangs, and so
cannot be used in realtime.
You don't need GC just because you use dynamic memory. It's only needed
to catch stale references that the user isn't obligated to clean up
on his own. Java manages objects, so it has to bear that cost.
In other languages, the developer manages those resources -- and
is responsible for their proper housekeeping.
Finally, as they are never supposed to execute, some industries
dictate that they be removed from production code, classifying
them as \"dead code\" (they aren\'t supposed to have side-effects).
Would you have an argument for leaving this in your code?
if (FALSE) panic();
Isn\'t that what an invariant *effectively* reduces to?
Not if it's coded correctly. In C and like languages, one either
declares the relevant variable to be volatile, or hides a critical
part of the mechanism in a subroutine, to prevent the compiler's code
optimizer from making such assumptions. Or write it in assembler.
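For instance, a minimal sketch of the volatile variant just described
(names invented):

    #include <stdlib.h>

    #define panic() abort()        /* stand-in for a real panic routine */

    /* Because the flag is volatile, the compiler must read it at runtime
     * and cannot prove the branch dead, so the check survives optimization. */
    static volatile int invariant_violated = 0;

    void check_invariant(void)
    {
        if (invariant_violated)
            panic();
    }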
You've missed the point.
The folks in the design review KNOW (from a detailed examination
of the code) that the condition (represented here by "FALSE")
will always resolve to FALSE. Always. So, they KNOW the branch
will never be taken. "panic()" represents dead code.
Note that the compiler can't always see how the condition will
resolve. Even with an OCL.
if (0 != return_one() - 1)
    panic();
if \"return_one()\" -- which, returns the constant \"1\" -- is opaque,
the compiler can\'t know that the expression resolves to
if (0 != 1 - 1)
so it can't do anything other than generate the requisite code to
invoke return_one(). I.e., the compiler can't decide that this is
"won't happen".
Resorting to the opacity of functions lets you do things solely
for side-effects -- which would otherwise be iffy.
E.g., if "return_one()" were actually "fault_in_enough_stack_space()"
then you wouldn't want the call elided -- because the code (presumably)
relies on having the requisite stack "wired down" (or faulted in)
before whatever comes next.
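A hedged sketch of that sort of side-effect-only helper (purely illustrative,
not the actual routine; the reserve size is invented): touch the stack a page
at a time so the faults happen now, not in the middle of whatever follows:

    #include <stddef.h>

    #define STACK_RESERVE (16 * 1024)     /* invented stack budget */

    int fault_in_enough_stack_space(void)
    {
        volatile char pad[STACK_RESERVE];

        /* Touch one byte per 4 KiB page; volatile writes can't be elided. */
        for (size_t i = 0; i < sizeof pad; i += 4096)
            pad[i] = 0;
        return 1;                         /* opaque, "return_one()"-style result */
    }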