Only if the programmer can prove (through formal, machine-checkable proofs) that it is free of bugs with respect to a claim of dependability.
Years ago, I took a course on formal semantics, and although I could follow all the theory and mathematics with little effort, I was uneasy with the material. More recently, I encountered the concept of an ontology and realized that it covers concepts but not instances (i.e., a car, but not your car parked in front of your office building). Suddenly, while studying the Pi-calculus, I realized the cause of my uneasiness: those formal techniques, languages, ... cannot even capture a real-world concept, let alone its real-world instances. This is unfortunate, since "dependability" matters most when the real world is involved in a physical manner.

If certified software wants our respect, it should at least show evidence that it is trying to address the certification of claims like "a person will not be stuck between computer-controlled doors" and, if that does happen, "the subway train will not start moving with a person hanging halfway out of the car". Can anyone point me to publications looking into this matter? Which research goes beyond "the rewriting of computations"?

I recognize the merit of "treating certification separately from ..." but I have to point out that there is little added value in reinforcing what are already the stronger links. It may be useful, but it does not merit priority over efforts aimed at the weakest link (outside the comfort zones of research communities).
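The gap described above can be made concrete with a toy sketch (my own hypothetical illustration; the class and names are invented for this comment, not taken from any cited work). A verifier can exhaustively check an invariant of the *model* of a door controller, yet that invariant says nothing about the physical person a sensor fails to detect:

```python
class DoorController:
    """Toy model of a computer-controlled door (illustration only).

    The 'provable' safety invariant below is a statement about the
    sensor flag inside the model, not about any physical person --
    which is exactly the gap between verified model and real world.
    """

    def __init__(self) -> None:
        self.state = "open"
        self.sensor_clear = True  # all the model knows about the doorway

    def try_close(self) -> str:
        # Model-level invariant (machine-checkable): the door closes
        # only when sensor_clear is True. Whether the sensor matches
        # reality -- a person actually standing in the doorway -- is
        # outside the model and outside any proof over it.
        if self.sensor_clear:
            self.state = "closed"
        return self.state
```

A model checker or proof assistant can verify that `state == "closed"` implies `sensor_clear` held at closing time, for every reachable state of this model; no amount of such verification speaks to the person the physical sensor missed.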