
The Fine Art of Eradicating Computer Bugs


[ CHRONICLE ]

Information technology is now essential in many fields. Be it in banking or sales, electronic voting, the piloting of planes and cars, pacemakers, insulin pumps or scientific instruments, computers perform functions that are critical, and sometimes vital in the literal sense of the term. It is therefore essential that circuits, software and development tools work flawlessly. This requirement, however, runs up against an intrinsic flaw of computing: bugs, tiny human errors in code that computation can amplify to the point of causing serious damage.

In 1994, Intel had to spend 475 million dollars to correct a bug in the Pentium microprocessor's floating-point division algorithm. The Pathfinder and Spirit Mars probes were nearly lost after landing, the first because of an incorrectly set bit in the system parameters, the second because of an error in the flash memory management routine. Even when they have no functional impact, small design or coding bugs leave IT systems vulnerable to attack by hackers. Examples of bugs are frequent, including in the automobile sector. Eradicating them is therefore an absolute priority.

The traditional method consists of testing the system. Done properly, testing is effective at finding bugs, but not at proving their absence. In 1949, Alan Turing proposed another approach: proving program correctness mathematically. Many scientists have worked on the problem since, and ten of them have won the Turing Award, the most prestigious prize in computer science. Two families of formal verification methods now exist: verification with interactive proof assistants, which can be total, and model checking, in which specific properties are proved automatically by extracting a finite mathematical model from the program.

Interactive proof assistants, such as Coq, HOL or Isabelle, apply when it is possible to specify exactly what the program must do: mathematical algorithms, sorting, compilation, etc. They rely on powerful mathematical logics to express the specification in full and to build partially automated proofs. They have been used to prove the correctness of arithmetic calculations and other key functions in Intel, AMD and ARM microprocessors, guaranteeing with certainty that these processors calculate correctly, without bugs. At Inria, Xavier Leroy used Coq to prove a C compiler correct. After an academic first in Australia, the French company Prove & Run guaranteed the absence of bugs in the kernel of an industrial operating system. Finally, critical distributed algorithms have been proved correct, for example with respect to the consistency of distributed databases. Many other applications will follow.
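To give a small, concrete taste of what a proof assistant checks, here is a minimal sketch in Lean 4, a proof assistant in the same family as Coq, HOL and Isabelle (the theorem name `reverse_twice` is ours, chosen for illustration; `List.reverse_reverse` is a standard library lemma). We state that reversing a list twice returns the original list, and the system accepts the proof only because its kernel has verified every logical step; no testing is involved.

```lean
-- A machine-checked statement and proof in Lean 4.
-- Claim: reversing a list twice yields the original list,
-- for every possible list, not just the ones we happen to try.
theorem reverse_twice (l : List Nat) : l.reverse.reverse = l :=
  -- The proof invokes a library lemma; Lean's kernel checks it,
  -- so acceptance is a mathematical guarantee covering all inputs.
  List.reverse_reverse l
```

Industrial verifications, such as the proof of a full compiler, chain together enormous numbers of such steps, but the principle is the same: the statement is the specification, and the checked proof covers all inputs at once.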

Model checking, on the other hand, targets the proof of partial but essential properties: safety properties, such as "the lift will never operate with the door open", and liveness properties, such as "the lift will eventually serve every passenger who calls it". The underlying logics and algorithms have made considerable progress over the last twenty years, leading to "solvers" (software that solves a specific class of mathematical problems) that are now widely used in industry for specific tasks. One example of their power is sudoku, a difficult game for us humans. Translate it into a Boolean problem (over variables taking only the values 0 and 1) and a modern Boolean solver will crack any sudoku in the blink of an eye.
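To make that Boolean translation concrete, here is a minimal sketch in Python, assuming the third-party python-sat package (pip install python-sat) for the Boolean solver; the encoding, one Boolean variable per (row, column, digit) triple, is the standard one, and all helper names are ours, chosen for illustration.

```python
# Sudoku as a Boolean satisfiability (SAT) problem: one Boolean
# variable per (row, column, digit) triple, true iff that cell
# holds that digit. Assumes the python-sat package for the solver.
from pysat.solvers import Glucose3

def var(r, c, d):
    # Map (row, column, digit), each in 1..9, to a unique integer in 1..729.
    return 81 * (r - 1) + 9 * (c - 1) + d

def groups():
    # The 27 groups (rows, columns, 3x3 blocks) whose cells must all differ.
    gs = [[(r, c) for c in range(1, 10)] for r in range(1, 10)]    # rows
    gs += [[(r, c) for r in range(1, 10)] for c in range(1, 10)]   # columns
    gs += [[(br + i, bc + j) for i in range(3) for j in range(3)]  # blocks
           for br in (1, 4, 7) for bc in (1, 4, 7)]
    return gs

def rules():
    clauses = []
    for r in range(1, 10):
        for c in range(1, 10):
            # Each cell holds at least one digit...
            clauses.append([var(r, c, d) for d in range(1, 10)])
            # ...and at most one digit.
            for d1 in range(1, 10):
                for d2 in range(d1 + 1, 10):
                    clauses.append([-var(r, c, d1), -var(r, c, d2)])
    # Each digit appears at most once in every row, column and block.
    for d in range(1, 10):
        for group in groups():
            for i, (r1, c1) in enumerate(group):
                for r2, c2 in group[i + 1:]:
                    clauses.append([-var(r1, c1, d), -var(r2, c2, d)])
    return clauses

def solve(grid):
    # grid: 9x9 list of lists of ints, with 0 marking an empty cell.
    solver = Glucose3(bootstrap_with=rules())
    for r in range(9):
        for c in range(9):
            if grid[r][c]:
                solver.add_clause([var(r + 1, c + 1, grid[r][c])])  # givens
    if not solver.solve():
        return None  # contradictory puzzle
    model = set(solver.get_model())
    return [[next(d for d in range(1, 10) if var(r, c, d) in model)
             for c in range(1, 10)] for r in range(1, 10)]
```

Given a 9x9 grid with 0 for the empty cells, the solver returns a completed grid, typically in milliseconds: the "blink of an eye" of the text.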

IT security is another priority, whether for online security protocols, the Internet of Things or electronic voting. These sensitive protocols are vulnerable to attack. An Inria/Microsoft project aims to build formally proven versions of them, which would prevent many of the major attacks on web transactions caused by residual flaws in the protocols. Formal verification remains a delicate task, however. Our smartphones and computers will keep telling us that "this update corrects bugs and enhances stability" for a while to come.

> AUTHOR

Gérard Berry

Computer Scientist

Gérard is a professor at the Collège de France and a member of the French Academy of Sciences.
