[ShortestPath] title = An Axiomatic Characterization of the Single-Source Shortest Path Problem author = Christine Rizkallah <https://www.mpi-inf.mpg.de/~crizkall/> base = HOL/HOL-Library/Graph_Theory date = 2013-05-22 topic = Mathematics/Graph Theory abstract = This theory is split into two sections. In the first section, we give a formal proof that a well-known axiomatic characterization of the single-source shortest path problem is correct. Namely, we prove that in a directed graph with a non-negative cost function on the edges, the single-source shortest path function is the only function that satisfies a set of four axioms. In the second section, we give a formal proof of the correctness of an axiomatic characterization of the single-source shortest path problem for directed graphs with general cost functions. The axioms here are more involved because we have to account for potential negative cycles in the graph. The axioms are summarized in three Isabelle locales. [Launchbury] title = The Correctness of Launchbury's Natural Semantics for Lazy Evaluation author = Joachim Breitner <http://pp.ipd.kit.edu/~breitner> base = HOL/HOLCF/Nominal2 date = 2013-01-31 topic = Computer Science/Programming Languages/Lambda Calculi abstract = In his seminal paper "Natural Semantics for Lazy Evaluation", John Launchbury proves his semantics correct with respect to a denotational semantics. We have formalized both semantics and identified an ambiguity in what the join operator should do: If it is understood as the least upper bound, as hinted at in the paper, the original proof fails. We fix the proof by taking a detour via a modified natural semantics with an explicit stack. In addition, we show that the original proof goes through when join is understood to be right-sided update.
[CCS] title = CCS in nominal logic author = Jesper Bengtson <http://www.itu.dk/people/jebe> base = HOL/HOL-Nominal date = 2012-05-29 topic = Computer Science/Process Calculi abstract = We formalise a large portion of CCS as described in Milner's book 'Communication and Concurrency' using the nominal datatype package in Isabelle. Our results include many of the standard theorems of bisimulation equivalence and congruence, for both weak and strong versions. One main goal of this formalisation is to keep the machine-checked proofs as close to their pen-and-paper counterparts as possible. <p> This entry is described in detail in <a href="http://www.itu.dk/people/jebe/files/thesis.pdf">Bengtson's thesis</a>. [Pi_Calculus] title = The pi-calculus in nominal logic author = Jesper Bengtson <http://www.itu.dk/people/jebe> base = HOL/HOL-Nominal date = 2012-05-29 topic = Computer Science/Process Calculi abstract = We formalise the pi-calculus using the nominal datatype package, based on ideas from the nominal logic by Pitts et al., and demonstrate an implementation in Isabelle/HOL. The purpose is to derive powerful induction rules for the semantics in order to conduct machine-checkable proofs, closely following the intuitive arguments found in manual proofs. In this way we have covered many of the standard theorems of bisimulation equivalence and congruence, both late and early, and both strong and weak, in a uniform manner. We thus provide one of the most extensive formalisations of the pi-calculus ever done inside a theorem prover. <p> A significant gain in our formulation is that agents are identified up to alpha-equivalence, thereby greatly reducing the arguments about bound names. This is a normal strategy for manual proofs about the pi-calculus, but that kind of hand waving has previously been difficult to incorporate smoothly in an interactive theorem prover.
We show how the nominal logic formalism and its support in Isabelle accomplishes this and thus significantly reduces the tedium of conducting completely formal proofs. This improves on previous work using weak higher order abstract syntax since we do not need extra assumptions to filter out exotic terms and can keep all arguments within a familiar first-order logic. <p> This entry is described in detail in <a href="http://www.itu.dk/people/jebe/files/thesis.pdf">Bengtson's thesis</a>. [Psi_Calculi] title = Psi-calculi in Isabelle author = Jesper Bengtson <http://www.itu.dk/people/jebe> base = HOL/HOL-Nominal date = 2012-05-29 topic = Computer Science/Process Calculi abstract = Psi-calculi are extensions of the pi-calculus, accommodating arbitrary nominal datatypes to represent not only data but also communication channels, assertions and conditions, giving them expressive power beyond the applied pi-calculus and the concurrent constraint pi-calculus. <p> We have formalised psi-calculi in the interactive theorem prover Isabelle using its nominal datatype package. One distinctive feature is that the framework needs to treat binding sequences, as opposed to single binders, in an efficient way. While different methods for formalising single binder calculi have been proposed over the last decades, representations for such binding sequences are not very well explored. <p> The main effort in the formalisation is to keep the machine-checked proofs as close to their pen-and-paper counterparts as possible. This includes treating all binding sequences as atomic elements, and creating custom induction and inversion rules that remove the bulk of manual alpha-conversions. <p> This entry is described in detail in <a href="http://www.itu.dk/people/jebe/files/thesis.pdf">Bengtson's thesis</a>.
[Circus] title = Isabelle/Circus author = Abderrahmane Feliachi <mailto:abderrahmane.feliachi@lri.fr>, Burkhart Wolff <mailto:wolff@lri.fr>, Marie-Claude Gaudel <mailto:mcg@lri.fr> base = HOL/HOLCF date = 2012-05-27 topic = Computer Science/Process Calculi, Computer Science/Programming Languages/Logics abstract = The Circus specification language combines elements for complex data and behavior specifications, using an integration of Z and CSP with a refinement calculus. Its semantics is based on Hoare and He's Unifying Theories of Programming (UTP). Isabelle/Circus is a formalization of the UTP and the Circus language in Isabelle/HOL. It contains proof rules and tactic support that allow for proofs of refinement for Circus processes (involving both data and behavioral aspects). <p> The Isabelle/Circus environment supports a syntax for the semantic definitions which is close to textbook presentations of Circus. This article contains an extended version of the corresponding VSTTE paper together with the complete formal development of its underlying commented theories. [Dijkstra_Shortest_Path] title = Dijkstra's Shortest Path Algorithm author = Benedikt Nordhoff <mailto:b.n@wwu.de>, Peter Lammich <http://cs.uni-muenster.de/sev/staff/lammich/> topic = Computer Science/Algorithms base = HOL/Collections/Refine_Monadic date = 2012-01-30 depends-on = Refine_Monadic, Collections abstract = We implement and prove correct Dijkstra's algorithm for the single-source shortest path problem, conceived in 1956 by E. Dijkstra. The algorithm is implemented using the data refinement framework for monadic, nondeterministic programs. An efficient implementation is derived using data structures from the Isabelle Collection Framework.
[Refine_Monadic] title = Refinement for Monadic Programs author = Peter Lammich <http://cs.uni-muenster.de/sev/staff/lammich/> topic = Computer Science/Programming Languages/Logics base = HOL/Collections date = 2012-01-30 depends-on = Collections abstract = We provide a framework for program and data refinement in Isabelle/HOL. The framework is based on a nondeterminism-monad with assertions, i.e., the monad carries a set of results or an assertion failure. Recursion is expressed by fixed points. For convenience, we also provide while and foreach combinators. <p> The framework provides tools to automate canonical tasks, such as verification condition generation, finding appropriate data refinement relations, and refining a program to an executable form that is accepted by the Isabelle/HOL code generator. <p> This submission comes with a collection of examples and a user guide, illustrating the usage of the framework. [PseudoHoops] title = Pseudo Hoops author = George Georgescu, Laurentiu Leustean, Viorel Preoteasa <http://users.abo.fi/vpreotea/> topic = Mathematics/Algebra base = HOL/LatticeProperties date = 2011-09-22 abstract = Pseudo-hoops are algebraic structures introduced by B. Bosbach under the name of complementary semigroups. In this formalization we prove some properties of pseudo-hoops and we define the basic concepts of filter and normal filter. The lattice of normal filters is isomorphic to the lattice of congruences of a pseudo-hoop. We also study some important classes of pseudo-hoops. Bounded Wajsberg pseudo-hoops are equivalent to pseudo-Wajsberg algebras and bounded basic pseudo-hoops are equivalent to pseudo-BL algebras. Some examples of pseudo-hoops are given in the last section of the formalization.
[MonoBoolTranAlgebra] title = Algebra of Monotonic Boolean Transformers author = Viorel Preoteasa <http://users.abo.fi/vpreotea/> topic = Computer Science/Programming Languages/Logics base = HOL/LatticeProperties date = 2011-09-22 abstract = Algebras of imperative programming languages have been successful in reasoning about programs. In general, an algebra of programs is an algebraic structure with programs as elements and with program compositions (sequential composition, choice, skip) as algebra operations. Various versions of these algebras were introduced to model partial correctness, total correctness, refinement, demonic choice, and other aspects. We formalize here an algebra which can be used to model total correctness, refinement, demonic and angelic choice. The basic model of this algebra consists of monotonic Boolean transformers (monotonic functions from a Boolean algebra to itself). [LatticeProperties] title = Lattice Properties author = Viorel Preoteasa <http://users.abo.fi/vpreotea/> topic = Mathematics/Algebra base = HOL date = 2011-09-22 abstract = This formalization introduces and collects some algebraic structures based on lattices and complete lattices for use in other developments. The structures introduced are modular lattices and lattice-ordered groups. In addition to the results proved for the new lattices, this formalization also introduces theorems about lattices and complete lattices in general. extra-history = Change history: [2012-01-05]: Removed the theory about distributive complete lattices, which is in the standard library now. Added a theory about well-founded and transitive relations and a result about fixpoints in complete lattices and well-founded relations. Moved the results about conjunctive and disjunctive functions to a new theory. Removed the syntactic classes for inf and sup, which are in the standard library now.
[Impossible_Geometry] title = Proving the Impossibility of Trisecting an Angle and Doubling the Cube author = Ralph Romanos <mailto:ralph.romanos@student.ecp.fr>, Lawrence Paulson <http://www.cl.cam.ac.uk/~lp15/> topic = Mathematics/Algebra, Mathematics/Geometry base = HOL date = 2012-08-05 abstract = Squaring the circle, doubling the cube and trisecting an angle, using a compass and straightedge alone, are classic unsolved problems first posed by the ancient Greeks. All three problems were proved to be impossible in the 19th century. The following document presents the proof of the impossibility of solving the latter two problems using Isabelle/HOL, following a proof by Carrega. The proof uses elementary methods: no Galois theory or field extensions. The set of points constructible using a compass and straightedge is defined inductively. Radical expressions, which involve only square roots and arithmetic of rational numbers, are defined, and we find that all constructible points have radical coordinates. Finally, doubling the cube and trisecting certain angles require solving certain cubic equations that can be proved to have no rational roots. The Isabelle proofs require a great many detailed calculations. [KBPs] title = Knowledge-based programs author = Peter Gammie <http://peteg.org/> topic = Computer Science/Automata and Formal Languages base = HOL/HOL-Library date = 2011-05-17 abstract = Knowledge-based programs (KBPs) are a formalism for directly relating agents' knowledge and behaviour. Here we present a general scheme for compiling KBPs to executable automata with a proof of correctness in Isabelle/HOL. We develop the algorithm top-down, using Isabelle's locale mechanism to structure these proofs, and show that two classic examples can be synthesised using Isabelle's code generator. extra-history = Change history: [2012-03-06]: Add some more views and revive the code generation.
[Tarskis_Geometry] title = The independence of Tarski's Euclidean axiom author = T. J. M. Makarios <mailto:tjm1983@gmail.com> topic = Mathematics/Geometry base = HOL date = 2012-10-30 abstract = Tarski's axioms of plane geometry are formalized and, using the standard real Cartesian model, shown to be consistent. A substantial theory of the projective plane is developed. Building on this theory, the Klein-Beltrami model of the hyperbolic plane is defined and shown to satisfy all of Tarski's axioms except his Euclidean axiom; thus Tarski's Euclidean axiom is shown to be independent of his other axioms of plane geometry. <p> An earlier version of this work was the subject of the author's <a href="http://researcharchive.vuw.ac.nz/handle/10063/2315">MSc thesis</a>, which contains natural-language explanations of some of the more interesting proofs. [General-Triangle] title = The General Triangle Is Unique author = Joachim Breitner <mailto:mail@joachim-breitner.de> topic = Mathematics/Geometry base = HOL date = 2011-04-01 abstract = Some acute-angled triangles are special, e.g. right-angled or isosceles triangles. Some are not of this kind, but, without measuring angles, look as if they were. In that sense, there is exactly one general triangle. This well-known fact is proven here formally. [LightweightJava] title = Lightweight Java author = Rok Strniša <http://rok.strnisa.com/lj/>, Matthew Parkinson <http://research.microsoft.com/people/mattpark/> topic = Computer Science/Programming Languages/Language Definitions base = HOL date = 2011-02-07 abstract = A fully-formalized and extensible minimal imperative fragment of Java. [Lower_Semicontinuous] title = Lower Semicontinuous Functions author = Bogdan Grechuk <mailto:grechukbogdan@yandex.ru> topic = Mathematics/Analysis base = HOL/HOL-Multivariate_Analysis date = 2011-01-08 abstract = We define the notions of lower and upper semicontinuity for functions from a metric space to the extended real line.
We prove that a function is both lower and upper semicontinuous if and only if it is continuous. We also give several equivalent characterizations of lower semicontinuity. In particular, we prove that a function is lower semicontinuous if and only if its epigraph is a closed set. Also, we introduce the notion of the lower semicontinuous hull of an arbitrary function and prove its basic properties. [RIPEMD-160-SPARK] title = RIPEMD-160 author = Fabian Immler <mailto:immler@in.tum.de> topic = Computer Science/Programming Languages/Static Analysis date = 2011-01-10 base = HOL/HOL-Word abstract = This work presents a verification of an implementation in SPARK/Ada of the cryptographic hash function RIPEMD-160. A functional specification of RIPEMD-160 is given in Isabelle/HOL. Proofs for the verification conditions generated by the static-analysis toolset of SPARK certify the functional correctness of the implementation. [Regular-Sets] title = Regular Sets and Expressions author = Alexander Krauss <http://www.in.tum.de/~krauss>, Tobias Nipkow <http://www.in.tum.de/~nipkow> topic = Computer Science/Automata and Formal Languages date = 2010-05-12 base = HOL/HOL-Library abstract = This is a library of constructions on regular expressions and languages. It provides the operations of concatenation, Kleene star and derivative on languages. Regular expressions and their meaning are defined. An executable equivalence checker for regular expressions is verified; it does not need automata but works directly on regular expressions. <i>By mapping regular expressions to binary relations, an automatic and complete proof method for (in)equalities of binary relations over union, concatenation and (reflexive) transitive closure is obtained.</i> <p> Extended regular expressions with complement and intersection are also defined and an equivalence checker is provided.
extra-history = Change history: [2011-08-26]: Christian Urban added a theory about derivatives and partial derivatives of regular expressions<br> [2012-05-10]: Tobias Nipkow added extended regular expressions<br> [2012-05-10]: Tobias Nipkow added equivalence checking with partial derivatives [Myhill-Nerode] title = The Myhill-Nerode Theorem Based on Regular Expressions author = Chunhan Wu, Xingyuan Zhang, Christian Urban <http://www.inf.kcl.ac.uk/staff/urbanc/> topic = Computer Science/Automata and Formal Languages date = 2011-08-26 base = HOL/HOL-Library abstract = There are many proofs of the Myhill-Nerode theorem using automata. In this library we give a proof entirely based on regular expressions, since regularity of languages can be conveniently defined using regular expressions (it is more painful in HOL to define regularity in terms of automata). We prove the first direction of the Myhill-Nerode theorem by solving equational systems that involve regular expressions. For the second direction we give two proofs: one using tagging-functions and another using partial derivatives. We also establish various closure properties of regular languages. Most details of the theories are described in our ITP 2011 paper. [Presburger-Automata] title = Formalizing the Logic-Automaton Connection author = Stefan Berghofer <http://www.in.tum.de/~berghofe>, Markus Reiter date = 2009-12-03 base = HOL/HOL-Library topic = Computer Science/Automata and Formal Languages, Logic abstract = This work presents a formalization of a library for automata on bit strings. It forms the basis of a reflection-based decision procedure for Presburger arithmetic, which is efficiently executable thanks to Isabelle's code generator. With this work, we therefore provide a mechanized proof of a well-known connection between logic and automata theory. The formalization is also described in a publication [TPHOLs 2009]. 
[Functional-Automata] title = Functional Automata author = Tobias Nipkow <http://www.in.tum.de/~nipkow/> date = 2004-03-30 base = HOL/HOL-Library topic = Computer Science/Automata and Formal Languages abstract = This theory defines deterministic and nondeterministic automata in a functional representation: the transition function/relation and the finality predicate are just functions. Hence the state space may be infinite. It is shown how to convert regular expressions into such automata. A scanner (generator) is implemented with the help of functional automata: the scanner chops the input up into longest recognized substrings. Finally we also show how to convert a certain subclass of functional automata (essentially the finite deterministic ones) into regular sets. depends-on = Regular-Sets [Statecharts] title = Formalizing Statecharts using Hierarchical Automata author = Steffen Helke <mailto:helke@cs.tu-berlin.de>, Florian Kammüller <mailto:flokam@cs.tu-berlin.de> topic = Computer Science/Automata and Formal Languages date = 2010-08-08 abstract = We formalize in Isabelle/HOL the abstract syntax and a synchronous step semantics for the specification language Statecharts. The formalization is based on Hierarchical Automata, which allow a structural decomposition of Statecharts into Sequential Automata. To support the composition of Statecharts, we introduce calculating operators to construct a Hierarchical Automaton in a stepwise manner. Furthermore, we present a complete semantics of Statecharts including a theory of data spaces, which enables the modelling of racing effects. We also adapt CTL for Statecharts to build a bridge for future combinations with model checking. However, the main motivation of this work is to provide a sound and complete basis for reasoning on Statecharts. As a central meta theorem we prove that the well-formedness of a Statechart is preserved by the semantics.
[Stuttering_Equivalence] title = Stuttering Equivalence author = Stephan Merz <http://www.loria.fr/~merz/> topic = Computer Science/Automata and Formal Languages date = 2012-05-07 abstract = <p>Two omega-sequences are stuttering equivalent if they differ only by finite repetitions of elements. Stuttering equivalence is a fundamental concept in the theory of concurrent and distributed systems. Notably, Lamport argues that refinement notions for such systems should be insensitive to finite stuttering. Peled and Wilke showed that all PLTL (propositional linear-time temporal logic) properties that are insensitive to stuttering equivalence can be expressed without the next-time operator. Stuttering equivalence is also important for certain verification techniques such as partial-order reduction for model checking.</p> <p>We formalize stuttering equivalence in Isabelle/HOL. Our development relies on the notion of stuttering sampling functions that may skip blocks of identical sequence elements. We also encode PLTL and prove the theorem due to Peled and Wilke.</p> extra-history = Change history: [2013-01-31]: Added encoding of PLTL and proved Peled and Wilke's theorem. Adjusted abstract accordingly. [Tree-Automata] title = Tree Automata author = Peter Lammich <http://cs.uni-muenster.de/sev/staff/lammich/> date = 2009-11-25 base = HOL/HOL-Library topic = Computer Science/Automata and Formal Languages abstract = This work presents a machine-checked tree automata library for Standard ML, OCaml and Haskell. The algorithms are made efficient by using appropriate data structures such as red-black trees. The available algorithms for non-deterministic automata include membership query, reduction, intersection, union, and emptiness check with computation of a witness for non-emptiness. The executable algorithms are derived from less-concrete, non-executable algorithms using data-refinement techniques. The concrete data structures are from the Isabelle Collections Framework.
Moreover, this work contains a formalization of the class of tree-regular languages and its closure properties under set operations. depends-on = Collections [DPT-SAT-Solver] title = A Fast SAT Solver for Isabelle in Standard ML topic = Computer Science/Algorithms author = Armin Heller date = 2009-12-09 abstract = This contribution contains a fast SAT solver for Isabelle written in Standard ML. By loading the theory <tt>DPT_SAT_Solver</tt>, the SAT solver installs itself (under the name ``dptsat'') and certain Isabelle tools like Refute will start using it automatically. This is a port of the DPT (Decision Procedure Toolkit) SAT Solver written in OCaml. [Depth-First-Search] title = Depth First Search author = Toshiaki Nishihara, Yasuhiko Minamide <http://www.score.is.tsukuba.ac.jp/~minamide/> date = 2004-06-24 topic = Computer Science/Algorithms abstract = Depth-first search of a graph is formalized with recdef. It is shown that it visits all nodes reachable from a given list of nodes. Executable ML code of depth-first search is obtained using the code generation feature of Isabelle/HOL. [FFT] title = Fast Fourier Transform author = Clemens Ballarin <http://www4.in.tum.de/~ballarin/> date = 2005-10-12 topic = Computer Science/Algorithms abstract = We formalise a functional implementation of the FFT algorithm over the complex numbers, and its inverse. Both are shown equivalent to the usual definitions of these operations through Vandermonde matrices. They are also shown to be inverse to each other; more precisely, the composition of the inverse and the transformation yields the identity up to a scalar. [Gauss-Jordan-Elim-Fun] title = Gauss-Jordan Elimination for Matrices Represented as Functions author = Tobias Nipkow <http://www.in.tum.de/~nipkow/> date = 2011-08-19 topic = Computer Science/Algorithms, Mathematics/Algebra abstract = This theory provides a compact formulation of Gauss-Jordan elimination for matrices represented as functions.
Its distinctive feature is succinctness. It is not meant for large computations. [GraphMarkingIBP] title = Verification of the Deutsch-Schorr-Waite Graph Marking Algorithm using Data Refinement author = Viorel Preoteasa <http://users.abo.fi/vpreotea/>, Ralph-Johan Back <http://users.abo.fi/Ralph-Johan.Back/> date = 2010-05-28 topic = Computer Science/Algorithms abstract = The verification of the Deutsch-Schorr-Waite graph marking algorithm is used as a benchmark in many formalizations of pointer programs. The main purpose of this mechanization is to show how data refinement of invariant based programs can be used in verifying practical algorithms. The verification starts with an abstract algorithm working on a graph given by a relation <i>next</i> on nodes. Gradually the abstract program is refined into the Deutsch-Schorr-Waite graph marking algorithm, where only one additional bit of memory per graph node is used for marking. depends-on = DataRefinementIBP extra-history = Change history: [2012-01-05]: Updated for the new definition of data refinement and the new syntax for demonic and angelic update statements [Efficient-Mergesort] title = Efficient Mergesort topic = Computer Science/Algorithms date = 2011-11-09 base = HOL/HOL-Library author = Christian Sternagel <mailto:c.sternagel@gmail.com> abstract = We provide a formalization of the mergesort algorithm as used in GHC's Data.List module, proving correctness and stability. Furthermore, experimental data suggests that generated (Haskell-)code for this algorithm is much faster than for previous algorithms available in the Isabelle distribution. extra-history = Change history: [2012-10-24]: Added reference to journal article [SATSolverVerification] title = Formal Verification of Modern SAT Solvers author = Filip Maric <http://poincare.matf.bg.ac.rs/~filip/> date = 2008-07-23 base = HOL/HOL-Library topic = Computer Science/Algorithms abstract = This document contains formal correctness proofs of modern SAT solvers.
Following (Krstic et al., 2007) and (Nieuwenhuis et al., 2006), solvers are described using state-transition systems. Several different SAT solver descriptions are given and their partial correctness and termination are proved. These include: <ul> <li> a solver based on the classical DPLL procedure (using only a backtrack-search with unit propagation),</li> <li> a very general solver with backjumping and learning (similar to the description given in (Nieuwenhuis et al., 2006)), and</li> <li> a solver with a specific conflict analysis algorithm (similar to the description given in (Krstic et al., 2007)).</li> </ul> Within the SAT solver correctness proofs, a large number of lemmas about propositional logic and CNF formulae are proved. This theory is self-contained and could be used for further exploration of properties of CNF-based SAT algorithms. [Transitive-Closure] title = Executable Transitive Closures of Finite Relations topic = Computer Science/Algorithms date = 2011-03-14 author = Christian Sternagel <mailto:christian.sternagel@uibk.ac.at>, René Thiemann <mailto:rene.thiemann@uibk.ac.at> license = LGPL abstract = We provide a generic work-list algorithm to compute the transitive closure of finite relations where only successors of newly detected states are generated. This algorithm is then instantiated for lists over arbitrary carriers and red-black trees (which are faster but require a linear order on the carrier), respectively. Our formalization was performed as part of the IsaFoR/CeTA project, where reflexive transitive closures of large tree automata have to be computed.
depends-on = Collections, Abstract-Rewriting [Transitive-Closure-II] title = Executable Transitive Closures topic = Computer Science/Algorithms date = 2012-02-29 author = René Thiemann <mailto:rene.thiemann@uibk.ac.at> license = LGPL abstract = We provide a generic work-list algorithm to compute the (reflexive-)transitive closure of relations where only successors of newly detected states are generated. In contrast to our previous work, the relations do not have to be finite, but each element need only have finitely many (indirect) successors. Moreover, a subsumption relation can be used instead of pure equality. An executable variant of the algorithm is available where the generic operations are instantiated with list operations. This formalization was performed as part of the IsaFoR/CeTA project, and it has been used to certify size-change termination proofs where large transitive closures have to be computed. depends-on = Regular-Sets [MuchAdoAboutTwo] title = Much Ado About Two author = Sascha Böhme <http://www4.in.tum.de/~boehmes/> date = 2007-11-06 base = HOL/HOL-Library topic = Computer Science/Algorithms abstract = This article is an Isabelle formalisation of a paper with the same title. In a manner similar to Knuth's 0-1-principle for sorting algorithms, that paper develops a 0-1-2-principle for parallel prefix computations. [DiskPaxos] title = Proving the Correctness of Disk Paxos date = 2005-06-22 author = Mauro Jaskelioff <http://www.fceia.unr.edu.ar/~mauro/>, Stephan Merz <http://www.loria.fr/~merz/> topic = Computer Science/Algorithms/Distributed abstract = Disk Paxos is an algorithm for building arbitrary fault-tolerant distributed systems. The specification of Disk Paxos has been proved correct informally and tested using the TLC model checker, but up to now, it has never been fully formally verified.
In this work we have formally verified its correctness using the Isabelle theorem prover and the HOL logic system, showing that Isabelle is a practical tool for verifying properties of TLA+ specifications. [GenClock] title = Formalization of a Generalized Protocol for Clock Synchronization author = Alwen Tiu <http://users.cecs.anu.edu.au/~tiu/> date = 2005-06-24 topic = Computer Science/Algorithms/Distributed abstract = We formalize the generalized Byzantine fault-tolerant clock synchronization protocol of Schneider. This protocol abstracts from particular algorithms or implementations for clock synchronization. This abstraction includes several assumptions on the behaviors of physical clocks and on general properties of concrete algorithms/implementations. Based on these assumptions the correctness of the protocol is proved by Schneider. His proof was later verified by Shankar using the theorem prover EHDM (precursor to PVS). Our formalization in Isabelle/HOL is based on Shankar's formalization. [ClockSynchInst] title = Instances of Schneider's generalized protocol of clock synchronization author = Damián Barsotti <http://www.cs.famaf.unc.edu.ar/~damian/> date = 2006-03-15 topic = Computer Science/Algorithms/Distributed abstract = F. B. Schneider ("Understanding protocols for Byzantine clock synchronization") generalizes a number of protocols for Byzantine fault-tolerant clock synchronization and presents a uniform proof for their correctness. In Schneider's schema, each processor maintains a local clock by periodically adjusting each value to one computed by a convergence function applied to the readings of all the clocks. Then, correctness of an algorithm, i.e. that the readings of two clocks at any time are within a fixed bound of each other, is based upon some conditions on the convergence function. 
To prove that a particular clock synchronization algorithm is correct, it suffices to show that the convergence function used by the algorithm meets Schneider's conditions. Using the theorem prover Isabelle, we formalize the proofs that the convergence functions of two algorithms, namely, the Interactive Convergence Algorithm (ICA) of Lamport and Melliar-Smith and the Fault-tolerant Midpoint algorithm of Lundelius-Lynch, meet Schneider's conditions. Furthermore, we experiment with handling some parts of the proofs with fully automatic tools like ICS and CVC Lite. These theories are part of a joint work with Alwen Tiu and Leonor P. Nieto, <a href="http://users.rsise.anu.edu.au/~tiu/clocksync.pdf">"Verification of Clock Synchronization Algorithms: Experiments on a combination of deductive tools"</a>, in the proceedings of AVOCS 2005. In this work the correctness of Schneider's schema was also verified using Isabelle (entry <a href="GenClock.shtml">GenClock</a> in the AFP).
<p> Indeed, there have been numerous misunderstandings on what exactly a given algorithm is supposed to realize in what kinds of environments. Moreover, the abundance of subtly different computational models complicates comparisons between different algorithms. Charron-Bost and Schiper introduced the Heard-Of model for representing algorithms and failure assumptions in a uniform framework, simplifying comparisons between algorithms. <p> In this contribution, we represent the Heard-Of model in Isabelle/HOL. We define two semantics of runs of algorithms with different units of atomicity and relate these through a reduction theorem that allows us to verify algorithms in the coarse-grained semantics (where proofs are easier) and infer their correctness for the fine-grained one (which corresponds to actual executions). We instantiate the framework by verifying six Consensus algorithms that differ in the underlying algorithmic mechanisms and the kinds of faults they tolerate. [Abortable_Linearizable_Modules] title = Abortable Linearizable Modules author = Rachid Guerraoui <mailto:rachid.guerraoui@epfl.ch>, Viktor Kuncak <http://lara.epfl.ch/~kuncak/>, Giuliano Losa <mailto:giuliano.losa@epfl.ch> date = 2012-03-01 topic = Computer Science/Algorithms/Distributed base = HOL/HOLCF abstract = We define the Abortable Linearizable Module automaton (ALM for short) and prove its key composition property using the IOA theory of HOLCF. The ALM is at the heart of the Speculative Linearizability framework. This framework simplifies devising correct speculative algorithms by enabling their decomposition into independent modules that can be analyzed and proved correct in isolation. It is particularly useful when working in a distributed environment, where the need to tolerate faults and asynchrony has made current monolithic protocols so intricate that it is no longer tractable to check their correctness.
Our theory contains a typical example of a refinement proof in the I/O-automata framework of Lynch and Tuttle. [AVL-Trees] title = AVL Trees author = Tobias Nipkow <http://www.in.tum.de/~nipkow/>, Cornelia Pusch date = 2004-03-19 topic = Computer Science/Data Structures abstract = Two formalizations of AVL trees with room for extensions. The first formalization is monolithic and shorter; the second one is in two stages, longer and a bit simpler. The final implementation is the same. If you are interested in developing this further, please contact <tt>gerwin.klein@nicta.com.au</tt>. extra-history = Change history: [2011-04-11]: Ondrej Kuncar added delete function [BDD] title = BDD Normalisation author = Veronika Ortner, Norbert Schirmer date = 2008-02-29 base = HOL/Simpl topic = Computer Science/Data Structures abstract = We present the verification of the normalisation of a binary decision diagram (BDD). The normalisation follows the original algorithm presented by Bryant in 1986 and transforms an ordered BDD into a reduced, ordered and shared BDD. The verification is based on Hoare logics. depends-on = Simpl [BinarySearchTree] title = Binary Search Trees author = Viktor Kuncak <http://lara.epfl.ch/~kuncak/> date = 2004-04-05 topic = Computer Science/Data Structures abstract = The correctness of binary search tree operations (lookup, insert and remove) implementing a set is shown. Two versions are given, for both structured and linear (tactic-style) proofs. An implementation of integer-indexed maps is also verified. [Binomial-Queues] title = Functional Binomial Queues author = René Neumann <mailto:neumannr@in.tum.de> date = 2010-10-28 topic = Computer Science/Data Structures abstract = Priority queues are an important data structure and efficient implementations of them are crucial. We implement a functional variant of binomial queues in Isabelle/HOL and show its functional correctness.
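To give a flavour of the data structure, here is a minimal, unverified Python sketch of a functional binomial queue (the representation and names are illustrative, not those of the Isabelle theory): a binomial tree is a triple (rank, root, children), and a queue is a list of trees of strictly increasing rank.

```python
# Hypothetical sketch of a functional binomial queue (not the verified
# Isabelle code).  Linking two trees of equal rank keeps the smaller
# root on top, preserving the heap property.

def link(t1, t2):
    """Combine two binomial trees of equal rank."""
    (r, x, xs), (_, y, ys) = t1, t2
    if x <= y:
        return (r + 1, x, [t2] + xs)
    return (r + 1, y, [t1] + ys)

def ins_tree(t, ts):
    """Insert a tree, linking while a tree of the same rank is present
    (the binary-counter carry propagation)."""
    if not ts or t[0] < ts[0][0]:
        return [t] + ts
    return ins_tree(link(t, ts[0]), ts[1:])

def insert(x, ts):
    return ins_tree((0, x, []), ts)

def find_min(ts):
    return min(root for (_, root, _) in ts)

q = []
for x in [5, 3, 8, 1]:
    q = insert(x, q)
print(find_min(q))  # scans the logarithmically many roots
```

The functional correctness statement of the entry amounts to showing that such operations preserve the heap and rank invariants and that find_min returns the minimum of the represented multiset.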
A verification against an abstract reference specification of priority queues has also been attempted, but could not be completed in full. [Binomial-Heaps] title = Binomial Heaps and Skew Binomial Heaps author = Rene Meis <mailto:rene.meis@uni-muenster.de>, Finn Nielsen <mailto:finn.nielsen@uni-muenster.de>, Peter Lammich <http://cs.uni-muenster.de/sev/staff/lammich/> date = 2010-10-28 base = HOL/HOL-Library topic = Computer Science/Data Structures abstract = We implement and prove correct binomial heaps and skew binomial heaps. Both are data structures for priority queues. While binomial heaps have logarithmic <em>findMin</em>, <em>deleteMin</em>, <em>insert</em>, and <em>meld</em> operations, skew binomial heaps have constant time <em>findMin</em>, <em>insert</em>, and <em>meld</em> operations, and only the <em>deleteMin</em>-operation is logarithmic. This is achieved by using <em>skew links</em> to avoid cascading linking on <em>insert</em>-operations, and <em>data-structural bootstrapping</em> to get constant-time <em>findMin</em> and <em>meld</em> operations. Our implementation follows the paper by Brodal and Okasaki. [Finger-Trees] title = Finger Trees author = Benedikt Nordhoff <mailto:b_nord01@uni-muenster.de>, Stefan Körner <mailto:s_koer03@uni-muenster.de>, Peter Lammich <http://cs.uni-muenster.de/sev/staff/lammich/> date = 2010-10-28 base = HOL/HOL-Library topic = Computer Science/Data Structures abstract = We implement and prove correct 2-3 finger trees. Finger trees are a general-purpose data structure that can be used to efficiently implement other data structures, such as priority queues. Intuitively, a finger tree is an annotated sequence, where the annotations are elements of a monoid. Apart from operations to access the ends of the sequence, the main operation is to split the sequence at the point where a <em>monotone predicate</em> over the sum of the left part of the sequence becomes true for the first time.
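The monotone-predicate split just described can be illustrated on a flat list (a hypothetical toy, not the 2-3 finger tree of the entry, which performs this split in logarithmic time):

```python
# Hypothetical illustration of the finger-tree split specification:
# split an annotated sequence at the first point where a monotone
# predicate on the accumulated monoid sum becomes true.  A flat list
# does this in linear time; the finger tree's point is to do it in
# logarithmic time with the same specification.

def split_on(pred, measure, combine, zero, xs):
    """Return (before, after) where 'before' is the longest prefix whose
    monoid sum does not yet satisfy pred."""
    acc = zero
    for i, x in enumerate(xs):
        acc = combine(acc, measure(x))
        if pred(acc):
            return xs[:i], xs[i:]
    return xs, []

# With measure = 1 and pred = "sum > n" the split implements positional
# indexing on the sequence.
before, after = split_on(lambda n: n > 2, lambda _: 1, int.__add__, 0, "abcde")
print(before, after)  # prints: ab cde
```

Different choices of monoid (sizes, priorities, keys) instantiate the same split to random access, priority queues, or ordered sequences, which is what makes the structure general-purpose.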
The implementation follows the paper of Hinze and Paterson. The code generator can be used to get efficient, verified code. [FinFun] title = Code Generation for Functions as Data author = Andreas Lochbihler <http://www.infsec.ethz.ch/people/andreloc> date = 2009-05-06 topic = Computer Science/Data Structures abstract = <b>FinFun is now part of the Isabelle distribution. The entry is kept for archival; maintained but not developed further. Use the Isabelle distribution version instead.</b><p>FinFuns are total functions that are constant except for a finite set of points, i.e. a generalisation of finite maps. They are formalised as a new type in Isabelle/HOL such that the code generator can handle equality tests and quantification on FinFuns. On the code output level, FinFuns are explicitly represented by constant functions and pointwise updates, similarly to associative lists. Inside the logic, they behave like ordinary functions with extensionality. Via the update/constant pattern, a recursion combinator and an induction rule for FinFuns allow for defining and reasoning about operators on FinFuns that are also executable.</p> extra-history = Change history: [2010-08-13]: new concept domain of a FinFun as a FinFun (revision <a href="http://afp.hg.sourceforge.net/hgweb/afp/afp/rev/34b3517cbc09">34b3517cbc09</a>)<br> [2010-11-04]: new conversion function from FinFun to list of elements in the domain (revision <a href="http://afp.hg.sourceforge.net/hgweb/afp/afp/rev/0c167102e6ed">0c167102e6ed</a>)<br> [2012-03-07]: replace sets as FinFuns by predicates as FinFuns because the set type constructor has been reintroduced (revision <a href="http://afp.hg.sourceforge.net/hgweb/afp/afp/rev/b7aa87989f3a">b7aa87989f3a</a>) [Collections] title = Collections Framework author = Peter Lammich <http://cs.uni-muenster.de/sev/staff/lammich/>, Andreas Lochbihler <http://www.infsec.ethz.ch/people/andreloc>, Thomas Tuerk date = 2009-11-25 topic = Computer Science/Data Structures abstract
= This development provides an efficient, extensible, machine-checked collections framework. The library adopts the concepts of interface, implementation and generic algorithm from object-oriented programming and implements them in Isabelle/HOL. The framework features the use of data refinement techniques to refine an abstract specification (using high-level concepts like sets) to a more concrete implementation (using collection data structures such as red-black trees). The code generator of Isabelle/HOL can be used to generate efficient code. depends-on = Binomial-Heaps, Finger-Trees extra-history = Change history: [2010-10-08]: New Interfaces: OrderedSet, OrderedMap, List. Fifo now implements list-interface: Function names changed: put/get --> enqueue/dequeue. New Implementations: ArrayList, ArrayHashMap, ArrayHashSet, TrieMap, TrieSet. Invariant-free data structures: invariant implicitly hidden in typedef. Record-interfaces: All operations of an interface encapsulated as record. Examples moved to examples subdirectory.<br> [2010-12-01]: New Interfaces: Priority Queues, Annotated Lists. Implemented by finger trees, (skew) binomial queues. [2011-10-10]: SetSpec: Added operations: sng, isSng, bexists, size_abort, diff, filter, iterate_rule_insertP MapSpec: Added operations: sng, isSng, iterate_rule_insertP, bexists, size, size_abort, restrict, map_image_filter, map_value_image_filter Some maintenance changes [2012-04-25]: New iterator foundation by Tuerk. Various maintenance changes. [Containers] title = Light-weight Containers author = Andreas Lochbihler <http://www.infsec.ethz.ch/people/andreloc> date = 2013-04-15 base = HOL/HOL-Library/Datatype_Order_Generator topic = Computer Science/Data Structures abstract = This development provides a framework for container types like sets and maps such that generated code implements these containers with different (efficient) data structures.
Thanks to type classes and refinement during code generation, this light-weight approach can seamlessly replace Isabelle's default setup for code generation. Heuristics automatically pick one of the available data structures depending on the type of elements to be stored, but users can also choose on their own. The extensible design permits adding more implementations at any time. <p> To support arbitrary nesting of sets, we define a linear order on sets based on a linear order of the elements and provide efficient implementations. It even allows comparing complements with non-complements. depends-on = Datatype_Order_Generator [FileRefinement] title = File Refinement author = Karen Zee <http://www.mit.edu/~kkz/>, Viktor Kuncak <http://lara.epfl.ch/~kuncak/> date = 2004-12-09 topic = Computer Science/Data Structures abstract = These theories illustrate the verification of basic file operations (file creation, file read and file write) in the Isabelle theorem prover. We describe a file at two levels of abstraction: an abstract file represented as a resizable array, and a concrete file represented using data blocks. [Datatype_Order_Generator] title = Generating linear orders for datatypes author = René Thiemann <mailto:rene.thiemann@uibk.ac.at> date = 2012-08-07 base = HOL/HOL-Library topic = Computer Science/Data Structures abstract = We provide a framework for registering automatic methods to derive class instances of datatypes, as is possible using Haskell's ``deriving Ord, Show, ...'' feature. <p> We further implemented such automatic methods to derive (linear) orders or hash-functions which are required in the Isabelle Collection Framework. Moreover, for the tactic of Huffman and Krauss to show that a datatype is countable, we implemented a wrapper so that this tactic becomes accessible in our framework. <p> Our formalization was performed as part of the <a href="http://cl-informatik.uibk.ac.at/software/ceta">IsaFoR/CeTA</a> project.
With our new tactic we could completely remove tedious proofs for linear orders of two datatypes. [List-Index] title = List Index date = 2010-02-20 author = Tobias Nipkow <http://www.in.tum.de/~nipkow> topic = Computer Science/Data Structures abstract = This theory provides functions for finding the index of an element in a list, by predicate and by value. [List-Infinite] title = Infinite Lists date = 2011-02-23 base = HOL/HOL-Library author = David Trachtenherz <http://www4.in.tum.de/~trachten> topic = Computer Science/Data Structures abstract = We introduce a theory of infinite lists in HOL formalized as functions over naturals (folder ListInf, theories ListInf and ListInf_Prefix). It also provides additional results for finite lists (theory ListInf/List2), natural numbers (folder CommonArith, esp. division/modulo, naturals with infinity), sets (folder CommonSet, esp. cutting/truncating sets, traversing sets of naturals). [Matrix] title = Executable Matrix Operations on Matrices of Arbitrary Dimensions topic = Computer Science/Data Structures date = 2010-06-17 author = Christian Sternagel <http://cl-informatik.uibk.ac.at/~griff>, René Thiemann <http://cl-informatik.uibk.ac.at/~thiemann> license = LGPL abstract = We provide the operations of matrix addition, multiplication, transposition, and matrix comparisons as executable functions over ordered semirings. Moreover, it is proven that strongly normalizing (monotone) orders can be lifted to strongly normalizing (monotone) orders over matrices. We further show that the standard semirings over the naturals, integers, and rationals, as well as the arctic semirings satisfy the axioms that are required by our matrix theory. Our formalization is part of the <a href="http://cl-informatik.uibk.ac.at/software/ceta">CeTA</a> system which contains several termination techniques. The provided theories have been essential to formalize matrix-interpretations and arctic interpretations. 
depends-on = Abstract-Rewriting extra-history = Change history: [2010-09-17]: Moved theory on arbitrary (ordered) semirings to Abstract Rewriting. [Huffman] title = The Textbook Proof of Huffman's Algorithm author = Jasmin Christian Blanchette <http://www4.in.tum.de/~blanchet> date = 2008-10-15 topic = Computer Science/Data Structures abstract = Huffman's algorithm is a procedure for constructing a binary tree with minimum weighted path length. This report presents a formal proof of the correctness of Huffman's algorithm written using Isabelle/HOL. Our proof closely follows the sketches found in standard algorithms textbooks, uncovering a few snags in the process. Another distinguishing feature of our formalization is the use of custom induction rules to help Isabelle's automatic tactics, leading to very short proofs for most of the lemmas. [Coinductive] title = Coinductive topic = Computer Science/Functional Programming author = Andreas Lochbihler <http://www.infsec.ethz.ch/people/andreloc> date = 2010-02-12 base = HOL/HOL-Library abstract = This article collects formalisations of general-purpose coinductive data types and sets. Currently, it contains coinductive natural numbers, coinductive lists, i.e. lazy lists or streams, infinite streams, coinductive terminated lists, coinductive resumptions, a library of operations on coinductive lists, and a version of König's lemma as an application for coinductive lists.<br>The initial theory was contributed by Paulson and Wenzel. Extensions and other coinductive formalisations of general interest are welcome. 
extra-history = Change history: [2010-06-10]: coinductive lists: setup for quotient package (revision <a href="http://afp.hg.sourceforge.net/hgweb/afp/afp/rev/015574f3bf3c">015574f3bf3c</a>)<br> [2010-06-28]: new codatatype terminated lazy lists (revision <a href="http://afp.hg.sourceforge.net/hgweb/afp/afp/rev/e12de475c558">e12de475c558</a>)<br> [2010-08-04]: terminated lazy lists: setup for quotient package; more lemmas (revision <a href="http://afp.hg.sourceforge.net/hgweb/afp/afp/rev/6ead626f1d01">6ead626f1d01</a>)<br> [2010-08-17]: Koenig's lemma as an example application for coinductive lists (revision <a href="http://afp.hg.sourceforge.net/hgweb/afp/afp/rev/f81ce373fa96">f81ce373fa96</a>)<br> [2011-02-01]: lazy implementation of coinductive (terminated) lists for the code generator (revision <a href="http://afp.hg.sourceforge.net/hgweb/afp/afp/rev/6034973dce83">6034973dce83</a>)<br> [2011-07-20]: new codatatype resumption (revision <a href="http://afp.hg.sourceforge.net/hgweb/afp/afp/rev/811364c776c7">811364c776c7</a>)<br> [2012-06-27]: new codatatype stream with operations (with contributions by Peter Gammie) (revision <a href="http://afp.hg.sourceforge.net/hgweb/afp/afp/rev/dd789a56473c">dd789a56473c</a>)<br> [Stream-Fusion] title = Stream Fusion author = Brian Huffman <http://cs.pdx.edu/~brianh> topic = Computer Science/Functional Programming date = 2009-04-29 base = HOL/HOLCF abstract = Stream Fusion is a system for removing intermediate list structures from Haskell programs; it consists of a Haskell library along with several compiler rewrite rules. (The library is available <a href="http://hackage.haskell.org/package/stream-fusion">online</a>.)<br><br>These theories contain a formalization of much of the Stream Fusion library in HOLCF. Lazy list and stream types are defined, along with coercions between the two types, as well as an equivalence relation for streams that generate the same list. 
List and stream versions of map, filter, foldr, enumFromTo, append, zipWith, and concatMap are defined, and the stream versions are shown to respect stream equivalence. [Tycon] title = Type Constructor Classes and Monad Transformers author = Brian Huffman <mailto:huffman@in.tum.de> date = 2012-06-26 base = HOL/HOLCF topic = Computer Science/Functional Programming abstract = These theories contain a formalization of first-class type constructors and axiomatic constructor classes for HOLCF. This work is described in detail in the ICFP 2012 paper <i>Formal Verification of Monad Transformers</i> by the author. The formalization is a revised and updated version of earlier joint work with Matthews and White. <P> Based on the hierarchy of type classes in Haskell, we define classes for functors, monads, monad-plus, etc. Each one includes all the standard laws as axioms. We also provide a new user command, tycondef, for defining new type constructors in HOLCF. Using tycondef, we instantiate the type class hierarchy with various monads and monad transformers. [CoreC++] title = CoreC++ author = Daniel Wasserrab <http://pp.info.uni-karlsruhe.de/personhp/daniel_wasserrab.php> date = 2006-05-15 base = HOL/HOL-Library topic = Computer Science/Programming Languages/Language Definitions abstract = We present an operational semantics and type safety proof for multiple inheritance in C++. The semantics models the behavior of method calls, field accesses, and two forms of casts in C++ class hierarchies. For explanations see the OOPSLA 2006 paper by Wasserrab, Nipkow, Snelting and Tip. [FeatherweightJava] title = A Theory of Featherweight Java in Isabelle/HOL author = J.
Nathan Foster <http://www.cs.cornell.edu/~jnfoster/>, Dimitrios Vytiniotis <http://research.microsoft.com/en-us/people/dimitris/> date = 2006-03-31 topic = Computer Science/Programming Languages/Language Definitions abstract = We formalize the type system, small-step operational semantics, and type soundness proof for Featherweight Java, a simple object calculus, in Isabelle/HOL. [Jinja] title = Jinja is not Java author = Gerwin Klein <http://www.cse.unsw.edu.au/~kleing/>, Tobias Nipkow <http://www.in.tum.de/~nipkow/> date = 2005-06-01 topic = Computer Science/Programming Languages/Language Definitions abstract = We introduce Jinja, a Java-like programming language with a formal semantics designed to exhibit core features of the Java language architecture. Jinja is a compromise between realism of the language and tractability and clarity of the formal semantics. The following aspects are formalised: a big and a small step operational semantics for Jinja and a proof of their equivalence; a type system and a definite initialisation analysis; a type safety proof of the small step semantics; a virtual machine (JVM), its operational semantics and its type system; a type safety proof for the JVM; a bytecode verifier, i.e. data flow analyser for the JVM; a correctness proof of the bytecode verifier w.r.t. the type system; a compiler and a proof that it preserves semantics and well-typedness. The emphasis of this work is not on particular language features but on providing a unified model of the source language, the virtual machine and the compiler. The whole development has been carried out in the theorem prover Isabelle/HOL. [JinjaThreads] title = Jinja with Threads author = Andreas Lochbihler <http://www.infsec.ethz.ch/people/andreloc> date = 2007-12-03 base = HOL/HOL-Word topic = Computer Science/Programming Languages/Language Definitions abstract = We extend the Jinja source code semantics by Klein and Nipkow with Java-style arrays and threads. 
Concurrency is captured in a generic framework semantics for adding concurrency through interleaving to a sequential semantics, which features dynamic thread creation, inter-thread communication via shared memory, lock synchronisation and joins. Also, threads can suspend themselves and be notified by others. We instantiate the framework with the adapted versions of both Jinja source and byte code and show type safety for the multithreaded case. Equally, the compiler from source to byte code is extended, for which we prove weak bisimilarity between the source code small step semantics and the defensive Jinja virtual machine. On top of this, we formalise the Java memory model (JMM) and show the data-race-freedom (DRF) guarantee and consistency. For a description of the different parts, see Lochbihler's papers at FOOL 2008, ESOP 2010, ITP 2011, and ESOP 2012. depends-on = Collections, Coinductive extra-history = Change history: [2008-04-23]: added bytecode formalisation with arrays and threads, added thread joins (revision <a href="http://afp.hg.sourceforge.net/hgweb/afp/afp/rev/f74a8be156a7">f74a8be156a7</a>)<br> [2009-04-27]: added verified compiler from source code to bytecode; encapsulate native methods in separate semantics (revision <a href="http://afp.hg.sourceforge.net/hgweb/afp/afp/rev/e4f26541e58a">e4f26541e58a</a>)<br> [2009-11-30]: extended compiler correctness proof to infinite and deadlocking computations (revision <a href="http://afp.hg.sourceforge.net/hgweb/afp/afp/rev/e50282397435">e50282397435</a>)<br> [2010-06-08]: added thread interruption; new abstract memory model with sequential consistency as implementation (revision <a href="http://afp.hg.sourceforge.net/hgweb/afp/afp/rev/0cb9e8dbd78d">0cb9e8dbd78d</a>)<br> [2010-06-28]: new thread interruption model (revision <a href="http://afp.hg.sourceforge.net/hgweb/afp/afp/rev/c0440d0a1177">c0440d0a1177</a>)<br> [2010-10-15]: preliminary version of the Java memory model for source code (revision <a
href="http://afp.hg.sourceforge.net/hgweb/afp/afp/rev/02fee0ef3ca2">02fee0ef3ca2</a>)<br> [2010-12-16]: improved version of the Java memory model, also for bytecode; executable scheduler for source code semantics (revision <a href="http://afp.hg.sourceforge.net/hgweb/afp/afp/rev/1f41c1842f5a">1f41c1842f5a</a>)<br> [2011-02-02]: simplified code generator setup; new random scheduler (revision <a href="http://afp.hg.sourceforge.net/hgweb/afp/afp/rev/3059dafd013f">3059dafd013f</a>)<br> [2011-07-21]: new interruption model, generalized JMM proof of DRF guarantee, allow class Object to declare methods and fields, simplified subtyping relation, corrected division and modulo implementation (revision <a href="http://afp.hg.sourceforge.net/hgweb/afp/afp/rev/46e4181ed142">46e4181ed142</a>)<br> [2012-02-16]: added example programs (revision <a href="http://afp.hg.sourceforge.net/hgweb/afp/afp/rev/bf0b06c8913d">bf0b06c8913d</a>)<br> [2012-11-21]: type safety proof for the Java memory model, allow spurious wake-ups (revision <a href="http://afp.hg.sourceforge.net/hgweb/afp/afp/rev/76063d860ae0">76063d860ae0</a>) [Locally-Nameless-Sigma] title = Locally Nameless Sigma Calculus author = Ludovic Henrio <mailto:Ludovic.Henrio@sophia.inria.fr>, Florian Kammüller <mailto:flokam@cs.tu-berlin.de>, Bianca Lutz <mailto:sowilo@cs.tu-berlin.de>, Henry Sudhof <mailto:hsudhof@cs.tu-berlin.de> date = 2010-04-30 topic = Computer Science/Programming Languages/Language Definitions abstract = We present a Theory of Objects based on the original functional sigma-calculus by Abadi and Cardelli but with an additional parameter to methods. We prove confluence of the operational semantics following the outline of Nipkow's proof of confluence for the lambda-calculus reusing his theory Commutation, a generic diamond lemma reduction. We furthermore formalize a simple type system for our sigma-calculus including a proof of type safety.
The entire development uses the concept of Locally Nameless representation for binders. We reuse an earlier proof of confluence for a simpler sigma-calculus based on de Bruijn indices and lists to represent objects. [AutoFocus-Stream] title = AutoFocus Stream Processing for Single-Clocking and Multi-Clocking Semantics author = David Trachtenherz <http://www4.in.tum.de/~trachten/> date = 2011-02-23 base = HOL/HOL-Library/List-Infinite/Nat-Interval-Logic topic = Computer Science/Programming Languages/Language Definitions abstract = We formalize the AutoFocus Semantics (a time-synchronous subset of the Focus formalism) as stream processing functions on finite and infinite message streams represented as finite/infinite lists. The formalization comprises both the conventional single-clocking semantics (uniform global clock for all components and communications channels) and its extension to multi-clocking semantics (internal execution clocking of a component may be a multiple of the external communication clocking). The semantics is defined by generic stream processing functions making it suitable for simulation/code generation in Isabelle/HOL. Furthermore, a number of AutoFocus semantics properties are formalized using definitions from the IntervalLogic theories. [PCF] title = Logical Relations for PCF author = Peter Gammie <mailto:peteg42@gmail.com> date = 2012-07-01 base = HOL/HOLCF topic = Computer Science/Programming Languages/Lambda Calculi abstract = We apply Andy Pitts's methods of defining relations over domains to several classical results in the literature. We show that the Y combinator coincides with the domain-theoretic fixpoint operator, that parallel-or and the Plotkin existential are not definable in PCF, that the continuation semantics for PCF coincides with the direct semantics, and that our domain-theoretic semantics for PCF is adequate for reasoning about contextual equivalence in an operational semantics. 
Our version of PCF is untyped and has both strict and non-strict function abstractions. The development is carried out in HOLCF. [POPLmark-deBruijn] title = POPLmark Challenge Via de Bruijn Indices author = Stefan Berghofer <http://www.in.tum.de/~berghofe> date = 2007-08-02 base = HOL/HOL-Library topic = Computer Science/Programming Languages/Lambda Calculi abstract = We present a solution to the POPLmark challenge designed by Aydemir et al., which has as a goal the formalization of the meta-theory of System F<sub><:</sub>. The formalization is carried out in the theorem prover Isabelle/HOL using an encoding based on de Bruijn indices. We start with a relatively simple formalization covering only the basic features of System F<sub><:</sub>, and explain how it can be extended to also cover records and more advanced binding constructs. [Lam-ml-Normalization] title = Strong Normalization of Moggi's Computational Metalanguage author = Christian Doczkal <mailto:doczkal@ps.uni-saarland.de> date = 2010-08-29 base = HOL/HOL-Nominal topic = Computer Science/Programming Languages/Lambda Calculi abstract = Handling variable binding is one of the main difficulties in formal proofs. In this context, Moggi's computational metalanguage serves as an interesting case study. It features monadic types and a commuting conversion rule that rearranges the binding structure. Lindley and Stark have given an elegant proof of strong normalization for this calculus. The key construction in their proof is a notion of relational TT-lifting, using stacks of elimination contexts to obtain a Girard-Tait style logical relation. I give a formalization of their proof in Isabelle/HOL-Nominal with a particular emphasis on the treatment of bound variables.
[MiniML] title = Mini ML author = Wolfgang Naraschewski, Tobias Nipkow <http://www.in.tum.de/~nipkow/> date = 2004-03-19 topic = Computer Science/Programming Languages/Type Systems abstract = This theory defines the type inference rules and the type inference algorithm <i>W</i> for MiniML (simply-typed lambda terms with <tt>let</tt>) due to Milner. It proves the soundness and completeness of <i>W</i> w.r.t. the rules. [Simpl] title = A Sequential Imperative Programming Language Syntax, Semantics, Hoare Logics and Verification Environment author = Norbert Schirmer date = 2008-02-29 topic = Computer Science/Programming Languages/Language Definitions, Computer Science/Programming Languages/Logics license = LGPL abstract = We present the theory of Simpl, a sequential imperative programming language. We introduce its syntax, its semantics (big and small-step operational semantics) and Hoare logics for both partial as well as total correctness. We prove soundness and completeness of the Hoare logic. We integrate and automate the Hoare logic in Isabelle/HOL to obtain a practically usable verification environment for imperative programs. Simpl is independent of a concrete programming language but expressive enough to cover all common language features: mutually recursive procedures, abrupt termination and exceptions, runtime faults, local and global variables, pointers and heap, expressions with side effects, pointers to procedures, partial application and closures, dynamic method invocation and also unbounded nondeterminism. 
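Hoare-logic verification environments like Simpl's compute proof obligations from the program text. As a toy, hypothetical illustration (unrelated to the verified Simpl theories), here is a weakest-precondition calculator for a miniature command language, with predicates represented as Python functions on states:

```python
# Hypothetical toy (not Simpl): weakest-precondition semantics for a
# tiny command language.  Commands are tuples; predicates and
# expressions are Python functions on states (dicts).

def wp(cmd, post):
    kind = cmd[0]
    if kind == "skip":
        return post
    if kind == "assign":            # ("assign", var, expr)
        _, v, e = cmd
        return lambda s: post({**s, v: e(s)})   # substitute e for v in post
    if kind == "seq":               # ("seq", c1, c2)
        return wp(cmd[1], wp(cmd[2], post))
    if kind == "if":                # ("if", guard, c1, c2)
        _, b, c1, c2 = cmd
        return lambda s: wp(c1, post)(s) if b(s) else wp(c2, post)(s)
    raise ValueError(kind)

# {x := x + 1; y := x} establishes y = x from any initial state.
prog = ("seq", ("assign", "x", lambda s: s["x"] + 1),
               ("assign", "y", lambda s: s["x"]))
print(wp(prog, lambda s: s["y"] == s["x"])({"x": 0, "y": 7}))
```

A real environment like Simpl additionally handles loops (via invariants), procedures, faults and abrupt termination, and discharges the generated conditions inside the logic rather than by evaluation.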
[Separation_Algebra] title = Separation Algebra author = Gerwin Klein <mailto:kleing@cse.unsw.edu.au>, Rafal Kolanski <mailto:rafal.kolanski@nicta.com.au>, Andrew Boyton <mailto:andrew.boyton@nicta.com.au> date = 2012-05-11 topic = Computer Science/Programming Languages/Logics license = BSD base = HOL/HOL-Word abstract = We present a generic type class implementation of separation algebra for Isabelle/HOL as well as lemmas and generic tactics which can be used directly for any instantiation of the type class. <P> The ex directory contains example instantiations that include structures such as a heap or virtual memory. <P> The abstract separation algebra is based upon "Abstract Separation Logic" by Calcagno et al. These theories are also the basis of the ITP 2012 rough diamond "Mechanised Separation Algebra" by the authors. <P> The aim of this work is to support and significantly reduce the effort for future separation logic developments in Isabelle/HOL by factoring out the part of separation logic that can be treated abstractly once and for all. This includes developing typical default rule sets for reasoning as well as automated tactic support for separation logic. [Separation_Logic_Imperative_HOL] title = A Separation Logic Framework for Imperative HOL author = Peter Lammich <http://cs.uni-muenster.de/sev/staff/lammich/>, Rene Meis <mailto:rene.meis@uni-due.de> date = 2012-11-14 topic = Computer Science/Programming Languages/Logics license = BSD base = HOL/HOL-Library abstract = We provide a framework for separation-logic based correctness proofs of Imperative HOL programs. Our framework comes with a set of proof methods to automate canonical tasks such as verification condition generation and frame inference. Moreover, we provide a set of examples that show the applicability of our framework. The examples include algorithms on lists, hash-tables, and union-find trees. 
We also provide abstract interfaces for lists, maps, and sets that allow developing generic imperative algorithms and using data-refinement techniques. <br> As we target Imperative HOL, our programs can be translated to efficiently executable code in various target languages, including ML, OCaml, Haskell, and Scala. [Inductive_Confidentiality] title = Inductive Study of Confidentiality author = Giampaolo Bella <http://www.dmi.unict.it/~giamp/> date = 2012-05-02 topic = Computer Science/Security abstract = This document contains the full theory files accompanying the article <i>Inductive Study of Confidentiality --- for Everyone</i> in <i>Formal Aspects of Computing</i>. They aim at an illustrative and didactic presentation of the Inductive Method of protocol analysis, focusing on the treatment of one of the main goals of security protocols: confidentiality against a threat model. The treatment of confidentiality, which in fact forms a key aspect of all protocol analysis tools, has been found cryptic by many learners of the Inductive Method, hence the motivation for this work. The theory files in this document guide the reader step by step towards design and proof of significant confidentiality theorems. These are developed against two threat models, the standard Dolev-Yao and a more audacious one, the General Attacker, which turns out to be particularly useful also for teaching purposes. [Possibilistic_Noninterference] title = Possibilistic Noninterference author = Andrei Popescu <mailto:uuomul@yahoo.com>, Johannes Hölzl <mailto:hoelzl@in.tum.de> date = 2012-09-10 topic = Computer Science/Security, Computer Science/Programming Languages/Type Systems abstract = We formalize a wide variety of Volpano/Smith-style noninterference notions for a while language with parallel composition. We systematize and classify these notions according to compositionality w.r.t. the language constructs. Compositionality yields sound syntactic criteria (a.k.a. type systems) in a uniform way.
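The flavour of such Volpano/Smith-style syntactic criteria can be conveyed by a toy, hypothetical check (not the formalised system): every variable carries a level, and an assignment is accepted only if both the assigned expression and the guarding context flow upwards.

```python
# Hypothetical toy rendition of a Volpano/Smith-style typing check.
# Variables are 'low' or 'high'; an assignment is well-typed only if
# the level of the right-hand side (explicit flow) and of the guarding
# context pc (implicit flow) are both below the target's level.

LEVELS = {"low": 0, "high": 1}

def expr_level(env, variables):
    """Level of an expression = join of the levels of its variables."""
    return max((LEVELS[env[v]] for v in variables), default=0)

def assign_ok(env, target, rhs_vars, pc="low"):
    """Accept target := e under context level pc."""
    tgt = LEVELS[env[target]]
    return expr_level(env, rhs_vars) <= tgt and LEVELS[pc] <= tgt

env = {"h": "high", "l": "low"}
print(assign_ok(env, "h", ["l"]))          # low data into high var: fine
print(assign_ok(env, "l", ["h"]))          # high data into low var: rejected
print(assign_ok(env, "l", [], pc="high"))  # implicit flow under high guard
```

Soundness of such a check is exactly a noninterference theorem: well-typed programs map low-equivalent initial states to low-equivalent final states.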
<p> An <a href="http://www.in.tum.de/~nipkow/pubs/cpp12.html">article</a> about these proofs is published in the proceedings of the conference Certified Programs and Proofs 2012. [VolpanoSmith] title = A Correctness Proof for the Volpano/Smith Security Typing System author = Gregor Snelting <http://pp.info.uni-karlsruhe.de/personhp/gregor_snelting.php>, Daniel Wasserrab <http://pp.info.uni-karlsruhe.de/personhp/daniel_wasserrab.php> date = 2008-09-02 topic = Computer Science/Programming Languages/Type Systems, Computer Science/Security abstract = The Volpano/Smith/Irvine security type system requires that variables are annotated as high (secret) or low (public), and provides typing rules which guarantee that secret values cannot leak to public output ports. This property of a program is called confidentiality. For a simple while-language without threads, our proof shows that typeability in the Volpano/Smith system guarantees noninterference. Noninterference means that if two initial states for program execution are low-equivalent, then the final states are low-equivalent as well. This indeed implies that secret values cannot leak to public ports. The proof defines an abstract syntax and operational semantics for programs, formalizes noninterference, and then proceeds by rule induction on the operational semantics. The mathematically most intricate part is the treatment of implicit flows. Note that the Volpano/Smith system is not flow-sensitive and thus quite imprecise, resulting in false alarms. However, due to the correctness property, all potential breaches of confidentiality are discovered. [Abstract-Hoare-Logics] title = Abstract Hoare Logics author = Tobias Nipkow <http://www.in.tum.de/~nipkow> date = 2006-08-08 topic = Computer Science/Programming Languages/Logics abstract = These theories describe Hoare logics for a number of imperative language constructs, from while-loops to mutually recursive procedures. Both partial and total correctness are treated.
In particular, a proof system for total correctness of recursive procedures in the presence of unbounded nondeterminism is presented. [Kleene_Algebra] title = Kleene Algebra author = Alasdair Armstrong <http://staffwww.dcs.shef.ac.uk/people/A.Armstrong/>, Georg Struth <http://staffwww.dcs.shef.ac.uk/people/G.Struth/>, Tjark Weber <http://user.it.uu.se/~tjawe125/> date = 2013-01-15 base = HOL/HOL-Word topic = Computer Science/Programming Languages/Logics, Computer Science/Automata and Formal Languages, Mathematics/Algebra abstract = These files contain a formalisation of variants of Kleene algebras and their most important models as axiomatic type classes in Isabelle/HOL. Kleene algebras are foundational structures in computing with applications ranging from automata and language theory to computational modeling, program construction and verification. <p> We start with formalising dioids, which are additively idempotent semirings, and expand them by axiomatisations of the Kleene star for finite iteration and an omega operation for infinite iteration. We show that powersets over a given monoid, (regular) languages, sets of paths in a graph, sets of computation traces, and binary relations form Kleene algebras, and consider further models based on lattices, max-plus semirings and min-plus semirings. We also demonstrate that dioids are closed under the formation of matrices and formal power series (proofs for Kleene algebras remain to be completed). <p> On the one hand we have aimed at a reference formalisation of Kleene algebras that covers a wide range of variants and the core theorems in a structured and modular way and provides readable proofs at textbook level. On the other hand, we intend to use this algebraic hierarchy and its models as a generic algebraic middle-layer from which programming applications can quickly be explored, implemented and verified.
[BytecodeLogicJmlTypes] title = A Bytecode Logic for JML and Types author = Lennart Beringer <http://www.tcs.informatik.uni-muenchen.de/~beringer>, Martin Hofmann <http://www.tcs.informatik.uni-muenchen.de/~mhofmann> date = 2008-12-12 topic = Computer Science/Programming Languages/Logics abstract = This document contains the Isabelle/HOL sources underlying the paper <i>A bytecode logic for JML and types</i> by Beringer and Hofmann, updated to Isabelle 2008. We present a program logic for a subset of sequential Java bytecode that is suitable for representing both features found in the high-level specification language JML and interpretations of high-level type systems. To this end, we introduce a fine-grained collection of assertions, including strong invariants, local annotations and VDM-reminiscent partial-correctness specifications. Thanks to a goal-oriented structure and interpretation of judgements, verification may proceed without recourse to an additional control flow analysis. The suitability for interpreting intensional type systems is illustrated by the proof-carrying-code style encoding of a type system for a first-order functional language which guarantees a constant upper bound on the number of objects allocated throughout an execution, be the execution terminating or non-terminating. Like the published paper, the formal development is restricted to a comparatively small subset of the JVML, lacking (among other features) exceptions, arrays, virtual methods, and static fields. This shortcoming has meanwhile been overcome, as our paper has formed the basis of the Mobius base logic, a program logic for the full sequential fragment of the JVML. Indeed, the present formalisation formed the basis of a subsequent formalisation of the Mobius base logic in the proof assistant Coq, which includes a proof of soundness with respect to the Bicolano operational semantics by Pichardie.
[DataRefinementIBP] title = Semantics and Data Refinement of Invariant Based Programs author = Viorel Preoteasa <http://users.abo.fi/vpreotea/>, Ralph-Johan Back <http://users.abo.fi/Ralph-Johan.Back/> date = 2010-05-28 topic = Computer Science/Programming Languages/Logics abstract = Invariant based programming is a technique for constructing correct programs by first identifying the basic situations (pre- and post-conditions and invariants) that can occur during the execution of the program, and then defining the transitions and proving that they preserve the invariants. Data refinement is a technique for building correct programs working on concrete datatypes as refinements of more abstract programs. In the theories presented here we formalize the predicate transformer semantics for invariant based programs and their data refinement. depends-on = LatticeProperties extra-history = Change history: [2012-01-05]: Moved some general complete lattice properties to the AFP entry Lattice Properties. Changed the definition of the data refinement relation to be more general and updated all corresponding theorems. Added new syntax for demonic and angelic update statements. [SIFPL] title = Secure information flow and program logics author = Lennart Beringer <http://www.tcs.informatik.uni-muenchen.de/~beringer>, Martin Hofmann <http://www.tcs.informatik.uni-muenchen.de/~mhofmann> date = 2008-11-10 topic = Computer Science/Programming Languages/Logics, Computer Science/Security abstract = We present interpretations of type systems for secure information flow in Hoare logic, complementing previous encodings in relational program logics. We first treat the imperative language IMP, extended by a simple procedure call mechanism. For this language we consider base-line non-interference in the style of Volpano et al. and the flow-sensitive type system by Hunt and Sands.
In both cases, we show how typing derivations may be used to automatically generate proofs in the program logic that certify the absence of illicit flows. We then add instructions for object creation and manipulation, and derive appropriate proof rules for base-line non-interference. As a consequence of our work, standard verification technology may be used for verifying that a concrete program satisfies the non-interference property.<br><br>The present proof development represents an update of the formalisation underlying our paper [CSF 2007] and is intended to resolve any ambiguities that may be present in the paper. [TLA] title = A Definitional Encoding of TLA* in Isabelle/HOL author = Gudmund Grov <http://homepages.inf.ed.ac.uk/ggrov>, Stephan Merz <http://www.loria.fr/~merz> date = 2011-11-19 topic = Computer Science/Programming Languages/Logics abstract = We mechanise the logic TLA* <a href="http://www.springerlink.com/content/ax3qk557qkdyt7n6/">[Merz 1999]</a>, an extension of Lamport's Temporal Logic of Actions (TLA) <a href="http://dl.acm.org/citation.cfm?doid=177492.177726">[Lamport 1994]</a> for specifying and reasoning about concurrent and reactive systems. Aiming at a framework for mechanising the verification of TLA (or TLA*) specifications, this contribution reuses some elements from a previous axiomatic encoding of TLA in Isabelle/HOL by the second author [Merz 1998], which has been part of the Isabelle distribution.
In contrast to that previous work, we give here a shallow, definitional embedding, with the following highlights: <ul> <li>a theory of infinite sequences, including a formalisation of the concepts of stuttering invariance central to TLA and TLA*; <li>a definition of the semantics of TLA*, which extends TLA by a mutually-recursive definition of formulas and pre-formulas, generalising TLA action formulas; <li>a substantial set of derived proof rules, including the TLA* axioms and Lamport's proof rules for system verification; <li>a set of examples illustrating the usage of Isabelle/TLA* for reasoning about systems. </ul> Note that this work is unrelated to the ongoing development of a proof system for the specification language TLA+, which includes an encoding of TLA+ as a new Isabelle object logic <a href="http://www.springerlink.com/content/354026160p14j175/">[Chaudhuri et al 2010]</a>. [Compiling-Exceptions-Correctly] title = Compiling Exceptions Correctly author = Tobias Nipkow <http://www.in.tum.de/~nipkow/> date = 2004-07-09 topic = Computer Science/Programming Languages/Compiling abstract = An exception compilation scheme that dynamically creates and removes exception handler entries on the stack. A formalization of an article of the same name by <a href="http://www.cs.nott.ac.uk/~gmh/">Hutton</a> and Wright. [NormByEval] title = Normalization by Evaluation author = Klaus Aehlig <http://www.linta.de/~aehlig/>, Tobias Nipkow <http://www.in.tum.de/~nipkow/> date = 2008-02-18 topic = Computer Science/Programming Languages/Compiling abstract = This article formalizes normalization by evaluation as implemented in Isabelle. Lambda calculus plus term rewriting is compiled into a functional program with pattern matching. It is proved that the result of a successful evaluation is a) correct, i.e. equivalent to the input, and b) in normal form. 
[Program-Conflict-Analysis] title = Formalization of Conflict Analysis of Programs with Procedures, Thread Creation, and Monitors topic = Computer Science/Programming Languages/Static Analysis author = Peter Lammich <http://cs.uni-muenster.de/sev/staff/lammich/>, Markus Müller-Olm <http://cs.uni-muenster.de/u/mmo/> date = 2007-12-14 base = HOL/HOL-Library abstract = In this work we use the Isabelle theorem prover to formally verify the soundness and precision of a static program analysis that detects conflicts (e.g. data races) in programs with procedures, thread creation and monitors. As common in static program analysis, our program model abstracts guarded branching by nondeterministic branching, but completely interprets the call-/return behavior of procedures, synchronization by monitors, and thread creation. The analysis is based on the observation that all conflicts already occur in a class of particularly restricted schedules. These restricted schedules are suited to constraint-system-based program analysis. The formalization is based upon a flowgraph-based program model with an operational semantics as reference point. [Shivers-CFA] title = Shivers' Control Flow Analysis topic = Computer Science/Programming Languages/Static Analysis author = Joachim Breitner <mailto:mail@joachim-breitner.de> date = 2010-11-16 base = HOL/HOLCF abstract = In his dissertation, Olin Shivers introduces a concept of control flow graphs for functional languages, provides an algorithm to statically derive a safe approximation of the control flow graph and proves this algorithm correct. In this research project, Shivers' algorithms and proofs are formalized in the HOLCF extension of HOL.
[Slicing] title = Towards Certified Slicing author = Daniel Wasserrab <http://pp.info.uni-karlsruhe.de/personhp/daniel_wasserrab.php> date = 2008-09-16 base = HOL/Jinja topic = Computer Science/Programming Languages/Static Analysis abstract = Slicing is a widely-used technique with applications in e.g. compiler technology and software security. Thus verification of algorithms in these areas is often based on the correctness of slicing, which should ideally be proven independently of concrete programming languages and with the help of well-known verification techniques such as proof assistants. As a first step in this direction, this contribution presents a framework for dynamic and static intraprocedural slicing based on control flow and program dependence graphs. Abstracting from concrete syntax we base the framework on a graph representation of the program fulfilling certain structural and well-formedness properties.<br><br>The formalization consists of the basic framework (in subdirectory Basic/), the correctness proof for dynamic slicing (in subdirectory Dynamic/), the correctness proof for static intraprocedural slicing (in subdirectory StaticIntra/) and instantiations of the framework with a simple While language (in subdirectory While/) and the sophisticated object-oriented bytecode language of Jinja (in subdirectory JinjaVM/). For more information on the framework, see the TPHOLS 2008 paper by Wasserrab and Lochbihler and the PLAS 2009 paper by Wasserrab et al. depends-on = Jinja [HRB-Slicing] title = Backing up Slicing: Verifying the Interprocedural Two-Phase Horwitz-Reps-Binkley Slicer author = Daniel Wasserrab <http://pp.info.uni-karlsruhe.de/personhp/daniel_wasserrab.php> date = 2009-11-13 base = HOL/Jinja topic = Computer Science/Programming Languages/Static Analysis abstract = After verifying <a href="Slicing.shtml">dynamic and static intraprocedural slicing</a>, we present a modular framework for static interprocedural slicing.
To this end, we formalized the standard two-phase slicer from Horwitz, Reps and Binkley (see their TOPLAS 12(1) 1990 paper) together with summary edges as presented by Reps et al. (see FSE 1994). The framework is again modular in the programming language by using an abstract CFG, defined via structural and well-formedness properties. Using a weak simulation between the original and sliced graph, we were able to prove the correctness of static interprocedural slicing. We also instantiate our framework with a simple While language with procedures. This shows that the chosen abstractions are indeed valid. depends-on = Jinja [WorkerWrapper] title = The Worker/Wrapper Transformation author = Peter Gammie <http://peteg.org/> date = 2009-10-30 base = HOL/HOLCF topic = Computer Science/Programming Languages/Transformations abstract = Gill and Hutton formalise the worker/wrapper transformation, building on the work of Launchbury and Peyton-Jones who developed it as a way of changing the type at which a recursive function operates. This development establishes the soundness of the technique and several examples of its use. [JiveDataStoreModel] title = Jive Data and Store Model author = Nicole Rauch <mailto:rauch@informatik.uni-kl.de>, Norbert Schirmer date = 2005-06-20 license = LGPL topic = Computer Science/Programming Languages/Misc abstract = This document presents the formalization of an object-oriented data and store model in Isabelle/HOL. This model is being used in the Java Interactive Verification Environment, Jive. [HotelKeyCards] title = Hotel Key Card System author = Tobias Nipkow <http://www.in.tum.de/~nipkow> date = 2006-09-09 topic = Computer Science/Security abstract = Two models of an electronic hotel key card system are contrasted: a state based and a trace based one. Both are defined, verified, and proved equivalent in the theorem prover Isabelle/HOL. 
It is shown that if a guest follows a certain safety policy regarding her key cards, she can be sure that nobody but her can enter her room. [RSAPSS] title = SHA1, RSA, PSS and more author = Christina Lindenberg, Kai Wirt date = 2005-05-02 topic = Computer Science/Security abstract = Formal verification is getting more and more important in computer science. However, state-of-the-art formal verification methods in cryptography are very rudimentary. These theories are one step towards providing a tool box that allows the use of formal methods in every aspect of cryptography. Moreover, we present a proof of concept for the feasibility of applying verification techniques to a standard signature algorithm. ; todo: multiple entries with different naming scheme [InformationFlowSlicing] title = Information Flow Noninterference via Slicing author = Daniel Wasserrab <http://pp.info.uni-karlsruhe.de/personhp/daniel_wasserrab.php> date = 2010-03-23 topic = Computer Science/Security ignore = entry abstract = In this contribution, we show how correctness proofs for <a href="Slicing.shtml">intra-</a> and <a href="HRB-Slicing.shtml">interprocedural slicing</a> can be used to prove that slicing is able to guarantee information flow noninterference. Moreover, we also illustrate how to lift the control flow graphs of the respective frameworks such that they fulfil the additional assumptions needed in the noninterference proofs. A detailed description of the intraprocedural proof and its interplay with the slicing framework can be found in the PLAS'09 paper by Wasserrab et al. depends-on = Slicing, HRB-Slicing [Verified-Prover] title = A Mechanically Verified, Efficient, Sound and Complete Theorem Prover For First Order Logic author = Tom Ridge date = 2004-09-28 base = HOL/HOL-Library topic = Logic abstract = Soundness and completeness for a system of first order logic are formally proved, building on James Margetson's formalization of work by Wainer and Wallen.
The completeness proofs naturally suggest an algorithm to derive proofs. This algorithm, which can be implemented tail recursively, is formalized in Isabelle/HOL. The algorithm can be executed via the rewriting tactics of Isabelle. Alternatively, the definitions can be exported to OCaml, yielding a directly executable program. [Completeness] title = Completeness theorem author = James Margetson, Tom Ridge date = 2004-09-20 base = HOL/HOL-Library topic = Logic abstract = The completeness of first-order logic is proved, following the first five pages of Wainer and Wallen's chapter of the book <i>Proof Theory</i> by Aczel et al., CUP, 1992. Their presentation of formulas allows the proofs to use symmetry arguments. Margetson formalized this theorem by early 2000. The Isar conversion is thanks to Tom Ridge. A paper describing the formalization is available <a href="Completeness-paper.pdf">[pdf]</a>. [Ordinal] title = Countable Ordinals author = Brian Huffman <http://web.cecs.pdx.edu/~brianh/> date = 2005-11-11 topic = Logic abstract = This development defines a well-ordered type of countable ordinals. It includes notions of continuous and normal functions, recursively defined functions over ordinals, least fixed-points, and derivatives. Much of ordinal arithmetic is formalized, including exponentials and logarithms. The development concludes with formalizations of Cantor Normal Form and Veblen hierarchies over normal functions. [Ordinals_and_Cardinals] title = Ordinals and Cardinals author = Andrei Popescu date = 2009-09-01 topic = Logic base = HOL/HOL-Cardinals-Base/HOL-Cardinals abstract = We develop a basic theory of ordinals and cardinals in Isabelle/HOL, up to the point where some cardinality facts relevant for the ``working mathematician'' become available. Unlike in set theory, here we do not have at hand canonical notions of ordinal and cardinal. Therefore, here an ordinal is merely a well-order relation and a cardinal is an ordinal minimal w.r.t.
order embedding on its field. extra-history = Change history: [2012-09-25]: This entry has been discontinued because it is now part of the Isabelle distribution. [FOL-Fitting] title = First-Order Logic According to Fitting author = Stefan Berghofer <http://www.in.tum.de/~berghofe> date = 2007-08-02 base = HOL/HOL-Library topic = Logic abstract = We present a formalization of parts of Melvin Fitting's book ``First-Order Logic and Automated Theorem Proving''. The formalization covers the syntax of first-order logic, its semantics, the model existence theorem, a natural deduction proof calculus together with a proof of correctness and completeness, as well as the Löwenheim-Skolem theorem. [SequentInvertibility] title = Invertibility in Sequent Calculi author = Peter Chapman <http://www.cs.st-andrews.ac.uk/~pc> date = 2009-08-28 topic = Logic license = LGPL base = HOL/HOL-Nominal abstract = The invertibility of the rules of a sequent calculus is important for guiding proof search and can be used in some formalised proofs of Cut admissibility. We present sufficient conditions for when a rule is invertible with respect to a calculus. We illustrate the conditions with examples. It must be noted that we give purely syntactic criteria; no guarantees are given as to the suitability of the rules. [LinearQuantifierElim] title = Quantifier Elimination for Linear Arithmetic author = Tobias Nipkow <http://www.in.tum.de/~nipkow/> date = 2008-01-11 base = HOL/HOL-Library topic = Logic abstract = This article formalizes quantifier elimination procedures for dense linear orders, linear real arithmetic and Presburger arithmetic. In each case both a DNF-based non-elementary algorithm and one or more (doubly) exponential NNF-based algorithms are formalized, including the well-known algorithms by Ferrante and Rackoff and by Cooper.
The NNF-based algorithms for dense linear orders are new but based on Ferrante and Rackoff and on an algorithm by Loos and Weispfenning which simulates infinitesimals. All algorithms are directly executable. In particular, they yield reflective quantifier elimination procedures for HOL itself. The formalization makes heavy use of locales and is therefore highly modular. [Nat-Interval-Logic] title = Interval Temporal Logic on Natural Numbers author = David Trachtenherz <http://www4.in.tum.de/~trachten/> date = 2011-02-23 base = HOL/HOL-Library/List-Infinite topic = Logic abstract = We introduce a theory of temporal logic operators using sets of natural numbers as time domain, formalized in a shallow embedding manner. The theory comprises special natural intervals (theory IL_Interval: open and closed intervals, continuous and modulo intervals, interval traversing results), operators for shifting intervals to left/right on the number axis as well as expanding/contracting intervals by constant factors (theory IL_IntervalOperators.thy), and ultimately definitions and results for unary and binary temporal operators on arbitrary natural sets (theory IL_TemporalOperators). depends-on = List-Infinite [Recursion-Theory-I] title = Recursion Theory I author = Michael Nedzelsky date = 2008-04-05 topic = Logic abstract = This document presents the formalization of introductory material from recursion theory --- definitions and basic properties of primitive recursive functions, the Cantor pairing function and computably enumerable sets (including a proof of existence of a one-complete computably enumerable set and a proof of Rice's theorem). [Free-Boolean-Algebra] topic = Logic title = Free Boolean Algebra author = Brian Huffman <http://web.cecs.pdx.edu/~brianh/> date = 2010-03-29 base = HOL/HOL-Library abstract = This theory defines a type constructor representing the free Boolean algebra over a set of generators.
Values of type (α)<i>formula</i> represent propositional formulas with uninterpreted variables from type α, ordered by implication. In addition to all the standard Boolean algebra operations, the library also provides a function for building homomorphisms to any other Boolean algebra type. [Sort_Encodings] title = Sound and Complete Sort Encodings for First-Order Logic author = Jasmin Christian Blanchette <http://www21.in.tum.de/~blanchet>, Andrei Popescu <http://www21.in.tum.de/~popescua> date = 2013-06-27 base = HOL/HOL-Library topic = Logic abstract = This is a formalization of the soundness and completeness properties for various efficient encodings of sorts in unsorted first-order logic used by Isabelle's Sledgehammer tool. <p> Essentially, the encodings proceed as follows: a many-sorted problem is decorated with (as few as possible) tags or guards that make the problem monotonic; then sorts can be soundly erased. <p> The development employs a formalization of many-sorted first-order logic in clausal form (clauses, structures and the basic properties of the satisfaction relation), which could be of interest as the starting point for other formalizations of first-order logic metatheory. [Abstract-Rewriting] title = Abstract Rewriting topic = Logic/Rewriting date = 2010-06-14 author = Christian Sternagel <http://cl-informatik.uibk.ac.at/~griff>, René Thiemann <http://cl-informatik.uibk.ac.at/~thiemann> license = LGPL abstract = We present an Isabelle formalization of abstract rewriting (see, e.g., the book by Baader and Nipkow). First, we define standard relations like <i>joinability</i>, <i>meetability</i>, <i>conversion</i>, etc. Then, we formalize important properties of abstract rewrite systems, e.g., confluence and strong normalization. Our main concern is strong normalization, since this formalization is the basis of <a href="http://cl-informatik.uibk.ac.at/software/ceta">CeTA</a> (which is mainly about strong normalization of term rewrite systems).
Hence lemmas involving strong normalization constitute by far the biggest part of this theory. One of those is Newman's lemma. extra-history = Change history: [2010-09-17]: Added theories defining several (ordered) semirings related to strong normalization and giving some standard instances. [Free-Groups] title = Free Groups author = Joachim Breitner <mailto:mail@joachim-breitner.de> date = 2010-06-24 topic = Mathematics/Algebra abstract = Free Groups are, in a sense, the most generic kind of group. They are defined over a set of generators with no additional relations in between them. They play an important role in the definition of group presentations and in other fields. This theory provides the definition of Free Group as the set of fully canceled words in the generators. The universal property is proven, as well as some isomorphism results about Free Groups. depends-on = Ordinals_and_Cardinals extra-history = Change history: [2011-12-11]: Added the Ping Pong Lemma. [CofGroups] title = An Example of a Cofinitary Group in Isabelle/HOL author = Bart Kastermans <http://kasterma.net> date = 2009-08-04 base = HOL/HOL-Library topic = Mathematics/Algebra abstract = We formalize the usual proof that the group generated by the function k -> k + 1 on the integers gives rise to a cofinitary group. [Group-Ring-Module] title = Groups, Rings and Modules author = Hidetsune Kobayashi, L. Chen, H. Murao date = 2004-05-18 base = HOL/HOL-Library topic = Mathematics/Algebra abstract = The theory of groups, rings and modules is developed to a great depth. Group theory results include Zassenhaus's theorem and the Jordan-Hoelder theorem. The ring theory development includes ideals, quotient rings and the Chinese remainder theorem. The module development includes the Nakayama lemma, exact sequences and tensor products.
[Robbins-Conjecture] title = A Complete Proof of the Robbins Conjecture author = Matthew Wampler-Doty date = 2010-05-22 topic = Mathematics/Algebra abstract = This document gives a formalization of the proof of the Robbins conjecture, following A. Mann, <i>A Complete Proof of the Robbins Conjecture</i>, 2003. [Valuation] title = Fundamental Properties of Valuation Theory and Hensel's Lemma author = Hidetsune Kobayashi date = 2007-08-08 topic = Mathematics/Algebra base = HOL/HOL-Library/Group-Ring-Module abstract = Convergence with respect to a valuation is discussed as convergence of a Cauchy sequence. Cauchy sequences of polynomials are defined. They are used to formalize Hensel's lemma. depends-on = Group-Ring-Module [Rank_Nullity_Theorem] title = Rank-Nullity Theorem in Linear Algebra author = Jose Divasón <http://www.unirioja.es/cu/jodivaso>, Jesús Aransay <http://www.unirioja.es/cu/jearansa> topic = Mathematics/Algebra base = HOL/HOL-Multivariate_Analysis date = 2013-01-16 abstract = In this article we present a proof of the result known in Linear Algebra as the ``rank nullity Theorem'', which states that, given any linear map f from a finite dimensional vector space V to a vector space W, the dimension of V is equal to the sum of the dimension of the kernel of f (which is a subspace of V) and the dimension of the range of f (which is a subspace of W). The proof presented here is based on the one given by Sheldon Axler in his book <i>Linear Algebra Done Right</i>. It makes use of the HOL-Multivariate-Analysis session of Isabelle, and of several of its results and definitions. As a corollary of the previous theorem, and taking advantage of the relationship between linear maps and matrices, we prove that, for every matrix A (which has associated a linear map between finite dimensional vector spaces), the sum of the dimensions of its null space and its column space (which is equal to the range of the linear map) is equal to the number of columns of A.
[Cauchy] title = Cauchy's Mean Theorem and the Cauchy-Schwarz Inequality author = Benjamin Porter date = 2006-03-14 topic = Mathematics/Analysis abstract = This document presents the mechanised proofs of two popular theorems attributed to Augustin Louis Cauchy - Cauchy's Mean Theorem and the Cauchy-Schwarz Inequality. [Integration] title = Integration theory and random variables author = Stefan Richter <http://www-lti.informatik.rwth-aachen.de/~richter/> date = 2004-11-19 topic = Mathematics/Analysis abstract = Lebesgue-style integration plays a major role in advanced probability. We formalize concepts of elementary measure theory, real-valued random variables as Borel-measurable functions, and a stepwise inductive definition of the integral itself. All proofs are carried out in human readable style using the Isar language. extra-note = Note: This article is of historical interest only. Lebesgue-style integration and probability theory are now available as part of the Isabelle/HOL distribution (directory Probability). [Ordinary_Differential_Equations] title = Ordinary Differential Equations author = Fabian Immler <http://home.in.tum.de/~immler>, Johannes Hölzl <http://home.in.tum.de/~hoelzl> topic = Mathematics/Analysis base = HOL/HOL-Multivariate_Analysis date = 2012-04-26 abstract = We formalize initial value problems (IVPs) of ODEs and prove the existence of a unique solution, i.e. the Picard-Lindelöf theorem. We introduce general one-step methods for numerical approximation of the solution and provide an analysis regarding the local and global error of one-step methods. We give an executable specification of the Euler method to approximate the solution of IVPs and prove an explicit bound for the global error. We use arbitrary-precision floating-point numbers and also handle rounding errors when we truncate the numbers for efficiency reasons. 
[Polynomials] title = Executable Multivariate Polynomials author = Christian Sternagel <http://cl-informatik.uibk.ac.at/~griff>, René Thiemann <http://cl-informatik.uibk.ac.at/~thiemann> date = 2010-08-10 topic = Mathematics/Analysis license = LGPL abstract = We define multivariate polynomials over arbitrary (ordered) semirings in combination with (executable) operations like addition, multiplication, and substitution. We also define (weak) monotonicity of polynomials and comparison of polynomials where we provide standard estimations like absolute positiveness or the more recent approach of Neurauter, Zankl, and Middeldorp. Moreover, it is proven that strongly normalizing (monotone) orders can be lifted to strongly normalizing (monotone) orders over polynomials. Our formalization was performed as part of the <a href="http://cl-informatik.uibk.ac.at/software/ceta">IsaFoR/CeTA-system</a> which contains several termination techniques. The provided theories have been essential to formalize polynomial interpretations. depends-on = Abstract-Rewriting extra-history = Change history: [2010-09-17]: Moved theories on arbitrary (ordered) semirings to Abstract Rewriting. [Sqrt_Babylonian] title = Computing Square Roots using the Babylonian Method author = René Thiemann <mailto:rene.thiemann@uibk.ac.at> date = 2013-01-03 topic = Mathematics/Analysis license = LGPL abstract = We implement the Babylonian method to compute square roots of numbers. We provide precise algorithms for naturals, integers and rationals, and offer an approximation algorithm for linearly ordered fields. [Markov_Models] title = Markov Models author = Johannes Hölzl <http://www.in.tum.de/~hoelzl>, Tobias Nipkow <http://www.in.tum.de/~nipkow/> date = 2012-01-03 topic = Mathematics/Probability Theory, Computer Science/Automata and Formal Languages base = HOL/HOL-Multivariate_Analysis/HOL-Probability abstract = This is a formalization of Markov models in Isabelle/HOL. It builds on Isabelle's probability theory.
The available models are currently Discrete-Time Markov Chains and extensions of them with rewards. <p> As applications of these models we formalize probabilistic model checking of pCTL formulas, analysis of IPv4 address allocation in ZeroConf and an analysis of the anonymity of the Crowds protocol. [Fermat3_4] title = Fermat's Last Theorem for Exponents 3 and 4 and the Parametrisation of Pythagorean Triples author = Roelof Oosterhuis <http://www.roelofoosterhuis.nl/> date = 2007-08-12 topic = Mathematics/Number Theory abstract = This document presents the mechanised proofs of<ul><li>Fermat's Last Theorem for exponents 3 and 4 and</li><li>the parametrisation of Pythagorean Triples.</li></ul> [Perfect-Number-Thm] title = Perfect Number Theorem author = Mark Ijbema <mailto:ijbema@fmf.nl> date = 2009-11-22 topic = Mathematics/Number Theory abstract = These theories present the mechanised proof of the Perfect Number Theorem. [SumSquares] title = Sums of Two and Four Squares author = Roelof Oosterhuis <http://www.roelofoosterhuis.nl/> date = 2007-08-12 topic = Mathematics/Number Theory abstract = This document presents the mechanised proofs of the following results:<ul><li>any prime number of the form 4m+1 can be written as the sum of two squares;</li><li>any natural number can be written as the sum of four squares</li></ul> depends-on = Fermat3_4 [Lehmer] title = Lehmer's Theorem author = Simon Wimmer <mailto:simon.wimmer@tum.de>, Lars Noschinski <http://www21.in.tum.de/~noschinl/> date = 2013-07-22 topic = Mathematics/Number Theory base = HOL/HOL-Number_Theory abstract = In 1927, Lehmer presented criteria for primality, based on the converse of Fermat's little theorem. This work formalizes the second criterion from Lehmer's paper, a necessary and sufficient condition for primality.
<p> As a side product we formalize some properties of Euler's phi-function, the notion of the order of an element of a group, and the cyclicity of the multiplicative group of a finite field. [Pratt_Certificate] title = Pratt's Primality Certificates author = Simon Wimmer <mailto:simon.wimmer@tum.de>, Lars Noschinski <http://www21.in.tum.de/~noschinl/> date = 2013-07-22 base = HOL/HOL-Number_Theory/Lehmer topic = Mathematics/Number Theory abstract = In 1975, Pratt introduced a proof system for certifying primes. He showed that a number <i>p</i> is prime iff a primality certificate for <i>p</i> exists. By showing a logarithmic upper bound on the length of the certificates in terms of the size of the prime number, he concluded that the decision problem for prime numbers is in NP. This work formalizes soundness and completeness of Pratt's proof system as well as an upper bound for the size of the certificate. depends-on = Lehmer [ArrowImpossibilityGS] title = Arrow and Gibbard-Satterthwaite author = Tobias Nipkow <http://www.in.tum.de/~nipkow/> date = 2008-09-01 base = HOL/HOL-Library topic = Mathematics/Social Choice Theory abstract = This article formalizes two proofs of Arrow's impossibility theorem due to Geanakoplos and derives the Gibbard-Satterthwaite theorem as a corollary. One formalization is based on utility functions, the other one on strict partial orders.<br><br>An article about these proofs can be found <a href="http://www.in.tum.de/~nipkow/pubs/arrow.html">here</a>. [SenSocialChoice] title = Some classical results in Social Choice Theory author = Peter Gammie <http://peteg.org/> date = 2008-11-09 topic = Mathematics/Social Choice Theory abstract = Drawing on Sen's landmark work "Collective Choice and Social Welfare" (1970), this development proves Arrow's General Possibility Theorem, Sen's Liberal Paradox and May's Theorem in a general setting.
The goal was to make precise the classical statements and proofs of these results, and to provide a foundation for more recent results such as the Gibbard-Satterthwaite and Duggan-Schwartz theorems. [Topology] title = Topology author = Stefan Friedrich date = 2004-04-26 base = HOL/HOL-Library topic = Mathematics/Topology abstract = This entry contains two theories. The first, <tt>Topology</tt>, develops the basic notions of general topology. The second, which can be viewed as a demonstration of the first, is called <tt>LList_Topology</tt>. It develops the topology of lazy lists. depends-on = Lazy-Lists-II [Graph_Theory] title = Graph Theory author = Lars Noschinski <http://www21.in.tum.de/~noschinl/> date = 2013-04-28 base = HOL/HOL-Library topic = Mathematics/Graph Theory abstract = This development provides a formalization of directed graphs, supporting (labelled) multi-edges and infinite graphs. A polymorphic edge type allows edges to be treated as pairs of vertices, if multi-edges are not required. Formalized properties include walks (and related concepts), connectedness, subgraphs, and basic properties of isomorphisms. <p> This formalization is used to prove characterizations of Euler Trails, Shortest Paths and Kuratowski subgraphs. [Max-Card-Matching] title = Maximum Cardinality Matching author = Christine Rizkallah <http://www.mpi-inf.mpg.de/homepage/crizkall/> date = 2011-07-21 topic = Mathematics/Graph Theory abstract = A <em>matching</em> in a graph <i>G</i> is a subset <i>M</i> of the edges of <i>G</i> such that no two share an endpoint. A matching has maximum cardinality if its cardinality is at least as large as that of any other matching. An <em>odd-set cover</em> <i>OSC</i> of a graph <i>G</i> is a labeling of the nodes of <i>G</i> with integers such that every edge of <i>G</i> is either incident to a node labeled 1 or connects two nodes labeled with the same number <i>i ≥ 2</i>.
This article proves Edmonds' theorem: Let <i>M</i> be a matching in a graph <i>G</i> and let <i>OSC</i> be an odd-set cover of <i>G</i>. For any <i>i ≥ 0</i>, let <i>n(i)</i> be the number of nodes labeled <i>i</i>. If <i>|M| = n(1) + ∑<sub>i ≥ 2</sub>(n(i) div 2)</i>, then <i>M</i> is a maximum cardinality matching. [Girth_Chromatic] title = A Probabilistic Proof of the Girth-Chromatic Number Theorem author = Lars Noschinski <http://www21.in.tum.de/~noschinl/> date = 2012-02-06 topic = Mathematics/Graph Theory base = HOL/HOL-Multivariate_Analysis/HOL-Probability abstract = This work presents a formalization of the Girth-Chromatic number theorem in graph theory, stating that graphs with arbitrarily large girth and chromatic number exist. The proof uses the theory of Random Graphs to prove the existence with probabilistic arguments. [Flyspeck-Tame] title = Flyspeck I: Tame Graphs author = Gertrud Bauer, Tobias Nipkow <http://www.in.tum.de/~nipkow> date = 2006-05-22 base = HOL/HOL-Library topic = Mathematics/Graph Theory abstract = These theories present the verified enumeration of <i>tame</i> plane graphs as defined by Thomas C. Hales in his <i>revised</i> proof of the Kepler Conjecture. The revision concerns the definition of tameness. The <a href="http://www.in.tum.de/~nipkow/pubs/Flyspeck/">IJCAR 2006 paper by Nipkow, Bauer and Schultz</a> refers to the original version of Hales' proof. extra-history = Change history: [2010-11-02]: modified theories to reflect the modified definition of tameness in Hales' revised proof. [Well_Quasi_Orders] title = Well-Quasi-Orders author = Christian Sternagel <mailto:c-sterna@jaist.ac.jp> date = 2012-04-13 topic = Mathematics/Combinatorics abstract = Based on Isabelle/HOL's type class for preorders, we introduce a type class for well-quasi-orders (wqo) which is characterized by the absence of "bad" sequences (our proofs are along the lines of the proof of Nash-Williams, from which we also borrow terminology).
Our main results are instantiations for the product type, the list type, and a type of finite trees, which (almost) directly follow from our proofs of (1) Dickson's Lemma, (2) Higman's Lemma, and (3) Kruskal's Tree Theorem. More concretely: <ul> <li>If the sets A and B are wqo then their Cartesian product is wqo.</li> <li>If the set A is wqo then the set of finite lists over A is wqo.</li> <li>If the set A is wqo then the set of finite trees over A is wqo.</li> </ul> The research was funded by the Austrian Science Fund (FWF): J3202. extra-history = Change history: [2012-06-11]: Added Kruskal's Tree Theorem. [2012-12-19]: New variant of Kruskal's tree theorem for terms (as opposed to variadic terms, i.e., trees), plus finite version of the tree theorem as corollary. [Marriage] title = Hall's Marriage Theorem author = Dongchen Jiang <mailto:dongchenjiang@googlemail.com>, Tobias Nipkow <http://www.in.tum.de/~nipkow/> date = 2010-12-17 topic = Mathematics/Combinatorics abstract = Two proofs of Hall's Marriage Theorem: one due to Halmos and Vaughan, one due to Rado. extra-history = Change history: [2011-09-09]: Added Rado's proof [Bondy] title = Bondy's Theorem author = Jeremy Avigad <http://www.andrew.cmu.edu/user/avigad/>, Stefan Hetzl <http://www.logic.at/people/hetzl/> date = 2012-10-27 topic = Mathematics/Combinatorics abstract = A proof of Bondy's theorem following B. Bollobás, <i>Combinatorics</i>, 1986, Cambridge University Press. [Ramsey-Infinite] title = Ramsey's theorem, infinitary version author = Tom Ridge date = 2004-09-20 topic = Mathematics/Combinatorics abstract = This formalization of Ramsey's theorem (infinitary version) is taken from Boolos and Jeffrey, <i>Computability and Logic</i>, 3rd edition, Chapter 26. It differs from the text in assuming a slightly stronger hypothesis. In particular, the induction hypothesis is stronger, holding for any infinite subset of the naturals.
This avoids the rather peculiar mapping argument between kj and aikj on p.263, which is unnecessary and slightly mars this really beautiful result. [Open_Induction] title = Open Induction author = Mizuhito Ogawa, Christian Sternagel <http://www.jaist.ac.jp/~c-sterna/> date = 2012-11-02 base = HOL/HOL-Library topic = Mathematics/Combinatorics abstract = A proof of the open induction schema based on J.-C. Raoult, Proving open properties by induction, <i>Information Processing Letters</i> 29, 1988, pp.19-23. <p>This research was supported by the Austrian Science Fund (FWF): J3202.</p> [Category] title = Category Theory to Yoneda's Lemma author = Greg O'Keefe <http://users.rsise.anu.edu.au/~okeefe/> date = 2005-04-21 base = HOL/HOL-Library topic = Mathematics/Category Theory license = LGPL abstract = This development proves Yoneda's lemma and aims to be readable by humans. It only defines what is needed for the lemma: categories, functors and natural transformations. Limits, adjunctions and other important concepts are not included. extra-history = Change history: [2010-04-23]: The definition of the constant <tt>equinumerous</tt> was slightly too weak in the original submission and has been fixed in revision <a href="http://afp.hg.sourceforge.net/hgweb/afp/afp/rev/8c2b5b3c995f">8c2b5b3c995f</a>. [Category2] title = Category Theory author = Alexander Katovsky <mailto:apk32@cam.ac.uk> date = 2010-06-20 base = HOL/HOL-Library topic = Mathematics/Category Theory abstract = This article presents a development of Category Theory in Isabelle/HOL. A Category is defined using records and locales. Functors and Natural Transformations are also defined. The main result that has been formalized is that the Yoneda functor is a full and faithful embedding. We also formalize the completeness of many-sorted monadic equational logic. Extensive use is made of the HOLZF theory in both cases.
For an informal description see <a href="http://www.srcf.ucam.org/~apk32/Isabelle/Category/Cat.pdf">here [pdf]</a>. [FunWithFunctions] title = Fun With Functions author = Tobias Nipkow <http://www.in.tum.de/~nipkow/> date = 2008-08-26 topic = Mathematics/Misc abstract = This is a collection of cute puzzles of the form ``Show that if a function satisfies the following constraints, it must be ...'' Please add further examples to this collection! [FunWithTilings] title = Fun With Tilings author = Tobias Nipkow <http://www.in.tum.de/~nipkow/>, Lawrence Paulson <http://www.cl.cam.ac.uk/users/lcp/> date = 2008-11-07 topic = Mathematics/Misc abstract = Tilings are defined inductively. It is shown that one form of mutilated chess board cannot be tiled with dominoes, while another one can be tiled with L-shaped tiles. Please add further fun examples of this kind! [Lazy-Lists-II] title = Lazy Lists II author = Stefan Friedrich date = 2004-04-26 topic = Computer Science/Data Structures abstract = This theory contains some useful extensions to the LList (lazy list) theory by <a href="http://www.cl.cam.ac.uk/users/lcp/">Larry Paulson</a>, including finite, infinite, and positive llists over an alphabet, as well as the new constants take and drop and the prefix order of llists. Finally, the notions of safety and liveness in the sense of Alpern and Schneider (1985) are defined. depends-on = Coinductive [Ribbon_Proofs] title = Ribbon Proofs author = John Wickerson <http://www.pes.tu-berlin.de/menue/ueber_uns/team/john_wickerson/> date = 2013-01-19 base = HOL/HOL-Library topic = Computer Science/Programming Languages/Logics abstract = This document concerns the theory of ribbon proofs: a diagrammatic proof system, based on separation logic, for verifying program correctness. We include the syntax, proof rules, and soundness results for two alternative formalisations of ribbon proofs. 
<p> Compared to traditional proof outlines, ribbon proofs emphasise the structure of a proof, so are intelligible and pedagogical. Because they contain less redundancy than proof outlines, and allow each proof step to be checked locally, they may be more scalable. Where proof outlines are cumbersome to modify, ribbon proofs can be visually manoeuvred to yield proofs of variant programs. [Koenigsberg_Friendship] title = The Königsberg Bridge Problem and the Friendship Theorem author = Wenda Li <mailto:wl302@cam.ac.uk> date = 2013-07-19 base = HOL/Collections/Refine_Monadic/Dijkstra_Shortest_Path topic = Mathematics/Graph Theory depends-on = Refine_Monadic, Collections, Dijkstra_Shortest_Path abstract = This development provides a formalization of undirected graphs and simple graphs, which are based on Benedikt Nordhoff and Peter Lammich's simple formalization of labelled directed graphs in the archive. Then, with our formalization of graphs, we show both necessary and sufficient conditions for Eulerian trails and circuits as well as the fact that the Königsberg Bridge Problem does not have a solution. In addition, we show the Friendship Theorem in simple graphs. [IEEE_Floating_Point] title = A Formal Model of IEEE Floating Point Arithmetic author = Lei Yu <mailto:ly271@cam.ac.uk> date = 2013-07-27 topic = Computer Science/Data Structures abstract = This development provides a formal model of IEEE-754 floating-point arithmetic. This formalization, including formal specification of the standard and proofs of important properties of floating-point arithmetic, forms the foundation for verifying programs with floating-point computation. There is also a code generation setup for floats so that we can execute programs using this formalization in functional programming languages.
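The Sqrt_Babylonian entry above describes algorithms based on the Babylonian (Newton) iteration for square roots. As a rough illustration of the underlying idea for the natural-number case, here is a short Python sketch; it is not derived from the Isabelle sources, and the function name is hypothetical.

```python
def isqrt_babylonian(n: int) -> int:
    """Floor of the square root of n via the Babylonian (Newton) iteration.

    Illustrative sketch only: the Sqrt_Babylonian formalization also covers
    integers and rationals precisely, plus an approximation algorithm for
    linearly ordered fields.
    """
    if n < 0:
        raise ValueError("n must be non-negative")
    if n == 0:
        return 0
    x = n  # initial over-estimate of the root
    while True:
        y = (x + n // x) // 2  # Babylonian averaging step
        if y >= x:             # estimates stopped decreasing: converged
            return x
        x = y
```

With exact integer arithmetic the estimates decrease monotonically until they reach the floor of the root, so termination is immediate once an iteration fails to improve (e.g. `isqrt_babylonian(99)` yields `9`).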