<p>Kevin Clancy’s Blog, a blog about programming languages and computer science.</p>
<h1 id="order-theory-for-the-iridescent-astronaut">Order Theory for the Iridescent Astronaut (2018-06-13)</h1>
<h1 id="q-and-a">Q and A</h1>
<details>
<summary>Q: Do we really need another order theory tutorial for astronauts? Aren't there enough already?</summary>
<div style="background-color:lightblue">
<p>A: As far as I know, there are no order theory tutorials for astronauts. The title <em>Order Theory for the Iridescent Astronaut</em>
was only chosen to get your attention. If this tutorial isn’t for astronauts, who then is it for?</p>
<h1 id="programmers-and-computer-scientists">Programmers and computer scientists</h1>
<p>Order theory is useful for many forms of reasoning that are often relevant to the tasks programmers perform.
Have you ever performed a topological sort? That’s an inherently order theoretic operation, but it’s just the tip of the iceberg.
Order theory can also be used for reasoning about hierarchical relations (such as class hierarchies in OOP),
algorithms that perform successive refinements of an approximation (such as binary search),
causality in distributed systems, and more.</p>
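<p>To make the topological sort remark concrete, here is a minimal Python sketch of Kahn’s algorithm; the clothing example and all names are mine, not part of the tutorials themselves.</p>

```python
from collections import deque

# A minimal topological sort (Kahn's algorithm): repeatedly emit a
# vertex with no remaining incoming edges, i.e. a minimal element of
# the order induced by the edges.
def topological_sort(vertices, edges):
    indegree = {v: 0 for v in vertices}
    successors = {v: [] for v in vertices}
    for u, v in edges:
        successors[u].append(v)
        indegree[v] += 1
    ready = deque(v for v in vertices if indegree[v] == 0)
    order = []
    while ready:
        u = ready.popleft()
        order.append(u)
        for v in successors[u]:
            indegree[v] -= 1
            if indegree[v] == 0:
                ready.append(v)
    if len(order) != len(vertices):
        raise ValueError("cycle: the edges do not describe a partial order")
    return order

# socks and pants must both precede shoes:
print(topological_sort(["socks", "shoes", "pants"],
                       [("socks", "shoes"), ("pants", "shoes")]))
# -> ['socks', 'pants', 'shoes']
```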
<h1 id="the-mathematically-inclined">The mathematically inclined</h1>
<p>The goal is to understand order on a deep level, and to do that we’re going to need to work
through rigorous proofs. As a prerequisite, you will need familiarity with the following concepts:</p>
<ul>
<li>Proof by contradiction</li>
<li>Proof by contrapositive</li>
<li>Proof by vacuity (i.e. when a claim is <em>vacuously true</em>)</li>
<li>How to prove two sets are equal</li>
</ul>
<p>If you don’t know what any of those things are, you may be feeling pretty depressed right now,
but you should cheer up. Why? Because you can learn all of these things, starting now! There is a widespread societal problem where people believe they cannot learn anything beyond basic reading, writing, and arithmetic unless they are enrolled in a formal program, working
toward an advanced degree. In fact, there is a fairly effective way to learn almost anything:</p>
<ul>
<li>
<p>Step 1: Figure out what the most established and respected introductory textbooks are in the field of your choice. Find out which textbooks universities like Stanford and Princeton are using to teach their introductory courses. (They should be included in publicly visible course syllabi on the courses’ websites.)</p>
</li>
<li>
<p>Step 2: Choose a textbook, buy it from Amazon, read through the chapters and do the exercises.</p>
</li>
<li>
<p>Step 3: For further info on topics covered in the book, follow the references in the back.</p>
</li>
</ul>
<p>To learn the basics of rigorous mathematics <em>in particular</em> (which includes contrapositive, vacuity, etc.), I recommend <a href="https://www.amazon.com/Mathematics-Discrete-Introduction-Edward-Scheinerman/dp/0534398987/ref=sr_1_2?ie=UTF8&qid=1529345755&sr=8-2&keywords=mathematics+a+discrete+introduction&dpID=51rPrT%252BgOvL&preST=_SX218_BO1,204,203,200_QL40_&dpSrc=srch">Mathematics: A Discrete Introduction</a> by Ed Scheinerman. I read this book in high school. It’s really quite gentle and
approachable, though this sort of thing does not click with everyone.</p>
</div>
</details>
<details>
<summary>Q: I heard that the human psyche is a balance between <i>Order</i> and <i>Chaos</i>. Will this material help me defeat the <i>Dragon of Chaos</i>?</summary>
<div style="background-color:lightblue">
<p>A: I don’t know about that, but I can tell you that this set of tutorials is highly structured. It is organized into several topics,
as listed in the table of contents below. Each topic is divided into at most three subsections, described below:</p>
<h1 id="introduction">Introduction</h1>
<p>An introduction section provides the definitions relevant to the topic,
diagrams depicting the mathematical objects involved,
and, where appropriate, proofs central to the topic.
Proofs in an introduction section are typically hidden away in
pop-out boxes, so that their details do not obscure the
high-level view of the topic.
However, this does not mean that you should not read them;
in fact, it is important to read <em>all proofs</em> (if any) in each
introduction section. At the end of each introduction section,
several links have been placed, some to related topics and others
to the present topic’s <em>Examples</em> and <em>Exercises</em> sections.</p>
<h1 id="examples">Examples</h1>
<p>An examples section lists various ways that the present topic can be applied,
whether to other areas of mathematics, practical programming, or
underwater basket weaving. Unlike introduction sections, examples are
<em>optional</em>; some examples may reference areas of mathematics that you
are not familiar with, and it is totally okay to skip these examples.</p>
<h1 id="exercises">Exercises</h1>
<p>Exercises, like examples, are optional. The exercises of
a particular section sometimes build off of each other, so it’s best
to solve them from top to bottom. The solutions to exercises are hidden
away in pop-out boxes, obviously because you’re supposed to find the
solution yourself before looking at the provided explanation.</p>
</div>
</details>
<h1 id="begin-learning">Begin learning</h1>
<p>These tutorials are structured as a non-linear web of topics. Start reading <a href="/order_theory/Poset.html">here</a>
and follow links as you see fit. Or, if you insist on linearity, use the table of contents below, traversing topics
from top to bottom.</p>
<h1 id="table-of-contents">Table of contents</h1>
<ul>
<li><a href="/order_theory/Poset.html">Posets</a></li>
<li><a href="/order_theory/MonotoneFunction.html">Monotone Functions</a></li>
<li><a href="/order_theory/JoinAndMeet.html">Join and Meet</a></li>
<li><a href="/order_theory/Lattice.html">Lattices</a></li>
<li><a href="/order_theory/CompleteLattice.html">Complete Lattices</a></li>
</ul>
<h1 id="monotonicity-through-coeffects">Monotonicity Through Coeffects (2018-02-16)</h1>
<p><script type="math/tex">\newcommand{\sem}[1]{ [\![ #1 ]\!]}</script>
<script type="math/tex">\newcommand{\catq}[0]{ \mathbf{Q} }</script>
<script type="math/tex">\newcommand{\mbf}[1]{ \mathbf{#1} }</script>
<script type="math/tex">\newcommand{\qmon}[0]{ \mathbf{Q_{mon}} }</script>
<script type="math/tex">\newcommand{\ato}[0]{ \overset{-}{\to} }</script>
<script type="math/tex">\newcommand{\pto}[0]{ \overset{+}{\to} }</script></p>
<h1 id="more-monotonicity">More monotonicity</h1>
<p>Thanks to a tip from <a href="https://bentnib.org/">Bob Atkey</a>, I’ve recently been looking
at coeffects as a means to prove program functions monotone.
This post
contains an introduction to coeffects, along with an explanation
of how they can be used to reason about monotonicity. My goal
is to cover the key ideas of coeffects without getting into a full
treatment of their semantics.
The particular coeffect system that I will examine is taken from
<a href="http://tomasp.net/academic/papers/structural/coeffects-icfp.pdf">Coeffects: A calculus of context-dependent computation</a>
by Tomas Petricek, Dominic Orchard, and Alan Mycroft.</p>
<p>The content of this post depends on some basic category theory.
In particular, it involves functors, natural transformations,
and cartesian closed categories. We write objects using
capital letters <script type="math/tex">A,B,C,</script> etc. We write arrows as lower-case
letters <script type="math/tex">f,g,</script> etc. Categories are written using
bold capital letters <script type="math/tex">\mbf{C}, \mbf{Q},</script> etc. Natural transformations
are written as lowercase greek letters: <script type="math/tex">\delta, \epsilon, \eta</script>, etc.
The component of a natural transformation <script type="math/tex">\eta</script> at an object <script type="math/tex">A</script>
is written <script type="math/tex">(\eta)_A</script>, where the natural transformation has been
parenthesized and subscripted by the object. We write functors
using capital letters <script type="math/tex">D,F,G,</script> etc. We write the composition of two
functors <script type="math/tex">F</script> and <script type="math/tex">G</script> as <script type="math/tex">(G \circ F)</script> and the application of
a functor <script type="math/tex">F</script> to an object <script type="math/tex">A</script> as <script type="math/tex">F(A)</script>.</p>
<h1 id="coeffects">Coeffects</h1>
<p>In a structural type-and-coeffect system, a function <script type="math/tex">f</script> has a type of the form <script type="math/tex">\tau \overset{q}{\to} \sigma</script>, where <script type="math/tex">\tau</script> and <script type="math/tex">\sigma</script> are <script type="math/tex">f</script>’s domain and codomain types, and <script type="math/tex">q</script> is a <em>coeffect scalar</em>. The coeffect scalar denotes some “contextual capability” which augments the domain <script type="math/tex">\tau</script>.
We will formalize the notion of a contextual capability in the next section; until then, we will rely on intuition.</p>
<p><img src="/assets/dataflowDeriv.png" alt="Fif" /></p>
<p>Petricek et al. provide an example targeted at dataflow languages such as Lustre. In this example, time is modeled as a sequence of discrete steps, and variables denote value streams which contain one element per time step. A variable occurrence denotes its stream’s value at the
current time step, and values at previous timesteps can be accessed by wrapping occurrences of the variable in a syntactic
form called <em>pre</em>. As depicted above, a coeffect scalar in this system is a natural number denoting a lower bound on the number of cached values at consecutive previous timesteps for a particular stream. Note that certain contextual capabilities are stronger than others. For example, in the dataflow system, the coeffect scalar <script type="math/tex">1</script> allows us to read a stream’s value at 1 timestep prior, while the coeffect scalar <script type="math/tex">2</script> is <em>stronger than</em> <script type="math/tex">1</script> because it allows us to read a stream’s value both 1 <em>and</em> 2 timesteps prior.</p>
<p><img src="/assets/coeffects.png" alt="FigTwo" /></p>
<p>It may be helpful to open a new browser window fixed to <em>Figure 2</em>,
so that its rules may be quickly recalled when required by later sections.</p>
<p>We can assume that a function constant of type <script type="math/tex">\tau \overset{s}{\to} \sigma</script> “utilizes” at most
the contextual capabilities indicated by its scalar <script type="math/tex">s</script>. But an upper bound on the contextual requirements
of a lambda abstraction must be proven, using the type-and-coeffect system of <em>Figure 2</em>.
A type-and-coeffect judgment has the form
<script type="math/tex">\Gamma @ R \vdash e : \tau</script>. Here <script type="math/tex">\Gamma</script>, <script type="math/tex">e</script>, and <script type="math/tex">\tau</script> are the familiar
typing context, expression, and type. <script type="math/tex">R</script> is a vector of coeffect scalars, associating one scalar to each entry in
<script type="math/tex">\Gamma</script>; we refer to such a vector simply as a <em>coeffect</em>. We write the concatenation of coeffects <script type="math/tex">R</script> and <script type="math/tex">S</script> as <script type="math/tex">R \times S</script>.</p>
<p>The <em>ABS</em> rule says that under context <script type="math/tex">\Gamma</script> and coeffect <script type="math/tex">\langle \ldots \rangle</script>,
an abstraction <script type="math/tex">\lambda x. e</script> has type <script type="math/tex">\sigma \overset{s}{\to} \tau</script> whenever we can type-check its body under
the extended context <script type="math/tex">\Gamma, x:\sigma</script> and extended coeffect <script type="math/tex">\langle \ldots, s \rangle</script>.
The key to ensuring that our abstraction does not “overuse” the capability denoted by <script type="math/tex">s</script> is
to restrict, within the body <script type="math/tex">e</script>, both the positions in which the variable <script type="math/tex">x</script> may occur <em>and</em> the positions in which it may be discarded; the weaker the capability
a scalar denotes, the more restrictive it is syntactically. The particular set of restrictions a scalar makes is
dependent on a parameter to the structural type-and-coeffect system called the <em>coeffect scalar structure</em>.</p>
<p>A <em>coeffect scalar structure</em> <script type="math/tex">\mathbf Q = (Q, \otimes, \oplus, use, ign, \leq)</script> consists of</p>
<ul>
<li>An underlying set <script type="math/tex">Q</script> of scalars</li>
<li>Binary <em>composition</em> and <em>contraction</em> operators <script type="math/tex">\otimes</script> and <script type="math/tex">\oplus</script> on <script type="math/tex">Q</script></li>
<li>A coeffect scalar <script type="math/tex">use</script>, the most restrictive scalar that
permits a variable to appear in a hole context,
i.e. the most restrictive scalar <script type="math/tex">s</script> such that <script type="math/tex">x:\tau @ \langle s \rangle \vdash x : \tau</script> is derivable</li>
<li>
<p>A coeffect scalar <script type="math/tex">ign</script>, the most restrictive scalar which constrains a variable in a way that permits
it to be discarded, i.e. whenever <script type="math/tex">\Gamma @ \langle \ldots \rangle \vdash e : \sigma</script> is derivable so is
<script type="math/tex">\Gamma, x: \tau @ \langle \ldots, ign \rangle \vdash e : \sigma</script></p>
</li>
<li>A preorder <script type="math/tex">\leq</script> on <script type="math/tex">Q</script>, where <script type="math/tex">q_1 \leq q_2</script> means that <script type="math/tex">q_1</script> is more restrictive than <script type="math/tex">q_2</script></li>
</ul>
<p>We require <script type="math/tex">(Q, \otimes, use)</script> and <script type="math/tex">(Q, \oplus, ign)</script> to be monoids. Further,
for all <script type="math/tex">p,q,r \in Q</script> we require these <em>distributivity</em> equalities</p>
<script type="math/tex; mode=display">p~\otimes~(q~\oplus~r) = (p~\otimes~q)~\oplus~(p~\otimes~r)</script>
<script type="math/tex; mode=display">(q~\oplus~r)~\otimes~p = (q~\otimes~p)~\oplus~(r~\otimes~p)</script>
<p>Finally, both <script type="math/tex">\oplus</script> and <script type="math/tex">\otimes</script> should be monotone separately in each argument,
that is <script type="math/tex">q \leq r</script> implies</p>
<script type="math/tex; mode=display">p \oplus q \leq p \oplus r</script>
<script type="math/tex; mode=display">q \oplus p \leq r \oplus p</script>
<script type="math/tex; mode=display">p \otimes q \leq p \otimes r</script>
<script type="math/tex; mode=display">q \otimes p \leq r \otimes p</script>
<p>The coeffect scalar structure for the dataflow coeffect system is
<script type="math/tex">\mathbf{Q_{df}} = (\mathbb N, +, max, 0, 0, \leq)</script>, where <script type="math/tex">\mathbb N</script> is the set of natural numbers,
<script type="math/tex">+</script> is the addition operator on natural numbers, <script type="math/tex">max</script> is the operator which produces the greater
of two natural number arguments, and <script type="math/tex">\leq</script> is the standard comparison operator for
natural numbers. It’s instructive to consider the system of <em>Figure 2</em> instantiated
with this structure, in which the <em>VAR</em> axiom <script type="math/tex">x : \tau @ \langle 0 \rangle \vdash x : \tau</script> says
that a variable occurrence does not require the capability to read that variable’s stream at prior time steps. Assuming <script type="math/tex">B</script> is the sole base type, the <script type="math/tex">pre</script> construct, which allows us to access a stream at the previous time step,
is then implemented as a built-in
function constant of type <script type="math/tex">B \overset{1}{\to} B</script>.</p>
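<p>The scalar-structure laws are easy to machine-check for a concrete instance. Here is a small Python sketch (the encoding of <script type="math/tex">\mathbf{Q_{df}}</script> as plain functions is mine) that tests the monoid, distributivity, and monotonicity laws on sample naturals:</p>

```python
import itertools

# The dataflow coeffect scalar structure Q_df = (N, +, max, 0, 0, <=),
# encoded as plain functions, with its laws checked on sample naturals.
otimes = lambda p, q: p + q       # composition
oplus  = lambda p, q: max(p, q)   # contraction
use, ign = 0, 0
leq = lambda p, q: p <= q

for p, q, r in itertools.product(range(5), repeat=3):
    # (Q, otimes, use) and (Q, oplus, ign) are monoids
    assert otimes(p, use) == otimes(use, p) == p
    assert oplus(p, ign) == oplus(ign, p) == p
    assert otimes(otimes(p, q), r) == otimes(p, otimes(q, r))
    assert oplus(oplus(p, q), r) == oplus(p, oplus(q, r))
    # composition distributes over contraction on both sides
    assert otimes(p, oplus(q, r)) == oplus(otimes(p, q), otimes(p, r))
    assert otimes(oplus(q, r), p) == oplus(otimes(q, p), otimes(r, p))
    # both operators are monotone in each argument
    if leq(q, r):
        assert leq(oplus(p, q), oplus(p, r)) and leq(oplus(q, p), oplus(r, p))
        assert leq(otimes(p, q), otimes(p, r)) and leq(otimes(q, p), otimes(r, p))
print("all laws hold on the sampled scalars")
```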
<p>To get a feel for the <em>APP</em> rule, consider this derivation.</p>
<p><img src="/assets/preExample.png" alt="preExample" /></p>
<p>The conclusion concatenates disjoint typing contexts
and coeffects from its function and argument premises. While the scalar in the left premise (<script type="math/tex">0</script>) matches its portion of the conclusion (the <script type="math/tex">0</script> in <script type="math/tex">\langle 0, 1 \rangle</script>),
the scalar in the right premise (<script type="math/tex">0</script>) differs from <em>its</em> portion of the conclusion (the <script type="math/tex">1</script> in <script type="math/tex">\langle 0, 1 \rangle</script>). The right premise of <em>Figure 3</em> places the variable
<script type="math/tex">x</script> in an otherwise empty context, where it is required to provide its value at the current timestep and at <script type="math/tex">0</script> prior timesteps (thus the scalar <script type="math/tex">0</script>).
In the left premise, <script type="math/tex">pre</script> is a function requiring access to its argument at 1 prior timestep (thus the scalar <script type="math/tex">1</script>).
Hence, the conclusion <script type="math/tex">pre~x</script> requires access to <script type="math/tex">x</script> at <script type="math/tex">0 + 1 = 1</script> prior timesteps. The essential power of type-and-coeffect
systems is to reason about such composition of contextual capabilities, and this is handled with the <script type="math/tex">\otimes</script> operator,
which in the dataflow coeffect scalar structure <script type="math/tex">\mathbf{Q_{df}}</script>
is the <script type="math/tex">+</script> operator on natural numbers. In the <em>APP</em> rule,
<script type="math/tex">\Gamma_2</script> and <script type="math/tex">S</script> may in general contain multiple entries. When a vector coeffect rather than a scalar
is used for the second argument of <script type="math/tex">\otimes</script>, we apply <script type="math/tex">\otimes</script> componentwise:
for all coeffects <script type="math/tex">\langle s_1, s_2, \ldots, s_n \rangle</script> and all scalars <script type="math/tex">t</script> we have
<script type="math/tex">t \otimes \langle s_1, s_2, \ldots, s_n \rangle =
\langle t \otimes s_1, t \otimes s_2, \ldots, t \otimes s_n \rangle</script></p>
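<p>A tiny Python sketch of this componentwise action, instantiated for <script type="math/tex">\mathbf{Q_{df}}</script> (function name mine):</p>

```python
# In the APP rule, a function's scalar t acts on its argument's
# coeffect vector componentwise.  A sketch for Q_df, where the
# composition operator (otimes) is + on naturals:
def scale(t, S):
    return tuple(t + s for s in S)

# In the `pre x` derivation, pre's scalar 1 acts on x's coeffect <0>:
print(scale(1, (0,)))  # -> (1,)
```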
<p>Since the <em>APP</em> rule concatenates disjoint typing contexts and coeffects, one might wonder
how distinct occurrences of a single variable within the same term are typed.
It’s accomplished by <em>contracting</em> two variables in context and combining their associated contextual capabilities. This requires an application of the <em>CTXT</em> typing rule, which allows us to prove a typing judgment with context-and-coeffect <script type="math/tex">\Gamma'@R'</script> using a typing derivation with context-and-coeffect <script type="math/tex">\Gamma @ R</script>, obtained from a structural manipulation of <script type="math/tex">\Gamma'@R'</script>. The right premise allows us to apply a structural manipulation rule of the form <script type="math/tex">\Gamma'@ R' \rightsquigarrow \Gamma@R,\theta</script>; in particular, to contract two variables (and their associated capabilities) we must use the <em>CONTR</em> rule.</p>
<p><img src="/assets/contractionExample2.png" alt="contractionExample" /></p>
<h1 id="formalizing-contextual-capabilities">Formalizing contextual capabilities</h1>
<p>So far we have been keeping the notion of a contextual capability intuitive.
We will now briefly summarize the formalization of contextual capabilities.
Recall that a standard typed lambda calculus can be modeled as a cartesian closed category <script type="math/tex">\mathbf C</script>,
where for every type <script type="math/tex">\tau</script>, the denotation <script type="math/tex">\sem{\tau}</script> of <script type="math/tex">\tau</script> is an object of <script type="math/tex">\mathbf C</script>.
In particular, the interpretation of a function type <script type="math/tex">\sem{\sigma \to \tau}</script> is the
exponential object <script type="math/tex">\sem{\sigma} \Rightarrow \sem{\tau}</script>. Furthermore, the interpretation
of a typing judgment is
an arrow <script type="math/tex">\sem{x_1:\tau,\ldots,x_n:\tau_n \vdash e : \sigma} : \sem{\tau_1} \times \ldots \times \sem{\tau_n} \to \sem{\sigma}</script>.</p>
<p>To model type-and-coeffect judgments categorically, we interpret
the scalar coeffect structure <script type="math/tex">\catq = (Q,\otimes,\oplus,use,ign,\leq)</script> as a category using
the standard interpretation of preorders as categories. However, we use the transpose of <script type="math/tex">\leq</script> for
the preorder.
That is, the objects of <script type="math/tex">\catq</script>
are the elements of <script type="math/tex">Q</script>, and for each <script type="math/tex">q_1, q_2 \in Q</script> with <script type="math/tex">q_2 \leq q_1</script>,
<script type="math/tex">\catq</script> has one arrow with domain <script type="math/tex">q_1</script> and codomain <script type="math/tex">q_2</script>. We consider <script type="math/tex">\catq</script> a strictly monoidal
category with monoidal product <script type="math/tex">\otimes</script> and unit <script type="math/tex">use</script>.</p>
<p>We interpret a structural type-and-coeffect system (instantiated with a scalar coeffect structure <script type="math/tex">\catq</script>)
with respect to a cartesian closed category <script type="math/tex">\mathbf C</script> and an <em>indexed comonad</em> functor <script type="math/tex">D</script></p>
<script type="math/tex; mode=display">D : \catq \to [\mathbf C,\mathbf C]</script>
<p>Writing <script type="math/tex">D_{q}</script> for the application of <script type="math/tex">D</script> to the scalar <script type="math/tex">q</script>, <script type="math/tex">D</script> is associated with a <em>counit</em>
natural transformation</p>
<script type="math/tex; mode=display">\epsilon_{use} : D_{use} \to 1_{\mathbf C}</script>
<p>and a family of <em>comultiplication</em> natural transformations</p>
<script type="math/tex; mode=display">\delta_{q,r} : D_{q \otimes r} \to D_{q} \circ D_{r}</script>
<p>making the following diagrams commute for all objects <script type="math/tex">C</script> of <script type="math/tex">\mathbf C</script></p>
<p><img src="/assets/commutativeDiagrams2.png" alt="commDiagramsC" /></p>
<p>In true categorical fashion, a contextual capability is defined not by its structure, but how it behaves.
In particular, the contextual capability denoted by <script type="math/tex">q</script> is the endofunctor <script type="math/tex">D_{q} : \mathbf C \to \mathbf C</script>.
<script type="math/tex">D_{q}</script> operates on an object <script type="math/tex">C</script> of <script type="math/tex">\mathbf C</script> by “placing it into a context with capability <script type="math/tex">q</script>”.
It operates on an arrow <script type="math/tex">f : A \to B</script> of <script type="math/tex">\mathbf C</script>,
which for our purposes represents a transformation,
by converting it into a transformation <script type="math/tex">D_{q} f : D_q A \to D_q B</script> that preserves the
contextual capability <script type="math/tex">q</script>.</p>
<p><script type="math/tex">\epsilon_{use}</script> is the transformation of pulling an object out of a context,
while for scalars <script type="math/tex">q,r \in \catq</script>, <script type="math/tex">\delta_{q,r}</script> is the transformation of decomposing a single context into a
context-in-a-context. Whenever <script type="math/tex">p, q \in \catq</script> such that <script type="math/tex">p \leq q</script>, the functoriality of <script type="math/tex">D</script>
gives a natural transformation <script type="math/tex">sub_{p,q} : D_{q} \to D_{p}</script> which helps justify the context manipulation rule called <em>SUB</em>.</p>
<p>The bottom left corner of the top diagram therefore states that a transformation which</p>
<ol>
<li>Decomposes a context of capability <script type="math/tex">q</script> into an empty context containing a context of capability <script type="math/tex">q</script>.</li>
<li>Then takes the inner context of capability <script type="math/tex">q</script> out of the empty context.</li>
</ol>
<p>is equivalent to the identity transformation, which does nothing. The bottom diagram shows that context decomposition
is in a sense associative.</p>
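<p>Written out at a component <script type="math/tex">C</script>, the diagrams assert the following equations (my reconstruction from the description above and the standard indexed comonad laws; the figure itself is only an image):</p>

```latex
% Counit laws (note q \otimes use = use \otimes q = q):
(\epsilon_{use})_{D_q(C)} \circ (\delta_{use,q})_C = 1_{D_q(C)}
  = D_q\big((\epsilon_{use})_C\big) \circ (\delta_{q,use})_C

% Coassociativity of comultiplication:
(\delta_{q,r})_{D_s(C)} \circ (\delta_{q \otimes r,\, s})_C
  = D_q\big((\delta_{r,s})_C\big) \circ (\delta_{q,\, r \otimes s})_C
```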
<p>With the indexed comonad <script type="math/tex">D</script>, a function type <script type="math/tex">\sigma \overset{s}{\to} \tau</script> is interpreted as the exponential
<script type="math/tex">D_{s} \sem{\sigma} \Rightarrow \sem{\tau}</script>. A single-variable type-and-coeffect judgment
<script type="math/tex">x:\sigma @ \langle s \rangle \vdash e : \tau</script> is interpreted as an arrow of type
<script type="math/tex">D_{s}\sem{\sigma} \to \sem{\tau}</script>. Of course, we also need a way to interpret type-and-coeffect judgments
with multiple variables in context, as well as interpretations of structural rules such as contraction
and weakening. For this we need <em>structural indexed comonads</em>, a topic outside the scope of this post.
For a full account of categorical semantics for coeffects, see <a href="http://tomasp.net/academic/papers/structural/coeffects-icfp.pdf">Petricek</a>.</p>
<p>At this point, you might be wondering about the goal of all this indexed
comonad business. Furthermore, if the coeffect <script type="math/tex">R</script> in a
type-and-coeffect judgment <script type="math/tex">\Gamma @ R \vdash e : \tau</script> just denotes
some applications of endofunctors, why bother making those applications
explicit in the judgment system? In other words, why
have a judgment of the form <script type="math/tex">x : Nat @ \langle - \rangle \vdash negate~x : Nat</script>
rather than <script type="math/tex">x : D_{-}(Nat) \vdash negate~x : Nat</script>?
The answer is that the domain has been decomposed into a portion <script type="math/tex">\Gamma</script> that
is handled manually by the programmer and a portion <script type="math/tex">R</script> that is handled automatically
by the indexed comonad. To apply a function of type <script type="math/tex">\sigma \overset{t}{\to} \tau</script>,
the programmer must place an expression of type <script type="math/tex">\sigma</script> in the argument position of the application.
Importantly, the programmer does <em>not</em> supply an argument of type <script type="math/tex">D_t( \sigma)</script>;
the contextual capability <script type="math/tex">t</script> is automatically piped in to the function application.</p>
<p>To convey the intuition behind this automatic piping of contextual capabilities,
we present the notion of an indexed coKleisli category.</p>
<p>If <script type="math/tex">D : \mbf{Q} \to [\mbf{C},\mbf{C}]</script> is an indexed comonad, the <em>indexed
coKleisli category determined by <script type="math/tex">D</script></em>, written <script type="math/tex">\mbf{D^\ast}</script>, is
defined such that</p>
<ul>
<li>
<p>The objects of <script type="math/tex">\mbf{D^\ast}</script> are the same as the objects of <script type="math/tex">\mbf{C}</script>.</p>
</li>
<li>
<p>For all <script type="math/tex">t \in \mbf{Q}</script>, and objects <script type="math/tex">A</script> and <script type="math/tex">B</script> of <script type="math/tex">\mbf{C}</script>,
an arrow <script type="math/tex">f' : D_{t}(A) \to B</script> is considered as an arrow <script type="math/tex">f : A \to B</script>
of <script type="math/tex">\mbf{D^\ast}</script>.</p>
</li>
<li>
<p>Given arrows <script type="math/tex">f : A \to B</script> and <script type="math/tex">g : B \to C</script> of <script type="math/tex">\mbf{D^\ast}</script> with underlying
arrows <script type="math/tex">f' : D_t(A) \to B</script> and <script type="math/tex">g' : D_s(B) \to C</script> in <script type="math/tex">\mbf{C}</script>,
we define the composition <script type="math/tex">g \circ f : A \to C</script> as the arrow generated
from the arrow <script type="math/tex">(g \circ f)' : D_{s \otimes t}(A) \to C</script> defined as
<script type="math/tex">(g \circ f)' = g' \circ D_{s}(f') \circ (\delta_{s,t})_A</script>.</p>
</li>
<li>
<p>For each object <script type="math/tex">A</script> of <script type="math/tex">\mbf{D^\ast}</script>, the identity arrow <script type="math/tex">1_A</script> in <script type="math/tex">\mbf{D^\ast}</script>
has the underlying arrow <script type="math/tex">1'_A = (\epsilon_{use})_A</script> in <script type="math/tex">\mbf{C}</script>.</p>
</li>
</ul>
<p>In an indexed coKleisli category, we can compose arrows without regard
to the scalars attached to their underlying arrows. In this way, programming in a
type-and-coeffect system is kind of like composing arrows in a coKleisli category.</p>
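<p>A Python sketch of coKleisli composition for the dataflow comonad, representing an element of <script type="math/tex">D_n(A)</script> as a pair of a current value and an <script type="math/tex">n</script>-tuple of cached previous values, most recent first (representation and names are mine):</p>

```python
# coKleisli composition, following (g . f)' = g' . D_s(f') . (delta_{s,t})_A.

def delta(s, t, da):
    a, history = da
    assert len(history) == s + t
    vals = (a,) + history
    piece = lambda i: (vals[i], vals[i + 1 : i + 1 + t])
    return (piece(0), tuple(piece(i) for i in range(1, s + 1)))

def D_map(f, da):
    a, history = da
    return (f(a), tuple(f(x) for x in history))

def cokleisli_compose(g, s, f, t):
    """Compose underlying arrows f' : D_t(A) -> B and
    g' : D_s(B) -> C into (g . f)' : D_{s+t}(A) -> C."""
    return lambda da: g(D_map(f, delta(s, t, da)))

# pre reads the value one timestep back: pre' : D_1(A) -> A.
pre = lambda da: da[1][0]
# pre composed with pre needs 1 + 1 = 2 cached timesteps.
pre2 = cokleisli_compose(pre, 1, pre, 1)
print(pre2((5, (4, 3))))  # -> 3
```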
<p>To provide some further intuition, in an algorithmic type-and-coeffect system, <script type="math/tex">R</script> in the judgment
<script type="math/tex">\Gamma @ R \vdash e : \tau</script> is an output position, much like <script type="math/tex">\tau</script>;
in contrast, <script type="math/tex">\Gamma</script> is an input position. The contextual capabilities required to execute expression <script type="math/tex">e</script> must be synthesized from the structure
of <script type="math/tex">e</script> rather than inherited from <script type="math/tex">e</script>’s context.</p>
<p>We can interpret dataflow coeffects by letting <script type="math/tex">\mathbf C = \mathbf{Sets}</script>, the category of sets and functions,
and using the indexed comonad <script type="math/tex">D : \mathbf{Q_{df}} \to [\mathbf{Sets}, \mathbf{Sets}]</script> where for
all <script type="math/tex">n, m \in \mathbb{N}</script>, sets <script type="math/tex">A</script> and <script type="math/tex">B</script>, and functions <script type="math/tex">f : A \to B</script>, we have</p>
<script type="math/tex; mode=display">D_{n}(A) = A \times A^n</script>
<p>and</p>
<script type="math/tex; mode=display">D_{n}(f : A \to B) = \lambda (a, (a_1, \ldots, a_n)).~(f~a, (f~a_1, \ldots, f~a_n))
: A \times A^n \to B \times B^n</script>
<p>We place the type <script type="math/tex">A</script> into a context of capability <script type="math/tex">n</script> by pairing it up with
its n-ary self-product, where the ith component of the product represents
the value of the stream at the ith previous consecutive timestep.
To transform a function <script type="math/tex">f</script> into an equivalent function which
preserves the contextual capability <script type="math/tex">n</script>, we must apply the function not
only at the current timestep, but also at the <script type="math/tex">n</script> cached consecutive
previous timesteps.</p>
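<p>As a concrete sketch, here is this action on objects and arrows in Python, representing an element of <script type="math/tex">D_n(A)</script> as a pair of a current value and an <script type="math/tex">n</script>-tuple of cached previous values, most recent first (the representation is mine):</p>

```python
# A sketch of the dataflow indexed comonad's action on arrows.
def D_map(n, f, da):
    """D_n(f): apply f at the current timestep and at each of the
    n cached previous timesteps."""
    a, history = da
    assert len(history) == n
    return (f(a), tuple(f(x) for x in history))

# Stream value 5 now, with previous values 4 and 3 cached:
print(D_map(2, lambda x: x * 10, (5, (4, 3))))  # -> (50, (40, 30))
```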
<script type="math/tex; mode=display">(\epsilon_0)_A = \lambda (a, ()). a</script>
<p>We pull a value of type <script type="math/tex">A</script> out of a context with 0 cached previous
consecutive timesteps simply by getting its value at the current timestep.</p>
<script type="math/tex; mode=display">(\delta_{n,m})_A = \lambda (a, (a_1, \ldots, a_{m + n})). \\
((a, (a_1, \ldots, a_m)), ((a_1, (a_2, \ldots, a_{m+1})), \ldots,
(a_n, (a_{n + 1}, \ldots, a_{n + m}))))</script>
<p>Having cached <script type="math/tex">n+m</script> previous consecutive timesteps is equivalent to having
cached the length-<script type="math/tex">m</script> suffixes at the previous <script type="math/tex">n</script> consecutive timesteps.</p>
<p>Finally, we can discard cached values. For <script type="math/tex">n \leq m</script> we have</p>
<script type="math/tex; mode=display">(sub_{n, m})_A = \lambda (a, (a_1, \ldots, a_m)). (a, (a_1, \ldots, a_n))</script>
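<p>Here is a Python sketch of these three transformations, again representing an element of <script type="math/tex">D_n(A)</script> as a pair of a current value and an <script type="math/tex">n</script>-tuple of cached previous values (representation mine):</p>

```python
def counit(da):
    """(eps_0)_A : D_0(A) -> A: read the current value out of a
    context with no cached history."""
    a, history = da
    assert history == ()
    return a

def comult(n, m, da):
    """(delta_{n,m})_A : D_{n+m}(A) -> D_n(D_m(A)): reslice n+m cached
    values into length-m histories at the n previous timesteps."""
    a, history = da
    assert len(history) == n + m
    vals = (a,) + history
    piece = lambda i: (vals[i], vals[i + 1 : i + 1 + m])
    return (piece(0), tuple(piece(i) for i in range(1, n + 1)))

def sub(n, m, da):
    """(sub_{n,m})_A : D_m(A) -> D_n(A) for n <= m: discard the
    oldest cached values."""
    a, history = da
    assert n <= m == len(history)
    return (a, history[:n])

da = (5, (4, 3, 2))
print(comult(1, 2, da))  # -> ((5, (4, 3)), ((4, (3, 2)),))
# One counit law: decompose into an empty outer context, then pull
# the inner context out; this is the identity.
assert counit(comult(0, 3, da)) == da
```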
<h1 id="monotonicity-as-coeffect">Monotonicity as coeffect</h1>
<p><img src="/assets/operators.png" alt="monOperators" /></p>
<p><img src="/assets/monHasse.png" alt="monHasse" /></p>
<p>Instantiating the coeffect calculus of <em>Figure 2</em> with the following coeffect scalar structure results in a system which allows us to prove functions monotone.</p>
<ul>
<li>The scalar set is <script type="math/tex">Q = \{ ?, +, -, \sim \}</script>, where <script type="math/tex">?</script> denotes the capability to be transformed in an equivalence-class-preserving manner,
<script type="math/tex">+</script> to be transformed monotonically, <script type="math/tex">-</script> to be transformed antitonically,
and <script type="math/tex">\sim</script> to be transformed in an order-robust manner.
(An order-robust function maps any two input values related by the symmetric transitive closure of
its domain's order to two isomorphic elements of its codomain.)</li>
<li>The composition and contraction operators
<script type="math/tex">\otimes</script> and <script type="math/tex">\oplus</script> are defined in <em>Figure 6</em>.</li>
<li>The <script type="math/tex">use</script> scalar is <script type="math/tex">+</script>.</li>
<li>The <script type="math/tex">ign</script> scalar is <script type="math/tex">\sim</script>.</li>
<li>The preorder is the partial order <script type="math/tex">\leq</script> depicted in <em>Figure 7</em>.</li>
</ul>
<p>For the underlying cartesian closed category <script type="math/tex">\mathbf C</script> of our semantics, we choose <script type="math/tex">\mathbf{Preorder}</script>, the category
of preordered sets and monotone functions.
For preordered sets <script type="math/tex">(A, \leq_A)</script> and <script type="math/tex">(B, \leq_B)</script>, and monotone
function <script type="math/tex">f : (A, \leq_A) \to (B, \leq_B)</script>, our indexed comonad <script type="math/tex">D</script> is defined as</p>
<script type="math/tex; mode=display">D_{?}(A, \leq_A) = (A, =_A)</script>
<script type="math/tex; mode=display">D_{+}(A, \leq_A) = (A, \leq_A)</script>
<script type="math/tex; mode=display">D_{-}(A, \leq_A) = (A, \geq_A)</script>
<script type="math/tex; mode=display">D_{\sim}(A, \leq_A) = (A, \ast_A)</script>
<script type="math/tex; mode=display">D_{q}(f) : D_{q}(A) \to D_{q}(B) = \text{the arrow with the same underlying function as f}</script>
<p>where</p>
<ul>
<li><script type="math/tex">\geq_A</script> is the <em>dual</em> of <script type="math/tex">(A, \leq_A)</script>, that is, the preordered set defined such that for all
<script type="math/tex">a_1,a_2 \in A</script> we have <script type="math/tex">a_1 \geq_A a_2</script> if and only if <script type="math/tex">a_2 \leq_A a_1</script>.</li>
<li><script type="math/tex">=_A</script> is the order obtained by removing from <script type="math/tex">\leq_A</script>
all edges <script type="math/tex">(a,a')</script>
which do not have a symmetric counterpart <script type="math/tex">(a',a)</script> in <script type="math/tex">\leq_A</script>.</li>
<li><script type="math/tex">\ast_A</script> is the symmetric transitive closure of <script type="math/tex">\leq_A</script>.</li>
</ul>
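<p>These three constructions are easy to compute for a finite preorder given as a set of pairs; the following sketch (names mine) makes the definitions concrete:</p>

```python
# A minimal sketch of the order constructions above, with a finite
# preorder given as a set of pairs (a, b) meaning a <= b.

def dual(leq):
    """>=_A : flip every edge."""
    return {(b, a) for (a, b) in leq}

def symmetric_core(leq):
    """=_A : keep only edges whose symmetric counterpart is present."""
    return {(a, b) for (a, b) in leq if (b, a) in leq}

def sym_trans_closure(leq):
    """*_A : the symmetric transitive closure of <=_A."""
    rel = leq | dual(leq)
    changed = True
    while changed:
        extra = {(a, d) for (a, b) in rel for (c, d) in rel if b == c} - rel
        changed = bool(extra)
        rel |= extra
    return rel

# The two-point chain false <= true:
leq = {('f', 'f'), ('t', 't'), ('f', 't')}
print(symmetric_core(leq))                    # only the reflexive edges survive
print(('t', 'f') in sym_trans_closure(leq))   # True: the chain collapses
```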
<p>For all preorders <script type="math/tex">S</script> and all <script type="math/tex">p,q \in Q</script> we have</p>
<script type="math/tex; mode=display">(\epsilon_{+})_{S} = (\lambda x.~x)\text{ and }(\delta_{p,q})_{S} = (\lambda x.~x)</script>
<p>Furthermore, if <script type="math/tex">p \leq q</script> then we have</p>
<script type="math/tex; mode=display">(sub_{p,q})_S = (\lambda x.~x)</script>
<p>These definitions are a bit anticlimactic, but they make sense when one considers
that <script type="math/tex">D_{+}(S) = S</script> and <script type="math/tex">D_{p \otimes q}(S) = (D_{p} \circ D_{q})(S)</script>.</p>
<p>The natural transformations of monotonicity coeffects have a much different flavor
than those of the dataflow coeffects, in that they have no significant “runtime
content”. While it’s tempting to think of a contextual capability as
some extra information that is paired with each input value supplied to a
function, monotonicity coeffects show that this really isn’t the case.</p>
<h1 id="example-derivation">Example derivation</h1>
<p>I will demonstrate this system using an example from page 2 of <a href="http://db.cs.berkeley.edu/papers/cidr11-bloom.pdf">Consistency Analysis in Bloom: a CALM and Collected
Approach</a>. Let <script type="math/tex">Nat</script> be the type of natural numbers,
let <script type="math/tex">Set[Nat]</script> be the type of finite sets of natural numbers, ordered by inclusion, and let <script type="math/tex">Bool</script> be the two element
preordered set containing the boolean values true and false, ordered such that <script type="math/tex">false \leq true</script>.
The example requires us to prove the expression <script type="math/tex">% <![CDATA[
Min(X) < 10 %]]></script> monotone, where</p>
<ul>
<li><script type="math/tex">X</script> is a variable of type <script type="math/tex">Set[Nat]</script></li>
<li><script type="math/tex">Min</script> is a function of type <script type="math/tex">Set[Nat] \ato Nat</script>, which produces the minimum element contained in the argument</li>
<li><script type="math/tex">% <![CDATA[
< %]]></script> is the standard “less than” operator on naturals, of type <script type="math/tex">Nat \ato (Nat \pto Bool)</script></li>
</ul>
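<p>Before the formal derivation, it is worth seeing why the claim is plausible: enlarging <script type="math/tex">X</script> can only decrease <script type="math/tex">Min(X)</script>, so the predicate can only move from false to true, and <script type="math/tex">false \leq true</script>. A quick sanity check (not the type derivation itself):</p>

```python
# Illustrative check: along a chain of sets ordered by inclusion, the
# value of "min(X) < 10" never falls back from True to False.

def pred(X):
    return min(X) < 10

chain = [{42}, {42, 15}, {42, 15, 7}, {42, 15, 7, 3}]
results = [pred(X) for X in chain]
print(results)  # [False, False, True, True] -- monotone along the chain
```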
<p>Renaming “<” to “Lt”, <script type="math/tex">% <![CDATA[
Min(X) < 10 %]]></script> rewritten to prefix notation is <script type="math/tex">(Lt~(Min~X))~10</script>.
The following abbreviations allow our proofs to fit on the page.</p>
<ul>
<li><script type="math/tex">ty(Min)</script> abbreviates <script type="math/tex">Set[Nat] \ato Nat</script></li>
<li><script type="math/tex">ty(Lt)</script> abbreviates <script type="math/tex">Nat \ato (Nat \pto Bool)</script></li>
</ul>
<p>Using Petricek’s coeffect system, proof of the judgment
<script type="math/tex">Lt : ty(Lt), Min : ty(Min), X : Set[Nat] @ \langle ?, ?, + \rangle \vdash (Lt~(Min~X))~10</script> follows.</p>
<p><img src="/assets/derivation1.png" alt="derivation1" /></p>
<p>where <script type="math/tex">\Pi_1</script> is</p>
<p><img src="/assets/derivation2.png" alt="derivation2" /></p>
<p>and <script type="math/tex">\Pi_2</script> is</p>
<p><img src="/assets/derivations3-2.png" alt="derivation3" /></p>
<p>and <script type="math/tex">\Pi_3</script> is</p>
<p><img src="/assets/derivations4-2.png" alt="derivation4" /></p>
<p>and <script type="math/tex">\Pi_4</script> is</p>
<p><img src="/assets/derivations5-2.png" alt="derivation5" /></p>
<h1 id="conclusion">Conclusion</h1>
<p>Coeffects are a simple yet extremely general framework enabling type systems which can prove,
among many other things, that a program function is monotone.
I think that recently proposed programming frameworks such as Bloom, LVars, and Lasp
can benefit from this power. In addition, I believe all of these frameworks
would also benefit from the ability to prove that a program function is a semilattice
homomorphism. For many semilattice datatypes, a monotonicity-aware type system such
as the one presented here can be used for this purpose, since a monotone function
defined on the join-irreducible elements of a certain (common) kind of semilattice
can be converted into a homomorphism on the semilattice itself.</p>
<p>While this post focused primarily on semantics, there are pragmatic issues
which need addressing.</p>
<p>First, the coeffect calculus presented here
is non-algorithmic. However, it’s not hard to imagine an algorithmic variant,
developed under the same strategy as algorithmic linear type systems:
a type-and-coeffect judgment would include an additional vector of
<em>output</em> coeffects, denoting that portion of the input capability which
is not actually utilized by the term. It may be necessary to place
some additional restrictions on the scalar coeffect structure, but I expect
that the scalar coeffect structure for monotonicity coeffects would satisfy these
restrictions.</p>
<p>Second, even with an algorithmic system, it isn’t currently clear to me
that this approach is going to be mentally tractable for
programmers, as there’s a fair amount of coeffect shuffling that happens
to the left of the turnstile. In my opinion, a type system is most useful
as a <em>design tool</em>, a mental framework that a programmer uses to compose
a complex program from simple parts. I’m ultimately looking for such
a mental framework, rather than a mere verification tool, for proving
program functions monotone.</p>
<p>It’s exciting to imagine a statically typed variant of BloomL.
Perhaps coeffects can serve as the basis for static BloomL’s type system.</p>
<h1 id="links">Links</h1>
<p>If you found this post interesting, these links may interest you as well.</p>
<p><a href="http://www.rntz.net/datafun/">Datafun</a>, a functional calculus inspired by Datalog,
currently has the best monotonicity type system that I know of, as well
as a working prototype. Interestingly, it features the ability
to take the fixpoint of a monotone function under conditions which
guarantee its existence.</p>
<p>Tomas Petricek created a really cool <a href="http://tomasp.net/coeffects/">interactive web tutorial</a>
for coeffects.</p>
<p>The semantics of Petricek et al.’s coeffect calculus is based on category
theory. I’ve found <a href="https://www.amazon.com/Category-Theory-Oxford-Logic-Guides/dp/0199237182">Awodey’s book</a> one of the best sources on the subject.
Presentations of categorical type system semantics can be daunting,
but <a href="https://arxiv.org/abs/1102.1313">Abramsky and Tzevelekos’ lecture notes</a> are a highly accessible
exception to the rule.</p>
<h1 id="acknowledgements">Acknowledgements</h1>
<p>Thanks to Michael Arntzenius and Chris Meiklejohn for providing feedback and catching errors in this post.</p>Monotonicity Through Types2017-11-09T14:05:41-05:002017-11-09T14:05:41-05:00/2017/11/09/monotonicity-through-types<p>A partially ordered set is a set <script type="math/tex">P</script> endowed with a binary relation <script type="math/tex">\leq</script> on <script type="math/tex">P</script> such that for all <script type="math/tex">p, q, r \in P</script> we have:</p>
<p>1.) <script type="math/tex">p \leq p</script> (reflexivity)</p>
<p>2.) <script type="math/tex">p \leq q</script> and <script type="math/tex">q \leq r</script> implies <script type="math/tex">p \leq r</script> (transitivity)</p>
<p>3.) <script type="math/tex">p \leq q</script> and <script type="math/tex">q \leq p</script> implies <script type="math/tex">p = q</script> (anti-symmetry)</p>
<p>If <script type="math/tex">P</script> and <script type="math/tex">Q</script> are partially ordered sets, we say that a function <script type="math/tex">f : P \to Q</script> between them is monotone if for all <script type="math/tex">p_1, p_2 \in P</script> with <script type="math/tex">p_1 \leq p_2</script>, we have <script type="math/tex">f(p_1) \leq f(p_2)</script>.</p>
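<p>For finite carriers, this definition can be checked by brute force; the helper below and its names are mine, purely for illustration:</p>

```python
# A brute-force transcription of the definition above, over a finite carrier.

def is_monotone(f, elems, leq_P, leq_Q):
    """For every p1 <= p2 in P, require f(p1) <= f(p2) in Q."""
    return all(leq_Q(f(p1), f(p2))
               for p1 in elems for p2 in elems if leq_P(p1, p2))

leq = lambda a, b: a <= b
geq = lambda a, b: a >= b
nats = range(5)

print(is_monotone(lambda n: n + 1, nats, leq, leq))  # True
print(is_monotone(lambda n: -n, nats, leq, leq))     # False: negation is antitone
print(is_monotone(lambda n: -n, nats, leq, geq))     # True: monotone into the dual
```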
<p>Several recent research papers (for example <a href="http://christophermeiklejohn.com/publications/ppdp-2015-preprint.pdf">Lasp</a> and <a href="http://www.neilconway.org/docs/socc2012_bloom_lattices.pdf">BloomL</a>) propose programming frameworks which utilize monotone functions as primitives of program composition, but they provide the user with a fixed set of monotone functions to work with. A type system capable of proving program functions monotone may enable the development of extensible versions of such frameworks.</p>
<p>Lately, I’ve been designing such a type system, for an extension of the simply typed lambda calculus. Since the programmer only cares about the monotonicity of a select group of functions, a special syntax construct, the <em>sfun abstraction</em>, serves as a signal to the type checker: unlike the simply typed world outside of the sfun abstraction, the body of the sfun abstraction is type checked using a special type system, which I call the lifted type system, in which monotonicity is tracked.</p>
<p>Reasoning about pointwise orderings on function spaces seems a bit heavy-weight and hasn’t been necessary for any of my use cases. An sfun is therefore first order; that is, both its return type and all of its argument types must be data types rather than function types. We would like to be able to prove that a multi-argument function is monotone <em>separately</em> in each of its
arguments; that is, for <script type="math/tex">i \in 1..n</script>, if <script type="math/tex">p_i \leq p_i'</script> then <script type="math/tex">f(p_1, \ldots, p_i, \ldots, p_n) \leq f(p_1, \ldots, p_i', \ldots, p_n)</script>.</p>
<p>The monotonicity of an sfun is typically derived from the monotonicity of the primitives used to implement it, which are also sfuns. Here are some example sfun primitives, addition and subtraction on integers:</p>
<p>1.) plus : <script type="math/tex">(x : Int, y : Int) \Rightarrow Int[\uparrow x, \uparrow y]</script></p>
<p>2.) minus : <script type="math/tex">(x : Int, y : Int) \Rightarrow Int[\uparrow x, \downarrow y]</script></p>
<p>An <em>sfun type</em>, written with <script type="math/tex">\Rightarrow</script> rather than <script type="math/tex">\rightarrow</script>, names its formal arguments and also <em>qualifies</em> each one. A qualifier is an argument-specific constraint on the behavior of the function. In the above types, the qualifier <script type="math/tex">\uparrow</script> is associated with arguments that are separately monotone and <script type="math/tex">\downarrow</script> is associated with arguments that are separately antitone. The second argument of a binary function <script type="math/tex">f</script> is separately antitone if <script type="math/tex">p_2 \leq p_2'</script> implies <script type="math/tex">f(p_1, p_2) \geq f(p_1, p_2')</script>.</p>
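<p>Separate monotonicity and antitonicity can likewise be checked by brute force on a small range of integers; the helper below is an illustrative sketch of what the two signatures claim, not part of the calculus:</p>

```python
# Brute-force check that a binary function is separately monotone ('up')
# or separately antitone ('down') in one argument, over a small range.

def separately(tonicity, f, arg_index, domain=range(-3, 4)):
    cmp = (lambda a, b: a <= b) if tonicity == 'up' else (lambda a, b: b <= a)
    for fixed in domain:
        for v1 in domain:
            for v2 in domain:
                if v1 <= v2:
                    args1 = [fixed, fixed]; args1[arg_index] = v1
                    args2 = [fixed, fixed]; args2[arg_index] = v2
                    if not cmp(f(*args1), f(*args2)):
                        return False
    return True

plus = lambda x, y: x + y
minus = lambda x, y: x - y
print(separately('up', plus, 0), separately('up', plus, 1))      # True True
print(separately('up', minus, 0), separately('down', minus, 1))  # True True
```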
<p>Terms outside of sfun abstractions are typed using a <em>global</em> typing relation,
which, aside from an sfun abstraction typing rule, is not different from the
typing relations we are familiar with. A global typing judgment has the following form.</p>
<script type="math/tex; mode=display">\Gamma \vdash t : T</script>
<p>A typing judgment of the lifted type system, used to type check the body of an sfun, has the following form:</p>
<script type="math/tex; mode=display">\Gamma;\Omega;\Phi \vdash t : T</script>
<p>Here the <em>global type environment</em> <script type="math/tex">\Gamma</script> contains all of the variables bound outside of the sfun, the <em>ambient type environment</em> <script type="math/tex">\Omega</script> contains the list of the sfun’s formal arguments, and the
<em>lifted type environment</em> <script type="math/tex">\Phi</script> contains those variables in <script type="math/tex">t</script>’s context which are bound inside the sfun. Before getting into the significance of lifted typing judgments, let’s look
at a specific application of the global typing rule for sfun abstractions, which uses a single lifted premise.</p>
<script type="math/tex; mode=display">\frac{\Gamma;x:Int;x:Int[=~x] \vdash plus(x,x) : Int[\uparrow~x]}
{\Gamma \vdash \tilde{\lambda} x : Int. plus(x,x) : ( x : Int ) \Rightarrow Int[\uparrow~x]}</script>
<p>Here we type a single-argument sfun abstraction <script type="math/tex">\tilde{\lambda} x:Int. plus(x,x)</script>. As you might
have guessed, <script type="math/tex">\tilde{\lambda}</script> is used rather than <script type="math/tex">\lambda</script> to distinguish this as an
sfun abstraction rather than a standard one. Examine the ambient and lifted type environments
used in the premise. Perhaps surprisingly, the abstraction’s bound variable <script type="math/tex">x</script> is entered into both environments. When variables occur in types, they are considered references to formal arguments
rather than actual arguments; that is, an occurrence of <script type="math/tex">x</script> in a type (for example <script type="math/tex">Int[\uparrow x]</script>) does not refer to some integer, but instead a “slot” named <script type="math/tex">x</script> which expects to receive some integer from an external source.
Inside the scope of the sfun abstraction, we would like the ability to refer to the abstraction’s formal argument <script type="math/tex">x</script>, and therefore we add <script type="math/tex">x : Int</script> to the ambient environment.
We would also like to include occurrences of <script type="math/tex">x</script> as terms in the body of the abstraction; for these, we add the entry <script type="math/tex">x : Int[=~x]</script> into the lifted type environment, to be used as a
placeholder for the actual argument supplied to the formal argument <script type="math/tex">x</script>. Because references to formal arguments occur only in types, and references to actual arguments occur only in terms,
we can add entries with the same name to both the ambient and lifted environments without creating any ambiguity.</p>
<p>The premise of the above rule application includes the strange looking types <script type="math/tex">Int[=~x]</script> and <script type="math/tex">Int[\uparrow~x]</script>.
Normally, we would expect occurrences of x, which serve as placeholders for the actual argument
of the function, to have type <script type="math/tex">Int</script>, and we would expect our abstraction’s body <script type="math/tex">plus(x,x)</script> to
have type <script type="math/tex">Int</script> as well. This traditional approach to typing a function abstraction
characterizes the operational behavior of a single function <em>after</em> it has been applied.
Unfortunately, this isn’t adequate for reasoning about properties such as monotonicity,
which involve multiple calls to the same function. My approach instead takes the
perspective of inside of a function, <em>before</em> it has been applied. Lifted typing then
characterizes the structure of a function as the composition of its constituent parts.
In the above example, an occurrence of the variable <script type="math/tex">x</script> in the term <script type="math/tex">plus(x,x)</script>
has type <script type="math/tex">Int[=~x]</script>, meaning that it is a function which takes the value provided to <script type="math/tex">x</script>
(the enclosing sfun’s formal argument) as an input, and produces that value unchanged
as a result. We ultimately care about the input/output relation of this function,
and so the concrete values which inhabit this type are set-of-pairs function representations.
The type <script type="math/tex">Int[=~x]</script> happens to be a singleton type, containing the set of pairs
<script type="math/tex">\{ (0,0), (1,1), (-1,-1), (2,2), (-2,-2), \ldots \}</script>.</p>
<p>The sfun application <script type="math/tex">plus(x,x)</script> is viewed as a function composition,
where the outputs of the functions represented by the two occurrences of <script type="math/tex">x</script>
are forwarded into the left and right arguments of the sfun <script type="math/tex">plus</script>. The domain
of this composite function matches the domain <script type="math/tex">x:Int</script> of the enclosing sfun, which it inherits from
the two occurrences of <script type="math/tex">x</script>. Since <script type="math/tex">plus</script> returns an <script type="math/tex">Int</script>, so does the
composite function. The premise of the above typing rule application tells
us that <script type="math/tex">plus(x,x)</script> has type <script type="math/tex">Int[\uparrow~x]</script>, but this premise must
be derived. How do we go about proving that the composite function
<script type="math/tex">plus(x,x)</script> is monotone?</p>
<p>First, pretend that the two occurrences of <script type="math/tex">x</script> reference different formal arguments
<script type="math/tex">x_1</script> and <script type="math/tex">x_2</script>. Holding the right formal argument fixed gives a single-argument
function <script type="math/tex">plus(-,x_2)</script>, which the type signature of <script type="math/tex">plus</script> tells us must
be monotone.
<script type="math/tex">x_1</script>, representing the identity function on integers, is clearly monotone,
since for all integers <script type="math/tex">z_1, z_2</script> with <script type="math/tex">z_1 \leq z_2</script>, we have
<script type="math/tex">id(z_1) = z_1 \leq z_2 = id(z_2)</script>. <script type="math/tex">plus(x_1, x_2)</script> is then the composition
of two monotone functions, which itself must be monotone. The same reasoning tells
us that <script type="math/tex">plus(x_1,x_2)</script> is monotone as a function of <script type="math/tex">x_2</script> when <script type="math/tex">x_1</script>
is held fixed. <script type="math/tex">plus(x_1, x_2)</script> is therefore monotone separately
in both <script type="math/tex">x_1</script> and <script type="math/tex">x_2</script>. However, we are interested in <script type="math/tex">plus(x,x)</script>,
which is the function we get when we contract <script type="math/tex">x_1</script> and <script type="math/tex">x_2</script> into a single
argument, supplying both of <script type="math/tex">plus</script>’s “slots” with the same value.
Contracting two arguments of a function which are both separately monotone
results in a new argument which is also separately monotone, and so we
can conclude that <script type="math/tex">plus(x,x)</script> has type <script type="math/tex">Int[\uparrow~x]</script>.</p>
<p>The lifted sfun application typing rule utilizes two binary operators
<script type="math/tex">\circ</script> and <script type="math/tex">+</script> on qualifiers, which describe how monotonicity
is propagated across function composition and argument contraction.
The above example utilized the facts that <script type="math/tex">= \circ \uparrow</script> is equal to
<script type="math/tex">\uparrow</script> and <script type="math/tex">\uparrow + \uparrow</script> is equal to <script type="math/tex">\uparrow</script>.
These operators are defined as lookup tables, recording a set of predefined facts
about the propagation of monotonicity.</p>
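<p>The post defines these operators only as lookup tables; the sketch below is my reconstruction from the two facts just mentioned plus the usual sign algebra of monotone and antitone composition, so treat it as a plausible guess rather than the calculus’s official definition:</p>

```python
# Reconstructed (not authoritative) qualifier algebra: '=' for identity
# dependence, '↑' monotone, '↓' antitone, '?' unconstrained.

UP, DOWN, EQ, UNK = '↑', '↓', '=', '?'

def compose(q1, q2):
    """Qualifier of a composite; q1 is the slot's qualifier, q2 the
    qualifier of the term plugged into it (operand order is my guess)."""
    if EQ in (q1, q2):               # identity-like dependence drops out
        return q2 if q1 == EQ else q1
    if UNK in (q1, q2):
        return UNK
    return UP if q1 == q2 else DOWN  # signs multiply

def contract(q1, q2):
    """Qualifier after merging two arguments into one."""
    if q1 == q2:
        return q1
    if EQ in (q1, q2):               # '=' refines both '↑' and '↓'
        return q2 if q1 == EQ else q1
    return UNK                       # e.g. ↑ + ↓ tells us nothing

print(compose(EQ, UP))      # ↑  (the fact  = ∘ ↑ = ↑)
print(contract(UP, UP))     # ↑  (the fact  ↑ + ↑ = ↑)
print(compose(DOWN, DOWN))  # ↑  (antitone after antitone is monotone)
```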
<p>I’ll wrap things up by leaving you
with one of the central features of my calculus.
Namely, that the “global world” outside of an sfun abstraction is viewed
as a degenerate subset of the “lifted world” inside the sfun abstraction.
A globally well-typed sfun application is viewed as a projection onto this
degenerate subset. Inside the sfun abstraction, we track the way in which each term depends
on the sfun’s arguments, but terms originating outside of the sfun
(both literal constants and occurrences of variables from the global type environment <script type="math/tex">\Gamma</script>)
depend on the sfun’s arguments in a specific way: they are not affected by them
at all. So, for any sfun with ambient environment <script type="math/tex">\Omega</script>,
we can view the literal integer <script type="math/tex">1</script> as a constant-valued
function which, given any valuation of <script type="math/tex">\Omega</script>, produces the value one
as a result. Of course, constant functions are monotone, and so a lifted subtyping relation
allows 1 to occur in any context where Integer-valued functions with monotone
dependence on the ambient environment <script type="math/tex">\Omega</script> are expected.
I view this as a weird refinement type system. Instead of starting
with a simply typed system and decomposing its base types into preorders to induce a
subtyping relation, I started with a simply typed system and positioned its base
types as refinements of base types in a system with larger types (larger in that
they denote larger sets of values).
Consider the ambient environment <script type="math/tex">\Omega = x:Int</script>. Letting
<script type="math/tex">Int[?~x]</script> be the type of Int-valued functions which share the enclosing sfun’s formal parameter <script type="math/tex">x</script>, the following diagram decomposes the type <script type="math/tex">Int[?~x]</script> into refinements.</p>
<p><img src="/assets/DiagramArrows.png" alt="Refinement diagram" /></p>
<p>The red arrows indicate that projection from <script type="math/tex">Int[?~x]</script> into the refinement
<script type="math/tex">Int</script> plays a special role. Still curious? See <a href="https://infoscience.epfl.ch/record/231867/files/monotonicity-types.pdf">this paper</a> for motivating examples, a full formalization, and a soundness proof.</p>A Class System for Typed Lua2016-08-18T13:04:58-04:002016-08-18T13:04:58-04:00/2016/08/18/a-class-system-for-typed-lua<p>I spent this summer implementing a class system for Typed Lua, as part of the Google Summer of Code 2016.
It was largely inspired by the paper <a href="http://www.cs.cornell.edu/~ross/publications/shapes/shapes-pldi14.pdf">Getting F-Bounded Polymorphism into Shape</a>.
To try out the new class system, clone my fork of the TypedLua repository, cd to the root directory of the cloned repository, and run my modified version of the Typed Lua compiler via “tlc <em>source_filename</em>”.</p>
<p>I haven’t merged any of my changes yet, and still plan to clean up some of the code within the next few weeks. Below is a description of the features that I have added.</p>
<h1 id="the-basics">The Basics</h1>
<p>The following syntax is used to declare a class</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>class Vector2
x : number
y : number
constructor new(x : number, y : number)
self.x = x
self.y = y
end
method DotProduct(other : Vector2) : number
return self.x*other.x + self.y*other.y
end
method Offset(offset : Vector2)
self.x = self.x + offset.x
self.y = self.y + offset.y
end
end
</code></pre></div></div>
<p>The Vector2 class first declares two data members x and y, both having Lua’s 64-bit floating-point type number.</p>
<p>Next, it declares a constructor named new. The body of the constructor must assign values to all data members that the class defines. Unlike languages such as Java, where all variables can be assigned a nil value, variables in Typed Lua are generally non-nil, and so leaving data members with the default value nil would not be safe.</p>
<p>Finally, two methods called DotProduct and Offset are defined. Notice that the x and y fields of the invoked object are accessed via the <em>self</em> variable. This class system does not yet have encapsulation, so we can access the x and y fields of <em>other</em> externally. Omitting a return type annotation on a method declaration sets the expected return type to an infinite tuple of nils, which is appropriate for methods which, like Offset, do not return values.</p>
<p>Now let’s try instantiating the class and calling some of its methods.</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>local v = Vector2.new(1,0)
local u = Vector2.new(0,1)
print(u:DotProduct(v))
</code></pre></div></div>
<p>A class declaration adds a <em>class value</em> of the same name into scope. A class value is a table which contains all of the class’s constructors. In this case, our class table is <em>Vector2</em>, which contains one constructor <em>new</em>. Running tlc on the concatenation of the above code blocks should produce a lua file which prints 0 when it is run.</p>
<h1 id="inheritance">Inheritance</h1>
<p>We can inherit using the <em>extends</em> keyword. Inheritance creates a child class which reuses code from the parent class and has an instance type that is a subtype of the parent class’s instance type.</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>local debugVectorId = 0
class DebugVector2 extends Vector2
const id : string
constructor new(x : number, y : number)
self.x = x
self.y = y
self.id = tostring(debugVectorId)
debugVectorId = debugVectorId + 1
print("DebugVector #" .. self.id .. " created with x = " .. tostring(x) .. " and y = " .. tostring(y))
end
method Offset(offset : Vector2)
local strInitial = "(" .. tostring(self.x) .. ", " .. tostring(self.y) .. ")"
self.x = self.x + offset.x
self.y = self.y + offset.y
local strFinal = "(" .. tostring(self.x) .. ", " .. tostring(self.y) .. ")"
print("DebugVector #" .. self.id .. " offset from " .. strInitial .. " to " .. strFinal)
end
end
v = DebugVector2.new(1,0)
u = DebugVector2.new(0,1)
print(v:DotProduct(u))
u:Offset(v)
print(v:DotProduct(u))
</code></pre></div></div>
<p>Vector2’s fields x and y, as well as the methods Offset and DotProduct, are automatically included into the DebugVector2 class. However, DebugVector2 overrides the Offset with its own implementation.</p>
<p>Concatenating the above code blocks and running the result should give the following output.</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>DebugVector #0 created with x = 1 and y = 0
DebugVector #1 created with x = 0 and y = 1
0
DebugVector #1 offset from (0, 1) to (1, 1)
1
</code></pre></div></div>
<h1 id="the-super-keyword">The super keyword</h1>
<p>We can improve the above code by reusing the functionality of the parent class via the <em>super</em> reference.</p>
<p>A constructor may call exactly one superclass constructor on its first line using the syntax “super.constructorname(arguments)”. If this happens, then the child class constructor does not need to initialize inherited members.</p>
<p>A superclass method may be called from anywhere inside a child class method, using the syntax “super:methodname(arguments)”.</p>
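<p>For example, DebugVector2’s constructor and Offset method from the previous section could delegate to Vector2 as follows (an illustrative sketch; I haven’t run this exact snippet through the compiler):</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>constructor new(x : number, y : number)
  super.new(x, y)
  self.id = tostring(debugVectorId)
  debugVectorId = debugVectorId + 1
  print("DebugVector #" .. self.id .. " created with x = " .. tostring(x) .. " and y = " .. tostring(y))
end
method Offset(offset : Vector2)
  local strInitial = "(" .. tostring(self.x) .. ", " .. tostring(self.y) .. ")"
  super:Offset(offset)
  local strFinal = "(" .. tostring(self.x) .. ", " .. tostring(self.y) .. ")"
  print("DebugVector #" .. self.id .. " offset from " .. strInitial .. " to " .. strFinal)
end
</code></pre></div></div>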
<h1 id="interfaces">Interfaces</h1>
<p>We may want to declare a type, without any associated implementation, that describes a set of operations which several classes share in common. Such a type is called an <em>interface</em>, and is declared using the following syntax.</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>interface Showable
method ToString() : () => (string)
end
</code></pre></div></div>
<p>We can associate a class to an interface by using an <em>implements clause</em>:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>class Vector2 implements Showable
...
method ToString() : string
return "(" .. self.x .. ", " .. self.y .. ")"
end
...
end
</code></pre></div></div>
<p>Instances of class Vector2 can then be used in contexts where instances of the Showable interface are expected.</p>
<h1 id="typedefs">Typedefs</h1>
<p>Typedefs are a lightweight alternative mechanism for defining types. Consider the following type, which represents a linked list of numbers.</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>typedef NumList = { "val" : number, "next" : NumList }?
</code></pre></div></div>
<p>Here we have defined a typedef called NumList, which expands to { val : number, next : NumList }?. Like classes and interfaces, typedefs may contain recursive references to themselves. This can be useful for defining data-description schemas and data structures of unbounded size. NumList, for example, describes finite number lists of any size; nil, { val = 1, next = nil }, and { val = 1, next = { val = 1, next = nil } } are each NumLists. They represent the empty list, [1], and [1,1] respectively.</p>
<p>There’s an important difference between classes and interfaces on the one hand, and typedefs on the other. Classes and interfaces are <em>nominal</em> types, whereas typedefs are <em>structural</em> ones. Subtyping relations between nominal types are declared explicitly by the programmer, whereas subtyping relations between structural types are deduced by the subtyping algorithm. For example, this implies that instances of the following two classes are not interchangeable:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>class Foo1
x : boolean
constructor new(x : boolean)
self.x = x
end
end
class Foo2
x : boolean
constructor new(x : boolean)
self.x = x
end
end
</code></pre></div></div>
<p>In fact, passing an instance of Foo1 into a function that expects an instance of Foo2 will generate a type error:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>local function fooFunction(foo : Foo2) : boolean
  return foo.x
end

local f1 = Foo1.new(true)
fooFunction(f1)
</code></pre></div></div>
<p>The following code block, however, is considered well-typed by the Typed Lua compiler.</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>typedef NumList1 = { "val" : number, "next" : NumList1 }?
typedef NumList2 = { "val" : number, "next" : NumList2 }?

local function listFunction(l1 : NumList1)
  local l2 : NumList2 = l1
  if l2 then
    print(tostring(l2.val) .. "\n")
    listFunction(l2.next)
  else
    print("done.")
  end
end
</code></pre></div></div>
<h1 id="mutually-recursive-types">Mutually recursive types</h1>
<p>By default a type definition can only refer to previously defined types. But what if we want to have two types A and B such that A is defined in terms of B and B is defined in terms of A? One of these types must be defined first; how can it reference the other one?</p>
<p>To handle this, several mutually recursive type definitions can be chained together with the <em>and</em> keyword, as in the following code block.</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>typedef WeirdList1 = { "val" : boolean, "next" : WeirdList2 }?
and typedef WeirdList2 = { "val" : number, "next" : WeirdList1 }?
</code></pre></div></div>
<p><em>and</em> can be used to join an arbitrary collection of typedefs, classes, and interfaces into a mutually recursive bundle.</p>
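<p>For instance, a typedef and a class can be bundled together. The following sketch (with hypothetical names, assuming the syntax above) defines a tree type whose children are stored in a list:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>-- TreeList refers to Tree, and Tree refers back to TreeList
typedef TreeList = { "val" : Tree, "next" : TreeList }?
and class Tree
  children : TreeList
  constructor new(children : TreeList)
    self.children = children
  end
end
</code></pre></div></div>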
<h1 id="global-typenames-and-aliases">Global typenames and aliases</h1>
<p>When a type is declared, it is given both a <em>global name</em> and an <em>alias</em>. The global name is the name specified by the programmer, with the name of the module containing the definition prepended to it. The global name of a class Foo defined in a module Bar is Bar.Foo. Once a global name is generated, it lasts throughout the duration of the type-checking process. Referring to all types by their global names would be tedious, and so an alias mechanism is also provided. An alias is a short name that is translated into a global name. All type definitions generate an alias, associating the name that the programmer typed with the generated global name. Our Foo class would generate an alias Foo that maps to the global name Bar.Foo.</p>
<p>Unlike global names, aliases are locally scoped. In particular, they are not preserved across multiple files. Types defined in external files must therefore be referred to by their global names.</p>
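<p>As a hypothetical sketch, suppose a module Shapes defines a class Circle. Within Shapes the alias Circle is in scope, but a separate file must refer to the type by its global name:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>-- In the module Shapes: the alias Circle maps to the global name Shapes.Circle
class Circle
  radius : number
  constructor new(radius : number)
    self.radius = radius
  end
end

-- In a different file, the alias Circle is not in scope,
-- so the global name Shapes.Circle is used instead
require("Shapes")
local function area(c : Shapes.Circle) : number
  return 3.14159 * c.radius * c.radius
end
</code></pre></div></div>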
<h1 id="class-value-lookup">Class value lookup</h1>
<p>The expression class(<em>typename</em>) will evaluate to the class value of the class named <em>typename</em>. This is useful for instantiating classes defined in external modules.</p>
<p>A module Foo could define a class Test:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>class Test
  constructor new()
  end
end
</code></pre></div></div>
<p>and then Test could be instantiated in a module Bar as follows:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>require("Foo")
local t = class(Foo.Test).new()
</code></pre></div></div>
<h1 id="generic-classes">Generic classes</h1>
<p>Classes, interfaces, functions, and methods can all be parameterized by types, a feature often called <em>generics</em>. Among other things, this allows us to create collections that are parameterized by their element types.</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>class Stack<ElementType>
  contents : {ElementType}
  constructor new()
    self.contents = {}
  end
  method push(x : ElementType)
    self.contents[#self.contents + 1] = x
  end
  method pop() : ElementType
    local ret = self.contents[#self.contents]
    self.contents[#self.contents] = nil
    assert(ret)
    return ret
  end
end
</code></pre></div></div>
<p>NOTE: The above code snippet will not compile in my fork, because assertions have not been integrated with occurrence typing. This functionality has been implemented in a separate fork.</p>
<p>A class’s type parameters appear in a comma-separated list between angle brackets, after the class name. The above Stack class contains a single type parameter called ElementType. We can instantiate the above class as follows:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>local stack = Stack.new<number>()
stack:push(3)
print(stack:pop())
</code></pre></div></div>
<p>The type parameters of a class definition are wrapped around each of its constructors. We can instantiate these parameters by providing type arguments between angle brackets before the constructor’s normal arguments. In the above code, we instantiate ElementType with number. This produces an instance whose type matches the instance type described by Stack, but with each occurrence of ElementType replaced with number.</p>
<h1 id="generic-functions-and-methods">Generic functions and methods</h1>
<p>Programmers may define their own type-parameterized functions. This can be useful for standard library functions.</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>local function init_array<ElementType>(x : ElementType, size : number) : {ElementType}
  local ret : {ElementType} = {}
  for i = 1, size do
    ret[i] = x
  end
  return ret
end

local r = init_array<string>("thumbs up", 2)
for i = 1, #r do
  local s = r[i]
  if s then
    print(s)
  end
end
</code></pre></div></div>
<p>We can also provide type parameters for methods.</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>class Foo
  method id<T>(x : T) : T
    return x
  end
end

interface Bar
  method id<T> : (T) => (T)
end
</code></pre></div></div>
<h1 id="type-parameter-bounds">Type parameter bounds</h1>
<p>We can restrict a type parameter so that it can only be instantiated with subtypes of a specified type, referred to as the <em>bound</em> of that parameter. Suppose that we have the following interface:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>interface Showable
  method Show : () => (string)
end
</code></pre></div></div>
<p>We can use type parameter bounds to implement a function which prints out a string representation of all elements of an array, assuming that the element type of the array implements the Showable interface.</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>local function show_array<ElementType <: Showable>(x : {ElementType})
  for i = 1, #x do
    local e = x[i]
    if e then
      print(e:Show())
    end
  end
end
</code></pre></div></div>
<p>As the above example demonstrates, we attach a bound to a type parameter by writing “<: BoundType” after the type parameter’s occurrence in the parameter list.</p>
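<p>To illustrate, here is a hypothetical class Label that implements Showable. Since Label is a subtype of the bound, it is a valid argument for ElementType:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>class Label implements Showable
  text : string
  constructor new(text : string)
    self.text = text
  end
  method Show() : string
    return self.text
  end
end

local labels : {Label} = { Label.new("a"), Label.new("b") }
show_array<Label>(labels)  -- accepted: Label is a subtype of Showable
</code></pre></div></div>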
<h1 id="generic-subtyping">Generic subtyping</h1>
<p>We can inherit from and implement generic classes.</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>interface Container<ElementType>
  method GetElements : () => ({ElementType})
end

class Array<ElementType> implements Container<ElementType>
  contents : {ElementType}
  constructor new(contents : {ElementType})
    self.contents = contents
  end
  method GetElements() : {ElementType}
    return self.contents
  end
end
</code></pre></div></div>
<p>An implements clause generally consists of a nominal type applied to arbitrary type arguments which may include occurrences of the parameters of the class being defined. When class parameters occur in the implements clause, a family of subtyping relations is generated; the above implements clause implies that for <em>all</em> types ElementType, Array<ElementType> is a subtype of Container<ElementType>.</p>
<p>As the next example demonstrates, the parameters of a class do not have to correspond to the parameters of the nominal types that it implements. The class definition below implies that for all pairs of types KeyType and ValueType, Map<KeyType, ValueType> is a subtype of Container<ValueType>.</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>class Map<KeyType, ValueType> implements Container<ValueType>
  contents : { any : ValueType }
  constructor new()
    self.contents = {}
  end
  method set(key : KeyType, val : ValueType)
    self.contents[key] = val
  end
  method get(key : KeyType) : ValueType?
    return self.contents[key]
  end
  method GetElements() : {ValueType}
    local ret : {ValueType} = {}
    for k,v in pairs(self.contents) do
      ret[#ret + 1] = v
    end
    return ret
  end
end
</code></pre></div></div>
<p>The arguments to a nominal type constructor occurring in an implements clause can be <em>arbitrary</em> types, both structural and nominal. The next example demonstrates the use of a structural type, namely Point2, as an argument in an implements clause.</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>typedef Point2 = { "x" : number, "y" : number }

class Polygon implements Container<Point2>
  points : {Point2}
  constructor new(points : {Point2})
    self.points = points
  end
  method GetElements() : {Point2}
    return self.points
  end
end
</code></pre></div></div>
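<p>These subtyping relations pay off when writing functions that operate on any container. The following sketch (assuming the definitions above) counts the elements of an arbitrary Container:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>local function countElements<T>(c : Container<T>) : number
  return #(c:GetElements())
end

-- Array, Map, and Polygon instances are all acceptable arguments
local poly = Polygon.new({ { x = 1, y = 2 }, { x = 3, y = 4 } })
print(countElements<Point2>(poly))
</code></pre></div></div>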
<h1 id="recursive-inheritance-and-shapes">Recursive inheritance and shapes</h1>
<p>Suppose we want to bound a type parameter in a way that requires values of its argument to be comparable to each other. We might use the following interface, which describes types that are comparable to some type T.</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>interface Ordered<T>
  method lessThanEq : (T) => (boolean)
end
</code></pre></div></div>
<p>Then we would implement this interface with the following class.</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>class NumSet implements Ordered<NumSet>
  contents : { number : boolean? }
  method lessThanEq(other : NumSet) : boolean
    for k,v in pairs(self.contents) do
      if not other.contents[k] then
        return false
      end
    end
    return true
  end
  constructor new(contents : { number : boolean? })
    self.contents = contents
  end
end
</code></pre></div></div>
<p>However, the compiler rejects the above implements clause because it creates a cycle in the <em>type graph</em>. The type graph is a graph whose nodes are nominal typenames. Every implements and extends clause adds an edge from the class being defined (NumSet in this case) to every nominal typename occurring in the clause (here, both Ordered and NumSet), labeled with the name of the clause's outer nominal type (Ordered in this case). To ensure that our subtyping algorithm terminates, we forbid any implements or extends clause that introduces a cycle into this graph.</p>
<p>Still, there’s a need for recursive inheritance as described above. To deal with this, we include another kind of nominal type definition in addition to classes and interfaces: shapes. A shape definition is exactly like an interface definition, but the word interface is replaced with shape:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>shape Ordered<T>
  method lessThanEq : (T) => (boolean)
end
</code></pre></div></div>
<p>When searching for cycles in our type graph, we ignore those edges that are labeled with shapes. After changing Ordered to a shape, the NumSet class should compile without producing any errors. Occurrences of shape types are restricted to the outer level of type bounds and implements clauses; for a somewhat technical reason, this ensures that our subtyping algorithm terminates even when our type graph has cycles which include shape edges.</p>
<p>To utilize recursively inheriting types, we use type bounds which refer to the variable being bounded.</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>local function comparable<T <: Ordered<T>>(x : T, y : T) : boolean
  return (x:lessThanEq(y) or y:lessThanEq(x))
end
</code></pre></div></div>
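<p>With Ordered declared as a shape, the NumSet class from earlier satisfies the bound, so NumSet instances can be compared. A usage sketch:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>local a = NumSet.new({ [1] = true, [2] = true })
local b = NumSet.new({ [1] = true })

-- T is instantiated with NumSet, which satisfies T <: Ordered<T>
print(comparable<NumSet>(a, b))
</code></pre></div></div>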
<h1 id="variance-annotations">Variance annotations</h1>
<p>To give nominal types greater subtyping flexibility, we allow the user to provide variance annotations for the type parameters of classes, interfaces, and shapes. Prefixing a type parameter with + designates that the parameter is covariant, whereas prefixing a parameter with - designates that it is contravariant.</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>interface Describable<+DescriptionType>
  method Describe : () => (DescriptionType)
end
</code></pre></div></div>
<p>The above interface indicates that its type parameter DescriptionType is covariant by prefixing it with a +. What this implies for subtyping is that if A and B are types and B is a subtype of A, then Describable<B> is a subtype of Describable<A>. Covariant means <em>with change</em>. A covariant parameter is one whose subtyping precision changes in the same direction as the type that contains it; Describable<B> is more precise than Describable<A> only when B is more precise than A.</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>interface Accumulator<-ValueType>
  method Accumulate : (ValueType) => ()
end
</code></pre></div></div>
<p>Here is an interface for objects that can “absorb” values of some type ValueType. ValueType has been marked as contravariant by prefixing it with a -. Contravariance means <em>against change</em>. A contravariant type parameter is one whose subtyping precision changes in the opposite direction of that of the type containing it; if B is a subtype of A, then Accumulator<A> is a subtype of Accumulator<B>.</p>
<p>The positions of the occurrences of a type parameter in a type definition dictate which variance annotations it can have. Roughly, type variables occurring only in input positions are allowed to be contravariant, whereas type variables occurring only in output positions are allowed to be covariant.</p>
<p>DescriptionType is allowed to be covariant because it occurs as a method return type, and classifies values that instances of the class output into an external context. Suppose DetailedDescription is a subtype of Description; then retrieving a DetailedDescription from the Describe method of Describable<Description> is perfectly acceptable, because we were expecting a Description, and values of type DetailedDescription can be used in any context where values of type Description are expected.</p>
<p>On the other hand, ValueType is allowed to be contravariant because it occurs as a method input type. Suppose OddInteger is a subtype of Integer. Then passing an OddInteger into the Accumulate method of an object of type Accumulator<Integer> is perfectly acceptable, and so we can use an object of type Accumulator<Integer> where an object of type Accumulator<OddInteger> is expected.</p>
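<p>To make both directions concrete, suppose DetailedReport is declared a subtype of a hypothetical class Report (both names are assumed for illustration). Then the following sketch type-checks:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>-- Covariance: a Describable<DetailedReport> may be passed
-- where a Describable<Report> is expected.
local function readReport(d : Describable<Report>) : Report
  return d:Describe()
end

-- Contravariance: an Accumulator<Report> may be passed
-- where an Accumulator<DetailedReport> is expected.
local function storeDetails(a : Accumulator<DetailedReport>, r : DetailedReport)
  a:Accumulate(r)
end
</code></pre></div></div>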
<h1 id="additional-features">Additional features</h1>
<ul>
<li>A class can implement multiple interfaces by using a comma-separated list for an implements clause.</li>
</ul>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>class Foo implements Interface1, Interface2
</code></pre></div></div>
<ul>
<li>A nominal subtyping edge may be added without introducing any new nominal type definitions using an implements statement.</li>
</ul>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>...
a = 5
Foo implements Bar
a = a + 1
</code></pre></div></div>
<ul>
<li>A class may be used as an interface.</li>
</ul>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>class Foo ... end
class Bar implements Foo ... end
</code></pre></div></div>
<ul>
<li>Failed subtyping queries now generate explanations of why the specified subtyping judgment does not hold. These can currently be a bit messy, which is something I hope to improve soon.</li>
</ul>