Computability Logic

(CoL)


Section 3

The CoL zoo of game operations



3.1   Preview

3.2   Prefixation

3.3   Negation

3.4   Choice operations

3.5   Parallel operations

3.6   Reduction

3.7   Blind operations

3.8   Branching operations

3.9   Sequential operations

3.10  Toggling operations

3.11  Cirquents


3.1 Preview


As we already know, logical operators in CoL stand for operations on games. There is an open-ended pool of operations of potential interest, and which of those to study may depend on particular needs and taste. Below is an incomplete list of the operators that have been officially introduced so far.

Negation

Conjunctions: parallel, choice, sequential, toggling

Disjunctions: parallel, choice, sequential, toggling

Recurrences: branching, parallel, sequential, toggling

Corecurrences: branching, parallel, sequential, toggling

Universal quantifiers: blind, parallel, choice, sequential, toggling

Existential quantifiers: blind, parallel, choice, sequential, toggling

Implications: parallel, choice, sequential, toggling

Rimplications: branching, parallel, sequential, toggling

Repudiations: branching, parallel, sequential, toggling

Among these we see all operators of classical logic, and our choice of the classical notation for them is no accident. It was pointed out earlier that classical logic is nothing but the elementary, zero-interactivity fragment of computability logic. Indeed, after analyzing the relevant definitions, each of the classically-shaped operations, when restricted to elementary games, can be easily seen to be virtually the same as the corresponding operator of classical logic. For instance, if A and B are elementary games, then so is A∧B, and the latter is exactly the classical conjunction of A and B understood as an (elementary) game. In a general --- not-necessarily-elementary --- case, however, ¬, ∧, ∨, → become more reminiscent of (yet not the same as) the corresponding multiplicative operators of linear logic. Of course, here we are comparing apples with oranges for, as noted earlier, linear logic is a syntax while computability logic is a semantics, and it may not be clear in what precise sense one can talk about similarities or differences.

In the same apples and oranges style, our operations ⊓, ⊔, ⊓x, ⊔x can be perceived as relatives of the additive connectives and quantifiers of linear logic; ∧x, ∨x as "multiplicative quantifiers"; the branching and parallel recurrences and corecurrences ⫰, ⫯, ⩓, ⩔ as "exponentials", even though it is hard to guess which of the two groups --- ⫰, ⫯ or ⩓, ⩔ --- would be closer to an orthodox linear logician's heart. On the other hand, the blind, sequential and toggling groups of operators have no counterparts in linear logic.

In this section we are going to see formal definitions of all of the above-listed operators. We agree that, throughout those definitions, Φ ranges over positions, Γ ranges over runs, and e over (Vr,Dm)-valuations, where Dm is the domain of the game G that is being defined and Vr is the set of variables on which that game depends. All three metavariables should be considered universally quantified. Each definition has two clauses, one defining Lp_e^G (and thus also Lr_e^G) and the other Wn_e^G. The second clause, telling us who wins a given run Γ of G(e), always implicitly assumes that such a Γ is in Lr_e^G.


Intuitive explanations of game operations will be provided as well. In such cases, unless otherwise implied by the context, the games to which those operations are applied are assumed to be constant.

This section also provides many examples of particular games. Let us agree that, unless otherwise suggested by the context, in all those cases we have the standard universe in mind. Often we let non-numerals such as people, Turing machines, etc. act in the roles of "constants". These should be understood as abbreviations of the corresponding decimal numerals that encode these objects in some fixed reasonable encoding. It should also be remembered that algorithmicity is a minimum requirement on Machine's strategies. Some of our examples implicitly assume stronger requirements, such as efficiency or the ability to act with imperfect knowledge. For instance, the problem of telling whether there is or has been life on Mars is, of course, decidable, for this is a finite problem. Yet our knowledge does not allow us to actually solve the problem. Similarly, chess is a finite game and (after ruling out the possibility of draws) one of the players does have a winning strategy in it. Yet we do not know specifically what (and which player's) strategy is a winning one.

When omitting parentheses in compound expressions, we assume that all unary operators (negation, repudiations, recurrences, corecurrences and quantifiers) take precedence over all binary operators (conjunctions, disjunctions, implications, rimplications), among which implications and rimplications have the lowest precedence. So, for instance, A → ¬B ∧ C should be understood as A → ((¬B) ∧ C) rather than, say, as (A → ¬B) ∧ C or as A → (¬(B ∧ C)).

Theorem 3.1.1 (proven in [Jap03, Jap08b, Jap11a]) All operators listed in this subsection preserve the static property of games (i.e., when applied to static games, the resulting game is also static). 


3.2 Prefixation


Unlike the operators listed in the preceding subsection, the operation of prefixation is not meant here to act as a logical operator in the formal language of CoL. Yet, it is very useful in characterizing and analyzing games, and we want to start our tour of the zoo with it.

Definition 3.2.1 Assume A = (Dm, Dn, Vr, A) is a game and Ψ is a unilegal position of A (otherwise the operation is undefined). The Ψ-prefixation of A, denoted ⟨Ψ⟩A, is defined as the game (Dm, Dn, Vr, G) such that:

Lp_e^G = {Φ : ⟨Ψ, Φ⟩ ∈ Lp_e^A};
Wn_e^G⟨Γ⟩ = Wn_e^A⟨Ψ, Γ⟩.

Intuitively, ⟨Ψ⟩A is the game playing which means playing A starting (continuing) from position Ψ. That is, ⟨Ψ⟩A is the game to which A evolves (is brought down) after the moves of Ψ have been made. Visualized as a tree, ⟨Ψ⟩A is nothing but the subtree of A rooted at the node corresponding to position Ψ. Below is an illustration.




3.3 Negation


For a run Γ, by ¬Γ we mean the "negative image" of Γ (green and red interchanged). For instance, ¬⟨⊤α, ⊥β, ⊤γ⟩ = ⟨⊥α, ⊤β, ⊥γ⟩.
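For the record, this operation is trivial to mechanize. The sketch below assumes, purely for illustration, that a run is represented as a list of (player, move) pairs, with "T" standing for Machine's (green) label and "B" for Environment's (red) one; nothing about this representation is CoL-prescribed.

    def negate_run(run):
        # Interchange the players' labels, leaving the moves themselves intact.
        flip = {"T": "B", "B": "T"}
        return [(flip[player], move) for (player, move) in run]

    print(negate_run([("T", "alpha"), ("B", "beta"), ("T", "gamma")]))
    # [('B', 'alpha'), ('T', 'beta'), ('B', 'gamma')]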

Definition 3.3.1 Assume A = (Dm, Dn, Vr, A) is a game. ¬A (read "not A") is defined as the game (Dm, Dn, Vr, G) such that:

Lp_e^G = {Φ : ¬Φ ∈ Lp_e^A};
Wn_e^G⟨Γ⟩ = ⊤ iff Wn_e^A⟨¬Γ⟩ = ⊥.

In other words, ¬A is A with the roles of the two players interchanged: Machine's (legal) moves and wins become Environment's moves and wins, and vice versa. So, when visualized as a tree, ¬A is the exact negative image of A, as illustrated below:


Figure 3.3.2: Negation

Let Chess be the game of chess (with draws ruled out) from the point of view of the white player. Then ¬Chess is Chess "upside down", i.e., the game of chess from the point of view of the black player:


Observe that the classical double negation principle ¬¬A = A holds: interchanging the players' roles twice restores the original roles of the players. It is also easy to see that we always have ¬⟨Ψ⟩A = ⟨¬Ψ⟩¬A. So, for instance, if α is Machine's legal move in the empty position of A that brings A down to B, then the same α is Environment's legal move in the empty position of ¬A, and it brings ¬A down to ¬B. Test the game A of Figure 3.3.2 to see that this is indeed so.


3.4 Choice operations


Choice conjunction ⊓ (read "chand") and choice disjunction ⊔ (read "chor") combine games in the way seen below:



A ⊓ B is a game where, in the initial (empty) position, only Environment has legal moves. Such a move should be either '0' or '1'. If Environment moves 0, the game continues as A, meaning that ⟨⊥0⟩(A ⊓ B) = A; if it moves 1, then the game continues as B, meaning that ⟨⊥1⟩(A ⊓ B) = B; and if it fails to make either move ("choice"), then it loses. A ⊔ B is similar, with the difference that here it is Machine who has initial moves and who loses if no such move is made.

Definition 3.4.1 Assume A0 = (Dm, Dn, Vr0, A0) and A1 = (Dm, Dn, Vr1, A1) are games.

a)     A0 ⊓ A1 is defined as the game (Dm, Dn, Vr0 ∪ Vr1, G) such that:

b)     A0 ⊔ A1 is defined as the game (Dm, Dn, Vr0 ∪ Vr1, G) such that:

The game A of Figure 3.3.2 can now be easily seen to be (… ⊔ …) ⊓ (… ⊔ …), and its negation to be (… ⊓ …) ⊔ (… ⊓ …). The De Morgan laws familiar from classical logic persist: we always have ¬(A ⊓ B) = ¬A ⊔ ¬B and ¬(A ⊔ B) = ¬A ⊓ ¬B. Together with the earlier observed double negation principle, this means that A ⊓ B = ¬(¬A ⊔ ¬B) and A ⊔ B = ¬(¬A ⊓ ¬B). Similarly for the quantifier counterparts ⊓x and ⊔x of ⊓ and ⊔.

Given a game A(x), the choice universal quantification (read "chall") ⊓xA(x) of it is nothing but the "infinite choice conjunction" A(0) ⊓ A(1) ⊓ A(2) ⊓ …, and the choice existential quantification (read "chexists") ⊔xA(x) of A(x) is the "infinite choice disjunction" A(0) ⊔ A(1) ⊔ A(2) ⊔ …:


Specifically, ⊓xA(x) is a game where, in the initial position, only Environment has legal moves, and such a move should be one of the constants. If Environment moves c, then the game continues as A(c), and if Environment fails to make an initial move/choice, then it loses. ⊔xA(x) is similar, with the difference that here it is Machine who has initial moves and who loses if no such move is made. So, we always have ⟨⊥c⟩⊓xA(x) = A(c) and ⟨⊤c⟩⊔xA(x) = A(c). Below is a formal definition of choice quantifiers.

Definition 3.4.2 Assume A(x) = (Dm, Dn, Vr, A) is a game.

a)     ⊓xA(x) is defined as the game (Dm, Dn, Vr−{x}, G) such that:

b)     ⊔xA(x) is defined as the game (Dm, Dn, Vr−{x}, G) such that:

Now we are already able to express traditional computational problems using formulas. Traditional problems come in two forms: the problem of computing a function f(x), and the problem of deciding a predicate p(x). The former can be written as ⊓x⊔y(y=f(x)), and the latter as ⊓x(p(x) ⊔ ¬p(x)). So, for instance, the constant "successor" game of Figure 2.3.4 will be written as ⊓x⊔y(y=x+1), and the unary "generalized successor" game of Figure 2.3.5 as ⊓x⊔y(y=x+z). The following game, which is about deciding the "evenness" predicate, could be written as ⊓x(∃y(x=2y) ⊔ ¬∃y(x=2y)) (∃ will be officially defined later but, as promised, its meaning is going to be exactly classical when applied to an elementary game like x=2y).
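Spelled out as a move protocol, Machine's winning strategy for the evenness game is a single line of arithmetic. In the sketch below, Environment's initial choice of a constant arrives as an integer c, and Machine's reply "0" or "1" selects the left or the right ⊔-disjunct; this move encoding is assumed just for illustration.

    def evenness_reply(c: int) -> str:
        # Choose the disjunct that is true of c: "0" for even, "1" for odd.
        return "0" if c % 2 == 0 else "1"

    assert evenness_reply(14) == "0"   # 14 = 2*7: commit to the left disjunct
    assert evenness_reply(9) == "1"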


Classical logic has been repeatedly criticized for its operators not being constructive. Consider, for example, ∀x∃y(y=f(x)). It is true in the classical sense as long as f is a total function. Yet its truth has little (if any) practical import, as "∃y" merely signifies the existence of y, without implying that such a y can actually be found. And, indeed, if f is an incomputable function, there is no method for finding y. On the other hand, the choice operations of CoL are constructive. Computability ("truth") of ⊓x⊔y(y=f(x)) means more than just the existence of y; it means the possibility to actually find (compute, construct) a corresponding y for every x.

Similarly, let Halts(x,y) be the predicate "Turing machine x halts on input y". Consider ∀x∀y(Halts(x,y) ∨ ¬Halts(x,y)). It is true in classical logic, yet not in a constructive sense. Its truth means that, for all x and y, either Halts(x,y) or ¬Halts(x,y) is true, but it does not imply the existence of an actual way to tell which of these two is true after all. And such a way does not really exist, as the halting problem is undecidable. This means that ⊓x⊓y(Halts(x,y) ⊔ ¬Halts(x,y)) is not computable. Generally, as pointed out earlier, the principle of the excluded middle "A OR ¬A", validated by classical logic and causing the indignation of the constructivistically-minded, is not valid in computability logic with OR understood as choice disjunction. The following is an example of a constant game of the form A ⊔ ¬A with no algorithmic solution (why, by the way?):

⊓x⊓y(Halts(x,y) ⊔ ¬Halts(x,y)) ⊔ ¬⊓x⊓y(Halts(x,y) ⊔ ¬Halts(x,y)).

Chess ⊔ ¬Chess, on the other hand, is an example of a computable-in-principle yet "practically incomputable" problem, with no real computer anywhere close to being able to handle it.

There is no need to give a direct definition for the remaining choice operation of choice implication ("chimplication"), for it can be defined in terms of ¬ and ⊔ in the "standard" way:

Definition 3.4.3 The chimplication of A and B  =def  ¬A ⊔ B.

Each of the other sorts of disjunctions (parallel, sequential and toggling) generates the corresponding implication the same way.


3.5 Parallel operations


The parallel conjunction (read "pand") A∧B and the parallel disjunction (read "por") A∨B of A and B are games playing which means playing the two games simultaneously. In order to win in A∧B [resp. A∨B], Machine needs to win in both [resp. at least one] of the components A, B. For instance, ¬Chess∨Chess is a two-board game, where Machine plays black on the left board and white on the right board, and where it needs to win in at least one of the two parallel sessions of chess. A win can be easily achieved here by just mimicking in ¬Chess the moves that the adversary makes in Chess, and vice versa. This copycat strategy guarantees that the positions on the two boards always remain symmetric, as illustrated below, and thus Machine eventually loses on one board but wins on the other.
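The copycat strategy is mechanical enough to fit in a few lines of code. The sketch below assumes, purely for illustration, that adversary moves arrive as (board, move) pairs, with board "0" for ¬Chess and "1" for Chess; Machine's replies simply echo each move on the opposite board.

    def copycat(adversary_moves):
        # Mirror every adversary move on the other board.
        other = {"0": "1", "1": "0"}
        for board, move in adversary_moves:
            yield (other[board], move)

    # Environment opens with e2-e4 as White on board 1; Machine replays it as
    # White on board 0, and so on: the two positions stay symmetric forever.
    print(list(copycat([("1", "e2e4"), ("0", "e7e5")])))
    # [('0', 'e2e4'), ('1', 'e7e5')]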

This is very different from Chess ⊓ ¬Chess. In the latter, Machine needs to choose between the two components and then win the chosen one-board game, which makes Chess ⊓ ¬Chess essentially as hard to win as either Chess or ¬Chess. A game of the form A∨B is generally easier (at least, not harder) to win than A⊔B, the latter is easier to win than A∧B, and the latter, in turn, is easier to win than A⊓B.

Technically, a move α in the left [resp. right] ∧-conjunct or ∨-disjunct is made by prefixing α with '0.' [resp. '1.']. For instance, in (the initial position of) (A⊔B)∧(C⊓D), the move '1.0' is legal for Environment, meaning choosing the left ⊓-conjunct in the second ∧-conjunct of the game. If such a move is made, the game continues as (A⊔B)∧C. The player Machine, too, has initial legal moves in (A⊔B)∧(C⊓D), which are '0.0' and '0.1'.

In the formal definitions of this section and throughout the rest of this webpage, we use the important notational convention according to which:

Notation 3.5.1 For a run Γ and string α, Γ^α means the result of removing from Γ all moves except those of the form αβ, and then deleting the prefix 'α' in the remaining moves.

So, for instance, ⟨1.2, 1.0, 0.33⟩^{1.} = ⟨2, 0⟩ and ⟨1.2, 1.0, 0.33⟩^{0.} = ⟨33⟩. Another example: where Γ is the leftmost branch of the tree for (… ⊓ …)∨(… ⊓ …) shown in Figure 3.5.3, we have Γ^{0.} = ⟨0⟩ and Γ^{1.} = ⟨0⟩. Intuitively, we see this Γ as consisting of two subruns, one (Γ^{0.}) being a run in the first ∨-disjunct of (… ⊓ …)∨(… ⊓ …), and the other (Γ^{1.}) being a run in the second disjunct.
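The operation of Notation 3.5.1 is straightforward string surgery, as the following sketch shows; here a run is represented simply as a list of move strings, the players' labels being irrelevant for this purpose.

    def subrun(run, prefix):
        # Keep the moves that start with prefix, then strip the prefix itself.
        return [move[len(prefix):] for move in run if move.startswith(prefix)]

    assert subrun(["1.2", "1.0", "0.33"], "1.") == ["2", "0"]
    assert subrun(["1.2", "1.0", "0.33"], "0.") == ["33"]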

Definition 3.5.2 Assume A0 = (Dm, Dn, Vr0, A0) and A1 = (Dm, Dn, Vr1, A1) are games.

a)     A0 ∧ A1 is defined as the game (Dm, Dn, Vr0 ∪ Vr1, G) such that:

b)     A0 ∨ A1 is defined as the game (Dm, Dn, Vr0 ∪ Vr1, G) such that:

When A and B are (constant) finite games, the depth of A∧B or A∨B is the sum of the depths of A and B, as seen below.


 Figure 3.5.3: Parallel disjunction

This signifies an exponential growth of the breadth, meaning that, once we have reached the level of parallel operations, continuing drawing trees in the earlier style becomes no fun. Not to be disappointed though: making it possible to express large- or infinite-size game trees in a compact way is what our game operators are all about after all.

Whether trees are or are not helpful in visualizing parallel combinations of unistructural games, prefixation is still very much so if we think of each unilegal position Φ of A as the game ⟨Φ⟩A. This way, every unilegal run Γ of A becomes a sequence of games, as illustrated in the following example.

Example 3.5.4 The (uni)legal run Γ = ⟨1.7, 0.7, 0.49, 1.49⟩ of the game A = ⊔x⊓y(y≠x²) ∨ ⊓x⊔y(y=x²) induces the following sequence, showing how things evolve as Γ runs, i.e., how the moves of Γ affect/modify the game that is being played:

The run hits the true proposition 49≠7² ∨ 49=7², and hence is won by the machine.

As we may guess, the parallel universal quantification ("pall") ∧xA(x) of A(x) is nothing but A(0)∧A(1)∧A(2)∧…, and the parallel existential quantification ("pexists") ∨xA(x) of A(x) is nothing but A(0)∨A(1)∨A(2)∨…

Definition 3.5.5 Assume A(x) = (Dm, Dn, Vr, A) is a game.

a)     ∧xA(x) is defined as the game (Dm, Dn, Vr−{x}, G) such that:

b)     ∨xA(x) is defined as the game (Dm, Dn, Vr−{x}, G) such that:

The next group of parallel operators comprises parallel recurrence ("precurrence") ⩓ and parallel corecurrence ("coprecurrence") ⩔. ⩓A is nothing but the infinite parallel conjunction A∧A∧A∧…, and ⩔A is the infinite parallel disjunction A∨A∨A∨…. Equivalently, ⩓A and ⩔A can be respectively understood as ∧xA and ∨xA, where x is a dummy variable on which A does not depend. Intuitively, playing ⩓A means simultaneously playing in infinitely many "copies" of A, and Machine is the winner iff it wins A in all copies. ⩔A is similar, with the only difference that here winning in just one copy is sufficient.

Definition 3.5.6 Assume A = (Dm, Dn, Vr, A) is a game.

a)     ⩓A is defined as the game (Dm, Dn, Vr, G) such that:

b)     ⩔A is defined as the game (Dm, Dn, Vr, G) such that:

As was the case with choice operations, we can see that the definition of each parallel operation seen so far in this section can be obtained from the definition of its dual by just interchanging ⊤ with ⊥. Hence it is easy to verify that we always have ¬(A ∧ B) = ¬A ∨ ¬B, ¬(A ∨ B) = ¬A ∧ ¬B, ¬∧xA(x) = ∨x¬A(x), ¬∨xA(x) = ∧x¬A(x), ¬⩓A = ⩔¬A and ¬⩔A = ⩓¬A. This, in turn, means that each parallel operation is definable in the standard way in terms of its dual operation and negation. For instance, A∨B can be defined as ¬(¬A ∧ ¬B), and ⩔A as ¬⩓¬A. Three more parallel operations defined in terms of negation and other parallel operations are parallel implication ("pimplication") →, parallel rimplication ("primplication") and parallel repudiation ("prepudiation"). Here the prefix "p", as before, stands for "parallel", and the prefix "r" in "rimplication" stands for "recurrence".

Definition 3.5.7    a) A → B  =def  ¬A ∨ B

                              b) The primplication of A and B  =def  ⩓A → B

                              c) The prepudiation of A  =def  ⩔¬A

Note that, just like negation and unlike choice operations, parallel operations preserve the elementary property of games. When restricted to elementary games, the meanings of ∧, ∨ and → coincide with those of classical conjunction, disjunction and implication. Further, as long as all individuals of the universe have naming constants, the meanings of ∧x and ∨x coincide with those of the classical universal and existential quantifiers. The same conservation of classical meaning (but without any conditions on the universe) is going to be the case with the blind quantifiers ∀ and ∃ defined later; so, at the elementary level, when all individuals of the universe have naming constants, ∧x and ∨x are indistinguishable from ∀x and ∃x, respectively. As for the parallel recurrence and corecurrence, for an elementary A we simply have ⩓A = ⩔A = A.

While all classical tautologies automatically remain valid when parallel operators are applied to elementary games, in the general case the class of valid ("always computable") principles shrinks. For example, P → P∧P, i.e. ¬P∨(P∧P), is not valid. Back to our chess example, one can see that the earlier copycat strategy successful for ¬Chess∨Chess would be inapplicable to ¬Chess∨(Chess∧Chess). The best that Machine can do in this three-board game is to synchronize ¬Chess with one of the two conjuncts of Chess∧Chess. It is possible that then ¬Chess and the unmatched Chess are both lost, in which case the whole game is lost.

The principle P → P∧P is valid in classical logic because the latter sees no difference between P and P∧P. On the other hand, in virtue of its semantics, CoL is resource-conscious, and in it P is by no means the same as P∧P or P⊓P. Unlike P → P∧P, the principle ⩓P → P∧P is valid. Here, in the antecedent, we have infinitely many "copies" of P. Pick any two copies and, via copycat, synchronize them with the two conjuncts of the consequent. A win is guaranteed.


3.6 Reduction


Intuitively, A → B is the problem of reducing B to A: solving A → B means solving B while having A as a computational resource. Specifically, Machine may observe how A is being solved (by the environment), and utilize this information in its own solving of B. As already pointed out, resources are symmetric to problems: what is a problem to solve for one player is a resource that the other player can use, and vice versa. Since A is negated in ¬A ∨ B and negation means switching the roles, A appears as a resource rather than a problem for Machine in A → B. Our copycat strategy for ¬Chess∨Chess was an example of reducing Chess to Chess. The same strategy was underlying Example 3.5.4, where ⊓x⊔y(y=x²) was reduced to itself.

Let us look at a more meaningful example: reducing the acceptance problem to the halting problem. The former, as a decision problem, will be written as ⊓x⊓y(Accepts(x,y) ⊔ ¬Accepts(x,y)), where Accepts(x,y) is the predicate "Turing machine x accepts input y". Similarly, the halting problem is written as ⊓x⊓y(Halts(x,y) ⊔ ¬Halts(x,y)). Neither problem has an algorithmic solution, yet the following pimplication does:

⊓x⊓y(Halts(x,y) ⊔ ¬Halts(x,y)) → ⊓x⊓y(Accepts(x,y) ⊔ ¬Accepts(x,y))

Here is Machine's winning strategy for the above game. Wait till Environment makes the moves 1.m and 1.n for some m and n. Making these moves essentially means asking the question "Does machine m accept input n?". If such moves are never made, you win. Otherwise, the moves bring the game down to

⊓x⊓y(Halts(x,y) ⊔ ¬Halts(x,y)) → Accepts(m,n) ⊔ ¬Accepts(m,n)

Make the moves 0.m and 0.n, thus asking the counterquestion "Does machine m halt on input n?". Your moves bring the game down to

Halts(m,n) ⊔ ¬Halts(m,n) → Accepts(m,n) ⊔ ¬Accepts(m,n)

Environment will have to answer this question, or else it loses (why?). If it answers by the move 0.0 ("No, m does not halt on n"), you make the move 1.0 (say "m does not accept n"). The game will be brought down to ¬Halts(m,n) → ¬Accepts(m,n). You win, because if m does not halt on n, then it does not accept n, either. Otherwise, if Environment answers by the move 0.1 ("Yes, m halts on n"), start simulating m on n until m halts. If you see that m accepted n, make the move 1.1 (say "m accepts n"); otherwise make the move 1.0 (say "m does not accept n"). Of course, it is a possibility that the simulation goes on forever. But then Environment has lied when saying "m halts on n"; in other words, the antecedent is false, and you still win.
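The decision core of this strategy, i.e. what Machine does once the two questions are on the table, can be sketched as ordinary code. Here halts_answer stands for Environment's reply to the counterquestion, and simulate for a hypothetical simulator returning True iff m accepts n; simulate is only called when Environment has claimed halting, and it would run forever exactly when that claim is a lie.

    def accepts_move(m, n, halts_answer, simulate):
        # Environment said "No": m never halts, hence never accepts.
        if halts_answer == "no":
            return "1.0"
        # Environment said "Yes": the simulation terminates unless it lied.
        return "1.1" if simulate(m, n) else "1.0"

    # A toy world in which machine 7 halts on input 3 and accepts it:
    print(accepts_move(7, 3, "yes", lambda m, n: (m, n) == (7, 3)))   # 1.1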

Note that what Machine did when following the above strategy was indeed reducing the acceptance problem to the halting problem: it solved the former using an external (Environment-provided) solution of the latter.

There are many natural concepts of reduction, and a strong case can be made in favor of the thesis that the sort of reduction captured by → is the most basic among them. For this reason, we agree that, if we simply say "reduction", it always means the sort of reduction captured by →. A great variety of other reasonable concepts of reduction is expressible in terms of →. Among those is Turing reduction. Remember that a predicate q(x) is said to be Turing reducible to a predicate p(x) if q(x) can be decided by a Turing machine equipped with an oracle for p(x). For a positive integer n, n-bounded Turing reducibility is defined in the same way, with the only difference that here the oracle is allowed to be used only n times. It turns out that parallel rimplication is a conservative generalization of Turing reduction. Namely, when p(x) and q(x) are elementary games (i.e. predicates), q(x) is Turing reducible to p(x) if and only if the problem ⩓⊓x(p(x) ⊔ ¬p(x)) → ⊓x(q(x) ⊔ ¬q(x)) has an algorithmic solution. If here we remove ⩓, thus turning the primplication into an ordinary pimplication, we get the same result for 1-bounded Turing reducibility. More generally, as one might guess, n-bounded Turing reduction will be captured by

⊓x1(p(x1) ⊔ ¬p(x1)) ∧ … ∧ ⊓xn(p(xn) ⊔ ¬p(xn)) → ⊓x(q(x) ⊔ ¬q(x)).

If, instead, we write

⊓x1…⊓xn((p(x1) ⊔ ¬p(x1)) ∧ … ∧ (p(xn) ⊔ ¬p(xn))) → ⊓x(q(x) ⊔ ¬q(x)),

then we get a conservative generalization of n-bounded weak truth-table reduction. The latter differs from n-bounded Turing reduction in that here all n oracle queries should be made at once, before seeing responses to any of those queries. What is called mapping (many-one) reducibility of q(x) to p(x) is nothing but computability of ⊓x⊔y(q(x) ↔ p(y)), where A ↔ B abbreviates (A → B) ∧ (B → A). We could go on and on with this list.
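The difference between the last two formulas is exactly the difference between adaptive and non-adaptive oracle access, which is easy to dramatize in code. In the toy below, the oracle answers questions of the form "is k at most the hidden number s?": n adaptive queries locate s by binary search, while the batch-style protocol must fix all of its queries in advance. Both the task and the oracle are invented for illustration only.

    def locate_adaptive(oracle, n):
        # Turing-style: each query may depend on the previous answers.
        lo, hi = 0, 2 ** n
        for _ in range(n):
            mid = (lo + hi) // 2
            lo, hi = (mid, hi) if oracle(mid) else (lo, mid)
        return lo

    def locate_batch(oracle, queries):
        # wtt-style: the query list is fixed before any answer is seen.
        answers = [oracle(q) for q in queries]
        return sum(answers) - 1

    secret = 11
    oracle = lambda k: k <= secret
    print(locate_adaptive(oracle, 4))              # 11, found with 4 queries
    print(locate_batch(oracle, list(range(16))))   # 11, but with 16 queries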

And yet many other natural concepts of reduction expressible in the language of CoL may have no established names in the literature. For example, from the previous discussion it can be seen that a certain reducibility-style relation holds between the predicates Accepts(x,y) and Halts(x,y) in an even stronger sense than the algorithmic winnability of

⊓x⊓y(Halts(x,y) ⊔ ¬Halts(x,y)) → ⊓x⊓y(Accepts(x,y) ⊔ ¬Accepts(x,y)).

In fact, not only does the above problem have an algorithmic solution, but so does the generally harder-to-solve problem

⊓x⊓y(Halts(x,y) ⊔ ¬Halts(x,y) → Accepts(x,y) ⊔ ¬Accepts(x,y)).

Among the merits of CoL is that it offers a formalism and deductive machinery for systematically expressing and studying computation-theoretic relations such as reducibility, decidability, enumerability, etc., and all kinds of variations of such concepts.

Back to reducibility: while the standard approaches only allow us to talk about (whatever sort of) reducibility as a relation between problems, in our approach reduction becomes an operation on problems, with reducibility as a relation simply meaning computability of the corresponding combination (such as A → B) of games. Similarly for other relations or properties such as the property of decidability. The latter becomes the operation of deciding if we define the problem of deciding a predicate (or any computational problem) p(x) as the game ⊓x(p(x) ⊔ ¬p(x)). So, now we can meaningfully ask questions such as "Is the reduction of the problem of deciding q(x) to the problem of deciding p(x) always reducible to the mapping reduction of q(x) to p(x)?". This question would be equivalent to whether the following formula is valid in CoL:

⊓x⊔y(q(x) ↔ p(y)) → (⊓x(p(x) ⊔ ¬p(x)) → ⊓x(q(x) ⊔ ¬q(x))).

The answer turns out to be "Yes", meaning that mapping reduction is at least as strong as reduction. Here is a strategy which wins this game no matter what particular predicates p(x) and q(x) are:

1. Wait till, for some m, Environment brings the game down to

⊓x⊔y(q(x) ↔ p(y)) → (⊓x(p(x) ⊔ ¬p(x)) → q(m) ⊔ ¬q(m)).

2. Bring the game down to

⊔y(q(m) ↔ p(y)) → (⊓x(p(x) ⊔ ¬p(x)) → q(m) ⊔ ¬q(m)).

3. Wait till, for some n, Environment brings the game down to

(q(m) ↔ p(n)) → (⊓x(p(x) ⊔ ¬p(x)) → q(m) ⊔ ¬q(m)).

4. Bring the game down to

(q(m) ↔ p(n)) → (p(n) ⊔ ¬p(n) → q(m) ⊔ ¬q(m)).

5. Wait till Environment brings the game down to one of the following:

      5a. (q(m) ↔ p(n)) → (p(n) → q(m) ⊔ ¬q(m)). In this case, further bring it down to (q(m) ↔ p(n)) → (p(n) → q(m)), and you have won.

      5b. (q(m) ↔ p(n)) → (¬p(n) → q(m) ⊔ ¬q(m)). In this case, further bring it down to (q(m) ↔ p(n)) → (¬p(n) → ¬q(m)), and you have won, again.
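Stripped of the game-playing scaffolding, steps 2 through 5 amount to nothing more than function composition: query the mapping reduction once, query the p-decider once, and relay the answer. The sketch below states that composition directly; the names f and decide_p are illustrative stand-ins for the two resources provided by the antecedent.

    def decide_q(m, f, decide_p):
        n = f(m)                # steps 2-3: obtain n with q(m) <-> p(n)
        return decide_p(n)      # steps 4-5: copy the answer for p(n) to q(m)

    # Toy instance: q(x) = "x is odd", p(x) = "x is even", f(x) = x + 1.
    print(decide_q(7, lambda x: x + 1, lambda x: x % 2 == 0))   # True: 7 is odd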

We can also ask: "Is the mapping reduction of q(x) to p(x) always reducible to the reduction of the problem of deciding q(x) to the problem of deciding p(x)?". This question would be equivalent to whether the following formula is valid:

(⊓x(p(x) ⊔ ¬p(x)) → ⊓x(q(x) ⊔ ¬q(x))) → ⊓x⊔y(q(x) ↔ p(y)).

The answer here turns out to be "No", meaning that mapping reduction is properly stronger than reduction. This negative answer can be easily obtained by showing that the above formula is not provable in the deductive system CL4 that we are going to see later. CL4 is sound and complete with respect to validity. Its completeness implies that any formula which is not provable in it (such as the above formula) is not valid. And the soundness of CL4 implies that every provable formula is valid. So, had our ad hoc attempt to come up with a strategy for ⊓x⊔y(q(x) ↔ p(y)) → (⊓x(p(x) ⊔ ¬p(x)) → ⊓x(q(x) ⊔ ¬q(x))) failed, its validity could still have been established by finding a CL4-proof of it.

To summarize, CoL offers not only a convenient language for specifying computational problems and relations or operations on them, but also a systematic tool for answering questions in the above style and beyond.


3.7 Blind operations


Our definition of the blind universal quantifier ("blall") ∀x and blind existential quantifier ("blexists") ∃x assumes that the game A(x) to which they are applied satisfies the condition of unistructurality in x. This condition is weaker than the earlier seen unistructurality: every unistructural game is also unistructural in x, but not vice versa. Intuitively, unistructurality in x means that the structure of the game does not depend on (how the valuation evaluates) the variable x. Formally, we say that a game A(x) = (Dm, Dn, Vr, A) is unistructural in x iff, for any (Vr,Dm)-valuation e and any two individuals a and b, we have Lr_e^{A(a)} = Lr_e^{A(b)}. All constant or elementary games are unistructural in (whatever variable) x. And all operations of CoL are known to preserve unistructurality in x.

Definition 3.7.1 Assume A(x) = (Dm, Dn, Vr, A) is a game unistructural in x.

a)     ∀xA(x) is defined as the game (Dm, Dn, Vr−{x}, G) such that:

b)     ∃xA(x) is defined as the game (Dm, Dn, Vr−{x}, G) such that:

Intuitively, playing ∀xA(x) or ∃xA(x) means just playing A(x) "blindly", without knowing the value of x. In ∀xA(x), Machine wins iff the play it generates is successful for every possible value of x from the domain, while in ∃xA(x) being successful for just one value is sufficient. When applied to elementary games, the blind quantifiers act exactly like the quantifiers of classical logic, even if not all individuals of the universe have naming constants.

From the definition one can see a perfect symmetry between ∀ and ∃. Therefore, as with the other quantifiers seen so far, the standard De Morgan laws and interdefinabilities hold.

Unlike ∧xA(x), which is a game on infinitely many boards, both ∀xA(x) and ⊓xA(x) are one-board games. Yet, they are very different from each other. To see this difference, compare the problems ⊓x(Even(x) ⊔ Odd(x)) and ∀x(Even(x) ⊔ Odd(x)). The former is an easily winnable game of depth 2: Environment selects a number, and Machine tells whether that number is even or odd. The latter, on the other hand, is a game which is impossible to win. This is a game of depth 1, where the value of x is not specified by either player, and only Machine moves --- tells whether (the unknown) x is even or odd. Whatever Machine says, it loses, because there is always a value for x that makes the answer wrong.

This should not suggest that nontrivial ∀-games can never be won. For instance, the problem ∀x(Even(x) ⊔ Odd(x) → ⊓y(Even(x+y) ⊔ Odd(x+y))) has an easy solution. The idea of a winning strategy is that, for any given y, in order to tell whether x+y is even or odd, it is not really necessary to know the value of x. Rather, just knowing whether x is even or odd is sufficient. And such knowledge can be obtained from the antecedent. In other words, for any known y and unknown x, the problem of telling whether x+y is even or odd is reducible to the problem of telling whether x is even or odd. Specifically, if both x and y are even or both are odd, then x+y is even; otherwise x+y is odd. Below is the evolution sequence induced by the run ⟨1.5, 0.0, 1.1⟩, where Machine used such a strategy.

Machine won because the play hit the true ∀x(Even(x) → Odd(x+5)). Notice how ∀x persisted throughout the sequence. Generally, the (∀,∃)-structure of a game will remain unchanged in such sequences. The same is the case with the parallel operations, such as → in the present case.
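The arithmetic behind this strategy is the one-liner below: Machine never needs x itself, only its parity, which is exactly what the antecedent supplies. The encoding of parities as 0/1 mirrors the choice moves of the run above and is assumed for illustration.

    def parity_of_sum(parity_of_x: int, y: int) -> int:
        # (x + y) mod 2 depends on x only through x mod 2.
        return (parity_of_x + y) % 2

    assert parity_of_sum(0, 5) == 1   # x even, y = 5: answer "odd", i.e. move 1.1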

Exercise 3.7.2 Make your best guess regarding whether the following problems are always computable.

∀xA(x) → ∧xA(x)

∧xA(x) → ∀xA(x)

∀xA(x) → ⊓xA(x)

⊓xA(x) → ∀xA(x)

∧xA(x) → ⊓xA(x)

⊓xA(x) → ∧xA(x)

⊔xA(x) → ∨xA(x)

∨xA(x) → ⊔xA(x)

⊔xA(x) → ∃xA(x)

∀x(A(x) ∧ B(x)) → ∀xA(x) ∧ ∀xB(x)

∀x(A(x) ⊓ B(x)) → ∀xA(x) ⊓ ∀xB(x)

∀xA(x) ⊔ ∀xB(x) → ∀x(A(x) ⊔ B(x))




3.8 Branching operations


There are only four branching operations that we consider: recurrence ⫰, corecurrence ⫯, rimplication and repudiation. Let us talk about recurrence first.

We have already seen one (parallel) sort of recurrence operations, and more are still to come. What is common to the members of the family of game operations called recurrence operations is that, when applied to a game A, they turn it into a game playing which means repeatedly playing A. In terms of resources, recurrence operations generate multiple "copies" of A, thus making A a reusable/recyclable resource. In classical logic, recurrence-style operations would be meaningless, because classical logic, as we know, is resource-blind and thus sees no difference between one and multiple copies of A. In the resource-conscious CoL, however, recurrence operations are not only meaningful, but also necessary to achieve a satisfactory level of expressiveness and realize its potential and ambitions. Hardly any computer program is used only once; rather, it is run over and over again. Loops within such programs also assume multiple repetitions of the same subroutine. In general, the tasks performed in real life by computers, robots or humans are typically recurring ones or involve recurring subtasks.

There is more than one naturally emerging recurrence operation. The differences between various recurrence operations are related to how "repetition" or "reusage" is exactly understood. Imagine a computer that has a program successfully playing chess. The resource that such a computer provides is obviously something stronger than just Chess, for it permits one to play Chess as many times as the user wishes, while Chess, as such, only assumes one play. The simplest operating system would allow the user to start a session of Chess, then --- after finishing or abandoning and destroying it --- start a new play again, and so on. The game that such a system plays --- i.e. the resource that it supports/provides --- is ▲Chess, which assumes an unbounded number of plays of Chess in a sequential fashion. A formal definition of the operation ▲, called sequential recurrence, will be given in Section 3.9.

A more advanced operating system, however, would not require the user to destroy the old sessions before starting new ones; rather, it would allow one to run as many parallel sessions as needed. This is what is captured by ⩓Chess, meaning nothing but the infinite parallel conjunction Chess ∧ Chess ∧ Chess ∧ … As we remember from Section 3.5, ⩓ is called parallel recurrence.

Yet a really good operating system would not only allow the user to start new sessions of Chess without destroying old ones; it would also make it possible to branch/replicate any particular stage of any particular session, i.e., create any number of "copies" of any already reached position of the multiple parallel plays of Chess, thus giving the user the possibility to try different continuations from the same position. What corresponds to this intuition is the branching recurrence ("brecurrence") ⫰Chess of Chess.

At the intuitive level, the difference between ⫰ and ⩓ is that in ⫰A, unlike ⩓A, Environment does not have to restart A from the very beginning every time it wants to reuse it (as a resource); rather, Environment is allowed to backtrack to any of the previous --- not necessarily starting --- positions and try a new continuation from there, thus depriving the adversary of the possibility to reconsider the moves it has already made in that position. This is in fact the type of reusage every purely software resource allows or would allow in the presence of an advanced operating system and unlimited memory: one can start running process A; then fork it at any stage, thus creating two threads that have a common past but possibly diverging futures (with the possibility to treat one of the threads as a "backup copy" and preserve it for backtracking purposes); then further fork any of the branches at any time; and so on.

The less flexible type of reusage of A assumed by ⩓A, on the other hand, is closer to what infinitely many autonomous physical resources would naturally offer, such as an unlimited number of independently acting robots each performing task A, or an unlimited number of computers with limited memories, each one only capable of and responsible for running a single thread of process A. Here the effect of forking an advanced stage of A cannot be achieved unless, by good luck, there are two identical copies of the stage, meaning that the corresponding two robots or computers have so far acted in precisely the same ways.

The formal definitions of ⫰A and its dual branching corecurrence ("cobrecurrence") ⫯A (= ¬⫰¬A) in early papers on CoL [Jap03, Jap09a] were direct formalizations of the above intuitions, with an explicit presence of "replicative" moves used by players to fork a given thread of A and create two threads out of one. Later, in [Jap12b], another definition was found which was proven to be equivalent to the old one in the sense of mutual reducibility of the old and the new versions of ⫰A. The new definition less directly corresponds to the above intuitions, but is technically simpler, and we choose it as our "canonical" definition of branching (co)recurrence. To be able to state it, we agree on the following:

Notation 3.8.1 Where Γ is a run and w is a bitstring (finite or infinite sequence of 0s and 1s), Γ^{≼w} means the result of deleting from Γ all moves except those that look like u.β for some finite initial segment u of w, and then further deleting the prefix "u." from such moves.

Definition 3.8.2 Assume A = (Dm, Dn, Vr, A) is a game.

a)     ⫰A is defined as the game (Dm, Dn, Vr, G) such that:

b)     ⫯A is defined as the game (Dm, Dn, Vr, G) such that:

The direct intuitions underlying the above definition are as follows. To play ⫰A or ⫯A means to simultaneously play in multiple parallel copies/threads of A. Each such thread is denoted by an infinite bitstring w (so, there are in fact uncountably many threads). Every legal move by either player looks like u.β for some finite bitstring u, and the effect/meaning of such a move is simultaneously making the move β in all threads w such that u is an initial segment of w. So, where Γ is the overall run of ⫰A or ⫯A, the run in a given thread w of A is Γ^{≼w}. In order to win ⫰A, Machine needs to win A in all threads, while for winning ⫯A it is sufficient to win in just one thread.
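Thread extraction is again simple string surgery, as the following sketch illustrates; the run is given as a list of moves of the form "u.b", and w is passed as a finite bitstring standing in for an infinite one (long enough for the moves at hand).

    def thread(run, w):
        # Keep the moves u.b where u is an initial segment of w; strip "u.".
        result = []
        for move in run:
            u, _, b = move.partition(".")
            if w.startswith(u):
                result.append(b)
        return result

    run = ["0.alpha", "01.beta", "1.gamma", "010.delta"]
    print(thread(run, "0110"))   # ['alpha', 'beta']: both u = 0 and u = 01 qualify
    print(thread(run, "10"))     # ['gamma']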

It is obvious that we always have ¬⫰A = ⫯¬A and ¬⫯A = ⫰¬A, hence ⫰A = ¬⫯¬A and ⫯A = ¬⫰¬A.

Branching recurrence ⫰ can be shown to be stronger than its parallel counterpart ⩓, in the sense that the principle ⫰A → ⩓A is valid while ⩓A → ⫰A is not. The two groups of operators, in isolation from each other, also validate different principles. For instance, A ∧ ⫰(A → A∧A) → ⫰A is valid while A ∧ ⩓(A → A∧A) → ⩓A is not; certain other principles, too, separate the branching group from the parallel group.

Branching rimplication ("brimplication") and branching repudiation ("brepudiation") are defined in terms of ⫰, ⫯, ¬ and → the same way as parallel rimplication and prepudiation are defined in terms of ⩓, ⩔, ¬ and →:

Definition 3.8.3

        a) The brimplication of A and B  =def  ⫰A → B

        b) The brepudiation of A  =def  ⫯¬A

Exercise 3.8.4 The Kolmogorov complexity k(x) of a number x is the size of the smallest Turing machine which outputs x on input 0. The Kolmogorov complexity problem ⊓x⊔y(y=k(x)) has no algorithmic solution. Nor is it reducible to the halting problem in the strong sense of →, meaning that the problem ⊓x⊓y(Halts(x,y) ⊔ ¬Halts(x,y)) → ⊓x⊔y(y=k(x)) has no algorithmic solution, either. The Kolmogorov complexity problem, however, is reducible to the halting problem in the weaker sense of brimplication, meaning that Machine has a winning strategy for ⫰⊓x⊓y(Halts(x,y) ⊔ ¬Halts(x,y)) → ⊓x⊔y(y=k(x)). Describe such a strategy, informally.

Both brimplication and primplication are conservative generalizations of Turing reduction. Specifically, for any predicates p(x) and q(x), the problem ⫰⊓x(p(x) ⊔ ¬p(x)) → ⊓x(q(x) ⊔ ¬q(x)) is computable iff q(x) is Turing reducible to p(x) iff the problem ⩓⊓x(p(x) ⊔ ¬p(x)) → ⊓x(q(x) ⊔ ¬q(x)) is computable. This means that, when restricted to traditional sorts of problems such as decision problems, the behaviors of ⫰ and ⩓ are indistinguishable. This stops being the case when these operators are applied to problems with higher degrees of interactivity though. For instance, the following problem is computable, but becomes incomputable with ⩓ instead of ⫰:

⫰⊓y⊓x(Halts(x,y) ⊔ ¬Halts(x,y)) → ⊓y(⊓x(Halts(x,y) ⊔ ¬Halts(x,y)) ∧ ⊓x(Halts(x,y) ⊔ ¬Halts(x,y))).

Generally, (⩓A → B) → (⫰A → B) is valid but (⫰A → B) → (⩓A → B) is not.

While both brimplication and primplication are weaker than → and hence more general, brimplication is still a more interesting operation of reduction than primplication. What makes it special is the belief expressed in Thesis 3.8.5 below. That belief, in turn, rests on the belief that ⫰ (and by no means ⩓) is the operation allowing one to reuse its argument in the strongest algorithmic sense possible.

Let A and B be computational problems (games). We say that B is brimplicationally [resp. primplicationally, pimplicationally, etc.] reducible to A iff ⫰A → B [resp. ⩓A → B, A → B, etc.] has an algorithmic solution (winning strategy).

Thesis 3.8.5  Brimplicational reducibility is an adequate mathematical counterpart of our intuition of reducibility in the weakest (and thus most general) algorithmic sense possible. Specifically:

   (a) Whenever a problem B is brimplicationally reducible to a problem A, B is also algorithmically reducible to A according to anyone’s reasonable intuition.

   (b) Whenever a problem B is algorithmically reducible to a problem A according to anyone’s reasonable intuition, B is also brimplicationally reducible to A.

This is pretty much in the same sense as, by the Church-Turing thesis, a function f is computable by a Turing machine iff f is effectively computable according to anyone's reasonable intuition.


3.9 Sequential operations


The sequential conjunction ("sand") A △ B of games A and B starts and proceeds as A. It will also end as A unless, at some point, Environment decides to switch to the next component, in which case A is abandoned, and the game restarts, continues and ends as B. The sequential disjunction ("sor") A ▽ B of A and B is similar, with the difference that it is Machine who decides whether and when to switch from A to B.

The original formal definition of A △ B and A ▽ B found in [Jap08b] was a direct formalization of the above description. Definition 3.9.1 given below, while less direct, still faithfully formalizes the above intuitions as long as only static games are considered, and we opt for it because it is technically simpler. Specifically, Definition 3.9.1 allows either player to continue making moves in A even after a switch takes place; such moves are meaningless but harmless. Similarly, it allows either player to make moves in B without waiting for a switch to take place, even though a smart player would only start making such moves if and when a switch happens.

Definition 3.9.1 Assume A0 = (Dm, Dn, Vr0, A0) and A1 = (Dm, Dn, Vr1, A1) are games.

a)     A0 △ A1 is defined as the game (Dm, Dn, Vr0 ∪ Vr1, G) such that:

b)     A0 ▽ A1 is defined as the game (Dm, Dn, Vr0 ∪ Vr1, G) such that:

Recall that, for a predicate p(x), ⊓x(p(x) ⊔ ¬p(x)) is the problem of deciding p(x). Then what is the similar-looking ⊓x(¬p(x) ▽ p(x))? If you have guessed that this is the problem of semideciding p(x), you are right. Machine has a winning strategy in this game if and only if p(x) is semidecidable, i.e., recursively enumerable. Namely, if p(x) is recursively enumerable, a winning strategy by Machine is to wait until Environment brings the game down to ¬p(n) ▽ p(n) for some particular n. After that, Machine starts looking for a certificate of p(n)'s being true. If and when such a certificate is found (meaning that p(n) is indeed true), Machine makes a switch move, turning ¬p(n) ▽ p(n) into the true p(n); and if no certificate exists (meaning that p(n) is false), then Machine keeps looking for a non-existent certificate forever and thus never makes any moves, so that the game ends as ¬p(n), which, again, is true. And vice versa: any effective winning strategy for ⊓x(¬p(x) ▽ p(x)) can obviously be seen as a semidecision procedure for p(x), which accepts an input n iff the strategy ever makes a switch move in the scenario where Environment's initial choice of a value for x is n.
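Here is this strategy for one concrete predicate: p(n) = "n is a sum of two squares" (decidable, in fact, but the certificate search below treats it in honest r.e. style). The switch move is modeled as returning from an otherwise unbounded search; on an input for which p(n) is false, the function simply never returns, which is exactly the scenario where the game ends as the then-true ¬p(n).

    from itertools import count

    def semidecide(n):
        for a in count():                        # unbounded certificate search
            for b in range(a + 1):
                if a * a + b * b == n:
                    return ("switch", (a, b))    # certificate found: move to p(n)

    print(semidecide(13))   # ('switch', (3, 2)): 13 = 9 + 4
    # semidecide(7) would search forever, never making a move.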

Existence of effective winning strategies for games has been shown [Jap03] to be closed under the rules "from A → B and A conclude B", "from A and B conclude A ∧ B", "from A conclude ⊓xA", "from A conclude ⫰A". In view of these closures, the validity of the principles discussed below implies certain known facts from the theory of computation. Needless to say, those examples once again demonstrate how CoL can be used as a systematic tool for defining new interesting properties and relations between computational problems, and not only reproducing already known theorems but also discovering an infinite variety of new facts.

The valid formula ⊓x(¬p(x) ▽ p(x)) ∧ ⊓x(p(x) ▽ ¬p(x)) → ⊓x(p(x) ⊔ ¬p(x)) "expresses" the well known fact that, if both a predicate p(x) and its complement ¬p(x) are recursively enumerable, then p(x) is decidable. Actually, the validity of this formula means something more: it means that the problem of deciding p(x) is reducible to the (∧-conjunction of) the problems of semideciding p(x) and ¬p(x). In fact, a reducibility in an even stronger sense (a sense that has no name) holds, expressed by the formula ⊓x((¬p(x) ▽ p(x)) ∧ (p(x) ▽ ¬p(x)) → p(x) ⊔ ¬p(x)).

The formula ⊓x⊔y(q(x) ↔ p(y)) ∧ ⊓x(¬p(x) ▽ p(x)) → ⊓x(¬q(x) ▽ q(x)) is also valid, which implies the known fact that, if a predicate q(x) is mapping reducible to a predicate p(x) and p(x) is recursively enumerable, then so is q(x). Again, the validity of this formula in fact means something even more: it means that the problem of semideciding q(x) is reducible to the problems of mapping reducing q(x) to p(x) and semideciding p(x).

Certain other reducibilities hold only in the sense of rimplications rather than implications. Here is an example. Two Turing machines are said to be equivalent iff they accept exactly the same inputs. Let Neq(x,y) be the predicate "Turing machines x and y are not equivalent". This predicate is neither semidecidable nor co-semidecidable. However, the problem of its semideciding primplicationally (and hence also brimplicationally) reduces to the halting problem. Specifically, Machine has an effective winning strategy for the following game:

⩓⊓z⊓t(Halts(z,t) ⊔ ¬Halts(z,t)) → ⊓x⊓y(¬Neq(x,y) ▽ Neq(x,y)).

A strategy here is to wait till Environment specifies some values m and n for x and y. Then, create a variable i, initialize it to 1 and do the following. Specify z and t as m and i in one yet-unused copy of the antecedent, and as n and i in another yet-unused copy. That is, ask Environment whether m halts on input i and whether n halts on the same input. Environment will have to provide the correct pair of answers, or else it loses. (1) If the answers are “No,No”, increment i to i+1 and repeat the step. (2) If the answers are “Yes,Yes”, simulate both m and n on input i until they halt. If both machines accept or both reject, increment i to i+1 and repeat the step. Otherwise, if one accepts and one rejects, make a switch move in the consequent and celebrate victory. (3) If the answers are “Yes,No”, simulate m on i until it halts. If m rejects i, increment i to i+1 and repeat the step. Otherwise, if m accepts i, make a switch move in the consequent and you win. (4) Finally, if the answers are “No,Yes”, simulate n on i until it halts. If n rejects i, increment i to i+1 and repeat the step. Otherwise, if n accepts i, make a switch move in the consequent and you win.
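In code, the four cases collapse into a short loop. Below, halts(z, t) stands for the answers Environment supplies through fresh copies of the antecedent, and accepts(z, t) for the simulation that the strategy only runs after halting has been confirmed; both callables are hypothetical stand-ins for those interactions.

    from itertools import count

    def semidecide_neq(m, n, halts, accepts):
        # Returns the switch move iff machines m and n are not equivalent.
        for i in count(1):
            hm, hn = halts(m, i), halts(n, i)        # the two oracle answers
            if hm and hn:
                if accepts(m, i) != accepts(n, i):   # both simulations halt here
                    return "switch"
            elif hm and accepts(m, i):               # n never halts on i: rejects
                return "switch"
            elif hn and accepts(n, i):               # m never halts on i: rejects
                return "switch"
            # answers "No,No", or agreement on i: move on to input i+1

    # Toy world: m accepts exactly the even inputs, n accepts everything.
    print(semidecide_neq("m", "n",
                         halts=lambda z, t: True,
                         accepts=lambda z, t: t % 2 == 0 if z == "m" else True))
    # 'switch' (the two machines differ on input 1)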

The sequential universal quantification ("sall") △xA(x) of A(x) is essentially nothing but A(0) △ A(1) △ A(2) △ …; the sequential existential quantification ("sexists") ▽xA(x) of A(x) is A(0) ▽ A(1) ▽ A(2) ▽ …; the sequential recurrence ("srecurrence") ▲A of A is A △ A △ A △ …; and the sequential corecurrence ("cosrecurrence") ▼A of A is A ▽ A ▽ A ▽ … Formally, we have:

Definition 3.9.2 Assume A(x) = (Dm, Dn, Vr, A) is a game.

a)     △xA(x) is defined as the game (Dm, Dn, Vr−{x}, G) such that:

b)     ▽xA(x) is defined as the game (Dm, Dn, Vr−{x}, G) such that:

Definition 3.9.3 Assume A = (Dm, Dn, Vr, A) is a game.

a)     ▲A is defined as the game (Dm, Dn, Vr, G) such that:

b)     ▼A is defined as the game (Dm, Dn, Vr, G) such that, for any (Vr,Dm)-valuation e, we have:

For illustration, remember the Kolmogorov complexity function k(x). The value of k(x) is known to be always smaller (in fact, logarithmically so) than x itself. While ⊓x⊔y(y=k(x)) is not computable, Machine does have an algorithmic winning strategy for the problem ⊓x▼⊔y(y=k(x)). It goes like this: Wait till Environment specifies a value m for x, thus asking "what is the Kolmogorov complexity of m?" and bringing the game down to ▼⊔y(y=k(m)). Answer that the complexity is m, i.e. specify y as m. After that, start simulating, in parallel, all machines n of sizes smaller than m on input 0. Whenever you find a machine n that returns m on input 0 and is smaller than any of the previously found such machines, make a switch move and, in the new copy of ⊔y(y=k(m)), specify y as the size |n| of n. This obviously guarantees success: sooner or later the real Kolmogorov complexity c of m will be reached and named; and, even though the strategy will never be sure that k(m) is not something yet smaller than c, it will never really find a reason to further reconsider its latest claim that c = k(m).
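An outline of this strategy in code: each yielded value models a switch into a fresh copy of ⊔y(y = k(m)) naming a new, smaller candidate for k(m). The helpers size and returns_m, like the finite program list of the demo, are illustrative stand-ins for the dovetailed simulation of all machines smaller than m.

    def kolmogorov_claims(m, programs, size, returns_m):
        best = m                       # the opening claim: y = m
        yield best
        for p in programs:             # in reality: dovetail the simulations
            if returns_m(p) and size(p) < best:
                best = size(p)         # a smaller description of m was found:
                yield best             # switch, and name its size as the new y

    # Toy demo: programs are (name, size) pairs; p2 and p3 output m.
    progs = [("p1", 9), ("p2", 5), ("p3", 7)]
    claims = kolmogorov_claims(12, progs,
                               size=lambda p: p[1],
                               returns_m=lambda p: p[0] in ("p2", "p3"))
    print(list(claims))   # [12, 5]; the last claim, 5, is never reconsidered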

Exercise 3.9.4 Describe a winning strategy for ⊓x▼⊔y(k(x) = x−y).

As expected, sequential implication ("simplication"), sequential rimplication ("srimplication") and sequential repudiation ("srepudiation") are defined as follows:

Definition 3.9.5    a) The simplication of A and B  =def  ¬A ▽ B

                              b) The srimplication of A and B  =def  ▲A → B

                              c) The srepudiation of A  =def  ▼¬A



3.10 Toggling operations


As all other sorts of conjunctions and disjunctions, toggling conjunction ("tand") ⩕ and toggling disjunction ("tor") ⩖ are dual to each other, and the definition of one is obtained from the definition of the other by interchanging the roles of the two players. Let us just focus on disjunction. One of the ways to characterize A ⩖ B is the following. This game starts and proceeds as a play of A. It will also end as an ordinary play of A unless, at some point, Machine decides to switch to B, after which the game becomes and continues as B. It will also end as B unless, at some point, Machine decides to switch back to A. In such a case the game again becomes A, where A resumes from the position in which it was abandoned (rather than from its start position, as would be the case, say, in A ▽ B ▽ A). Later Machine may again switch to (the abandoned position of) B, and so on. Machine wins the overall play iff it switches from one component to another ("changes its mind", or "corrects its mistake") at most finitely many times and wins in its final choice, i.e., in the component which was chosen last to switch to.

An alternative way to characterize A ⩖ B is to say that it is played exactly like A ∨ B, with the only difference that Machine is allowed to make a "choose A" or "choose B" move any number of times. If infinitely many choices are made, Machine loses. Otherwise, the winner in the play will be the player who wins in the component that was chosen last ("the eventual choice"). The case of Machine having made no choices at all is treated as if it had chosen A. Thus, as in sequential disjunction, the leftmost component is the "default", or "automatically made", initial choice.

It is important to note that the adversary (or perhaps even the machine itself) never knows whether a given choice of a component of A ⩖ B is the last choice or not. Otherwise, if Machine was required to indicate that it had made its final choice, then the resulting operation, for static games, would essentially be the same as A ⊔ B. Indeed, as we remember, in static games it never hurts a player to postpone making moves, so Environment could just inactively wait till the last choice is declared, and start playing the chosen component only after that, as in the case of A ⊔ B; under these circumstances, making some temporary choices before making the final choice would not make any sense for Machine, either.

What would happen if we did not require that Machine can change its mind only finitely many times? There would be no "final choice" in this case. So, the only natural winning condition in the case of infinitely many choices would be to say that Machine wins iff it simply wins in one of the components. But then the resulting operation would be essentially the same as ∨, as a smart machine would always opt for keeping switching between components forever. That is, allowing infinitely many choices would amount to not requiring any choices at all, as is the case with A ∨ B.

One may also ask what would happen if we allowed Machine to make an arbitrary initial choice between A and B and then reconsider its choice only (at most) once. Such an operation on games, albeit meaningful, would not be basic. That is because it can be expressed through our primitives as (A ▽ B) ⊔ (B ▽ A).

The very weak sort of choice captured by ⩖ is the kind of choice that, in real life, one would ordinarily call choice after trial and error. Indeed, a problem is generally considered to be solved after trial and error (a correct choice/solution/answer found) if, after perhaps coming up with several wrong solutions, a true solution is eventually found. That is, mistakes are tolerated and forgotten as long as they are eventually corrected. It is however necessary that new solutions stop coming at some point, so that there is a last solution whose correctness determines the success of the effort. Otherwise, if answers have kept changing all the time, no answer has really been given after all. Or, imagine Bob has been married and divorced several times. Every time he said "I do", he probably honestly believed that this time, at last, his bride was "the one", with whom he would live happily ever after. Bob will be considered to have found his Ms. Right after all if and only if one of his marriages indeed turns out to be happy and final.

As we remember, for a predicate p(x), ⊓x(p(x) ⊔ ¬p(x)) is the problem of deciding p(x), and ⊓x(¬p(x) ▽ p(x)) is the weaker (easier to solve) problem of semideciding p(x). Not surprisingly, ⊓x(¬p(x) ⩖ p(x)) is also a decision-style problem, but still weaker than the problem of semideciding p(x). This problem has been studied in the literature under several names, the most common of which is recursively approximating p(x). It means telling whether p(x) is true or not, but doing so in the same style as semideciding does in negative cases: by correctly saying "Yes" or "No" at some point (after perhaps taking back previous answers several times) and never reconsidering this answer afterwards. Observe that semideciding p(x) can be seen as always saying "No" at the beginning and then, if this answer is incorrect, changing it to "Yes" at some later time; so, when the answer is negative, this will be expressed by saying "No" and never taking back this answer, yet without ever indicating that the answer is final and will not change. Thus, the difference between semideciding and recursively approximating is that, unlike a semidecision procedure, a recursive approximation procedure can reconsider both negative and positive answers, and do so several times rather than only once.

It is known that a predicate p(x) is recursively approximable (i.e., the problem of its recursive approximation has an algorithmic solution) iff p(x) has the arithmetical complexity Δ2, i.e., iff both p(x) and its complement ¬p(x) can be written in the form ∃z∀y s(z,y,x), where s(z,y,x) is a decidable predicate. Let us see that, indeed, algorithmic solvability of ⊓x(¬p(x) ⩖ p(x)) is equivalent to p(x)'s being of complexity Δ2.

First, assume p(x) is of complexity Δ2; specifically, for some decidable predicates q(z,y,x) and r(z,y,x) we have p(x) = ∃z∀y q(z,y,x) and ¬p(x) = ∃z∀y r(z,y,x). Then ⊓x(¬p(x) ⩖ p(x)) is solved by the following strategy. Wait till Environment specifies a value m for x, thus bringing the game down to ¬p(m) ⩖ p(m). Then create variables i and j, initialize both to 1, choose the p(m) component, and do the following:

Step 1: Check whether q(i,j,m) is true. If yes, increment j to j+1 and repeat Step 1. If not, switch to the ¬p(m) component, reset j to 1, and go to Step 2.

Step 2: Check whether r(i,j,m) is true. If yes, increment j to j+1 and repeat Step 2. If not, switch to the p(m) component, reset j to 1, increment i to i+1, and go to Step 1.

With a little thought, one can see that the above algorithm indeed solves ⊓x(¬p(x) ⩖ p(x)).
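Below is the same two-step routine in code, run for a bounded number of ticks so that it terminates as a demo. The toggling moves are collected in a list whose last entry is the eventual answer; q and r are the decidable predicates with p(m) = ∃z∀y q(z,y,m) and ¬p(m) = ∃z∀y r(z,y,m), and the toy instances of them below are invented purely for illustration.

    def approximate(q, r, m, ticks):
        choices, i, j, claim_p = ["p"], 1, 1, True    # start by claiming p(m)
        for _ in range(ticks):
            alive = q(i, j, m) if claim_p else r(i, j, m)
            if alive:
                j += 1                          # witness i not yet refuted
            elif claim_p:
                claim_p, j = False, 1           # i refuted for p: toggle to ~p
                choices.append("~p")
            else:
                claim_p, j, i = True, 1, i + 1  # i refuted for ~p: next witness
                choices.append("p")
        return choices

    # Toy Delta2 predicate: p(m) = "m is even", witnessed by any z >= 2.
    q = lambda z, y, m: m % 2 == 0 and z >= 2
    r = lambda z, y, m: m % 2 == 1 and z >= 1
    print(approximate(q, r, 13, 50)[-1])   # '~p': 13 is odd; the answer stabilizes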

For the opposite direction, assume a given algorithm Alg solves ⊓x(¬p(x) ⩖ p(x)). Let q(z,y,x) be the predicate such that q(i,j,m) is true iff, in the scenario where the environment specified x as m at the beginning of the play, so that the game was brought down to ¬p(m) ⩖ p(m), we have: (1) at the ith computation step, Alg chose the p(m) component; (2) at the jth computation step, Alg did not move. Quite similarly, let r(z,y,x) be the predicate such that r(i,j,m) is true iff, in the same scenario, we have: (1) either i=1 or, at the ith computation step, Alg chose the ¬p(m) component; (2) at the jth computation step, Alg did not move. Of course, both q(z,y,x) and r(z,y,x) are decidable predicates, and hence so are y>z → q(z,y,x) and y>z → r(z,y,x). Now, it is not hard to see that p(x) = ∃z∀y(y>z → q(z,y,x)) and ¬p(x) = ∃z∀y(y>z → r(z,y,x)). So, p(x) is indeed of complexity Δ2.
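To make the converse construction concrete, here is a hedged Python sketch. It assumes a hypothetical function state(m, t) that simulates Alg for t steps in the scenario where Environment has specified x as m, reporting which component (if any) Alg chose at step t and whether Alg moved at step t; nothing in it goes beyond what the definitions of q and r above require.

    def make_q_r(state):
        # state(m, t) is a hypothetical simulator: it returns a pair
        # (choice, moved), where choice is "p", "not-p" or None depending
        # on which component (if any) Alg chose at step t of the scenario
        # where Environment specifies x as m, and moved says whether Alg
        # made any move at step t.
        def q(i, j, m):
            choice_i, _ = state(m, i)
            _, moved_j = state(m, j)
            return choice_i == "p" and not moved_j

        def r(i, j, m):
            choice_i, _ = state(m, i)
            _, moved_j = state(m, j)
            return (i == 1 or choice_i == "not-p") and not moved_j

        return q, r

Since Alg is an algorithm, state(m, t) is computable, and hence so are q and r, which is all the argument needs.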

As a real-life example of a predicate which is recursively approximable but neither semidecidable nor co-semidecidable, consider the predicate k(x)<k(y), saying that number x is simpler than number y in the sense of Kolmogorov complexity. As noted earlier, k(z) (the Kolmogorov complexity of z) is always smaller than z. Here is an algorithm which recursively approximates the predicate k(x)<k(y), i.e., solves the problem ⊓x⊓y(k(x)<k(y) ⩖ ¬k(x)<k(y)). Wait till Environment brings the game down to k(m)<k(n) ⩖ ¬k(m)<k(n) for some m and n. Then start simulating, in parallel, all Turing machines t of sizes smaller than max(m,n) on input 0. Whenever you see that such a machine t returns m and the size of t is smaller than that of any previously found machine that returns m or n on input 0, choose k(m)<k(n). Quite similarly, whenever you see that such a machine t returns n and the size of t is smaller than that of any previously found machine that returns n on input 0, as well as smaller than or equal to the size of any previously found machine that returns m on input 0, choose ¬k(m)<k(n). Obviously, the correct choice between k(m)<k(n) and ¬k(m)<k(n) will be made sooner or later and never reconsidered afterwards. This will happen when the procedure hits --- in the role of t --- a smallest-size machine that returns either m or n on input 0.
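The same procedure in sketch form, assuming two hypothetical helpers: machines_of_size_below(b), returning a (finite, reusable) list of (machine, size) pairs for all Turing machines of size smaller than b, and step(t), which advances machine t by one step on input 0 and returns its output at the moment it halts (and None at every other step):

    def approximate_k_less(m, n, machines_of_size_below, step):
        # Recursively approximate k(m) < k(n) by toggling choices.
        pool = machines_of_size_below(max(m, n))
        best_m = None            # size of the smallest machine seen returning m
        best_n = None            # size of the smallest machine seen returning n
        while True:              # round-robin loop = parallel simulation
            for t, size in pool:
                out = step(t)
                if out == m and (best_m is None or size < best_m):
                    if best_n is None or size < best_n:
                        yield "choose: k(m) < k(n)"
                    best_m = size
                elif out == n and (best_n is None or size < best_n):
                    best_n = size
                    if best_m is None or size <= best_m:
                        yield "choose: not k(m) < k(n)"

The strict and non-strict comparisons mirror the asymmetric conditions in the text, so that once a smallest-size machine returning m or n has been found, the standing choice is correct and is never revised again.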

Anyway, here is our formal definition of the toggling conjunction and disjunction:

Definition 3.10.1  Assume A0 = (Dm, Dn, Vr0, A0) and A1 = (Dm, Dn, Vr1, A1) are games.

a)     A0 ⩕ A1 is defined as the game (Dm, Dn, Vr0∪Vr1, G) such that:

b)     A0 ⩖ A1 is defined as the game (Dm, Dn, Vr0∪Vr1, G) such that:

As expected, the toggling universal quantification (“tall”) ⩕xA(x) of A(x) is essentially nothing but A(0)⩕A(1)⩕A(2)⩕…, and the toggling existential quantification (“texists”) ⩖xA(x) of A(x) is A(0)⩖A(1)⩖A(2)⩖…. Formally, we have:

Definition 3.10.2  Assume A(x) = (Dm, Dn, Vr, A) is a game.

a)     ⩕xA(x) is defined as the game (Dm, Dn, Vr-{x}, G) such that:

b)     ⩖xA(x) is defined as the game (Dm, Dn, Vr-{x}, G) such that:

For an example illustrating toggling quantifiers at work, remember that Kolmogorov complexity k(x) is not a computable function, i.e., the problem ⊓x⊔y(y=k(x)) has no algorithmic solution. However, replacing ⊔y with ⩖y in it yields an algorithmically solvable problem. A solution for ⊓x⩖y(y=k(x)) goes like this. Wait till the environment chooses a number m for x, thus bringing the game down to ⩖y(y=k(m)), which is essentially nothing but 0=k(m) ⩖ 1=k(m) ⩖ 2=k(m) ⩖ …. Initialize i to a sufficiently large number, such as i=π(m) where π is the bound function mentioned earlier. After that, perform the following routine: switch to the disjunct i=k(m) of 0=k(m) ⩖ 1=k(m) ⩖ 2=k(m) ⩖ …, and then start simulating on input 0, in parallel, all Turing machines whose sizes are smaller than i; if and when you see that one of such machines returns m, update i to the size of that machine, and repeat the present routine.
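In the same sketch style and with the same hypothetical helpers as before, this strategy for ⊓x⩖y(y=k(x)), after Environment has chosen m, could look as follows; pi stands for the bound function π mentioned in the text.

    def name_kolmogorov_complexity(m, pi, machines_of_size_below, step):
        # Strategy for the game 0=k(m) ⩖ 1=k(m) ⩖ 2=k(m) ⩖ ...:
        # repeatedly revise a guess i for k(m), decreasing it every time
        # a machine of size smaller than the current guess is found to
        # return m on input 0.
        i = pi(m)                               # initial, safely large guess
        while True:
            yield f"choose disjunct: {i} = k(m)"
            pool = machines_of_size_below(i)
            found = None
            while found is None:                # parallel simulation again
                for t, size in pool:
                    if step(t) == m:
                        found = size            # a shorter description of m
                        break
            i = found                           # revise the guess and repeat

Since every machine returning m has size at least k(m), the guess never drops below k(m); and once i has reached k(m) itself, no smaller machine returns m, so the inner loop never terminates and the last choice stands, which is exactly the toggling win condition.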

We close this section with formal definitions of toggling recurrence (“trecurrence”), toggling corecurrence (“cotrecurrence”), toggling implication (“timplication”), toggling rimplication (“trimplication”) and toggling repudiation (“trepudiation”):

Definition 3.10.3  Assume A = (Dm, Dn, Vr, A) is a game.

a)     The trecurrence of A is defined as the game (Dm, Dn, Vr, G) such that:

b)     The cotrecurrence of A is defined as the game (Dm, Dn, Vr, G) such that:

Definition 3.10.4    a) The timplication of A and B is defined as ¬A ⩖ B.

                                b) The trimplication of A and B is defined as (the trecurrence of A) → B.

                                c) The trepudiation of A is defined as the cotrecurrence of ¬A.



3.11 Cirquents

 

The syntactic constructs called cirquents, briefly mentioned earlier, take the expressive power of CoL to a qualitatively higher level, allowing us to form, in a systematic way, an infinite variety of game operations. Each cirquent is --- or can be seen as --- an independent operation on games, generally not expressible via composing operations taken from some fixed finite pool of primitives, such as the operations seen in the preceding subsections of the present section.

 

Cirquents come in a variety of versions, and their main common characteristic feature is having mechanisms for explicitly accounting for possible sharing of subcomponents between different components. Sharing is the main feature distinguishing cirquents from more traditional means of expression such as formulas, sequents, hypersequents [Avr87], or structures of the calculus of structures [Gug07]. While the latter can be drawn as (their parse) trees, cirquents more naturally call for circuit- or graph-style constructs. The earliest cirquents were intuitively conceived as collections of sequents (sequences of formulas) that could share some formulas and, as such, could be drawn like circuits rather than linear expressions. This explains the etymology of the word: CIRcuit+seQUENT. All Boolean circuits are thus cirquents, but not all cirquents are Boolean circuits. Firstly, because cirquents may have various additional sorts of gates (⊓-gates, ⊔-gates, etc.). Secondly, because cirquents may often have more evolved sharing mechanisms than just child-sharing between different gates. For instance, in addition to (or instead of) the possibility of sharing children, a “cluster” [Jap11b] of ⊔-gates may also share choices associated with ⊔ in game-playing: if the machine chooses the left or the right child for one gate of the cluster, then the same left or right choice automatically extends to all gates of the cluster.

For simplicity, we are not going to introduce cirquents and their semantics in full generality and formal detail here. This is done in [Jap06c, Jap08a, Jap11b, Jap13a]. Instead, to get some intuitive insights, let us only focus on cirquents that look like Boolean circuits with ∧- and ∨-gates. Every such cirquent C can be seen as an n-ary operation on games, where n is the number of inputs of C. For instance, the cirquent

[figure omitted: a cirquent with one ∨-gate at the root over three ∧-gates, whose three inputs A, B and C are each shared between two of the ∧-gates]

represents the 3-ary game operation informally defined as follows. Playing (A,B,C), as is the case with all parallel operations, means playing simultaneously in all components of it. In order to win, Machine needs to win in at least two out of the three components. Any attempt to express this operation in terms of ∧, ∨ or other already defined operations is going to fail. For instance, the natural candidate (A∧B)∨(A∧C)∨(B∧C) is dramatically inadequate. This is a game on six rather than three boards, with A played on boards #1 and #3, B on boards #2 and #5, and C on boards #4 and #6. A special case of it is (A∧A)∨(A∧A)∨(A∧A), which fails to indicate, for instance, that the 1st and the 3rd occurrences of A stand for the same copy of A, while the 2nd occurrence stands for a different copy in which a different run can be generated.

As we just had a chance to see, even at the most basic (∧,∨) level, cirquents are properly more expressive than formulas. It is this added expressiveness and flexibility that, for some fragments of CoL, makes the difference between axiomatizability and unaxiomatizability: even if one is only trying to set up a deductive system for proving valid formulas, intermediate steps in proofs of such formulas still inherently require using cirquents that cannot be written as formulas [Jap06c, Jap13a, Jap13b]. An example is the system CL15 presented in Section 6.4.

In the present piece of writing we are exclusively focused on the formula-based version of CoL, seeing cirquents (in Section 6.4) only as technical servants to formulas. This explains why we do not attempt to define the semantics of cirquents formally.  It should however be noted that cirquents are naturally called for not only within the specific formal framework of CoL, but also in the framework of all resource-sensitive approaches in logic. Such approaches intrinsically require the ability to account for the possibility of resource sharing, as sharing is a ubiquitous phenomenon in the world of resources. Yet formula- or sequent-based approaches do not and cannot possess such an ability. The following naive example may provide some insights. It is about the earlier discussed (A,A,A) combination, seen as a combination of resources in the abstract/naive sense rather than the specific game-semantical sense of CoL.  

Imagine a vending machine that has slots for 25-cent (25c) coins, with each slot taking a single coin. Coins can be true (authentic) or false (counterfeited). Inserting a false coin into a slot fills the slot up (so that no other coins can be inserted into it until the operation is complete), but otherwise does not fool the machine into thinking that it has received 25 cents. A candy costs 50 cents, and the machine will dispense a candy if at least two of its slots receive true coins. Pressing the “dispense” button while having inserted anything less than 50 cents, such as a single coin, or one true and two false coins, results in a non-recoverable loss.

Victor has three 25c coins, and he knows that two of them are true while one is perhaps false (but he has no way to tell which one is false). Could he get a candy? The answer depends on the number of slots that the machine has. Consider two cases: machine M2 with two slots, and machine M3 with three slots. Victor would have no problem with M3: he can insert his three coins into the three slots, and the machine, having received at least 50c, will dispense a candy. With M2, however, Victor is powerless. He can try inserting any two of his three coins into the two slots of the machine, but there is no guarantee that neither of those two coins is false; if one of them is, Victor will end up with no candy and only 25 cents remaining in his pocket.
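The combinatorics of this example is simple enough to check mechanically. Below is a naive Python verification, under the simplifying modeling assumption that a machine dispenses iff at least two of the coins actually inserted are true; victor_wins(slots) asks whether some choice of coins works no matter which of the three coins, if any, is the false one.

    from itertools import combinations

    # Victor holds coins 0, 1 and 2; at most one of them is false, and he
    # cannot tell which. A machine dispenses iff it receives two true coins.

    def dispenses(inserted, false_coin):
        return sum(1 for c in inserted if c != false_coin) >= 2

    def victor_wins(slots):
        scenarios = (0, 1, 2, None)      # which coin is false; None: all true
        return any(
            all(dispenses(choice, f) for f in scenarios)
            for choice in combinations((0, 1, 2), min(slots, 3))
        )

    print(victor_wins(2))   # False: any pair he inserts may contain the false coin
    print(victor_wins(3))   # True: all three coins always include two true ones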

Both M2 and M3 can be understood as resources --- resources turning coins into a candy. And note that these two resources are not the same: M3 is obviously stronger (“better”), as it allows Victor to get a candy whereas M2 does not, while, at the same time, anyone rich enough to make M2 dispense a candy would be able to do the same with M3 as well. Yet formulas fail to capture this important difference. With ∧, ∨ and → here understood as abstract multiplicative-style operations on resources, M2 and M3 can be written as R2→Candy and R3→Candy, respectively: each consumes a certain resource (R2 or R3) and produces Candy. What makes M3 stronger than M2 is that the subresource R3 that it consumes is weaker (easier to supply) than the subresource R2 consumed by M2. Specifically, with one false and two true coins, Victor is able to satisfy R3 but not R2.

 

The resource R2 can be represented as the cirquent

[figure omitted: a ∧-gate over two separate 25c inputs]

which, due to being tree-like, can also be adequately written as the formula 25c∧25c. As for the resource R3, either one of the following two cirquents is an adequate representation of it, with one of them probably showing the relevant part of the actual physical circuitry used in M3:

[two figures omitted; per the discussion below, the one on the right is a ∧-gate over three ∨-gates, with three 25c leaves each shared between two of the ∨-gates]

Figure 3.11.1: Cirquents for the “two out of three” combination of resources
Unlike R2, however, R3 cannot be represented through a formula. 25c∧25c does not fit the bill, for it represents R2 which, as we already agreed, is not the same as R3. In another attempt to find a formula, we might try to rewrite one of the above two cirquents --- let it be the one on the right --- into an “equivalent” formula in the standard way, by duplicating and separating shared nodes, as we did earlier when tackling (A,A,A). This results in (25c∨25c)∧(25c∨25c)∧(25c∨25c), which, however, is not any more adequate than 25c∧25c. It expresses not R3 but the resource consumed by a machine with six coin slots grouped into three pairs, where (at least) one slot in each of the three pairs needs to receive a true coin. Such a machine thus dispenses a candy for 75 rather than 50 cents, which makes Victor's resources insufficient.

The trouble here is related to the inability of formulas to explicitly account for resource sharing or the absence thereof. The cirquent on the right of the above figure stands for a (multiplicative-style) conjunction of three resources, each conjunct, in turn, being a disjunction of two subresources of type 25c. However, altogether there are three rather than six 25c-type subresources, each one being shared between two different conjuncts of the main resource.

From the abstract resource-philosophical point of view of CoL, classical logic and linear logic are two imperfect extremes. In the former, all occurrences of a same subformula mean “the same” (represent the same resource), i.e., everything that can be shared is shared; in the latter, each occurrence stands for a separate resource, i.e., nothing is shared at all. Thus, neither approach permits accounting for mixed cases, where certain occurrences are meant to represent the same resource while other occurrences stand for different resources of the same type. It is a shame that linear logic fails to express simple, natural and unavoidable things such as the “two out of three” combination expressed by the cirquents of Figure 3.11.1.