Giorgi Japaridze's page for

Computability Logic
გამოთვლადობის ლოგიკა
Логика вычислимости
可计算性逻辑
(CoL)


Computability is one of the most interesting and fundamental concepts in mathematics and computer science, and it is natural to ask what logic it induces. This is where Computability Logic (CoL) comes in. It is a formal theory of computability in the same sense as classical logic is a formal theory of truth. In a broader and more proper sense, CoL is not just a particular theory but an ambitious and challenging program for redeveloping logic following the scheme “from truth to computability”.

Under the approach of CoL, logical operators stand for operations on computational problems, formulas represent such problems, and their “truth” is seen as algorithmic solvability. In turn, computational problems --- understood in their most general, interactive sense --- are defined as games played by a machine against its environment, with “algorithmic solvability” meaning existence of a machine which wins the game against any possible behavior of the environment. With this semantics, CoL provides a systematic answer to the question “what can be computed?”, just like classical logic is a systematic tool for telling what is true. Furthermore, as it happens, in positive cases “what can be computed” always allows itself to be replaced by “how it can be computed”, which makes CoL of potential interest not only in theoretical computer science but also in many applied areas, including constructive applied theories, interactive knowledge base systems, resource-oriented systems for planning and action, and declarative programming languages.

Currently CoL is still at an early stage of development, with open problems prevailing over answered questions. For this reason it offers plenty of research opportunities, with good chances of interesting findings, for those with interests in logic and its applications in computer science. Come and join!

 

Contents

1    The philosophy of CoL

1.1   Syntax vs. semantics

1.2   Why game semantics?

1.3   CoL vs. classical logic

1.4   CoL vs. linear logic

1.5   CoL vs. intuitionistic logic

1.6   CoL vs. independence-friendly logic

1.7   The ‘from semantics to syntax’ paradigm

2    Games

2.1   The two players

2.2   Moves, runs and positions

2.3   Constant games

2.4   Not-necessarily-constant games

2.5   Static games

3    The CoL zoo of game operations

3.1   Preview

3.2   Prefixation

3.3   Negation

3.4   Choice operations

3.5   Parallel operations

3.6   Reduction

3.7   Blind operations

3.8   Branching operations

3.9   Sequential operations

3.10  Toggling operations

3.11  Cirquents

4    Interactive machines

4.1   Interactive computability

4.2   Interactive complexity

5    The language of CoL and its semantics

5.1   Formulas

5.2   Interpretations

5.3   Validity

6    Axiomatizations

6.1   Outline

6.2   The Gentzen-style system CL7

6.3   The Gentzen-style system Int+

6.4   The cirquent calculus system CL15

6.5   The brute force system CL13

6.6   The brute force system CL4

6.7   The brute force system CL12

7    Clarithmetic (CoL-based arithmetic)

7.1   Introduction

7.2   Clarithmetic versus bounded arithmetic

7.3   Motivations

7.4   Common preliminaries for all our theories of clarithmetic

7.5   Clarithmetics for polynomial time, polynomial space, elementary and primitive recursive computability

7.6   Clarithmetics for provable computability

7.7   Tunable clarithmetic

8    CoL-based knowledgebase and resourcebase systems

9    Literature

9.1   Selected papers on CoL by Japaridze

9.2   Selected papers on CoL by other authors

9.3   PhD theses, MS theses and externally funded projects on CoL

9.4   Lecture notes on CoL, presentations and other links

9.5   Additional references

 

 

 

1 The philosophy of CoL

 

1.1 Syntax vs. semantics


A starting philosophical point of CoL is that syntax --- the study of axiomatizations or other string-manipulation systems --- owes its right to existence exclusively to semantics, and is thus secondary to it. Logic is meant to be the most basic, general-purpose formal tool potentially usable by intelligent agents in successfully navigating real life. And it is semantics that establishes that ultimate real-life meaning of logic. Syntax is important, yet not in its own right: only insofar as it serves a meaningful semantics, allowing us to realize the potential of that semantics in some systematic and perhaps convenient or efficient way. Not passing the test for soundness with respect to the underlying semantics would fully disqualify any syntax, no matter how otherwise appealing it is. Note --- disqualify the syntax and not the semantics. Why this is so hardly requires any explanation: relying on an unsound syntax might result in wrong beliefs, misdiagnosed patients or crashed spaceships. An incomplete syntax, on the other hand, potentially means missing benefits that should not have been missed.

A separate question, of course, is what counts as a semantics. The model example of a semantics with a capital ‘S’ is that of classical logic. But in the logical literature this term often has a more generous meaning than what CoL is ready to settle for. As pointed out, CoL views logic as a universal-utility tool. So, a capital-‘S’ semantics should be non-specific enough, and applicable to the world in general rather than to some very special and artificially selected (worse yet, artificially created) fragment of it. Often what is called a semantics is just a special-purpose apparatus designed to help analyze a given syntactic construction rather than to understand and navigate the outside world. The usage of Kripke models as a derivability test for intuitionistic formulas, or as a validity criterion in various systems of modal logic, is an example. An attempt to see more than a technical, syntax-serving instrument in this type of lowercase-‘s’ semantics might create a vicious circle: a deductive system L in question is “right” because it derives exactly the formulas that are valid in such and such a Kripke semantics; and then it turns out that the reason why we are considering that Kripke semantics is that ... it validates exactly what L derives.

 


1.2 Why game semantics?


For CoL, a game is not just a game. It is a foundational mathematical concept on which a powerful enough logic (=semantics) should be based. This is so because, as noted, CoL sees logic as a “real-life navigational tool”, and it is games that appear to offer the most comprehensive, coherent, natural, adequate and convenient mathematical models for the very essence of all “navigational” activities of agents: their interactions with the surrounding world. An agent and its environment translate into game-theoretic terms as two players; their actions as moves; situations arising in the course of interaction as positions; and success or failure as wins or losses.

It is natural to require that the interaction strategies of the party that we have referred to as an “agent” be limited to algorithmic ones, allowing us to henceforth call that player a machine. This is a minimum condition that any non-esoteric game semantics would have to satisfy. On the other hand, no restrictions can or should be imposed on the environment, which represents a capricious user, the blind forces of nature, or the devil himself. Algorithmic activities being synonymous with computations, games thus represent computational problems --- interactive tasks performed by computing agents, with computability meaning winnability, i.e. existence of a machine that wins the game against any possible (behavior of the) environment.

In the 1930s, in the form of the famous Church-Turing thesis, mankind came up with what has been perceived as an ultimate mathematical definition of the precise meaning of algorithmic solvability. Curiously or not, such a definition was set forth and embraced before any real attempt to answer the seemingly more basic question of what computational problems are --- the very entities that may or may not have algorithmic solutions in the first place. By the tradition established since then in theoretical computer science, computability simply means Church-Turing computability of functions, as the task performed by every Turing machine is nothing but receiving an input x and generating the output f(x) for some function f.

Yet most tasks that computers and computer networks perform are interactive, with strategies not always allowing themselves to be adequately understood as functions. To see this, it is sufficient to reflect on the behavior of one's personal computer. The job of your computer is to play one long game against you. Now, have you noticed your faithful servant getting slower every time you use it? Probably not. That is because the computer is smart enough to follow a non-functional strategy in this game. If its strategy were a function from positions (interaction histories) to moves, the response time would inevitably keep worsening due to the need to read the entire, continuously lengthening interaction history every time before responding. Defining strategies as functions of only the latest moves is not a way out either: the actions of your computer certainly depend on more than your last keystroke.
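The contrast between history-reading and stateful strategies can be illustrated with a small sketch. This is a hypothetical illustration of ours, not part of CoL's formal apparatus: both agents below play the same strategy, but one recomputes over the whole run before every response, while the other reacts to each new move with constant work.

```python
def history_strategy(history):
    """A strategy as a function from positions (full interaction histories)
    to moves: it must re-read the entire, ever-lengthening run each time."""
    total = 0
    for move in history:          # cost grows with the length of the run
        total += move
    return total % 10             # the next move depends on the whole history

class StatefulStrategy:
    """The same strategy kept as internal state: it reacts to the latest
    move only, doing constant work per response."""
    def __init__(self):
        self.total = 0

    def respond(self, move):
        self.total += move        # incremental update, no re-reading
        return self.total % 10

# Both produce identical responses; only the stateful one avoids slowdown.
history, agent = [], StatefulStrategy()
for move in [3, 7, 2, 9]:
    history.append(move)
    assert history_strategy(history) == agent.respond(move)
```

The point is not the particular arithmetic (which is arbitrary) but the shape of the computation: the stateful agent is the non-functional strategy the paragraph describes.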

Two main concepts on which the semantics of CoL is based are those of static games (defined later) and their winnability. Correspondingly, the philosophy of CoL relies on two beliefs that, together, present what can be considered an interactive version of the Church-Turing thesis:

Thesis 1.2.1

      (1) The concept of static games is an adequate formal counterpart of our intuition of (“pure”, speed-independent) interactive computational problems.

      (2) The concept of winnability is an adequate formal counterpart of our intuition of algorithmic solvability of such problems.


So far games in logic have mostly been used to find models and semantical justifications for syntactically introduced popular systems such as intuitionistic logic or linear logic. For instance: Lorenzen’s [Lor61, Fel85] game semantics was created for the purpose of justifying intuitionistic logic; Hintikka’s [Hin73] game-theoretic semantics was originally meant to provide an alternative semantics for classical logic; Blass’ [Bla92] game semantics was mainly motivated by the needs of linear logic, and so were the game-semantical approaches elaborated by Abramsky, Jagadeesan [Abr94] and others. In this respect, CoL turns the tables around and views games as foundational entities in their own right. It starts with identifying the most basic and meaningful operations on games. Understanding those operations as logical operators, it then looks at the logics induced by the corresponding concept of validity, regardless of how unusual such logics may turn out to be. There is no target syntactic construction to serve.

 

1.3 CoL vs. classical logic


Computability in the traditional Church-Turing sense is a special case of winnability --- winnability restricted to two-step (input/output, question/answer) interactive problems. So is the classical concept of truth, which is nothing but winnability restricted to propositions, viewed by CoL as zero-step problems, i.e., games with no moves that are automatically won by the machine when true and lost when false. This way, the game  semantics of CoL is a generalization, refinement and conservative extension of that of classical logic.

Thinking of a human user in the role of the environment, computational problems are synonymous with computational tasks --- tasks performed by a machine for the user/environment. What is a task for a machine is then a resource for the environment, and vice versa. So the CoL games, at the same time, formalize our intuition of computational resources. Logical operators are understood as operations on such tasks/resources/games, atoms as variables ranging over tasks/resources/games, and validity of a logical formula as being “always winnable”, i.e. as existence --- under every particular interpretation of atoms --- of a machine that successfully accomplishes/provides/wins the corresponding task/resource/game no matter how the environment behaves. It is this approach that makes CoL “a formal theory of computability in the same sense as classical logic is a formal theory of truth”. Furthermore, as mentioned, the classical concept of truth is a special case of winnability, which eventually translates into classical logic’s being nothing but a special fragment of computability logic.


1.4 CoL vs. linear logic


CoL is a semantically conceived logic, and so far its syntax has only been partially  developed.  In a sense, this situation is opposite to the case with linear logic [Gir87], where a “full” syntactic construction existed right at the beginning, but where a really good formal semantics convincingly justifying the resource intuitions traditionally associated with that construction is still being looked for. In fact, the semantics of CoL can be seen to be providing such a justification, although only in a limited sense explained below.

The set of valid formulas in a certain fragment of the otherwise more expressive language of CoL forms a logic which is similar to but by no means the same as linear logic. When no exponentials are involved, the two logics typically agree on short and simple formulas. For instance, both logics reject P → P∧P and accept P → P⊓P, with classical-shape propositional connectives here and later understood as the corresponding multiplicative operators of linear logic, and square-shape operators as additives (⊓ = “with”, ⊔ = “plus”). Similarly, both logics reject P⊔¬P and accept P∨¬P. On the other hand, CoL disagrees with linear logic on many more evolved formulas. E.g., CoL validates the following two principles rejected even by affine logic, i.e., linear logic with the weakening rule:

(P⊓Q) ∨ (R⊓S)  →  (P∨R) ⊓ (Q∨S)   (Blass’s [Bla92] principle);

(P∧(R⊓S)) ⊓ (Q∧(R⊓S)) ⊓ ((P⊓Q)∧R) ⊓ ((P⊓Q)∧S)  →  (P⊓Q)∧(R⊓S).

With ⫰ and ⫱, which are CoL’s counterparts of the exponentials !, ? of linear logic, disagreements can be observed already at the level of very short formulas. Generally, every formula provable in affine logic is valid in CoL but not vice versa.

Neither the similarities nor the discrepancies are a surprise. The philosophies of CoL and linear logic overlap in their pursuit to develop a logic of resources. But the ways this philosophy is materialized are rather different. CoL starts with a mathematically strict and intuitively convincing semantics, and only after that, as a natural second step, asks what the corresponding logic and its possible axiomatizations are. On the other hand, it would be accurate to say that linear logic started directly from the second step. Even though certain companion semantics were provided for it from the very beginning, those are not quite what we earlier agreed to call capital-‘S’. As a formal theory of resources (rather than that of phases or coherence spaces), linear logic has been introduced syntactically rather than semantically, essentially by taking classical sequent calculus and deleting the rules that seemed unacceptable from a certain intuitive, naive and never formalized "resource-semantical" point of view. In the absence of a clear formal concept of resource-semantical truth or validity, the question about whether the resulting system was complete could not even be meaningfully asked. In this process of syntactically rewriting classical logic some innocent, deeply hidden principles could have easily gotten victimized. CoL believes that this is exactly what happened, with the above-mentioned formulas separating it from linear logic, along with many other principles, viewed as babies thrown out with the bath water. Of course, many retroactive attempts have been made to find semantical (often game-semantical) justifications for linear logic. Technically it is always possible to come up with some sort of a formal semantics that matches a given target syntactic construction, but the whole question is how natural and meaningful such a semantics is in its own right, and how adequately it corresponds to the logic’s underlying philosophy and ambitions.
Unless, by good luck, the target system really is “the right logic”, the chances of a decisive success when following the odd scheme from syntax to semantics could be rather slim. The natural scheme is from semantics to syntax. It matches the way classical logic evolved and climaxed in Gödel’s completeness theorem. And, as we now know, this is exactly the scheme that computability logic, too, follows.

With the two logics in a sense competing for the same market, the main --- or perhaps only --- advantage of linear logic over CoL is its having a nice and simple syntax. In a sense, linear logic is (rather than has) a beautiful syntax; and computability logic is (rather than has) a meaningful semantics. An axiomatization of CoL in the full generality of its language has not been found yet. Only certain fragments  of it have been axiomatized, including the one corresponding to the multiplicative-additive fragment of linear logic. Such axiomatizations tend to be more involved than that of linear logic, so the syntactic simplicity advantage of linear logic will always remain. Well, CoL has one thing to say: simplicity is good, yet, if it is most important, then nothing can ever beat ... the empty logic.  The latter, just like linear logic, is sound with respect to  resource semantics, whatever  such a semantics means;  it is sound  with respect to  any semantics for that matter.

From CoL’s perspective, classical logic and (loosely understood) linear logic can be seen as two extremes within its all-unifying resource- (game-) semantical vision. Specifically, the main difference between linear logic and classical logic is that the former sees all occurrences of the same atom in a formula as different copies of the same resource, while the latter sees such occurrences as the same single copy, simply written at different places for technical reasons. So, linear logic rejects $1 → $1∧$1 because in the antecedent we have one dollar while in the consequent two, with the possibility to buy an apple with one and an orange with the other. On the other hand, classical logic accepts this principle because it sees a single dollar in the consequent, written twice for some strange reason; if the first conjunct of the consequent is spent buying an apple, the second conjunct is also automatically spent on the same apple, with no money remaining for oranges. As for CoL, it allows us to write expressions where all occurrences of $1 stand for the same one-dollar bill, or all stand for separate bills, or we have a mixture of these two, where some occurrences stand for the same bill while some other occurrences in the same expression stand for different bills.

Blass [Bla92] was the first to attempt a game-semantical justification for linear logic. This goal was only partially achieved, as the resulting logic, just like the above-discussed fragment of CoL, turned out to validate more principles than linear logic does. It should be pointed out that the multiplicative-additive fragment of the logic induced by Blass’ semantics coincides with the corresponding fragment of CoL. This does not extend to the full language of linear logic though: Blass’ semantics validates certain principles, involving the counterparts of the exponentials, which are invalid in CoL. [Jap12a]

In full generality, the “linear-logic fragment” of CoL is strictly between affine linear logic and the logic induced by Blass’ semantics.[Jap09a]



1.5 CoL vs. intuitionistic logic


From CoL’s perspective, the situation with intuitionistic logic [Hey71] is rather similar to what we had with linear logic. Intuitionistic logic is another example of a syntactically conceived logic. Despite decades of efforts, no fully convincing semantics has been found for it. Lorenzen’s [Lor61] game semantics, which has a concept of validity without having a concept of truth, has been perceived as a technical supplement to the existing syntax rather than as something of independent importance. Some other semantics, such as Kleene’s realizability [Kle52] or Gödel’s Dialectica interpretation [Göd58], are closer to what we might qualify as capital-‘S’. But, unfortunately, they validate certain principles non-negotiably rejected by intuitionistic logic.

Just as was the case with linear logic, the set of valid formulas in a certain fragment of the language of CoL forms a logic which is properly stronger [Mez10, Jap07b] than Heyting’s intuitionistic logic. However, the two come “much closer” to each other than CoL and linear logic do. The shortest known formula separating intuitionistic logic from the corresponding fragment of CoL is

(P → Q⊔R) ⊓ (¬P → S⊔T)  →  (P→Q) ⊔ (P→R) ⊔ (¬P→S) ⊔ (¬P→T),

where →, ¬, ⊔ and ⊓ are CoL’s counterparts of the intuitionistic implication, negation, disjunction and conjunction, respectively.

Just as the resource philosophy of CoL overlaps with that of linear logic, so does its algorithmic philosophy with the constructivistic philosophy of intuitionism. The difference, again, is in the ways this philosophy is materialized. Intuitionistic logic has come up with a “constructive syntax” without having an adequate underlying formal semantics, such as a clear concept of truth in some constructive sense. This sort of a syntax was essentially obtained from the classical one by banning the offending law of the excluded middle. But, as in the case of linear logic, the critical question immediately arises: where is a guarantee that, together with the excluded middle, some innocent principles were not expelled as well? The constructivistic claims of CoL, on the other hand, are based on the fact that it defines truth as algorithmic solvability. The philosophy of CoL does not find the term constructive syntax meaningful unless it is understood as soundness with respect to some constructive semantics, for only a semantics may or may not be constructive in a reasonable sense. The reason for the failure of P⊔¬P in CoL is not that this principle ... is not included in its axioms. Rather, the failure of this principle is exactly the reason why this principle, or anything else entailing it, would not be among the axioms of a sound system for CoL. Here “failure” has a precise semantical meaning. It is non-validity, i.e. existence of a problem A for which A⊔¬A is not algorithmically solvable.

It is also worth noting that, while intuitionistic logic irreconcilably defies classical logic, computability logic comes up with a peaceful solution acceptable for everyone. The key is the expressiveness of its language, which has (at least) two versions of each traditionally controversial logical operator, and in particular the two versions ∨ and ⊔ of disjunction. The semantical meaning of ∨ conservatively extends --- from moveless games to all games --- its classical meaning, and the principle P∨¬P survives, as it represents an always-algorithmically-solvable combination of problems, even if solvable in a sense that some constructivistically-minded might pretend to fail to understand. The semantics of ⊔, on the other hand, formalizes and conservatively extends a different, stronger meaning which apparently every constructivist associates with disjunction. As expected, P⊔¬P then turns out to be semantically invalid. CoL's proposal for settlement between classical and constructivistic logics then reads: ‘If you are open (=classically) minded, take advantage of the full expressive power of CoL; and if you are constructivistically minded, just identify a collection of the operators whose meanings seem constructive enough to you, mechanically disregard everything containing the other operators, and put an end to those fruitless fights about what deductive methods or principles should be considered right and what should be deemed wrong’.



1.6 CoL vs. independence-friendly logic


Relatively late developments [Jap06c, Jap11b, Xu14] in CoL made it possible to switch from formulas to the more flexible and general means of expression termed cirquents. The main distinguishing feature of the latter is that they make it possible to account for various sorts of sharing between subexpressions. After such a generalization, independence-friendly (IF) logic [Hin73] became yet another natural fragment of CoL. [Jap11b, Xu14, Xu16] As such, it is a conservative fragment, just like classical logic and unlike linear or intuitionistic logics. This is no surprise because, just like CoL and unlike linear or intuitionistic logics, IF logic is a semantically conceived logic.

In fact, for a long time, IF logic remained a pure semantics without a syntax. In its full first-order language, IF logic was simply known to be unaxiomatizable. As for the propositional fragment, there was none to axiomatize, because the traditional approaches to IF logic had failed to offer any non-classical semantics for propositional connectives. Under CoL’s cirquent-based approach to IF logic this is no longer so, and “independence-friendly” propositional connectives are just as meaningful as their quantifier counterparts. Based on this new treatment, a sound and complete axiomatization of propositional IF logic was later found. [Xu14, Xu16]



1.7 The ‘from semantics to syntax’ paradigm


CoL’s favorite ‘from semantics to syntax’ paradigm for developing a logic can be characterized as consisting of three main stages. The first one can be called the Informal semantics stage, at which the main intuitions are elaborated along with the underlying motivations, and the formal language of the logic is agreed upon.  The second one is the Formal semantics stage, at which those intuitions are formalized as a mathematically strict semantics for the adopted language, and definitions of truth and validity are given. Finally, the third one is the Syntax stage, at which one constructs axiomatizations for the corresponding set of valid principles,   whether in the full language of the logic or some natural fragments of it, and proves the adequacy (soundness and completeness) of such constructions. 

Figure 1.7.1: Three stages of developing a logic

CoL and classical logic went through all three stages in sequence. So did IF logic, even though for a long time it was stuck at the second stage. As for linear and intuitionistic logics, they essentially jumped from the first stage directly to the third stage, skipping the intermediary stage. It is the absence of formal rather than informal semantics that we meant when saying that the two logics were conceived syntactically rather than semantically. Why is such a jump wrong? It is impossible to directly “prove” that the proposed syntax adequately corresponds to the informal-semantical intuitions underlying the logic. After all, Syntax is mathematically well defined while Informal semantics is from the realm of philosophy or intuitions, so an adequacy claim lacks any mathematical meaning. Of course, the same is the case with Formal semantics vs. Informal semantics. But, at least, both are “semantics”, which makes it qualitatively easier to convincingly argue (albeit not prove) that one adequately corresponds to the other. Once such a correspondence claim is accepted, one can prove the adequacy of the syntax by showing that it is sound and complete with respect to the formal semantics.

The intermediary role of Formal semantics can be compared with that of Turing machines.  Since the intuitive concept of a machine (algorithm) and the mathematical concept of a Turing machine are both about “machines”, it is relatively easy to argue in favor of the (mathematically unprovable) Church-Turing thesis, which claims an adequate correspondence between the two. Once this thesis is accepted, one can relatively easily show --- this time mathematically --- that recursive functions or lambda calculus, too, adequately correspond to our intuitions of machine-computability. Arguing in favor of such a claim directly, without having the intermediary Turing machine model, would not be easy, as recursive definitions or lambda terms do not at all resemble what our intuition perceives as machines.

Based directly on the resource intuitions associated with linear logic, can anyone tell whether, for instance, the principle P ∧ (P → P∧P) → P should be accepted? An orthodox linear logician might say ‘No, because it is not provable in Girard’s canonical system’. But the whole point is that we are just trying to understand what should be provable and what should not. The popular attempts to “semantically” justify intuitionistic provability in terms of … intuitionistic provability suffer from a similar circularity.

 

 

2 Games

 

2.1 The two players


The CoL games are between two players, called Machine and Environment (not always capitalized, and may take articles). In the picture below we see a few other names for these players.

We will be using ⊤ and ⊥ as symbolic names for Machine and Environment, respectively. ⊤ is a deterministic mechanical device only capable of following algorithmic strategies. ⊥’s strategies, on the other hand, are arbitrary. Throughout this article, we shall consistently use the green color to represent Machine, and red for Environment. As seen from the picture, our sympathies are with the machine. Why should we be fans of the machine even when it is us who act in the role of its “evil” environment? Because the machine is a tool, and what makes it valuable as such is exactly its being a good player. In fact, the machine’s losing a game means that it is malfunctioning. Imagine Bob using a computer for computing the “28% of x” function in the process of preparing his US federal tax return. This is a game where the first move is by Bob, consisting in inputting a number m, i.e., asking his machine the question “what is 28% of m?”. The machine wins iff it answers by the move/output n such that n = 0.28m. Of course, Bob does not want the machine to tell him that 27,000 is 28% of 100,000. In other words, he does not want to win against the machine. For then he could lose the more important game against Uncle Sam.
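Bob’s “28% of x” scenario can be sketched as a two-move game. This is an informal illustration of ours (the encoding of runs as pairs is not CoL notation): the environment moves by inputting m, the machine responds with an output n, and the machine wins the run iff n = 0.28m.

```python
def machine_move(m):
    """The (well-functioning) machine's strategy: answer 28% of m."""
    return 0.28 * m

def winner(run):
    """Decide the winner of a completed two-move run (m, n):
    Machine wins iff its output n equals 0.28 * m."""
    m, n = run
    return "Machine" if n == 0.28 * m else "Environment"

m = 100000                 # Environment's move: Bob inputs 100,000
n = machine_move(m)        # Machine's move: its output
assert winner((m, n)) == "Machine"

# The wrong answer Bob fears (27,000 as "28% of 100,000") loses the game:
assert winner((100000, 27000)) == "Environment"
```

Note that the game, not the strategy, is the primary object here: `winner` defines what counts as success, while `machine_move` is just one (winning) way to play.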



2.2 Moves, runs and positions


Machine and Environment interact with each other through mutually observable actions.  We will be using the term “move” instead of “action”. Looking back at the ordinary Turing machine model, it is about games where only two moves are made: the first move, called “input”, is by the environment, and the second move, called “output”, is by the machine.   

We agree on the following:


2.3 Constant games


A gamestructure is a nonempty set Lr of positions, called legal positions, such that, whenever a position is in Lr, so are all of its initial segments. The empty position ⟨⟩ is thus always legal. We extend gamestructures to include infinite runs as well, by stipulating that an infinite run Γ is in Lr iff so is every finite initial segment of Γ. Intuitively, Lr is the set of legal runs. The runs that are not in Lr are illegal. An illegal move in a given position Φ is a move λ such that ⟨Φ,λ⟩ is illegal. The player who made the first illegal move in a given run is said to be the offender. Intuitively, illegal moves can be thought of as moves that cannot or should not be made. Alternatively, they can be seen as actions that cause the system to crash (e.g., requesting to divide a number by 0).

Gamestructures can be visualized as upside-down trees where the nodes represent positions. Each edge is labeled with a colored move. Such an edge stands for a legal move in the position represented by the upper node of it. Here is an example:  

Figure 2.3.1: A gamestructure

This gamestructure consists of the following 16 runs: ⟨⟩, ⟨α⟩, ⟨β⟩, ⟨γ⟩, ⟨α,β⟩, ⟨α,γ⟩, ⟨β,α⟩, ⟨γ,α⟩, ⟨γ,β⟩, ⟨γ,γ⟩, ⟨α,γ,β⟩, ⟨α,γ,γ⟩, ⟨γ,α,β⟩, ⟨γ,α,γ⟩, ⟨γ,β,α⟩, ⟨γ,γ,α⟩. All of the infinitely (in fact, uncountably) many other runs are illegal. An example of an illegal run is ⟨γ,γ,β,β,α⟩. The offender in it is Environment, because the offending third move is red.

Let Lr be a gamestructure. A content on Lr is a function Wn: Lr → {⊤,⊥}. When Wn⟨Γ⟩ = ⊤, we say that Γ is won by the machine (and lost by the environment); and when Wn⟨Γ⟩ = ⊥, we say that Γ is won by the environment (and lost by the machine). We extend the domain of Wn to all runs by stipulating that an illegal run is always lost by the offender. Since we are fans of Machine, the default meaning of just “won” (or “lost”) is “won (or lost) by the machine”.

Definition 2.3.2 A constant game G is a pair (LrG,WnG), where LrG is a gamestructure, and WnG is a content on LrG.
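Definition 2.3.2, together with the stipulation that an illegal run is lost by its offender, can be sketched as follows. As before, the encoding (positions as tuples of (color, move) pairs) is a toy illustration, not CoL’s official formalism.

```python
# A constant game as a pair (lr, wn): lr is an explicit set of legal
# positions, wn a content function on the legal runs.

def winner(run, lr, wn):
    """Wn extended to all runs: an illegal run is lost by its offender;
    a legal run is decided by the content function wn."""
    for i in range(1, len(run) + 1):
        if run[:i] not in lr:
            off = run[i - 1][0]                     # color of the offender
            return "red" if off == "green" else "green"
    return wn(run)

lr = {(), (("green", "a"),), (("green", "a"), ("red", "b"))}
wn = lambda run: "green" if len(run) == 2 else "red"

assert winner((("green", "a"), ("red", "b")), lr, wn) == "green"  # legal, complete
assert winner((("green", "a"),), lr, wn) == "red"                 # legal but short
assert winner((("red", "b"),), lr, wn) == "green"                 # red offends, green wins
```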

Figure 2.3.3 below shows a constant game of the structure of Figure 2.3.1:

Figure 2.3.3: A constant game

Here the winner in each position is indicated by the color of the corresponding node. So, for instance, the run ⟨γ,α,β⟩ is won by the machine; but if this run had stopped after its second move, then the environment would be the winner. Of course, such a way of indicating winners is not sufficient if there are infinitely long branches (legal runs), for such branches do not have “the corresponding nodes”.

We say that a constant game is strict iff, in every legal position, at most one of the two players has legal moves. Our games generally are not strict. For instance, in the start position of the game of Figure 2.3.3, both players have legal moves. We call such (not-necessarily-strict) games free. Free games model real-life situations more directly and naturally than strict games do. Few, if any, of the tasks that humans, computers or robots perform are strict. Imagine you are playing chess over the Internet on two boards against two independent adversaries that, together, form the (one) environment for you. Let us say you play white on both boards. Certainly in the initial position of this game only you have legal moves. However, once you make your first move --- say, on board #1 --- the picture changes. Now both you and the environment have legal moves, and who will be the next to move depends on who can or wants to act sooner. Namely, you are free to make another opening move on board #2, while the environment --- adversary #1 --- can make a reply move on board #1. A strict-game approach would have to impose some not-very-adequate supplemental conditions uniquely determining the next player to move, such as not allowing you to move again until receiving a response to your previous move. Apart from the fact that this is not how the real two-board game would proceed, such regulations defeat the very purpose of the idea of parallel/distributed computation, with all the known benefits it offers.

Because our games are free, strategies for them cannot be defined as functions from positions to moves: in some positions (such as the root position of the game of Figure 2.3.3) both players may have legal moves and, if both are willing to move, which of them acts sooner determines what the next move will be.

The exact meaning of “strategy” will be defined later, but whatever it means, we can see that the machine has a winning strategy in the game of Figure 2.3.3, which can be put this way: “Regardless of what the adversary is doing or has done, go ahead and make move α; make β as your second move if and when you see that the adversary has made move γ, no matter whether this happened before or after your first move”. This strategy obviously guarantees that the machine will not offend. There are 5 possible legal runs consistent with it, i.e., legal runs that could be generated when it is followed by the machine: ⟨α⟩, ⟨α,β⟩, ⟨β,α⟩, ⟨α,γ,β⟩ and ⟨γ,α,β⟩. All are won by the machine.

Let G be a constant game. G is said to be finite-depth iff there is a (smallest) integer d, called the depth of G, such that the length of every legal run of G is at most d. And G is perifinite-depth iff every legal run of it is finite, even if there are arbitrarily long legal runs. Let us call a legal run Γ of G maximal iff Γ is not a proper initial segment of any other legal run of G. Then we say that G is finite-breadth iff the total number of maximal legal runs of G, called the breadth of G, is finite. Note that, in the general case, the breadth of a game may be not only infinite, but even uncountable. G is said to be (simply) finite iff it only has a finite number of legal runs. Of course, G is finite only if it is finite-breadth, and when G is finite-breadth, it is finite iff it is finite-depth iff it is perifinite-depth.
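For finite games, maximal legal runs, depth and breadth can be computed directly. A sketch, with moves abbreviated to bare labels and the legal positions given as an explicit set (a hypothetical encoding of ours):

```python
def maximal_runs(lr):
    """Legal runs that are not proper initial segments of any other
    legal run in lr."""
    return {p for p in lr if not any(len(q) > len(p) and q[:len(p)] == p for q in lr)}

lr = {(), ("a",), ("b",), ("a", "c")}
assert maximal_runs(lr) == {("b",), ("a", "c")}
assert max(len(p) for p in lr) == 2       # depth: every legal run has length <= 2
assert len(maximal_runs(lr)) == 2         # breadth: number of maximal legal runs
```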

As noted in Section 2.2, computational problems in the traditional sense are nothing but functions (to be computed). Such problems can be seen as the following types of games of depth 2:


Figure 2.3.4: The “successor” game

Once we agree that computational problems are nothing but games, the difference in the degrees of generality and flexibility between the traditional approach to computational problems and our approach becomes apparent and appreciable. What we see in Figure 2.3.4 is indeed a very special sort of games, and there is no good call for confining ourselves to its limits. In fact, staying within those limits would seriously retard any more or less advanced and systematic study of computability.

First of all, one would want to get rid of the “one green node per sibling group” restriction for the third-level nodes. Many natural problems, such as the problem of finding a prime integer between n and 2n, or finding an integral root of x²−2n=0, may have several solutions or no solution at all. That is, there can be several “right” outputs on a given input n, or none.

And why not further get rid of any remaining restrictions on the colors of nodes and arcs of whatever level? One can easily think of natural situations where, say, some inputs do not obligate the machine to generate an output, so that the corresponding second-level nodes should be green. An example would be the case where the machine is computing a partially-defined function f and receives an input n on which f is undefined.

So far we have been talking about generalizations within the depth-2 restriction, corresponding to viewing computational problems as very short dialogues between the machine and its environment. Permitting longer-than-2 or even infinitely long runs would allow us to capture problems with arbitrarily high degrees of interactivity and arbitrarily complex interaction protocols.  The task performed by a network server is an example of an infinite dialogue between the server and its environment --- the collection of clients, or let us just say the rest of the network.

It also makes sense to consider “dialogues” of lengths less than 2. “Dialogues” of length 0, i.e., games of depth 0, are said to be elementary. There are exactly two elementary constant games, denoted by ⊤ and ⊥:



Note that the symbols ⊤ and ⊥ thus have dual meanings: they may stand for the two elementary games as above, or for the corresponding two players. It will usually be clear from the context which of these two meanings we have in mind.

Just like classical logic, CoL sees no extensional distinction between “snow is white” and ⊤, or between “snow is black” and ⊥: all true propositions, as games, are ⊤, and all false propositions are ⊥. In other words, a proposition is a game automatically won by the machine when true and lost when false. This way, CoL’s concept of a constant game is a generalization of the classical concept of a proposition.



2.4 Not-necessarily-constant games


We fix an infinite set Variables = {var0, var1, var2,… } of variables. As usual, lowercase letters near the end of the Latin alphabet will be used to stand (as metavariables) for variables. We further fix the set Constants = {0,1,2,3,…} of decimal numerals, and call its elements constants. With some abuse of concepts, we shall often identify constants with the numbers they represent.

A universe (of discourse) is a pair U = (Dm, Dn), where Dm, called the domain of U, is a nonempty set, and Dn, called the denotation function of U, is a (total) function of type Constants → Dm. The elements of Dm will be referred to as the individuals of U. The intuitive meaning of d=Dn(c) is that the individual d is the denotat of the constant c and thus c is a name (or code) of d. So, the function Nm from Dm to the powerset of Constants satisfying the condition c∈Nm(d) ⇔ d=Dn(c) can be called the naming function of U. Of course, whenever convenient, a universe can be characterized in terms of its naming function rather than its denotation function.

A universe (Dm, Dn) is said to be ideal iff Dn is a bijection. Generally, in a not-necessarily-ideal universe, some individuals may have unique names, some have many names, and some have no names at all. Real-world universes are seldom ideal: not all people living or staying in the United States have social security numbers; most stars and planets of the Galaxy have no names at all, while some have several names (Morning Star = Evening Star = Venus); etc. A natural example of an inherently non-ideal universe from the world of mathematics would be the one whose domain is the set ℝ of real numbers, only some of whose elements have names, such as 5, 1/3, √2 or π. Generally, even if the set of constants were not fixed, no universe with an uncountable domain would be “ideal”, for the simple reason that there can only be countably many names. This is so because names, by their very nature and purpose, have to be finite objects. Observe also that many properties of common interest, such as computability or decidability, are usually sensitive to how objects (individuals) are named, as they deal with the names of those objects rather than the objects themselves. For instance, strictly speaking, computing a function f(x) means the ability to tell, after seeing a (the) name of an arbitrary object a, a (the) name of the object b with b=f(a). Similarly, an algorithm that decides a predicate p(x) on a set S, strictly speaking, takes not elements of S --- which may be abstract objects such as numbers or graphs --- but rather names of those elements, such as decimal numerals or codes. It is not hard to come up with a nonstandard naming of the natural numbers through decimal numerals under which the predicate “x is even” is undecidable.
On the other hand, for any undecidable arithmetical predicate p(x), one can come up with a naming  such that p(x) becomes decidable --- for instance, one that assigns even-length names to all a with p(a) and assigns odd-length names to all a with ¬p(a). Classical logic exclusively deals with individuals of a universe without a need to also consider names for them, as it is not concerned with decidability or computability. CoL, on the other hand, with its computational semantics, inherently calls for being more careful about differentiating between individuals and their names, and hence for explicitly considering universes in the form (Dm, Dn) rather than just Dm as classical logic does. 
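The parity trick in this paragraph can be made concrete. The construction below is our own toy version: it assigns distinct binary names so that a name has even length exactly when the predicate holds of its bearer, after which “deciding” the predicate amounts to checking the parity of the name’s length.

```python
def make_naming(p, individuals):
    """Assign each individual a distinct binary name whose length is
    even iff p holds of it (prepend '0' when the parity is wrong;
    padded names start with '0', unpadded ones with '1', so names
    stay distinct)."""
    names = {}
    for i, a in enumerate(individuals):
        base = bin(i)[2:]                  # distinct base names "0", "1", "10", ...
        if (len(base) % 2 == 0) != p(a):
            base = "0" + base              # fix the parity
        names[a] = base
    return names

def decide_by_name(name):
    """Under the naming above, p is 'decided' by the name alone."""
    return len(name) % 2 == 0

is_even = lambda n: n % 2 == 0
names = make_naming(is_even, range(16))
assert len(set(names.values())) == 16                         # names are distinct
assert all(decide_by_name(names[a]) == is_even(a) for a in range(16))
```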

Where Vr is a set of variables and Dm is the domain of some universe, by a Vr→Dm valuation we mean a (total) function e of type Vr→Dm. When Vr and Dm are clear from the context, we may omit an explicit reference to them and simply say “valuation”. References to a universe U or its components can be similarly omitted when talking about individuals, denotats, names or some later-defined concepts such as those of a game or a function.

Definition 2.4.1 Let n be a natural number. An n-ary game is a tuple G=(DmG,DnG,VrG,MpG), where (DmG,DnG) is a universe, VrG is a set of n distinct variables, and MpG is a mapping of VrG→DmG valuations to constant games.

We refer to the elements of VrG as the variables on which the game G depends.  We further refer to the pair (DmG,DnG) as the universe of G, and denote it by UnG. Correspondingly, a game sometimes can be written as the triple (UnG,VrG,MpG) rather than quadruple (DmG,DnG,VrG,MpG). We further refer to DmG as the domain of G, refer to DnG as the denotation function of G, and refer to VrGDmG valuations as G-valuations.

In classical logic, under an intensional (variable-sensitive) understanding, the definition of the concept of an n-ary predicate would look exactly like our definition of an n-ary game after omitting the (now redundant) Dn component, with the only difference that there the Mp function returns propositions rather than constant games. And, just like propositions are nothing but 0-ary predicates, constant games are nothing but 0-ary games. Thus, games generalize constant games in the same way as predicates generalize propositions. And, as constant games are generalized propositions, games are generalized predicates.

In formal contexts, we choose a similar intensional approach to functions. The definition of a function f below is literally the same as our definition of a game G, with the only difference that Mpf is now a mapping of Vrf→Dmf valuations to Dmf (rather than to constant games).

Definition 2.4.2 Let n be a natural number. An n-ary function is a tuple f=(Dmf,Dnf,Vrf,Mpf), where (Dmf,Dnf) is a universe, Vrf is a set of n distinct variables, and Mpf is a mapping of Vrf→Dmf valuations (called f-valuations) to Dmf.

Just like in the case of games, we refer to the elements of Vrf as the variables on which the function f depends, refer to Dmf as the domain of f, etc.

Given a game G and an X→DmG valuation e with VrG⊆X, we write e[G] to denote the constant game MpG(eʹ) to which MpG maps eʹ, where eʹ is the restriction of e to VrG (i.e., the G-valuation that agrees with e on all variables from VrG). Such a constant game e[G] is said to be an instance of G. Also, for readability, we usually write LreG and WneG instead of Lre[G] and Wne[G]. Similarly, given a function f and an X→Dmf valuation e with Vrf⊆X, we write e[f] to denote the individual Mpf(eʹ) to which Mpf maps eʹ, where eʹ is the restriction of e to Vrf.
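At the level of the Mp component, forming the instance e[G] amounts to restricting the valuation to the variables on which the game depends and applying MpG. A minimal sketch with hypothetical names, valuations encoded as dictionaries:

```python
def instance(vr, mp, e):
    """e[G]: restrict the valuation e to the variables vr on which the
    game depends, then apply the game's Mp component."""
    assert vr <= set(e)                    # e must be defined on all of vr
    return mp({x: e[x] for x in vr})

# A toy elementary game depending on z: "is z equal to 4?"
mp = lambda val: "won" if val["z"] == 4 else "lost"
assert instance({"z"}, mp, {"z": 4, "unused": 99}) == "won"   # extra variables ignored
assert instance({"z"}, mp, {"z": 5}) == "lost"
```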

We say that a game is elementary iff so are all of its instances. Thus, games generalize elementary games in the same sense as constant games generalize ⊤ and ⊥. Further, since the “legal run” component of all instances of elementary games is trivial (the empty run ⟨⟩ is the only legal run), and since depending on runs is the only thing that differentiates constant games from propositions, we can and will use “predicate” and “elementary game” as synonyms. Specifically, we understand a predicate p as the elementary game G which depends on the same variables as p and is such that, for any valuation e, WneG⟨⟩ = ⊤ iff p is true at e. And vice versa: an elementary game G will be understood as the predicate p which depends on the same variables as G and which is true at a given valuation e iff WneG⟨⟩ = ⊤.

Convention 2.4.3 Let U be a universe, c a constant and x a variable. We shall write cU to mean the function f with Unf=U, Vrf=∅ and with Mpf such that, for (the only) f-valuation e, Mpf(e)=DnU(c). And we shall write xU to mean the function f with Unf=U, Vrf={x} and with Mpf such that, for any f-valuation e, Mpf(e)=e(x).

Convention 2.4.4 Assume F is a function (resp. game). Following the standard readability-improving practice established in the literature for functions and predicates, we may fix a tuple (x1,…,xn) of pairwise distinct variables for F when first mentioning it, and write F as F(x1,…,xn). When doing so, we do not necessarily mean that {x1,…,xn}=VrF. Representing F as F(x1,…,xn) sets a context in which, where g1,…,gn are functions with the same universe as F, we can write F(g1,…,gn) to mean the function (resp. game) H with UnH=UnF, VrH=(VrF−{x1,…,xn})∪Vrg1∪…∪Vrgn and with MpH such that the following condition is satisfied:

Further, we allow for any of the above “functions” gi to be just a constant c or just a variable x. In such a case, gi should be correspondingly understood as the 0-ary function cU or the unary function xU, where U=UnF. So, for instance, F(0,x) is our lazy way to write F(0U,xU).


The entities that in common language we call games are at least as often non-constant as constant. Board games such as chess and checkers are examples of constant games. On the other hand, almost all card games are more naturally represented as non-constant games: each session/instance of such a game is determined by a particular permutation of the card deck, and thus the game can be understood as one that depends on a variable x ranging over the possible settings of the deck. Even the game of checkers has a natural non-constant generalization Checkers(x), with x ranging over positive even integers, meaning a play on a board of size x×x where, in the initial position, the first 1.5x black cells are filled with white pieces and the last 1.5x black cells with black pieces. Then ordinary checkers can be written as Checkers(8). Furthermore, the numbers of pieces of either color can also be made variable, yielding an even more general game Checkers(x,y,z), with ordinary checkers being its instance Checkers(8,12,12). By allowing rectangular-shape boards, we would get a game that depends on four variables, etc. Computability theory also often appeals to non-constant games to illustrate certain complexity-theoretic concepts such as alternating computation or PSPACE-completeness. The so-called Formula Game and Generalized Geography are typical examples. Both can be understood as games that depend on a variable x, with x ranging over quantified Boolean formulas in Formula Game and over directed graphs in Generalized Geography.

Consider a game G. What we call a unilegal run of G is a run which is a legal run of all instances of G. And we say that G is unistructural iff all legal runs of all of its instances are unilegal runs of G. The class of unistructural games is known to be closed under all of the game operations studied in CoL.[Jap03] While natural examples of non-unistructural games exist, such as the games mentioned in the previous paragraph, virtually all of the other particular games discussed in the present article are unistructural.

Non-constant games, as long as they are unistructural, can be visualized as trees in the style seen earlier, with the difference that the nodes of the tree can now be labeled with any predicates rather than only propositions (colors) as before. For any given valuation e, each such label L tells us the color of the node: the L-labeled node is green if L is true at e, and red if L is false.


Figure 2.3.5: The “generalized successor” game

The above figure shows a game which depends on z. Specifically, for every valuation e, the game is about computing the function fe, defined by fe(n) = n+e(z) (“the zth successor of n”). Note that we have different functions, and thus different constant games, for different valuations e here.

Denoting the game of Figure 2.3.5 by G(z), the game of Figure 2.3.4 can be seen to be the instance G(1) of it. The latter results from replacing z by 1 in Figure 2.3.5. This replacement turns every label m=n+z into the proposition m=n+1, i.e., into ⊤ (green filling) or ⊥ (red filling).



2.5 Static games


In the particular games that we have seen so far or will see in the subsequent sections, when talking about the existence of a winning strategy or the absence thereof, the question about the (relative) speeds of the players was never relevant. That is because all those games share one nice property, called the static property. Below are some intuitive characterizations of this important class of games.

The games that are not static we call dynamic. The following games are dynamic: 



In either game, the player who is quick enough to make the first move becomes the winner. And asking whether Machine has a winning strategy is not really meaningful: whether Machine can win or not depends on the relative speeds of the two players. In a sense, B is even “worse” than A. Just as in A, it would not be a good idea for Machine to wait, because, unless Environment is stupid enough to also decide to wait, Machine will lose. Let us now say that Machine goes ahead and initiates the first move. What may happen in B is that Environment moves before Machine actually completes its move. This will make Machine not only the loser but also the offender. Machine cannot even be sure that its move will be legal! If communication happens by exchanging messages through a (typically asynchronous) network that often has unpredictable delays, this can also be a contest of luck: assuming that the arbitration is done by Environment or a third party who is recording the order of moves, it is possible that Machine makes a move earlier than Environment does, but the message carrying that move is delivered to the arbiter only after Environment’s move arrives, and the arbiter will be unable to see that it was Machine who moved first. An engineering-style attempt to neutralize this problem could be to let all moves carry timestamps. But this would not save the case: even though timestamps would correctly show the real order of moves by each particular player, they could not be used to compare two moves by two different players, for the clocks of the two players would never be perfectly synchronized.

Another attempt to deal with problems in the above style could be to assign to each player, in a strictly alternating order, a constant-length time slot during which the player has exclusive access to the communication medium. Let alone that this could introduce some unfair asymmetry in favor of the player who gets the first slot, the essence of the problem would still not be taken care of: some games would still essentially depend on the relative speeds of the two players, even if arguably no longer on the speed of the network.

Formally, the concept of static games is defined in terms of “delays”. We say that run D is a green (resp. red) delay of run Γ iff the following two conditions are satisfied:

  1. The moves of either color are arranged in Γ in the same order as in D.
  2. For any n,k ≥ 1, if the kth red (resp. green) move is made earlier than the nth green (resp. red) move in Γ, then so is it in D.

In other words, D is the result of shifting to the right (“delaying”) some green (resp. red) moves in Γ without violating their order.

Example: ⟨0,2,1,3,4,5,7,8,6,10,9⟩ is a green delay of ⟨0,1,2,3,4,5,6,7,8,9,10⟩. The former is obtained from the latter by shifting some green moves to the right. When doing so, a green move can jump over a red move, but one green move cannot jump over another green move.
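Conditions 1 and 2 of the definition can be checked mechanically. In the sketch below (runs encoded, hypothetically, as tuples of (color, move) pairs), is_delay(gamma, delta, "green") tests whether delta is a green delay of gamma; the small example is our own, with explicitly chosen colors.

```python
def is_delay(gamma, delta, delayed):
    """Is delta a `delayed`-colored delay of gamma? Colors are the
    strings "green" and "red"."""
    other = "red" if delayed == "green" else "green"
    # Condition 1: each color's moves appear in the same order in both runs.
    for color in (delayed, other):
        if [m for c, m in gamma if c == color] != [m for c, m in delta if c == color]:
            return False
    # Condition 2: whenever the k-th `other` move precedes the n-th
    # `delayed` move in gamma, it also precedes it in delta.
    def idx(run, color):
        return [i for i, (c, _) in enumerate(run) if c == color]
    g_oth, g_del = idx(gamma, other), idx(gamma, delayed)
    d_oth, d_del = idx(delta, other), idx(delta, delayed)
    return all(d_oth[k] < d_del[n]
               for k in range(len(g_oth)) for n in range(len(g_del))
               if g_oth[k] < g_del[n])

gamma = (("green", 0), ("red", 1), ("green", 2))
delta = (("red", 1), ("green", 0), ("green", 2))   # green move 0 delayed past red 1
assert is_delay(gamma, delta, "green")
assert not is_delay(gamma, delta, "red")
```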

Now, we say that a constant game G is static iff, for either player (color) P and any runs Γ and D such that D is a P-delay of Γ, in the context of G, the following two conditions are satisfied:

  1. If Γ is won by P, then so is D.
  2. If P does not offend in Γ, then neither does it in D.

 This concept extends to all games by stipulating that a not-necessarily-constant game is static iff so are all of its instances.

All game operations studied in CoL have been shown to preserve the static property of games.[Jap03]  So, as long as the atoms of an expression represent static games, so does the whole expression. One natural subset of all static games is the closure of elementary games under the operations of CoL.

As already noted, CoL restricts its attention to static games only (at least, at its present stage of development). To see why static games are really what CoL is willing to call “pure” computational problems, imagine a play over the delay-prone Internet. If there is a central arbiter (which can be located either in one of the players’ computers or on some third, neutral territory) recording the order of moves, then the players have full access to information about the official version of the run that is being generated, even though they could suffer from their moves being delivered with delays. But let us make the situation even more dramatic: assume, as is typical in distributed systems, that there is no central arbiter. Rather, each player’s machine records moves in the order it receives them, so that we have what is called distributed arbitration. Every time a player makes a move, the move is appended to the player’s internal record of the run and, at the same time, mailed to the adversary. And every time a message from the adversary arrives, the move contained in it is appended to the player’s record of the run. The game starts. Seeing no messages from Environment, Machine decides to go ahead and make an opening move α. As it happens, Environment also decides to make an “opening” move β. The messages carrying α and β cross. So, after they are both delivered, Machine’s internal records show the position ⟨α,β⟩, while Environment thinks that the current position is ⟨β,α⟩. Both players decide to make two consecutive new moves: γ,δ and ε,ω, respectively, and the two pairs of messages, again, cross.

After making their second series of moves and receiving a second series of “replies” from their adversaries, both players decide to make no further moves. The game thus ends. According to Machine’s records, the run was ⟨α,β,γ,δ,ε,ω⟩, while Environment thinks that the run was ⟨β,α,ε,ω,γ,δ⟩. As for the “real run”, i.e. the real order in which these six moves were made (if this concept makes sense at all), it can be yet something different, such as, say, ⟨β,α,γ,ε,δ,ω⟩. A little thought can convince us that in any case the real run, as well as the version of the run seen by Environment, will be a green delay of the version of the run seen by Machine. Similarly, the real run, as well as the version of the run seen by Machine, will be a red delay of the version of the run seen by Environment. Hence, provided that the game is static, either player can fully trust its own version of the run and only care about making good moves for this version: regardless of whether it shows the true or a distorted picture of the real run, the latter is guaranteed to be successful as long as the former is. Moreover, for similar reasons, the player will remain equally successful if, instead of immediately appending the adversary’s moves to its own version of the run, it simply queues those moves in a buffer as if they had not arrived yet, and fetches them only later at a more convenient time, after perhaps making and appending to its records some of its own moves first. The effect will amount to having full control over the speed of the adversary, thus allowing the player to select its own pace for the play and worry only about what moves to make rather than how quickly to make them.

Thus, static games allow players to make a full abstraction from any specific assumptions regarding the type of arbitration (central or distributed), the speed of the communication network and the speed of the adversary: whatever strategy they follow, it is always safe to assume or pretend that the arbitration is fair and unique (central), the network is perfectly accurate (zero delays) and the adversary is “slow enough”.  On the other hand, with some additional thought, we can see that if a game is not static, there will always be situations when no particular one of the above three sorts of abstractions can be made. Specifically, such situations will emerge every time when a player P’s strategy generates a P-won run that has some P-lost P-delays.

 

 

 

3 The CoL zoo of game operations


3.1 Preview


As we already know, logical operators in CoL stand for operations on games. There is an open-ended pool of operations of potential interest, and which of those to study may depend on particular needs and taste. Below is an incomplete list of the operators that have been officially introduced so far.

Negation: 

Conjunctions:    (parallel),   (choice),   (sequential),   (toggling)

Disjunctions:     (parallel),   (choice),   (sequential),   (toggling)

Recurrences:   (branching),   (parallel),   (sequential),   (toggling)

Corecurrences:  (branching),   (parallel),   (sequential),   (toggling)

Universal quantifiers:    (blind),   (parallel),   (choice),   (sequential),   (toggling)

Existential quantifiers:   (blind),   (parallel),   (choice),   (sequential),   (toggling)

Implications:    (parallel),   (choice),   (sequential),  (toggling)


Rimplications: 
 (branching),   (parallel),   (sequential),   (toggling)

Refutations:   (branching),   (parallel),   (sequential),   (toggling)

Among these we see all operators of classical logic, and our choice of the classical notation for them is no accident. It was pointed out earlier that classical logic is nothing but the elementary, zero-interactivity fragment of computability logic. Indeed, after analyzing the relevant definitions, each of the classically-shaped operations, when restricted to elementary games, can be easily seen to be virtually the same as the corresponding operator of classical logic. For instance, if A and B are elementary games, then so is A∧B, and the latter is exactly the classical conjunction of A and B understood as an (elementary) game. In a general --- not-necessarily-elementary --- case, however, ¬, ∧, ∨, → become more reminiscent of (yet not the same as) the corresponding multiplicative operators of linear logic. Of course, here we are comparing apples with oranges for, as noted earlier, linear logic is a syntax while computability logic is a semantics, and it may be not clear in what precise sense one can talk about similarities or differences.

In the same apples-and-oranges style, the choice operations ⊓, ⊔ and the choice quantifiers can be perceived as relatives of the additive connectives and quantifiers of linear logic; the parallel quantifiers as “multiplicative quantifiers”; and the recurrences and corecurrences as “exponentials”, even though it is hard to guess which of the two groups --- the parallel ones or the branching ones --- would be closer to an orthodox linear logician’s heart. On the other hand, the blind, sequential and toggling groups of operators have no counterparts in linear logic.

In this section we are going to see intuitive explanations as well as formal definitions of all of the above-listed operators. We agree that throughout those definitions, Φ ranges over positions, and Γ over runs. Each such definition has two clauses, one telling us when a position is a legal position of the compound game, and the other telling us who wins any given legal run. The run Γ seen in the second clause of such a definition is always implicitly assumed to be a legal run of the game that is being defined.


This section also provides many examples of particular games. Let us agree that, unless otherwise suggested by the context, in all those cases we have the ideal universe in mind. Often we let non-numerals such as people, Turing machines, etc. act in the roles of “constants”. These should be understood as abbreviations of the corresponding decimal numerals that encode such objects in some fixed reasonable encoding. It should also be remembered that algorithmicity is a minimum requirement on ⊤’s strategies. Some of our examples implicitly assume stronger requirements, such as efficiency or the ability to act with imperfect knowledge. For instance, the problem of telling whether there is or has been life on Mars is, of course, decidable, for this is a finite problem. Yet our knowledge does not allow us to actually solve the problem. Similarly, chess is a finite game and (after ruling out the possibility of draws) one of the players does have a winning strategy in it. Yet we do not know specifically what (and which player’s) strategy is a winning one.

When omitting parentheses in compound expressions, we assume that all unary operators (negation, refutations, recurrences, corecurrences and quantifiers) take precedence over all binary operators (conjunctions, disjunctions, implications, rimplications), among which implications and rimplications have the lowest precedence. So, for instance, A BC should be understood as A ((B)C)  rather than, say,  as (A B)C or as A ((BC)).   

Theorem 3.1.1 (proven in [Jap03, Jap08b, Jap11a]) All operators listed in this subsection preserve the static property of games (i.e., when applied to static games, the resulting game is also static). 


3.2 Prefixation


Unlike the operators listed in the preceding subsection, the operation of prefixation is not meant to act as a logical operator in the formal language of CoL. Yet, it is very useful in characterizing and analyzing games, and we want to start our tour of the zoo with it.

Definition 3.2.1 Assume A is a game and Ψ is a unilegal position of A (otherwise the operation is undefined). The Ψ-prefixation of A, denoted ⟨Ψ⟩A, is defined as the game G with UnG=UnA, VrG=VrA and with MpG such that, for any G-valuation e, we have:

Intuitively, ⟨Ψ⟩A is the game playing which means playing A starting (continuing) from position Ψ. That is, ⟨Ψ⟩A is the game to which A evolves (is brought down) after the moves of Ψ have been made. Visualized as a tree, ⟨Ψ⟩A is nothing but the subtree of A rooted at the node corresponding to position Ψ. Below is an illustration.




3.3 Negation


For a run Γ, by ¬Γ we mean the “negative image” of Γ (green and red interchanged). For instance, ¬⟨α, β, γ⟩ is ⟨α, β, γ⟩ with the authorship of each of the three moves switched between the two players.

Definition 3.3.1  Assume A is a game. ¬A is defined as the game G with UnG=UnA, VrG=VrA and with MpG such that, for any G-valuation e, we have:

In other words, ¬A is A with the roles of the two players interchanged: Machine’s (legal) moves and wins become Environment’s moves and wins, and vice versa. So, when visualized as a tree, ¬A is the exact negative image of A, as illustrated below:


Figure 3.3.2: Negation

Let Chess be the game of chess (with draws ruled out) from the point of view of the white player. Then ¬Chess is Chess “upside down”, i.e., the game of chess from the point of view of the black player:


Observe that the classical double negation principle ¬¬A = A holds: interchanging the players’ roles twice restores the original roles of the players. It is also easy to see that we always have ¬⟨Ψ⟩A = ⟨¬Ψ⟩¬A. So, for instance, if α is Machine’s legal move in the empty position of A that brings A down to B, then the same α is Environment’s legal move in the empty position of ¬A, and it brings ¬A down to ¬B. Test the game A of Figure 3.3.2 to see that this is indeed so.


3.4 Choice operations


Choice conjunction (“chand”) ⊓ and choice disjunction (“chor”) ⊔ combine games in the way seen below:



A ⊓ B is a game where, in the initial (empty) position, only Environment has legal moves. Such a move should be either ‘0’ or ‘1’. If Environment moves 0, the game continues as A, meaning that ⟨0⟩(A ⊓ B) = A; if it moves 1, then the game continues as B, meaning that ⟨1⟩(A ⊓ B) = B; and if it fails to make either move (“choice”), then it loses. A ⊔ B is similar, with the difference that here it is Machine who has initial moves and who loses if no such move is made.

Definition 3.4.1 Assume A0 and A1 are games with a common universe U.

a)     A0 ⊓ A1 is defined as the game G with UnG=U, VrG=VrA0∪VrA1 and with MpG such that, for any G-valuation e, we have:

b)     A0 ⊔ A1 is defined as the game G with UnG=U, VrG=VrA0∪VrA1 and with MpG such that, for any G-valuation e, we have:

The game A of Figure 3.3.2 can now be easily seen to be (… ⊓ …) ⊔ (… ⊓ …), and its negation to be (… ⊔ …) ⊓ (… ⊔ …). The De Morgan laws familiar from classical logic persist: we always have ¬(A ⊓ B) = ¬A ⊔ ¬B and ¬(A ⊔ B) = ¬A ⊓ ¬B. Together with the earlier observed double negation principle, this means that A ⊓ B = ¬(¬A ⊔ ¬B) and A ⊔ B = ¬(¬A ⊓ ¬B). Similarly for the quantifier counterparts ⊓x and ⊔x of ⊓ and ⊔.

Given a game A(x), the choice universal quantification (“chall”) ⊓xA(x) of it is nothing but the “infinite choice conjunction” A(0) ⊓ A(1) ⊓ A(2) ⊓ …, and the choice existential quantification (“chexists”) ⊔xA(x) of A(x) is the “infinite choice disjunction” A(0) ⊔ A(1) ⊔ A(2) ⊔ …:


Specifically, ⊓xA(x) is a game where, in the initial position, only Environment has legal moves, and such a move should be one of the constants. If Environment moves c, then the game continues as A(c), and if Environment fails to make an initial move/choice, then it loses. ⊔xA(x) is similar, with the difference that here it is Machine who has initial moves and who loses if no such move is made. So, we always have ⟨c⟩⊓xA(x) = A(c) and ⟨c⟩⊔xA(x) = A(c). Below is a formal definition of all choice operations.

Definition 3.4.2 Assume x is a variable and A=A(x) is a game.

a)     ⊓xA(x) is defined as the game G with UnG=UnA, VrG=VrA-{x} and with MpG such that, for any G-valuation e, we have:

b)     ⊔xA(x) is defined as the game G with UnG=UnA, VrG=VrA-{x} and with MpG such that, for any G-valuation e, we have:

Now we are already able to express traditional computational problems using formulas. Traditional problems come in two forms: the problem of computing a function f(x), or the problem of deciding a predicate p(x). The former can be written as ⊓x⊔y(y=f(x)), and the latter as ⊓x(¬p(x) ⊔ p(x)). So, for instance, the constant “successor” game of Figure 2.3.4 will be written as ⊓x⊔y(y=x+1), and the unary “generalized successor” game of Figure 2.3.5 as ⊓x⊔y(y=x+z). The following game, which is about deciding the “evenness” predicate, could be written as ⊓x(¬∃y(x=2y) ⊔ ∃y(x=2y)) (∃ will be officially defined later, but, as promised, its meaning is going to be exactly classical when applied to an elementary game like x=2y).


Classical logic has been repeatedly criticized for its operators not being constructive. Consider, for example, ∀x∃y(y=f(x)). It is true in the classical sense as long as f is a total function. Yet its truth has little (if any) practical import, as “∃y” merely signifies existence of y, without implying that such a y can actually be found. And, indeed, if f is an incomputable function, there is no method for finding y. On the other hand, the choice operations of CoL are constructive. Computability (“truth”) of ⊓x⊔y(y=f(x)) means more than just existence of y; it means the possibility to actually find (compute, construct) a corresponding y for every x.

Similarly, let Halts(x,y) be the predicate “Turing machine x halts on input y”. Consider ∀x∀y(¬Halts(x,y) ∨ Halts(x,y)). It is true in classical logic, yet not in a constructive sense. Its truth means that, for all x and y, either ¬Halts(x,y) or Halts(x,y) is true, but it does not imply existence of an actual way to tell which of the two is true after all. And such a way does not really exist, as the halting problem is undecidable. This means that ⊓x⊓y(¬Halts(x,y) ⊔ Halts(x,y)) is not computable. Generally, as pointed out earlier, the principle of the excluded middle “A OR ¬A”, validated by classical logic and causing the indignation of the constructivistically-minded, is not valid in computability logic with OR understood as choice disjunction. The following is an example of a constant game of the form A ⊔ ¬A with no algorithmic solution (why, by the way?):

⊓x⊓y(¬Halts(x,y) ⊔ Halts(x,y))  ⊔  ¬⊓x⊓y(¬Halts(x,y) ⊔ Halts(x,y)).

Chess ⊔ ¬Chess, on the other hand, is an example of a computable-in-principle yet “practically incomputable” problem, with no real computer anywhere close to being able to handle it.

There is no need to give a direct definition for the remaining choice operation of choice implication (“chimplication”), for it can be defined in terms of ¬ and ⊔ in the “standard” way:

Definition 3.4.3 A ⊐ B  =def  ¬A ⊔ B.

Each of the other sorts of disjunctions (parallel, sequential and toggling) generates the corresponding implication the same way.


3.5 Parallel operations


The parallel conjunction (“pand”) A∧B and the parallel disjunction (“por”) A∨B of A and B are games playing which means playing the two games simultaneously. In order to win in A∧B (resp. A∨B), Machine needs to win in both (resp. at least one) of the components A, B. For instance, ¬Chess∨Chess is a two-board game, where Machine plays black on the left board and white on the right board, and where it needs to win in at least one of the two parallel sessions of chess. A win can be easily achieved here by just mimicking in ¬Chess the moves that the adversary makes in Chess, and vice versa. This copycat strategy guarantees that the positions on the two boards always remain symmetric, as illustrated below, and thus Machine eventually loses on one board but wins on the other.

This is very different from ¬Chess ⊔ Chess. In the latter Machine needs to choose between the two components and then win the chosen one-board game, which makes ¬Chess ⊔ Chess essentially as hard to win as either ¬Chess or Chess. A game of the form A∨B is generally easier (at least, not harder) to win than A⊔B, the latter is easier to win than A⊓B, and the latter in turn is easier to win than A∧B.

Technically, a move α in the left (resp. right) conjunct or disjunct of a parallel combination is made by prefixing α with ‘0.’ (resp. ‘1.’). For instance, in (the initial position of) (A⊔B)∨(C⊓D), the move ‘1.0’ is legal for Environment, meaning choosing the left ⊓-conjunct in the second ∨-disjunct of the game. If such a move is made, the game continues as (A⊔B)∨C. The player Machine, too, has initial legal moves in (A⊔B)∨(C⊓D), which are ‘0.0’ and ‘0.1’.

In the formal definitions of this section and throughout the rest of this webpage, we use the important notational convention according to which:

Notation 3.5.1 For a run Γ and string α, Γ^α means the result of removing from Γ all moves except those of the form αβ, and then deleting the prefix ‘α’ in the remaining moves.

So, for instance, ⟨1.2, 1.0, 0.33⟩^1. = ⟨2, 0⟩ and ⟨1.2, 1.0, 0.33⟩^0. = ⟨33⟩. Another example: where Γ is the leftmost branch of the tree for (… ⊓ …) ∨ (… ⊓ …) shown in Figure 3.5.3, Γ^0. and Γ^1. are the corresponding subruns. Intuitively, we see this Γ as consisting of two subruns, one (Γ^0.) being a run in the first ∨-disjunct of the game, and the other (Γ^1.) being a run in the second disjunct.
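The subrun extraction of Notation 3.5.1 is mechanical enough to sketch in code. The following Python fragment (the name `subrun` is hypothetical; real runs also record which player made each move, which we ignore here) reproduces the examples above:

```python
def subrun(run, prefix):
    """Gamma^prefix (Notation 3.5.1): keep only the moves that start with
    `prefix`, then strip that prefix from them."""
    return [move[len(prefix):] for move in run if move.startswith(prefix)]

run = ["1.2", "1.0", "0.33"]
print(subrun(run, "1."))  # ['2', '0']
print(subrun(run, "0."))  # ['33']
```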

Definition 3.5.2 Assume A0 and A1 are games with a common universe U.

a)     A0∧A1 is defined as the game G with UnG=U, VrG=VrA0∪VrA1 and with MpG such that, for any G-valuation e, we have:

b)     A0∨A1 is defined as the game G with UnG=U, VrG=VrA0∪VrA1 and with MpG such that, for any G-valuation e, we have:

When A and B are (constant) finite games, the depth of A∧B or A∨B is the sum of the depths of A and B, as seen below.


 

This signifies an exponential growth of the breadth, meaning that, once we have reached the level of parallel operations, continuing drawing trees in the earlier style becomes no fun. Not to be disappointed though: making it possible to express large- or infinite-size game trees in a compact way is what our game operators are all about after all.

Whether or not trees are helpful in visualizing parallel combinations of unistructural games, prefixation is still very much so if we think of each unilegal position Φ of A as the game ⟨Φ⟩A. This way, every unilegal run Γ of A becomes a sequence of games, as illustrated in the following example.

Example 3.5.4   The (uni)legal run Γ = ⟨1.7, 0.7, 0.49, 1.49⟩ of the game A = ⊔x⊓y(y≠x²) ∨ ⊓x⊔y(y=x²) induces the following sequence, showing how things evolve as Γ runs, i.e., how the moves of Γ affect/modify the game that is being played:

The run hits the true proposition 49≠7² ∨ 49=7², and hence is won by the machine.

As we may guess, the parallel universal quantification (“pall”) ∧xA(x) of A(x) is nothing but A(0)∧A(1)∧A(2)∧…, and the parallel existential quantification (“pexists”) ∨xA(x) of A(x) is nothing but A(0)∨A(1)∨A(2)∨…

Definition 3.5.5 Assume x is a variable and A=A(x) is a game.

a)     ∧xA(x) is defined as the game G with UnG=UnA, VrG=VrA-{x} and with MpG such that, for any G-valuation e, we have:

b)     ∨xA(x) is defined as the game G with UnG=UnA, VrG=VrA-{x} and with MpG such that, for any G-valuation e, we have:

The next group of parallel operators are parallel recurrence (“precurrence”) and parallel corecurrence (“coprecurrence”). The parallel recurrence of A is nothing but the infinite parallel conjunction A∧A∧A∧…, and the parallel corecurrence of A is the infinite parallel disjunction A∨A∨A∨…. Equivalently, they can be respectively understood as ∧xA and ∨xA, where x is a dummy variable on which A does not depend. Intuitively, playing the parallel recurrence of A means simultaneously playing in infinitely many “copies” of A, and Machine is the winner iff it wins A in all copies. The parallel corecurrence is similar, with the only difference that here winning in just one copy is sufficient.

Definition 3.5.6 Assume A is a game.

a)     The parallel recurrence of A is defined as the game G with UnG=UnA, VrG=VrA and with MpG such that, for any G-valuation e, we have:

b)     The parallel corecurrence of A is defined as the game G with UnG=UnA, VrG=VrA and with MpG such that, for any G-valuation e, we have:

As was the case with choice operations, we can see that the definition of each parallel operation seen so far in this section can be obtained from the definition of its dual by just interchanging Machine with Environment. Hence it is easy to verify that the expected De Morgan dualities hold: ¬(A∧B) = ¬A∨¬B, ¬(A∨B) = ¬A∧¬B, ¬∧xA(x) = ∨x¬A(x), ¬∨xA(x) = ∧x¬A(x), and likewise for the recurrences. This, in turn, means that each parallel operation is definable in the standard way in terms of its dual operation and negation. For instance, A∨B can be defined as ¬(¬A∧¬B), and the parallel corecurrence of A as the negation of the parallel recurrence of ¬A. Three more parallel operations defined in terms of negation and other parallel operations are parallel implication (“pimplication”) →, parallel rimplication (“primplication”) and parallel refutation (“prefutation”). Here the prefix “p”, as before, stands for “parallel”, and the prefix “r” in “rimplication” stands for “recurrence”.

Definition 3.5.7    a) A → B  =def  ¬A ∨ B

                              b) the primplication of B by A  =def  (the parallel recurrence of A) → B

                              c) the prefutation of A  =def  the primplication of ⊥ by A, where ⊥ is the always-false elementary game

Note that, just like negation and unlike choice operations, parallel operations preserve the elementary property of games. When restricted to elementary games, the meanings of ∧, ∨ and → coincide with those of classical conjunction, disjunction and implication. Further, as long as all individuals of the universe have naming constants, the meanings of ∧x and ∨x coincide with those of the classical universal and existential quantifiers. The same conservation of classical meaning (but without any conditions on the universe) is going to be the case with the blind quantifiers ∀ and ∃, defined later; so, at the elementary level, when all individuals of the universe have naming constants, ∧x and ∨x are indistinguishable from ∀x and ∃x, respectively. As for the parallel recurrence and corecurrence, for an elementary A, both of them are simply the same as A.

While all classical tautologies automatically remain valid when parallel operators are applied to elementary games, in the general case the class of valid (“always computable”) principles shrinks. For example, P → P∧P, i.e. ¬P ∨ (P∧P), is not valid. Back to our chess example, one can see that the earlier copycat strategy successful for ¬Chess∨Chess would be inapplicable to ¬Chess∨(Chess∧Chess). The best that Machine can do in this three-board game is to synchronize ¬Chess with one of the two conjuncts of Chess∧Chess. It is possible that then ¬Chess and the unmatched Chess are both lost, in which case the whole game will be lost.

The principle P → P∧P is valid in classical logic because the latter sees no difference between P and P∧P. On the other hand, in virtue of its semantics, CoL is resource-conscious, and in it P is by no means the same as P∧P or P⊓P. Unlike P → P∧P, the principle asserting that the parallel recurrence of P implies P∧P is valid. Here, in the antecedent, we have infinitely many “copies” of P. Pick any two copies and, via copycat, synchronize them with the two conjuncts of the consequent. A win is guaranteed.
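The copycat idea behind this strategy can be sketched as a move-translation rule. In the toy Python fragment below (an illustration, not part of CoL's formal machinery), the first two copies of the antecedent, addressed ‘0.0’ and ‘0.1’, are synchronized with the consequent's conjuncts ‘1.0’ and ‘1.1’, and every adversary move in one member of a pair is echoed in the other:

```python
# Pairing of components: consequent conjunct '1.0' plays against antecedent
# copy '0.0', and consequent conjunct '1.1' against antecedent copy '0.1'.
PAIR = {"1.0": "0.0", "0.0": "1.0", "1.1": "0.1", "0.1": "1.1"}

def copycat(adversary_move):
    """Echo the adversary's move into the component paired with it."""
    comp, move = adversary_move.rsplit(".", 1)
    return PAIR[comp] + "." + move

print(copycat("1.0.e2e4"))  # '0.0.e2e4'
print(copycat("0.1.d7d5"))  # '1.1.d7d5'
```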


3.6 Reduction


Intuitively, A → B is the problem of reducing B to A: solving A → B means solving B while having A as a computational resource. Specifically, Machine may observe how A is being solved (by the environment), and utilize this information in its own solving of B. As already pointed out, resources are symmetric to problems: what is a problem to solve for one player is a resource that the other player can use, and vice versa. Since A is negated in ¬A ∨ B and negation means switching the roles, A appears as a resource rather than a problem for Machine in A → B. Our copycat strategy for ¬Chess∨Chess was an example of reducing Chess to Chess. The same strategy was underlying Example 3.5.4, where ⊓x⊔y(y=x²) was reduced to itself.

Let us look at a more meaningful example: reducing the acceptance problem to the halting problem. The former, as a decision problem, will be written as ⊓x⊓y(¬Accepts(x,y) ⊔ Accepts(x,y)), where Accepts(x,y) is the predicate “Turing machine x accepts input y”. Similarly, the halting problem is written as ⊓x⊓y(¬Halts(x,y) ⊔ Halts(x,y)). Neither problem has an algorithmic solution, yet the following pimplication does:

⊓x⊓y(¬Halts(x,y) ⊔ Halts(x,y))  →  ⊓x⊓y(¬Accepts(x,y) ⊔ Accepts(x,y))

Here is Machine’s winning strategy for the above game. Wait till Environment makes moves 1.m and 1.n for some m and n. Making these moves essentially means asking the question “Does machine m accept input n?”.  If such moves are never made, you win. Otherwise, the moves bring the game down to

⊓x⊓y(¬Halts(x,y) ⊔ Halts(x,y))  →  ¬Accepts(m,n) ⊔ Accepts(m,n)

Make the moves 0.m and 0.n, thus asking the counterquestion “Does machine m halt on input n?”.  Your moves bring the game down to 

¬Halts(m,n) ⊔ Halts(m,n)  →  ¬Accepts(m,n) ⊔ Accepts(m,n)

Environment will have to answer this question, or else it loses (why?). If it answers by the move 0.0 (“No, m does not halt on n”), you make the move 1.0 (say “m does not accept n”). The game will be brought down to ¬Halts(m,n) → ¬Accepts(m,n). You win, because if m does not halt on n, then it does not accept n, either. Otherwise, if Environment answers by the move 0.1 (“Yes, m halts on n”), start simulating m on n until m halts. If you see that m accepted n, make the move 1.1 (say “m accepts n”); otherwise make the move 1.0 (say “m does not accept n”). Of course, it is a possibility that the simulation goes on forever. But then Environment has lied when saying “m halts on n”; in other words, the antecedent is false, and you still win.
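The strategy just described can be replayed in code against a toy universe of “machines”. Everything below is an illustration with hypothetical names (MACHINES, halts_oracle, decide_accepts), and the environment's halting answers are assumed truthful; in the game itself, a false answer would simply make the antecedent false and lose for Environment:

```python
# Toy universe: machine m on input n yields ("halt", accepted?) or ("loop", None).
MACHINES = {
    0: lambda n: ("halt", n % 2 == 0),   # halts everywhere, accepts even inputs
    1: lambda n: ("loop", None),         # never halts, hence never accepts
}

def halts_oracle(m, n):
    """Environment's (assumed truthful) answer to the counterquestion."""
    return MACHINES[m](n)[0] == "halt"

def decide_accepts(m, n):
    """Machine's strategy: ask about halting first, simulate only if safe."""
    if not halts_oracle(m, n):
        return False                     # no halting implies no acceptance
    return MACHINES[m](n)[1]             # simulation is guaranteed to finish

print(decide_accepts(0, 4), decide_accepts(0, 3), decide_accepts(1, 7))
```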

Note that what Machine did when following the above strategy was indeed reducing the acceptance problem to the halting problem: it solved the former using an external (Environment-provided) solution of the latter.

There are many natural concepts of reduction, and a strong case can be made in favor of the thesis that the sort of reduction captured by → is the most basic among them. For this reason, we agree that, if we simply say “reduction”, it always means the sort of reduction captured by →. A great variety of other reasonable concepts of reduction is expressible in terms of →. Among those is Turing reduction. Remember that a predicate q(x) is said to be Turing reducible to a predicate p(x) if q(x) can be decided by a Turing machine equipped with an oracle for p(x). For a positive integer n, n-bounded Turing reducibility is defined in the same way, with the only difference that here the oracle is allowed to be used only n times. It turns out that parallel rimplication is a conservative generalization of Turing reduction. Namely, when p(x) and q(x) are elementary games (i.e. predicates), q(x) is Turing reducible to p(x) if and only if the parallel rimplication from ⊓x(¬p(x) ⊔ p(x)) to ⊓x(¬q(x) ⊔ q(x)) has an algorithmic solution. If here we change the rimplication back to →, we get the same result for 1-bounded Turing reducibility. More generally, as one might guess, n-bounded Turing reduction will be captured by

⊓x1(¬p(x1) ⊔ p(x1)) ∧ … ∧ ⊓xn(¬p(xn) ⊔ p(xn))  →  ⊓x(¬q(x) ⊔ q(x)).

If, instead, we write 

⊓x1…⊓xn((¬p(x1) ⊔ p(x1)) ∧ … ∧ (¬p(xn) ⊔ p(xn)))  →  ⊓x(¬q(x) ⊔ q(x)),

then we get a conservative generalization of n-bounded weak truth-table reduction. The latter differs from n-bounded Turing reduction in that here all n oracle queries should be made at once, before seeing responses to any of those queries. What is called mapping (many-one) reducibility of q(x) to p(x) is nothing but computability of ⊓x⊔y(q(x) ↔ p(y)), where A ↔ B abbreviates (A → B) ∧ (B → A). We could go on and on with this list.
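The difference between adaptive and non-adaptive bounded oracle use can be modeled with a simple wrapper. The Python sketch below (hypothetical names) enforces an n-query budget for the adaptive, n-bounded Turing case; a weak truth-table reduction would additionally have to submit all of its queries before reading any answer:

```python
class BoundedOracle:
    """An oracle for predicate p that may be queried at most n times."""
    def __init__(self, p, n):
        self.p, self.budget = p, n

    def query(self, x):
        if self.budget == 0:
            raise RuntimeError("oracle query budget exhausted")
        self.budget -= 1
        return self.p(x)

# A 1-bounded Turing reduction of "x is odd" to "x is even":
def decide_odd(x, oracle):
    return not oracle.query(x)

print(decide_odd(3, BoundedOracle(lambda x: x % 2 == 0, 1)))  # True
```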

And yet many other natural concepts of reduction expressible in the language of CoL  may have no established names in the literature. For example, from the previous discussion it can be seen that a certain reducibility-style relation holds between the predicates Accepts(x,y) and Halts(x,y) in an even stronger sense than the algorithmic winnability of

⊓x⊓y(¬Halts(x,y) ⊔ Halts(x,y))  →  ⊓x⊓y(¬Accepts(x,y) ⊔ Accepts(x,y)).

In fact, not only does the above problem have an algorithmic solution, but so does the generally harder-to-solve problem

⊓x⊓y(¬Halts(x,y) ⊔ Halts(x,y)  →  ¬Accepts(x,y) ⊔ Accepts(x,y)).

Among the merits of CoL is that it offers a formalism and deductive machinery for systematically expressing and studying computation-theoretic relations such as reducibility, decidability, enumerability, etc., and all kinds of variations of such concepts.

Back to reducibility: while the standard approaches only allow us to talk about (whatever sort of) reducibility as a relation between problems, in our approach reduction becomes an operation on problems, with reducibility as a relation simply meaning computability of the corresponding combination (such as A → B) of games. Similarly for other relations or properties such as the property of decidability. The latter becomes the operation of deciding if we define the problem of deciding a predicate (or any computational problem) p(x) as the game ⊓x(¬p(x) ⊔ p(x)). So, now we can meaningfully ask questions such as “Is the reduction of the problem of deciding q(x) to the problem of deciding p(x) always reducible to the mapping reduction of q(x) to p(x)?”. This question would be equivalent to whether the following formula is valid in CoL:

⊓x⊔y(q(x) ↔ p(y))  →  (⊓x(¬p(x) ⊔ p(x)) → ⊓x(¬q(x) ⊔ q(x))).

The answer turns out to be “Yes”, meaning that mapping reduction is at least as strong as reduction. Here is a strategy which wins this game no matter what particular predicates p(x) and q(x) are: 

1. Wait till, for some m, Environment brings the game down to  

⊓x⊔y(q(x) ↔ p(y))  →  (⊓x(¬p(x) ⊔ p(x)) → ¬q(m) ⊔ q(m)).

2. Bring the game down to  

⊔y(q(m) ↔ p(y))  →  (⊓x(¬p(x) ⊔ p(x)) → ¬q(m) ⊔ q(m)).

3. Wait till, for some n, Environment brings the game down to  

(q(m) ↔ p(n))  →  (⊓x(¬p(x) ⊔ p(x)) → ¬q(m) ⊔ q(m)).

4. Bring the game down to  

(q(m) ↔ p(n))  →  (¬p(n) ⊔ p(n) → ¬q(m) ⊔ q(m)).

5. Wait till Environment brings the game down to one of the following:  

      5a. (q(m) ↔ p(n)) → (¬p(n) → ¬q(m) ⊔ q(m)). In this case, further bring it down to (q(m) ↔ p(n)) → (¬p(n) → ¬q(m)), and you have won.

      5b. (q(m) ↔ p(n)) → (p(n) → ¬q(m) ⊔ q(m)). In this case, further bring it down to (q(m) ↔ p(n)) → (p(n) → q(m)), and you have won, again.
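Stripped of the game-playing protocol, steps 1-5 implement a familiar recipe: a mapping reduction f with q(x) ↔ p(f(x)) converts any decision procedure for p into one for q. A minimal Python rendering of this idea (toy predicates, hypothetical names):

```python
def reduce_decider(f, decide_p):
    """Build a decider for q from a mapping reduction f (q(x) <-> p(f(x)))
    and a decider for p -- the computational content of steps 1-5."""
    return lambda m: decide_p(f(m))

# Toy instance: q(x) = "x is even", p(y) = "y equals 0", f(x) = x mod 2.
decide_q = reduce_decider(lambda x: x % 2, lambda y: y == 0)
print(decide_q(10), decide_q(7))  # True False
```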

We can also ask: “Is the mapping reduction of q(x) to p(x) always reducible to the reduction of the problem of deciding q(x) to the problem of deciding p(x)?”. This question would be equivalent to whether the following formula is valid:

(⊓x(¬p(x) ⊔ p(x)) → ⊓x(¬q(x) ⊔ q(x)))  →  ⊓x⊔y(q(x) ↔ p(y)).

The answer here turns out to be “No”, meaning that mapping reduction is properly stronger than reduction. This negative answer can be easily obtained by showing that the above formula is not provable in the deductive system CL4 that we are going to see later. CL4 is sound and complete with respect to validity. Its completeness implies that any formula which is not provable in it (such as the above formula) is not valid. And the soundness of CL4 implies that every provable formula is valid. So, had our ad hoc attempt to come up with a strategy for ⊓x⊔y(q(x) ↔ p(y)) → (⊓x(¬p(x) ⊔ p(x)) → ⊓x(¬q(x) ⊔ q(x))) failed, its validity could still have been easily established by finding a CL4-proof of it.

To summarize, CoL offers not only a convenient language for specifying computational problems and relations or operations on them, but also a systematic tool for answering questions in the above style and beyond.


3.7 Blind operations


Our definition of the blind universal quantifier (“blall”) ∀x and blind existential quantifier (“blexists”) ∃x assumes that the game A(x) to which they are applied satisfies the condition of unistructurality in x. This condition is weaker than the earlier seen unistructurality: every unistructural game is also unistructural in x, but not vice versa. Intuitively, unistructurality in x means that the structure of the game does not depend on (how the valuation evaluates) the variable x. Formally, we say that A(x) is unistructural in x iff, for any valuation e and any two constants a and b, we have LreA(a) = LreA(b). All constant or elementary games are unistructural in (whatever variable) x. And all operations of CoL are known to preserve unistructurality in x.

Definition 3.7.1 Assume x is a variable and A=A(x) is a game unistructural in x.

a)     ∀xA(x) is defined as the game G with UnG=UnA, VrG=VrA-{x} and with MpG such that, for any G-valuation e, we have:

b)     ∃xA(x) is defined as the game G with UnG=UnA, VrG=VrA-{x} and with MpG such that, for any G-valuation e, we have:

Intuitively, playing ∀xA(x) or ∃xA(x) means just playing A(x) “blindly”, without knowing the value of x. In ∀xA(x), Machine wins iff the play it generates is successful for every possible value of x from the domain, while in ∃xA(x) being successful for just one value is sufficient. When applied to elementary games, the blind quantifiers act exactly like the quantifiers of classical logic, even if not all individuals of the universe have naming constants.

From the definition one can see a perfect symmetry between ∀ and ∃. Therefore, as with the other quantifiers seen so far, the standard De Morgan laws and interdefinabilities hold.

Unlike ∧xA(x), which is a game on infinitely many boards, both ∀xA(x) and ⊓xA(x) are one-board games. Yet, they are very different from each other. To see this difference, compare the problems ⊓x(Even(x) ⊔ Odd(x)) and ∀x(Even(x) ⊔ Odd(x)). The former is an easily winnable game of depth 2: Environment selects a number, and Machine tells whether that number is even or odd. The latter, on the other hand, is a game which is impossible to win. This is a game of depth 1, where the value of x is not specified by either player, and only Machine moves --- tells whether (the unknown) x is even or odd. Whatever Machine says, it loses, because there is always a value for x that makes the answer wrong.

This should not suggest that nontrivial ∀-games can never be won. For instance, the problem ∀x(Even(x) ⊔ Odd(x) → ⊓y(Even(x+y) ⊔ Odd(x+y))) has an easy solution. The idea of a winning strategy is that, for any given y, in order to tell whether x+y is even or odd, it is not really necessary to know the value of x. Rather, just knowing whether x is even or odd is sufficient. And such knowledge can be obtained from the antecedent. In other words, for any known y and unknown x, the problem of telling whether x+y is even or odd is reducible to the problem of telling whether x is even or odd. Specifically, if both x and y are even or both are odd, then x+y is even; otherwise x+y is odd. Below is the evolution sequence induced by the run ⟨1.5, 0.0, 1.1⟩, where Machine used such a strategy.
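The machine's computation here uses only the parity of x, never x itself, which is what makes the blind quantifier winnable. As a small Python sketch (hypothetical name `parity_of_sum`):

```python
def parity_of_sum(x_is_even, y):
    """Answer for Even(x+y) / Odd(x+y) given only x's parity and the value
    of y: the sum is even iff x and y have the same parity."""
    return "Even" if x_is_even == (y % 2 == 0) else "Odd"

print(parity_of_sum(True, 5))   # 'Odd'  -- even x plus odd y
print(parity_of_sum(False, 5))  # 'Even' -- odd x plus odd y
```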

Machine won because the play hit the true ∀x(Even(x) → Odd(x+5)). Notice how ∀x persisted throughout the sequence. Generally, the (∀,∃)-structure of a game will remain unchanged in such sequences. The same is the case with the parallel operations, such as → in the present case.

Exercise 3.7.2 Are the following problems always computable?

xA(x)   xA(x)  answer

xA(x) xA(x)  answer

xA(x) xA(x)  answer

xA(x) xA(x)  answer

xA(x) xA(x)  answer

xA(x) xA(x)  answer

xA(x) xA(x)  answer

xA(x) xA(x)  answer

xA(x) xA(x)  answer

x(A(x)B(x))   xA(x)xB(x)  answer

x(A(x)B(x))   xA(x) xB(x)  answer

xA(x)xB(x)   x(A(x)B(x))  answer




3.8 Branching operations


There are only four branching operations that we consider: branching recurrence, branching corecurrence, branching rimplication and branching refutation. Let us talk about recurrence first.

What is common for the members of the family of game operations called recurrence operations is that, when applied to a game A, they turn it into a game playing which means repeatedly playing A. In terms of resources, recurrence operations generate multiple “copies” of A, thus making A a reusable/recyclable resource. In classical logic, recurrence-style operations would be meaningless, because classical logic, as we know, is resource-blind and thus sees no difference between one and multiple copies of A. In the resource-conscious CoL, however, recurrence operations are not only meaningful, but also necessary to achieve a satisfactory level of expressiveness and realize its potential and ambitions. Hardly any computer program is used only once; rather, it is run over and over again. Loops within such programs also assume multiple repetitions of the same subroutine. In general, the tasks performed in real life by computers, robots or humans are typically recurring ones or involve recurring subtasks.

There is more than one naturally emerging recurrence operation. The differences between various recurrence operations are related to how “repetition” or “reusage” is exactly understood. Imagine a computer that has a program successfully playing chess. The resource that such a computer provides is obviously something stronger than just Chess, for it permits playing Chess as many times as the user wishes, while Chess, as such, only assumes one play. The simplest operating system would allow the user to start a session of Chess, then --- after finishing or abandoning and destroying it --- start a new play again, and so on. The game that such a system plays --- i.e. the resource that it supports/provides --- is the sequential recurrence of Chess, which assumes an unbounded number of plays of Chess in a sequential fashion. A formal definition of the operation of sequential recurrence will be given later in Section 3.9.

A more advanced operating system, however, would not require the user to destroy the old sessions before starting new ones; rather, it would allow as many parallel sessions to run as the user needs. This is what is captured by the parallel recurrence of Chess, meaning nothing but the infinite parallel conjunction Chess ∧ Chess ∧ Chess ∧ … As we remember from Section 3.5, this operation is called parallel recurrence.

Yet a really good operating system would not only allow the user to start new sessions of Chess without destroying old ones; it would also make it possible to branch/replicate any particular stage of any particular session, i.e., create any number of “copies” of any already reached position of the multiple parallel plays of Chess, thus giving the user the possibility to try different continuations from the same position. What corresponds to this intuition is the branching recurrence (“brecurrence”) of Chess.

At the intuitive level, the difference between branching and parallel recurrence is that in the former, unlike the latter, Environment does not have to restart A from the very beginning every time it wants to reuse it (as a resource); rather, Environment is allowed to backtrack to any of the previous --- not necessarily starting --- positions and try a new continuation from there, thus depriving the adversary of the possibility to reconsider the moves it has already made in that position. This is in fact the type of reusage every purely software resource allows or would allow in the presence of an advanced operating system and unlimited memory: one can start running process A; then fork it at any stage thus creating two threads that have a common past but possibly diverging futures (with the possibility to treat one of the threads as a “backup copy” and preserve it for backtracking purposes); then further fork any of the branches at any time; and so on.

The less flexible type of reusage of A assumed by the parallel recurrence, on the other hand, is closer to what infinitely many autonomous physical resources would naturally offer, such as an unlimited number of independently acting robots each performing task A, or an unlimited number of computers with limited memories, each one only capable of and responsible for running a single thread of process A. Here the effect of forking an advanced stage of A cannot be achieved unless, by good luck, there are two identical copies of the stage, meaning that the corresponding two robots or computers have so far acted in precisely the same ways.

The formal definitions of branching recurrence and its dual branching corecurrence (“cobrecurrence”) in early papers on CoL [Jap03, Jap09a] were direct formalizations of the above intuitions, with an explicit presence of “replicative” moves used by players to fork a given thread of A and create two threads out of one. Later, in [Jap12b], another definition was found which was proven to be equivalent to the old one in the sense of mutual reducibility of the old and the new versions of the operations. The new definition less directly corresponds to the above intuitions, but is technically simpler, and we choose it as our “canonical” definition of branching operations. To be able to state it, we agree on the following:

Notation 3.8.1 Where Γ is a run and w is a bitstring (a finite or infinite sequence of 0s and 1s), Γ≼w means the result of deleting from Γ all moves except those that look like u.b for some initial segment u of w, and then further deleting the prefix “u.” from such moves.
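
To make the notation concrete, here is a minimal Python sketch of this projection, under the simplifying assumptions that a run is a list of strings and that a thread-addressed move is encoded as “u.b” with u a finite bitstring; the function name and the encoding are illustrative, not part of the text:

```python
def project(run, w):
    """Return the run of thread w: keep moves "u.b" where u is an
    initial segment of w, stripped of their "u." prefix."""
    out = []
    for move in run:
        if "." not in move:
            continue          # not a thread-addressed move; ignored in this toy model
        u, b = move.split(".", 1)
        if w.startswith(u):   # u is an initial segment of w
            out.append(b)
    return out

# Moves addressed to initial segments of "0110" survive; "10.c" does not.
print(project(["0.a", "01.b", "10.c", "011.d"], "0110"))  # ['a', 'b', 'd']
```

Note that a single move u.b lands in every thread extending u, which is exactly the “simultaneous move in a continuum of threads” effect described below.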

Definition 3.8.2 Assume A is a game.

a)     ⫰A is defined as the game G with UnG=UnA, VrG=VrA  and with MpG such that, for any G-valuation e, we have:

b)     ⫯A is defined as the game G with UnG=UnA, VrG=VrA  and with MpG such that, for any G-valuation e, we have:

The direct intuitions underlying the above definition are as follows. To play ⫰A or ⫯A means to simultaneously play in multiple parallel copies/threads of A. Each such thread is denoted by an infinite bitstring w (so, there are in fact uncountably many threads). Every legal move by either player looks like u.b for some finite bitstring u, and the effect/meaning of such a move is simultaneously making the move b in all threads w such that u is an initial segment of w. So, where Γ is the overall run of ⫰A or ⫯A, the run in a given thread w of A is Γ≼w. In order to win ⫰A, Machine needs to win A in all threads, while for winning ⫯A it is sufficient to win in just one thread.

It is obvious that ¬⫰A = ⫯¬A and ¬⫯A = ⫰¬A, hence ⫰A = ¬⫯¬A and ⫯A = ¬⫰¬A.

Branching recurrence ⫰ can be shown to be stronger than its parallel counterpart ∧|, in the sense that the principle ⫰A → ∧|A is valid while ∧|A → ⫰A is not. The two groups of recurrence operators, in isolation from each other, also validate different principles. For instance, A(AAA) A is valid while  A(AAA) A is not;  (A B)   AB is valid while  (AB)   AB is not;  x(A(x) A(x)) is valid while x(A(x) A(x)) is not.

Branching rimplication (“brimplication”) ∘– and branching refutation (“brefutation”) ∘¬ are defined in terms of ⫰, → and ⊥ the same way as parallel rimplication and parallel refutation are defined in terms of ∧|, → and ⊥:

Definition 3.8.3 

        a) A ∘– B  =def  ⫰A → B

        b) ∘¬A  =def  A ∘– ⊥

Exercise 3.8.4 The Kolmogorov complexity k(x) of a number x is the size of the smallest Turing machine which outputs x on input 0. The Kolmogorov complexity problem ⊓x⊔y(y=k(x)) has no algorithmic solution. Nor is it reducible to the halting problem in the strong sense of →, meaning that the problem ⊓x(Halts(x) ⊔ ¬Halts(x)) → ⊓x⊔y(y=k(x)) has no algorithmic solution either. The Kolmogorov complexity problem, however, is reducible to the halting problem in the weaker sense of ∘–, meaning that Machine has a winning strategy for ⊓x(Halts(x) ⊔ ¬Halts(x)) ∘– ⊓x⊔y(y=k(x)). Describe such a strategy, informally.

Just like >–, the operation ∘– is a conservative generalization of Turing reduction. Specifically, for any predicates p(x) and q(x), the problem ⊓x(p(x) ⊔ ¬p(x)) >– ⊓x(q(x) ⊔ ¬q(x)) is computable iff q(x) is Turing reducible to p(x) iff the problem ⊓x(p(x) ⊔ ¬p(x)) ∘– ⊓x(q(x) ⊔ ¬q(x)) is computable. This means that, when restricted to traditional sorts of problems such as decision problems, the behaviors of >– and ∘– are indistinguishable. This stops being the case, however, when these operators are applied to problems with higher degrees of interactivity. For instance, the following problem is computable, but becomes incomputable with >– instead of ∘–:

yx(Halts(x,y)  Halts(x,y))  y(x(Halts(x,y) Halts(x,y))  x(Halts(x,y) Halts(x,y))).

Generally, (A >– B) → (A ∘– B) is valid but (A ∘– B) → (A >– B) is not.

While both >– and ∘– are weaker than → and hence more general, ∘– is still a more interesting operation of reduction than >–. What makes it special is the belief expressed in Thesis 3.8.5 below, which, in turn, is based on the belief that ⫰ (and by no means ∧|) is the operation that allows one to reuse its argument in the strongest algorithmic sense possible.

Let A and B be computational problems (games). We say that B is brimplicationally (resp. primplicationally, pimplicationally, etc.) reducible to A iff A ∘– B (resp. A >– B, A → B, etc.) has an algorithmic solution (winning strategy).

Thesis 3.8.5  Brimplicational reducibility is an adequate mathematical counterpart of our intuition of reducibility in the weakest (and thus most general) algorithmic sense possible. Specifically:

   (a) Whenever a problem B is brimplicationally reducible to a problem A, B is also algorithmically reducible to A according to anyone’s reasonable intuition.

   (b) Whenever a problem B is algorithmically reducible to a problem A according to anyone’s reasonable intuition, B is also brimplicationally reducible to A.

This is pretty much in the same sense as, by the Church-Turing thesis, a function f is computable by a Turing machine iff f is computable according to anyone’s reasonable intuition.


3.9 Sequential operations


The sequential conjunction (“sand”) A△B of games A and B starts and proceeds as A. It will also end as A unless, at some point, Environment decides to switch to the next component, in which case A is abandoned, and the game restarts, continues and ends as B. The sequential disjunction (“sor”) A▽B of A and B is similar, with the difference that it is Machine who decides whether and when to switch from A to B.

The original formal definition of A△B and A▽B found in [Jap08b] was a direct formalization of the above description. Definition 3.9.1 given below, while less direct, still faithfully formalizes the above intuitions as long as only static games are considered, and we opt for it because it is technically simpler. Specifically, Definition 3.9.1 allows either player to continue making moves in A even after a switch takes place; such moves are meaningless but harmless. Similarly, it allows either player to make moves in B without waiting for a switch to take place, even though a smart player would only start making such moves if and when a switch happens.

Definition 3.9.1 Assume A0 and A1 are games with a common universe U.

a)     A0△A1 is defined as the game G with UnG=U, VrG=VrA0∪VrA1  and with MpG such that, for any G-valuation e, we have:

b)     A0▽A1 is defined as the game G with UnG=U, VrG=VrA0∪VrA1  and with MpG such that, for any G-valuation e, we have:

Recall that, for a predicate p(x), ⊓x(p(x) ⊔ ¬p(x)) is the problem of deciding p(x). Then what is the similar-looking ⊓x(¬p(x) ▽ p(x))? If you’ve guessed that this is the problem of semideciding p(x), you are right. Machine has a winning strategy in this game if and only if p(x) is semidecidable, i.e., recursively enumerable. Namely, if p(x) is recursively enumerable, a winning strategy for Machine is to wait until Environment brings the game down to ¬p(n) ▽ p(n) for some particular n. After that, Machine starts looking for a certificate of p(n)’s truth. If and when such a certificate is found (meaning that p(n) is indeed true), Machine makes a switch move, turning ¬p(n) ▽ p(n) into the true p(n); and if no certificate exists (meaning that p(n) is false), then Machine keeps looking for a nonexistent certificate forever and thus never makes any moves, so that the game ends as ¬p(n), which, again, is true. And vice versa: any effective winning strategy for ⊓x(¬p(x) ▽ p(x)) can obviously be seen as a semidecision procedure for p(x), which accepts an input n iff the strategy ever makes a switch move in the scenario where Environment’s initial choice of a value for x is n.
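
The semidecision strategy just described can be sketched in Python. The sequential-disjunction game is modeled in the simplest possible way: the strategy either makes its single switch move upon finding a certificate, or never moves at all. The names, the step bound (which keeps the demo terminating), and the composite-number example are illustrative choices, not part of the text:

```python
def semidecide_strategy(n, certificate_ok, max_steps=10**6):
    """Stay silent while no certificate of p(n) is found; make the
    single switch move as soon as one turns up. max_steps bounds the
    search so the demo terminates; a real strategy searches forever."""
    for c in range(max_steps):
        if certificate_ok(n, c):
            return "switch"   # the game now continues as p(n), which is true
    return None               # no move ever made; the game ends in the default component

# p(n) = "n is composite"; a certificate is a divisor d with 1 < d < n.
is_divisor = lambda n, d: 1 < d < n and n % d == 0
print(semidecide_strategy(9, is_divisor))   # switch
print(semidecide_strategy(7, is_divisor))   # None
```

Reading the switch/no-switch behavior off this procedure is precisely the “vice versa” direction of the equivalence above.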

Existence of effective winning strategies for games has been shown [Jap03] to be closed under the rules ‘from A→B and A conclude B’, ‘from A and B conclude A∧B’, ‘from A conclude ⊓xA’, and ‘from A conclude ⫰A’. In view of these closures, the validity of the principles discussed below implies certain known facts from the theory of computation. Needless to say, such examples once again demonstrate how CoL can be used as a systematic tool for defining new interesting properties and relations between computational problems, not only reproducing already known theorems but also discovering an infinite variety of new facts.

The valid formula ⊓x(¬p(x) ▽ p(x)) ∧ ⊓x(p(x) ▽ ¬p(x)) → ⊓x(p(x) ⊔ ¬p(x)) “expresses” the well known fact that, if both a predicate p(x) and its complement ¬p(x) are recursively enumerable, then p(x) is decidable. Actually, the validity of this formula means something more: it means that the problem of deciding p(x) is reducible to the (∧-conjunction of) the problems of semideciding p(x) and ¬p(x). In fact, a reducibility in an even stronger sense (a sense that has no name) holds, expressed by the formula ⊓x((¬p(x) ▽ p(x)) ∧ (p(x) ▽ ¬p(x)) → p(x) ⊔ ¬p(x)).

The formula ⊓x⊔y(q(x) ↔ p(y)) ∧ ⊓x(¬p(x) ▽ p(x)) → ⊓x(¬q(x) ▽ q(x)) is also valid, which implies the known fact that, if a predicate q(x) is mapping reducible to a predicate p(x) and p(x) is recursively enumerable, then so is q(x). Again, the validity of this formula in fact means something more: it means that the problem of semideciding q(x) is reducible to the (∧-conjunction of) the problems of mapping reducing q(x) to p(x) and semideciding p(x).

Certain other reducibilities hold only in the sense of rimplications rather than implications. Here is an example. Two Turing machines are said to be equivalent iff they accept exactly the same inputs. Let Neq(x,y) be the predicate ‘Turing machines x and y are not equivalent’. This predicate is neither semidecidable nor co-semidecidable. The problem of semideciding it, however, primplicationally (and hence also brimplicationally) reduces to the halting problem. Specifically, Machine has an effective winning strategy for the following game:

⊓z⊓t(Halts(z,t) ⊔ ¬Halts(z,t)) >– ⊓x⊓y(¬Neq(x,y) ▽ Neq(x,y)).

A strategy here is to wait till Environment specifies some values m and n for x and y. Then, create a variable i, initialize it to 1 and do the following. Specify z and t as m and i in one yet-unused copy of the antecedent, and as n and i in another yet-unused copy. That is, ask Environment whether m halts on input i and whether n halts on the same input. Environment will have to provide the correct pair of answers, or else it loses. (1) If the answers are “No,No”, increment i to i+1 and repeat the step. (2) If the answers are “Yes,Yes”, simulate both m and n on input i until they halt. If both machines accept or both reject, increment i to i+1 and repeat the step. Otherwise, if one accepts and one rejects, make a switch move in the consequent and celebrate victory. (3) If the answers are “Yes,No”, simulate m on i until it halts. If m rejects i, increment i to i+1 and repeat the step. Otherwise, if m accepts i, make a switch move in the consequent and you win. (4) Finally, if the answers are “No,Yes”, simulate n on i until it halts. If n rejects i, increment i to i+1 and repeat the step. Otherwise, if n accepts i, make a switch move in the consequent and you win.
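
Here is a hedged Python model of this strategy. The halting oracle and the simulator are stand-ins: in the toy model, “machines” are just lookup tables, with None marking divergence, so that halts and run can be answered by table lookup. As the description implies, the loop genuinely runs forever when the two machines are equivalent:

```python
def neq_strategy(m, n, halts, run):
    """Semidecide "machines m and n are not equivalent", given a
    (reusable) halting oracle. halts(z, t): does z halt on input t?
    run(z, t): simulate z on t to completion; True = accept, False = reject."""
    i = 1
    while True:
        hm, hn = halts(m, i), halts(n, i)   # two fresh oracle queries
        if hm and hn:
            if run(m, i) != run(n, i):      # one accepts, one rejects
                return ("switch", i)
        elif hm != hn:
            z = m if hm else n              # only the halting machine can accept i
            if run(z, i):
                return ("switch", i)
        i += 1                              # answers agree on i; try the next input

# Toy "machines": input -> accept/reject tables; missing input = divergence.
MACHINES = {
    "m": {1: False, 2: True, 3: False},
    "n": {1: False, 2: True, 3: True},
}
halts = lambda z, t: MACHINES[z].get(t) is not None
run = lambda z, t: MACHINES[z][t]
print(neq_strategy("m", "n", halts, run))   # ('switch', 3)
```

The four numbered cases of the text collapse into two branches here: when both machines halt, their verdicts are compared directly; when exactly one halts, only that machine's verdict matters.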

The sequential universal quantification (“sall”) △xA(x) of A(x) is essentially nothing but the infinite sequential conjunction A(0)△A(1)△A(2)△…; the sequential existential quantification (“sexists”) ▽xA(x) of A(x) is A(0)▽A(1)▽A(2)▽…; the sequential recurrence (“srecurrence”) of A is A△A△A△…; and the sequential corecurrence (“cosrecurrence”) of A is A▽A▽A▽…. Formally, we have:

Definition 3.9.2  Assume x is a variable and A=A(x) is a game.

a)     △xA(x) is defined as the game G with UnG=UnA, VrG=VrA-{x}  and with MpG such that, for any G-valuation e, we have:

b)     ▽xA(x) is defined as the game G with UnG=UnA, VrG=VrA-{x}  and with MpG such that, for any G-valuation e, we have:


Definition 3.9.3 Assume A is a game.

a)     A is defined as the game G with UnG=UnA, VrG=VrA  and with MpG such that, for any G-valuation e, we have:

b)     A is defined as the game G with UnG=UnA, VrG=VrA and with MpG such that, for any G-valuation e, we have:

For an illustration, remember the Kolmogorov complexity function k(x). The value of k(x) is known to be bounded, not exceeding x or a certain constant const, whichever is greater. While ⊓x⊔y(y=k(x)) is not computable, Machine does have an algorithmic winning strategy for the sequential version of this problem, in which the component ⊔y(y=k(x)) may be retried in new copies. It goes like this: Wait till Environment specifies a value m for x, thus asking “what is the Kolmogorov complexity of m?” and bringing the game down to ⊔y(y=k(m)). Answer that it is m, i.e. specify y as m, and after that start simulating, in parallel, all machines n smaller than m on input 0. Whenever you find a machine n that returns m on input 0 and is smaller than any of the previously found such machines, make a switch move and, in the new copy of ⊔y(y=k(m)), specify y as the size |n| of n or as const, whichever is greater. This obviously guarantees success: sooner or later the real Kolmogorov complexity c of m will be reached and named; and, even though the strategy will never be sure that k(m) is not something still smaller than c, it will never actually find a reason to reconsider its latest claim that c=k(m).
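
The essence of this strategy, a default answer followed by finitely many downward corrections of which the last is the true value, can be sketched in Python. Here out and size are illustrative stand-ins for simulating machine n on input 0 and measuring its size, and the "guesses" list is just the sequence of answers announced during the play:

```python
def k_approx_strategy(m, out, size):
    """Announce the default claim k(m) = m, then finitely many
    downward corrections, the last of which is correct in this toy
    model. out(n): what machine n prints on input 0; size(n): its size."""
    guesses = [m]                          # default claim: k(m) = m
    best = None
    for n in range(m):                     # "simulate all machines smaller than m"
        if out(n) == m and (best is None or size(n) < best):
            best = size(n)
            guesses.append(best)           # switch move: revise the claim downward
    return guesses

# Toy world: machine n prints 2n; the smallest machine printing 8 is n=4,
# whose size (bit length) is 3.
print(k_approx_strategy(8, lambda n: 2 * n, int.bit_length))  # [8, 3]
```

The strategy never knows that its latest guess is final, but since the guesses strictly decrease, only finitely many corrections can ever occur, which is exactly what the sequential setting requires.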

Exercise 3.9.4 Describe a winning strategy for xy(k(x)=(x-y)).

As expected, sequential implication (“simplication”) ,  sequential rimplication (“srimplication”)  and sequential refutation (“srefutation”)  are defined as follows:

Definition 3.9.5    a) A B  =def  A  B

                              b) A B  =def  A B

                              c)  A  =def  A  



3.10 Toggling operations


As with all other sorts of conjunctions and disjunctions, toggling conjunction (“tand”) ⩚ and toggling disjunction (“tor”) ⩛ are dual to each other, and the definition of one is obtained from the definition of the other by interchanging the roles of the two players. Let us just focus on disjunction. One of the ways to characterize A⩛B is the following. This game starts and proceeds as a play of A. It will also end as an ordinary play of A unless, at some point, Machine decides to switch to B, after which the game becomes and continues as B. It will also end as B unless, at some point, Machine decides to switch back to A. In such a case the game again becomes A, where A resumes from the position in which it was abandoned (rather than from its start position, as would be the case, say, in A▽B▽A). Later Machine may again switch to (the abandoned position of) B, and so on. Machine wins the overall play iff it switches from one component to another (“changes its mind”, or “corrects its mistake”) at most finitely many times and wins in its final choice, i.e., in the component to which it last switched.

An alternative way to characterize A⩛B is to say that it is played exactly like A∨B, with the only difference that Machine is allowed to make a ‘choose A’ or ‘choose B’ move some finite number of times. If infinitely many choices are made, Machine loses. Otherwise, the winner in the play will be the player who wins in the component that was chosen last (“the eventual choice”). The case of Machine having made no choices at all is treated as if it had chosen A. Thus, as in sequential disjunction, the leftmost component is the “default”, or “automatically made”, initial choice.

It is important to note that the adversary (or perhaps even the machine itself) never knows whether a given choice of a component of A⩛B is the last choice or not. Otherwise, if Machine were required to indicate that it had made its final choice, then the resulting operation, for static games, would essentially be the same as A⊔B. Indeed, as we remember, in static games it never hurts a player to postpone making moves, so Environment could just inactively wait till the last choice is declared, and start playing the chosen component only after that, as in the case of A⊔B; under these circumstances, making some temporary choices before the final one would not make any sense for Machine, either.

What would happen if we did not require that Machine change its mind only finitely many times? There would be no “final choice” in this case. So, the only natural winning condition in the case of infinitely many choices would be to say that Machine wins iff it simply wins in one of the components. But then the resulting operation would be essentially the same as ∨, as a smart machine would always opt to keep switching between components forever. That is, allowing infinitely many choices would amount to not requiring any choices at all, as is the case with A∨B.

One may also ask what would happen if we allowed Machine to make an arbitrary initial choice between A and B and then reconsider its choice only (at most) once. Such an operation on games, albeit meaningful, would not be basic, because it can be expressed through our primitives as (A▽B) ⊓ (B▽A).

The very weak sort of choice captured by  is the kind of choice that, in real life, one would ordinarily call choice after trial and error. Indeed, a problem is generally considered to be solved after trial and error (a correct choice/solution/answer found) if, after perhaps coming up with several wrong solutions, a true solution is eventually found. That is, mistakes are tolerated and forgotten as long as they are eventually corrected. It is however necessary that new solutions stop coming at some point, so that there is a last solution whose correctness determines the success of the effort. Otherwise, if answers have kept changing all the time, no answer has really been given after all. Or, imagine Bob has been married and divorced several times. Every time he said “I do”, he probably honestly believed that this time, at last, his bride was “the one”, with whom he would live happily ever after.  Bob will be considered to have found his Ms. Right after all if and only if one of his marriages indeed turns out to be happy and final.   

As we remember, for a predicate p(x), ⊓x(p(x) ⊔ ¬p(x)) is the problem of deciding p(x), and ⊓x(¬p(x) ▽ p(x)) is the weaker (easier to solve) problem of semideciding p(x). Not surprisingly, ⊓x(¬p(x) ⩛ p(x)) is also a decision-style problem, but one still weaker than the problem of semideciding p(x). This problem has been studied in the literature under several names, the most common of which is recursively approximating p(x). It means telling whether p(x) is true or not, but doing so in the same style as semideciding does in negative cases: by correctly saying “Yes” or “No” at some point (after perhaps taking back previous answers several times) and never reconsidering this answer afterwards. Observe that semideciding p(x) can be seen as always saying “No” at the beginning and then, if this answer is incorrect, changing it to “Yes” at some later time; so, when the answer is negative, this will be expressed by saying “No” and never taking that answer back, yet without ever indicating that it is final and will not change. Thus, the difference between semideciding and recursively approximating is that, unlike a semidecision procedure, a recursive approximation procedure can reconsider both negative and positive answers, and do so several times rather than only once.

It is known that a predicate p(x) is recursively approximable (i.e., the problem of its recursive approximation has an algorithmic solution) iff p(x) has the arithmetical complexity Δ2, i.e., iff both p(x) and its complement ¬p(x) can be written in the form ∃z∀y s(z,y,x), where s(z,y,x) is a decidable predicate. Let us see that, indeed, algorithmic solvability of ⊓x(¬p(x) ⩛ p(x)) is equivalent to p(x)’s being of complexity Δ2.

First, assume p(x) is of complexity Δ2; specifically, for some decidable predicates q(z,y,x) and r(z,y,x), we have p(x) = ∃z∀y q(z,y,x) and ¬p(x) = ∃z∀y r(z,y,x). Then ⊓x(¬p(x) ⩛ p(x)) is solved by the following strategy. Wait till Environment specifies a value m for x, thus bringing the game down to ¬p(m) ⩛ p(m). Then create the variables i and j, initialize both to 1, choose the p(m) component, and do the following:

Step 1: Check whether q(i,j,m) is true. If yes, increment j to j+1 and repeat Step 1. If not, switch to the ¬p(m) component, reset j to 1, and go to Step 2.

Step 2: Check whether r(i,j,m) is true. If yes, increment j to j+1 and repeat Step 2. If not, switch to the p(m) component, reset j to 1, increment i to i+1, and go to Step 1.

With a little thought, one can see that the above algorithm indeed solves ⊓x(¬p(x) ⩛ p(x)).
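
Assuming decidable predicates q and r as above, the two-step routine transcribes into Python almost literally. The step_budget parameter truncates the demo, since a real play is infinite, and the returned pair (final component, number of switches) is just a convenient summary of the generated play; all names are illustrative:

```python
def recursively_approximate(m, q, r, step_budget=1000):
    """Play the toggling game for input m, where (by assumption)
    p(m) iff there is z such that q(z,y,m) holds for all y, and
    not-p(m) iff there is z such that r(z,y,m) holds for all y."""
    i, j = 1, 1
    choice = "p"                        # start by choosing the p(m) component
    switches = 0
    for _ in range(step_budget):
        if choice == "p":
            if q(i, j, m):
                j += 1                  # Step 1: q(i,j,m) holds, keep testing
            else:
                choice, j = "not-p", 1  # counterexample found: switch components
                switches += 1
        else:
            if r(i, j, m):
                j += 1                  # Step 2: r(i,j,m) holds, keep testing
            else:
                choice, j, i = "p", 1, i + 1
                switches += 1
    return choice, switches

# Degenerate example: p(m) = "m is even"; q and r ignore their witnesses.
print(recursively_approximate(4, lambda i, j, m: m % 2 == 0,
                              lambda i, j, m: m % 2 == 1))   # ('p', 0)
```

Under the Δ2 assumption, one of the two searches eventually runs forever, so the toggling stops after finitely many switches, with the last choice correct.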

For the opposite direction, assume a given algorithm Alg solves ⊓x(¬p(x) ⩛ p(x)). Let q(z,y,x) be the predicate such that q(i,j,m) is true iff, in the scenario where the environment specified x as m at the beginning of the play, so that the game was brought down to ¬p(m) ⩛ p(m), we have: (1) at the ith computation step, Alg chose the p(m) component; (2) at the jth computation step, Alg did not move. Quite similarly, let r(z,y,x) be the predicate such that r(i,j,m) is true iff, in the same scenario, we have: (1) either i=1 or, at the ith computation step, Alg chose the ¬p(m) component; (2) at the jth computation step, Alg did not move. Of course, both q(z,y,x) and r(z,y,x) are decidable predicates, and hence so are y>z → q(z,y,x) and y>z → r(z,y,x). Now, it is not hard to see that p(x) = ∃z∀y(y>z → q(z,y,x)) and ¬p(x) = ∃z∀y(y>z → r(z,y,x)). So, p(x) is indeed of complexity Δ2.

As a real-life example of a predicate which is recursively approximable but neither semidecidable nor co-semidecidable, consider the predicate k(x)<k(y), saying that the number x is simpler than the number y in the sense of Kolmogorov complexity. As noted earlier, k(z) (the Kolmogorov complexity of z) is bounded, never exceeding max(z,const) for a certain constant const. Here is an algorithm which recursively approximates the predicate k(x)<k(y), i.e., solves the problem ⊓x⊓y(¬(k(x)<k(y)) ⩛ k(x)<k(y)). Wait till Environment brings the game down to ¬(k(m)<k(n)) ⩛ k(m)<k(n) for some m and n. Then start simulating, in parallel, all Turing machines t satisfying t ≤ max(m,n,const) on input 0. Whenever you see that a machine t returns m and the size of t is smaller than that of any other previously found machine that returns m or n on input 0, choose k(m)<k(n). Quite similarly, whenever you see that a machine t returns n and the size of t is smaller than that of any other previously found machine that returns n on input 0, as well as smaller than or equal to the size of any previously found machine that returns m on input 0, choose ¬(k(m)<k(n)). Obviously, the correct choice between k(m)<k(n) and ¬(k(m)<k(n)) will be made sooner or later and never reconsidered afterwards. This will happen when the procedure hits --- in the role of t --- a smallest-size machine that returns either m or n on input 0.
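
A sketch of this approximation strategy, with out, size and bound as toy stand-ins for simulation, machine size, and the a priori bound on relevant machines; each appended string marks one toggling re-choice, and the last element is the final (correct) choice:

```python
def compare_complexity(m, n, out, size, bound):
    """Recursively approximate "k(m) < k(n)". Scan candidate machines t
    (idealizing dovetailed simulation as a simple bounded scan) and
    re-choose a disjunct whenever a suitably small machine producing
    m or n shows up."""
    best_m = best_n = bound + 1          # no machine found yet
    choices = []
    for t in range(bound + 1):
        if out(t) == m and size(t) < min(best_m, best_n):
            best_m = size(t)
            choices.append("k(m) < k(n)")
        elif out(t) == n and size(t) < best_n and size(t) <= best_m:
            best_n = size(t)
            choices.append("not k(m) < k(n)")
    return choices

# Toy world: machine 7 prints 100, machine 8 prints 200, machine 10 prints 100;
# machine size is just the machine's number.
out = lambda t: {7: 100, 8: 200, 10: 100}.get(t)
size = lambda t: t
print(compare_complexity(100, 200, out, size, 20)[-1])  # k(m) < k(n)
```

The asymmetric tie-breaking (strict inequality for m, non-strict for n) mirrors the two "whenever" clauses of the text and guarantees that the eventual smallest-size witness settles the choice for good.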

Anyway, here is our formal definition of the toggling conjunction and disjunction:

Definition 3.10.1  Assume A0 and A1 are games with a common universe U.

a)     A0⩚A1 is defined as the game G with UnG=U, VrG=VrA0∪VrA1  and with MpG such that, for any G-valuation e, we have:

b)     A0⩛A1 is defined as the game G with UnG=U, VrG=VrA0∪VrA1  and with MpG such that, for any G-valuation e, we have:

As expected, the toggling universal quantification (“tall”) ⩚xA(x) of A(x) is essentially nothing but A(0)⩚A(1)⩚A(10)⩚A(11)⩚…, and the toggling existential quantification (“texists”) ⩛xA(x) of A(x) is A(0)⩛A(1)⩛A(10)⩛A(11)⩛…. Formally, we have:

Definition 3.10.2 Assume x is a variable and A=A(x) is a game.

a)     ⩚xA(x) is defined as the game G with UnG=UnA, VrG=VrA-{x}  and with MpG such that, for any G-valuation e, we have:

b)     ⩛xA(x) is defined as the game G with UnG=UnA, VrG=VrA-{x}  and with MpG such that, for any G-valuation e, we have:

For an example illustrating toggling quantifiers at work, remember that Kolmogorov complexity k(x) is not a computable function, i.e., the problem ⊓x⊔y(y=k(x)) has no algorithmic solution. However, replacing ⊔y with ⩛y in it yields an algorithmically solvable problem. A solution for ⊓x⩛y(y=k(x)) goes like this. Wait till the environment chooses a number m for x, thus bringing the game down to ⩛y(y=k(m)), which is essentially nothing but 0=k(m) ⩛ 1=k(m) ⩛ 2=k(m) ⩛ …. Initialize i to a sufficiently large number, such as i=max(|m|,const), where const is the constant mentioned earlier, and then do the following routine: switch to the disjunct i=k(m) of 0=k(m) ⩛ 1=k(m) ⩛ 2=k(m) ⩛ …, and then start simulating on input 0, in parallel, all Turing machines whose sizes are smaller than i; if and when you see that one of such machines returns m, update i to the size of that machine, and repeat the present routine.

We close this section with formal definitions of toggling recurrence (“trecurrence”), toggling corecurrence (“cotrecurrence”), toggling implication (“timplication”), toggling rimplication (“trimplication”) and toggling refutation (“trefutation”):

Definition 3.10.3 Assume A is a game.

a)     A is defined as the game G with UnG=UnA, VrG=VrA  and with MpG such that, for any G-valuation e, we have:

b)     A is defined as the game G with UnG=UnA, VrG=VrA and with MpG such that, for any G-valuation e, we have:

Definition 3.10.4    a) A B  =def  AB

                                b) AB  =def  A B

                                c) A  =def  A



3.11 Cirquents

 

The syntactic constructs called cirquents, briefly mentioned earlier, take the expressive power of CoL to a qualitatively higher level, allowing us to form, in a systematic way, an infinite variety of game operations. Each cirquent is --- or can be seen as --- an independent operation on games, generally not expressible via composing operations taken from some fixed finite pool of primitives, such as the operations seen in the preceding subsections of the present section.

 

Cirquents come in a variety of versions, and their main common characteristic feature is having mechanisms for explicitly accounting for possible sharing of subcomponents between different components. Sharing is the main feature distinguishing cirquents from more traditional means of expression such as formulas, sequents, hypersequents [Avr87], or structures of the calculus of structures [Gug07]. While the latter can be drawn as (their parse) trees, cirquents more naturally call for circuit- or graph-style constructs. The earliest cirquents were intuitively conceived as collections of sequents (sequences of formulas) that could share some formulas and, as such, could be drawn like circuits rather than linear expressions. This explains the etymology of the word: CIRcuit + seQUENT. All Boolean circuits are thus cirquents, but not all cirquents are Boolean circuits. Firstly, because cirquents may have various additional sorts of gates (⊓-gates, ⊔-gates, etc.). Secondly, because cirquents may often have more evolved sharing mechanisms than just child-sharing between different gates. For instance, in addition to (or instead of) the possibility of sharing children, a “cluster” [Jap11b] of ⊔-gates may also share choices associated with ⊔ in game-playing: if the machine chooses the left or the right child for one gate of the cluster, then the same left or right choice automatically extends to all gates of the cluster.

For simplicity considerations, we are not going to introduce cirquents and their semantics in full generality and formal detail here. This is done in [Jap06c, Jap08a, Jap11b, Jap13a]. Instead, to get some intuitive insights, let us only focus on cirquents that look like Boolean circuits with - and -gates. Every such cirquent C can be seen as an n-ary operation on games, where n is the number of inputs of C. For instance, the cirquent

represents the 3-ary game operation informally defined as follows. Playing (A,B,C), as is the case with all parallel operations, means playing simultaneously in all components of it. In order to win, Machine needs to win in at least two out of the three components. Any attempt to express this operation in terms of ∧, ∨ or other already defined operations is going to fail. For instance, the natural candidate (A∨B)∧(A∨C)∧(B∨C) is dramatically inadequate. This is a game on six rather than three boards, with A played on boards #1 and #3, B on boards #2 and #5, and C on boards #4 and #6. A special case of it is (A∨A)∧(A∨A)∧(A∨A), which fails to indicate, for instance, that the 1st and the 3rd occurrences of A stand for the same copy of A while the 2nd occurrence stands for a different copy in which a different run can be generated.
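
Classically, that is, at the level of truth values rather than games, the "two out of three" operation coincides with the six-board formula, which is exactly why only the game/resource reading can tell the two apart. The coincidence itself is easy to verify exhaustively:

```python
from itertools import product

# maj is the truth function of the "two out of three" cirquent;
# formula is the six-board candidate (A or B) and (A or C) and (B or C).
maj = lambda a, b, c: (a + b + c) >= 2
formula = lambda a, b, c: (a or b) and (a or c) and (b or c)

# Identical truth tables on all 8 classical valuations...
assert all(maj(*v) == bool(formula(*v)) for v in product([0, 1], repeat=3))
# ...so the difference is invisible to classical logic and shows up
# only in the game-semantical reading, where sharing matters.
print("classically equivalent on all 8 valuations")
```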

As we just had a chance to see, even at the most basic (∧,∨) level, cirquents are properly more expressive than formulas. It is this added expressiveness and flexibility that, for some fragments of CoL, makes the difference between axiomatizability and unaxiomatizability: even if one is only trying to set up a deductive system for proving valid formulas, intermediate steps in proofs of such formulas still inherently require using cirquents that cannot be written as formulas [Jap06c, Jap13a, Jap13b]. An example is the system CL15 presented in Section 6.4.

In the present piece of writing we are exclusively focused on the formula-based version of CoL, seeing cirquents (in Section 6.4) only as technical servants to formulas. This explains why we do not attempt to define the semantics of cirquents formally.  It should however be noted that cirquents are naturally called for not only within the specific formal framework of CoL, but also in the framework of all resource-sensitive approaches in logic. Such approaches intrinsically require the ability to account for the possibility of resource sharing, as sharing is a ubiquitous phenomenon in the world of resources. Yet formula- or sequent-based approaches do not and cannot possess such an ability. The following naive example may provide some insights. It is about the earlier discussed (A,A,A) combination, seen as a combination of resources in the abstract/naive sense rather than the specific game-semantical sense of CoL.  

Imagine a vending machine that has slots for 25-cent (25c) coins, with each slot taking a single coin. Coins can be true (authentic) or false (counterfeited). Inserting a false coin into a slot fills the slot up (so that no other coins can be inserted into it until the operation is complete), but otherwise does not fool the machine into thinking that it has received 25 cents. A candy costs 50 cents, and the machine will dispense a candy if at least two of its slots receive true coins. Pressing the “dispense” button while having inserted anything less than 50 cents, such as a single coin, or one true and two false coins, results in a non-recoverable loss.

Victor has three 25c coins, and he knows that two of them are true while one is perhaps false (but he has no way to tell which one). Could he get a candy? Expected or not, the answer depends on the number of slots that the machine has. Consider two cases: machine M2 with two slots, and machine M3 with three slots. Victor would have no problem with M3: he can insert his three coins into the three slots, and the machine, having received ≥50c, will dispense a candy. With M2, however, Victor is powerless. He can try inserting any two of his three coins into the two slots of the machine, but there is no guarantee that one of those two coins is not false, in which case Victor will end up with no candy and only 25 cents remaining in his pocket.
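
Victor's situation can be checked exhaustively in a few lines, with coins represented by indices and the unknown false coin ranging over all three possibilities:

```python
from itertools import combinations

coins = [0, 1, 2]   # Victor's coins; exactly one of them (unknown) is false

def dispenses(inserted, false_coin):
    """A machine dispenses iff it receives at least two true 25c coins."""
    return sum(1 for c in inserted if c != false_coin) >= 2

# M3 (three slots): inserting all three coins guarantees a candy,
# whichever coin happens to be the false one.
assert all(dispenses(coins, f) for f in coins)

# M2 (two slots): every way of picking two coins fails for some
# possible identity of the false coin.
assert all(any(not dispenses(list(pair), f) for f in coins)
           for pair in combinations(coins, 2))
print("M3 guarantees a candy; M2 does not")
```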

Both M2 and M3 can be understood as resources --- resources turning coins into a candy. And note that these two resources are not the same: M3 is obviously stronger (“better”), as it allows Victor to get a candy whereas M2 does not, while, at the same time, anyone rich enough to make M2 dispense a candy would be able to do the same with M3 as well. Yet, formulas fail to capture this important difference. With →, ∧, ∨ here understood as abstract multiplicative-style operations on resources, M2 and M3 can be written as R2→Candy and R3→Candy, respectively: they consume a certain resource R2 or R3 and produce Candy. What makes M3 stronger than M2 is that the subresource R3 that it consumes is weaker (easier to supply) than the subresource R2 consumed by M2. Specifically, with one false and two true coins, Victor is able to satisfy R3 but not R2.

 

The resource R2 can be represented as the cirquent

which, due to being tree-like, can also be adequately written as the formula 25c ∧ 25c. As for the resource R3, either one of the following two cirquents is an adequate representation of it, with one of them probably showing the relevant part of the actual physical circuitry used in M3:

 

Figure 3.11.1: Cirquents for the “two out of three” combination of resources

 
Unlike R2, however, R3 cannot be represented through a formula. 25c ∧ 25c does not fit the bill, for it represents R2 which, as we already agreed, is not the same as R3. In another attempt to find a formula, we might try to rewrite one of the above two cirquents --- let it be the one on the right --- into an “equivalent” formula in the standard way, by duplicating and separating shared nodes, as we did earlier when tackling (A,A,A). This results in (25c ∨ 25c) ∧ (25c ∨ 25c) ∧ (25c ∨ 25c), which, however, is no more adequate than 25c ∧ 25c. It expresses not R3 but the resource consumed by a machine with six coin slots grouped into three pairs, where (at least) one slot in each of the three pairs needs to receive a true coin. Such a machine thus dispenses a candy for 75 rather than 50 cents, which makes Victor's resources insufficient.

The trouble here is related to the inability of formulas to explicitly account for resource sharing or the absence thereof. The cirquent on the right of the above figure stands for a (multiplicative-style) conjunction of three resources, each conjunct, in turn, being a disjunction of two subresources of type 25c. However, altogether there are three rather than six 25c-type subresources, each one being shared between two different conjuncts of the main resource.
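The difference between the shared cirquent and its formula image can be made concrete by counting how many true coins each demands. The following sketch is only an illustration under our own encoding (all names are ours): each 25c leaf is a boolean recording whether a true coin was supplied to it, and sharing a leaf means one coin payment serves both parent gates.

```python
from itertools import combinations

# The shared cirquent of Figure 3.11.1: three 25c leaves a, b, c, each
# shared between two of the three disjunction gates.
def shared_cirquent(a, b, c):
    return (a or b) and (b or c) and (c or a)

# The formula obtained by duplicating shared nodes: six independent leaves.
def unshared_formula(a1, b1, b2, c1, c2, a2):
    return (a1 or b1) and (b2 or c1) and (c2 or a2)

def min_true_coins(resource, arity):
    # The fewest leaves that must be paid with true coins to make the
    # resource available.
    for k in range(arity + 1):
        for trues in combinations(range(arity), k):
            if resource(*[i in trues for i in range(arity)]):
                return k
    return None

print(min_true_coins(shared_cirquent, 3))   # 2: two true coins (50c) suffice
print(min_true_coins(unshared_formula, 6))  # 3: three true coins (75c) needed
```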

From the abstract resource-philosophical point of view of CoL, classical logic and linear logic are two imperfect extremes. In the former, all occurrences of the same subformula mean “the same” (represent the same resource), i.e., everything that can be shared is shared; in the latter, each occurrence stands for a separate resource, i.e., nothing is shared at all. Neither approach thus permits accounting for mixed cases where certain occurrences are meant to represent the same resource while other occurrences stand for different resources of the same type. It is a shame that linear logic fails to express simple, natural and unavoidable things such as the “two out of three” combination expressed by the cirquents of Figure 3.11.1.

 

 

4  Interactive machines

 

4.1 Interactive computability

 

In traditional game-semantical approaches, including those of Lorenzen [Lor61], Hintikka [Hin73], Blass [Bla92], Abramsky [Abr94] and others, players’ strategies are understood as functions --- typically as functions from interaction histories (positions) to moves, or sometimes as functions that only look at the latest move of the history. This strategies-as-functions approach, however, is generally inapplicable in the context of CoL, whose relaxed semantics, in striving to get rid of “bureaucratic pollutants” and only deal with the remaining true essence of games, does not impose any regulations on which player can or should move in a given situation. Here, in many cases, either player may have (legal) moves, and then it is unclear whether the next move should be the one prescribed by ⊤’s strategy function or the one prescribed by the strategy function of ⊥. The advantages of CoL’s approach become especially appreciable when one tries to bring complexity theory into interactive computation: hardly (m)any really meaningful and interesting complexity-theoretic concepts can be defined for games (particularly, games that may last long) with the strategies-as-functions approach.

In CoL, (⊤’s effective) strategies are defined in terms of interactive machines, where computation is one continuous process interspersed with --- and influenced by --- multiple “input” (Environment’s moves) and “output” (Machine's moves) events. Of several, seemingly rather different yet equivalent, machine models of interactive computation studied in CoL, we will only discuss the most basic, HPM (“Hard-Play Machine”) model.

An HPM is a Turing machine with the additional capability of making moves. The adversary can also move at any time, and such moves are the only nondeterministic events from the machine’s perspective. Along with one or more ordinary read/write work tapes, the machine has an additional, read-only tape called the run tape. The latter, serving as a dynamic input, at any time spells the “current position” of the play. Its role is to make the evolving run fully visible to the machine. In these terms, an algorithmic solution (⊤’s winning strategy) for a given constant game A is understood as an HPM M such that, no matter how the environment acts during its interaction with M (what moves it makes and when), the run incrementally spelled on the run tape is a ⊤-won run of A.

As for ⊥’s strategies, there is no need to define them: all possible behaviors by ⊥ are accounted for by the different possible nondeterministic updates of the run tape of an HPM.

In the above outline, we described HPMs in a relaxed fashion, without being specific about details  such as, say, how, exactly, moves are made by the machine, how many moves either player can make at once, what happens if both players attempt to move “simultaneously”, etc. As it happens, all reasonable design choices yield the same class of winnable games as long as we only consider static games, the only sort of games that we are willing to consider. 

While design choices are largely unimportant and “negotiable”, we still want to agree on some technical details for clarity. Just like an ordinary Turing machine, an HPM has a finite set of states, one of which has the special status of being the start state. There are no accept, reject, or halt states, but there are specially designated states called move states. Each tape of the machine has a beginning but no end, and is divided into infinitely many cells, arranged in the left-to-right order. At any time, each cell contains one symbol from a certain fixed finite set of tape symbols. The blank symbol is among the tape symbols. Each tape has its own scanning head, at any given time looking (located) at one of the cells of the tape.

For technical purposes, we additionally assume the (physical or imaginary) presence of a move buffer. The size of the latter is unlimited and, at any time, it contains some (possibly empty) finite string over the keyboard alphabet. The function of the buffer is to let the machine construct a move piece-by-piece before officially making such a move.

A transition from one computation step (“clock cycle”, “time”)   to another happens according to the fixed transition function of the machine. The latter, depending on the current state and the symbols seen by the  scanning heads on the corresponding tapes, deterministically prescribes: (1) the next state that the machine should assume; (2) the tape symbol by which the old symbol should be overwritten in the current cell   (the cell currently scanned by the  head), for each work tape individually; (3) the (finite, possibly empty) string over the keyboard alphabet that should be appended to the content of the move buffer; and (4)  the (not necessarily the same) direction --- stay put, one cell to the left, or one cell to the right --- in which each  scanning head should move. It is stipulated that when the head of a tape is looking at the first (leftmost) cell, an attempt to move to the left results in staying put. The same happens when the head tries to move to the right while looking at a blank cell (a cell containing the “blank” symbol).
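Under the assumptions of this outline, a transition function could be encoded as follows. This is a hypothetical sketch (the names `Transition` and `delta` are ours, and the machine is cut down to one run tape and one work tape); it merely realizes clauses (1)-(4) above in code.

```python
from typing import NamedTuple

# Hypothetical encoding of an HPM transition function realizing clauses
# (1)-(4) of the text, for a machine with one run tape and one work tape.
class Transition(NamedTuple):
    next_state: str
    work_writes: tuple   # (2) symbol overwriting the scanned work-tape cell
    buffer_append: str   # (3) string appended to the move buffer
    head_moves: tuple    # (4) -1, 0 or +1 per tape (run tape first)

def delta(state, scanned):
    # Example behavior: in state "copy", copy each non-blank run-tape
    # symbol into the move buffer and advance both heads; otherwise idle.
    run_sym, work_sym = scanned
    if state == "copy" and run_sym != " ":
        return Transition("copy", (work_sym,), run_sym, (+1, +1))
    return Transition(state, (work_sym,), "", (0, 0))

t = delta("copy", ("a", " "))
print(t.next_state, repr(t.buffer_append), t.head_moves)  # copy 'a' (1, 1)
```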

When the machine starts working, it is in its start state, all scanning heads are looking at the leftmost cells of the corresponding tapes, all work tapes are blank, the run tape does not contain any green moves (but it may contain some red moves, signifying that Environment has made those moves “at the very beginning”), and the buffer is empty. 

Whenever the machine enters a move state, the green-colored move α written in the buffer by that time is automatically appended to the contents of the run tape, and the buffer is simultaneously emptied. Also, on every transition, any finite sequence β1,…,βm of red moves may be nondeterministically appended to the contents of the run tape. If the above two events happen on the same clock cycle, then both α and  β1,…,βm will be appended to the contents of the run tape, where α can (nondeterministically) go before, after or anywhere in between β1,…,βm .

When describing the work of a machine, we may use the jargon “retire”. What will be meant by retiring is going into an infinite loop that makes no moves, puts nothing into the buffer, and does not reposition the scanning heads.  Retiring thus achieves the same effect as halting would achieve if this was technically allowed.

A configuration is a full description of the situation in the machine at some given computation step. It consists of records of the (“current”) contents of the work and run tapes, the content of the buffer, the location of each scanning head, and the state of the machine. A computation branch of the machine is an infinite sequence C0,C1,C2,… of configurations, where C0 is an initial configuration (one described earlier), and every Ci+1 is a configuration that could have legally followed (again,  in the sense explained earlier) Ci according to the transition function of the machine. In less formal contexts, we may say “play” instead of “computation branch”.  For a computation branch B, the run spelled by B is the run G incrementally spelled on the run tape in the corresponding scenario of interaction. We say that such a G is a run generated by the machine.

Definition 4.1.1 Let A be a constant game. We say that an HPM M computes (solves) A --- and write M ⊨ A --- iff every run generated by M is a ⊤-won run of A. We say that A is computable iff there is an HPM M with M ⊨ A; such an HPM is said to be an (algorithmic) solution, or winning strategy, for A.
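As a toy illustration of Definition 4.1.1 (emphatically not the formal HPM model), one can encode a run as a list of labeled moves, a game as a predicate on runs, and a strategy as a function from the run seen so far to an optional next move. The game below and all names are our own illustrative choices: ⊤ (label "T") wins iff it answers every number n moved by Environment (label "B") with n + 1.

```python
# A run is a list of (player, move) pairs, "T" for the machine, "B" for
# Environment. Example game: T wins a run iff every number n moved by B
# is answered by n + 1 somewhere among T's moves.
def machine_wins_run(run):
    env = [m for p, m in run if p == "B"]
    top = [m for p, m in run if p == "T"]
    return all(n + 1 in top for n in env)

def successor_strategy(run_seen):
    # Answer the earliest Environment move not yet answered; None = wait.
    env = [m for p, m in run_seen if p == "B"]
    top = [m for p, m in run_seen if p == "T"]
    for n in env:
        if n + 1 not in top:
            return n + 1
    return None

# One possible schedule of Environment's behavior: it moves 5, stays
# silent for a cycle, then moves 12. The strategy wins this run; by the
# same reasoning it wins under every schedule, i.e. it "solves" the game.
run = []
for env_move in [5, None, 12, None]:
    if env_move is not None:
        run.append(("B", env_move))
    reply = successor_strategy(run)
    if reply is not None:
        run.append(("T", reply))

print(run)                    # [('B', 5), ('T', 6), ('B', 12), ('T', 13)]
print(machine_wins_run(run))  # True
```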


4.2 Interactive complexity


At present, the theory of interactive computation is far from being well developed, and even more so is the corresponding complexity theory. Studies of interactive computation in the context of complexity, while under way for a long time, have been relatively scattered and ad hoc: more often than not, interaction has been used for better understanding certain complexity issues for traditional, non-interactive problems rather than being treated as an object of systematic study in its own right (examples would be alternating computation [Cha81], or interactive proof systems and Arthur-Merlin games [Gol89]). As if complexity theory was not “complex” enough already, taking it to the interactive level would almost certainly enlarge the diversity of species in the complexity zoo by an order of magnitude.

CoL has made its first modest steps [Jap15] towards elaborating some basic complexity-theoretic concepts for game-playing strategies, which we are going to present in this section. It should be pointed out, however, that the way of measuring the complexity of strategies presented here is merely one out of a huge and interesting potential variety of complexity measures meaningful and useful in the interactive context.

Definition 4.2.1 In the context of a given computation branch (play) of a given HPM M:

1. By the background of a clock cycle c we mean the greatest of the sizes of Environment's moves made before time c, or 0 if there are no such moves.

2. By the timecost of a cycle c we mean c − d, where d is the greatest cycle with d < c on which a move was made by Environment, or d = 0 if there is no such cycle.
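Items 1 and 2 can be sketched in code, under our own illustrative encoding (not part of [Jap15]): a play is a list of (cycle, player, move) records, with player "B" standing for Environment and the size of a move measured as its length as a string.

```python
def background(play, c):
    # Item 1: the greatest size of an Environment move made before cycle c,
    # or 0 if there are no such moves.
    sizes = [len(move) for cycle, player, move in play
             if player == "B" and cycle < c]
    return max(sizes, default=0)

def timecost(play, c):
    # Item 2: c - d, where d is the greatest cycle d < c on which
    # Environment moved (d = 0 if it never moved before c).
    cycles = [cycle for cycle, player, move in play
              if player == "B" and cycle < c]
    return c - max(cycles, default=0)

play = [(3, "B", "100"), (7, "T", "101"), (9, "B", "11010")]
print(background(play, 10))  # 5: Environment's longest move is "11010"
print(timecost(play, 10))    # 1: Environment last moved on cycle 9
```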