Giorgi Japaridze's page for

**Computability Logic** (CoL)

(Georgian: გამოთვლადობის ლოგიკა; Russian: Логика вычислимости; Chinese: 可计算性逻辑)

Computability is one of the most interesting and fundamental concepts in mathematics and computer science, and it is natural to ask what logic it induces. This is where computability logic (CoL) comes in.

Under
the approach of CoL,
logical operators stand for operations on computational problems,
formulas
represent such problems, and their “truth” is seen as algorithmic
solvability.
In turn, computational problems --- understood in their most general, *interactive* sense --- are defined as
games
played by a machine against its environment, with “algorithmic
solvability”
meaning existence of a machine which wins the game against any possible
behavior
of the environment. With this semantics, CoL provides a systematic
answer to
the question “what can be computed?”, just like classical logic is a
systematic
tool for telling what is true. Furthermore, as it happens, in positive
cases “*what* can be computed” always
allows
itself to be replaced by “*how* can be
computed”, which makes CoL of potential interest in not only
theoretical
computer science, but many applied areas as well, including
constructive
applied theories, interactive knowledge base systems, resource oriented
systems
for planning and action, or declarative programming languages.

Currently CoL is still at an early stage of development, with open problems prevailing over answered questions. For this reason it offers plenty of research opportunities, with good chances of interesting findings, for those with interests in logic and its applications in computer science. Come and join!

**Contents**

1 The philosophy of CoL
1.1 Syntax vs. semantics
1.2 Why game semantics?
1.3 CoL vs. classical logic
1.4 CoL vs. linear logic
1.5 CoL vs. intuitionistic logic
1.6 CoL vs. independence-friendly logic
1.7 The ‘from semantics to syntax’ paradigm
2 Games
2.1 The two players
2.2 Moves, runs and positions
2.3 Constant games
2.4 Not-necessarily-constant games
2.5 Static games
3 The CoL zoo of game operations
3.1 Preview
3.2 Prefixation
3.3 Negation
3.6 Reduction
3.7 Blind operations
3.10 Toggling operations
3.11 Cirquents
5 The language of CoL and its semantics
5.1 Formulas
5.2 Interpretations
5.3 Validity
6.1 Outline
6.2 The Gentzen-style system **CL7**
6.3 The Gentzen-style system **Int^{+}**
6.4 The cirquent calculus system **CL15**
6.5 The brute force system **CL13**
6.6 The brute force system **CL4**
6.7 The brute force system **CL12**
7 Clarithmetic (CoL-based arithmetic)
7.1 Introduction
7.2 Clarithmetic versus bounded arithmetic
7.3 Motivations
7.4 Common preliminaries for all our theories of clarithmetic
7.6 Clarithmetics for provable computability
8 CoL-based knowledgebase and resourcebase systems
9.1 Selected papers on CoL by Japaridze
9.2 Selected papers on CoL by other authors
9.3 PhD theses, MS theses and externally funded projects on CoL
9.4 Lecture notes on CoL, presentations and other links

**1
The philosophy of CoL**


1.1 Syntax vs. semantics

A
separate
question, of course, is what counts as a
semantics. The model example of a semantics with a capital ‘S’ is that
of
classical logic. But in the logical literature this term often has a
more
generous meaning than what CoL is ready to settle for. As pointed out,
CoL
views logic as a universal-utility tool. So, a capital-‘S’-semantics
should be
non-specific enough, and applicable to the world in general rather than
some
very special and artificially selected (worse yet, artificially
created)
fragment of it. Often what is called a semantics is just a
special-purpose
apparatus designed to help analyze a given syntactic construction
rather than
understand and navigate the outside world. The usage of Kripke models
as a
derivability test for intuitionistic formulas, or as a validity
criterion in
various systems of modal logic is an example. An attempt to see more
than a
technical, syntax-serving instrument in this type of
lowercase-‘s’-semantics
might create a vicious circle: a deductive system *L*
under question is “right” because it derives exactly the formulas
that are valid in a such and such Kripke semantics; and then it turns
out that
the reason why we are considering the such and such Kripke semantics is
that
... it validates exactly what *L*
derives.

1.2 Why game semantics?

For
CoL, a
game is not just a game. It
is a foundational mathematical concept on
which a powerful enough logic (=semantics) should be based. This is so
because,
as noted, CoL sees logic as a “real-life navigational tool”, and it is
games
that appear to offer the most comprehensive, coherent, natural,
adequate and
convenient mathematical models for the very essence of all
“navigational”
activities of agents: their interactions with the surrounding world. An
*agent* and its *environment*
translate into
game-theoretic terms as two *players*;
their *actions* as *moves*;
*situations* arising
in the course of interaction as *positions*;
and *success*
or *failure* as *wins*
or *losses*.

It is
natural
to require that the interaction
strategies of the party that we have referred to as an “agent” be
limited to *algorithmic* ones,
allowing us to
henceforth call that player a *machine*.
This is a minimum
condition that any
non-esoteric game semantics would have to satisfy. On the other hand,
no
restrictions can or should be imposed on the environment, who
represents a
capricious user, the blind forces of nature, or the devil
himself. Algorithmic activities being synonymous with *computations*,
games thus represent *computational problems*
**---**
interactive tasks performed by computing agents, with *computability*
meaning *winnability*,
i.e. existence of a machine that wins the game against any possible
(behavior
of the) environment.

In the
1930s,
in the form of the famous Church-Turing
thesis, mankind came up with what has been perceived as an ultimate
mathematical definition of the precise meaning of algorithmic
solvability.
Curiously or not, such a definition was set forth and embraced before
really
having attempted to answer the seemingly more basic question about what
*computational problems* are **---** the
very entities that may or may
not have algorithmic solutions in the first place. The tradition
established
since then in theoretical computer science is that computability simply means
Church-Turing computability of *functions*,
as the task performed by every Turing machine is nothing but receiving
an input
*x* and generating the output *f(x)* for
some function *f*.

Yet
most tasks
that computers and computer networks
perform are interactive, with strategies not always allowing themselves
to be
adequately understood as functions. To understand this, it would be
sufficient
to just reflect on the behavior of one's personal computer. The job of
your
computer is to play one long game against you. Now, have you noticed
your faithful
servant getting slower every time you use it? Probably not. That is
because the
computer is smart enough to follow a non-functional strategy in this
game. If its strategy were a function from positions (interaction histories) to moves,
the response time would inevitably keep worsening due to the need to
read the
entire, continuously lengthening interaction history every time before
responding. Defining strategies as functions of only the latest moves
is also
not a way out. The actions of your computer certainly depend on more
than your
last keystroke.
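
The contrast can be sketched in code (a toy illustration with invented names, not part of CoL's formal apparatus): both strategies below produce the same replies, but the first must re-read the entire, ever-lengthening history on every call, while the second maintains a bounded internal state and responds in constant time per move.

```python
# A "functional" strategy: a function from interaction histories to moves.
# Every response requires scanning the whole, ever-growing history.
def functional_strategy(history):
    return f"reply-{len(history)}"   # cost grows with len(history)

# A stateful (non-functional) strategy: responds using a bounded,
# incrementally updated internal state, in constant time per move.
class StatefulStrategy:
    def __init__(self):
        self.moves_seen = 0          # bounded state instead of the full history

    def respond(self, last_move):
        self.moves_seen += 1
        return f"reply-{self.moves_seen}"
```

The two agree move for move, yet only the second scales: like your computer, its behavior depends on state accumulated over the whole interaction, not on the last keystroke alone.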

Two
main
concepts on which the semantics of CoL is
based are those of *static games*
(defined later) and
their *winnability*. Correspondingly,
the philosophy of CoL relies on two
beliefs that, together, present what can be considered an interactive
version
of the Church-Turing thesis:

**Thesis 1.2.1**

(1)
The concept of static games is an
adequate formal counterpart of our intuition of (“pure”,
speed-independent)
interactive computational problems.

(2) The concept of winnability is an adequate formal counterpart of our intuition of algorithmic solvability of such problems.

So far
games in logic have been mostly used to find models and semantical
justifications for syntactically introduced popular systems such as
intuitionistic logic or linear logic. For instance: Lorenzen’s [Lor61,
Fel85] game semantics was
created for the purpose of
justifying intuitionistic logic; Hintikka’s [Hin73]
game-theoretic semantics was originally meant to provide an alternative
semantics for classical logic; Blass’ [Bla92]
game
semantics was mainly motivated by the needs of linear logic, and so
were the
game-semantical approaches elaborated by Abramsky, Jagadeesan [Abr94]
and others. In this respect, CoL turns the tables around and views
games as
foundational entities in their own right.
It starts with identifying the most basic and meaningful
operations on
games. Understanding those operations as logical operators, it then
looks
at the
logics induced by the corresponding concept of validity, regardless of
how
unusual such logics may turn out. There is no target syntactic
construction to
serve.

1.3 CoL vs. classical logic

Computability
in the traditional Church-Turing sense
is a special case of winnability **---**
winnability restricted to two-step (input/output, question/answer)
interactive
problems. So is the classical concept of truth, which is nothing but
winnability restricted to propositions, viewed by CoL as zero-step
problems,
i.e., games with no moves that are automatically won by the machine
when true
and lost when false. This way, the game
semantics of CoL is a generalization, refinement and
conservative
extension of that of classical logic.

1.4 CoL vs. linear logic

CoL is a semantically conceived logic, and so far its syntax has only been partially developed. In a sense, this situation is opposite to the case with linear logic [Gir87], where a “full” syntactic construction existed right at the beginning, but where a really good formal semantics convincingly justifying the resource intuitions traditionally associated with that construction is still being looked for. In fact, the semantics of CoL can be seen to be providing such a justification, although only in a limited sense explained below.

The set
of
valid formulas in a certain fragment of the
otherwise more expressive language of CoL forms a logic which is
similar to but
by no means the same as linear logic.
When no exponentials are involved, the two logics
typically agree on
short and simple formulas. For instance, both logics reject P → P∧P and accept P → P⊓P, with classical-shape propositional connectives here and later understood as the corresponding multiplicative operators of linear logic, and square-shape operators as additives (⊓ = “with”, ⊔ = “plus”). Similarly, both logics reject P ⊔ ¬P and accept P ∨ ¬P. On the
other hand, CoL disagrees with linear logic on many more evolved
formulas.
E.g., CoL validates the following two principles rejected even by *affine logic*,
i.e.,
linear logic with the weakening rule:

(P⊓Q) ∨ (R⊓S) → (P∨R) ⊓ (Q∨S)   (Blass’s [Bla92] principle);

(P∧(R⊓S)) ⊓ (Q∧(R⊓S)) ⊓ ((P⊓Q)∧R) ⊓ ((P⊓Q)∧S) → (P⊓Q)∧(R⊓S).

With ⫰ and ⫯, which are CoL’s counterparts of the exponentials !, ? of linear logic, disagreements can be observed already at the level of very short formulas, accepted by CoL but rejected by affine logic.

Neither
the
similarities nor the discrepancies are a surprise. The philosophies of
CoL and
linear logic overlap in their pursuit to develop a logic of resources.
But the
ways this philosophy is materialized are rather different. CoL starts
with a
mathematically strict and intuitively convincing semantics, and only
after
that, as a natural second step, asks what the corresponding logic and
its
possible axiomatizations are. On the other hand, it would be accurate
to say
that linear logic started directly from the second step. Even though
certain
companion semantics were provided for it from the very beginning, those
are not
quite what we earlier agreed to call capital-‘S’. As a formal theory of
resources (rather than that of phases or coherence spaces), linear
logic has
been introduced syntactically rather than semantically,
essentially by taking classical sequent calculus and deleting the rules
that
seemed unacceptable from a certain intuitive, naive and never
formalized "resource-semantical" point of
view. In the absence of a clear formal concept of resource-semantical
truth or
validity, the question about whether the resulting system was complete
could
not even be meaningfully asked. In this process of syntactically
rewriting
classical logic some innocent, deeply hidden principles could have
easily
gotten victimized. CoL believes that this is exactly what happened,
with the above-mentioned
formulas separating it from linear
logic, along with many other principles, viewed as babies thrown out
with the
bath water. Of course, many retroactive attempts have been made to find
semantical (often game-semantical) justifications for linear logic.
Technically
it is always possible to come up with some sort of a formal semantics
that
matches a given target syntactic construction, but the whole question
is how
natural and meaningful such a semantics is in its own rights, and how
adequately it corresponds to the logic’s underlying philosophy and
ambitions.
Unless, by good luck, the target system really *is*
“the right logic”, the chances of a decisive success when
following the odd scheme *from syntax to
semantics* could be rather slim. The natural scheme is *from
semantics to syntax*. It matches
the way classical logic evolved and climaxed in Gödel’s completeness
theorem.
And, as we now know, this is exactly the scheme that computability
logic, too,
follows.

With
the two
logics in
a sense competing for the same market, the main --- or perhaps only ---
advantage of linear logic over CoL is its having a nice and simple
syntax. In a
sense, linear logic *is* (rather than
*has*) a beautiful syntax; and
computability logic *is* (rather than
*has*) a meaningful semantics. An
axiomatization of CoL in the full generality of its language has not
been found
yet. Only certain fragments
of it have been
axiomatized, including the one corresponding to the
multiplicative-additive
fragment of linear logic. Such axiomatizations tend to be more involved
than
that of linear logic, so the syntactic simplicity advantage of linear
logic
will always remain. Well, CoL has one thing to say: simplicity is good,
yet, if
it is most important, then nothing can ever beat ... the empty
logic. The latter, just like linear logic, is sound with respect
to resource semantics, whatever such a semantics
means; it is sound with respect to any semantics for that matter.

From
CoL’s
perspective, classical logic and (loosely understood) linear logic can
be seen
as two extremes within its all-unifying resource- (game-) semantical
vision.
Specifically, the main difference between linear logic and classical
logic is
that the former sees all occurrences of the same atom in a formula as
different
copies of the same resource, while the latter sees such occurrences as
the same
single copy, simply written at different places for technical reasons. So, linear logic rejects $1 → $1∧$1 because in
the
antecedent we have one dollar while in the consequent two, with the
possibility
to buy an apple with one and an orange with the other. On the other
hand,
classical logic accepts this principle because it sees a single dollar
in the
consequent, written twice for some strange reason; if the first
conjunct of the
consequent is spent buying an apple, the second conjunct is also
automatically
spent on the same apple, with no money remaining for oranges. As for
CoL, it allows us to
write expressions where all occurrences
of $1 stand for the same one-dollar bill, or all stand for separate
bills, or
we have a mixture of these two, where some occurrences stand for the
same bill
while some other occurrences in the same expression stand for different
bills.

Blass [Bla92] was the first to attempt a
game-semantical
justification for linear logic. This goal was only partially achieved,
as the
resulting logic, just like the above-discussed fragment of CoL, turned
out to
validate more principles than linear logic does. It should be pointed
out that
the multiplicative-additive fragment of the logic induced by Blass’
semantics
coincides with the corresponding fragment of CoL. This does not extend
to the
full language of linear logic though. For instance, Blass’ semantics validates a principle which is invalid in CoL.^{[Jap12a]}

In full
generality,
the “linear-logic fragment” of CoL is strictly between affine logic and
the
logic induced by Blass’ semantics.^{[Jap09a]}


1.5 CoL vs. intuitionistic logic

From
CoL’s
perspective,
the situation with intuitionistic logic [Hey71]
is rather
similar to what we had with linear logic. Intuitionistic logic is
another
example of a syntactically conceived logic. Despite decades of efforts,
no
fully convincing semantics has been found for it. Lorenzen’s [Lor61]
game semantics, which has a concept of validity without having a
concept of
truth, has been perceived as a technical supplement to the existing
syntax
rather than as having independent importance. Some other semantics,
such as
Kleene’s realizability [Kle52]
or Gödel’s Dialectica
interpretation [Göd58], are
closer to what we might qualify
as capital-‘S’. But, unfortunately, they validate certain principles
unnegotiably rejected by intuitionistic logic.

Just
like this
was
the case with linear logic, the set of valid formulas in a certain
fragment of
the language of CoL forms a logic which is properly stronger^{[Mez10, Jap07b]}
than Heyting’s
intuitionistic logic. However, the two come “much closer” to each other
than
CoL and linear logic do. The shortest known formula separating
intuitionistic
logic from the corresponding fragment of CoL is

(P ⊃ Q⊔R) ⊓ (¬P ⊃ S⊔T) ⊃ (P ⊃ Q) ⊔ (P ⊃ R) ⊔ (¬P ⊃ S) ⊔ (¬P ⊃ T),

where ⊃, ¬, ⊔, ⊓ are CoL’s counterparts of the intuitionistic implication, negation, disjunction and conjunction, respectively.

Just
like the
resource philosophy of CoL overlaps with that of linear logic, so does
its
algorithmic philosophy with the constructivistic philosophy of
intuitionism.
The difference, again, is in the ways this philosophy is materialized. Intuitionistic logic has
come up with a
“constructive syntax” without having an adequate underlying formal
semantics,
such as a clear concept of truth in some constructive sense. This sort
of a
syntax was essentially obtained from the classical one by banning the
offending
law of the excluded
middle. But, as in the case of linear logic, the critical
question
immediately springs out: where is a guarantee that together with
excluded
middle some innocent principles would not be expelled just as well? The
constructivistic
claims of CoL, on the other hand, are based on the fact that it defines
truth
as algorithmic solvability. The philosophy of CoL does not find the
term *constructive syntax* meaningful
unless it
is understood as soundness with respect to some *constructive
semantics*, for only a semantics may or may not be
constructive in a reasonable sense. The reason for the failure of P ⊔ ¬P in CoL is not that
that
this principle ... is not included in its axioms. Rather, the failure
of this
principle is exactly the reason why this principle, or anything else
entailing
it, would not be among the axioms of a sound system for CoL. Here
“failure” has
a precise semantical meaning. It is non-validity, i.e. existence of a problem *A* for which A ⊔ ¬A is not algorithmically solvable.

It is
also
worth
noting that, while intuitionistic logic irreconcilably defies classical
logic,
computability logic comes up with a peaceful
solution
acceptable for everyone. The key is the expressiveness of its language, which has (at least) two versions of each traditionally controversial logical operator, and in particular the two versions ∨ and ⊔ of disjunction. The semantical meaning of ∨ conservatively extends **---** from moveless games to all games **---** its classical meaning, and the principle P ∨ ¬P survives, as it represents an always-algorithmically-solvable combination of problems, even if solvable in a sense that some constructivistically-minded might pretend to fail to understand. The semantics of ⊔, on the other hand, formalizes and conservatively extends a different, stronger meaning which apparently every constructivist associates with disjunction. As expected, P ⊔ ¬P then turns out to be semantically invalid.
CoL's proposal for
settlement between classical and constructivistic logics then reads:
‘If you
are open (=classically) minded, take advantage of the full expressive
power of
CoL; and if you are constructivistically minded, just identify a
collection of
the operators whose meanings seem constructive enough to you,
mechanically
disregard everything containing the other operators, and put an end to
those
fruitless fights about what deductive methods or principles should be
considered right and what should be deemed wrong’.

1.6 CoL vs. independence-friendly logic

Relatively
late
developments [Jap06c, Jap11b,
Xu14]
in CoL made it possible to switch from formulas to the
more flexible and general means of expression termed *cirquents*. The
main distinguishing feature of the latter is that they allow one to account for various sorts of
sharing between subexpressions. After such a generalization,
independence-friendly (IF) logic [Hin73]
became yet another natural fragment of CoL.^{[Jap11b, Xu14, Xu16]} As such, it is
a conservative fragment,
just like classical logic and unlike linear or intuitionistic logics.
This is
no surprise because, just like CoL and unlike linear or intuitionistic
logics, IF logic is a semantically
conceived logic.

In
fact, for a
long
time, IF logic remained a pure semantics without a syntax. In its full
first-order language, IF logic was simply known to be unaxiomatizable.
As for
the propositional fragment, there was none because the traditional
approaches
to IF logic had failed to offer any non-classical semantics for
propositional
connectives. Under
CoL’s cirquent-based
approach to IF logic this is no longer so, and “independence-friendly”
propositional connectives are just as meaningful as their quantifier
counterparts. Based on this new treatment, a sound and complete
axiomatization
for propositional IF logic has later been found.^{[Xu14, Xu16]}

1.7 The ‘from semantics to syntax’ paradigm

CoL’s
favorite
‘from
semantics to syntax’ paradigm for developing a logic can be
characterized as
consisting of three main stages. The first one can be called the
Informal
semantics stage, at which the main intuitions are elaborated along with
the
underlying motivations, and the formal language of the logic is agreed
upon. The second
one is the Formal
semantics stage, at which those intuitions are formalized as a
mathematically
strict semantics for the adopted language, and definitions of truth and
validity are given. Finally, the third one is the Syntax stage, at
which one
constructs axiomatizations for the corresponding set of valid
principles, whether
in the full language of the logic or
some natural fragments of it, and proves the adequacy (soundness and
completeness) of such constructions.

**Figure
1.7.1:**
Three stages
of
developing a logic

CoL and
classical
logic went through all three stages in sequence. So did IF logic, even
though
for a long time it was stuck at the second stage. As for linear and
intuitionistic logics, they essentially jumped from the first stage
directly to
the third stage, skipping the intermediary stage. It is the absence of
formal
rather than informal semantics that we meant when saying that the two
logics
were conceived syntactically rather than semantically. Why is such a
jump
wrong? It is impossible to directly “prove” that the proposed syntax
adequately
corresponds to the informal-semantical intuitions underlying the logic.
After
all, Syntax is mathematically well defined while Informal semantics is
from the
realm of philosophy or intuitions, so an adequacy claim lacks any
mathematical
meaning. Of course, the same is the case with Formal semantics vs.
Informal
semantics. But, at least, both are “semantics”, which makes it
qualitatively
easier to convincingly argue (albeit not prove) that one adequately
corresponds
to the other. Once such a correspondence claim is accepted, one can
prove the
adequacy of the syntax by showing that it is sound and complete with
respect to
the formal semantics.

The
intermediary
role of Formal semantics can be compared with that of Turing machines. Since the intuitive
concept of a machine
(algorithm) and the mathematical concept of a Turing machine are both
about
“machines”, it is relatively easy to argue in favor of the
(mathematically
unprovable) Church-Turing thesis, which claims an adequate
correspondence
between the two. Once this thesis is accepted, one can relatively
easily show **---** this time
mathematically **---** that recursive
functions or lambda
calculus, too, adequately correspond to our intuitions of
machine-computability. Arguing in favor of such a claim directly,
without
having the intermediary Turing machine model, would not be easy, as
recursive
definitions or lambda terms do not at all resemble what our intuition
perceives
as machines.

Based directly on the resource intuitions associated with linear logic, can anyone tell whether, for instance, the principles of Section 1.3 that separate CoL from affine logic should be accepted? Without a formal concept of resource-semantical validity, such a question can hardly be answered with any confidence.

**2
Games**

2.1 The two players

The CoL
games
are
between two players, called Machine and Environment (not always
capitalized,
and may take articles). On the picture below we see a few other names
for these
players.

We will be using ⊤ and ⊥ as symbolic names for Machine and Environment, respectively. ⊤ is a deterministic mechanical device only capable of following algorithmic strategies. ⊥’s strategies, on the other hand, are arbitrary. Throughout this article, we shall
consistently use
the green color to represent Machine, and red for Environment. As seen
from the
picture, our sympathies are with the machine. Why should we be fans of
the
machine even when it is us who act in the role of its “evil”
environment?
Because the machine is a tool, and what makes it valuable as such is
exactly
its being a good player. In fact, losing a game by the machine means
that it is
malfunctioning. Imagine
Bob using a
computer for computing the “28% of *x*”
function in the process of preparing his US federal tax return. This is
a game
where the first move is by Bob, consisting in inputting a number *m* and meaning asking his machine the
question “what is 28% of *m*?”. The
machine wins iff it answers by the move/output *n*
such that *n*=0.28*m*.
Of course, Bob does not want the
machine to tell him that 27,000 is 28% of 100,000. In other words, he
does not
want to win against the machine. For then he could lose the more
important game
against Uncle Sam.
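
Bob’s game can be written down concretely as a win condition on runs (a minimal sketch; the list-of-pairs encoding of runs is our own assumption, not CoL notation):

```python
# Bob's tax game: Environment inputs m, Machine outputs n,
# and Machine wins iff n is 28% of m.

def machine_wins_28pc(run):
    """run: list of (player, number) pairs, in the order moves were made."""
    if len(run) == 0:
        return True        # no question asked: the machine has nothing to answer for
    if len(run) == 1:
        return False       # question asked but not answered: the machine loses
    (_, m), (_, n) = run[0], run[1]
    return 100 * n == 28 * m   # n = 28% of m, checked in exact integer arithmetic
```

On the run where Bob inputs 100000 and the machine answers 28000, the machine wins; had it answered 27000 it would lose (and Bob would win this game, only to lose the more important one against Uncle Sam).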

2.2 Moves, runs and positions

Machine
and
Environment interact with each other through mutually observable *actions*. We will be using the term “move”
instead of “action”. Looking back at the ordinary Turing machine model,
it is
about games where only two moves are made: the first move, called
“input”, is
by the environment, and the second move, called “output”, is by the
machine.

We agree on the following:

- A *move* is any finite string over the keyboard alphabet. We will be using α, β, γ as metavariables for moves.
- A *colored move* is a move α together with one of the two colors *green* or *red*, with the color indicating which player has made the move (green for the machine, red for the environment). Often we omit the word “colored” and simply say “move”. We will be using λ as a metavariable for colored moves.
- A *run* is a (finite or infinite) sequence of colored moves. We will be using Γ, Δ as metavariables for runs.
- A *position* is a finite run. We will be using Φ, Ψ, Ξ, Ω as metavariables for positions.
- We will be writing runs and positions as 〈α, β, γ〉, 〈Φ〉, 〈Φ, Ψ, β, Γ〉, etc. For instance, 〈Φ, Ψ, β, Γ〉 is the run consisting of the moves of Φ, followed by those of Ψ, followed by β, followed by those of Γ.
- 〈〉 thus stands for the *empty position*.
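
These conventions admit a direct formalization (a sketch; the Python names and the string representation of colors are our own choices):

```python
from dataclasses import dataclass
from typing import Tuple

MACHINE = "green"        # the machine's color throughout this article
ENVIRONMENT = "red"      # the environment's color

@dataclass(frozen=True)
class ColoredMove:
    label: str           # a move: any finite string over the keyboard alphabet
    color: str           # MACHINE or ENVIRONMENT: who made the move

# A run is a sequence of colored moves; a position is a finite run.
Run = Tuple[ColoredMove, ...]

EMPTY_POSITION: Run = ()   # the empty position 〈 〉

def position(*moves: ColoredMove) -> Run:
    """Build a finite run (position) from the given colored moves."""
    return tuple(moves)
```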

2.3 Constant games

A **gamestructure** is a nonempty set **Lr** of runs, called **legal runs**, such that **Lr** is closed under initial segments, and an infinite run is in **Lr** iff so are all of its finite initial segments.

Gamestructures
can
be visualized as upside-down trees where the nodes represent positions.
Each
edge is labeled with a colored move. Such an edge stands for a legal
move in
the position represented by the upper node of it. Here is an example:

**Figure 2.3.1:** A gamestructure

This gamestructure consists of the following 16 runs: 〈 〉, 〈α〉, 〈β〉, 〈γ〉, 〈α, β〉, 〈α, γ〉, 〈β, α〉, 〈γ, α〉, 〈γ, β〉, 〈γ, γ〉, 〈α, γ, β〉, 〈α, γ, γ〉, 〈γ, α, β〉, 〈γ, α, γ〉, 〈γ, β, α〉, 〈γ, γ, α〉. All of the infinitely (in fact, uncountably) many other runs are illegal. An example of an illegal run is 〈γ, γ, β, β, α〉. The offender in it is Environment, because the offending third move is red.
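
Since a gamestructure is determined by its legal finite runs, it can be represented as a prefix-closed set. The sketch below encodes the 16 legal runs of Figure 2.3.1 (moves abbreviated to "a", "b", "g"; colors omitted, which is our own simplification) and locates the offending move of an illegal run:

```python
# Legal finite runs of the gamestructure of Figure 2.3.1,
# with moves written as "a" (alpha), "b" (beta), "g" (gamma).
LEGAL = {
    (), ("a",), ("b",), ("g",),
    ("a","b"), ("a","g"), ("b","a"), ("g","a"), ("g","b"), ("g","g"),
    ("a","g","b"), ("a","g","g"), ("g","a","b"), ("g","a","g"),
    ("g","b","a"), ("g","g","a"),
}

def is_legal(run):
    return tuple(run) in LEGAL

def offending_move(run):
    """Return the 0-based index of the first move that takes the run
    outside LEGAL, or None if the run is legal. The player who made
    that move is the offender."""
    for i in range(1, len(run) + 1):
        if tuple(run[:i]) not in LEGAL:
            return i - 1
    return None
```

On the illegal run 〈γ, γ, β, β, α〉 this reports index 2: the third move is where the run first leaves the tree, matching the example above.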

Let **Lr** be a gamestructure. A **content** on **Lr** is a function **Wn**: **Lr** → {⊤, ⊥}. When **Wn**〈Γ〉 = ⊤, we say that Γ is **won** by the machine (and lost by the environment); when **Wn**〈Γ〉 = ⊥, Γ is won by the environment and lost by the machine.

**Definition 2.3.2** A *constant game* *G* is a pair **(Lr**^{G}**, Wn**^{G}**)**, where **Lr**^{G} is a gamestructure, and **Wn**^{G} is a content on **Lr**^{G}.

Figure
2.3.3
below
shows a constant game of the structure of Figure
2.3.1:

**Figure 2.3.3:** A constant game

Here
the
winner in
each position is indicated by the color of the corresponding node. So,
for
instance, the run 〈γ, α, β〉 is won by the
machine; but if this run had stopped after its second move, then the
environment would be the winner. Of
course, such a way of indicating winners is not sufficient if there are
infinitely long branches (legal runs), for such branches do not have
“the
corresponding nodes”.

We say
that a
constant game is ** strict**
iff, in every legal position, at most one of the two
players has legal moves. Our games generally are not strict. For
instance, in
the start position of the game of Figure
2.3.3, both
players have legal moves. We call such (not-necessarily-strict) games **free**.

Because
our
games
are free, strategies for them cannot be defined as functions from
positions to
moves, because, in some positions (such as the root position in the
game of Figure 2.3.3)
both players may have legal moves and, if
both are willing to move, which of them acts sooner will determine what
will be
the next move.

The
exact
meaning of
“strategy” will be defined later, but whatever it means, we can see
that the
machine has a winning strategy in the game of Figure
2.3.3, which can be put this way:
*Regardless of what the adversary
is doing or has done, go ahead and make move *α*; make *β* as your
second move if and when you see that the
adversary has made move *γ*, no
matter
whether this happened before or after your
first move*. This strategy obviously
guarantees that the
machine will not offend. There are 5 possible legal runs
consistent with it, i.e.,
legal runs that
could be generated when it is followed by the machine: 〈α〉, ** **〈α, β〉,** **〈β, α〉,** **〈α, γ,
β〉 ** **and** **〈γ, α, β〉. All are won
by the
machine.

Let *G* be a constant game. *G* is said to be **finite-depth** iff there is a (smallest) integer *d*, called the **depth** of *G*, such that the length of every legal run of *G* does not exceed *d*.

As
noted in Section 2.2, computational
problems in the traditional
sense are nothing but functions (to be computed). Such problems can be
seen as
the following types of games of depth 2:

**Figure 2.3.4:** The “successor” game

- Why is the root green here? Because it corresponds to the situation where there was no input. The machine has nothing to answer for, so it wins.
- Why are the 2^{nd}-level nodes red? Because they correspond to situations where there was an input but no output was generated by the machine. So the machine loses.
- Why does each group of 3^{rd}-level nodes have exactly one green node? Because a function has exactly one (“correct”) value for each argument.
- What particular function is this game about? The successor function: *f*(*n*) = *n*+1.

Once we
agree
that
computational problems are nothing but games, the difference in the
degrees of
generality and flexibility between the traditional approach to
computational
problems and our approach becomes apparent and appreciable. What we see
in Figure 2.3.4 is
indeed a very special sort of games,
and there is no good call for confining ourselves to its limits. In
fact,
staying within those limits would seriously retard any more or less
advanced
and systematic study of computability.

First
of all,
one
would want to get rid of the “one green node per sibling group”
restriction for
the third-level nodes. Many natural problems, such as the problem of
finding a
prime integer between *n*
and 2*n*,
or finding an
integral root of *x*^{2}-2*n*=0,
may have more than one as well as less than one solution. That is,
there can be
more than one as well as less than one “right” output on a given input *n*.

And why not further get rid of any remaining restrictions on the colors of whatever-level nodes and whatever-level arcs? One
can easily
think of natural situations where, say, some inputs do not obligate the
machine
to generate an output and thus the corresponding 2^{nd}
level nodes
should be green. An example would be the case where the machine is
computing a
partially-defined function *f*
and receives an input *n*
on which *f*
is undefined.

So far
we have
been
talking about generalizations within the depth-2 restriction,
corresponding to
viewing computational problems as very short dialogues between the
machine and
its environment. Permitting longer-than-2 or even infinitely long runs
would
allow us to capture problems with arbitrarily high degrees of
interactivity and
arbitrarily complex interaction protocols.
The task performed by a network server is an example of an
infinite
dialogue between the server and its environment ---
the collection of clients, or let us just say the rest of the
network.

It also
makes
sense
to consider “dialogues” of lengths less than 2. “Dialogues” of length 0, i.e. games of depth 0, are said to be **elementary**. There are exactly two elementary constant games, denoted by ⊤ and ⊥:

Note that the symbols ⊤ and ⊥ thus have dual meanings: they may stand for the two elementary games as above, or for the corresponding two players. It will usually be clear from the context which of these two meanings we have in mind.

Just like classical logic, CoL sees no extensional distinction between “snow is white” and ⊤, or between “snow is black” and ⊥: all true propositions, as games, are ⊤, and all false propositions are ⊥. In other words, a proposition is a game automatically won by the machine when true and lost when false. This way, CoL’s concept of a constant game is a generalization of the classical concept of a proposition.

2.4 Not-necessarily-constant games

We fix an infinite set *Variables* = {*var*_{0}, *var*_{1}, *var*_{2}, …} of **variables**. As usual, lowercase letters near the end of the Latin alphabet will be used to stand (as metavariables) for variables. We further fix the set *Constants* = {0, 1, 2, …} of **constants**.

A **universe** (of discourse) is a pair (**Dm**, **Dn**), where **Dm** is a nonempty set called the **domain**, and **Dn** is a function of the type *Constants* → **Dm**, called the **denotation function**.

A universe (**Dm**, **Dn**) is said to be *ideal* iff **Dm** = *Constants* and **Dn** is the identity function on *Constants*.

Where *Vr* is a set of variables and *Dm* is the domain of some universe, by a *Vr*→*Dm* **valuation** we mean a (total) function of the type *Vr* → *Dm*.

**Definition 2.4.1** Let *n* be a natural number. An *n*-ary **game** is a tuple *G* = (**Dm**^{G}, **Dn**^{G}, **Vr**^{G}, **Mp**^{G}), where (**Dm**^{G}, **Dn**^{G}) is a universe, **Vr**^{G} is a set of *n* distinct variables, and **Mp**^{G} is a mapping that assigns a constant game to each **Vr**^{G}→**Dm**^{G} valuation.

We refer to the elements of **Vr**^{G} as the variables on which the game *G* depends.

In
classical logic, under an intensional (variable-sensitive)
understanding, the definition of the concept of an *n*-ary
predicate would look exactly like our definition of an *n*-ary
game after omitting the (now
redundant) **Dn** component, with the
only difference that there the **Mp**
function returns propositions rather than constant games. And, just
like
propositions are nothing but 0-ary predicates, constant games are
nothing but
0-ary games. Thus, games generalize constant games in the same way as
predicates generalize propositions. And, as constant games are
generalized
propositions, games are generalized predicates.

In formal contexts, we choose a similar intensional approach to functions. The definition of a function *f* below is literally the same as our definition of a game *G*, with the only difference that **Mp**^{f} is now a mapping of **Vr**^{f}→**Dm**^{f} valuations to elements of **Dm**^{f} rather than to constant games.

**Definition 2.4.2** Let *n* be a natural number. An *n*-ary **function** is a tuple *f* = (**Dm**^{f}, **Dn**^{f}, **Vr**^{f}, **Mp**^{f}), where (**Dm**^{f}, **Dn**^{f}) is a universe, **Vr**^{f} is a set of *n* distinct variables, and **Mp**^{f} is a mapping that assigns an element of **Dm**^{f} to each **Vr**^{f}→**Dm**^{f} valuation.

Just like in the case of games, we refer to the elements of **Vr**^{f} as the variables on which the function *f* depends.

Given a game *G* and an *X*→**Dm**^{G} valuation *e* with *X* ⊇ **Vr**^{G}, we write *e*[*G*] to denote the constant game **Mp**^{G}(*e*′), where *e*′ is the restriction of *e* to **Vr**^{G}.

**Convention 2.4.4** Assume *F* is a function (resp. game). Following the standard readability-improving practice established in the literature for functions and predicates, we may fix a tuple (*x*_{1},…,*x*_{n}) of pairwise distinct variables for *F* when first mentioning it, and write *F* as *F*(*x*_{1},…,*x*_{n}). When doing so, we do not necessarily mean that {*x*_{1},…,*x*_{n}} = **Vr**^{F}. Representing *F* in the form *F*(*x*_{1},…,*x*_{n}) allows us to later write *F*(*g*_{1},…,*g*_{n}), where the *g*_{i} are functions, to denote the game (resp. function) *H* such that:

- For any *H*-valuation *e*, *e*[*H*] = *e*′[*F*], where *e*′ is the *F*-valuation with *e*′[*x*_{1}] = *e*[*g*_{1}], …, *e*′[*x*_{n}] = *e*[*g*_{n}].

Further, we allow any of the above “functions” *g*_{i} to be just a constant *c*, understanding it as the 0-ary function whose value is (the denotation of) *c*.

The entities that in common language we call games are at least as often non-constant as constant. Board games such as chess and checkers are examples of constant games. On the other hand, almost all card games are more naturally represented as non-constant games: each session/instance of such a game is determined by a particular permutation of the card deck, and thus the game can be understood as a game that depends on a variable ranging over such permutations.

Consider a game *G*. What we call a **unilegal** run of *G* is a run that is a legal run of *e*[*G*] for every valuation *e*; and *G* is **unistructural** iff, for every valuation *e*, every legal run of *e*[*G*] is unilegal.

Non-constant games, as long as they are unistructural, can be visualized as trees in the style seen earlier, with the difference that the nodes of the tree can now be labeled with any predicates rather than only propositions (colors) as before. For any given valuation *e*, each such label *L* tells us the color of the node: the *L*-labeled node is green if *L* is true at *e*, and red if *L* is false.

**Figure
2.3.5:**
The
“generalized successor” game

The above figure shows a game which depends on *z*. Specifically, for every valuation *e*, the game is about computing the function *f*_{e}, defined by *f*_{e}(*n*) = *n* + *e*(*z*).

Denoting the game of Figure 2.3.5 by *G*(*z*), the game of Figure 2.3.4 can be seen to be the instance *G*(1) of it. The latter results from replacing *z* by 1 in Figure 2.3.5. This replacement turns every label *m* = *n*+*z* into the proposition *m* = *n*+1, i.e., into ⊤ (green filling) or ⊥ (red filling).

2.5 Static games

In the particular games that we have seen so far or will see in the subsequent sections, when talking about the existence of a winning strategy or the absence thereof, the question about the (relative) speeds of the players was never relevant. That is because all those games share one nice property, called the **static** property. Below are some intuitive characterizations of this important class of games.

- Static games are games where the speed of the adversary is not an issue: if a player has a winning strategy, the strategy will remain good no matter how fast its adversary is in making moves. And if a player does not have a winning strategy, it cannot succeed no matter how slow the adversary is.
- Static games are games where “it never hurts a player to postpone making moves” (Blass’ words from his AMS review of [Jap03]).
- Static games are contests of intellect rather than speed. In such games, what matters is *what* to do (strategy) rather than *how fast* to do it (speed).

The games that are not static we call **dynamic**. The following two games, *A* and *B*, are dynamic:

In
either
game, the
player who is quick enough to make the first move becomes the winner.
And
asking whether Machine has a winning strategy is not really meaningful:
whether
Machine can win or not depends on the relative speeds of the two
players. In a
sense, *B*
is even “worse” than *A*.
Just as in *A*,
it would not be a
good idea for Machine to wait, because, unless Environment is stupid
enough to
also decide to wait, Machine will lose. Let us now say that Machine
goes ahead
and initiates the first move. What may happen in *B*
is that Environment moves before Machine actually completes its
move. This will make Machine not only the loser but also the offender.
Machine
cannot even be sure that its move will be legal!
If communication happens by exchanging messages through a (typically asynchronous) network, which often has unpredictable delays, this can also be a contest of luck: assuming that the
arbitration is done by Environment or a third party who is recording
the order
of moves, it is possible that Machine makes a move earlier than
Environment
does, but the message carrying that move is delivered to the arbiter only after
Environment’s move
arrives, and the arbiter will be unable to see that it was Machine who
moved
first. An engineering-style attempt to neutralize this problem could be
to let
all moves carry timestamps. But this would not save the case: even
though
timestamps would correctly show the real order of moves by each
particular
player, they could not be used to compare two moves by two different
players,
for the clocks of the two players would never be perfectly synchronized.

Another
attempt to
deal with problems in the above style could be to assign to each
player, in a
strictly alternating order, a constant-length time slot during which
the player
has exclusive access to the communication medium. Aside from the fact that this could introduce some unfair asymmetry in favor of the player who gets the first slot, the essence of the problem would still not be taken care of: some games would
still essentially depend on the relative speeds of the two players,
even if
arguably no longer on the speed of the network.

Formally, the concept of static games is defined in terms of “delays”. We say that a run Δ is a **green** (resp. **red**) **delay** of a run Γ iff the following two conditions are satisfied:

- The moves of either color are arranged in Δ in the same order as in Γ.
- For any *n*, *k* ≥ 1, if the *k*th red (resp. green) move is made earlier than the *n*th green (resp. red) move in Γ, then so is it in Δ.

In other words, Δ is the result of shifting to the right (“delaying”) some green (resp. red) moves in Γ without violating their order.

Example: 〈0, 2, 1, 3, 4, 5, 7, 8, 6, 10, 9〉 is a green delay of 〈0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10〉. The former is obtained from the latter by shifting to the right some green moves. When doing so, a green move can jump over a red move, but one green move cannot jump over another green move.
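To make the delay relation concrete, here is a small sketch in Python under an encoding of our own (the text itself does not prescribe one): a run is a list of (color, move) pairs, and the two bulleted conditions above are checked literally.

```python
# A run is encoded as a list of (color, move) pairs, where color is
# "green" (Machine) or "red" (Environment). This encoding is assumed
# for this sketch only.

def moves_of(run, color):
    """The subsequence of moves of the given color."""
    return [m for c, m in run if c == color]

def positions_of(run, color):
    """The indices at which the given color moves."""
    return [i for i, (c, _) in enumerate(run) if c == color]

def is_green_delay(delta, gamma):
    """Is delta a green delay of gamma?
    (1) each color's moves keep their mutual order;
    (2) every red-before-green precedence in gamma survives in delta."""
    if moves_of(delta, "green") != moves_of(gamma, "green"):
        return False
    if moves_of(delta, "red") != moves_of(gamma, "red"):
        return False
    rg, gg = positions_of(gamma, "red"), positions_of(gamma, "green")
    rd, gd = positions_of(delta, "red"), positions_of(delta, "green")
    for k in range(len(rg)):
        for n in range(len(gg)):
            if rg[k] < gg[n] and not rd[k] < gd[n]:
                return False
    return True

gamma = [("green", "a"), ("red", "b")]
delta = [("red", "b"), ("green", "a")]   # the green move shifted right
```

Here `is_green_delay(delta, gamma)` holds, while `is_green_delay(gamma, delta)` fails: un-delaying the green move would destroy a red-before-green precedence, violating condition (2).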

Now, we say that a constant game *G* is **static** iff, for either player (color) *P* and any runs Γ and Δ such that Δ is a *P*-delay of Γ, we have:

- If Γ is won by *P*, then so is Δ.
- If *P* does not offend in Γ, then neither does it in Δ.

All game operations studied in CoL have been shown to preserve the static property of games [Jap03]. So, as long as the atoms of an expression represent static games, so does the whole expression. One natural subset of all static games is the closure of elementary games under the operations of CoL.

As already noted, CoL restricts
its attention to static
games only (at least, at its present stage of development). To see why
static
games are really what CoL is willing to call “pure” computational
problems,
imagine a play over the delay-prone Internet. If there is a central
arbiter
(which can be located either in one of the players’ computers or somewhere on a
third, neutral territory) recording the order of moves, then the
players have
full access to information about the official version of the run that
is being
generated, even though they could suffer from their moves being
delivered with
delays. But let us make the situation even more dramatic: assume, as
this is a
typical case in distributed systems, that there is no central arbiter.
Rather,
each player’s machine records moves in the order it receives them, so
that we
have what is called distributed arbitration. Every time a player makes
a move,
the move is appended to the player’s internal record of the run and, at
the
same time, mailed to the adversary. And every time a message from the
adversary
arrives, the move contained in it is appended to the player’s record of
the
run. The game starts. Seeing no messages from Environment, Machine
decides to
go ahead and make an opening move α. As it happens, Environment also decides to make an “opening” move β. The messages carrying α and β cross. So, after they are both delivered, Machine’s internal records show the position 〈α, β〉, while Environment thinks that the current position is 〈β, α〉. Both players decide to make two consecutive new moves: γ, δ and ε, ω, respectively, and the two pairs of messages, again, cross.

After
making
their
second series of moves and receiving a second series of “replies” from
their
adversaries, both players decide to make no further moves. The game
thus ends.
According to Machine’s records, the run was 〈α, β, γ, δ, ε, ω〉, while Environment thinks that the run was 〈β, α, ε, ω, γ, δ〉. As for the “real run”, i.e. the real order in which these six moves were made (if this concept makes sense at all), it can be yet something different, such as, say, 〈β, α, γ, ε, δ, ω〉. A little
thought
can convince us that in any case the real run, as well as the version
of the
run seen by Environment, will be a green delay of the version of the
run seen
by Machine. Similarly, the real run, as well as the version of the run
seen by
Machine, will be a red delay of the version of the run seen by
Environment.
Hence, provided that the game is static, either player can fully trust
its own
version of the run and only care about making good moves for this
version,
because regardless of whether it shows the true or a distorted picture
of the
real run, the latter is guaranteed to be successful as long as the
former
is. Moreover: for
similar reasons, the
player will remain equally successful if, instead of immediately
appending the
adversary's moves to its own version of the run, it simply queues those
moves
in a buffer as if they had not arrived yet, and fetches them only later
at a
more convenient time, after perhaps making and appending to its records
some of
its own moves first. The effect will amount to having full control over
the
speed of the adversary, thus allowing the player to select its own pace
for the
play and worry only about what moves to make rather than how quickly to
make
them.

Thus,
static
games
allow players to make a full abstraction from any specific assumptions
regarding the type of arbitration (central or distributed), the speed
of the
communication network and the speed of the adversary: whatever strategy
they
follow, it is always safe to assume or pretend that the arbitration is
fair and
unique (central), the network is perfectly accurate (zero delays) and
the
adversary is “slow enough”. On
the other
hand, with some additional thought, we can see that if a game is not
static,
there will always be situations when no particular one of the above
three sorts
of abstractions can be made. Specifically, such situations will emerge
every
time when a player *P*’s strategy
generates a *P*-won run that has some
*P*-lost *P*-delays.

**3
The CoL zoo of game operations**

3.1 Preview

As we
already
know,
logical operators in CoL stand for operations on games. There is an
open-ended
pool of operations of potential interest, and which of those to study
may
depend on particular needs and taste. Below is an incomplete list of
the
operators that have been officially introduced so far.

*Negation*

*Conjunctions:* parallel, choice, sequential, toggling

*Disjunctions:* parallel, choice, sequential, toggling

*Recurrences:* branching, parallel, sequential, toggling

*Corecurrences:* branching, parallel, sequential, toggling

*Universal quantifiers:* blind, parallel, choice

*Existential quantifiers:* blind, parallel, choice

*Implications:* parallel, choice, sequential, toggling

*Rimplications:* branching, parallel, sequential, toggling

Among
these we
see
all operators of classical logic, and our choice of the classical
notation for
them is no accident. It was pointed out earlier
that
classical logic is nothing but the elementary, zero-interactivity
fragment of
computability logic. Indeed, after analyzing the relevant definitions,
each of
the classically-shaped operations, when restricted to elementary games, can
be easily seen to be
virtually the same as the corresponding operator of classical logic.
For
instance, if *A* and *B* are elementary games, then so is *A* ∧ *B*, and the latter is exactly the classical conjunction of *A* and *B* understood as an (elementary) game. In a general --- not-necessarily-elementary --- case, however, ¬, ∧, ∨, → become more reminiscent of (yet not the same as) the corresponding multiplicative operators of linear logic. Of course, here we are comparing apples with oranges
for, as noted
earlier, linear logic is a syntax while computability logic is a
semantics, and
it may be not clear in what precise sense one can talk about
similarities or
differences.

In the same apples-and-oranges style, our operations ⊓, ⊔, ⊓*x*, ⊔*x* can be perceived as relatives of the additive connectives and quantifiers of linear logic; ∧*x*, ∨*x* as “multiplicative quantifiers”; and the recurrence and corecurrence operations as “exponentials”, even though it is hard to guess which of the two groups --- the branching or the parallel one --- linear logic would recognize as the counterparts of its own exponentials.
In this section we are going to see intuitive explanations as well as formal definitions of all of the above-listed operators. We agree that throughout those definitions, Φ ranges over positions, and Γ over runs. Each such definition has two clauses, one telling us when a position is a legal position of the compound game, and the other telling us who wins any given legal run. The run Γ seen in the second clause of a definition is always implicitly assumed to be a legal run of the game that is being defined.

This section
also provides many examples of particular
games. Let us agree that, unless otherwise suggested by the context, in
all
those cases we have the ideal universe in mind. Often we let
non-numerals such
as people, Turing machines, etc. act in the roles of “constants”. These
should
be understood as abbreviations of the corresponding decimal numerals
that
encode these objects in some fixed reasonable encoding. It should also be remembered that algorithmicity is a minimum requirement on ⊤’s strategies.
Some of
our examples implicitly assume stronger requirements, such as
efficiency or
ability to act with imperfect knowledge. For instance, the problem of
telling
whether there is or has been life on Mars is, of course, decidable, for
this is
a finite problem. Yet our knowledge does not allow us to actually solve
the
problem. Similarly, chess is a finite game and (after ruling out the
possibility of draws) one of the players does have a winning strategy
in it.
Yet we do not know specifically what (and which player’s) strategy is a
winning
one.

When
omitting
parentheses in compound expressions, we assume that all unary operators
(negation, refutations, recurrences, corecurrences and quantifiers)
take precedence
over all
binary operators (conjunctions, disjunctions, implications,
rimplications),
among which implications and rimplications have the lowest precedence.
So, for
instance, ¬*A* ∨ *B* → *C* should be understood as ((¬*A*) ∨ *B*) → *C* rather than, say, as ¬((*A* ∨ *B*) → *C*) or as (¬*A*) ∨ (*B* → *C*).

**Theorem
3.1.1**
(proven in [Jap03, Jap08b,
Jap11a]) All operators listed
in
this subsection preserve the static property of games (i.e., when
applied to
static games, the resulting game is also static).

3.2 Prefixation

Unlike
the
operators
listed in the preceding subsection, the operation of prefixation is not
meant
to act as a logical operator in the formal language of CoL. Yet, it is
very
useful in characterizing and analyzing games, and we want to start our
tour of
the zoo with it.

**Definition 3.2.1** Assume *A* is a game and Ψ is a unilegal position of *A* (otherwise the operation is undefined). The Ψ-**prefixation** of *A*, denoted 〈Ψ〉*A*, is defined as the game *G* with **Un**^{G} = **Un**^{A}, **Vr**^{G} = **Vr**^{A}, and with:

**Lr**_{e}^{G} = {Φ : 〈Ψ, Φ〉 ∈ **Lr**_{e}^{A}};  **Wn**_{e}^{G}〈Γ〉 = **Wn**_{e}^{A}〈Ψ, Γ〉.

Intuitively, 〈Ψ〉*A* is the game playing which means playing *A* starting (continuing) from position Ψ. That is, 〈Ψ〉*A* is the game to which *A* **evolves** (is *brought down*) after the moves of Ψ have been made.
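Prefixation can be sketched in Python for finite constant games. The encoding below (a game as a set of legal runs plus a table of winners, with "T"/"B" standing in for the two players) is a simplification of our own; the function itself follows the two clauses of Definition 3.2.1 directly.

```python
# A constant game is encoded as (Lr, Wn): Lr is the set of legal runs
# (tuples of moves) and Wn maps each legal run to its winner, "T"
# (Machine) or "B" (Environment). Finite-set toy encoding, chosen for
# this sketch; real CoL games may have infinitely many legal runs.

def prefixation(game, psi):
    """<Psi>A: legal runs are the Phi with Psi+Phi legal in A, and the
    winner of Gamma is the winner of Psi+Gamma in A."""
    lr_a, wn_a = game
    psi = tuple(psi)
    assert psi in lr_a, "Psi must be a legal position of A"
    lr_g = {phi[len(psi):] for phi in lr_a if phi[:len(psi)] == psi}
    wn_g = {gamma: wn_a[psi + gamma] for gamma in lr_g}
    return lr_g, wn_g

# A depth-2 game: Environment may move "0"; Machine may then answer "1".
lr = {(), ("0",), ("0", "1")}
wn = {(): "T", ("0",): "B", ("0", "1"): "T"}
A = (lr, wn)
B = prefixation(A, ("0",))   # the game A evolves to after move "0"
```

In `B`, the empty run is now lost by Machine (it still owes an answer), and the run `("1",)` is won, matching the idea of continuing `A` from position 〈0〉.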

3.3 Negation

For a run Γ, by ¬Γ we mean the “negative image” of Γ (green and red interchanged). For instance, if Γ = 〈α, β, γ〉 where α and γ are green moves and β is red, then ¬Γ is the same 〈α, β, γ〉 with α and γ now red and β green.
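In code, taking the negative image of a run amounts to flipping the owner of every move while keeping its content; the "T"/"B" labels below are our own stand-ins for the green/red colors.

```python
# A run is a list of (player, move) pairs, player being "T" or "B".

def negate_run(run):
    """The "negative image" of a run: every move keeps its content
    but switches owners (colors)."""
    flip = {"T": "B", "B": "T"}
    return [(flip[p], m) for p, m in run]
```

Applying `negate_run` twice returns the original run, in line with the double negation principle discussed below for games.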

**Definition 3.3.1** Assume *A* is a game. ¬*A* is defined as the game *G* with **Un**^{G} = **Un**^{A}, **Vr**^{G} = **Vr**^{A}, and with:

**Lr**_{e}^{G} = {Φ : ¬Φ ∈ **Lr**_{e}^{A}};  **Wn**_{e}^{G}〈Γ〉 = ⊤ iff **Wn**_{e}^{A}〈¬Γ〉 = ⊥.

In other words, ¬*A* is *A* with the roles of the two players interchanged: Machine’s (legal) moves and wins become Environment’s moves and wins, and vice versa. So, when visualized as a tree, ¬*A* is the exact negative image of *A*, as illustrated below:

Figure 3.3.2**:**
Negation

Let *Chess* be the game of chess (with draws ruled out) from the point of view of the white player. Then ¬*Chess* is *Chess* “upside down”, i.e., the game of chess from the point of view of the black player:

Observe that the classical double negation principle ¬¬*A* = *A* holds: interchanging the players’ roles twice restores the original roles of the players. It is also easy to see that we always have ¬〈Ψ〉*A* = 〈¬Ψ〉¬*A*. So, for instance, if α is Machine’s legal move in the empty position of *A* that brings *A* down to *B*, then the same α is Environment’s legal move in the empty position of ¬*A*, and it brings ¬*A* down to ¬*B*. Test the game *A* of Figure 3.3.2 to see that this is indeed so.

3.4 Choice operations

**Choice conjunction** (“⊓”) and **choice disjunction** (“⊔”) combine games in the following way.

*A* ⊓ *B* is a game where, in the initial (empty) position, only Environment has legal moves. Such a move should be either ‘0’ or ‘1’. If Environment moves 0, the game continues as *A*, meaning that 〈0〉(*A* ⊓ *B*) = *A*; if it moves 1, then the game continues as *B*, meaning that 〈1〉(*A* ⊓ *B*) = *B*; and if it fails to make either move (“choice”), then it loses. *A* ⊔ *B* is similar, with the difference that here it is Machine who has initial moves and who loses if no such move is made.

**Definition 3.4.1** Assume *A*_{0} and *A*_{1} are games with a common universe *U*.

a) *A*_{0} ⊓ *A*_{1} is defined as the game *G* with **Un**^{G} = *U* such that:

**Lr**_{e}^{G} = {〈 〉} ∪ {〈⊥*i*, Φ〉 : *i* ∈ {0,1}, Φ ∈ **Lr**_{e}^{A_{i}}};  **Wn**_{e}^{G}〈 〉 = ⊤;  **Wn**_{e}^{G}〈⊥*i*, Γ〉 = **Wn**_{e}^{A_{i}}〈Γ〉.

b) *A*_{0} ⊔ *A*_{1} is defined as the game *G* with **Un**^{G} = *U* such that:

**Lr**_{e}^{G} = {〈 〉} ∪ {〈⊤*i*, Φ〉 : *i* ∈ {0,1}, Φ ∈ **Lr**_{e}^{A_{i}}};  **Wn**_{e}^{G}〈 〉 = ⊥;  **Wn**_{e}^{G}〈⊤*i*, Γ〉 = **Wn**_{e}^{A_{i}}〈Γ〉.

The game *A* of Figure 3.3.2 can now be easily seen to be a ⊓-⊔ combination of the elementary games ⊤ and ⊥, and its negation the dual ⊔-⊓ combination. The De Morgan laws familiar from classical logic persist: we always have ¬(*A* ⊓ *B*) = ¬*A* ⊔ ¬*B* and ¬(*A* ⊔ *B*) = ¬*A* ⊓ ¬*B*. Together with the earlier observed double negation principle, this means that *A* ⊓ *B* = ¬(¬*A* ⊔ ¬*B*) and *A* ⊔ *B* = ¬(¬*A* ⊓ ¬*B*). Similarly for the quantifier counterparts ⊓*x* and ⊔*x* of ⊓ and ⊔.
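For finite games, the De Morgan laws can be checked mechanically. The tree encoding below is a toy of our own making (leaves are the elementary games; internal nodes record which player must choose), but on it negation and the two choice connectives behave exactly as described above.

```python
# Finite constant games as trees, a toy encoding for this sketch:
# ("win", p) is an elementary game won by player p ("T" or "B");
# ("choice", p, {move: subgame}) is a position where p must choose.

def neg(g):
    """Negation: interchange the two players throughout the tree."""
    flip = {"T": "B", "B": "T"}
    if g[0] == "win":
        return ("win", flip[g[1]])
    _, p, kids = g
    return ("choice", flip[p], {m: neg(k) for m, k in kids.items()})

def cand(a, b):   # choice conjunction: Environment ("B") picks 0 or 1
    return ("choice", "B", {0: a, 1: b})

def cor(a, b):    # choice disjunction: Machine ("T") picks 0 or 1
    return ("choice", "T", {0: a, 1: b})

T, B = ("win", "T"), ("win", "B")

# De Morgan and double negation, checked structurally:
A, Bg = cand(T, B), cor(B, T)
assert neg(cand(A, Bg)) == cor(neg(A), neg(Bg))
assert neg(neg(A)) == A
```

Structural equality of the trees is exactly the (intensional) identity of games this toy encoding supports; the checks pass for any games built from these constructors.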

Given a game *A*(*x*), the **choice universal quantification** (“⊓”) ⊓*xA*(*x*) of it is understood as the “infinite conjunction” *A*(0) ⊓ *A*(1) ⊓ *A*(2) ⊓ …, and the **choice existential quantification** (“⊔”) ⊔*xA*(*x*) as *A*(0) ⊔ *A*(1) ⊔ *A*(2) ⊔ … .

Specifically, ⊓*xA*(*x*) is a game where, in the initial position, only Environment has legal moves, and such a move should be one of the constants. If Environment moves *c*, then the game continues as *A*(*c*), and if Environment fails to make an initial move/choice, then it loses. ⊔*xA*(*x*) is similar, with the difference that here it is Machine who has initial moves and who loses if no such move is made. So, we always have 〈*c*〉⊓*xA*(*x*) = *A*(*c*) and 〈*c*〉⊔*xA*(*x*) = *A*(*c*). Below is a formal definition of all choice operations.

**Definition 3.4.2** Assume *x* is a variable and *A* = *A*(*x*) is a game.

a) ⊓*xA*(*x*) is defined as the game *G* with **Un**^{G} = **Un**^{A} such that:

**Lr**_{e}^{G} = {〈 〉} ∪ {〈⊥*c*, Φ〉 : *c* ∈ *Constants*, Φ ∈ **Lr**_{e}^{A(c)}};  **Wn**_{e}^{G}〈 〉 = ⊤;  **Wn**_{e}^{G}〈⊥*c*, Γ〉 = **Wn**_{e}^{A(c)}〈Γ〉.

b) ⊔*xA*(*x*) is defined as the game *G* with **Un**^{G} = **Un**^{A} such that:

**Lr**_{e}^{G} = {〈 〉} ∪ {〈⊤*c*, Φ〉 : *c* ∈ *Constants*, Φ ∈ **Lr**_{e}^{A(c)}};  **Wn**_{e}^{G}〈 〉 = ⊥;  **Wn**_{e}^{G}〈⊤*c*, Γ〉 = **Wn**_{e}^{A(c)}〈Γ〉.

Now we are already able to express traditional computational problems using formulas. Traditional problems come in two forms: the problem of computing a function *f*(*x*), and the problem of deciding a predicate *p*(*x*). The former can be written as ⊓*x*⊔*y*(*y* = *f*(*x*)), and the latter as ⊓*x*(*p*(*x*) ⊔ ¬*p*(*x*)). So, for instance, the constant “successor” game of Figure 2.3.4 will be written as ⊓*x*⊔*y*(*y* = *x*+1), and the unary “generalized successor” game of Figure 2.3.5 as ⊓*x*⊔*y*(*y* = *x*+*z*). The following game, which is about deciding the “evenness” predicate, could be written as ⊓*x*(∃*y*(*x* = 2*y*) ⊔ ¬∃*y*(*x* = 2*y*)) (∃ will be officially defined later but, as promised, its meaning is going to be exactly classical when applied to an elementary game like *x* = 2*y*).
Classical logic has been repeatedly criticized for its operators not being constructive. Consider, for example, ∀*x*∃*y*(*y* = *f*(*x*)). It is true in the classical sense as long as *f* is a total function. Yet its truth has little (if any) practical import, as “∃*y*” merely signifies *existence* of *y*, without implying that such a *y* can actually be *found*. And, indeed, if *f* is an incomputable function, there is no method for finding *y*. On the other hand, the choice operations of CoL *are* constructive. Computability (“truth”) of ⊓*x*⊔*y*(*y* = *f*(*x*)) means more than just existence of *y*; it means the possibility to actually *find* (*compute*, *construct*) a corresponding *y* for every *x*.

For example, while ∀*x*∀*y*(*Halts*(*x*,*y*) ∨ ¬*Halts*(*x*,*y*)) is true classically, the game ⊓*x*⊓*y*(*Halts*(*x*,*y*) ⊔ ¬*Halts*(*x*,*y*)), expressing the problem of deciding the halting predicate, has no algorithmic solution.

*Chess* ⊔ ¬*Chess*, on the other hand, is an example of a computable-in-principle yet “practically incomputable” problem, with no real computer anywhere close to being able to handle it.

There is no need to give a direct definition for the remaining choice operation of **choice implication** (“⊃”), for it can be defined in terms of negation and choice disjunction:

**Definition 3.4.3** *A* ⊃ *B* =_{def} ¬*A* ⊔ *B*.

Each of the other sorts of disjunctions (parallel, sequential and toggling) generates the corresponding implication the same way.

3.5 Parallel operations

The **parallel conjunction** (“∧”) *A* ∧ *B* and **parallel disjunction** (“∨”) *A* ∨ *B* of *A* and *B* are games where the two components are played simultaneously: in order to win *A* ∧ *B*, Machine needs to win in both components, while for *A* ∨ *B* winning in one of the components is sufficient. For example, ¬*Chess* ∨ *Chess* is a two-board game where Machine plays black on one board and white on the other, and wins as long as it wins in at least one of the two parallel sessions of chess; the copycat strategy (mimicking, on each board, the adversary’s moves on the other board) easily wins this game.

This is very different from *Chess* ⊓ ¬*Chess*. In the latter, ⊤ needs to choose between the two components and then win the chosen one-board game, which makes *Chess* ⊓ ¬*Chess* essentially as hard to win as either *Chess* or ¬*Chess*. A game of the form *A* ∨ *B* is generally easier (at least, not harder) to win than *A* ⊔ *B*, the latter is easier to win than *A* ⊓ *B*, and the latter in turn is easier to win than *A* ∧ *B*.

Technically, a move α in the left (resp. right) ∧-conjunct or ∨-disjunct is made by prefixing α with ‘0.’ (resp. ‘1.’). For instance, in (the initial position of) (*A* ⊔ *B*) ∨ (*C* ⊓ *D*), the move ‘1.0’ is legal for ⊥, meaning choosing the left ⊓-conjunct in the second ∨-disjunct of the game. If such a move is made, the game continues as (*A* ⊔ *B*) ∨ *C*. The player ⊤, too, has initial legal moves in (*A* ⊔ *B*) ∨ (*C* ⊓ *D*), which are ‘0.0’ and ‘0.1’.

In
the formal definitions of
this section and throughout the rest of this webpage, we use the
important
notational convention according to which:

**Notation 3.5.1** For a run Γ and string α, Γ^{α} means the result of removing from Γ all moves except those of the form αβ, and then deleting the prefix ‘α’ in the remaining moves.

So, for instance, 〈1.2, 1.0, 0.33〉^{1.} = 〈2, 0〉 and 〈1.2, 1.0, 0.33〉^{0.} = 〈33〉.
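Notation 3.5.1 is easy to mirror in code; the sketch below treats moves as plain strings and reproduces the two example computations.

```python
# Gamma^alpha: keep only the moves carrying the prefix alpha,
# then strip that prefix from each of them.

def project(run, alpha):
    return [m[len(alpha):] for m in run if m.startswith(alpha)]

run = ["1.2", "1.0", "0.33"]
assert project(run, "1.") == ["2", "0"]
assert project(run, "0.") == ["33"]
```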

**Definition 3.5.2** Assume *A*_{0} and *A*_{1} are games with a common universe *U*.

a) *A*_{0} ∧ *A*_{1} is defined as the game *G* with **Un**^{G} = *U* such that:

- Φ ∈ **Lr**_{e}^{G} iff every move of Φ has the prefix ‘0.’ or ‘1.’ and, for both *i* ∈ {0,1}, Φ^{i.} ∈ **Lr**_{e}^{A_{i}};
- **Wn**_{e}^{G}〈Γ〉 = ⊤ iff, for both *i* ∈ {0,1}, **Wn**_{e}^{A_{i}}〈Γ^{i.}〉 = ⊤.

b) *A*_{0} ∨ *A*_{1} is defined as the game *G* with **Un**^{G} = *U* such that:

- Φ ∈ **Lr**_{e}^{G} iff every move of Φ has the prefix ‘0.’ or ‘1.’ and, for both *i* ∈ {0,1}, Φ^{i.} ∈ **Lr**_{e}^{A_{i}};
- **Wn**_{e}^{G}〈Γ〉 = ⊥ iff, for both *i* ∈ {0,1}, **Wn**_{e}^{A_{i}}〈Γ^{i.}〉 = ⊥.

When *A* and *B* are (constant) finite games, the depth of *A* ∧ *B* or *A* ∨ *B* is the sum of the depths of *A* and *B*, as seen below.

This
signifies
an exponential growth of the breadth, meaning
that, once we have reached the level of parallel operations, continuing
drawing
trees in the earlier style becomes no fun. Not to be disappointed
though:
making it possible to express large- or infinite-size game trees in a
compact
way is what our game operators are all about after all.

Whether trees are or are not helpful in visualizing parallel combinations of unistructural games, prefixation is still very much so if we think of each unilegal position Φ of *A* as the game 〈Φ〉*A*. This way, every unilegal run Γ of *A* becomes a sequence of games, as illustrated in the following example.

**Example 3.5.4** The (uni)legal run Γ = 〈1.7, 0.7, 0.49, 1.49〉 of the game *A* = ⊓*x*⊔*y*(*y* ≠ *x*^{2}) ∨ ⊓*x*⊔*y*(*y* = *x*^{2}) induces the following sequence, showing how things evolve as Γ runs, i.e., how the moves of Γ affect/modify the game that is being played:

- ⊓*x*⊔*y*(*y* ≠ *x*^{2}) ∨ ⊓*x*⊔*y*(*y* = *x*^{2})   i.e. *A*
- ⊓*x*⊔*y*(*y* ≠ *x*^{2}) ∨ ⊔*y*(*y* = 7^{2})   i.e. 〈1.7〉*A*
- ⊔*y*(*y* ≠ 7^{2}) ∨ ⊔*y*(*y* = 7^{2})   i.e. 〈1.7, 0.7〉*A*
- 49 ≠ 7^{2} ∨ ⊔*y*(*y* = 7^{2})   i.e. 〈1.7, 0.7, 0.49〉*A*
- 49 ≠ 7^{2} ∨ 49 = 7^{2}   i.e. 〈1.7, 0.7, 0.49, 1.49〉*A*

The run hits the true proposition 49 ≠ 7^{2} ∨ 49 = 7^{2}, and hence is won by the machine.
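The bookkeeping of Example 3.5.4 can be replayed in code. The encoding below (a move "i.c" meaning that the value c is chosen in disjunct i, first for x and then for y) is our own shorthand for this particular game.

```python
# Replaying Example 3.5.4: the game is two choice-quantified components
# played in parallel disjunction; moves "i.c" choose the value c in
# disjunct i (the first such move fixes x, the second fixes y).

def play(run):
    """Collect the values chosen in each disjunct, in order."""
    choices = {0: [], 1: []}
    for move in run:
        i, c = move.split(".")
        choices[int(i)].append(int(c))
    return choices

def machine_won(run):
    ch = play(run)
    (x0, y0), (x1, y1) = ch[0], ch[1]
    left = (y0 != x0 ** 2)     # the disjunct y != x^2
    right = (y1 == x1 ** 2)    # the disjunct y  = x^2
    return left or right       # parallel disjunction of the outcomes

assert machine_won(["1.7", "0.7", "0.49", "1.49"])
```

For the run of the example, the left disjunct ends up false (49 ≠ 7² fails) but the right one true, so the disjunction, and hence the game, is won by the machine.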

As we may guess, the **parallel universal quantification** (“∧”) ∧*xA*(*x*) of *A*(*x*) is understood as the “infinite conjunction” *A*(0) ∧ *A*(1) ∧ *A*(2) ∧ …, and the **parallel existential quantification** (“∨”) ∨*xA*(*x*) as *A*(0) ∨ *A*(1) ∨ *A*(2) ∨ … .

**Definition 3.5.5** Assume *x* is a variable and *A* = *A*(*x*) is a game.

a) ∧*xA*(*x*) is defined as the game *G* with **Un**^{G} = **Un**^{A} such that:

- Φ ∈ **Lr**_{e}^{G} iff every move of Φ has the prefix ‘*c*.’ for some *c* ∈ *Constants* and, for all such *c*, Φ^{c.} ∈ **Lr**_{e}^{A(c)};
- **Wn**_{e}^{G}〈Γ〉 = ⊤ iff, for all *c* ∈ *Constants*, **Wn**_{e}^{A(c)}〈Γ^{c.}〉 = ⊤.

b) ∨*xA*(*x*) is defined as the game *G* with **Un**^{G} = **Un**^{A} such that:

- Φ ∈ **Lr**_{e}^{G} iff every move of Φ has the prefix ‘*c*.’ for some *c* ∈ *Constants* and, for all such *c*, Φ^{c.} ∈ **Lr**_{e}^{A(c)};
- **Wn**_{e}^{G}〈Γ〉 = ⊥ iff, for all *c* ∈ *Constants*, **Wn**_{e}^{A(c)}〈Γ^{c.}〉 = ⊥.

The next group of parallel operators are **parallel recurrence** (“∧|”) and **parallel corecurrence** (“∨|”): ∧|*A* is nothing but the infinite parallel conjunction *A* ∧ *A* ∧ *A* ∧ …, and ∨|*A* is the infinite parallel disjunction *A* ∨ *A* ∨ *A* ∨ … .

**Definition 3.5.6** Assume *A* is a game.

a) ∧|*A* is defined as the game *G* with **Un**^{G} = **Un**^{A} such that:

- Φ ∈ **Lr**_{e}^{G} iff every move of Φ has the prefix ‘*c*.’ for some *c* ∈ *Constants* and, for all such *c*, Φ^{c.} ∈ **Lr**_{e}^{A};
- **Wn**_{e}^{G}〈Γ〉 = ⊤ iff, for all *c* ∈ *Constants*, **Wn**_{e}^{A}〈Γ^{c.}〉 = ⊤.

b) ∨|*A* is defined as the game *G* with **Un**^{G} = **Un**^{A} such that:

- Φ ∈ **Lr**_{e}^{G} iff every move of Φ has the prefix ‘*c*.’ for some *c* ∈ *Constants* and, for all such *c*, Φ^{c.} ∈ **Lr**_{e}^{A};
- **Wn**_{e}^{G}〈Γ〉 = ⊥ iff, for all *c* ∈ *Constants*, **Wn**_{e}^{A}〈Γ^{c.}〉 = ⊥.

As was the case with choice operations, we can see that the definition of each parallel operation seen so far in this section can be obtained from the definition of its dual by just interchanging ⊤ with ⊥. Hence it is easy to verify that we always have ¬(*A* ∧ *B*) = ¬*A* ∨ ¬*B*, ¬(*A* ∨ *B*) = ¬*A* ∧ ¬*B*, ¬∧*xA*(*x*) = ∨*x*¬*A*(*x*), ¬∨*xA*(*x*) = ∧*x*¬*A*(*x*), ¬∧|*A* = ∨|¬*A* and ¬∨|*A* = ∧|¬*A*. This, in turn, means that each parallel operation is definable in the standard way in terms of its dual operation and negation. For instance, *A* ∨ *B* can be defined as ¬(¬*A* ∧ ¬*B*), and ∨|*A* as ¬∧|¬*A*.
Three more parallel operations are defined in terms of negation and the other parallel operations: **parallel implication** (“→”), **parallel rimplication**, and **parallel refutation**.

**Definition 3.5.7** a) *A* → *B* =_{def} ¬*A* ∨ *B*;

b) the parallel rimplication of *B* from *A* =_{def} ∧|*A* → *B*;

c) the parallel refutation of *A* =_{def} *A* → ⊥.

Note that, just like negation and unlike choice operations, parallel operations preserve the elementary property of games. When restricted to elementary games, the meanings of ∧, ∨ and → coincide with those of classical conjunction, disjunction and implication. Further, as long as all individuals of the universe have naming constants, the meanings of ∧*x* and ∨*x* coincide with those of the classical universal and existential quantifiers. The same conservation of classical meaning (but without any conditions on the universe) is going to be the case with the blind quantifiers ∀ and ∃.

While all classical tautologies automatically remain valid when parallel operators are applied to elementary games, in the general case the class of valid (“always computable”) principles shrinks. For example, *P* → *P*∧*P*, i.e. ¬*P*∨(*P*∧*P*), is not valid. Back to our chess example, one can see that the earlier copycat strategy successful for ¬*Chess*∨*Chess* would be inapplicable to ¬*Chess*∨(*Chess*∧*Chess*). The best that ⊤ can do in this three-board game is to synchronize ¬*Chess* with one of the two conjuncts of *Chess*∧*Chess*. It is possible that then ¬*Chess* and the unmatched *Chess* are both lost, in which case the whole game will be lost.

The principle *P* → *P*∧*P* is valid in classical logic because the latter sees no difference between *P* and *P*∧*P*. On the other hand, in virtue of its semantics, CoL is *resource-conscious*, and in it *P* is by no means the same as *P*∧*P* or *P*⊓*P*. Unlike *P* → *P*∧*P*, its rimplication counterpart ∧|*P* → *P*∧*P* is a valid principle. Here, in the antecedent, we have infinitely many “copies” of *P*. Pick any two copies and, via copycat, synchronize them with the two conjuncts of the consequent. A win is guaranteed.

3.6 Reduction

Intuitively, *A* → *B* is the problem of *reducing* *B* to *A*: solving *A* → *B* means solving *B* while having *A* as a computational resource. Specifically, ⊤ may observe how *A* is being solved (by the environment), and utilize this information in its own solving of *B*. As already pointed out, resources are symmetric to problems: what is a problem to solve for one player is a resource that the other player can use, and vice versa. Since *A* is negated in ¬*A*∨*B* and negation means switching the roles, *A* appears as a resource rather than a problem for ⊤ in *A* → *B*. Our copycat strategy for ¬*Chess*∨*Chess* was an example of reducing *Chess* to *Chess*. The same strategy was underlying Example 3.5.4, where ⊓*x*⊔*y*(*y*=*x*^{2}) was reduced to itself.

Let us
look at
a
more meaningful example: reducing the acceptance problem to the
halting
problem. The former, as a decision problem, will be written as ⊓*x*⊓*y*(¬*Accepts*(*x*,*y*) ⊔ *Accepts*(*x*,*y*)), where *Accepts*(*x*,*y*) is the predicate “Turing machine *x* accepts input *y*”. Similarly, the halting problem is written as ⊓*x*⊓*y*(¬*Halts*(*x*,*y*) ⊔ *Halts*(*x*,*y*)). Neither problem has an algorithmic solution, yet the following pimplication does:

⊓*x*⊓*y*(¬*Halts*(*x*,*y*) ⊔ *Halts*(*x*,*y*)) → ⊓*x*⊓*y*(¬*Accepts*(*x*,*y*) ⊔ *Accepts*(*x*,*y*))

Here is
Machine’s
winning strategy for the above game. Wait till Environment makes moves 1.*m*
and 1.*n*
for some *m* and *n*.
Making these moves essentially means asking the question “*Does machine m accept input n?*”. If such moves are never
made, you win.
Otherwise, the moves bring the game down to

⊓*x*⊓*y*(¬*Halts*(*x*,*y*) ⊔ *Halts*(*x*,*y*)) → (¬*Accepts*(*m*,*n*) ⊔ *Accepts*(*m*,*n*))

Make
the moves
0.*m* and 0.*n*,
thus asking the counterquestion “*Does machine m
halt on input n?*”.
Your moves
bring the game down to

(¬*Halts*(*m*,*n*) ⊔ *Halts*(*m*,*n*)) → (¬*Accepts*(*m*,*n*) ⊔ *Accepts*(*m*,*n*))

Environment
will
have to answer this question, or else it loses (why?). If
it answers by
move 0.0 (“*No,
m
does not halt on n*”),
you make the move 1.0
(say “*m does
not accept n*”).
The game will be brought down to ¬*Halts*(*m*,*n*) → ¬*Accepts*(*m*,*n*).
You win, because if *m*
does not halt on *n*,
then it does not accept *n*,
either. Otherwise, if
Environment answers by move 0.1 (“*Yes,
m halts
on n*”),
start simulating *m*
on *n*
until *m*
halts. If you see
that *m*
accepted *n*,
make the move 1.1 (say “*m accepts
n*”);
otherwise make the
move 1.0 (say “*m
does not accept
n*”).
Of course, it is a possibility that the simulation goes on forever. But
then Environment
has lied when saying “*m
halts on n*”;
in other
words, the antecedent is false, and you still win.

Note
that what
Machine did when following the above
strategy was indeed reducing the acceptance problem to the halting
problem: it
solved the former using an external (Environment-provided) solution of
the
latter.
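The content of the above strategy can also be written as ordinary code. The following is a minimal Python sketch under toy assumptions (none of which come from the original text): a “Turing machine” is modeled as a Python callable, and the environment-provided halting resource as an oracle function `halts`.

```python
def accepts(machine, n, halts):
    """Solve the acceptance problem for (machine, n), using an oracle
    `halts` for the halting problem as an external resource.  Mirrors
    the strategy above: first ask the counterquestion 'does machine
    halt on n?', and simulate only when halting is guaranteed."""
    if not halts(machine, n):
        return False   # a machine that never halts on n accepts nothing
    return machine(n)  # safe to simulate: the oracle promised termination

# Toy stand-ins for Turing machines (hypothetical, for illustration only):
def even(n):           # halts on every input; accepts the even numbers
    return n % 2 == 0

def loop(n):           # never halts on any input
    while True:
        pass

def halts(machine, n): # a toy 'oracle', correct for the two machines above
    return machine is not loop
```

The point mirrors the text exactly: the simulation step is only ever reached when the oracle has guaranteed termination, so the procedure always halts even though `loop` does not.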

There are many natural concepts of reduction, and a strong case can be made in favor of the thesis that the sort of reduction captured by → is the most basic among them. For this reason, we agree that, if we simply say “reduction”, it always means the sort of reduction captured by →. A great variety of other reasonable concepts of reduction is expressible in terms of →. Among those is Turing reduction. Remember
that a predicate *q*(*x*)
is said to be *Turing reducible* to a
predicate *p*(*x*)
if *q*(*x*)
can be decided by a
Turing machine equipped with an oracle for *p*(*x*).
For a positive integer *n*,
*n*-*bounded
Turing reducibility* is defined in the same way, with the only
difference
that here the oracle is allowed to be used only *n*
times. It turns out that parallel rimplication is a conservative generalization of Turing reduction. Namely, when *p*(*x*) and *q*(*x*) are elementary games (i.e. predicates), *q*(*x*) is Turing reducible to *p*(*x*) if and only if the problem ∧|⊓*x*(¬*p*(*x*) ⊔ *p*(*x*)) → ⊓*x*(¬*q*(*x*) ⊔ *q*(*x*)) has an algorithmic solution. If here we drop the parallel recurrence ∧| from the antecedent, turning the rimplication into an ordinary implication, we get the same result for 1-bounded Turing reducibility. More generally, as one might guess, *n*-bounded Turing reduction will be captured by

⊓*x*_{1}(¬*p*(*x*_{1}) ⊔ *p*(*x*_{1})) ∧ … ∧ ⊓*x*_{n}(¬*p*(*x*_{n}) ⊔ *p*(*x*_{n})) → ⊓*x*(¬*q*(*x*) ⊔ *q*(*x*)).

If,
instead,
we write

⊓*x*_{1}…⊓*x*_{n}((¬*p*(*x*_{1}) ⊔ *p*(*x*_{1})) ∧ … ∧ (¬*p*(*x*_{n}) ⊔ *p*(*x*_{n}))) → ⊓*x*(¬*q*(*x*) ⊔ *q*(*x*)),

then
we get a
conservative generalization of *n*-*bounded
weak truth-table reduction*. The latter differs from *n*-bounded
Turing reduction in that here
all *n* oracle queries should be made
at once, before seeing responses to
any of those queries. What is called *mapping*
(*many-one*) *reducibility*
of *q*(*x*)
to *p*(*x*)
is nothing but computability of ⊓*x*⊔*y*(*q*(*x*)↔*p*(*y*)), where *A*↔*B* abbreviates (*A*→*B*)∧(*B*→*A*). We
could go on and on with
this list.
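In conventional terms, mapping reducibility of *q*(*x*) to *p*(*x*) amounts to the existence of a computable function *f* with *q*(*x*) ↔ *p*(*f*(*x*)) for all *x*. Below is a small Python sketch of this defining condition; the predicates and the reduction used here are toy assumptions, for illustration only.

```python
def is_mapping_reduction(f, q, p, sample):
    """Check the defining condition q(x) <-> p(f(x)) of mapping
    (many-one) reducibility on a finite sample of inputs.  A real
    proof would quantify over all x; this is just a test harness."""
    return all(q(x) == p(f(x)) for x in sample)

# toy example: 'x is odd' mapping-reduces to 'x is even' via f(x) = x + 1
odd  = lambda x: x % 2 == 1
even = lambda x: x % 2 == 0
f    = lambda x: x + 1
```

Here `is_mapping_reduction(f, odd, even, range(50))` holds, while the identity function fails as a reduction of `odd` to `even`.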

And
yet many
other natural concepts of reduction
expressible in the language of CoL
may
have no established names in the literature. For example, from the
previous
discussion it can be seen that a certain reducibility-style relation
holds
between the predicates *Accepts*(*x*,*y*)
and *Halts*(*x*,*y*)
in an even stronger sense than the algorithmic
winnability of

⊓*x*⊓*y*(¬*Halts*(*x*,*y*) ⊔ *Halts*(*x*,*y*)) → ⊓*x*⊓*y*(¬*Accepts*(*x*,*y*) ⊔ *Accepts*(*x*,*y*)).

In fact, not only does the above problem have an algorithmic solution, but so does the generally harder-to-solve problem

⊓*x*⊓*y*(¬*Halts*(*x*,*y*) ⊔ *Halts*(*x*,*y*) → ¬*Accepts*(*x*,*y*) ⊔ *Accepts*(*x*,*y*)).

Among
the
merits of CoL is that it offers a formalism
and deductive machinery for systematically expressing and studying
computation-theoretic
relations such as reducibility, decidability, enumerability, etc., and
all
kinds of variations of such concepts.

Back to
reducibility, while the standard approaches only allow us to talk about
(a
whatever sort of) reducibility as a *relation*
between problems, in our
approach reduction becomes an *operation* on
problems, with reducibility
as a relation simply meaning computability of the corresponding
combination
(such as *A* → *B*) of games.
Similarly for other relations or
properties such as the property of *decidability*.
The latter becomes the
operation of *deciding* if we define the problem of
deciding a predicate
(or any computational problem) *p*(*x*) as the game ⊓*x*(¬*p*(*x*) ⊔ *p*(*x*)). So, now we
can
meaningfully ask questions such as “*Is
the* *reduction
of
the problem of deciding q(x) to the
problem of deciding p(x) always reducible to the mapping
reduction of
q(x) to
p(x)?*”.
This
question would be equivalent to whether the
following formula is valid in CoL:

⊓*x*⊔*y*(*q*(*x*) ↔ *p*(*y*)) → (⊓*x*(¬*p*(*x*) ⊔ *p*(*x*)) → ⊓*x*(¬*q*(*x*) ⊔ *q*(*x*))).

The answer
turns out to be “Yes”, meaning that mapping
reduction is at least as strong as reduction. Here is a strategy which
wins this
game no matter what particular predicates*
p*(*x*)
and *q*(*x*)
are:

1. Wait till, for some *m*, Environment brings the game down to

⊓*x*⊔*y*(*q*(*x*) ↔ *p*(*y*)) → (⊓*x*(¬*p*(*x*) ⊔ *p*(*x*)) → ¬*q*(*m*) ⊔ *q*(*m*)).

2. Bring the game down to

⊔*y*(*q*(*m*) ↔ *p*(*y*)) → (⊓*x*(¬*p*(*x*) ⊔ *p*(*x*)) → ¬*q*(*m*) ⊔ *q*(*m*)).

3. Wait till, for some *n*, Environment brings the game down to

(*q*(*m*) ↔ *p*(*n*)) → (⊓*x*(¬*p*(*x*) ⊔ *p*(*x*)) → ¬*q*(*m*) ⊔ *q*(*m*)).

4. Bring the game down to

(*q*(*m*) ↔ *p*(*n*)) → (¬*p*(*n*) ⊔ *p*(*n*) → ¬*q*(*m*) ⊔ *q*(*m*)).

5. Wait till Environment brings the game down to one of the following:

5a. (*q*(*m*) ↔ *p*(*n*)) → (¬*p*(*n*) → ¬*q*(*m*) ⊔ *q*(*m*)). In this case, further bring it down to (*q*(*m*) ↔ *p*(*n*)) → (¬*p*(*n*) → ¬*q*(*m*)), and you have won.

5b. (*q*(*m*) ↔ *p*(*n*)) → (*p*(*n*) → ¬*q*(*m*) ⊔ *q*(*m*)). In this case, further bring it down to (*q*(*m*) ↔ *p*(*n*)) → (*p*(*n*) → *q*(*m*)), and you have won, again.
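Stripped of the game-semantical wording, the five steps above say: given *m*, compute *n* = *f*(*m*) using the mapping reduction, ask the decision resource for *p* about *n*, and copy its answer back. A hypothetical Python rendering of that recipe (all names are illustrative assumptions):

```python
def decide_q_from(f, decide_p):
    """Combine a mapping reduction f of q to p with a decision
    procedure (oracle) for p into a decision procedure for q --
    the content of the winning strategy above, as ordinary code."""
    def decide_q(m):
        n = f(m)            # step 2: move the question over to p
        return decide_p(n)  # steps 4-5: copy p's answer back to q
    return decide_q

# toy instantiation: q(m) = 'm is odd', p(n) = 'n is even', f(m) = m + 1
decide_p = lambda n: n % 2 == 0
decide_q = decide_q_from(lambda m: m + 1, decide_p)
```

The copycat character of the strategy is visible in the fact that `decide_q` does no work of its own beyond translating the question and relaying the answer.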

We can
also
ask: “*Is the mapping reduction of* *q*(*x*)
*to* *p*(*x*)
*always reducible to the reduction
of the problem of deciding q*(*x*)
*to the problem of deciding* *p*(*x*)?”.
This question would be equivalent to whether the following formula is
valid:

(⊓*x*(¬*p*(*x*) ⊔ *p*(*x*)) → ⊓*x*(¬*q*(*x*) ⊔ *q*(*x*))) → ⊓*x*⊔*y*(*q*(*x*) ↔ *p*(*y*)).

The
answer
here turns
out to be “No”, meaning that mapping reduction is properly stronger
than
reduction. This negative answer can be easily obtained by showing that
the
above formula is not provable in the deductive system **CL4**
that we are
going to see later. **CL4**
is sound and
complete with respect to validity. Its completeness implies that any
formula
which is not provable in it (such as the above formula) is not valid.
And the
soundness of **CL4** implies that every provable
formula is valid. So, had
our ad hoc attempt to come up with a strategy for ⊓*x*⊔*y*(*q*(*x*) ↔ *p*(*y*)) → (⊓*x*(¬*p*(*x*) ⊔ *p*(*x*)) → ⊓*x*(¬*q*(*x*) ⊔ *q*(*x*))) failed, its
validity could have been easily
established by finding a **CL4**-proof of it.

To summarize, CoL offers not only a convenient language for specifying computational problems and relations or operations on them, but also a systematic tool for answering questions in the above style and beyond.

3.7 Blind operations

Our definition of the **blind universal quantifier** (“∀”) and its dual, the **blind existential quantifier** (“∃”), is as follows.

**Definition 3.7.1** Assume *x* is a variable and *A*=*A*(*x*) is a game unistructural in *x*.

a) ∀*xA*(*x*) is defined as the game *G* with **Un**^{G} = **Un**^{A} such that:

- **Lr**^{G}_{e} = **Lr**^{A}_{e};
- **Wn**^{G}_{e}〈Γ〉 = ⊤ iff, for all *a* ∈ **Dm**^{G}, **Wn**^{A(a)}_{e}〈Γ〉 = ⊤.

b) ∃*xA*(*x*) is defined as the game *G* with **Un**^{G} = **Un**^{A} such that:

- **Lr**^{G}_{e} = **Lr**^{A}_{e};
- **Wn**^{G}_{e}〈Γ〉 = ⊥ iff, for all *a* ∈ **Dm**^{G}, **Wn**^{A(a)}_{e}〈Γ〉 = ⊥.

Intuitively, playing ∀*xA*(*x*) or ∃*xA*(*x*) means just playing *A*(*x*) “blindly”, without knowing the value of *x*. In ∀*xA*(*x*), Machine wins iff the play it generates is successful for every possible value of *x* from the domain, while in ∃*xA*(*x*) being
successful for just one value is
sufficient. When applied to elementary games, the blind quantifiers act
exactly
like the quantifiers of classical logic, even if not all individuals of
the
universe have naming constants.

From the definition one can see a perfect symmetry between ∀ and ∃: ¬∀*xA*(*x*) = ∃*x*¬*A*(*x*) and ¬∃*xA*(*x*) = ∀*x*¬*A*(*x*).
Unlike ∧*xA*(*x*), which is a game on infinitely many boards, both ∀*xA*(*x*) and ⊓*xA*(*x*) are one-board games. Yet, they are very different from each other. To see this difference, compare the problems ⊓*x*(*Even*(*x*) ⊔ *Odd*(*x*)) and ∀*x*(*Even*(*x*) ⊔ *Odd*(*x*)). The former
is an
easily winnable game of depth 2: Environment selects a number, and
Machine
tells whether that number is even or odd. The latter, on the other
hand, is a
game which is impossible to win. This is a game of depth 1, where the
value of *x* is not specified by
either player, and
only Machine moves --- tells whether (the unknown) *x*
is even or odd. Whatever Machine says, it loses, because there is
always a value for *x* that makes the
answer wrong.

This should not suggest that nontrivial ∀-games can never be won. For instance, Machine has a winning strategy for the problem

∀*x*(*Even*(*x*) ⊔ *Odd*(*x*) → ⊓*y*(*Even*(*x*+*y*) ⊔ *Odd*(*x*+*y*))):

wait till Environment selects a value for *y* and chooses a disjunct in the antecedent, then choose the “matching” disjunct in the consequent. With *y*=5 and the antecedent choice *Even*(*x*), a play would proceed through the positions

∀*x*(*Even*(*x*) ⊔ *Odd*(*x*) → ⊓*y*(*Even*(*x*+*y*) ⊔ *Odd*(*x*+*y*)))
∀*x*(*Even*(*x*) ⊔ *Odd*(*x*) → *Even*(*x*+5) ⊔ *Odd*(*x*+5))
∀*x*(*Even*(*x*) → *Even*(*x*+5) ⊔ *Odd*(*x*+5))
∀*x*(*Even*(*x*) → *Odd*(*x*+5))

Machine won because the play hit the true ∀*x*(*Even*(*x*) → *Odd*(*x*+5)). Notice how *x* persisted throughout the sequence. Generally, the (⊓, ⊔)-structure of a game may evolve during a play, while the blindly quantified *x* never gets instantiated.

**Exercise 3.7.2** Are the following problems always computable?

*xA*(*x*) *xA*(*x*)
answer

*xA*(*x*) *xA*(*x*)
answer

*xA*(*x*) *xA*(*x*)
answer

*xA*(*x*) *xA*(*x*)
answer

*xA*(*x*) *xA*(*x*)
answer

*xA*(*x*) *xA*(*x*)
answer

*xA*(*x*) *xA*(*x*)
answer

*xA*(*x*) *xA*(*x*)
answer

*xA*(*x*) *xA*(*x*)
answer

*x*(*A*(*x*)*B*(*x*)) *xA*(*x*)*xB*(*x*)
answer

*x*(*A*(*x*)*B*(*x*)) *xA*(*x*)* xB*(*x*)
answer

*xA*(*x*)*xB*(*x*) *x*(*A*(*x*)*B*(*x*)) answer

3.8 Branching operations

There are only four branching operations that we consider: **branching recurrence**, **branching corecurrence**, **branching rimplication** and **branching refutation**.

What is common for the members of the family of game operations called **recurrence operations** is that, when applied to a game *A*, they turn it into a game playing which means repeatedly playing *A*.

There
is more
than
one naturally emerging recurrence operation. The differences between
various
recurrence operations are related to how “repetition” or “reusage” is
exactly
understood. Imagine a computer that has a program successfully playing
chess.
The resource that such a computer provides is obviously something
stronger than
just *Chess*,
for it permits playing *Chess* as many times as the user wishes, while *Chess*, as such, only assumes one play. The simplest operating system would allow one to start a session of *Chess*,
then --- after finishing or abandoning and destroying it --- start a
new play
again, and so on. The game that such a system plays --- i.e. the
resource that
it supports/provides --- is *Chess*, which
assumes an unbounded
number of plays of
*Chess* in a
sequential
fashion. A formal definition of the operation , called *sequential recurrence*, will
be
given later is Section 3.9.

A more advanced operating system, however, would not require destroying the old sessions before starting new ones; rather, it would allow the user to run as many parallel sessions as needed. This is what is captured by ∧|*Chess*, meaning nothing but the infinite parallel conjunction *Chess* ∧ *Chess* ∧ *Chess* ∧ … As we remember from Section 3.5, ∧| is called *parallel recurrence*.

Yet a
really
good
operating system would not only allow the user to start new sessions of
*Chess*
without destroying old ones;
it would also make it possible to branch/replicate any particular stage
of any
particular session, i.e., create any number of
“copies” of any already reached position of the multiple
parallel plays
of *Chess*,
thus giving the
user the possibility to try different continuations from the same
position.
What corresponds to this intuition is the **branching recurrence** (“∘|”) of *Chess*, i.e. the game ∘|*Chess*.

At the intuitive level, the difference between ∘| and ∧| is that in ∘|*A* Machine need not restart *A* from the very beginning every time it wants a fresh session: it may fork any already reached position of *A* into several copies and continue each of them independently, reusing the work done so far.

The
less
flexible
type of reusage of *A* assumed by ∧|*A*, on the other hand, is closer to what infinitely many
autonomous physical resources would naturally offer, such as an
unlimited
number of independently acting robots each performing task *A*,
or an unlimited
number of computers with limited memories, each one only capable of and
responsible
for running a single thread of process *A*.
Here the effect of
forking an
advanced stage of *A*
cannot be achieved unless, by good luck, there are two
identical copies of the stage, meaning that the corresponding two
robots or
computers have so far acted in precisely the same ways.

The formal definitions of ∘|*A* and its dual **branching corecurrence** ∘¦*A* rely on the following notational convention.

**Notation 3.8.1** Where Γ is a run and *w* is a **bitstring** (finite or infinite sequence of 0s and 1s), Γ^{≤*w*} will mean the result of first deleting from Γ all moves except those of the form *u*.α for some finite initial segment *u* of *w*, and then further deleting the prefix ‘*u*.’ in such moves.

**Definition 3.8.2** Assume *A* is a game.

a) ∘|*A* is defined as the game *G* with **Un**^{G} = **Un**^{A} such that:

- Φ ∈ **Lr**^{G}_{e} iff every move of Φ has the prefix ‘*u*.’ for some finite bitstring *u* and, for every infinite bitstring *w*, Φ^{≤*w*} ∈ **Lr**^{A}_{e};
- **Wn**^{G}_{e}〈Γ〉 = ⊤ iff, for every infinite bitstring *w*, **Wn**^{A}_{e}〈Γ^{≤*w*}〉 = ⊤.

b) ∘¦*A* is defined as the game *G* with **Un**^{G} = **Un**^{A} such that:

- Φ ∈ **Lr**^{G}_{e} iff every move of Φ has the prefix ‘*u*.’ for some finite bitstring *u* and, for every infinite bitstring *w*, Φ^{≤*w*} ∈ **Lr**^{A}_{e};
- **Wn**^{G}_{e}〈Γ〉 = ⊥ iff, for every infinite bitstring *w*, **Wn**^{A}_{e}〈Γ^{≤*w*}〉 = ⊥.

It is obvious that ¬∘|*A* = ∘¦¬*A* and ¬∘¦*A* = ∘|¬*A*, hence ∘|*A* = ¬∘¦¬*A* and ∘¦*A* = ¬∘|¬*A*.

Branching recurrence can be shown to be stronger than its parallel counterpart ∧|, in the sense that the principle ∘|*A* → ∧|*A* is valid while its converse ∧|*A* → ∘|*A* is not.

**Branching rimplication** (“∘–”, also called *brimplication*) and **branching refutation** are defined as follows.

**Definition 3.8.3**

a) *A* ∘– *B* =_{def} ∘|*A* → *B*

b) The branching refutation of *A* =_{def} *A* ∘– ⊥

**Exercise 3.8.4** The *Kolmogorov complexity* *k*(*x*) of a number *x* is the size of the smallest Turing machine which outputs *x* on input 0. The Kolmogorov complexity problem ⊓*x*⊔*y*(*y*=*k*(*x*)) has no algorithmic solution. Nor is it reducible to the halting problem in the strong sense of →, meaning that the problem ⊓*x*(¬*Halts*(*x*) ⊔ *Halts*(*x*)) → ⊓*x*⊔*y*(*y*=*k*(*x*)) has no algorithmic solution, either. The Kolmogorov complexity problem, however, is reducible to the halting problem in the weaker sense of ∘–, meaning that Machine has a winning strategy for ⊓*x*(¬*Halts*(*x*) ⊔ *Halts*(*x*)) ∘– ⊓*x*⊔*y*(*y*=*k*(*x*)). Describe such a strategy, informally.

Just like the parallel rimplication ∧–, the branching rimplication ∘– is a conservative generalization of Turing reduction. Specifically, for any predicates *p*(*x*) and *q*(*x*), the problem ⊓*x*(¬*p*(*x*) ⊔ *p*(*x*)) ∧– ⊓*x*(¬*q*(*x*) ⊔ *q*(*x*)) is computable iff *q*(*x*) is Turing reducible to *p*(*x*) iff the problem ⊓*x*(¬*p*(*x*) ⊔ *p*(*x*)) ∘– ⊓*x*(¬*q*(*x*) ⊔ *q*(*x*)) is computable. This means that, when restricted to traditional sorts of problems such as decision problems, the behaviors of ∧– and ∘– are indistinguishable. This stops being the case when these operators are applied to problems with higher degrees of interactivity, though. For instance, the following problem is computable, but becomes incomputable with ∧– instead of ∘–:

⊓*y*⊓*x*(¬*Halts*(*x*,*y*) ⊔ *Halts*(*x*,*y*)) ∘– ⊓*y*(⊓*x*(¬*Halts*(*x*,*y*) ⊔ *Halts*(*x*,*y*)) ∧ ⊓*x*(¬*Halts*(*x*,*y*) ⊔ *Halts*(*x*,*y*))).

Generally, (*A* ∧– *B*) → (*A* ∘– *B*) is valid but (*A* ∘– *B*) → (*A* ∧– *B*) is not.

While both ∧– and ∘– are weaker than → and hence more general, ∘– is still a more interesting operation of reduction than ∧–. What makes it special is the following belief. The latter, in turn, is based on the belief that ∘| (and by no means ∧|) is the operation allowing one to reuse its argument in the strongest algorithmic sense possible.

Let *A* and *B* be computational problems (games). We say that *B* is *brimplicationally* (resp. *primplicationally*, *pimplicationally*, etc.) *reducible* to *A* iff *A* ∘– *B* (resp. *A* ∧– *B*, *A* → *B*, etc.) has an algorithmic solution (winning strategy).

**Thesis
3.8.5** Brimplicational
reducibility is an
adequate
mathematical counterpart of our intuition of reducibility in the
weakest (and
thus most general) algorithmic sense possible. Specifically:

(a) Whenever a problem* B* is
brimplicationally reducible to a problem *A*, *B*
is also algorithmically reducible to *A*
according to anyone’s reasonable intuition.

(b) Whenever a problem* B*
is algorithmically
reducible to a problem *A* according
to
anyone’s reasonable intuition, *B* is
also brimplicationally
reducible to *A*.

This is pretty much in the same sense as, by the Church-Turing thesis, a function *f* is computable by a Turing machine iff *f* is computable according to everyone’s reasonable intuition.

3.9 Sequential operations

The **sequential conjunction** *A*△*B* and **sequential disjunction** *A*▽*B* of games *A* and *B* both start as a play of *A*; the play continues, and ends, in *A* unless the appropriate player (Environment in *A*△*B*, Machine in *A*▽*B*) makes a *switch* move, after which the play continues, and ends, as a play of *B*.

The original formal definition of *A*△*B* and *A*▽*B* found in [Jap08b] was a direct formalization of the above description.
Definition 3.9.1
given below, while less direct,
still faithfully formalizes the above intuitions as long as only static
games
are considered, and we opt for it because it is technically simpler.
Specifically, Definition 3.9.1 allows either player to continue making
moves in
*A* even after a switch takes place;
such moves are meaningless but harmless. Similarly, it allows either
player to
make moves in *B* without waiting for
a
switch to take place, even though a smart player would only start
making such
moves if and when a switch happens.

**Definition 3.9.1** Assume *A*_{0} and *A*_{1} are games with a common universe *U*.

a) *A*_{0}△*A*_{1} is defined as the game *G* with **Un**^{G} = *U* such that:

- Φ ∈ **Lr**^{G}_{e} iff Φ=〈Ξ,Ω〉 or Φ=〈Ξ,⊥1,Ω〉, where every move of 〈Ξ,Ω〉 has the prefix ‘*i*.’ for some *i*∈{0,1} and, for both *i*∈{0,1}, 〈Ξ,Ω〉^{i.} ∈ **Lr**^{A_{i}}_{e};
- If Γ contains the (“*switch*”) move ⊥1, then **Wn**^{G}_{e}〈Γ〉 = **Wn**^{A_{1}}_{e}〈Γ^{1.}〉; otherwise **Wn**^{G}_{e}〈Γ〉 = **Wn**^{A_{0}}_{e}〈Γ^{0.}〉.

b) *A*_{0}▽*A*_{1} is defined as the game *G* with **Un**^{G} = *U* such that:

- Φ ∈ **Lr**^{G}_{e} iff Φ=〈Ξ,Ω〉 or Φ=〈Ξ,⊤1,Ω〉, where every move of 〈Ξ,Ω〉 has the prefix ‘*i*.’ for some *i*∈{0,1} and, for both *i*∈{0,1}, 〈Ξ,Ω〉^{i.} ∈ **Lr**^{A_{i}}_{e};
- If Γ contains the (“*switch*”) move ⊤1, then **Wn**^{G}_{e}〈Γ〉 = **Wn**^{A_{1}}_{e}〈Γ^{1.}〉; otherwise **Wn**^{G}_{e}〈Γ〉 = **Wn**^{A_{0}}_{e}〈Γ^{0.}〉.

Recall that, for a predicate *p*(*x*), ⊓*x*(¬*p*(*x*) ⊔ *p*(*x*)) is the problem of deciding *p*(*x*). Then what is the similar-looking ⊓*x*(¬*p*(*x*) ▽ *p*(*x*))? If you’ve guessed that this is the problem of *semideciding* *p*(*x*), you are right. Machine has a winning strategy in this game if and only if *p*(*x*) is semidecidable, i.e., recursively enumerable. Namely, if *p*(*x*) is recursively enumerable, a winning strategy by Machine is to wait until Environment brings the game down to ¬*p*(*n*) ▽ *p*(*n*) for some particular *n*. After that, Machine starts looking for a certificate of *p*(*n*)’s being true. If and when such a certificate is found (meaning that *p*(*n*) is indeed true), Machine makes a switch move turning ¬*p*(*n*) ▽ *p*(*n*) into the true *p*(*n*); and if no certificate exists (meaning that *p*(*n*) is false), then Machine keeps looking for a non-existent certificate forever and thus never makes any moves, meaning that the game ends as ¬*p*(*n*), which, again, is true. And vice versa: any effective winning strategy for ⊓*x*(¬*p*(*x*) ▽ *p*(*x*)) can obviously be seen as a semidecision procedure for *p*(*x*), which accepts an input *n* iff the strategy ever makes a switch move in the scenario where Environment’s initial choice of a value for *x* is *n*.
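As ordinary code, such a winning strategy is just an unbounded certificate search that “switches” once it succeeds. The Python sketch below makes the search bounded (`max_steps`) so that it terminates and can be tested; the certificate relation used here is a toy assumption.

```python
def semidecide(has_certificate, n, max_steps):
    """Bounded sketch of the strategy for semideciding p: search
    certificates c = 0, 1, 2, ...; finding one corresponds to the
    switch move (return True).  Returning None corresponds to
    searching forever without moving, which wins exactly when p(n)
    is false.  A real semidecision procedure has no step bound."""
    for c in range(max_steps):
        if has_certificate(n, c):
            return True        # the switch: claim that p(n) is true
    return None                # no move (yet): the game stays at not-p(n)

# toy r.e. predicate: 'n is a perfect square', certified by c with c*c == n
def square_certificate(n, c):
    return c * c == n
```

Note the asymmetry the text describes: a `True` answer is announced in finite time, while the “false” outcome is never announced, only realized in the limit.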

Existence of effective winning strategies for games has been shown^{[Jap03]} to be closed under the rules ‘*from A→B and A conclude B*’, ‘*from A and B conclude A∧B*’, ‘*from A conclude ⊓xA*’, ‘*from A conclude ∘|A*’. In view of
these
closures, the validity of the principles discussed below implies
certain known
facts from the theory of computation. Needless to say, those examples
once
again demonstrate how CoL can be used as a systematic tool for defining
new
interesting properties and relations between computational problems,
and not
only reproducing already known theorems but also discovering an
infinite
variety of new facts.

The valid formula ⊓*x*(¬*p*(*x*) ▽ *p*(*x*)) ∧ ⊓*x*(*p*(*x*) ▽ ¬*p*(*x*)) → ⊓*x*(¬*p*(*x*) ⊔ *p*(*x*)) “expresses” the well known fact that, if both a predicate *p*(*x*) and its complement ¬*p*(*x*) are recursively enumerable, then *p*(*x*) is decidable. Actually, the validity of this formula means something more: it means that the problem of deciding *p*(*x*) is reducible to the (∧-conjunction of) the problems of semideciding *p*(*x*) and ¬*p*(*x*). In fact, a reducibility in an even stronger sense (in a sense that has no name) holds, expressed by the formula ⊓*x*((¬*p*(*x*) ▽ *p*(*x*)) ∧ (*p*(*x*) ▽ ¬*p*(*x*)) → ¬*p*(*x*) ⊔ *p*(*x*)).

The
formula ⊓*x*⊔*y*(*q*(*x*) ↔ *p*(*y*)) ∧ ⊓*x*(¬*p*(*x*) ▽ *p*(*x*)) → ⊓*x*(¬*q*(*x*) ▽ *q*(*x*)) is also
valid,
which implies the known fact that, if a predicate *q*(*x*) is mapping reducible
to a predicate *p*(*x*)
and *p*(*x*)
is recursively enumerable, then so is
*q*(*x*).
Again, the validity of this formula, in fact, means something even
more: it
means that the problem of semideciding *q*(*x*)
is reducible to the problems of
mapping reducing *q*(*x*)
to *p*(*x*)
and semideciding *p*(*x*).

Certain
other reducibilities hold only in the sense of
rimplications rather than implications. Here is an example. Two Turing
machines
are said to be *equivalent* iff they
accept exactly the same inputs. Let
*Neq*(*x*,*y*) be the
predicate ‘Turing machines *x* and *y*
are not equivalent’. This predicate is neither semidecidable nor
co-semidecidable. However, the problem of its semideciding primplicationally (and hence also brimplicationally) reduces to
the
halting problem. Specifically,
Machine has
an effective winning strategy for the following game:

⊓*z*⊓*t*(¬*Halts*(*z*,*t*) ⊔ *Halts*(*z*,*t*)) ∧– ⊓*x*⊓*y*(¬*Neq*(*x*,*y*) ▽ *Neq*(*x*,*y*)).

A
strategy here is to wait till Environment specifies
some values *m* and *n*
for *x* and *y*.
Then, create a variable *i*, initialize it to 1, and do the following. Specify
*z* and *t*
as *m* and *i
*in one yet-unused copy of the
antecedent, and as *n* and *i* in another
yet-unused copy. That is,
ask Environment whether *m* halts on
input *i* and whether *n*
halts on the same input. Environment
will have to provide the correct pair of answers, or else it loses. (1)
If the
answers are “No,No”, increment *i* to
*i*+1 and repeat the step. (2) If the
answers are “Yes,Yes”, simulate both *m*
and *n* on input *i*
until they halt. If both machines accept or both reject,
increment *i* to *i*+1
and repeat the step. Otherwise, if one accepts and one rejects,
make a switch move in the consequent and celebrate victory. (3) If the
answers
are “Yes,No”, simulate *m* on *i* until
it halts. If *m*
rejects *i*, increment *i*
to *i*+1 and repeat the step.
Otherwise, if *m* accepts *i*, make a
switch move in the consequent
and you win. (4) Finally,
if the answers are “No,Yes”, simulate *n*
on *i* until it halts. If *n* rejects *i*,
increment *i* to *i*+1
and repeat the step. Otherwise, if *n*
accepts *i*, make a switch move in
the consequent and you win.
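A bounded Python sketch of this strategy, under toy assumptions (all names hypothetical): a “machine” is modeled as a dictionary mapping inputs to accept/reject, with a missing key meaning divergence, and the halting oracle and the simulator are passed in as functions.

```python
def semidecide_neq(m, n, halts, run, max_i):
    """Search inputs i = 1, 2, ... for one on which machines m and n
    behave differently, always consulting the halting oracle before
    simulating, so `run` is only called when termination is
    guaranteed.  True models the switch move (a witness of
    non-equivalence found); None models searching forever, which in
    the real strategy simply means no switch is ever made."""
    for i in range(1, max_i + 1):
        hm, hn = halts(m, i), halts(n, i)
        if hm and hn:                    # answers 'Yes, Yes'
            if run(m, i) != run(n, i):   # one accepts, the other rejects
                return True
        elif hm != hn:                   # 'Yes, No' or 'No, Yes'
            halting = m if hm else n
            if run(halting, i):          # it accepts; the diverging one cannot
                return True
        # answers 'No, No', or no difference found: try the next input
    return None

# toy machines: dict from input to accept/reject; a missing key = divergence
halts = lambda mach, i: i in mach
run   = lambda mach, i: mach[i]
```

For example, a machine accepting input 1 differs from one diverging everywhere, while rejecting an input and diverging on it are equivalent for acceptance purposes, exactly as in cases (1)-(4) above.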

The **sequential universal quantification** (“△”) and **sequential existential quantification** (“▽”) of *A*(*x*) can be thought of as △*xA*(*x*) = *A*(0)△*A*(1)△*A*(2)△… and ▽*xA*(*x*) = *A*(0)▽*A*(1)▽*A*(2)▽…, formalized as follows.

**Definition 3.9.2** Assume *x* is a variable and *A* = *A*(*x*) is a game.

a) △*xA*(*x*) is defined as the game *G* with **Un**^{G} = **Un**^{A} such that:

- Φ ∈ **Lr**^{G}_{e} iff Φ=〈Ξ_{0}, ⊥1, Ξ_{1}, ⊥2, Ξ_{2}, …, ⊥*n*, Ξ_{n}〉 (*n*≥0), where every move of 〈Ξ_{0}, Ξ_{1}, Ξ_{2}, …, Ξ_{n}〉 has the prefix ‘*c*.’ for some constant *c* and, for every constant *c*, 〈Ξ_{0}, Ξ_{1}, Ξ_{2}, …, Ξ_{n}〉^{c.} ∈ **Lr**^{A(c)}_{e};
- Call the moves ⊥1, ⊥2, ⊥3, … *switch* moves. If Γ contains no switch moves, then **Wn**^{G}_{e}〈Γ〉 = **Wn**^{A(0)}_{e}〈Γ^{0.}〉; if Γ contains infinitely many switch moves, then **Wn**^{G}_{e}〈Γ〉 = ⊤; otherwise, where ⊥*n* is the last switch move of Γ, **Wn**^{G}_{e}〈Γ〉 = **Wn**^{A(n)}_{e}〈Γ^{n.}〉.

b) ▽*xA*(*x*) is defined as the game *G* with **Un**^{G} = **Un**^{A} such that:

- Φ ∈ **Lr**^{G}_{e} iff Φ=〈Ξ_{0}, ⊤1, Ξ_{1}, ⊤2, Ξ_{2}, …, ⊤*n*, Ξ_{n}〉 (*n*≥0), where every move of 〈Ξ_{0}, Ξ_{1}, Ξ_{2}, …, Ξ_{n}〉 has the prefix ‘*c*.’ for some constant *c* and, for every constant *c*, 〈Ξ_{0}, Ξ_{1}, Ξ_{2}, …, Ξ_{n}〉^{c.} ∈ **Lr**^{A(c)}_{e};
- Call the moves ⊤1, ⊤2, ⊤3, … *switch* moves. If Γ contains no switch moves, then **Wn**^{G}_{e}〈Γ〉 = **Wn**^{A(0)}_{e}〈Γ^{0.}〉; if Γ contains infinitely many switch moves, then **Wn**^{G}_{e}〈Γ〉 = ⊥; otherwise, where ⊤*n* is the last switch move of Γ, **Wn**^{G}_{e}〈Γ〉 = **Wn**^{A(n)}_{e}〈Γ^{n.}〉.

**Definition 3.9.3** Assume *A* is a game.

a) △|*A* (the **sequential recurrence** of *A*) is defined as the game *G* with **Un**^{G} = **Un**^{A} such that:

- Φ ∈ **Lr**^{G}_{e} iff Φ=〈Ξ_{0}, ⊥1, Ξ_{1}, ⊥2, Ξ_{2}, …, ⊥*n*, Ξ_{n}〉 (*n*≥0), where every move of 〈Ξ_{0}, Ξ_{1}, Ξ_{2}, …, Ξ_{n}〉 has the prefix ‘*c*.’ for some constant *c* and, for every constant *c*, 〈Ξ_{0}, Ξ_{1}, Ξ_{2}, …, Ξ_{n}〉^{c.} ∈ **Lr**^{A}_{e};
- Call the moves ⊥1, ⊥2, ⊥3, … *switch* moves. If Γ contains no switch moves, then **Wn**^{G}_{e}〈Γ〉 = **Wn**^{A}_{e}〈Γ^{0.}〉; if Γ contains infinitely many switch moves, then **Wn**^{G}_{e}〈Γ〉 = ⊤; otherwise, where ⊥*n* is the last switch move of Γ, **Wn**^{G}_{e}〈Γ〉 = **Wn**^{A}_{e}〈Γ^{n.}〉.

b) ▽|*A* (the **sequential corecurrence** of *A*) is defined as the game *G* with **Un**^{G} = **Un**^{A} such that:

- Φ ∈ **Lr**^{G}_{e} iff Φ=〈Ξ_{0}, ⊤1, Ξ_{1}, ⊤2, Ξ_{2}, …, ⊤*n*, Ξ_{n}〉 (*n*≥0), where every move of 〈Ξ_{0}, Ξ_{1}, Ξ_{2}, …, Ξ_{n}〉 has the prefix ‘*c*.’ for some constant *c* and, for every constant *c*, 〈Ξ_{0}, Ξ_{1}, Ξ_{2}, …, Ξ_{n}〉^{c.} ∈ **Lr**^{A}_{e};
- Call the moves ⊤1, ⊤2, ⊤3, … *switch* moves. If Γ contains no switch moves, then **Wn**^{G}_{e}〈Γ〉 = **Wn**^{A}_{e}〈Γ^{0.}〉; if Γ contains infinitely many switch moves, then **Wn**^{G}_{e}〈Γ〉 = ⊥; otherwise, where ⊤*n* is the last switch move of Γ, **Wn**^{G}_{e}〈Γ〉 = **Wn**^{A}_{e}〈Γ^{n.}〉.

For
an illustration, remember the Kolmogorov
complexity function *k*(*x*). The
value of *k*(*x*) is known to be bounded,
not exceeding *x*
or a certain constant *const*,
whichever
is greater. While ⊓*x*⊔*y*(*y*=*k*(*x*)) is not computable, Machine does have an algorithmic winning strategy for the problem ⊓*x*▽|⊔*y*(*y*=*k*(*x*)). It goes like this:
Wait till Environment specifies a value *m*
for *x*, thus asking “what is the
Kolmogorov complexity of *m*?” and
bringing the game down to ▽|⊔*y*(*y*=*k*(*m*)). Answer that
it is *m*, i.e. specify *y* as *m*,
and after that
start simulating, in parallel, all machines *n*
smaller than *m* on input 0. Whenever
you find a machine *n* that returns *m*
on input 0 and is smaller than any of
the previously found such machines, make a switch move and, in the new
copy
of ⊔*y*(*y*=*k*(*m*)), specify *y*
as the size |*n*|
of *n* or as *const*,
whichever is greater. This obviously
guarantees success: sooner or later the real Kolmogorov complexity *c* of *m*
will be reached and named; and, even though the strategy will never be
sure
that *k*(*m*)
is not something yet
smaller than *c*, it will never
really
find a reason to further reconsider its latest claim that *c*=*k*(*m*).
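The stream of ever-smaller answers produced by this strategy can be modeled as a Python generator. This is only an illustrative sketch: the toy “machines” are (size, output-on-input-0) pairs given as a finite list, and the dovetailed parallel simulation the text describes is elided.

```python
def approximate_k(m, machines):
    """Yield the strategy's successive answers for k(m).  Each newly
    yielded value models a switch move revising the previous claim
    downward; the last value yielded is the final claim, although no
    observer can ever tell that it is the last one."""
    best = m                      # default first answer (up to a constant)
    yield best
    for size, output in machines:
        if output == m and size < best:
            best = size           # a smaller machine printing m was found
            yield best            # the switch: revise the claim downward
```

For instance, with input 10 and discovered machines of sizes 9, 7 and 8 all printing 10, the claim is revised 10 → 9 → 7, and the unhelpful size-8 machine triggers no switch.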

**Exercise 3.9.4** Describe a winning strategy for ⊓*x*▽|⊔*y*(*k*(*x*)=*x*-*y*).

As expected, **sequential implication** (“▷”), **sequential rimplication** (“△–”) and **sequential refutation** are defined in terms of the other sequential operations and negation:

**Definition 3.9.5** a) *A* ▷ *B* =_{def} ¬*A* ▽ *B*

b) *A* △– *B* =_{def} △|*A* → *B*

c) The sequential refutation of *A* =_{def} *A* △– ⊥

3.10 Toggling operations

As all other sorts of conjunctions and disjunctions, **toggling conjunction** (“⩓”) and **toggling disjunction** (“⩔”) come as a dual pair, with ¬(*A*⩓*B*) = ¬*A*⩔¬*B* and ¬(*A*⩔*B*) = ¬*A*⩓¬*B*.

An alternative way to characterize *A*⩔*B* is to say that it is played exactly like *A*∨*B*, with the only difference that Machine is allowed to make a ‘choose *A*’ or ‘choose *B*’ move some finite number of times. If infinitely many choices are made, Machine loses. Otherwise, the winner in the play will be the player who wins in the component that was chosen last (“the eventual choice”). The case of Machine having made no choices at all is treated as if it had chosen *A*. Thus, as in sequential disjunction, the leftmost component is the “default”, or “automatically made”, initial choice.

It is important to note that the adversary (or perhaps even the machine itself) never knows whether a given choice of a component of *A*⩔*B* is the last choice or not. Otherwise, if Machine was required to indicate that it had made its final choice, then the resulting operation, for static games, would essentially be the same as *A*⊔*B*. Indeed, as we remember, in static games it never hurts a player to postpone making moves, so Environment could just inactively wait till the last choice is declared, and start playing the chosen component only after that, as in the case of *A*⊔*B*; under these circumstances, making some temporary choices before making the final choice would not make any sense for Machine, either.

What would happen if we did not require that Machine change its mind only finitely many times? There would be no “final choice” in this case. So, the only natural winning condition in the case of infinitely many choices would be to say that Machine wins iff it simply wins in one of the components. But then the resulting operation would be essentially the same as the parallel disjunction ∨, as a smart machine would always opt for keeping switching between components forever. That is, allowing infinitely many choices would amount to not requiring any choices at all, as is the case with *A*∨*B*.

One may also ask what would happen if we allowed Machine to make an arbitrary initial choice between *A* and *B* and then reconsider its choice only (at most) once. Such an operation on games, albeit meaningful, would not be basic, because it can be expressed through our primitives as (*A* ▽ *B*) ⊔ (*B* ▽ *A*).

The very weak sort of choice captured by ⩔ is the kind of choice that,
in real life, one
would ordinarily call *choice after trial
and error*. Indeed, a problem is generally considered to be
solved after
trial and error (a correct choice/solution/answer found) if, after
perhaps
coming up with several wrong solutions, a true solution is eventually
found.
That is, mistakes are tolerated and forgotten as long as they are
eventually
corrected. It is however necessary that new solutions stop coming at
some
point, so that there is a last solution whose correctness determines
the success
of the effort. Otherwise, if answers have kept changing all the time,
no answer
has really been given after all. Or, imagine Bob has been married and
divorced
several times. Every time he said “I do”, he probably honestly believed
that
this time, at last, his bride was “the one”, with whom he would live
happily
ever after. Bob
will be considered to
have found his Ms. Right after all if and only if one of his marriages
indeed
turns out to be happy and final.

As we remember, for a predicate *p*(*x*), ⊓*x*(*p*(*x*) ⊔ ¬*p*(*x*)) is the problem of deciding *p*(*x*), and ⊓*x*(¬*p*(*x*) ▽ *p*(*x*)) is the weaker (easier to solve) problem of semideciding *p*(*x*). Not surprisingly, ⊓*x*(¬*p*(*x*) ⩔ *p*(*x*)) is also a decision-style problem, but still weaker than the problem of semideciding *p*(*x*). This problem has been studied in the literature under several names, the most common of which is *recursively approximating* *p*(*x*). It means telling whether *p*(*x*)
is true or not, but doing so in the same style as semideciding does in
negative
cases: by correctly saying “Yes” or “No” at some point (after perhaps
taking
back previous answers several times) and never reconsidering this
answer
afterwards. Observe
that semideciding *p*(*x*)
can be seen as always saying “No” at the beginning and then, if this
answer is
incorrect, changing it to “Yes” at some later time; so, when the answer
is
negative, this will be expressed by saying “No” and never taking back
this
answer, yet without ever indicating that the answer is final and will
not
change. Thus, the difference between semideciding and recursively
approximating
is that, unlike a semidecision procedure, a recursive approximation
procedure
can reconsider *both* negative and
positive answers, and do
so several
times rather than only once.
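This contrast can be made concrete with a small sketch. Below, semideciding is recast in recursive-approximation style: the answer starts as “No” and is changed at most once, and never retracted. The callable `halts_within` is a hypothetical stand-in for a decidable “the relevant event occurs within *t* steps” test, and a finite step budget replaces the infinite computation.

```python
def semidecide_as_approximation(halts_within, budget):
    """Semideciding viewed as a (degenerate) recursive approximation:
    the initial answer 'No' may be reconsidered at most once, to 'Yes',
    and is never taken back afterwards."""
    answer = "No"
    for t in range(budget):
        if halts_within(t):
            answer = "Yes"   # the single, never-retracted reconsideration
            break
    return answer
```

A genuine recursive approximation differs only in that it may flip its answer back and forth several times, provided the flips eventually stop.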

It is known that a predicate *p*(*x*) is recursively approximable (i.e., the problem of its recursive approximation has an algorithmic solution) iff *p*(*x*) has the arithmetical complexity Δ_{2}, i.e., iff both *p*(*x*) and its complement ¬*p*(*x*) can be written in the form ∃*z*∀*y* *s*(*z*,*y*,*x*), where *s*(*z*,*y*,*x*) is a decidable predicate. Let us see that, indeed, algorithmic solvability of ⊓*x*(¬*p*(*x*) ⩔ *p*(*x*)) is equivalent to *p*(*x*)’s being of complexity Δ_{2}.

First, assume *p*(*x*) is of complexity Δ_{2}; specifically, for some decidable predicates *q*(*z*,*y*,*x*) and *r*(*z*,*y*,*x*) we have *p*(*x*) = ∃*z*∀*y* *q*(*z*,*y*,*x*) and ¬*p*(*x*) = ∃*z*∀*y* *r*(*z*,*y*,*x*). Then ⊓*x*(¬*p*(*x*) ⩔ *p*(*x*)) is solved by the following strategy. Wait till Environment specifies a value *m* for *x*, thus bringing the game down to ¬*p*(*m*) ⩔ *p*(*m*). Then create the variables *i* and *j*, initialize both to 1, choose the *p*(*m*) component, and do the following:

*Step 1:* Check whether *q*(*i*,*j*,*m*) is true. If yes, increment *j* to *j*+1 and repeat Step 1. If not, switch to the ¬*p*(*m*) component, reset *j* to 1, and go to Step 2.

*Step 2:* Check whether *r*(*i*,*j*,*m*) is true. If yes, increment *j* to *j*+1 and repeat Step 2. If not, switch to the *p*(*m*) component, reset *j* to 1, increment *i* to *i*+1, and go to Step 1.

With a little thought, one can see that the above algorithm indeed solves ⊓*x*(¬*p*(*x*) ⩔ *p*(*x*)).
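For illustration, here is the Step 1 / Step 2 routine rendered in Python. The decidable predicates `q` and `r` below are toy stand-ins of our own (they witness the trivially Δ_{2} predicate “*x* is even” and ignore their first two arguments); a finite step budget replaces the infinite play, and the function reports the component chosen last, i.e., the eventual choice.

```python
def recursively_approximate(q, r, m, budget=1000):
    """Run the Step 1 / Step 2 routine for `budget` checks and report
    the last-chosen component: 'p' for p(m), 'not-p' for its negation."""
    i, j = 1, 1
    choice = "p"          # the initial choice: the p(m) component
    step = 1
    for _ in range(budget):
        if step == 1:
            if q(i, j, m):                      # Step 1: test witness i for p(m)
                j += 1
            else:                               # witness refuted: switch components
                choice, j, step = "not-p", 1, 2
        else:
            if r(i, j, m):                      # Step 2: test witness i for not-p(m)
                j += 1
            else:                               # refuted: next witness, back to Step 1
                choice, j, i, step = "p", 1, i + 1, 1
    return choice

# Toy instance: p(x) = "x is even", written as ∃z∀y q(z,y,x) with trivial witnesses
q = lambda z, y, x: x % 2 == 0
r = lambda z, y, x: x % 2 == 1
```

On an even input the routine stays in Step 1 forever, so the eventual choice is the *p*(*m*) component; on an odd input it immediately switches and stays with ¬*p*(*m*).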

For the opposite direction, assume a given algorithm *Alg* solves ⊓*x*(¬*p*(*x*) ⩔ *p*(*x*)). Let *q*(*z*,*y*,*x*) be the predicate such that *q*(*i*,*j*,*m*) is true iff, in the scenario where the environment specified *x* as *m* at the beginning of the play, so that the game was brought down to ¬*p*(*m*) ⩔ *p*(*m*), we have: (1) at the *i*th computation step, *Alg* chose the *p*(*m*) component; (2) at the *j*th computation step, *Alg* did not move. Quite similarly, let *r*(*z*,*y*,*x*) be the predicate such that *r*(*i*,*j*,*m*) is true iff, in the same scenario, we have: (1) either *i*=1 or, at the *i*th computation step, *Alg* chose the ¬*p*(*m*) component; (2) at the *j*th computation step, *Alg* did not move. Of course, both *q*(*z*,*y*,*x*) and *r*(*z*,*y*,*x*) are decidable predicates, and hence so are *y*>*z* → *q*(*z*,*y*,*x*) and *y*>*z* → *r*(*z*,*y*,*x*). Now, it is not hard to see that *p*(*x*) = ∃*z*∀*y*(*y*>*z* → *q*(*z*,*y*,*x*)) and ¬*p*(*x*) = ∃*z*∀*y*(*y*>*z* → *r*(*z*,*y*,*x*)). So, *p*(*x*) is indeed of complexity Δ_{2}.

As a real-life example of a predicate which is recursively approximable but neither semidecidable nor co-semidecidable, consider the predicate *k*(*x*)<*k*(*y*), saying that the number *x* is simpler than the number *y* in the sense of Kolmogorov complexity. As noted earlier, *k*(*z*) (the Kolmogorov complexity of *z*) is bounded, never exceeding max(*z*,*const*) for a certain constant *const*. Here is an algorithm which recursively approximates the predicate *k*(*x*)<*k*(*y*), i.e., solves the problem ⊓*x*⊓*y*(¬(*k*(*x*)<*k*(*y*)) ⩔ *k*(*x*)<*k*(*y*)). Wait till Environment brings the game down to ¬(*k*(*m*)<*k*(*n*)) ⩔ *k*(*m*)<*k*(*n*) for some *m* and *n*. Then start simulating, in parallel, all Turing machines *t* satisfying *t* ≤ max(*m*,*n*,*const*) on input 0. Whenever you see that a machine *t* returns *m* and the size of *t* is smaller than that of any other previously found machine that returns *m* or *n* on input 0, choose *k*(*m*)<*k*(*n*). Quite similarly, whenever you see that a machine *t* returns *n* and the size of *t* is smaller than that of any other previously found machine that returns *n* on input 0, as well as smaller than or equal to the size of any other previously found machine that returns *m* on input 0, choose ¬(*k*(*m*)<*k*(*n*)). Obviously, the correct choice between *k*(*m*)<*k*(*n*) and ¬(*k*(*m*)<*k*(*n*)) will be made sooner or later and never reconsidered afterwards. This will happen when the procedure hits --- in the role of *t* --- a smallest-size machine that returns either *m* or *n* on input 0.

Anyway, here is our formal definition of the toggling conjunction and disjunction:

**Definition 3.10.1** Assume *A*_{0} and *A*_{1} are games with a common universe *U*.

a) *A*_{0} ⩓ *A*_{1} is defined as the game *G* with **Un**^{G} = *U* such that:

- Φ ∈ **Lr**^{G} iff Φ = 〈Ξ_{0}, *i*_{1}, Ξ_{1}, *i*_{2}, Ξ_{2}, …, *i*_{n}, Ξ_{n}〉 (*n* ≥ 0), where *i*_{1},*i*_{2},…,*i*_{n} ∈ {0,1}, every move of 〈Ξ_{0}, Ξ_{1}, Ξ_{2}, …, Ξ_{n}〉 has the prefix ‘0.’ or ‘1.’ and, for both *i* ∈ {0,1}, 〈Ξ_{0}, Ξ_{1}, Ξ_{2}, …, Ξ_{n}〉^{i.} ∈ **Lr**^{A_{i}};
- Call 0 and 1 *switch moves*. If Γ does not contain any switch moves, then **Wn**^{G}〈Γ〉 = **Wn**^{A_{0}}〈Γ^{0.}〉; if Γ has infinitely many occurrences of switch moves, then **Wn**^{G}〈Γ〉 = ⊤; otherwise, where *i* is the last switch move, **Wn**^{G}〈Γ〉 = **Wn**^{A_{i}}〈Γ^{i.}〉.

b) *A*_{0} ⩔ *A*_{1} is defined as the game *G* with **Un**^{G} = *U* such that:

- Φ ∈ **Lr**^{G} iff Φ = 〈Ξ_{0}, *i*_{1}, Ξ_{1}, *i*_{2}, Ξ_{2}, …, *i*_{n}, Ξ_{n}〉 (*n* ≥ 0), where *i*_{1},*i*_{2},…,*i*_{n} ∈ {0,1}, every move of 〈Ξ_{0}, Ξ_{1}, Ξ_{2}, …, Ξ_{n}〉 has the prefix ‘0.’ or ‘1.’ and, for both *i* ∈ {0,1}, 〈Ξ_{0}, Ξ_{1}, Ξ_{2}, …, Ξ_{n}〉^{i.} ∈ **Lr**^{A_{i}};
- Call 0 and 1 *switch moves*. If Γ does not contain any switch moves, then **Wn**^{G}〈Γ〉 = **Wn**^{A_{0}}〈Γ^{0.}〉; if Γ has infinitely many occurrences of switch moves, then **Wn**^{G}〈Γ〉 = ⊥; otherwise, where *i* is the last switch move, **Wn**^{G}〈Γ〉 = **Wn**^{A_{i}}〈Γ^{i.}〉.

As expected, the **toggling universal quantification** ⩓*x* and its dual, the **toggling existential quantification** ⩔*x*, are simply the “long” versions of ⩓ and ⩔, with switch moves now being arbitrary constants:

**Definition 3.10.2** Assume *x* is a variable and *A* = *A*(*x*) is a game.

a) ⩓*x* *A*(*x*) is defined as the game *G* with **Un**^{G} = **Un**^{A} such that:

- Φ ∈ **Lr**^{G} iff Φ = 〈Ξ_{0}, *c*_{1}, Ξ_{1}, *c*_{2}, Ξ_{2}, …, *c*_{n}, Ξ_{n}〉 (*n* ≥ 0), where *c*_{1},…,*c*_{n} are constants, every move of 〈Ξ_{0}, Ξ_{1}, Ξ_{2}, …, Ξ_{n}〉 has the prefix ‘*c*.’ for some constant *c* and, for every constant *c*, 〈Ξ_{0}, Ξ_{1}, Ξ_{2}, …, Ξ_{n}〉^{c.} ∈ **Lr**^{A(c)};
- Call the moves 0,1,2,… *switch moves*. If Γ does not contain any switch moves, then **Wn**^{G}〈Γ〉 = **Wn**^{A(0)}〈Γ^{0.}〉; if Γ has infinitely many occurrences of switch moves, then **Wn**^{G}〈Γ〉 = ⊤; otherwise, where *c* is the last switch move of Γ, **Wn**^{G}〈Γ〉 = **Wn**^{A(c)}〈Γ^{c.}〉.

b) ⩔*x* *A*(*x*) is defined as the game *G* with **Un**^{G} = **Un**^{A} such that:

- Φ ∈ **Lr**^{G} iff Φ = 〈Ξ_{0}, *c*_{1}, Ξ_{1}, *c*_{2}, Ξ_{2}, …, *c*_{n}, Ξ_{n}〉 (*n* ≥ 0), where *c*_{1},…,*c*_{n} are constants, every move of 〈Ξ_{0}, Ξ_{1}, Ξ_{2}, …, Ξ_{n}〉 has the prefix ‘*c*.’ for some constant *c* and, for every constant *c*, 〈Ξ_{0}, Ξ_{1}, Ξ_{2}, …, Ξ_{n}〉^{c.} ∈ **Lr**^{A(c)};
- Call the moves 0,1,2,… *switch moves*. If Γ does not contain any switch moves, then **Wn**^{G}〈Γ〉 = **Wn**^{A(0)}〈Γ^{0.}〉; if Γ has infinitely many occurrences of switch moves, then **Wn**^{G}〈Γ〉 = ⊥; otherwise, where *c* is the last switch move of Γ, **Wn**^{G}〈Γ〉 = **Wn**^{A(c)}〈Γ^{c.}〉.

For an example illustrating toggling quantifiers at work, remember that Kolmogorov complexity *k*(*x*) is not a computable function, i.e., the problem ⊓*x*⊔*y*(*y*=*k*(*x*)) has no algorithmic solution. However, replacing ⊔*y* with ⩔*y* in it yields an algorithmically solvable problem. A solution for ⊓*x*⩔*y*(*y*=*k*(*x*)) goes like this. Wait till the environment chooses a number *m* for *x*, thus bringing the game down to ⩔*y*(*y*=*k*(*m*)), which is essentially nothing but 0=*k*(*m*) ⩔ 1=*k*(*m*) ⩔ 2=*k*(*m*) ⩔ … . Initialize *i* to a sufficiently large number, such as *i* = |*m*|+*const*, where *const* is the constant mentioned earlier, and then do the following routine: switch to the disjunct *i*=*k*(*m*) of 0=*k*(*m*) ⩔ 1=*k*(*m*) ⩔ 2=*k*(*m*) ⩔ …, and then start simulating on input 0, in parallel, all Turing machines whose sizes are smaller than *i*; if and when you see that one of such machines returns *m*, update *i* to the size of that machine, and repeat the present routine.
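The routine above cannot be run literally, as it enumerates all Turing machines; but its switching pattern can be mimicked over a toy, finite “space of programs”. In the sketch below the table `PROGRAMS` is a made-up stand-in of our own, each entry being a (size, output-on-input-0) pair; the routine lowers its current disjunct *i* whenever a smaller description of *m* turns up, and the last switch is the eventual answer.

```python
# Toy stand-in for the space of programs: each entry is (size, output).
# An assumption for illustration only -- the real routine enumerates
# all Turing machines, and Kolmogorov complexity is not computable.
PROGRAMS = [(5, 42), (3, 42), (7, 9), (2, 42), (4, 9)]

def approximate_k(m, initial_bound):
    """Mimic the routine: repeatedly 'switch' to the disjunct i = k(m),
    lowering i whenever a smaller program producing m is found.
    Returns the sequence of choices; the last one is the eventual answer."""
    i = initial_bound
    choices = [i]
    for size, output in PROGRAMS:      # 'parallel simulation', serialized
        if output == m and size < i:
            i = size                   # a smaller description of m found
            choices.append(i)          # switch to the disjunct i = k(m)
    return choices
```

With the toy table above, approximating *k*(42) from the initial bound 10 produces the choice sequence 10, 5, 3, 2, whose last entry is the smallest available description size of 42.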

We close this section with formal definitions of **toggling recurrence** (⩕) and its dual, **toggling corecurrence** (⩖):

**Definition 3.10.3** Assume *A* is a game.

a) ⩕*A* is defined as the game *G* with **Un**^{G} = **Un**^{A} such that:

- Φ ∈ **Lr**^{G} iff Φ = 〈Ξ_{0}, *c*_{1}, Ξ_{1}, *c*_{2}, Ξ_{2}, …, *c*_{n}, Ξ_{n}〉 (*n* ≥ 0), where *c*_{1},…,*c*_{n} are constants, every move of 〈Ξ_{0}, Ξ_{1}, Ξ_{2}, …, Ξ_{n}〉 has the prefix ‘*c*.’ for some constant *c* and, for every constant *c*, 〈Ξ_{0}, Ξ_{1}, Ξ_{2}, …, Ξ_{n}〉^{c.} ∈ **Lr**^{A};
- Call the moves 0,1,2,… *switch moves*. If Γ does not contain any switch moves, then **Wn**^{G}〈Γ〉 = **Wn**^{A}〈Γ^{0.}〉; if Γ has infinitely many occurrences of switch moves, then **Wn**^{G}〈Γ〉 = ⊤; otherwise, where *c* is the last switch move of Γ, **Wn**^{G}〈Γ〉 = **Wn**^{A}〈Γ^{c.}〉.

b) ⩖*A* is defined as the game *G* with **Un**^{G} = **Un**^{A} such that:

- Φ ∈ **Lr**^{G} iff Φ = 〈Ξ_{0}, *c*_{1}, Ξ_{1}, *c*_{2}, Ξ_{2}, …, *c*_{n}, Ξ_{n}〉 (*n* ≥ 0), where *c*_{1},…,*c*_{n} are constants, every move of 〈Ξ_{0}, Ξ_{1}, Ξ_{2}, …, Ξ_{n}〉 has the prefix ‘*c*.’ for some constant *c* and, for every constant *c*, 〈Ξ_{0}, Ξ_{1}, Ξ_{2}, …, Ξ_{n}〉^{c.} ∈ **Lr**^{A};
- Call the moves 0,1,2,… *switch moves*. If Γ does not contain any switch moves, then **Wn**^{G}〈Γ〉 = **Wn**^{A}〈Γ^{0.}〉; if Γ has infinitely many occurrences of switch moves, then **Wn**^{G}〈Γ〉 = ⊥; otherwise, where *c* is the last switch move of Γ, **Wn**^{G}〈Γ〉 = **Wn**^{A}〈Γ^{c.}〉.

**Definition 3.10.4**

a) *A* ⩔ *B* =_{def} ¬(¬*A* ⩓ ¬*B*)

b) ⩔*x* *A*(*x*) =_{def} ¬⩓*x* ¬*A*(*x*)

c) ⩖*A* =_{def} ¬⩕¬*A*

3.11
Cirquents

The syntactic
constructs called ** cirquents**,
briefly mentioned earlier,
take the expressive power of CoL to a qualitatively higher level,
allowing us
to form, in a systematic way, an infinite variety of game operations.
Each
cirquent is --- or can be seen as --- an independent operation on
games,
generally not expressible via composing operations taken from some
fixed finite
pool of primitives, such as the operations seen in the preceding
subsections of the present section.

For reasons of simplicity, we are not going to introduce cirquents and their semantics in full generality and formal detail
here. This is done in [Jap06c,
Jap08a,
Jap11b, Jap13a].
Instead, to get some intuitive insights, let us focus only on cirquents that look like Boolean circuits with ∧- and ∨-gates. Every such cirquent *C* can be seen as an *n*-ary operation on games, where *n* is the number of inputs of *C*. For instance, the cirquent

represents the 3-ary game operation ♥ informally defined as follows. Playing ♥(*A*,*B*,*C*), as is the case with all parallel operations, means playing simultaneously in all components of it. In order to win, Machine needs to win in at least two out of the three components. Any attempt to express this operation in terms of ∧, ∨ or other already defined operations is going to fail. For instance, the natural candidate (*A*∨*B*) ∧ (*A*∨*C*) ∧ (*B*∨*C*) is dramatically inadequate. This is a game on six rather than three boards, with *A* played on boards #1 and #3, *B* on boards #2 and #5, and *C* on boards #4 and #6. A special case of it is (*A*∨*A*) ∧ (*A*∨*A*) ∧ (*A*∨*A*), which fails to indicate, for instance, that the 1^{st} and the 3^{rd} occurrences of *A* stand for the same copy of *A* while the 2^{nd} occurrence stands for a different copy, in which a different run can be generated.

As we just had a chance to see, even at the most basic (∧,∨) level, cirquents
are properly more expressive than formulas. It is this added
expressiveness and
flexibility that, for some fragments of CoL, makes a difference between
axiomatizability and unaxiomatizability: even if one is only trying to
set up a
deductive system for proving valid formulas, intermediate steps in
proofs of
such formulas still inherently require using cirquents that cannot be
written
as formulas.^{[Jap06c,
Jap13a, Jap13b] }An
example is the system **CL15**
presented
in Section 6.4.

In the
present
piece
of writing we are exclusively focused on the formula-based version of
CoL,
seeing cirquents (in Section 6.4)
only as
technical servants to formulas. This explains why we do not attempt to
define
the semantics of cirquents formally.
It
should however be noted that cirquents are naturally called for not
only within
the specific formal framework of CoL, but also in the framework of all
resource-sensitive approaches in logic. Such approaches intrinsically
require
the ability to account for the possibility of *resource
sharing*, as sharing is a ubiquitous phenomenon in the
world of resources. Yet formula- or sequent-based approaches do not and
cannot
possess such an ability. The following naive example may provide some
insights.
It is about the earlier
discussed ♥(*A*,*A*,*A*)
combination, seen as a combination of resources in the abstract/naive
sense
rather than the specific game-semantical sense of CoL.

Victor has
three 25c-coins, and he knows that two of
them are true while one is perhaps false (but he has no way to tell
which one
is false). Could he get a candy? Expected or not, the answer depends on
the
number of slots that the machine has. Consider two cases: machine *M2* with two slots, and machine *M3* with
three slots. Victor would have
no problem with *M3*: he can insert
his
three coins into the three slots, and the machine, having received ≥50c, will
dispense a candy. With *M2*, however,
Victor is powerless. He can try inserting arbitrary
two of his three coins into the two slots of the machine, but there is
no
guarantee that one of those two coins is not false, in which case
Victor will
end up with no candy and only 25 cents remaining in his pocket.
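Victor's predicament can be verified mechanically. The sketch below (the encoding of the coins as the numbers 0, 1, 2 is our own) brute-forces all scenarios: inserting all three coins into *M3* always yields at least 50 genuine cents, while no fixed pair of coins is safe against every possible location of the false coin in *M2*.

```python
from itertools import combinations

def value(inserted, false_coin):
    """Total genuine cents among the inserted coins, given which coin is fake."""
    return sum(25 for c in inserted if c != false_coin)

coins = {0, 1, 2}          # three 25c coins, exactly one of which is fake

# M3: insert all three coins -- works no matter which coin is fake
m3_ok = all(value(coins, fake) >= 50 for fake in coins)

# M2: no fixed choice of two coins is safe against every possible fake
m2_ok = any(all(value(pair, fake) >= 50 for fake in coins)
            for pair in combinations(coins, 2))
```

The brute force confirms the story: `m3_ok` holds while `m2_ok` does not, since every pair of coins contains the fake in some scenario.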

Both
*M2* and *M3*
can be understood as resources ---
resources turning coins into a candy. And note that these two resources
are not
the same: *M3* is obviously stronger
(“better”), as it allows Victor to get a candy whereas *M2*
does not, while, at the same time, anyone rich enough to be able
to make *M2* dispense a candy would
be
able to do the same with *M3* as
well. Yet, formulas fail to capture this important difference. With ∧, ∨, → here understood as abstract multiplicative-style operations on resources, *M2* and *M3* can be written as *R2*→*Candy* and *R3*→*Candy*, respectively: they consume a certain resource *R2* or *R3* and produce *Candy*.
What makes *M3*
stronger than *M2* is that the
subresource *R3* that it consumes is
weaker (easier to supply) than the subresource *R2*
consumed by *M2*.
Specifically, with one false and two true coins, Victor is able to
satisfy *R3* but not *R2*.

The
resource *R2*
can be represented as the cirquent

which, due to being tree-like, can also be adequately written as the formula 25c ∧ 25c.
As for the resource *R3*, either one of the following
two
cirquents is an adequate
representation of it, with one of them probably showing the relevant
part of the
actual physical circuitry used in *M3*:

**Figure
3.11.1**:
Cirquents
for the
“two out of three” combination of resources

Unlike *R2*, however, *R3* cannot be represented through a formula. 25c ∧ 25c does not fit the bill, for it represents *R2* which, as we already agreed, is not the same as *R3*.
In another attempt to find a formula, we might try to rewrite one of
the above
two cirquents --- let it be the one on the right --- into an
“equivalent”
formula in the standard way, by duplicating and separating shared
nodes, as we
did earlier when
tackling ♥(*A*,*A*,*A*).
This results in (25c∨25c) ∧ (25c∨25c) ∧ (25c∨25c), which, however, is no more adequate than 25c ∧ 25c. It
expresses
not *R3* but the resource consumed by
a
machine with six coin slots grouped into three pairs, where (at least)
one slot
in each of the three pairs needs to receive a true coin. Such a machine
thus
dispenses a candy for ≥75 rather than
≥50 cents,
which makes Victor's resources insufficient.

The
trouble
here is related to the inability of
formulas to explicitly account for resource sharing or the absence
thereof. The
cirquent on the right of the above figure stands for a
(multiplicative-style)
conjunction of three resources, each conjunct, in turn, being a
disjunction of two
subresources of type 25c. However, altogether there are three rather
than six
25c-type subresources, each one being shared between two different
conjuncts of
the main resource.

**4 Interactive
machines**

4.1
Interactive computability

In
traditional
game-semantical approaches, including
those of Lorenzen [Lor61],
Hintikka [Hin73],
Blass [Bla92], Abramsky [Abr94]
and
others, players’ strategies are understood as *functions*
--- typically as functions from interaction histories
(positions) to moves, or sometimes as functions that only look at the
latest
move of the history. This *strategies-as-functions*
approach, however, is generally inapplicable in the context of CoL,
whose
relaxed semantics, in striving to get rid of
“bureaucratic pollutants” and only deal with the remaining
true essence
of games, does not
impose any
regulations on which player can or should move in a given situation.
Here, in
many cases, either player may have (legal) moves, and then it is
unclear
whether the next move should be the one prescribed by ⊤’s strategy function or the one prescribed by the strategy function of ⊥. The
advantages of
CoL’s approach become especially appreciable when one tries to bring
complexity
theory into interactive computation: hardly (m)any really meaningful
and
interesting complexity-theoretic concepts can be defined for games
(particularly, games that may last long) with the
strategies-as-functions
approach.

In CoL, (⊤’s effective) strategies are defined in terms of interactive machines, where
computation is
one continuous process interspersed with --- and influenced by ---
multiple
“input” (Environment’s moves) and “output” (Machine's moves) events. Of
several, seemingly rather different yet equivalent, machine models of
interactive computation studied in CoL, we will only discuss the most
basic, ** HPM** (“Hard-Play
Machine”)
model.

An HPM is a Turing machine with the additional capability of making moves. The adversary can also move at any time, and such moves are the only nondeterministic events from the machine’s perspective. Along with one or more ordinary read/write **work tapes**, the machine has an additional, read-only tape called the **run tape**. The latter serves as a dynamic input: at any time, its content is the current position of the play, with the moves made by the machine shown in green and the moves made by its adversary in red.

As for ⊥’s strategies, there is no need to define them: all possible behaviors by ⊥ are accounted for by the different possible nondeterministic updates of the run tape of an HPM.

In
the above
outline, we described HPMs in a relaxed fashion,
without being specific about details
such as, say, how, exactly, moves are made by the machine,
how many
moves either player can make at once, what happens if both players
attempt to
move “simultaneously”, etc. As it happens, all reasonable design
choices yield
the same class of winnable games as long as we only consider static games, the only sort
of games that we are
willing to consider.

While
design
choices are largely unimportant and
“negotiable”, we still want to agree on some technical details for
clarity.
Just like an ordinary Turing machine, an HPM has a finite set of *states*, one of which has the special
status of being the *start state*.
There are no accept, reject, or halt states, but there are specially
designated
states called *move states*. Each
tape
of the machine has a beginning but no end, and is divided into
infinitely many *cells*, arranged in
the left-to-right
order. At any time, each cell contains one symbol from a certain fixed
finite
set of *tape symbols*. The *blank* symbol
is among the tape symbols.
Each tape has its own *scanning head*,
at any given time looking (located) at one of the cells of the tape.

For
technical
purposes, we additionally assume the
(physical or imaginary) presence of a *move*
*buffer*. The size of the latter is
unlimited and, at any time, it contains some (possibly empty) finite
string
over the keyboard alphabet. The function of the buffer is to let the
machine
construct a move piece-by-piece before officially making such
a move.

A
transition
from one *computation step* (“*clock
cycle*”, “*time*”) to
another
happens according to the fixed *transition
function* of the machine. The
latter, depending on the current state and the symbols seen by the scanning heads on the
corresponding tapes,
deterministically prescribes: (1) the next state that the machine
should
assume; (2) the tape symbol by which the old symbol should be
overwritten in
the current cell (the
cell currently
scanned by the head),
for each work tape
individually; (3) the (finite, possibly empty) string over the keyboard
alphabet that should be appended to the content of the move buffer; and
(4) the (not
necessarily the same)
direction --- stay put, one cell to the left, or one cell to the right
--- in
which each scanning
head should move. It
is stipulated that when the head of a tape is looking at the first
(leftmost)
cell, an attempt to move to the left results in staying put. The same
happens
when the head tries to move to the right while looking at a blank cell
(a cell
containing the “blank” symbol).

When the machine
starts working, it is in its start state,
all scanning heads are looking at the leftmost cells of the
corresponding
tapes, all work tapes are blank, the run tape does not contain any
green moves
(but it may contain some red moves, signifying that Environment has
made those
moves “at the very beginning”), and the buffer is empty.

Whenever
the
machine enters a move state, the
green-colored move α written in
the
buffer by that time is automatically appended to the contents of the
run tape,
and the buffer is simultaneously emptied. Also, on every transition,
any finite
sequence β_{1},…,β_{m} of red moves
may be
nondeterministically appended to the contents of the run tape. If the
above two
events happen on the same clock cycle, then both α
and β_{1},…,β_{m} will be
appended to
the contents of the run tape, where α can
(nondeterministically) go before, after or anywhere in between β_{1},…,β_{m} .
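The buffer-and-append mechanics just described can be sketched in a few lines of Python. This is a drastically simplified toy (the class `MiniHPM` and its method names are our own inventions, not part of the formal HPM model): it ignores states, tapes and the transition function, modeling only the run tape, the move buffer, and the green/red appending rules.

```python
class MiniHPM:
    """Toy model of an HPM's move mechanics: the move buffer,
    the run tape, and the green/red appending rules."""
    def __init__(self):
        self.buffer = []          # the move currently under construction
        self.run_tape = []        # sequence of ("green" | "red", move) records

    def emit(self, chunk):
        """Append a string to the move buffer (part (3) of a transition)."""
        self.buffer.append(chunk)

    def enter_move_state(self):
        """Entering a move state: the buffered move is appended to the
        run tape as a green move, and the buffer is simultaneously emptied."""
        self.run_tape.append(("green", "".join(self.buffer)))
        self.buffer = []

    def environment_moves(self, moves):
        """The nondeterministic event: red moves appended by the adversary."""
        self.run_tape.extend(("red", m) for m in moves)
```

For instance, a machine may build the move “0.5” in two `emit` steps while the environment interleaves a red move of its own; entering a move state then flushes the green move to the run tape and clears the buffer.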

When
describing the work of a machine, we may use the
jargon “** retire**”.
What will be
meant by
retiring is going into an infinite loop that makes no moves, puts
nothing into
the buffer, and does not reposition the scanning heads.
Retiring thus achieves the same effect as
halting would achieve if this was technically allowed.

A
*configuration*
is a full description of the situation in the machine at some given
computation
step. It consists of records of the (“current”) contents of the work
and run
tapes, the content of the buffer, the location of each scanning head,
and the
state of the machine. A *computation
branch* of the machine is an infinite sequence *C*_{0},*C*_{1},*C*_{2},… of configurations, where *C*_{0} is an *initial configuration* (one described earlier), and every *C*_{i+1} is obtained from *C*_{i} through a single application of the transition function, together with a possible nondeterministic update of the run tape as described earlier.

**Definition 4.1.1** Let *A* be a constant game. We say that an HPM *M* **computes** (**solves**, **wins**) *A* iff, in every computation branch of *M*, the run incrementally spelled on the run tape is a ⊤-won run of *A*.

4.2 Interactive complexity

At
present,
the theory of interactive computation is
far from being well developed, and even more so is the corresponding
complexity
theory. Studies of interactive computation in the context of complexity, while going on for a long time, have been relatively scattered and ad hoc: more often than not, interaction has been used as a tool for better understanding certain complexity issues for traditional, non-interactive problems, rather than being treated as an object of systematic study in its own right (examples would be alternating computation [Cha81], or interactive proof systems and Arthur-Merlin games [Gol89]). As if complexity theory were not “complex” enough already, taking it to the interactive level would most certainly generate an order-of-magnitude greater diversity of species in the complexity zoo.

CoL
has made
its first modest steps^{[Jap15]}
towards elaborating some basic
complexity-theoretic concepts for game-playing strategies, which we are
going
to present in this section. It should however be pointed out that our
way of
measuring the complexity of strategies presented in this section is
merely one
out of a huge and interesting potential variety of complexity measures
meaningful and useful in the interactive context.

**Definition
4.2.1** In
the
context of a given computation branch (play)
of a given HPM *M*:

1. By the ** background** of a clock
cycle