**Clarithmetic (CoL-based arithmetic)**

7.1 Introduction

7.2 Clarithmetic versus bounded arithmetic

7.3 Motivations

7.4 Common preliminaries for all our theories of clarithmetic

7.5 Clarithmetics for polynomial time, polynomial space, elementary and primitive recursive computability

7.6 Clarithmetics for provable computability

7.7 Tunable clarithmetic

7.1 Introduction

As we agreed earlier, the ultimate purpose of logic is to provide a tool for navigating real life. As such, first and foremost it should be able to serve as a basis for **applied** (“substantive”) theories.

In this respect, CoL is a reasonable alternative to classical logic. One can base applied theories on CoL just as meaningfully as on classical logic (and certainly more meaningfully than on intuitionistic or linear logics, which offer no concept of truth), but with significant advantages, especially if we care about the constructive, computational and complexity-theoretic aspects of the theory. Number theories based on CoL are called “**clarithmetics**”.

The nonlogical axioms of a clarithmetic will generally be a collection of (formulas expressing) problems whose algorithmic solutions are known. Often, together with nonlogical axioms, we also have nonlogical rules of inference that preserve the property of computability. Then the soundness of the corresponding underlying axiomatization of CoL --- which usually comes in the strong, constructive sense --- guarantees that every theorem *T* of the theory also has an algorithmic solution and that, furthermore, such a solution can be mechanically extracted from a proof of *T*. Does this not look like exactly what the constructivists have been calling for?

Unlike mathematical or philosophical constructivism, however, and even unlike the early-day theory of computation, modern computer science has long understood that what really matters is not just computability but rather efficient computability. And here comes the good news: CoL, without any modifications whatsoever, can be used for studying efficient or complexity-conscious computability just as successfully as for studying computability-in-principle. This is no surprise. Let us take the fragment **CL12** of CoL for specificity. As we saw from Theorem 6.7.4, **CL12** is a sound and complete logic not only of (1) computability-in-principle, but also of (2) linear amplitude, constant space and linear time computability. This means it is also a logic of any sort of computability between these two extremes. If different sorts of computability required different versions of CoL, then CoL would not be universal enough, and thus would not quite be a Logic with a capital “L”. It would be no fun to develop a separate “logic” for each of the many interesting classes from the complexity zoo.

But if CoL does not differentiate between various sorts of computational complexity, then how could it be useful for studying them? The whole point is that the differences between clarithmetical theories tailored to various complexity classes are achieved through differences in the non-logical postulates of those theories rather than in the underlying logic. Roughly speaking, if the nonlogical axioms of a clarithmetic represent problems computable within given complexity constraints and if the nonlogical inference rules preserve this property, then CoL as a pure logic is gentle enough to guarantee that all theorems of the theory also enjoy the same property. Of course, any very weak logic --- take the empty logic as an extreme --- would also have this “virtue”. But CoL is as strong as a logic could be and, as we are going to see later, it allows us to achieve not only soundness but also completeness of clarithmetics based on it. Here we should differentiate between two sorts of completeness --- **extensional** and **intensional**. Extensional completeness means that every computable problem (within the target complexity constraints) is expressed by some theorem of the theory, while intensional completeness means that every formula expressing such a problem is itself a theorem.

By now eleven clarithmetical theories, named **CLA1** through **CLA11**, have been introduced and studied [Jap10, Jap11c, Jap14, Jap16a, Jap16b, Jap16c]. These theories are notably simple: most of them happen to be conservative extensions of **PA** whose only non-classical axiom is the sentence ⊓*x*⊔*y*(*y* = *x*ʹ) asserting computability of the successor function ʹ, and whose only non-logical rule of inference is “constructive induction”, the particular form of which varies from system to system. The diversity of such theories is typically related to the different complexity conditions imposed on the underlying concept of interactive computability. For instance, **CLA4** soundly and completely captures the set of polynomial time solvable interactive number-theoretic problems [Jap11c], **CLA5** does the same for polynomial space [Jap16a], **CLA6** for elementary recursive time (=space) [Jap16a], and **CLA7** for primitive recursive time (=space) [Jap16a]. Most interesting is the system **CLA11** which, due to having four parameters, is in fact a scheme of clarithmetical theories rather than a particular theory. By tuning its parameters in an essentially mechanical way, **CLA11** yields sound and complete systems for an infinite and diverse class of computational complexities, including those to which the earlier disparate systems of clarithmetic were tailored. Parameters #1, #2 and #3 are sets of terms or pseudoterms used as bounds in certain postulates, and parameter #4 is a (typically empty yet “expandable”) set of formulas used as supplementary axioms. The latter determines the intensional strength of the theory, while parameters #1, #2 and #3 respectively govern the amplitude, space and time complexities of the problems represented by the theorems of the theory.

The logical basis of all clarithmetical theories studied so far is **CL12**. More expressive fragments of CoL can be considered for this purpose in the future, once the syntax of such fragments is sufficiently developed.

7.2 Clarithmetic vs. bounded arithmetic

It has long been noticed that many complexity classes can be characterized by certain versions of arithmetic. Of those, systems of *bounded arithmetic* [Haj93, Kra95] should be named as the closest predecessors of our systems of clarithmetic. Just like our clarithmetical systems, they control computational complexity by explicit resource bounds attached to quantifiers, usually in induction or similar postulates. The best known alternative line of research, primarily developed by recursion theorists, controls computational complexity via type information instead. Here we will not attempt any comparison with these alternative approaches because of big differences in the defining frameworks.

The story of bounded arithmetic starts with Parikh's 1971 work [Par71], where the first system IΔ₀ of bounded arithmetic was introduced. Paris and Wilkie, in [Par85] and a series of other papers, advanced the study of IΔ₀ and of how it relates to complexity theory. Interest in the area dramatically intensified after the appearance of Buss' influential 1986 work [Bus86], where systems of bounded arithmetic for the polynomial hierarchy, polynomial space and exponential time were introduced. Clote and Takeuti [Clo92], Cook and Nguyen [Coo10] and others introduced a host of theories related to other complexity classes.

All theories of bounded arithmetic are weak subtheories of **PA**, typically obtained by imposing certain syntactic restrictions on the induction axiom or its equivalent, and then adding some old theorems of **PA** as new axioms to partially bring back the baby thrown out with the bath water. Since the weakening of the deductive strength of **PA** makes certain important functions or predicates no longer definable, the non-logical vocabularies of these theories typically have to go beyond the original vocabulary {0, ʹ, +, ✕} of **PA**. These theories achieve soundness and extensional completeness with respect to the corresponding complexity classes in the sense that a function *f*(*x*) belongs to the target class if and only if it is provably total in the system --- that is, if there is a Σ₁-formula *F*(*x*,*y*) that represents the graph of *f*(*x*), such that the system proves ∀*x*∃!*y* *F*(*x*,*y*).

Here we want to point out several differences between clarithmetic and bounded arithmetic.

**7.2.1. Generality** While bounded arithmetic and the other approaches are about functions, clarithmetics are about interactive problems, with functions being nothing but special cases of the latter. This way, clarithmetics allow us to systematically study not only computability in its narrow sense, but also many other meaningful properties and relations, such as various sorts of enumerability, approximability or reducibility. As we remember, just like function effectiveness, such relations happen to be special cases of our broad concept of computability. Having said that, the differences discussed in the subsequent paragraphs of this subsection hold regardless of whether one keeps in mind the full generality of clarithmetics or restricts attention back to functions only, the “common denominator” of the two approaches.

**7.2.2. Intensional strength** Our systems extend rather than restrict **PA**. Furthermore, instead of **PA**, as a classical basis one can take anything from a very wide range of sound theories, beginning with certain weak fragments of **PA** and ending with the absolute-strength theory **Th(N)** of the standard model of arithmetic (the “truth arithmetic”). It is exactly due to this flexibility that we can achieve not only extensional but also intensional completeness --- something inherently unachievable within the traditional framework of bounded arithmetic, where computational soundness by its very definition entails deductive weakness.

**7.2.3. Language** Because our theories are no longer weak, there is no need for any new *non-logical* primitives in the language or the associated new axioms in the theory: all recursive or arithmetical relations and functions can be expressed through 0, ʹ, +, ✕ in the standard way. Instead, the languages of the clarithmetical theories **CLA1**-**CLA11** only have the two additional *logical* connectives ⊓, ⊔ and the two additional quantifiers ⊓, ⊔. It is CoL’s constructive semantics for these operators that allows us to express nontrivial computational problems. Otherwise, formulas not containing these operators --- formulas of the traditional language of **PA**, that is --- only express elementary problems (moveless games). This explains how our approach makes it possible to reconcile unlimited deductive strength with computational soundness. For instance, the formula ∀*x*∃*y* *F*(*x*,*y*) may be provable even if *F*(*x*,*y*) is the graph of a function which is “too hard” to compute. This has no relevance to the complexity class characterized by the theory because the formula ∀*x*∃*y* *F*(*x*,*y*), unlike its “constructive counterpart” ⊓*x*⊔*y* *F*(*x*,*y*), carries no nontrivial computational meaning.

**7.2.4. Quantifier alternation** Our approach admits arbitrarily many alternations of bounded quantifiers in induction or similar postulates, whereas the traditional bounded arithmetics are typically very sensitive in this respect, with different quantifier complexities yielding different computational complexity classes.

**7.2.5. Uniformity** As already pointed out, our system **CLA11** offers uniform treatments of otherwise disparate systems for various complexity classes. It should be noted that the same holds for the bounded-arithmetic approach of [Coo10], albeit for a rather different spectrum of complexity classes. The ways uniformity is achieved, however, are drastically different. In the case of [Coo10], the way to build your own system is to add, to the base theory, an axiom expressing a complete problem of the target complexity class. Doing so thus requires quite some nontrivial complexity-theoretic knowledge. In our case, on the other hand, adequacy is achieved by straightforward, brute force tuning of the corresponding parameters of **CLA11**. E.g., for linear space, we simply need to take parameter #2 to be the set of (0, ʹ, +)-combinations of variables, i.e., the set of terms that “canonically” express the linear functions. If we (simultaneously) want to achieve adequacy with respect to polynomial time, we shall (simultaneously) take parameter #3 to be the set of (0, ʹ, +, ✕)-combinations of variables, i.e., the set of terms that express the polynomial functions. And so on.

7.3 Motivations

The main motivations for studying clarithmetics are as follows, somewhat arbitrarily divided into the categories “general”, “theoretical” and “practical”.

**7.3.1. General** Increasingly loud voices are being heard [Gol06] saying that, since real computers are interactive, it might be time for theoretical computer science to seriously consider switching from Church’s narrow understanding of computational problems as functions to more general, interactive understandings. Clarithmetics serve the worthy job of lifting “efficient arithmetics” to the interactive level. Of course, the already existing results are only CoL’s first modest steps in this direction, and there is still a long way to go. In any case, our generalization from functions to interaction appears to be beneficial even if, eventually, one is only interested in functions, because it allows a smoother treatment and makes our systems easy to understand in their own right. Imagine how awkward it would be if one had tried to restrict the language of classical logic only to formulas with at most one alternation of quantifiers because more complex formulas seldom express things that we comprehend or care about and, besides, things can always be Skolemized anyway. Or if mankind had let the Roman-European tradition prevail in its reluctance to go beyond positive integers and accept 0 as a legitimate quantity, to say nothing about the negative, fractional, or irrational numbers.

The “smoothness” of our approach is related to the fact that, in it, all formulas --- rather than only those of the form ∀*x*∃!*y* *F*(*x*,*y*) for a special-form *F* --- are seen as expressing computational problems.

As mentioned, our approach extends rather than restricts **PA**. This allows us to safely continue relying on our standard arithmetical intuitions when reasoning within clarithmetic, without our hands being tied by various constraints and without the caution necessary when reasoning within weak theories. Generally, a feel for a formal theory and the “sixth sense” that it takes to comfortably reason within the theory require time and effort to develop. Many of us have such a “sixth sense” for **PA**, but not so many have it for weaker theories. This is because weak theories, being artificially restricted and thus forcing us to pretend that we do not know certain things that we actually do know, are farther from a mathematician’s normal intuitions than **PA** is. Even if this were not the case, mastering the one and universal theory **PA** is still easier and promises a greater payoff than trying to master the tens of disparate yet equally important weak theories that are out there.

**7.3.2. Theoretical** Among the main motivations for studying bounded arithmetics has been a hope that they can take us closer to solving some of the great open problems in complexity theory, for, quoting Cook and Nguyen [Coo10], “it ought to be easier to separate the theories corresponding to the complexity classes than to separate the classes themselves”. The same applies to our systems of clarithmetic, and to **CLA11** in particular, which allows us to capture, in a uniform way, a very wide and diverse range of complexity classes.

While the bounded arithmetic approach has been around and extensively studied for a long time, progress towards realizing the above hope has been very slow. This fact alone justifies all reasonable attempts to try something substantially new and so far not well explored. The clarithmetics line of research qualifies as such. Specifically, studying “nonstandard models” of clarithmetics, whatever they may mean, could be worth the effort.

Among the factors which might make clarithmetics more promising in this respect than their traditional alternatives is that the former achieve intensional completeness while the latter inherently have to settle for merely extensional completeness. Separating theories intensionally is generally easier than separating them extensionally, yet intensional completeness implies that the two sorts of separation mean the same.

Another factor relates to the ways in which theories are axiomatized in uniform treatments, namely, the approach of **CLA11** versus that of [Coo10]. As noted earlier, the uniform method of [Coo10] achieves extensional completeness with respect to a given complexity class by adding to the theory an axiom expressing a complete problem of that class. The same applies to the method used in [Clo92]. Such axioms are typically long formulas, as they carry nontrivial complexity-theoretic information. They talk --- through encoding and arithmetization --- about graphs, computations, etc. rather than about numbers. This makes such axioms hard to comprehend directly as number-theoretic statements, and makes the corresponding theories hard to analyze. This approach essentially means translating our complexity-theoretic knowledge into arithmetic. For this reason, it is likely to encounter the same kinds of challenges as the ordinary, informal theory of computation does when it comes to separating complexity classes. Also, oftentimes we may simply fail to know a complete problem of a given, not very well studied, complexity class.

The uniform way in which **CLA11** axiomatizes its instances is very different from the above. Here all axioms and rules are “purely arithmetical”, carrying no direct complexity-theoretic information. This means that the number-theoretic contents of such theories are easy to comprehend, which, in turn, carries a promise that their model theories might be easier to successfully study, develop and use in proving independence/separation results.

**7.3.3. Practical** More often than not, the developers of complexity-bound arithmetics have also been motivated by the potential of practical applications in computer science. Here we quote Schwichtenberg's [Sch06] words: ‘It is well known that it is undecidable in general whether a given program meets its specification. In contrast, it can be checked easily by a machine whether a formal proof is correct, and from a constructive proof one can automatically extract a corresponding program, which by its very construction is correct as well. This at least in principle opens a way to produce correct software, e.g. for safety-critical applications. Moreover, programs obtained from proofs are “commented” in a rather extreme sense. Therefore it is easy to apply and maintain them, and also to adapt them to particular situations.’ In a more ambitious and, at this point, somewhat fantastic perspective, after developing reasonable theorem-provers, CoL-based efficiency-oriented systems can be seen as declarative programming languages in an extreme sense, where human “programming” just means writing a formula expressing the problem whose efficient solution is sought for systematic usage in the future. That is, a program simply coincides with its specification. The compiler’s job would be finding a proof (the hard part) and translating it into machine-language code (the easy part). The process of compiling could thus take long but, once compiled, the program would run fast ever after.

What matters in applications like the above, of course, is the intensional rather than extensional strength of a theory. The greater that strength, the better the chances that a proof/program will be found for a declarative, ad hoc or brute force specification of the goal. Attempts to put an intensionally weak theory (regardless of its extensional strength) to practical use would usually necessitate some pre-processing of the goal, such as expressing it through a certain standard-form Σ₁-formula. But this sort of pre-processing often essentially means already finding --- outside the formal system --- a solution of the target problem or, at least, already finding certain insights into such a solution.

In this respect, **CLA11** is exactly what we need. Firstly, because it is easily, “mechanically” adjustable to a potentially infinite variety of target complexities that one may come across in real life. It allows us to adequately capture a complexity class from that variety without any preliminary complexity-theoretic knowledge about the class, such as knowledge of some complete problem of the class (yet another sort of “pre-processing”) as required by the approaches in the style of [Clo92] or [Coo10]. All relevant knowledge about the class is automatically extracted by the system from the definition (ad hoc description) of the class, without any need to look for help outside the formal theory itself. Secondly, and more importantly, **CLA11** fits the bill because of its intensional strength, which includes the full deductive power of **PA** and which is only limited by the Goedel incompleteness phenomenon. Even when the theory possesses no arithmetical knowledge on top of what is implied by the Peano axioms, it still provides “practically full” information about computability within the corresponding complexity constraints. This is in the same sense as **PA**, despite Goedel’s incompleteness, provides “practically full” information about arithmetical truth. Namely, if a formula *F* is not provable, one may be confident that there is no easy way to solve *F* within the required complexity restrictions: either such an algorithm does not exist, or showing its correctness requires going beyond ordinary combinatorial reasoning formalizable in **PA**.

7.4 Common preliminaries for all our theories of clarithmetic

All systems of clarithmetic presented on this website have the same **language**, obtained from the language of **PA** by adding the choice connectives ⊓, ⊔ and the choice quantifiers ⊓, ⊔. Besides the constant 0, the nonlogical vocabulary of this language consists of the following three function letters:

- *successor*, unary. We write *t*ʹ for *successor*(*t*).
- *sum*, binary. We write *t*₁+*t*₂ for *sum*(*t*₁,*t*₂).
- *product*, binary. We write *t*₁✕*t*₂ for *product*(*t*₁,*t*₂).

Let us call this language L. Throughout the rest of Section 7, unless otherwise specified or implied by the context, “**formula**” is to be understood as a formula of L. As always, a **sentence** means a formula with no free occurrences of variables.

A formula is said to be **elementary** iff it contains no choice operators. We will be using the lowercase letters *p*, *q*,… as metavariables for elementary formulas.

As one can see, L is an extension of the language of **PA** --- namely, the extension obtained by adding the choice connectives ⊓, ⊔ and the choice quantifiers ⊓, ⊔. The language of **PA** is the *elementary fragment* of L, in the sense that formulas of the former are nothing but elementary formulas of the latter. We remind the reader that, deductively, **PA** is the theory based on classical first-order logic with the following nonlogical axioms, which we shall refer to as the **Peano axioms**:

1. ∀*x* (0≠*x*ʹ)

2. ∀*x*∀*y* (*x*ʹ=*y*ʹ → *x*=*y*)

3. ∀*x* (*x*+0=*x*)

4. ∀*x*∀*y* (*x*+*y*ʹ = (*x*+*y*)ʹ)

5. ∀*x* (*x*✕0 = 0)

6. ∀*x*∀*y* (*x*✕*y*ʹ = (*x*✕*y*) + *x*)

7. (*F*(0) ∧ ∀*x*(*F*(*x*) → *F*(*x*ʹ))) → ∀*x* *F*(*x*), for each elementary formula *F*(*x*)

The concept of an interpretation explained in Section 5.2 can now be restricted to interpretations that (as functions) are only defined on ʹ, + and ✕, as the present language L has no other nonlogical function or predicate letters. Of such interpretations, the *standard interpretation* **^{†}** is the one whose universe is the set of natural numbers, and which interprets ʹ as the standard successor function, + as the sum function and ✕ as the product function.

Terminologically, we will further identify natural numbers with the corresponding decimal numerals (constants). Usually it will be clear from the context whether we are talking about a number or a decimal numeral. For instance, if we say that *x* is greater than *y*, then we obviously mean *x* and *y* as numbers; on the other hand, if we say that *x* is longer than *y*, then *x* and *y* are seen as numerals. Thus, 999 is greater but not longer than 100.
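The number/numeral distinction can be illustrated with a few lines of Python (an informal aid, not part of the theory): comparing values versus comparing the lengths of decimal numerals.

```python
# A constant viewed as a quantity vs. as a decimal numeral.
x, y = 999, 100

# As numbers: 999 is greater than 100.
assert x > y

# As decimal numerals: "999" and "100" have the same length,
# so 999 is greater but not *longer* than 100.
assert len(str(x)) == len(str(y)) == 3
```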

All theories of clarithmetic presented in the following subsections share not only the language L but also the logical postulates. Namely, the only **logical rule of inference** of each theory is our old friend LC (Logical Consequence). There are no logical axioms, but the instances of LC that take no premises can be considered as such. All theories also share the **Peano axioms**. Having said this, in our following descriptions of those theories we will only present their nonlogical axioms and rules.

A sentence *F* is considered provable in such a theory iff there is a sequence of sentences, called a **proof** of *F*, where each sentence is an axiom or follows from some earlier sentences by one of the rules of inference, and where the last sentence is *F*.

A weaker property than provability is what we call “prepresentability” (“pr” for “provably”). We say that a sentence *F* is **prepresentable** in a given theory

Generally, as in the above definition of provability and proofs, in all clarithmetical theories we will only be interested in proving *sentences*. So, for technical convenience, we agree that, if a formula *F* is not a sentence but we say that it is provable in such a theory, it simply means that the ⊓-closure of *F* is provable. Correspondingly, for readability, when formulating an inference rule, we may write its premises and conclusion as formulas with free variables, with their ⊓-closures always implicitly meant. Keep this convention in mind when reading the subsequent subsections of this section.

An *n*-ary (*n*≥0) pseudoterm, or **pterm** for short, is an elementary formula in which a distinguished “output” variable is, **PA**-provably, uniquely determined by the remaining *n* “input” variables; such a formula can thus be treated as if it were a genuine term for an *n*-ary function.

In our metalanguage, |*x*| will refer to the length of the binary representation of *x*. As in the case of other standard functions, the expression |*x*| will be simultaneously understood as a pterm naturally representing the function |*x*|. The delimiters “|…|” will automatically also be treated as parentheses, so, for instance, when *f* is a unary function or pterm, we usually write “*f*|*x*|” to mean the same as the more awkward expression “*f*(|*x*|)”. Further generalizing this notational convention, if **x** stands for an *n*-tuple (*x*₁,…,*x*ₙ), then |**x**| stands for (|*x*₁|,…,|*x*ₙ|).
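As a quick informal illustration (not part of the formal machinery), the function |*x*| can be computed in Python via `int.bit_length`; here we assume the convention that the representation of 0 has length 1:

```python
def binlength(x: int) -> int:
    """Length |x| of the binary representation of x.

    Assumes the convention |0| = 1 (the numeral "0" has one digit);
    for x > 0 this is the usual bit length, e.g. |5| = |101| = 3.
    """
    return max(1, x.bit_length())

assert binlength(0) == 1
assert binlength(5) == 3   # 5 is 101 in binary
assert binlength(8) == 4   # 8 is 1000 in binary
```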

7.5 Clarithmetics for polynomial time, polynomial space, elementary and primitive recursive computability

For a variable *x*, by a **polynomial sizebound** for

We say that a formula *F* is **polynomially bounded** iff the following condition is satisfied: whenever

Where *t* is a term, we will be using *t***0** and *t***1** as abbreviations for the terms 0ʹʹ✕*t* and (0ʹʹ✕*t*)ʹ, respectively.
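Since 0ʹʹ denotes 2, *t***0** and *t***1** stand for 2*t* and 2*t*+1, i.e., for appending the binary digit 0 or 1 to *t*. A small Python sketch (informal, for intuition only) of these two operations:

```python
def b0(t: int) -> int:
    """t0 = 0''✕t, i.e. 2*t: appends the binary digit 0 to t."""
    return 2 * t

def b1(t: int) -> int:
    """t1 = (0''✕t)', i.e. 2*t + 1: appends the binary digit 1 to t."""
    return 2 * t + 1

# Appending digits builds binary notation: starting from 1,
# applying b1 and then b0 yields the binary numeral 110, i.e. 6.
n = b0(b1(1))
assert n == 6 and bin(n) == "0b110"
```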

**Theory CLA4** has two extra-Peano axioms, ⊓*x*⊔*y*(*y*=*x*ʹ) and ⊓*x*⊔*y*(*y*=*x***0**), and a single nonlogical rule of **Induction**, where *F*(*x*) is any polynomially bounded formula: from the premises *F*(0), ⊓*x*(*F*(*x*) → *F*(*x***0**)) and ⊓*x*(*F*(*x*) → *F*(*x***1**)), conclude ⊓*x* *F*(*x*).

**Theory CLA5** has a single extra-Peano axiom, ⊓*x*⊔*y*(*y*=*x*ʹ), and a single nonlogical rule (of **Induction**), where *F*(*x*) is any polynomially bounded formula:

**Theory CLA6** only differs from **CLA5** in that *F*(*x*) in Induction is required to be an exponentially bounded formula.

**Theory CLA7** only differs from **CLA5** and **CLA6** in that there are no restrictions at all on *F*(*x*) in Induction.

**Fact 7.5.1** Every (elementary L-) sentence provable in **PA** is also provable in any of our clarithmetical theories, including the ones defined in subsequent subsections.

This fact allows us to construct “lazy” proofs where some steps are justified by simply indicating their provability in **PA**. That is, we can treat theorems of **PA** as if they were axioms of our clarithmetical theory.

**Example 7.5.2** The following sequence is a lazy proof of ⊓*x*(*x*=0 ⊔ *x*≠0) in **CLA4**. This sentence says that the “zeroness” predicate is decidable:

I. 0=0 ⊔ 0≠0   LC: (no premises)

II. ∀*x* (*x*=0 → *x***0**=0)   **PA**

III. ∀*x* (*x*≠0 → *x***0**≠0)   **PA**

IV. ⊓*x* (*x*=0 ⊔ *x*≠0 → *x***0**=0 ⊔ *x***0**≠0)   LC: II,III

V. ∀*x* (*x***1**≠0)   **PA**

VI. ⊓*x* (*x*=0 ⊔ *x*≠0 → *x***1**=0 ⊔ *x***1**≠0)   LC: V

VII. ⊓*x* (*x*=0 ⊔ *x*≠0)   Induction: I,IV,VI

An extended version of the above proof will include the following three additional justifications (**CL12**-proofs).

A justification for Step I:

1. 0=0   Wait: (no premises)

2. 0=0 ⊔ 0≠0   ⊔-Choose: 1

A justification for Step IV:

1. ∀*x*(*x*=0 → *x***0**=0), ∀*x*(*x*≠0 → *x***0**≠0) ∘– *s*=0 → *s***0**=0   Wait: (no premises)

2. ∀*x*(*x*=0 → *x***0**=0), ∀*x*(*x*≠0 → *x***0**≠0) ∘– *s*=0 → *s***0**=0 ⊔ *s***0**≠0   ⊔-Choose: 1

3. ∀*x*(*x*=0 → *x***0**=0), ∀*x*(*x*≠0 → *x***0**≠0) ∘– *s*≠0 → *s***0**≠0   Wait: (no premises)

4. ∀*x*(*x*=0 → *x***0**=0), ∀*x*(*x*≠0 → *x***0**≠0) ∘– *s*≠0 → *s***0**=0 ⊔ *s***0**≠0   ⊔-Choose: 3

5. ∀*x*(*x*=0 → *x***0**=0), ∀*x*(*x*≠0 → *x***0**≠0) ∘– *s*=0 ⊔ *s*≠0 → *s***0**=0 ⊔ *s***0**≠0   Wait: 2,4

6. ∀*x*(*x*=0 → *x***0**=0), ∀*x*(*x*≠0 → *x***0**≠0) ∘– ⊓*x*(*x*=0 ⊔ *x*≠0 → *x***0**=0 ⊔ *x***0**≠0)   Wait: 5

A justification for Step VI:

1. ∀*x*(*x***1**≠0) ∘– *s*=0 → *s***1**≠0   Wait: (no premises)

2. ∀*x*(*x***1**≠0) ∘– *s*≠0 → *s***1**≠0   Wait: (no premises)

3. ∀*x*(*x***1**≠0) ∘– *s*=0 ⊔ *s*≠0 → *s***1**≠0   Wait: 1,2

4. ∀*x*(*x***1**≠0) ∘– *s*=0 ⊔ *s*≠0 → *s***1**=0 ⊔ *s***1**≠0   ⊔-Choose: 3

5. ∀*x*(*x***1**≠0) ∘– ⊓*x*(*x*=0 ⊔ *x*≠0 → *x***1**=0 ⊔ *x***1**≠0)   Wait: 4

As we just saw, (additionally) justifying an application of LC takes more space than the (non-extended) proof itself. And this is typical for clarithmetical proofs. Luckily, however, there is no real need to formally justify LC. Firstly, this is so because **CL12** is an analytic system, and proof-search in it is a routine (even if sometimes long) syntactic exercise. Secondly, thanks to Thesis 6.7.5, there is no need to generate formal **CL12**-proofs anyway: instead, we can use intuition on games and strategies.

Below we write **CLA4**! for the extension of **CLA4** obtained from the latter by taking all sentences of the language of **PA** that are true in the standard model as additional axioms. Similarly for **CLA5**!, **CLA6**!, **CLA7**!.

**Theorem 7.5.3** (adequacy of **CLA4**) Consider an arbitrary sentence *S* of the language L. **Constructive soundness**: If *S* is provable in **CLA4**, then *S* has a polynomial time solution; furthermore, such a solution can be mechanically extracted from a proof of *S*. **Completeness**: If *S* has a polynomial time solution, then *S* is provable in **CLA4**!.

**Theorem 7.5.4** The same adequacy theorem holds for **CLA5** [resp. **CLA6**, resp. **CLA7**], only with “polynomial space” [resp. “elementary recursive time (=space)”, resp. “primitive recursive time (=space)”] instead of “polynomial time”.

**CLA4** was introduced and proven adequate in [Jap11c]. The same for **CLA5**, **CLA6** and **CLA7** was done in [Jap16a].

7.6 Clarithmetics for provable computability

**Theory CLA8** is obtained from the earlier seen **CLA7** by taking the following rule of **Finite Search** as an additional rule of inference, where *p*(*x*) is any elementary formula containing no free variables other than *x*: from the premises ⊓*x*(*p*(*x*) ⊔ ¬*p*(*x*)) and ∃*x* *p*(*x*), conclude ⊔*x* *p*(*x*).

A justification for the above rule is that, if we know how to decide the predicate *p*(*x*) (left premise), and we also know that the predicate is true of at least one number (right premise), then we can apply the decision procedure to *p*(0), *p*(1), *p*(2),… until a number *n* is hit such that the procedure finds *p*(*n*) true, after which the conclusion ⊔*x* *p*(*x*) can be solved by choosing *n* for *x* in it.
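The strategy behind this justification can be sketched in a few lines of Python (an informal illustration, with the decision procedure abstracted as an ordinary boolean function; the names `decide_p` and `finite_search` are ours, not part of the theory):

```python
def finite_search(decide_p):
    """Given a decision procedure for p, find some n with p(n) true.

    Mirrors the justification of Finite Search: try p(0), p(1), p(2), ...
    Termination is guaranteed by the right premise, which asserts that
    p is true of at least one number.
    """
    n = 0
    while True:
        if decide_p(n):
            return n  # the move: choose n for x in the conclusion
        n += 1

# Example: p(x) = "x is a multiple of 7 greater than 20".
assert finite_search(lambda x: x > 20 and x % 7 == 0) == 21
```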

**Theory CLA9** is obtained from **CLA7** by taking the following rule of **Infinite Search** as an additional rule of inference, where *p*(*x*) is any elementary formula containing no free variables other than *x*: from the premise ⊓*x*(*p*(*x*) ⊔ ¬*p*(*x*)), conclude ∃*x* *p*(*x*) → ⊔*x* *p*(*x*).

Note that this rule merely “modifies” Finite Search by changing the status of ∃*x* *p*(*x*) from being a premise of the rule to being an antecedent of the conclusion. A justification for Infinite Search is that, if we know how to decide the predicate *p*(*x*), then we can apply the decision procedure to *p*(0), *p*(1), *p*(2),… until (if and when) a number *n* is hit such that the procedure finds *p*(*n*) true, after which the conclusion can be solved by choosing *n* for *x* in its consequent. Note that, unlike the earlier-outlined strategy for Finite Search, the present strategy may look for *n* forever, and thus never make a move. This, however, only happens when ∃*x* *p*(*x*) is false, in which case the conclusion is automatically won.
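In programming terms, the Infinite Search strategy is a semi-decision procedure: it may run forever, but only on inputs where the antecedent ∃*x* *p*(*x*) is false, in which case remaining silent is a win. A hedged Python sketch (the name `infinite_search` is ours, and the `max_steps` cutoff is an artificial addition made purely so that finite demonstrations are possible):

```python
def infinite_search(decide_p, max_steps=None):
    """Semi-decision search mirroring the Infinite Search strategy.

    Returns some n with p(n) true if one exists; may otherwise run
    forever.  The max_steps budget is NOT part of the strategy; it
    only exists to let a demonstration terminate.
    """
    n = 0
    while max_steps is None or n < max_steps:
        if decide_p(n):
            return n  # the move: choose n in the consequent
        n += 1
    return None  # budget exhausted; the real strategy would keep going

# When a witness exists, it is eventually found:
assert infinite_search(lambda x: x * x == 49) == 7
# When no witness exists, the search never answers (here: budget runs out):
assert infinite_search(lambda x: False, max_steps=1000) is None
```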

**Theory CLA10** is obtained from **CLA7** by adding to it the above rule of Infinite Search and the following rule of **Constructivization**, where *p*(*x*) is any elementary formula containing no free variables other than *x*: from the premise ∃*x* *p*(*x*), conclude ⊔*x* *p*(*x*).

The admissibility of this rule, which simply allows us to change ∃*x* to ⊔*x*, is obvious in view of the restriction that ∃*x* *p*(*x*) is a sentence (as *p*(*x*) contains no free variables other than *x*). Indeed, if an *x* satisfying *p*(*x*) exists, then it can as well be “computed” (generated), even if we do not know what particular machine “computes” it. Note the non-constructive character of this justification.

The three theories **CLA8**-**CLA10** are adequate with respect to the three natural concepts of computability explained below: **PA**-provably recursive time (=space) computability, constructively **PA**-provable computability, and (simply) **PA**-provable computability.

A natural extreme beyond primitive recursive time is **PA**-**provably recursive time**. This means considering solutions whose time complexity is bounded by a recursive function whose recursiveness (totality) is provable in **PA**.

Not all computable problems have recursive (let alone provably so) time complexity bounds though. In other words, unlike the situation in traditional theory of computation, in our case not all computable problems are *recursive time computable*. An example is ⊓*x*(∃*y* *p*(*x*,*y*) → ⊔*y* *p*(*x*,*y*)), where *p*(*x*,*y*) is a decidable binary predicate such that the unary predicate ∃*y* *p*(*x*,*y*) is undecidable (for instance, *p*(*x*,*y*) means “Turing machine *x* halts within *y* steps”). The problem ⊓*x*(∃*y* *p*(*x*,*y*) → ⊔*y* *p*(*x*,*y*)) is solved by
the
following effective strategy: Wait till Environment chooses a value *m* for *x*.
After that, for *n*=0,1,2,…,
figure out whether *p*(*m*,*n*)
is true. If and when you find an *n*
such that *p*(*m*,*n*) is true, choose *n* for *y* in the consequent and retire. On the other
hand, if there was a recursive bound *t*
for the time complexity of a solution *M*
of ⊓*x*(∃*y* *p*(*x*,*y*) → ⊔*y* *p*(*x*,*y*)), then the following would be a decision procedure for (the undecidable) ∃*y* *p*(*x*,*y*):
Given an input *m* (in
the role
of *x*), run *M* for *t*(|*m*|+1) steps in the scenario where Environment chooses *m* for *x* at the very beginning of the play of ⊓*x*(∃*y* *p*(*x*,*y*) → ⊔*y* *p*(*x*,*y*)), and does not
make
any further moves. If, during this time, *M*
chooses a number *n* for *y* in the
consequent such that *p*(*m*,*n*)
is true, accept; otherwise reject.
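The strategy above, and the reason it admits no recursive time bound, can be sketched as follows; `strategy` and the toy predicate `p` are hypothetical stand-ins (p(m, n) here tests a simple decidable property rather than genuine Turing-machine halting):

```python
from typing import Callable, Optional

def strategy(m: int, p: Callable[[int, int], bool]) -> Optional[int]:
    """After Environment chooses m for x, test p(m, 0), p(m, 1), ...
    and choose the first n with p(m, n) true for y in the consequent.
    If ∃y p(m, y) is false, this search runs forever without making a
    move -- yet the play of  ∃y p(m,y) → ⊔y p(m,y)  is still won, as
    its antecedent is false.  In general no recursive function of m
    bounds the running time of such a search."""
    n = 0
    while True:
        if p(m, n):
            return n  # move: choose n for y
        n += 1

def p(m: int, n: int) -> bool:
    """Toy decidable p(m, n): the 3x+1 trajectory of m reaches 1
    within n steps (cf. 'machine m halts within n steps')."""
    x = m
    for _ in range(n):
        if x == 1:
            return True
        x = 3 * x + 1 if x % 2 else x // 2
    return x == 1

print(strategy(6, p))  # -> 8: the trajectory 6,3,10,5,16,8,4,2,1 takes 8 steps
```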
So, a next natural step on the road of constructing incrementally strong clarithmetical theories for incrementally weak concepts of computability is to go beyond **PA**-provably recursive time computability and consider the weaker concept of *constructively* **PA**-**provable computability** of (the problem represented by) a sentence

An even weaker concept of (simply) **PA**-**provable computability** is obtained from that of constructively

**CLA8**, **CLA9** and **CLA10** were introduced and proven adequate in [Jap14].

7.7 Tunable clarithmetic

Among the pterms/functions that we use in this section is (*x*)_{*y*}, standing for the

By a **bound** we shall mean a pterm

where **p** is a pterm and
Let *F* be a formula and *B* a boundclass. We say that *F* is *B*-**bounded** iff every ⊓-subformula [resp. ⊔-subformula] of
Every boundclass triple *R* and set *A* of sentences induces the *theory* **CLA11**^{R}_{A} that we deductively define as follows.

The extra-Peano **axioms** of **CLA11**^{R}_{A} are:

- ⊓*x*⊔*y*(*y*=*x*ʹ)
- ⊓*x*⊔*y*(*y*=|*x*|)
- ⊓*x*⊔*y*(*Bit*(*y*,*x*) ⊔ ¬*Bit*(*y*,*x*))
- All sentences of *A*.

**CLA11**^{R}_{A} has two nonlogical rules of inference.

The rule of *R*-**Induction** is

where *x* and all items of the tuple **s** are pairwise distinct variables, *F*(*x*) is an *R*_{space}-bounded formula, and

The rule of *R*-**Comprehension** is

where *x*, *y* and all items of the tuple **s** are pairwise distinct variables, *p*(*y*) is an elementary formula not containing *x*, and **b**(

Let *B* be a set of bounds. We define the **linear closure** of

Let **b** =

**Definition
7.7.1** We
say that a
boundclass
triple *R* is ** regular**
iff the following conditions are satisfied:

- For every bound *b*(**s**) ∈ *R*_{amplitude} ∪ *R*_{space} ∪ *R*_{time} and any (=some) variable *z* not occurring in *b*(**s**), the game ⊔*z*(*z* = |*b*(**s**)|) has an *R* tricomplexity solution, and such a solution can be effectively constructed from *b*(**s**).
- *R*_{amplitude} is *at least linear*, *R*_{space} is *at least logarithmic*, and *R*_{time} is *at least polynomial*. This is in the sense that, for any variable *x*, we have *x* ≤ *R*_{amplitude}, |*x*| ≤ *R*_{space} and *x*, *x*^{2}, *x*^{3}, … ≤ *R*_{time}.
- All three components of *R* are linearly closed and, in addition, *R*_{time} is also polynomially closed.
- For each component *B* ∈ {*R*_{amplitude}, *R*_{space}, *R*_{time}} of *R*, whenever *b*(*x*_{1},…,*x*_{n}) is a bound in *B* and *c*_{1},…,*c*_{n} ∈ *R*_{amplitude} ∪ *R*_{space}, we have *b*(*c*_{1},…,*c*_{n}) ≤ *B*.
- For every triple (*a*_{1}(**x**), *s*_{1}(**x**), *t*_{1}(**x**)) of bounds in *R*_{amplitude} ✕ *R*_{space} ✕ *R*_{time} there is a triple (*a*_{2}(**x**), *s*_{2}(**x**), *t*_{2}(**x**)) in *R*_{amplitude} ✕ *R*_{space} ✕ *R*_{time} such that (*a*_{1}(**x**), *s*_{1}(**x**), *t*_{1}(**x**)) ≤ (*a*_{2}(**x**), *s*_{2}(**x**), *t*_{2}(**x**)) and |*t*_{2}(**x**)| ≤ *s*_{2}(**x**) ≤ *a*_{2}(**x**) ≤ *t*_{2}(**x**).
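The first condition merely asks that the binary length of any bound be actually computable within the given tricomplexity; numerically this is the familiar bit-length operation. A toy illustration (`length` and the sample bound are hypothetical, with |*n*| read as the length of *n*'s binary representation):

```python
def length(n: int) -> int:
    """|n|: the number of bits in the binary representation of n."""
    return n.bit_length()

# Solving the game ⊔z(z = |b(s)|) for the sample bound b(s) = s^2
# at s = 10 means making the move that chooses z = |100| = 7.
s = 10
print(length(s ** 2))  # -> 7
```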

**Definition 7.7.2** We say that a theory **CLA11**^{R}_{A} is *regular* iff the following conditions are satisfied:

- Every sentence of *A* has an *R* tricomplexity solution. Here, if *A* is infinite, we additionally require that there is an effective procedure that returns an *R* tricomplexity solution for each sentence of *A*.
- For every bound *b*(**x**) from *R*_{amplitude} ∪ *R*_{space} ∪ *R*_{time} and every (=some) variable *z* not occurring in *b*(**x**), **CLA11**^{R}_{A} proves ⊔*z*(*z* = |*b*(**x**)|).

We agree that,
whenever *A* is a set of sentences, *A*!
is an abbreviation of *A*∪**Th(N)**, where **Th(N)** is the set of
all
true (in the standard model) sentences of **PA**.

**Theorem
7.7.3** (Adequacy of **CLA11**)
Assume
a theory **CLA11**^{R}_{A} is regular,
and consider
an arbitrary sentence

- *Constructive soundness:* If *S* is provable in **CLA11**^{R}_{A} or **CLA11**^{R}_{A!}, then *S*^{†} has an *R* tricomplexity solution. Such a solution can be automatically extracted from an extended **CLA11**^{R}_{A}- or **CLA11**^{R}_{A!}-proof of *S*.
- *Extensional completeness:* If *S*^{†} has an *R* tricomplexity solution, then *S* is representable in **CLA11**^{R}_{A}.
- *Intensional completeness:* If *S*^{†} has an *R* tricomplexity solution, then *S* is provable in **CLA11**^{R}_{A!}.

Below
we are
going
to see an infinite yet incomplete series of natural theories that are
regular
and thus adequate in the sense of the above theorem. All these theories
look
like

Given a set *S*
of bounds,
by *S*^{♥} [resp. *S*^{♠}] we shall
denote
the linear closure [resp. polynomial closure]
of *S*. We define the series *B*_{1}^{1}, *B*_{1}^{2}, *B*_{1}^{3}, …, *B*_{2}, *B*_{3}, *B*_{4},

- *B*_{1}^{1} = {|*x*|}^{♥} (*logarithmic* boundclass); *B*_{1}^{2} = {|*x*|^{2}}^{♥}; *B*_{1}^{3} = {|*x*|^{3}}^{♥}; …;
- *B*_{2} = {|*x*|}^{♠} (*polylogarithmic* boundclass);
- *B*_{3} = {*x*}^{♥} (*linear* boundclass);
- *B*_{4} = {*x*✕|*x*|, *x*✕|*x*|^{2}, *x*✕|*x*|^{3}, …}^{♥} (*quasilinear* boundclass);
- *B*_{5} = {*x*}^{♠} (*polynomial* boundclass);
- *B*_{6} = {2^{|x|}, 2^{|x|^{2}}, 2^{|x|^{3}}, …}^{♠} (*quasipolynomial* boundclass);
- *B*_{7} = {2^{x}}^{♠} (*exponential-with-linear-exponent* boundclass);
- *B*_{8} = {2^{x}, 2^{x^{2}}, 2^{x^{3}}, …}^{♠} (*exponential-with-polynomial-exponent* boundclass).
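As a rough numerical illustration of how these boundclasses are ordered (a toy computation with hypothetical names, not part of the formal definitions), one can evaluate a representative bound from each class at a sample argument, reading |*x*| as bit length:

```python
x = 1024

def length(n: int) -> int:
    """|n|: bit length of n."""
    return n.bit_length()

# One representative bound per class, evaluated at x = 1024 (|x| = 11).
representatives = [
    ("B1 logarithmic:         |x|      ", length(x)),
    ("B2 polylogarithmic:     |x|^2    ", length(x) ** 2),
    ("B3 linear:              x        ", x),
    ("B4 quasilinear:         x*|x|    ", x * length(x)),
    ("B5 polynomial:          x^2      ", x ** 2),
    ("B6 quasipolynomial:     2^(|x|^2)", 2 ** (length(x) ** 2)),
    ("B7 exp-linear-exponent: 2^x      ", 2 ** x),
]
for name, value in representatives:
    print(name, value)
# At this x the values come out strictly increasing, matching the
# intuitive ordering of the classes (though for small x, e.g. x = 16,
# 2^(|x|^2) would actually exceed 2^x).
```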

**Fact
7.7.4**
For any boundclass triple
*R* listed below, the theory **CLA11**^{R}_{} is regular:

- (*B*_{3},*B*_{1}^{1},*B*_{5}); (*B*_{3},*B*_{1}^{2},*B*_{5}); (*B*_{3},*B*_{1}^{3},*B*_{5}); …;
- (*B*_{3},*B*_{2},*B*_{5}); (*B*_{3},*B*_{2},*B*_{6}); (*B*_{3},*B*_{2},*B*_{7});
- (*B*_{3},*B*_{3},*B*_{5}); (*B*_{3},*B*_{3},*B*_{6}); (*B*_{3},*B*_{3},*B*_{7});
- (*B*_{4},*B*_{1}^{1},*B*_{5}); (*B*_{4},*B*_{1}^{2},*B*_{5}); (*B*_{4},*B*_{1}^{3},*B*_{5}); …;
- (*B*_{4},*B*_{2},*B*_{5}); (*B*_{4},*B*_{2},*B*_{6});
- (*B*_{4},*B*_{4},*B*_{5}); (*B*_{4},*B*_{4},*B*_{6}); (*B*_{4},*B*_{4},*B*_{7});
- (*B*_{5},*B*_{1}^{1},*B*_{5}); (*B*_{5},*B*_{1}^{2},*B*_{5}); (*B*_{5},*B*_{1}^{3},*B*_{5}); …;
- (*B*_{5},*B*_{2},*B*_{5}); (*B*_{5},*B*_{2},*B*_{6});
- (*B*_{5},*B*_{5},*B*_{5}); (*B*_{5},*B*_{5},*B*_{6}); (*B*_{5},*B*_{5},*B*_{7}); (*B*_{5},*B*_{5},*B*_{8}).