\documentstyle[epsf,twoside,11pt]{article}
\newfont{\Bbb}{msbm10 scaled 1100} % openface letters
\newtheorem{Definition}{Definition}[section]
\newtheorem{Theorem}[Definition]{Theorem}
\newtheorem{Lemma}[Definition]{Lemma}
\newtheorem{Proposition}[Definition]{Proposition}
\newtheorem{Example}[Definition]{Example}
\newtheorem{Corollary}[Definition]{Corollary}
\newtheorem{Equation}[Definition]{Equation}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%% Sizes, first for US letter format %%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\textwidth 6.5 in % = 16.51cm paper = 8.5x11in
\textheight 8.50in % = 21.59cm + foot 30pt = 8.92in = 22.66cm
\oddsidemargin 0 pt % 1 inch = 2.54cm margin comes for free
\evensidemargin 0 pt % 1 inch = 2.54cm margin comes for free
\topmargin 0 pt % 3 margins of 1in each, bottom 1.08in
\headheight 0 pt % default 12pt = 0.42cm
\headsep 0 pt % default 25pt = 0.88cm
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%% On A4 paper the right margin is 6 mm smaller, %%%%
%%%% whereas the bottom margin is 17.6 mm longer. %%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\newlength{\shiftside} \newlength{\shifttop}
\setlength{\shiftside}{-3mm} \setlength{\shifttop}{8.8mm}
\typeout{On what kind of paper are you going to print this?}
\typein[\paper]{Enter 0 for American paper or 1 for A4.}
\addtolength\oddsidemargin{\paper\shiftside}
\addtolength\evensidemargin{\paper\shiftside}
\addtolength\topmargin{\paper\shifttop}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\newcommand{\co}[1]{{\it concl}(#1)} % conclusion of a proof
\newcommand{\tp}{{\it top}} % premises in a proof
\newcommand{\rest}{\unitlength 1mm % restriction operator
\begin{picture}(2.16,3.2)
\thinlines
\put(1.12,-0.48){\line(0,1){3.52}}
\put(0.8,1.6){\tiny $\backslash$}
\end{picture} }
\newcommand{\Rbis}{{\underline{\leftrightarrow}}_R} % bisimulation_R
\newcommand{\step}[1]{ \stackrel {#1} {\longrightarrow} }
\newcommand{\nstep}[1]{\stackrel {#1} {\longrightarrow}\!\!\!\!\!\!/~}
\newcommand{\Rule}[2]{
\frac{\raisebox{.7ex}{\normalsize{$#1$}}}
{\raisebox{-1.0ex}{\normalsize{$#2$}}}}
\begin{document}
\title{ \bf Ntyft/ntyxt Rules Reduce to Ntree Rules}
\author{ \sf Wan Fokkink\thanks{Current affiliation:
Department of Philosophy, Utrecht University,
Utrecht, The Netherlands.}\\
\footnotesize \sl Department of Computer Science\\[-3pt]
\footnotesize \sl CWI\\[-3pt]
\footnotesize \sl Amsterdam, The Netherlands\\[-3pt]
\footnotesize \tt wan@cwi.nl
\and \sf Rob van Glabbeek\thanks{This work was supported
by ONR under grant number N00014-92-J-1974.}\\
\footnotesize \sl Computer Science Department\\[-3pt]
\footnotesize \sl Stanford University\\[-3pt]
\footnotesize \sl Stanford, CA 94305, USA\\[-3pt]
\footnotesize \tt rvg@cs.stanford.edu }
\date{}
\maketitle
\begin{quote}\small
\noindent
Groote and Vaandrager introduced the {\it tyft/tyxt format}
for Transition System Specifications (TSSs), and established
that for each TSS in this format that is {\it well-founded}, the
bisimulation equivalence it induces is a congruence. In this paper,
we construct for each TSS in tyft/tyxt format an equivalent
TSS that consists of {\it tree rules} only. As a corollary we can
give an affirmative answer to an open question, namely whether the
well-foundedness condition in the congruence theorem for tyft/tyxt
can be dropped. These results extend to tyft/tyxt with negative
premises and predicates.
\end{quote}
\section{Introduction}
A common method of providing process algebras and specification languages
with an operational semantics is based on the use of transition
systems, as advocated by Plotkin \cite{3Pl81}. Given a set of states,
the transitions between these states are
obtained inductively from a Transition System Specification
(TSS), which consists of transition rules. Such a rule,
together with a number of transitions, may imply the validity
of another transition.
We will consider a specific type of transition systems, in which
states are the closed terms generated by a single-sorted
signature, and transitions are supplied with labels. Many of the
operational semantics of formal languages in Plotkin style that have
been defined over the years fall within the scope of this format.
To distinguish such labelled transition systems, many different
equivalences have been defined, the finest of which is
the strong bisimulation equivalence of Park \cite{3Pa81}.
In general, this equivalence is not a congruence,
i.e.\ the equivalence class of a term $f(p_1,...,p_m)$
modulo strong bisimulation is not always
determined by the equivalence classes of the terms $p_i$.
However, congruence is an essential property, for instance,
to fit the equivalence into an axiomatic framework.
Several formats have been developed which ensure that the
bisimulation equivalence induced by a TSS in such a format
is always a congruence. A first proposal was made by De Simone
\cite{3dS85}, which was generalized by Bloom, Istrail and Meyer
\cite{3BIM88} to the GSOS format. Next, Groote and Vaandrager
\cite{3GrV92} introduced the tyft/tyxt format, and proved a
congruence theorem for TSSs in this format that satisfy a
well-foundedness criterion.
Up to now, it has been an open question whether or not
well-foundedness is an essential ingredient of this
congruence theorem. The requirement arose in the proof,
but no counter-example was found showing that the theorem would break
down if well-foundedness were omitted.
In this paper, we prove that the congruence theorem does
hold for general TSSs in tyft/tyxt format, i.e.\ that the
requirement of well-foundedness can be omitted.
In fact, we will establish a stronger result, namely that
for each TSS in tyft/tyxt format, there is an equivalent
TSS which consists of `tree rules' only.
A tree rule is a well-founded rule of the form
\[
\Rule{\{z_i\step{a_i}y_i~|~i\in I\}}
{f(x_1,...,x_m)\step{a}t}
\]
where the $y_i$ and the $x_j$ are
distinct variables and are the only variables that occur in the
rule, the $z_i$ are variables, $f$ is a function symbol, and $t$ is
any term. Using terminology from \cite{3GrV92}, we can say that a tree
rule is a pure xyft rule.
Since tree rules are well-founded, the reduction of
tyft/tyxt rules to tree rules immediately implies that
the congruence theorem concerning the tyft/tyxt format can do
without well-foundedness.
A major advantage of the main theorem is that it facilitates
reasoning about the tyft/tyxt format, because it is often much
easier to prove a theorem for
TSSs in tree format than for TSSs in tyft/tyxt format. For example,
this is the case with the congruence theorem itself. Another
striking example consists of Theorems 8.6.6 and 8.9.1 in
\cite{3GrV92}. With our result at hand, the complicated proof of
the second theorem can be skipped, because now the second theorem
follows from the first one.
Furthermore, the removal of well-foundedness from the congruence
theorem for tyft/tyxt increases the convenience
of applying this theorem, since the user no longer
has to recall and check the complicated well-foundedness criterion.
The main result of this paper was obtained independently by the
authors in \cite{3vG93b} and \cite{3Fo94b}. Our present proof improves
the ones envisioned in \cite{3vG93b} and given in \cite{3Fo94b}.
It makes heavy use of a standard
result from unification theory, which says that for each
set of equations that is unifiable, there exists an idempotent
most general unifier. In unification theory, this result is
proved for finite sets of equations, and
for substitutions that have a finite domain. However, we will need
the result in a setting which does not satisfy these finiteness
constraints. A proof of the unification result in the infinite case
can be found in \cite{3Fo94c}. Here we prove the special case of this
result that is needed for our main theorem.
Groote \cite{3Gr93} added negative premises to tyft/tyxt, resulting
in the ntyft/ntyxt format (that also generalizes the GSOS format of
\cite{3BIM88}), and proved that the congruence theorem
extends to certain well-founded TSSs in ntyft/ntyxt format.
We will show that the reduction of tyft/tyxt rules to tree rules
can be lifted to the positive part of rules in ntyft/ntyxt format,
but a simple example shows that this reduction cannot be
applied to the negative premises. Again, we
will find that the congruence theorem
concerning the ntyft/ntyxt format can do without well-foundedness.
Verhoef \cite{3Ver94} defined the panth
format, which adds predicates to ntyft/ntyxt,
and proved that the congruence theorem
holds for well-founded TSSs in panth format.
We will show that our results extend to the panth format too.
\vspace{2mm}
\noindent
{\bf Acknowledgments.}~Catuscia Palamidessi and Fer-Jan de Vries
noted the link with unification. Frits Vaandrager and Chris Verhoef
provided useful comments.
\section{Preliminaries}\label{preliminaries}
This section contains the basic definitions.
\subsection{The signature}
In the sequel we assume the existence of an infinite set of variables
$V$.
\begin{Definition}
A {\it (single-sorted) signature}
$\Sigma$ consists of a set of function symbols, disjoint with $V$,
together with their arities.
The collection $\mbox{\Bbb T}(\Sigma)$ of {\em (open)
terms} over $\Sigma$ is defined as the least set satisfying:
\begin{itemize}\item[]\begin{itemize}\vspace{-1ex}
\item
each variable from $V$ is in $\mbox{\Bbb T}(\Sigma)$,
\item
if $f\in\Sigma$ has arity $n$, and $t_1,...,t_n\in\mbox{\Bbb T}(\Sigma)$,
then $f(t_1,...,t_n)\in\mbox{\Bbb T}(\Sigma)$.
\end{itemize}\end{itemize}\vspace{-1ex}
A term is called {\em closed} if it does not contain
any variables.
\end{Definition}
In the sequel we assume a fixed signature $\Sigma$.
A {\it substitution} is a mapping $\sigma:V\rightarrow
\mbox{\Bbb T}(\Sigma)$. Each substitution is extended to a mapping
from terms to terms in the standard way. As usual,
$\sigma\rho$ denotes the composition of the substitutions $\sigma$ and
$\rho$, in which $\rho$ is applied first.
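For example, if $\rho(x)=g(y)$ and $\sigma(y)=f(z)$, with $\rho$ and
$\sigma$ the identity on all other variables, then
\[
\sigma\rho(x)=\sigma(g(y))=g(f(z)),
\qquad\mbox{whereas}\qquad
\rho\sigma(x)=\rho(x)=g(y).
\]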
\subsection{Transition system specifications}
In the sequel we assume the existence of a set of labels $A$.
\begin{Definition}\label{TSS}
For each label $a$, the expression $\step{a}$ denotes a binary relation
on terms. A pair $t\step{a}t'$ is called a {\em transition}.
A transition is {\em closed} if it involves closed terms only.
\end{Definition}
\begin{Definition}
A {\em (transition) rule} $r$ is an expression of the form $H/c$, with $H$ a
collection of transitions, called the {\em premises} (or the {\em hypotheses})
of $r$, and $c$ a transition, called the {\em conclusion} of $r$.
In the sequel, $\co r$ will denote the conclusion of the rule $r$.
A {\em Transition System Specification (TSS)} is a collection
of transition rules.
A TSS is {\em small} if for each of its rules, the cardinality of its
collection of premises does not exceed the cardinality of the set $V$
of variables.
\end{Definition}
The notion of substitution extends to transitions and rules as expected.
\begin{Definition}
A {\em proof structure} is a tuple $(B,r,\phi)$, where
\begin{itemize}\item[]\begin{itemize}\vspace{-1ex}
\item
$B$ is a collection of transition rules which do not have any variables
in common,
\item
$r\in B$,
\item
$\phi$ is an injective mapping from $B\backslash\{r\}$ to the collection
of premises of rules in $B$, such that each chain $b_0,b_1,b_2,...$
in $B$ with $\phi(b_{i+1})$ a premise of $b_i$ is finite.
\end{itemize}\end{itemize}\vspace{-1ex}
In the sequel, $\tp(B,r,\phi)$ will denote the collection of premises
of rules in $B$ that are outside the image of $\phi$.
Write $(B',r',\phi') < (B,r,\phi)$ iff $B'\subset B$,
$\phi'=\phi\rest (B'\backslash \{r'\})$, $\tp(B',r',\phi')\subseteq
\tp(B,r,\phi)$ and there is a chain $r=b_0, b_1,...,b_n=r'$ with
$n>0$ and $\phi(b_{i+1})$ a premise of $b_i$.
\end{Definition}
Note that $<$ is a partial well-order, i.e.\ any chain $(B_0,r_0,\phi_0) >
(B_1,r_1,\phi_1) > (B_2,r_2,\phi_2) > \cdots$ is finite. Hence we may
apply induction w.r.t.\ $<$.
\pagebreak[3]
\begin{Definition}
A substitution $\sigma$ {\em matches} with a proof structure $(B,r,\phi)$
if $\sigma(\co b)=\sigma(\phi(b))$ for each $b\in B\backslash\{r\}$.
A rule $H/c$ is {\em provable from} a small TSS $R$ if $c\in H$ or
there exists a proof structure $(B,r,\phi)$ where each rule in $B$
is in $R$ modulo $\alpha$-conversion (bijective renaming of
variables), and a substitution $\sigma$ that matches with
$(B,r,\phi)$, such that $\sigma(\tp(B,r,\phi))\subseteq H$ and
$\sigma(\co r)=c$.
\end{Definition}
\begin{Example}{\bf (A fragment of CCS with the replication operator)}\label{ccs}
Let ${\cal A}$ be a set of {\em names}. The set
$\bar{\cal A}$ of {\em co-names} is given by $\bar{\cal A}=\{\bar{a}
\mid a\in {\cal A}\}$, and ${\cal L}={\cal A} \cup \bar{\cal A}$ is
the set of {\em visible actions}. The function $\bar{\cdot}$ is extended to
$\cal L$ by declaring $\bar{\bar{a}}=a$. Furthermore $A = {\cal L}\cup
\{\tau\}$ is the set of {\em actions}.
Note that $\bar{\tau}$ is undefined. The language CCS has a constant
$0$, a unary operator $a$ for $a \in A$, binary operators $+$ and
$|$, and a few constructs that are omitted here. In addition we
consider the unary {\em replication operator} $!$.
The transition system specification {\rm CCS!} is given by the
transition rules below. These rules are actually schemata, where $a$
ranges over $A$.
\begin{center}
\framebox{$\begin{array}{cccc}
ax \step{a} x & \displaystyle\frac{x \step{a} x'}{x+y \step{a} x'} &
\displaystyle\frac{y \step{a} y'}{x+y \step{a} y'}\\[4ex]
\displaystyle\frac{x\step{a} x'}{x\mid y \step{a} x'\mid y} &
\displaystyle\frac{x\step{a} x' ,~ y \step{\bar{a}} y'}
{x\mid y \step{\tau} x'\mid y'} &
\displaystyle\frac{y \step{a} y'}{x\mid y \step{a} x\mid y'}&
\displaystyle\frac{!x \mid x \step{a} x'}{!x \step{a} x'}
\end{array}$}
\end{center}
Here follows an example of a proof structure $(B,r,\phi)$, together with a
matching substitution $\sigma$. The rule on the bottom is $r$, and
$\phi$ is indicated by the arrows. $\tp(B,r,\phi)=\{w\step {\bar{a}} w'\}$.\\
\unitlength 1.8cm
%\begin{picture}(0,5.3)(0,-.2)
\begin{picture}(3.5,5.3)(-2.2,-.2)
\put(0,0){\makebox(0,0){$\displaystyle\frac{!z \mid z \step \tau z'}
{!z \step \tau z'}$}}
\put(0,.65){\vector(0,-1){.3}}
\put(0,1){\makebox(0,0){$\displaystyle\frac{v \step a v' ~~ w \step
{\bar{a}} w'}{v \mid w \step \tau v' \mid w'}$}}
\put(.9,1.65){\vector(-1,-1){.3}}
\put(1,2){\makebox(0,0){$\displaystyle\frac{x \step {\bar{a}} x'}
{y + x \step {\bar{a}} x'}$}}
\put(-.9,1.65){\vector(1,-1){.3}}
\put(-1,2){\makebox(0,0){$\displaystyle\frac{!u \mid u \step a u'}
{!u \step a u'}$}}
\put(-1,2.65){\vector(0,-1){.3}}
\put(-1,3){\makebox(0,0){$\displaystyle\frac{t \step a t'}
{s \mid t \step a s \mid t'}$}}
\put(-1,3.65){\vector(0,-1){.3}}
\put(-1,4){\makebox(0,0){$\displaystyle\frac{q \step a q'}
{q + r \step a q'}$}}
\put(-1,4.65){\vector(0,-1){.3}}
\put(-1,4.85){\makebox(0,0){$\displaystyle ap \step a p$}}
\end{picture}
\hfill
$\begin{array}[b]{@{}lcl@{}}
\sigma(p)&=&0\\
\sigma(q)&=&a0\\
\sigma(q')&=&0\\
\sigma(r)&=&x\\
\sigma(s)&=&!(a0+x)\\
\sigma(t)&=&a0+x\\
\sigma(t')&=&0\\
\sigma(u)&=&a0+x\\
\sigma(u')&=&!(a0+x) \mid 0\\
\sigma(v)&=&!(a0+x)\\
\sigma(v')&=&!(a0+x) \mid 0\\
\sigma(w)&=&a0+x\\
\sigma(w')&=&x'\\
\sigma(x)&=&x\\
\sigma(x')&=&x'\\
\sigma(y)&=&a0\\
\sigma(z)&=&a0+x\\
\sigma(z')&=&!(a0+x) \mid 0 \mid x'\\
\end{array}$
\hfill\mbox{}\\[2ex]
This structure and $\sigma$ demonstrate that the rule $\displaystyle
\frac{x \step {\bar{a}} x'}{!(a0+x) \step \tau !(a0+x)\mid 0 \mid x'}$
is provable from {\rm CCS!}.
\end{Example}
We say that a transition $t\step{a}t'$ is provable from $R$,
if the rule with no premises and conclusion $t\step{a}t'$ is provable
from $R$. The {\em transition relation $\step\ _R$ determined by a
TSS $R$} is the set of all closed transitions provable from $R$.
\begin{Definition}
Two TSSs are {\em transition equivalent} if they determine the same
transition relation.
\end{Definition}
Our notion of provability is chosen in such a way that we can easily
obtain our main result. In order to show that it coincides with the
notions of provability found elsewhere in the literature, we need the
following definition.
\begin{Definition}
The {\em provable closure} of a TSS $R$ is the
smallest set $R^\vdash$ of rules such that
\begin{itemize}
\item if $c\in H$ then $H/c \in R^\vdash$, and
\item if $K/c \in R$ and, for some substitution $\sigma$,
$H/\sigma(d) \in R^\vdash$ for each $d\in K$, then $H/\sigma(c) \in R^\vdash$.
\end{itemize}
\end{Definition}
For notions of provability found elsewhere in the literature (e.g.\
\cite{3BV93,3BolG91,3Fo94b,3vG93b,3vG95,3Gr93,3GrV92,3Ver94}) the
following proposition is easily obtained. By establishing the same for
our notion, it follows that it coincides with the others.
The proposition only holds for small TSSs, but this restriction will
turn out to be inessential for our main result. Moreover, every TSS
can be made `small' by adding sufficiently many variables.
\begin{Proposition}\label{pr.provable}
A rule $H/c$ is provable from a small TSS $R$ iff it belongs to $R^\vdash$.
\end{Proposition}
{\bf Proof.} ``Only if'': The case $c\in H$ is trivial. The other case
is established by induction on the partial well-order $<$ between
proof-structures. Let $H/\sigma(c)$ be provable from $R$ by means of
a proof structure $(B,K/c,\phi)$ and a matching substitution $\sigma$.
Assume that any formula provable by means of a smaller proof structure
belongs to $R^\vdash$. Then $H/\sigma(d) \in R^\vdash$ for any $d\in
K$. It follows that $H/\sigma(c) \in R^\vdash$.
``If'': By induction on the construction of $R^\vdash$. The induction
base, $c\in H$, is again trivial. Now suppose $K/c\in R$
and $H/\rho(d)$ is provable from $R$ for $d\in K$ and some
substitution $\rho$. Let $(B_d,r_d,\phi_d)$ be proof structures
with matching substitutions $\sigma_d$ that establish $H/\rho(d)$ for
$d\in K$. Since there exist at least as many variables as there are
premises in $K$, the variables in these proof structures can be renamed
to become all different, and different from the ones in $K/c$, and a
substitution $\sigma$ can be constructed that matches with each of
these proof structures so as to yield the corresponding rule, and
equals $\rho$ on the variables in $K/c$. Now $(\bigcup_{d\in K} B_d
\cup \{K/c\}, K/c, \bigcup_{d\in K} \phi_d \cup \phi')$, where $\phi'$
is the function that sends $r_d$ to $d$ for $d\in K$, is a proof
structure that matches with $\sigma$, yielding $H/\rho(c)$. $\Box$
\\[1.5ex]
The proof of the following lemma is straightforward and
left to the reader.
\begin{Lemma}
\label{la.prov}
If all the rules in a TSS $S$ are provable from a TSS $R$, then
all the rules that are provable from $S$ are also provable from $R$.
\end{Lemma}
\subsection{Strong bisimulation}
\begin{Definition}
\label{Dn.bis}
Assume a TSS $R$. Two closed terms $p_0,q_0$ are
$R$-{\em bisimilar}, notation $p_0~\Rbis~q_0$, if there exists a
symmetric binary relation ${\cal B}$ on closed terms such that
\begin{itemize}
\item[\bf --]
$p_0{\cal B}q_0$,
\item[\bf --]
if $p{\cal B}q$ and $p\step{a}_Rp'$,
then there is a closed term $q'$ such that $q\step{a}_Rq'$
and $p'{\cal B}q'$.
\end{itemize}
\end{Definition}
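For a finite transition relation, the largest relation satisfying
Definition~\ref{Dn.bis} can be computed by fixpoint iteration: start from
the full relation on states and repeatedly remove pairs that violate the
transfer condition. The following sketch (ours, not part of the original
development; it assumes the transition relation is given as a finite set of
triples $(p,a,q)$, and makes no claim of efficiency) implements this idea.

```python
def bisimilar(states, trans):
    """Compute the largest strong bisimulation on a finite LTS.

    states: iterable of states; trans: set of (p, a, q) triples.
    Returns the set of (symmetric) pairs of bisimilar states.
    """
    states = list(states)
    # succ[p][a] = set of a-successors of p
    succ = {p: {} for p in states}
    for (p, a, q) in trans:
        succ[p].setdefault(a, set()).add(q)
    # Start from the full relation and remove pairs violating the
    # transfer condition until a fixed point is reached.
    rel = {(p, q) for p in states for q in states}
    changed = True
    while changed:
        changed = False
        for (p, q) in list(rel):
            for a, ps in succ[p].items():
                qs = succ[q].get(a, set())
                # every a-step of p must be matched by an a-step of q
                if any(all((p1, q1) not in rel for q1 in qs) for p1 in ps):
                    rel.discard((p, q))
                    rel.discard((q, p))  # keep the relation symmetric
                    changed = True
                    break
    return rel
```

Two closed terms are then bisimilar iff the returned relation contains
them as a pair.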
\subsection{The tyft/tyxt format}
In general, bisimulation equivalence is not a {\it congruence},
i.e.\ it may be the case that $p_i~\Rbis~q_i$ for $i=1,...,n$,
but $f(p_1,...,p_n)$ and $f(q_1,...,q_n)$ are not
$R$-bisimilar. Therefore, Groote and Vaandrager
\cite{3GrV92} have introduced the {\it tyft/tyxt format}.
If a TSS is in this format, and if it satisfies a well-foundedness
criterion, then the bisimulation it induces is a congruence.
\begin{Definition}
A transition rule is a {\em tyft} rule if it is of the form
\[
\Rule{\{t_i\step{a_i}y_i~|~i\in I\}}
{f(x_1,...,x_m)\step{a}t}
\]
where the $x_k$ and the $y_i$ are distinct variables (and $I$ is
some, not necessarily finite, index set).
Similarly, a {\em tyxt} rule is of the form
\[
\Rule{\{t_i\step{a_i}y_i~|~i\in I\}}{x\step{a}t}
\]
where $x$ and the $y_i$ are distinct variables.
A TSS is said to be in {\em tyft/tyxt} format if it
consists of tyft and tyxt rules only.
\end{Definition}
The TSS CCS! from Example~\ref{ccs} is in tyft/tyxt format. All its
rules are tyft rules.
Note that any TSS in tyft/tyxt format is `small' in the sense of
Definition~\ref{TSS}.
\begin{Definition}
\label{Dn.wellf}
Assume a set $\{t_i\step{a_i}t_i'~|~i\in I\}$ of transitions.
Its `dependency graph' is a directed graph,
with the collection of variables $V$ as vertices, and with
as edges the collection
\[
\{\langle x,y\rangle~|~\mbox{$x$ and $y$ occur in $t_i$ and
$t_i'$ respectively, for some }i\in I\}.
\]
A set of transitions is called {\em well-founded} if any
backward chain of edges in its dependency graph is finite.
A transition rule is well-founded if its collection
of premises is so, and a TSS is well-founded if all its rules are so.
\end{Definition}
\begin{Example}
Examples of sets of transitions that are {\it not} well-founded are:
\begin{itemize}
\item[\bf --]
$\{y\step{a}y\}$,
\item[\bf --]
$\{y_1\step{a}y_2,~y_2\step{b}y_1\}$,
\item[\bf --]
$\{y_{i+1}\step{a}y_i~|~i=0,1,2,...\}$.
\end{itemize}
\end{Example}
The following congruence theorem originates from \cite{3GrV92}.
\begin{Theorem}
\label{Tm.cong}
If a TSS $R$ is well-founded and in tyft/tyxt format, then
$\Rbis$ is a congruence.
\end{Theorem}
In Section \ref{sec.tree} we will see that the requirement of
well-foundedness in this theorem can be dropped.
\section{Unification}
A standard result from logic programming says that if a finite
collection $E$ of equations between terms is {\it unifiable}, then
there exists a {\it unifier} $\rho'$ for $E$ such that each unifier
for $E$ is also a unifier for $\rho'$. This result follows from
the well-known Martelli-Montanari algorithm \cite{3MM82}.
See \cite{3Apt90} for the basic definitions and for
an introduction to the field of logic programming and unification.
In Fokkink \cite{3Fo94c}, this theorem is generalized
to the case where $E$ may be infinite. The first property in
Lemma \ref{la.rho}, which will be vital in the proof of the main theorem,
is a corollary of this unification result.
However, we present a full proof of the lemma, because
we will need two extra properties of the unifier $\rho'$,
which follow most easily from its construction. Also, the proof
of this lemma is much simpler than the proof of the stronger
unification result in \cite{3Fo94c}.
\begin{Definition}
A substitution $\sigma$ is a {\em unifier} for a substitution $\rho$
if $\sigma\rho=\sigma$. In this case, $\rho$ is called {\em unifiable}.
\end{Definition}
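For example, the substitution $\rho$ with $\rho(x)=f(y)$, and $\rho(z)=z$
for all other variables $z$, is unifiable: since $\rho\rho=\rho$, the
substitution $\rho$ is a unifier for itself. On the other hand, a
substitution $\rho$ with $\rho(x)=f(x)$ is not unifiable, as
$\sigma\rho=\sigma$ would require $\sigma(x)=f(\sigma(x))$, making
$\sigma(x)$ a proper subterm of itself.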
\begin{Lemma}
\label{la.rho}
If a substitution $\rho$ is unifiable, then there exists a
unifier $\rho'$ for $\rho$ with the following properties:
\begin{enumerate}
\item
Each unifier for $\rho$ is also a unifier for $\rho'$.
\item
If $\rho(x)=x$, then $\rho'(x)=x$.
\item
If $\rho^n(x)$ is a variable for all $n\geq 0$, then
$\rho'(x)$ is a variable.
\end{enumerate}
\end{Lemma}
{\bf Proof.}
Let $W$ denote the collection of variables $x$ for which $\rho^n(x)$
is a variable for all $n\geq 0$. First, we define the restriction
$\rho_0'$ of $\rho'$ to $W$.
Define a binary relation $\sim$
on $W$ by $x\sim x'$ if $\rho^m(x)=\rho^n(x')$ for certain $m$ and $n$.
Note that $\sim$ is an equivalence relation. Under $\rho_0'$,
we contract the elements of each equivalence class $C\subseteq W$
to one variable from this class as follows.
\begin{itemize}
\item[\bf --]
If $\rho(x_0)=x_0$ for some $x_0\in C$, then for all $x\in C$
$\rho^n(x)=x_0$ for some $n$. This implies $\rho(x)\not=x$ for
$x\in C\backslash\{x_0\}$, so $x_0$ is determined uniquely.
Put $\rho_0'(x)=x_0$ for $x\in C$.
\item[\bf --]
If $\rho(x)\not=x$ for all $x\in C$, then
just pick some $x_0\in C$ and put $\rho_0'(x)=x_0$ for $x\in C$.
\end{itemize}
Put $\rho_0'(y)=y$ for $y\not\in W$.
We construct $\rho'(y)$ as follows. By assumption, $\rho$ allows
a unifier $\sigma$. Since $\sigma\rho=\sigma$, it follows that
$\sigma\rho^n=\sigma$ for $n\geq 0$. Clearly, the {\it size} of
each $\rho^n(y)$ (that is, the number of function symbols it
contains) is smaller or equal than the size of
$\sigma\rho^n(y)=\sigma(y)$. Moreover, each term $\rho^{n+1}(y)$
has at least the size of $\rho^n(y)$.
Since the sizes of the $\rho^n(y)$ cannot
grow beyond the size of $\sigma(y)$, it follows that
from a certain natural number $N(y)$ onwards,
the terms $\rho^n(y)$ all have the same size. Hence,
for $n\geq N(y)$, $\rho^{n+1}(y)$ is obtained from $\rho^n(y)$
by replacing variables by variables.
This means that all variables in $\rho^{N(y)}(y)$ are in $W$. Put
\[
\rho'(y)=\rho_0'\rho^{N(y)}(y).
\]
Note that $N(x)=0$ if $x\in W$, so $\rho'$ equals $\rho_0'$
on $W$. We check the required properties for $\rho'$.
\begin{itemize}
\item
$\rho'$ is a unifier for $\rho$.
First, consider a variable $x\in W$. Since $\rho(x)\sim x$, and
$\rho_0'$ contracts variables in the same equivalence class, we have
$\rho_0'\rho(x)=\rho_0'(x)$. Since $\rho'$ equals $\rho_0'$
on $W$, this implies $\rho'\rho(x)=\rho'(x)$.
Next, consider a variable $y\not\in W$. Then clearly
$N(y)=N(\rho(y))+1$, so
\[
\rho'\rho(y)=\rho_0'\rho^{N(\rho(y))}\rho(y)=
\rho_0'\rho^{N(y)}(y)=\rho'(y).
\]
\item
Each unifier $\sigma$ for $\rho$ is a unifier for $\rho'$.
First, consider a variable $x\in W$.
Since $\rho_0'(x)\sim x$, there are $m$ and $n$ such that
$\rho^m\rho_0'(x)=\rho^n(x)$. After applying $\sigma$ to both sides
we get $\sigma\rho_0'(x)=\sigma(x)$. Since $\rho_0'(y)=y$
for variables $y\not\in W$, it follows that $\sigma\rho_0'=\sigma$.
So for each variable $y$ we have
\[
\sigma\rho'(y)=\sigma\rho_0'\rho^{N(y)}(y)=\sigma\rho^{N(y)}(y)
=\sigma(y).
\]
\item
If $\rho(x)=x$, then $\rho'(x)=x$.
Clearly $x\in W$, so $\rho'(x)=\rho_0'(x)$.
Since $\rho(x)=x$, the construction of $\rho_0'$ ensures that
$\rho_0'(x)=x$.
\item
If $\rho^n(x)$ is a variable for all $n\geq 0$, then
$\rho'(x)$ is a variable.
By definition $x\in W$, so $\rho'(x)=\rho_0'(x)$.
From the construction of $\rho_0'$ it follows that its image
contains variables only. $\Box$
\end{itemize}
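For substitutions with a finite domain, the construction above can be
carried out effectively. The following sketch (ours; variables are encoded
as strings and composite terms as nested tuples, and unifiability of $\rho$
is assumed, since otherwise the iteration computing $\rho'(y)$ need not
terminate) mirrors the proof step by step.

```python
def idempotent_unifier(rho):
    """Sketch of the construction of rho' in the proof above, for a
    substitution rho with a *finite* domain.

    Variables are strings; terms are tuples (f, arg1, ..., argn).
    `rho` maps variables to terms (identity elsewhere; no identity
    entries).  Assumes rho is unifiable: e.g. for {'x': ('f', 'x')}
    the loop in rho_prime would not terminate."""

    def subst(s, t):  # apply substitution s to term t
        if isinstance(t, str):
            return s.get(t, t)
        return (t[0],) + tuple(subst(s, a) for a in t[1:])

    def variables(t):
        return {t} if isinstance(t, str) else \
            set().union(*[variables(a) for a in t[1:]])

    def in_W(v):  # do the rho-iterates of v stay variables forever?
        seen = set()
        while isinstance(v, str):
            if v in seen or v not in rho:
                return True      # a cycle, or a fixpoint rho(v) = v
            seen.add(v)
            v = rho[v]
        return False

    dom = set(rho).union(*[variables(t) for t in rho.values()])

    # rho0 contracts each equivalence class of ~ on W to one variable,
    # taking the rho-fixpoint as representative when there is one.
    rho0, rep = {}, {}
    for x in sorted(dom):
        if not in_W(x):
            continue
        chain, v = [x], x
        while v in rho and rho[v] not in chain:
            v = rho[v]
            chain.append(v)
        # the chain ends in a fixpoint, or enters a cycle
        end = v if v not in rho else min(chain[chain.index(rho[v]):])
        r = rep.setdefault(end, end)
        for z in chain:
            rho0[z] = r

    def rho_prime(y):  # rho'(y) = rho0(rho^N(y)(y))
        t = y
        while not all(in_W(v) for v in variables(t)):
            t = subst(rho, t)
        return subst(rho0, t)

    return {x: rho_prime(x) for x in dom if rho_prime(x) != x}
```

On $\rho$ with $\rho(x)=f(y)$ and $\rho(y)=z$, for instance, this yields
$\rho'$ with $\rho'(x)=f(z)$ and $\rho'(y)=z$, which is indeed an
idempotent unifier for $\rho$.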
\section{Tyft/Tyxt Reduces to Tree}
\label{sec.tree}
This section contains the proof of the main theorem, which says
that for each TSS in tyft/tyxt format there exists a transition
equivalent TSS in the more restrictive tree format.
\subsection{Tyft/tyxt reduces to tyft}
The following lemma from \cite{3GrV92} indicates
that we can refrain from tyxt rules.
\begin{Lemma}
\label{la.tyft}
Each TSS $R$ in tyft/tyxt format is transition equivalent to
a TSS in tyft format.
\end{Lemma}
{\bf Proof.}
Replace each tyxt rule $r$ in $R$
by a collection of tyft rules
$\{r_f|f\in\Sigma\}$, where each $r_f$ is obtained by substituting
$f(x_1,...,x_n)$ for $x$ in $r$, with $x_1,...,x_n$
variables that do not yet occur in $r$. Let $R'$ denote
the collection of tyft rules that is thus obtained.
Clearly, for each proof from $R$ of a certain closed
transition, there is a proof from $R'$ of the same transition,
and vice versa. Hence, $R$ and $R'$ are transition equivalent. $\Box$
\subsection{Tyft reduces to xyft}
\begin{Definition}
A transition rule is said to be a {\em xytt} rule if the terms at
both sides of its premises are all single variables.
\end{Definition}
\begin{Definition}
A transition rule is called {\em xyft} if it is both tyft and xytt.
\end{Definition}
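For instance, the rule $\{x\step{a}y\}/f(x)\step{b}y$ is xyft; the rule
$\{g(x)\step{a}y\}/f(x)\step{b}y$ is tyft but not xytt; and the rule
$\{x\step{a}y\}/g(f(x))\step{b}y$ is xytt but not tyft.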
In this section, we show that each TSS in tyft format is xytt equivalent
to a TSS in xyft format, where xytt equivalence is a stronger equivalence
notion than transition equivalence.
\begin{Definition}
Two TSSs are {\em xytt equivalent} if exactly
the same xytt rules are provable from both.
\end{Definition}
\begin{Theorem}
\label{Tm.xyft}
Each TSS $R$ in tyft format is xytt equivalent to a TSS in xyft format.
\end{Theorem}
{\bf Proof.} We shall prove $R$ xytt equivalent to the TSS $S$ of xyft
rules that are provable from $R$. Since all rules in $S$ are provable
from $R$, Lemma \ref{la.prov} yields that the xytt rules provable from
$S$ are provable from $R$. We show that the converse is also true,
i.e.\ that each xytt rule $H/c$ provable from $R$ is provable from
$S$. We apply induction on the partial well-order $<$ between proof
structures, so suppose that $(B,r,\phi)$ derives $H/c$ from $R$, and
the case has been proved for xytt rules that are derivable from $R$ by
means of a proof structure smaller than $(B,r,\phi)$.
Since $(B,r,\phi)$ is a proof structure for $H/c$, there exists
a substitution $\sigma$ that matches with $(B,r,\phi)$
such that $\sigma({\tp}(B,r,\phi))\subseteq H$ and $\sigma(\co r)=c$.
From $(B,r,\phi)$ we construct recursively a sub-structure $(B',r,\phi')$
which is a proof structure for a rule $s\in S$. In parallel,
we construct a partial substitution $\rho$ which is unified by
$\sigma$ in the sense that $\sigma (\rho(x))=\sigma(x)$ for those variables
$x$ for which $\rho$ has been defined.
\begin{itemize}
\item
$r\in B'$,
\item
if $b\in B\backslash\{r\}$, and if $\phi(b)$ is a premise $t\step ay$
of a rule in $B'$ such that for some $k\geq 0$:
\begin{enumerate}
\item
$\rho^i(t)$ is defined for $i=0,...,k$,
\item
$\rho^i(t)$ is a variable for $i=0,...,k-1$,
\item
$\rho^k(t)$ is of the form $f(t_1,...,t_n)$,
\end{enumerate}
then $b\in B'$.
Since $\sigma$ matches with $(B,r,\phi)$, we have
$\sigma(\co b)=\sigma(t\step ay)$. By assumption,
$\sigma$ is a unifier for the partially defined $\rho$, so
$\sigma(t)=\sigma\rho^k(t)=\sigma(f(t_1,...,t_n))$.
Hence, $\co b$ is of the form $f(x_1,...,x_n)\step au$,
with $\sigma(x_j)=\sigma(t_j)$ for $j=1,...,n$ and $\sigma(u)=\sigma(y)$.
Define $\rho(x_j)=t_j$ for $j=1,...,n$ and $\rho(y)=u$.
Note that $\sigma$ is a unifier for the extended $\rho$.
\end{itemize}
In order to extend $\rho$ to a full substitution, we define
$\rho(x)=x$ for all variables $x$ for which $\rho$ has not yet
been defined. Finally, $\phi'$ is the restriction of $\phi$ to
$B'\backslash\{r\}$.
Since $\sigma$ is a unifier for $\rho$,
Lemma \ref{la.rho} indicates the existence of a
unifier $\rho'$ for $\rho$ with the following properties.
\begin{enumerate}
\item
$\sigma\rho'=\sigma$.
\item
If $\rho(x)=x$, then $\rho'(x)=x$.
\item
If $\rho^k(x)$ is a variable for all $k\geq 0$, then $\rho'(x)$
is a variable.
\end{enumerate}
Consider the rule $b$ in the construction of $B'$ and $\rho$.
Recall that its conclusion is of the form $f(x_1,...,x_n)\step au$ and
$\phi'(b)=t\step ay$, where $\rho^k(t)=f(t_1,...,t_n)=
\rho(f(x_1,...,x_n))$ and $\rho(y)=u$. Since $\rho'$ is a unifier
for $\rho$, it follows that
\[
\rho'(\phi'(b))=\rho'(t\step ay)=\rho'(\rho^k(t)\step a\rho(y))
=\rho'(\rho(f(x_1,...,x_n))\step au)=\rho'(\co b).
\]
So $\rho'$ matches with $(B',r,\phi')$. Hence the rule
$s=\rho'(\tp(B',r,\phi')/\co r)$ is provable from $R$.
We show that $s$ is xyft. From the construction of $\rho$ it
follows that its {\it domain} (i.e.\ the variables $x$ for which
$\rho(x)\not=x$) consists of two kinds of variables:
\begin{enumerate}
\item
variables that occur at the left-hand side of the conclusion of
rules in $B'\backslash\{r\}$,
\item
variables that occur at the right-hand side of premises in the
range of $\phi'$.
\end{enumerate}
Hence, if $g(x_1,...,x_m)\step bt$ is the conclusion of $r$, then
$\rho(x_j)=x_j$ for $j=1,...,m$. Now property 2 of
$\rho'$ yields $\rho'(x_j)=x_j$ for $j=1,...,m$, so the conclusion
$\rho'(g(x_1,...,x_m)\step bt)$ of $s$ is of the form
$g(x_1,...,x_m)\step b\rho'(t)$.
The premises of $s$ are in $\rho'(\tp(B',r,\phi'))$, so they are
of the form $\rho'(t\step ay)$ where $t\step ay$ is a premise of
a rule in $B'$ outside the range of $\phi'$. Hence $y$ is not in
the domain of $\rho$, i.e.\ $\rho(y)=y$, so property 2 of $\rho'$
yields $\rho'(y)=y$. Moreover, as in a proof structure no two rules
have variables in common, all variables $y$ at the right-hand side of
these premises and $x_1,...,x_m$ are distinct. In order to
show that $\rho'(t)$ is a variable, we distinguish two cases.
\begin{enumerate}
\item
$t\step ay\in\tp(B,r,\phi)$.
Then $\sigma(t\step ay)\in H$, so $\sigma(t)$ is a variable.
As $\sigma\rho'(t)=\sigma(t)$, also $\rho'(t)$ is a variable.
\item
$t\step ay\not\in\tp(B,r,\phi)$.
Then $\phi(b)=t\step ay$ for some $b\in B$. Since $t\step ay$
is outside the range of $\phi'$, it follows that $b\not\in B'$.
Hence the inductive construction of $B'$ and $\rho$ implies that
$\rho^k(t)$ is a variable for all $k\geq0$. So property 3 of $\rho'$
yields that $\rho'(t)$ is a variable.
\end{enumerate}
Hence, $s$ is xyft.
Since $s$ is provable from $R$ and xyft, by definition $s\in S$.
For $c'\in\sigma(\tp(B',r,\phi'))$,
the xytt rule $H/c'$ is provable from $R$ by means of a strictly
smaller sub-structure of $(B,r,\phi)$, so by induction such rules $H/c'$ are
provable from $S$. Since $\sigma(s)=\sigma\rho'(\tp(B',r,\phi')/\co{r})
=\sigma(\tp(B',r,\phi'))/c$, it follows from Proposition \ref{pr.provable}
that $H/c$ is provable from $S$. $\Box$
\begin{Example} Applying this construction to the proof structure
$(B,r,\phi)$ of Example~\ref{ccs} gives rise to the sub-structure
$(B',r,\phi')$ displayed below, together with the (partial)
substitution $\rho$. Applying the construction in the proof of the
unification lemma to $\rho$ gives the substitution $\rho'$
(with $\rho'(x)=x$ for variables $x$ not explicitly mentioned).\\
\unitlength 1.8cm
\begin{picture}(2.5,3.6)(-1.6,-.2)
\put(0,0){\makebox(0,0){$\displaystyle\frac{!z \mid z \step \tau z'}
{!z \step \tau z'}$}}
\put(0,.65){\vector(0,-1){.3}}
\put(0,1){\makebox(0,0){$\displaystyle\frac{v \step a v' ~~ w \step
{\bar{a}} w'}{v \mid w \step \tau v' \mid w'}$}}
\put(-.9,1.65){\vector(1,-1){.3}}
\put(-1,2){\makebox(0,0){$\displaystyle\frac{!u \mid u \step a u'}
{!u \step a u'}$}}
\put(-1,2.65){\vector(0,-1){.3}}
\put(-1,3){\makebox(0,0){$\displaystyle\frac{t \step a t'}
{s \mid t \step a s \mid t'}$}}
\end{picture}
\hfill
$\begin{array}[b]{@{}lcl@{}}
\rho(v)&=&!z\\
\rho(w)&=&z\\
\rho(z')&=&v' \mid w'\\
\\
\rho(u)&=&z\\
\rho(v')&=&u'\\
\\
\rho(s)&=&!u\\
\rho(t)&=&u\\
\rho(u')&=&s \mid t'\\
\\
\makebox[0pt][l]{The resulting xyft rule $s$ is $\displaystyle \frac{z
\step a t'~~~z \step {\bar{a}} w'}{!z \step \tau !z\mid t' \mid w'}$.}
\\[-1ex]
\end{array}$
\hfill
$\begin{array}[b]{@{}lcl@{}}
\rho'(v)&=&!z\\
\rho'(w)&=&z\\
\rho'(z')&=&!z \mid t' \mid w'\\
\\
\rho'(u)&=&z\\
\rho'(v')&=&!z\mid t'\\
\\
\rho'(s)&=&!z\\
\rho'(t)&=&z\\
\rho'(u')&=&!z \mid t'\\ \\ \\
\\[-1ex]
\end{array}$
\end{Example}
Although according to Theorem \ref{Tm.xyft} the tyft/tyxt format
reduces to the more restrictive xyft format, this is by no means an
argument to abandon the tyft/tyxt format: a simple TSS in
tyft/tyxt format may take a much more complicated form when it is
described in xyft format. This is demonstrated by the following
example.
\begin{Example}
Assume two functions $a,b$ of arity zero, a function $f$ of
arity one, and a label $l$. Consider the following TSS in tyft format.
\[
a\step{l}a\hspace{3cm}
\Rule{a\step{l}y}{a\step{l}f(y)}
\]
In order to describe this TSS in xyft format, we need infinitely
many rules: $a\step{l}f^n(a)$ for $n=0,1,2,...$
(The auxiliary function symbol $b$ is present to prevent the TSS
from being described by the single rule $a\step{l}x$.)
\end{Example}
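In the original tyft TSS each of these transitions has a small
derivation; for instance, $a\step{l}f(f(a))$ is obtained by stacking
two applications of the conditional rule on the axiom:
\[
\frac{\displaystyle\frac{a\step{l}a}{a\step{l}f(a)}}{a\step{l}f(f(a))}
\]
In the xyft description, by contrast, each transition $a\step{l}f^n(a)$
requires its own rule.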
\subsection{Xyft reduces to tree}
The following terminology originates from \cite{3GrV92}.
\begin{Definition}
A variable is called {\em free} in a rule if it occurs neither
at the right-hand side of a premise
nor at the left-hand side of the conclusion of the rule.
A rule is called {\em pure} if it is well-founded and does not contain
any free variables. A {\em tree} rule is a pure xyft rule.
\end{Definition}
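To illustrate this definition with some hypothetical rules (not taken
from the running example), assuming a unary function $f$ and labels
$a,b$:
\[
\Rule{x\step{a}y}{f(x)\step{b}y}\hspace{2cm}
\Rule{x\step{a}y}{f(x)\step{b}z}\hspace{2cm}
\Rule{y\step{a}y}{f(x)\step{b}x}
\]
The first rule is a tree rule. The second is not, since $z$ is free.
The third is not, since the backward path from $y$ in its dependency
graph is infinite, so the rule is not well-founded.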
\begin{Theorem}
Each TSS $R$ in xyft format is transition equivalent to
a TSS in tree format.
\end{Theorem}
{\bf Proof.}
We prove that $R$ is transition equivalent to
the TSS $S$ consisting of the tree rules that can be proved from $R$.
Since all rules in $S$ can be proved from $R$, Lemma \ref{la.prov}
implies that each transition provable from $S$ is
also provable from $R$. We check the converse, namely that a closed
transition $p\step{a}p'$ provable from $R$ is provable from $S$.
Since $p\step{a}p'$ is provable from $R$, there exist a rule
$r\in R$ and a substitution $\sigma$ such that the premises
of $r$ under $\sigma$ are provable from $R$ and the conclusion
of $r$ under $\sigma$ yields $p\step{a}p'$. Let $r$ be of the form
\[
\Rule{\{z_i\step{a_i}y_i~|~i\in I\}}{f(x_1,...,x_m)\step{a}t}
\]
Using induction, we may assume that
$\sigma(z_i\step{a_i}y_i)$ is provable from $S$ for $i\in I$.
We construct from $r$ a rule $r'$ in $S$ as follows.
If there is no backward path in the dependency graph of $r$
from a vertex $y_i$ to a vertex $x_j$, then replace the variables
$z_i$ and $y_i$ in $r$ by $\sigma(z_i)$ and $\sigma(y_i)$ respectively.
Moreover, replace free variables $z$ in $t$ by $\sigma(z)$.
As $p\step{a}p'$ is a closed transition, $\sigma(z)$ does not contain
any variables.
The resulting rule $r''$ is a substitution instance of $r$, so $r''$
is provable from $R$. Remove each premise $\sigma(z_i\step{a_i}y_i)$
from $r''$. Since those transitions are provable from $R$, the resulting
rule $r'$ is provable from $R$ as well.
Clearly, $r'$ is xyft and without free variables. Moreover, $r'$ is
well-founded, because for each premise $z_i\step{a_i}y_i$ in $r'$,
the (only) backward path from
the vertex $y_i$ in the dependency graph of $r'$ terminates at
a vertex $x_j$. Hence, $r'$ is a tree rule, so $r'\in S$.
Since the premises of $r'$ under $\sigma$ are provable from $S$,
and since the conclusion of $r'$ under $\sigma$ yields $p\step{a}p'$,
Proposition \ref{pr.provable} implies that $p\step a p'$ is provable from $S$.
$\Box$
\vspace{2mm}
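To illustrate the construction of $r'$, consider a hypothetical xyft
rule $r$ (not taken from the text above)
\[
\Rule{x\step{a}y~~~~z\step{b}w}{f(x)\step{c}g(y)}
\]
in which neither $z$ nor $w$ is connected to $x$ by a backward path in
the dependency graph. Given closed terms $\sigma(z)=p$ and
$\sigma(w)=q$ with $p\step{b}q$ provable, instantiating $z$ and $w$
yields the substitution instance $r''$, and removing the provable
closed premise $p\step{b}q$ leaves the tree rule
\[
\Rule{x\step{a}y}{f(x)\step{c}g(y)}
\]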
So, we have found that for each TSS in tyft/tyxt format there exists
a transition equivalent TSS in tree format.
Since tree rules are well-founded tyft rules, this result
implies that the congruence theorem
for tyft/tyxt can do without well-foundedness.
\begin{Corollary}
If a TSS $R$ is in tyft/tyxt format, then $\Rbis$ is a congruence.
\end{Corollary}
\section{Extensions to Other Formats}
\subsection{The ntyft/ntyxt format}
Groote \cite{3Gr93} extended the tyft/tyxt format to the
{\it ntyft/ntyxt} format, which as extra feature allows
transition rules to contain negative premises, i.e.\ expressions
of the form $t\nstep{a}$. In a setting with negative premises,
the definition of the transition relation determined by
a TSS has to be adapted. Certain TSSs may fail to determine a transition
relation at all, for instance due to rules such as
\[
\Rule{t\nstep{a}}{t\step a t'}
\]
One of the most general ways to associate transitions to TSSs with
negative premises is through the notion of {\it stability},
which was introduced by Gelfond and Lifschitz
\cite{3GL88} in logic programming. The transition relation determined
by a TSS is then its {\em unique stable transition relation}, if such
a relation exists. Bol and Groote \cite{3BolG91}, who adapted this notion for
TSSs, showed that there exist TSSs in ntyft/ntyxt format with a unique
stable transition relation for which bisimulation is not a congruence.
However, they found a subclass of such TSSs for which it is. They
defined a (somewhat complicated) notion of {\it reduction} of TSSs,
inspired by the work of Van Gelder, Ross and Schlipf \cite{3GRS88} in
logic programming, and proved a congruence theorem for well-founded
TSSs in the ntyft/ntyxt format that are {\em positive} (that is
without negative premises) after applying reduction. The transition
relation associated to a TSS that is positive after reduction consists
of the closed transitions that are provable from the reduced TSS\@.
This is then the unique stable transition relation of the TSS.
Earlier, Groote \cite{3Gr93} had adapted the concept of {\em
stratification}---also found in logic programming, see Apt
\cite{3Apt90}---to transition system specifications, and showed how a
{\em stratified} TSS determines a transition relation. He also proved
that bisimulation equivalence is a congruence for well-founded
stratified TSSs in the ntyft/ntyxt format. A TSS that is stratified
is surely positive after reduction, and the transition relation
determined by the method of stratification is the same as the one
determined by the method of reduction. Thus we have a hierarchy of
properties
\begin{center}
\it positive $~\Rightarrow~$ stratified $~\Rightarrow~$ positive after
reduction $~\Rightarrow~$ has a unique stable transition relation.
\end{center}
None of these implications can be reversed.
In Van Glabbeek \cite{3vG95} the notion of a {\it complete} TSS is
proposed, which is equivalent to {\it positive after reduction}.
For this purpose, the notion of provability is extended in order to
allow the derivation of negative transitions. Then, a TSS is said to be {\em
complete} if for each closed transition $p\step a p'$, the TSS can
prove either $p\step a p'$ or its negation $p\nstep a p'$. In the same
paper it is also argued that the unique stable transition relation of
an incomplete TSS is not always convincing as the determined
transition relation. If for any reason a transition relation needs to
be associated to arbitrary TSSs, it is suggested to take the set of
closed transitions $p\step a p'$ that are {\em irrefutable}, in the
sense that $p\nstep a p'$ is not provable using the extended concept
of provability. Although this method yields the `right' transition
relation for complete TSSs, in the case of incomplete TSSs with a
unique stable transition relation it may yield a different result
than the method of stability, and an equally unconvincing one. The
transition relation associated to incomplete TSSs usually has very
unpleasant properties. In particular, the congruence result for TSSs
in ntyft/ntyxt format does not extend to such TSSs \cite{3vG95}. The
following proposition, taken from
\cite{3vG95}, gives a sufficient condition for two TSSs to be
transition equivalent according to each of the methods stability,
completeness (=reduction) and irrefutability.
\begin{Proposition}
Let $R$ and $R'$ be TSSs such that $R \vdash N/c
\Leftrightarrow R' \vdash {N}/{c}$ for any closed transition
rule $N/c$ with only negative premises. Here $\vdash$
denotes provability in the sense of Section~\ref{preliminaries}. Then
\begin{itemize}\vspace{-1ex}
\item $R$ has a unique stable transition relation iff $R'$ has, and in that
case these relations coincide;\vspace{-1ex}
\item $R$ is complete iff $R'$ is, and in that case they determine the same
transition relation;\vspace{-1ex}
\item and the transitions irrefutable from $R$ are the same as the ones
irrefutable from $R'$.\vspace{-1ex}
\end{itemize}
\end{Proposition}
Thus without committing ourselves to their precise meaning, we can
extend our results to TSSs with negative premises by strengthening the
requirement of transition equivalence to provability of the same
closed transition rules without positive premises. All definitions,
lemmas and propositions of Section~\ref{preliminaries} generalize
straightforwardly to TSSs with negative premises, except that a rule
is now called well-founded if its collection of {\em positive}
premises is so.
\begin{Definition}
An {\em xyntt} rule is an xytt rule enriched with arbitrary negative
premises $t\nstep a$.
A transition rule is called {\em xynft} if it is both ntyft and xyntt.
It is an {\em ntree} rule if moreover it is pure.
\end{Definition}
Without any further complications, we can repeat the construction
from the previous section to show that each complete TSS in
ntyft/ntyxt format is transition equivalent---it proves the
same closed rules without positive premises---to a complete TSS in the
ntree format.
Again, TSSs in the latter format are well-founded, so
as a corollary we see that the well-foundedness condition in
the congruence theorem for the ntyft/ntyxt format can be dropped.
\begin{Corollary}
If a complete TSS $R$ is in ntyft/ntyxt format, then $\Rbis$ is a
congruence.
\end{Corollary}
We show that in general, terms in negative premises cannot be
reduced to variables. The {\it simple negative tree format} allows
complete TSSs which consist of
pure and well-founded ntyft/ntyxt rules, where the left-hand sides of
all the premises (so also of the negative premises) are variables.
We present a complete TSS in ntyft/ntyxt format for which there does
not exist a transition equivalent TSS in simple negative tree format.
Our counter-example is presented in the setting of the process algebra
basic CCS. This formalism assumes a constant $0$, a binary function
$x+y$ for alternative composition, and unary functions $ax$ for prefix
sequential composition, where $a$ ranges over an alphabet $A$.
Basic CCS assumes relations $\step a$ for $a\in A$, and its
operational semantics is defined in Example~\ref{ccs}.
Add two functions $f$ and $g$ with arity one
to the signature of basic CCS, and extend the operational semantics
by the following transition rules, to obtain the TSS $R$.
\[
\Rule{x\step{a}y_1~~~~y_1\step{a}y_2}{g(x)\step{a}0}
\hspace{2cm}
\Rule{g(x)\nstep{a}}{f(x)\step{a}0}
\]
The TSS $R$ is complete and in ntyft/ntyxt format.
The premise $g(x)\nstep{a}$ cannot be reduced.
An obvious attempt to delete this negative premise
would be to replace the second rule by the following two rules.
\[
\Rule{x\nstep{a}}{f(x)\step{a}0}
\hspace{2cm}
\Rule{x\step{a}y~~~~y\nstep{a}}{f(x)\step{a}0}
\]
However, this adapted TSS is not transition equivalent to $R$.
For example, $f(aa0+a0)\step{a}0$ holds in the new TSS,
but not in $R$.
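Spelling out the derivations behind this discrepancy: in $R$ we have
$aa0+a0\step{a}a0$ and $a0\step{a}0$, so the rule for $g$ yields
\[
\Rule{aa0+a0\step{a}a0~~~~a0\step{a}0}{g(aa0+a0)\step{a}0}
\]
and hence the negative premise $g(aa0+a0)\nstep{a}$ of the rule for
$f$ fails. In the adapted TSS, on the other hand, $aa0+a0\step{a}0$
and $0\nstep{a}$, so its second rule yields
\[
\Rule{aa0+a0\step{a}0~~~~0\nstep{a}}{f(aa0+a0)\step{a}0}
\]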
In order to provide a rigorous argument that $R$ does not reduce to a
TSS in simple negative tree format, we need the following lemma.
First note that a TSS $T$ in simple negative tree format is always
stratified and hence complete \cite{3vG95}, so that there is no
ambiguity about the associated transition relation. The latter can
thus be taken to be the set of closed transitions that are provable
from $T$ in the extended sense of \cite{3vG95}. This is the concept of
provability used below.
\begin{Lemma}
\label{lem.neg}
Let $T$ be a TSS in simple negative tree format
and $p_0$ and $p_1$ closed terms, such that:
\begin{enumerate}
\vspace{-1ex}\item
if $T$ proves $p_0\step a q$, then $T$ proves $p_1\step a q$,
\vspace{-1ex}\item
if $T$ proves $p_0\nstep a$, then $T$ proves $p_1\nstep a$.
\vspace{-1ex}\end{enumerate}
If $T$ proves $f(p_0)\step bq$, then $T$ proves $f(p_1)\step bq'$
for some $q'$.
\end{Lemma}
{\bf Proof.}
Let $f(p_0)\step bq$ be provable from $T$. Then, by Proposition 14
in \cite{3vG95}, there exist a rule $r\in T$ and a substitution $\sigma$,
such that the premises of $r$ under $\sigma$ are provable from $T$
and the conclusion of $r$ under $\sigma$ yields $f(p_0)\step bq$.
Since $r$ is in ntyft format, it has a conclusion of the form
$f(x)\step bt$, where $\sigma(x)=p_0$ and $\sigma(t)=q$.
Define a substitution $\sigma'$ by $\sigma'(x)=p_1$, and $\sigma'(y)=
\sigma(y)$ for $y\not=x$. Since $r$ is in simple negative tree format,
and since the premises of $r$ under $\sigma$ are provable from $T$,
properties 1 and 2 of the transition systems of $p_0$ and $p_1$ ensure
that the premises of $r$ under $\sigma'$ are provable from $T$.
So according to Proposition 13 in \cite{3vG95}, the conclusion of $r$ under
$\sigma'$, $f(p_1)\step b\sigma'(t)$, is provable from $T$ as well. $\Box$
\vspace{3mm}
Suppose that the TSS $R$ that was defined before is transition equivalent
to a TSS $T$ in simple negative tree format. If $p_0=a0$ and $p_1=aa0+a0$,
then it is easy to see that the two properties that were
formulated in Lemma \ref{lem.neg} are satisfied.
On the other hand, $R$ (and so $T$) proves $f(a0)\step{a}0$ and
$f(aa0+a0)\nstep{a}$. According to Lemma \ref{lem.neg} this cannot be,
so $R$ cannot be transition equivalent to any TSS in simple
negative tree format.
\subsection{The panth format}
Baeten and Verhoef \cite{3BV93} extended the tyft/tyxt format
with predicates, i.e.\ not only relations $t\step{a}t'$,
but also predicates such as $t\step{a}\surd$ are allowed
to occur in transition rules.
The definition of strong bisimulation, Definition \ref{Dn.bis},
is adapted accordingly by adding a third condition:
\begin{itemize}
\item[\bf --]
if $p{\cal B}q$ and $p\step{a}_R\surd$, then $q\step{a}_R\surd$.
\end{itemize}
Next, Verhoef \cite{3Ver94} extended the resulting format
with negative premises. A congruence theorem holds
for well-founded complete TSSs that are in the
so-called {\it panth} format, which is essentially the natural
extension of ntyft/ntyxt with predicates.
Without any further complications, we can repeat the construction
from the previous section to show that each complete TSS in
panth format is transition equivalent to a complete TSS in an extension of
the tree format, which allows rules to have premises of the form
$z\step{a}\surd$ and $t\nstep{a}$ and $t\nstep{a}\surd$,
and a conclusion of the form $f(x_1,...,x_m)\step a\surd$.
As a corollary, we see that the well-foundedness condition in the
congruence theorem for the panth format can be dropped.
\begin{Corollary}
If a complete TSS $R$ is in panth format, then $\Rbis$ is a
congruence.
\end{Corollary}
\bibliographystyle{plain}
\begin{thebibliography}{10}
\bibitem{3Apt90}
{\sc K.R. Apt} (1990):
\newblock {\em Logic programming.}
\newblock In J.~van Leeuwen, editor: {\sl Handbook of Theoretical Computer Science, Volume B, Formal Methods and Semantics}, Elsevier, pp. 493--574.
\bibitem{3BV93}
{\sc J.C.M. Baeten \& C.~Verhoef} (1993):
\newblock {\em A congruence theorem for structured operational semantics with predicates.}
\newblock In E.~Best, editor: {\sl Proceedings 4th Conference on Concurrency Theory (CONCUR'93), {\rm Hildesheim}}, {LNCS 715}, Springer-Verlag, pp. 477--492.
\bibitem{3BIM88}
{\sc B.~Bloom, S.~Istrail \& A.R. Meyer} (1988):
\newblock {\em Bisimulation can't be traced: preliminary report.}
\newblock In {\sl Proceedings 15th ACM Symposium on Principles of Programming Languages, {\rm San Diego, California}}, pp. 229--239.
\newblock To appear in {\sl Journal of the ACM}.
\bibitem{3BolG91}
{\sc R.N. Bol \& J.F. Groote} (1991):
\newblock {\em The meaning of negative premises in transition system specifications.}
\newblock In J.~Leach Albert, B.~Monien \& M.~Rodr\'{\i}guez Artalejo, editors: {\sl Proceedings 18th International Colloquium on Automata, Languages and Programming (ICALP'91), {\rm Madrid}}, {LNCS 510}, Springer-Verlag, pp. 481--494.
\bibitem{3Fo94b}
{\sc W.J. Fokkink} (1994):
\newblock {\em The tyft/tyxt format reduces to tree rules.}
\newblock In M.~Hagiya \& J.C. Mitchell, editors: {\sl Proceedings 2nd
Symposium on Theoretical Aspects of Computer Software (TACS'94)},
Sendai, Japan, {LNCS 789}, Springer-Verlag, pp. 440--453.
\bibitem{3Fo94c}
{\sc W.J. Fokkink} (1994):
\newblock {\em Idempotent most general unifiers for infinite sets.}
\newblock Report CS-R9442, CWI, Amsterdam.
\bibitem{3GRS88}
{\sc {A. van} Gelder, K.~Ross \& J.S. Schlipf} (1991):
\newblock {\em The well-founded semantics for general logic programs}, JACM 38(3), pp. 620--650.
\bibitem{3GL88}
{\sc M.~Gelfond \& V.~Lifschitz} (1988):
\newblock {\em The stable model semantics for logic programming.}
\newblock In R.~Kowalski \& K.~Bowen, editors: {\sl Proceedings 5th Conference on Logic Programming}, MIT press, pp. 1070--1080.
\bibitem{3vG93b}
{\sc R.J.~van Glabbeek} (1993):
\newblock {\em Full abstraction in structural operational semantics
(extended abstract).}
\newblock In M.~Nivat, C.~Rattray, T.~Rus \& G.~Scollo, editors: {\sl Proceedings 3rd Conference on Algebraic Methodology and Software Technology (AMAST'93), {\rm Twente, The Netherlands}}, Workshops in Computing, Springer-Verlag, pp. 77--84.
\bibitem{3vG95}
{\sc R.J.~van Glabbeek} (1995):
\newblock {\em The meaning of negative premises in transition system
specifications II.}
\newblock Available at {\tt ftp://boole.stanford.edu/pub/DVI/negative.dvi.Z}.
\bibitem{3Gr93}
{\sc J.F. Groote} (1993):
\newblock {\em Transition system specifications with negative premises.}
\newblock {\sl Theoretical Computer Science} 118(2), pp. 263--299.
\bibitem{3GrV92}
{\sc J.F. Groote \& F.W. Vaandrager} (1992):
\newblock {\em Structured operational semantics and bisimulation as a congruence.}
\newblock {\sl Information and Computation} 100(2), pp. 202--260.
\bibitem{3MM82}
{\sc A.~Martelli \& U.~Montanari} (1982):
\newblock {\em An efficient unification algorithm.}
\newblock {\sl ACM Transactions on Programming Languages and Systems} 4(2), pp. 258--282.
\bibitem{3Pa81}
{\sc D.M.R. Park} (1981):
\newblock {\em Concurrency and automata on infinite sequences.}
\newblock In P.~Deussen, editor: {\sl 5th GI Conference}, {LNCS 104}, Springer-Verlag, pp. 167--183.
\bibitem{3Pl81}
{\sc G.D. Plotkin} (1981):
\newblock {\em A structural approach to operational semantics.}
\newblock Report DAIMI FN-19, Aarhus University.
\bibitem{3dS85}
{\sc R.~de Simone} (1985):
\newblock {\em Higher-level synchronising devices in {\sc Meije}-{SCCS}.}
\newblock {\sl Theoretical Computer Science} 37, pp. 245--267.
\bibitem{3Ver94}
{\sc C.~Verhoef} (1994):
\newblock {\em A congruence theorem for structured operational semantics with predicates and negative premises.}
\newblock In B.~Jonsson \& J.~Parrow, editors: {\sl Proceedings 5th Conference on Concurrency Theory (CONCUR'94), {\rm Uppsala}}, {LNCS 836}, Springer-Verlag, pp. 433--448.
\end{thebibliography}
\end{document}