Amir Ben-Amram's Annotated Publication List

This list is organized by topics and (mostly) chronologically (within a topic). It is meant to be useful for finding and using my work (rather than for paper counting), so it omits publications that have been superseded by subsequent ones (typically conference proceedings), and includes a brief summary of each paper. For a full list that includes all types of publications, see DBLP. Or use Google Scholar to also see which of my works are cited, by whom etc.
The sections are:

  1. Program Analysis - mostly termination and complexity - and related computability and complexity issues
  2. Efficiency and Expressivity of programs with various modes of memory access
  3. Instruction Sets and the Efficiency of RAM Algorithms
  4. Data Structures
  5. Foundations of Complexity Theory - e.g., hierarchy theorems
  6. Implicit Computational Complexity
  7. Turing-Machine Stuff
  8. Odds and Ends

Program Analysis (Mostly Termination and Complexity) and Related Issues


Chin Soon Lee, Neil D. Jones, Amir M. Ben-Amram
The Size-Change Principle for Program Termination
ACM Symposium on Principles of Programming Languages (POPL), ACM press, January 2001.
BibTeX
preprint
published version ©ACM
If any infinite path in a program's flow graph must involve infinite descent in the size of something, then the program always terminates. This article formalizes this familiar principle using size-change graphs. We show that the termination criterion can be expressed in terms of Büchi automata as well as in terms of graph compositions, and prove that deciding it is PSPACE-complete.
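A standard illustration (my own example, not taken from the paper): the Ackermann function terminates for precisely this reason, and the comments record the size-change information for each recursive call.

    # In every recursive call, either m strictly decreases, or m is unchanged
    # and n strictly decreases.  Both parameters range over the non-negative
    # integers (a well-founded domain), so an infinite call sequence would force
    # an infinite descent -- impossible -- and the function terminates.
    def ack(m, n):
        if m == 0:
            return n + 1
        if n == 0:
            return ack(m - 1, 1)              # size-change: m' < m
        return ack(m - 1, ack(m, n - 1))      # inner call: m' = m, n' < n
                                              # outer call: m' < m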

Amir M. Ben-Amram and Chin Soon Lee
Size-Change Termination in Polynomial Time
ACM Transactions on Programming Languages and Systems (TOPLAS) 29:1 (January 2007), Article No.5
BibTeX
preprint
published version ©ACM
This paper presents an efficient algorithm to recognize certain size-change terminating programs. More precisely, we address the graph problem defined in the paper by Lee, Jones and Ben-Amram above. Since the problem is PSPACE-complete, a precise decision procedure inevitably has exponential worst-case time. Our new algorithm has polynomial worst-case time and constitutes a conservative approximation to the hard problem. We give empirical evidence and theoretical arguments to support the claim that the algorithm is a practical alternative to the precise procedure.

Amir M. Ben-Amram
Size-Change Termination with Difference Constraints
ACM Transactions on Programming Languages and Systems (TOPLAS) 30:3 (May 2008), Article No.16
BibTeX
preprint
published version ©ACM
This paper considers a natural extension of the size-change termination criterion (for integer data). The extension consists of allowing bounds on the differences in sizes of data values before and after every transition. The criterion subsumes the original one, but is harder to decide. This paper shows that unless suitably restricted, the criterion is undecidable. It points out an interesting restricted case, not previously studied, that is decidable: the fan-in free case, where we may use at most one bound per target variable in each transition. The decision problem is PSPACE-complete.

Amir M. Ben-Amram
General Size-Change Termination and Lexicographic Descent
in: The Essence of Computation: Complexity, Analysis, Transformation. Essays Dedicated to Neil D. Jones, LNCS Vol. 2566, pp. 3-17, Springer-Verlag, 2002.
BibTeX
published version
©Springer-Verlag
This paper confirms a conjecture of Neil Jones, proving that two classes of programs are equivalent in their computational power: programs whose termination can be verified with the Size-Change Termination principle (see previous papers) and programs satisfying the (more restricted) Lexicographic Descent condition.
A reader of this paper may find this private copy of the manuscript helpful, where I fixed a flaw in one of the proofs and added an example. However, note that the main theorem of this paper is an easy corollary of recent results about ranking functions for SCT.

Amir M. Ben-Amram
A Complexity Tradeoff in Ranking-Function Termination Proofs
Acta Informatica 46:1 (2009), 57-72
BibTeX
article on-line    preprint
©Springer-Verlag
There is an intimate relation between SCT termination proofs and proofs based on local, linear ranking functions (these concepts are taken from previous work, but definitions are given in the paper). It is the locality that allows the ranking functions to have a simple form (global ranking functions for SCT programs may be quite complex). But then, in the local approach, many ranking functions may be needed. This paper proves (under various sets of assumptions) that such a tradeoff is inherent.

Amir M. Ben-Amram, Mike Codish
A SAT-Based Approach to Size Change Termination with Global Ranking Functions
in: 14th Intl. Conference on Tools and Algorithms for the Construction and Analysis of Systems (TACAS), LNCS, Springer-Verlag, 2008.
BibTeX
preprint
on-line version
©Springer-Verlag
We present a class of ranking functions which is strong enough to supply termination witnesses for a significant subset of SCT instances. The subset of SCT instances covered is an NP-complete set. We have implemented an algorithm that recognizes these instances, using a SAT solver, and demonstrated its effectiveness on a benchmark suite of over 4000 test cases. This algorithm was incorporated into Isabelle in 2009.

Amir M. Ben-Amram, Neil D. Jones and Lars Kristiansen
Linear, Polynomial or Exponential? Complexity Inference in Polynomial Time
in: Computability in Europe 2008: Logic and Theory of Algorithms, LNCS 5028, Springer-Verlag, 2008.
BibTeX
on-line paper    preprint
©Springer-Verlag
A very natural question about a program is what the growth-rate of its time complexity is, e.g., is it polynomial? Improving on some previous algorithms to answer such questions automatically, our work is the first to completely solve the problem for a certain core language that includes non-deterministic branching and bounded loops, as well as restricted arithmetic over integers. A demo program can be provided upon request.
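For flavour, here is a hypothetical example written in Python syntax (the paper uses its own core-language syntax); the names and the exact form are mine, not the paper's.

    # A bounded-loop program in the spirit of the core language.  The analysis
    # classifies the growth of each variable without running the program: here
    # z is polynomially (in fact quadratically) bounded in the inputs, since the
    # loops are bounded and the update is additive; multiplicative updates inside
    # nested loops are what can lead to exponential growth.
    def example(x, y):
        z = 0
        for _ in range(x):        # "loop x times": a bounded loop
            for _ in range(y):    # nested bounded loop
                z = z + 1         # z <= x * y on exit
        return z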

A. M. Ben-Amram and Chin Soon Lee
Ranking Functions for Size-Change Termination II
Logical Methods in Computer Science, 5 (2), 2009.
BibTeX
article on-line
We show how to construct global ranking functions for SCT programs, under certain restrictions on the instances. These constructions improve the complexity from triply-exponential in the number of variables (the best previously known bound) to singly exponential; we also prove (almost matching) lower bounds.

Amir M. Ben-Amram
Size Change Termination, Monotonicity Constraints and Ranking Functions
Logical Methods in Computer Science, 6 (3), 2010.
BibTeX
article on-line
erratum
Considers Monotonicity Constraints, a program abstraction that extends the SCT abstraction, and carries the essentials of the theory developed for SCT into this extended framework. It also presents an algorithm to construct global ranking functions for any such program, which also settles the problem of constructing GRFs for unrestricted SCT instances in singly-exponential complexity. An abstract was presented at CAV 2009. A Java program that demonstrates ranking-function construction from constraint graphs can be found here.

Amir M. Ben-Amram
Monotonicity Constraints for Termination in the Integer Domain
Logical Methods in Computer Science, Vol. 7 (3), 2011.
BibTeX
article on-line
Unlike the size-change graphs used in most previous work on SCT, MCs (as in the previous entry) have the advantage of being applicable to the integer domain, which is not well-founded. This paper transfers the theory of MC transition-system termination from the well-founded domain to the integer domain.

Amir M. Ben-Amram
On Decidable Growth-Rate Properties of Imperative Programs
in: Developments in Implicit Computational complExity (DICE 2010), EPTCS Vol. 23.
BibTeX
article on-line
erratum
A very natural question about a program is what the growth-rate of its time complexity is, e.g., is it polynomial? Improving on previous work with Jones and Kristiansen, this paper completely solves the problem for an extension of the core language described there, the extension being support for resets (assignments X := 0). It is proved that, in contrast with the polynomial-time analysis of the language without resets, this extension makes the problem PSPACE-complete. A demo program can be provided upon request.

Amir M. Ben-Amram and Lars Kristiansen
On the Edge of Decidability in Complexity Analysis of Loop Programs
International Journal of Foundations of Computer Science, Vol. 23, No. 7, pp. 1451-1464.
BibTeX
published version   preprint
© World Scientific Publishing
Questions about growth rates (see previous paper) can be undecidable in very tiny languages. We are trying to map the frontier of decidability.
This paper is dedicated to Neil D. Jones on the occasion of his 70th birthday.

Mike Codish, Igor Gonopolskiy, Amir M. Ben-Amram, Carsten Fuhs and Jürgen Giesl,
SAT-Based Termination Analysis Using Monotonicity Constraints over the Integers
Theory and Practice of Logic Programming (TPLP), ICLP'11 Special Issue, volume 11, issue 4-5, pp. 503-520.
BibTeX
published version   preprint
© Cambridge University Press
We introduce a class of termination witnesses, generalizing lexicographic ranking functions, designed for Integer Monotonicity Constraint Transition Systems. The subset of Constraint Transition Systems that has such witnesses is in NP, and we have implemented an algorithm that recognizes these instances, using a SAT solver. We demonstrated its effectiveness on benchmark suites of over 200 CTSs extracted from Java Bytecode programs by two different front-ends, and a few dozen CTSs from other sources.

Amir M. Ben-Amram, Samir Genaim and Abu Naser Masud,
On the Termination of Integer Loops
ACM Transactions on Programming Languages and Systems (TOPLAS), Volume 34 Issue 4, December 2012
BibTeX
article on-line   preprint
©ACM
We study the decidability of the termination problem for several variants of simple integer loops, namely while loops without branching in the loop body and with affine constraints as the loop guard (and possibly a precondition). The loop body may be a deterministic update or a non-deterministic update subject to linear constraints. Some cases are proved undecidable, by reduction from similar problems for counter programs. Others are proved EXPSPACE-hard, by reduction from Petri-net termination.
A preliminary version was presented in VMCAI 2012.

Amir M. Ben-Amram, Michael Vainer,
Bounded Termination of Monotonicity-Constraint Transition Systems, technical report, 2012
BibTeX
on-line TR
We show that Monotonicity Constraint Transition Systems have a decidable complexity property: one can determine whether the length of all transition sequences can be bounded in terms of the initial state. We prove that the decision problem is PSPACE-complete. Moreover, if a bound exists, it must be polynomial.

Amir M. Ben-Amram and Samir Genaim,
Ranking Functions for Linear-Constraint Loops
Journal of the ACM, July 2014 (a preliminary, partial version was presented at POPL 2013)
BibTeX
full paper ©ACM
preprint of full paper   demo
We study problems of generating ranking functions for a given loop, described by linear constraints over a finite set of integer variables (we also consider rational data). We ask: is there a linear ranking function for this loop? Is there a lexicographic-linear ranking function? The case of linear functions and rational (or real) data had previously been shown to be PTIME-decidable. In this paper we prove that for integer variables, the decision problem is coNP-complete. We also present (necessarily exponential) algorithms for synthesizing linear ranking functions. We give corresponding results for lexicographic-linear functions (including a new PTIME algorithm for rational data). Finally, we point out some special cases of interest that have PTIME complexity over the integers.
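A minimal worked example (my own, not taken from the paper) of a loop together with a runtime check of the ranking-function conditions:

    # The loop "while x >= 0 and y >= 1: x := x - y" has the linear ranking
    # function f(x, y) = x: the guard keeps f non-negative whenever the body is
    # entered, and each iteration decreases f by at least 1 (because y >= 1),
    # so the loop terminates.  The assertions check exactly these two conditions.
    def run(x, y):
        while x >= 0 and y >= 1:
            f_before = x
            x = x - y
            assert f_before >= 0        # f bounded from below on entry to the body
            assert x <= f_before - 1    # f decreases by at least 1
        return x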

Amir M. Ben-Amram,
Mortality of Iterated Piecewise Affine Functions over the Integers: Decidability and Complexity
Computability 4:1, 19-56, 2015.
Preliminary version in the 30th International Symposium on Theoretical Aspects of Computer Science (STACS), 2013.
BibTeX
published version ©IOS Press
preprint
Given a piecewise-affine function on Z^n, with integer coefficients, we would like to know whether iterating it sufficiently many times, starting with an arbitrary initial point, must eventually yield some "final point". The decidability of questions of this type has been previously studied over the rationals, and here it is studied over the integers, with the main result being that undecidability holds for n ≥ 2. In fact, we can more precisely classify the hardness of the problem both in one and two dimensions. The undecidability proofs involve generalizations of the classic "Collatz problem".
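For intuition only, here is the classic Collatz map written as an iterated, piecewise-defined function in Python (my illustration; note that the division by 2 means it is not literally an integer-coefficient affine map as in the paper's setting).

    # The Collatz map on the positive integers.  Whether iteration from every
    # starting point eventually reaches 1 is the famous open problem; the paper
    # studies mortality-type questions for related piecewise-affine maps on Z^n.
    def collatz_step(x):
        return x // 2 if x % 2 == 0 else 3 * x + 1

    def steps_to_one(x, max_steps=10**6):
        steps = 0
        while x != 1 and steps < max_steps:
            x = collatz_step(x)
            steps += 1
        return steps    # e.g., steps_to_one(27) == 111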

Amir M. Ben-Amram,
The Hardness of Finding Linear Ranking Functions for Lasso Programs
5th International Symposium on Games, Automata, Logics and Formal Verification (GANDALF), 2014
BibTeX
article on-line
A "lasso program" consists of a simple loop (one block that is iterated) which is reached in some known way, meaning that we know something about the initial state of the loop. Suppose that the loop is represented (approximately, perhaps) by a set of linear inequalities. In this case, it is well known how to find linear ranking functions (LRF) that hold universally, i.e., when all initial states are considered as possible. This may be over-conservative. We give some indications to the hardness of the LRF problem when the initial condition is taken into account. Over the reals or rationals (with rational coefficients), it is PSPACE-hard. Over integers it is at least as hard as the Vector Addition System (VAS) reachability problem, hence EXPSPACE-hard (results for the "universal" case over the integers are in my article with Samir Genaim).

Amir M. Ben-Amram and Samir Genaim,
Complexity of Bradley-Manna-Sipma Lexicographic Ranking Functions
in: 27th Intl. Conference on Computer Aided Verification (CAV), LNCS, Springer-Verlag, 2015.
BibTeX
technical report
In a CAV 2005 paper, Bradley, Manna and Sipma invented a kind of lexicographic ranking function (LexRF) that can be used in termination analysis of loops defined by linear constraints over a set of numeric variables. In this paper we highlight how their definition differs from other classes of LexRFs (used by other researchers for essentially the same purpose) and evaluate the computational complexity of (1) finding such a function, given a linear-constraint loop, and (2) minimizing the number of components in the function. The results, briefly: when variables are assumed to range over the reals or rationals, (1) is PTIME while (2) is NP-complete; over the integers, (1) is coNP-complete while (2) is Σ_2^P-complete.

Amir M. Ben-Amram and Aviad Pineles,
Flowchart Programs, Regular Expressions, and Decidability of Polynomial Growth-Rate
in: Fourth International Workshop on Verification and Program Transformation (VPT 2016), EPTCS vol. 216, 2016
BibTeX
article on-line
A very natural question about a program is what the growth-rate of its time complexity is, e.g., is it polynomial? We extend previous work with Jones and Kristiansen, which handled structured, loop-based programs, to unstructured "flowchart"-type programs with the same restricted instruction set (basically assignment, addition and multiplication of integers). We prove that for this language, polynomial growth-rate is PTIME-decidable. The main technical contribution is a technique to reduce the analysis of a flowchart program to the analysis of a structured program. This reduction is based on the standard transformation of an NFA to a regexp. A demo program can be provided upon request.

Amir M. Ben-Amram and Samir Genaim,
On Multiphase-Linear Ranking Functions
in: 29th Intl. Conference on Computer Aided Verification (CAV), LNCS, Springer-Verlag, 2017, and:
Multiphase-Linear Ranking Functions and Their Relation to Recurrent Sets
in: 26th Static Analysis Symposium (SAS), 2019
BibTeX
technical report 1
technical report 2
The goal of these two papers is to understand multiphase-linear termination proofs. These are formed by a collection of linear functions, which describe the behaviour of the loop in phases: function f1 descends (like an ordinary ranking function), but when it becomes negative, the loop may still continue; however, at least from that moment, f2 is descending, and so on. The last function fd is not allowed to become negative, so that the loop must terminate.
In the first paper, we evaluate the computational complexity of finding such functions, given a linear-constraint loop and d; we also ask what such a function implies about the number of iterations the loop may make. The answer to the first question is what we expected, but obtained in an unexpected way; the answer to the second is also surprising. There is also a surprising expressivity result: for conjunctive linear-constraint loops, lexicographic-linear ranking functions are no stronger than multiphase ones.
In the second paper, we investigate an algorithm designed to search for a multiphase ranking function without a prior bound on the number of phases. The most interesting aspect may be the discovery that, in some cases where no termination proof is found, the algorithm naturally provides a witness for nontermination (a recurrent set).
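A small illustration (a standard example of this kind, not taken from the papers):

    # The loop "while x >= 0: x := x + y; y := y - 1" terminates, but no single
    # linear function certifies this.  A two-phase argument does: y decreases on
    # every iteration (phase 1), and once y has become negative, x decreases on
    # every subsequent iteration (phase 2), so x cannot stay non-negative forever.
    def run(x, y):
        while x >= 0:
            x, y = x + y, y - 1
        return x, y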

Amir M. Ben-Amram and Geoff W. Hamilton
Tight Worst-Case Bounds for Polynomial Loop Programs
in: Logical Methods in Computer Science, 2020
BibTeX
article on-line
A very natural question about a program is to determine the growth-rate of its time complexity as a function of input parameters, up to constant factors. Our paper gives the first algorithm to solve this problem for polynomially-bounded growth rates, for any program in a certain core language that includes non-deterministic branching and bounded loops, as well as restricted arithmetic over integers.

Amir M. Ben-Amram and Geoff W. Hamilton
Tight Polynomial Bounds for Loop Programs in Polynomial Space
in: Logical Methods in Computer Science, 2021
BibTeX
article on-line
This is a follow-up to the previous paper. It shows how to compute the bounds more efficiently, in polynomial space.

Efficiency and Expressivity of programs with various modes of memory access


A. M. Ben-Amram, H. Petersen
Backing up in Singly Linked Lists
Journal of the ACM 53(4), 681--705, July 2006.
BibTeX
preprint
published version ©ACM
Intuitively, working with a singly-linked list may be slower than working with a doubly-linked one, because backing up may require working your way from the beginning again. This article formalizes this intuition and proves upper and lower bounds on the gap between the two modes of access to a read-only list.
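The intuition, in code (my illustration, assuming list nodes with a next field; this is not the paper's formal model):

    # In a read-only singly-linked list there is no direct way to step backwards,
    # so a naive implementation of "back up one node" re-walks from the head.
    def predecessor(head, node):
        p = head
        while p is not None and p.next is not node:
            p = p.next
        return p    # None if node is the head (or not in the list)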

Amir M. Ben-Amram, Zvi Galil
On Pointers versus Addresses
Journal of the ACM, 39(3), pp. 617-648, July 1992.
BibTeX
preprint
published version ©ACM
We prove that under certain conditions, the worst-case gap in time complexity between a program using random-access memory and a program simulating it with pure pointer access is precisely logarithmic in the data-structure size. The lower bound uses generalized incompressibility, introduced in this paper.
A preliminary version appeared in FOCS '88 (BibTeX).

Amir M. Ben-Amram
Unit-cost pointers versus logarithmic-cost addresses
Theoretical Computer Science, Vol. 132, Number 1-2, pp. 377-385, 26 September 1994.
BibTeX
preprint
This note complements the JACM paper On Pointers versus Addresses. Here, we consider a model where random access to memory is possible but costs time logarithmic in the address. We show that this model is weaker than unit-cost pointer access.

Amir M. Ben-Amram, Holger Petersen
CONS-free Programs with Tree Input
Proceedings, 25th Int. Colloquium on Automata, Languages and Programming (ICALP), LNCS, Vol. 1443, pp. 271-282, Springer, 1998.
BibTeX
preprint
published version
©Springer-Verlag
Consider programs that access a read-only input structure and cannot allocate dynamic memory (hence, in LISP-talk, they are CONS-free). What decision problems can such programs solve? When the input is a string, the classes of problems decidable by various language variants are well understood; interestingly, they often coincide with standard complexity classes. In this article we attempt such a characterization w.r.t. programs that compute over binary trees.

Amir M. Ben-Amram
Notes on Pippenger's Comparison of Pure and Impure LISP
TOPPS group report, June 1996.
BibTeX
report
Nick Pippenger published in POPL '96 a remarkable proof that partly confirms an age-old conjecture: that pure LISP is weaker than "impure" LISP, where impurity refers to the possibility of modifying an existing cons-cell. This report discusses some aspects of this proof. (See also: Pippenger's journal paper.)

Amir M. Ben-Amram
What is a ``Pointer Machine''?
SIGACT News, 26(2), pp. 88-95, June 1995.
BibTeX
preprint
published version
I wrote this note after realizing that different authors mean different things when referring to "pointer machines". The note explains the different notions that compete for this term and suggests unequivocal terminology. I also prepared an annotated bibliography (last update: 1998) of papers using the different kinds of "pointer machines".

Amir M. Ben-Amram
On the Power of Random Access Machines
PhD thesis, Tel-Aviv University, 1995.
BibTeX
dvi files
My PhD thesis, of which most chapters appeared in more polished form as separate articles. There is, however, one part that hasn't, namely Part 7 on the off-line simulation of RAMs by LISP programs (aka LISP machines, pointer machines).

Instruction Sets and the Efficiency of RAM Algorithms


Amir M. Ben-Amram, Zvi Galil
Topological Lower Bounds on Algebraic Random Access Machines
SIAM Journal on Computing, Vol. 31, No. 3, pp. 722-761, 2001.
BibTeX
Topological lower bounds are bounds on the time complexity of set-recognition problems, based on the topological complexity of the set. Ben-Or and Yao originated the use of the method for lower bounds on the height of computation trees over (respectively) real and integer numbers. Here we extend the realm of the technique to algebraic random access machines.
A preliminary exposition of this work appeared in ICALP '95 (BibTeX).

Amir M. Ben-Amram
When can We Sort in o(n log n) Time?
Journal of Computer and System Sciences, 54(2), pp. 345-370, April 1997.
BibTeX
preprint
published version
The complexity of sorting integers on a RAM may drop below n log n: for example, using Radix Sort for small word sizes or Fusion Trees for large ones. We consider several factors such as word length, instruction set, and available memory, and give a series of theorems that show which combinations of such factors allow for fast sorting and which do not.
A preliminary version appeared in FOCS '93 (BibTeX).
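As a reminder of one such regime (my own sketch, not code from the paper): least-significant-digit radix sort of w-bit integers runs in O((w/r)(n + 2^r)) time when processing r bits per pass, which is o(n log n) for small enough word lengths.

    # Stable LSD radix sort of non-negative w-bit integers, r bits per pass.
    def radix_sort(a, w, r=8):
        for shift in range(0, w, r):
            buckets = [[] for _ in range(1 << r)]
            for x in a:
                buckets[(x >> shift) & ((1 << r) - 1)].append(x)
            a = [x for b in buckets for x in b]
        return a

    # Example: radix_sort([5, 3, 9, 1], w=16) returns [1, 3, 5, 9].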

Amir M. Ben-Amram
Average-case complexity of sorting on a RAM
DIKU Technical report 95/23, February 1995.
BibTeX
This paper considers the same questions as the JCSS paper above, however with respect to the average running time over random input.

Amir M. Ben-Amram
"When can we sort in o(n log n) time ?" Revisited
DIKU Technical report 95/21, 1995.
BibTeX
This note revisits the results of the above JCSS paper, attempting to answer questions raised by the publication of the fast sorting algorithm by Andersson, Hagerup, Nilsson and Raman (STOC '95; JCSS 1998). In particular, are the lower bounds valid for randomized algorithms? And what is the fastest sort today?

Amir M. Ben-Amram, Zvi Galil
On the Power of the Shift Instruction
Information and Computation, 117(1), pp. 19-36, 15 February 1995.
BibTeX
preprint
A unit-cost variable shift operation on unbounded integers is almost as strong as unit-cost random access to an unbounded memory of such integers. This is proved by encoding a random-access memory into a constant number of integers using shifts. The encoding implements a data structure whose amortized access time is inverse-Ackermann.
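The flavour of the idea can be seen in the following toy sketch (mine, not the paper's construction, which is far more refined and supports unbounded cell values at inverse-Ackermann amortized cost); here the cells have a fixed width w.

    # Packing an array of w-bit cells into a single unbounded integer, so that a
    # cell is read or written with O(1) shift and bitwise operations.
    def read_cell(mem, i, w):
        return (mem >> (i * w)) & ((1 << w) - 1)

    def write_cell(mem, i, w, value):
        mask = ((1 << w) - 1) << (i * w)
        return (mem & ~mask) | ((value & ((1 << w) - 1)) << (i * w))

    # Example: m = write_cell(0, 3, 8, 42); read_cell(m, 3, 8) == 42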

Data Structures


Amir M. Ben-Amram, Holger Petersen
CONS-free Programs with Tree Input
Proceedings, 25th Int. Colloquium on Automata, Languages and Programming (ICALP), LNCS, Vol. 1443, pp. 271-282, Springer, 1998.
BibTeX
preprint
published version
©Springer-Verlag
What can you do with a binary tree data-structure if you are not allowed to change it or to allocate additional memory? Is it, for example, possible to traverse the tree? Does it matter if the tree is singly- or doubly-linked? This paper sheds some light on this problem, although it is just part of work that was not completely published.
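For the doubly-linked case, part of the answer is a classic folklore trick, sketched below in Python (my illustration, not the paper's contribution): with parent pointers, the whole tree can be visited with O(1) extra memory by always deciding where to go next from the node we just came from.

    class Node:
        def __init__(self, left=None, right=None):
            self.left, self.right, self.parent = left, right, None
            for c in (left, right):
                if c is not None:
                    c.parent = self

    def traverse(root, visit):
        # Constant-extra-memory traversal of a binary tree with parent pointers.
        node, prev = root, None
        while node is not None:
            if prev is node.parent:               # arrived from above: visit, then go left
                visit(node)
                nxt = node.left or node.right or node.parent
            elif prev is node.left:               # returned from the left subtree: go right
                nxt = node.right or node.parent
            else:                                 # returned from the right subtree: go up
                nxt = node.parent
            prev, node = node, nxt

    # Example: traverse(Node(Node(Node(), Node()), Node()), print) visits 5 nodes.

The hard and interesting cases are those without parent pointers and without the ability to allocate memory.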

A related paper, listed earlier, is Backing up in Singly Linked Lists (with H. Petersen).


S. Alstrup, A. M. Ben-Amram, T. Rauhe
Worst-case and Amortised Optimality in Union-Find
ACM Symposium on Theory of Computing (STOC), 1999.
BibTeX
preprint
published version ©ACM
We answer two questions regarding the Union-Find data structure: what tradeoff between the efficiency of updates and that of queries is achievable? And can an implementation be optimal both in the worst-case time per operation and in amortized time? Both upper bounds and matching lower bounds are provided.

Amir M. Ben-Amram, Zvi Galil
A generalization of a lower-bound technique due to Fredman and Saks
Algorithmica, Vol. 30, no. 1, pp 34-66, 2001.
BibTeX
Fredman and Saks proved renowned lower bounds for the cell-probe model, in particular for dynamic union-find and prefix-sum. This paper recasts their results into a general framework, which derives each lower bound from an inequality between the hardness of the problem and the flexibility of the computational model. The gain: insight into Fredman and Saks' results, and a tool for new results (see below).
A preliminary exposition of this work appeared in FOCS '91 (BibTeX).

Amir M. Ben-Amram, Zvi Galil
Lower Bounds for Dynamic Data Structures on Algebraic RAMs
Algorithmica, Vol. 32, no. 3, pp 364-395, 2002.
BibTeX
Here the framework of the previous paper is used to transfer Fredman and Saks' results from the cell probe model to the Algebraic RAM. Thanks to the modular structure of the framework, only the analysis of the computational model has to be replaced.
A preliminary exposition of this work appeared in FOCS '91 (BibTeX).

A. M. Ben-Amram
The Euler Path to Static Level Ancestors
technical report, 2009.
BibTeX
on-line paper
I describe how to construct and query a data structure for answering level ancestor queries on trees (what is v's ancestor at level i?). The algorithm is based on Berkman and Vishkin's Euler Tour technique and is, in essence, a simplification of their PRAM algorithm (JCSS 1994). In contrast to the original, this version of the algorithm is simple and practical (and I can also provide a C implementation). I also give an extension to find level descendants and level successors.
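To fix the query semantics, here is a naive baseline (not the Euler-tour method of the paper; the parent/depth tables and the convention that the root is at level 0 are my assumptions):

    # level_ancestor(v, i): the ancestor of v at level (depth) i, found by simply
    # walking parent pointers in time proportional to depth[v] - i.  The paper's
    # structure answers the same query in O(1) time after linear preprocessing.
    def level_ancestor(parent, depth, v, i):
        assert 0 <= i <= depth[v]
        while depth[v] > i:
            v = parent[v]
        return v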

Amir M. Ben-Amram, Simon Yoffe
A Simple and Efficient Union-Find-Delete Algorithm
Theoretical Computer Science, Vol. 412, No. 4-5, Pages 487-492, 2011.
BibTeX
preprint
published version ©Elsevier
Following in the footsteps of Alstrup, Gørtz, Rauhe, Thorup, and Zwick (ICALP 2005), we present a simplified implementation of the Union-Find-Delete structure. C# code is available upon request. Please note this corrigendum.
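For orientation, the classic operations being extended are these (a textbook union-find with union by rank and path halving; the delete operation, which is the point of the paper, is not shown):

    class UnionFind:
        def __init__(self, n):
            self.parent = list(range(n))
            self.rank = [0] * n

        def find(self, x):
            # Path halving: every other node on the path is re-pointed upwards.
            while self.parent[x] != x:
                self.parent[x] = self.parent[self.parent[x]]
                x = self.parent[x]
            return x

        def union(self, x, y):
            rx, ry = self.find(x), self.find(y)
            if rx == ry:
                return
            if self.rank[rx] < self.rank[ry]:
                rx, ry = ry, rx
            self.parent[ry] = rx
            if self.rank[rx] == self.rank[ry]:
                self.rank[rx] += 1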

A. M. Ben-Amram, Z. Galil
On Data Structure Tradeoffs and an Application to Union-Find
Electronic Colloquium on Computational Complexity (ECCC), technical report, 1995.
BibTeX
We describe how to construct an algorithm with the best amortized time from a family of algorithms that describes an update/query tradeoff curve. We also construct such a family of algorithms out of the amortized-time-optimal Union-Find algorithm, thus travelling the same road in the other direction. I later found a far simpler construction for the latter result, which can be found here.

Implicit Computational Complexity


A. M. Ben-Amram, B. Loff, I. Oitavem
Monotonicity Constraints in Characterisations of PSPACE
Journal of Logic and Computation, available on-line from Feb 2010.
BibTeX
article on-line    preprint
©Oxford University Press
A celebrated paper by Bellantoni and Cook showed how a syntactic restriction on recursion schemes can limit their expressive power to exactly PTIME-computable functions. Later, Oitavem showed how to do that for PSPACE. Here, we investigate two modifications to her recursion scheme: using iteration instead of primitive recursion, and enforcing monotonicity of the recursive function. In the latter case, we obtain an implicit characterisation of the polynomial hierarchy. We also present a type of Turing machine that captures Δ_2^P.

Foundations of Complexity Theory


A. M. Ben-Amram, N. D. Jones
Computational Complexity via Programming Languages: Constant Factors Do Matter
Acta Informatica, Vol. 37, no. 2, pp. 83-120, 2000.
BibTeX
preprint
published version
This paper proposes a simple programming language as a model for complexity. Under this model we prove a series of hierarchy theorems, contradicting constant-factor speedup.
©Springer-Verlag

A. M. Ben-Amram, N. D. Jones
A precise version of a time hierarchy theorem
Fundamenta Informaticae, Vol. 38, pp. 1-15, 1999.
BibTeX
preprint
This is a technical addendum to the previous paper (by Ben-Amram and Jones). We give full source code for programs, and replace a statement saying "there is a constant" by one saying "232". Theoretically more interesting is the (nonconstructive) improvement in the next paper, where the constant is replaced with 1+ε.

Amir M. Ben-Amram
Tighter Constant-Factor Time Hierarchies
Information Processing Letters 87(1), pp. 37-44, July 2003.
BibTeX
preprint
published version
For certain computational models, and "nice" time bounds T(n), there are problems which can be decided in time (1+ε)T(n) but not in time T(n). This result is obtained by padding arguments from previous hierarchy theorems as in the paper with Neil Jones.

Amir M. Ben-Amram, N. H. Christensen, and J. Grue Simonsen
Computational Models with No Linear Speedup
Chicago Journal of Theoretical Computer Science 2012, article 7.
BibTeX
link to paper
We present two proofs that contradict linear speedup in certain computational models. For Turing machines of a fixed alphabet and two worktapes (plus an input tape), we describe a language that can be recognized in 1.50001n steps, but cannot be sped up below 1.4999n (with a change of alphabet, or by adding tapes, one can get as close to n as desired).
The second contribution is a formulation of the classical hierarchy theorem (based on diagonalization) in a generalized manner, attempting to capture all models amenable to such a proof: we call this class of models PICSTI (why? see the paper!).

Amir M. Ben-Amram
A Fundamental Theorem on Optimality and Speedup
A post in the blog Speedup in Computational Complexity, Oct 2012.

access the blog
In this blog post I discuss a remarkable theorem by Leonid Levin (related to previous work by some US researchers, named in the post). I argue that this theorem formulates an (almost) ultimate answer to the following (vague) question: "what can be said about optimal algorithms and speedup in general, that is, not regarding specific problems?" Well-known examples are Blum's speedup theorem and hierarchy theorems, and they, along with others, can all be derived from the fundamental theorem.

Some Investigations into Turing Machines


A. M. Ben-Amram, B. A. Julstrom, U. Zwick
A Note on Busy Beavers and Other Creatures
Mathematical Systems Theory, 29(4), pp. 375-386, July / August 1996.
BibTeX
preprint
published version
©Springer-Verlag
The Busy Beaver function, characterizing in some sense the amount of work that can be encoded into n states of a Turing machine, is known to grow faster than any computable function. This note investigates how the growth rate of this function compares with that of other functions of a similar flavor.
Some of the results here have later been improved, see the next paper.

A. M. Ben-Amram, H. Petersen
Improved bounds for functions related to Busy Beavers
Theory of Computing Systems, Vol. 35, no. 1, pp 1-11, 2002.
BibTeX
preprint
published version
©Springer-Verlag
This paper improves on some of the previously known inequalities between functions related to the Busy Beaver function. Two techniques are used: (1) counting crossing sequences; (2) construction of a "stored program" TM. The latter technique has recently been used by Aaronson and Yedidia, who gave it the nice name "introspective encoding".

A. M. Ben-Amram, O. Berkman, H. Petersen
Element Distinctness on One-Tape Turing Machines
Acta Informatica, Vol. 40, no. 2, pp. 81-94, 2003.
BibTeX
preprint
published version
©Springer-Verlag
A complete analysis of the complexity of the Element Distinctness problem on single-tape Turing machines, both deterministic and non-deterministic. The lower bounds rely on communication complexity; the upper bounds use perfect hashing (indeed, on a TM!).

Amir M. Ben-Amram, N. H. Christensen, and J. Grue Simonsen
Computational Models with No Linear Speedup
Chicago Journal of Theoretical Computer Science 2012, article 7.
BibTeX
link to paper
See the summary in the previous section (as the paper is not only about Turing machines).

Odds and Ends

Amir M. Ben-Amram
A Comment on Budach's Mouse-in-an-Octant Problem
BibTeX
article on-line
Budach's Mouse-in-an-Octant Problem (attributed to Lothar Budach in a 1980 article by van Emde Boas and Karpinski) concerns the behaviour of a very simple finite-state machine ("the mouse") moving on the two-dimensional integer grid. Its decidability is apparently still open. This note sketches a proof that an extended version of the problem (a super-mouse) is undecidable.

Amir M. Ben-Amram
Introducing: Reasonable Complete Programming Languages
Bulletin of the European Association for Theoretical Computer Science, Vol. 64, pp. 153-155, February 1998.
BibTeX
preprint
This note defines a class of programming languages that are equivalent as models for computability (for instance, Rice's theorem applies to all). The notion is stronger than mere equivalence in computational power, and is important if one wishes to study computability in programming languages.

A. M. Ben-Amram
The Church-Turing Thesis and its Look-Alikes
SIGACT News (ACM Special Interest Group on Automata and Computability Theory), Vol. 36, 2005.
BibTeX
preprint
Do you ever cite the Church-Turing thesis in a lecture or text? Read this note to make sure you are aware of the distinction between the thesis and its look-alikes. The file given here is a slightly extended version of the SIGACT News note.

A. M. Ben-Amram
A Complexity-Theoretic Proof of a Recursion-Theoretic Theorem
SIGACT News (ACM Special Interest Group on Automata and Computability Theory), Vol. 35, 2004.
BibTeX
preprint
This note provides an example where tools from Complexity Theory simplify the proof of a classic recursion-theoretic result. Specifically, recall that a simple set is an RE set whose complement does not contain any infinite RE set. We use the Time Hierarchy Theorem from Complexity Theory to prove that a simple set cannot be RE-complete.

A. M. Ben-Amram
Some Notions on Notations
SIGACT News (ACM Special Interest Group on Automata and Computability Theory), Vol. 22, 1991.
BibTeX
Some variants of the familiar O and Omega notation are introduced and motivated in this note. I have actually found the notation useful for purposes not originally planned - the dotted o and Omega appear in this paper.

Amir M. Ben-Amram, Omer Berkman, Costas S. Iliopoulos, Kunsoo Park
The Subtree Max Gap Problem with Application to Parallel String Covering
Proceedings of the 5th Annual ACM-SIAM Symposium on Discrete Algorithms, pp. 501-510, ACM Press, January 1994.
BibTeX
The article gives a PRAM solution to the String Covering problem, based on solving the so-called Subtree Max Gap problem. The algorithms run in O(log n) time using n processors, which is optimal.
A matching lower bound, applicable to sequential decision trees, is given for a special case of the Tree Max Gap problem.