The Fine-Grained Complexity of Multi-Dimensional Ordering Properties

We define a class of problems whose input is an n-sized set of d-dimensional vectors, and where the problem is first-order definable using comparisons between coordinates. This class captures a wide variety of tasks, such as complex types of orthogonal range search, model-checking first-order properties on geometric intersection graphs, and elementary questions on multidimensional data like verifying Pareto optimality of a choice of data points. Focusing on constant dimension d, we show that any such k-quantifier, d-dimensional problem is solvable in $O(n^{k-1} \log^{d-1} n)$ time. Furthermore, this algorithm is conditionally tight up to subpolynomial factors: we show that assuming the 3-uniform hyperclique hypothesis, there is a k-quantifier, $(3k-3)$-dimensional problem in this class that requires time $\Omega(n^{k-1-o(1)})$.

Towards identifying a single representative problem for this class, we study the existence of complete problems for the 3-quantifier setting (since 2-quantifier problems can already be solved in near-linear time $O(n \log^{d-1} n)$, and k-quantifier problems with $k > 3$ reduce to the 3-quantifier case). We define a problem Vector Concatenated Non-Domination $\mathsf{VCND}_d$ (Given three sets of vectors X, Y and Z of dimension d, d and 2d, respectively, is there an $x \in X$ and a $y \in Y$ so that their concatenation $x \circ y$ is not dominated by any $z \in Z$, where vector u is dominated by vector v if $u_i \le v_i$ for each coordinate $1 \le i \le d$?), and determine it as the "unique" candidate to be complete for this class (under fine-grained assumptions).


Introduction
Algorithmic problems based on comparing elements according to a total ordering relation are as fundamental as they are useful. Any introductory algorithms textbook starts with sorting and other comparison-based problems. For higher dimensional data, problems involving comparisons for multiple components, such as range queries, are equally fundamental in computational geometry. In databases, queries need to handle data with many fields that can be compared (beyond other relations on the data), such as listing all employees who are not managers of another employee, with seniority in one range and salary in another.
In this paper, we give a general, systematic study of the complexity of multidimensional comparison problems. We define complexity classes capturing the notion of "multi-dimensional comparison problems", as appropriate in geometry and in databases, with the classes $\mathsf{PTO}_d$ representing geometric problems on d-dimensional data, and $\mathsf{TO}_d$ representing problems that combine ordering and other relations for such data, as would be found in databases. We then identify the maximum complexity of problems in these classes under standard assumptions in fine-grained complexity, and relate the classes to each other and to other studied complexity classes. For many subclasses, we find natural complete or hard problems, where progress on better algorithms for these problems would result in better algorithms for the entire subclass.
While our results are varied, with upper bounds, conditional lower bounds and completeness results, a consistent theme emerges. Our classes are intermediate between two previously studied classes of logically defined problems, first-order in the sparse representation (e.g., graph problems in adjacency list format) and first-order in the dense representation (e.g., graph problems in adjacency matrix format). While orderings are dense relations, with quadratically many pairs for which they hold, they are a special case that can be represented succinctly, by giving an array of ranks for each element. What emerges in our results is that multi-dimensional ordering problems are very tightly connected to first-order in the sparse representation, and not directly connected to the dense representation. Thus, while the two settings are substantially different, we give many senses in which sparse relations can be coded in terms of orders, and in which orderings can be reduced to sparse relations.

A Class of Geometric Ordering Problems: $\mathsf{PTO}_{k,d}$
As an example of a multi-dimensional comparison problem, consider 2D orthogonal range searching: given a set of 2-dimensional data points D, answer Boolean queries of the form "does D contain a point in $[\ell_1, u_1] \times [\ell_2, u_2]$?", where $[\ell_1, u_1] \times [\ell_2, u_2]$ is a given orthogonal range. Note that here, we may without loss of generality replace each point's coordinate in dimension d by its rank among the coordinates in dimension d of all points in D. Typical variants include reporting, counting or optimizing over all elements in the query range. A long line of research starting in the 70s, including [11,19,23,41,44,46], gives fast algorithms for such tasks, e.g., an algorithm to preprocess D so as to answer queries in time O(log log n) using space O(n log log n), see [19]. Many more complex algorithmic tasks can be solved using orthogonal range techniques, see [8,26] for an overview.
More complex tasks than mere orthogonal range searching also arise naturally: In a set of d-dimensional data points D, consider a feature (or property) F of the data points that can be described as being contained in an orthogonal range $[\ell_1, u_1] \times \cdots \times [\ell_d, u_d]$. Given a family $\mathcal{F}$ of such features, there are several natural questions to ask:
• decide if all features are present in the dataset: $\forall F \in \mathcal{F}\ \exists x \in D : x \in F$
• decide if some data point displays all features: $\exists x \in D\ \forall F = [\ell_1, u_1] \times \cdots \times [\ell_d, u_d] \in \mathcal{F} : x \in F$
• decide if two different features are equivalent on D: $\forall x \in D : x \in F_1 \leftrightarrow x \in F_2$
Some of these questions can be quickly answered using orthogonal range reporting queries; for others, it seems that already the output size of a single such query might pose a possibly unnecessary bottleneck. Furthermore, some features might be comparison-based, but more complex than a simple orthogonal range, e.g., a Boolean combination of coordinate comparisons $x \in F(\ell_1, u_1, \ldots, \ell_d, u_d)$.
In such cases, it would not be immediate whether orthogonal range search techniques can be used at all. We formalize a notion of "multi-dimensional comparison problems" by introducing a class of problems $\mathsf{PTO}_{k,d}$ (for "purely total ordering property") of model-checking a k-quantifier first-order property on a relational structure with d total ordering relations (each succinctly represented as a sorted list of objects) as well as unary relations (to enable comparison of coordinates with constants). In particular, this class contains any property ψ of the form
$$\psi = Q_1 x^{(1)}\, Q_2 x^{(2)} \ldots Q_k x^{(k)} : \phi(x^{(1)}, \ldots, x^{(k)}),$$
where $Q_i \in \{\exists, \forall\}$, $x^{(i)}$ ranges over d-dimensional vectors (which we also call objects), and φ is an arbitrary Boolean formula involving only comparisons of the form $x^{(a)}_i \le x^{(b)}_i$ (where $x^{(a)}_i$, $x^{(b)}_i$ denote the i-th coordinate of $x^{(a)}$, $x^{(b)}$, respectively), as well as comparisons with constants. We will refer to d as the dimension of a formula $\psi \in \mathsf{PTO}_{k,d}$. Throughout this paper, we think of φ as a fixed formula, and thus k, d are constants. See Sect. 2.1 for further details.
The class $\mathsf{PTO}_{k,d}$ includes all problems mentioned above, but also tasks such as verifying Pareto optimality of a given set of d-dimensional data points, or, given a set of d-dimensional geometric objects, determining whether there are k distinct such objects whose bounding boxes intersect.
We furthermore extend this class to $\mathsf{TO}_{k,d}$, where we allow, beyond d total ordering relations, also arbitrary additional relations (represented explicitly). These two classes encompass in particular the following types of problems:
• Model-checking first-order properties of geometric intersection graphs: Presence of an edge in an intersection graph of axis-parallel boxes can be decided using comparisons of coordinates. Thus, any k-quantifier first-order property on such geometric intersection graphs in $\mathbb{R}^d$ can be formulated as a problem in $\mathsf{PTO}_{k,d}$, such as finding k pairwise non-intersecting d-dimensional axis-parallel unit cubes [42].
• Temporal logic: Using a single total ordering relation, we may represent precedence in a time domain. Thus, we may express temporal logical statements involving expressions over future or past events in $\mathsf{TO}_{k,1}$.
• Relational databases with ordered types: In relational databases, we may use totally ordered data types (salaries of employees, time events, rank in a sorted list, etc.) as succinct representations to enable comparisons. In this context, studying the complexity of a problem in $\mathsf{TO}_{k,d}$ corresponds to studying the data complexity of a fixed query.
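To make the first bullet concrete, the edge relation of a box intersection graph is a pure comparison property: two axis-parallel boxes intersect exactly when their coordinate intervals overlap in every dimension. A minimal sketch (the representation and function name are ours, not from the paper):

```python
def boxes_intersect(a, b):
    """Decide intersection of two axis-parallel boxes using only
    coordinate comparisons.  Each box is a list of (lo, hi) interval
    pairs, one per dimension; the boxes intersect iff the intervals
    overlap in every dimension."""
    return all(lo1 <= hi2 and lo2 <= hi1
               for (lo1, hi1), (lo2, hi2) in zip(a, b))
```

Since the predicate is a Boolean combination of comparisons, any fixed first-order property over such boxes falls into the framework above.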

Our Results
Let k ≥ 2. We show that any problem in $\mathsf{PTO}_{k,d}$ involving n objects can be solved in time $O(n^{k-1} \log^{d-1} n)$, which is $\tilde{O}(n^{k-1})$ for any constant dimension d. We extend this algorithm to run in time $O(m^{k-1} \log^{d-1} m)$ for sentences in $\mathsf{TO}_{k,d}$, where m denotes the sum of the number of objects and the size of the additional relations, i.e., the number of tuples contained in the relations. We show the matching conditional lower bound that there is some sentence in $\mathsf{PTO}_{k,3k-3}$ that requires time $\Omega(n^{k-1-o(1)})$ under the 3-uniform hyperclique hypothesis [2,15,37,40]; this hypothesis postulates that $n^{k \pm o(1)}$ running time is essentially best possible for finding k-cliques in 3-uniform hypergraphs. (See Sect. 2.2 for further details.)

Beyond these general upper and lower bounds, we also seek to identify hard or even complete problems for this class. Such problems capture the full generality of these classes, in the sense that finding a significantly improved algorithm for such a problem would give an improved algorithm for all problems in the class. We use the following fine-grained notion of hardness/completeness: Formally, let P be a problem whose best known algorithm runs in time $T_P(n)$, and let C be a class of problems whose best known algorithms run in time $T_C(n)$. We say P is hard for the class C if any $T_P(n)^{1-\varepsilon}$-time algorithm for P with $\varepsilon > 0$ gives a $T_C(n)^{1-\varepsilon'}$-time algorithm for all problems in C, for some $\varepsilon' > 0$. We say that P is complete for C if it is hard for C and contained in C. In particular, if P is complete for C, then P admits substantial improvements over time $T_P(n)$ if and only if all problems in C admit substantial improvements over $T_C(n)$. We use fine-grained reductions to show such results; refer to Sect. 2 for the formal definition.
We identify such problems for specific quantifier structures. In particular, we focus on the 3-quantifier case, since all 2-quantifier O(1)-dimensional total order properties can be solved in near-linear time $\tilde{O}(n)$ (Theorem 3), and all k-quantifier properties with k > 3 can be reduced to the 3-quantifier case via brute forcing (Corollary 5). Focusing on $\mathsf{PTO}_{k,d}$, we obtain the following results (see Table 1):
1. For existentially quantified pure total ordering properties (denoted by $\mathsf{PTO}_{\exists\exists\exists,d}$), we give an $\tilde{O}(n^{2\omega/(\omega+1)}) = \tilde{O}(n^{1.407})$ time algorithm and identify the well-studied triangle detection problem in sparse graphs as a complete problem.
2. For the quantifier structure ∀∃∃, we also give an $\tilde{O}(n^{2\omega/(\omega+1)}) = \tilde{O}(n^{1.407})$ time algorithm.
3. For the quantifier structure ∃∀∃, we give improved nondeterministic and co-nondeterministic algorithms, ruling out completeness of such problems for $\mathsf{PTO}_{3,d}$ under NSETH.
4. For the quantifier structure ∃∃∀, we show that the Vector Concatenated Non-Domination problem $\mathsf{VCND}_d$ (defined below) is complete; under the 3-uniform hyperclique hypothesis, it requires time $n^{2-o(1)}$.
Note that this covers all quantifier structures for k = 3, as deciding a sentence $Q_1 Q_2 Q_3\, \phi$ is equivalent to deciding its negation $\bar{Q}_1 \bar{Q}_2 \bar{Q}_3\, \neg\phi$ with complemented quantifiers. These results identify the $\mathsf{VCND}_d$ problem as the essentially only candidate (up to fine-grained equivalence) to be complete for $\mathsf{PTO}_{3,d}$ under NSETH: It is complete for ∃∃∀, and all problems with a different 3-quantifier structure have either improved deterministic or (co-)nondeterministic algorithms, and thus cannot be complete without major consequences in fine-grained complexity. It remains a challenge to prove or disprove completeness of $\mathsf{VCND}_d$ for $\mathsf{PTO}_{3,d}$ (beyond its completeness for $\mathsf{PTO}_{\exists\exists\forall,d}$).
Since the above results motivate $\mathsf{VCND}_d$ as a central problem for $\mathsf{PTO}_{k,d}$, we work towards algorithmic improvements for this problem. In particular, we obtain an $\tilde{O}(n^{2-\frac{1}{2d}})$-time algorithm for VCND whenever vectors in X have dimension 2 and vectors in Y have dimension d (or vice versa). Note that obtaining such an $O(n^{2-\varepsilon(d)})$-time algorithm with $\varepsilon(d) > 0$ for general $\mathsf{VCND}_d$ would refute the 3-uniform hyperclique hypothesis by our conditional lower bound and completeness result.
Finally, we show that our algorithmic results extend to the class $\mathsf{TO}_{k,d}$ (see Sect. 3 for details), while all hardness results trivially apply, since they are already proven for the subclass $\mathsf{PTO}_{k,d}$. Generally speaking, this shows that the database setting (with additional sparse relations) does not increase the fine-grained complexity compared to the geometric setting of purely total ordering properties.

Previous Work
This work continues a relatively new direction, fine-grained complexity of complexity classes. Fine-grained complexity aims to not only qualitatively classify problems as "easy" or "hard", but (to the extent possible) pin-point their exact complexities. We now have a wide collection of standard algorithmic problems where any significant improvements in algorithmic running time would refute one or more conjectures about well-studied problems, such as the k-SUM problem [27], All Pairs Shortest Paths [3,40,49], SAT [34,45], or Orthogonal Vectors [1, 2, 6, 9, 12-14, 16, 38, 43]. Recent work in fine-grained complexity has gone from considering problems one at a time to following traditional complexity in considering classes of problems. Fine-grained reductions often cut across the usual complexity classes (with reductions from NP-complete problems to first-order properties, for example), but on the other hand, fine-grained complexity distinguishes between problems with the same traditional complexities (e.g., two different NP-complete problems might have very different properties in fine-grained complexity). Nevertheless, there are now a number of classes of problems, grouped by logical structure or common format, whose fine-grained complexity is at least partially understood: dense first-order properties [48]; sparse first-order properties [15,17,30]; several extensions of first-order [29]; and certain formats of dynamic programming problems [28,38].
The most closely related previous work to our results are [30,48]. Both of these papers consider the class of first-order definable properties, the first for the dense case (where each relation is given as a matrix, aka adjacency matrix format), and the second for the sparse case (where the input is given as a list of tuples in the relations, e.g., for graphs, adjacency list format). This class is natural both in terms of computational complexity, where it is the uniform version of $AC^0$ [31], and in database theory, because these are the queries expressible in basic SQL [7]. First-order logic can also express many polynomial-time computable problems: Orthogonal Vectors, k-Orthogonal Vectors, k-Clique, k-Independent Set, k-Dominating Set, etc. Not only were the likely complexities of the hardest problems (as a function of the number of quantifiers) given, but in the second paper, a natural complete problem was identified: the Orthogonal Vectors problem (OV). The conclusion was that substantial improvements are possible in the worst-case complexity of model checking for first-order properties if and only if the known algorithms for Orthogonal Vectors can be substantially improved. Using a recent sub-polynomial improvement in OV algorithms by [4,21], they obtained a similar improvement in model checking for every first-order property. [29] extends this work to related logics such as transitive closure logics, first-order logic on totally ordered sets, and first-order logic with function symbols. They show that model checking for first-order logic with a single total ordering is actually equivalent to that for unordered structures under fine-grained reductions. In contrast, we show that even for two orderings, the model checking problem becomes substantially harder, meaning we require new techniques to characterize the complexity of problems on multi-dimensional data.
There is also work on classes of problems that are related in spirit, but do not form a well-studied complexity class. V.-Williams and Williams [49] study problems related to shortest paths in graphs and show that many are subcubic-time equivalent. Künnemann et al. [38] study dynamic programming problems with a similar structure and give a unified treatment of their fine-grained complexities. Gao [28] extends this class of dynamic programming problems from lines to tree-like structures such as bounded treewidth graphs.

Preliminaries
The following notion of fine-grained reductions was introduced in [49].
Definition 1 (Fine-grained reduction) Let $(\Pi_1, T_1(m)) \le_{FGR} (\Pi_2, T_2(m))$ denote that for every $\varepsilon > 0$ there is a $\delta > 0$ and a Turing reduction from $\Pi_1$ to $\Pi_2$ so that the time for the reduction (not counting oracle calls) is $O(T_1(m)^{1-\delta})$ and $\sum_q (T_2(|q|))^{1-\varepsilon} = O(T_1(m)^{1-\delta})$, where the sum is over all oracle calls q made by the reduction on an instance of size m.
In other words, if there is some $\varepsilon > 0$ such that problem $\Pi_2$ is in TIME$((T_2(m))^{1-\varepsilon})$, then problem $\Pi_1$ is in TIME$((T_1(m))^{1-\delta})$ for some $\delta > 0$; i.e., if $\Pi_2$ can be solved substantially faster than $T_2$, then $\Pi_1$ can be solved substantially faster than $T_1$. If both $T_1$ and $T_2$ are $\Theta(m^2)$, the reduction is called a subquadratic reduction. We say that $\Pi_1$ and $\Pi_2$ are fine-grained equivalent if there is a fine-grained reduction from $\Pi_1$ to $\Pi_2$ and vice versa.
We use this notation not only for single problems but also for classes of problems. Let $C_1$ and $C_2$ be classes of problems. We write $(C_1, T_1(m)) \le_{FGR} (C_2, T_2(m))$ if for every problem $\Pi_1 \in C_1$ there is a $\Pi_2 \in C_2$ so that $(\Pi_1, T_1(m))$ fine-grained reduces to $(\Pi_2, T_2(m))$.

Details on $\mathsf{PTO}_{k,d}$ and $\mathsf{TO}_{k,d}$
In this paper, we consider the fine-grained complexity of model-checking problems definable in first-order logic on structures with d binary relations, where each binary relation is a total pre-order of the universe (i.e., transitive, reflexive, total, but not necessarily anti-symmetric).

Total orders. We use $x \le_i y$ to represent the i-th relation in our family holding between x and y. Such a relation is dense, holding for $\Theta(n^2)$ pairs of elements. However, we can represent such a relation succinctly, by giving an array which for each element specifies its rank in a list sorted by the ordering relation (with some elements having the same rank, if inequality holds in both directions). It is in this succinct format that ordering relations are described for our problems. Equivalently, we may represent all ordering relations by representing each object x as a d-dimensional vector $(x_1, \ldots, x_d)$, where $x_i$ denotes the rank of x in the i-th ordering relation. Thus, it is equivalent to write $x \le_i y$ or $x_i \le y_i$, and we will switch between these two based on which seems clearer for the given circumstance.
The vectors we get in this way are very special, in that the coordinates are always positive integers from 1 to n. However, problems defined on d-dimensional vectors over any totally ordered domain (such as $\mathbb{R}$) also fall into our setting. This is because we still have only n vectors from that domain, and in O(n log n) time we can replace each $x_i$ with its rank in the set of i-th coordinates of all vectors.
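The reduction to rank space mentioned above can be sketched as follows (a simple $O(d \cdot n \log n)$ routine; names are ours). Equal coordinate values receive equal ranks, so the outcome of every comparison, including equalities, is preserved:

```python
def to_rank_space(vectors):
    """Replace each coordinate by its rank among the i-th coordinates
    of all vectors.  Ranks run from 1 upward; equal values get equal
    ranks, so all comparison outcomes (<, =, >) are preserved and any
    purely comparison-based property is unaffected."""
    n = len(vectors)
    d = len(vectors[0])
    ranked = [[0] * d for _ in range(n)]
    for i in range(d):
        vals = sorted(set(v[i] for v in vectors))
        rank = {v: r + 1 for r, v in enumerate(vals)}
        for j in range(n):
            ranked[j][i] = rank[vectors[j][i]]
    return ranked
```

After this preprocessing, all coordinates are small integers, which is what the reductions and data structures later in the paper assume.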
Unary relations. We also allow unary relations, or, equivalently, comparisons to constants. More precisely, any unary relation U is represented as a list of objects for which U holds. Apart from allowing us to put objects into categories (sometimes called colored properties), this enables us to express comparisons of coordinates with constants: To express whether $x \le_i \gamma$ for some constant γ, we introduce a unary relation symbol $U_i^{\le \gamma}$ that holds for all x with $x_i \le \gamma$. Thus from now on, it suffices to declare constants γ explicitly, and afterwards we may express arbitrary comparisons like $x_i = \gamma$ or $x_i > \gamma$. Note that since we always consider fixed formulas ψ, each considered property will use O(1) constants for comparisons.
Definition of $\mathsf{PTO}_{k,d}$. We denote by $\mathsf{PTO}_{k,d}$ the class of purely total ordering model-checking problems for first-order formulas in pre-orderings and unary relations specified as above, where the formula has d distinct ordering relations and k total occurrences of quantifiers. $\mathsf{PTO}_k$ is the union of $\mathsf{PTO}_{k,d}$ over all constants d. We can further divide $\mathsf{PTO}_k$ into $2^k$ sub-classes based on the quantifier structure; for example, $\mathsf{PTO}_{\exists\exists\exists}$ is the sub-class of $\mathsf{PTO}_3$ where the model-checking problems are for formulas of the form $\exists x \exists y \exists z\, \phi(x, y, z)$ where φ is quantifier-free. We let n be the size of the universe of the structure, which is also, up to constant factors, the size in terms of O(log n)-bit words required to specify all total pre-orderings and unary relations. Algorithm time for problems in $\mathsf{PTO}$ is thus measured in terms of n. In this format, it is a constant-time operation to evaluate whether any relation is true or false for specified elements.
Definition of $\mathsf{TO}_{k,d}$. We generalize $\mathsf{PTO}_{k,d}$ to the class $\mathsf{TO}_{k,d}$ by also allowing the formula and models to have any constant number of sparse relations of any constant arity. These are specified as lists of the tuples for which the relation holds. Let the problem size be denoted by m, which is equal to the sum of the number of elements n and the number of tuples.
We assume all algorithms start with quasi-linear time preprocessing steps to create data structures such as hash tables or binary search trees that allow fast determination (constant or logarithmic time) of whether a relation holds for given elements, and that allow one to list the tuples in a relation containing a given element in poly-logarithmic time plus poly-logarithmic time per such tuple.
On the difference. $\mathsf{PTO}_{k,d}$ is a more "geometric" class of problems, and so it is interesting when we can reduce combinatorial problems to this class. Therefore, we will focus on these classes when giving conditional hardness results. $\mathsf{TO}_{k,d}$ is closer to the type of problems that might arise in applications such as database queries. Therefore, we will focus on this class when giving algorithmic results.

As running examples, we say that a vector u dominates a vector v if $u_i \ge v_i$ for each coordinate i, and denote this by $u \ge_{dom} v$. Furthermore, given a set of d-dimensional real vectors A, we say that A is Pareto optimal if there are no distinct $a, a' \in A$ so that $a'$ is coordinate-wise at least as large as a.
• Vector Domination Problem (see, e.g., [18,32]): Given two sets of d-dimensional real vectors A and B, are there two vectors $u \in A$ and $v \in B$ such that $u \ge_{dom} v$? In small dimensions, this problem turns out to be equivalent to the low-dimensional Orthogonal Vectors problem by a recent result of Chan [18].
• Pareto Optimality Verification (see, e.g., [33]): Given a set A of vectors, determine if A is Pareto optimal.
From the definitions, both problems are in $\mathsf{PTO}_{2,d}$. As we will see, they can be solved in time $O(n \log^{d-1} n)$. For superconstant dimension d, [18,32] give further improvements.
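For intuition, in the special case d = 2, Pareto optimality can be verified by a single sweep after sorting, with no range-search machinery at all. A hedged sketch (function name ours), following the definition above, under which duplicate points already violate Pareto optimality:

```python
def is_pareto_optimal(points):
    """Verify Pareto optimality of 2-dimensional points in O(n log n).
    After sorting by (-x, -y), every earlier point has first coordinate
    at least as large; hence the current point is dominated by some
    distinct earlier point iff its second coordinate is at most the
    maximum second coordinate seen so far."""
    pts = sorted(points, key=lambda p: (-p[0], -p[1]))
    best_y = float('-inf')
    for x, y in pts:
        if y <= best_y:
            return False  # a distinct point with >= x and >= y exists
        best_y = y
    return True
```

The general $O(n \log^{d-1} n)$ bound replaces the running maximum by a (d−1)-dimensional orthogonal range structure.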

Conjectures From Fine-Grained Complexity
We list the fine-grained hardness assumptions used in this paper. While some of these assumptions imply others (or are implied by them), they might turn out to be very different: It is conceivable, e.g., that SETH turns out to be false, while the Orthogonal Vectors Conjecture might indeed hold. Hence, we aim to classify our conditional hardness results by the weakest hypothesis that suffices. Specifically, we use the following hypotheses:

SAT hypotheses
• Strong Exponential Time Hypothesis (SETH) [34]: For all $\varepsilon > 0$, there exists a k such that k-CNF-SAT cannot be solved in time $O(2^{(1-\varepsilon)n})$.
• Nondeterministic Strong Exponential Time Hypothesis (NSETH) [17]: For every $\varepsilon > 0$, there exists a k so that k-TAUT is not in NTIME$(2^{(1-\varepsilon)n})$, where k-TAUT is the language of all k-DNF formulas which are tautologies, i.e., always true.

OV hypotheses
The Orthogonal Vectors problem (OV) is defined as follows: Given sets A, B of n vectors in $\{0,1\}^d$, determine whether there exists an orthogonal pair $a \in A$, $b \in B$, i.e., a pair such that in each coordinate i, the product $a_i \cdot b_i$ is equal to 0. While this problem is clearly solvable in time $O(n^2\, \mathrm{poly}(d))$, it is conjectured that we cannot achieve strongly subquadratic running time:
• Low-Dimension OV Conjecture: For all $\varepsilon > 0$, there is a C so that there is no $O(n^{2-\varepsilon})$-time algorithm for OV with dimension $d = C \log n$. This is implied by SETH by [34] and [47].
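For reference, the trivial quadratic-time baseline that the conjecture asserts is essentially optimal (at dimension C log n) looks as follows:

```python
from itertools import product

def has_orthogonal_pair(A, B):
    """Naive O(n^2 * d) check for Orthogonal Vectors: a pair (a, b)
    is orthogonal iff a_i * b_i == 0 in every coordinate i."""
    return any(all(x * y == 0 for x, y in zip(a, b))
               for a, b in product(A, B))
```

The Low-Dimension OV Conjecture states that no algorithm beats this by a polynomial factor once d = C log n for a sufficiently large constant C.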
• Moderate-Dimension OV Conjecture (MDOVC) [30]: For all $\varepsilon > 0$, there is no $O(m^{2-\varepsilon})$-time algorithm for OV, where m is the total Hamming weight of all the input vectors.
Relevant for our work is also the correspondence of OV to model-checking first-order properties: • First-order property conjecture (FOPC) [30]: There exists an integer k ≥ 2 so that there is a (k + 1)-quantifier first-order property that cannot be decided in time $O(m^{k-\varepsilon})$ for any $\varepsilon > 0$ (here structures are over a universe of size n, and a list of constant-arity relations over these structures is given; the total size of the relations is m). This is equivalent to MDOVC [30].
Finally, we also use the following generalization of OV to a problem with conjectured complexity $n^{k \pm o(1)}$: The k-Orthogonal Vectors (k-OV) problem asks to determine, given k sets $A_1, \ldots, A_k$ of vectors in $\{0,1\}^d$, whether there are $a^{(1)} \in A_1, \ldots, a^{(k)} \in A_k$ such that in each coordinate i, the product $a^{(1)}_i \cdots a^{(k)}_i$ is equal to 0. • k-OV Conjecture: For all $\varepsilon > 0$, there is a C so that there is no $O(n^{k-\varepsilon})$-time algorithm that solves k-OV with dimension $d = C \log n$. This is implied by SETH [47].

Hitting Set hypothesis
The Hitting Set problem is defined as follows: Given two families of subsets over the same universe U, is there a set in the first family that has non-empty intersection with each set in the second family? Equivalently, given two sets A, B of vectors in $\{0,1\}^d$, determine whether there is an element $a \in A$ such that for all $b \in B$ there is some index i with $a_i = b_i = 1$. • Hitting Set Conjecture: For all $\varepsilon > 0$, there is a C so that there is no $O(n^{2-\varepsilon})$-time algorithm for Hitting Set with dimension $d = C \log n$. The Hitting Set Conjecture implies the Low-Dimension OV Conjecture [5], but there are reasons to believe it is not implied by SETH [17].

Hyperclique hypothesis
• h-uniform k-HyperClique Hypothesis: Let $k > h > 2$ be integers. The h-uniform k-HyperClique Hypothesis states that for no $\varepsilon > 0$ can we detect a k-clique in an h-uniform hypergraph on n nodes in time $O(n^{k-\varepsilon})$; see [40] for a detailed discussion of its plausibility and [2,15,37,40] for recent applications.
For all of these conjectures, complexity is measured in the word RAM model with O(log n) bit words.

The VCND Problem
We formally define what is perhaps the most important problem for $\mathsf{PTO}_{k,d}$.

Definition 2 (Vector Concatenated Non-Domination) Given a set X of $d_1$-dimensional vectors, a set Y of $d_2$-dimensional vectors, and a set Z of $(d_1+d_2)$-dimensional vectors, determine whether there are $x \in X$ and $y \in Y$ such that $x \circ y$ is not dominated by any $z \in Z$, where $x \circ y \in \mathbb{Z}^{d_1+d_2}$ denotes the concatenation of x and y. We denote by $\mathsf{VCND}_d$ the special case of $d_1 = d_2 = d$.

We can view VCND as first constructing the set $X \circ Y = \{x \circ y \mid x \in X, y \in Y\}$ and then asking whether some element of $X \circ Y$ is not dominated by any $z \in Z$. There are other operations which could replace concatenation, such as the coordinate-wise max operation $\mathrm{Max}(X, Y)$.
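A brute-force $O(|X| \cdot |Y| \cdot |Z| \cdot d)$ implementation of VCND directly mirrors the definition; a sketch (names ours):

```python
def vcnd_naive(X, Y, Z):
    """Brute-force VCND: is there x in X and y in Y whose concatenation
    is not dominated by any z in Z?  (u is dominated by v iff u_i <= v_i
    in every coordinate i.)  Vectors are tuples, so x + y concatenates."""
    def dominated(u, v):
        return all(ui <= vi for ui, vi in zip(u, v))
    for x in X:
        for y in Y:
            w = x + y  # concatenation x ∘ y
            if not any(dominated(w, z) for z in Z):
                return True
    return False
```

The conditional lower bound discussed above asserts that, for general dimension d, no algorithm improves on the quadratic dependence on |X| · |Y| by a polynomial factor.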

Relationships to Other Classes
For fine-grained complexity, the representation of the input is significant. In considering the complexity of dense first-order properties, we view the input as a matrix or tensor representing each relation; for binary relations, this is the familiar adjacency matrix representation for graphs. While the relations are not necessarily dense, the algorithms cannot assume or utilize sparsity. The "sparse" version of the same properties represents the input relations as lists of tuples where the relation holds, generalizing the adjacency list representation for graphs. The input is not necessarily sparse, but the algorithm is allowed more time for denser instances, so sparse instances are the most difficult ones. Total order relations are intermediate between "dense" and "sparse" relations: while they are actually dense, containing a quadratic number of pairs, they can be succinctly represented by the sorted list. In particular, total orders can be obtained by performing the transitive closure operation on the sparse "successor" relation. So our hardness results also imply hardness for sparse first-order logic augmented by transitive closures, a class considered in [29].

Technical Overview
In this section, we give the main ideas for all of our results; see Table 1 for an overview. One of our main results is an upper bound on model-checking sentences in $\mathsf{PTO}_{k,d}$ and $\mathsf{TO}_{k,d}$.

Theorem 3
There is an algorithm running in time $O(n \log^{d-1} n)$ for model-checking a two-quantifier formula $Q_1 x\, Q_2 y\, \phi(x, y)$ with d ordering relations and unary predicates.
Specifically, we derive this result from the following lemma, which we prove via a reduction to orthogonal range counting.

Lemma 4 Given a formula ϕ(x, y) with d ordering relations and unary predicates and two sets X, Y of vectors in $\mathbb{R}^d$, there is an $O(n \log^{d-1} n)$-time algorithm that returns an array A indexed by each $x \in X$ so that A[x] is the number of $y \in Y$ such that ϕ(x, y) is true.
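To illustrate the flavor of Lemma 4 in a concrete special case (our own example, not the paper's proof): for the 2-dimensional property $\phi(x, y) \equiv (y \le_1 x \wedge y \le_2 x)$, the counts can be computed in O(n log n) by a sweep over the first coordinate combined with a Fenwick tree over ranks of the second coordinate:

```python
def dominance_counts(X, Y):
    """For each point x in X, count the points y in Y with y_1 <= x_1
    and y_2 <= x_2, in O(n log n) total time: sweep all points by first
    coordinate, inserting Y-points into a Fenwick tree indexed by the
    rank of the second coordinate, and answering each X-point by a
    prefix-sum query."""
    ys = sorted(set(p[1] for p in X) | set(p[1] for p in Y))
    rank = {v: i + 1 for i, v in enumerate(ys)}
    tree = [0] * (len(ys) + 1)

    def add(i):
        while i <= len(ys):
            tree[i] += 1
            i += i & -i

    def query(i):  # number of inserted points with rank <= i
        s = 0
        while i > 0:
            s += tree[i]
            i -= i & -i
        return s

    events = [(y[0], 0, y) for y in Y] + [(x[0], 1, x) for x in X]
    # insert Y-points before answering queries at equal first coordinate
    events.sort(key=lambda e: (e[0], e[1]))
    counts = {}
    for _, kind, p in events:
        if kind == 0:
            add(rank[p[1]])
        else:
            counts[p] = query(rank[p[1]])
    return counts
```

The general lemma handles an arbitrary Boolean combination of comparisons by decomposing it into orthogonal range counting queries, with one extra logarithmic factor per additional dimension.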
Combining the above theorem with exhaustive search over the first k − 2 quantifiers yields the general $O(n^{k-1} \log^{d-1} n)$ upper bound for $\mathsf{PTO}_{k,d}$ (Corollary 5).

If we have additional explicitly represented relations, more work is required. For such cases, throughout the paper, we will always assume that these relations are sparse, i.e., the total input size is m = O(n). In this case, we obtain the same asymptotic running time.

Theorem 6 Model-checking formulas in $\mathsf{TO}_{k,d}$ is in time $O(m^{k-1} \log^{d-1} m)$.
The idea is to reduce the problem to the purely totally ordered case by assuming that all sparse relations are empty; using Lemma 4 for the 2-quantifier case, we can obtain for each x the number of y satisfying the condition. We then repair these counts to the true values by iterating over the additional sparse relations, similar to the baseline algorithm in [30].
We prove our baseline algorithms in Sect. 4. Note that in Sect. 3.4, we discuss a lower bound proving these baseline algorithms to be conditionally optimal under fine-grained hardness assumptions.
In the remainder of the section, we distinguish our results based on the quantifier structure. Since any k-quantifier formula with k > 3 reduces to the 3-quantifier setting via brute force over the first k − 3 quantifiers, we only consider 3-quantifier structures.

Quantifier Structures Ending in ∃∃∃
Recall that, informally, we call a problem complete for a class if it is contained in the class and model-checking any sentence in the class reduces to our problem. For sentences in PTO_{k,d} ending in ∃∃∃, we show that detecting triangles in a sparse graph is complete for this class. By current running time bounds for the problem [10], we obtain a running time of Õ(n^{2ω/(ω+1)}) = Õ(n^{1.407...}).

Theorem 7 The triangle detection problem in sparse graphs is fine-grained equivalent to a problem that is complete for model-checking ∃∃∃ formulas with only ordering relations and unary relations.
More precisely, the following ordering property is shown to be complete: given three sets X, Y, Z of 3-dimensional vectors, decide ∃x∃y∃z : x_1 = z_1 ∧ x_2 = y_2 ∧ y_3 = z_3, which is easily seen to be equivalent to triangle detection in sparse graphs. Intuitively, we reduce to this problem as follows: Given a formula ∃x∃y∃z φ(x, y, z), we can determine whether φ(x, y, z) holds once we know all comparisons between x, y, z in each dimension i. A challenge here is to reduce comparisons like x_i < y_i to an equality check: Similar to a trick used in [50], we do this by guessing the highest-order bit of divergence between x_i and y_i to obtain a "proof" only involving equalities; since we may assume that 1 ≤ x_i, y_i ≤ n (by working in rank space), there are only O(log n) choices for a single comparison. The key observation is that the quantifier structure is sufficiently well behaved to make this reduction work: we only need to guess these bits of divergence for O(d) many comparisons and can express correctness of all proofs for comparisons between x and z using equality on the first dimension, between x and y using the second dimension, and between y and z using the third dimension. In total, this results in an admissible blow-up of log^{O(d)} n. We prove the result in Sect. 5.1.1.
We turn to the setting with additional sparse relations, i.e., formulas in TO_{∃∃∃,d}. Here we establish the triangle counting problem in sparse graphs as hard for the class. Since the approach of [10] also gives a counting algorithm in the same running time as detection, we establish the same algorithmic upper bound. Handling the additional sparse relations is highly non-trivial. In particular, to obtain our result, we first show that the triangle counting problem is hard for model-counting ∃∃∃ formulas in the sparse setting of [30], which is interesting in its own right. For the proof, we refer to Sect. 5.1.2.
Since triangle detection is a classical problem, improving the bound of Õ(n^{1.407}) for ∃∃∃ structures already in the purely total ordering case would be a major algorithmic result.

Quantifier Structures Ending in ∀∃∃
For quantifier structures ending in ∀∃∃, we obtain a hard problem: We show that every problem in TO_{∀∃∃,d} (and thus also PTO_{∀∃∃,d}) reduces to that of determining, for each edge in a sparse graph, how many triangles contain this edge; we call this problem Edgewise Triangle Counting (ETC). Again, currently the best algorithm for this problem is essentially the same as that for triangle detection and counting [10].

Theorem 9 Edgewise Triangle Counting is hard for model-checking TO_{∀∃∃,d} formulas.
Since the high-level arguments for this result build substantially on the completeness result for TO_{∃∃∃,d} given in the previous section, we defer a discussion of the techniques to Sect. 5.1.3, where we give the proof.

Quantifier Structures Ending in ∃∀∃
For the quantifier structure ∃∀∃, we are unable to establish a complete problem. However, this quantifier structure admits (co-)nondeterministic algorithms that are faster than the baseline algorithm. The main idea is as follows: Consider any ∃x∀y Qz φ(x, y, z) property. For the nondeterministic algorithm, we simply (nondeterministically) guess x and solve the remaining 2-quantifier problem ∀y Qz φ(x, y, z) in time O(n log^{d−1} n) using the baseline algorithm. For the co-nondeterministic algorithm, we need to verify that ∀x∃y Qz φ(x, y, z). Here, for every x, we (nondeterministically) guess a witness y_x and solve the remaining Qz φ(x, y_x, z) formula using the approach of Theorem 3.
For the case of total ordering properties with additional sparse relations, this approach is not directly applicable: if, e.g., all guessed witnesses y_x happen to participate in many tuples of the sparse relations, we have to repeatedly solve problems with a large input size. We remedy this problem by taking care of such large-degree witnesses y_x explicitly; while this incurs a certain slow-down, we can limit it to a factor of O(√n).

Theorem 11 Model-checking formulas in TO_{k,d} ending in ∃∀∃ can be done in nondeterministic and co-nondeterministic time O(m^{k−3/2} log^{d−1}(m)).
We prove the above (co-)nondeterministic algorithms in Sect. 5.2. As a consequence of the above nondeterministic algorithms, assuming NSETH [17], we cannot establish hardness beyond n^{k−2−o(1)} for PTO_{∃∀∃,d} using deterministic SETH-based reductions. However, by reducing from a problem with low (co-)nondeterministic complexity, specifically the Hitting Set conjecture [5], we can give a conditional lower bound already for PTO_{∃∀∃,d} (as d → ∞) that matches our baseline algorithm. The proof of this result is reminiscent of some reductions in [24] and is given in Sect. 6. We reduce from Hitting Set (given sets of vectors A, B ⊆ {0, 1}^{c log n} for arbitrary c, determine whether some a ∈ A is non-orthogonal to all b ∈ B) to a formula ∃x∀y∃z ψ(x, y, z) as follows: We think of x ranging over vectors a ∈ A, of y ranging over b ∈ B, and of z as a "proof" of the fact that a, b are non-orthogonal, given by a prover Merlin. There is a trade-off between the size of the proofs and the dimension required to represent the vectors, which we set in a way that bounds the number of possible proofs to O(n), resulting in a dimension d growing only with c (independently of n).
We also give a conditional lower bound from SETH for k > 3 that matches the NSETH barrier following from the (co-)nondeterministic algorithms. Notably, this lower bound already applies to dimension d = 2.
Theorem 13 Assuming SETH, model-checking formulas in PTO_{k,2} ending in ∃∀∃ requires time Ω(n^{k−2−ε}) for any ε > 0.
We reduce the k-Orthogonal Vectors problem to an ∃^k∀∃-quantified 2-dimensional formula. Intuitively, the first k existential quantifiers choose k vectors, the ∀-quantifier ranges over all vector-dimensions to test, and crucially, the final ∃-quantifier enables guessing which of the k vectors has a 0-coordinate in this vector-dimension. Here, the final ∃-quantifier is instrumental in making the formula's dimension independent of the vector dimensions. We give the full proof in Sect. 6.

Quantifier Structures Ending in ∃∃∀
For sentences in PTO_{k,d} ending in ∃∃∀, we obtain the complete problem VCND_d: Given three sets of vectors X, Y and Z of dimension d, d and 2d, respectively, determine if there is an x ∈ X and a y ∈ Y so that their concatenation x ∘ y is not dominated by any z ∈ Z.

Theorem 14 For all d, there exists a d′ such that VCND_{d′} is complete for model-checking ∃∃∀ formulas in PTO_{k,d}.
This is one of our most interesting results, proven in Sect. 5.3. We reduce a formula ∃x ∈ X ∃y ∈ Y ∀z ∈ Z : ψ(x, y, z) to VCND_{d′} as follows: We carefully divide all pairs in X × Y into instances (X_1, Y_1), …, (X_L, Y_L) such that for each instance (X_ℓ, Y_ℓ), all comparisons x_i < y_i, x_i = y_i, x_i > y_i for all dimensions i have the same outcome among pairs x ∈ X_ℓ, y ∈ Y_ℓ. Thus, for each ℓ, we may simplify ψ to a formula ψ_ℓ not involving comparisons between x and y. In particular, we may express ψ_ℓ in CNF, where each clause is a disjunction of {<, ≤, ≥, >}-comparisons between x_i and z_i or between y_i and z_i (in some dimension i). Since all such clauses need to be fulfilled simultaneously, for each z ∈ Z and clause C, we introduce some z_C chosen such that the clause C is falsified if and only if x ∘ y is dominated by z_C.
We show a matching conditional lower bound of n^{k−o(1)} for PTO_{∃^k∀,d} under the 3-uniform hyperclique hypothesis. We use the first k quantifiers to represent a choice of clique nodes, each represented in its own dimension, and use the ∀-quantifier to check that no forbidden configuration is used (a non-edge in the given hypergraph). Naively, this would create Θ(n^h) rather than O(n) objects, which we remedy by reducing from finding hypercliques of size hk (rather than k). The proof is given in Sect. 6.
We also establish a SETH-based lower bound directly for VCND_d. The reduction (given in Sect. 6) is very similar to our Hitting-Set-based lower bound for ∃∀∃-structures.

Theorem 16
Assuming SETH, for every ε > 0, there is a d such that VCND_d requires time Ω(n^{2−ε}).
Specialized algorithm for VCND_d Since our completeness results establish VCND_d as a central problem for the study of PTO_{k,d}, we consider special cases of the problem in Sect. 7. In particular, if X contains vectors of dimension 2 and Y contains vectors of dimension d, we give a faster algorithm, which uses the Erdős–Szekeres theorem as its main ingredient. We use this theorem to extract lists of vectors so that, restricted to any single dimension, the vectors appear in monotonically increasing or decreasing order. This way, the vectors that dominate some fixed vector x form an interval, which allows us to take advantage of fast segment trees that solve an interval covering problem.

Baseline Algorithms
In this section, we give our baseline algorithms via a reduction to orthogonal range counting. We note that we do not aim to optimize logarithmic factors.

Lemma 4 Given a formula ϕ(x, y) with d ordering relations and unary predicates and two sets X, Y of vectors in R^d, there is an O(n log^{d−1} n) time algorithm that returns an array A indexed by each x ∈ X so that A[x] is the number of y ∈ Y such that ϕ(x, y) is true.
Proof Consider a fixed x in the domain. The task is to count the number of y such that ϕ(x, y) is satisfied. Assume the unary relations in the vocabulary are R_1, …, R_k. The truth value of ϕ(x, y) depends on two factors: the order between x and y in each of the d dimensions, and the unary relations among R_1, …, R_k satisfied by y. We denote the first by a vector α and the second by a vector β. Since k and d are constant, there are finitely many possibilities for α and β, so we may consider them all. So, when we find an α and β that make ϕ(x, y) true, we can use orthogonal range counting to count the vectors y whose order with x is given by α and whose unary relations are given by β.
Formally, to compute |{y | ϕ(x, y) is satisfied}|, for each (α, β) that satisfies ϕ(x, y), we count the number of y ∈ Y such that their order with x is given by α and the unary relations they satisfy are given by β. Then, we sum these values over every (α, β) that satisfies ϕ(x, y).
Specifically, observe that in linear time, we can compute, for each β ∈ {0, 1}^k, the set Y_β of vectors with unary relations given by β. To count, given α ∈ {1, 0, −1}^d, the number of y ∈ Y_β so that their order with x is α, we apply standard orthogonal range counting [22, 35]. If α_i = 1, the range in the i-th dimension is [0, x_i). If α_i = 0, the range in the i-th dimension is {x_i}, and if α_i = −1, the range in the i-th dimension is (x_i, n]. For example, let x = (3, 4, 5) and α = (1, 0, −1). Then, we query the number of y ∈ Y_β that lie in the range [0, 3) × {4} × (5, n]. Such a query can be done in time O(log^{d−1} n), see [22, 35]. Overall, we can compute the total number of y satisfying ϕ(x, y) in time O(log^{d−1} n).
Performing this for each x ∈ X takes total time O(n log d−1 n).
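To make the range-counting core concrete, here is a minimal sketch for one fixed (α, β): counting, for each x, the vectors y strictly dominated by x in both coordinates of a 2-dimensional instance, via a sweep over the first coordinate with a Fenwick tree over the rank space of the second. The function name and interface are illustrative, not from the paper.

```python
from bisect import bisect_left

class Fenwick:
    """Fenwick (binary indexed) tree supporting point insert / prefix count."""
    def __init__(self, n):
        self.t = [0] * (n + 1)
    def add(self, i):                      # insert rank i (0-indexed)
        i += 1
        while i < len(self.t):
            self.t[i] += 1
            i += i & (-i)
    def prefix(self, i):                   # number of inserted ranks < i
        s = 0
        while i > 0:
            s += self.t[i]
            i -= i & (-i)
        return s

def dominance_counts(X, Y):
    """For each x in X, count y in Y with y[0] < x[0] and y[1] < x[1],
    in O(n log n) by sweeping the first coordinate."""
    ys = sorted(set(y1 for _, y1 in Y))
    rank = {v: i for i, v in enumerate(ys)}
    # kind 0 = query (an x), kind 1 = insert (a y); at ties, queries go
    # first, so equal first coordinates are not counted (strict inequality).
    events = [(x[0], 0, j) for j, x in enumerate(X)] + \
             [(y[0], 1, rank[y[1]]) for y in Y]
    events.sort()
    bit = Fenwick(len(ys))
    out = [0] * len(X)
    for _, kind, payload in events:
        if kind == 1:
            bit.add(payload)
        else:
            out[payload] = bit.prefix(bisect_left(ys, X[payload][1]))
    return out
```

For general d, the same sweep is layered with a (d−1)-dimensional range-counting structure, giving the O(log^{d−1} n) query time cited above.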
We obtain our baseline algorithm for purely total ordering relations by observing that the above information is sufficient to decide any PTO_{2,d} formula.

Theorem 3
There is an algorithm running in time O(n log^{d−1} n) for model-checking a two-quantifier formula Q_1 x Q_2 y ϕ(x, y) with d ordering relations and unary predicates.
Proof From Lemma 4, we can compute an array A indexed by vectors x ∈ X so that A[x] = #ϕ(x, ·) in time O(n log d−1 n). If Q 1 Q 2 is ∃∃, it is enough to check that #ϕ(x, ·) > 0 for some x ∈ X . Similarly, if Q 1 Q 2 is ∃∀, it is enough to check that #ϕ(x, ·) = |Y | for some x ∈ X . Both can be done by simply scanning the array. All other formulas are equivalent to one of these cases by negation.
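The final decision step from the proof above is a simple scan of the array; a minimal sketch, with a hypothetical 'E'/'A' encoding of the quantifiers (the ∀-cases follow from the negation argument in the proof):

```python
def decide_two_quantifier(A, size_y, q1, q2):
    """Decide Q1 x Q2 y phi(x, y) from the array A[x] = #{y : phi(x, y)}
    produced by Lemma 4; 'E' = exists, 'A' = forall."""
    counts = A.values()
    if (q1, q2) == ('E', 'E'):
        return any(c > 0 for c in counts)       # some x has a witness y
    if (q1, q2) == ('E', 'A'):
        return any(c == size_y for c in counts) # some x works for every y
    if (q1, q2) == ('A', 'E'):
        return all(c > 0 for c in counts)       # = not (E x A y not-phi)
    return all(c == size_y for c in counts)     # ('A', 'A')
```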

Corollary 5 Model-checking formulas in PTO_{k,d} is in TIME(n^{k−1} log^{d−1}(n)).
Proof Simply brute-force search over the first k − 2 quantifiers, then use the 2-quantifier algorithm that runs in time O(n log^{d−1}(n)). This takes time O(n^{k−1} log^{d−1}(n)).
In fact, we can extend these ideas to give a baseline algorithm for the class T O k,d .

Theorem 6 Model-checking formulas in TO_{k,d} is in TIME(m^{k−1} log^{d−1}(m)).
Proof We will exhaustively search over the first k − 2 quantifiers. Then, our plan is to count |{y | ϕ(x, y) is satisfied}| by separating into two cases: whether or not objects x and y appear together in a sparse relation. We create an auxiliary formula ϕ*(x, y) in which every sparse relation R(x, y) that appears in ϕ(x, y) is set to false. Then, we use the algorithm from Lemma 4 to compute an array A* with A*[x] = |{y | ϕ*(x, y) is satisfied}|. If y shares no relations with x, then ϕ(x, y) is true if and only if ϕ*(x, y) is true. However, if x and y appear together in some relation, then it is possible that ϕ*(x, y) and ϕ(x, y) have different truth values. Since our relations are sparse, we can correct this in O(m) time by exhaustively searching over the vectors y that appear in some relation with x. If ϕ(x, y) and ϕ*(x, y) are both true, we do not alter our current count. If ϕ(x, y) is true but ϕ*(x, y) is false, we increment our count by 1. Similarly, if ϕ(x, y) is false but ϕ*(x, y) is true, we decrement our count by 1. Lastly, if both ϕ(x, y) and ϕ*(x, y) are false, we do not alter the count. With this, we can compute the corrected array A indexed by x in time O(m log^{d−1} m). Therefore, given a sentence in TO_{k,d}, we exhaustively search over the first k − 2 quantifiers and then compute this array A. If the last two quantifiers are ∃∃, it is enough to check that some entry in the array is greater than 0, and if the last two quantifiers are ∃∀, it is enough to check that some entry in the array equals the size of the domain for the last quantifier. Overall this takes time O(m^{k−1} log^{d−1} m).
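The count-repair step can be sketched as follows; brute force stands in for the Lemma 4 subroutine, and phi(x, y, r) is a hypothetical interface that evaluates ϕ given the truth value r of a single sparse relation R(x, y):

```python
def counts_with_repair(X, Y, phi, R):
    """Compute A[x] = #{y : phi holds} where phi may consult a sparse
    relation R, given here as a set of (x, y) pairs."""
    # Step 1: counts under the auxiliary formula phi* (R assumed false
    # everywhere); in the paper this comes from Lemma 4.
    A = {x: sum(1 for y in Y if phi(x, y, False)) for x in X}
    # Step 2: repair the count at each of the O(m) pairs where R holds.
    for x, y in R:
        was, now = phi(x, y, False), phi(x, y, True)
        A[x] += int(now) - int(was)
    return A
```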

Completeness for Quantifier Structures
In this section, we give our completeness results for each quantifier structure, including evidence for why we do not expect any quantifier structure other than ∃∃∀ to contain a complete problem for the full classes PTO_{k,d}, TO_{k,d}.

Quantifier Structures Reducing to Triangle Problems
In this section, we characterize the complexity of problems for the class of ∃∃∃ and ∀∃∃ formulas with ordering relations. We show that for PTO_{∃∃∃}, the complexity of the hardest problem in the class is equivalent to that of triangle detection in sparse graphs. In other words, triangle detection is (equivalent to) a complete problem for this class. We show that every problem in TO_{∃∃∃} (i.e., when we also allow sparse relations in addition to orderings) reduces to the problem of counting triangles in sparse graphs. (Thus, the complexity of this class is somewhere between deciding whether a triangle exists and counting the number of such triangles. Currently, the best algorithms for these problems are identical [10], but there is no known proof of equivalence.) We show that every problem in TO_{∀∃∃} reduces to that of determining, for each edge in a sparse graph, how many triangles contain this edge. Again, currently the best algorithm for this problem is essentially the same as that for triangle detection and counting [10].
In particular, these results show that these classes are all decidable in time Õ(m^{1.41}). Hence, these quantifier structures are significantly easier to check than the others we consider (assuming SETH and the low-dimension Hitting Set conjecture).

Theorem 7 The triangle detection problem in sparse graphs is fine-grained equivalent to a problem that is complete for model-checking ∃∃∃ formulas with only ordering relations and unary relations.
Proof The triangle detection problem in sparse graphs asks, for a graph G = (V, E) given in adjacency list format, whether there exist x, y, z ∈ V such that (x, y), (y, z), (x, z) ∈ E. We first show that this problem is equivalent under exact-time-preserving reductions to an ordering problem with the same logical structure and dimension 3. The problem is: Given three sets of vectors A, B, C of dimension 3, are there a ∈ A, b ∈ B, and c ∈ C with a_1 = c_1, a_2 = b_2 and b_3 = c_3? To reduce triangle detection to this problem, assign the vertices names that are positive integers, i.e., we identify V with {1, …, n}. For each edge (x, y) ∈ E, we create vectors (x, y, 0) ∈ A, (0, x, y) ∈ B and (y, 0, x) ∈ C. Thus, the number of vectors is linear in the number of edges in our graph, and the sets of vectors can be created in linear time. If there is a triangle x, y, z in the graph, then (x, y, 0) ∈ A, (0, y, z) ∈ B, and (x, 0, z) ∈ C are three vectors satisfying the constraints. Conversely, any three vectors satisfying the constraints must be of the above form, so they must correspond to a triangle in the original graph.
In the reverse direction, for each vector (a_1, a_2, a_3) ∈ A, create vertices (1, a_1) and (2, a_2) if not already present and add an edge from vertex (1, a_1) to vertex (2, a_2). Similarly, we add edges from (2, b_2) to (3, b_3) for each vector b ∈ B, and from (1, c_1) to (3, c_3) for each vector c ∈ C. The graph created has a linear number of edges, and triangles correspond exactly to solutions to our problem. So this problem is equivalent to triangle detection.
Thus, the maximum complexity of predicates in PTO_{∃∃∃} is at least that of triangle detection, since it is equivalent to a member of this class.
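The edge-to-vector direction of this equivalence can be sketched as follows; E should contain both orientations of each undirected edge, and the quadratic brute-force check only illustrates the constraint a_1 = c_1, a_2 = b_2, b_3 = c_3, not the fast algorithm:

```python
from collections import defaultdict

def edges_to_vectors(E):
    """Map each directed edge (x, y) to (x, y, 0) in A, (0, x, y) in B,
    and (y, 0, x) in C, as in the reduction above."""
    A = [(x, y, 0) for x, y in E]
    B = [(0, x, y) for x, y in E]
    C = [(y, 0, x) for x, y in E]
    return A, B, C

def has_matching_triple(A, B, C):
    """Decide whether some a in A, b in B, c in C satisfy
    a1 = c1, a2 = b2, b3 = c3 (brute force, for illustration only)."""
    b_by_second = defaultdict(list)
    for b in B:
        b_by_second[b[1]].append(b)
    c_set = set(C)
    for a in A:
        for b in b_by_second[a[1]]:
            if (a[0], 0, b[2]) in c_set:   # candidate c = (a1, 0, b3)
                return True
    return False
```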
We next show how to reduce any ∃∃∃ pure ordering problem to triangle detection. The first step is to reduce such a problem to one with only equality checks.
Consider a formula of the form ∃x∃y∃z ϕ(x, y, z), where there are d ordering relations. Say that a < b for positive integers a and b, where a = a_k … a_0 in binary and b = b_k … b_0 in binary. Call the point of divergence the first j (starting at the high-order bits) such that b_j > a_j. Then a_k … a_{j+1} = b_k … b_{j+1} and a_j = 0 < b_j = 1. If a = b, we call −1 the point of divergence. We break up the possible triples x, y, z into cases based on their ordering in the d different orders and the point of divergence between their ranks for every pair in every order. Since the ranks are integers from 1 to n, there are 1 + log n possible points of divergence. Furthermore, since there are only two possible orderings for each pair per order, there are at most O(log^{3d} n) cases in total. We further break up into sub-cases based on which subsets of unary relations are true for x, y, z, which gives at most constantly many sub-cases per case.
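The point of divergence is simply the highest-order bit position at which the two ranks differ; a minimal helper:

```python
def point_of_divergence(a, b):
    """Index of the highest-order bit where a and b differ; -1 if a == b."""
    return (a ^ b).bit_length() - 1 if a != b else -1
```

For a < b with divergence j, the bits above position j agree, a has a 0 at position j, and b has a 1 there.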
We determine, for each case, whether there are x, y, z with those comparisons, points of divergence, and unary relations for which ϕ(x, y, z) holds. However, since each case specifies all comparisons and unary relations, ϕ(x, y, z) is either constantly true or constantly false within each case. So this simplifies to determining, for each sub-case where ϕ(x, y, z) is true, whether there is a triple (x, y, z) consistent with that sub-case. For this, we use the characterization above. First, for each vector, we discard it if it does not match the unary relations of the sub-case. Second, if the case specifies x_i < y_i and the point of divergence for the comparison is j, we discard x as a possibility if x_{i,j} ≠ 0 and y as a possibility if y_{i,j} ≠ 1, and the reverse if x_i > y_i.
For each non-discarded vector x, we create a new vector X of dimension 3, where in the second coordinate we concatenate, in order of i, all the strings x_{i,k} … x_{i,j_i+1}, where j_i is the point of divergence for x_i and y_i, and we do likewise for the points of divergence for x_i and z_i in the first coordinate. The third coordinate gets a default value such as −1. For y, we do likewise, putting the parts related to the points of divergence with x in the second coordinate and those with z in the third to create Y; for z, we put the parts related to x in the first coordinate and the parts related to y in the third.
Then, as above, we ask: is there a triple X, Y, Z so that X_1 = Z_1, X_2 = Y_2, and Y_3 = Z_3? If so, since all the strings concatenated have a fixed length, each concatenated string must be identical, so the corresponding x, y, z do have the orders and the points of divergence of the sub-case we are considering. Conversely, if x, y and z have those points of divergence, each concatenated string will be identical, so the equations will hold. As noted above, this problem is equivalent to triangle detection.
Thus, triangle detection is hard for PTO_{∃∃∃}, and the equivalent problem is complete for this class.

TO_{∃∃∃}
A more general statement of the construction above is:

Lemma 19 Let X, Y, Z be sets of vectors of constant dimension d. In time O(n log^{O(1)} n), we can construct a family of O(log^{O(1)}(n)) tripartite multi-graphs G_i so that:
1. For the tripartition of vertices into A, B, C of each graph G_i, each element x ∈ X corresponds to at most a single edge e_x between A and B, each element y ∈ Y corresponds to at most a single edge e_y between B and C, and each element z ∈ Z corresponds to at most a single edge e_z between A and C, and there are no other edges. Given x and i, one can compute e_x in constant time, or report that it does not exist.
2. Given i, we can compute in constant time a complete set of values for all unary relations on x, y, and z, and values for the order relations between x_j, y_j and z_j for 1 ≤ j ≤ d, so that for every triangle e_x, e_y, e_z in G_i, the elements x, y, z satisfy these relations.
3. For every triple x, y, z, the edges e_x, e_y, e_z form a triangle in exactly one G_i.
We will use this lemma to show:

Theorem 8 Every problem in TO_{∃∃∃,d} reduces to the problem of counting the number of triangles in a sparse graph via reductions that preserve time up to polylog factors.
Proof We will first show that the triangle counting problem is complete for the class #FO_3: given a quantifier-free formula ψ(x, y, z) with only sparse relations, count the number of solutions x, y, z.

Lemma 20
We can reduce a problem in #FO_3 to the case where we have to count the number of solutions for constantly many formulas, where each is a conjunction of positive relations.
Proof We first reduce from the general case to the case when ψ is a conjunction of relations and negated relations. We branch on all possible settings of the relations that could hold among x, y, z, and we count the number of triples satisfying each conjunction that satisfies ψ (i.e., we write ψ as a DNF of mutually exclusive terms and count each term). Then we add up the results. Second, we can reduce to the case when all relations appear positively. If ¬R(x, y) (or any other subset of variables) appears in the conjunction, we can write ψ = ¬R(x, y) ∧ ψ′(x, y, z), where ψ′ has strictly fewer negated relations, as does R(x, y) ∧ ψ′(x, y, z). If we count both the number of triples that satisfy ψ′ and the number that satisfy R(x, y) ∧ ψ′, their difference is the number that satisfy ψ. Thus, we can reduce any such counting problem with i ≥ 1 negated relations to two such problems with i − 1 negated relations. Applying this repeatedly, we reduce to the case with no negations.
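The negation-elimination recursion of Lemma 20 can be sketched as follows; brute-force counting over an explicit domain stands in for the linear-time subroutines, and relations are passed as Python callables:

```python
def count_conj(domain, pos, neg):
    """Count triples in `domain` satisfying every relation in `pos` and no
    relation in `neg`, eliminating negated relations one at a time via
    #(psi' and not R) = #(psi') - #(psi' and R)."""
    if not neg:
        return sum(1 for t in domain if all(r(*t) for r in pos))
    r, rest = neg[0], neg[1:]
    return count_conj(domain, pos, rest) - count_conj(domain, pos + [r], rest)
```

Each level of the recursion doubles the number of subproblems but removes one negation, so a conjunction with a constant number of negated relations yields constantly many all-positive counting problems, as the lemma states.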

Lemma 21 Sparse triangle counting is complete for #FO_3.
Proof We apply Lemma 20 and reduce to a set of counting problems which are conjunctions of positive relations. If the set of binary or higher-arity relations is empty, we can individually count in linear time the number of elements x that satisfy all unary relations R(x), and similarly for y and z, and return their product. If there is a relation that involves all three variables, we enumerate the triples satisfying that relation and compute the number of those that satisfy ψ. If there are no 3-ary relations, and the binary relations are only between two specific variables (e.g., x and y), we can enumerate all such pairs x, y, then separately count the z's that satisfy the unary relations and multiply these two counts. If there are specified relations on one of the variables, say x, and relations between x and y as well as x and z, but no relations between y and z, we can compute, for each x, both the number of consistent y's and the number of consistent z's in time proportional to the number of tuples containing x. Then, we multiply these counts and sum up the results. This takes O(m) time total. Finally, if there are relations specified between every pair of variables, we can use these to specify the edges of a tripartite graph where the vertices are the elements that satisfy the unary relations and the edges are pairs that satisfy all binary relations. This graph has at most O(m) edges, so it is an instance of sparse triangle counting of the same size as our original problem.

Now we return to the theorem. Given a formula ψ(x, y, z) with both ordering and sparse relations, we use the ordering and unary relations to construct the family of multi-graphs G_i as in the lemma. Because each triple x, y, z appears as a triangle in exactly one graph, it suffices to decide whether there is an i and a triple x, y, z so that e_x, e_y, e_z form a triangle in G_i and ψ(x, y, z) holds.
Because for each i, all unary and ordering relations are fixed for triangles, we can compute a restricted formula ψ_i(x, y, z) with only sparse relations of binary or higher arity in x, y, z so that ψ_i(x, y, z) is equivalent to ψ(x, y, z) for triangles in G_i. Thus, it is equivalent to decide whether there is an i and a triple x, y, z so that e_x, e_y, e_z form a triangle in G_i and ψ_i(x, y, z) holds. G_i is a multigraph because different elements x might map to edges e_x with the same endpoints (but the elements themselves might have different binary relations and therefore be distinguishable). Let H_i be the graph corresponding to G_i when we combine parallel edges. For each tuple in a relation and each pair of elements x, y in the tuple, if e_x and e_y do not share an endpoint, x and y cannot be part of a triangle; if they do, the endpoints form a triple of vertices in H_i. We can enumerate all such triples in O(m) time. Any triangle in H_i that is not one of these triples corresponds to three elements that have no true relations. We can tell if there is such a triangle by counting the total number of triangles in H_i and subtracting the number of triangles among the O(m) special triples. If ψ_i is true when all relations are false, and there is such a triangle, we can return true. If not, any x, y, z that form a triangle and satisfy ψ_i must form a triangle on our list. As a linear-time pre-processing step, for each edge (a, b), we compute the set of elements x so that e_x goes from a to b, and for each triple in our collection we store the set of relations among x, y, z. For each triple T_j = (a, b, c) in our collection, we have sets X_j mapping to (a, b), Y_j mapping to (b, c), and Z_j mapping to (c, a). Let m_j be the number of relations among the elements of these sets. Since any two elements in X_j, Y_j, or Z_j with no relations are indistinguishable, we can remove all but one such element from each set so that there are at most O(m_j) elements total.
Since each relation determines at most a single triangle, we have Σ_j m_j ≤ m. If we can count triangles in time O(m^{1+α}) for sparse graphs, as we saw earlier, we can compute the number of triples among X_j, Y_j, Z_j that satisfy ψ_i in O(m_j^{1+α}) time. If any count is positive, we return true. If all counts are 0, we return false. The total time is O(Σ_j m_j^{1+α}) = O(m^{1+α}). Thus, the exponent for the general class is the same as for counting triangles in sparse graphs.
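The sparse triangle counting subroutine assumed above can be sketched with the standard edge-orientation technique; this is the simple O(m^{3/2})-style routine, not the faster Õ(m^{2ω/(ω+1)}) algorithm of [10]:

```python
from collections import defaultdict

def count_triangles(edges):
    """Count triangles in an undirected simple graph given as an edge list,
    orienting each edge toward its higher-(degree, id) endpoint."""
    deg = defaultdict(int)
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    out = defaultdict(set)
    for u, v in edges:
        a, b = sorted((u, v), key=lambda w: (deg[w], w))
        out[a].add(b)
    # each triangle is counted exactly once, at its lowest-ordered edge
    return sum(len(out[a] & out[b])
               for u, v in edges
               for a, b in [sorted((u, v), key=lambda w: (deg[w], w))])
```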

TO_{∀∃∃}
We can obtain a similar but more complex hard problem for TO_{∀∃∃}. Consider the problem of, given a tripartite graph in adjacency list format, creating an array indexed by edges that gives the number of triangles containing each edge. The method of [10] can be used to give an algorithm with the same complexity for this problem as for counting triangles or deciding whether a triangle exists. Call this problem Edgewise Triangle Counting (ETC).
We will show that the complexity of this problem is an upper bound on the complexity of any problem in TO_{∀∃∃}.
Consider the class of problems: for a formula ψ(x, y, z) and a model given as lists of tuples for each relation, create an array indexed by x giving the number of pairs y, z so that ψ(x, y, z) holds. We use a similar argument as for the TO_{∃∃∃} case to show that this class of problems reduces to ETC: We apply Lemma 20 and reduce to counting for constantly many formulas which are conjunctions of positive relations. We observe that each such formula essentially counts either triples overall, triples containing a single edge, paths of length 2, triangles, or hyperedges. All but the triangle case can be solved in linear time, even in the array version. In the triangle case, we are counting, for each x, the number of triangles involving the single vertex x. We can compute this number for a vertex by summing up the counts for its adjacent edges to some y, since every triangle in the tripartite graph contains exactly one such edge.
Next, to decide ∀x∃y∃z ψ(x, y, z), we create the graphs G_i again just as before, and define ψ_i(x, y, z), containing only sparse binary or 3-ary relations, as before. For each graph, we find the set of x so that there is a triple x, y, z such that ψ_i(x, y, z) holds and e_x, e_y, e_z form a triangle in G_i. The union of these sets is thus the set of x so that there are y, z with ψ(x, y, z), and we check whether that is all x. As before, we find the set of triangles in H_i determined by tuples in a relation in O(m) time. For every edge in H_i, we count the number of triangles in H_i containing this edge and subtract the number of such triangles on our list. For each edge in H_i where this difference is positive, if ψ_i is satisfied when all relations are false, we add the corresponding elements to our set. We then also need to include those x for which there is some triple x, y, z on our list with ψ_i(x, y, z). As before, if we can solve ETC in time O(m^{1+α}), we can solve the array counting problem for the elements corresponding to each triple in O(m_j^{1+α}) time and then mark those elements with a positive array position. All elements with no relations should be marked or unmarked identically, so we only need to include one such element in our sub-routine, but mark all such elements.
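A minimal sketch of the ETC problem itself, by neighborhood intersection (again not the fast algorithm of [10], which matches the triangle-counting running time):

```python
from collections import defaultdict

def edgewise_triangle_counts(edges):
    """Edgewise Triangle Counting: for each edge of an undirected graph,
    the number of triangles containing it."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    # common neighbors of the endpoints = triangles through the edge
    return {(u, v): len(adj[u] & adj[v]) for u, v in edges}
```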

Corollary 22
Model-checking for sentences in TO_{∀∃∃} can be done in time Õ(m^{1.41}).

Nondeterministic Complexity of TO_{∃∀∃}
Here, we show why problems in TO_{∃∀∃} are unlikely to be SETH-hard. In particular, we will show that every problem in PTO_{∃∀∃} is in NTIME(n log^{O(1)} n) ∩ co-NTIME(n log^{O(1)} n) and every problem in TO_{∃∀∃} is in NTIME(m^{3/2} log^{O(1)} m) ∩ co-NTIME(m^{3/2} log^{O(1)} m). From [17], it then follows that if the Nondeterministic Strong Exponential Time Hypothesis is true, then no reduction can show these problems are SETH-hard with exponent greater than 1.5. Via direct reduction, for PTO_k, no SETH-hardness can be shown for exponent greater than k − 2 for any quantifier sequence ending with ∃∀∃. Later, we will show that SETH-hardness is possible up to that same exponent (Theorem 13).

Lemma 23 Model-checking sentences in PTO_{∃∀∃} can be done in nondeterministic and co-nondeterministic time O(n log^{d−1} n). Similarly, model-checking in TO_{∃∀∃} can be done in nondeterministic and co-nondeterministic time O(m^{3/2} log^{d−1} m).
Proof Let ∃x∀y∃z ϕ(x, y, z) be a problem in PTO_{∃∀∃}. To solve it using a nondeterministic algorithm, we guess an element x* nondeterministically and verify ∀y∃z ϕ(x*, y, z). The latter is a two-quantifier statement and so can be solved in quasi-linear time using the baseline algorithm, once we add unary relations U_i(y) ≡ (x* ≤_i y) for each comparison relation ≤_i.
The complementary problem is ∀x∃y∀z ¬ϕ(x, y, z). To solve it nondeterministically, for each x we guess a y_x. Then we create a new comparison relation (x ≤′_i z) ≡ (y_x ≤_i z) for every comparison relation ≤_i, new unary relations U_i(x) ≡ (x ≤_i y_x) for each comparison relation ≤_i, and U′(x) = U(y_x) for each unary relation U. This can be done in linear time. We can then rewrite ¬ϕ(x, y_x, z) as a formula ϕ′(x, z) by replacing the relations involving y_x with these new relations. Finally, we verify ∀x∀z ϕ′(x, z) using the baseline algorithm in quasi-linear time.

If ϕ(x, y, z) also has sparse relations, we can use the same method. However, for the co-nondeterministic algorithm, we need to create relations R′(x, z) ≡ R(y_x, z) for sparse relations R. If many y_x occur in many tuples of R, this can blow up the number of tuples in the relation. We use the low-degree/high-degree method to get around this. Without loss of generality, we can assume that the variables x, y, z come from disjoint subsets of elements, possibly by duplicating elements. For each y* that appears in ≥ m^{1/2} tuples, we use the baseline algorithm to compute {x | ∀z ϕ(x, y*, z)}. We then delete this set of elements x as candidates for the first quantifier, since we have shown that the statement ∃y∀z ϕ(x, y, z) is true for these x, and delete y* since it cannot be needed for any further x. There are at most O(√m) such y*, and each use of the baseline algorithm takes quasi-linear time. At the end, every y appears in at most √m tuples, and we can nondeterministically guess y_x for each remaining x and create the relations previously described, as well as the relations R′ above for each sparse relation. Since each guessed y_x appears in at most √m tuples, the new relations have total size O(m^{3/2}), so the baseline algorithm verifies the guess in time O(m^{3/2} log^{d−1} m). Exhaustively searching over the first k − 3 quantifiers gives us the following: model-checking sentences in TO_{k,d} ending in ∃∀∃ can be done in nondeterministic and co-nondeterministic time O(m^{k−3/2} log^{d−1} m).
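The low-degree/high-degree split used in this proof is a standard technique; a minimal sketch in Python (with the hypothetical relation given simply as a list of (y, z) tuples, and our own function name) might look as follows:

```python
import math
from collections import Counter

def split_by_degree(tuples):
    """Partition the y-elements of a sparse relation, given as (y, z)
    pairs, into high-degree ones (appearing in >= sqrt(m) tuples) and
    low-degree ones. There are at most about sqrt(m) high-degree
    elements, so each can afford a separate quasi-linear baseline run,
    while every low-degree element contributes at most sqrt(m) tuples
    to the relations guessed later."""
    m = len(tuples)
    threshold = math.isqrt(m)
    degree = Counter(y for y, _ in tuples)
    high = {y for y, deg in degree.items() if deg >= threshold}
    low = set(degree) - high
    return high, low
```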

Quantifier Structure ∃∃∀
Lastly, we will show that VCND_d is a complete problem for model-checking ∃∃∀ formulas with ordering relations.

Theorem 14 For all d, there exists a d′ such that VCND_{d′} is complete for model-checking ∃∃∀ formulas in PTO_{k,d}.
Proof We start with a first-order formula ∃x∃y∀z ϕ(x, y, z) containing ordering relations between x, y, and z. We want to reduce to VCND: given sets of d-dimensional vectors X, Y and 2d-dimensional vectors Z, is there a pair x ∈ X and y ∈ Y such that x ∘ y is not dominated by z for any z ∈ Z? We will use a similar technique as in the proof of Theorem 7.

Lemma 24 We can write
ϕ(x, y, z) ≡ ⋁_{α ∈ {−1,0,1}^d} (ψ_α(x, y) ∧ ϕ_α(x, y, z)),
where for each α, ϕ_α(x, y, z) does not contain any comparisons between x and y.
For x ∈ X and y ∈ Y, let v_{x,y} ∈ {−1, 0, 1}^d be the vector whose i-th entry records whether x <_i y, x =_i y, or x >_i y. The vector v_{x,y} captures the relationship between x and y with respect to the total orderings ≤_i. Thus, we consider the formula ∃x∃y ⋁_α (ψ_α(x, y) ∧ ∀z ϕ_α(x, y, z)), where ψ_α(x, y) is true if and only if v_{x,y} = α, and ϕ_α(x, y, z) is obtained by replacing any predicate comparing x and y under the i-th ordering relation with the truth value given by α_i.

Lemma 25
For each α, we can efficiently construct a set I_α and, for each ℓ ∈ I_α, construct sets X_ℓ, Y_ℓ with the following properties: • For every pair x ∈ X and y ∈ Y with v_{x,y} = α, there exists exactly one ℓ ∈ I_α such that x ∈ X_ℓ and y ∈ Y_ℓ. • For every ℓ ∈ I_α, x ∈ X_ℓ and y ∈ Y_ℓ, it holds that v_{x,y} = α.
Proof As before, say that a < b for positive integers a and b, where a = a_k . . . a_0 in binary and b = b_k . . . b_0 in binary. The point of divergence is the first position j, starting from the high-order bits, such that b_j ≠ a_j. Then a_k . . . a_{j+1} = b_k . . . b_{j+1} and a_j = 0 < b_j = 1. If a = b, we call −1 the point of divergence. Recall that we are working with d-dimensional vectors in X and Y with integer entries from 1 to n. Consider the set S = {(i_1, . . . , i_d) | −1 ≤ i_j ≤ log n}. We will use elements of S to "guess" the points of divergence between two vectors. Consider an arbitrary w ∈ S.
We will alter the vectors in X and Y according to w. Say that w_i = j. If x_i = a_k . . . a_0 and y_i = b_k . . . b_0 in binary, then we replace the i-th coordinate of x with two coordinates, namely a_k . . . a_{j+1} and a_j. Similarly, we replace the i-th coordinate of y with two coordinates b_k . . . b_{j+1} and b_j. If j = −1, we can simply put a special symbol in the second coordinate. We perform this operation for each coordinate i and each x ∈ X and y ∈ Y.
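To illustrate, the point of divergence and the coordinate-splitting step can be sketched as follows (a toy Python version; the function names are ours):

```python
def point_of_divergence(a, b):
    """Highest bit position at which a and b differ, or -1 if a == b
    (bit_length() of 0 is 0, so equal inputs yield -1 automatically)."""
    return (a ^ b).bit_length() - 1

def split_coordinate(v, j):
    """Split coordinate v at bit j into (bits above position j, bit j).
    For j == -1 we keep v whole and use a sentinel second part."""
    if j == -1:
        return (v, None)
    return (v >> (j + 1), (v >> j) & 1)
```

For example, a = 10 = 1010 and b = 8 = 1000 in binary diverge at position 1; both split to the same high part 2, with second parts 1 and 0, so the order of a and b is decided by the two single bits, exactly as the grouping step requires.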
Then, we sort these vectors by the first dimension and group together the vectors that have the same value in the first coordinate. If α_1 = 1 (i.e., it is required that the first entry of x is larger than that of y), then we discard all vectors in the group that belong to the set X and have 0 in the second coordinate (since at the point of divergence y will have the larger value). Similarly, we discard all vectors in the group that belong to the set Y and have 1 in the second coordinate. Analogously, if α_1 = −1, we discard vectors from X which have 1 in the second coordinate and vectors from Y which have 0 in the second coordinate. If α_1 = 0, we discard all the vectors unless w_1 = −1. Notice now that for every pair of vectors x ∈ X and y ∈ Y that belong to this group, the relationship of x and y under ≤_1 agrees with α_1. We recurse on each dimension and perform this for each group. The vectors that came from X then form the set X_ℓ, and the vectors from Y form Y_ℓ. The set I_α indexes each of the possible groups that were formed.
We will assume that the vectors in X_ℓ and Y_ℓ appear in their original form, rather than as altered in the previous step. Additionally, to make the reduction work, each vector x = (x_1, . . . , x_d) will be altered to (x_1, −x_1, . . . , x_d, −x_d). We perform the same operation on the vectors in Y_ℓ. We can assume that ϕ_α(x, y, z) is written in conjunctive normal form. For each clause C in ϕ_α(x, y, z) and each z ∈ Z, we create a new vector z_C in 4d dimensions. Let z = (z_1, . . . , z_{2d}). If the comparison x >_i z appears in the clause C, then we set the (2i−1)-th coordinate to z_i and the 2i-th coordinate to ∞. If the comparison x <_i z appears in the clause C, then we set the (2i−1)-th coordinate to ∞ and the 2i-th coordinate to −z_i. We perform the same operation for comparisons between y and z, this time making the changes in the corresponding dimensions among the last 2d dimensions of z_C. If x ≤_i z appears, then as our vectors have integer entries, we can treat this as x − 1 <_i z (the same trick works for x ≥_i z). We can assume that x =_i y does not appear in any clause, since all comparisons between x and y were eliminated in the construction of ϕ_α. To give an example, when d = 2 and we have the clause (x >_1 z) ∨ (y <_2 z), we create the vector (z_1, ∞, ∞, ∞, ∞, ∞, ∞, −z_2). Thus, if x ∘ y is not dominated by z_C, then x, y, z satisfy this clause. We create this vector z_C for each clause C in ϕ_α(x, y, z) and each z ∈ Z. At least one of these VCND instances is a yes-instance if and only if there is some x ∈ X and y ∈ Y so that x, y, z satisfy each clause of ϕ(x, y, z) for all z ∈ Z.
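The construction of the clause vectors z_C can be sketched as follows (a hypothetical literal encoding of ours; we assume comparisons between y and z read the corresponding entry in the second half of z):

```python
INF = float('inf')

def clause_gadget(z, d, clause):
    """Build the 4d-dimensional vector z_C for one CNF clause.
    `z` has 2d entries; a literal is (var, i, op) with var in {'x', 'y'},
    1 <= i <= d and op in {'>', '<'}, e.g. ('x', i, '>') for x >_i z.
    Coordinates left at infinity impose no constraint."""
    zc = [INF] * (4 * d)
    for var, i, op in clause:
        base = 0 if var == 'x' else 2 * d      # x-block or y-block of z_C
        val = z[i - 1] if var == 'x' else z[d + i - 1]
        if op == '>':                          # record the z-entry itself
            zc[base + 2 * (i - 1)] = val
        else:                                  # record its negation
            zc[base + 2 * (i - 1) + 1] = -val
    return zc
```

With d = 2, the clause (x >_1 z) ∨ (y <_2 z) on z = (z_1, z_2, z_3, z_4) then yields (z_1, ∞, ∞, ∞, ∞, ∞, ∞, −z_4), matching the shape of the example above.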
The last point to make is that this reduction is fine-grained. We have many VCND instances of the form X_ℓ, Y_ℓ, Z_ℓ with ℓ ∈ I_α. If |X_ℓ| or |Y_ℓ| is of size less than n^{1−ε/3}, then we use the data structure from [18] to decide this instance in time O(|X_ℓ||Y_ℓ| log^{O(d)} n). Doing this for each instance where either |X_ℓ| or |Y_ℓ| is less than n^{1−ε/3} takes time at most n^{2−ε/3}. Otherwise, |X_ℓ||Y_ℓ| ≥ n^{2−2ε/3}. Since at most Σ_ℓ |X_ℓ||Y_ℓ| = O(n^2) many pairs can arise, we are in this case at most n^{2ε/3} times. If we use the improved O(n^{2−ε}) time algorithm on these instances, we will use time at most O(n^{2−ε/3}). Combining this with the previous step gives an O(n^{2−ε/3}) algorithm for model-checking the sentence ∃x∃y∀z ϕ(x, y, z).

Hardness Results
In this section, we will present the proofs of Theorems 15, 12 and 13. These results establish hardness for model-checking sentences ending in ∃∃∀ or ∃∀∃.

Theorem 15 For k ≥ 2 and h ≥ 3, under the h-uniform hk-HyperClique hypothesis, model-checking formulas in PTO_{k+1,hk} ending in ∃∃∀ requires time Ω(n^{k−o(1)}).
Proof For simplicity, we will state the proof for h = 3; the adaptation to h > 3 is straightforward. We will reduce determining whether a 3-uniform hypergraph contains a 3k-HyperClique to deciding a (k+1)-quantifier sentence in PTO_{k+1,3k}. As a warm-up, we will reduce 3-uniform k-HyperClique to a sentence in PTO_{k+1,k} with O(n^3) objects, which gives an Ω(n^{k/3−o(1)}) lower bound. Then, we will describe how to alter the sentence by reducing from 3-uniform 3k-HyperClique to give the desired lower bound (for h > 3, this corresponds to a reduction from h-uniform hk-HyperClique).
Without loss of generality, we may assume that we are given a k-partite 3-uniform hypergraph G = (V 1 ∪ · · · ∪ V k , E) using standard color-coding arguments. We view each V i as a disjoint copy of {1, . . . , n}.
The symbols ⋄ and ∗ are special constants which we use to differentiate between the vertex and non-edge objects, which we will introduce now. For 1 ≤ i ≤ k and each vertex v ∈ V_i, we introduce a vertex object of dimension k where the i-th entry is set to v and the remaining k − 1 entries are set to ⋄. This allows us to represent a choice of some vertex v for part V_i of our clique. For each non-edge, i.e., each triple of vertices v_a ∈ V_a, v_b ∈ V_b, v_c ∈ V_c with {v_a, v_b, v_c} ∉ E, we create a non-edge object: we set the a-th, b-th and c-th dimension to v_a, v_b and v_c, respectively, and set all other dimensions to the special constant ∗. Intuitively, the non-edge objects represent all forbidden configurations of our clique.
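A sketch of this object construction (with 0 and −1 standing in for the two special constants, since vertices are numbered from 1; the function names are ours):

```python
VERTEX_FILL = 0    # stands in for the vertex-object filler constant
NONEDGE_FILL = -1  # stands in for the * constant

def vertex_object(k, i, v):
    """k-dimensional object choosing vertex v for part V_i (1-indexed)."""
    obj = [VERTEX_FILL] * k
    obj[i - 1] = v
    return obj

def non_edge_object(k, parts, verts):
    """Object for a forbidden triple: verts = (v_a, v_b, v_c) sitting in
    the parts with 1-indexed positions parts = (a, b, c)."""
    obj = [NONEDGE_FILL] * k
    for p, v in zip(parts, verts):
        obj[p - 1] = v
    return obj

def is_non_edge_object(obj):
    """E(y): some coordinate carries the * constant."""
    return NONEDGE_FILL in obj
```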
The claim is that deciding the following formula decides the existence of a k-HyperClique: ∃x_1 · · · ∃x_k ∀y (T(x_1, . . . , x_k) ∧ (E(y) → ⋀_{1≤i_1<i_2<i_3≤k} C(x_{i_1}, x_{i_2}, x_{i_3}, y))), where: • T(x_1, . . . , x_k) checks if each x_i is a vertex object for the part V_i. Doing so simply involves checking that for all i, all but the i-th coordinate of x_i is the ⋄ constant. • E(y) checks if y is a non-edge object. This can be done by checking whether some coordinate is the ∗ constant. • C(x_{i_1}, x_{i_2}, x_{i_3}, y) checks if the "forbidden" edge represented by y is different from the edge given by the vertices that x_{i_1}, x_{i_2}, x_{i_3} represent. This can be done by checking that y is different from at least one of x_{i_1}, x_{i_2}, or x_{i_3} in the i_1-th, i_2-th, and i_3-th coordinate, respectively.
However, there are O(n^3) many vectors in the domain. We will now remedy this by reducing from 3k-HyperClique. To this end, let G be a 3k-partite 3-uniform hypergraph with vertex parts V_1, . . . , V_{3k}. This time, each vertex object will represent a choice of 3 vertices: we group the 3k vertex parts into k groups V′_1, . . . , V′_k of three vertex parts each. For each V′_i (representing the three vertex parts V_{3i+1}, V_{3i+2}, V_{3i+3}), and every triple of vertices v ∈ V_{3i+1}, v′ ∈ V_{3i+2}, v″ ∈ V_{3i+3}, we create a vertex object of dimension 3k, where we set the coordinates 3i+1, 3i+2 and 3i+3 to v, v′, and v″, respectively, and all others to the special constant ⋄.
The edge objects will be constructed as before. The formula changes only slightly to implement the same idea: T(x_1, . . . , x_k) will again check that x_1, . . . , x_k are vertex objects, and E(y) will check that y is a non-edge object. For any non-edge object y, we need to ensure that the non-∗ dimensions a, b, c are not all equal to the dimensions a, b, c of the vertex objects chosen for the groups containing the parts a, b and c; this is checked exactly as before. Since each vertex object now encodes three vertices, the sentence decides 3k-HyperClique over a domain of O(n^3) objects, giving the claimed Ω(n^{k−o(1)}) lower bound.

Theorem 12 Assuming the Hitting Set conjecture, for every ε > 0 there is a d such that model-checking formulas in PTO_{∃∀∃,d} requires time Ω(n^{2−ε}).

Proof Consider the Hitting Set problem: we are given sets U, V of n vectors in {0, 1}^d with d = c log n, and the task is to determine whether there is some u ∈ U such that for all v ∈ V, u and v are non-orthogonal. Recall that the Hitting Set conjecture is that for all ε > 0 there is some c such that Hitting Set with d = c log n cannot be solved in time O(n^{2−ε}).
The idea is to block the d vector-dimensions into b = d/s blocks of size s = log(n)/2, and define a 2b-dimensional order property: For each vector u ∈ U , we define an object whose first b dimensions represent u. Here, each dimension i encodes the i-th block of s bits of u, i.e., each dimension i uses an (arbitrary) total order on the block configurations {0, 1} s . Likewise, for each vector v ∈ V , we define an object whose last b dimensions represent the bits of v.
The formula will be ∃x∀y∃z ϕ(x, y, z), where ϕ(x, y, z) expresses, via the 2b orderings, that z agrees with x ∘ y on all non-∞ dimensions of z. Here, for any x, y, an appropriately chosen z ∈ Z is supposed to serve as a witness that there is some k ∈ [d] with x[k] = y[k] = 1. To do this, for any block i, we consider pairs of admissible configurations of the i-th blocks of x and y, namely: for any α, β ∈ {0, 1}^s such that there is some k ∈ [s] with α[k] = β[k] = 1, we define the object z_{i,α,β} such that its i-th dimension in the first half is α, its i-th dimension in the second half is β, and all other dimensions are ∞.
By this construction, the formula is satisfied if and only if there is some u ∈ U such that for all v ∈ V we can find a block i and a corresponding bit k in block i in which both u and v have a 1, i.e., u and v are non-orthogonal. Since |Z| ≤ b · 2^{2s} = O(n), we obtain our lower bound, assuming the Hitting Set conjecture: let ε > 0 and take a c such that Hitting Set in dimension d = c log n has no O(n^{2−ε}) time algorithm. Then an O(n^{2−ε}) time algorithm for PTO_{∃∀∃,b} with b ≤ 2c would give a Hitting Set algorithm in dimension d = c log n running in time O(n^{2−ε}), contradicting the assumption. This concludes the claim.
Finally, for k-quantifier sentences ending in ∃∀∃, we have the following result.
Theorem 13 Assuming SETH, model-checking formulas in PTO_{k,2} ending in ∃∀∃ requires time Ω(n^{k−2−ε}) for any ε > 0.
Proof We will reduce k-Orthogonal Vectors to deciding a first-order sentence with k+2 quantifiers ending in ∃∀∃ with 2 ordering relations. We will associate the elements of the domain with 2-dimensional vectors. Let A = {a_1, . . . , a_n} be our k-Orthogonal Vectors instance. We will assume that for every coordinate j, there is some vector a ∈ A with a[j] = 0; otherwise, this is trivially a no-instance. For each vector a_i and coordinate j where a_i[j] = 0, we introduce a vector (i, j) into our domain. Thus, we have O(nd) many vectors in our domain. The claim is that deciding the sentence ∃x_1 · · · ∃x_k ∀y ∃z ((z =_2 y) ∧ ⋁_{i=1}^{k} (z =_1 x_i)) on this new domain correctly decides whether there are k orthogonal vectors in A. Say the sentence is satisfied by our domain. Let the first coordinate of x_i be o_i. Then we claim that a_{o_1}, . . . , a_{o_k} are a k-orthogonal set of vectors. By our assumption, for every 1 ≤ j ≤ d there is some vector a_v ∈ A with a_v[j] = 0. The universal quantifier ensures that for the corresponding vector y = (v, j) our sentence is satisfied, and so there is some z in the domain of the form (o_i, j). Therefore, there is a 0 in the j-th coordinate of a_{o_i}. As this is true for all 1 ≤ j ≤ d, we infer that this choice of vectors is indeed k-orthogonal.
Conversely, if there is a k-orthogonal tuple a_{o_1}, . . . , a_{o_k} ∈ A, then choose x_1, . . . , x_k such that x_i = (o_i, j_i) for each 1 ≤ i ≤ k (with an arbitrary 0-coordinate j_i). Observe that for any choice of y = (i′, j′), orthogonality of the tuple yields some a_{o_i} with a_{o_i}[j′] = 0, and thus z = (o_i, j′) satisfies the condition.
Consequently, any O(n^{k−ε})-time algorithm for this (k+2)-quantifier sentence would give an O(n^{k−ε} poly(d)) algorithm for k-OV, contradicting the moderate-dimension k-OV conjecture and thus SETH.
We prove our n^{2−o(1)} lower bound for VCND_d under SETH via reduction from the low-dimensional Orthogonal Vectors Hypothesis, which is well known to be implied by SETH (see, e.g., [30]). This reduction is very similar to our Hitting Set reduction for ∃∀∃.

Theorem 16
Assuming SETH, for every ε > 0, there is a d such that VCND_d requires time Ω(n^{2−ε}).
The result follows from the following lemma, since the low-dimension Orthogonal Vectors Hypothesis states that for every ε, there exists some c such that OV with dimension d = c log n cannot be solved in time O(n^{2−ε}).

Lemma 26 For constant c > 0 and T(n) ≥ n, if VCND_d with d = O(c) can be solved in time T(n), then Orthogonal Vectors with dimension c log n can be solved in time O(T(n)).
Proof Consider an Orthogonal Vectors instance on n vectors of dimension c log n. Let s = (log n)/2. First, for each vector x = (x_1, x_2, . . . , x_{c log n}), we create a new vector x′ = (x′_1, x′_2, . . . , x′_b) where b = (c log n)/s = 2c and x′_i is the integer given by the bits x_{s(i−1)+1} x_{s(i−1)+2} . . . x_{si}. In other words, we are grouping s bits of x at a time and converting them to integer values. Lastly, we convert each vector x′ = (x′_1, . . . , x′_b) to x″ = (x′_1, −x′_1, . . . , x′_b, −x′_b). The set of x″ vectors becomes X, and Y is obtained by performing the same operation on the vectors in Y. Lastly, we want to create a set of vectors Z that will "encode" witnesses to non-orthogonality. Consider a vector α ∈ {0, 1}^s. Let α^⊥ denote the set of vectors orthogonal to α. For each i ∈ [b], α ∈ {0, 1}^s, and β ∈ {0, 1}^s \ α^⊥, we create the vector z_{i,α,β} = (∞, . . . , ∞, α, −α, ∞, . . . , ∞, β, −β, ∞, . . . , ∞), where α, −α occupy the two coordinates of block i in the first half and β, −β occupy the two coordinates of block i in the second half. Here, in the vector notation, we are implicitly viewing α and β as the integers specified by the binary strings α, β. Such a vector dominates some other vector if and only if the two agree on the indices that are not ∞ in our gadget. These vectors form our Z set. Now, assume the Orthogonal Vectors instance indeed contains an orthogonal pair of vectors x and y. Consider the vectors x″ ∈ X and y″ ∈ Y. Assume for the sake of contradiction that some vector z ∈ Z dominates x″ ∘ y″. Then x″ ∘ y″ must agree with z on the non-∞ entries. But these are exactly the entries given by some α and β whose binary representations are not orthogonal, which contradicts the assumption that x is orthogonal to y. Conversely, if all pairs of vectors x, y are non-orthogonal, then each x″ ∘ y″ agrees with some vector z ∈ Z on the non-∞ entries and is hence dominated by it, so the constructed instance is a no-instance of VCND.
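The bit-blocking and gadget construction of this proof can be sketched as follows (function names are ours):

```python
from itertools import product

INF = float('inf')

def to_blocks(bits, s):
    """Group a 0/1 vector into integers of s bits each (big-endian)."""
    return [int(''.join(map(str, bits[i:i + s])), 2)
            for i in range(0, len(bits), s)]

def duplicate_negate(v):
    """(v_1, -v_1, ..., v_b, -v_b): with this doubling, one vector can
    dominate another on a coordinate pair only by being equal there."""
    return [c for x in v for c in (x, -x)]

def witness_vectors(b, s):
    """The z_{i,alpha,beta} gadgets: for each block index i and each
    non-orthogonal pair (alpha, beta) of s-bit configurations, a
    4b-dimensional vector that dominates x'' o y'' iff block i of x
    equals alpha and block i of y equals beta."""
    Z = []
    for i in range(b):
        for alpha, beta in product(range(2 ** s), repeat=2):
            if alpha & beta == 0:   # orthogonal blocks witness nothing
                continue
            z = [INF] * (4 * b)
            z[2 * i], z[2 * i + 1] = alpha, -alpha                # x-half
            z[2 * b + 2 * i], z[2 * b + 2 * i + 1] = beta, -beta  # y-half
            Z.append(z)
    return Z
```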
Creating the sets of X and Y vectors takes time linear in the number of vectors in the OV instance. Creating the set of Z vectors takes time O(2^{2s} · b) = O(n) with our choice of s, b.

Specialized Algorithms for VCND d
Since VCND_d turned out to be the only candidate for completeness of PTO_{k,d}, we study this problem in more detail in this section.
First, we will present an improved algorithm for VCND d when the dimension of one of the sets is very small. Then, when d = O(n), we give an improved algorithm for VCND d using fast matrix multiplication.

Theorem 17
There is an O(n^{2−1/2^d}) time algorithm for VCND when one set of vectors has dimension 2 and the other has dimension d.

Proof
We will assume the set X contains vectors of dimension d and Y contains vectors of dimension 2. We utilize the well-known Erdős–Szekeres theorem to preprocess the vectors in X. There are many equivalent formulations of this theorem, but the version we will use is as follows: in a list of n integers, there is a monotonic subsequence of size at least ⌈√n⌉. Consider the vectors in X restricted to their first coordinate. This is indeed a list of length n. We compute the longest increasing subsequence of the list in order and in reverse, which is guaranteed to return a monotonic subsequence of length at least ⌈√n⌉. We then recurse on the list of vectors whose first coordinate is part of this monotonic subsequence, this time considering the next dimension of these vectors. We repeat this process on the remaining set of at most n − ⌈√n⌉ vectors. Note that each of the resulting lists has the property that, restricted to each dimension, its vectors appear in either increasing or decreasing order.
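The preprocessing relies only on a standard O(n log n) longest-nondecreasing-subsequence routine; a minimal version (patience sorting with parent pointers; function names are ours) could look like this:

```python
from bisect import bisect_right

def longest_nondecreasing(seq):
    """Indices of a longest non-decreasing subsequence of seq, via
    patience sorting in O(n log n)."""
    tails, tidx, parent = [], [], [None] * len(seq)
    for i, v in enumerate(seq):
        j = bisect_right(tails, v)       # first pile with tail > v
        if j == len(tails):
            tails.append(v); tidx.append(i)
        else:
            tails[j] = v; tidx[j] = i
        parent[i] = tidx[j - 1] if j > 0 else None
    out, i = [], tidx[-1] if tidx else None
    while i is not None:
        out.append(i); i = parent[i]
    return out[::-1]

def monotone_subsequence(seq):
    """The longer of a non-decreasing and a non-increasing subsequence;
    by Erdos-Szekeres its length is at least ceil(sqrt(len(seq)))."""
    inc = longest_nondecreasing(seq)
    dec = longest_nondecreasing([-v for v in seq])
    return inc if len(inc) >= len(dec) else dec
```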
For each of the O(n^{1−1/2^d}) lists, we begin with the first vector x in the list. We keep multiple copies of the Z set, where each copy is sorted with respect to a certain dimension. Then, we use binary search to compute the set of z ∈ Z that dominate the given x. We can keep pointers in place at each iteration so that updating this set is fast for the next vector in the list. Let L_x denote the list of z ∈ Z that dominate x. The last observation is that since the set of Y vectors has dimension 2, we can remove vectors to make the set Pareto optimal. Then, we can assume that the vectors of Y are in increasing order in the first dimension and in decreasing order in the second dimension. Therefore, for a fixed z, the set of y ∈ Y that are dominated by z forms a contiguous interval. We can compute this interval with two binary searches. Thus, for each z ∈ L_x, we add the interval of y vectors that are dominated by z to a segment tree [19], adding 1 to each entry occupied by an interval. We then query the minimum element in the segment tree. If the minimum is 0, then there was some y ∈ Y that was not dominated by any z ∈ L_x, in which case we accept. Otherwise, we continue with L_{x′}, where x′ is the next vector in the list. Since we preprocessed the list, we can update L_x to L_{x′} using the saved pointers, and we perform at most n such updates in total over any one list.

One might hope to extend this algorithm to VCND on two sets of d-dimensional vectors using a generalization of the segment tree seen here. Indeed, a multidimensional segment tree supporting addition and min-queries in time poly-logarithmic in n would provide a truly subquadratic algorithm for VCND_d. However, this may be too much to hope for, since a multidimensional segment tree supporting these operations would violate SETH [36, 39].
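The only segment-tree operations needed here are "add to an interval" (with +1 when a z enters L_x and −1 when it leaves) and "query the global minimum"; a compact lazy implementation might look as follows:

```python
class MinSegTree:
    """Values start at 0; supports adding a value to a range of
    positions and querying the minimum over all positions in O(log n)."""

    def __init__(self, n):
        self.n = n
        self.mn = [0] * (4 * n)   # subtree minimum, including own lazy
        self.lz = [0] * (4 * n)   # pending addition for the whole subtree

    def add(self, l, r, val, node=1, lo=0, hi=None):
        """Add val to every position in the interval [l, r]."""
        if hi is None:
            hi = self.n - 1
        if r < lo or hi < l:
            return
        if l <= lo and hi <= r:
            self.mn[node] += val
            self.lz[node] += val
            return
        mid = (lo + hi) // 2
        self.add(l, r, val, 2 * node, lo, mid)
        self.add(l, r, val, 2 * node + 1, mid + 1, hi)
        self.mn[node] = self.lz[node] + min(self.mn[2 * node],
                                            self.mn[2 * node + 1])

    def min_all(self):
        """Minimum value over all n positions."""
        return self.mn[1]
```

A minimum of 0 after inserting all intervals for L_x means some y was covered by no z, i.e., we may accept.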
Lastly, when the dimension d = O(n), we can use a similar idea from Williams [48] to obtain speedups using fast matrix multiplication.

Conclusion and Open Problems
We have introduced general classes TO_{k,d}, PTO_{k,d} of multidimensional ordering problems as model-checking problems for k-quantifier first-order formulas over d succinctly represented ordering relations (with or without additional explicitly represented relations). We gave a conditionally tight algorithm running in time O(m^{k−1} log^{d−1} m) for all these problems. For PTO_{k,d}, we gave complete or hard problems for most quantifier structures, and identified a problem VCND_d as essentially the only candidate to be complete for PTO_{k,d}.
The main open problem is to prove or disprove that VCND_d is complete for PTO_{k,d}. The major challenge here is to reduce ∃∀∃-quantified ordering problems to the ∃∃∀-quantified VCND_d. Such a reduction is possible in the unordered setting [30], but it is unclear how to make this approach work in our setting. Likewise, can we prove that a hybrid version of VCND_d and the Orthogonal Vectors problem (which is complete for the sparse-relational setting [30]) is complete for TO_{k,d}? An intermediate step could be to find a complete problem for ∃∀∃-quantified ordering problems.
A further general algorithmic question is to study the existence of improved algorithms for very small constant dimensions d, such as d = 1 and d = 2; in particular, the existence of O(n^{2−ε(d)}) time algorithms with ε(d) > 0 for 3-quantifier problems.
In this direction, we have given an O(n^{2−1/2^d}) time algorithm for VCND when one of the two sets consists of 2-dimensional vectors.