Open access

Author

Geissmann, Barbara

Date

2018

Type

- Doctoral Thesis

ETH Bibliography

yes

Abstract

Coping with errors in computation has a long tradition in computer science. Errors can represent hardware faults, imprecise measurements, mistakes in human judgment, or may even be introduced deliberately in order to save certain resources. In this thesis we consider a model in which comparisons can fail. In particular, we assume that a comparison queries the linear order of two given elements but is prone to error: with probability 1−p the result of the comparison is correct, while with some small probability p we get back the reverse order of the two queried elements.
We distinguish between persistent and non-persistent comparison errors: the former means that repeating the same comparison always yields the same result, whereas the latter means that every repetition of a comparison is again correct or wrong with the same probabilities.
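The distinction between the two error models can be illustrated with a small Python sketch (illustrative names, not code from the thesis): a non-persistent comparator flips a fresh coin on every repetition, while a persistent one fixes its verdict for each pair once and for all.

```python
import random

class NoisyComparator:
    """Compare two distinct elements by their true linear order,
    returning the reverse order with probability p (a sketch)."""

    def __init__(self, p, persistent, seed=0):
        self.p = p
        self.persistent = persistent
        self.rng = random.Random(seed)
        self.flips = {}  # per-pair error coins for the persistent model

    def less(self, a, b):
        truth = a < b
        if self.persistent:
            key = (min(a, b), max(a, b))
            if key not in self.flips:
                # decide once per pair; repetitions return the same answer
                self.flips[key] = self.rng.random() < self.p
            flip = self.flips[key]
        else:
            # non-persistent: every repetition errs independently
            flip = self.rng.random() < self.p
        return (not truth) if flip else truth
```

With p = 0 both variants reduce to an exact comparator; in the persistent model, repeating a query (in either argument order) always yields a consistent answer.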
In this regard, we are especially interested in algorithms that use only ordinal information about the elements to compute a solution to a given problem. Prominent examples are combinatorial problems such as comparison-based sorting, one focus of this thesis. But there are also many other optimization problems whose optimum solution, or an approximation thereof, can be computed without knowing the exact values of the elements.
We begin by considering the problem of sorting in the presence of non-persistent comparison errors. Starting with an input sequence of n distinct elements, we examine two natural sorting strategies that repeatedly compare and swap two elements within the sequence: in the first, we restrict ourselves to pairs of elements whose positions in the current sequence are adjacent, while in the second, we allow comparing and swapping two arbitrary elements. The sorting strategies can be viewed as Markov chains, and we show that restricting to adjacent swaps yields a better “sortedness” of a sequence in the stationary distribution than allowing arbitrary swaps, namely O(n) vs. O(n^2) total dislocation in expectation. (The dislocation of an element is the absolute difference between its position and its rank. The total dislocation of a sequence is the sum of the dislocations of all elements.)
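The two dislocation measures are easy to state in code (a sketch; elements are ranked by their true values):

```python
def dislocations(seq):
    """Return (maximum, total) dislocation of a sequence of distinct
    elements, where the dislocation of an element is |position - rank|."""
    rank = {x: r for r, x in enumerate(sorted(seq))}
    d = [abs(pos - rank[x]) for pos, x in enumerate(seq)]
    return max(d), sum(d)
```

For example, `dislocations([2, 1, 3, 5, 4])` yields `(1, 4)`: four elements each sit one position away from their rank.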
In regard to persistent comparison errors, we optimally solve the problem of sorting n elements in terms of running time and sortedness. In particular, we present an algorithm that in O(n log n) time returns a permutation of the elements achieving both O(log n) maximum dislocation of any element and O(n) total dislocation with high probability. Additionally, we show that this is the best we can hope for in this error model, as we provide tight lower bounds on the two dislocation measures.
The second combinatorial problem that we consider is computing, in the presence of persistent comparison errors, a longest increasing subsequence of a given input sequence of n distinct elements. Here we present an O(log n)-approximation algorithm that returns, in O(n log n) time, a subsequence of the input that is guaranteed to be increasing and whose length is at least an Ω(1/log n) fraction of the length of an optimal solution. Moreover, we provide lower bounds showing that both the running time and the approximation factor are asymptotically tight.
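For contrast, with reliable comparisons the problem is classical: patience sorting finds an exact longest increasing subsequence in O(n log n) time. The sketch below shows this error-free baseline only; it is not the error-tolerant algorithm of the thesis, whose comparisons may lie persistently.

```python
import bisect

def lis(seq):
    """One longest increasing subsequence via patience sorting, O(n log n).

    Assumes error-free comparisons (bisect compares elements reliably).
    """
    tails = []  # tails[k] = smallest tail of an increasing subseq of length k+1
    idx = []    # index in seq of each current tail
    prev = []   # back-pointers to reconstruct one optimal subsequence
    for i, x in enumerate(seq):
        k = bisect.bisect_left(tails, x)
        if k == len(tails):
            tails.append(x)
            idx.append(i)
        else:
            tails[k] = x
            idx[k] = i
        prev.append(idx[k - 1] if k > 0 else -1)
    # walk the back-pointers from the tail of the longest pile
    out, i = [], idx[-1] if idx else -1
    while i != -1:
        out.append(seq[i])
        i = prev[i]
    return out[::-1]
```

For instance, `lis([3, 1, 4, 1, 5, 9, 2, 6])` returns `[1, 4, 5, 6]`, one of the length-4 optima.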
We then turn to more general optimization problems and allow algorithms to perform a few comparisons that are always correct, but much more costly than their error-prone counterparts. The intention behind these “always-correct” comparisons is to guarantee that, despite errors in the other comparisons, an approximation to the optimum solution of such a problem can still be found. We therefore extend our model of computation to two types of comparisons, always-correct and error-prone, where for the latter we assume that the errors are persistent. In this model, we propose to study a natural complexity measure that accounts for the number of comparisons of each type separately.
Finally, for a large class of optimization problems, we show that a constant-factor approximation can be computed using only a polylogarithmic number of always-correct comparisons and O(n log n) error-prone comparisons. In particular, the result applies to k-extendible systems, a class that captures several NP-hard problems and includes matroids as a special case.
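The two-type cost measure can be made concrete with a small counting harness (purely illustrative; the names and interface are assumptions, not the thesis's): error-prone comparisons with persistent errors are cheap and are counted separately from the costly always-correct ones.

```python
import random

class TwoTypeComparator:
    """Count error-prone and always-correct comparisons separately."""

    def __init__(self, p, seed=0):
        self.p = p
        self.rng = random.Random(seed)
        self.flips = {}    # persistent per-pair error coins
        self.errprone = 0  # number of error-prone comparisons made
        self.correct = 0   # number of always-correct comparisons made

    def noisy_less(self, a, b):
        """Error-prone comparison; its errors are persistent per pair."""
        self.errprone += 1
        key = (min(a, b), max(a, b))
        if key not in self.flips:
            self.flips[key] = self.rng.random() < self.p
        return (a < b) != self.flips[key]

    def safe_less(self, a, b):
        """Always-correct (and conceptually much more costly) comparison."""
        self.correct += 1
        return a < b
```

An algorithm's cost in this model is the pair `(errprone, correct)`; the result above keeps the second coordinate polylogarithmic while the first stays at O(n log n).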

Permanent link

https://doi.org/10.3929/ethz-b-000319422

Publication status

published
Contributors

Examiner: Widmayer, Peter
Examiner: Hromkovič, Juraj

Publisher

ETH Zurich

Organisational unit

03340 - Widmayer, Peter (emeritus) / Widmayer, Peter (emeritus)