Mini-Workshop "Theoretical Computer Science" at TuDo and RUB
A mini-workshop series on theoretical computer science at TU Dortmund and Ruhr University Bochum. Each session consists of about three conference-length (25 min.) talks.
Next session
MW 37, July 1, 2024
-
- Abstract: The modal mu-calculus, an extension of modal logic with fixpoint operators, is a widely applicable formalism valued for its good model-theoretic and algorithmic properties. It encompasses virtually all bisimulation-invariant logics such as LTL or CTL and is well-suited for describing infinitary properties such as "there exists an infinite path". However, (un)boundedness properties such as "there exist arbitrarily long finite paths" lie beyond the expressive power of the mu-calculus.
I will present the countdown mu-calculus, a novel extension of the mu-calculus with ordinal approximations of fixpoints, which captures such (un)boundedness properties. I will show how the classical logic-games-automata correspondence extends to suitably defined countdown games and countdown automata. Results obtained using this correspondence, as well as its limitations and open problems, will be discussed.
Based on joint work with Bartek Klin.
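- A minimal sketch (not from the talk) of how an ordinary mu-calculus fixpoint is evaluated on a finite transition system: the greatest fixpoint nu X.<>X denotes exactly the states from which an infinite path starts, the infinitary property quoted in the abstract. The function and the example system are illustrative assumptions.
```python
def exists_infinite_path(states, succ):
    # Greatest fixpoint nu X. <>X: start from all states and repeatedly remove
    # states without a successor inside the current set, until stable.
    X = set(states)
    while True:
        X_new = {s for s in X if succ(s) & X}   # <>X: some successor lies in X
        if X_new == X:
            return X
        X = X_new

# Example: state 0 has a self-loop, state 1 only reaches the dead end 2.
succ = lambda s: {0: {0}, 1: {2}, 2: set()}[s]
print(exists_infinite_path({0, 1, 2}, succ))    # {0}
```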
- Benjamin Bordais (RC Trust, TU Dortmund): Comparing the Complexity of Learning LTL, CTL and ATL formulas
- Abstract: We consider the problem of learning temporal logic formulas from examples of system behavior. Although learning temporal properties has crystallized as an effective means to explain complex temporal behaviors, the theoretical understanding of the complexity of the learning decision problems remains largely unexplored. To address this, we study the complexity of the passive learning problems of three prominent temporal logics, Linear Temporal Logic (LTL), Computation Tree Logic (CTL) and Alternating-time Temporal Logic (ATL), and several of their fragments. In this talk, I will present some of our complexity results.
-
- Abstract: We are interested in the following algorithmic problem: given pairs {(t_i, t^*_i)}_i of labelled, ordered trees, learn a set of rules that explain common (structural) differences for all pairs of trees. More precisely, the learnt set of rules shall allow transforming t_i into t^*_i in few steps for each i in {1, ..., n}.
We approach this problem by (1) introducing a pattern-based specification language for tree transformations; (2) studying the computational complexity of variants of the above algorithmic problem, e.g. showing NP-hardness for very restricted variants; and (3) solving the problem for datasets from CS education research by encoding it into propositional satisfiability and using a propositional SAT solver.
This talk is based on joint work with Daniel Neider, Leif Saballek, Johannes Schmidt and Thomas Zeume.
Previous sessions
MW 36, November 21, 2022
- Harold Nieuwboer (University of Amsterdam & Ruhr University Bochum): Quantum algorithms for matrix scaling and balancing
- Abstract: Matrix scaling and matrix balancing are two basic linear-algebraic problems with a wide variety of applications, such as approximating the permanent, computing optimal transport distances in machine learning, and pre-conditioning linear systems to make them more numerically stable. We study the power and limitations of quantum algorithms for these problems.
In particular, we obtain quantum speedups for two classical methods: Sinkhorn's algorithm for matrix scaling and Osborne's algorithm for matrix balancing. We achieve a time complexity that is sublinear in the dimension, at the cost of a worse dependence on the desired precision. We also discuss a second-order method that is even faster for well-conditioned instances. To achieve this, we use quantum amplitude estimation and graph sparsification techniques.
To complement these results we also provide lower bounds, showing that our quantum Sinkhorn algorithm is essentially optimal even for constant precision, while in the high-precision regime one cannot achieve a sublinear dependence on the dimension.
Based on joint works with Joran van Apeldoorn, Sander Gribling, Yinan Li, Michael Walter, and Ronald de Wolf.
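- As background, a sketch of the classical (non-quantum) Sinkhorn iteration for matrix scaling: alternately rescale rows and columns until the prescribed marginals are met. This only illustrates the problem being sped up, not the quantum algorithm of the talk; the parameter names are illustrative.
```python
import numpy as np

def sinkhorn(A, r, c, iters=500):
    # Find positive diagonal scalings x, y such that diag(x) A diag(y)
    # has (approximately) row sums r and column sums c.
    A = np.asarray(A, dtype=float)
    x, y = np.ones(A.shape[0]), np.ones(A.shape[1])
    for _ in range(iters):
        x = r / (A @ y)        # fix the row sums
        y = c / (A.T @ x)      # fix the column sums
    return np.diag(x) @ A @ np.diag(y)

B = sinkhorn([[1.0, 2.0], [3.0, 4.0]], r=np.ones(2), c=np.ones(2))
print(B.sum(axis=1), B.sum(axis=0))   # both close to [1, 1]
```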
- Jean Christoph Jung (Lehrstuhl Informatik 1, TU Dortmund): Learning Description Logic Concepts: From Theory to Practice
- Abstract: In the first part of the talk, we will present foundational results regarding the learnability of conjunctive queries and of concepts formulated in the description logic EL (a kind of tree-shaped conjunctive queries). We will focus on supervised learning, that is, learning from positive and negative examples, in Valiant's "probably approximately correct" (PAC) sense. In the second part, we present a tool for PAC learning of EL concepts that leverages the computational power of SAT solvers and show preliminary experimental results.
Based on joint work with Balder ten Cate, Maurice Funk, and Carsten Lutz
- Christopher Spinrath (Lehrstuhl Informatik 1, TU Dortmund): Work-Efficient Query Evaluation with PRAMs
- Abstract: It is well-known that all relational algebra queries can be evaluated in parallel constant time on an appropriate PRAM model.
We are interested in the efficiency of algorithms for parallel evaluation, that is, in the number of processors or, asymptotically equivalent, in the work required to evaluate a query. Naive evaluation in the parallel setting results in huge (polynomial) bounds on the work of such algorithms and in presentations of the result sets that can be extremely scattered in memory.
In this talk, we first discuss some obstacles for constant time PRAM query evaluation, and present algorithms for relational operators that are considerably more efficient than the naive approaches.
In the second part of this talk, we explore three settings, in which efficient sequential query evaluation algorithms exist: acyclic queries, semi-join algebra queries, and join queries — the latter in the worst-case optimal framework.
Given suitable data structures for the database relations, the work of our algorithms matches the best sequential algorithms in the case of semi-join queries, and it comes close in the other two settings.
Based on joint work with Jens Keppeler and Thomas Schwentick
Via zoom: tu-dortmund.zoom.us/j/97926620750
MW 35, January 27, 2020
- Jonas Ellert: Space Efficient Construction of Lyndon Arrays in Linear Time
- Abstract: The Lyndon array identifies for each suffix S_i of a string of length n the next lexicographically smaller suffix S_j, i.e. the minimal index j with j > i and S_j ≺ S_i. We present the first linear time algorithm to construct the 2n-bit version of the Lyndon array using only o(n) bits of working space. A simpler variant of this algorithm computes the plain (n log n)-bit version of the Lyndon array using only O(1) words of additional working space. All previous algorithms are either not linear, or use at least n log n bits of additional working space. Also in practice, our new algorithms outperform the previous best ones by an order of magnitude, both in terms of time and space.
Based on joint work with Philip Bille, Johannes Fischer, Inge Li Gørtz, Florian Kurpicz, Ian Munro, Eva Rotenberg
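- The next-smaller-suffix definition above can be made concrete with a deliberately naive quadratic sketch; the talk's contribution is a linear-time construction in o(n) bits of extra space, which this is not.
```python
def next_smaller_suffix(s):
    # nss[i] = minimal j > i with s[j:] lexicographically smaller than s[i:],
    # or n if no such suffix exists (naive O(n^2) comparison of suffixes).
    n = len(s)
    nss = [n] * n
    for i in range(n):
        for j in range(i + 1, n):
            if s[j:] < s[i:]:
                nss[i] = j
                break
    return nss

print(next_smaller_suffix("banana"))   # [1, 3, 3, 5, 5, 6]
```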
- Jonas Schmidt: Dynamic Complexity Meets Parameterised Algorithms
- Abstract: Dynamic Complexity studies the maintainability of queries with logical formulas in a setting where the underlying structure or database changes over time. Most often, these formulas are from first-order logic, giving rise to the dynamic complexity class DynFO. This paper investigates extensions of DynFO in the spirit of parameterised algorithms. In this setting structures come with a parameter k and the extensions allow additional "space" of size f(k) (in the form of an additional structure of this size) or additional time f(k) (in the form of iterations of formulas) or both. The resulting classes are compared with their non-dynamic counterparts and other classes. The main part of the paper explores the applicability of methods for parameterised algorithms to this setting through case studies for various well-known parameterised problems.
- based on joint work with Thomas Schwentick, Nils Vortmeier, Thomas Zeume, and Ioannis Kokkinis
- Jian-Jia Chen: Dependency Graph Approach for Multiprocessor Real-Time Synchronization
- Abstract: In a multi-tasking system, mutual exclusion of the accesses to shared resources has to be guaranteed to ensure the correctness of these operations. Such accesses are typically done in so-called critical sections that are protected by binary semaphores or mutex locks. The resource sharing problem becomes much more challenging in a multiprocessor environment. Therefore, a large number of resource sharing protocols have been proposed and analysed. However, the majority of these protocols only tries to handle a predefined situation resulting from a given task partition (for partitioned scheduling) or a given priority order (for global scheduling) in a reasonable way.
Fundamental exploration of the multiprocessor resource sharing problem for real-time systems is still missing: 1) what is the fundamental difficulty, 2) what is the performance gap of different scheduling strategies, i.e., global, partitioned, and semi-partitioned, and 3) is it always beneficial to prioritize critical sections? In this talk, I will present the computational complexity of multiprocessor real-time synchronization problems. I will present approximation algorithms for the restrictive setting of non-nested critical sections to investigate the makespan. In addition, extensions to more general scenarios will be discussed together with some experimental results.
- based on joint work with Georg von der Brüggen, Junjie Shi, and Niklas Ueter
MW 34, May 27, 2019
- Patrick Dinklage: Distributed Wavelet Tree Construction
- Abstract: The wavelet tree (Grossi et al. [SODA, 2003]) is a compact data structure with many applications such as text indexing or computational geometry. We describe and implement the first distributed wavelet tree construction algorithms, which allow us to process inputs that are orders of magnitude larger than what current shared memory construction algorithms can work with. Our best algorithms achieve almost optimal speedups when increasing the number of processing elements. They further use only very little extra memory and communication volume on top of the input.
- based on joint work with Johannes Fischer and Florian Kurpicz
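- To make the data structure concrete: a plain, sequential, pointer-based wavelet tree with a rank query. It only illustrates what is being built; the distributed construction of the talk is not shown, and the interface is an ad-hoc choice.
```python
class WaveletTree:
    def __init__(self, text, alphabet=None):
        # Recursively split the alphabet in halves; store one bit per symbol
        # saying whether it belongs to the upper half.
        self.alphabet = sorted(set(text)) if alphabet is None else alphabet
        if len(self.alphabet) <= 1:
            self.bits = None            # leaf: single symbol
            return
        mid = len(self.alphabet) // 2
        left_alpha, right_alpha = self.alphabet[:mid], self.alphabet[mid:]
        right_set = set(right_alpha)
        self.bits = [1 if c in right_set else 0 for c in text]
        self.left = WaveletTree([c for c in text if c not in right_set], left_alpha)
        self.right = WaveletTree([c for c in text if c in right_set], right_alpha)

    def rank(self, c, i):
        # Number of occurrences of symbol c in text[:i].
        if self.bits is None:
            return i
        mid = len(self.alphabet) // 2
        bit = 1 if c in self.alphabet[mid:] else 0
        i_child = sum(1 for b in self.bits[:i] if b == bit)
        child = self.right if bit else self.left
        return child.rank(c, i_child)

wt = WaveletTree("mississippi")
print(wt.rank("s", 8))   # 4 occurrences of 's' in "mississi"
```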
- Christopher Spinrath: Parallel-Correctness for Datalog Programs
- Abstract: Deciding parallel-correctness is a natural problem for parallel query evaluation: In a nutshell, given a query and an input database divided among multiple servers, does the evaluation of the query on the servers in parallel yield the same result as evaluating the query on the complete database?
Recently, Ketsman et al. started the investigation of the parallel evaluation of recursive queries in the Massively Parallel Communication (MPC) model. Among other things, it was shown that parallel-correctness for general Datalog programs is undecidable, by a reduction from the undecidable containment problem for Datalog.
In this talk, we discuss that the undecidability of parallel-correctness runs deeper: it already holds for fragments of Datalog with a decidable containment problem, e.g., monadic and frontier-guarded Datalog; even under relatively simple evaluation strategies. These simple evaluation strategies are defined w.r.t. data-moving distribution constraints. We then show that parallel-correctness for frontier-guarded Datalog and restrictions of data-moving distribution constraints is 2EXPTIME-complete.
Interestingly, distributed evaluation no longer preserves the usual containment relationships between fragments of Datalog. Indeed, not every monadic Datalog program is effectively equivalent to a frontier-guarded one in the distributed setting, although this holds in the classical setting. We illustrate the latter by considering another setting where parallel-correctness is decidable for frontier-guarded Datalog but undecidable for monadic Datalog.
- based on joint work with Frank Neven, Thomas Schwentick and Brecht Vandevoort
- Marco Wilhelm: The Probabilistic Description Logic ALC^ME
- Abstract: We present ALC^ME, a probabilistic variant of the Description Logic ALC that allows for representing and processing conditional statements of the form "If A holds, then B follows with probability p." Probabilities are understood as degrees of belief and formally interpreted by the aggregating semantics. For reasoning with knowledge bases in ALC^ME, the principle of maximum entropy provides a valuable methodology following the idea of completing missing information in a most cautious way. We show that the major reasoning tasks of checking consistency of a knowledge base, calculating approximations of the maximum entropy distribution, and drawing inferences are possible in time polynomial in the domain size.
- based on joint work with Gabriele Kern-Isberner, Andreas Ecke, and Franz Baader
MW 33, July 16, 2018
- Nils Vortmeier: Reachability and Distances under Multiple Changes
- Abstract: Recently it was shown that the transitive closure of a directed graph can be updated using first-order formulas after insertions and deletions of single edges in the dynamic descriptive complexity framework by Dong, Su, and Topor, and Patnaik and Immerman. In other words, Reachability is in DynFO. In this talk we extend the framework to changes of multiple edges at a time, and study the Reachability and Distance queries under these changes. We show that the former problem can be maintained in DynFO(+, x) under changes affecting O(log n / log log n) nodes, for graphs with n nodes. If the update formulas may use a majority quantifier then both Reachability and Distance can be maintained under changes that affect O(log^c n) nodes, for any fixed natural number c.
- based on joint work with Samir Datta, Anish Mukherjee, and Thomas Zeume presented at ICALP 2018
- Anish Mukherjee: Shortest k-Disjoint Paths via Determinants
- Abstract: The well-known $k$-disjoint path problem ($k$-DPP) asks for pairwise vertex-disjoint paths between $k$ specified pairs of vertices in a given graph. The shortest $k$-DPP asks to find such paths of shortest total length. We restrict attention to shortest $k$-DPP instances on undirected planar graphs where all sources and sinks lie on a single face. We provide efficient sequential and parallel algorithms for the problem, which go via counting such solutions, for any fixed $k$. This partly answers an open question of Colin de Verdière and Schrijver '08. Previously, efficient algorithms were known only for the well-ordered case and for the general configuration when $k \le 4$. The algorithms are based on a bijection between a shortest $k$-tuple of disjoint paths and cycle covers. This allows us to non-trivially modify techniques relating counting cycle covers to the determinant computation. We further need to do a controlled inclusion-exclusion to produce a sum of determinants allowing us to count only the "good" cycle covers.
- based on joint work with Samir Datta, Raghav Kulkarni, and Siddharth Iyer.
- Gaetano Geck: Introduction to Iltis: An Interactive, Web-Based System for Teaching Logic
- Abstract: Logic is a foundation for many modern areas of computer science: modelling scenarios using logical formalisms and inferring new knowledge are important skills for future computer scientists. The Iltis project aims at providing a web-based, interactive system that supports teaching logical methods. In particular, the system shall (a) support learning how to model knowledge and to infer new knowledge using propositional logic, modal logic and first-order logic, and (b) provide immediate feedback and support to students. In this talk, we present a prototypical system that currently supports the above tasks for propositional logic. First impressions on its use in a second-year logic course for computer science students are reported.
- based on joint work with Artur Ljulin, Sebastian Peter, Jonas Schmidt, Fabian Vehlken, and Thomas Zeume
MW 32, January 29, 2018
- Dominik Köppl: Indexing the Bijective BWT
- Abstract: A text index is a data structure built over an input text supporting efficient retrieval of all occurrences of a given pattern without traversing the whole text. Many text indices contain sufficient information to retrieve a substring of the text, such that they can replace the text itself to reduce the space consumption. One efficient kind of these indices is the FM-index, which is built on the traditional Burrows-Wheeler transform (BWT). We propose a self-index that works like the FM-index, but is built on the bijective BWT instead of the (traditional) BWT. Like the FM-index, this index supports efficient backward searching.
- joint work with Hideo Bannai, Juha Kärkkäinen, and Marcin Piatkowski
- Ioannis Kokkinis: The Expected Duration of Sequential Gossiping
- Abstract: A gossip protocol aims at arriving, by means of point-to-point communications (or telephone calls), at a situation in which every agent knows all the information initially present in the network. If it is impossible to have more than one call at the same time, the protocol is called sequential. When the calls happen at random, it makes sense to study the expected duration of the protocols. In this talk we study the expected duration of 5 sequential gossip protocols: we present an algorithm that calculates the exact value of the expected duration for a small number of agents and we show how bounds for the asymptotic behavior of the expected duration can be computed. In order to obtain our results we apply techniques from random graph theory.
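- A small Monte Carlo sketch of the setting (the talk computes exact expected durations and asymptotic bounds rather than simulating): agents repeatedly make uniformly random calls and exchange everything they know, and we count the calls until everyone knows everything. The protocol simulated here is just one simple example, not necessarily one of the five protocols of the talk.
```python
import random

def gossip_calls(n, rng):
    # One run: agent i initially knows only secret i; a call merges the two
    # agents' knowledge; stop when all agents know all n secrets.
    knows = [{i} for i in range(n)]
    calls = 0
    while any(len(k) < n for k in knows):
        a, b = rng.sample(range(n), 2)
        knows[a] = knows[b] = knows[a] | knows[b]
        calls += 1
    return calls

runs = [gossip_calls(5, random.Random(seed)) for seed in range(5000)]
print(sum(runs) / len(runs))   # Monte Carlo estimate of the expected duration for 5 agents
```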
- Jian-Jia Chen: Open problem: Exact worst-case response time analysis for self-suspending sporadic tasks
- Abstract: In computing systems, an execution entity (job/process/task) may suspend itself when it has to wait for some activities to continue/finish its execution. Our recent study in RTSS 2016 shows that, given a priority ordering of sporadic real-time tasks, validating whether the lowest-priority task can meet its deadline or not is co-NP-hard in the strong sense. There were several attempts in the literature using mixed integer linear programming (MILP) to solve this problem. However, none of them can provide exact solutions even with exponential time complexity. This talk will provide a short introduction to the above-mentioned problem and explain why the existing solutions are in fact over-approximations. The talk hopes to initiate collaborations to solve this long-standing problem in real-time systems (regardless of time complexity).
MW 31, July 24, 2017, Chair 1
- Johannes Fischer: Lempel-Ziv Compression in a Sliding Window
- Abstract: We present new algorithms for the sliding window Lempel-Ziv (LZ77) problem and the approximate rightmost LZ77 parsing problem. Our main result is a new and surprisingly simple algorithm that computes the sliding window LZ77 parse in O(w) space and either O(n) expected time or O(n log log w + z log log σ) deterministic time. Here, w is the window size, n is the size of the input string, z is the number of phrases in the parse, and σ is the size of the alphabet. This matches the space and time bounds of previous results while removing constant size restrictions on the alphabet size. To achieve our result, we combine a simple modification and augmentation of the suffix tree with periodicity properties of sliding windows. We also apply this new technique to obtain an algorithm for the approximate rightmost LZ77 problem that uses O(n(log z + log log n)) time and O(n) space and produces a (1+ε)-approximation of the rightmost parsing (any constant ε>0). While this does not improve the best known time-space trade-offs for exact rightmost parsing, our algorithm is significantly simpler and exposes a direct connection between sliding window parsing and the approximate rightmost matching problem.
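- For reference, a definitional (quadratic-time) sketch of the greedy sliding-window LZ77 parse; the algorithms of the talk compute this parse in O(w) space and (near-)linear time. The phrase encoding below (literal characters plus (distance, length) references) is one common variant and an assumption, not necessarily the exact format of the paper.
```python
def lz77_sliding_window(text, w):
    # Greedy parse: at position i, take the longest match starting within the
    # last w positions (self-overlapping matches allowed), else emit a literal.
    phrases, i, n = [], 0, len(text)
    while i < n:
        best_len, best_src = 0, None
        for j in range(max(0, i - w), i):
            l = 0
            while i + l < n and text[j + l] == text[i + l]:
                l += 1
            if l > best_len:
                best_len, best_src = l, j
        if best_len == 0:
            phrases.append(text[i])                    # literal phrase
            i += 1
        else:
            phrases.append((i - best_src, best_len))   # (distance, length) phrase
            i += best_len
    return phrases

print(lz77_sliding_window("abababbbbb", w=4))   # ['a', 'b', (2, 4), (1, 4)]
```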
- Thomas Liebig: Dynamic Transfer Patterns for Fast Multi-modal Route Planning
- Abstract: Route planning makes direct use of geographic data and provides beneficial recommendations to the public. In the real world, the schedule of transit vehicles is dynamic and delays in the schedules occur. Incorporating these dynamic schedule changes in multi-modal route computation is difficult and requires a lot of computational resources. Our approach extends the state of the art for static transit schedules, Transfer Patterns, to the dynamic case. To this end, we amend the patterns by additional edges that cover the dynamics. Our approach is implemented in the open-source routing framework OpenTripPlanner and compared to existing methods in the city of Warsaw. Our approach is an order of magnitude faster than existing methods.
- Joint work with Sebastian Peter, Maciej Grzenda and Konstanty Junosza-Szaniawski
- Nils Vortmeier: A Strategy for Dynamic Programs: Start over and Muddle through
- Abstract: A strategy for constructing dynamic programs is introduced that utilises periodic computation of auxiliary data from scratch and the ability to maintain a query for a limited number of change steps. It is established that if some program can maintain a query for log n change steps after an AC^1-computable initialisation, it can be maintained by a first-order dynamic program as well, i.e., in DynFO. As an application, it is shown that decision and optimisation problems defined by monadic second-order (MSO) and guarded second-order logic (GSO) formulas are in DynFO, if only change sequences that produce graphs of bounded treewidth are allowed. To establish this result, Feferman–Vaught-type composition theorems for MSO and GSO are established that might be useful in their own right.
- Joint work with Samir Datta, Anish Mukherjee, Thomas Schwentick and Thomas Zeume
MW 30, January 19, 2017, Chair 1
- Daan Apeldoorn: Exploiting Explicit Knowledge in the Context of Learning Agents
- Abstract: According to psychological models, learned knowledge can be distinguished into implicit and explicit knowledge. The former can be exploited (e.g., to solve a task), but it cannot be verbalized easily (e.g., to explain it to another person). The latter is accessible in an explicit form: it can comprise generalized, rule-based knowledge which can be verbalized and explained to others. During a learning process, the learned knowledge starts in implicit form and gets explicit as the learning process progresses, and humans can benefit from exploiting such generalized, rule-based knowledge when learning. In this talk, the incorporation of implicit and explicit knowledge is investigated in the context of learning agents. It will be shown experimentally that learning agents are also able to benefit from explicit knowledge, even in early phases of a learning process.
- Joint work with Gabriele Kern-Isberner
- Henrik Björklund (Umeå Universitet, Sweden): Restricted Hyperedge Replacement Grammars and Parsing Complexity
- Abstract: Hyperedge Replacement Grammars are a useful and expressive formalism for generating graph languages. Unfortunately, the uniform parsing problem for such grammars is NP-hard. We investigate restrictions which allow polynomial time parsing while still retaining enough expressive power to generate interesting languages. In particular, our search for suitable restrictions is guided by applications in natural language processing.
- Dominik Köppl: Computing All Distinct Squares in Linear Time for Integer Alphabets
- Abstract: Given a string on an integer alphabet, we present an algorithm that computes the set of all distinct squares belonging to this string in time linear in the string length. As an application, we show how to compute the tree topology of the minimal augmented suffix tree in linear time. Aside from that, we present an algorithm computing the longest previous factor table in a succinct representation using compressed working space.
- Joint work with Hideo Bannai and Shunsuke Inenaga
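- The object of study, illustrated by a naive enumeration of all distinct squares (roughly cubic time; the talk achieves linear time on integer alphabets):
```python
def distinct_squares(s):
    # A square is a factor of the form xx; collect each distinct one.
    squares = set()
    n = len(s)
    for i in range(n):
        for l in range(1, (n - i) // 2 + 1):
            if s[i:i + l] == s[i + l:i + 2 * l]:
                squares.add(s[i:i + 2 * l])
    return squares

print(distinct_squares("aabaab"))   # {'aa', 'aabaab'}
```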
MW 29, June 27, 2016, Chair 1
- Andre Droschinsky: Faster Algorithms for the Maximum Common Subtree Isomorphism Problem
- Abstract: The maximum common subtree isomorphism problem asks for the largest possible isomorphism between subtrees of two given input trees. This problem is a natural restriction of the maximum common subgraph problem, which is NP-hard in general graphs. Confining to trees renders polynomial time algorithms possible and is of fundamental importance for approaches on more general graph classes. Various variants of this problem in trees have been intensively studied. We consider the general case, where trees are neither rooted nor ordered and the isomorphism is maximum w.r.t. a weight function on the mapped vertices and edges. For trees of order n and maximum degree D our algorithm achieves a running time of O(n^2 D) by exploiting the structure of the matching instances arising as subproblems. Thus our algorithm outperforms the best previously known approaches. No faster algorithm is possible for trees of bounded degree, and for trees of unbounded degree we show that a further reduction of the running time would directly improve the best known approach to the assignment problem. Combining a polynomial-delay algorithm for the enumeration of all maximum common subtree isomorphisms with central ideas of our new algorithm leads to an improvement of its running time from O(n^6 + T n^2) to O(n^3 + T n D), where n is the order of the larger tree, T is the number of different solutions, and D is the minimum of the maximum degrees of the input trees. Our theoretical results are supplemented by an experimental evaluation on synthetic and real-world instances.
- Joint work with Nils M. Kriege and Petra Mutzel
- Anish Mukherjee (Chennai Mathematical Institute): Space-efficient Approximation Scheme for Maximum Matching in Sparse Graphs
- Abstract: We present a Logspace Approximation Scheme (LSAS), i.e. an approximation algorithm for maximum matching in planar graphs (not necessarily bipartite) that achieves an approximation ratio arbitrarily close to one, using only logarithmic space. This deviates from the well known Baker’s approach for approximation in planar graphs by avoiding the use of distance computation - which is not known to be in Logspace. Our algorithm actually works for any “recursively sparse” graph class which contains a linear size matching and also for certain other classes like bounded genus graphs.
The scheme is based on an LSAS in bounded degree graphs which are not known to be amenable to Baker’s method. We solve the bounded degree case by parallel augmentation of short augmenting paths. Finding a large number of such disjoint paths can, in turn, be reduced to finding a large independent set in a bounded degree graph. The bounded degree assumption allows us to obtain a Logspace algorithm.
- Joint work with Samir Datta and Raghav Kulkarni
- Martin Schuster: Transducer-based Rewriting Games for Active XML
- Abstract: Context-free games are two-player rewriting games that are played on nested strings representing XML documents with embedded function symbols. These games were introduced to model rewriting processes for intensional documents in the Active XML framework, where input documents are to be rewritten into a given target schema by calls to external services.
This talk is based on a paper which studies the setting where dependencies between inputs and outputs of service calls are modelled by transducers. The paper defines transducer models operating on nested words and studies their properties, as well as the computational complexity of the winning problem for transducer-based context-free games in several scenarios. While the complexity of this problem is quite high in most settings (ranging from NP-complete to undecidable), some tractable restrictions are also identified.
MW 28, December 10, 2015, Chair 1
- Christian Eichhorn: Sceptical Inference Based on C-representations and its Characterization as a Constraint Satisfaction Problem
- Abstract: The axiomatic system P is an important standard for plausible, nonmonotonic inferences that is, however, known to be too weak to solve benchmark problems like irrelevance, or subclass inheritance (so-called drowning problem). Spohn's ranking functions which provide a semantic base for system P have often been used to design stronger inference relations, like Pearl's system Z, or c-representations. While each c-representation shows excellent inference properties and handles particularly irrelevance and subclass inheritance properly, it is still an open problem which c-representation is the best. In this paper, we focus on the generic properties of c-representations and consider the sceptical inference relation (c-inference) that is obtained by taking all c-representations of a given knowledge base into account. In particular, we show that c-inference preserves the properties of solving irrelevance and subclass inheritance which are met by every single c-representation. Moreover, we characterize sceptical c-inference as a constraint satisfaction problem so that constraint solvers can be used for its implementation.
- Joint work with Christoph Beierle and Gabriele Kern-Isberner
- Joachim Biskup: Optimality and Complexity of Inference-Proof Data Filtering and CQE
- Abstract: The ample literature on confidentiality-preserving data publishing - and controlled query evaluation (CQE) in particular - leaves several questions open. Are the greedy data-filtering algorithms adopted in the literature maximally cooperative? Can novel secure view formats or answer distortion methods improve security or cooperativeness? What is the inherent complexity of confidentiality-preserving data publishing under different constraints, such as cooperativeness and availability? Can the theoretical results on CQE be systematically extended to more general settings? In this paper we answer the above questions using a completely generic, abstract data filtering framework, independent from any syntactic details and data source encodings, and compatible with all possible distortion methods. Some of the main results are: Refusal-based filterings can be adopted as a normal form for all kinds of filterings; greedy refusal-based filterings are optimal; cooperativeness checks and some availability checks are coNP-hard in the simplest case.
- Joint work with Piero A. Bonatti, Clemente Galdi, and Luigi Sauro
- Dominik Köppl: Efficiently Finding All Maximal α-gapped Repeats
- Abstract: For α≥1, an α-gapped repeat in a word w is a factor uvu of w such that |uv|≤α|u|; the two factors u in such a repeat are called arms, while the factor v is called gap. Such a repeat is called maximal if its arms cannot be extended simultaneously with the same symbol to the right or, respectively, to the left. In this paper we show that the number of maximal α-gapped repeats that may occur in a word is upper bounded by 18αn. This allows us to construct an algorithm finding all the maximal α-gapped repeats of a word in O(αn); this is optimal, in the worst case, as there are words that have Θ(αn) maximal α-gapped repeats. Our techniques can be extended to get comparable results in the case of α-gapped palindromes, i.e., factors uvu^T with |uv|≤α|u|.
- Joint work with Paweł Gawrychowski, Tomohiro I, Shunsuke Inenaga, and Florin Manea
MW 27, June 10th, 2015, Chair 1
- Claudio Moraga: Design of circuits for Reversible Computing
- Abstract: Reversible digital circuits are characterized by a lower heat dissipation as compared with classical digital circuits. Therefore they are attracting increasing interest as the saturation of Moore's Law approaches. Furthermore, in the quantum mechanics formalism all quantum computing operations must be reversible: elementary circuits ("gates") realize unitary matrices. Advances in the design of reversible computing circuits are therefore also important from the perspective of quantum computing, in the hope that a "quantum technology" will some day be available. Since it has been shown that deciding whether a reversible circuit with a bounded number of auxiliary lines is realizable with a given number of gates is NP-complete, heuristics for the design are being developed. Particular aspects of the design of reversible digital circuits will be covered in the talk.
- Johannes Fischer: Computing and Approximating the Lempel-Ziv-77 Factorization in Small Space
- Abstract: The Lempel-Ziv-77 algorithm greedily factorizes a text of length n into z maximal substrings that have previous occurrences, which is particularly useful for text compression. We review two recent algorithms for this task:
- A linear-time algorithm using essentially only one integer array of length n in addition to the text.
- An even more space-conscious algorithm using O(z) space, computing a 2-approximation of the LZ77 parse in O(n lg n) time w.h.p.
- Gaetano Geck: Parallel-Correctness and Transferability for Conjunctive Queries
- Abstract: A dominant cost for query evaluation in modern massively distributed systems is the number of communication rounds. For this reason, there is a growing interest in single-round multiway join algorithms where data is first reshuffled over many servers and then evaluated in a parallel but communication-free way. The reshuffling itself is specified as a distribution policy. We introduce a correctness condition, called parallel-correctness, for the evaluation of queries w.r.t. a distribution policy. We study the complexity of parallel-correctness for conjunctive queries as well as transferability of parallel-correctness between queries. We also investigate the complexity of transferability for certain families of distribution policies, including, for instance, the Hypercube distribution.
MW 26, December 8th, 2014, Chair 1
- Christian Eichhorn: LEG networks for ranking functions
(joint work with Gabriele Kern-Isberner)
- Abstract: When using representations of plausibility for semantical frameworks, the storage capacity needed is usually exponential in the number of variables. Therefore, network-based approaches that decompose the semantical space have proven to be fruitful in environments with probabilistic information. For applications where more qualitative information is preferable to quantitative information, ordinal conditional functions (OCF) offer a convenient methodology. Here, Bayesian-like networks have been proposed for ranking functions, so-called OCF-networks. These networks not only suffer from similar problems as Bayesian networks, in particular allowing only restricted classes of conditional relationships; it has also been found recently that problems with admissibility may arise. In this paper we propose LEG networks for ranking functions, also carrying over an idea from probabilistics. OCF-LEG networks can be built for any conditional knowledge base and filled by local OCFs that can be found by inductive reasoning. A global OCF is set up from the local ones, and it is shown that the global OCF is admissible with respect to the underlying knowledge base.
- Henrik Björklund: Abstract Meaning Representations and Graph Grammars
- Abstract: Abstract Meaning Representation (AMR) is a formalism for describing the semantics of natural language sentences as directed acyclic graphs. We discuss various types of graph grammars for generating AMRs and the computational complexity of their corresponding parsing problems.
- Thomas Zeume: Maintaining Reachability in Dynamically Changing Graphs using First-Order Logic
(joint work with Samir Datta, Raghav Kulkarni, Anish Mukherjee and Thomas Schwentick)
- Abstract: In this talk we study the dynamic complexity of Reachability. As elementary change operations we allow insertion and deletion of edges of a graph. We show that Reachability is in DynFO, where DynFO allows first-order definable updates of a polynomial-size auxiliary data structure. This result confirms a two-decade-old conjecture of Immerman and Patnaik (1997).
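- To illustrate what a first-order update looks like, the sketch below maintains the reflexive transitive closure under edge insertions with the classical formula T'(x,y) = T(x,y) or (T(x,u) and T(v,y)). Handling edge deletions, which is the hard part settled by the DynFO result of the talk, is not shown.
```python
def insert_edge(tc, u, v, nodes):
    # First-order update of the transitive closure T under insertion of (u, v):
    # T'(x, y) holds iff T(x, y), or x reaches u and v reaches y.
    return {(x, y) for x in nodes for y in nodes
            if (x, y) in tc or ((x, u) in tc and (v, y) in tc)}

nodes = range(4)
tc = {(x, x) for x in nodes}             # closure of the empty graph (reflexive)
for u, v in [(0, 1), (1, 2), (2, 3)]:
    tc = insert_edge(tc, u, v, nodes)
print((0, 3) in tc)                       # True
```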
MW 25, June 30th, 2014, Chair 1
- Wim Martens: Efficient Separability of Regular Languages by Subsequences and Suffixes
(joint work with Wojciech Czerwiński and Tomáš Masopust)
- Abstract: When can two regular word languages K and L be separated by a simple language? We investigate this question and consider separation by piecewise- and suffix-testable languages and variants thereof. We give characterizations of when two languages can be separated and present an overview of when these problems can be decided in polynomial time if K and L are given by nondeterministic automata.
- Samir Datta: Dynamic Complexity of Reachability and Matching
(joint work with William Hesse and Raghav Kulkarni)
- Abstract: We study some well-known graph problems such as reachability and matching in the context of dynamic complexity. In particular, we show the maintainability of:
(a) maximum matching in non-uniform DynTC^0,
(b) digraph reachability in non-uniform DynAC^0[2], and,
(c) embedded planar digraph reachability in DynFO.
For (a) we show that the technique from [Hesse] to maintain directed reachability in DynTC^0 can in fact be generalized using the characterization of determinants by [MahajanVinay] to maintain the determinant of a matrix in DynTC^0.
For (b) we extend this technique with the help of two more ingredients namely isolation (cf. [AllenderReinhardtZhou]) and truncated approximation using rational polynomials to achieve a DynAC^0[2] bound.
For (c) we use the duality between cuts and cycles in embedded planar graphs to maintain the number of crossings between carefully chosen primal and dual paths.
- Sven Rahmann: Non-Negative Matrix Factorization: A Need for Theory and Exact Algorithms
- Abstract: see here
MW 24, November 18th, 2013, Chair 1
- Matthias Niewerth: Reasoning about XML Constraints based on XML-to-relational mappings
(joint work with Thomas Schwentick)
- Abstract: We introduce a simple framework for the specification of constraints for XML documents in which constraints are specified by (1) a mapping that extracts a relation from every XML document and (2) a relational constraint on the resulting relation. The mapping has to be generic with respect to the actual data values and the relational constraints can be of any kind. Besides giving a general undecidability result for first-order definable mappings and a general decidability result for MSO definable mappings for restricted functional dependencies, we also study the complexity of the implication problem for XML constraints that are specified by tree pattern queries and functional dependencies. Furthermore, we highlight how other specification languages for XML constraints can be formulated in the framework.
- Jakob Rehof: Subtype entailment
- Abstract: The subtype entailment problem, which has been open since 1996, asks whether the following decision problem is decidable: Given a finite set C of subtyping constraints (partial order constraints between simple type expressions) and a subtyping constraint c, does the set C entail the constraint c (in the sense that every valuation satisfying all constraints in C must also satisfy c)? Entailment is considered over a particular partial order (subtyping relation) on trees, and valuations map type variables to trees. In the talk we will focus on a reduction of the entailment problem to the universality problem for a special pushdown automaton model.
- Kristian Kersting: Towards Spectral Color Refinement (joint work with Martin Mladenov and Martin Grohe)
- Abstract: Colour refinement is a basic algorithmic routine for graph isomorphism testing that can be used as a subroutine in almost all practical isomorphism solvers. It partitions the vertices of a graph into "colour classes" in such a way that all vertices in the same colour class have the same number of neighbours in every colour class. Recently it has been shown to also have major implications for inference within probabilistic models. For instance, applied to loopy belief propagation (LBP), it partitions the random variables of the graphical model in such a way that all variables in the same colour class have the same computation trees. Consequently, one can construct a lifted model, where each node represents a set of random variables that all pass the same messages during LBP. Running LBP on this network can be significantly faster.
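- A compact sketch of the colour refinement routine described above (illustrative only; the adjacency-dict interface is an ad-hoc assumption):
```python
def colour_refinement(adj):
    # adj: dict mapping each vertex to its set of neighbours.
    # Refine colours until any two equally coloured vertices have the same
    # multiset of neighbour colours (1-dimensional Weisfeiler-Leman).
    colour = {v: 0 for v in adj}
    while True:
        sig = {v: (colour[v], tuple(sorted(colour[u] for u in adj[v]))) for v in adj}
        ids = {s: i for i, s in enumerate(sorted(set(sig.values())))}
        new_colour = {v: ids[sig[v]] for v in adj}
        if new_colour == colour:
            return colour
        colour = new_colour

# A path on 4 vertices: the endpoints and the two middle vertices form the classes.
path = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
print(colour_refinement(path))   # {0: 0, 1: 1, 2: 1, 3: 0}
```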
MW 23, June 24th, 2013, Chair 1
- Ahmet Kara: Dynamic Communicating Automata and Branching High-Level MSCs
(joint work with Benedikt Bollig, Aiswarya Cyriac, Loïc Hélouët and Thomas Schwentick)
- Abstract: We study dynamic communicating automata (DCA), an extension of classical communicating finite-state machines that allows for dynamic creation of processes. The behavior of a DCA can be described as a set of message sequence charts (MSCs). While DCA serve as a model of an implementation, we propose branching high-level MSCs (bHMSCs) on the specification side. Our focus is on the implementability problem: given a bHMSC, can one construct an equivalent DCA? As this problem is undecidable, we introduce the notion of executability, a decidable necessary criterion for implementability. We show that executability of bHMSCs is EXPTIME-complete. We then identify a class of bHMSCs for which executability effectively implies implementability.
- Marc Gillé: OBDD-Based Representation of Interval Graphs
- Abstract: Symbolic/implicit OBDD-based graph algorithms deal with the characteristic function χ_E of the edge set of a graph G = (V, E), given as an OBDD, to get a (hopefully) compact representation and solve graph optimization problems by mainly using functional operations. While the representation size cannot be small in general, the idea is that it can be provably small for special graph classes and then also lead to fast algorithms. In this paper, we show that the OBDD size of unit interval graphs is O(|V|/log |V|) and the OBDD size of interval graphs is O(|V| log |V|), which both improve a known result from Nunkesser and Woelfel [23]. Furthermore, we can show that, using our variable order and node labeling for interval graphs, the worst-case OBDD size is Ω(|V| log |V|). We use the structure of the adjacency matrices to prove these bounds. This method may be of independent interest and can be applied to other graph classes. We also develop a maximum matching algorithm on unit interval graphs using O(log |V|) operations and evaluate this algorithm empirically.
- Marco Wilhelm: Probabilistic Knowledge Representation Using Gröbner Basis Theory
(joint work with Gabriele Kern-Isberner and Christoph Beierle)
- Abstract: An often used methodology for reasoning with probabilistic conditional knowledge bases is provided by the principle of maximum entropy (so-called MaxEnt principle) that realises an idea of informational economy. We exploit the fact that MaxEnt distributions can be computed by solving nonlinear equation systems that reflect the conditional logical structure of these distributions. Further, we apply the theory of Gröbner bases that is well known from computational algebra to the polynomial system which is associated with a MaxEnt distribution, in order to obtain results for reasoning with maximum entropy. A necessary condition for knowledge bases to be consistent, as well as approaches to answering MaxEnt queries, can be presented. Finally, we discuss computational methods to establish general MaxEnt inference rules.
MW 22, February 20th, 2013, Chair 1
- Melanie Schmidt: Coresets, k-means Clustering and Projective Clustering
- Abstract: Coresets are a technique to compress huge amounts of data in a way that still allows us to solve a specific optimization problem. The k-means clustering problem is a geometric clustering problem, where we are given a set of points P in Euclidean space and ask for k centers that minimize the sum of the squared distances of every point to its closest center. Here, a coreset is a (small) weighted set of points that has approximately the same cost as the original input for every set of centers. This implies that solving the problem on the coreset yields an approximation for the original problem. The coreset presented in this talk is of constant size; more specifically, the number of points in the coreset only depends on k and the precision parameter.
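- To fix notation, the sketch below spells out the (weighted) k-means cost that a coreset has to preserve for every choice of k centers; this is only the definition from the abstract, not the coreset construction of the talk.
```python
import numpy as np

def kmeans_cost(points, centers, weights=None):
    # Sum over all points of weight times squared distance to the nearest center.
    points = np.asarray(points, dtype=float)
    centers = np.asarray(centers, dtype=float)
    if weights is None:
        weights = np.ones(len(points))
    sq_dists = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return float((weights * sq_dists.min(axis=1)).sum())

# A weighted set W is a (1 + eps)-coreset of P if, for every set C of k centers,
# the weighted cost of W is within a (1 + eps) factor of the cost of P.
P = np.random.default_rng(0).normal(size=(1000, 2))
print(kmeans_cost(P, centers=[[0.0, 0.0], [1.0, 1.0]]))
```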
- Moritz Martens: NP-Completeness of Intersection Type Matching
- Abstract: Type matching problems occur in a number of contexts, including library search, component composition, and inhabitation. The intersection type matching problem under the standard notion of subtyping for intersection types will be considered: Given types tau and sigma, where sigma is a constant type, does there exist a type substitution S such that S(tau) is a subtype of sigma? We show that the matching problem is NP-complete. NP-hardness holds already for the restriction to atomic substitutions. Membership in NP is challenging, and we provide an algorithm engineered for efficiency. Our algorithm is based on a nondeterministic polynomial time normalization procedure for subtype constraint systems with intersection types.
- Thomas Zeume: On the quantifier-free dynamic complexity of Reachability
(joint work with Thomas Schwentick)
- Abstract: The dynamic complexity of the reachability query is studied in the dynamic complexity framework of Patnaik and Immerman, restricted to quantifier-free update formulas. We show that, with this restriction, the reachability query cannot be dynamically maintained, neither with binary auxiliary relations nor with unary auxiliary functions, and that ternary auxiliary relations are more powerful with respect to graph queries than binary auxiliary relations. We also establish that Reachability cannot be maintained with auxiliary relations or functions of arbitrary arity in the scenario where update sequences are applied to a non-empty graph and the initialization of the auxiliary relations is permutation-invariant. Furthermore, the inexpressibility results are transferred to some other queries and some normal forms are established.
MW 21, November 26th, 2012, Chair 1
- Joachim Biskup: Signature-Based Inference-Usability Confinement for Relational Databases under Functional and Join Dependencies
- Abstract: Inference control of queries for relational databases confines the information content and thus the usability of data returned to a client, aiming to keep some pieces of information confidential as specified in a policy, in particular for the sake of privacy. In general, there is a tradeoff between the following factors: on the one hand, the expressiveness offered to administrators to declare a schema, a confidentiality policy and assumptions about a client’s a priori knowledge; on the other hand, the computational complexity of a provably confidentiality preserving enforcement mechanism. We propose and investigate a new balanced solution for a widely applicable situation: we admit relational schemas with functional and join dependencies, which are also treated as a priori knowledge, and select-project sentences for policies and queries; we design an efficient signature-based enforcement mechanism that we implement for an Oracle/SQL-system. At declaration time, the inference signatures are compiled from an analysis of all possible crucial inferences, and at run time they are employed like in the field of intrusion detection.
- Martin Schuster: On optimum left-to-right strategies for context-free games
- Abstract: Active context-free games are two-player games on strings over finite alphabets with one player trying to rewrite the input string to match a target specification. These games have been investigated in the context of exchanging Active XML (AXML) data. While it was known that the rewriting problem is undecidable in general, it is shown here that it is EXPSPACE-complete to decide for a given context-free game, whether all safely rewritable strings can be safely rewritten in a left-to-right manner, a problem that was previously considered by Abiteboul et al. Furthermore, it is shown that the corresponding problem for games with finite replacement languages is EXPTIME-complete.
- Ingo Battenfeld: Observationally induced effects in cartesian closed categories
- Abstract: Alex Simpson has suggested an observationally-induced approach towards obtaining monads for computational effects in denotational semantics. The underlying idea of this approach is to use a single observation algebra as computational prototype and to obtain a computational monad as a free algebra construction derived from this prototype. Recently, it has been shown that free observationally-induced algebras exist in the category of continuous maps between topological spaces for arbitrary pre-chosen computational prototypes. In this work we transfer these results to cartesian closed categories. In particular, we show that, provided the category under consideration satisfies suitable completeness conditions, it supports a free observationally-induced algebra construction for arbitrary computational prototypes. We also show that the free algebras are obtained as certain subobjects of double exponentials involving the computational prototype as result type. Finally, we apply these results to show that in topological domain theory an observationally-induced lower powerspace construction over a QCB-space X is given by the space of nonempty closed subsets of X topologised suitably.
MW 20, June 20th, 2012, Chair 1
- Gabriele Kern-Isberner: A Constructive Approach to Independent and Evidence Retaining Belief Revision by General Information Sets
(joint work with Patrick Krümpelmann)
- Abstract: Recent years have seen a lot of work towards extending the established AGM belief revision theory with respect to iterating revision, preserving conditional beliefs, and handling sets of propositions as new information. In particular, novel postulates like independence and evidence retainment have been brought forth as new standards for revising epistemic states by (sets of) propositional information. In this paper, we propose a constructive approach for revising epistemic states by sets of (propositional and conditional) beliefs that combines ideas from nonmonotonic reasoning with conditional belief revision. We also propose a novel principle called enforcement that covers both independence and evidence retainment, and we show our revision operator to comply with major postulates from the literature. Moreover, we point out the relevance of our approach for default reasoning.
- Boris Düdder: Bounded Combinatory Logic
(joint work with Moritz Martens, Jakob Rehof, and Paweł Urzyczyn)
- Abstract: In combinatory logic one usually assumes a fixed set of basic combinators (axiom schemes), usually K and S. In this setting the set of provable formulas (inhabited types) is PSPACE-complete in simple types and undecidable in intersection types. When arbitrary sets of axiom schemes are considered, the inhabitation problem is undecidable even in simple types (this is known as the Linial-Post theorem). Bounded combinatory logic (BCL_k) arises from combinatory logic by imposing the bound k on the depth of types (formulae) which may be substituted for type variables in axiom schemes. We consider the inhabitation (provability) problem for BCL_k: Given an arbitrary set of typed combinators and a type, is there a combinatory term of this type in k-bounded combinatory logic? Our main result is that the problem is (k + 2)-EXPTIME-complete for BCL_k with intersection types, for every fixed k (and hence non-elementary when k is a parameter). We also show that the problem is EXPTIME-complete for simple types, for all k. Theoretically, our results give new insight into the expressive power of intersection types. From an application perspective, our results are useful as a foundation for composition synthesis based on combinatory logic.
- Marianna D'Addario: Designing q-Unique DNA Sequences with Integer Linear Programs and Euler Tours in De Bruijn Graphs
- Abstract: DNA nanoarchitectures require carefully designed oligonucleotides with certain non-hybridization guarantees, which can be formalized as the q-uniqueness property on the sequence level. We study the optimization problem of finding a longest q-unique DNA sequence. We first present a convenient formulation as an integer linear program on the underlying De Bruijn graph that allows us to flexibly incorporate a variety of constraints; solution times for practically relevant values of q are short. We then provide additional insights into the problem structure using the quotient graph of the De Bruijn graph with respect to the equivalence relation of reverse complementarity. Specifically, for odd q the quotient graph is Eulerian, and finding a longest q-unique sequence is equivalent to finding an Euler tour, hence solved in linear time (with respect to the output string length). For even q, self-complementary edges complicate the problem, and the graph has to be Eulerized by deleting a minimum number of edges. Two sub-cases arise, for one of which we present a complete solution, while the other one remains open.
MW 19, February 22nd, 2012, Chair 1
- Chris Schwiegelshohn: Solving the minimum string cover problem
- Abstract: A string cover C of a set of strings S is a set of substrings from S such that every string in S can be written as a concatenation of the strings in C. Given costs assigned to each substring from S, the Minimum String Cover (MSC) problem asks for a cover of minimum total cost. This NP-hard problem has so far only been approached from a purely theoretical perspective. We propose a flow-based ILP formulation, whose structure is particularly favorable for a Lagrangian relaxation approach. By making use of the strong bounds obtained through a repeated shortest path computation in a branch-and-bound manner, we show for the first time that non-trivial MSC instances can be solved to provable optimality in reasonable time.
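One natural way to set up a flow-style formulation (sketched here for orientation; it is not claimed to be the exact formulation of the talk): introduce a selection variable $x_c \in \{0,1\}$ for every candidate substring $c$, and for every $s \in S$ a position graph with nodes $0, \dots, |s|$ and an arc from $i$ to $i + |c|$, with arc variable $f^s_{i,c} \in \{0,1\}$, whenever $c$ occurs in $s$ at position $i+1$. Then
\[
\min \; \sum_{c} \mathrm{cost}(c)\, x_c
\quad \text{s.t.} \quad
f^s \text{ routes one unit of flow from } 0 \text{ to } |s| \text{ for each } s \in S,
\qquad f^s_{i,c} \le x_c ,
\]
since a path from $0$ to $|s|$ in the position graph is exactly a decomposition of $s$ into selected substrings.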
- Sven Rahmann: What challenges does genome analysis pose for theoretical computer science?
- Abstract: We present current problems in genome informatics and discuss which fundamental theoretical questions (might) lie behind them.
- Ahmet Kara: Decidability Results for Infinite State Systems
-
Abstract: The positive achievements in the field of automatic verification of finite state systems have motivated research on algorithmic methods for infinite state systems.
In this talk we will be concerned with general conditions on infinite state systems that are sufficient for the termination of several algorithmic procedures. We will introduce well-structured transition systems (WSTSs) as defined by Finkel and Schnoebelen. These are infinite state systems with a well-quasi-ordering on the set of states that is compatible with the transition relation. For WSTSs it can be shown that algorithmic problems like reachability, i.e. the question whether a certain set of states is reachable from an initial state, are decidable. Many transition systems from various fields of computer science, such as Petri nets and lossy channel systems, can be seen as WSTSs. We will give some examples from current research where the concept of WSTSs helps to obtain decidability results.
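As a concrete, self-contained instance of the backward-reachability idea (a sketch written for this page, using vector addition systems, one standard example of a WSTS; the function names are ours): states are vectors over the naturals ordered componentwise, which is a well-quasi-ordering by Dickson's lemma, and the algorithm computes a finite basis of the upward-closed set of states from which a given target can be covered.

    def backward_coverability(init, target, transitions):
        """Can some state >= target (componentwise) be reached from init?
        init, target: tuples over N; transitions: tuples over Z added to the state."""
        def leq(u, v):
            return all(a <= b for a, b in zip(u, v))

        basis = {tuple(target)}              # minimal elements of an upward-closed set
        while True:
            new = set()
            for v in basis:
                for t in transitions:
                    # minimal u with u >= 0 and u + t >= v (predecessor of the up-set of v)
                    u = tuple(max(vi - ti, 0) for vi, ti in zip(v, t))
                    if not any(leq(b, u) for b in basis | new):
                        new.add(u)
            if not new:                      # fixpoint reached; termination follows from the wqo
                break
            basis |= new
            basis = {b for b in basis if not any(leq(c, b) and c != b for c in basis)}
        return any(leq(b, init) for b in basis)

    # One counter is decremented while another is incremented:
    assert backward_coverability((2, 0), (0, 2), [(-1, 1)])
    assert not backward_coverability((1, 0), (0, 2), [(-1, 1)])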
-
MW 18, August 9th, 2011, Chair 1
-
Gabriele Kern-Isberner: A Default Logical Semantics for Defeasible Argumentation
(joint work with Guillermo Simari)
Abstract: Defeasible argumentation and default reasoning are usually perceived as two similar, but distinct approaches to commonsense reasoning. In this paper, we combine these two fields by viewing (defeasible resp. default) rules as a common crucial part of both areas. We will make use of possible worlds semantics from default reasoning to provide examples for arguments, and carry over the notion of plausibility to the argumentative framework. Moreover, we base a priority relation between arguments on the tolerance partitioning of system Z and obtain a criterion phrased in system Z terms that ensures warrant in defeasible argumentation.
-
-
Melanie Schmidt: Adaptive Sampling for Clustering Problems
- Abstract: Clustering is the problem of partitioning a given set of objects into subsets called clusters such that objects in the same cluster are similar and objects in different clusters are dissimilar. The k-means problem is a well-known clustering problem where similarity is measured by squared Euclidean distance. The k-means algorithm is a very popular heuristic for this problem which unfortunately does not provide any quality guarantee. In this talk, we review recent improvements of the k-means algorithm by Arthur and Vassilvitskii and then consider possible directions for further research. In particular, we are interested in applications to a generalization of the k-means problem.
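The improvement by Arthur and Vassilvitskii mentioned above is presumably k-means++ (2007), whose seeding step is a form of adaptive sampling: each new center is drawn with probability proportional to the squared distance to the nearest center chosen so far. A minimal sketch (written for this page, not the talk's code):

    import random

    def kmeanspp_seeds(points, k, seed=0):
        """Choose k initial centers from a list of points (tuples) by D^2 sampling."""
        rng = random.Random(seed)
        centers = [rng.choice(points)]
        while len(centers) < k:
            # squared Euclidean distance of each point to its nearest chosen center
            d2 = [min(sum((xi - ci) ** 2 for xi, ci in zip(x, c)) for c in centers)
                  for x in points]
            centers.append(rng.choices(points, weights=d2, k=1)[0])
        return centers

    print(kmeanspp_seeds([(0, 0), (1, 0), (10, 10), (11, 10)], 2))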
-
Peter Padawitz: Algebraic, relational and order-based co/induction
-
Abstract: Algebraic induction is a proof rule for showing properties of initial models. Its soundness follows from the fact that initial models do not have proper substructures. Algebraic coinduction is a proof rule for showing equations in final models. Its soundness follows from the fact that final models do not have proper quotients. A proof by algebraic co/induction can be carried out by relational co/induction, which is a proof rule for showing properties of least resp. greatest relations (not only unary or binary ones) in arbitrary models. Relational co/induction specializes order-based co/induction to the lattice of interpretations of the predicates of the underlying signature. Order-based co/induction works in any complete lattice or co/chain-complete poset. As an example of the use of algebraic coinduction, we show how it simplifies proofs of language equivalence, in particular of conjectures stating that a given context-free grammar generates a given language.
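In the complete-lattice setting mentioned above, the two rules can be stated concisely: for a monotone $f$ on a complete lattice, the Knaster-Tarski theorem yields
\[
f(x) \le x \;\Longrightarrow\; \mu f \le x
\qquad\text{and}\qquad
x \le f(x) \;\Longrightarrow\; x \le \nu f ,
\]
the first being order-based induction (for the least fixpoint $\mu f$), the second order-based coinduction (for the greatest fixpoint $\nu f$).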
-
MW 17, May 4th, 2011, Chair 1
-
Thomas Schwentick: Two-variable logic and Key Constraints on Data Words
-
Abstract: We introduce key constraints for data words and show that it is decidable whether, for a given two-variable sentence $\varphi$ that can refer to the successor relation on positions and a set K of key constraints, there is a data string w that satisfies $\varphi$ and respects K. Here, the formula is allowed to refer to the successor relation but not to the linear order on the positions of the word. As a byproduct, a self-contained exposition of an algorithm that decides satisfiability of such formulas (without key constraints) in 2-NEXPTIME is given.
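For a flavour of the setting (a made-up example, not from the paper): writing $x \sim y$ for ``positions $x$ and $y$ carry the same data value'', the key constraint ``label $a$ is a key'' requires that distinct $a$-positions carry distinct data values,
\[
\forall x \, \forall y \, \bigl( a(x) \wedge a(y) \wedge x \sim y \;\rightarrow\; x = y \bigr),
\]
while $\varphi$ itself is a two-variable sentence that may additionally use the successor relation, e.g. $\forall x \, \bigl( a(x) \rightarrow \exists y \, ( \mathrm{succ}(x, y) \wedge b(y) ) \bigr)$.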
-
-
Johannes Köster: Protein Hypernetworks
-
Abstract: Protein-protein interactions are the fundamental building blocks of biochemical reaction systems underlying cellular functions. Importantly, the complexity and functionality of such systems emerge not from the protein interactions themselves but from the dependencies between these interactions. However, a comprehensive approach for the large-scale integration of such dependencies is still lacking.
We present an approach for endowing protein networks with interaction dependencies using propositional logic, thereby obtaining protein hypernetworks. We illustrate their potential by demonstrating that they improve protein complex prediction in yeast. By modeling the effects of protein perturbations in hypernetworks we derive a perturbation impact score, a novel measure of a protein's functional importance.
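A toy illustration of the encoding (hypothetical, not taken from the abstract): if the interactions $i_{AB}$ and $i_{AC}$ compete for the same binding site of protein $A$, this dependency can be recorded as the propositional constraint $i_{AB} \rightarrow \neg i_{AC}$, and predicted protein complexes are then required to satisfy all such constraints simultaneously.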
-
-
Patrick Krümpelmann: On Influence and Contractions in Defeasible Logic Programming
-
Abstract: We investigate the problem of contraction in Defeasible Logic Programming (DeLP), a logic-based approach to defeasible argumentation. We develop different notions of contraction based on both the different forms of entailment implicitly present in argumentation-based formalisms and the influence that literals exert in the reasoning process. We give translations of widely accepted rationality postulates for belief contraction to our framework. Moreover, we discuss the applicability of contraction for defeasible argumentation and the role of influence in this matter.
-
MW 16, January 19th, 2011, Chair 1
Speaker | Subject
Thomas Zeume, LS 1 | Temporal Logics on Words with Multiple Data Values | abstract
Henrik Björklund | Recognizing Shuffled Languages | abstract
Peter Padawitz, LS 1 | How to benefit from algebra when reasoning about languages | abstract
MW 15, July 14th, 2010, Chair 1
Speaker | Subject
Wim Martens, LS 1 | Simplifying XML Schema: Single-Type Approximations of Regular Tree Languages | abstract
Jan-Hendrik Lochner, LS 6 | Efficient Inference Control for Open Relational Database Queries | abstract
Matthias Thimm, LS 1 | Classification and Strategical Issues of Argumentation Games on Structured Argumentation Frameworks | abstract
MW 14, July 14th, 2010, Chair 1
Speaker | Subject
Frank Hellweg, LS 2 | Testing Euclidean Spanners | abstract
Thomas Schwentick, LS 1 | Two-Variable Logic with Two Order Relations | abstract
Christoph Schubert, LS 10 | Expressivity of modal logics over general measurable spaces | abstract
MW 13, February 23rd, 2010, Chair 1:
Speaker | Subject
Matthias Thimm, LS 1 | Relational Probabilistic Conditionals and Maximum Entropy | abstract
Tobias Marschall, LS 11 | Exact Analysis of Horspool's and Sunday's Pattern Matching Algorithms with Probabilistic Arithmetic Automata | abstract
Wim Martens, LS 1 | Schema Design for XML Repositories: Complexity and Tractability | abstract
MW 12, December 17th, 2008, Chair 1:
Speaker | Subject
Sven Rahmann, LS 11 | Efficient exhaustive motif discovery | abstract
Chunlai Zhou, Peking | Finite approximation of labelled Markov processes through filtration
Patrick Krümpelmann, LS 6 | Towards a general framework for updates in logic programming
MW 11, July 9th, 2008, Chair 1:
Speaker | Subject
Lena Wiese, LS 6 | preCQE: preprocessing for controlled query evaluation. Inference control in predicate-logic databases | abstract
Ernst-Erich Doberkat, LS 11 | Bisimilarity of distributionally equivalent Markov transition systems | abstract
Marcel Marquardt, LS 1 | Dynamic complexity of formal languages
MW 10, May 7th, 2008, Chair 1:
Speaker | Subject
Felix Klaedtke, ETH Zürich | On the automata size for linear arithmetics | abstract (ca. 1-hour talk)
R. Ramanujam, IMSC Chennai | Verifying that an e-election is "Free and Fair" | abstract
MW 9, April 9th, 2008, Chair 1:
Speaker | Subject |
Ingo Battenfeld | Computational effects in topological domain theory
Peter Padawitz | Hyperdocuments are coalgebraic |
Matthias Thimm | On the relationship of defeasible argumentation and answer set programming |
MW 8, November 21st, 2007, Chair 1:
Speaker | Subject |
Manuela Ritterskamp | Using preference fusion in default reasoning
Wim Martens | Conjunctive query containment over trees |
MW 7, June 27th, 2007, Chair 2:
Speaker | Subject |
Christoph Schubert | Bisimilarity, behavioral and logical equivalence for stochastic right coalgebras
Torben Weibert | Confidentiality policies for controlled query evaluation |
Henrik Björklund | Bounded depth data trees |
MW 6, February 28th, 2007, Chair 1:
Speaker | Subject |
Horst Wedde | Concurrency in distributed systems under autonomous and enforced actions |
Ernst-Erich Doberkat | A remarkable generating function
Wim Martens | Conjunctive query containment over trees |
MW 5, December 20th, 2006, Chair 10
Speaker | Subject
Peter Buchholz | Bisimulation for weighted automata
Gabriele Kern-Isberner | Probabilistic Abduction without priors | abstract
Henrik Björklund | Towards regular data languages | abstract |
MW 4, September 4th, 2006, mathematics department, chair of Prof. Skutella:
Speaker | Subject
Lena Wiese | On finding an inference-proof complete database for controlled query evaluation
Volker Weber | Bounded-variable fragments of hybrid logics | slides
Martin Skutella | Solving evacuation problems efficiently: earliest arrival flows with multiple sources | slides
MW 3, July 10th, 2006, Chair 1:
The workshop took place directly after a guest lecture by Dr. Thomas Eiter (TU Wien), a guest of Prof. Petra Mutzel. Title: Data integration and answer set programming (abstract)
Speaker | Subject
Carsten Witt | Runtime analyses of randomized search heuristics for the partition problem | slides
Peter Padawitz | Di-algebraic picture generation: A case study in multi-level data abstraction | slides |
MW 2, May 22nd, 2006, Chair 11:
Speaker | Subject |
E.-E. Doberkat | Randomized morphisms for action-labelled Markov transition systems
Martin Sauerhoff | Approximate counting and frequency moments of long data streams
Gabriele Kern-Isberner | A note on comparing semantics for conditionals |
MW 1, March 20th, 2006, Chair 1:
Speaker | Subject
Joachim Biskup | Controlled query evaluation with open queries for a decidable relational submodel | slides
Markus Chimani | Non-planar core reduction
Thomas Schwentick | Pebble automata on trees | slides |