Augsten, Nikolaus.
Similarity Joins in Relational Database Systems [electronic resource] / by Nikolaus Augsten, Michael Böhlen. - 1st ed. 2014. - XVII, 106 p. online resource. - Synthesis Lectures on Data Management, 2153-5426.
Preface -- Acknowledgments -- Introduction -- Data Types -- Edit-Based Distances -- Token-Based Distances -- Query Processing Techniques -- Filters for Token Equality Joins -- Conclusion -- Bibliography -- Authors' Biographies -- Index.
State-of-the-art database systems manage and process a variety of complex objects, including strings and trees. For such objects, equality comparisons are often not meaningful and must be replaced by similarity comparisons. This book describes the concepts and techniques needed to incorporate similarity into database systems. We start out by discussing the properties of strings and trees, and identify the edit distance as the de facto standard for comparing complex objects. Since the edit distance is computationally expensive, token-based distances have been introduced to speed up edit distance computations. The basic idea is to decompose complex objects into sets of tokens that can be compared efficiently. Token-based distances are used to approximate the edit distance and to prune expensive edit distance calculations. A key observation when computing similarity joins is that many of the object pairs for which the similarity is computed are very different from each other. Filters exploit this property to improve the performance of similarity joins. A filter preprocesses the input data sets and produces a set of candidate pairs; the distance function is evaluated on the candidate pairs only. We describe the essential query processing techniques for filters based on lower and upper bounds. For token equality joins we describe prefix, size, positional, and partitioning filters, which avoid computing small intersections that are not needed since the similarity would be too low.
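The pruning idea summarized above can be illustrated with a small, self-contained sketch. The following Python example is not taken from the book: strings are decomposed into q-grams (tokens), and a q-gram count filter discards pairs that cannot be within an edit-distance threshold tau before the expensive edit distance is computed. Function names, parameters, and the sample data are hypothetical, chosen only to illustrate the filter-then-verify pattern described in the abstract.

    # Minimal sketch (illustrative only): token-based pruning for a
    # similarity join with edit-distance threshold tau.
    from collections import Counter

    def qgrams(s, q=2):
        """Decompose a string into its multiset of q-grams (tokens)."""
        padded = "#" * (q - 1) + s + "#" * (q - 1)  # pad so boundary chars appear in q q-grams
        return Counter(padded[i:i + q] for i in range(len(padded) - q + 1))

    def edit_distance(s, t):
        """Standard dynamic-programming edit distance (the expensive step)."""
        prev = list(range(len(t) + 1))
        for i, cs in enumerate(s, 1):
            cur = [i]
            for j, ct in enumerate(t, 1):
                cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (cs != ct)))
            prev = cur
        return prev[-1]

    def similarity_join(strings, tau, q=2):
        """Return all pairs within edit distance tau, pruning with the q-gram count filter."""
        toks = {s: qgrams(s, q) for s in strings}
        result = []
        for i, s in enumerate(strings):
            for t in strings[i + 1:]:
                # Count filter: one edit operation destroys at most q q-grams, so a
                # pair within tau must share at least max(|s|,|t|) + q - 1 - q*tau q-grams.
                overlap = sum((toks[s] & toks[t]).values())
                bound = max(len(s), len(t)) + q - 1 - q * tau
                if overlap < bound:
                    continue                        # filtered: edit distance never computed
                if edit_distance(s, t) <= tau:      # verify the surviving candidate pair
                    result.append((s, t))
        return result

    print(similarity_join(["jones", "johnes", "smith"], tau=1))  # -> [('jones', 'johnes')]

The sketch follows the filter-and-verify scheme the abstract describes: the cheap token overlap serves as a lower bound that rules out most dissimilar pairs, and the exact edit distance is evaluated only on the remaining candidate pairs.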
9783031018510
10.1007/978-3-031-01851-0 doi
Computer networks.
Data structures (Computer science).
Information theory.
Computer Communication Networks.
Data Structures and Information Theory.
TK5105.5-5105.9
004.6