BEGIN:VCALENDAR
VERSION:2.0
PRODID:ECMLPKDD-MB
BEGIN:VEVENT
DTSTAMP:20180826T190000Z
UID:_ecmlpkdd_112
DTSTART;TZID=Europe/Dublin:20180911T170000
DTEND;TZID=Europe/Dublin:20180911T172000
LOCATION:Davin
TRANSP:TRANSPARENT
SEQUENCE:1
DESCRIPTION:Entropic measures such as conditional entropy or mutual information have been used many times in pattern mining\, for instance to characterize valuable itemsets or approximate functional dependencies. Strangely enough\, the fundamental problem of designing efficient algorithms to compute the entropy of subsets of features (or the mutual information of feature subsets relative to some target feature) has received little attention compared to the analogous problem of computing the frequency of itemsets. The present article fills this gap: it introduces a fast and scalable method that computes entropy and mutual information for a large number of feature subsets by adopting the divide-and-conquer strategy used by FP-growth\, one of the most efficient frequent itemset mining algorithms. To illustrate its practical interest\, the algorithm is then used to solve the recently introduced problem of mining reliable approximate functional dependencies. The article finally provides empirical evidence that\, in the context of non-redundant pattern extraction\, the proposed method outperforms existing algorithms in both speed and scalability.
SUMMARY:An Efficient Algorithm for Computing Entropic Measures of Feature Subsets
CLASS:PUBLIC
END:VEVENT
END:VCALENDAR