OLIVEIRA, S. R. de M.; ZAÏANE, O. R.; SAYGIN, Y. Secure association rule sharing [electronic resource]. In: PACIFIC-ASIA CONFERENCE ON KNOWLEDGE DISCOVERY AND DATA MINING, 8., 2004, Sydney, Australia. Advances in knowledge discovery and data mining: proceedings. Berlin: Springer, 2004. p. 74-85. (Lecture notes in artificial intelligence, 3056).

DOI: https://doi.org/10.1007/978-3-540-24775-3_10

Notes: Editors: Honghua Dai, Ramakrishnan Srikant, Chengqi Zhang. PAKDD 2004. In the publication: Stanley R. M. Oliveira.

Abstract: The sharing of association rules is often beneficial in industry, but it requires privacy safeguards. An organization may decide to disclose only part of its knowledge and conceal strategic patterns, which we call restrictive rules. These restrictive rules must be protected before sharing, since they are paramount for strategic decisions and need to remain private. To address this challenging problem, we propose a unified framework for protecting sensitive knowledge before sharing. This framework encompasses: (a) an algorithm that sanitizes restrictive rules while blocking some inference channels, which we validate against real and synthetic datasets; and (b) a set of metrics to evaluate attacks against sensitive knowledge and the impact of the sanitization. We also introduce a taxonomy of sanitizing algorithms and a taxonomy of attacks against sensitive knowledge.

Keywords: Data mining; Data sanitization; Privacy preservation; Privacy preserving data mining; Protecting sensitive knowledge; Association rules; Sanitizing algorithms; Sharing association rules.
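
To make the rule-sanitization idea in the abstract concrete, the following minimal Python sketch filters a rule set before sharing: it drops the restrictive rules themselves and, as one naive way to block an inference channel, any rule whose combined itemset contains a restrictive rule's itemset. The function names (sanitize_rules, rule_itemset) and the toy market-basket rules are illustrative assumptions, not the algorithm, metrics, or datasets proposed in the paper.

    # Illustrative sketch only, not the paper's sanitizing algorithm.
    # A rule is represented as a (antecedent, consequent) pair of frozensets.

    def rule_itemset(rule):
        """Return all items mentioned by a rule (antecedent plus consequent)."""
        antecedent, consequent = rule
        return frozenset(antecedent) | frozenset(consequent)

    def sanitize_rules(mined_rules, restrictive_rules):
        """Return only the rules considered safe to share under this toy policy."""
        blocked = {rule_itemset(r) for r in restrictive_rules}
        shared = []
        for rule in mined_rules:
            items = rule_itemset(rule)
            # Drop the rule if its itemset matches, or contains, a restrictive itemset,
            # since such rules could reveal the concealed pattern.
            if any(items >= b for b in blocked):
                continue
            shared.append(rule)
        return shared

    # Hypothetical example usage.
    mined = [
        (frozenset({"bread"}), frozenset({"milk"})),
        (frozenset({"bread", "milk"}), frozenset({"butter"})),
        (frozenset({"diapers"}), frozenset({"beer"})),
    ]
    restrictive = [(frozenset({"bread", "milk"}), frozenset({"butter"}))]

    for antecedent, consequent in sanitize_rules(mined, restrictive):
        print(sorted(antecedent), "->", sorted(consequent))

Under this sketch, only the restrictive rule and its supersets are withheld; counting how many non-restrictive rules are lost in the process is one simple way to gauge the impact of sanitization, in the spirit of the metrics the abstract mentions.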