<!DOCTYPE html>
<html lang="en" id="top" style="scroll-behavior: smooth">
<head>
<meta charset="utf-8" />
<meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no" />
<meta name="description" content="" />
<meta name="author" content="" />
<title> TrustKDD2023 | International Workshop on Trustworthy Knowledge Discovery and Data Mining </title>
<!-- Favicon-->
<link rel="icon" type="image/x-icon" href="assets/favicon.ico" />
<!-- Bootstrap icons-->
<link href="https://cdn.jsdelivr.net/npm/[email protected]/font/bootstrap-icons.css" rel="stylesheet" />
<!-- Core theme CSS (includes Bootstrap)-->
<!-- <link href="https://cdn.jsdelivr.net/npm/[email protected]/dist/css/bootstrap.min.css" rel="stylesheet">
<link href="https://getbootstrap.com/docs/5.3/assets/css/docs.css" rel="stylesheet"> -->
<link href="css/styles.css" rel="stylesheet" />
<style>
.nav-link {
display: block;
padding: var(--bs-nav-link-padding-y) var(--bs-nav-link-padding-x);
font-size: var(--bs-nav-link-font-size);
font-weight: var(--bs-nav-link-font-weight);
color: var(--bs-primary);
text-decoration: none;
transition: color 0.15s ease-in-out, background-color 0.15s ease-in-out, border-color 0.15s ease-in-out;
}
</style>
</head>
<body class="d-flex flex-column h-100">
<main class="flex-shrink-0">
<!-- Navigation-->
<nav class="navbar navbar-expand-lg bg-dark" >
<div class="container px-1 ">
<a class="navbar-brand text-light fw-bolder" href="index.html">TrustKDD</a>
<button class="navbar-toggler" type="button" data-bs-toggle="collapse" data-bs-target="#navbarSupportedContent" aria-controls="navbarSupportedContent" aria-expanded="false" aria-label="Toggle navigation"><span class="navbar-toggler-icon"></span></button>
<div class="collapse navbar-collapse" id="navbarSupportedContent">
<ul class="navbar-nav ms-auto mb-2 mb-lg-0">
<!-- <li class="nav-item dropdown">
<a class="nav-link dropdown-toggle" id="navbarDropdownBlog" href="#" role="button" data-bs-toggle="dropdown" aria-expanded="false">Calls</a>
<ul class="dropdown-menu dropdown-menu-end" aria-labelledby="navbarDropdownBlog">
<li><a class="dropdown-item" href="call-for-papers.html">Call for papers</a></li>
<li><a class="dropdown-item" href="call-for-workshop-proposals.html">Call for Workshops</a></li>
</ul>
</li> -->
<!-- <li class="nav-item"><a class="nav-link" href=call-for-papers.html>Call for papers</a></li>
<li class="nav-item"><a class="nav-link" href="organizers.html">Organizers</a></li> -->
<li class="nav-item"><a class="nav-link text-white-50 fw-bolder" href="#intro">Introduction</a></li>
<li class="nav-item"><a class="nav-link text-white-50 fw-bolder" href="#topics">Topics</a></li>
<li class="nav-item"><a class="nav-link text-white-50 fw-bolder" href="#submission">Submissions</a></li>
<li class="nav-item"><a class="nav-link text-white-50 fw-bolder" href="#dates">Important Dates</a></li>
<li class="nav-item"><a class="nav-link text-white-50 fw-bolder" href="#program">Program</a></li>
<li class="nav-item"><a class="nav-link text-white-50 fw-bolder" href="#committee">Committee</a></li>
<li class="nav-item"><a class="nav-link text-white-50 fw-bolder" href="mailto:[email protected]">Contact us</a></li>
</ul>
</div>
</div>
</nav>
<!-- Header -->
<header class="py-5" style="background-image:url(bg3.jpg); background-size:100% 100%">
<div class="container px-3">
<div class="row gx-3 align-items-center justify-content-center">
<div class="col-lg-10 col-xl-10 col-xxl-10">
<div class="my-5 text-center text-xl-start">
<h1 class="display-6 fw-bolder text-light text-center mb-4" style="text-shadow: -1px 1px 0 #000, 1px 1px 0 #000;">The 1st International Workshop on Trustworthy Knowledge Discovery and Data Mining (TrustKDD)</h1>
<p class="lead fw-bolder text-light text-center mb-2" style="font-size:1.6rem; text-shadow: -1px 1px 0 #000, 1px 1px 0 #000;">In conjunction with the IEEE International Conference on Data Mining 2023 (ICDM2023) </p>
<p class="lead fw-bolder text-light text-center mb-4" style="font-size:1.75rem; text-shadow: -1px 1px 0 #000, 1px 1px 0 #000;">December 1-4, 2023, Shanghai, China</p>
<div class="d-grid gap-3 d-sm-flex justify-content-sm-center">
<a class="btn btn-primary btn-lg px-4 me-sm-3" href="https://wi-lab.com/cyberchair/2023/icdm23/scripts/submit.php?subarea=S32&undisplay_detail=1&wh=/cyberchair/2023/icdm23/scripts/ws_submit.php">Submit now!</a>
<a class="btn btn-primary btn-lg px-4 me-sm-3" href="https://www.cloud-conf.net/icdm2023/">See ICDM 2023</a>
<a class="btn btn-outline-light btn-lg px-4" style="text-shadow: -0.5px 0.5px 0 #000, 0.5px 0.5px 0 #000;" href="mailto:[email protected]">Contact us</a>
</div>
</div>
</div>
<!-- <div class="col-xl-5 col-xxl-6 d-none d-xl-block text-center"><img class="img-fluid rounded-3 my-5" src="https://dummyimage.com/600x400/343a40/6c757d" alt="..." /></div> -->
</div>
</div>
</header>
<div style="position: fixed; bottom: 20px; right: 20px;">
<a class="fw-bolder" style="background-color: thistle; color: white; border-radius: 50%; padding: 20px 20px; font-size: 18px;" href="#top">Top</a>
</div>
<!-- Content section-->
<section class="pt-2 pb-5" id="content">
<div class="container px-1" style="line-height: 1.85;">
<section class="mb-4">
<h4 class="mb-4 mt-5 fs-3 text-center" id="notice" style="color:red">Notice: All ICDM Workshops will be held on December 1st, 2023!</h4>
</section>
<section class="mb-4">
<h2 class="mb-4 mt-5 fs-3 text-center" id="intro">Welcome to TrustKDD2023!</h2>
<p class="mb-4 px-1">
The rapid growth of data and the proliferation of data sources have resulted in a significant demand for advanced knowledge discovery and data mining (KDD) techniques. The trustworthiness of KDD results is important for subsequent reliable decision making. However, ensuring this trustworthiness has become a major challenge, as the accuracy and reliability of KDD outcomes are often compromised by various factors, such as data quality, model bias, and privacy issues. This workshop aims to provide a forum for researchers and practitioners to present and discuss innovative approaches and solutions for ensuring the trustworthiness of KDD results, as well as theoretical and conceptual insights into understanding it.
<br>
The 1st International Workshop on Trustworthy Knowledge Discovery and Data Mining (TrustKDD2023) will be held in conjunction with the IEEE International Conference on Data Mining (ICDM2023) on December 1-4.
<br>
We warmly welcome your participation and contributions from all related fields!
</p>
</section>
<section class="mb-4">
<h2 class="mb-4 mt-5 fs-3 text-center" id="topics">Topics of Interest </h2>
</section>
<section class="mb-4">
<p class="mb-4 px-1">
Our workshop aims to bring together leading researchers, practitioners and entrepreneurs to exchange and share their experiences and latest research/application results on all aspects of Trustworthy Knowledge Discovery and Data Mining. It will provide a premier interdisciplinary forum to discuss the most recent trends, innovations, applications as well as the real-world challenges encountered, and the corresponding data-driven solutions in relevant domains.
<br>
The topics of interest include, but are not limited to:
</p>
<ul class="mb-4 fs-6 px-1" style="list-style-type:none">
<li style="background-color:gainsboro">Trustworthy data preprocessing and cleaning</li>
<li>Privacy-preserving KDD</li>
<li style="background-color:gainsboro">Fairness and accountability in KDD</li>
<li>Explainability and interpretability of KDD results</li>
<li style="background-color:gainsboro">Robustness and resilience of KDD models</li>
<li>Security and privacy of KDD systems</li>
<li style="background-color:gainsboro">Ethics and social implications of KDD</li>
<li>Trustworthy KDD on spatio-temporal data, healthcare data, social networks, streaming data, text data and graph data</li>
<li style="background-color:gainsboro">
Real-world applications of trustworthy KDD, including trustworthy recommendation, trustworthy search, trustworthy outlier detection, trustworthy clustering, and trustworthy graph learning.
</li>
</ul>
</section>
<section class="mb-4">
<h2 class="mb-4 mt-5 fs-3 text-center" id="submission">Submissions & Publications</h2>
<p class="mb-4 px-1">
Authors are invited to submit original papers that have not been published elsewhere and are not currently under consideration for another journal, conference, or workshop. Papers already available on the Web (including arXiv) do not qualify for submission, as their author information is already public.
<br>
Submissions should be formatted in the double-column <a href="https://www.ieee.org/conferences/publishing/templates.html">IEEE conference template</a> and should not exceed <strong>10 pages</strong>, including the bibliography and any appendices. Submissions longer than 10 pages will be rejected without review. All submissions will be peer-reviewed by the Committee on the basis of technical quality, relevance to the scope of our workshop, originality, significance, and clarity.
<br>
For more information on how to prepare your submission, please refer to <a href="https://www.cloud-conf.net/icdm2023/call-for-papers.html">ICDM2023 Guidelines</a>.
<br>
Kindly note that your manuscript should be submitted via the <a href="https://wi-lab.com/cyberchair/2023/icdm23/scripts/submit.php?subarea=S32&undisplay_detail=1&wh=/cyberchair/2023/icdm23/scripts/ws_submit.php">TrustKDD 2023 submission link</a>. We <strong>do not</strong> accept email submissions. All manuscripts are submitted as full papers and are reviewed based on their scientific merit. The reviewing process is confidential. There is no separate abstract submission step.
<br>
Note that all accepted papers will be included in the IEEE ICDM 2023 Workshops Proceedings (ICDMW) volume published by IEEE Computer Society Press, and will also be included in the IEEE Xplore Digital Library and indexed by EI. Therefore, papers must not have been accepted for publication elsewhere or be under review for any other workshop, conference, or journal.
</p>
</section>
<section class="mb-4">
<h2 class="mb-2 mt-5 fs-3 text-center" id="dates">Important Dates</h2>
<div class="mb-4 px-1">
<table style="width:100%">
<tr>
<th style="width:60%"><del>Paper Submission Deadline</del></th>
<td><del>September 23, 2023</del></td>
</tr>
<tr>
<th><del>Notification of Acceptance</del></th>
<td><del>September 24, 2023</del></td>
</tr>
<tr>
<th><del>Camera-ready Deadline and Copyright Forms</del></th>
<td><del>October 1, 2023</del></td>
</tr>
<tr>
<th>Workshop Date</th>
<td style="color:red">December 1, 2023</td>
</tr>
</table>
</div>
<!--<p class="mb-4 px-1">
*All times are at 11:59PM Beijing Time (UTC+8)
</p>-->
</section>
<section class="mb-4">
<h2 class="mb-4 mt-5 fs-3 text-center" id="register">Registration</h2>
<p class="mb-4 px-1">
All accepted papers, including workshop papers, must have at least one “FULL” registration. Registration information will be released once ICDM2023 announces it. The registration fee for a workshop paper is the same as that for a main conference paper; please refer to the registration webpage of the main conference for fee details. There is no extra page fee for workshop papers.
<br>
For registration information, please see the <a class="nav-item" href="https://www.cloud-conf.net/icdm2023/registration.html">ICDM2023 registration page</a>.
</p>
</section>
<section class="mb-4">
<h2 class="mb-4 mt-5 fs-3 text-center" id="committee">Workshop Organization</h2>
<h4 class="px-1">Organizers</h4>
<ul class="mb-4 fs-5 px-1">
<li>Enhong Chen, University of Science and Technology of China</li>
<li>Le Wu, Hefei University of Technology</li>
<li>Hongzhi Yin, The University of Queensland</li>
<li>Jundong Li, University of Virginia</li>
<li>Defu Lian, University of Science and Technology of China</li>
</ul>
<p class="mb-4 px-1">
For more information, refer to <a class="nav-item" href="organizers.html">Organizers</a>.
</p>
</section>
<section class="mb-4">
<h2 class="mb-4 mt-5 fs-3 text-center">Keynote Invited Speakers</h2>
<ul class="mb-4 fs-5 px-1">
<li>Xiting Wang, Renmin University of China</li>
</ul>
<p class="mb-4 mt-1 px-1">
<b>Title: Model Interpretation and Alignment for Trustworthy AI</b><br>
<b>Abstract:</b> In the era of large models, interpretability and model alignment have become critically important. Large models have an increasingly significant impact on people's work and lives, but they are also becoming more difficult to understand and control. Interpretability and model alignment are two of the seven major research directions supported by OpenAI. How can we make deep learning models more transparent, understandable, and easier to train, debug, and optimize, ensuring their alignment with human intent? This talk will delve into these questions and introduce our recent research on Explainable Artificial Intelligence (XAI) and methods for learning from human feedback using reinforcement learning (RLHF), which we have published at ICML, NeurIPS, and KDD.<br>
<b>Bio:</b> Xiting Wang is a tenure-track assistant professor at Renmin University of China. She was previously a principal researcher at Microsoft Research Asia and obtained her Bachelor's degree and Ph.D. from Tsinghua University. Her research interest is explainable and trustworthy AI, and the technologies she developed have been applied in multiple products such as Microsoft Bing and Microsoft News. Xiting is an area chair of IJCAI and AAAI, is the archive chair of IEEE VIS, and was awarded Best SPC by AAAI 2021. Two of her papers were selected as spotlight articles by IEEE TVCG (one spotlight per issue). She was invited to give keynote speeches at the SIGIR Workshop on Explainable Recommendation in 2020 and 2022, and is an IEEE Senior Member.<br>
</p>
<ul class="mb-4 fs-5 px-1">
<li>Zhenhua Dong, Huawei Noah’s Ark Lab</li>
</ul>
<p class="mb-4 mt-1 px-1">
<b>Title: Two perspectives about biases in recommender system: OoD and unfairness</b><br>
<b>Abstract:</b> The goal of a recommender system is to get the right information to the right people. Most recommender system studies focus on optimizing accuracy, which is not sufficient for a trustworthy recommender system. Among the many topics in trustworthy recommendation, this talk focuses on bias studies from two perspectives: out-of-distribution (OoD) and unfairness. From the OoD perspective, there is a gap between the expected user preference and the observed user behaviors. This gap introduces many biases, such as position bias, exposure bias, and trust bias. The talk will introduce several causality-inspired methods to mitigate these biases, such as intervention techniques and counterfactual learning. From the unfairness perspective, I will introduce two kinds of fairness based on two stakeholders in recommender systems: users and content providers. For user fairness, we propose counterfactual data augmentation methods to generate counterfactual samples and achieve a fair data distribution. For provider fairness, we propose provider max-min fairness for ranking. Finally, the talk will briefly discuss the challenges of trustworthiness.<br>
<b>Bio:</b> Zhenhua Dong is a technology expert and project manager at Huawei Noah’s Ark Lab, where he leads a research team focused on recommender systems and causal inference. His team has delivered significant improvements to recommender systems for several applications, such as news feeds, app stores, instant services, and advertising. With more than 40 patents and 60 research articles in TKDE, SIGIR, RecSys, KDD, WWW, AAAI, CIKM, etc., he is known for research on recommender systems, causal inference, and counterfactual learning. He also serves as a PC or SPC member of SIGKDD, SIGIR, RecSys, WSDM, and CIKM, and as industry chair of RecSys 2024. He translated the book “The Singularity Is Near” into Chinese (“奇点临近”). He received his BEng degree from Tianjin University in 2006 and his PhD from Nankai University in 2012. He was a visiting scholar at the GroupLens lab at the University of Minnesota during 2010-2011.<br>
</p>
</section>
<section class="mb-4">
<h2 class="mb-2 mt-5 fs-3 text-center" id="program">Program Schedule</h2>
<div class="mb-4 px-1">
<table style="width:100%">
<thead>
<tr>
<th style="width:20%">Time</th>
<th style="width:30%">Speaker</th>
<th style="width:50%">Content</th>
</tr>
</thead>
<tbody>
<tr>
<td style="width:20%">14:30-14:35</td>
<td style="width:30%">Organizers </td>
<td style="width:60%">Opening Remarks.</td>
</tr>
<tr>
<td style="width:20%">14:35-15:15</td>
<td style="width:30%">Xiting Wang (Renmin University of China)</td>
<td style="width:60%">Model Interpretation and Alignment for Trustworthy AI.</td>
</tr>
<tr>
<td style="width:20%">15:15-15:55</td>
<td style="width:30%">Zhenhua Dong (Huawei Noah’s Ark Lab)</td>
<td style="width:60%"> Two perspectives about biases in recommender system: OoD and unfairness. </td>
</tr>
<tr>
<td style="width:20%">15:55-16:15 </td>
<td style="width:30%">Meghdad Mirabi, René Klaus Nikiel, and Carsten Binnig</td>
<td style="width:60%"> SafeML: A Privacy-Preserving Byzantine-Robust Framework for Distributed Machine Learning Training. </td>
</tr>
<tr>
<td style="width:20%">16:15-16:35</td>
<td style="width:30%"> Akito Yamamoto and Tetsuo Shibuya</td>
<td style="width:60%"> A Joint Permute-and-Flip and Its Enhancement for Large-Scale Genomic Statistical Analysis. </td>
</tr>
<tr>
<td style="width:20%">16:35-16:55</td>
<td style="width:30%"> Yi Hu, Hanchi Ren, Chen Hu, Jingjing Deng, and Xianghua Xie</td>
<td style="width:60%">An Element-Wise Weights Aggregation Method for Federated Learning. </td>
</tr>
<tr>
<td style="width:20%">16:55-17:15</td>
<td style="width:30%">Yifan Li and Chengxiang Zhai</td>
<td style="width:60%"> An Exploration of Large Language Models for Verification of News Headlines. </td>
</tr>
<tr>
<td style="width:20%">17:15-17:20</td>
<td style="width:30%">Organizers</td>
<td style="width:60%"> Closing Remarks.</td>
</tr>
</tbody>
</table>
</div>
<p class="mb-5 mt-5 px-1">
If you have any questions or inquiries, please contact the workshop organizers at <a href="mailto:[email protected]">[email protected]</a>
</p>
</section>
</div>
</section>
</main>
<!-- Footer-->
<footer class="bg-dark py-4 mt-auto">
<div class="container px-1">
<div class="row align-items-center justify-content-between flex-column flex-sm-row">
<div class="col-auto"><div class="small m-0 text-white">Copyright © TrustKDD 2023 </div></div>
<!-- <div class="col-auto">
<a class="link-light small" href="#!">Privacy</a>
<span class="text-white mx-1">·</span>
<a class="link-light small" href="#!">Terms</a>
<span class="text-white mx-1">·</span>
<a class="link-light small" href="#!">Contact</a>
</div> -->
</div>
</div>
</footer>
<!-- Bootstrap core JS-->
<script src="https://cdn.jsdelivr.net/npm/[email protected]/dist/js/bootstrap.bundle.min.js"></script>
<!-- Core theme JS-->
<script src="js/scripts.js"></script>
</body>
</html>