<!DOCTYPE HTML>
<html lang="en"><head><meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
<title>Akshita Gupta</title>
<meta name="author" content="Akshita Gupta">
<meta name="viewport" content="width=device-width, initial-scale=1">
<link rel="stylesheet" type="text/css" href="stylesheet.css">
<link rel="icon" type="image/png" href="images/seal_icon.png">
<style>
#myimg{
width:100%;
max-width:100%;
border-radius:50%;
border: 1px solid #ddd;
padding: 5px;
}
p {
line-height: 22px;
font-size: 15px;
}
ul li{
font-size:15px;
}
</style>
</head>
<body>
<table style="width:100%;max-width:800px;border:0px;border-spacing:0px;border-collapse:separate;margin-right:auto;margin-left:auto;"><tbody>
<tr style="padding:0px">
<td style="padding:0px">
<table style="width:100%;border:0px;border-spacing:0px;border-collapse:separate;margin-right:auto;margin-left:auto;"><tbody>
<tr style="padding:0px">
<td style="padding:2.5%;width:63%;vertical-align:middle">
<!-- <p style="text-align:center"> -->
<!-- <name>Akshita Gupta</name> -->
<p id="namechange" align="center">
<span id="a"><name>Akshita Gupta</name></span><span id="b" style="font-family: 'Gugi', cursive; font-size: 40px;">अक्षिता गुप्ता </span>
</p>
<p style="text-align:justify" >
I am an ELLIS PhD student at TU Darmstadt, co-supervised by <a href="https://rohrbach.vision/">Prof. Marcus Rohrbach</a> and <a href="https://federicotombari.github.io/">Dr. Federico Tombari</a> at Google Zurich. I completed my MASc at the University of Guelph, where I was advised by <a href="https://www.gwtaylor.ca/">Prof. Graham Taylor</a>. During that time, I was also a student researcher at the <a href="https://vectorinstitute.ai/">Vector Institute</a>.
</p>
<p style="text-align:justify" >
I was fortunate to spend time as a research intern at Apple under <a href="https://scholar.google.com/citations?user=x7Z3ysQAAAAJ&hl=ru">Dr. Tatiana Likhomanenko</a>, at Microsoft under <a href="https://g1910.github.io/">Gaurav Mittal</a> and <a href="https://www.microsoft.com/en-us/research/people/meic/">Mei Chen</a>, and at the Vector Institute under <a href="https://sites.google.com/view/dbemerson">Dr. David Emerson</a>, and as a Scientist-in-Residence at NextAI under Prof. Graham Taylor.
</p>
<p style="text-align:justify" >
Before returning to academia, I worked as a Data Scientist at <a href="https://space42.ai/en">Bayanat</a>, where I focused on detection and segmentation projects. Prior to that, I was a Research Engineer at the Inception Institute of Artificial Intelligence (IIAI), working with <a href="https://sites.google.com/view/sanath-narayan">Dr. Sanath Narayan</a>, <a href="https://salman-h-khan.github.io/">Dr. Salman Khan</a>, and <a href="https://sites.google.com/view/fahadkhans/home">Dr. Fahad Shahbaz Khan</a>. At IIAI, my research primarily involved open-world and zero-shot object detection, generative adversarial networks (GANs), and few- and zero-shot learning.
</p>
<p style="text-align:center">
<a href="mailto:[email protected]">Email</a>  / 
<a href="https://scholar.google.com/citations?user=G01YeI0AAAAJ&hl=en">Google Scholar</a>  / 
<a href="https://twitter.com/akshitac8">Twitter</a>  / 
<a href="https://github.com/akshitac8">Github</a>  / 
<a href="https://akshitac8.github.io/Gupta_Akshita_resume-12.pdf">Resume/CV</a>
</p>
</td>
<td style="padding:2.5%;width:40%;max-width:40%">
<a href="images/profile_aks.png"><img id = "myimg" alt="profile photo" src="images/profile_aks.png" class="hoverZoomLink"></a>
</td>
</tr>
</tbody></table>
<h2 style="text-align:left; margin-left: 10px;">What's New</h2>
<div class="news-container">
<table style="width:100%;border:0px;border-spacing:4px;border-collapse:separate;margin-right:auto;margin-left:auto;">
<tbody>
<tr>
<td><strong>[Mar 2025]</strong></td>
<td>Excited to start my PhD in Computer Science at <strong>TU Darmstadt</strong> under Prof. Marcus Rohrbach! 🎉</td>
</tr>
<tr>
<td><strong>[Sep 2024]</strong></td>
<td>Defended my <a href="https://atrium.lib.uoguelph.ca/items/67a35868-ca5a-494f-9116-62ea1c57b733">Master's Thesis</a></td>
</tr>
<tr>
<td><strong>[Jun 2024]</strong></td>
<td>Joined <strong>Apple</strong> as a Research Intern</td>
</tr>
<tr>
<td><strong>[May 2024]</strong></td>
<td>Serving as a Scientist-in-Residence at <strong>NextAI</strong>.</td>
</tr>
<tr>
<td><strong>[Mar 2024]</strong></td>
<td>Our paper <a href="https://arxiv.org/pdf/2411.17690">Visatronic: A Multimodal Decoder-Only Model for Speech Synthesis</a> is now on <strong>arXiv</strong>!</td>
</tr>
<tr>
<td><strong>[Jan 2024]</strong></td>
<td>Our paper <a href="https://arxiv.org/abs/2404.01282">Long-Short-range Adapter for Scaling End-to-End Temporal Action Localization</a> is accepted at <strong>WACV 2025 (<span style="color:red;">Oral</span>)</strong>! 🎤</td>
</tr>
<tr>
<td><strong>[Dec 2023]</strong></td>
<td>Our work <a href="https://arxiv.org/pdf/2406.15556">Open-Vocabulary Temporal Action Localization using Multimodal Guidance</a> is accepted at <strong>BMVC 2024</strong>!</td>
</tr>
<tr>
<td><strong>[Jun 2023]</strong></td>
<td>Our paper <a href="https://arxiv.org/pdf/2101.11606.pdf"> Generative Multi-Label Zero-Shot Learning </a> is accepted at TPAMI 2023.</td>
</tr>
<tr>
<td><strong>[Jun 2023]</strong></td>
<td>Started interning at Microsoft on the ROAR team.</td>
</tr>
<tr>
<td><strong>[Jan 2023]</strong></td>
<td>Interned at the Vector Institute with the AI Engineering team.</td>
</tr>
<tr>
<td><strong>[Sep 2022]</strong></td>
<td>Joined Prof. Graham Taylor's Lab and Vector Institute.</td>
</tr>
<tr>
<td><strong>[Mar 2022]</strong></td>
<td>OW-DETR accepted at CVPR 2022.</td>
</tr>
<tr>
<td><strong>[Sep 2021]</strong></td>
<td>Reviewer for CVPR 2023, CVPR 2022, ECCV 2022, ICCV 2021, TPAMI.</td>
</tr>
<tr>
<td><strong>[Jul 2021]</strong></td>
<td>BiAM accepted at ICCV 2021.</td>
</tr>
<tr>
<td><strong>[Feb 2021]</strong></td>
<td>Serving as a reviewer for ML Reproducibility Challenge 2020.</td>
</tr>
<tr>
<td><strong>[Jan 2021]</strong></td>
<td>Paper out on arXiv: <a href="https://arxiv.org/pdf/2101.11606.pdf">Generative Multi-Label Zero-Shot Learning</a></td>
</tr>
<tr>
<td><strong>[Jul 2020]</strong></td>
<td>TF-VAEGAN accepted at ECCV 2020.</td>
</tr>
<tr>
<td><strong>[Aug 2019]</strong></td>
<td>A Large-scale Instance Segmentation Dataset for Aerial Images (iSAID) is available for <a href="https://captain-whu.github.io/iSAID/index.html"> download </a>.</td>
</tr>
<tr>
<td><strong>[Aug 2018]</strong></td>
<td>One paper accepted at the Interspeech CHiME Workshop 2018.</td>
</tr>
<tr>
<td><strong>[May 2018]</strong></td>
<td>Selected as an Outreachy intern with Mozilla.</td>
</tr>
</tbody>
</table>
</div>
<h1></h1>
<table style="width:100%;border:0px;border-spacing:0px;border-collapse:separate;margin-right:auto;margin-left:auto;"><tbody>
<tr>
<td style="width:100%;vertical-align:middle">
<heading>Research</heading>
<p>
I'm interested in developing models that can learn from limited data, including few-shot, one-shot, and zero-shot settings.
</p>
</td>
</tr>
</tbody></table>
<table style="width:100%;border:0px;border-spacing:0px;border-collapse:separate;margin-right:auto;margin-left:auto;"><tbody>
<tr>
<!-- <td style="padding:20px;width:25%;vertical-align:middle"> -->
<td style="vertical-align:middle">
<div class="one">
<img src='images/OWDETR_intro.png' width="200">
</div>
</td>
<td style="padding:20px;width:75%;vertical-align:middle">
<a href="https://akshitac8.github.io/OWDETR">
<papertitle>OW-DETR: Open-world Detection Transformer</papertitle>
</a>
<br>
<a href="https://sites.google.com/view/sanath-narayan">Akshita Gupta<sup>*</sup></a>,
<strong>Sanath Narayan<sup>*</sup></strong>,
<a href="https://josephkj.in">Joseph KJ</a>,
<a href="https://salman-h-khan.github.io/">Salman Khan</a>,
<a href="https://sites.google.com/view/fahadkhans/home">Fahad Shahbaz Khan,</a><br>
<a href="https://www.crcv.ucf.edu/person/mubarak-shah/">Mubarak Shah</a>
<br>
<!-- (* denotes equal contribution)
--> <strong>CVPR 2022 </strong>
<br>
<a href="https://arxiv.org/pdf/2112.01513.pdf">paper</a> /
<a href="https://github.com/akshitac8/OW-DETR">code</a>
<ul>
<li>
<u>Description:</u> Developed a multi-scale, context-aware detection framework with attention-driven pseudo-labelling.
</li>
<li>
<u>Outcome:</u> Improved state-of-the-art performance on the MS-COCO dataset with absolute gains of 1.8% to 3.3% in unknown recall.
</li>
</ul>
</td>
</tr>
<table style="width:100%;border:0px;border-spacing:0px;border-collapse:separate;margin-right:auto;margin-left:auto;"><tbody>
<tr>
<!-- <td style="padding:20px;width:25%;vertical-align:middle"> -->
<td style="vertical-align:middle">
<div class="one">
<img src='images/image834.png' width="200">
</div>
</td>
<td style="padding:20px;width:75%;vertical-align:middle">
<a href="https://akshitac8.github.io/BiAM">
<papertitle>Discriminative Region-based Multi-Label Zero-Shot Learning</papertitle>
</a>
<br>
<a href="https://sites.google.com/view/sanath-narayan">Sanath Narayan<sup>*</sup></a>,
<strong>Akshita Gupta<sup>*</sup></strong>,
<a href="https://salman-h-khan.github.io/">Salman Khan</a>,
<a href="https://sites.google.com/view/fahadkhans/home">Fahad Shahbaz Khan,</a><br>
<a href="https://scholar.google.com/citations?user=z84rLjoAAAAJ&hl=en">Ling Shao,</a>
<a href="https://www.crcv.ucf.edu/person/mubarak-shah/">Mubarak Shah</a>
<br>
<!-- (* denotes equal contribution)
--> <strong>ICCV 2021 </strong>
<br>
<a href="https://openaccess.thecvf.com/content/ICCV2021/papers/Narayan_Discriminative_Region-Based_Multi-Label_Zero-Shot_Learning_ICCV_2021_paper.pdf">paper</a> /
<a href="https://github.com/akshitac8/BiAM">code</a>
<ul>
<li>
<u>Description:</u> Developed an attention module that combines region-level and global-level contextual information.
</li>
<li>
<u>Outcome:</u> Improved state-of-the-art performance on NUS-WIDE and OpenImages by 6.9% and 31.9% mAP, respectively.
</li>
</ul>
</td>
</tr>
</tbody></table>
<table style="width:100%;border:0px;border-spacing:0px;border-collapse:separate;margin-right:auto;margin-left:auto;"><tbody>
<tr>
<td style="padding:20px;width:25%;vertical-align:middle">
<div class="one">
<img src='images/cvpr_result.png' width="140" align="right">
</div>
</td>
<td style="padding:20px;width:75%;vertical-align:middle">
<a href="https://akshitac8.github.io/GAN_MLZSL">
<papertitle>Generative Multi-Label Zero-Shot Learning</papertitle>
</a>
<br>
<strong>Akshita Gupta<sup>*</sup></strong>,
<a href="https://sites.google.com/view/sanath-narayan">Sanath Narayan<sup>*</sup></a>,
<a href="https://salman-h-khan.github.io/">Salman Khan</a>,
<a href="https://sites.google.com/view/fahadkhans/home">Fahad Shahbaz Khan,</a><br>
<a href="https://scholar.google.com/citations?user=z84rLjoAAAAJ&hl=en">Ling Shao,</a>
<a href="http://www.cvc.uab.es/LAMP/joost/">Joost van de Weijer</a>
<br>
<strong>TPAMI 2023</strong>
<br>
<a href="https://arxiv.org/abs/2003.07833">paper</a> /
<a href="https://github.com/akshitac8/Generative_MLZSL">code</a>
<ul>
<li>
<u>Description:</u> Developed a generative model that constructs multi-label features for (generalized) zero-shot learning.
</li>
<li>
<u>Outcome:</u> Improved state-of-the-art performance on NUS-WIDE, OpenImages, and MS-COCO by 3.3%, 4.3%, and 15.7% mAP, respectively.
</li>
</ul>
</td>
</tr>
</tbody></table>
<table style="width:100%;border:0px;border-spacing:0px;border-collapse:separate;margin-right:auto;margin-left:auto;"><tbody>
<tr>
<td style="padding:20px;width:25%;vertical-align:middle">
<div class="one">
<img src='images/feedback_vis.png' width="160">
</div>
</td>
<td style="padding:20px;width:75%;vertical-align:middle">
<a href="https://akshitac8.github.io/tfvaegan/">
<papertitle>Latent Embedding Feedback and Discriminative Features for Zero-Shot Classification</papertitle>
</a>
<br>
<a href="https://sites.google.com/view/sanath-narayan">Sanath Narayan<sup>*</sup></a>,
<strong>Akshita Gupta<sup>*</sup></strong>,
<a href="https://salman-h-khan.github.io/">Salman Khan</a>,
<a href="https://sites.google.com/view/fahadkhans/home">Fahad Shahbaz Khan,</a><br>
<a href="https://www.ceessnoek.info/">Cees G. M. Snoek,</a>
<a href="https://scholar.google.com/citations?user=z84rLjoAAAAJ&hl=en">Ling Shao,</a>
<br>
<strong>ECCV 2020 </strong>
<br>
<a href="https://arxiv.org/abs/2003.07833">paper</a> /
<a href="https://github.com/akshitac8/tfvaegan">code</a>
<ul>
<li>
<u>Description:</u> Developed a generative feature synthesizing framework for zero-shot learning.
</li>
<li>
<u>Outcome:</u> Improved state-of-the-art performance on CUB, FLO, SUN, and AWA by 4.6%, 7.1%, 1.7%, and 3.1% in harmonic mean, respectively, by enforcing semantic consistency at all stages of zero-shot learning.
</li>
</ul>
</td>
</tr>
</tbody></table>
<table style="width:100%;border:0px;border-spacing:0px;border-collapse:separate;margin-right:auto;margin-left:auto;"><tbody>
<tr>
<td style="padding:20px;width:25%;vertical-align:middle">
<div class="one">
<img src='images/isaid.png' width="140" align="right">
</div>
</td>
<td style="padding:20px;width:75%;vertical-align:middle">
<a href="https://captain-whu.github.io/iSAID/">
<papertitle>iSAID: A Large-scale Dataset for Instance Segmentation in Aerial Images</papertitle>
</a>
<br>
<a href="https://scholar.google.es/citations?user=WNGPkVQAAAAJ&hl=en">Syed Waqas Zamir,</a>
<a href="https://adityac8.github.io/">Aditya Arora,</a>
<strong>Akshita Gupta</strong>,
<a href="https://salman-h-khan.github.io/">Salman Khan,</a>
<a href="https://scholar.google.ae/citations?user=qd8Blw0AAAAJ&hl=en">Guolei Sun,</a>
<a href="https://sites.google.com/view/fahadkhans/home">Fahad Shahbaz Khan,</a>
<a href="https://scholar.google.com/citations?user=vD-ezyQAAAAJ&hl=en">Fan Zhu,</a>
<a href="https://scholar.google.com/citations?user=z84rLjoAAAAJ&hl=en">Ling Shao,</a>
<a href="http://www.captain-whu.com/xia_En.html">Gui-Song Xia,</a>
<a href="https://scholar.google.com/citations?user=UeltiQ4AAAAJ&hl=en">Xiang Bai</a>
<br>
<strong>CVPR Workshop 2019 <font color="red">(Oral Presentation)</font></strong>
<br>
<a href="https://github.com/CAPTAIN-WHU/iSAID_Devkit">code</a> /
<a href="https://captain-whu.github.io/iSAID/index.html">dataset</a>
<ul>
<li>
<u>Description:</u> Improved state-of-the-art object detectors (Mask R-CNN and PANet) for aerial imagery.
</li>
<li>
<u>Outcome:</u> Proposed a large-scale instance segmentation and object detection dataset (iSAID) with benchmarks using Mask R-CNN and PANet.
</li>
</ul>
</td>
</tr>
</tbody></table>
<table style="width:100%;border:0px;border-spacing:0px;border-collapse:separate;margin-right:auto;margin-left:auto;"><tbody>
<tr>
<td style="padding:20px;width:25%;vertical-align:middle">
<div class="one">
<img src='images/interspeech.png' width="160">
</div>
</td>
<td style="padding:20px;width:75%;vertical-align:middle">
<a href="https://arxiv.org/abs/1811.00936">
<papertitle>Acoustic features fusion using attentive multi-channel deep architecture</papertitle>
</a>
<br>
<a href="http://deeplearn-ai.com/about-3/?i=1">Gaurav Bhatt,</a>
<strong>Akshita Gupta</strong>,
<a href="https://adityac8.github.io/">Aditya Arora,</a>
<a href="http://bala.cs.faculty.iitr.ac.in/">Balasubramanian Raman</a>
<br>
<strong>Interspeech Workshop 2018 </strong>
<br>
<a href="https://github.com/DeepLearn-lab/Acoustic-Feature-Fusion_Chime18">code</a>
<ul>
<li>
<u>Description:</u> Developed an attention-based framework for acoustic scene recognition and audio tagging.
</li>
<li>
<u>Outcome:</u> Improved the equal error rate by at least 3% over the DCASE challenge results.
</li>
</ul>
</td>
</tr>
</tbody></table>
<table style="width:100%;border:0px;border-spacing:0px;border-collapse:separate;margin-right:auto;margin-left:auto;">
<tr>
<td width="100%" valign="middle">
<heading>Research Experience</heading>
</td>
</tr>
</table>
<table style="width:100%;border:0px;border-spacing:0px;border-collapse:separate;margin-right:auto;margin-left:auto;"><tbody>
<tr>
<td style="padding:20px;width:25%;vertical-align:middle">
<div class="one">
<img src='images/bayant_logo.png' width="150" style="background-color:black;padding:10px;vertical-align:middle">
</div>
</td>
<td style="padding:20px;width:75%;vertical-align:middle">
<papertitle>Data Scientist, Bayanat </papertitle>
<br>
<em>January 2022 – present</em>
<br>
Supervisors: Dr Meng Wang, Dr Fan Zhu
</td>
</tr>
</tbody></table>
<table style="width:100%;border:0px;border-spacing:0px;border-collapse:separate;margin-right:auto;margin-left:auto;"><tbody>
<tr>
<td style="padding:20px;width:25%;vertical-align:middle">
<div class="one">
<img src='images/logo_IIAI.png' width="150" style="background-color:black;padding:10px;vertical-align:middle">
</div>
</td>
<td style="padding:20px;width:75%;vertical-align:middle">
<papertitle>Research Engineer, Inception Institute of Artificial Intelligence </papertitle>
<br>
<em>Dec 2018 – present</em>
<br>
Supervisors: Dr Sanath Narayan, Dr Salman Khan, Dr Fahad Shahbaz Khan
<p>
<ul>
<li>Developing deep learning algorithms for low-shot (few- and zero-shot) detection and classification,
generative adversarial models, and open-world object detection problems.</li>
<li>Developed a rock and seismic layer classification system.</li>
<li>Worked on satellite-imagery object detection and object counting systems.</li>
</ul>
</p>
</td>
</tr>
</tbody></table>
<table style="width:100%;border:0px;border-spacing:0px;border-collapse:separate;margin-right:auto;margin-left:auto;"><tbody>
<tr>
<td style="padding:20px;width:25%">
<div class="one" style="height:auto;">
<img src='images/mozilla.jpg' width="160" style="vertical-align:middle">
</div>
</td>
<td style="padding:20px;width:75%;vertical-align:middle">
<papertitle>Research & Development Intern, Mozilla, Outreachy</papertitle>
<br>
<em>May 2018 – Aug 2018</em>
<br>
Supervisor: Emma Irwin
<p>Developed an open-source analytics dashboard prototype with metrics to evaluate diversity and inclusion across different communities.</p>
</td>
</tr>
</tbody></table>
<table style="width:100%;border:0px;border-spacing:0px;border-collapse:separate;margin-right:auto;margin-left:auto;"><tbody>
<tr>
<td style="padding:20px;width:25%">
<div class="one" style="height:auto;">
<img src='images/iitr.jpg' width="160" style="vertical-align:middle">
</div>
</td>
<td style="padding:20px;width:75%;vertical-align:middle">
<papertitle>Undergraduate Researcher, Indian Institute of Technology</papertitle>
<br>
<em>May 2017 – Dec 2018</em>
<br>
Supervisor: Dr R Balasubramanian
<p>Worked on acoustic scene recognition and audio tagging using attention networks. Paper accepted at the Interspeech CHiME Workshop 2018.</p>
</td>
</tr>
</tbody></table>
<!-- <table style="width:100%;border:0px;border-spacing:0px;border-collapse:separate;margin-right:auto;margin-left:auto;"><tbody>
<tr>
<td style="padding:20px;width:25%">
<div class="one" style="height:auto;">
<img src='images/iitr.jpg' width="160" style="vertical-align:middle">
</div>
</td>
<td style="padding:20px;width:75%;vertical-align:middle">
<papertitle>Research Intern, Indian Institute of Technology</papertitle>
<br>
<em>May 2017 – Jul 2017</em>
<br>
Supervisor: Dr R Balasubramanian
<p>Worked on Basic Machine Learning techniques such as Support Vector Machines, K-Means Clustering and K-Nearest Neighbors and used these as a baseline for Acoustic Scene Classification.
Setting up code environments, implemented models which were use for problems of Detection and Classification of Acoustic Scenes and Events.
Worked on Audio Processing related challenges to minimise Equal Error rate.</p>
</td>
</tr>
</tbody></table> -->
<table width="100%" align="center" border="0" cellspacing="0" cellpadding="20">
<tr>
<td>
<br>
<p align="right">
<font size="2">
<strong>I borrowed this website layout from <a target="_blank" href="https://jonbarron.info/">here</a>!</strong>
</font>
</p>
</td>
</tr>
</table>
</td>
</tr>
</table>
</body>
</html>