<!DOCTYPE html>
<html lang="en">
<head>
<meta http-equiv="content-type" content="text/html; charset=UTF-8">
<title>Welcome to Guanqun Cao's Homepage</title>
<link rel='stylesheet' type='text/css' href='pages/style.css'>
<link rel="icon" href="figures/icon.png">
<meta charset="utf-8">
<meta name="google-site-verification" content="JVSPjY3sErV-mIG6fqz3E1ABZ9Nxue0wg_P8gW8mSPU" />
<meta name="viewport" content="width=device-width, initial-scale=1.0">
</head>
<body>
<h1>Guanqun Cao's Homepage</h1>
<a href="figures/cgq_backpack.png">
<img style="float:right; width:30%; padding:20px; max-width:285px" src="figures/cgq_backpack.png" alt="Guanqun" title="Guanqun">
</a>
<p>Hello there! My name is Guanqun Cao, and I work in the automotive industry on improving the safety and well-being standards of future mobility. I have extensive research and development experience in multimedia analysis and data mining.
</p>
<p>
I was born and grew up in historic Beijing, and I feel very fortunate to have lived in six European countries in the last 15 years. You can find my CV <a href="resources/CV.pdf">here</a>. I am also on <a href="https://www.linkedin.com/in/guanquncao/">LinkedIn</a> and <a href="https://github.com/gqcao">GitHub</a>. Feel free to contact me at <u>[email protected]</u>.
</p>
<p>
Professionally, I like <i>Linux and developing efficient neural nets</i>. Below you will find more about my educational background, the projects I have worked on, and the things I am interested in. Thank you for visiting my homepage!
</p>
<br>
<center>
<a href="blog">Blog</a> |
<a href="#edu">Education</a> |
<a href="#work">Work</a> |
<a href="#proj">Projects</a> |
<a href="#int">Interests</a>
</center>
<br>
<h2 id="edu">Education</h2>
<table>
<tr>
<th width="22%">
<a href='https://www.tuni.fi/en'><img src="figures/tau.png" alt="tau" title="Tampere University" width="100%"></a>
</th>
<td>
I did my PhD in <a href="https://businesstampere.com/world-firsts-invented-in-tampere-finland/">Tampere</a> on multi-view data analysis. It was motivated by improving the performance of a multimedia retrieval system, and the research later focused on heterogeneous data mining.
<p>
My major contribution is a unified solution for subspace learning methods, which is extensible to multiple views, supervised learning, and non-linear transformations. Traditional statistical learning techniques, including Canonical Correlation Analysis, Partial Least Squares regression and Linear Discriminant Analysis, are studied by constructing graphs of specific forms under the same framework. Methods using non-linear transforms based on kernels and (deep) neural networks are derived, which lead to superior performance compared to the linear ones. A novel multi-view discriminant embedding method is proposed by taking the view difference into consideration.
</p>
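<p>
As a hedged illustration only (not the graph-based formulation developed in the thesis), the sketch below shows classical two-view CCA solved by whitening and an SVD; the function name, the regulariser and the toy shapes are my own choices for the example.
</p>
<pre><code>
import numpy as np

def cca(X, Y, n_components=2, reg=1e-6):
    """Textbook two-view CCA via whitening and an SVD (illustrative sketch)."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    n = X.shape[0]
    Cxx = X.T @ X / (n - 1) + reg * np.eye(X.shape[1])   # view-1 covariance
    Cyy = Y.T @ Y / (n - 1) + reg * np.eye(Y.shape[1])   # view-2 covariance
    Cxy = X.T @ Y / (n - 1)                              # cross-covariance

    def inv_sqrt(C):
        # Inverse matrix square root of a symmetric positive-definite matrix.
        w, V = np.linalg.eigh(C)
        return V @ np.diag(1.0 / np.sqrt(w)) @ V.T

    Kx, Ky = inv_sqrt(Cxx), inv_sqrt(Cyy)
    U, s, Vt = np.linalg.svd(Kx @ Cxy @ Ky)
    Wx = Kx @ U[:, :n_components]        # projection for view 1
    Wy = Ky @ Vt.T[:, :n_components]     # projection for view 2
    return Wx, Wy, s[:n_components]      # singular values = canonical correlations
</code></pre>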
<p>
The project addressed challenges in representing multi-view data across different tasks. The proposed solutions have shown superior performance in numerous applications, including object recognition, cross-modal image retrieval, face recognition and object ranking.
</p>
<p>
I feel really grateful and fortunate to have been advised by Profs. <a href="https://www.tuni.fi/en/moncef-gabbouj">Moncef Gabbouj</a> and <a href="https://sites.google.com/view/iosifidis">Alexandros Iosifidis</a>, who not only gave me words of wisdom about the project, but also helped shape me into a better person. You will find more info on the <a href="pages/publications.html">publication</a> page. It is an honor that the thesis was awarded a distinction by the university.
</p>
</td>
</tr>
<tr>
<th>
<a href='https://cosi-master.eu/programme/cimet-master-degree/'><img src="figures/cimet.png" style="padding:10px" alt="cimet" title="CIMET" width="30%"></a>
</th>
<td>
<p>It was a multidisciplinary MSc programme that equipped me with knowledge and skills in color science, advanced algorithms, color image processing and multimedia analysis. I also had the unique experience of studying in three different countries: St-Etienne in France, Granada in Spain and Gjøvik in Norway. The two-year programme was fully funded by the EU under the Erasmus Mundus scheme.
</p>
</td>
</tr>
<tr>
<th>
<a href='https://www.birmingham.ac.uk/index.aspx'><img src="figures/bham.png" alt="bham" title="University of Birmingham" width="100%"></a>
</th>
<td>
<p>I entered directly into the final year of the BEng programme and studied electronic and computing engineering at UoB. My final-year project, in 2007, was about content-based image retrieval. Several MPEG-7 image descriptors were used, and K-means clustering was employed for image indexing. We made a comparison with ImageNet, which was then at an early stage and built from WordNet. I am grateful to have received a scholarship from the university.</p>
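<p>
Purely as a hedged sketch (the original project used MPEG-7 descriptors, not the random placeholders below), this is roughly how K-means-based image indexing and lookup can be wired up with scikit-learn; the helper name and all parameters are hypothetical.
</p>
<pre><code>
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
features = rng.normal(size=(1000, 64))   # placeholder for per-image descriptors
kmeans = KMeans(n_clusters=50, n_init=10, random_state=0).fit(features)

def retrieve(query_descriptor, top_k=5):
    """Return indices of images in the query's cluster, ranked by distance."""
    cluster = kmeans.predict(query_descriptor[None, :])[0]
    candidates = np.where(kmeans.labels_ == cluster)[0]
    dists = np.linalg.norm(features[candidates] - query_descriptor, axis=1)
    return candidates[np.argsort(dists)[:top_k]]
</code></pre>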
</td>
</tr>
<tr>
<th>
<a href='http://english.hust.edu.cn/'><img src="figures/hust.png" alt="hust" title="Huazhong University of Science and Technology" width="75%"></a>
</th>
<td>
<p>I studied electronic and information engineering at this highly ranked university in central China, and acquired solid knowledge of data structures, C++, analog and digital electronics, stochastic processes, statistics, calculus and linear algebra. Moreover, it broadened my horizons in both the academic and social worlds, which has continued to benefit me in the long run. Additionally, I feel proud of having lived in Wuhan after what it has been through recently. It is a city widely acclaimed for the hospitality of its locals, as reflected in this <a href="https://www.youtube.com/watch?v=dBBxGreWjnA">video</a>.</p>
</td>
</tr>
</table>
<h2 id="work">Industrial Experience</h2>
<p>I am currently working at <a href="https://www.cevt.se/">CEVT</a> on innovating future mobility. Previously, I spent four years at Volvo Cars and worked on a series of projects, including improving perception algorithms (end-to-end object detection and tracking on low-end embedded devices), geo-spatial data mining and, most recently, data analytics for the safety assurance of automated driving. What I gained was not only project experience with a car OEM, but also a deep understanding of the landscape of the car industry. Prior to that, I worked on projects with Intel Corp. and Tieto Oy, on mobile imaging and on data analytics for object ranking, respectively.</p>
<h2 id="publ">Papers</h2>
Please see <a href="pages/publications.html">this page</a> or <a href="https://scholar.google.com/citations?hl=en&user=owGiCUkAAAAJ">Google Scholar</a> for an up-to-date list of my publications.
<h2 id="proj">Projects</h2>
<table>
<tr>
<th width="20%">
<img src="figures/env.jpg" alt="edgeAI" title="Edge AI" width="70%">
</th>
<td> <p>Together with my business partner, I am preparing a startup on Edge AI. We aim to make intelligent devices more accessible to low-income and senior citizens.</p>
</td>
</tr>
<tr>
<th>
<a href="https://www.youtube.com/watch?v=5Qltc4W6S0s">
<img src="figures/geo_air.jpg" alt="geo_air" title="Geospatial data mining at Why R? conference" width="80%">
</a>
</th>
<td>
<p>
I attended the <a href="https://2020.whyr.pl/">Why R? 2020</a> conference and gave a talk about predicting air quality in California using geo-spatial data mining techniques. Specifically, I provided a solution to predict the air quality index (AQI) at exact locations by coupling the observations from sparsely distributed stations with gridded simulation outputs using a spatial Bayesian method. It is my humble effort to combat climate change. The talk is the first one in the video linked on the left, and its slides can be found <a href="https://github.com/gqcao/spatial-release/blob/master/geospatial.pdf">here</a>. You can get the code from the GitHub <a href="https://github.com/gqcao/spatial-release">repo</a>.
</p>
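<p>
As a much-simplified, hedged sketch of the idea (the actual talk used a proper spatial Bayesian model in R), the snippet below fuses sparse station observations with a gridded model value at one query location by precision weighting; the variances and the function name are made up for illustration.
</p>
<pre><code>
import numpy as np

def fuse_aqi(query_xy, station_xy, station_aqi, grid_aqi_at_query,
             obs_var=4.0, model_var=25.0, power=2.0):
    """Toy fusion of station AQI observations with a gridded model value."""
    # Inverse-distance weighting of the sparse station observations.
    d = np.linalg.norm(station_xy - query_xy, axis=1) + 1e-9
    w = d ** (-power)
    obs_estimate = np.sum(w * station_aqi) / np.sum(w)
    # Combine the observation and model estimates by their precisions.
    p_obs, p_model = 1.0 / obs_var, 1.0 / model_var
    return (p_obs * obs_estimate + p_model * grid_aqi_at_query) / (p_obs + p_model)

# Example: three stations around the query point, the model grid says 80.
stations = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
print(fuse_aqi(np.array([0.2, 0.2]), stations, np.array([60.0, 90.0, 70.0]), 80.0))
</code></pre>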
</td>
</tr>
<tr>
<th>
<a href="https://www.youtube.com/watch?v=FmSsek5luHk">
<img src="figures/cap_finnair.jpg" alt="caption_finnair" title="Video Captioning" width="80%">
</a>
</th>
<td>
<p>A video-captioning pipeline based on Neuraltalk2 (2015) is provided. We also show how to extract deep image features with VGG-16 and how to detect shot boundaries using those features. We finetune the MS-COCO model, annotate the key frames, and map the captions back onto the video sequence. Though there has been significant progress in image captioning since then, our technique for coping with image sequences is still relevant. The implementation can be found <a href="https://github.com/gqcao/Video-Caption-with-Neuraltalk2">here</a>.</p>
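<p>
As a minimal, hedged sketch of the shot-boundary step only (feature extraction and the Neuraltalk2 captioning itself are left out, and the threshold is a guess), consecutive deep frame features can be compared by cosine distance:
</p>
<pre><code>
import numpy as np

def shot_boundaries(frame_features, threshold=0.3):
    """Flag a shot boundary where consecutive deep frame features diverge.

    'frame_features' is an (n_frames, dim) array, e.g. one row of VGG-16
    activations per frame; the threshold is illustrative only."""
    f = frame_features / np.linalg.norm(frame_features, axis=1, keepdims=True)
    cos_dist = 1.0 - np.sum(f[:-1] * f[1:], axis=1)   # distance per consecutive pair
    return np.where(cos_dist > threshold)[0] + 1      # first frame index of each new shot
</code></pre>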
</td>
</tr>
<tr>
<th>
<img src="figures/cmretrieval.png" alt="cmretrieval" title="Cross-modal image retrieval" width="100%">
</th>
<td>
<p>Cross-modal image retrieval is one of the applications from my PhD project. We project features from both the textual and the image space into a common subspace, providing an effective and precise way to search for items across modalities.</p>
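<p>
As a hedged sketch of the retrieval step (the projections would come from a CCA-style method such as the one sketched above; every name here is illustrative, not the thesis implementation), searching images with a text query then reduces to a nearest-neighbour lookup in the shared subspace:
</p>
<pre><code>
import numpy as np

def cross_modal_search(text_query, image_feats, Wx, Wy, top_k=5):
    """Rank images for a text query in a shared subspace (illustrative only)."""
    q = text_query @ Wy                           # project the text query
    Z = image_feats @ Wx                          # project all image features
    q = q / np.linalg.norm(q)
    Z = Z / np.linalg.norm(Z, axis=1, keepdims=True)
    scores = Z @ q                                # cosine similarity per image
    return np.argsort(-scores)[:top_k]            # indices of the best matches
</code></pre>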
</td>
</tr>
</table>
<h2 id="int">Interests</h2>
<ul>
<li> I collected a list of <a href="pages/videos.html">videos</a> about self-driving cars, machine learning, robotics and entrepreneurship.</li>
<li> Voluntary work: Reviewer for IEEE <a href='https://ieeexplore.ieee.org/document/8766947'>TCyb</a>, <a href='https://ieeexplore.ieee.org/xpl/RecentIssue.jsp?punumber=83'>TIP</a>, <a href='https://site.ieee.org/connected-vehicles/publications/ieee-transactions-on-vehicular-technology/'>TVT</a>, and ICIP 2018-<a href='https://2022.ieeeicip.org/'>2022</a>. Mentor for the <a href='blog/learn/2021/05/13/mentor.html'>Tampere University Doctoral Student Mentoring Program 2021</a>. Member of <a href="https://mlcommons.org/en/">MLCommons</a>.
</li>
<li> I enjoy taking <a href="pages/courses.html">online courses</a> and reading <a href="https://www.goodreads.com/review/list/50812335-guanqun-cao">books</a>.
</li>
<li> I am an advocate of the <a href="https://en.wikipedia.org/wiki/Open-source-software_movement">open-source software movement</a>, an active user of Arch Linux, and a fan of minimalism.</li>
<li> I remain involved in academic activities and am an IEEE member.</li>
<li> I am against forced/voluntary overtime work, which is equivalent to a deprivation of workers' health and a sign of incompetent management.
<a href="https://996.icu/#/en_US"><img src="https://img.shields.io/badge/link-996.icu-red.svg" alt="996.icu" /></a>
</li>
</ul>
<p>© 2021-2022 Guanqun Cao. All Rights Reserved. </p>
<!-- Default Statcounter code for My homepage https://guanquncao.com/ -->
<script type="text/javascript">
var sc_project=12453890;
var sc_invisible=1;
var sc_security="def1cc52";
var sc_https=1;
var scJsHost = "https://";
document.write("<sc"+"ript type='text/javascript' src='" + scJsHost+
"statcounter.com/counter/counter.js'></"+"script>");
</script>
<noscript><div class="statcounter"><a title="Web Analytics"
href="https://statcounter.com/" target="_blank"><img class="statcounter"
src="https://c.statcounter.com/12453890/0/def1cc52/0/" alt="Web
Analytics"></a></div></noscript>
<!-- End of Statcounter Code -->
</body>