Hello!

I'm Vipula Dissanayake, a Full Stack Developer working in the healthcare industry and a PhD candidate at the University of Auckland.

Get in touch: vipula803@gmail.com

Background

I'm currently a final-year PhD candidate at the University of Auckland, researching deep learning techniques for recognising human emotions with smart devices. I completed my Master of Engineering degree at the same university, where I investigated machine learning algorithms with wearable sensors.

I started my career as a software engineer in 2016, and I have since worked in Sri Lanka, Singapore, and New Zealand.

When I'm not in front of a computer screen, I'm probably behind a camera, playing badminton, or crossing off another item on my bucket list.

Skills
Machine Learning
  • TensorFlow
  • Keras
  • Scikit-learn
Programming Languages
  • Java
  • Python
  • JavaScript
  • Dart
Frameworks & Platforms
  • Google Cloud Platform
  • Spring Framework
  • Android
  • Flutter
Dev Tools
  • Git
  • Gradle
  • Maven
Experience
August 2022 - Present
Full Stack Developer
June 2018 - Nov 2020
Professional Casual Staff (Software Engineer)
Sep 2017 - Feb 2018
Research Assistant (Software Engineer)
Aug 2016 - Aug 2017
Software Engineer
Oct 2014 - Apr 2015
Software Engineering Intern
Education
June 2019 - Present
Doctor of Philosophy
Feb 2018 - Feb 2019
Master of Engineering (Hons)
Oct 2011 - Apr 2016
Bachelor of Science in Engineering (Hons)
Certifications
Coursera/DeepLearning.ai
Coursera/DeepLearning.ai
Publications

Dissanayake, V., Seneviratne, S., Rana, R., Wen, E., Kaluarachchi, T. and Nanayakkara, S., 2022. SigRep: Toward Robust Wearable Emotion Recognition With Contrastive Representation Learning. IEEE Access, 10, pp.18105-18120.

Elvitigala, D.S., Scholl, P.M., Suriyaarachchi, H., Dissanayake, V. and Nanayakkara, S., 2021, September. StressShoe: A DIY Toolkit for just-in-time Personalised Stress Interventions for Office Workers Performing Sedentary Tasks. In Proceedings of the 23rd International Conference on Mobile Human-Computer Interaction (pp. 1-14).

Dissanayake, V., Zhang, H., Billinghurst, M. and Nanayakkara, S., 2020. Speech Emotion Recognition 'in the Wild' Using an Autoencoder. In INTERSPEECH (pp. 526-530).

Buddhika, T., Zhang, H., Chan, S.W., Dissanayake, V., Nanayakkara, S. and Zimmermann, R., 2019, March. fSense: unlocking the dimension of force for gestural interactions using smartwatch PPG sensor. In Proceedings of the 10th Augmented Human International Conference 2019 (pp. 1-5).

Chua, Y., Sridhar, P.K., Zhang, H., Dissanayake, V. and Nanayakkara, S., 2019, October. Evaluating IVR in primary school classrooms. In 2019 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct) (pp. 169-174). IEEE.

More on Google Scholar