I have an interdisciplinary background in computer systems, machine learning, physics and electronics, with a PhD in computer science from the University of Edinburgh. I am passionate about developing efficient software and hardware that can bring deep tech (brain-inspired computing, self-optimizing systems, AI, ML, quantum computing, IoT) to the real world. However, after struggling to reproduce and compare experimental results from numerous ML and systems papers, I started actively working on tools and techniques that enable reproducible research, reusable software, practical knowledge management, and a new publication model based on portable workflows and reusable artifacts.
During my academic research, I laid the foundations, scientific methodology, and tools to co-design self-optimizing and bio-inspired software and hardware that can run emerging workloads most efficiently in terms of speed, accuracy, energy and associated costs, while automatically adapting to any user environment with diverse data sets. I connected several cross-disciplinary techniques including machine learning, multi-objective autotuning, model-driven run-time adaptation and crowd-tuning. I was honored to receive several best paper awards, the INRIA award of scientific excellence, and the ACM CGO Test of Time Award for this R&D.
I have also been an active open-source contributor since 2009, when I started collaborating with Google and Mozilla to integrate my Interactive Compilation Interface into the open-source GCC compiler. I developed it to crowdsource autotuning of real workloads across diverse devices provided by volunteers, similar to SETI@home. I also connected it with my open cTuning.org portal to crowdsource the training of this ML-based compiler, a technology that IBM considers to be the first of its kind. However, it also exposed many problems in processing and reproducing the real experimental results shared by the community during crowd-tuning and crowd-learning.
These problems motivated me to establish the non-profit cTuning foundation in 2014 and to develop the Collective Knowledge (CK) framework: a simple research SDK that converts artifacts shared along with published research papers into portable, customizable and reusable components and workflows. I wanted to use such a common experimental framework to bring DevOps principles to computational research and enable "live" research papers.
I also started collaborating with ACM and with ML and systems conferences to reproduce results from accepted papers, develop a common methodology, artifact appendix and reproducibility checklist, and organize reproducibility hackathons for AI/ML and quantum computing. At the same time, I co-founded an engineering company in Cambridge to test my CK framework in practice and to help companies including Arm and General Motors automate the development, optimization and co-design of efficient software and hardware for AI, ML and IoT workloads.
In 2019, I founded cKnowledge SAS to continue developing my open Collective Knowledge platform with academic and industrial partners and to systematize knowledge and experience about deep tech. CK keeps track of how to design, benchmark, optimize and use AI, ML, HPC and IoT systems that automatically adapt to continuously changing software, hardware, models and data sets, using portable workflows, reusable best practices, reusable artifacts, reproducible papers and live scoreboards that crowdsource experiments. See my recent MLPerf automation demo!