I have an interdisciplinary background in computer engineering, physics, quantum electronics, and machine learning, with a PhD in computer science from the University of Edinburgh. I have always been passionate about deep tech (brain-inspired computing, self-optimizing systems, AI, ML, quantum computing, IoT), but after struggling to reproduce and compare experimental results from research papers, I started actively working on tools and techniques to enable reproducible research, sustainable software, practical knowledge management, and open science.
During my academic research, I prepared the foundations, scientific methodology, and tools for ML-based autotuning, crowd-tuning, and co-design of computer systems that can run emerging workloads efficiently in terms of speed, accuracy, energy, and associated costs across diverse data sets, models, software, and hardware. I connected several cross-disciplinary techniques, including machine learning, multi-objective autotuning, and model-driven run-time adaptation. I was honored to receive an INRIA award of scientific excellence in 2012 and the ACM CGO'17 Test of Time Award for this R&D.
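To give a flavor of what multi-objective autotuning means in practice, here is a minimal, purely illustrative sketch: it randomly samples compiler-flag configurations, "measures" two conflicting objectives (runtime and energy), and keeps only the Pareto-optimal points. The flag list and the simulated `measure()` function are hypothetical stand-ins, not part of my actual tooling.

```python
import random

# Illustrative multi-objective autotuning loop: sample flag
# configurations, measure conflicting objectives, keep the
# Pareto front. All names below are made up for this sketch.

FLAGS = ["-O1", "-O2", "-O3", "-funroll-loops", "-ftree-vectorize"]

def measure(config):
    """Stand-in for compiling and running a workload with `config`.
    Returns (runtime_sec, energy_joules); here it is simulated."""
    return random.uniform(1.0, 10.0), random.uniform(5.0, 50.0)

def dominates(a, b):
    """True if point a is at least as good as b in every objective
    and strictly better in at least one (both objectives minimized)."""
    return all(x <= y for x, y in zip(a, b)) and a != b

def pareto_front(points):
    """Keep only configurations not dominated by any other point."""
    return [p for p in points
            if not any(dominates(q[1], p[1]) for q in points)]

results = []
for _ in range(50):  # random search over subsets of flags
    config = tuple(f for f in FLAGS if random.random() < 0.5)
    results.append((config, measure(config)))

for config, (runtime, energy) in pareto_front(results):
    print(f"{' '.join(config) or '(no flags)':40s} "
          f"time={runtime:.2f}s energy={energy:.1f}J")
```

A real autotuner replaces random search with smarter exploration (for example, model-driven sampling) and replaces the simulated measurements with actual builds and runs, but the Pareto-filtering idea is the same.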
I have also been an active open-source contributor since 2009, when I started collaborating with Google and Mozilla to integrate my Interactive Compilation Interface into the open-source GCC compiler. I developed it to crowdsource auto-tuning of real workloads across diverse devices provided by volunteers, similar to SETI@home. I also connected it with my open cTuning.org portal to crowdsource the training of this ML-based compiler. IBM considered this technology the first of its kind in the world. However, it also exposed many problems in processing and reproducing experimental results shared by the community during crowd-tuning and crowd-learning.
These problems motivated me to establish the non-profit cTuning foundation in 2014 and develop the Collective Knowledge (CK) framework as a simple research SDK to convert artifacts shared along with published research papers into portable, customizable, and reusable components and workflows. I wanted to use such a common experimental framework to bring DevOps principles to computational research and enable "live" research papers. I also started collaborating with ACM and various systems and ML conferences to reproduce results from accepted papers and develop a common methodology, artifact appendix, and reproducibility checklist. At the same time, I co-founded an engineering company in Cambridge to test the CK framework in practice and help companies such as Arm and General Motors automate the development and optimization of novel computational systems for AI, ML, and IoT.
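As a rough sketch of what "reusable components" looks like from code, the CK framework routes every operation through a single Python entry point. The snippet below reflects my reading of the CK v1/v2 API (`pip install ck`); the module name and result fields shown are assumptions based on that API, and the listed components will vary with the CK repositories you have pulled.

```python
# Minimal sketch of the CK Python API, assuming CK v1/v2.
import ck.kernel as ck

# Every CK call goes through ck.access() with an 'action' and a
# 'module_uoa' (the component type); the result is a dict whose
# 'return' field is 0 on success.
r = ck.access({'action': 'search',
               'module_uoa': 'program',  # example component type
               'data_uoa': '*'})
if r['return'] > 0:
    ck.err(r)  # print the error message and exit

# Field names below are my assumption about the search output.
for item in r.get('lst', []):
    print(item.get('data_uoa'), '->', item.get('path'))
```

The design point is that papers' artifacts become addressable components behind one uniform action interface, so workflows can be rebuilt and rerun on different machines instead of living in ad-hoc scripts.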
In 2019, I founded cKnowledge SAS to continue developing my open Collective Knowledge platform with academic and industrial partners. It shares knowledge about how to design, benchmark, optimize, and use deep tech systems (AI, ML, quantum, IoT) in the form of reusable R&D automation actions, portable packages, portable benchmarking pipelines, reproduced papers, and collaborative SOTA scoreboards. See our recent MLPerf benchmark automation demo!