Our long-term goal is to develop a common methodology and framework for reproducible co-design of efficient software/hardware stacks for emerging algorithms requested by our advisory board (inference, object detection, training, etc.) in terms of speed, accuracy, energy, size, complexity, cost and other metrics. Open ReQuEST competitions bring together AI, ML and systems researchers to share complete algorithm implementations (code and data) as portable, customizable and reusable Collective Knowledge workflows. This helps other researchers and end-users quickly validate such results, reuse workflows, and optimize/autotune algorithms across different platforms, models, data sets, libraries, compilers and tools. We will also use our practical experience reproducing experimental results from ReQuEST submissions to help set up artifact evaluation at the upcoming SysML 2019, and to suggest new algorithms for inclusion in the MLPerf benchmark.
The associated ACM ReQuEST workshop is co-located with ASPLOS 2018
March 24th, 2018 (afternoon), Williamsburg, VA, USA.
A ReQuEST introduction and long-term goals: cKnowledge.org/request website and ArXiv paper.
Time slot | Presentation | Reusable artifacts
1:30pm-1:40pm |
Workshop introduction
ReQuEST tournaments bring together multidisciplinary researchers (AI, ML, systems) to find the most efficient solutions to realistic problems requested by the advisory board in terms of speed, accuracy, energy, complexity, cost and other metrics across the whole application/software/hardware stack, in a fair and reproducible way. All the winning solutions (code, data, workflow) on a Pareto frontier are then made available to the community as portable and customizable "plug&play" AI/ML components with a common API and meta information. The ultimate goal is to accelerate research and reduce costs by reusing the most accurate and efficient AI/ML blocks, continuously optimized, autotuned and crowd-tuned across diverse models, data sets and platforms from cloud to edge. |
1:40pm-2:30pm |
Keynote "The Retrospect and Prospect of Low-Power Image Recognition Challenge (LPIRC)"
Prof. Yiran Chen, Duke University, USA
Abstract: Reducing power consumption has been one of the most important goals since the creation of electronic systems. Energy efficiency is increasingly important as battery-powered systems (such as smartphones, drones, and body cameras) are widely used, and it is desirable to use their on-board computers to recognize objects in the images these cameras capture. The Low-Power Image Recognition Challenge (LPIRC), an annual competition started in 2015, aims to discover the best technology in both image recognition and energy conservation. In this talk, we will explain the rules of the competition and their rationale, summarize the teams' scores, and describe the lessons learned in past years. We will also discuss possible improvements for future challenges and collaboration opportunities with other events and competitions such as ReQuEST.
Short bio: Yiran Chen received his B.S. and M.S. from Tsinghua University and his Ph.D. from Purdue University in 2005. After five years in industry, he joined the University of Pittsburgh in 2010 as an Assistant Professor, was promoted to Associate Professor with tenure in 2014, and held the Bicentennial Alumni Faculty Fellowship. He is now a tenured Associate Professor in the Department of Electrical and Computer Engineering at Duke University, where he serves as co-director of the Duke Center for Evolutionary Intelligence (CEI), focusing on research into new memory and storage systems, machine learning and neuromorphic computing, and mobile computing systems. Dr. Chen has published one book and more than 300 technical publications, and has been granted 93 US patents. He is an associate editor of IEEE TNNLS, IEEE D&T, IEEE ESL, ACM JETC, and ACM TCPS, and has served on the technical and organization committees of more than 40 international conferences. He has received 6 best paper awards and 12 best paper nominations from international conferences. He is a recipient of the NSF CAREER award and the ACM SIGDA Outstanding New Faculty Award, and is a Fellow of the IEEE.
See LPIRC tournaments. |
2:30pm-2:50pm |
"Real-Time Image Recognition Using Collaborative IoT Devices"
Ramyad Hadidi, Jiashen Cao, Matthew Woodward, Michael S. Ryoo, Hyesoon Kim, Georgia Institute of Technology, USA |
Validated
Nvidia Jetson TX2, ARM, Raspberry Pi, AlexNet, VGG16, TensorFlow, Keras, Avro |
2:50pm-3:10pm |
"Highly Efficient 8-bit Low Precision Inference of Convolutional Neural Networks with IntelCaffe"
Jiong Gong, Haihao Shen, Guoming Zhang, Xiaoli Liu, Shane Li, Ge Jin, Niharika Maheshwari, Intel Corporation |
Validated
Xeon Platinum 8124M, AWS, Intel C++ Compiler 17.0.5 20170817, ResNet-50, Inception-V3, SSD, 32-bit, 8-bit, Caffe |
3:10pm-3:30pm |
"VTA: Open Hardware/Software Stack for Vertical Deep Learning System Optimization"
Thierry Moreau, Tianqi Chen, Luis Ceze, University of Washington, USA |
Validated
Xilinx FPGA (Pynq board), ResNet-*, MXNet, NNVM/TVM |
3:30pm-4:00pm | Break |
4:00pm-4:20pm |
"Optimizing Deep Learning Workloads on ARM GPU with TVM"
Lianmin Zheng (1), Tianqi Chen (2)
(1) Shanghai Jiao Tong University, China; (2) University of Washington, USA |
Validated
Firefly-RK3399, GCC, LLVM, VGG16, MobileNet, ResNet-18, OpenBLAS vs ArmCL, MXNet, NNVM/TVM |
4:20pm-4:50pm |
"Introducing open ReQuEST platform, scoreboard and long-term vision"
Grigori Fursin and the ReQuEST organizers
"Exploring performance and accuracy of the MobileNets family using the Arm Compute Library"
Nikolay Chunosov, Flavio Vella, Anton Lokhmotov, Grigori Fursin
dividiti, UK |
Validated
HiKey 960 (GPU), GCC, MobileNets exploration, ArmCL (18.01,18.02,dividiti optimizations), OpenCL |
4:50pm-5:00pm |
Demonstrating live ReQuEST scoreboard with latest validated results
Note that the idea of ReQuEST tournaments is to continuously update this scoreboard with the help of the authors and the community even after the workshop. Please stay tuned! |
Live ReQuEST scoreboard
and shared ReQuEST workflow with all artifacts.
Other shared CK artifacts and workflows are available here. |
5:00pm |
Open panel and discussion:
"Tackling complexity, reproducibility and tech transfer challenges in rapidly evolving AI/ML/systems research"
Moderators: Grigori Fursin and Thierry Moreau. We plan to center the discussion on the following questions:
Participants:
|
The 1st ReQuEST tournament is co-located with ACM ASPLOS'18 and will focus on optimizing the whole model/software/hardware stack for image classification based on the ImageNet Large Scale Visual Recognition Challenge (ILSVRC). Unlike the classical ILSVRC, where submissions are ranked according to their classification accuracy, ReQuEST submissions will be evaluated according to multiple metrics and trade-offs selected by the authors (e.g. accuracy, speed, throughput, energy consumption, hardware cost, usage cost) in a unified, reproducible and objective way using the Collective Knowledge framework (CK). Restricting the competition to a single application domain will allow us to test our open-source ReQuEST tournament infrastructure, validate it across multiple platforms and environments, and prepare a dedicated live scoreboard with results similar to this public CK scoreboard.
We encourage participants to target accessible, off-the-shelf hardware to allow our evaluation committee to conveniently reproduce their results. Example systems include:
Example optimizations include:
If you are already familiar with the open-source Collective Knowledge framework (CK), you are encouraged to convert your experimental workflows to portable CK workflows. Such workflows can automatically set up the environment, detect required software dependencies, install missing packages and run experiments, thus automating artifact evaluation. (See some examples here.)
If you are not familiar with CK, worry not! We will gladly help you convert your submission to CK during the evaluation stage.
Step 1: Collaborate on converting your workflows to CK
Step 2: Collaborate on validating your results
Again, the authors can communicate with the reviewers privately via HotCRP, semi-privately via Slack, or publicly by opening tickets in shared repositories (see examples 1 and 2) and/or via the CK mailing list. If any of the organizers submit their workflows (mainly to provide reference implementations), their submissions will go through public evaluation.
Due to the multi-faceted nature of the competition, submissions will not be ranked according to a single metric (as this often results in over-engineered solutions); instead, the AEC will assess their Pareto optimality on two or more metrics exposed by the authors. As such, there will not be a single winner, but rather better and worse designs based on their relative Pareto optimality (up to 3 design points allowed per submission). We will collaborate with the authors to correctly visualize the results and SW/HW/model configurations on a public scoreboard, while grouping them according to certain categories of their choice (e.g. embedded vs. server). A unique submission may define a category in its own right. To win, an entry's results will normally lie close to the Pareto-optimal frontier in its category. However, a winning entry can also be praised for its originality, reproducibility, adaptability, scalability, portability, ease of use, etc.
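To make the evaluation criterion concrete, the Pareto-optimality check described above can be sketched in a few lines of Python. This is an illustrative sketch only, not the AEC's actual tooling, and the metric names and values in the example are hypothetical: a design point stays on the frontier if no other point is at least as good on every metric and strictly better on at least one.

```python
# Sketch of Pareto-frontier filtering over submission design points.
# Each point is a tuple of metric values; `maximize` says, per metric,
# whether larger is better (True) or smaller is better (False).

def dominates(a, b, maximize):
    """True if point `a` dominates `b`: at least as good on every metric
    and strictly better on at least one."""
    at_least_as_good = all((x >= y) if mx else (x <= y)
                           for x, y, mx in zip(a, b, maximize))
    strictly_better = any((x > y) if mx else (x < y)
                          for x, y, mx in zip(a, b, maximize))
    return at_least_as_good and strictly_better

def pareto_frontier(points, maximize):
    """Return the points not dominated by any other point."""
    return [p for p in points
            if not any(dominates(q, p, maximize) for q in points if q is not p)]

# Hypothetical design points: (accuracy %, latency ms).
# We maximize accuracy and minimize latency.
designs = [(76.0, 120.0), (71.0, 35.0), (74.0, 40.0), (70.0, 90.0)]
frontier = pareto_frontier(designs, maximize=(True, False))
# (70.0, 90.0) is dominated by (74.0, 40.0) and drops off the frontier.
```

The same function extends directly to three or more metrics (e.g. adding energy or hardware cost as extra tuple entries), which is why no single-metric ranking is needed.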
A common academic and industrial panel will be held at the end of the workshop to discuss how to improve the common SW/HW co-design methodology and infrastructure for deep learning and other real-world workloads.
After the workshop, we will prepare a public report for the ReQuEST Advisory/Industrial Board. The board members will provide their feedback on the results, collaborate on a common methodology for reproducible evaluation and optimization, suggest realistic workloads, help provide access to rare hardware platforms to the Artifact Evaluation Committee for future tournaments, and provide prizes for distinguished entries.
We will use our practical experience reproducing experimental results from ReQuEST submissions to help set up artifact evaluation at the upcoming SysML 2019, and to suggest new algorithms for inclusion in the MLPerf benchmark.