ITEM Workshop

IoT, Edge, and Mobile for Embedded Machine Learning


Workshop co-located with ECML-PKDD 2020, Ghent, Belgium, September 14-18, 2020

News

  • Program is online! See below for details.

  • ECML-PKDD will be held virtually, and so will ITEM. This is unfortunate, but we will do our best to ensure as much interactivity as possible.

  • Luca Benini from ETHZ and Bologna University has just been confirmed as the second keynote speaker!

  • We are happy to announce Song Han from MIT as one of our keynote speakers!

Overview

Local and embedded machine learning (ML) is a key component for real-time data analytics in upcoming computing environments like the Internet of Things (IoT), edge computing, and mobile ubiquitous systems. The goal of the ITEM workshop is to bring together experts, researchers, and practitioners from all relevant communities, including ML, hardware design and embedded systems, IoT, edge, and ubiquitous/mobile computing. Topics of interest include compression techniques for existing ML models, new ML models that are especially suitable for embedded hardware, federated learning approaches, as well as automatic code generation, frameworks, and tool support. The workshop is planned as a combination of invited talks, paper presentations, and open-table discussions.

Keywords: embedded machine learning, pervasive intelligent devices, real-time data analytics, uncertainty and robustness

Details on the Intersection of Machine Learning and Computer Architecture

There is an increasing need for real-time intelligent data analytics, driven by a world of Big Data and society’s need for pervasive intelligent devices. Application examples include wearables for health and recreational purposes, infrastructure such as smart cities, transportation and smart power grids, e-commerce and Industry 4.0, and autonomous robots including self-driving cars. Most applications share common characteristics: large data volumes, real-time requirements, and limited resources including processor, memory, and network. Often, battery life is a concern; data might be large but possibly incomplete; and, perhaps most importantly, data can be uncertain. Notably, powerful cloud services are often unavailable, or not an option due to latency or privacy constraints.

For these tasks, Machine Learning (ML) is among the most promising approaches to address learning and reasoning under uncertainty. Deep learning methods in particular are well-established supervised and unsupervised ML methods, and are well understood with regard to compute and data requirements, accuracy, and (partly) generalization. Today’s deep learning algorithms dramatically advance the state of the art in accuracy for the vast majority of AI tasks. Examples include image and speech processing, such as image recognition, segmentation, object localization, multi-channel speech enhancement, and speech recognition, as well as signal processing such as radar signal denoising, with applications as broad as robotics, medicine, autonomous navigation, and recommender systems.

As a result, ML is embedded in various compute devices, ranging from powerful cloud systems through fog and edge computing to smart devices. Due to the demanding nature of this workload, which is heavily compute- and memory-intensive, virtually all deployments are limited by resources; this is particularly true for IoT, edge, and mobile. One result of these constraints is a variety of specialized processor architectures, tailored to particular ML tasks. While such specialization helps for the task at hand, ML is advancing fast and new methods are introduced frequently. Notably, the requirements of such tasks very often advance faster than the performance of new compute hardware, widening the gap between application and compute hardware. This observation is emphasized by the slowing of Moore’s law, which used to deliver consistent performance scaling over decades.

Furthermore, to address uncertainty and limited data, and to improve the robustness of ML in general, new methods are required; examples include Bayesian approaches, sum-product networks, capsule networks, graph-based neural networks, and many more. Compared with deep convolutional neural networks, their computations can be fundamentally different, compute requirements can increase substantially, and underlying properties like structure in computation are often lost.

As a result, we observe a strong need for new ML methods that address the requirements of emerging workloads deployed in the real world, such as uncertainty, robustness, and limited data. In order not to hinder the deployment of such methods on various computing devices, and to close the gap between application and compute hardware, we furthermore need a variety of tools. As such, this workshop aims to gather new ideas and concepts on

  • ML methods for real-world deployment,

  • methods for compression and related complexity reduction tools,

  • dedicated hardware for emerging ML tasks,

  • and associated tooling like compilers and mappers.

Similarly, the workshop aims to serve as a platform that gathers experts from ML and systems to tackle these problems jointly, creating an atmosphere of open discussion and interaction.

Topics of Interest

Topics of particular interest include, but are not limited to:

  • Compression of neural networks for inference deployment, including methods for quantization (including binarization), pruning, knowledge distillation, structural efficiency and neural architecture search

  • Learning on edge devices, including federated and continuous learning

  • Trading among prediction quality (accuracy), efficiency of representation (model parameters, data types for arithmetic operations and memory footprint in general), and computational efficiency (complexity of computations)

  • Automatic code generation from high-level descriptions, including linear algebra and stencil codes, targeting existing and future instruction set extensions

  • Tool-driven optimizations up from ML model level down to instruction level, automatically adapted to the current hardware requirements

  • Understanding the difficulties and opportunities using common ML frameworks with marginally supported devices

  • Exploring new ML models designed for use on dedicated device hardware

  • Future emerging processors and technologies for use in resource-constrained environments

  • Applications and experiences from deployed use cases requiring embedded ML

  • New and emerging applications that require the use of ML on resource-constrained hardware

  • Energy efficiency of ML models created with distinct optimization techniques

  • Security/privacy of embedded ML

  • New benchmarks suited to edge devices and learning on the edge scenarios
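Several of the topics above, quantization in particular, can be made concrete in a few lines of code. The following is a minimal sketch of post-training 8-bit affine quantization with one scale and zero-point per tensor, written with NumPy; the function names and per-tensor granularity are illustrative assumptions, not tied to any submitted work:

```python
import numpy as np

def quantize_int8(w):
    """Map a float tensor to int8 via affine (scale + zero-point) quantization."""
    w_min, w_max = float(w.min()), float(w.max())
    # One scale for the whole tensor; guard against constant tensors.
    scale = (w_max - w_min) / 255.0 if w_max > w_min else 1.0
    zero_point = -128 - int(round(w_min / scale))  # w_min maps to -128
    q = np.clip(np.round(w / scale) + zero_point, -128, 127)
    return q.astype(np.int8), scale, zero_point

def dequantize(q, scale, zero_point):
    """Reconstruct approximate float values from the int8 representation."""
    return (q.astype(np.float32) - zero_point) * scale

# 4x smaller than float32, with reconstruction error bounded by the scale.
w = np.random.default_rng(42).normal(size=(128, 128)).astype(np.float32)
q, scale, zp = quantize_int8(w)
print("max abs error:", float(np.abs(w - dequantize(q, scale, zp)).max()))
```

Deployment toolchains typically refine this scheme, e.g. with one (scale, zero-point) pair per output channel, but the trade-off it exposes, memory footprint versus reconstruction error, is the same one the topics above address.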

Important Dates (updated)

  • Abstract registration deadline: June 9, 2020

  • Submission deadline: June 24, 2020

  • Acceptance notification: August 1, 2020

  • Camera-ready paper: August 15, 2020

  • Workshop program and proceedings online: September 1, 2020

  • Workshop date: Sept 14, 2020

Please see here for conference registration deadlines, including rules for early registration: https://ecmlpkdd2020.net/attending/registration/

Submission

Papers must be written in English and formatted according to the Springer LNCS guidelines. Author instructions, style files and the copyright form can be downloaded here: http://www.springer.com/gp/computer-science/lncs/conference-proceedings-guidelines

Submissions may not exceed 12 pages for full papers and 6 pages for short papers, in PDF format, including figures and references. Submitted papers must be original work that has not appeared in and is not under consideration for another conference or journal. Work in progress is welcome, but first results should be made available as a proof of concept. Submissions consisting only of a proposal will be rejected.

The workshop does not have formal proceedings, so accepted papers do not preclude publishing at future conferences and/or journals. Accepted papers will be posted on the workshop's website. The workshop co-organizers are exploring the option of co-editing a special journal issue, for which selected contributions from the workshop will be invited.

Submission site: https://easychair.org/conferences/?conf=item2020

Program Committee

  • Costas Bekas, Citadel Securities

  • Herman Engelbrecht, Stellenbosch University, South Africa

  • Giulio Gambardella, XILINX

  • Tobias Golling, Geneva University, Switzerland

  • Domenik Helms, OFFIS e.V. - Institut für Informatik, Germany

  • Eduardo Rocha Rodrigues, IBM

  • David King, Air Force Institute of Technology, USA

  • Benjamin Klenk, NVIDIA Research

  • Manfred Mücke, Materials Center Leoben Forschung GmbH, Austria

  • Dimitrios S. Nikolopoulos, Virginia Tech, USA

  • Robert Peharz, University of Technology Eindhoven, NL

  • Marco Platzner, Paderborn University, Germany

  • Thomas B. Preußer, ETH Zurich, Switzerland

  • Johannes Schemmel, Heidelberg University, Germany

  • Wei Shao, Royal Melbourne Institute of Technology (RMIT), Australia

  • David Sidler, Microsoft

  • Jürgen Teich, Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), Germany

  • Sebastian Tschiatschek, University of Vienna, Austria

  • Nicolas Weber, NEC Labs Europe

  • Matthias Zöhrer, Evolve.tech, Austria

Program

All times are in CET. ITEM will be a virtual event, as is all of ECML-PKDD. Zoom will be used as the platform; details can be found in Whova (see main conference) and will be communicated to presenters.

  • 09:00 - 09:10 Workshop Co-Organizers: Introduction

  • 09:10 - 09:55 Luca Benini: From Near-Sensor to In-Sensor AI
    Edge Artificial Intelligence is the new megatrend, as privacy concerns and network bandwidth/latency bottlenecks prevent cloud offloading of sensor analytics functions in many application domains, from autonomous driving to advanced prosthetics. The next wave of "Extreme Edge AI" pushes signal processing and machine learning aggressively into sensors and actuators, opening major research and business development opportunities. In this talk I will focus on recent efforts in developing an AI-centric Extreme Edge computing platform based on open source, parallel ultra-low power (PULP) customized ISA-RISC-V processors coupled with domain-specific accelerators, and I will look ahead to the next step: namely three-dimensional integration of sensors, mixed-signal front-ends and AI-processing engines.

  • 09:55 - 10:20 Philipp Spilger, Eric Müller, Arne Emmel, Aron Leibfried, Christian Mauch, Christian Pehle, Johannes Weis, Oliver Breitwieser, Sebastian Billaudelle, Sebastian Schmitt, Timo C. Wunderlich, Yannik Stradmann and Johannes Schemmel: hxtorch: PyTorch for BrainScaleS-2 — Perceptrons on Analog Neuromorphic Hardware
    Johannes Weis, Philipp Spilger, Sebastian Billaudelle, Yannik Stradmann, Arne Emmel, Eric Müller, Oliver Breitwieser, Andreas Grübl, Joscha Ilmberger, Vitali Karasenko, Mitja Kleider, Christian Mauch, Korbinian Schreiber and Johannes Schemmel: Inference with Artificial Neural Networks on Analog Neuromorphic Hardware (*)

  • 10:20 - 10:45 Dennis Rieber and Holger Fröning: Search Space Complexity of Iteration Domain Based Instruction Embedding for Deep Learning Accelerators

  • 10:45 - 11:10 Kevin Stehle, Günther Schindler and Holger Fröning: On the Difficulty of Designing Processor Arrays for Deep Neural Networks

  • 11:10 - 11:25 Break

  • 11:25 - 11:50 Laura Morán-Fernández, Eva Blanco-Mallo, Konstantinos Sechidis, Amparo Alonso-Betanzos and Verónica Bolón-Canedo: When size matters: Markov Blanket with limited bit depth conditional mutual information

  • 11:50 - 12:15 Christopher Cichiwskyj, Chao Qian and Gregor Schiele: Time to Learn: Temporal Accelerators as an Embedded Deep Neural Network Platform

  • 12:15 - 12:40 Frederik Funk, Thorsten Bucksch and Daniel Mueller-Gritschneder: Training on a Microcontroller

  • 12:40 - 13:25 Break

  • 13:25 - 14:10 Song Han: MCUNet: TinyNAS and TinyEngine on Microcontrollers
    Machine learning on tiny IoT devices based on microcontroller units (MCU) is appealing but challenging: the memory of microcontrollers is 2-3 orders of magnitude smaller than even that of mobile phones, not to mention GPUs. We propose MCUNet, a framework that jointly designs the efficient neural architecture (TinyNAS) and the lightweight inference engine (TinyEngine). MCUNet automatically designs perfectly matched neural architectures and the inference library on MCU. MCUNet enables ImageNet-scale inference on microcontrollers that have only 1MB of flash and 320KB of SRAM. It achieves significant speedup compared to popular MCU libraries: TF-Lite Micro, CMSIS-NN, and MicroTVM. Our study suggests that the era of tiny machine learning on IoT devices has arrived.

  • 14:10 - 14:35 Laura Isabel Galindez Olascoaga, Wannes Meert, Nimish Shah and Marian Verhelst: Dynamic complexity tuning for hardware-aware probabilistic circuits

  • 14:35 - 15:00 Manuele Rusci, Marco Fariselli, Alessandro Capotondi and Luca Benini: Leveraging Automated Mixed-Low-Precision Quantization for tiny edge microcontrollers

  • 15:00 - 15:25 Johanna Rock, Wolfgang Roth, Paul Meissner and Franz Pernkopf: Quantized Deep Neural Networks for Radar Interference Mitigation

  • 15:25 - 15:30 Workshop Co-Organizers: Closing Remarks

(*) joint presentation

Synergies

This workshop is envisioned as a counterpart to the highly successful workshop series on embedded machine learning (WEML), held annually at Heidelberg University (for the latest edition, see https://www.deepchip.org/weml2020). The two workshop formats complement each other: WEML is highly discussion-oriented, invitation-only, and has no requirements on scientific novelty, results, or publications. In contrast, ITEM is envisioned as a high-quality academic outlet, including results demonstrating at least the potential of the presented work, with a healthy mix of peer-reviewed contributions and invited talks.

Organization

Co-Organizers

  • Holger Fröning, Heidelberg University, Germany (holger.froening(at)ziti.uni-heidelberg.de)

  • Franz Pernkopf, Graz University of Technology, Austria (pernkopf(at)tugraz.at)

  • Gregor Schiele, University of Duisburg-Essen (gregor.schiele(at)uni-due.de)

  • Michaela Blott, XILINX Research, Dublin, Ireland (michaela.blott(at)xilinx.com)

Program Chair

  • Benjamin Klenk, NVIDIA Research, Santa Clara, US