ITEM Workshop 2022

IoT, Edge, and Mobile for Embedded Machine Learning

Workshop co-located with ECML-PKDD 2022, Grenoble, France, September 19-23, 2022


  • Sept 12, 2022: Accepted papers are online!

  • Sept 8, 2022: Program is online! Looking forward to seeing all interested colleagues in Grenoble!

  • June 22, 2022: Submission deadline extended!

  • June 9, 2022: TPC updated and submission open!

  • May 11, 2022: ITEM will take place on Monday, September 19 in the afternoon!

  • April 16, 2022: ITEM 2022 has been accepted at ECML-PKDD 2022!


Local and embedded machine learning (ML) is a key component for real-time data analytics in upcoming computing environments like the Internet of Things (IoT), edge computing and mobile ubiquitous systems. The goal of the ITEM workshop is to bring together experts, researchers and practitioners from all relevant communities, including ML, hardware design and embedded systems, IoT, edge, and ubiquitous / mobile computing. Topics of interest include compression techniques for existing ML models, new ML models that are especially suitable for embedded hardware, tractable models beyond neural networks, federated learning approaches, as well as automatic code generation, frameworks and tool support. The workshop is planned as a combination of invited talks, paper submissions, as well as open-table discussions.

Keywords: embedded machine learning, pervasive intelligent devices, real-time data analytics, uncertainty and robustness

About the Workshop

There is an increasing need for real-time intelligent data analytics, driven by a world of Big Data and society's need for pervasive intelligent devices, such as wearables for health and recreational purposes, smart city infrastructure, e-commerce, Industry 4.0, and autonomous robots. Most applications share common characteristics: large data volumes, real-time requirements, and limited resources, including processor, memory, network bandwidth, and possibly battery life. Data might be large but possibly incomplete and uncertain. Notably, powerful cloud services are often unavailable, or not an option due to latency or privacy constraints. For these tasks, machine learning (ML) is among the most promising approaches to address learning and reasoning under uncertainty. Examples include image and speech processing, such as image recognition, segmentation, object localization, multi-channel speech enhancement, and speech recognition; signal processing, such as radar signal denoising; with applications as broad as robotics, medicine, autonomous navigation, and recommender systems.

To address uncertainty and limited data, and to improve the robustness of ML in general, new methods are required, with examples including Bayesian approaches, sum-product networks, transformer networks, graph-based neural networks, and many more. Compared with deep convolutional neural networks, the computations of these methods can be fundamentally different, their compute requirements can increase substantially, and underlying properties such as structure in computation are often lost. As a result, there is a strong need for new ML methods that address the requirements of emerging workloads deployed in the real world, such as uncertainty, robustness, and limited data. To enable the deployment of such methods on a wide range of computing devices, and to bridge the gap between applications and compute hardware, a variety of tools is furthermore needed. As such, this workshop aims to gather new ideas and concepts on

  • ML methods for real-world deployment,

  • methods for compression and related complexity reduction tools,

  • dedicated hardware for emerging ML tasks,

  • and associated tooling like compilers and mappers.

Topics of Interest

Topics of particular interest include, but are not limited to:

  • Compression of neural networks for inference deployment, including methods for quantization (including binarization), pruning, knowledge distillation, structural efficiency and neural architecture search

  • Hardware support for novel ML architectures beyond CNNs, e.g., transformer models

  • Tractable models beyond neural networks

  • Learning on edge devices, including federated and continuous learning

  • Trading among prediction quality (accuracy), efficiency of representation (model parameters, data types for arithmetic operations and memory footprint in general), and computational efficiency (complexity of computations)

  • Automatic code generation from high-level descriptions, including linear algebra and stencil codes, targeting existing and future instruction set extensions

  • Tool-driven optimizations from the ML model level down to the instruction level, automatically adapted to the requirements of the target hardware

  • Understanding the difficulties and opportunities using common ML frameworks with marginally supported devices

  • Exploring new ML models designed for execution on dedicated device hardware

  • Future emerging processors and technologies for use in resource-constrained environments, e.g., RISC-V, embedded FPGAs, or analog technologies

  • Applications and experiences from deployed use cases requiring embedded ML

  • New and emerging applications that require ML on resource-constrained hardware

  • Energy efficiency of ML models created with distinct optimization techniques

  • Security/privacy of embedded ML

  • New benchmarks suited to edge and embedded devices
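To make one of the topics above concrete, here is a minimal sketch of symmetric per-tensor int8 post-training quantization, a common compression technique for inference deployment. The helper names are illustrative (not from any particular framework) and assume only NumPy:

```python
import numpy as np

def quantize_int8(w):
    """Symmetric per-tensor post-training quantization of float32 weights to int8."""
    # One scale for the whole tensor, chosen so the largest magnitude maps to 127.
    scale = max(float(np.max(np.abs(w))), 1e-8) / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize_int8(q, scale):
    """Recover an approximate float32 tensor from int8 values and the scale."""
    return q.astype(np.float32) * scale

# Quantize a random weight matrix and measure the reconstruction error.
w = np.random.randn(256, 256).astype(np.float32)
q, s = quantize_int8(w)
max_err = np.max(np.abs(w - dequantize_int8(q, s)))
```

Per-tensor symmetric quantization reduces the memory footprint by 4x relative to float32, at the cost of a bounded rounding error (at most half a quantization step per weight); per-channel scales and calibration on representative data typically recover most of the resulting accuracy loss.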

Important Dates

  • Submission deadline: (was: June 23, 2022) July 8, 2022

  • Acceptance notification: (was: Jul 13, 2022) Aug 1, 2022

  • Camera-ready paper: (was: Aug 15, 2022) Aug 26, 2022

  • Workshop papers available online: Sept 5, 2022

  • Workshop date: Sept 19, 2022

Please see the conference website for registration deadlines, including rules for early registration.


Papers must be written in English and formatted according to the Springer LNCS guidelines. Author instructions, style files, and the copyright form can be downloaded from Springer's LNCS author guidelines page.

Submissions may not exceed 12 pages for full papers and 6 pages for short papers, in PDF format and including figures and references. Submitted papers must be original work that has not appeared in, and is not under consideration for, another conference or journal. Please prepare submissions according to single-blind standards. Work in progress is welcome, but first results should be made available as a proof of concept. Submissions consisting only of a proposal will be rejected.

We are negotiating with the ECML-PKDD co-organizers about joint proceedings, or will otherwise try to organize individual proceedings; more details will follow. In case of questions, please contact the workshop organizers. In any case, accepted papers will be posted on the workshop's website. If the authors of an accepted paper choose not to include it in the workshop proceedings, acceptance does not preclude later publication at other conferences and/or journals.

Submission site closed.

Program Committee

  • Jürgen Becker, KIT

  • Costas Bekas, Citadel Securities

  • Herman Engelbrecht, University of Stellenbosch

  • Domenik Helms, DLR

  • Michael Kamps, Ruhr-University Bochum

  • David King, Air Force Institute of Technology

  • Benjamin Klenk, NVIDIA

  • Manfred Mücke, Materials Center Leoben

  • Marco Platzner, University of Paderborn

  • Sébastien Rumley, HES-SO Fribourg

  • Dolly Sapra, University of Amsterdam

  • Günther Schindler, SAP SE

  • Wei Shao, RMIT University

  • Yannik Stradmann, Heidelberg University

  • Jürgen Teich, University of Erlangen-Nuremberg

  • Ola Torudbakken, Graphcore

  • Nicolas Weber, NEC Laboratories Europe


Program

  • 14:30-14:40 Introduction

  • 14:40-15:05 Xiaotian Guo, Andy D. Pimentel, and Todor Stefanov: Hierarchical Design Space Exploration for Distributed CNN Inference at the Edge [pdf]

  • 15:05-15:30 Fabian Kreß, Julian Hoefer, Tim Hotfilter, Iris Walter, El Mahdi El Annabi, Tanja Harbaum, and Jürgen Becker: Automated Search for Deep Neural Network Inference Partitioning on Embedded FPGA [pdf]

  • 15:30-15:55 Tiberius-George Sorescu, Chandrakanth R. Kancharla, Jeroen Boydens, Hans Hallez, and Mathias Verbeke: Framework to Evaluate Deep Learning Algorithms for Edge Inference and Training [pdf]

  • 15:55-16:20 Adrian Osterwind, Julian Droste-Rehling, Manoj-Rohit Vemparala, and Domenik Helms: Hardware Execution Time Prediction for Neural Network Layers [pdf]

  • 16:20-16:30 Mini-Discussion

  • Coffee Break

  • 17:00-17:25 Chao Qian, Tianheng Ling, Gregor Schiele: Enhancing Energy-efficiency by Solving the Throughput Bottleneck of LSTM Cells for Embedded FPGAs [pdf]

  • 17:25-17:50 Manuele Rusci, Marco Fariselli, Martin Croome, Francesco Paci, and Eric Flamand: Accelerating RNN-based Speech Enhancement on a Multi-Core MCU with Mixed FP16-INT8 Post-Training Quantization [pdf]

  • 17:50-18:15 Han Wu, Holland Qian, Huaming Wu, and Aad van Moorsel: LDRNet: Enabling Real-time Document Localization on Mobile Devices [pdf]

  • 18:15-18:30 Discussion


This workshop is envisioned as a counterpart to the highly successful workshop series on embedded machine learning (WEML), held annually at Heidelberg University (for the latest edition, see the WEML website). The two workshop formats complement each other: WEML is highly discussion-oriented, invitation-only, and without requirements on scientific novelty, results, or publications. In contrast, ITEM is envisioned as a high-quality academic outlet, including results demonstrating at least the potential of the presented work, with a healthy mix of peer-reviewed contributions and invited talks.



Organizers

  • Holger Fröning, Heidelberg University, Germany (holger.froening(at)

  • Franz Pernkopf, Graz University of Technology, Austria (pernkopf(at)

  • Michaela Blott, XILINX Research, Dublin, Ireland (michaela.blott(at)

  • Gregor Schiele, University of Duisburg-Essen (gregor.schiele(at)

Program Co-Chair

  • Kazem Shekofteh, Heidelberg University, Germany (kazem.shekofteh(at)