Sept 11, 2023: Final program including keynotes posted!
Aug 25, 2023: ITEM has been scheduled for Sept 18, 2023, and the workshop program has been added!
Jun 10, 2023: Submission link online - https://cmt3.research.microsoft.com/ECMLPKDDworkshop2023, TPC added
Jun 8, 2023: Submission deadline extended to June 26, 2023
May 4, 2023: ITEM 2023 has been accepted at ECML-PKDD2023!
Local and embedded machine learning (ML) is a key component for real-time data analytics in upcoming computing environments like the Internet of Things (IoT), edge computing and mobile ubiquitous systems. The goal of the ITEM workshop is to bring together experts, researchers and practitioners from all relevant communities, including ML, hardware design and embedded systems, IoT, edge, and ubiquitous / mobile computing. Topics of interest include compression techniques for existing ML models, new ML models that are especially suitable for embedded hardware, tractable models beyond neural networks, federated learning approaches, as well as automatic code generation, frameworks and tool support. The workshop is planned as a combination of invited talks, paper presentations, and open-table discussions.
Keywords: embedded machine learning, pervasive intelligent devices, real-time data analytics, uncertainty and robustness
About the Workshop
There is an increasing need for real-time intelligent data analytics, driven by a world of Big Data and society's need for pervasive intelligent devices, such as wearables for health and recreational purposes, smart city infrastructure, e-commerce, Industry 4.0, and autonomous robots. Most applications share characteristics such as large data volumes, real-time requirements, and limited resources, including processor, memory, network and possibly battery life. Data might be large but possibly incomplete and uncertain. Notably, powerful cloud services are often unavailable, or not an option due to latency or privacy constraints. For these tasks, Machine Learning (ML) is among the most promising approaches to address learning and reasoning under uncertainty. Examples include image and speech processing, such as image recognition, segmentation, object localization, multi-channel speech enhancement, and speech recognition; signal processing, such as radar signal denoising; with applications as broad as robotics, medicine, autonomous navigation, and recommender systems.
To address uncertainty and limited data, and to improve the robustness of ML in general, new methods are required, with examples including Bayesian approaches, sum-product networks, transformer networks, graph-based neural networks, and many more. One can observe that, compared with deep convolutional neural networks, computations can be fundamentally different, compute requirements can substantially increase, and underlying properties like structure in computation are often lost. As a result, we observe a strong need for new ML methods to address the requirements of emerging workloads deployed in the real world, such as uncertainty, robustness, and limited data. In order not to hinder the deployment of such methods on various computing devices, and to address the gap between applications and compute hardware, we furthermore need a variety of tools. As such, this workshop aims to gather new ideas and concepts on
ML methods for real-world deployment,
methods for compression and related complexity reduction tools,
dedicated hardware for emerging ML tasks,
and associated tooling like compilers and mappers.
Topics of Interest
Topics of particular interest include, but are not limited to:
Compression of neural networks for inference deployment, including methods for quantization (including binarization), pruning, knowledge distillation, structural efficiency and neural architecture search
Hardware support for novel ML architectures beyond CNNs, e.g., transformer models
Tractable models beyond neural networks
Learning on edge devices, including federated and continuous learning
Trade-offs among prediction quality (accuracy), efficiency of representation (model parameters, data types for arithmetic operations, and memory footprint in general), and computational efficiency (complexity of computations)
Automatic code generation from high-level descriptions, including linear algebra and stencil codes, targeting existing and future instruction set extensions
Tool-driven optimizations from the ML model level down to the instruction level, automatically adapted to current hardware requirements
Understanding the difficulties and opportunities of using common ML frameworks with marginally supported devices
Exploring new ML models designed for use on dedicated device hardware
Future emerging processors and technologies for use in resource-constrained environments, e.g., RISC-V, embedded FPGAs, or analog technologies
Applications and experiences from deployed use cases requiring embedded ML
New and emerging applications that require ML on resource-constrained hardware
Energy efficiency of ML models created with distinct optimization techniques
Security/privacy of embedded ML
New benchmarks suited to edge and embedded devices
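To make the first topic above concrete, the following is a minimal sketch of post-training uniform symmetric int8 quantization, one of the compression techniques solicited here. It is an illustrative example only, not tied to any particular framework; all function names are hypothetical, and real deployments typically use per-channel scales and calibration data.

```python
import numpy as np

def quantize_int8(w):
    """Uniform symmetric quantization of a float32 weight tensor to int8.

    The largest magnitude in w is mapped to 127, so the quantization
    step (scale) is max|w| / 127, and the worst-case rounding error
    per element is scale / 2.
    """
    scale = np.max(np.abs(w)) / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover an approximate float32 tensor from int8 values and scale."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(size=(64, 64)).astype(np.float32)

q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)

print("max abs error:", np.max(np.abs(w - w_hat)))
print("memory: %d B (fp32) -> %d B (int8)" % (w.nbytes, q.nbytes))
```

The 4x memory reduction (fp32 to int8) is what makes such techniques attractive on microcontroller-class devices, at the cost of a bounded rounding error per weight.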
Workshop Paper Submission Deadline: June 26, 2023 (extended from June 12, firm)
Workshop Paper Author Notification: 12 July 2023
Camera-ready deadline: TBD (depending on conference organization)
Workshop date: full day, between Sept 18 and 22, 2023 (depending on conference organization)
Papers must be written in English and formatted according to the Springer LNCS guidelines. Author instructions, style files and the copyright form can be downloaded here: http://www.springer.com/gp/computer-science/lncs/conference-proceedings-guidelines
Submissions may not exceed 12 pages in PDF format for full papers and 6 pages for short papers, including figures and references. Submitted papers must be original work that has not appeared in and is not under consideration for another conference or journal. Please prepare submissions according to single-blind standards. Work in progress is welcome, but first results should be made available as a proof of concept. Submissions consisting only of a proposal will be rejected.
ECML-PKDD2023 will organize joint workshop proceedings. Please submit your manuscript here, under the "ITEM" track: https://cmt3.research.microsoft.com/ECMLPKDDworkshop2023. Please ensure that you use the right track; otherwise your manuscript will be submitted to a different workshop.
Benjamin Klenk, NVIDIA Research
Dolly Sapra, University of Amsterdam
Enrique Quintana-Ortí, Technical University of Valencia, Spain
Herman Engelbrecht, University of Stellenbosch
Jürgen Teich, University of Erlangen-Nuremberg
Manuel Dolz, Universitat Jaume I, Spain
Manuele Rusci, KU Leuven
Martin Andraud, Aalto University, Finland
Michael Kamp, Ruhr University Bochum
Nicolas Weber, NEC Laboratories Europe
Sébastien Rumley, University of Applied Sciences Western Switzerland - Fribourg, Switzerland
Yannick Stradmann, Heidelberg University, Germany
09:00-09:30 Welcome and Introduction
09:30-10:30 Keynote by Martin Andraud: Energy-efficient probabilistic reasoning for edge AI
Abstract: Developing accurate and energy-efficient AI algorithms is crucial for their deployment on edge devices. In this context, models based on probabilistic reasoning can complement or replace commonly used deep learning models, as they are generative, explainable, and can enable compact representation. However, their hardware implementation and acceleration are still in the early stages, since probabilistic models are typically challenging to translate into computational steps. In this talk, we will motivate the use of probabilistic reasoning on edge devices, taking as a case study Probabilistic Circuits (PCs). PCs have emerged as an appealing category of models, as they can be seen as computational graphs close to neural networks and optimized for efficient inference. After introducing PCs for training and inference, we will detail recent efforts to develop energy-efficient hardware accelerators for PCs, spanning efficient model training, optimization of the model's structure and computation, and key ideas for hardware acceleration.
Bio: Martin Andraud has been an assistant professor in microelectronics at Aalto University since 2019 (and from January 2024 at UCLouvain, Belgium). He received his PhD from Grenoble University, France, in 2016, and was subsequently a postdoctoral researcher with TU Eindhoven in 2016 and KU Leuven from 2017 to 2019. At Aalto, his research team works at the interface between edge AI, hardware/software co-design and custom AI accelerators. He currently runs various research projects, investigating, for example, analog in-memory computing accelerators and alternative or complementary models to deep neural networks, such as probabilistic circuits.
10:30-11:00 Manfred Mücke, Christoph Gratl: Evaluating custom-precision operator support in MLIR for ARM CPUs
11:30-12:00 Yannick Emonds, Kai Xi, Holger Fröning: Implications of Noise in Resistive Memory on Deep Neural Networks for Image Classification
12:00-12:30 Lisa Kuhn, Bernhard Klein, Holger Fröning: On the Non-Associativity of Analog Computations
12:30-13:00 Sergio Barrachina, Manuel F. Dolz, Germán Fabregat, Antonio Maciá-Lillo: Boosting Deep Learning Inference Convolutions on ARM Cortex-M4 based Microcontrollers
14:30-15:30 Keynote by Burkhard Ringlein: Operation Set Architectures for low-latency ML inference using FPGAs
Abstract: The computational requirements of artificial intelligence workloads are growing exponentially. In addition, more and more compute is moved towards the edge due to latency or localization constraints. At the same time, Dennard scaling has ended and Moore's law is winding down. These trends created an opportunity for specialized accelerators including field-programmable gate arrays (FPGAs), but the poor support and usability of today's tools prevents FPGAs from being deployed at scale for deep neural network (DNN) inference applications. Within the H2020 project EVEREST (https://everest-h2020.eu/), we developed a design environment that aims to simplify the mapping of complex end-user workflows, like DNN inference or big data processing, to energy-optimized heterogeneous hardware with a focus on FPGA-accelerated systems. We proposed the concept of operation set architectures (OSAs) to overcome the current incompatibilities and hurdles in using DNN-to-FPGA compilers by combining existing specialized frameworks into one organic compiler that also allows the efficient and automatic re-use of existing community tools. In this talk, I will present the principles behind OSAs and how they enable low-latency inference for big data workflows or edge DNN inference. We will also discuss DOSA, a prototype OSA implementation to distribute DNNs across many FPGAs as part of the EVEREST system development kit. The talk also aims to spark a debate around the future of ML inference on FPGAs.
15:30-16:00 Mark Deutel, Christopher Mutschler, Jürgen Teich: microYOLO: Towards Single-Shot Object Detection on Microcontrollers
16:30-17:00 Martin Lechner, Axel Jantsch: OptiSim: A Hardware-Aware Optimization Space Exploration Tool for CNN Architectures
17:00-17:30 Nitish Satya Murthy, Francky Catthoor, Marian Verhelst, Peter Vrancx: Quantized dynamics models for hardware-efficient control and planning in model-based RL
17:30 Closing remarks
This workshop is envisioned as a counterpart to the highly successful workshop series on embedded machine learning (WEML), held annually at Heidelberg University (for the latest edition, see https://www.deepchip.org/weml2023). The two workshop formats complement each other: WEML is highly discussion-oriented, invitation-only, and has no requirements on scientific novelty, results or publications. In contrast, ITEM is envisioned as a high-quality academic outlet, including results demonstrating at least the potential of the presented work, with a healthy mix of peer-reviewed contributions and invited talks.
Gregor Schiele, University of Duisburg-Essen (gregor.schiele(at)uni-due.de)
Holger Fröning, Heidelberg University, Germany (holger.froening(at)ziti.uni-heidelberg.de)
Franz Pernkopf, Graz University of Technology, Austria (pernkopf(at)tugraz.at)
Michaela Blott, XILINX Research, Dublin, Ireland (michaela.blott(at)amd.com)