ITEM Workshop 2021
IoT, Edge, and Mobile for Embedded Machine Learning
Workshop co-located with ECML-PKDD 2021, ONLINE, September 17, 2021
News
Feb 21: The HiPEAC Info magazine is covering our workshop!
Jan 21: Keynote presentations are online!
Aug 19: Program is online! Please note that ITEM has been scheduled for Sept 17.
July 21: Registration link updated: https://2021.ecmlpkdd.org/?page_id=1751
June 21, 2021: Submission deadline is extended by two weeks! (No further extension)
June 9, 2021: Submission link online!
May 31, 2021: Happy to announce Jacques Pienaar (Google) as the second keynote speaker on the topic of "Composable domain specific compiler abstractions for ML and beyond"!
May 25, 2021: Happy to announce Sat Chatterjee (Google) as keynote speaker on the topic of "Machine Learning + Logic Synthesis For Fun and Profit"!
May 21, 2021: TPC added.
May 2, 2021: Note on workshop's proceedings added.
ITEM 2021 has been accepted at ECML-PKDD 2021! While the following thus contains information on the upcoming workshop, information about previous editions is archived behind the menu at the top left.
Overview
Local and embedded machine learning (ML) is a key component for real-time data analytics in upcoming computing environments like the Internet of Things (IoT), edge computing, and mobile ubiquitous systems. The goal of the ITEM workshop is to bring together experts, researchers, and practitioners from all relevant communities, including ML, hardware design and embedded systems, IoT, edge, and ubiquitous/mobile computing. Topics of interest include compression techniques for existing ML models, new ML models that are especially suitable for embedded hardware, federated learning approaches, as well as automatic code generation, frameworks, and tool support. The workshop is planned as a combination of invited talks, paper presentations, and open-table discussions.
Keywords: embedded machine learning, pervasive intelligent devices, real-time data analytics, uncertainty and robustness
About the Workshop
There is an increasing need for real-time intelligent data analytics, driven by a world of Big Data and society's need for pervasive intelligent devices, such as wearables for health and recreational purposes, smart city infrastructure, e-commerce, Industry 4.0, and autonomous robots. Most of these applications share characteristics such as large data volumes, real-time requirements, and limited resources, including processor, memory, network, and possibly battery life. Data might be large but possibly incomplete and uncertain. Notably, powerful cloud services are often unavailable, or not an option due to latency or privacy constraints. For these tasks, machine learning (ML) is among the most promising approaches to address learning and reasoning under uncertainty. Examples include image and speech processing, such as image recognition, segmentation, object localization, multi-channel speech enhancement, and speech recognition; signal processing, such as radar signal denoising; and applications as broad as robotics, medicine, autonomous navigation, and recommender systems.
To address uncertainty and limited data, and to improve the robustness of ML in general, new methods are required, with examples including Bayesian approaches, sum-product networks, capsule networks, graph-based neural networks, and many more. One can observe that, compared with deep convolutional neural networks, computations can be fundamentally different, compute requirements can substantially increase, and underlying properties like structure in computation are often lost. As a result, we observe a strong need for new ML methods to address the requirements of emerging workloads deployed in the real world, such as uncertainty, robustness, and limited data. In order not to hinder the deployment of such methods on various computing devices, and to address the gap between application and compute hardware, we furthermore need a variety of tools. As such, this workshop aims to gather new ideas and concepts on
ML methods for real-world deployment,
methods for compression and related complexity reduction tools,
dedicated hardware for emerging ML tasks,
and associated tooling like compilers and mappers.
Topics of Interest
Topics of particular interest include, but are not limited to:
Compression of neural networks for inference deployment, including methods for quantization (including binarization), pruning, knowledge distillation, structural efficiency and neural architecture search
Learning on edge devices, including federated and continuous learning
Trading among prediction quality (accuracy), efficiency of representation (model parameters, data types for arithmetic operations and memory footprint in general), and computational efficiency (complexity of computations)
Automatic code generation from high-level descriptions, including linear algebra and stencil codes, targeting existing and future instruction set extensions
Tool-driven optimizations from the ML model level down to the instruction level, automatically adapted to the target hardware's requirements
Understanding the difficulties and opportunities of using common ML frameworks with marginally supported devices
Exploring new ML models designed for use on dedicated device hardware
Future and emerging processors and technologies for use in resource-constrained environments, e.g. RISC-V, embedded FPGAs, or analogue technologies
Applications and experiences from deployed use cases requiring embedded ML
New and emerging applications that require ML on resource-constrained hardware
Energy efficiency of ML models created with distinct optimization techniques
Security/privacy of embedded ML
New benchmarks suited to edge and embedded devices
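To make one of the listed topics concrete, the following is a minimal sketch of symmetric per-tensor post-training int8 quantization, one of the compression techniques mentioned above. This is purely illustrative (function names and values are hypothetical, not tied to any submission or framework):

```python
import numpy as np

def quantize_int8(w: np.ndarray):
    """Symmetric per-tensor post-training quantization to int8.

    Returns the quantized weights and the scale needed to dequantize.
    """
    scale = float(np.max(np.abs(w))) / 127.0
    if scale == 0.0:          # all-zero tensor: any scale works
        scale = 1.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Map int8 values back to float32 for reference computation."""
    return q.astype(np.float32) * scale

# Example: per-element reconstruction error is bounded by scale / 2
w = np.array([0.5, -1.2, 0.03, 0.9], dtype=np.float32)
q, s = quantize_int8(w)
w_hat = dequantize(q, s)
```

The accuracy/efficiency trade-off listed above shows up directly here: 8-bit storage quarters the memory footprint of float32 weights, at the cost of a rounding error of at most half the scale per element.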
Important Dates
Submission deadline: July 7, 2021 (extended from June 23, 2021)
Acceptance notification: August 4, 2021
Camera-ready paper: August 18, 2021
Workshop program and proceedings online: September 1, 2021
Workshop date: September 17, 2021
Please see here for conference registration deadlines, including rules for early registration: https://2021.ecmlpkdd.org/?page_id=1751
Submission
Papers must be written in English and formatted according to the Springer LNCS guidelines. Author instructions, style files and the copyright form can be downloaded here: http://www.springer.com/gp/computer-science/lncs/conference-proceedings-guidelines
Submissions may not exceed 12 pages for full papers and 6 pages for short papers, in PDF format, including figures and references. Submitted papers must be original work that has not appeared in, and is not under consideration for, another conference or journal. Please prepare submissions according to single-blind standards. Work in progress is welcome, but first results should be made available as a proof of concept. Submissions consisting only of a proposal will be rejected.
We are negotiating with the ECML-PKDD co-organizers on joint proceedings, or will otherwise organize individual proceedings; more details will follow. In case of questions, please contact the workshop organizers. In any case, accepted papers will be posted on the workshop's website. If the authors of an accepted paper do not want it included in the workshop's proceedings, acceptance does not preclude later publication at other conferences and/or journals.
Submission site: https://easychair.org/conferences/?conf=item2021
Program Committee
Günther Schindler, Heidelberg University (Program Co-Chair)
Costas Bekas, Citadel Securities
Herman Engelbrecht, Stellenbosch University, South Africa
Giulio Gambardella, XILINX
Tobias Golling, Geneva University, Switzerland
Domenik Helms, OFFIS e.V. - Institut für Informatik, Germany
David King, Air Force Institute of Technology, USA
Benjamin Klenk, NVIDIA Research
Manfred Mücke, Materials Center Leoben Forschung GmbH, Austria
Marco Platzner, Paderborn University, Germany
Thomas B. Preußer, ETH Zurich, Switzerland
Wei Shao, Royal Melbourne Institute of Technology (RMIT), Australia
Yannik Stradmann, Heidelberg University
Jürgen Teich, Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), Germany
Nicolas Weber, NEC Labs Europe
Program (Sept 17, 2021)
09:00 - 09:10 Workshop Co-Organizers: Introduction
09:10 - 10:00 Keynote: Sat Chatterjee (Google) - Machine Learning + Logic Synthesis For Fun and Profit [presentation]
Abstract: This talk will have two parts. The first part (profit?) will be on the results of a contest held at the International Workshop on Logic Synthesis (IWLS) 2020, where the goal was to learn an unknown Boolean function from a set of input/output examples (i.e., a care set). Unlike usual logic synthesis optimization objectives such as minimizing area or delay, the goal of the contest was to maximize generalization (albeit under an area constraint). Ten teams from all over the world participated, and we shall discuss the different approaches tried and the results obtained. We believe that this line of research could be useful for approximate logic synthesis, particularly in machine learning applications. The second part (fun?) will be on how logic synthesis, and particularly ideas inspired by FPGA lookup tables, can help answer a fundamental question in deep learning today: Why do neural networks generalize when they have sufficient capacity to memorize their training set?
Bio: Sat is an Engineering Leader and Machine Learning Researcher at Google AI. His current research focuses on fundamental questions in deep learning (such as understanding why neural networks generalize at all) as well as various applications of ML (such as hardware design and verification). Before Google, he was a Senior Vice President at Two Sigma, a leading quantitative investment manager, where he founded one of the first successful deep learning-based alpha research groups on Wall Street and led a team that built one of the earliest end-to-end FPGA-based trading systems for general purpose ultra-low latency trading. Prior to that, he was a Research Scientist at Intel where he worked on microarchitectural performance analysis and formal verification for on-chip networks. He did his undergraduate studies at IIT Bombay, has a PhD in Computer Science from UC Berkeley, and has published in the top machine learning, design automation, and formal verification conferences.
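As an illustrative aside on the contest setting described in the abstract (learning a Boolean function from input/output examples), the task can be sketched as follows. This is a deliberately trivial baseline, not one of the contest entries: memorize the care set and fall back to the majority output elsewhere.

```python
from collections import Counter

def learn_boolean(examples):
    """Learn a Boolean function from (input_bits, output) examples.

    Memorizes the care set and returns the majority output for any
    input not seen during training -- a trivial generalization baseline.
    """
    table = {bits: out for bits, out in examples}
    default = Counter(out for _, out in examples).most_common(1)[0][0]

    def f(bits):
        return table.get(bits, default)

    return f

# Train on a partial truth table of 2-input AND; (1, 0) is unseen.
examples = [((0, 0), 0), ((0, 1), 0), ((1, 1), 1)]
f = learn_boolean(examples)
```

The contest's interesting question is exactly what this baseline sidesteps: how to generalize to the unseen inputs better than a constant default, while staying within an area budget.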
10:00 - 10:15 Break
10:15 - 10:45 Lukas Einhaus, Chao Qian, Christopher Ringhofer and Gregor Schiele: Towards Precomputed 1D-convolutional Layers for Embedded FPGAs [presentation]
10:45 - 11:15 Iris Walter, Jonas Ney, Tim Hotfilter, Vladimir Rybalkin, Julian Höfer, Norbert Wehn and Jürgen Becker. Embedded Face Recognition for Personalized Services in the Assistive Robotics [presentation]
11:15 - 11:45 Hassan Ghasemzadeh Mohammadi, Felix Paul Jentzsch, Maurice Kuschel, Rahil Arshad, Sneha Rautmare, Suraj Manjunatha, Marco Platzner, Alexander Boschmann and Dirk Schollbach. FLight: FPGA Acceleration of Lightweight DNN Model Inference in Industrial Analytics [presentation]
11:45 - 12:00 Break
12:00 - 12:30 Ilja van Ipenburg, Dolly Sapra and Andy D. Pimentel. Exploring Cell-based Neural Architectures for Embedded Systems [presentation]
12:30 - 13:00 Armin Schuster, Christian Heidorn, Marcel Brand, Oliver Keszocze and Jürgen Teich. Design Space Exploration of Time, Energy, and Error Rate Trade-offs for CNNs using Accuracy-Programmable Instruction Set Processors [presentation]
13:00 - 13:30 Sven Nitzsche, Moritz Neher, Stefan von Dosky and Jürgen Becker. Ultra-low Power Machinery Fault Detection Using Deep Neural Networks (short paper) [presentation]
13:30 - 14:30 Break
14:30 - 15:20 Keynote: Jacques Pienaar (Google) - Composable domain specific compiler abstractions for ML and beyond [presentation]
Abstract: The growing diversity of domain-specific accelerators spans all scales from mobile devices to data centers. It constitutes a global challenge across the high-performance computing stack and is particularly visible in the field of Machine Learning (ML). Program representations and compilers need to support a variety of devices at multiple levels of abstraction, from scalar instructions to coarse-grain parallelism and large-scale distribution of computation graphs. This puts great pressure on the construction of both generic and target-specific optimizations, with domain-specific language support, interfaces with legacy and future infrastructure, and special attention to future-proofing, modularity, and code reuse. This motivates the construction of a new infrastructure, unifying graph representations, ML operators, and optimizations at and across different levels, targets, ML frameworks, training and inference, and quantization, tightly interacting with runtime systems. Compilers are expected to readily support new applications, to easily port to new hardware, to bridge many levels of abstraction from dynamic, managed languages to vector accelerators and software-managed memories, to expose high-level knobs for autotuning, to enable just-in-time operation, to provide diagnostics and propagate functional and performance debugging information across the entire stack, and to deliver performance close enough to hand-written assembly in most cases. This talk presents the MLIR compiler infrastructure, a novel approach to building reusable compiler infrastructure. MLIR aims to address software fragmentation, improve compilation for heterogeneous hardware, significantly reduce the cost of building domain-specific compilers, and aid in connecting existing compilers together.
MLIR facilitates the design and implementation of code generators, translators, and optimizers at different levels of abstraction across application domains, hardware targets, and execution environments. We will discuss our experience with enabling new hardware and application domains using MLIR, focusing on the importance and benefits of a composable system in which domain-specific abstractions from different domains can be combined, and on how this enables an extensible system that can form the basis of compilation for everything from ML to quantum computing to circuit design.
Bio: Jacques Pienaar is a Staff Software Engineer at Google and one of the leads of MLIR and the TensorFlow graph optimizations. Previously he was a co-developer of the Lanai compiler, a developer of the open-source GPGPU compiler gpucc, one of the first engineers on XLA and the XLA TPU backend, and the TF XLA bridge lead. His research interests span multiple layers of an ML software stack, from the programming model down to the runtime. Jacques holds a Ph.D. from Purdue University, where he was a member of the Integrated Systems Laboratory in the School of Electrical and Computer Engineering.
15:20 - 15:50 Lukas Sommer, Cristian Axenie and Andreas Koch. SPNC: Fast Sum-Product Network Inference [presentation]
15:50 - 16:20 Bernhard Klein, Lisa Kuhn, Johannes Weis, Arne Emmel, Yannik Stradmann, Johannes Schemmel and Holger Fröning. Towards Addressing Noise and Static Variations of Analog Computations using Efficient Retraining [presentation]
16:20 - 16:25 Workshop Co-Organizers: Closing Remarks
Synergies
This workshop is envisioned as a counterpart to the highly successful workshop series on embedded machine learning (WEML), held annually at Heidelberg University (for the latest edition, see https://www.deepchip.org/weml2020). The two workshop formats complement each other: WEML is highly discussion-oriented, invitation-only, and has no requirements on scientific novelty, results, or publications. In contrast, ITEM is envisioned as a high-quality academic outlet, including results demonstrating at least the potential of the presented work, with a healthy mix of peer-reviewed contributions and invited talks.
Organization
Co-Organizers
Gregor Schiele, University of Duisburg-Essen (gregor.schiele(at)uni-due.de)
Franz Pernkopf, Graz University of Technology, Austria (pernkopf(at)tugraz.at)
Michaela Blott, XILINX Research, Dublin, Ireland (michaela.blott(at)xilinx.com)
Holger Fröning, Heidelberg University, Germany (holger.froening(at)ziti.uni-heidelberg.de)
Program Co-Chair
Günther Schindler, Heidelberg University, Germany (guenther.schindler@ziti.uni-heidelberg.de)