“Detecting and Measuring Human Walking in Laser Scans” Katerina Zamani, Georgios Stavrinos, and Stasinos Konstantopoulos Submitted to the 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2017), Vancouver, BC, Canada, September 2017. Under review. Abstract
This paper presents work on detecting and tracking human movement in planar range data. Our method stacks multiple planar scans into a 3D frame where time serves as the third dimension. This representation simultaneously informs about the size and shape of the objects in the scene and their movement, so that no explicit motion models are necessary. The scene is then segmented into 3D spatio-temporal objects which are classified as "pairs of walking legs" using methods from machine vision. Our main contribution is a novel pre-processing step which aligns the spatio-temporal objects, so that information about the direction and speed of movement is factored out of the representation. The advantage is that the subsequent feature extraction and classification steps are only exposed to movement patterns without reference to direction and speed, which are not relevant to recognizing human walking. The method is empirically evaluated and found to significantly increase classification accuracy. Bibtex
@InProceedings{zamani-etal:2017,
author = {Katerina Zamani and Georgios Stavrinos and Stasinos Konstantopoulos},
title = {Detecting and Measuring Human Walking in Laser Scans},
booktitle = {Proceedings of the 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2017), Vancouver, BC, Canada, September 2017},
year = 2017,
note = {Under review}
}
|
“Monitoring activities of daily living using audio analysis and a raspberryPI: A use case on bathroom activity monitoring” Georgios Siantikos, Theodoros Giannakopoulos, and Stasinos Konstantopoulos Accepted for publication in Selected and Revised Papers of the ICT4AWE 2016 Proceedings, Communications in Computer and Information Science series. Springer, to appear. Abstract
A framework that utilizes audio information for recognition of activities of daily living (ADLs) in the context of a health monitoring environment is presented in this chapter. We propose integrating a Raspberry PI single-board PC that is used both as an audio acquisition and as an audio analysis unit. The Raspberry PI captures audio samples from the attached microphone device and executes a set of real-time feature extraction and classification procedures, in order to provide continuous and online audio event recognition to the end user. Furthermore, a practical workflow is presented that helps the technicians who set up the device to perform a fast, user-friendly and robust tuning and calibration procedure. As a result, the technician is capable of ``training'' the device without any need for prior knowledge of machine learning techniques. The proposed system has been evaluated against a particular scenario that is rather important in the context of any healthcare monitoring system for the elderly: in particular, we have focused on the "bathroom scenario", according to which a Raspberry PI device equipped with a single microphone is used to monitor bathroom activity on a 24/7 basis in a privacy-aware manner, since no audio data is stored or transmitted. The presented experimental results show that the proposed framework can be successfully used for audio event recognition tasks. Bibtex
@InCollection{siantikos-etal:2017,
author = {Georgios Siantikos and Theodoros Giannakopoulos and Stasinos Konstantopoulos},
title = {Monitoring Activities of Daily Living Using Audio Analysis and a {RaspberryPI}: A Use Case on Bathroom Activity Monitoring},
booktitle = {Selected and Revised Papers of the {ICT4AWE 2016} Proceedings},
series = {Communications in Computer and Information Science},
year = 2017,
publisher = {Springer},
note = {To appear}
}
|
“Daily activity recognition based on meta-classification of low-level audio events” Theodoros Giannakopoulos and Stasinos Konstantopoulos Proceedings of the 3rd International Conference on ICT for Ageing Well and e-Health (ICT4AWE 2017), Porto, 28-29 April 2017. Abstract
This paper presents a method for recognizing activities taking place in a home environment. Audio is recorded and analysed in real time, with all computation taking place on a low-cost Raspberry PI. In this way, data acquisition, low-level signal feature calculation, and low-level event extraction are performed without transferring any raw data out of the device. This first-level analysis produces a time-series of low-level audio events and their characteristics: the event type (e.g., "music") and acoustic features that are relevant to further processing, such as energy, which is indicative of how loud the event was. This output is used by a meta-classifier that extracts long-term features from multiple events and recognizes higher-level activities. The paper also presents experimental results on recognizing kitchen and living-room activities of daily living that are relevant to assistive living and remote health monitoring for the elderly. Evaluation on this dataset has shown that our approach discriminates between six activities with an accuracy of more than 90%, that our two-level classification approach outperforms one-level classification, and that including low-level acoustic features (such as energy) in the input of the meta-classifier significantly boosts performance. Bibtex
@InProceedings{giannakopoulos-konstantopoulos:2017,
author = {Theodoros Giannakopoulos and Stasinos Konstantopoulos},
title = {Daily Activity Recognition Based on Meta-Classification of Low-Level Audio Events},
booktitle = {Proceedings of the 3rd International Conference on ICT for Ageing Well and e-Health {(ICT4AWE 2017)}, Porto, 28-29 April 2017},
year = 2017
}
|
"A Peer-to-Peer Protocol and System Architecture for Privacy-Preserving Statistical Analysis" Katerina Zamani, Angelos Charalambidis, Stasinos Konstantopoulos, Maria Dagioglou, and Vangelis Karkaletsis Proceedings of the Workshop on Privacy Aware Machine Learning for Health Data Science (PAML 2016), Salzburg, Austria, 31 August 31 - 2 September 2016. [Abstract] [Bibtex][Full Text] Abstract
The insights gained by the large-scale analysis of health-related data can have an enormous impact in public health and medical research, but access to such personal and sensitive data poses serious privacy implications for the data provider and a heavy data security and administrative burden on the data consumer. In this paper we present an architecture that fills the gap between the statistical tools ubiquitously used in medical research on the one hand, and privacy-preserving data mining methods on the other. This architecture foresees the primitive instructions needed to re-implement the elementary statistical methods so that they only access data via a privacy-preserving protocol. The advantage is that more complex analysis and visualisation tools that are built upon these elementary methods can remain unaffected. Furthermore, we introduce RASSP, a secure summation protocol that implements the primitive instructions foreseen by the architecture. An open-source reference implementation of this architecture is provided for the R language. We use these results to argue that the tension between medical research and privacy requirements can be technically alleviated and we outline a research plan towards a system that covers further requirements on computation efficiency and on the trust that the medical researcher can place on the statistical results obtained by it. Bibtex
@InProceedings{zamani-etal:2016,
author = {Katerina Zamani and Angelos Charalambidis and Stasinos Konstantopoulos and Maria Dagioglou and Vangelis Karkaletsis},
title = {A Peer-to-Peer Protocol and System Architecture for Privacy-Preserving Statistical Analysis},
booktitle = {Proceedings of the Workshop on Privacy Aware Machine Learning for Health Data Science {(PAML 2016)}, Salzburg, Austria, 31 August - 2 September 2016},
year = 2016
}
|
“Enabling Indoor Object Localization through Bluetooth Beacons on the RADIO Robot Platform” Fynn Schwiegelshohn, Philipp Wehner, Florian Werner, Diana Göhringer, and Michael Hübner Proceedings of the International Conference on Embedded Computer Systems: Architectures, Modeling and Simulation (SAMOS XV), Samos, Greece, 18 - 21 July 2016. IEEE, January 2017. Abstract
Localization is one of the four pillars of the autonomous robotic control loop. In order to work in completely unknown indoor environments, the robot needs to map its surroundings. This is done via the simultaneous localization and mapping (SLAM) algorithm. However, the SLAM algorithm does not provide additional context for the generated map. If this information is required, it needs to be provided by the operator. With Bluetooth Low Energy (BLE) technology, position-dependent information can be annotated onto the generated map without operator input. BLE beacons need to be positioned at points of interest for the robot and then need to be localized. Because the BLE beacon broadcasts an ID, localization is based on the Received Signal Strength Indication (RSSI). This paper presents an approach to localize BLE beacons in the RADIO indoor environment. The robot has one BLE receiver, which must be used cleverly in order to triangulate the BLE beacons' positions. Bibtex
@InProceedings{schwiegelshohn-etal:2017,
author = {Fynn Schwiegelshohn and Philipp Wehner and Florian Werner and Diana G\"{o}hringer and Michael H\"{u}bner},
title = {Enabling Indoor Object Localization through Bluetooth Beacons on the RADIO Robot Platform},
booktitle = {Proceedings of the International Conference on Embedded Computer Systems: Architectures, Modeling and Simulation (SAMOS XV), Samos, Greece, 18 - 21 July 2016},
year = 2017,
month = jan
}
|
“Short-term Recognition of Human Activities using Convolutional Neural Networks” Michael Papakostas, Theodoros Giannakopoulos, Filia Makedon, and Vangelis Karkaletsis In Proceedings of the 12th International IEEE Conference on Signal-Image Technology & Internet Based Systems (SITIS 2016), Naples, Italy, 28 November - 1 December 2016 Abstract
This paper proposes a deep learning classification method for frame-wise recognition of human activities, using raw color (RGB) information. In particular, we present a Convolutional Neural Network (CNN) classification approach for recognising three basic motion activity classes that cover the vast majority of human activities in the context of a home monitoring environment, namely: sitting, walking and standing up. A real-world fully annotated dataset has been compiled in the context of an assisted living home environment. Through extensive experimentation we have highlighted the benefits of deep learning architectures over traditional shallow classifiers operating on hand-crafted features on the task of activity recognition. Our approach demonstrates the robustness and the quality of CNN classifiers, which lie in learning highly invariant features. Our ultimate goal is to tackle the challenging task of activity recognition in environments that are characterized by high levels of inherent noise. Bibtex
@InProceedings{papakostas-etal:2016,
author = {Michael Papakostas and Theodoros Giannakopoulos and Filia Makedon and Vangelis Karkaletsis},
title = {Short-term Recognition of Human Activities using Convolutional Neural Networks},
booktitle = {Proceedings of the 12th International IEEE Conference on Signal-Image Technology \& Internet Based Systems (SITIS 2016), Naples, Italy, 28 November - 1 December 2016},
year = 2016
}
|
“Design for a system of multimodal interconnected ADL recognition services” Theodoros Giannakopoulos, Stasinos Konstantopoulos, Georgios Siantikos, and Vangelis Karkaletsis Chapter 16 in Components and Services for IoT Platforms: Paving the Way for IoT Standards. Springer, 2016. Abstract
As smart interconnected sensing devices are becoming increasingly ubiquitous, more applications are becoming possible by re-arranging and re-connecting sensing and sensor signal analysis in different pipelines. Naturally, this is best facilitated by extremely thin services that expose minimal functionality and are extremely flexible regarding the ways in which they can be re-arranged. On the other hand, this ability to re-use might be purely theoretical, since there are established patterns in the ways processing pipelines are assembled. By adding privacy and technical requirements, the re-usability of some functionalities is further restricted, making it even harder to justify the communication and security overheads of maintaining them as independent services. This creates a design space that each application must explore using its own requirements. In this article we focus on detecting Activities of Daily Life (ADL) for medical applications and especially independent living applications, but our setting also offers itself to sharing devices with home automation and home security applications. By studying the methods and pipelines that dominate the audio and visual analysis literature, we observe that several multi-component sub-systems can be encapsulated by a single service without substantial loss of re-usability. We then use this observation to propose a design for our ADL recognition application that satisfies our medical and privacy requirements, makes efficient use of processing and transmission resources, and is also consistent with home automation and home security extensions. Bibtex
@InCollection{giannakopoulos-etal:2016,
author = {Theodoros Giannakopoulos and Stasinos Konstantopoulos and Georgios Siantikos and Vangelis Karkaletsis},
title = {Design for a System of Multimodal Interconnected {ADL} Recognition Services},
booktitle = {Components and Services for {IoT} Platforms: Paving the Way for {IoT} Standards},
chapter = 16,
year = 2016,
month = sep,
publisher = {Springer}
}
|
“A ROS Framework for Audio-Based Activity Recognition” Theodoros Giannakopoulos and Georgios Siantikos Proceedings of the 9th ACM International Conference on Pervasive Technologies Related to Assistive Environments (PETRA 2016), Corfu, 29 June - 1 July 2016 Abstract
Research on robot perception mostly focuses on visual information analytics. Audio-based perception is mostly based on speech-related information. However, non-verbal information in the audio channel can be equally important in the perception procedure, or at least play a complementary role. This paper presents a framework for audio signal analysis that utilizes the ROS architectural principles. Details on the design and implementation issues of this workflow are described, while classification results are also presented in the context of two use-cases motivated by the task of medical monitoring. The proposed audio analysis framework is provided as an open-source library on GitHub. Bibtex
@InProceedings{giannakopoulos-siantikos:2016,
author = {Theodoros Giannakopoulos and Georgios Siantikos},
title = {A {ROS} Framework for Audio-Based Activity Recognition},
booktitle = {Proceedings of the 9th ACM International Conference on Pervasive Technologies Related to Assistive Environments (PETRA 2016), Corfu, 29 June - 1 July 2016},
year = 2016
}
|
“Robot Navigation based on an Efficient Combination of an Extended A* algorithm, Bird’s Eye View and Image Stitching” Jens Rettkowski, David Gburek, and Diana Göhringer In Proceedings of the 9th IEEE/ECSI International Conference on Design & Architectures for Signal & Image Processing (DASIP 2015), Cracow, Poland, 23–25 September 2015. Abstract
Robotics combines many different domains with sophisticated challenges, such as computer vision, motion control and search algorithms. Search algorithms can be applied to calculate movements. The A* algorithm is a well-known and proven search algorithm for finding a path within a graph. This paper presents an extended A* algorithm that is optimized for robot navigation, using a bird’s eye view as a map that is dynamically generated by image stitching. The scenario is a robot that moves to a target in an environment containing obstacles. The robot is controlled by a Xilinx Zynq platform that contains an ARM processor and an FPGA. In order to exploit the flexibility of such an architecture, the FPGA is used to execute the most compute-intensive task of the extended A* algorithm. This task is responsible for sorting the accessible nodes in the graph. Several environments with different complexity levels are used to evaluate the extended A* algorithm. The environment is captured by a Kinect sensor located directly on the robot. In order to dewarp the robot’s view, the frames are transformed to a bird’s eye view. In addition, a wider viewing range is achieved by image stitching. The evaluation of the extended A* algorithm shows a significant improvement in terms of memory utilization. Accordingly, this algorithm is especially practicable for embedded systems, since they often have only limited memory resources. Moreover, the overall execution time for several use cases is reduced, with speed-ups of up to 2.88x. Bibtex
@InProceedings{rettkowski-etal:2015,
author = {Jens Rettkowski and David Gburek and Diana G\"{o}hringer},
title = {Robot Navigation based on an Efficient Combination of an Extended {A*} Algorithm, Bird's Eye View and Image Stitching},
booktitle = {Proceedings of the 9th IEEE/ECSI International Conference on Design and Architectures for Signal and Image Processing (DASIP 2015), Cracow, Poland, 23 - 25 September 2015},
year = 2015,
month = sep
}
|
“Internet of Things Simulation using OMNeT++ and Hardware in the Loop” Philipp Wehner and Diana Göhringer Chapter 4 in Components and Services for IoT Platforms: Paving the Way for IoT Standards. Springer, September 2016 Abstract
NA
Bibtex
@InCollection{wehner-gohringer:2016,
author = {Philipp Wehner and Diana G\"{o}hringer},
title = {{I}nternet of {T}hings Simulation using {OMNeT++} and Hardware in the Loop},
booktitle = {Components and Services for {IoT} Platforms: Paving the Way for {IoT} Standards},
chapter = 4,
year = 2016,
month = sep,
publisher = {Springer}
}
|
“FPGA Based Traffic Sign Detection for Automotive Camera Systems” Fynn Schwiegelshohn, Lars Gierke, and Michael Hübner In Proceedings of the 10th IEEE International Symposium on Reconfigurable Communication-centric Systems-on-Chip (ReCoSoC 2015), Bremen, Germany, 29 June – 1 July 2015. Abstract
Advanced driver assistance systems (ADAS) have become very prominent in today's automobiles. The technological advancement of already familiar assistance systems enables the car to autonomously assess the current situation and react accordingly. In terms of data processing, there is no difference between actually acting, i.e. accelerating, braking or steering, and just issuing warnings to alert the driver of a dangerous situation. In this paper, we introduce a camera-based image processing system for traffic sign detection at Full HD resolution. This system is able to detect speed limit traffic signs, but additional traffic signs can be implemented using the same model. The hardware components consist of a MicroBlaze softcore from Xilinx and an extended IP core for HDMI in and out signals. The system is implemented on a Spartan-6 FPGA. For image acquisition, an off-the-shelf car camera is used. The developed system is able to reliably detect traffic signs at short distances on static images as well as on image streams. Bibtex
@InProceedings{schwiegelshohn-etal:2015,
title = {{FPGA} Based Traffic Sign Detection for Automotive Camera Systems},
author = {Fynn Schwiegelshohn and Lars Gierke and Michael H\"{u}bner},
booktitle = {Proceedings of the 10th IEEE International Symposium on Reconfigurable Communication-centric Systems-on-Chip (ReCoSoC 2015), Bremen, Germany, 29 June - 1 July 2015},
year = 2015,
month = jul
}
|
“Robots in Assisted Living Environments as an Unobtrusive, Efficient, Reliable and Modular Solution for Independent Ageing: The RADIO Perspective” Christos P. Antonopoulos, Georgios Keramidas, Nikolaos S. Voros, Michael Hübner, Diana Göhringer, Maria Dagioglou, Theodoros Giannakopoulos, Stasinos Konstantopoulos, and Vangelis Karkaletsis In Proceedings of the 11th International Symposium on Applied Reconfigurable Computing (ARC 2015). Published as LNCS 9040, Springer, 2015. Abstract
Demographic and epidemiologic transitions in Europe have brought a new health care paradigm where life expectancy is increasing as well as the need for long-term care. To meet the resulting challenge, European healthcare systems need to take full advantage of new opportunities offered by technical advancements in ICT. The RADIO project explores a novel approach to user acceptance and unobtrusiveness: an integrated smart home/assistant robot system where health monitoring equipment is an obvious and accepted part of the user’s daily life. By using the smart home/assistant robot as sensing equipment for health monitoring, we mask the functionality of the sensors rather than the sensors themselves. In this manner, sensors do not need to be discrete and distant or masked and cumbersome to install; they do however need to be perceived as a natural component of the smart home/assistant robot functionalities. Bibtex
@InProceedings{antonopoulos-etal:2015,
title = {Robots in Assisted Living Environments as an Unobtrusive, Efficient, Reliable and Modular Solution for Independent Ageing: The {RADIO} Perspective},
author = {Christos P. Antonopoulos and Georgios Keramidas and Nikolaos S. Voros and Michael H\"{u}bner and Diana G\"{o}hringer and Maria Dagioglou and Theodoros Giannakopoulos and Stasinos Konstantopoulos and Vangelis Karkaletsis},
booktitle = {Proceedings of the 11th International Symposium on Applied Reconfigurable Computing (ARC 2015)},
series = {Lecture Notes in Computer Science},
volume = 9040,
year = 2015,
month = apr,
publisher = {Springer}
}
|