2022
Giannopoulos, Michalis; Tsagkatakis, Grigorios; Tsakalides, Panagiotis: 4D U-Nets for Multi-Temporal Remote Sensing Data Classification (Journal Article). Remote Sensing, 14 (3), pp. 634, 2022. Tags: Higher-Order Convolutional Neural Networks, Multi-Temporal Data Classification, Remote Sensing, U-Nets.
Multispectral sensors constitute a core Earth-observation imaging technology, generating massive high-dimensional observations acquired across multiple time instances. The collected multi-temporal remotely sensed data contain rich information for Earth-monitoring applications, from flood detection to crop classification. To easily classify such naturally multidimensional data, conventional low-order deep learning models unavoidably toss away valuable information residing across the available dimensions. In this work, we extend state-of-the-art convolutional network models based on the U-Net architecture to their high-dimensional analogs, which can naturally capture multidimensional dependencies and correlations. We introduce several model architectures, of both low and high order, and we quantify the achieved classification performance vis-à-vis the latest state-of-the-art methods. The experimental analysis on observations from Landsat-8 reveals that approaches based on low-order U-Net models exhibit poor classification performance and are outperformed by our proposed high-dimensional U-Net scheme.
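For readers unfamiliar with higher-order convolutions, the sketch below shows one way a 4D convolution over multi-temporal multispectral volumes could be emulated in PyTorch, by sliding a shared set of 3D convolutions along the extra (temporal) axis and summing the responses. This is an illustrative assumption about the basic building block, not the authors' implementation; the Conv4d name, shapes, and border handling are hypothetical.

```python
# Minimal sketch (assumption): emulate a 4D convolution by applying one Conv3d
# per offset along the fourth (temporal) axis and summing the shifted responses.
# Not the authors' code; borders are handled by clamping (replicate padding).
import torch
import torch.nn as nn

class Conv4d(nn.Module):  # hypothetical helper name
    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        self.k = k
        self.convs = nn.ModuleList(
            [nn.Conv3d(in_ch, out_ch, kernel_size=k, padding=k // 2) for _ in range(k)]
        )

    def forward(self, x):
        # x: (batch, channels, T, D, H, W); each Conv3d acts on (D, H, W)
        b, c, T, D, H, W = x.shape
        out = None
        for offset, conv in enumerate(self.convs):
            shift = offset - self.k // 2
            ys = []
            for t in range(T):
                ts = min(max(t + shift, 0), T - 1)    # clamp at the temporal borders
                ys.append(conv(x[:, :, ts]))          # (b, out_ch, D, H, W)
            y = torch.stack(ys, dim=2)                # (b, out_ch, T, D, H, W)
            out = y if out is None else out + y
        return out

x = torch.randn(1, 4, 5, 6, 32, 32)    # toy multi-temporal multispectral cube
print(Conv4d(4, 8)(x).shape)           # torch.Size([1, 8, 5, 6, 32, 32])
```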
2021
Doutsi, Effrosyni; Antonini, Marc; Tsakalides, Panagiotis: Neuronal Communication Process Opens New Directions in Image and Video Compression Systems (In Proceedings). Proc. European Research Consortium for Informatics and Mathematics, Special Theme: Brain-inspired Computing (ERCIM News 125), pp. 27–28, 2021. Tags: Compression.
The 3D ultra-high-resolution world that is captured by the visual system is sensed, processed and transferred through a dense network of tiny cells, called neurons. An understanding of neuronal communication has the potential to open new horizons for the development of ground-breaking image and video compression systems. A recently proposed neuro-inspired compression system promises to change the framework of the current state-of-the-art compression algorithms.
Kalatzantonakis-Jullien, George-Marios; Stefanakis, Nikolaos; Giannakakis, Giorgos: Investigation and ordinal modelling of vocal features for stress detection in speech (In Proceedings). Proc. 9th International Conference on Affective Computing and Intelligent Interaction (ACII), pp. 1–8, IEEE, 2021. Tags: Affective Computing, Biosignals, Emotion Recognition, Feature Selection, Hyperparameter Optimization, Mel Cepstral Coefficients, mRMR, Pairwise Transformation, Speech, Stress, Voice.
This paper investigates a robust and effective automatic stress detection model based on human vocal features. Our experimental dataset contains the voices of 58 Greek-speaking participants (24 male, 34 female, 26.9±4.8 years old), recorded in both neutral and stressed conditions. We extracted a total of 76 speech-derived features after an extensive study of the relevant literature. We investigated and selected the most robust features using automatic feature selection methods, comparing multiple feature ranking methods (such as RFE, mRMR, and stepwise fit) to assess their pattern across gender and experimental phase factors. Classification was then performed for the entire dataset and for each experimental task, for both genders combined and separately. Performance was evaluated using 10-fold cross-validation over the speakers. Our analysis achieved a best classification accuracy of 84.8% using a linear SVM for the social exposure phase and 74.5% for the mental tasks phase using a Gaussian SVM classifier. The ordinal modelling significantly improved our results, yielding a best per-subject 10-fold cross-validation classification accuracy of 95.0% for social exposure and 85.9% for mental tasks, both using a Gaussian SVM. Our analysis identified specific vocal features that are robust and relevant to stress, along with parameters for constructing the stress model. However, we also observed the susceptibility of speech to bias and masking, and thus the need for universal speech markers for stress detection.
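As an illustration of the evaluation protocol described above (SVM classifiers assessed with 10-fold cross-validation over speakers), the following sketch uses scikit-learn on synthetic stand-in features; it is not the study's pipeline, and the feature matrix, labels, and speaker grouping are placeholders.

```python
# Minimal sketch (assumption): SVM stress classification evaluated with
# 10-fold cross-validation grouped by speaker, on synthetic stand-in features.
import numpy as np
from sklearn.model_selection import GroupKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_speakers, clips_per_speaker, n_features = 58, 10, 76
X = rng.normal(size=(n_speakers * clips_per_speaker, n_features))
y = rng.integers(0, 2, size=len(X))                  # 0 = neutral, 1 = stressed
groups = np.repeat(np.arange(n_speakers), clips_per_speaker)

for kernel in ("linear", "rbf"):                     # "rbf" plays the Gaussian SVM role
    model = make_pipeline(StandardScaler(), SVC(kernel=kernel, C=1.0))
    scores = cross_val_score(model, X, y, groups=groups, cv=GroupKFold(n_splits=10))
    print(f"{kernel}: {scores.mean():.3f} +/- {scores.std():.3f}")
```

Grouping the folds by speaker keeps all clips of a participant in the same fold, which is what a per-subject cross-validation requires.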
Zervou, Michaela Areti; Doutsi, Effrosyni; Tsakalides, Panagiotis: Visibility Graph Network of Multidimensional Time Series Data for Protein Structure Classification (In Proceedings). Proc. European Signal Processing Conference (EUSIPCO), pp. 1216–1220, 2021, doi: 10.23919/EUSIPCO54536.2021.9616113. Tags: Horizontal Visibility Graphs, Multidimensional Time Series, Nonlinear Time Series Analysis, Secondary Structure Classification, Visibility Graphs.
In the last decades, many studies have explored the potential of utilizing complex-network approaches to characterize time series generated from dynamical systems. Along these lines, Visibility Graph (VG) and Horizontal Visibility Graph (HVG) networks have contributed to an important yet difficult problem in bioinformatics: the classification of the secondary structure of low-homology proteins. In particular, each protein is represented as a two-dimensional time series that is later transformed, using either VG or HVG, into two independent graphs. However, this is an inefficient way of processing multidimensional time series, as it fails to capture the correlation between the two signals while also increasing the time and memory complexity. To address this issue, this work proposes four novel VG- and HVG-based frameworks that deal directly with the multidimensional time series. Each method generates a unique graph following a different visibility rule concerning only the relation between pairs of time series intensities of the multidimensional time series. Experimental evaluation on real protein sequences demonstrates the superiority of our best scheme, with respect to both accuracy and computational time, when compared against the state-of-the-art.
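For context, the sketch below builds a standard horizontal visibility graph for a single one-dimensional series, the basic construction that the paper generalizes to multidimensional data; the multidimensional visibility rules themselves are not reproduced here.

```python
# Minimal sketch (assumption): standard horizontal visibility graph (HVG) for a
# 1D series; two samples are linked if every sample strictly between them is
# lower than both. The paper's multidimensional extensions are not reproduced.
import numpy as np

def horizontal_visibility_graph(x):
    x = np.asarray(x, dtype=float)
    n = len(x)
    adj = np.zeros((n, n), dtype=int)
    for i in range(n - 1):
        for j in range(i + 1, n):
            if j == i + 1 or x[i + 1:j].max() < min(x[i], x[j]):
                adj[i, j] = adj[j, i] = 1
            if x[j] >= x[i]:          # no later sample can see past a higher bar
                break
    return adj

series = [3.0, 1.0, 2.0, 4.0, 1.5, 3.5]
A = horizontal_visibility_graph(series)
print(A.sum(axis=1))                  # node degrees, a common HVG-derived feature
```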
Karmiris, Ilias; Astaras, Christos; Ioannou, Konstantinos; Vasiliadis, Ioakim; Youlatos, Dionisios; Stefanakis, Nikolaos; Chatziefthimiou, Aspassia D.; Kominos, Theodoros; Galanaki, Antonia: Estimating Livestock Grazing Activity in Remote Areas Using Passive Acoustic Monitoring (Journal Article). Information, 12 (8), pp. 290, 2021. Tags: Acoustic Sensors, Detection Algorithm, Grazing Season, Grazing Timing, Passive Acoustic Monitoring.
Grazing has long been recognized as an effective means of modifying natural habitats and, by extension, as a wildlife and protected-area management tool, in addition to the obvious economic value it has for pastoral communities. A holistic approach to grazing management requires the estimation of grazing timing, frequency, and season length, as well as the overall grazing intensity. However, traditional grazing monitoring methods require frequent field visits, which can be labor-intensive and logistically demanding to implement, especially in remote areas. Questionnaire surveys of farmers are also widely used to collect information on grazing parameters; however, there can be concerns regarding the reliability of the data collected. To improve the reliability of the grazing data collected and to decrease the required labor, we tested for the first time whether a novel combination of autonomous recording units and semi-automated detection algorithms for livestock vocalizations could provide insight into grazing activity at selected areas of the Greek Rhodope mountain range. Our results confirm the potential of passive acoustic monitoring (PAM) techniques as a cost-efficient method for acquiring high-resolution spatiotemporal data on grazing patterns. Additionally, we evaluate the three algorithms that we developed for detecting cattle, sheep/goat, and livestock bell sounds, and make them available to the broader scientific community. We conclude with suggestions on ways that acoustic monitoring can further contribute to managing legal and illegal grazing, and offer a list of priorities for related future research.
Tsagkatakis, Grigorios; Moghaddam, Mahta; Tsakalides, Panagiotis: Deep multi-modal satellite and in-situ observation fusion for Soil Moisture retrieval (In Proceedings). Proc. IEEE International Geoscience and Remote Sensing Symposium (IGARSS 2021), pp. 6339–6342, IEEE, 2021. Tags: Deep Learning, Remote Sensing, Soil Moisture.
This work focuses on the problem of surface soil moisture (SM) estimation from multi-modal remote sensing observations. We consider the scenario where both passive radiometer observations from NASA's SMAP satellite and active radar measurements from ESA's Sentinel-1 are available. We formulate the problem as multi-source observation fusion and develop a deep learning model for SM estimation. To train and validate the performance of the proposed scheme, we consider observations from in-situ SM sensor networks over the continental USA. Experimental results demonstrate that the proposed model achieves high-quality SM estimation, surpassing the performance of available products.
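The sketch below illustrates the general idea of multi-source fusion for soil moisture regression: two small encoders, one per modality, whose features are concatenated before a regression head. The architecture, layer sizes, and input dimensions are assumptions for illustration only, not the model described in the paper.

```python
# Minimal sketch (assumption): fuse radiometer- and radar-derived inputs with two
# small encoders whose features are concatenated for soil-moisture regression.
import torch
import torch.nn as nn

class FusionSMNet(nn.Module):          # hypothetical name and layer sizes
    def __init__(self, n_radiometer=4, n_radar=6, hidden=32):
        super().__init__()
        self.enc_p = nn.Sequential(nn.Linear(n_radiometer, hidden), nn.ReLU())
        self.enc_a = nn.Sequential(nn.Linear(n_radar, hidden), nn.ReLU())
        self.head = nn.Sequential(nn.Linear(2 * hidden, hidden), nn.ReLU(),
                                  nn.Linear(hidden, 1), nn.Sigmoid())  # SM in [0, 1]

    def forward(self, x_passive, x_active):
        z = torch.cat([self.enc_p(x_passive), self.enc_a(x_active)], dim=1)
        return self.head(z).squeeze(1)

model = FusionSMNet()
sm = model(torch.randn(8, 4), torch.randn(8, 6))    # stand-in modality features
loss = nn.functional.mse_loss(sm, torch.rand(8))    # regress against in-situ SM
print(sm.shape, loss.item())
```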
Zervou, Michaela Areti; Doutsi, Effrosyni; Pavlidis, Pavlos; Tsakalides, Panagiotis: Structural Classification of Proteins Based on the Computationally Efficient Recurrence Quantification Analysis and Horizontal Visibility Graphs (Journal Article). Bioinformatics, 37 (13), pp. 1796–1804, 2021, ISSN: 1367-4803, doi: 10.1093/bioinformatics/btab407. Tags: Horizontal Visibility Graphs, Recurrence Quantification Analysis.
Protein structural class prediction is one of the most significant problems in bioinformatics, as it has a prominent role in understanding the function and evolution of proteins. Designing a computationally efficient yet accurate prediction method remains a pressing issue, especially for sequences for which a sufficient amount of homologous information cannot be obtained from existing protein sequence databases. Several studies demonstrate the potential of utilizing chaos game representation along with time series analysis tools such as recurrence quantification analysis, complex networks, horizontal visibility graphs (HVG) and others. However, the majority of existing works involve a large number of features and require an exhaustive, time-consuming search for the optimal parameters. To address these problems, this work adopts the generalized multidimensional recurrence quantification analysis (GmdRQA) as an efficient tool that enables the concurrent processing of a multidimensional time series and reduces the number of features. In addition, two data-driven algorithms, namely average mutual information and false nearest neighbors, are utilized to determine the optimal GmdRQA parameters in a fast yet precise manner.
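As background for the recurrence-based features discussed above, the following sketch computes a basic recurrence matrix, recurrence rate, and determinism for a delay-embedded one-dimensional series; the paper's generalized multidimensional RQA and its data-driven parameter selection (average mutual information, false nearest neighbors) are not reproduced.

```python
# Minimal sketch (assumption): basic recurrence quantification analysis for a
# delay-embedded 1D series -- recurrence rate and determinism only; not the
# paper's generalized multidimensional (GmdRQA) variant.
import numpy as np

def embed(x, dim=3, tau=2):
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau:i * tau + n] for i in range(dim)])

def rqa(x, dim=3, tau=2, eps=0.5, l_min=2):
    X = embed(np.asarray(x, float), dim, tau)
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    R = (D <= eps).astype(int)                      # recurrence matrix
    rr = R.mean()                                   # recurrence rate
    # determinism: fraction of recurrent points on diagonals of length >= l_min
    n, det_pts = len(R), 0
    for k in range(-(n - 1), n):
        run = 0
        for v in np.append(np.diagonal(R, offset=k), 0):   # sentinel closes last run
            if v:
                run += 1
            else:
                if run >= l_min:
                    det_pts += run
                run = 0
    return rr, det_pts / max(R.sum(), 1)

x = np.sin(np.linspace(0, 8 * np.pi, 200)) + 0.1 * np.random.default_rng(0).normal(size=200)
print(rqa(x))   # (recurrence rate, determinism)
```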
Doutsi, Effrosyni; Fillatre, Lionel; Antonini, Marc; Tsakalides, Panagiotis: Dynamic Image Quantization Using Leaky Integrate-and-Fire Neurons (Journal Article). IEEE Transactions on Image Processing, 30, pp. 4305–4315, 2021, doi: 10.1109/TIP.2021.3070193. Tags: Leaky Integrate-and-Fire (LIF), Non-Uniform Quantization, Rate Coding, Spikes, Time Coding, Uniform Quantization.
This paper introduces a novel coding/decoding mechanism that mimics one of the most important properties of the human visual system: its ability to enhance the visual perception quality in time. In other words, the brain takes advantage of time to process and clarify the details of the visual scene. This characteristic is yet to be considered by state-of-the-art quantization mechanisms, which process the visual information regardless of the duration of time it appears in the visual scene. We propose a compression architecture built of neuroscience models; it first uses the leaky integrate-and-fire (LIF) model to transform the visual stimulus into a spike train, and then it combines two different kinds of spike interpretation mechanisms (SIM), the time-SIM and the rate-SIM, for the encoding of the spike train. The time-SIM allows a high-quality interpretation of the neural code, and the rate-SIM allows a simple decoding mechanism by counting the spikes. For that reason, the proposed mechanism is called the Dual-SIM quantizer (Dual-SIMQ). We show that (i) the time-dependency of Dual-SIMQ automatically controls the reconstruction accuracy of the visual stimulus, (ii) a numerical comparison of Dual-SIMQ to the state-of-the-art shows that the performance of the proposed algorithm is similar to the uniform quantization scheme while it approximates the optimal behavior of the non-uniform quantization scheme, and (iii) from the perceptual point of view, the reconstruction quality using Dual-SIMQ is higher than the state-of-the-art.
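To make the LIF-based encoding idea concrete, the sketch below converts a scalar intensity into a spike train with a leaky integrate-and-fire neuron and decodes it by counting spikes (rate coding). All constants and the rate-to-intensity map are illustrative assumptions, not the Dual-SIMQ design.

```python
# Minimal sketch (assumption): encode an intensity as a LIF spike train and
# decode it by spike counting (rate coding). Constants are illustrative, not
# the paper's Dual-SIMQ parameters.
import numpy as np

def lif_encode(intensity, duration=0.2, dt=1e-4, tau=0.02, r=1.0, v_thresh=0.05):
    """Leaky integrate-and-fire: dV/dt = (-V + R*I) / tau, hard reset on threshold."""
    v, spikes = 0.0, []
    for step in range(int(round(duration / dt))):
        v += dt * (-v + r * intensity) / tau
        if v >= v_thresh:
            spikes.append(step * dt)
            v = 0.0
    return spikes

def rate_decode(spikes, duration=0.2, tau=0.02, r=1.0, v_thresh=0.05):
    # more spikes per unit time -> higher reconstructed intensity (rate coding)
    rate = len(spikes) / duration
    return rate * tau * v_thresh / r                # crude rate-to-intensity map

for intensity in (0.1, 0.5, 0.9):
    s = lif_encode(intensity)
    print(intensity, len(s), round(rate_decode(s), 3))
```

Longer observation windows yield more spikes and hence a finer rate estimate, which is the intuition behind letting reconstruction quality improve with time.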
Aidini, Anastasia; Tsagkatakis, Grigorios; Tsakalides, Panagiotis: Tensor Decomposition Learning for Compression of Multidimensional Signals (Journal Article). IEEE Journal of Selected Topics in Signal Processing, 15 (3), pp. 476–490, 2021. Tags: Alternating Direction Method of Multipliers, Compression, Learning, Multidimensional Signals, Tucker Decomposition.
Multidimensional signals like multispectral images and color videos are becoming ubiquitous in modern times, constantly introducing challenges in data storage and transfer, and therefore demanding efficient compression strategies. Such high-dimensional observations can be naturally encoded as tensors, exhibiting significant redundancies across dimensions. This property is exploited by tensor decomposition techniques that are being increasingly used for compactly encoding large multidimensional arrays. While efficient, these methods are incapable of utilizing prior information present in training data. In this paper, a novel tensor decomposition learning method is proposed for the compression of high-dimensional signals. Specifically, instead of extracting independent bases for each example, our method learns an appropriate basis for each dimension from a set of training samples by solving a constrained optimization problem. As such, each sample is quantized and encoded into a reduced-size core tensor of coefficients that corresponds to the multilinear combination of the learned basis matrices. Furthermore, the proposed method employs a symbol encoding dictionary for binarizing the decomposition outputs. Experimental results on synthetic data and real satellite multispectral image sequences demonstrate the efficacy of our method, surpassing competing compression methods while offering the flexibility to handle arbitrary high-dimensional data structures.
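For comparison with the per-sample baseline mentioned above, the sketch below compresses a single 3-way array with a classical truncated HOSVD (Tucker) decomposition in NumPy; the paper's learned shared bases, quantization, and symbol-encoding dictionary are not reproduced.

```python
# Minimal sketch (assumption): classical truncated HOSVD (Tucker) compression of
# a single 3-way array -- the per-sample baseline, not the paper's learned
# shared bases or symbol-encoding step.
import numpy as np

def unfold(T, mode):
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def hosvd_compress(T, ranks):
    # leading left singular vectors of each mode unfolding
    U = [np.linalg.svd(unfold(T, m), full_matrices=False)[0][:, :r]
         for m, r in enumerate(ranks)]
    core = T
    for m, Um in enumerate(U):                      # core = T x_1 U1^T x_2 U2^T x_3 U3^T
        core = np.moveaxis(np.tensordot(Um.T, core, axes=(1, m)), 0, m)
    return core, U

def hosvd_reconstruct(core, U):
    T = core
    for m, Um in enumerate(U):
        T = np.moveaxis(np.tensordot(Um, T, axes=(1, m)), 0, m)
    return T

rng = np.random.default_rng(0)
X = rng.normal(size=(32, 32, 8))                    # e.g., a small multispectral patch
core, U = hosvd_compress(X, ranks=(8, 8, 4))
Xhat = hosvd_reconstruct(core, U)
stored = core.size + sum(u.size for u in U)
print("compression ratio:", X.size / stored,
      "rel. error:", np.linalg.norm(X - Xhat) / np.linalg.norm(X))
```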
Adami, Ilia; Foukarakis, Michalis; Ntoa, Stavroula; Partarakis, Nikolaos; Stefanakis, Nikolaos; Koutras, George; Kutsuras, Themistoklis; Ioannidi, Danai; Zabulis, Xenophon; Stephanidis, Constantine: Monitoring Health Parameters of Elders to Support Independent Living and Improve Their Quality of Life (Journal Article). Sensors, 21 (2), pp. 517, 2021. Tags: Cough Detection, eHealth, Elderly, Human-Centred Design, Independent Living, mHealth, Monitoring of Vital Signs, Quality of Life, Sensors, Virtual Assistant.
Improving the well-being and quality of life of the elderly population is closely related to assisting them to effectively manage age-related conditions such as chronic illnesses and anxiety, and to maintain their independence and self-sufficiency as much as possible. This paper presents the design, architecture and implementation structure of an adaptive system for monitoring the health and well-being of the elderly. The system was designed following best practices of the Human-Centred Design approach, involving representative end-users from the early stages.
2020
Geiller, Tristan; Vancura, Bert; Terada, Satoshi; Troullinou, Eirini; Chavlis, Spyridon; Tsagkatakis, Grigorios; Tsakalides, Panagiotis; Ocsai, Katalin; Poirazi, Panayiota; Rozsa, Balazs J.; Losonczy, Attila: Large-Scale 3D Two-Photon Imaging of Molecularly Identified CA1 Interneuron Dynamics in Behaving Mice (Journal Article). Neuron, 108 (5), pp. 968–983, 2020. Tags: Axo-Axonic, Calcium, CCK, Context, Hippocampus, Imaging, Interneuron, Remapping, Reward, Sharp-Wave Ripple.
Cortical computations are critically reliant on their local circuit, GABAergic cells. In the hippocampus, a large body of work has identified an unprecedented diversity of GABAergic interneurons with pronounced anatomical, molecular, and physiological differences. Yet little is known about the functional properties and activity dynamics of the major hippocampal interneuron classes in behaving animals. Here we use fast, targeted, three-dimensional (3D) two-photon calcium imaging coupled with immunohistochemistry-based molecular identification to retrospectively map in vivo activity onto multiple classes of interneurons in the mouse hippocampal area CA1 during head-fixed exploration and goal-directed learning. We find examples of preferential subtype recruitment with quantitative differences in response properties and feature selectivity during key behavioral tasks and states. These results provide new insights into the collective organization of local inhibitory circuits supporting navigational and mnemonic functions of the hippocampus.
Vernardos, Georgios; Tsagkatakis, Grigorios; Pantazis, Yannis: Quantifying the structure of strong gravitational lens potentials with uncertainty-aware deep neural networks (Journal Article). Monthly Notices of the Royal Astronomical Society, 499 (4), pp. 5641–5652, 2020. Tags: Astrophysics, Deep Learning.
Gravitational lensing is a powerful tool for constraining substructure in the mass distribution of galaxies, be it from the presence of dark matter sub-haloes or due to physical mechanisms affecting the baryons throughout galaxy evolution. Such substructure is hard to model and is either ignored by traditional smooth-modelling approaches or treated as well-localized massive perturbers. In this work, we propose a deep learning approach to quantify the statistical properties of such perturbations directly from images, where only the extended lensed source features within a mask are considered, without the need for any lens modelling. Our training data consist of mock lensed images assuming perturbing Gaussian Random Fields permeating the smooth overall lens potential and, for the first time, using images of real galaxies as the lensed source. We employ a novel deep neural network that can handle arbitrary uncertainty intervals associated with the training data set labels as input, provides probability distributions as output, and adopts a composite loss function. The method succeeds not only in accurately estimating the actual parameter values, but also in reducing the predicted confidence intervals by 10 per cent in an unsupervised manner, i.e. without having access to the actual ground truth values. Our results are invariant to the inherent degeneracy between mass perturbations in the lens and complex brightness profiles for the source. Hence, we can quantitatively and robustly quantify the smoothness of the mass density of thousands of lenses, including confidence intervals, and provide a consistent ranking for follow-up science.
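The sketch below illustrates the generic idea of a network that outputs a probability distribution per target, here a Gaussian mean and variance trained with a negative log-likelihood loss; the architecture and loss are assumptions for illustration and do not reproduce the paper's composite loss or its handling of label uncertainty intervals.

```python
# Minimal sketch (assumption): a regression network that predicts a mean and a
# variance per target and is trained with a Gaussian negative log-likelihood,
# illustrating "probability distributions as output"; not the paper's model.
import torch
import torch.nn as nn

class UncertainRegressor(nn.Module):   # hypothetical architecture
    def __init__(self, n_in=64, n_out=2):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(n_in, 128), nn.ReLU())
        self.mean = nn.Linear(128, n_out)
        self.log_var = nn.Linear(128, n_out)

    def forward(self, x):
        h = self.body(x)
        return self.mean(h), self.log_var(h)

def gaussian_nll(mu, log_var, y):
    return (0.5 * (log_var + (y - mu) ** 2 / log_var.exp())).mean()

model = UncertainRegressor()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x, y = torch.randn(16, 64), torch.randn(16, 2)      # stand-ins for image features / labels
mu, log_var = model(x)
loss = gaussian_nll(mu, log_var, y)
loss.backward()
opt.step()
print(loss.item(), log_var.exp().mean().item())     # predicted variance = confidence width
```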
Pentari, Anastasia; Tzagkarakis, George; Marias, Kostas; Tsakalides, Panagiotis: A Study on the Effect of Distinct Adjacency Matrices for Graph Signal Denoising (In Proceedings). Proc. IEEE 20th International Conference on Bioinformatics and Bioengineering (BIBE), pp. 523–529, 2020. Tags: EEG, Fractional Lower Order Moments, Functional Connectivity, Graph Signal Filtering, Topological Connectivity, Visibility Graph.
As the field of brain monitoring is evolving rapidly, there is an increasing demand for innovative ways to handle the relevant signals. In particular, electroencephalogram (EEG) signals provide a non-invasive way of diagnostic inference of the brain's functionality. Nevertheless, EEG signals are often corrupted by impulsive noise, so prior denoising is required for accurate analysis and decision making. On the other hand, EEG signals naturally admit a representation in the form of graphs, with the electrodes corresponding to the nodes of the graph and the edges expressing the connectivity strength. To this end, graph signal processing (GSP) is a versatile tool which enables the representation and analysis of graph-structured signals, whose interdependencies are encoded in the form of an appropriate adjacency matrix. To address the denoising of graph-structured signals under impulsive noise conditions, this work introduces a regularized graph filtering scheme based on fractional lower-order moments, coupled with distinct adjacency matrices, inspired both by statistical approaches and by visibility graphs, that are better capable of capturing the topological and functional connectivity between the distinct nodes. The experimental evaluation on real EEG signals recorded during epileptic and non-epileptic seizures reveals the effect of the adjacency matrix choice on the denoising performance.
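As a minimal illustration of how the adjacency matrix enters a regularized graph filter, the sketch below performs Tikhonov-style graph denoising with a combinatorial Laplacian built from a given adjacency matrix; the fractional-lower-order-moment regularization studied in the paper is not reproduced.

```python
# Minimal sketch (assumption): Tikhonov-style graph denoising given an adjacency
# matrix -- x_hat = argmin ||x - y||^2 + lam * x^T L x -- showing where the choice
# of adjacency enters; not the paper's fractional-lower-order-moment scheme.
import numpy as np

def graph_denoise(y, A, lam=2.0):
    d = A.sum(axis=1)
    L = np.diag(d) - A                              # combinatorial graph Laplacian
    return np.linalg.solve(np.eye(len(y)) + lam * L, y)

rng = np.random.default_rng(1)
n = 8                                               # e.g., 8 EEG electrodes
A = (rng.random((n, n)) < 0.4).astype(float)
A = np.triu(A, 1)
A = A + A.T                                         # symmetric adjacency, no self-loops
x_true = rng.normal(size=n)
y = x_true + 0.5 * rng.standard_t(df=2, size=n)     # heavy-tailed measurement noise
print(np.round(graph_denoise(y, A), 3))
```

Swapping in a different adjacency matrix (statistical, topological, or visibility-graph based) changes L and therefore the smoothing behaviour, which is the design choice the paper studies.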
Troullinou, Eirini; Tsagkatakis, Grigorios; Chavlis, Spyridon; Turi, Gergely F.; Li, Wenke; Losonczy, Attila; Tsakalides, Panagiotis; Poirazi, Panayiota: Artificial Neural Networks in Action for an Automated Cell-Type Classification of Biological Neural Networks (Journal Article). IEEE Transactions on Emerging Topics in Computational Intelligence, 2020. Tags: Artificial Neural Networks, Calcium Imaging, Neuronal Cell-Type Classification.
Identification of different neuronal cell types is critical for understanding their contribution to brain functions. Yet automated and reliable classification of neurons remains a challenge, primarily because of their biological complexity. Typical approaches include laborious and expensive immunohistochemical analysis, while feature extraction algorithms based on cellular characteristics have recently been proposed. The former rely on molecular markers, which are often expressed in many cell types, while the latter suffer from similar issues: finding features that are distinctive for each class has proven to be equally challenging. Moreover, both approaches are time-consuming and demand substantial human intervention. In this work we establish the first automated cell-type classification method that relies on neuronal activity rather than molecular or cellular features. We test our method on a real-world dataset comprising raw calcium activity signals for four neuronal types. We compare the performance of three different deep learning models and demonstrate that our method can achieve automated classification of neuronal cell types with unprecedented accuracy.
Tsagkatakis, Grigorios; Moghaddam, Mahta; Tsakalides, Panagiotis: Multi-Temporal Convolutional Neural Networks for Satellite-Derived Soil Moisture Observation Enhancement (In Proceedings). Proc. IEEE International Geoscience and Remote Sensing Symposium (IGARSS 2020), pp. 4602–4605, IEEE, 2020. Tags: Deep Learning, Remote Sensing, Soil Moisture.
In this work, we propose a novel Convolutional Neural Network architecture for increasing the spatial resolution of the coarse SMAP radiometer-based soil moisture estimates from 36 km to 3 km by using time series of observations from both SMAP's radiometer and the Sentinel-1 radar. By simultaneously extracting features from both the current low-resolution input and the residuals between high and low resolution at previous time instances, the proposed network is capable of accurately estimating soil moisture from coarse-resolution observations. Experimental results at three different locations demonstrate that the proposed scheme is able to estimate soil moisture with accuracy in the range of the requirements set by the SMAP science team.
Simou, Nikonas; Stefanakis, Nikolaos; Zervas, Panagiotis: A Universal System for Cough Detection in Domestic Acoustic Environments (In Proceedings). Proc. European Signal Processing Conference (EUSIPCO), 2020. Tags: Cough Detection, Deep Neural Networks, Domestic Acoustic Environments.
Automated cough detection may provide valuable clinical information for monitoring a patient's health condition. In this paper, we present a cough detection system that utilises an acoustic onset detector as a pre-processing step, aiming to detect impulsive patterns in the audio stream. In a subsequent step, discrimination of coughing events from other impulsive sounds is handled as a binary classification task. In contrast to existing works, the proposed cough discrimination models are trained and tested with heterogeneous data uploaded by different users to online audio repositories. In that way, our system achieves robust performance across a wide range of audio recording devices and under varying noise and/or reverberation conditions. Our evaluation results show that a sensitivity in the order of 90% and a specificity in the order of 99% can be achieved in a domestic environment with the utilization of a Long Short-Term Memory (LSTM) deep neural network architecture.
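The sketch below shows a simple short-time-energy onset detector of the kind that could serve as the impulsive-event pre-processing step described above, ahead of a cough/not-cough classifier; frame length, hop, and threshold are illustrative assumptions, not the paper's settings.

```python
# Minimal sketch (assumption): a short-time-energy onset detector that flags
# impulsive segments for a downstream cough/not-cough classifier; frame length
# and threshold are illustrative choices, not the paper's settings.
import numpy as np

def detect_onsets(signal, sr=16000, frame_ms=32, hop_ms=16, k=6.0):
    frame, hop = int(sr * frame_ms / 1000), int(sr * hop_ms / 1000)
    n_frames = 1 + (len(signal) - frame) // hop
    energy = np.array([np.sum(signal[i * hop:i * hop + frame] ** 2)
                       for i in range(n_frames)])
    # robust threshold: median plus k times the median absolute deviation
    thresh = np.median(energy) + k * np.median(np.abs(energy - np.median(energy)))
    onsets = np.flatnonzero((energy[1:] > thresh) & (energy[:-1] <= thresh)) + 1
    return onsets * hop / sr                        # onset times in seconds

sr = 16000
t = np.arange(2 * sr) / sr
x = 0.01 * np.random.default_rng(0).normal(size=len(t))
x[sr:sr + 2000] += np.hanning(2000) * np.sin(2 * np.pi * 400 * t[:2000])
print(detect_onsets(x, sr))   # the synthetic impulsive burst starts at 1.0 s
```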
Pentari, Anastasia; Tzagkarakis, George; Marias, Kostas; Tsakalides, Panagiotis: Graph-based Denoising of EEG Signals in Impulsive Environments (In Proceedings). Proc. European Signal Processing Conference (EUSIPCO), pp. 1095–1099, 2020. Tags: Alpha-Stable Models, EEG Signals, Fractional Lower Order Moments, Graph Signal Denoising, Graph Signal Processing, Impulsive Noise.
As the fields of brain-computer interaction and digital monitoring of mental health are rapidly evolving, there is an increasing demand for improving the signal processing module of such systems. Specifically, electroencephalogram (EEG) signals are among the best non-invasive modalities for collecting brain signals. However, in practice, the quality of the recorded EEG signals is often degraded by impulsive noise, which hinders the accuracy of any decision-making process. Previous methods for denoising EEG signals primarily rely on second-order statistics for the additive noise, which is not a valid assumption when operating in impulsive environments. To alleviate this issue, this work proposes a new method for suppressing the effects of heavy-tailed noise in EEG recordings. To this end, the spatio-temporal interdependence between the electrodes is first modelled by means of graph representations. Then, the family of alpha-stable models is employed to fit the distribution of the noisy graph signals and to design an appropriate adjacency matrix. The denoised signals are obtained by iteratively solving a regularized optimization problem based on fractional lower-order moments. Experimental evaluation with real data reveals the improved denoising performance of our algorithm against well-established techniques.
Zervou, Michaela Areti; Doutsi, Effrosyni; Pavlidis, Pavlos; Tsakalides, Panagiotis: Efficient Dynamic Analysis of Low-similarity Proteins for Structural Class Prediction (In Proceedings). Proc. European Signal Processing Conference (EUSIPCO), 2020. Tags: Chaos Game Representation, Multidimensional Recurrence Quantification Analysis, Nonlinear Time Series Analysis, Protein Structure Prediction.
Prediction of protein structural classes from amino acid sequences is a challenging problem, and solving it is profitable for analyzing protein function, interactions, and regulation. The majority of existing prediction methods for low-homology sequences utilize a large number of features and require an exhaustive search for optimal parameter tuning. To address this problem, this work proposes a novel self-tuned architecture for feature extraction that directly models the inherent dynamics of the data in a higher-dimensional phase space via chaos game representation (CGR) and generalized multidimensional recurrence quantification analysis (GmdRQA). Experimental evaluation on a real benchmark dataset demonstrates the advantage of the proposed architecture over the state-of-the-art unidimensional RQA: it achieves similar performance in a data-driven manner and at a smaller computational cost.
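For readers unfamiliar with chaos game representation, the sketch below maps a protein sequence to a 2D trajectory using the classic four-corner CGR after grouping residues into four coarse physico-chemical classes; the grouping and the omission of the downstream GmdRQA analysis are simplifying assumptions, not the paper's exact construction.

```python
# Minimal sketch (assumption): classic 4-corner chaos game representation (CGR).
# Protein residues are first mapped to 4 coarse physico-chemical groups here so
# that the standard DNA-style CGR applies; the paper's exact mapping may differ.
import numpy as np

GROUPS = {a: "H" for a in "AVLIMFWPC"}              # hydrophobic (illustrative grouping)
GROUPS.update({a: "P" for a in "STYNQG"})           # polar
GROUPS.update({a: "B" for a in "KRH"})              # basic
GROUPS.update({a: "D" for a in "DE"})               # acidic
CORNERS = {"H": (0, 0), "P": (0, 1), "B": (1, 0), "D": (1, 1)}

def cgr(sequence):
    pts, xy = [], np.array([0.5, 0.5])
    for residue in sequence:
        corner = np.array(CORNERS[GROUPS[residue]], dtype=float)
        xy = (xy + corner) / 2.0                    # move halfway towards the corner
        pts.append(xy.copy())
    return np.array(pts)                            # a 2D trajectory / time series

trajectory = cgr("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ")   # toy sequence
print(trajectory[:5].round(3))
```

The resulting 2D trajectory is the kind of multidimensional time series that recurrence-based tools such as GmdRQA can then summarize into features.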
Bountrogiannis, Konstantinos ; Tzagkarakis, George ; Tsakalides, Panagiotis Anomaly Detection for Symbolic Time Series Representations of Reduced Dimensionality In Proceedings Proc. European Signal Processing Conference (EUSIPCO), 2020. BibTeX | Tags: Anomaly Detection @inproceedings{bountro_2020a, title = {Anomaly Detection for Symbolic Time Series Representations of Reduced Dimensionality}, author = {Bountrogiannis, Konstantinos and Tzagkarakis, George and Tsakalides, Panagiotis}, year = {2020}, date = {2020-08-31}, booktitle = {Proc. European Signal Processing Conference (EUSIPCO)}, keywords = {Anomaly Detection}, pubstate = {published}, tppubtype = {inproceedings} } |
Bountrogiannis, Konstantinos ; Tzagkarakis, George ; Tsakalides, Panagiotis Data-Driven Kernel-Based Probabilistic SAX for Time Series Dimensionality Reduction In Proceedings Proc. European Signal Processing Conference (EUSIPCO), 2020. BibTeX | Tags: Dimensionality Reduction, Time series @inproceedings{bountro_2020b, title = {Data-Driven Kernel-Based Probabilistic SAX for Time Series Dimensionality Reduction}, author = {Bountrogiannis, Konstantinos and Tzagkarakis, George and Tsakalides, Panagiotis}, year = {2020}, date = {2020-08-31}, booktitle = {Proc. European Signal Processing Conference (EUSIPCO)}, keywords = {Dimensionality Reduction, Time series}, pubstate = {published}, tppubtype = {inproceedings} } |
Stivaktakis, Radamanthys ; Tsagkatakis, Grigorios ; Tsakalides, Panagiotis Semantic Predictive Coding with Arbitrated Generative Adversarial Networks Journal Article MDPI Machine Learning and Knowledge Extraction (MAKE), 2 (3), pp. 307-326, 2020. Abstract | Links | BibTeX | Tags: Deep Learning, Generative Adversarial Networks, Next-Frame Prediction, Predictive Coding, Semantic Predictive Coding @article{stivakt_2020_arb1, title = {Semantic Predictive Coding with Arbitrated Generative Adversarial Networks}, author = {Stivaktakis, Radamanthys and Tsagkatakis, Grigorios and Tsakalides, Panagiotis}, doi = {//doi.org/10.3390/make2030017}, year = {2020}, date = {2020-08-25}, journal = {MDPI Machine Learning and Knowledge Extraction (MAKE)}, volume = {2}, number = {3}, pages = {307-326}, abstract = {In spatio-temporal predictive coding problems, like next-frame prediction in video, determining the content of plausible future frames is primarily based on the image dynamics of previous frames. We establish an alternative approach based on their underlying semantic information when considering data that do not necessarily incorporate a temporal aspect, but instead they comply with some form of associative ordering. In this work, we introduce the notion of semantic predictive coding by proposing a novel generative adversarial modeling framework which incorporates the arbiter classifier as a new component. While the generator is primarily tasked with the anticipation of possible next frames, the arbiter’s principal role is the assessment of their credibility. Taking into account that the denotative meaning of each forthcoming element can be encapsulated in a generic label descriptive of its content, a classification loss is introduced along with the adversarial loss. As supported by our experimental findings in a next-digit and a next-letter scenario, the utilization of the arbiter not only results in an enhanced GAN performance, but it also broadens the network’s creative capabilities in terms of the diversity of the generated symbols.}, keywords = {Deep Learning, Generative Adversarial Networks, Next-Frame Prediction, Predictive Coding, Semantic Predictive Coding}, pubstate = {published}, tppubtype = {article} } In spatio-temporal predictive coding problems, like next-frame prediction in video, determining the content of plausible future frames is primarily based on the image dynamics of previous frames. We establish an alternative approach based on their underlying semantic information when considering data that do not necessarily incorporate a temporal aspect, but instead they comply with some form of associative ordering. In this work, we introduce the notion of semantic predictive coding by proposing a novel generative adversarial modeling framework which incorporates the arbiter classifier as a new component. While the generator is primarily tasked with the anticipation of possible next frames, the arbiter’s principal role is the assessment of their credibility. Taking into account that the denotative meaning of each forthcoming element can be encapsulated in a generic label descriptive of its content, a classification loss is introduced along with the adversarial loss. As supported by our experimental findings in a next-digit and a next-letter scenario, the utilization of the arbiter not only results in an enhanced GAN performance, but it also broadens the network’s creative capabilities in terms of the diversity of the generated symbols. |
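To make the role of the arbiter concrete, here is a hedged PyTorch-style sketch of how a classification loss from an arbiter network could be combined with the usual adversarial loss of the generator. The network outputs are replaced by toy tensors and the weighting factor lambda_cls is a placeholder; this is not the paper's architecture or training configuration.

import torch
import torch.nn as nn

adv_criterion = nn.BCEWithLogitsLoss()     # discriminator decision: real vs. fake
cls_criterion = nn.CrossEntropyLoss()      # arbiter decision: label of the generated frame
lambda_cls = 1.0                           # placeholder weighting between the two terms

def generator_loss(d_logits_fake, arbiter_logits, target_labels):
    """The generator tries to fool the discriminator AND to produce frames whose
    content the arbiter classifies as the intended label."""
    adv_loss = adv_criterion(d_logits_fake, torch.ones_like(d_logits_fake))
    cls_loss = cls_criterion(arbiter_logits, target_labels)
    return adv_loss + lambda_cls * cls_loss

# Toy tensors standing in for network outputs on a batch of 8 generated frames.
d_logits_fake = torch.randn(8, 1)
arbiter_logits = torch.randn(8, 10)        # e.g. 10 classes in a next-digit scenario
target_labels = torch.randint(0, 10, (8,))
print(generator_loss(d_logits_fake, arbiter_logits, target_labels))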
Aspri, Maria ; Tsagkatakis, Grigorios ; Tsakalides, Panagiotis Distributed training and inference of deep learning models for multi-modal land cover classification Journal Article Remote Sensing, 12 (17), pp. 2670, 2020. Abstract | BibTeX | Tags: Deep Learning, Land Cover, Remote Sensing @article{Aspri_2020a, title = {Distributed training and inference of deep learning models for multi-modal land cover classification}, author = {Aspri, Maria and Tsagkatakis, Grigorios and Tsakalides, Panagiotis}, year = {2020}, date = {2020-08-19}, journal = {Remote Sensing}, volume = {12}, number = {17}, pages = {2670}, abstract = {Deep Neural Networks (DNNs) have established themselves as a fundamental tool in numerous computational modeling applications, overcoming the challenge of defining use-case-specific feature extraction processing by incorporating this stage into unified end-to-end trainable models. Despite their capabilities in modeling, training large-scale DNN models is a very computation-intensive task that most single machines are often incapable of accomplishing. To address this issue, different parallelization schemes were proposed. Nevertheless, network overheads as well as optimal resource allocation pose as major challenges, since network communication is generally slower than intra-machine communication while some layers are more computationally expensive than others. In this work, we consider a novel multimodal DNN based on the Convolutional Neural Network architecture and explore several different ways to optimize its performance when training is executed on an Apache Spark Cluster. We evaluate the performance of different architectures via the metrics of network traffic and processing power, considering the case of land cover classification from remote sensing observations. Furthermore, we compare our architectures with an identical DNN architecture modeled after a data parallelization approach by using the metrics of classification accuracy and inference execution time. The experiments show that the way a model is parallelized has tremendous effect on resource allocation and hyperparameter tuning can reduce network overheads. Experimental results also demonstrate that proposed model parallelization schemes achieve more efficient resource use and more accurate predictions compared to data parallelization approaches.}, keywords = {Deep Learning, Land Cover, Remote Sensing}, pubstate = {published}, tppubtype = {article} } Deep Neural Networks (DNNs) have established themselves as a fundamental tool in numerous computational modeling applications, overcoming the challenge of defining use-case-specific feature extraction processing by incorporating this stage into unified end-to-end trainable models. Despite their capabilities in modeling, training large-scale DNN models is a very computation-intensive task that most single machines are often incapable of accomplishing. To address this issue, different parallelization schemes were proposed. Nevertheless, network overheads as well as optimal resource allocation pose as major challenges, since network communication is generally slower than intra-machine communication while some layers are more computationally expensive than others. In this work, we consider a novel multimodal DNN based on the Convolutional Neural Network architecture and explore several different ways to optimize its performance when training is executed on an Apache Spark Cluster. 
We evaluate the performance of different architectures via the metrics of network traffic and processing power, considering the case of land cover classification from remote sensing observations. Furthermore, we compare our architectures with an identical DNN architecture modeled after a data parallelization approach by using the metrics of classification accuracy and inference execution time. The experiments show that the way a model is parallelized has a tremendous effect on resource allocation and that hyperparameter tuning can reduce network overheads. Experimental results also demonstrate that the proposed model parallelization schemes achieve more efficient resource use and more accurate predictions compared to data parallelization approaches. |
Tzagkarakis, George ; Charalampidis, Pavlos ; Roumpakis, Stylianos ; Makrogiannakis, Antonis ; Tsakalides, Panagiotis Quantifying the Computational Efficiency of Compressive Sensing in Smart Water Network Infrastructures Journal Article MDPI Sensors, 2020. BibTeX | Tags: Compressed Sensing, Wireless Sensor Networks @article{tzag_roub_2020a, title = {Quantifying the Computational Efficiency of Compressive Sensing in Smart Water Network Infrastructures}, author = {Tzagkarakis, George and Charalampidis, Pavlos and Roumpakis, Stylianos and Makrogiannakis, Antonis and Tsakalides, Panagiotis}, year = {2020}, date = {2020-06-10}, journal = {MDPI Sensors}, keywords = {Compressed Sensing, Wireless Sensor Networks}, pubstate = {published}, tppubtype = {article} } |
Simou, Nikonas ; Mastorakis, Yannis ; Stefanakis, Nikolaos Towards Blind Quality Assessment of Concert Audio Recordings Using Deep Neural Networks In Proceedings ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 3477–3481, IEEE 2020, ISSN: 2379-190X. Abstract | Links | BibTeX | Tags: Deep Neural Networks, Quality Assessment, User Generated Content @inproceedings{Simou_2020a, title = {Towards Blind Quality Assessment of Concert Audio Recordings Using Deep Neural Networks}, author = {Simou, Nikonas and Mastorakis, Yannis and Stefanakis, Nikolaos}, doi = {10.1109/ICASSP40776.2020.9053356}, issn = {2379-190X}, year = {2020}, date = {2020-05-11}, booktitle = {ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)}, pages = {3477--3481}, organization = {IEEE}, abstract = {Live music audio and video recordings represent a large percentage of the huge amount of User Generated Content (UGC) that is available on the internet today. Applications and services related to the management and consumption of this content may significantly benefit from tools able to produce a subjective score of the audio quality. In this work, we apply different Deep Neural Network (DNN) architectures to a simple binary classification problem, that of deciding whether a musical recording is user-generated or of professional quality. Showing that we are able to efficiently address this binary classification problem, we gain some useful insight about factors that may assist the design and affect the performance of a future system that would be able to address the more general problem of blind audio quality assessment.}, keywords = {Deep Neural Networks, Quality Assessment, User Generated Content}, pubstate = {published}, tppubtype = {inproceedings} } Live music audio and video recordings represent a large percentage of the huge amount of User Generated Content (UGC) that is available on the internet today. Applications and services related to the management and consumption of this content may significantly benefit from tools able to produce a subjective score of the audio quality. In this work, we apply different Deep Neural Network (DNN) architectures to a simple binary classification problem, that of deciding whether a musical recording is user-generated or of professional quality. Showing that we are able to efficiently address this binary classification problem, we gain some useful insight about factors that may assist the design and affect the performance of a future system that would be able to address the more general problem of blind audio quality assessment. |
Aidini, Anastasia ; Tsagkatakis, Grigorios ; Tsakalides, Panagiotis Quantized Tensor Robust Principal Component Analysis In Proceedings ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), IEEE 2020. Abstract | BibTeX | Tags: Image Time-Series, Missing Values, Quantization, Robust PCA, Tensors @inproceedings{Aidini_2020b, title = {Quantized Tensor Robust Principal Component Analysis}, author = {Aidini, Anastasia and Tsagkatakis, Grigorios and Tsakalides, Panagiotis}, year = {2020}, date = {2020-05-11}, booktitle = {ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)}, organization = {IEEE}, abstract = {High-dimensional data structures, known as tensors, are fundamental in many applications, including multispectral imaging and color video processing. Compression of such huge amount of multidimensional data collected over time is of paramount importance, necessitating the process of quantization of measurements into discrete values. Furthermore, noise and issues related to the acquisition and transmission of signals frequently lead to unobserved, lost or corrupted measurements. In this paper, we introduce a tensor robust principal component analysis algorithm in order to recover a tensor with real-valued entries from a partly observed set of quantized and sparsely corrupted entries. We formulate the problem as a constrained maximum likelihood estimation of the sum of a low-rank tensor and a sparse tensor, through matricizations in each mode, in combination with a quantization and statistical measurement model. Experimental results on satellite derived land surface time-series demonstrate that directly operating with the quantized measurements, rather than treating them as real values, results in a low recovery error, while the proposed method is also capable of detecting temperature anomalies (e.g., forest fires).}, keywords = {Image Time-Series, Missing Values, Quantization, Robust PCA, Tensors}, pubstate = {published}, tppubtype = {inproceedings} } High-dimensional data structures, known as tensors, are fundamental in many applications, including multispectral imaging and color video processing. Compression of such huge amount of multidimensional data collected over time is of paramount importance, necessitating the process of quantization of measurements into discrete values. Furthermore, noise and issues related to the acquisition and transmission of signals frequently lead to unobserved, lost or corrupted measurements. In this paper, we introduce a tensor robust principal component analysis algorithm in order to recover a tensor with real-valued entries from a partly observed set of quantized and sparsely corrupted entries. We formulate the problem as a constrained maximum likelihood estimation of the sum of a low-rank tensor and a sparse tensor, through matricizations in each mode, in combination with a quantization and statistical measurement model. Experimental results on satellite derived land surface time-series demonstrate that directly operating with the quantized measurements, rather than treating them as real values, results in a low recovery error, while the proposed method is also capable of detecting temperature anomalies (e.g., forest fires). |
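For background, the sketch below shows textbook matrix robust PCA (low-rank plus sparse separation via a basic ADMM loop), which is the building block that the quantized tensor formulation above generalizes through mode-wise matricizations and a quantized measurement model. The parameter defaults follow common heuristics and the data are synthetic; the quantization and maximum-likelihood aspects of the paper are not reproduced.

import numpy as np

def svt(M, tau):
    """Singular value thresholding (proximal operator of the nuclear norm)."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0)) @ Vt

def soft(M, tau):
    """Entrywise soft thresholding (proximal operator of the l1 norm)."""
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0)

def rpca(M, lam=None, mu=None, n_iter=200):
    """Decompose M into a low-rank part L and a sparse part S with a basic ADMM scheme."""
    m, n = M.shape
    lam = lam or 1.0 / np.sqrt(max(m, n))          # common heuristic weight
    mu = mu or 0.25 * m * n / np.abs(M).sum()      # common heuristic penalty parameter
    L, S, Y = np.zeros_like(M), np.zeros_like(M), np.zeros_like(M)
    for _ in range(n_iter):
        L = svt(M - S + Y / mu, 1.0 / mu)
        S = soft(M - L + Y / mu, lam / mu)
        Y = Y + mu * (M - L - S)
    return L, S

# Toy example: rank-1 background plus a few large outliers.
rng = np.random.default_rng(0)
M = np.outer(rng.standard_normal(30), rng.standard_normal(20))
M[rng.integers(0, 30, 10), rng.integers(0, 20, 10)] += 10
L, S = rpca(M)
print(np.linalg.matrix_rank(L, tol=1e-3), int((np.abs(S) > 1e-3).sum()))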
Giannopoulos, Michalis ; Aidini, Anastasia ; Pentari, Anastasia ; Fotiadou, Konstantina ; Tsakalides, Panagiotis Classification of Compressed Remote Sensing Multispectral Images via Convolutional Neural Networks Journal Article Journal of Imaging, 6 (4), pp. 24, 2020. Abstract | BibTeX | Tags: Compression, Convolutional Neural Networks, Deep Learning, Multispectral Image Classification, Nuclear Norm, Quantization, Residual Learning, Tensor Unfoldings @article{Giannopoulos_2020a, title = {Classification of Compressed Remote Sensing Multispectral Images via Convolutional Neural Networks}, author = {Giannopoulos, Michalis and Aidini, Anastasia and Pentari, Anastasia and Fotiadou, Konstantina and Tsakalides, Panagiotis}, year = {2020}, date = {2020-04-18}, journal = {Journal of Imaging}, volume = {6}, number = {4}, pages = {24}, abstract = {Multispectral sensors constitute a core Earth observation image technology generating massive high-dimensional observations. To address the communication and storage constraints of remote sensing platforms, lossy data compression becomes necessary, but it unavoidably introduces unwanted artifacts. In this work, we consider the encoding of multispectral observations into high-order tensor structures which can naturally capture multi-dimensional dependencies and correlations, and we propose a resource-efficient compression scheme based on quantized low-rank tensor completion. The proposed method is also applicable to the case of missing observations due to environmental conditions, such as cloud cover. To quantify the performance of compression, we consider both typical image quality metrics as well as the impact on state-of-the-art deep learning-based land-cover classification schemes. Experimental analysis on observations from the ESA Sentinel-2 satellite reveals that even minimal compression can have negative effects on classification performance which can be efficiently addressed by our proposed recovery scheme.}, keywords = {Compression, Convolutional Neural Networks, Deep Learning, Multispectral Image Classification, Nuclear Norm, Quantization, Residual Learning, Tensor Unfoldings}, pubstate = {published}, tppubtype = {article} } Multispectral sensors constitute a core Earth observation image technology generating massive high-dimensional observations. To address the communication and storage constraints of remote sensing platforms, lossy data compression becomes necessary, but it unavoidably introduces unwanted artifacts. In this work, we consider the encoding of multispectral observations into high-order tensor structures which can naturally capture multi-dimensional dependencies and correlations, and we propose a resource-efficient compression scheme based on quantized low-rank tensor completion. The proposed method is also applicable to the case of missing observations due to environmental conditions, such as cloud cover. To quantify the performance of compression, we consider both typical image quality metrics as well as the impact on state-of-the-art deep learning-based land-cover classification schemes. Experimental analysis on observations from the ESA Sentinel-2 satellite reveals that even minimal compression can have negative effects on classification performance which can be efficiently addressed by our proposed recovery scheme. |
Aidini, Anastasia ; Tsagkatakis, Grigorios ; Tsakalides, Panagiotis Tensor Dictionary Learning with Representation Quantization for Remote Sensing Observation Compression In Proceedings Proc. Data Compression Conference (DCC), 2020. Abstract | BibTeX | Tags: Alternating Direction Method of Multipliers, Compression, CP Decomposition, Remote Sensing Observations, Tensor Dictionary Learning @inproceedings{Aidini_2020a, title = {Tensor Dictionary Learning with Representation Quantization for Remote Sensing Observation Compression}, author = {Aidini, Anastasia and Tsagkatakis, Grigorios and Tsakalides, Panagiotis}, year = {2020}, date = {2020-03-30}, booktitle = {Proc. Data Compression Conference (DCC)}, abstract = {Nowadays, multidimensional data structures, known as tensors, are widely used in many applications like earth observation from remote sensing image sequences. However, the increasing spatial, spectral and temporal resolution of the acquired images, introduces considerable challenges in terms of data storage and transfer, making critical the necessity of an efficient compression system for high dimensional data. In this paper, we propose a tensor-based compression algorithm that retains the structure of the data and achieves a high compression ratio. Specifically, our method learns a dictionary of specially structured tensors using the Alternating Direction Method of Multipliers, as well as a symbol encoding dictionary. During run-time, a quantized and encoded sparse vector of coefficients is transmitted, instead of the whole multidimensional signal. Experimental results on real satellite image sequences demonstrate the efficacy of our method compared to a state-of the-art compression method.}, keywords = {Alternating Direction Method of Multipliers, Compression, CP Decomposition, Remote Sensing Observations, Tensor Dictionary Learning}, pubstate = {published}, tppubtype = {inproceedings} } Nowadays, multidimensional data structures, known as tensors, are widely used in many applications like earth observation from remote sensing image sequences. However, the increasing spatial, spectral and temporal resolution of the acquired images, introduces considerable challenges in terms of data storage and transfer, making critical the necessity of an efficient compression system for high dimensional data. In this paper, we propose a tensor-based compression algorithm that retains the structure of the data and achieves a high compression ratio. Specifically, our method learns a dictionary of specially structured tensors using the Alternating Direction Method of Multipliers, as well as a symbol encoding dictionary. During run-time, a quantized and encoded sparse vector of coefficients is transmitted, instead of the whole multidimensional signal. Experimental results on real satellite image sequences demonstrate the efficacy of our method compared to a state-of the-art compression method. |
Doutsi, Effrosyni ; Tsakalides, Panagiotis Image Compression based on Neuroscience Models: Rate-Distortion Performance of the Neural Code In Proceedings Data Compression Conference (DCC 2020), 2020. BibTeX | Tags: Image Compression @inproceedings{doutsi_2020a, title = {Image Compression based on Neuroscience Models: Rate-Distortion Performance of the Neural Code}, author = {Doutsi, Effrosyni and Tsakalides, Panagiotis}, year = {2020}, date = {2020-03-27}, booktitle = {Data Compression Conference (DCC 2020)}, keywords = {Image Compression}, pubstate = {published}, tppubtype = {inproceedings} } |
Tsagkatakis, Grigorios ; Nikolidakis, S; Petra, E; Kapantagakis, A; Grigorakis, K; Katselis, G; Vlahos, N; Tsakalides, Panagiotis Fish Freshness Estimation though analysis of Multispectral Images with Convolutional Neural Networks Journal Article Electronic Imaging, 2020 (12), pp. 171–1–171–5, 2020. Abstract | BibTeX | Tags: Deep Learning, Food Quality @article{Tsagkatakis_2020b, title = {Fish Freshness Estimation though analysis of Multispectral Images with Convolutional Neural Networks}, author = {Tsagkatakis, Grigorios and Nikolidakis, S and Petra, E and Kapantagakis, A and Grigorakis, K and Katselis, G and Vlahos, N and Tsakalides, Panagiotis}, year = {2020}, date = {2020-01-26}, journal = {Electronic Imaging}, volume = {2020}, number = {12}, pages = {171--1--171--5}, abstract = {Quantification of food quality is a critical process for ensuring public health. Fish correspond to a particularly challenging case due to its high perishable nature as food. Existing approaches require laboratory testing, a laborious and time consuming process. In this paper, we propose a novel approach for evaluating fish freshness by exploiting the information encoded in the spectral profile acquired by a snapshot spectral camera. To extract the relevant information, we employ state-of-the- art Convolutional Neural Networks and treat the problem as an instance of multi-class classification, where each class corresponds to a two-day period since harvesting. Experimental evaluation on individuals from the Sparidae (Boops sp.) family demonstrates that the proposed approach constitutes a valid methodology, offering both accuracy as well as effortless application.}, keywords = {Deep Learning, Food Quality}, pubstate = {published}, tppubtype = {article} } Quantification of food quality is a critical process for ensuring public health. Fish correspond to a particularly challenging case due to its high perishable nature as food. Existing approaches require laboratory testing, a laborious and time consuming process. In this paper, we propose a novel approach for evaluating fish freshness by exploiting the information encoded in the spectral profile acquired by a snapshot spectral camera. To extract the relevant information, we employ state-of-the- art Convolutional Neural Networks and treat the problem as an instance of multi-class classification, where each class corresponds to a two-day period since harvesting. Experimental evaluation on individuals from the Sparidae (Boops sp.) family demonstrates that the proposed approach constitutes a valid methodology, offering both accuracy as well as effortless application. |
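The abstract above casts freshness estimation as multi-class classification with one class per two-day period since harvesting. A trivial helper implementing that label construction might look as follows; the number of classes is an assumption of this sketch, not a detail taken from the paper.

def freshness_class(days_since_harvest, period=2, n_classes=5):
    """Map days since harvest to a class index, one class per `period` days
    (days 0-1 -> class 0, days 2-3 -> class 1, ...), capped at n_classes - 1.
    n_classes=5 is a placeholder, not the paper's setting."""
    return min(days_since_harvest // period, n_classes - 1)

print([freshness_class(d) for d in range(10)])   # [0, 0, 1, 1, 2, 2, 3, 3, 4, 4]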
Troullinou, Eirini ; Tsagkatakis, Grigorios ; Palagina, Gana ; Papadopouli, Maria ; Smirnakis, Stelios Manolis ; Tsakalides, Panagiotis Adversarial dictionary learning for a robust analysis and modelling of spontaneous neuronal activity Journal Article Neurocomputing, 388 , pp. 188–201, 2020. Abstract | BibTeX | Tags: Biological Neural Networks, Dictionary Learning, Supervised Machine Learning @article{Troullinou_2020a, title = {Adversarial dictionary learning for a robust analysis and modelling of spontaneous neuronal activity}, author = {Troullinou, Eirini and Tsagkatakis, Grigorios and Palagina, Gana and Papadopouli, Maria and Smirnakis, Stelios Manolis and Tsakalides, Panagiotis}, year = {2020}, date = {2020-01-11}, journal = {Neurocomputing}, volume = {388}, pages = {188--201}, abstract = {The field of neuroscience is experiencing rapid growth in the complexity and quantity of the recorded neural activity allowing us unprecedented access to its dynamics in different brain areas. The objective of this work is to discover directly from the experimental data rich and comprehensible models for brain function that will be concurrently robust to noise. Considering this task from the perspective of dimensionality reduction, we develop an innovative, noise-robust dictionary learning framework based on adversarial training methods for the identification of patterns of synchronous firing activity, as well as of firing within a time lag. We employ real-world binary datasets describing the spontaneous neuronal activity of laboratory mice over time, and we aim at their efficient low-dimensional representation. The results on the classification accuracy for the discrimination between the clean and the adversarial-noisy activation patterns obtained by an SVM classifier highlight the efficacy of the proposed scheme compared to other methods, and the visualization of the dictionary's distribution demonstrates the multifarious information that we obtain from it.}, keywords = {Biological Neural Networks, Dictionary Learning, Supervised Machine Learning}, pubstate = {published}, tppubtype = {article} } The field of neuroscience is experiencing rapid growth in the complexity and quantity of the recorded neural activity allowing us unprecedented access to its dynamics in different brain areas. The objective of this work is to discover directly from the experimental data rich and comprehensible models for brain function that will be concurrently robust to noise. Considering this task from the perspective of dimensionality reduction, we develop an innovative, noise-robust dictionary learning framework based on adversarial training methods for the identification of patterns of synchronous firing activity, as well as of firing within a time lag. We employ real-world binary datasets describing the spontaneous neuronal activity of laboratory mice over time, and we aim at their efficient low-dimensional representation. The results on the classification accuracy for the discrimination between the clean and the adversarial-noisy activation patterns obtained by an SVM classifier highlight the efficacy of the proposed scheme compared to other methods, and the visualization of the dictionary's distribution demonstrates the multifarious information that we obtain from it. |
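As a loose analogue of the pipeline described above, the sketch below learns a generic sparse dictionary on toy binary activity patterns with scikit-learn and then trains an SVM to separate clean from noise-corrupted patterns based on their sparse codes. The adversarial training of the dictionary, the real neuronal recordings, and the paper's parameter choices are not represented; all shapes and parameters here are arbitrary.

import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.svm import SVC

rng = np.random.default_rng(0)
clean = (rng.random((200, 50)) < 0.1).astype(float)            # toy binary "activity patterns"
noisy = np.clip(clean + (rng.random((200, 50)) < 0.05), 0, 1)  # corrupted versions

# Learn a generic sparse dictionary on the clean patterns (the paper trains it adversarially).
dico = MiniBatchDictionaryLearning(n_components=20, alpha=0.5, random_state=0)
codes_clean = dico.fit(clean).transform(clean)
codes_noisy = dico.transform(noisy)

# Discriminate clean vs. noisy patterns from their sparse codes with an SVM.
X = np.vstack([codes_clean, codes_noisy])
y = np.r_[np.zeros(len(codes_clean)), np.ones(len(codes_noisy))]
clf = SVC(kernel="rbf").fit(X, y)
print("training accuracy:", clf.score(X, y))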
2019 |
Pentari, Anastasia ; Tsagkatakis, Grigorios ; Marias, Kostas ; Manikis C., Georgios ; Kartalis, Nikolaos ; Papanikolaou, Nikolaos ; Tsakalides, Panagiotis Sparse Representations on DW-MRI: A study on pancreas In Proceedings The 19th annual IEEE International Conference on Bioinformatics and Bioengineering (BIBE), pp. 791–795, 2019. Abstract | BibTeX | Tags: b-value, Dictionary Learning, DW-MRI, IVIM, Sparse Coding @inproceedings{Pentari_2019a, title = {Sparse Representations on DW-MRI: A study on pancreas}, author = {Pentari, Anastasia and Tsagkatakis, Grigorios and Marias, Kostas and Manikis, C., Georgios and Kartalis, Nikolaos and Papanikolaou, Nikolaos and Tsakalides, Panagiotis}, year = {2019}, date = {2019-12-26}, booktitle = {The 19th annual IEEE International Conference on Bioinformatics and Bioengineering (BIBE)}, pages = {791--795}, abstract = {This paper presents a method for reducing the Diffusion Weighted Magnetic Resonance Imaging (DW-MRI) examination time based on the mathematical framework of sparse representations. The aim is to undersample the b-values used for DW-MRI image acquisition which reflect the strength and timing of the gradients used to generate the DW-MRI images since their number defines the examination time. To test our method we investigate whether the undersampled DW-MRI data preserve the same accuracy in terms of extracted imaging biomarkers. The main procedure is based on the use of the k-Singular Value Decomposition (k-SVD) and the Orthogonal Matching Pursuit (OMP) algorithms, which are appropriate for the sparse representations computation. The presented results confirm the hypothesis of our study as the imaging biomarkers extracted from the sparsely reconstructed data have statistically close values to those extracted from the original data. Moreover, our method achieves a low reconstruction error and an image quality close to the original.}, keywords = {b-value, Dictionary Learning, DW-MRI, IVIM, Sparse Coding}, pubstate = {published}, tppubtype = {inproceedings} } This paper presents a method for reducing the Diffusion Weighted Magnetic Resonance Imaging (DW-MRI) examination time based on the mathematical framework of sparse representations. The aim is to undersample the b-values used for DW-MRI image acquisition which reflect the strength and timing of the gradients used to generate the DW-MRI images since their number defines the examination time. To test our method we investigate whether the undersampled DW-MRI data preserve the same accuracy in terms of extracted imaging biomarkers. The main procedure is based on the use of the k-Singular Value Decomposition (k-SVD) and the Orthogonal Matching Pursuit (OMP) algorithms, which are appropriate for the sparse representations computation. The presented results confirm the hypothesis of our study as the imaging biomarkers extracted from the sparsely reconstructed data have statistically close values to those extracted from the original data. Moreover, our method achieves a low reconstruction error and an image quality close to the original. |
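For a concrete handle on the sparse-coding machinery mentioned above, the following sketch reconstructs a synthetic per-voxel signal across b-values with orthogonal matching pursuit over a dictionary. The dictionary here is random purely for illustration, whereas the paper learns it with k-SVD; the number of b-values and the sparsity level are placeholders.

import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
n_bvalues, n_atoms, sparsity = 16, 40, 3

# Placeholder dictionary; in the paper it is learned with k-SVD from training voxels.
D = rng.standard_normal((n_bvalues, n_atoms))
D /= np.linalg.norm(D, axis=0)

# Synthesize one voxel's signal across b-values as a 3-sparse combination of atoms.
true_coef = np.zeros(n_atoms)
true_coef[rng.choice(n_atoms, sparsity, replace=False)] = rng.standard_normal(sparsity)
signal = D @ true_coef

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=sparsity, fit_intercept=False)
omp.fit(D, signal)
reconstruction = D @ omp.coef_
print("relative error:", np.linalg.norm(signal - reconstruction) / np.linalg.norm(signal))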
Aidini, Anastasia ; Giannopoulos, Michalis ; Pentari, Anastasia ; Fotiadou, Konstantina ; Tsakalides, Panagiotis Hyperspectral Image Compression and Super-Resolution Using Tensor Decomposition Learning In Proceedings Proc. 53rd Asilomar Conference on Signals, Systems, and Computers, pp. 1369–1373, IEEE 2019. Abstract | BibTeX | Tags: Alternating Direction Method of Multipliers, Compression, Multi-Spectral Image Classification, Super-Resolution, Tensor Unfoldings @inproceedings{Aidini_2019b, title = {Hyperspectral Image Compression and Super-Resolution Using Tensor Decomposition Learning}, author = {Aidini, Anastasia and Giannopoulos, Michalis and Pentari, Anastasia and Fotiadou, Konstantina and Tsakalides, Panagiotis}, year = {2019}, date = {2019-11-07}, booktitle = {Proc. 53rd Asilomar Conference on Signals, Systems, and Computers}, pages = {1369--1373}, organization = {IEEE}, abstract = {As the field of remote sensing for Earth Observation is rapidly evolving, there is an increasing demand for developing suitable methods to store and transmit the massive amounts of the generated data. At the same time, as multiple sensors acquire observations with different dimensions, super-resolution methods come into play to unify the framework for upcoming statistical inference tasks. In this paper, we employ a tensor-based structuring of multi-spectral image data and we propose a low-rank tensor completion scheme for efficient image-content compression and recovery. To address the problem of low-resolution imagery, we further provide a robust algorithmic scheme for super-resolving satellite images, followed by a state-of-the-art convolutional neural network architecture serving the classification task of the employed images. Experimental analysis on real-world observations demonstrates the detrimental effects of image compression on classification, an issue successfully addressed by the proposed recovery and super-resolution schemes.}, keywords = {Alternating Direction Method of Multipliers, Compression, Multi-Spectral Image Classification, Super-Resolution, Tensor Unfoldings}, pubstate = {published}, tppubtype = {inproceedings} } As the field of remote sensing for Earth Observation is rapidly evolving, there is an increasing demand for developing suitable methods to store and transmit the massive amounts of the generated data. At the same time, as multiple sensors acquire observations with different dimensions, super-resolution methods come into play to unify the framework for upcoming statistical inference tasks. In this paper, we employ a tensor-based structuring of multi-spectral image data and we propose a low-rank tensor completion scheme for efficient image-content compression and recovery. To address the problem of low-resolution imagery, we further provide a robust algorithmic scheme for super-resolving satellite images, followed by a state-of-the-art convolutional neural network architecture serving the classification task of the employed images. Experimental analysis on real-world observations demonstrates the detrimental effects of image compression on classification, an issue successfully addressed by the proposed recovery and super-resolution schemes. |
Zervou, Michaela Areti ; Tzagkarakis, George ; Tsakalides, Panagiotis Automated Screening of Dyslexia via Dynamical Recurrence Analysis of Wearable Sensor Data In Proceedings The 19th annual IEEE International Conference on Bioinformatics and Bioengineering (BIBE), 2019. Abstract | BibTeX | Tags: Dyslexia Screening, Multidimensional Recurrence Quantification Analysis, Non-Linear Data Analysis, Wearable Sensors @inproceedings{Zervou_2019b, title = {Automated Screening of Dyslexia via Dynamical Recurrence Analysis of Wearable Sensor Data}, author = {Zervou, Michaela Areti and Tzagkarakis, George and Tsakalides, Panagiotis}, year = {2019}, date = {2019-10-30}, booktitle = {The 19th annual IEEE International Conference on Bioinformatics and Bioengineering (BIBE)}, abstract = {Dyslexia is a neurodevelopmental learning disorder that affects the acceleration and precision of word recognition, therefore obstructing the reading fluency, as well as text comprehension. Although it is not an oculomotor disease, readers with dyslexia have shown different eye movements than typically developing subjects during text reading. The majority of existing screening techniques for dyslexia's detection employ features associated with the aberrant visual scanning of reading text seen in dyslexia, whilst ignoring completely the behavior of the underlying data generating dynamical system. To address this problem, this work proposes a novel self-tuned architecture for feature extraction by modeling directly the inherent dynamics of wearable sensor data in higher-dimensional phase spaces via multidimensional recurrence quantification analysis (RQA) based on state matrices. Experimental evaluation on real data demonstrates the improved recognition accuracy of our method when compared against its state-of-the-art vector-based RQA counterparts.}, keywords = {Dyslexia Screening, Multidimensional Recurrence Quantification Analysis, Non-Linear Data Analysis, Wearable Sensors}, pubstate = {published}, tppubtype = {inproceedings} } Dyslexia is a neurodevelopmental learning disorder that affects the acceleration and precision of word recognition, therefore obstructing the reading fluency, as well as text comprehension. Although it is not an oculomotor disease, readers with dyslexia have shown different eye movements than typically developing subjects during text reading. The majority of existing screening techniques for dyslexia's detection employ features associated with the aberrant visual scanning of reading text seen in dyslexia, whilst ignoring completely the behavior of the underlying data generating dynamical system. To address this problem, this work proposes a novel self-tuned architecture for feature extraction by modeling directly the inherent dynamics of wearable sensor data in higher-dimensional phase spaces via multidimensional recurrence quantification analysis (RQA) based on state matrices. Experimental evaluation on real data demonstrates the improved recognition accuracy of our method when compared against its state-of-the-art vector-based RQA counterparts. |
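To illustrate the kind of features recurrence quantification analysis provides, the sketch below embeds a 1-D signal into phase space, thresholds pairwise distances into a recurrence plot, and reports the recurrence rate. The multidimensional, state-matrix-based RQA of the paper and its self-tuning of parameters are not shown; the embedding dimension, delay, and threshold below are arbitrary.

import numpy as np

def embed(x, dim=3, tau=2):
    """Time-delay (Takens) embedding of a 1-D series into `dim`-dimensional state vectors."""
    n = len(x) - (dim - 1) * tau
    return np.stack([x[i * tau: i * tau + n] for i in range(dim)], axis=1)

def recurrence_matrix(states, eps):
    """Binary recurrence plot: entry (i, j) is 1 when states i and j are closer than eps."""
    d = np.linalg.norm(states[:, None, :] - states[None, :, :], axis=-1)
    return (d < eps).astype(int)

x = np.sin(np.linspace(0, 8 * np.pi, 400)) + 0.05 * np.random.default_rng(0).standard_normal(400)
R = recurrence_matrix(embed(x), eps=0.3)
recurrence_rate = R.mean()                 # one of the standard RQA features
print(recurrence_rate)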
Perez-Lopez, Andres ; Stefanakis, Nikolaos Analysis of Spherical Isotropic Noise Fields with an A-format Tetrahedral Microphone Journal Article Journal of the Acoustical Society of America, 146 (4), 2019. Abstract | BibTeX | Tags: Ambisonics, spatial coherence, Spherically Isotropic Sound Field, Tetrahedral Microphone @article{Stefanakis_2019c, title = {Analysis of Spherical Isotropic Noise Fields with an A-format Tetrahedral Microphone}, author = {Perez-Lopez, Andres and Stefanakis, Nikolaos}, year = {2019}, date = {2019-10-04}, journal = {Journal of the Acoustical Society of America}, volume = {146}, number = {4}, abstract = {Several applications in spatial audio signal processing benefit from the knowledge of the diffuseness of the sound field. In this paper, several experiments are performed to determine the response of a tetrahedral microphone array under a spherically isotropic sound field. The data were gathered with numerical simulations and real recordings using a spherical loudspeaker array. The signal analysis, performed in the microphone signal and spherical harmonic domains, reveals the characteristic coherence curves of spherical isotropic noise as a function of the frequency.}, keywords = {Ambisonics, spatial coherence, Spherically Isotropic Sound Field, Tetrahedral Microphone}, pubstate = {published}, tppubtype = {article} } Several applications in spatial audio signal processing benefit from the knowledge of the diffuseness of the sound field. In this paper, several experiments are performed to determine the response of a tetrahedral microphone array under a spherically isotropic sound field. The data were gathered with numerical simulations and real recordings using a spherical loudspeaker array. The signal analysis, performed in the microphone signal and spherical harmonic domains, reveals the characteristic coherence curves of spherical isotropic noise as a function of the frequency. |
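The characteristic coherence curve referred to above has a well-known closed form for two omnidirectional sensors in a spherically isotropic field, namely sinc(kd) = sin(kd)/(kd) for spacing d and wavenumber k. The snippet below only evaluates that reference curve; the paper's A-format and spherical-harmonic-domain analysis of the tetrahedral array is not reproduced, and the 5 cm spacing is a placeholder.

import numpy as np

def diffuse_field_coherence(f, d, c=343.0):
    """Theoretical spatial coherence of two omnidirectional sensors a distance d (m)
    apart in a spherically isotropic noise field: sinc(kd) = sin(kd)/(kd)."""
    k = 2 * np.pi * f / c
    return np.sinc(k * d / np.pi)          # np.sinc(x) = sin(pi*x)/(pi*x)

freqs = np.linspace(20, 8000, 5)
print(diffuse_field_coherence(freqs, d=0.05))   # 5 cm spacing as an example of a compact array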
Tsagkatakis, Grigorios ; Aidini, Anastasia ; Fotiadou, Konstantina ; Giannopoulos, Michalis ; Pentari, Anastasia ; Tsakalides, Panagiotis Survey of Deep-Learning Approaches for Remote Sensing Observation Enhancement Journal Article Sensors, 19 (18), pp. 3929, 2019. Abstract | Links | BibTeX | Tags: Convolutional Neural Networks, Deep Learning, Denoising, Earth Observations, Fusion, Generative Adversarial Networks, Pan-Sharpening, Satellite Imaging, Super-Resolution @article{Tsagkatakis_2019d, title = {Survey of Deep-Learning Approaches for Remote Sensing Observation Enhancement}, author = {Tsagkatakis, Grigorios and Aidini, Anastasia and Fotiadou, Konstantina and Giannopoulos, Michalis and Pentari, Anastasia and Tsakalides, Panagiotis}, doi = {10.3390/s19183929}, year = {2019}, date = {2019-09-12}, journal = {Sensors}, volume = {19}, number = {18}, pages = {3929}, abstract = {Deep Learning, and Deep Neural Networks in particular, have established themselves as the new norm in signal and data processing, achieving state-of-the-art performance in image, audio, and natural language understanding. In remote sensing, a large body of research has been devoted to the application of deep learning for typical supervised learning tasks such as classification. Less yet equally important effort has also been allocated to addressing the challenges associated with the enhancement of low-quality observations from remote sensing platforms. Addressing such channels is of paramount importance, both in itself, since high-altitude imaging, environmental conditions, and imaging systems trade-offs lead to low-quality observation, as well as to facilitate subsequent analysis, such as classification and detection. In this paper, we provide a comprehensive review of deep-learning methods for the enhancement of remote sensing observations, focusing on critical tasks including single and multi-band super-resolution, denoising, restoration, pan-sharpening, and fusion, among others. In addition to the detailed analysis and comparison of recently presented approaches, different research avenues which could be explored in the future are also discussed.}, keywords = {Convolutional Neural Networks, Deep Learning, Denoising, Earth Observations, Fusion, Generative Adversarial Networks, Pan-Sharpening, Satellite Imaging, Super-Resolution}, pubstate = {published}, tppubtype = {article} } Deep Learning, and Deep Neural Networks in particular, have established themselves as the new norm in signal and data processing, achieving state-of-the-art performance in image, audio, and natural language understanding. In remote sensing, a large body of research has been devoted to the application of deep learning for typical supervised learning tasks such as classification. Less yet equally important effort has also been allocated to addressing the challenges associated with the enhancement of low-quality observations from remote sensing platforms. Addressing such channels is of paramount importance, both in itself, since high-altitude imaging, environmental conditions, and imaging systems trade-offs lead to low-quality observation, as well as to facilitate subsequent analysis, such as classification and detection. In this paper, we provide a comprehensive review of deep-learning methods for the enhancement of remote sensing observations, focusing on critical tasks including single and multi-band super-resolution, denoising, restoration, pan-sharpening, and fusion, among others. 
In addition to the detailed analysis and comparison of recently presented approaches, different research avenues which could be explored in the future are also discussed. |
Aidini, Anastasia ; Tsagkatakis, Grigorios ; Tsakalides, Panagiotis Compression of High-Dimensional Multispectral Image Time Series Using Tensor Decomposition Learning In Proceedings Proc. European Signal Processing Conference (EUSIPCO), 2019. Abstract | BibTeX | Tags: Compression, CP Decomposition, High-order Tensors, Learning, Multispectral Image Time Series @inproceedings{Aidini_2019a, title = {Compression of High-Dimensional Multispectral Image Time Series Using Tensor Decomposition Learning}, author = {Aidini, Anastasia and Tsagkatakis, Grigorios and Tsakalides, Panagiotis}, year = {2019}, date = {2019-09-09}, booktitle = {Proc. European Signal Processing Conference (EUSIPCO)}, abstract = {Multispectral imaging is widely used in many fields, such as in medicine and earth observation, as it provides valuable spatial, spectral and temporal information about the scene. It is of paramount importance that the large amount of images collected over time, and organized in multidimensional arrays known as tensors, be efficiently compressed in order to be stored or transmitted. In this paper, we present a compression algorithm which involves a training process and employs a symbol encoding dictionary. During training, we derive specially structured tensors from a given image time sequence using the CANDECOMP/PARAFAC (CP) decomposition. During runtime, every new image time sequence is quantized and encoded into a vector of coefficients corresponding to the learned CP decomposition. Experimental results on sequences of real satellite images demonstrate that we can efficiently handle higher-order tensors and obtain the decompressed data by composing the learned tensors by means of the received vector of coefficients, thus achieving a high compression ratio. }, keywords = {Compression, CP Decomposition, High-order Tensors, Learning, Multispectral Image Time Series}, pubstate = {published}, tppubtype = {inproceedings} } Multispectral imaging is widely used in many fields, such as in medicine and earth observation, as it provides valuable spatial, spectral and temporal information about the scene. It is of paramount importance that the large amount of images collected over time, and organized in multidimensional arrays known as tensors, be efficiently compressed in order to be stored or transmitted. In this paper, we present a compression algorithm which involves a training process and employs a symbol encoding dictionary. During training, we derive specially structured tensors from a given image time sequence using the CANDECOMP/PARAFAC (CP) decomposition. During runtime, every new image time sequence is quantized and encoded into a vector of coefficients corresponding to the learned CP decomposition. Experimental results on sequences of real satellite images demonstrate that we can efficiently handle higher-order tensors and obtain the decompressed data by composing the learned tensors by means of the received vector of coefficients, thus achieving a high compression ratio. |
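For orientation, the sketch below fits a plain CANDECOMP/PARAFAC model to a third-order toy tensor with unregularized alternating least squares and checks the reconstruction error. It only illustrates the decomposition that the compression scheme above builds on; the offline learning of structured factor tensors, the quantization, and the symbol encoding of the coefficient vector are not shown, and the tensor sizes and rank are made up.

import numpy as np

def khatri_rao(U, V):
    """Column-wise Kronecker product; rows indexed by (row of U, row of V) in C order."""
    R = U.shape[1]
    return np.einsum('ir,jr->ijr', U, V).reshape(-1, R)

def cp_als(X, rank, n_iter=50, seed=0):
    """Plain alternating least squares for a rank-`rank` CP model of a 3-way tensor."""
    I, J, K = X.shape
    rng = np.random.default_rng(seed)
    A, B, C = (rng.standard_normal((d, rank)) for d in (I, J, K))
    for _ in range(n_iter):
        M = khatri_rao(B, C)
        A = X.reshape(I, -1) @ M @ np.linalg.pinv(M.T @ M)
        M = khatri_rao(A, C)
        B = X.transpose(1, 0, 2).reshape(J, -1) @ M @ np.linalg.pinv(M.T @ M)
        M = khatri_rao(A, B)
        C = X.transpose(2, 0, 1).reshape(K, -1) @ M @ np.linalg.pinv(M.T @ M)
    return A, B, C

# Toy multispectral time series: 8 time steps x 16 pixels x 6 bands, true rank 3.
rng = np.random.default_rng(1)
A0, B0, C0 = rng.standard_normal((8, 3)), rng.standard_normal((16, 3)), rng.standard_normal((6, 3))
X = np.einsum('ir,jr,kr->ijk', A0, B0, C0)
A, B, C = cp_als(X, rank=3)
X_hat = np.einsum('ir,jr,kr->ijk', A, B, C)
print("relative error:", np.linalg.norm(X - X_hat) / np.linalg.norm(X))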
Doutsi, Effrosyni ; Tzagkarakis, George ; Tsakalides, Panagiotis Neuro-Inspired Compression of RGB Images In Proceedings Proc. European Signal Processing Conference (EUSIPCO), 2019. Abstract | BibTeX | Tags: Edge Detection, FR-IQA, Leaky Integrate-and-Fire Model, NR-IQA, Retina-inspired Filter, Spikes @inproceedings{Doutsi_2019b, title = {Neuro-Inspired Compression of RGB Images}, author = {Doutsi, Effrosyni and Tzagkarakis, George and Tsakalides, Panagiotis}, year = {2019}, date = {2019-09-09}, booktitle = {Proc. European Signal Processing Conference (EUSIPCO)}, abstract = {During the last decade, there has been an ever-increasing interest in the decryption and analysis of the human visual system, which offers an intelligent mechanism for capturing and transforming the visual stimulus into a very dense and informative code of spikes. The compression capacity of the visual system is beyond the latest image and video compression standards, motivating the image processing community to investigate whether a neuro-inspired system that performs according to the visual system could outperform the state-of-the-art image compression methods. Inspired by neuroscience models, this paper proposes for the first time a neuro-inspired compression method for RGB images. Specifically, each color channel is processed by a retina-inspired filter combined with a compression scheme based on spikes. We demonstrate that, even for a very small number of bits per pixel (bpp), our proposed compression system is capable of extracting faithful and exact knowledge from the input scene, compared against JPEG, which generates strong artifacts. To evaluate the performance of the proposed algorithm we use Full-Reference (FR) and No-Reference (NR) Image Quality Assessments (IQA). We further validate the performance improvements by applying an edge detector on the decompressed images, illustrating that contour extraction is much more precise for the images compressed via our neuro-inspired algorithm.}, keywords = {Edge Detection, FR-IQA, Leaky Integrate-and-Fire Model, NR-IQA, Retina-inspired Filter, Spikes}, pubstate = {published}, tppubtype = {inproceedings} } During the last decade, there has been an ever-increasing interest in the decryption and analysis of the human visual system, which offers an intelligent mechanism for capturing and transforming the visual stimulus into a very dense and informative code of spikes. The compression capacity of the visual system is beyond the latest image and video compression standards, motivating the image processing community to investigate whether a neuro-inspired system that performs according to the visual system could outperform the state-of-the-art image compression methods. Inspired by neuroscience models, this paper proposes for the first time a neuro-inspired compression method for RGB images. Specifically, each color channel is processed by a retina-inspired filter combined with a compression scheme based on spikes. We demonstrate that, even for a very small number of bits per pixel (bpp), our proposed compression system is capable of extracting faithful and exact knowledge from the input scene, compared against JPEG, which generates strong artifacts. To evaluate the performance of the proposed algorithm we use Full-Reference (FR) and No-Reference (NR) Image Quality Assessments (IQA). 
We further validate the performance improvements by applying an edge detector on the decompressed images, illustrating that contour extraction is much more precise for the images compressed via our neuro-inspired algorithm. |
Doutsi, Effrosyni ; Fillatre, Lionel ; Antonini, Marc Efficiency of the bio-inspired Leaky Integrate-and-Fire neuron for signal coding In Proceedings Proc. European Signal Processing Conference (EUSIPCO), 2019. Abstract | BibTeX | Tags: Entropy, Leaky Integrate-and-Fire (LIF), Neuro-inspired Quantization, Spikes, Uniform Scalar Quantizer @inproceedings{Doutsi_2019a, title = {Efficiency of the bio-inspired Leaky Integrate-and-Fire neuron for signal coding}, author = {Doutsi, Effrosyni and Fillatre, Lionel and Antonini, Marc}, year = {2019}, date = {2019-09-09}, booktitle = {Proc. European Signal Processing Conference (EUSIPCO)}, abstract = {The goal of this paper is to investigate whether purely neuro-mimetic architectures are more efficient for signal compression than architectures that combine neuroscience and state-of-the-art models. We are motivated to produce spikes, using the LIF model, in order to compress images. Seeking solutions to improve the efficiency of the LIF in terms of the memory cost, we compare two different quantization approaches; the Neuro-inspired Quantization (NQ) and the Conventional Quantization (CQ). We present that when the LIF model and the NQ appear in the same architecture, the performance of the compression system is higher compared to an architecture that consists of the LIF model and the CQ. The main reason of this occurrence is the dynamic properties embedded in the neuro-mimetic models. As a consequence, we first study which are the dynamic properties of the recently released (NQ) which is an intuitive way of counting the number of spikes. Moreover, we show that some parameters of the NQ (i.e. the observation window and the resistance) strongly influence its behavior that ranges from non-uniform to uniform. As a result, the NQ is more flexible than the CQ when it is applied to real data while for the same bit rate it ensures higher reconstruction quality.}, keywords = {Entropy, Leaky Integrate-and-Fire (LIF), Neuro-inspired Quantization, Spikes, Uniform Scalar Quantizer}, pubstate = {published}, tppubtype = {inproceedings} } The goal of this paper is to investigate whether purely neuro-mimetic architectures are more efficient for signal compression than architectures that combine neuroscience and state-of-the-art models. We are motivated to produce spikes, using the LIF model, in order to compress images. Seeking solutions to improve the efficiency of the LIF in terms of the memory cost, we compare two different quantization approaches; the Neuro-inspired Quantization (NQ) and the Conventional Quantization (CQ). We present that when the LIF model and the NQ appear in the same architecture, the performance of the compression system is higher compared to an architecture that consists of the LIF model and the CQ. The main reason of this occurrence is the dynamic properties embedded in the neuro-mimetic models. As a consequence, we first study which are the dynamic properties of the recently released (NQ) which is an intuitive way of counting the number of spikes. Moreover, we show that some parameters of the NQ (i.e. the observation window and the resistance) strongly influence its behavior that ranges from non-uniform to uniform. As a result, the NQ is more flexible than the CQ when it is applied to real data while for the same bit rate it ensures higher reconstruction quality. |
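A minimal discrete-time leaky integrate-and-fire simulation helps make the "counting spikes" idea concrete: a constant input drives the membrane potential, and the number of threshold crossings within the observation window serves as a scalar code. All constants below (time step, time constant, resistance, threshold, window length) are illustrative and not the values studied in the paper.

import numpy as np

def lif_spike_count(I_in, T=0.1, dt=1e-4, tau=0.02, R=1.0, v_th=1.0, v_reset=0.0):
    """Discrete-time leaky integrate-and-fire neuron driven by a constant input
    current I_in for T seconds; returns the number of spikes fired.  The spike
    count over the observation window acts as a scalar, rate-based code."""
    v, spikes = 0.0, 0
    for _ in range(int(T / dt)):
        v += dt / tau * (-v + R * I_in)    # leaky integration of the input
        if v >= v_th:                      # threshold crossing -> emit a spike and reset
            spikes += 1
            v = v_reset
    return spikes

# Larger inputs yield more spikes: a non-uniform quantization of input intensity.
print([lif_spike_count(I) for I in (0.5, 1.2, 2.0, 4.0)])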
Aspri, Maria ; Tsagkatakis, Grigorios ; Panousopoulou, Athanasia ; Tsakalides, Panagiotis On Realizing Distributed Deep Neural Networks: An Astrophysics Case Study In Proceedings Proc. European Signal Processing Conference (EUSIPCO), 2019. Abstract | BibTeX | Tags: Convolutional Neural Networks, Distributed Deep Learning, Spectroscopic Redshift Estimation @inproceedings{Aspri_2019a, title = {On Realizing Distributed Deep Neural Networks: An Astrophysics Case Study}, author = {Aspri, Maria and Tsagkatakis, Grigorios and Panousopoulou, Athanasia and Tsakalides, Panagiotis}, year = {2019}, date = {2019-09-09}, booktitle = {Proc. European Signal Processing Conference (EUSIPCO)}, abstract = {Deep Learning architectures are extensively adopted as the core machine learning framework in both industry and academia. With large amounts of data at their disposal, these architectures can autonomously extract highly descriptive features for any type of input signals. However, the extensive volume of data combined with the demand for high computational resources, are introducing new challenges in terms of computing platforms. The work herein presented explores the performance of Deep Learning in the field of astrophysics, when conducted on a distributed environment. To set up such an environment, we capitalize on TensorFlowOnSpark, which combines both TensorFlow's dataflow graphs and Spark's cluster management. We report on the performance of a CPU cluster, considering both the number of training nodes and data distribution, while quantifying their effects via the metrics of training accuracy and training loss. Our results indicate that distribution has a positive impact on Deep Learning, since it accelerates our network's convergence for a given number of epochs. However, network traffic adds a significant amount of overhead, rendering it suitable for mostly very deep models or in big Data Analytics.}, keywords = {Convolutional Neural Networks, Distributed Deep Learning, Spectroscopic Redshift Estimation}, pubstate = {published}, tppubtype = {inproceedings} } Deep Learning architectures are extensively adopted as the core machine learning framework in both industry and academia. With large amounts of data at their disposal, these architectures can autonomously extract highly descriptive features for any type of input signals. However, the extensive volume of data combined with the demand for high computational resources, are introducing new challenges in terms of computing platforms. The work herein presented explores the performance of Deep Learning in the field of astrophysics, when conducted on a distributed environment. To set up such an environment, we capitalize on TensorFlowOnSpark, which combines both TensorFlow's dataflow graphs and Spark's cluster management. We report on the performance of a CPU cluster, considering both the number of training nodes and data distribution, while quantifying their effects via the metrics of training accuracy and training loss. Our results indicate that distribution has a positive impact on Deep Learning, since it accelerates our network's convergence for a given number of epochs. However, network traffic adds a significant amount of overhead, rendering it suitable for mostly very deep models or in big Data Analytics. |
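The paper runs TensorFlowOnSpark on a CPU cluster; purely as a rough single-machine analogue of the data-parallel idea, the sketch below trains a Keras model under tf.distribute.MirroredStrategy on synthetic data. The model, data, and hyperparameters are placeholders, and this is not the distributed setup evaluated in the paper.

import numpy as np
import tensorflow as tf

# Synthetic stand-in for spectroscopic inputs: 1000 samples, 64 features, 5 classes.
x = np.random.rand(1000, 64).astype("float32")
y = np.random.randint(0, 5, size=(1000,))

# Data-parallel training: each replica receives a slice of every batch and
# gradients are averaged across whatever local devices are visible.
strategy = tf.distribute.MirroredStrategy()
with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(64,)),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(5, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])

model.fit(x, y, epochs=2, batch_size=64, verbose=2)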
Zervou, Michaela Areti ; Tzagkarakis, George ; Panousopoulou, Athanasia ; Tsakalides, Panagiotis A Self-Tuned Architecture for Human Activity Recognition Based on a Dynamical Recurrence Analysis of Wearable Sensor Data In Proceedings Proc. European Signal Processing Conference (EUSIPCO), 2019. Abstract | BibTeX | Tags: Human Activity Recognition, motif discovery, nonlinear data analysis, Recurrence Quantification Analysis, Wearable Sensors @inproceedings{Zervou_2019a, title = {A Self-Tuned Architecture for Human Activity Recognition Based on a Dynamical Recurrence Analysis of Wearable Sensor Data}, author = {Zervou, Michaela Areti and Tzagkarakis, George and Panousopoulou, Athanasia and Tsakalides, Panagiotis}, year = {2019}, date = {2019-09-09}, booktitle = {Proc. European Signal Processing Conference (EUSIPCO)}, abstract = {Human activity recognition (HAR) is encountered in a plethora of applications, such as pervasive health care systems and smart homes. The majority of existing HAR techniques employ features extracted from symbolic or frequency-domain representations of the associated data, while completely ignoring the behavior of the underlying data-generating dynamical system. To address this problem, this work proposes a novel self-tuned architecture for feature extraction and activity recognition that directly models the inherent dynamics of wearable sensor data in higher-dimensional phase spaces, which encode state recurrences for each individual activity. Experimental evaluation on real data of leisure activities demonstrates an improved recognition accuracy of our method when compared against a state-of-the-art motif-based approach using symbolic representations.}, keywords = {Human Activity Recognition, motif discovery, nonlinear data analysis, Recurrence Quantification Analysis, Wearable Sensors}, pubstate = {published}, tppubtype = {inproceedings} } Human activity recognition (HAR) is encountered in a plethora of applications, such as pervasive health care systems and smart homes. The majority of existing HAR techniques employ features extracted from symbolic or frequency-domain representations of the associated data, while completely ignoring the behavior of the underlying data-generating dynamical system. To address this problem, this work proposes a novel self-tuned architecture for feature extraction and activity recognition that directly models the inherent dynamics of wearable sensor data in higher-dimensional phase spaces, which encode state recurrences for each individual activity. Experimental evaluation on real data of leisure activities demonstrates an improved recognition accuracy of our method when compared against a state-of-the-art motif-based approach using symbolic representations. |
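The phase-space reconstruction behind this kind of recurrence analysis is compact enough to sketch directly; the embedding dimension, delay and recurrence threshold below are arbitrary illustrative choices, not the self-tuned values of the paper.

```python
import numpy as np

def delay_embed(x, dim=3, tau=2):
    """Time-delay embedding of a 1-D signal into a dim-dimensional phase space."""
    n = len(x) - (dim - 1) * tau
    return np.stack([x[i * tau: i * tau + n] for i in range(dim)], axis=1)

def recurrence_matrix(x, dim=3, tau=2, eps=0.3):
    """Binary recurrence plot: 1 wherever two phase-space states are closer than eps."""
    states = delay_embed(np.asarray(x, dtype=float), dim, tau)
    dists = np.linalg.norm(states[:, None, :] - states[None, :, :], axis=-1)
    return (dists < eps).astype(int)

# Recurrence rate, one of the basic RQA features, for a noisy periodic trace.
x = np.sin(np.linspace(0, 8 * np.pi, 400)) + 0.05 * np.random.randn(400)
print("recurrence rate:", recurrence_matrix(x).mean())
```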
Roumpakis, Stylianos ; Tzagkarakis, George ; Tsakalides, Panagiotis Real-Time Prototyping of Matlab-Java Code Integration for Water Sensor Networks Applications In Proceedings Proc. European Signal Processing Conference (EUSIPCO), 2019. Abstract | BibTeX | Tags: Client-Server Model, Matlab-Java Code Integration, Real-Time Prototyping, Water Sensor Networks @inproceedings{Roubakis_2019a, title = {Real-Time Prototyping of Matlab-Java Code Integration for Water Sensor Networks Applications}, author = {Roumpakis, Stylianos and Tzagkarakis, George and Tsakalides, Panagiotis}, year = {2019}, date = {2019-09-09}, booktitle = {Proc. European Signal Processing Conference (EUSIPCO)}, abstract = {Industrial applications typically necessitate the interaction of heterogeneous software components, which makes the design of an integrated system a demanding task. Specifically, although Matlab and Java are among the most commonly used programming languages in industrial practice, each offering its own advantages, their integration for real-time code prototyping is not straightforward. Motivated by this problem, this work proposes an efficient socket-based method for integrating Matlab and Java code in order to design a data processing platform tailored to smart water sensor network scenarios. The performance of the proposed approach is evaluated on two distinct tasks, namely the recovery of missing values and temporal super-resolution from streaming data. Experimental evaluation with real pressure data reveals the superiority of our methodology, in terms of reduced execution times, when compared against two well-established alternatives: standalone applications that use input-output files to execute Matlab code in Java-based environments, and socket-based solutions implemented directly in a Matlab environment.}, keywords = {Client-Server Model, Matlab-Java Code Integration, Real-Time Prototyping, Water Sensor Networks}, pubstate = {published}, tppubtype = {inproceedings} } Industrial applications typically necessitate the interaction of heterogeneous software components, which makes the design of an integrated system a demanding task. Specifically, although Matlab and Java are among the most commonly used programming languages in industrial practice, each offering its own advantages, their integration for real-time code prototyping is not straightforward. Motivated by this problem, this work proposes an efficient socket-based method for integrating Matlab and Java code in order to design a data processing platform tailored to smart water sensor network scenarios. The performance of the proposed approach is evaluated on two distinct tasks, namely the recovery of missing values and temporal super-resolution from streaming data. Experimental evaluation with real pressure data reveals the superiority of our methodology, in terms of reduced execution times, when compared against two well-established alternatives: standalone applications that use input-output files to execute Matlab code in Java-based environments, and socket-based solutions implemented directly in a Matlab environment. |
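The underlying client-server pattern is language-agnostic; purely for illustration, the sketch below expresses it with Python's standard socket module (the paper itself pairs a Matlab-side engine with Java clients), and the endpoint and the newline-delimited framing are assumptions.

```python
import socket

HOST, PORT = "127.0.0.1", 5005   # illustrative endpoint

def send_request(payload: bytes) -> bytes:
    """Client side: send one processing request and block until the reply arrives."""
    with socket.create_connection((HOST, PORT)) as sock:
        sock.sendall(payload + b"\n")           # newline-delimited framing (assumed)
        return sock.makefile("rb").readline()

def serve_once():
    """Server side: accept one request; a real server would call the recovery routines here."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((HOST, PORT))
        srv.listen(1)
        conn, _ = srv.accept()
        with conn:
            request = conn.makefile("rb").readline()
            conn.sendall(request)               # echo back in place of actual processing
```

The two functions are meant to run in separate processes, with the server started first.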
Tzagkarakis, Christos ; Stefanakis, Nikolaos ; Tzagkarakis, George Impact Sounds Classification for Interactive Applications via Discriminative Dictionary Learning In Proceedings Proc. European Signal Processing Conference (EUSIPCO), 2019. Abstract | BibTeX | Tags: Discriminative Dictionary Sparse Coding, Impact Sound Classification, Real-time Processing, Sparse Representation Classification @inproceedings{Stefanakis_2019d, title = {Impact Sounds Classification for Interactive Applications via Discriminative Dictionary Learning}, author = {Tzagkarakis, Christos and Stefanakis, Nikolaos and Tzagkarakis, George}, year = {2019}, date = {2019-09-09}, booktitle = {Proc. European Signal Processing Conference (EUSIPCO)}, abstract = {Classification of impulsive events produced by the acoustic stimulation of everyday objects opens the door to exciting interactive applications, such as gestural control of sound synthesis. Such events may exhibit significant variability, which makes their recognition a very challenging task. Furthermore, the fact that interactive systems require an immediate response to achieve low latency in real-time scenarios poses major constraints to be overcome. This paper focuses on the design of a novel method for identifying the sound-producing objects, as well as the location of impact of each event, under a low-latency assumption. To this end, a sparse representation coding framework is adopted, based on discriminative dictionaries learned from short training and testing data. The performance of the proposed method is evaluated on a set of real impact sounds and compared against a nearest neighbor classifier. The experimental results demonstrate significant improvements of our proposed method, in terms of both classification accuracy and latency.}, keywords = {Discriminative Dictionary Sparse Coding, Impact Sound Classification, Real-time Processing, Sparse Representation Classification}, pubstate = {published}, tppubtype = {inproceedings} } Classification of impulsive events produced by the acoustic stimulation of everyday objects opens the door to exciting interactive applications, such as gestural control of sound synthesis. Such events may exhibit significant variability, which makes their recognition a very challenging task. Furthermore, the fact that interactive systems require an immediate response to achieve low latency in real-time scenarios poses major constraints to be overcome. This paper focuses on the design of a novel method for identifying the sound-producing objects, as well as the location of impact of each event, under a low-latency assumption. To this end, a sparse representation coding framework is adopted, based on discriminative dictionaries learned from short training and testing data. The performance of the proposed method is evaluated on a set of real impact sounds and compared against a nearest neighbor classifier. The experimental results demonstrate significant improvements of our proposed method, in terms of both classification accuracy and latency. |
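Residual-based sparse representation classification over per-class dictionaries can be condensed to a few lines; the random dictionaries and the use of orthogonal matching pursuit below are illustrative stand-ins for the learned discriminative dictionaries and the sparse coder of the paper.

```python
import numpy as np
from sklearn.linear_model import orthogonal_mp

def src_classify(y, dictionaries, n_nonzero=5):
    """Assign y to the class whose dictionary reconstructs it with the smallest residual."""
    residuals = [np.linalg.norm(y - D @ orthogonal_mp(D, y, n_nonzero_coefs=n_nonzero))
                 for D in dictionaries]          # one dictionary per object / impact location
    return int(np.argmin(residuals))

rng = np.random.default_rng(0)
dicts = [rng.standard_normal((64, 40)) for _ in range(3)]      # toy per-class dictionaries
w = np.zeros(40)
w[rng.choice(40, 5, replace=False)] = rng.standard_normal(5)
y = dicts[1] @ w + 0.01 * rng.standard_normal(64)              # sample generated from class 1
print(src_classify(y, dicts))                                  # expected to print 1
```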
Stivaktakis, Radamanthys ; Tsagkatakis, Grigorios ; Moraes, Bruno ; Abdalla, Filipe ; Starck, Jean-Luc ; Tsakalides, Panagiotis Convolutional Neural Networks for Spectroscopic Redshift Estimation on Euclid Data Journal Article IEEE Transactions on Big Data, 6 (3), pp. 460–476, 2019. Abstract | Links | BibTeX | Tags: Astronomy, Astrophysics, Convolutional Neural Networks, Cosmology, Deep Learning, Euclid, Spectroscopic Redshift Estimation @article{stivakt_2019_astro, title = {Convolutional Neural Networks for Spectroscopic Redshift Estimation on Euclid Data}, author = {Stivaktakis, Radamanthys and Tsagkatakis, Grigorios and Moraes, Bruno and Abdalla, Filipe and Starck, Jean-Luc and Tsakalides, Panagiotis}, doi = {10.1109/TBDATA.2019.2934475}, year = {2019}, date = {2019-08-14}, journal = {IEEE Transactions on Big Data}, volume = {6}, number = {3}, pages = {460--476}, abstract = {In this paper, we address the problem of spectroscopic redshift estimation in Astronomy. Due to the expansion of the Universe, galaxies recede from each other on average. This movement causes the emitted electromagnetic waves to shift from the blue part of the spectrum to the red part, due to the Doppler effect. Redshift is one of the most important observables in Astronomy, allowing the measurement of galaxy distances. Several sources of noise render the estimation process far from trivial, especially in the low signal-to-noise regime of many astrophysical observations. In recent years, new approaches for a reliable and automated estimation methodology have been sought out, in order to minimize our reliance on currently popular techniques that heavily involve human intervention. The fulfilment of this task has evolved into a grave necessity, in conjunction with the insatiable generation of immense amounts of astronomical data. In our work, we introduce a novel approach based on Deep Convolutional Neural Networks. The proposed methodology is extensively evaluated on a spectroscopic dataset of full spectral energy galaxy distributions, modelled after the upcoming Euclid satellite galaxy survey. Experimental analysis on observations under idealistic and realistic conditions demonstrates the potent capabilities of the proposed scheme.}, keywords = {Astronomy, Astrophysics, Convolutional Neural Networks, Cosmology, Deep Learning, Euclid, Spectroscopic Redshift Estimation}, pubstate = {published}, tppubtype = {article} } In this paper, we address the problem of spectroscopic redshift estimation in Astronomy. Due to the expansion of the Universe, galaxies recede from each other on average. This movement causes the emitted electromagnetic waves to shift from the blue part of the spectrum to the red part, due to the Doppler effect. Redshift is one of the most important observables in Astronomy, allowing the measurement of galaxy distances. Several sources of noise render the estimation process far from trivial, especially in the low signal-to-noise regime of many astrophysical observations. In recent years, new approaches for a reliable and automated estimation methodology have been sought out, in order to minimize our reliance on currently popular techniques that heavily involve human intervention. The fulfilment of this task has evolved into a grave necessity, in conjunction with the insatiable generation of immense amounts of astronomical data. In our work, we introduce a novel approach based on Deep Convolutional Neural Networks. The proposed methodology is extensively evaluated on a spectroscopic dataset of full spectral energy galaxy distributions, modelled after the upcoming Euclid satellite galaxy survey. Experimental analysis on observations under idealistic and realistic conditions demonstrates the potent capabilities of the proposed scheme. |
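As a rough sketch of the kind of deep convolutional classifier this entry refers to, one common way to cast the problem is classification of a 1-D spectrum over finely binned redshift values; the snippet assumes the tf.keras API and all sizes are illustrative.

```python
import tensorflow as tf

SPECTRUM_LEN = 4000   # illustrative number of spectral samples per galaxy
NUM_BINS = 800        # illustrative number of redshift bins

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(SPECTRUM_LEN, 1)),
    tf.keras.layers.Conv1D(16, 8, activation="relu"),
    tf.keras.layers.MaxPooling1D(4),
    tf.keras.layers.Conv1D(32, 8, activation="relu"),
    tf.keras.layers.MaxPooling1D(4),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(NUM_BINS, activation="softmax"),   # one class per redshift bin
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.summary()
```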
Fotiadou, Konstantina ; Tsagkatakis, Grigorios ; Tsakalides, Panagiotis Snapshot High Dynamic Range Imaging via Sparse Representations and Feature Learning Journal Article IEEE Transactions on Multimedia, 22 (3), pp. 688-703, 2019. BibTeX | Tags: Deep Learning, Sparse Representations @article{fotiadou_2020a, title = {Snapshot High Dynamic Range Imaging via Sparse Representations and Feature Learning}, author = {Fotiadou, Konstantina and Tsagkatakis, Grigorios and Tsakalides, Panagiotis}, year = {2019}, date = {2019-08-05}, journal = {IEEE Transactions on Multimedia}, volume = {22}, number = {3}, pages = {688-703}, keywords = {Deep Learning, Sparse Representations}, pubstate = {published}, tppubtype = {article} } |
Tsagkatakis, Grigorios ; Bloemen, Maarten ; Geelen, Bert ; Jayapala, Murali ; Tsakalides, Panagiotis Graph and Rank Regularized Matrix Recovery for Snapshot Spectral Image Demosaicing Journal Article IEEE Transactions on Computational Imaging, 5 (2), pp. 301–316, 2019. Abstract | Links | BibTeX | Tags: Demosaicing, Low-Rank and Graph Regularized Estimation, Snapshot Spectral Imaging @article{Tsagkatakis_2019c, title = {Graph and Rank Regularized Matrix Recovery for Snapshot Spectral Image Demosaicing}, author = {Tsagkatakis, Grigorios and Bloemen, Maarten and Geelen, Bert and Jayapala, Murali and Tsakalides, Panagiotis}, doi = {10.1109/TCI.2018.2888989}, year = {2019}, date = {2019-06-03}, journal = {IEEE Transactions on Computational Imaging}, volume = {5}, number = {2}, pages = {301--316}, abstract = {Snapshot spectral imaging (SSI) is a cutting-edge technology for enabling the efficient acquisition of the spatio-spectral content of dynamic scenes using miniaturized platforms. To achieve this goal, SSI architectures associate each spatial pixel with a specific spectral band, thus introducing a critical trade-off between spatial and spectral resolutions. In this paper, we propose a computational approach for the recovery of high spatial and spectral resolution content from a single exposure or a small number of exposures. We formulate the problem in a novel framework of spectral measurement matrix completion and we develop an efficient low-rank and graph regularized method for SSI demosaicing. Furthermore, we extend state-of-the-art approaches by considering more realistic sampling paradigms that incorporate information related to the spectral profile associated with each pixel. In addition to reconstruction quality, we also investigate the impact of recovery on subsequent analysis tasks, such as classification using state-of-the-art convolutional neural networks. We experimentally validate the merits of the proposed recovery scheme using synthetically generated data from indoor and satellite observations and real data obtained with an Interuniversity MicroElectronics Center (IMEC) visible range SSI camera.}, keywords = {Demosaicing, Low-Rank and Graph Regularized Estimation, Snapshot Spectral Imaging}, pubstate = {published}, tppubtype = {article} } Snapshot spectral imaging (SSI) is a cutting-edge technology for enabling the efficient acquisition of the spatio-spectral content of dynamic scenes using miniaturized platforms. To achieve this goal, SSI architectures associate each spatial pixel with a specific spectral band, thus introducing a critical trade-off between spatial and spectral resolutions. In this paper, we propose a computational approach for the recovery of high spatial and spectral resolution content from a single exposure or a small number of exposures. We formulate the problem in a novel framework of spectral measurement matrix completion and we develop an efficient low-rank and graph regularized method for SSI demosaicing. Furthermore, we extend state-of-the-art approaches by considering more realistic sampling paradigms that incorporate information related to the spectral profile associated with each pixel. In addition to reconstruction quality, we also investigate the impact of recovery on subsequent analysis tasks, such as classification using state-of-the-art convolutional neural networks. We experimentally validate the merits of the proposed recovery scheme using synthetically generated data from indoor and satellite observations and real data obtained with an Interuniversity MicroElectronics Center (IMEC) visible range SSI camera. |
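To convey the flavour of the low-rank recovery component in isolation (leaving out the graph regularizer and the realistic spectral sampling patterns treated in the paper), a soft-impute-style singular value thresholding loop for completing a partially observed pixels-by-bands matrix might look as follows.

```python
import numpy as np

def svt_complete(Y, mask, tau=0.5, n_iters=200):
    """Fill the unobserved entries of Y (mask == False) with a low-rank estimate."""
    X = np.zeros_like(Y)
    for _ in range(n_iters):
        Z = np.where(mask, Y, X)                  # keep observed entries, current guess elsewhere
        U, s, Vt = np.linalg.svd(Z, full_matrices=False)
        X = (U * np.maximum(s - tau, 0.0)) @ Vt   # soft-threshold the singular values
    return X

rng = np.random.default_rng(1)
truth = rng.standard_normal((100, 3)) @ rng.standard_normal((3, 16))   # rank-3 "pixels x bands"
mask = rng.random(truth.shape) < 0.5      # each pixel observes about half of the bands
est = svt_complete(truth * mask, mask)
# The relative error should be small when the matrix is truly low rank and sufficiently sampled.
print("relative error:", np.linalg.norm(est - truth) / np.linalg.norm(truth))
```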
Fotiadou, Konstantina ; Tsagkatakis, Grigorios ; Tsakalides, Panagiotis Alternating Direction Method of Multipliers for Semi-blind Astronomical Image Deconvolution In Proceedings ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 2157–2161, IEEE 2019. Abstract | Links | BibTeX | Tags: Astronomy, Deconvolution, Sparse Representations @inproceedings{Fotiadou_2019a, title = {Alternating Direction Method of Multipliers for Semi-blind Astronomical Image Deconvolution}, author = {Fotiadou, Konstantina and Tsagkatakis, Grigorios and Tsakalides, Panagiotis}, doi = {10.1109/ICASSP.2019.8683494}, year = {2019}, date = {2019-05-12}, booktitle = {ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)}, pages = {2157--2161}, organization = {IEEE}, abstract = {High-resolution astronomical imagery plays a critical role in multiple remote sensing applications. In this work, we introduce a novel post-acquisition computational technique aiming to recover high-quality versions of blurry and degraded astronomical observations. Additionally, the proposed scheme is able to retrieve significant information regarding the characteristic properties of the blurring kernel, i.e., the point spread function (PSF). To accomplish this goal, we exploit the mathematical frameworks of Sparse Representations and the Alternating Direction Method of Multipliers (ADMM). Experimental results demonstrate the ability of the proposed approach to synthesize high-quality astronomical imagery.}, keywords = {Astronomy, Deconvolution, Sparse Representations}, pubstate = {published}, tppubtype = {inproceedings} } High-resolution astronomical imagery plays a critical role in multiple remote sensing applications. In this work, we introduce a novel post-acquisition computational technique aiming to recover high-quality versions of blurry and degraded astronomical observations. Additionally, the proposed scheme is able to retrieve significant information regarding the characteristic properties of the blurring kernel, i.e., the point spread function (PSF). To accomplish this goal, we exploit the mathematical frameworks of Sparse Representations and the Alternating Direction Method of Multipliers (ADMM). Experimental results demonstrate the ability of the proposed approach to synthesize high-quality astronomical imagery. |
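The ADMM machinery mentioned above follows the standard split between a quadratic data-fidelity term and a proximable regularizer; the sketch below applies it to a non-blind, l1-regularized 1-D deconvolution with circular convolution, which is a deliberate simplification of the semi-blind image problem treated in the paper.

```python
import numpy as np

def admm_deconv(y, h, lam=0.02, rho=1.0, n_iters=100):
    """ADMM for (1/2)||h*x - y||^2 + lam*||x||_1 under circular convolution (non-blind)."""
    H = np.fft.fft(h, n=len(y))
    Hty = np.conj(H) * np.fft.fft(y)
    x = np.zeros_like(y); z = np.zeros_like(y); u = np.zeros_like(y)
    for _ in range(n_iters):
        # x-update: the quadratic subproblem is solved exactly in the Fourier domain
        x = np.real(np.fft.ifft((Hty + rho * np.fft.fft(z - u)) / (np.abs(H) ** 2 + rho)))
        # z-update: soft-thresholding, the proximal operator of the l1 norm
        v = x + u
        z = np.sign(v) * np.maximum(np.abs(v) - lam / rho, 0.0)
        # dual variable update
        u = u + x - z
    return z

rng = np.random.default_rng(2)
x_true = np.zeros(256)
x_true[rng.choice(256, 8, replace=False)] = rng.standard_normal(8)
h = np.array([0.2, 0.6, 0.2])                                       # a short blurring kernel
y = np.real(np.fft.ifft(np.fft.fft(h, 256) * np.fft.fft(x_true)))   # circularly blurred signal
x_hat = admm_deconv(y, h)
```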
Pitsis, George ; Tsagkatakis, Grigorios ; Kozanitis, Christos ; Kalomoiris, Ioannis ; Ioannou, Aggelos ; Dollas, Apostolos ; Katevenis, Manolis GH ; Tsakalides, Panagiotis Efficient Convolutional Neural Network Weight Compression for Space Data Classification on Multi-fpga Platforms In Proceedings ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 3917–3921, IEEE 2019. Abstract | Links | BibTeX | Tags: Astronomy, Convolutional Neural Networks, Deep Learning, fpga @inproceedings{Tsagkatakis_2019a, title = {Efficient Convolutional Neural Network Weight Compression for Space Data Classification on Multi-fpga Platforms}, author = {Pitsis, George and Tsagkatakis, Grigorios and Kozanitis, Christos and Kalomoiris, Ioannis and Ioannou, Aggelos and Dollas, Apostolos and Katevenis, Manolis GH and Tsakalides, Panagiotis}, doi = {10.1109/ICASSP.2019.8682732}, year = {2019}, date = {2019-05-12}, booktitle = {ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)}, pages = {3917--3921}, organization = {IEEE}, abstract = {Convolutional Neural Networks (CNNs) represent the cutting edge in signal analysis tasks like classification and regression. Realization of such architectures in hardware capable of performing high throughput computations, with minimal energy consumption, is a key enabling factor towards the proliferation of analysis immediately after acquisition. Our driving problem is a satellite-based remote sensing platform in which onboard signal processing and classification tasks must take place, given strict bandwidth and energy limitations. In this work, we exploit the implementation of a CNN on Field Programmable Gate Array (FPGA) platforms and explore different ways to minimize the impact of different hardware restrictions to performance. We compare our results against competing technologies such as Graphics Processing Units (GPU) in terms of throughput, latency and energy consumption. In actual experimental runs we demonstrate competitive latency and throughput of the FPGA platform vs. GPU technology at an order-of-magnitude energy savings, which is especially important for space-borne computing.}, keywords = {Astronomy, Convolutional Neural Networks, Deep Learning, fpga}, pubstate = {published}, tppubtype = {inproceedings} } Convolutional Neural Networks (CNNs) represent the cutting edge in signal analysis tasks like classification and regression. Realization of such architectures in hardware capable of performing high throughput computations, with minimal energy consumption, is a key enabling factor towards the proliferation of analysis immediately after acquisition. Our driving problem is a satellite-based remote sensing platform in which onboard signal processing and classification tasks must take place, given strict bandwidth and energy limitations. In this work, we exploit the implementation of a CNN on Field Programmable Gate Array (FPGA) platforms and explore different ways to minimize the impact of different hardware restrictions to performance. We compare our results against competing technologies such as Graphics Processing Units (GPU) in terms of throughput, latency and energy consumption. In actual experimental runs we demonstrate competitive latency and throughput of the FPGA platform vs. GPU technology at an order-of-magnitude energy savings, which is especially important for space-borne computing. |
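The weight-compression idea can be illustrated with plain post-training uniform quantization of a layer's weights; this generic sketch is not the specific compression scheme evaluated on the FPGA platform in the paper.

```python
import numpy as np

def quantize_weights(W, n_bits=8):
    """Uniformly quantize a weight tensor to n_bits; returns (codes, scale, offset)."""
    w_min, w_max = float(W.min()), float(W.max())
    scale = (w_max - w_min) / (2 ** n_bits - 1) if w_max > w_min else 1.0
    codes = np.round((W - w_min) / scale).astype(np.uint8 if n_bits <= 8 else np.uint16)
    return codes, scale, w_min

def dequantize(codes, scale, offset):
    return codes.astype(np.float32) * scale + offset

W = np.random.randn(3, 3, 16, 32).astype(np.float32)   # a conv kernel of illustrative shape
codes, scale, offset = quantize_weights(W)
W_hat = dequantize(codes, scale, offset)
print("max abs error:", np.abs(W - W_hat).max(), "| 8 bits stored per weight instead of 32")
```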
Vernardos, Georgios ; Tsagkatakis, Grigorios Quasar Microlensing Light-Curve Analysis using Deep Machine Learning Journal Article Monthly Notices of the Royal Astronomical Society, 486 (2), pp. 1944–1952, 2019, ISSN: 0035-8711. Abstract | Links | BibTeX | Tags: Astronomy, Convolutional Neural Networks, Deep Learning, machine learning @article{Tsagkatakis_2019b, title = {Quasar Microlensing Light-Curve Analysis using Deep Machine Learning}, author = {Vernardos, Georgios and Tsagkatakis, Grigorios}, doi = {10.1093/mnras/stz868}, issn = {0035-8711}, year = {2019}, date = {2019-04-01}, journal = {Monthly Notices of the Royal Astronomical Society}, volume = {486}, number = {2}, pages = {1944--1952}, abstract = {We introduce a deep machine learning approach to studying quasar microlensing light curves for the first time by analysing hundreds of thousands of simulated light curves with respect to the accretion disc size and temperature profile. Our results indicate that it is possible to successfully classify very large numbers of diverse light-curve data and measure the accretion disc structure. The detailed shape of the accretion disc brightness profile is found to play a negligible role. The speed and efficiency of our deep machine learning approach is ideal for quantifying physical properties in a ‘big-data’ problem set-up. This proposed approach looks promising for analysing decade-long light curves for thousands of microlensed quasars, expected to be provided by the Large Synoptic Survey Telescope.}, keywords = {Astronomy, Convolutional Neural Networks, Deep Learning, machine learning}, pubstate = {published}, tppubtype = {article} } We introduce a deep machine learning approach to studying quasar microlensing light curves for the first time by analysing hundreds of thousands of simulated light curves with respect to the accretion disc size and temperature profile. Our results indicate that it is possible to successfully classify very large numbers of diverse light-curve data and measure the accretion disc structure. The detailed shape of the accretion disc brightness profile is found to play a negligible role. The speed and efficiency of our deep machine learning approach is ideal for quantifying physical properties in a ‘big-data’ problem set-up. This proposed approach looks promising for analysing decade-long light curves for thousands of microlensed quasars, expected to be provided by the Large Synoptic Survey Telescope. |
Stefanakis, Nikolaos Efficient Implementation of Superdirective Beamforming in a Half-Space Environment Journal Article Journal of the Acoustical Society of America, 145 (3), pp. 1293–1302, 2019. Abstract | BibTeX | Tags: half-space environment, spatial coherence, Superdirective beamforming @article{Stefanakis_2019a, title = {Efficient Implementation of Superdirective Beamforming in a Half-Space Environment}, author = {Stefanakis, Nikolaos}, year = {2019}, date = {2019-03-01}, journal = {Journal of the Acoustical Society of America}, volume = {145}, number = {3}, pages = {1293--1302}, abstract = {This paper considers a planar microphone array placed in front of a room wall, so that the microphone array plane is perpendicular to that of the wall. For this arrangement, a so-called half-space propagation model has recently been proposed, which accounts for the joint contribution of the direct path and the earliest reflection introduced by the adjacent wall. Based on this propagation model, a numerical process is proposed to estimate a model of the diffuse-noise spatial coherence that accounts for the presence of the reflecting surface. The suggested noise covariance model is used to extend the superdirective beamformer to the half-space setting, achieving notable improvements in performance compared with a more typical implementation that relies on the spherical isotropic coherence model.}, keywords = {half-space environment, spatial coherence, Superdirective beamforming}, pubstate = {published}, tppubtype = {article} } This paper considers a planar microphone array placed in front of a room wall, so that the microphone array plane is perpendicular to that of the wall. For this arrangement, a so-called half-space propagation model has recently been proposed, which accounts for the joint contribution of the direct path and the earliest reflection introduced by the adjacent wall. Based on this propagation model, a numerical process is proposed to estimate a model of the diffuse-noise spatial coherence that accounts for the presence of the reflecting surface. The suggested noise covariance model is used to extend the superdirective beamformer to the half-space setting, achieving notable improvements in performance compared with a more typical implementation that relies on the spherical isotropic coherence model. |
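For reference, the classical free-field superdirective weights that the paper generalizes to a half-space can be written in a few lines: an MVDR solution with the spherically isotropic (diffuse) coherence as the noise model. The geometry, frequency and diagonal loading below are arbitrary illustrative values.

```python
import numpy as np

def superdirective_weights(mic_pos, look_dir, f, c=343.0, loading=1e-3):
    """Superdirective (MVDR) weights under a spherically isotropic diffuse-noise coherence."""
    d_ij = np.linalg.norm(mic_pos[:, None, :] - mic_pos[None, :, :], axis=-1)
    Gamma = np.sinc(2.0 * f * d_ij / c)              # sin(kd)/(kd); np.sinc is sin(pi x)/(pi x)
    Gamma += loading * np.eye(len(mic_pos))          # diagonal loading for robustness
    delays = mic_pos @ look_dir / c                  # plane-wave delays toward the look direction
    d = np.exp(-2j * np.pi * f * delays)             # steering vector
    Gi_d = np.linalg.solve(Gamma, d)
    return Gi_d / (d.conj() @ Gi_d)

# Four-microphone linear array with 5 cm spacing, steered broadside at 2 kHz.
mics = np.stack([np.arange(4) * 0.05, np.zeros(4), np.zeros(4)], axis=1)
w = superdirective_weights(mics, np.array([0.0, 1.0, 0.0]), 2000.0)
# The distortionless constraint means the response toward the look direction is ~1.
print(abs(w.conj() @ np.exp(-2j * np.pi * 2000.0 * (mics @ [0.0, 1.0, 0.0]) / 343.0)))
```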
Stefanakis, Nikolaos ; Mastorakis, Yannis ; Alexandridis, Anastasios ; Mouchtaris, Athanasios Automating Mixing of User-Generated Audio Recordings from the Same Event Journal Article Journal of the Audio Engineering Society, 67 (4), 2019. Abstract | BibTeX | Tags: automatic mixing, normalization, user-generated content, user-generated recordings @article{Stefanakis_2019b, title = {Automating Mixing of User-Generated Audio Recordings from the Same Event}, author = {Stefanakis, Nikolaos and Mastorakis, Yannis and Alexandridis, Anastasios and Mouchtaris, Athanasios}, year = {2019}, date = {2019-03-01}, journal = {Journal of the Audio Engineering Society}, volume = {67}, number = {4}, abstract = {We present a systematic approach for audio mixing based on synchronized User-Generated audio Recordings (UGRs), i.e., audio recordings contributed by users attending the same public event. We discuss the challenges related to creating a mixture from such recordings, mainly due to the fact that each audio stream spans a different portion of the event of interest and comes with different signal-level characteristics. We propose an approach that combines the available recordings through a normalization step and a mixing step. The normalization step defines a gain that is fixed over time and specific to each UGR. The mixing step employs a mechanism that reduces the master gain in accordance with the number of inputs active at each time instant. An approach called orthogonal mixing is presented, which is designed under the assumption that the mixture components are mutually independent. The presented mixing process allows the combination of multiple short-duration UGRs to produce a longer audio stream with potentially better quality than any one of its constituent parts.}, keywords = {automatic mixing, normalization, user-generated content, user-generated recordings}, pubstate = {published}, tppubtype = {article} } We present a systematic approach for audio mixing based on synchronized User-Generated audio Recordings (UGRs), i.e., audio recordings contributed by users attending the same public event. We discuss the challenges related to creating a mixture from such recordings, mainly due to the fact that each audio stream spans a different portion of the event of interest and comes with different signal-level characteristics. We propose an approach that combines the available recordings through a normalization step and a mixing step. The normalization step defines a gain that is fixed over time and specific to each UGR. The mixing step employs a mechanism that reduces the master gain in accordance with the number of inputs active at each time instant. An approach called orthogonal mixing is presented, which is designed under the assumption that the mixture components are mutually independent. The presented mixing process allows the combination of multiple short-duration UGRs to produce a longer audio stream with potentially better quality than any one of its constituent parts. |
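A drastically simplified version of the two-step procedure (a time-invariant normalization gain per recording, then a master gain that backs off as more inputs become active) might look like the sketch below; the target level and activity threshold are made-up illustrative values, and the square-root law is only one way to express the power-based reasoning behind the orthogonal mixing assumption.

```python
import numpy as np

def normalization_gain(x, target_rms=0.1):
    """Step 1: a single time-invariant gain that brings a recording to a target RMS level."""
    return target_rms / (np.sqrt(np.mean(x ** 2)) + 1e-12)

def mix(tracks, active_threshold=1e-3):
    """Step 2: sum the normalized tracks, scaling down with the number of active inputs."""
    tracks = [t * normalization_gain(t) for t in tracks]
    length = max(len(t) for t in tracks)
    padded = np.stack([np.pad(t, (0, length - len(t))) for t in tracks])
    n_active = np.maximum((np.abs(padded) > active_threshold).sum(axis=0), 1)
    return padded.sum(axis=0) / np.sqrt(n_active)    # power-based back-off per sample

# Two partially overlapping "user recordings" captured at very different levels.
t = np.linspace(0, 1, 8000)
rec_a = 0.5 * np.sin(2 * np.pi * 220 * t)
rec_b = 0.05 * np.sin(2 * np.pi * 330 * t[:4000])
y = mix([rec_a, rec_b])
```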