The current era of supercomputing is referred to as the petascale (10^15 FLOPS) era. The next big HPC challenge is to break the exascale (10^18 FLOPS) barrier. However, due to technological limitations, achieving this goal will require a substantial shift toward hardware/software co-design, e.g., implementing a computational dataflow in a custom hardware accelerator, and the use of an appropriate performance metric that (a) carries as much meaning as possible and (b) translates, to the highest possible degree, into real, usable performance. Such a metric should retain these two properties even when applied to unconventional computational approaches. Our viewpoint is that the performance metric should become multidimensional, measuring more than just the number of floating-point operations per second (FLOPS): e.g., (a) performance per watt, (b) performance per cubic meter, or (c) performance per monetary unit. This publication contributes to the reputation of our research programme because it was produced in an international research group with eminent experts from the field, known for Flynn's classification of parallel computers, for the early introduction of microprocessors and GaAs technology, and for the commercialization of high-performance dataflow computers.
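The idea of a multidimensional performance metric can be made concrete with a minimal sketch; all figures below are hypothetical placeholders, not measurements from the publication:

```python
# Hypothetical figures for two systems (illustrative assumptions only).
systems = {
    "cpu_cluster": {"flops": 1.0e15, "watts": 6.0e6, "cubic_m": 400.0, "cost": 2.0e8},
    "dataflow_accel": {"flops": 0.5e15, "watts": 1.0e6, "cubic_m": 40.0, "cost": 5.0e7},
}

def multidim_metric(s):
    """Performance per watt, per cubic meter, and per monetary unit."""
    return (s["flops"] / s["watts"],
            s["flops"] / s["cubic_m"],
            s["flops"] / s["cost"])

for name, s in systems.items():
    print(name, multidim_metric(s))
```

On such hypothetical numbers, a dataflow accelerator with half the raw FLOPS can still dominate on every per-resource dimension, which is exactly why a one-dimensional FLOPS ranking can mislead.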
COBISS.SI-ID: 26715687
Spontaneous cardioinhibitory syncope was documented by a high-precision 31-channel body-surface ECG measurement. Detailed analyses of the atrial activity (P waves) in this rare recording have shown that a functional pacemaker area exists outside the sinoatrial node and that the autonomic nervous system has a profound influence on the sinoatrial node. The novelty is the evidence that this pacemaker area can be activated during spontaneous cardioinhibitory syncope, which had not been published before. This publication in a leading journal with a respectable impact factor is a great achievement for us, as we are not medical experts.
COBISS.SI-ID: 26534183
In practical finite-impulse-response (FIR) digital filter applications, it is often necessary to represent the filter coefficients with a finite number of bits. Such is the case when we want to use a fixed-point DSP processor, which is cheaper and/or faster than a floating-point one. The optimization problem that yields optimal finite-wordlength coefficients is computationally very demanding. The computation time is greatly reduced with the help of a lower bound on the increase of the approximation error. This paper presents the derivation of an improved lower bound that uses the well-known LLL lattice-basis-reduction algorithm. Test-set examples show that the speed of computation is increased considerably. The method is new and has not been published before. It appeared in a leading signal-processing journal and has great potential in FIR filter applications.
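The effect of finite-wordlength coefficients can be illustrated with a minimal sketch (simple rounding to a fixed-point grid, not the paper's LLL-based optimization): quantizing each tap to B fractional bits and measuring the resulting increase of the maximum frequency-response error. The 5-tap filter below is a hypothetical example, not one from the paper:

```python
import cmath

def freq_response(h, w):
    """Frequency response H(e^{jw}) of an FIR filter with taps h."""
    return sum(hk * cmath.exp(-1j * w * k) for k, hk in enumerate(h))

def quantize(h, bits):
    """Round each tap to a fixed-point grid with `bits` fractional bits."""
    step = 2.0 ** -bits
    return [round(hk / step) * step for hk in h]

# Hypothetical symmetric 5-tap low-pass taps (illustrative only).
h = [0.0675, 0.2500, 0.3650, 0.2500, 0.0675]

grid = [k * 3.14159 / 64 for k in range(65)]  # frequencies in [0, pi]
for bits in (4, 8, 12):
    hq = quantize(h, bits)
    err = max(abs(freq_response(h, w) - freq_response(hq, w)) for w in grid)
    print(bits, "fractional bits -> max response error", round(err, 6))
```

Naive rounding like this is generally suboptimal; the point of the paper's lower bound is to prune the search for the truly optimal finite-wordlength taps, whose error can be noticeably smaller than the rounded ones at the same wordlength.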
COBISS.SI-ID: 8955476
A new approach is proposed for synthesizing the standard 12-lead ECG from three differential leads formed by pairs of proximal electrodes on the body surface. The method is supported by a statistical analysis that gives the best personalized electrode positions. The algorithm computes the corresponding personalized transformation matrix, which is then used to synthesize the standard 12-lead ECG. The algorithm has been evaluated on 99 multichannel ECGs measured on 30 healthy subjects and 35 patients scheduled for elective cardiac surgery. It is shown that the algorithm significantly outperforms synthesis based on the EASI lead system, with medians of correlation coefficients greater than 0.954 for all 12 standard leads. The analysis shows that three is the optimal number of differential leads for practical applications. The proposed methodology is applicable in telemedicine. The paper was cited 3 times within half a year of publication, and we expect it to have a substantial further impact.
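The core per-lead computation behind such a transformation matrix (a least-squares fit of one target lead from three source leads) can be sketched in pure Python. Everything below is a generic illustration under assumed toy data, not the paper's actual algorithm or electrode configuration:

```python
def solve3(A, b):
    """Solve a 3x3 linear system by Gaussian elimination with partial pivoting."""
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(i + 1, 3):
            f = M[r][i] / M[i][i]
            for c in range(i, 4):
                M[r][c] -= f * M[i][c]
    x = [0.0] * 3
    for i in reversed(range(3)):
        x[i] = (M[i][3] - sum(M[i][c] * x[c] for c in range(i + 1, 3))) / M[i][i]
    return x

def fit_lead(X, y):
    """Least-squares coefficients a with X a ~= y, via normal equations X^T X a = X^T y."""
    XtX = [[sum(r[i] * r[j] for r in X) for j in range(3)] for i in range(3)]
    Xty = [sum(X[k][i] * y[k] for k in range(len(X))) for i in range(3)]
    return solve3(XtX, Xty)

# Toy "differential lead" samples; the target lead is an exact linear
# combination of them so the fit is easy to verify (values illustrative only).
X = [[1.0, 0.0, 0.2], [0.1, 1.0, 0.0], [0.0, 0.3, 1.0], [0.5, 0.5, 0.5]]
true_a = [0.4, -0.7, 1.1]
y = [sum(a * x for a, x in zip(true_a, row)) for row in X]
a = fit_lead(X, y)
synthesized = [sum(ai * xi for ai, xi in zip(a, row)) for row in X]
```

Repeating this fit for each of the 12 standard leads stacks the coefficient vectors into a 3-by-12 transformation matrix; the personalization in the paper lies in choosing the electrode positions and fitting the matrix per subject.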
COBISS.SI-ID: 24909607
We have proposed a new method for supervised online estimation of probabilistic discriminative models for classification tasks. The method estimates the class distributions from a stream of data in the form of Gaussian mixture models (GMMs). The reconstructive updates of the distributions are based on the recently proposed online kernel density estimator (oKDE). We keep the number of components in the model low by compressing the GMMs from time to time. We propose a new cost function that measures the loss of interclass discrimination during compression, thus guiding the compression toward simpler models that still retain their discriminative properties. The resulting classifier thus updates the GMM of each class independently, but the GMMs interact during compression through the proposed cost function. We call the proposed method the online discriminative kernel density estimator (odKDE). We compare the odKDE to the oKDE, to batch state-of-the-art kernel density estimators (KDEs), and to batch/incremental support vector machines (SVMs) on publicly available datasets. The odKDE achieves classification performance comparable to that of the best batch KDEs and SVMs, while allowing online adaptation from large datasets, and it produces models of lower complexity than the oKDE.
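The general mechanism of online kernel density estimation with compression can be sketched in one dimension: each sample adds a kernel component, and when the mixture grows too large, the two components with the closest means are merged by moment matching. This is a generic illustration only; the odKDE additionally uses the discriminative cost function described above to decide which merges are safe, which is omitted here:

```python
import math

class TinyOnlineGMM:
    """Minimal 1-D online mixture (a sketch, not the published odKDE):
    each sample adds a kernel; when the component count exceeds the limit,
    the two components with the closest means are merged by moment matching."""

    def __init__(self, bandwidth=0.3, max_components=5):
        self.h2 = bandwidth ** 2        # kernel variance for new samples
        self.max_components = max_components
        self.comps = []                 # list of [weight_count, mean, var]

    def add(self, x):
        self.comps.append([1.0, x, self.h2])
        while len(self.comps) > self.max_components:
            self._merge_closest()

    def _merge_closest(self):
        i, j = min(
            ((i, j) for i in range(len(self.comps))
                    for j in range(i + 1, len(self.comps))),
            key=lambda ij: abs(self.comps[ij[0]][1] - self.comps[ij[1]][1]),
        )
        (wi, mi, vi), (wj, mj, vj) = self.comps[i], self.comps[j]
        w = wi + wj
        m = (wi * mi + wj * mj) / w
        # Moment matching: preserve the merged pair's mean and variance.
        v = (wi * (vi + (mi - m) ** 2) + wj * (vj + (mj - m) ** 2)) / w
        self.comps[i] = [w, m, v]
        del self.comps[j]

    def pdf(self, x):
        total = sum(w for w, _, _ in self.comps)
        return sum(w / total
                   * math.exp(-(x - m) ** 2 / (2 * v))
                   / math.sqrt(2 * math.pi * v)
                   for w, m, v in self.comps)

gmm = TinyOnlineGMM()
for x in [0.0, 0.1, -0.1, 5.0, 5.2, 4.9, 0.05, 5.1]:
    gmm.add(x)
```

A classifier in this spirit would maintain one such mixture per class and assign a sample to the class with the highest prior-weighted density; the discriminative element of the odKDE is that compression of one class's mixture is penalized when it would blur the decision boundary against the others.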
COBISS.SI-ID: 9907284