Projects / Programmes source: ARIS

Parallel and Distributed Systems

Research activity

Code Science Field Subfield
2.07.00  Engineering sciences and technologies  Computer science and informatics   
2.08.00  Engineering sciences and technologies  Telecommunications   

Code Science Field
P170  Natural sciences and mathematics  Computer science, numerical analysis, systems, control 

Code Science Field
1.02  Natural Sciences  Computer and information sciences 
2.02  Engineering and Technology  Electrical engineering, Electronic engineering, Information engineering 
Keywords: parallel processing, high performance algorithms, interconnection topologies, modelling, computer simulations, advanced signal processing, big data analytics.
source: COBISS
Researchers (19)
no. Code Name and surname Research area Role Period No. of publications
1.  03302  PhD Viktor Avbelj  Systems and cybernetics  Researcher  2017 - 2019  203 
2.  23400  PhD Uroš Čibej  Computer science and informatics  Researcher  2017 - 2019  72 
3.  26454  PhD Matjaž Depolli  Computer science and informatics  Researcher  2017 - 2019  99 
4.  18188  PhD Tomaž Dobravec  Computer science and informatics  Researcher  2017 - 2019  136 
5.  09454  PhD Monika Kapus Kolar  Computer science and informatics  Researcher  2017 - 2019  174 
6.  09532  PhD Dušan Kodek  Computer science and informatics  Retired researcher  2017 - 2019  294 
7.  28366  PhD Gregor Kosec  Computer science and informatics  Researcher  2017 - 2019  161 
8.  06856  PhD Stanislav Kovačič  Systems and cybernetics  Retired researcher  2018 - 2019  390 
9.  05389  PhD Andrej Lipej  Process engineering  Researcher  2017 - 2019  159 
10.  33172  PhD Rok Mandeljc  Systems and cybernetics  Researcher  2017  56 
11.  22475  PhD Jurij Mihelič  Computer science and informatics  Researcher  2017 - 2019  144 
12.  20183  PhD Boštjan Murovec  Systems and cybernetics  Researcher  2017 - 2019  208 
13.  21310  PhD Janez Perš  Systems and cybernetics  Researcher  2017 - 2019  238 
14.  32441  PhD Aleksandra Rashkovska Koceva  Computer science and informatics  Researcher  2017 - 2019  82 
15.  04646  PhD Borut Robič  Computer science and informatics  Researcher  2017 - 2019  292 
16.  53133  PhD Rituraj Singh  Computer science and informatics  Researcher  2019 
17.  50509  PhD Jure Slak  Computer science and informatics  Junior researcher  2017 - 2019  56 
18.  12766  PhD Boštjan Slivnik  Computer science and informatics  Researcher  2017 - 2019  157 
19.  06875  PhD Roman Trobec  Computer science and informatics  Head  2017 - 2019  469 
Organisations (3)
no. Code Research organisation City Registration number No. of publications
1.  0106  Jožef Stefan Institute  Ljubljana  5051606000  90,624 
2.  1538  University of Ljubljana, Faculty of Electrical Engineering  Ljubljana  1626965  27,742 
3.  1539  University of Ljubljana, Faculty of Computer and Information Science  Ljubljana  1627023  16,234 
The research group focuses on the development and efficient integration of algorithms for parallel and distributed systems. The need for all forms of distributed computing stems from the increasing spread of multi-processor and multi-core computer systems, processors and micro-controllers. Tightly connected with the efficient exploitation of the computational resources of multiple processors is the efficient use of the interconnecting communication channels. The research group therefore also conducts research in the field of communications, particularly their software aspects (e.g. algorithms able to exploit connections efficiently) and new topologies of interconnection networks (e.g. high-radix regular topologies). To better exploit processor facilities, we also intend to investigate methods that exploit probability and approximation for better efficiency on contemporary computer architectures. Our work is related to the efficient management and analysis of large amounts of data. To support the development and analysis of systems of cooperating processes, we also develop formal methods for this purpose. The first among the frequently considered difficult problems are computer simulations based on modelling and on the solving of partial differential equations. Computer simulations are employed for the simulation of physical, biomedical, technological and similar systems when real experiments are impossible (e.g. hazardous to lives or nature), difficult to perform (e.g. simulation of the atmosphere for weather forecasting), or too expensive (e.g. simulation of turbines to raise their efficiency). The next group are NP-hard combinatorial problems, which are often most successfully solved with parallel approaches. An example is the search for graph similarities (e.g. for the detection of similarities between molecules). Although very large NP-hard problems cannot be solved exactly with the currently available computers, we can find good solutions for them in acceptable time. 
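To illustrate the kind of PDE-based simulation mentioned above, the following is a minimal sketch (not the group's actual solver, which targets far larger problems on parallel hardware): an explicit finite-difference scheme for the 1D heat equation, the textbook example of the class of problems that parallel simulations split across a spatial grid.

```python
# Minimal sketch: explicit finite differences for the 1D heat equation
# u_t = alpha * u_xx with fixed (Dirichlet) boundary values.
# All names and parameters here are illustrative assumptions.
def heat_1d(u, alpha, dx, dt, steps):
    """Advance the temperature profile u by `steps` explicit Euler steps."""
    r = alpha * dt / dx**2          # stability requires r <= 0.5
    for _ in range(steps):
        new = u[:]                  # boundaries keep their old values
        for i in range(1, len(u) - 1):
            new[i] = u[i] + r * (u[i-1] - 2*u[i] + u[i+1])
        u = new
    return u

# Usage: a hot spot in the middle of a cold rod diffuses outwards.
u0 = [0.0]*5 + [1.0] + [0.0]*5
u  = heat_1d(u0, alpha=1.0, dx=1.0, dt=0.4, steps=50)
```

In a parallel implementation each processor would own a contiguous slice of the grid and exchange only the boundary values with its neighbours at every step, which is exactly where the communication channels discussed above become the limiting factor.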
With new developments in the field of distributed algorithms, we can look for better solutions, solve larger problems, or produce solutions in a shorter time. The last group consists of large-scale multi-objective optimization problems. Modern optimization algorithms are based primarily on the imitation of nature, which itself mostly relies on distributed solution processes. Such algorithms are therefore particularly suited for development and subsequent implementation on distributed computers. Examples are algorithms based on the principles of evolution, swarm intelligence, physical and chemical processes (the best known is simulated annealing), invasive weed, plant growth (the flower and flower-pollination algorithms), etc. All the research areas outlined above share a common prerequisite: high-performance parallel computers. The main research goal of our programme group is therefore to develop and validate methodologies for efficient cooperation in the implementation of computationally intensive tasks.
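As a concrete example of the physics-inspired algorithms named above, here is a minimal sketch of simulated annealing on a toy one-dimensional objective. The function names and parameter values are illustrative assumptions, not taken from the programme's own implementations.

```python
# Minimal sketch of simulated annealing, minimizing f(x) = x^2.
# The candidate step, cooling schedule, and constants are illustrative.
import math, random

def anneal(f, x0, temp=10.0, cooling=0.95, steps=500, seed=0):
    rng = random.Random(seed)
    x, best = x0, x0
    for _ in range(steps):
        cand = x + rng.uniform(-1.0, 1.0)           # random neighbour
        delta = f(cand) - f(x)
        # accept improvements always; accept worsenings with
        # Boltzmann probability exp(-delta / temp)
        if delta < 0 or rng.random() < math.exp(-delta / temp):
            x = cand
        if f(x) < f(best):
            best = x
        temp *= cooling                             # cool the system
    return best

best = anneal(lambda x: x * x, x0=8.0)
```

The acceptance of occasional uphill moves at high temperature is what lets the method escape local minima; independent annealing runs are trivially parallel, which is one reason such nature-inspired methods map well onto distributed computers.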
Significance for science
The importance of computers, which are standard equipment in most areas of technical research and standard tools for complex tasks such as system management in industry, the global financial market, medical diagnostics, and the exploration of the Earth and the universe, is undeniable. The complexity of the tasks we are trying to solve has been growing swiftly, as demands for increasingly precise and accurate numerical predictions become higher and higher. As a result, computer algorithms are becoming more expensive, both from the financial and the time aspect and, last but not least, from the ecological point of view. Often, a real-time response plays a crucial role, which typically results in additional price increases or even prevents practical applications of computer calculations. Buying expensive high-performance computer systems is often seen as a remedy for these demands. However, the raw computational power is in most cases poorly exploited, especially since technological and physical limitations demand that computing power increase mainly through parallel architectures, whether multi-processors, multi-cores, processor accelerators or computing clusters. Currently, only massively parallel computers with more than 10^15 floating-point operations per second (1 PFLOPS), i.e. petascale systems, allow Moore's law expectations (doubling of computing performance every two years) to be met. Assuming that one floating-point operation requires one system clock period, a sequential computer with a 10 GHz system clock would reach "just" 10^10 FLOPS, which is far below the petascale. Today, the only option to break this limit on the way to the exascale era (10^18 FLOPS) is a parallel computer with millions of processors that run in an optimal, parallel way. However, parallel execution of a program is limited by the speed of the communications between the processors and by the degree to which the algorithms can be parallelized. 
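The figures above can be checked with a back-of-envelope calculation (illustrative arithmetic only, under the stated one-operation-per-cycle assumption):

```python
# Back-of-envelope check of the FLOPS figures in the text.
clock_hz        = 10e9      # 10 GHz system clock
flops_per_cycle = 1         # one floating-point op per clock period (assumption)
sequential      = clock_hz * flops_per_cycle   # 1e10 FLOPS for one core
petascale       = 1e15                         # 1 PFLOPS
exascale        = 1e18                         # 1 EFLOPS

cores_for_peta = petascale / sequential        # cores needed for petascale
cores_for_exa  = exascale  / sequential        # cores needed for exascale
```

Even under this idealized assumption, petascale requires about 100,000 such sequential cores and exascale about a hundred million, which is why the text speaks of parallel computers with millions of processors.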
Every part of the code that cannot be parallelized becomes a bottleneck for parallel efficiency. This gravely limits the scalability of many problems that should be parallelized. On the other hand, the excessive power consumption of massively parallel computers, today about 0.5 kW/TFLOPS, represents an environmental and financial problem. It is therefore imperative to find procedures and methods for an even more efficient utilization of cooperating computing resources. Deeper knowledge is needed of computer architectures, communication protocols, data reconciliation, access to shared resources, error detection, and similar themes, which are central to our programme. Based on such knowledge, we can achieve new breakthroughs in the field of computer science, significantly shorten calculation times, and reduce energy consumption, which is becoming an increasingly important factor in information technologies. The proposed research programme aims at theoretical investigations of new fundamental concepts in the field of parallel and distributed processing of data. The research will focus on parallel algorithms, on methods and frameworks for their development, and on improving the performance and effectiveness of computation and communication. New findings are expected particularly in the development of new parallel algorithms and simulation models, in time and space complexity analysis, and in the management of processing and communication resources. We aim to develop new and better methods for the programming, modelling, standardization, and exploitation of distributed systems. Research on protocols for the use of shared variables and for cooperation on common tasks will contribute to integration and standardization and thereby significantly influence the application of future information services. 
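The sequential-bottleneck effect described above is quantified by Amdahl's law, a standard result in parallel computing (not specific to this programme): with a serial fraction s of the work, the speedup on p processors is at most 1 / (s + (1 - s) / p), and never exceeds 1/s no matter how many processors are added.

```python
# Amdahl's law: the speedup limit imposed by the serial fraction of a program.
def amdahl_speedup(serial_fraction, processors):
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / processors)

# Even with only 5 % serial code, a million processors yield at most ~20x.
cap = amdahl_speedup(0.05, 1_000_000)
```

This is precisely why the programme's focus on shrinking the serial and communication portions of algorithms pays off more than simply buying larger machines.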
Theoretical solutions and procedures will be conceived in a way that will facilitate their wide application. Directly applicabl
Significance for the country
Research in parallel and distributed systems is fundamental for the socio-economic development of Slovenia as well as of the EU and other technologically advanced countries, which place developments in ICT and HPC among their strategic directions. Several reasons can be identified. In Slovenia, there are many companies that have to adapt to ever more complex technological demands in order to compete on the world market. The most straightforward approach to development is the experimental fabrication and testing of products. Such an approach is, as a rule, so expensive and time-consuming that most of our companies cannot afford it. The experiments are often also limited by expensive measurement equipment. The alternative is the use of appropriate computational modelling. Well-tested and validated simulations can provide detailed insight into the operation of a device as well as into the process of its production. Efficient parallel simulations play a crucial role in such activities. An example of good practice of parallel computing in industry is the technological innovation "System for mobile monitoring of vital physiological parameters and environmental context". Such a system offers solutions for the optimization of the health and medical service budgets of Slovenia and the EU. The innovation comprises a small wireless body gadget with the corresponding firmware, the software on a personal digital terminal, the software on a computer server or in a computer cloud, and the innovative algorithms and procedures that enable the whole system. This innovation was successfully sold to the company SIMED d.o.o., with which we have also signed a declaration of intent to continue cooperation in the research, development, and marketing of body sensor gadgets and the computer systems that connect them. Without in-depth knowledge of parallel computers, the development of such a system would be impossible. 
Besides indirect industrial applications, parallel computing is also important for operational services. In February 2014, a severe icing storm hit Slovenia and caused damage of around 8.5 million € to the power transmission network alone. In cooperation with the Milan Vidmar Electric Power Research Institute (EIMV) and the Slovenian Environment Agency (SEA), we developed and validated the DTRi model (Dynamic Thermal Rating - icing), which was implemented as a prototype system for the operational forecasting and prevention of icing on high-voltage transmission lines. The results of the study were promising, and the customer therefore decided to promote the system into the operational environment, where, of course, a quick response is crucial. Without advanced parallel computing, the design of such a system would be impossible: the computational times would simply be too long, or the accuracy of the calculations too low. An additional aspect of the usefulness of the proposed programme is the opportunities that the acquired know-how on parallel and distributed computers brings to Slovenia. Because of its well-developed IT infrastructure and geographically strategic location, Slovenia has good conditions for high-quality R&D and for pulling ahead of other countries in the area of the research programme. Years ago, we established an inter-institutional test grid and developed an application environment for small-business needs. Today, this initiative continues with cloud computing and the paradigm of "everything as a service". Tomorrow, we will still pursue the same strategy, even if the terminology changes again. The proposed programme presents a good incentive for the education system and gives opportunities to young and innovative profiles. The proposed programme also offers direct results and opportunities in medicine, which already embraces information technology as its ally. 
With the use of modelling, computer simulation, and research results obtained in collaboration with medical experts and researchers from the University Medical Cen
Most important scientific results Annual report 2017, 2018, final report
Most important socioeconomically and culturally relevant results Annual report 2017, 2018, final report