Parallel Computing (alpha)
Staff
- Aldinucci Marco (Coordinator)
- Rabellino Sergio (Member)
- Cantalupo Barbara (Member)
- Colonelli Iacopo (PhD student)
- Arfat Yasir (PhD student)
- Martinelli Alberto Riccardo (Assistant)
- Mittone Gianluca (Assistant)
- Torquati Massimo (External Collaborator)
- Danelutto Marco (External Collaborator)
- Kilpatrick Peter (External Collaborator)
- Tremblay Guy (External Collaborator)
- Misale Claudia (External Collaborator)
- Drocco Maurizio (External Collaborator)
Activity
The Parallel Computing research group works on programming models, languages, and tools for parallel computing. This field has undergone impressive change in recent years: new architectures and applications have rapidly become the central focus of the discipline, often as a result of cross-fertilisation between parallel and distributed technologies and other rapidly evolving fields. Amid such rapid evolution, we believe abstraction provides a cornerstone to build on.
The shift toward multi-core and many-core technologies has many drivers that are likely to sustain this trend for several years to come. Software technology is changing as a consequence: in the long term, writing parallel programs that are efficient, portable, and correct must be no more onerous than writing sequential programs. To date, however, parallel programming has not embraced much more than low-level libraries, which often require the architectural re-design of the application. In the hierarchy of abstractions, this sits only slightly above toggling absolute binary on the front panel of the machine. Such an approach cannot effectively scale to support mainstream software development, where human productivity, total cost, and time to solution are equally important, if not more so. Current research topics include:
- High-level development tools and languages for parallel computing
- Programming models and tools for multi- and many-core: non-blocking multithreading, lock-less and lock-free algorithms
- Programming models and tools for distributed computing
- Autonomic computing
- High-Performance Computing @exascale
- Parallelization of legacy codes
- In-transit computing & I/O
- Deep Learning
- Federated Learning
- Distributed training at scale