Parallel Computing (alpha)

Staff

ERC Sectors

PE6_2 - Computer systems, parallel/distributed systems, sensor networks, embedded systems, cyber-physical systems

Activity

The Parallel Computing research group works on programming models, languages, and tools for parallel programming. This field has undergone impressive change in recent years: new architectures and applications have rapidly become its central focus, often as a result of cross-fertilisation between parallel and distributed technologies and other rapidly evolving technologies. Amid such rapid evolution, we believe abstraction provides a cornerstone to build on.

The shift toward multicore and many-core technologies has many drivers that are likely to sustain this trend for several years to come. Software technology is changing accordingly: in the long term, writing parallel programs that are efficient, portable, and correct must be no more onerous than writing sequential programs. To date, however, parallel programming has not embraced much more than low-level libraries, which often require an architectural redesign of the application. In the hierarchy of abstractions, this sits only slightly above toggling absolute binary on the front panel of the machine. Such an approach cannot effectively scale to support mainstream software development, where human productivity, total cost, and time to solution are equally important, if not more so.

  • High-level development tools and languages for parallel computing
    • Programming models and tools for multi- and many-core: non-blocking multithreading, lock-less and lock-free algorithms
    • Programming models and tools for distributed computing
    • Autonomic computing
  • High-Performance Computing @exascale
    • Parallelization of legacy codes
    • In-transit computing & I/O
  • Deep Learning
    • Federated Learning
    • Distributed training at scale

More on the Parallel Computing research group website

Last update: 09/03/2021 12:52