Writing a parallel program requires a model of the computation. A parallel programming model therefore provides the abstractions for this form of computation, in which many calculations are carried out simultaneously.
Parallel computing is based on the idea that large problems can often be solved more easily by dividing them into smaller ones that are solved concurrently. Although several forms of parallel programming have existed for a long time, broad interest arose only when physical constraints began to prevent further frequency scaling. Because the power consumed by computers, and the heat they dissipate, have also become major concerns in recent years, parallel computing has become the dominant paradigm in computer architecture.
Bit-level parallelism is a type of parallel programming based on enlarging the processor word size. Increases in word size drove some of the early advances in computer architecture: a larger word reduces the number of instructions the processor must execute to carry out an operation. For example, an 8-bit processor that must add two 16-bit integers requires two instructions: one to add the 8 lower-order bits of each integer, and a second to add the 8 higher-order bits together with the carry. A 16-bit processor can complete the same operation with a single instruction, so it is more efficient.
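The two-instruction addition described above can be simulated in software. The sketch below (the function name `add16_on_8bit` is illustrative, not from the original) mimics an 8-bit ALU adding two 16-bit integers: the low bytes are added first, and the resulting carry is fed into the addition of the high bytes.

```python
def add16_on_8bit(a, b):
    """Add two 16-bit integers using only 8-bit-wide additions,
    as an 8-bit processor would: two steps instead of one."""
    lo = (a & 0xFF) + (b & 0xFF)                # step 1: add the low bytes
    carry = lo >> 8                             # carry out of the low byte
    hi = ((a >> 8) + (b >> 8) + carry) & 0xFF   # step 2: high bytes plus carry
    return (hi << 8) | (lo & 0xFF)              # reassemble the 16-bit result
```

A 16-bit processor performs the same computation in a single add instruction, which is exactly the saving that a wider word size buys.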
Instruction-level parallelism is a measure of how many of the operations in a computer program can be performed at the same time: it is the potential overlap among instructions that have no data dependences between them.
Data parallelism distributes data across different parallel computing nodes. In a multiprocessor system it corresponds to executing a single set of instructions on all processors: parallelization is achieved when each processor runs the same task on a different piece of the distributed data. In some cases a single execution thread controls the operations on all the data; in others, separate threads control the operation, but they execute the same commands.
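A minimal sketch of this pattern, assuming Python's standard `concurrent.futures` thread pool (the names `scale` and `data_parallel_scale` are illustrative): the input is partitioned into chunks, and every worker applies the *same* operation to its own chunk.

```python
from concurrent.futures import ThreadPoolExecutor

def scale(chunk):
    # Every worker runs the same operation on its own slice of the data.
    return [2 * x for x in chunk]

def data_parallel_scale(data, workers=4):
    # Partition the data into contiguous chunks, one per worker.
    size = (len(data) + workers - 1) // workers
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        pieces = pool.map(scale, chunks)
    # Reassemble the partial results in their original order.
    return [x for piece in pieces for x in piece]
```

Here a single coordinating thread partitions the data and gathers the results, matching the first of the two coordination styles mentioned above.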
Task parallelism, also called control parallelism, is a form of parallelization of computer code across a number of processors in parallel programming environments. Its goal is to distribute threads, or execution processes, across the computing nodes. This function parallelism is achieved when each processor executes a different process on certain data; furthermore, the processes can execute the same or different code.
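The contrast with data parallelism can be sketched with the same thread-pool API (the task functions `total` and `extremes` are illustrative): here each worker runs a *different* function, in this case over the same data.

```python
from concurrent.futures import ThreadPoolExecutor

def total(xs):
    # One task: compute the sum of the data.
    return sum(xs)

def extremes(xs):
    # A different task: find the minimum and maximum.
    return min(xs), max(xs)

data = [5, 3, 9, 1]
# Task parallelism: distinct functions submitted to distinct workers.
with ThreadPoolExecutor(max_workers=2) as pool:
    sum_future = pool.submit(total, data)
    ext_future = pool.submit(extremes, data)
result_sum = sum_future.result()
result_ext = ext_future.result()
```

Whether the two tasks share code or data is incidental; what defines task parallelism is that the units of work are different computations rather than the same computation over partitioned data.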