Static vs Dynamic Scheduling (OpenMP)

Solving problems from Project Euler is a great way to experiment with code. While solving problem 49 (https://projecteuler.net/problem=49), I decided to speed it up by parallelizing it. The original input was too small to take any noticeable execution time, so I increased the input size to get more number sequences. I will not be sharing the answer here.

The aim is to find an arithmetic sequence of 3 numbers such that 1) each number is prime, and 2) each number is a permutation of the same set of digits, e.g. 1487, 4817, 8147.
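
One straightforward way to test the permutation condition is to compare digit counts. The sketch below is a hypothetical helper (same_digits is my name for it, not necessarily what the linked code uses):

```c
/* Returns 1 if a and b are made of exactly the same multiset of decimal
 * digits (i.e. one is a digit permutation of the other), 0 otherwise. */
int same_digits(unsigned a, unsigned b)
{
    int count[10] = {0};
    for (; a > 0; a /= 10) count[a % 10]++;   /* count digits of a */
    for (; b > 0; b /= 10) count[b % 10]--;   /* subtract digits of b */
    for (int i = 0; i < 10; i++)
        if (count[i] != 0) return 0;          /* mismatch in some digit */
    return 1;
}
```

For example, same_digits(1487, 8147) returns 1, while same_digits(1487, 1488) returns 0.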

The first step is to find all prime numbers using the Sieve of Eratosthenes.
The second step is to find arithmetic progressions within them and check whether the terms are permutations of the same set of digits.
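
As a rough illustration of the first step, here is a minimal Sieve of Eratosthenes in C. The allocation strategy and the function name sieve are my own choices and may differ from the linked code:

```c
#include <stdlib.h>
#include <string.h>

/* Returns an array of limit+1 flags where is_prime[n] is 1 iff n is prime.
 * The caller is responsible for freeing the returned buffer. */
char *sieve(unsigned limit)
{
    char *is_prime = malloc(limit + 1);
    if (!is_prime) return NULL;
    memset(is_prime, 1, limit + 1);
    is_prime[0] = is_prime[1] = 0;
    for (unsigned i = 2; i * i <= limit; i++)
        if (is_prime[i])
            for (unsigned j = i * i; j <= limit; j += i)
                is_prime[j] = 0;              /* mark multiples as composite */
    return is_prime;
}
```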

The second step can be parallelized by scanning multiple chunks of prime numbers simultaneously. However, there is no guarantee that each thread will get the same workload. With static scheduling, each thread is handed a fixed share of the iterations up front, so threads with lighter chunks finish early and sit idle while the rest are still working. This is where dynamic scheduling shines: threads that finish a chunk pick up the next available one from the shared pool of iterations, keeping themselves busy and completing the overall work faster.
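
A rough sketch of how the parallel scan might look is below, reusing the is_prime[] flags from the sieve and the same_digits() helper above (find_sequences is an illustrative name, not the repo's). The only difference between the two policies is the schedule clause on the pragma:

```c
#include <stdio.h>

int same_digits(unsigned a, unsigned b);      /* from the sketch above */

/* For every prime p >= 1000 and step d, check whether p, p+d and p+2*d are
 * all prime and all digit permutations of one another. */
void find_sequences(const char *is_prime, unsigned limit)
{
    /* schedule(dynamic) hands out iterations in chunks as threads become
     * free; change it to schedule(static) to compare against a fixed,
     * up-front partition of the iteration space. */
    #pragma omp parallel for schedule(dynamic)
    for (unsigned p = 1000; p <= limit; p++) {
        if (!is_prime[p]) continue;           /* cheap iterations cause imbalance */
        for (unsigned d = 1; p + 2 * d <= limit; d++)
            if (is_prime[p + d] && is_prime[p + 2 * d] &&
                same_digits(p, p + d) && same_digits(p, p + 2 * d))
                printf("%u %u %u\n", p, p + d, p + 2 * d);
    }
}
```

Compile with OpenMP enabled (e.g. gcc -fopenmp); output lines from different threads may interleave. Note that schedule(dynamic) also accepts a chunk size, e.g. schedule(dynamic, 64), which trades scheduling overhead against load balance.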

I tested the code on an AMD E1-2100 APU with 2 cores, searching within prime numbers from 1000 to 500000 (these patterns don’t exist below 1000).

[Results chart: execution time, static vs dynamic scheduling]

The results clearly show the improvement of dynamic over static scheduling in this case.

Code: https://github.com/xSooDx/ProjectEulerCode/blob/master/_49.c

#c, #openmp, #parallel-computing, #project-euler