Commit d87aaf4e authored by RODIONOV Sergey's avatar RODIONOV Sergey

advance tasks

parent e9d6107e
Write a one-line command using GNU parallel that creates the following
files in separate directories:
dir_1/rez_1.jpg
dir_2/rez_2.jpg
dir_3/rez_3.jpg
...
...
...
dir_20/rez_20.jpg
Each jpg file is converted from the png produced by code1.py.
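One possible shape of such a one-liner is sketched below. It assumes code1.py writes its result to rez.png in the current directory and that ImageMagick's convert is installed — both are assumptions about the course code, so adjust the names to match. The runnable part of the sketch exercises the same parallel structure with touch standing in for the python + ImageMagick steps:

```shell
#!/bin/sh
# Sketch of the one-liner (the interesting part is parallel's {} substitution):
#
#   parallel 'mkdir -p dir_{} && cd dir_{} && python ../code1.py && convert rez.png rez_{}.jpg' ::: $(seq 1 20)
#
# Running code1.py inside dir_{} keeps the 20 concurrent jobs from clobbering
# each other's rez.png. Below, `touch` stands in for the python + convert
# steps so the structure can be tried anywhere GNU parallel is installed.
command -v parallel >/dev/null 2>&1 || { echo "GNU parallel not installed; skipping demo"; exit 0; }
parallel 'mkdir -p dir_{} && touch dir_{}/rez_{}.jpg' ::: $(seq 1 20)
ls dir_1/rez_1.jpg dir_20/rez_20.jpg
```

The `::: $(seq 1 20)` feeds the numbers 1..20 to parallel, and each `{}` in the quoted command is replaced by the current number.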
@@ -10,8 +10,8 @@
#PBS -l nodes=1:ppn=1
#The maximum wall-clock time during which this job can run (hh:mm:ss)
#PBS -l walltime=00:01:00
#The maximum wall-clock time during which this job can run (sec)
#PBS -l walltime=600
# output log file name
#PBS -o "run-log.txt"
......
Study the speed-up of "example2" for different N (the -size parameter).
Verify that the speed-up is larger for big N.
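A minimal way to collect the timings for this study is sketched below in Python. The `./example2` path and its `-size`/`-np` flags are assumptions about the course binary's interface; substitute the real invocation and core counts:

```python
import os
import subprocess
import time

def time_command(cmd):
    """Run a command once and return its wall-clock time in seconds."""
    start = time.perf_counter()
    subprocess.run(cmd, check=True, stdout=subprocess.DEVNULL)
    return time.perf_counter() - start

# The sweep only runs where the course binary is present; './example2'
# and its '-size'/'-np' flags are assumptions, not the confirmed interface.
if os.path.exists("./example2"):
    for n in (1000, 10000, 100000):
        serial = time_command(["./example2", "-size", str(n), "-np", "1"])
        par = time_command(["./example2", "-size", str(n), "-np", "12"])
        print(f"N={n}: speed-up = {serial / par:.2f}")
```

Speed-up here is serial time divided by parallel time; for small N the process start-up and communication costs dominate, which is why the ratio grows with N.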
@@ -2,11 +2,11 @@
#job's name
#PBS -N test
#we request 1 nodes with 1 cores per nodes = 1 x 12 = 12 cores in total
#we request 1 node with 12 cores per node = 1 x 12 = 12 cores in total
#PBS -l nodes=1:ppn=12
# We ask for 600 sec
#PBS -l walltime=600
# We ask for 7200 seconds
#PBS -l walltime=7200
# output log file name
#PBS -o "run-log.txt"
......
1. Do the same as in code2.py using only Process and Queue.
- You should run no more than ncores processes at once
- You can use multiprocessing.cpu_count to get the number of cores
- code4.py is an example solution, but first try to do it yourself
2. Test the performance of your code and compare it with that of
code2.py, which uses Pool+map. You will find that your solution is a
little slower than code2.py.
3. Read about the "chunksize" parameter of map
(https://docs.python.org/3/library/multiprocessing.html). By
default, map sets chunksize so that each process gets
approximately 4 chunks.
4. Modify your code to pass the chunksize parameter. Compare the
performance of your new code with code2.py.
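For step 1, the worker/queue pattern might be sketched as below. The squaring `work` function is a placeholder for whatever code2.py actually computes; a sentinel `None` per worker tells it to stop, and at most `ncores` processes ever run at once:

```python
from multiprocessing import Process, Queue, cpu_count

def work(x):
    # Placeholder for the real computation done in code2.py.
    return x * x

def worker(task_q, result_q):
    # Each worker pulls tasks until it sees the sentinel None.
    while True:
        item = task_q.get()
        if item is None:
            break
        result_q.put((item, work(item)))

def run(tasks):
    tasks = list(tasks)
    task_q, result_q = Queue(), Queue()
    ncores = cpu_count()                  # never more than ncores processes
    procs = [Process(target=worker, args=(task_q, result_q))
             for _ in range(ncores)]
    for p in procs:
        p.start()
    for t in tasks:
        task_q.put(t)
    for _ in procs:
        task_q.put(None)                  # one sentinel per worker
    results = dict(result_q.get() for _ in tasks)
    for p in procs:
        p.join()
    return results
```

For step 4, Pool's map accepts chunksize directly, e.g. `Pool(ncores).map(work, data, chunksize=len(data) // (4 * ncores))`; larger chunks amortize the per-item queue traffic that makes the hand-rolled Process+Queue version slower.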
#!/usr/bin/env python
import numpy as np
import argparse
from multiprocessing import Process, Queue, Pipe, cpu_count
from multiprocessing import Process, Queue, cpu_count
import mkl
# Make sure that MKL uses only 1 thread
......