Processes are managed by the operating system itself.
1. The most basic usage
from multiprocessing import Pool

def f(x):
    return x * x

if __name__ == '__main__':
    p = Pool(5)
    print(p.map(f, [1, 2, 3]))
[1, 4, 9]
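As a small follow-up sketch (my addition, assuming Python 3.3+), the pool can also be used as a context manager so its worker processes are cleaned up automatically:

from multiprocessing import Pool

def f(x):
    return x * x

if __name__ == '__main__':
    # the with-block terminates the worker processes when it exits
    with Pool(5) as p:
        print(p.map(f, [1, 2, 3]))  # [1, 4, 9]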
2. Under the hood, processes are created with os.fork
On Unix, every process is created via fork (a small fork sketch follows this example).
from multiprocessing import Process
import os

def info(title):
    print(title)
    print('module name:', __name__)
    if hasattr(os, 'getppid'):  # getppid() is Unix-only on older Pythons
        print('parent process:', os.getppid())
    print('process id:', os.getpid())

def f(name):
    info('function f')
    print('hello', name)

if __name__ == '__main__':
    info('main line')
    p = Process(target=f, args=('bob',))
    p.start()
    p.join()
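To make the fork claim above concrete, here is a minimal sketch (my addition, not from the original post) that calls os.fork() directly; it only works on Unix-like systems:

import os

# fork() returns 0 in the child process and the child's pid in the parent
pid = os.fork()
if pid == 0:
    print('child :', os.getpid(), 'parent:', os.getppid())
    os._exit(0)         # leave the child without running further code
else:
    print('parent:', os.getpid(), 'forked child', pid)
    os.waitpid(pid, 0)  # wait for (reap) the child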
3. Threads share memory
import threading

def run(info_list, n):
    info_list.append(n)
    print(info_list)

if __name__ == '__main__':
    info = []
    for i in range(10):
        p = threading.Thread(target=run, args=[info, i])
        p.start()
[0]
[0, 1]
[0, 1, 2]
[0, 1, 2, 3]
[0, 1, 2, 3, 4]
[0, 1, 2, 3, 4, 5]
[0, 1, 2, 3, 4, 5, 6]
[0, 1, 2, 3, 4, 5, 6, 7]
[0, 1, 2, 3, 4, 5, 6, 7, 8]
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
Processes do not share memory:
from multiprocessing import Process

def run(info_list, n):
    info_list.append(n)
    print(info_list)

if __name__ == '__main__':
    info = []
    for i in range(10):
        p = Process(target=run, args=[info, i])
        p.start()
[1]
[2]
[3]
[0]
[4]
[5]
[6]
[7]
[8]
[9]
To share data between processes, use the Queue from the multiprocessing module (see the note on draining the queue after the example):
from multiprocessing import Process, Queue

def f(q, n):
    q.put([n, 'hello'])

if __name__ == '__main__':
    q = Queue()
    for i in range(5):
        p = Process(target=f, args=(q, i))
        p.start()
    while True:
        print(q.get())
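A caveat with the loop above: q.get() blocks once the queue is empty, so the while True loop never terminates. A sketch (my addition, not from the original post) that reads exactly one item per producer and then joins the processes:

from multiprocessing import Process, Queue

def f(q, n):
    q.put([n, 'hello'])

if __name__ == '__main__':
    q = Queue()
    procs = [Process(target=f, args=(q, i)) for i in range(5)]
    for p in procs:
        p.start()
    for _ in procs:       # one get() per put(), so the loop ends
        print(q.get())
    for p in procs:
        p.join()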
4. Lock: the lock here only serializes access to one shared resource, the screen (stdout); since each process otherwise has its own memory, there is nothing else in this example for the lock to protect (a sketch of a lock around genuinely shared state follows the output).
from multiprocessing import Process, Lock

def f(l, i):
    l.acquire()
    print('hello world', i)
    l.release()

if __name__ == '__main__':
    lock = Lock()
    for num in range(10):
        Process(target=f, args=(lock, num)).start()
hello world 0
hello world 1
hello world 2
hello world 3
hello world 4
hello world 5
hello world 6
hello world 7
hello world 8
hello world 9
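Once processes really do share state (for example a Value from the next section), the lock becomes essential for correctness. A minimal sketch of this, my addition rather than part of the original post:

from multiprocessing import Process, Value, Lock

def add(counter, lock):
    for _ in range(1000):
        with lock:              # without the lock, += on the shared value can lose updates
            counter.value += 1

if __name__ == '__main__':
    counter = Value('i', 0)
    lock = Lock()
    procs = [Process(target=add, args=(counter, lock)) for _ in range(4)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    print(counter.value)        # 4000 with the lock; usually less without it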
5. Sharing memory between processes: Value and Array
from multiprocessing import Process, Value, Array

def f(n, a):
    n.value = 3.1415927
    for i in range(len(a)):
        a[i] = -a[i]

if __name__ == '__main__':
    num = Value('d', 0.0)        # 'd': double-precision float
    arr = Array('i', range(10))  # 'i': signed int
    print(num.value)
    print(arr[:])
    p = Process(target=f, args=(num, arr))
    p.start()
    p.join()
    print(num.value)
    print(arr[:])
0.0
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
3.1415927
[0, -1, -2, -3, -4, -5, -6, -7, -8, -9]
A Manager object offers another way to share state between processes; it provides shared dict and list proxies, among others:

from multiprocessing import Process, Manager

def f(d, l):
    d[1] = '1'
    d['2'] = 2
    d[0.25] = None
    l.reverse()

if __name__ == '__main__':
    manager = Manager()
    d = manager.dict()
    l = manager.list(range(10))
    p = Process(target=f, args=(d, l))
    p.start()
    p.join()
    print(d)
    print(l)
{0.25: None, 1: '1', '2': 2}
[9, 8, 7, 6, 5, 4, 3, 2, 1, 0]
pool.apply_async submits tasks to the pool asynchronously and returns handles whose results are collected later:

from multiprocessing import Pool
import time

def f(x):
    print(x * x)
    time.sleep(2)   # simulate some work
    return x * x

if __name__ == '__main__':
    pool = Pool(processes=4)
    res_list = []
    for i in range(10):
        res = pool.apply_async(f, [i])
        res_list.append(res)
    # print('-------------')
    # print(pool.map(f, range(10)))  # this is just another way of writing it
    for r in res_list:
        print(r.get(timeout=10))     # timeout in seconds
The synchronous counterpart is pool.apply, which blocks until the task finishes.
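A minimal sketch of the synchronous call (my addition), reusing an f like the one above:

from multiprocessing import Pool
import time

def f(x):
    time.sleep(1)
    return x * x

if __name__ == '__main__':
    with Pool(processes=4) as pool:
        # apply blocks until the result is ready, so calls run one after another
        print(pool.apply(f, [3]))              # 9
        # apply_async returns a handle immediately; get() waits for the result
        print(pool.apply_async(f, [4]).get())  # 16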