python - Sharing psycopg2 / libpq connections across processes
According to the psycopg2 docs:

libpq connections shouldn't be used by forked processes, so when using a module such as multiprocessing or a forking web deploy method such as FastCGI, make sure to create the connections after the fork.
Following the link in that document leads to the libpq documentation:

On Unix, forking a process with open libpq connections can lead to unpredictable results because the parent and child processes share the same sockets and operating system resources. For this reason, such usage is not recommended, though doing an exec from the child process to load a new executable is safe.
But it seems there's no inherent problem with forking processes that have open sockets. What's the reason for psycopg2's warning against forking when connections are open?

The reason I ask is that I saw a (presumably successful) multiprocessing approach that opened the connection right before forking.

Perhaps it is safe to fork with open connections under certain restrictions (e.g., only one process ever uses the connection, etc.)?
Your surmise is correct: there is no issue with a connection being opened before a fork, as long as you don't attempt to use it in more than one process.
That being said, I think you misunderstood the "multiprocessing approach" in the link you provided. It actually demonstrates a separate connection being opened in each child. (There is also a connection opened in the parent before forking, but it's not being used in the child.)
An improvement on the answer given there (versus the code in the question) would be to refactor so that, rather than opening a new connection for each task in the queue, each child process opens a single connection and shares it across the multiple tasks executed within that same child (i.e., the connection is passed as an argument to the task processor).
Edit:

As a general practice, one should prefer creating the connection within the process that uses it. In the answer cited, the connection is created in the parent before forking, then used in the child. That will work fine, but it leaves each "child connection" open in the parent as well, which is at best a waste of resources and a potential cause of bugs.
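The leak described above can be sketched with os.fork() and a fake connection object (Unix-only; FakeConn is a stand-in, since calling close() on a real psycopg2 connection in the parent would send a libpq terminate message over the very socket the child is still using):

```python
import os

class FakeConn:
    # Stand-in for a libpq connection wrapping a socket.
    def __init__(self):
        self.closed = False
    def close(self):
        self.closed = True

conn = FakeConn()   # opened in the parent BEFORE forking
pid = os.fork()
if pid == 0:
    # Child: the only process that should use conn.
    conn.close()
    os._exit(0)
else:
    os.waitpid(pid, 0)
    # The child's close() does not affect the parent's copy of the
    # object: the parent's "connection" is still open here, holding
    # a server slot for no reason.
    print(conn.closed)  # False
```

Creating the connection inside the child avoids the dangling parent copy entirely.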