I am trying to run gensim WMD similarity faster. Typically, this is what is in the docs:
Example corpus:
my_corpus = ["Human machine interface for lab abc computer applications",
>>> "A survey of user opinion of computer system response time",
>>> "The EPS user interface management system",
>>> "System and human system engineering testing of EPS",
>>> "Relation of user perceived response time to error measurement",
>>> "The generation of random binary unordered trees",
>>> "The intersection graph of paths in trees",
>>> "Graph minors IV Widths of trees and well quasi ordering",
>>> "Graph minors A survey"]
my_query = 'Human and artificial intelligence software programs'
my_tokenized_query = ['human', 'artificial', 'intelligence', 'software', 'programs']
from gensim.models import Word2Vec
from gensim.similarities import WmdSimilarity

# model: a Word2Vec model trained on about 100,000 documents similar to my_corpus
model = Word2Vec.load(word2vec_model)
def init_instance(my_corpus, model, num_best):
    instance = WmdSimilarity(my_corpus, model, num_best=num_best)
    return instance

instance = init_instance(my_corpus, model, 1)
instance[my_tokenized_query]
The best-matched document is "Human machine interface for lab abc computer applications", which is great.
However, the instance query above takes an extremely long time. So I thought of breaking the corpus up into N parts and running WMD on each with num_best = 1; then, at the end, the part with the max score should contain the most similar document.
from multiprocessing import Process, Queue, Manager
import operator
import gensim

def main(my_query, global_jobs, process_tmp):
    process_query = gensim.utils.simple_preprocess(my_query)

    def worker(num, process_query, return_dict):
        # num_workers and chunk are assumed to be defined globally
        instance = init_instance(my_corpus[num*chunk+1:num*chunk+chunk], model, 1)
        x = instance[process_query][0][0]
        y = instance[process_query][0][1]
        return_dict[x] = y

    manager = Manager()
    return_dict = manager.dict()
    for num in range(num_workers):
        process_tmp = Process(target=worker, args=(num, process_query, return_dict))
        global_jobs.append(process_tmp)
        process_tmp.start()
    for proc in global_jobs:
        proc.join()
    return_dict = dict(return_dict)
    ind = max(return_dict.iteritems(), key=operator.itemgetter(1))[0]
    print my_corpus[ind]
>>> "Graph minors A survey"
The problem I have with this is that, even though it outputs something, it doesn't give me a good match from my corpus, even though it takes the maximum similarity over all the parts.
Am I doing something wrong?
2 Answers
If you define chunk statically, then you have to compute num_workers. It's common to use no more processes than you have cores; if you have 17 cores, that's fine. The core count is static, therefore you should:
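(The answer's original snippet was lost in formatting; a minimal sketch of the idea, with os.cpu_count() as my assumption for how the core count was obtained:)

import os
import math

num_workers = os.cpu_count()    # no more processes than cores
chunk = int(math.ceil(len(my_corpus) / float(num_workers)))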
Not the same result on each run, changed to:
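(This snippet was also lost; presumably the worker ran the expensive instance[process_query] lookup once and unpacked it, along these lines:)

sims = instance[process_query]    # run the WMD query once per worker, not twice
x, y = sims[0]                    # best match: (index within the chunk, score)
return_dict[x] = y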
All worker results are indexed 0..n. Therefore, return_dict[x] could be overwritten by the last worker that reports the same index, even if its value is lower. The index in return_dict is NOT the same as the index in my_corpus. Changed to:
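(The lost snippet presumably keyed results by their corpus-global index instead; a sketch, assuming the question's chunking:)

x, y = instance[process_query][0]
return_dict[num * chunk + x] = y    # key is unique per worker and valid in my_corpus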
Using +1 when computing the chunk slices will skip the first document of each chunk. I don't know how you compute chunk; consider this example:
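(Reconstructed illustration, since the original example block was lost:)

my_corpus = list(range(10))    # stand-in for a corpus of 10 documents
chunk = 5
for num in range(2):
    print(my_corpus[num*chunk+1:num*chunk+chunk])    # [1, 2, 3, 4] and [6, 7, 8, 9]: documents 0 and 5 are skipped
    print(my_corpus[num*chunk:num*chunk+chunk])      # [0, 1, 2, 3, 4] and [5, 6, 7, 8, 9]: complete chunks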
Tested with Python: 3.4.2
Using Python 2.7:
I used threading instead of multiprocessing. In the WMD-instance creation thread, I do something like this:
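(The original code block was lost in formatting; a minimal sketch of what it describes. The names wmd_instance_count and chunk_size follow the prose; everything else is assumption:)

import math
from gensim.similarities import WmdSimilarity

wmd_instance_count = 4    # number of search threads
chunk_size = int(math.ceil(len(my_corpus) / float(wmd_instance_count)))
wmd_instances = []
for i in range(wmd_instance_count):
    corpus_chunk = my_corpus[i * chunk_size:(i + 1) * chunk_size]
    wmd_instances.append(WmdSimilarity(corpus_chunk, model, num_best=10))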
'wmd_instance_count' is the number of threads to use for searching; I also remember the chunk size. Then, when I want to search for something, I start 'wmd_instance_count' threads to search, and they return the sims they found:
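(Again a sketch of the lost block; the worker signature and the set_mt_results callback are my guesses at its shape:)

import threading

def search_worker(wmd_logic, wmd_instances, query, i):
    sims = wmd_instances[i][query]       # query one chunk's WMD index
    wmd_logic.set_mt_results(i, sims)    # report (chunk-local index, score) pairs

threads = []
for i in range(wmd_instance_count):
    t = threading.Thread(target=search_worker,
                         args=(wmd_logic, wmd_instances, my_tokenized_query, i))
    threads.append(t)
    t.start()
for t in threads:
    t.join()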
'wmd_logic' is an instance of a class that then does this:
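(A sketch of that class, assuming its job is to collect results thread-safely and map chunk-local indexes back to corpus-global ones:)

class WmdLogic(object):
    def __init__(self, chunk_size):
        self.chunk_size = chunk_size
        self.mt_results = []
        self.lock = threading.Lock()

    def set_mt_results(self, instance_index, sims):
        with self.lock:    # several threads report concurrently
            for local_index, score in sims:
                # shift the chunk-local index to its position in the full corpus
                global_index = local_index + instance_index * self.chunk_size
                self.mt_results.append((global_index, score))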
I know the code isn't nice, but it works. It uses 'wmd_instance_count' threads to find results; I aggregate them and then choose the top 10 or something like that.
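(For example, sorting the aggregated results by score and keeping the best ten:)

top10 = sorted(wmd_logic.mt_results, key=lambda r: r[1], reverse=True)[:10]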
Hope this helps.