from multiprocessing import Pool

order_search_paper_byfulltext_English = (
    "CALL db.index.fulltext.queryNodes(\"title_abstract_English\", \"" + query + "\") "
    "YIELD node, score "
    "RETURN node AS p, score LIMIT $limit"
)
result = tx.run(order_search_paper_byfulltext_English, limit=limit)
pool = Pool()
return pool.map(my_paper_attribute_tackle, result, chunksize=100)
When I run the code above, the following exception occurs:
Process ForkPoolWorker-1:
Traceback (most recent call last):
File "/home/xiaorui/anaconda3/envs/hangkong/lib/python3.9/multiprocessing/process.py", line 315, in _bootstrap
self.run()
File "/home/xiaorui/anaconda3/envs/hangkong/lib/python3.9/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/home/xiaorui/anaconda3/envs/hangkong/lib/python3.9/multiprocessing/pool.py", line 114, in worker
task = get()
File "/home/xiaorui/anaconda3/envs/hangkong/lib/python3.9/multiprocessing/queues.py", line 368, in get
return _ForkingPickler.loads(res)
File "/home/xiaorui/anaconda3/envs/hangkong/lib/python3.9/site-packages/neo4j/data.py", line 56, in __new__
for key, value in iter_items(iterable):
File "/home/xiaorui/anaconda3/envs/hangkong/lib/python3.9/site-packages/neo4j/conf.py", line 50, in iter_items
for key, value in iterable:
ValueError: too many values to unpack (expected 2)
The exception is raised even if my my_paper_attribute_tackle function does nothing.
When I add d = result.data() before the map, it works. However, I want to map result directly to the worker processes rather than materializing it first. Is there any solution?
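For reference, here is a minimal sketch of the variant that does work for me. The search_papers wrapper and the trivial stand-in worker are just for illustration (the real my_paper_attribute_tackle does more), and I have switched the concatenated search string to a $query parameter. Calling result.data() turns each record into a plain dict, which the pool can pickle, whereas mapping the Record objects directly fails as in the traceback above:

from multiprocessing import Pool

def my_paper_attribute_tackle(row):
    # Stand-in for the real worker; row is a plain dict like
    # {"p": {...node properties...}, "score": 1.23}.
    return row

def search_papers(tx, query, limit):
    cypher = (
        'CALL db.index.fulltext.queryNodes("title_abstract_English", $query) '
        "YIELD node, score "
        "RETURN node AS p, score LIMIT $limit"
    )
    result = tx.run(cypher, query=query, limit=limit)
    # Materialize the records into plain dicts before they cross the
    # process boundary; the dicts pickle cleanly, while unpickling the
    # driver's Record objects is what raises the ValueError above.
    rows = result.data()
    with Pool() as pool:
        return pool.map(my_paper_attribute_tackle, rows, chunksize=100)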