Current best approach to programmatically bulk import data into Neo4j from Spark?

Hi,

I've successfully used the neo4j-spark-connector to read graph data from Neo4j into Spark DataFrames. I now have a use case for writing a large amount of data from Spark back into Neo4j. Using the `mergeEdgeList` function, I successfully wrote a small amount of data back into my graph. However, when I tried with a larger amount of data, I started hitting the LockClient deadlock issue (I've found multiple reports of it).
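For anyone hitting the same deadlock: a workaround that's commonly suggested is to reduce write parallelism, since the deadlocks come from concurrent transactions taking locks on the same nodes. A minimal sketch using the old connector's `Neo4jDataFrame.mergeEdgeList` (the labels, relationship type, key columns, and input path below are hypothetical placeholders):

```scala
import org.apache.spark.sql.{DataFrame, SparkSession}
import org.neo4j.spark.Neo4jDataFrame

val spark = SparkSession.builder()
  .appName("neo4j-bulk-write")
  .getOrCreate()

// Hypothetical input: one row per edge, with key columns srcId and dstId.
val edgesDf: DataFrame = spark.read.parquet("/path/to/edges")

// Writing from a single partition serializes the write transactions,
// which avoids concurrent locks on shared nodes at the cost of throughput.
Neo4jDataFrame.mergeEdgeList(
  spark.sparkContext,
  edgesDf.coalesce(1),
  ("Person", Seq("srcId")),  // source label + key columns (hypothetical)
  ("KNOWS", Seq.empty),      // relationship type, no properties
  ("Person", Seq("dstId"))   // target label + key columns (hypothetical)
)
```

Coalescing to one partition trades throughput for safety; with more partitions you'd need to partition the DataFrame so that no two partitions touch the same nodes.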

Two questions:

  1. Is using `neo4j-admin import` currently the best way to import/update large graphs in Neo4j?
  2. Are there alternatives? I'd prefer to process the raw data with Spark and also write into Neo4j from Spark, avoiding intermediate CSV files, etc. (see the sketch after this list).
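One alternative that avoids CSV round-trips is the DataFrame writer in the newer Neo4j Connector for Apache Spark (4.x+). A sketch below, assuming that connector and hypothetical labels, keys, and credentials; `SaveMode.Overwrite` together with declared node keys gives MERGE semantics:

```scala
import org.apache.spark.sql.{SaveMode, SparkSession}

val spark = SparkSession.builder()
  .appName("neo4j-df-write")
  .getOrCreate()

// Hypothetical input: one row per edge, with key columns srcId and dstId.
val edgesDf = spark.read.parquet("/path/to/edges")

// Overwrite mode + node keys means re-runs update existing
// nodes/relationships instead of duplicating them.
edgesDf.write
  .format("org.neo4j.spark")
  .mode(SaveMode.Overwrite)
  .option("url", "bolt://localhost:7687")
  .option("authentication.basic.username", "neo4j")
  .option("authentication.basic.password", "secret")
  .option("relationship", "KNOWS")                      // hypothetical type
  .option("relationship.save.strategy", "keys")
  .option("relationship.source.labels", ":Person")      // hypothetical label
  .option("relationship.source.save.mode", "Overwrite")
  .option("relationship.source.node.keys", "srcId:id")  // df column : node property
  .option("relationship.target.labels", ":Person")
  .option("relationship.target.save.mode", "Overwrite")
  .option("relationship.target.node.keys", "dstId:id")
  .save()
```

If I recall the connector docs correctly, they also recommend reducing partitions (e.g. `edgesDf.coalesce(1)`) when writing relationships whose endpoint nodes can be shared across rows, for exactly the lock-contention reason above.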

Thank you in advance.

Hi @calkhoavu, I know it's been a while, but have you found any best practices for this? I'm currently facing a similar problem.