Following up on [my previous post](https://discourse.neo4j.com/t/can-this-query-be-optimized-gds-node-similarity/42063/4):
I created a Python driver and ran those queries. To avoid having hundreds of subgraphs projected in memory and blowing up the server, I saved the communityIds in a Python list and iterated over it, for each community projecting the subgraph, running node similarity, returning the results, and dropping the subgraph.
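For context, a minimal sketch of that loop, under some assumptions: `community_ids`, the base projection name `'graph'`, and the node filter on `communityId` are from my setup, and `run` stands in for `session.run` from the official Neo4j Python driver.

```python
# Per-community loop: project a filtered subgraph, stream node
# similarity, then drop the subgraph before the next iteration.
# `run` is a stand-in for neo4j session.run.

CREATE_SUBGRAPH = """
CALL gds.beta.graph.create.subgraph($subgraph, 'graph', $nodeFilter, '*')
"""

NODE_SIMILARITY = """
CALL gds.nodeSimilarity.stream($subgraph)
YIELD node1, node2, similarity
RETURN node1, node2, similarity
"""

DROP_GRAPH = "CALL gds.graph.drop($subgraph)"

def similarity_per_community(run, community_ids):
    """Run node similarity community by community, dropping each
    subgraph projection as soon as its results are collected."""
    results = {}
    for cid in community_ids:
        name = f"community_{cid}"
        node_filter = f"n.communityId = {cid}"
        run(CREATE_SUBGRAPH, subgraph=name, nodeFilter=node_filter)
        results[cid] = list(run(NODE_SIMILARITY, subgraph=name))
        run(DROP_GRAPH, subgraph=name)  # should release the projection
    return results
```

In principle, dropping each subgraph right after use should keep only the base projection plus one subgraph in memory at any time, which is why the OOM surprised me.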
The process always dies with an out-of-memory error. I thought it could be a GC issue, so I tried the experimental ZGC collector; that helped, but not enough to let the run finish.
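For reference, this is how I understand the ZGC experiment would be enabled, via extra JVM flags in `neo4j.conf` (standard JVM options, not Neo4j-specific; the unlock flag is needed while ZGC is still experimental on the JDK in use):

```
# neo4j.conf — JVM flags for the ZGC experiment
dbms.jvm.additional=-XX:+UnlockExperimentalVMOptions
dbms.jvm.additional=-XX:+UseZGC
```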
After instantiating the client, I opened a single session and ran all the queries through it; I also tried opening a new session per query, but the result was the same.
Reading the GC log, I saw that the memory released on each community iteration was nowhere near the amount that had been allocated.
Since I reference the base projected graph every time I create a subgraph (`CALL gds.beta.graph.create.subgraph(graphName, 'graph', nodeFilter, '*')`), maybe that graph is being copied into memory again and again and never dropped.
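To test that hypothesis, something like this could show whether projections actually disappear from the graph catalog between iterations (again a sketch: `run` stands in for the driver's `session.run`, and `leaked_graphs` is a hypothetical helper name):

```python
# List what is currently in the GDS graph catalog and flag anything
# beyond the projections we expect to be resident.

LIST_GRAPHS = """
CALL gds.graph.list()
YIELD graphName
RETURN graphName
"""

def leaked_graphs(run, expected=None):
    """Return the names of in-memory projections other than the
    ones we expect (in my case, only the base 'graph')."""
    if expected is None:
        expected = {"graph"}
    names = {record["graphName"] for record in run(LIST_GRAPHS)}
    return names - expected
```

If this reports stray `community_*` projections after their `gds.graph.drop` calls, the drop is not releasing them; if the catalog is clean but memory still grows, the leak is elsewhere (e.g. heap fragmentation or driver-side result buffering).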