Understanding cluster load balancing and high availability

Hello. I set up a 3-core cluster in AWS. I'm trying to understand how to properly set up the cluster. Are all cores capable of reads and writes, or do I set up which does which in the config?

I understand that bolt+routing handles load balancing, but do I point it to one core only? What happens if that core becomes unavailable? Does this mean there's no need to use an AWS ELB? What about an auto-scaling group? Is it even needed, or is there a mechanism I can set up in the config?

In a 3-node cluster you get 1 leader and 2 followers. Only the leader may accept writes. That is, as you add cores you're adding redundancy/safety so that your data is less likely to be lost, and you're scaling your ability to do read queries, but your writes will be limited by the leader.

Load balancers such as AWS ELB can often introduce problems, because they work against the way the bolt+routing protocol works. Load balancers tend to treat all connections as equal, and they don't know, for example, whether your bolt connection is going to issue a write query; they'll just route the connection to one of the three, and it may often fail if, for example, you send a write to a follower.

What you can do is use a client with bolt+routing and point it at any IP of the 3. It will bootstrap a routing table, and then as nodes fail or get added/removed from the cluster topology, the client will keep that routing table up to date. If you connect to machine A in cluster A, B, C, it will discover all three; then later, even if A is removed, it'll know to keep talking to B, C, and (possibly) a new D.
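As a minimal sketch of that pattern with the 1.7-era Python driver (the hostname and credentials here are hypothetical; newer 4.x+ drivers use the neo4j:// URI scheme instead of bolt+routing://):

from neo4j import GraphDatabase

# Any one core works as the bootstrap contact; the driver fetches the
# full routing table from it and refreshes it as the topology changes.
uri = "bolt+routing://core1.example.com:7687"  # hypothetical hostname
driver = GraphDatabase.driver(uri, auth=("neo4j", "password"))

def add_customer(tx, customer_id):
    tx.run("CREATE (:Customer {CustomerId: $id})", id=customer_id)

def count_customers(tx):
    return tx.run("MATCH (c:Customer) RETURN count(c) AS n").single()["n"]

with driver.session() as session:
    session.write_transaction(add_customer, 42)       # routed to the leader
    print(session.read_transaction(count_customers))  # routed to a follower
driver.close()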

As for auto-scaling groups, these can be helpful insofar as they guarantee availability of a set of instances.

To learn more about these topics, I'd recommend this link: Introduction - Operations Manual

Thanks David. This is a very clear explanation.


@david_allen so as per my understanding there can be only 1 leader in a cluster but multiple followers.
Will a large number of followers cause delay in writes, because all of them have to sync up? Correct me if I am wrong.
Also, please help me out: I want multiple writers in the cluster. Is that possible?

Only a majority of the core members need to commit for a transaction to be fully committed. So with a 3-instance cluster, the leader and 1 follower would need to commit for the txn to be fully committed; the other core member will eventually commit.
If you had a 5-core-member cluster, i.e. 1 leader and 4 followers, you would only need the leader and 2 followers to commit for it to be fully committed. The other 2 members will eventually commit.

It is not possible to have multiple LEADERS/writers.

If you have a performance concern, maybe create read replicas to satisfy the READ requests and leave the core members solely responsible for writes (a config sketch follows below).
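For reference, a minimal neo4j.conf sketch for a 3.x read replica; the core hostnames are hypothetical:

# Run this instance as a read replica rather than a core member
dbms.mode=READ_REPLICA
# Hypothetical core members the replica contacts to join the cluster
causal_clustering.initial_discovery_members=core1:5000,core2:5000,core3:5000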


Hey @dana_canzano ,
we have a 4-node Neo4j cluster (each node with 16GB RAM) running now,
and I want to make my search queries more efficient.
For read performance on a follower or read-replica node, is it possible to use a heap size of more than 16GB? If yes, then please let me know.

Thank you

If each cluster member has a total of 16GB RAM, I'm not sure how you would be able to set heap to exceed total RAM. Is there a reason you think that heap size is the concern for query performance?

@dana_canzano thank you for replying.
I have read this line somewhere, though I am not sure about it:
"For clusters, set up a single DNS record with multiple A entries, each pointing to the cluster members."
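If it helps, that advice corresponds to DNS-based discovery in the 3.x causal clustering config; a sketch, assuming a hypothetical record name:

# Resolve one DNS record with multiple A entries into the member list
causal_clustering.discovery_type=DNS
causal_clustering.initial_discovery_members=cluster.example.com:5000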

Here is my query:
MATCH (p1:Customer)
WITH collect(DISTINCT p1.CustomerId) AS ws
UNWIND ws AS ws1
MATCH (p:Customer {CustomerId: ws1})-[x:InteractsWith]->(pr:Product)<-[:HasProduct]-(c:Category)
WITH c.Category AS Category, x.Date AS date, p.CustomerId AS Id
ORDER BY date DESC
WITH collect(DISTINCT Category)[..5] AS category, Id
UNWIND category AS w2
MATCH (c:Category {Category: w2})-[:HasProduct]->(pr:Product)
RETURN Id, w2, collect(pr.ProductId)[..3] AS pid
It has 38000 distinct CustomerIds.
So, to speed up the query, I increased the heap size to 16GB, and it ran out of memory and collapsed.

One more way I can think of is to run the query on all 4 nodes by splitting the customers,
for example customers 0 to 9500 on node 1, customers 9500 to 19000 on node 2, and so on, while increasing the heap size of each node.
If there is any other way, please let me know.
Thank you again

OK, so I'm confused. Your prior response spoke of performance issues and wondered if heap could be increased, but now the next response is about setting up DNS in a clustered environment?

Do you have a performance concern? If so, can you provide more details?

Do you have a concern as to how to connect to a cluster member and need DNS help? If so, can you provide more details?

Or is this something entirely different?

Hey, I just want to reduce my querying time. I tried executing the query that I mentioned in my reply and it was taking substantial time, so I tried increasing the heap, assuming querying speed depends on heap (correct me if I am wrong). I increased my max heap size to 16GB, which is the total RAM available on my node. When I executed the query for a subset (size=2500) it returned results in 2.5 minutes, but when I executed it for the complete dataset (size=38000), after running for 500 seconds the memory consumption was around 100% and the node collapsed.
So that was the exact scenario: I increased the heap size expecting it would speed up the process.
If you can help me speed up my query by any method, that would be great.
Thank you

Setting heap to 16GB where the total RAM on the machine is 16GB is certainly going to result in an out-of-memory condition. If you assign all RAM to the Neo4j Java process through heap allocation, you leave no RAM for the OS, for any other process, or for Neo4j's own off-heap page cache.
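As a rough illustration only (the right split depends on your store size and workload, so treat these numbers as placeholders), a 16GB machine might be divided along these lines in neo4j.conf:

# Placeholder values: leave headroom for the OS and other processes
dbms.memory.heap.initial_size=6g
dbms.memory.heap.max_size=6g
# Off-heap cache for the store files
dbms.memory.pagecache.size=6g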

Given your query, do you have any indexes on the labels :Customer or :Category?
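For reference, creating them would look like this in 3.x Cypher (label and property names taken from the query above):

// Index the properties used for lookups in the query
CREATE INDEX ON :Customer(CustomerId);
CREATE INDEX ON :Category(Category);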

Yeah, I understand that we cannot set heap memory equal to RAM, as that will throw a memory error, but I wanted to speed up the query and for that I maximized the heap.
Yeah, I have created indexes on all the node properties used in the query.
One more thing that I found: when I executed the query for the subset, the terminal showed "displaying 1000 rows, completed in 12 ms", but the query had been running for 150 seconds.

  1. Can you preface the query with EXPLAIN (see the example below) and return the results?

  2. Did you configure the parameter dbms.memory.pagecache.size in conf/neo4j.conf?
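Prefacing just means putting the keyword first; for example, against a made-up category value:

// 'Books' is a hypothetical value; EXPLAIN shows the plan without running the query
EXPLAIN
MATCH (c:Category {Category: 'Books'})-[:HasProduct]->(pr:Product)
RETURN pr.ProductId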

I set the pagecache size to 4G.
I prefaced the query with EXPLAIN and here are my results