Trying to understand why a Neo4j Docker instance shuts down by itself - how to start investigating?

Hello All,

We are running a Neo4j Enterprise 4.11 instance in Docker on an EC2 instance to back an application, and it regularly goes down. We have Lambda functions (Node) that query the instance.

We have turned the logs and metrics up to debug level, but we have a hard time figuring out what they mean.
We have been "tuning" the memory with the values recommended by the Neo4j memory recommendation tool (neo4j-admin memrec), and we launch the Docker container like this:

 docker run --restart always \
   --publish=7474:7474 --publish=7687:7687 \
   --volume=/data/graph:/data \
   --volume=/data/logs:/logs \
   --env NEO4J_dbms_memory_heap_initial__size=3600m \
   --env NEO4J_dbms_memory_heap_max__size=3600m \
   --env NEO4J_dbms_memory_pagecache_size=2G \
   --env NEO4J_dbms_logs_query_enabled=INFO \
   --env NEO4J_dbms_logs_query_threshold=5s \
   --env NEO4J_dbms_logs_debug_level=DEBUG \
   --ulimit=nofile=90000:90000 \
   neo4j
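
Since the container runs with --restart always, we assume the first thing to check after it disappears is how it actually exited (an exit code of 137 together with the OOMKilled flag would point at the host's OOM killer, given the 3600m heap plus 2G page cache on this instance). We have been checking roughly like this, where neo4j-graph is just a placeholder for the container name (ours is auto-generated since we pass no --name):

 # Show the exit code, whether Docker flagged an OOM kill, and how often
 # the restart policy has restarted the container
 docker inspect --format '{{.State.ExitCode}} {{.State.OOMKilled}} {{.RestartCount}}' neo4j-graph

 # Check the kernel log for OOM-killer activity around the crash time
 sudo dmesg -T | grep -i -E 'out of memory|oom'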

Just before a crash we usually see logs like these:

2020-10-05 23:19:16.747+0000 DEBUG [io.netty.buffer.PoolThreadCache] Freed 23 thread-local buffer(s) from thread: neo4j.BoltNetworkIO-3
2020-10-05 23:19:16.747+0000 DEBUG [io.netty.buffer.PoolThreadCache] Freed 24 thread-local buffer(s) from thread: neo4j.BoltNetworkIO-1
2020-10-05 23:19:16.748+0000 DEBUG [io.netty.buffer.PoolThreadCache] Freed 26 thread-local buffer(s) from thread: neo4j.BoltNetworkIO-2
2020-10-05 23:19:16.748+0000 DEBUG [io.netty.buffer.PoolThreadCache] Freed 22 thread-local buffer(s) from thread: neo4j.BoltNetworkIO-4
2020-10-05 23:19:16.749+0000 DEBUG [o.n.b.t.NettyServer] Event loop group shut down
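
As far as we can tell, the "Event loop group shut down" line looks like an orderly Bolt server stop rather than a hard crash, so we have started grepping the mounted debug log for whatever triggered the stop around that timestamp (the path matches the --volume mapping above; the search terms are just our guesses):

 # Look for lifecycle messages near the shutdown in the debug log
 grep -n -i -E 'stopping|shutdown|signal|stopped' /data/logs/debug.log | tail -n 50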

The other logs (the standard output) do not show any meaningful information.
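
For completeness, this is roughly how we pull the container's stdout after it comes back (again, neo4j-graph is just a placeholder for the container name):

 # Last stdout lines, with timestamps, from the running container
 docker logs --timestamps --tail 200 neo4j-graph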

What are we missing?
How should we start investigating?

Thanks for any help!