Can't back up remote database using Neo4j

It is exactly as in the logs above... it just breaks and doesn't record anything...

I would still like to see anything that was written to neo4j.log, debug.log, and the file you specified for the backup. It might also be useful to see what you modified in the neo4j.conf file. More information will help us to help you better.

Elaine

If the database is crashing, you need to investigate why. I suggest that for starters, you take the database down and dump it. Then perform a consistency check on it.
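For a Neo4j 3.3 tarball install, that sequence might look like the sketch below. The install path, dump destination, and database name are assumptions, not taken from this thread, and the block only prints the commands so they can be reviewed before running:

```shell
# Hypothetical 3.3 layout; adjust NEO4J_HOME, DB, and the dump path to match.
NEO4J_HOME=/home/ubuntu/neo4j-enterprise-3.3.3
DB=graph.db
# 1. stop the database, 2. dump it, 3. run a consistency check on the stopped store
echo "$NEO4J_HOME/bin/neo4j stop"
echo "$NEO4J_HOME/bin/neo4j-admin dump --database=$DB --to=/backups/$DB.dump"
echo "$NEO4J_HOME/bin/neo4j-admin check-consistency --database=$DB"
```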

There is definitely something wrong here that needs to be investigated.

Elaine

I made a dump of the DB and then copied it to the remote server, so I can make incremental backups against it.

I launched the neo4j-admin backup script remotely and afterwards got the following response:

Destination is not empty, doing incremental backup...
Doing consistency check...
2020-05-29 13:34:37.631+0000 INFO [o.n.k.i.s.f.RecordFormatSelector] Selected RecordFormat:StandardV3_2[v0.A.8] record format from store /home/ubuntu/backup/graph.db
2020-05-29 13:34:37.631+0000 INFO [o.n.k.i.s.f.RecordFormatSelector] Format not configured. Selected format from the store: RecordFormat:StandardV3_2[v0.A.8]
2020-05-29 13:34:37.673+0000 INFO [o.n.m.MetricsExtension] Initiating metrics...
....................  10%
...Killed

The log on the server of the DB being backed up:

2020-05-29 13:34:22.442+0000 INFO [o.n.b.BackupImpl] BackupServer:13462-1: Incremental backup started...
2020-05-29 13:34:22.446+0000 INFO [o.n.b.BackupImpl] BackupServer:13462-1: Incremental backup finished.

When I check the consistency of the dump, it seems OK:


Does this mean the backup was successfully done?

So you successfully dumped the database to a file. That is great.

But... you still don't know if the database you dumped is consistent. You must run the consistency check on a database that is not started. You cannot do a consistency check on a dump file.
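In 3.3 terms, that means first loading the .dump back into a (stopped) store with neo4j-admin load, then running check-consistency against it. A sketch, with a hypothetical dump path; the block only prints the commands:

```shell
# check-consistency works on a stopped store, not on a .dump archive,
# so the dump has to be loaded into a database directory first.
DUMP=/home/ubuntu/backup/graph.db.dump   # hypothetical path
echo "neo4j-admin load --database=graph.db --from=$DUMP --force"
echo "neo4j-admin check-consistency --database=graph.db"
```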

If that consistency check is successful, then try to back up the database locally first, to see whether a local backup works at all.

Elaine

Hello, the consistency check on the DB gives the following error:

2020-05-30 09:54:13.573+0000 INFO [o.n.k.i.s.f.RecordFormatSelector] Selected RecordFormat:StandardV3_2[v0.A.8] record format from store /home/ubuntu/backup/graph.db
2020-05-30 09:54:13.577+0000 INFO [o.n.k.i.s.f.RecordFormatSelector] Format not configured. Selected format from the store: RecordFormat:StandardV3_2[v0.A.8]
2020-05-30 09:54:13.984+0000 INFO [o.n.m.MetricsExtension] Initiating metrics...
....................  10%
....................  20%
....................  30%
....................  40%
....................  50%
....................  60%
....................  70%
....................  80%
.......2020-05-30 10:40:27.039+0000 WARN [o.n.c.ConsistencyCheckService] Label index was not properly shutdown and rebuild is required.
	Label index: neostore.labelscanstore.db
2020-05-30 10:40:29.602+0000 WARN [o.n.c.ConsistencyCheckService] Index was not properly shutdown and rebuild is required.
	Index[ IndexRule[id=1, descriptor=Index( GENERAL, :label[0](property[0]) ), provider={key=lucene, version=1.0}] ]
2020-05-30 10:40:29.602+0000 WARN [o.n.c.ConsistencyCheckService] Index was not properly shutdown and rebuild is required.
	Index[ IndexRule[id=2, descriptor=Index( GENERAL, :label[0](property[1]) ), provider={key=lucene, version=1.0}] ]
2020-05-30 10:40:29.603+0000 WARN [o.n.c.ConsistencyCheckService] Index was not properly shutdown and rebuild is required.
	Index[ IndexRule[id=3, descriptor=Index( GENERAL, :label[1](property[0]) ), provider={key=lucene, version=1.0}] ]
2020-05-30 10:40:29.603+0000 WARN [o.n.c.ConsistencyCheckService] Index was not properly shutdown and rebuild is required.
	Index[ IndexRule[id=4, descriptor=Index( GENERAL, :label[1](property[1]) ), provider={key=lucene, version=1.0}] ]
2020-05-30 10:40:29.604+0000 WARN [o.n.c.ConsistencyCheckService] Index was not properly shutdown and rebuild is required.
	Index[ IndexRule[id=5, descriptor=Index( GENERAL, :label[2](property[0]) ), provider={key=lucene, version=1.0}] ]
2020-05-30 10:40:29.604+0000 WARN [o.n.c.ConsistencyCheckService] Index was not properly shutdown and rebuild is required.
	Index[ IndexRule[id=6, descriptor=Index( GENERAL, :label[2](property[1]) ), provider={key=lucene, version=1.0}] ]
2020-05-30 10:40:29.604+0000 WARN [o.n.c.ConsistencyCheckService] Index was not properly shutdown and rebuild is required.
	Index[ IndexRule[id=7, descriptor=Index( GENERAL, :label[3](property[0]) ), provider={key=lucene, version=1.0}] ]
2020-05-30 10:40:29.605+0000 WARN [o.n.c.ConsistencyCheckService] Index was not properly shutdown and rebuild is required.
	Index[ IndexRule[id=8, descriptor=Index( GENERAL, :label[3](property[1]) ), provider={key=lucene, version=1.0}] ]
2020-05-30 10:40:29.605+0000 WARN [o.n.c.ConsistencyCheckService] Index was not properly shutdown and rebuild is required.
	Index[ IndexRule[id=9, descriptor=Index( GENERAL, :label[0](property[15]) ), provider={key=lucene, version=1.0}] ]

However, it then continues the consistency check after these warnings and reaches 100%.

Do I need to fix these (and if so, how)? And could this affect the online backups, and why?

Ok, I reset and repopulated the broken indexes in the original database.

When I try to back it up remotely I'm again getting this kind of error:

2020-05-30 17:25:13.480+0000 INFO [o.n.c.s.StoreCopyClient] Copying neostore.propertystore.db
command failed: Backup failed: Unexpected Exception

There is no record with that timestamp in the logs, either on the DB being backed up or at the remote location (where I'm backing up to).

What to do?

Did you perform the consistency check after you recreated the indexes?

To be safe, I would do a dump of the database after you have confirmed that it is consistent.

Then... once you bring the database online, can you do a LOCAL backup? I still wonder whether there is something wrong with your remote backup configuration.

Elaine

Hi @elaine_rosenber

Should I perform the consistency check again after I recreated the indexes?

Then, as I understand it, you suggest I do an offline backup (dump).

Then put the DB back online and do a local ONLINE backup to see if that works, right?

Regarding the settings for the remote backup: as I understand it, the only settings I can use are those in the neo4j-admin command on the server that I am backing up to.

I currently set HEAP_SIZE=2G beforehand and then pass the following in the actual neo4j-admin backup command:

--timeout=59m --pagecache=2G

The DB I want to back up is about 41 GB.

The full line I use to backup is:

sudo neo4j-admin backup --backup-dir=backup --name=graph.db --from=my.server.ip:port_address --timeout=59m --pagecache=4G

Of course, on the local DB that I am backing up, I have set up the following in neo4j.conf:

dbms.memory.heap.initial_size=4024m
dbms.memory.heap.max_size=7300m

dbms.memory.pagecache.size=4g

dbms.backup.enabled=true

dbms.backup.address=0.0.0.0:port_address

Then, when I do the backup, I open that port_address on my server and everything works fine.

However, as I mentioned before, at some point the process stalls or quits (see the numerous errors above).

There is no trace of anything in the logs, either on the server I'm backing up to or on the one I'm backing up.

Both are running Neo4j Enterprise 3.3.3.

Is there anything I'm missing, or do all the settings look good to you?

Thank you!

Definitely do a consistency check on the database before you back it up.

If that passes, then do the LOCAL backup.

dbms.backup.address=0.0.0.0:6362

Hopefully that "default" port of 6362 is available?

I would also make sure there are no files in the backup directory. You want the initial backup to occur as a complete backup.
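One quick way to verify the target directory is empty before starting, so the run becomes a full rather than an incremental backup (a temporary directory stands in for the real backup target here):

```shell
# Stand-in for the real backup target; neo4j-admin treats a non-empty
# destination as a request for an incremental backup.
BACKUP_DIR=$(mktemp -d)
if [ -z "$(ls -A "$BACKUP_DIR")" ]; then
  echo "empty: a full backup will run"
else
  echo "not empty: an incremental backup will be attempted"
fi
```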

Elaine

Hi Elaine,

I cannot make that "default" port available, so I'm using another port; however, I specified it in the settings (so it's not 6362 but another combination of digits). Is this a problem? I thought that if the new port is specified in the settings, it should work, no?
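A non-default port should indeed work as long as dbms.backup.address and the --from argument agree. Two quick checks could help rule the port out; the host and port below are placeholders (6362 is just the documented default), and the block only prints the commands:

```shell
HOST=localhost
PORT=6362   # substitute the port from dbms.backup.address
echo "ss -ltn | grep :$PORT    # is anything listening on the backup port?"
echo "nc -vz $HOST $PORT       # is it reachable from the machine running the backup?"
```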

I did as you suggested: ran the consistency check again (it went fine) and then did a local online backup.

neo4j-admin backup --backup-dir=backups --name=graph.db --from=localhost:port_number --timeout=59m --pagecache=2G

It got up to the following point and then stopped with a "Killed" message:

Doing full backup...
2020-06-01 22:24:32.947+0000 INFO [o.n.c.s.StoreCopyClient] Copying index.db
2020-06-01 22:24:32.983+0000 INFO [o.n.c.s.StoreCopyClient] Copied index.db 797.00 B
2020-06-01 22:24:32.983+0000 INFO [o.n.c.s.StoreCopyClient] Copying neostore.nodestore.db.labels
2020-06-01 22:24:33.059+0000 INFO [o.n.c.s.StoreCopyClient] Copied neostore.nodestore.db.labels 7.97 kB
2020-06-01 22:24:33.060+0000 INFO [o.n.c.s.StoreCopyClient] Copying neostore.nodestore.db
2020-06-01 22:24:34.703+0000 INFO [o.n.c.s.StoreCopyClient] Copied neostore.nodestore.db 144.63 MB
2020-06-01 22:24:34.704+0000 INFO [o.n.c.s.StoreCopyClient] Copying neostore.propertystore.db.index.keys
2020-06-01 22:24:34.711+0000 INFO [o.n.c.s.StoreCopyClient] Copied neostore.propertystore.db.index.keys 7.98 kB
2020-06-01 22:24:34.711+0000 INFO [o.n.c.s.StoreCopyClient] Copying neostore.propertystore.db.index
2020-06-01 22:24:34.713+0000 INFO [o.n.c.s.StoreCopyClient] Copied neostore.propertystore.db.index 8.00 kB
2020-06-01 22:24:34.716+0000 INFO [o.n.c.s.StoreCopyClient] Copying neostore.propertystore.db.strings
2020-06-01 22:24:50.685+0000 INFO [o.n.c.s.StoreCopyClient] Copied neostore.propertystore.db.strings 1.23 GB
2020-06-01 22:24:50.686+0000 INFO [o.n.c.s.StoreCopyClient] Copying neostore.propertystore.db.arrays
2020-06-01 22:24:50.696+0000 INFO [o.n.c.s.StoreCopyClient] Copied neostore.propertystore.db.arrays 8.00 kB
2020-06-01 22:24:50.696+0000 INFO [o.n.c.s.StoreCopyClient] Copying neostore.propertystore.db
Killed

In the debug.log it says:

2020-06-01 22:24:29.346+0000 INFO [o.n.k.i.DiagnosticsManager]     - Total: 2018-04-17T07:42:55+0000 - 0.00 B
2020-06-01 22:24:29.346+0000 INFO [o.n.k.i.DiagnosticsManager]   - Total: 2018-04-17T07:42:55+0000 - 473.43 MB
2020-06-01 22:24:29.347+0000 INFO [o.n.k.i.DiagnosticsManager]   store_lock: 2018-04-17T07:44:17+0000 - 0.00 B
2020-06-01 22:24:29.347+0000 INFO [o.n.k.i.DiagnosticsManager] Storage summary:
2020-06-01 22:24:29.347+0000 INFO [o.n.k.i.DiagnosticsManager]   Total size of store: 45.68 GB
2020-06-01 22:24:29.347+0000 INFO [o.n.k.i.DiagnosticsManager]   Total size of mapped files: 36.13 GB
2020-06-01 22:24:29.347+0000 INFO [o.n.k.i.DiagnosticsManager] --- STARTED diagnostics for KernelDiagnostics:StoreFiles END ---
2020-06-01 22:24:31.482+0000 INFO [o.n.b.BackupImpl] BackupServer:13462-1: Full backup started...
2020-06-01 22:24:31.486+0000 INFO [o.n.k.i.t.l.c.CheckPointerImpl] Check Pointing triggered by full backup [3843881]:  Starting check pointing...
2020-06-01 22:24:31.486+0000 INFO [o.n.k.i.t.l.c.CheckPointerImpl] Check Pointing triggered by full backup [3843881]:  Starting store flush...
2020-06-01 22:24:31.607+0000 INFO [o.n.k.i.t.l.c.CheckPointerImpl] Check Pointing triggered by full backup [3843881]:  Store flush completed
2020-06-01 22:24:31.607+0000 INFO [o.n.k.i.t.l.c.CheckPointerImpl] Check Pointing triggered by full backup [3843881]:  Starting appending check point entr$
2020-06-01 22:24:31.611+0000 INFO [o.n.k.i.t.l.c.CheckPointerImpl] Check Pointing triggered by full backup [3843881]:  Appending check point entry into th$
2020-06-01 22:24:31.611+0000 INFO [o.n.k.i.t.l.c.CheckPointerImpl] Check Pointing triggered by full backup [3843881]:  Check pointing completed
2020-06-01 22:24:31.611+0000 INFO [o.n.k.i.t.l.p.LogPruningImpl] Log Rotation [944]:  Starting log pruning.
2020-06-01 22:24:31.613+0000 INFO [o.n.k.i.t.l.p.LogPruningImpl] Log Rotation [944]:  Log pruning complete.
2020-06-01 22:24:32.966+0000 INFO [o.n.k.i.DiagnosticsManager] --- SERVER STARTED START ---
2020-06-01 22:24:35.582+0000 INFO [o.n.k.i.DiagnosticsManager] --- SERVER STARTED END ---
2020-06-01 22:24:55.377+0000 WARN [o.n.k.i.c.MonitorGc] GC Monitor: Application threads blocked for 289ms.
2020-06-01 22:25:06.109+0000 WARN [o.n.k.i.c.MonitorGc] GC Monitor: Application threads blocked for 287ms.

So it looks like the DB restarted, right? But why would it do that? It was not being accessed by the app, and the backup was the only operation I was performing on it...
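One explanation worth checking for a bare "Killed" (an assumption, not something the logs above confirm) is the Linux out-of-memory killer terminating the backup JVM; Neo4j's own logs would then show nothing. The kernel log usually records it. The block greps a hypothetical sample line to show what a hit looks like; on the real machine you would grep dmesg instead:

```shell
# On the real host:  sudo dmesg -T | grep -iE 'out of memory|killed process'
# Below, a hypothetical kernel-log line stands in for the dmesg output.
SAMPLE='Out of memory: Killed process 1234 (java) total-vm:5242880kB, anon-rss:4194304kB'
echo "$SAMPLE" | grep -iE 'out of memory|killed process'
```

A hit naming the java process around the time of the failure would point to memory pressure rather than a Neo4j bug.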

I then did another attempt.

This was my terminal output:

Doing full backup...
2020-06-01 23:20:25.453+0000 INFO [o.n.c.s.StoreCopyClient] Copying index.db
2020-06-01 23:20:25.515+0000 INFO [o.n.c.s.StoreCopyClient] Copied index.db 797.00 B
2020-06-01 23:20:25.515+0000 INFO [o.n.c.s.StoreCopyClient] Copying neostore.nodestore.db.labels
2020-06-01 23:20:25.658+0000 INFO [o.n.c.s.StoreCopyClient] Copied neostore.nodestore.db.labels 7.97 kB
2020-06-01 23:20:25.659+0000 INFO [o.n.c.s.StoreCopyClient] Copying neostore.nodestore.db
2020-06-01 23:20:27.669+0000 INFO [o.n.c.s.StoreCopyClient] Copied neostore.nodestore.db 144.63 MB
2020-06-01 23:20:27.673+0000 INFO [o.n.c.s.StoreCopyClient] Copying neostore.propertystore.db.index.keys
2020-06-01 23:20:27.677+0000 INFO [o.n.c.s.StoreCopyClient] Copied neostore.propertystore.db.index.keys 7.98 kB
2020-06-01 23:20:27.678+0000 INFO [o.n.c.s.StoreCopyClient] Copying neostore.propertystore.db.index
2020-06-01 23:20:27.685+0000 INFO [o.n.c.s.StoreCopyClient] Copied neostore.propertystore.db.index 8.00 kB
2020-06-01 23:20:27.693+0000 INFO [o.n.c.s.StoreCopyClient] Copying neostore.propertystore.db.strings
2020-06-01 23:20:41.915+0000 INFO [o.n.c.s.StoreCopyClient] Copied neostore.propertystore.db.strings 1.23 GB
2020-06-01 23:20:41.916+0000 INFO [o.n.c.s.StoreCopyClient] Copying neostore.propertystore.db.arrays
2020-06-01 23:20:41.933+0000 INFO [o.n.c.s.StoreCopyClient] Copied neostore.propertystore.db.arrays 8.00 kB
2020-06-01 23:20:41.933+0000 INFO [o.n.c.s.StoreCopyClient] Copying neostore.propertystore.db
command failed: Backup failed: Unexpected Exception

Got these errors in the debug:

2020-06-01 23:20:23.325+0000 INFO [o.n.k.i.t.l.c.CheckPointerImpl] Check Pointing triggered by full backup [3843881]:  Starting check pointing...
2020-06-01 23:20:23.325+0000 INFO [o.n.k.i.t.l.c.CheckPointerImpl] Check Pointing triggered by full backup [3843881]:  Starting store flush...
2020-06-01 23:20:23.422+0000 INFO [o.n.k.i.t.l.c.CheckPointerImpl] Check Pointing triggered by full backup [3843881]:  Store flush completed
2020-06-01 23:20:23.422+0000 INFO [o.n.k.i.t.l.c.CheckPointerImpl] Check Pointing triggered by full backup [3843881]:  Starting appending check point entr$
2020-06-01 23:20:23.425+0000 INFO [o.n.k.i.t.l.c.CheckPointerImpl] Check Pointing triggered by full backup [3843881]:  Appending check point entry into th$
2020-06-01 23:20:23.425+0000 INFO [o.n.k.i.t.l.c.CheckPointerImpl] Check Pointing triggered by full backup [3843881]:  Check pointing completed
2020-06-01 23:20:23.425+0000 INFO [o.n.k.i.t.l.p.LogPruningImpl] Log Rotation [944]:  Starting log pruning.
2020-06-01 23:20:23.426+0000 INFO [o.n.k.i.t.l.p.LogPruningImpl] Log Rotation [944]:  Log pruning complete.
2020-06-01 23:20:50.548+0000 WARN [o.n.k.i.c.MonitorGc] GC Monitor: Application threads blocked for 204ms.

Another fail....

It happens all the time!

In the operations manual for later versions of Neo4j, the backup listen address must be in the range 6362-6372. Would you be able to use that range for a local backup?

Elaine

No, it is not possible to use that range.

I also don't see why this should be an issue.

If a different port is set in the config file, it should work, no?

Also, I don't think the backup fails because of the port. First, it starts and always runs for a different length of time before failing. And when I was doing it remotely, it got nearly to the end.

@elaine_rosenber, as you suggested, I changed the port to 6362 (my hosting provider agreed to do that) and tried to run the backup again.

Fail again. This is what I see:

2020-06-02 10:33:22.540+0000 INFO [o.n.c.s.StoreCopyClient] Copied neostore.propertystore.db 28.40 GB
2020-06-02 10:33:22.645+0000 INFO [o.n.c.s.StoreCopyClient] Copying neostore.relationshipstore.db
command failed: Backup failed: Unexpected Exception

The log simply says:

2020-06-02 10:40:02.459+0000 WARN [o.n.k.i.c.MonitorGc] GC Monitor: Application threads blocked for 235ms.
2020-06-02 10:49:24.324+0000 INFO [o.n.k.i.t.l.c.CheckPointerImpl] Check Pointing triggered by scheduler for time threshold [3844003]:  Starting check poi$
2020-06-02 10:49:24.324+0000 INFO [o.n.k.i.t.l.c.CheckPointerImpl] Check Pointing triggered by scheduler for time threshold [3844003]:  Starting store flu$
2020-06-02 10:49:24.483+0000 INFO [o.n.k.i.s.c.CountsTracker] About to rotate counts store at transaction 3844003 to [/$
2020-06-02 10:49:24.490+0000 INFO [o.n.k.i.s.c.CountsTracker] Successfully rotated counts store at transaction 3844003 to [/$
2020-06-02 10:49:28.943+0000 INFO [o.n.k.i.t.l.c.CheckPointerImpl] Check Pointing triggered by scheduler for time threshold [3844003]:  Store flush comple$
2020-06-02 10:49:28.943+0000 INFO [o.n.k.i.t.l.c.CheckPointerImpl] Check Pointing triggered by scheduler for time threshold [3844003]:  Starting appending$
2020-06-02 10:49:28.944+0000 INFO [o.n.k.i.t.l.c.CheckPointerImpl] Check Pointing triggered by scheduler for time threshold [3844003]:  Appending check po$
2020-06-02 10:49:28.945+0000 INFO [o.n.k.i.t.l.c.CheckPointerImpl] Check Pointing triggered by scheduler for time threshold [3844003]:  Check pointing com$
2020-06-02 10:49:28.945+0000 INFO [o.n.k.i.t.l.p.LogPruningImpl] Log Rotation [944]:  Starting log pruning.
2020-06-02 10:49:28.947+0000 INFO [o.n.k.i.t.l.p.LogPruningImpl] Log Rotation [944]:  Log pruning complete.

The database is now totally verified and checked for consistency.

There is definitely a problem with how backup is implemented in Neo4j.

I have been trying to get this to work for about three weeks now.

I tried it locally, remotely, using different hosting environments, doing it offline and then online — and the online backup is simply NOT working.

Shall we just call this off and admit that online backups in the commercial version of Neo4j simply don't work?

On my side I guess I will have to start looking for an alternative, like TigerGraph or something similar.

@deemeetree
@elaine_rosenber asked me to look into this.

We have a number of commercial customers who successfully run backup in production.
Is there a reason to use Neo4j 3.3? Yes, it should work, but 3.3 is not our most recent Neo4j release. Are you required to use 3.3 and not, for example, 3.5.x or 4.0.x?
In the updates above there is a comment:

The DB I'm backing up is Enterprise 3.3.3 and the one I'm backing up with is 3.5.14
Does this imply the database is in 3.5.14 format but you are using a 3.3.x neo4j-admin to perform the backup? If so, is there a reason to back up 3.5.14 with 3.3.x rather than simply using a 3.5.x neo4j-admin backup against a 3.5.x database?

Regarding the setting of HEAP_SIZE under 3.3.x: there was a fix in 3.5, such that prior releases may not properly recognize this variable when a neo4j-admin command is run on the same host as a running Neo4j instance.

Can you rerun the backup, but preface the neo4j-admin command with bash -x, i.e.

bash -x ./neo4j-admin backup .... .... .....

The inclusion of bash -x should provide more detail about when the java command is invoked to start the backup. Specifically, I'm interested in a line of output like the following (or similar):

+ exec /usr/bin/java -XX:+UseParallelGC -classpath '/home/neo4j/cluster/instance1/neo4j-enterprise-3.5.18/plugins:/home/neo4j/cluster/instance1/neo4j-enterprise-3.5.18/conf:/home/neo4j/cluster/instance1/neo4j-enterprise-3.5.18/lib/*:/home/neo4j/cluster/instance1/neo4j-enterprise-3.5.18/plugins/*' -Dfile.encoding=UTF-8 org.neo4j.commandline.admin.AdminTool backup --backup-dir=/tmp/

Well, I'm running it from 3.3 because I want to back it up first and then update it to a higher version. I cannot use 4.0 because there are breaking changes (with APOC), so I'm stuck with 3.5, I guess.

I did what you recommended and ran it with the bash. Here is the output:

+ exec /usr/bin/java -classpath '/home/path/neo4j-enterprise-3.3.3/plugins:/home/path/neo4j-enterprise-3.3.3/conf:/home/path/neo4j-enterprise-3.3.3/lib/*:/home/path/neo4j-enterprise-3.3.3/plugins/*' -Dfile.encoding=UTF-8 org.neo4j.commandline.admin.AdminTool backup --backup-dir=backups --name=graph.db --from=localhost:6362 --timeout=59m --pagecache=1G

Right after that it says "Doing full backup..." (and then fails, as always). Right before that line I have the following output:

++ /usr/bin/java -version
++ awk -F '"' '/version/ {print $2}'
+ JAVA_VERSION=1.8.0_252
+ [[ 1.8.0_252 < 1.8 ]]
+ /usr/bin/java -version
+ egrep -q '(Java HotSpot\(TM\)|OpenJDK|IBM) (64-Bit Server|Server|Client|J9) VM'
+ build_classpath
+ CLASSPATH='/home/path/neo4j-enterprise-3.3.3/plugins:/home/path/neo4j-enterprise-3.3.3/conf:/home/path/neo4j-enterprise-3.3.3/lib/*:/home/path/neo4j-enterprise-3.3.3/plugins/*'
+ EXTRA_JVM_ARGUMENTS=-Dfile.encoding=UTF-8
+ class_name=org.neo4j.commandline.admin.AdminTool
+ shift
+ export NEO4J_HOME NEO4J_CONF

From your output of

exec /usr/bin/java -classpath '/home/path/neo4j-enterprise-3.3.3/plugins:/home/path/neo4j-enterprise-3.3.3/conf:/home/path/neo4j-enterprise-3.3.3/lib/*:/home/path/neo4j-enterprise-3.3.3/plugins/*' -Dfile.encoding=UTF-8 org.neo4j.commandline.admin.AdminTool backup --backup-dir=backups --name=graph.db --from=localhost:6362 --timeout=59m --pagecache=1G

we see --timeout=59m --pagecache=1G

we can see that min/max heap is not defined, for if it were we would see references to -Xms and -Xmx in the line above. So Java will simply use its defaults, based on the amount of free RAM when it starts.
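Since no -Xms/-Xmx shows up, one option is to set HEAP_SIZE explicitly before invoking neo4j-admin. A rough budget for an 8 GB host is sketched below; the split is an assumption for illustration, not a Neo4j recommendation:

```shell
# Rough memory budget for the machine running neo4j-admin backup.
TOTAL_MB=8192        # 8 GB host (assumed)
DB_IN_USE_MB=4096    # memory already used by the running database (assumed)
PAGECACHE_MB=1024    # matches --pagecache=1G on the backup command line
OS_RESERVE_MB=1024   # headroom for the OS
HEAP_MB=$(( TOTAL_MB - DB_IN_USE_MB - PAGECACHE_MB - OS_RESERVE_MB ))
echo "export HEAP_SIZE=${HEAP_MB}m   # set before running neo4j-admin backup"
```

Note the earlier caveat in this thread: under 3.3.x the wrapper may not properly pick up HEAP_SIZE when a Neo4j instance is running on the same host.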

How much total RAM is on the instance and how much is free?

Also, with regard to --timeout=59m: I'm not familiar with this argument or how it is handled, and if it does mean timing out after 59 minutes I don't suspect it is in play. But when I run backup I do not have it on my command line, which reports:

+ exec /usr/bin/java -classpath '/home/neo4j/cluster/instance1/neo4j-enterprise-3.3.3/plugins:/home/neo4j/cluster/instance1/neo4j-enterprise-3.3.3/conf:/home/neo4j/cluster/instance1/neo4j-enterprise-3.3.3/lib/*:/home/neo4j/cluster/instance1/neo4j-enterprise-3.3.3/plugins/*' -Dfile.encoding=UTF-8 org.neo4j.commandline.admin.AdminTool backup --backup-dir=/tmp --name=graph.db

@dana_canzano — well, the timeout and pagecache parameters are neo4j-admin params I found in your manual.

Shall I run it without?

The machine has 8 GB of RAM, but only about 4 GB is actually in use at any given moment.

So what should I do? How to run the backup?