Automatic Data Load from Azure Storage to Neo4j

I am automating a data load from Azure Storage to Neo4j.

I have a CSV file in Data Lake Storage, which I am reading with pandas in Azure Databricks.
Below is the code I am using.
Nodes and relationships should be created only when row.Name is not null, and I apply this condition in Python itself to avoid creating null nodes.
However, I am getting around 12 nodes named 'NaN', even though my input file has only 3 rows with a non-null Name field.
There should be no NaN nodes; kindly help if someone has faced the same situation.
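For context, here is a minimal pandas sketch of the filtering step described above, using a made-up inline CSV (the real file and column set are different). It shows that `pd.read_csv` turns empty cells into float NaN, which stringifies as 'NaN' if such rows ever reach Cypher, and that dropping them before building the parameter list removes the problem:

```python
import io

import pandas as pd

# Hypothetical CSV: only 3 of 5 rows have a non-null Name
csv_text = "Name,Dept\nalice,HR\n,IT\nbob,IT\n,HR\ncarol,Ops\n"
data = pd.read_csv(io.StringIO(csv_text))

# Empty cells become float('nan'); str(float('nan')) is the string 'nan'/'NaN',
# which is where NaN-named nodes can come from if rows are not filtered out.
print(data["Name"].isna().sum())  # 2

# Drop the null-Name rows before building the parameter list for Cypher
clean = data.dropna(subset=["Name"])
rows = clean.to_dict("records")
print(len(rows))  # 3
```

The key point is that the filter must be applied to whatever collection is actually sent to Neo4j, not just checked per-iteration in Python.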

Earlier, I was using a FOREACH loop within Cypher, but the TRIM condition produced a non-string error when run through py2neo:
FOREACH(n IN (CASE WHEN trim(row.Name) <> "" THEN [1] ELSE [] END) |
Alternatively, is it possible to apply the TRIM condition in pandas itself? My end result is that there should be no NaN nodes: only nodes with actual values should be created.
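A trim-style check can be replicated in pandas before the data ever reaches Cypher. The sketch below (with a made-up DataFrame) treats both null and whitespace-only Name values as "no name", mirroring Cypher's `trim(row.Name) <> ""`:

```python
import pandas as pd

# Hypothetical data: valid names mixed with None, blank, and whitespace-only
data = pd.DataFrame({"Name": ["alice", None, "  ", "bob", ""]})

# notna() removes nulls; str.strip().ne("") removes empty/whitespace strings
mask = data["Name"].notna() & data["Name"].str.strip().ne("")
clean = data[mask]
print(list(clean["Name"]))  # ['alice', 'bob']
```

Rows filtered this way can then be converted with `to_dict("records")` and passed as the query parameter, so no trimming is needed inside Cypher at all.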
I have similar code in my Cypher query that creates around 60 nodes.

from py2neo import Graph, Node, Relationship
import pandas as pd

graph = Graph("bolt://", auth=("user", "password"))

data = pd.read_csv("/dbfs/mnt/mountname/filename.csv")
print("Column names of data:", data.columns)

# Select the relevant columns (names elided here) and convert to a list of dicts
user_data = data[['']]
user_data = list(user_data.T.to_dict().values())

for index, row in data.iterrows():
    if pd.notna(row["Name"]):
        query = '''UNWIND $rows AS row
        WITH row, split(toString(row.column1), ";") AS colvalue1, split(toString(row.column2), ";") AS colvalue2
        UNWIND colvalue1 AS col1
        UNWIND colvalue2 AS col2
        MERGE (label1:node1 {pr1: col1, pr2: row.column3, test: row.column4, att: row.column5})
        MERGE (label2:node2 {name: col2})
        MERGE (label3:node3 {name: row.column6})
        MERGE (label4:node4 {name: row.column7, attr: row.column5})'''
        # ... similar MERGE statements for the remaining ~60 nodes ...
        graph.run(query, rows=[row.to_dict()])