Retrieving mapped data problem

I have a problem retrieving data mapped to an imported RDF ontology.

I want to achieve something like what is described in the docs: "We have a graph in Neo4j that we want to publish as JSON-LD through a REST api, but we want to map the elements in our graph (labels, property names, relationship names) to a public vocabulary so our API 'speaks' that public vocabulary and is therefore easily consumable by applications that 'speak' the same vocabulary."

I've successfully imported an RDF schema into Neo4j with the n10s plugin. I figured out how to export specific elements with a special URI format, etc.; this part works fine. I then created mappings to an existing schema namespace for data that was not imported via RDF but entered either manually or via OGM. Querying that data with any of the RDF export options returns a largely empty response, even though the data exists, has proper mappings, and is queryable with Cypher.
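For reference, this is roughly how the mappings were defined (a sketch; the vocabulary URIs, prefix, and graph element names below are illustrative, not my actual ones):

```cypher
// Register a namespace prefix for the public vocabulary...
CALL n10s.nsprefixes.add("sch", "http://schema.org/");

// ...then map graph elements (labels, property names, relationship names)
// to elements of that vocabulary
CALL n10s.mapping.add("http://schema.org/Person", "Customer");
CALL n10s.mapping.add("http://schema.org/name", "customerName");
```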

For example, I've followed the docs step by step and ended up in the same situation: although the data is retrievable via Cypher and the proper mappings are in place, data not imported with n10s cannot be retrieved with describe (:GET /rdf/neo4j/describe/312) or with:

:POST /rdf/neo4j/cypher
{ "cypher" : "MATCH path = (n:Customer { customerID : 'GROSR'})-[:PURCHASED]->(o)-[:ORDERS]->()-[:PART_OF]->(:Category { categoryName : 'Beverages'}) RETURN path " , "format": "RDF/XML" }


<?xml version="1.0" encoding="UTF-8"?>


Perhaps I misunderstood the idea: I assumed I could keep data in a somewhat different format than the imported RDF, simply map it to a public vocabulary, and the plugin would then retrieve any of that data in the desired format.
Can you advise what could be wrong in the environment, the configuration, or my usage?

Regards, Iuliia

Hi Iuliia, thanks for the detailed description!
While we try to reproduce the issue, could you please confirm the versions of Neo4j and n10s that you're working with?


Hi again Iuliia, I think I have an idea of what might be going on here:
You say you've created some graph data (via OGM or any other way) and you've also imported some RDF using n10s. Then you've defined some mappings and when trying to see them applied on the RDF endpoint, you get an empty triple set. If this is correct, then here are a couple of ideas:

  1. If all you want to do is expose your graph data as RDF according to a specific vocabulary, then the mappings are all you need; you don't need to import the RDF schema definition of the vocabulary. So my guess is that if you delete the imported RDF and then drop your graph config, your mappings will be applied and the RDF endpoint will work as expected. Here's how to do this without affecting the non-RDF data:
    MATCH (r:Resource) DETACH DELETE r;
    CALL n10s.graphconfig.drop();

The reason this happens is that the GraphConfig drives how components like the RDF exporter, the validator, and the microinferencer operate on the graph data. When you initialised the config with CALL n10s.graphconfig.init() in order to import the RDF, you were actually telling the exporter that the data in your graph was RDF with namespace prefixes shortened (handleVocabUris: "SHORTEN" is one of the default settings). So when the exporter tries to serialise the result of your Cypher query, it expects the returned nodes and relationships to be in that format, which they're not, hence the empty set returned.
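To illustrate the mismatch (label, property, and prefix names below are illustrative): in SHORTEN mode, imported RDF resources carry prefixed names and a :Resource label, while manually created data uses plain names the exporter doesn't recognise.

```cypher
// What the exporter expects in SHORTEN mode: nodes imported from RDF,
// with a uri property and namespace-prefixed labels/properties
MATCH (n:Resource:ns0__Customer)
RETURN n.uri, n.ns0__customerName;

// What your OGM/manually created data actually looks like: plain names,
// no uri property -- so the SHORTEN-mode exporter returns nothing for it
MATCH (n:Customer)
RETURN n.customerID, n.customerName;
```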

  2. For the case of having both non-RDF and RDF data coexisting in the graph, we created the handleVocabUris: "IGNORE" config option. In this mode the imported RDF is stripped of all namespace elements and integrates nicely with preexisting data in Neo4j. But you bring up an interesting case that we had not thought of: what if we want to import RDF data keeping the namespace info in, and use it alongside data not imported as RDF? That would currently be a limitation in n10s, I'm afraid.

Let me know if 1. solves your problem, or if you need something like what I describe in 2.; if so, maybe share a bit more about your use case so we can do an informed analysis of the requirement.



Neo4j 4.0.4


Thanks Iuliia, let me know if my comment makes sense and what are your thoughts on it.



If all you want to do is expose your graph data as RDF according to a specific vocabulary, then the mappings are all you need, you don't need to import the RDF schema definition of the vocabulary.

Thanks for confirming that.
You were totally right about me trying to apply mappings to non-RDF data alongside imported RDF. I was experimenting locally, so keeping both in a single running Neo4j instance was my choice.

Idea #1 totally worked, thanks for the clarification about how it works and why it didn't work as I expected.

Idea #2: I've re-imported the data from RDF alongside the non-RDF data with the handleVocabUris: "IGNORE" config option. As a result, describe doesn't work for either the RDF or the non-RDF data, returning a blank response.
So there is no namespace info, but the data still cannot be described.

About sharing the details: I was assured that the company (where I work as a Java software engineer) would like to get properly acquainted with you, as the project does have an interesting use case, presumably from your side as well.

We're adopting and building a service on top of the RealEstateCore ontology, you can get a look here. We're now in the process of migrating to Neo4j as our data storage, and it would be beneficial for us to use n10s to validate the code, schemas, and representation/description data we use against the REC ontology itself.

So for now I am investigating almost everything that could be done, and I'm not sure there will be a need to keep RDF and non-RDF data side by side, let alone RDF with namespaces alongside non-RDF data.

Great to hear!

Re. the issue with the describe method when working in 'IGNORE' mode: a fix has been pushed to GitHub, so you can either build from source or wait for the next release, expected in approximately a week.



oh, and the project you're working on looks quite interesting. I hadn't heard about RealEstateCore before.
It would be great to have a call to discuss, please feel free to reach out to me privately on: jesus dot barrasa at neo4j dot com