Knowledge Graph vs RDF: Two Things People Keep Confusing
RDF is a model, not a graph
The Resource Description Framework, or RDF, is a W3C recommendation that specifies how to write down a graph of facts as a set of subject-predicate-object statements called triples. The current version, RDF 1.1, was published in 2014, and the upcoming RDF 1.2 standardisation work covers RDF-star (a syntax for talking about triples themselves). RDF defines a data model and a handful of serialisations — Turtle, N-Triples, JSON-LD, RDF/XML — but it does not define a database, a query language, or a particular knowledge graph.
A knowledge graph, on the other hand, is the populated thing — a body of typed entities and relationships about a domain. You can express a knowledge graph in RDF, in Cypher property graphs, in plain JSON, or even in a relational schema. The choice of expression affects performance, tooling, and interoperability, but not whether what you have is a knowledge graph.
Confusing the two leads to bad architecture decisions. Teams pick RDF because they assume a knowledge graph requires it, even when their use case has no need for OWL reasoning, federated SPARQL queries, or interop with public linked data. Or, going the other way, teams build property graphs and realise too late that interop with semantic-web datasets — Wikidata, DBpedia, schema.org-typed pages — would have been easier with RDF.
RDF is a way of writing down a graph of facts. A knowledge graph is what you have once you have written enough of them down to be useful.
What RDF actually gives you
RDF's atomic unit is the triple: subject, predicate, object. A triple says 'subject has predicate value object'. A set of triples is a graph. Predicates are always IRIs — Internationalised Resource Identifiers — and subjects and objects usually are too (the exceptions are blank nodes, plus literal values in the object position), so two RDF documents from different sources can refer to the same entity without coordination. This is the killer feature: global, unique, decentralised identifiers.
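As a concrete illustration, the triple model fits in a few lines of plain Python. This is a minimal sketch: the example.org IRIs are illustrative, and the serialiser is a toy in the spirit of N-Triples, not a spec-complete writer.

```python
# A graph in the RDF data model is just a set of
# (subject, predicate, object) triples.
triples = {
    ("http://example.org/curie", "http://example.org/discovered",
     "http://example.org/polonium"),
    ("http://example.org/curie", "http://www.w3.org/2000/01/rdf-schema#label",
     '"Marie Curie"'),
}

def to_ntriples(graph):
    """Serialise a set of triples in an N-Triples-like line format:
    IRIs in angle brackets, literals kept as quoted strings."""
    lines = []
    for s, p, o in sorted(graph):
        obj = o if o.startswith('"') else f"<{o}>"
        lines.append(f"<{s}> <{p}> {obj} .")
    return "\n".join(lines)

print(to_ntriples(triples))
```

Note how the graph is a plain set: adding the same triple twice changes nothing, which is exactly the set semantics RDF specifies.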
Around the core data model, the W3C has published a stack: RDFS for class hierarchies and basic schema; OWL for richer ontologies with reasoning support; SPARQL for querying; SHACL for validation. JSON-LD provides a JSON syntax that is interoperable with RDF and is widely used for schema.org markup on web pages. Together these specs cover the full lifecycle of a semantic-web knowledge graph.
Real-world RDF stores include Apache Jena, Stardog, GraphDB, Virtuoso, AllegroGraph, and Amazon Neptune (which speaks both SPARQL and Gremlin). They support a similar feature set — SPARQL querying, named graphs, full-text indexing, OWL reasoning at varying levels — but differ on operational details like clustering, write throughput, and licence cost.
What property graphs do differently
The other main flavour of knowledge graph storage is the property graph, popularised by Neo4j and now standardised as GQL (ISO/IEC 39075:2024). Where RDF talks about triples, property graphs talk about nodes and edges, both of which can carry property maps. The structural difference is small but consequential: in RDF, putting a property on an edge requires a workaround called RDF reification or the newer RDF-star syntax, whereas in a property graph you just attach the property directly.
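A sketch of that structural difference, with illustrative names: the property-graph edge carries its property map directly, while classic RDF reification spreads the same annotated statement across several triples about an intermediate statement node.

```python
# Property graph: "since: 2019" sits directly on the edge.
pg_edge = {
    "from": "alice", "type": "WORKS_AT", "to": "acme",
    "props": {"since": 2019},
}

# Classic RDF reification: the same annotated statement becomes five
# triples about a blank statement node (ex: names are illustrative).
RDF = "http://www.w3.org/1999/02/22-rdf-syntax-ns#"
reified = [
    ("_:stmt1", RDF + "type",      RDF + "Statement"),
    ("_:stmt1", RDF + "subject",   "ex:alice"),
    ("_:stmt1", RDF + "predicate", "ex:worksAt"),
    ("_:stmt1", RDF + "object",    "ex:acme"),
    ("_:stmt1", "ex:since",        "2019"),
]
```

Five triples versus one edge property is the ergonomic gap RDF-star was designed to close.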
Property graphs trade decentralised identifiers for ergonomics. There is no IRI, just a node ID local to the database. Merging two property graphs requires a deliberate id-mapping step, whereas merging two RDF graphs is a set union. In return, property-graph languages like Cypher feel closer to ordinary SQL and are quicker to pick up.
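The two merge behaviours can be sketched in a few lines of Python. All names are illustrative, and the id mapping is hard-coded here; a real system would produce it with an entity-resolution step.

```python
# RDF merge: literally set union, because a shared IRI already
# identifies the same entity in both graphs.
g1 = {("ex:curie", "ex:bornIn", "ex:warsaw")}
g2 = {("ex:curie", "ex:discovered", "ex:polonium")}
merged = g1 | g2  # two facts, one subject, no coordination needed

# Property-graph merge: node ids are local to each database, so an
# explicit id-mapping step is required before the graphs can combine.
nodes_a = {1: {"name": "Marie Curie"}}
nodes_b = {7: {"name": "Marie Curie"}}
id_map = {7: 1}  # produced by a dedup / entity-resolution step

merged_nodes = {nid: dict(props) for nid, props in nodes_a.items()}
for old_id, props in nodes_b.items():
    new_id = id_map.get(old_id, old_id)
    merged_nodes.setdefault(new_id, {}).update(props)
```

The RDF half is one operator; the property-graph half needs the `id_map`, which is exactly the "deliberate step" the text describes.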
Performance characteristics also differ. Property graphs are usually faster for traversal queries — 'find all nodes within five hops' — because the storage layout puts adjacent nodes near each other. Triple stores are usually faster for join-heavy queries that filter on many predicates, because they index every pattern. Modern systems blur this line; FalkorDB, the open-source graph engine that powers KnodeGraph, is a property-graph engine built to serve traversal-heavy workloads at low latency.
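A toy version of the join-heavy shape, intersecting the subjects that match several predicate filters; all names are illustrative, and a real triple store would answer this from its indexes rather than by scanning.

```python
triples = {
    ("ex:curie",     "ex:field",    "ex:chemistry"),
    ("ex:curie",     "ex:wonPrize", "ex:nobel"),
    ("ex:mendeleev", "ex:field",    "ex:chemistry"),
}

def subjects_with(graph, *patterns):
    """Intersect the subjects matching each (predicate, object) pattern —
    the join shape that triple-store indexes are built for."""
    result = None
    for pred, obj in patterns:
        matches = {s for s, p, o in graph if p == pred and o == obj}
        result = matches if result is None else result & matches
    return result or set()

winners = subjects_with(triples,
                        ("ex:field", "ex:chemistry"),
                        ("ex:wonPrize", "ex:nobel"))
```

Each extra pattern adds another intersection: chemists who won a Nobel prize is two filters joined on the subject.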
When RDF is the right choice
Pick RDF when interoperability with the public semantic web matters. If your graph needs to consume Wikidata, DBpedia, GeoNames, or schema.org-marked-up pages, RDF means no impedance mismatch — the data is already in your model. The same goes for cross-organisation collaboration in domains like life sciences (BioPortal, the OBO Foundry) or libraries (BIBFRAME), where decades of vocabulary work assume RDF.
Pick RDF when formal reasoning matters. OWL DL reasoners can derive new facts from your graph that are not explicitly stored, and can detect inconsistencies. This is useful in compliance-heavy or correctness-critical domains. It is overkill for most product features.
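To give a toy flavour of what a reasoner derives, the sketch below hand-rolls the transitivity of a subclass chain; class names are illustrative, and a real OWL reasoner handles far richer axioms than this.

```python
# Schema knowledge: rdfs:subClassOf relationships (a simple chain).
subclass_of = {
    "ex:Physicist": "ex:Scientist",
    "ex:Scientist": "ex:Person",
}
# Instance data: only one type is explicitly stored.
explicit_type = {"ex:curie": "ex:Physicist"}

def inferred_types(entity):
    """Walk the subclass chain to derive every type the entity has,
    including ones never explicitly asserted."""
    types = []
    t = explicit_type.get(entity)
    while t is not None:
        types.append(t)
        t = subclass_of.get(t)
    return types
```

The graph stores only 'curie is a Physicist', yet the derived answer also includes Scientist and Person: facts present in the logic but absent from storage.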
Pick RDF when you want long-term stability. The core RDF and OWL recommendations published in 2004 are, after backwards-compatible revisions, still the current W3C specs in 2026. Property-graph standards have only just stabilised under GQL. If your graph needs to outlive your current vendor, RDF's standards-based approach is the safer bet.
When property graphs are the right choice
Pick property graphs when developer ergonomics dominate. Cypher reads like English and ports easily between Neo4j, FalkorDB, Memgraph, and Amazon Neptune (in openCypher mode). The learning curve from SQL is shorter than the curve to SPARQL, and the debugging story is generally better.
Pick property graphs when most of your queries are traversal-shaped. Recommendations, fraud detection, supply-chain tracing, document relationship mapping — these workloads benefit from index-free adjacency, which is the property-graph storage trick that lets each step of a traversal cost roughly O(1).
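The traversal itself is just breadth-first search over adjacency lists. The sketch below shows the shape of the query; in a real engine the per-hop step is a pointer dereference in the storage layer rather than a dictionary lookup, and the graph contents here are illustrative.

```python
from collections import deque

# Index-free adjacency, modelled as adjacency lists: each node
# holds direct references to its neighbours.
adjacency = {
    "a": ["b", "c"],
    "b": ["d"],
    "c": ["d"],
    "d": ["e"],
    "e": [],
}

def within_hops(start, max_hops):
    """All nodes reachable from start in at most max_hops steps."""
    seen = {start: 0}          # node -> hop count at first visit
    queue = deque([start])
    while queue:
        node = queue.popleft()
        if seen[node] == max_hops:
            continue           # do not expand past the hop budget
        for nbr in adjacency.get(node, []):
            if nbr not in seen:
                seen[nbr] = seen[node] + 1
                queue.append(nbr)
    return set(seen) - {start}
```

Each iteration of the inner loop is one hop at roughly constant cost, which is why the overall work scales with the size of the neighbourhood rather than the size of the graph.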
Pick property graphs when JSON is your lingua franca. Property graphs serialise to JSON cleanly, and tools like Apache TinkerPop's Gremlin and the official Neo4j drivers make it straightforward to round-trip data through web APIs. RDF can do this with JSON-LD, but the JSON-LD context boilerplate adds friction.
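A side-by-side sketch of the two JSON shapes, with illustrative values: the property-graph node is a plain map, while the JSON-LD version needs the @context block the text refers to.

```python
import json

# Property-graph node: plain JSON, no extra machinery.
pg_node = {
    "id": 1,
    "labels": ["Person"],
    "props": {"name": "Marie Curie"},
}

# The same entity as JSON-LD: the @context maps each key to an IRI
# (schema.org IRIs used here), which is the boilerplate in question.
jsonld_node = {
    "@context": {"name": "http://schema.org/name"},
    "@id": "http://example.org/curie",
    "@type": "http://schema.org/Person",
    "name": "Marie Curie",
}

print(json.dumps(pg_node))
print(json.dumps(jsonld_node))
```

Both round-trip through `json.dumps`/`json.loads` losslessly; the difference is that only the JSON-LD form carries enough information to be read back as RDF.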
KnodeGraph itself is a property-graph product running on FalkorDB, with template-driven schemas that play the same role as a small ontology. We made that choice deliberately: most users are coming from documents, not from public RDF data, and the property-graph ergonomics make the learning curve gentler. Users who need RDF interop can export to JSON and convert via tools like rdflib.
Related reading
- KnodeGraph vs Stardog — Stardog is a leading RDF triple store; this page contrasts it with a property-graph workflow.
- Extract structured data from PDFs — How to populate a graph from documents — the question of RDF vs property graph comes up at this exact step.
- Knowledge graphs for academic research — Academic users frequently need RDF interop for public datasets like DBpedia and Wikidata.
Frequently Asked Questions
Are RDF and JSON-LD the same thing?
JSON-LD is a JSON-based serialisation format for RDF. Every JSON-LD document represents an RDF graph and can be processed by RDF tools; not every RDF graph is naturally JSON-LD-shaped. JSON-LD has become the default for embedding structured data on web pages because Google, Bing, and other search engines parse it for rich results. If you have ever written schema.org markup in a website's <head>, you have written JSON-LD, which means you have written RDF.
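For instance, the snippet below builds the typical shape of schema.org Article markup (values illustrative) and wraps it the way it would appear in a page:

```python
import json

# The usual shape of schema.org markup embedded in a page.
markup = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "Knowledge Graph vs RDF",
    "author": {"@type": "Organization", "name": "KnodeGraph"},
}

# Wrapped as it would sit in the page's <head>.
snippet = ('<script type="application/ld+json">'
           + json.dumps(markup)
           + "</script>")
print(snippet)
```

Any RDF toolchain can parse the `markup` object back into triples, which is the sense in which writing this markup is writing RDF.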
Is SPARQL strictly better than Cypher?
No, and the reverse is not true either. SPARQL is more expressive in some ways — federated queries across multiple endpoints, named graph manipulation, property paths over IRIs — and Cypher is more ergonomic in others, especially for variable-length paths and pattern matching. The 2024 GQL standard largely follows Cypher's design. Most teams choose based on the surrounding stack rather than on language merits.
Can I convert between RDF and a property graph?
Yes, and several tools exist for this — Neo4j's neosemantics (n10s) plugin, RDFLib, and Apache Jena. The conversions are lossy in subtle ways: RDF reification on edges does not round-trip perfectly into property maps, and vice versa. For most engineering uses the lossiness is acceptable. For semantic-web purists, RDF-star (RDF 1.2) is closing the gap from the RDF side.
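One direction of the conversion can be sketched with a single rule, a simplification of what real converters apply: literal-valued triples become node properties, IRI-valued triples become edges. Names are illustrative.

```python
triples = [
    ("ex:curie", "ex:name", '"Marie Curie"'),       # literal object
    ("ex:curie", "ex:discovered", "ex:polonium"),   # IRI object
]

nodes, edges = {}, []
for s, p, o in triples:
    nodes.setdefault(s, {})
    if o.startswith('"'):
        # Literal object -> property on the subject node.
        nodes[s][p] = o.strip('"')
    else:
        # IRI object -> a node of its own, plus an edge.
        nodes.setdefault(o, {})
        edges.append((s, p, o))
```

The lossiness shows up in what this rule ignores: datatypes on literals, blank nodes, and any reification triples annotating the edge.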
Does KnodeGraph use RDF?
No. KnodeGraph stores graphs in FalkorDB, an open-source property-graph engine that speaks Cypher. The data model is property-graph-native: nodes have labels and property maps, edges have types and property maps. We chose this because most KnodeGraph users start from documents and want a quick path to a queryable graph. Users who need RDF can export to JSON and use rdflib or a similar library to translate.
What is RDF-star, and should I care?
RDF-star (sometimes written RDF*) is an extension that lets you make statements about other statements — for example, 'the fact that Marie Curie discovered polonium was asserted by source X with confidence 0.95'. It is part of the RDF 1.2 work and ships in most modern triple stores. If you are building a knowledge graph with strong provenance requirements and you are already in the RDF camp, it is worth using. If you are happy in the property-graph world, edge properties give you the same expressiveness without the new syntax.
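The two representations of the same provenance fact can be sketched side by side; names are illustrative, and the nested tuple stands in for an RDF-star quoted triple.

```python
# RDF-star style: the quoted triple is itself the subject of further
# triples (modelled here as a nested tuple).
quoted = ("ex:curie", "ex:discovered", "ex:polonium")
rdf_star = [
    (quoted, "ex:assertedBy", "ex:sourceX"),
    (quoted, "ex:confidence", 0.95),
]

# Property-graph style: the same facts live in the edge's property map.
pg_edge = {
    "from": "curie", "type": "DISCOVERED", "to": "polonium",
    "props": {"asserted_by": "sourceX", "confidence": 0.95},
}
```

Same information, different machinery: RDF-star adds a new kind of term to the data model, while the property graph already had a place to put it.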
Source
W3C, 'RDF 1.1 Concepts and Abstract Syntax', W3C Recommendation, 25 February 2014. [link]
Ready to Try KnodeGraph?
Start free with 3 graphs and 100 nodes. Upgrade to Pro for AI extraction, unlimited graphs, and 50K nodes.
Get Started Free