stores Package
This package contains modules for additional RDFLib stores.
auditable Module
This wrapper intercepts calls through the store interface and implements thread-safe logging of destructive operations (adds/removes) in reverse. This log is persisted on the store instance, and the reverse operations are executed in order to return the store to the state it was in when the transaction began. Since the reverse operations are persisted on the store, the store itself acts as a transaction.

Calls to commit or rollback flush the list of reverse operations. This provides thread-safe atomicity and isolation (assuming concurrent operations occur with different store instances), but no durability (transactions are persisted in memory and won't be available to reverse operations after the system fails): the A and I out of ACID.
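For example, a minimal sketch of wrapping an in-memory store and rolling a change back (the IOMemory backend and the example URIs are assumptions for illustration):

    from rdflib import Graph, URIRef
    from rdflib.plugins.memory import IOMemory
    from rdflib.plugins.stores.auditable import AuditableStore

    # Wrap a plain in-memory store so destructive operations are logged
    store = AuditableStore(IOMemory())
    g = Graph(store)

    g.add((URIRef("http://example.org/s"),
           URIRef("http://example.org/p"),
           URIRef("http://example.org/o")))

    g.store.rollback()   # replays the logged reverse operations
    assert len(g) == 0   # the add has been undone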
- class rdflib.plugins.stores.auditable.AuditableStore(store)
  Bases: rdflib.store.Store
  - __module__ = 'rdflib.plugins.stores.auditable'
concurrent Module
- class rdflib.plugins.stores.concurrent.ConcurrentStore(store)
  Bases: object
  - __module__ = 'rdflib.plugins.stores.concurrent'
regexmatching Module
This wrapper intercepts calls through the store interface which make use of the REGEXTerm class to represent matches by REGEX instead of literal comparison.

It is implemented for stores that don't support this natively, and essentially provides the support by replacing the REGEXTerms with wildcards (None) and matching the patterns against the results from the store it's wrapping.
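A usage sketch, assuming an IOMemory backing store (the import path and all URIs are illustrative):

    from rdflib import Graph, Literal, URIRef
    from rdflib.plugins.memory import IOMemory
    from rdflib.plugins.stores.regexmatching import REGEXMatching, REGEXTerm

    store = REGEXMatching(IOMemory())
    g = Graph(store)
    g.add((URIRef("http://example.org/alice"),
           URIRef("http://example.org/name"),
           Literal("Alice")))

    # The wrapper queries the wrapped store with the REGEXTerm replaced by a
    # wildcard (None) and filters the results against the pattern.
    for s, p, o in g.triples((REGEXTerm(".*alice$"), None, None)):
        print(s)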
- class rdflib.plugins.stores.regexmatching.REGEXMatching(storage)
  Bases: rdflib.store.Store
  - __module__ = 'rdflib.plugins.stores.regexmatching'
- class rdflib.plugins.stores.regexmatching.REGEXTerm(expr)
  Bases: unicode
  REGEXTerm can be used in any term slot and is interpreted as a request to perform a REGEX match (not a string comparison) using the value (pre-compiled) for checking rdf:type matches.
  - __module__ = 'rdflib.plugins.stores.regexmatching'
sparqlstore Module
This is an RDFLib store around Ivan Herman et al.'s SPARQL service wrapper. This was first done in layer-cake, and then ported to RDFLib 3 and rdfextras.

This version works with vanilla SPARQLWrapper installed by easy_install, pip or similar. If you installed rdflib with a tool that understands dependencies, it should have been installed automatically for you.
Changes:
- Layercake added support for namespace binding; I removed it again to work with vanilla SPARQLWrapper
- JSON object mapping support suppressed
- Replaced 4Suite-XML Domlette with ElementTree
- Incorporated as an RDFLib store
- rdflib.plugins.stores.sparqlstore.CastToTerm(node)
  Helper function that casts an XML node in SPARQL results to the appropriate rdflib term.
- class rdflib.plugins.stores.sparqlstore.NSSPARQLWrapper(endpoint, updateEndpoint=None, returnFormat='xml', defaultGraph=None, agent='sparqlwrapper 1.6.4 (rdflib.github.io/sparqlwrapper)')
  Bases: SPARQLWrapper.Wrapper.SPARQLWrapper
  - __module__ = 'rdflib.plugins.stores.sparqlstore'
  - nsBindings = {}
  - setNamespaceBindings(bindings)
    A shortcut for setting namespace bindings that will be added to the prolog of the query.
    @param bindings: A dictionary of prefixes to URIs
  - setQuery(query)
    Set the SPARQL query text. Note: no check is done on the validity of the query (syntax or otherwise) by this module, except for testing the query type (SELECT, ASK, etc.). Syntax and validity checking is done by the SPARQL service itself.
    @param query: query text
    @type query: string
    @bug: #2320024
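    A sketch of the namespace-binding shortcut (the endpoint URL and prefix mapping are placeholders):

        from rdflib.plugins.stores.sparqlstore import NSSPARQLWrapper

        wrapper = NSSPARQLWrapper("http://example.org/sparql")
        # The binding below is injected into the query prolog as a PREFIX
        # declaration, so the query text can use the foaf: prefix directly.
        wrapper.setNamespaceBindings({"foaf": "http://xmlns.com/foaf/0.1/"})
        wrapper.setQuery("SELECT ?name WHERE { ?s foaf:name ?name }")
        results = wrapper.query()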
- class rdflib.plugins.stores.sparqlstore.SPARQLStore(endpoint=None, bNodeAsURI=False, sparql11=True, context_aware=True)
  Bases: rdflib.plugins.stores.sparqlstore.NSSPARQLWrapper, rdflib.store.Store
  An RDFLib store around a SPARQL endpoint.
  This is in theory context-aware and should work OK when the context is specified, i.e. for Graph objects all queries are run against the named graph given by the graph's identifier.
  For ConjunctiveGraphs, reading is done from the "default graph". Exactly what this means depends on your endpoint, since general SPARQL does not offer a simple way to query the union of all graphs.
  Fuseki/TDB has a flag for specifying that the default graph is the union of all graphs (tdb:unionDefaultGraph in the Fuseki config); if this is set, this will work fine.
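  A read-only sketch against a named graph (the endpoint URL and graph identifier are placeholders):

      from rdflib import Graph, URIRef
      from rdflib.plugins.stores.sparqlstore import SPARQLStore

      store = SPARQLStore("http://example.org/sparql")
      # Queries run against the named graph given by the identifier
      g = Graph(store, identifier=URIRef("http://example.org/mygraph"))
      for s, p, o in g.triples((None, None, None)):
          print(s, p, o)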
  Warning
  The SPARQL Store does not support blank nodes!
  As blank nodes act as variables in SPARQL queries, there is no way to query for a particular blank node.
  - __module__ = 'rdflib.plugins.stores.sparqlstore'
  - add((subject, predicate, obj), context=None, quoted=False)
    Add a triple to the store of triples.
  - addN(quads)
    Adds each item in the list of statements to a specific context. The quoted argument is interpreted by formula-aware stores to indicate this statement is quoted/hypothetical. Note that the default implementation is a redirect to add.
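    A generic sketch of the quad form, shown with an in-memory store since the default implementation dispatches to add (URIs are placeholders):

        from rdflib import Graph, URIRef
        from rdflib.plugins.memory import IOMemory

        store = IOMemory()
        ctx = Graph(store, identifier=URIRef("http://example.org/mygraph"))
        store.addN([
            (URIRef("http://example.org/s"),
             URIRef("http://example.org/p"),
             URIRef("http://example.org/o"),
             ctx),  # the fourth element names the context for the triple
        ])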
  - contexts(triple=None)
    Iterates over the results of SELECT ?NAME { GRAPH ?NAME { ?s ?p ?o } }, returning instances of this store with the SPARQL wrapper object updated via addNamedGraph(?NAME). This causes a named-graph-uri key/value pair to be sent over the protocol.
  - formula_aware = False
  - open(configuration, create=False)
    Sets the endpoint URL for this SPARQLStore. If create == True, an exception is thrown.
  - query_endpoint
  - regex_matching = 0
  - transaction_aware = False
  - triples((s, p, o), context=None)
    Parameters:
    - (s, p, o): the triple used as a filter for the SPARQL SELECT; (None, None, None) means anything.
    - context: the graph effectively calling this method.
    Returns a tuple of triples, executing essentially a SPARQL query like SELECT ?subj ?pred ?obj WHERE { ?subj ?pred ?obj }.
    The context may carry three parameters to refine the underlying query:
    - LIMIT: an integer to limit the number of results
    - OFFSET: an integer to enable paging of results
    - ORDERBY: an instance of Variable('s'), Variable('p') or Variable('o'), or, by default, the first 'None' from the given triple
    Notes:
    - Using LIMIT or OFFSET automatically includes an ORDERBY; otherwise the results would be retrieved in a non-deterministic way (it depends on the walking path over the graph).
    - Using OFFSET without defining LIMIT will discard the first OFFSET - 1 results.
        a_graph.LIMIT = limit
        a_graph.OFFSET = offset
        triple_generator = a_graph.triples(mytriple)
        # do something
        # Remove LIMIT and OFFSET if not required for the next triples() calls
        del a_graph.LIMIT
        del a_graph.OFFSET
  - triples_choices((subject, predicate, object_), context=None)
    A variant of triples that can take a list of terms instead of a single term in any slot. Stores can implement this to optimize the response time over the default 'fallback' implementation, which will iterate over each term in the list and dispatch to triples.
- class rdflib.plugins.stores.sparqlstore.SPARQLUpdateStore(queryEndpoint=None, update_endpoint=None, bNodeAsURI=False, sparql11=True, context_aware=True, postAsEncoded=True)
  Bases: rdflib.plugins.stores.sparqlstore.SPARQLStore
  A store using SPARQL queries for read access and SPARQL Update for changes.
  This can be context-aware; if so, any changes will be made to the given named graph only. For Graph objects, everything works as expected.
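  A write sketch (both endpoint URLs and the graph IRI are placeholders):

      from rdflib import Graph, URIRef
      from rdflib.plugins.stores.sparqlstore import SPARQLUpdateStore

      store = SPARQLUpdateStore(
          queryEndpoint="http://example.org/sparql",
          update_endpoint="http://example.org/sparql-update")
      g = Graph(store, identifier=URIRef("http://example.org/mygraph"))
      # The add is issued as a SPARQL Update request against the named graph
      g.add((URIRef("http://example.org/s"),
             URIRef("http://example.org/p"),
             URIRef("http://example.org/o")))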
  Warning
  The SPARQL Update Store does not support blank nodes!
  As blank nodes act as variables in SPARQL queries, there is no way to query for a particular blank node.
  - __init__(queryEndpoint=None, update_endpoint=None, bNodeAsURI=False, sparql11=True, context_aware=True, postAsEncoded=True)
  - __module__ = 'rdflib.plugins.stores.sparqlstore'
  - open(configuration, create=False)
    Sets the endpoint URLs for this SPARQLStore.
    Parameters:
    - configuration: either a tuple of (queryEndpoint, update_endpoint), or a string with the query endpoint
    - create: if True, an exception is thrown
  - update(query, initNs={}, initBindings={}, queryGraph=None, DEBUG=False)
    Perform a SPARQL Update query against the endpoint: INSERT, LOAD, DELETE, etc. Setting initNs adds PREFIX declarations to the beginning of the update. Setting initBindings adds inline VALUES to the beginning of every WHERE clause. By the SPARQL grammar, all operations that support variables (namely INSERT and DELETE) require a WHERE clause. Important: initBindings fails if the update contains the substring 'WHERE {' which does not denote a WHERE clause, e.g. if it is part of a literal.
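    A sketch of a direct update call (the endpoint URLs and the ex: prefix are placeholders):

        from rdflib.plugins.stores.sparqlstore import SPARQLUpdateStore

        store = SPARQLUpdateStore(
            queryEndpoint="http://example.org/sparql",
            update_endpoint="http://example.org/sparql-update")
        # initNs turns into PREFIX declarations prepended to the update
        store.update("INSERT DATA { ex:s ex:p ex:o }",
                     initNs={"ex": "http://example.org/"})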
  - update_endpoint
    The HTTP URL for the Update endpoint, typically something like http://server/dataset/update
  - where_pattern = <_sre.SRE_Pattern object>