[Repost] Apache Jena TDB CRUD operations
June 11, 2015 by maltesander
http://tutorial-academy.com/apache-jena-tdb-crud-operations/
In this tutorial we explain Apache Jena TDB CRUD operations with simple examples. The CRUD operations are implemented with the Jena programming API instead of SPARQL. We provide a deeper understanding of the internal operations of the TDB triple store and show some tips and tricks to avoid common programming errors.
1. What are Apache Jena TDB and CRUD operations?
Apache Jena is an open source Java framework for Semantic Web and Linked Data applications. It offers RDF and SPARQL support, an ontology API and reasoning support, as well as a triple store (TDB) and a SPARQL server (Fuseki).
CRUD is an abbreviation for create, read, update and delete, the four most basic database operations. The same operations are available for triple stores and are shown in this tutorial for TDB.
2. Install Apache Jena and TDB
You can download the required libraries manually and add them to your Java build path. I recommend downloading the full Apache Jena framework so you can use the Jena API later on; it is available from the Apache Jena website.
If you use Maven add the following to your dependencies:
<dependency>
    <groupId>org.apache.jena</groupId>
    <artifactId>apache-jena-libs</artifactId>
    <type>pom</type>
    <version>2.13.0</version>
</dependency>
We use the latest stable release, which is 2.13.0 at the time of writing. (Reposter's note: the latest version is now 3.2.0.) Do not forget to update your Maven project afterwards.
3. Writing Java class for TDB access
We create a class called TDBConnection. In the constructor we initialize the TDB triple store with a path pointing to the folder where the store is kept on disk. We need a Dataset, which is a collection of named graphs plus one unnamed default graph.
// Imports for Jena 2.13.0 (in Jena 3.x the packages move to org.apache.jena.*)
import java.util.ArrayList;
import java.util.List;
import com.hp.hpl.jena.query.Dataset;
import com.hp.hpl.jena.query.ReadWrite;
import com.hp.hpl.jena.rdf.model.*;
import com.hp.hpl.jena.tdb.TDBFactory;
import com.hp.hpl.jena.util.FileManager;
public class TDBConnection
{
    private Dataset ds;

    public TDBConnection( String path )
    {
        ds = TDBFactory.createDataset( path );
    }
}
If you already have an ontology you want to store and manipulate, you can use the following function to load it into the store. The begin and end calls mark a transaction, which we strongly recommend using throughout your application: transactions speed up read operations and protect the data against corruption, process termination and system crashes. A dataset can hold multiple named models (named graphs) plus one default graph (no name).
public void loadModel( String modelName, String path )
{
    Model model = null;
    ds.begin( ReadWrite.WRITE );
    try
    {
        model = ds.getNamedModel( modelName );
        FileManager.get().readModel( model, path );
        ds.commit();
    }
    finally
    {
        if( model != null ) model.close();
        ds.end();
    }
}
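For example, loading an ontology into a named graph then takes a single call. A minimal usage sketch, assuming a hypothetical RDF/XML file cars.owl in the working directory:
TDBConnection tdb = new TDBConnection( "tdb" );
// "cars.owl" is a hypothetical ontology file in the working directory
tdb.loadModel( "Model_German_Cars", "cars.owl" );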
If we do not want to load an ontology or model, we can build one from scratch using an add method.
public void addStatement( String modelName, String subject, String property, String object )
{
    Model model = null;
    ds.begin( ReadWrite.WRITE );
    try
    {
        model = ds.getNamedModel( modelName );
        Statement stmt = model.createStatement
        (
            model.createResource( subject ),
            model.createProperty( property ),
            model.createResource( object )
        );
        model.add( stmt );
        ds.commit();
    }
    finally
    {
        if( model != null ) model.close();
        ds.end();
    }
}
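Note that addStatement creates the object as a resource (a URI), so it cannot store literal values such as strings or numbers. A minimal sketch of a literal-valued variant, following the same pattern (the method name addLiteralStatement is our own):
public void addLiteralStatement( String modelName, String subject, String property, Object literal )
{
    Model model = null;
    ds.begin( ReadWrite.WRITE );
    try
    {
        model = ds.getNamedModel( modelName );
        // createTypedLiteral derives the datatype (xsd:int, xsd:string, ...) from the Java type
        model.add( model.createResource( subject ),
                   model.createProperty( property ),
                   model.createTypedLiteral( literal ) );
        ds.commit();
    }
    finally
    {
        if( model != null ) model.close();
        ds.end();
    }
}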
Moving on to reading stored triples, we collect the results in a List of Statements. Note that a READ transaction does not need a commit; ending it with end() is sufficient.
public List<Statement> getStatements( String modelName, String subject, String property, String object )
{
    List<Statement> results = new ArrayList<Statement>();
    Model model = null;
    ds.begin( ReadWrite.READ );
    try
    {
        model = ds.getNamedModel( modelName );
        // null arguments act as wildcards and match any subject, property or object
        Selector selector = new SimpleSelector(
            ( subject != null ) ? model.createResource( subject ) : (Resource) null,
            ( property != null ) ? model.createProperty( property ) : (Property) null,
            ( object != null ) ? model.createResource( object ) : (RDFNode) null
        );
        StmtIterator it = model.listStatements( selector );
        while( it.hasNext() )
        {
            results.add( it.next() );
        }
    }
    finally
    {
        if( model != null ) model.close();
        ds.end();
    }
    return results;
}
For removing triples we use the following function.
public void removeStatement( String modelName, String subject, String property, String object )
{
    Model model = null;
    ds.begin( ReadWrite.WRITE );
    try
    {
        model = ds.getNamedModel( modelName );
        Statement stmt = model.createStatement
        (
            model.createResource( subject ),
            model.createProperty( property ),
            model.createResource( object )
        );
        model.remove( stmt );
        ds.commit();
    }
    finally
    {
        if( model != null ) model.close();
        ds.end();
    }
}
An update can be realized by removing the old triple and adding the new one, as the sketch below shows.
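A minimal sketch of such an updateStatement method (the name is our own; it simply combines the two existing methods):
public void updateStatement( String modelName, String subject, String property,
                             String oldObject, String newObject )
{
    // Update = delete the old triple, then insert the new one
    removeStatement( modelName, subject, property, oldObject );
    addStatement( modelName, subject, property, newObject );
}
Note that this runs two separate WRITE transactions; if the update must be atomic, both operations should be wrapped in a single transaction instead.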
Finally, we close the triple store once we have finished our transactions:
public void close()
{
    ds.close();
}
Now we can move on to writing a small test application.
4. Write a test application for the TDB Connection
If you are familiar with JUnit tests in Java, you can use the following code. We add some triples to two named graphs (named models), check the size of the result and remove some triples.
public class TDBConnectionTest extends TestCase
{
    protected TDBConnection tdb = null;

    protected String URI = "http://tutorial-academy.com/2015/tdb#";
    protected String namedModel1 = "Model_German_Cars";
    protected String namedModel2 = "Model_US_Cars";

    protected String john = URI + "John";
    protected String mike = URI + "Mike";
    protected String bill = URI + "Bill";
    protected String owns = URI + "owns";

    protected void setUp()
    {
        tdb = new TDBConnection( "tdb" );
    }

    public void testAll()
    {
        // named Model 1
        tdb.addStatement( namedModel1, john, owns, URI + "Porsche" );
        tdb.addStatement( namedModel1, john, owns, URI + "BMW" );
        tdb.addStatement( namedModel1, mike, owns, URI + "BMW" );
        tdb.addStatement( namedModel1, bill, owns, URI + "Audi" );
        tdb.addStatement( namedModel1, bill, owns, URI + "BMW" );

        // named Model 2
        tdb.addStatement( namedModel2, john, owns, URI + "Chrysler" );
        tdb.addStatement( namedModel2, john, owns, URI + "Ford" );
        tdb.addStatement( namedModel2, bill, owns, URI + "Chevrolet" );

        // null = wildcard search. Matches everything with BMW as object!
        List<Statement> result = tdb.getStatements( namedModel1, null, null, URI + "BMW" );
        System.out.println( namedModel1 + " size: " + result.size() + "\n\t" + result );
        assertTrue( result.size() > 0 );

        // null = wildcard search. Matches everything with john as subject!
        result = tdb.getStatements( namedModel2, john, null, null );
        System.out.println( namedModel2 + " size: " + result.size() + "\n\t" + result );
        assertTrue( result.size() == 2 );

        // remove all statements from namedModel1
        tdb.removeStatement( namedModel1, john, owns, URI + "Porsche" );
        tdb.removeStatement( namedModel1, john, owns, URI + "BMW" );
        tdb.removeStatement( namedModel1, mike, owns, URI + "BMW" );
        tdb.removeStatement( namedModel1, bill, owns, URI + "Audi" );
        tdb.removeStatement( namedModel1, bill, owns, URI + "BMW" );

        result = tdb.getStatements( namedModel1, john, null, null );
        assertTrue( result.size() == 0 );

        tdb.close();
    }
}
If you do not want to use JUnit, you can simply add the code to a main method.
public class TDBMain
{
    public static void main( String[] args )
    {
        TDBConnection tdb = null;

        String URI = "http://tutorial-academy.com/2015/tdb#";
        String namedModel1 = "Model_German_Cars";
        String namedModel2 = "Model_US_Cars";

        String john = URI + "John";
        String mike = URI + "Mike";
        String bill = URI + "Bill";
        String owns = URI + "owns";

        tdb = new TDBConnection( "tdb" );

        // named Model 1
        tdb.addStatement( namedModel1, john, owns, URI + "Porsche" );
        tdb.addStatement( namedModel1, john, owns, URI + "BMW" );
        tdb.addStatement( namedModel1, mike, owns, URI + "BMW" );
        tdb.addStatement( namedModel1, bill, owns, URI + "Audi" );
        tdb.addStatement( namedModel1, bill, owns, URI + "BMW" );

        // named Model 2
        tdb.addStatement( namedModel2, john, owns, URI + "Chrysler" );
        tdb.addStatement( namedModel2, john, owns, URI + "Ford" );
        tdb.addStatement( namedModel2, bill, owns, URI + "Chevrolet" );

        // null = wildcard search. Matches everything with BMW as object!
        List<Statement> result = tdb.getStatements( namedModel1, null, null, URI + "BMW" );
        System.out.println( namedModel1 + " size: " + result.size() + "\n\t" + result );

        // null = wildcard search. Matches everything with john as subject!
        result = tdb.getStatements( namedModel2, john, null, null );
        System.out.println( namedModel2 + " size: " + result.size() + "\n\t" + result );

        // remove all statements from namedModel1
        tdb.removeStatement( namedModel1, john, owns, URI + "Porsche" );
        tdb.removeStatement( namedModel1, john, owns, URI + "BMW" );
        tdb.removeStatement( namedModel1, mike, owns, URI + "BMW" );
        tdb.removeStatement( namedModel1, bill, owns, URI + "Audi" );
        tdb.removeStatement( namedModel1, bill, owns, URI + "BMW" );

        result = tdb.getStatements( namedModel1, john, null, null );
        System.out.println( namedModel1 + " size: " + result.size() + "\n\t" + result );

        tdb.close();
    }
}
5. Tips for developing with Jena and TDB
After initializing the TDB store you will find a file called nodes.dat in your TDB storage folder. There you can check whether your triples were actually inserted. Of course this gets complicated for bigger graphs, but the file content is kept mostly in plain text, so make use of your editor's search function.
<Model_5FGerman_5FCars>
+<http://tutorial-academy.com/2015/tdb#John>
+<http://tutorial-academy.com/2015/tdb#owns>
.<http://tutorial-academy.com/2015/tdb#Porsche>
*<http://tutorial-academy.com/2015/tdb#BMW>
+<http://tutorial-academy.com/2015/tdb#Mike>
+<http://tutorial-academy.com/2015/tdb#Bill>
+<http://tutorial-academy.com/2015/tdb#Audi>
<Model_5FUS_5FCars>
/<http://tutorial-academy.com/2015/tdb#Chrysler>
+<http://tutorial-academy.com/2015/tdb#Ford>
0<http://tutorial-academy.com/2015/tdb#Chevrolet>
If you delete triples and wonder why they are still present in nodes.dat although they no longer show up when reading via the API, this is related to the TDB architecture.
6. TDB architecture
TDB uses a node table which maps RDF nodes to 64-bit integer IDs and vice versa. The integer IDs are used to build the indexes, and the indexes enable the database scans required to process SPARQL queries.
When new data is added, the TDB store adds entries to both the node table and the indexes. Removing data only affects the indexes, so the node table grows continuously even when data is removed.
You might think that is a terrible way to store data, but there are good reasons for it:
- The integer IDs contain file offsets. To accelerate inserts, the node table is a sequential file, and the ID-to-node lookup is a fast file scan. If data were deleted from the node table, all file offsets would have to be recalculated and rewritten.
- When data is deleted, we do not know how often a node is still used without scanning the complete database, so we do not know which node table entries could safely be deleted. A workaround would add complexity and slow down delete operations.
Anyway, in our experience the majority of operations on a triple store are inserts and reads. If you ever run out of disk space, you can read the whole affected graph, store it from scratch in a fresh location and delete the original one, as sketched below. Of course, depending on the size of the graph, this may itself slow down the triple store for a while.
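A minimal sketch of this copy-and-rebuild workaround, assuming the hypothetical folder names "tdb" and "tdb-compacted":
// Copy one named graph into a fresh TDB store to reclaim node table space.
// The folder names "tdb" and "tdb-compacted" are hypothetical examples.
Dataset oldDs = TDBFactory.createDataset( "tdb" );
Dataset newDs = TDBFactory.createDataset( "tdb-compacted" );

oldDs.begin( ReadWrite.READ );
newDs.begin( ReadWrite.WRITE );
try
{
    // Read all remaining triples and write them into the compact new store
    Model source = oldDs.getNamedModel( "Model_German_Cars" );
    newDs.getNamedModel( "Model_German_Cars" ).add( source );
    newDs.commit();
}
finally
{
    oldDs.end();
    newDs.end();
    oldDs.close();
    newDs.close();
}
// Afterwards the old folder can be deleted and the new one used instead.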