PulseDB is a database-mapping software library written in Java. It provides transparent access to and manipulation of a great variety of database implementations. PDB provides a DSL that covers most SQL functionality and allows you to easily integrate persistence into your projects and modules.
Add the following dependency to your Maven pom (example for PDB v2.8.14):
<dependencies>
  ...
  <dependency>
    <groupId>com.feedzai</groupId>
    <artifactId>pdb</artifactId>
    <version>2.8.14</version>
  </dependency>
  ...
</dependencies>
The timeout properties have been redefined in PdbProperties as numeric instead of strings since v2.4.6; avoid using versions 2.4.4 and 2.4.5.
The DatabaseEngine interface has a new #dropView() method since v2.5.3. The method was added without a default implementation, which made it a breaking change. Use 2.5.5 and avoid versions 2.5.3 and 2.5.4.
Since 2.7.0, the DatabaseEngine methods #getPSResultSet(final String name), #getPSIterator(final String name), and #getPSIterator(final String name, final int fetchSize) throw an extra ConnectionResetException when the database connection is lost and recovered.
- It is now possible to call built-in database vendor functions [e.g. f("lower", column("COL1"))]
- Added lower and upper functions
- Fixed several connection leaks
- Fixed MySQL large result fetching
- It now uses Guava version 25.1-jre, which might require the client to upgrade to the same version
- H2 version upgraded to 2.1.210
- H2Engine is now deprecated and uses the H2v2 legacy mode if this engine is used with the H2v2 driver (for more information regarding the legacy mode see: http://www.h2database.com/html/features.html)
- The H2V2Engine was created and is the engine that should be used from now on (it works in regular mode, not legacy)
- Added a multithreaded implementation for batches, that uses multiple connections to the database. This may improve write performance to the database, if it is not otherwise limited by resource usage.
- Batch implementations should now be obtained by calling the method DatabaseEngine#createBatch(BatchConfig), where the type of config dictates the type of batch implementation that is created.
- NOTE: versions 2.8.12 and 2.8.13 had already introduced this, but due to a bug, creating a batch would change the DatabaseEngine, possibly causing it to malfunction. If using those versions, avoid the new method to create batches.
Version 2.8.15 added back compatibility with Guice v4, but also changed the method SqlBuilder.k to return K instead of Expression. This breaks compatibility with existing code compiled with older PDB versions, so 2.8.16 reverts this change.
In order to compile PDB you will need the Oracle Driver JAR in your local repository. The current version assumes Oracle Driver version 12.2.0.1 (even when compiled with this version, it is possible to use version 11 drivers at runtime; it is also possible to compile with version 11 instead, but the Oracle pom dependency has to be modified).
Please download the driver from the respective
Oracle driver page and
run the following to install the driver in your local maven repository.
mvn install:install-file -DgroupId=com.oracle.jdbc -DartifactId=ojdbc8 \
-Dversion=12.2.0.1 -Dpackaging=jar -Dfile=ojdbc8.jar
Alternatively you can setup the username/password for accessing Oracle's Maven repository in your settings.xml file.
To test PDB with different database engines there are several Maven profiles that can be used, one for each vendor (check the list of supported vendors below, under Establishing a connection).
Run the following command to execute the tests for the chosen vendor, specified in lowercase:
mvn test -P<vendor>
NOTE: there is also a "special" profile for H2 to test that engine in server mode (instead of the default H2 embedded); in that case, use h2remote in the <vendor> placeholder.
This will start a docker container running the chosen vendor's database server and run the tests. The container is stopped at the end if all tests pass; otherwise it is kept running.
Note: the containers will be started assuming the respective vendor's license agreements have been read and accepted.
More info:
Microsoft SQL Server: https://hub.docker.com/r/microsoft/mssql-server-linux/
IBM DB2: https://hub.docker.com/r/ibmcom/db2express-c/
- Example Description
- Establishing a connection
- Table Manipulation
- Data Manipulation
- Create View
We describe a scenario where there are some data Providers that share Streams of data with the world. These Streams have a data Type, and they are consumed by some Modules. The entities and their relations are modeled into SQL using PDB in the following sections.
With PDB you connect to the database of your preference using the following code.
import static com.feedzai.commons.sql.abstraction.engine.configuration.PdbProperties.*;
(...)
Properties properties = new Properties() {
    {
        setProperty(JDBC, "<JDBC-CONNECTION-STRING>");
        setProperty(USERNAME, "username");
        setProperty(PASSWORD, "password");
        setProperty(ENGINE, "<PDB-ENGINE>");
        setProperty(SCHEMA_POLICY, "create");
    }
};
DatabaseEngine engine = DatabaseFactory.getConnection(properties);
The following table shows how to connect for the supported database vendors.
Vendor | Engine | JDBC |
---|---|---|
DB2 | com.feedzai.commons.sql.abstraction.engine.impl.DB2Engine | jdbc:db2://<HOST>:<PORT>/<DATABASE> |
Oracle | com.feedzai.commons.sql.abstraction.engine.impl.OracleEngine | jdbc:oracle:thin:@<HOST>:1521:<DATABASE> |
PostgreSQL | com.feedzai.commons.sql.abstraction.engine.impl.PostgreSqlEngine | jdbc:postgresql://<HOST>/<DATABASE> |
MySQL | com.feedzai.commons.sql.abstraction.engine.impl.MySqlEngine | jdbc:mysql://<HOST>/<DATABASE> |
H2 | com.feedzai.commons.sql.abstraction.engine.impl.H2Engine | jdbc:h2:<FILE> or jdbc:h2:mem |
SQLServer | com.feedzai.commons.sql.abstraction.engine.impl.SqlServerEngine | jdbc:sqlserver://<HOST>;database=<DATABASE> |
It is also important to select a schema policy. There are four possible schema policies:
- create - New entities are created normally.
- create-drop - Same as the create policy, but all entities created during this session are dropped before the connection is closed.
- drop-create - New entities are dropped before creation if they already exist.
- none - The program is not allowed to create new entities.
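For example, an integration-test setup will typically pick the create-drop policy so everything is cleaned up when the connection closes. Below is a minimal sketch using plain java.util.Properties; note that the literal keys ("jdbc", "engine", "schema_policy") are assumptions standing in for the PdbProperties constants (JDBC, ENGINE, SCHEMA_POLICY) used in the connection example above:

```java
import java.util.Properties;

public class SchemaPolicyExample {

    // Builds a PDB configuration that uses the "create-drop" schema policy.
    // The string keys below are assumptions; in real code prefer the
    // PdbProperties constants shown earlier in this README.
    public static Properties testConfig() {
        Properties properties = new Properties();
        properties.setProperty("jdbc", "<JDBC-CONNECTION-STRING>");
        properties.setProperty("engine", "<PDB-ENGINE>");
        // create-drop: everything created in this session is dropped when
        // the connection is closed -- convenient for integration tests.
        properties.setProperty("schema_policy", "create-drop");
        return properties;
    }

    public static void main(String[] args) {
        System.out.println(testConfig().getProperty("schema_policy"));
    }
}
```

The resulting Properties instance would then be passed to DatabaseFactory.getConnection, as in the connection example above.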
PDB natively supports connection pools. You can create one using either a Properties or a Map<String, String> instance when defining the properties.
final DatabaseEnginePool dbPool = DatabaseEnginePool.getConnectionPool(properties);
Additionally, you can specify a modifier for the DatabaseEngine. This can be useful when you need to load entities so that entries can be inserted into them.
final DatabaseEnginePool dbPool = DatabaseEnginePool.getConnectionPool(properties, engine -> engine.loadEntity(entity));
DatabaseEnginePool implements AutoCloseable, so you can close the pool whenever you need to, for instance, before exiting the application.
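Since the pool is AutoCloseable, a try-with-resources block is a convenient way to guarantee it is closed. A sketch (this will not run without a reachable database and a valid properties configuration):

```java
// Sketch: the pool is closed automatically when the block exits,
// even if an exception is thrown while using it.
try (final DatabaseEnginePool dbPool = DatabaseEnginePool.getConnectionPool(properties)) {
    // obtain DatabaseEngine instances from the pool and do work here
}
```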
Apart from the already described PDB configurations, you can configure your pool using the parameters in the following table.
The pool.generic.maxTotal, pool.generic.maxIdle, pool.generic.minIdle, and pool.generic.maxWaitMillis properties are the most common ones.
Property | Description | Default value |
---|---|---|
pool.generic.maxTotal | The maximum number of active DatabaseEngine instances that can be allocated from the pool at the same time, or negative for no limit. | 8 |
pool.generic.maxIdle | The maximum number of DatabaseEngine instances that can remain idle in the pool, without extra ones being released, or negative for no limit. | 8 |
pool.generic.minIdle | The minimum number of DatabaseEngine instances that can remain idle in the pool, without extra ones being created, or zero to create none. | 0 |
pool.generic.maxWaitMillis | The maximum number of milliseconds that the pool will wait (when there are no available DatabaseEngine instances) for a DatabaseEngine instance to be returned before throwing an error, or -1 to wait indefinitely. | -1 |
pool.generic.testOnCreate | The indication of whether DatabaseEngine instances will be validated after creation. If the instance is invalid, the borrow attempt that triggered the instance creation will fail. | false |
pool.generic.testOnBorrow | The indication of whether DatabaseEngine instances will be validated before being borrowed from the pool. If the instance is invalid, it is dropped from the pool and the pool attempts to borrow another one. | true |
pool.generic.testOnReturn | The indication of whether DatabaseEngine instances will be validated before being returned to the pool. | false |
pool.generic.testWhileIdle | The indication of whether DatabaseEngine instances will be validated by the idle evictor (if any). If an instance is invalid, it will be dropped from the pool. | false |
pool.generic.timeBetweenEvictionRunsMillis | The number of milliseconds to sleep between runs of the idle evictor thread. When non-positive, no idle evictor thread will be run. | -1 |
pool.generic.numTestsPerEvictionRun | The number of DatabaseEngine instances to examine during each run of the idle evictor thread (if any). | 3 |
pool.generic.minEvictableIdleTimeMillis | The minimum amount of time a DatabaseEngine instance may sit idle in the pool before it is eligible for eviction by the idle evictor (if any). | 1000 * 60 * 30 |
pool.generic.lifo | True means that the pool returns the most recently used ("last in") DatabaseEngine instance in the pool (if there are idle instances available). False means that the pool behaves as a FIFO queue: instances are taken from the idle instance pool in the order that they are returned to the pool. | true |
pool.abandoned.removeAbandonedOnMaintenance pool.abandoned.removeAbandonedOnBorrow | Flags to remove abandoned DatabaseEngine instances if they exceed pool.abandoned.removeAbandonedTimeout. A DatabaseEngine instance is considered abandoned and eligible for removal if it has not been used for longer than pool.abandoned.removeAbandonedTimeout. Setting pool.abandoned.removeAbandonedOnMaintenance to true removes abandoned instances on the maintenance cycle (when eviction ends). This property has no effect unless maintenance is enabled by setting pool.generic.timeBetweenEvictionRunsMillis to a positive value. If pool.abandoned.removeAbandonedOnBorrow is true, abandoned instances are removed each time an instance is borrowed from the pool, with the additional requirements that [number of active instances] > pool.generic.maxTotal - 3 and [number of idle instances] < 2. | false |
pool.abandoned.removeAbandonedTimeout | Timeout in seconds before an abandoned DatabaseEngine instance can be removed. | 300 |
pool.abandoned.logAbandoned | Flag to log stack traces for application code which abandoned a DatabaseEngine instance. | false |
If no pool.abandoned property is defined in the configuration file, then no policy to remove abandoned DatabaseEngine instances is applied.
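As an example, the snippet below extends a plain java.util.Properties configuration with some of the pool settings above. The values shown (a cap of 16 engines, a 5-second borrow timeout, and abandoned-engine cleanup on maintenance) are illustrative choices, not recommendations:

```java
import java.util.Properties;

public class PoolConfigExample {

    // Builds an illustrative pool configuration using the property keys
    // documented in the table above.
    public static Properties poolConfig() {
        Properties properties = new Properties();
        // The regular PDB connection settings (JDBC, USERNAME, PASSWORD,
        // ENGINE, SCHEMA_POLICY) would go here as well.

        // At most 16 engines in the pool; keep up to 4 idle ones around.
        properties.setProperty("pool.generic.maxTotal", "16");
        properties.setProperty("pool.generic.maxIdle", "4");
        // Fail a borrow attempt after waiting 5 seconds.
        properties.setProperty("pool.generic.maxWaitMillis", "5000");
        // Run the idle evictor every minute so abandoned removal can kick in.
        properties.setProperty("pool.generic.timeBetweenEvictionRunsMillis", "60000");
        properties.setProperty("pool.abandoned.removeAbandonedOnMaintenance", "true");
        // An engine unused for 300 seconds is considered abandoned.
        properties.setProperty("pool.abandoned.removeAbandonedTimeout", "300");
        return properties;
    }

    public static void main(String[] args) {
        System.out.println(poolConfig().getProperty("pool.generic.maxTotal"));
    }
}
```

The resulting Properties instance is passed to DatabaseEnginePool.getConnectionPool as shown earlier.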
We start by creating the table to store the different data Types:
import static com.feedzai.commons.sql.abstraction.dml.dialect.SqlBuilder.*;
(...)
DbEntity data_type_table =
dbEntity()
.name("data_type")
.addColumn("id", INT, UNIQUE, NOT_NULL)
.addColumn("code", STRING, UNIQUE, NOT_NULL)
.addColumn("description", CLOB)
.pkFields("id")
.build();
A table is represented with a DbEntity and its properties can be defined with methods:
Function | Description |
---|---|
name | Select the name for this table. |
addColumn | Create a column with a given name and type. Additionally you can add autoincrement behaviour and define some extra constraints. There are two possible constraints available: UNIQUE and NOT_NULL. |
pkFields | Define which columns are part of the primary key. |
To create the data_type_table you call the addEntity method on the previously created database engine. Depending on the policy you chose, existing tables might be dropped before creation.
engine.addEntity(data_type_table);
Let's now create the Providers and Streams tables:
import static com.feedzai.commons.sql.abstraction.dml.dialect.SqlBuilder.*;
(...)
DbEntity provider_table =
dbEntity()
.name("provider")
.addColumn("id", INT, true, UNIQUE, NOT_NULL)
.addColumn("uri", STRING, UNIQUE, NOT_NULL)
.addColumn("certified", BOOLEAN, NOT_NULL)
.addColumn("description", CLOB)
.pkFields("id")
.build();
engine.addEntity(provider_table);
DbEntity stream_table =
dbEntity()
.name("stream")
.addColumn("id", INT, true, UNIQUE, NOT_NULL)
.addColumn("provider_id", INT, NOT_NULL)
.addColumn("data_type_id", INT, NOT_NULL)
.addColumn("description", CLOB)
.pkFields("id")
.addFk(
dbFk()
.addColumn("provider_id")
.foreignTable("provider")
.addForeignColumn("id"),
dbFk()
.addColumn("data_type_id")
.foreignTable("data_type")
.addForeignColumn("id"))
.addIndex(false, "provider_id", "data_type_id")
.build();
engine.addEntity(stream_table);
You may have noticed that this stream_table has some foreign keys, which we define using the addFk method. This method receives a list of foreign key constraints. A foreign key is created with dbFk() and defined using these methods:
Function | Description |
---|---|
addColumn | Define which columns will be part of this constraint. |
foreignTable | Define the foreign table we are referring to. |
addForeignColumn | Selects the affected columns in the foreign table. |
Wait! Looks like we also created an index in the Stream table.
Function | Description |
---|---|
addIndex | Creates an index for the listed columns. If not specified otherwise, the index is not unique. |
The rest of the example case is created with the following code:
import static com.feedzai.commons.sql.abstraction.dml.dialect.SqlBuilder.*;
(...)
DbEntity module_table =
dbEntity()
.name("module")
.addColumn("name", STRING, UNIQUE, NOT_NULL)
.build();
engine.addEntity(module_table);
DbEntity stream_to_module_table =
dbEntity()
.name("stream_to_module")
.addColumn("name", STRING, NOT_NULL)
.addColumn("stream_id", INT, NOT_NULL)
.addColumn("active", INT)
.pkFields("name", "stream_id")
.addFk(
dbFk()
.addColumn("name")
.foreignTable("module")
.addForeignColumn("name"),
dbFk()
.addColumn("stream_id")
.foreignTable("stream")
.addForeignColumn("id"))
.build();
engine.addEntity(stream_to_module_table);
When you are done with this example you might want to clean the database.
engine.dropEntity("stream_to_module");
Function | Description |
---|---|
dropEntity | Drops an entity given the name. |
With PDB you can change some aspects of previously created tables. After calling the addEntity method with the created entity, you can continue to modify this local representation by calling the methods described in the previous sections. Then, to synchronize the local representation with the actual table in the database, you call the updateEntity method.
data_type_table = data_type_table
.newBuilder()
.removeColumn("description")
.build();
engine.updateEntity(data_type_table);
Function | Description |
---|---|
removeColumn | Removes a column from the local representation of the table. |
updateEntity | Synchronizes the entity representation with the table in the database. If schema policy is set to drop-create the whole table is dropped and created again. |
Another mechanism to alter a table is the AlterColumn expression combined with the executeUpdate method provided by the database engine. In this case changes are made to one column at a time.
import static com.feedzai.commons.sql.abstraction.dml.dialect.SqlBuilder.*;
(...)
Expression alterColumn = alterColumn(
table("stream_to_module"),
dbColumn("active", BOOLEAN).addConstraint(NOT_NULL).build());
engine.executeUpdate(alterColumn);
Function | Description |
---|---|
alterColumn | Creates an expression that changes a given table schema, affecting a column. |
dbColumn | Column definition. Provide new type and autoincrement behavior. |
addConstraint | Define the constraints the column must comply with. |
addConstraints | Define the constraints the column must comply with. |
It is also possible to remove the primary key constraint.
import static com.feedzai.commons.sql.abstraction.dml.dialect.SqlBuilder.*;
(...)
Expression dropPrimaryKey = dropPK(table("TEST"));
engine.executeUpdate(dropPrimaryKey);
Function | Description |
---|---|
dropPK | Drops the primary key constraint on the given table. |
Now that we have the structure of the database in place, let's populate it with some data. An EntityEntry is our representation of an entry that we want to add to the database.
import static com.feedzai.commons.sql.abstraction.dml.dialect.SqlBuilder.*;
(...)
EntityEntry data_type_entry =
entry()
.set("id", 1)
.set("code", "INT16")
.set("description", "The type of INT you always want!")
.build();
Function | Description |
---|---|
set | Define the value that will be assigned to a given column. |
Notice that the values for each column were defined using the set method. A new entry is persisted with the engine's persist method.
engine.persist("data_type", data_type_entry, false);
Function | Description |
---|---|
persist | Select the table in which the new entry will be inserted. If the affected table has an autoincrement column you might want to activate this flag. When the autoincrement behaviour is active, this method returns the generated key. |
If you want to use the autoincrement behavior you must activate the autoincrement flag when defining the entity.
import static com.feedzai.commons.sql.abstraction.dml.dialect.SqlBuilder.*;
(...)
EntityEntry provider_entry =
entry()
.set("uri", "from.some.where")
.set("certified", true)
.build();
long generatedKey = engine.persist("provider", provider_entry, true);
PDB also provides support for batches. With batches you reduce the amount of communication overhead, thereby improving performance.
import static com.feedzai.commons.sql.abstraction.dml.dialect.SqlBuilder.*;
(...)
engine.beginTransaction();
try {
EntityEntry entry = entry()
.set("code", "SINT")
.set("description", "A special kind of INT")
.build();
engine.addBatch("data_type", entry);
entry = entry()
.set("code", "VARBOOLEAN")
.set("description", "A boolean with a variable number of truth values")
.build();
engine.addBatch("data_type", entry);
// Perform more additions...
engine.flush();
engine.commit();
} finally {
if (engine.isTransactionActive()) {
engine.rollback();
}
}
Function | Description |
---|---|
beginTransaction | Starts a transaction. |
addBatch | Adds an entry to the current batch. |
flush | Executes all entries registered in the batch. |
commit | Commits the current transaction. |
isTransactionActive | Tests if the transaction is active. |
rollback | Rolls back the transaction. |
Now you might want to update the data, or simply erase it. Let's see how this can be done.
import static com.feedzai.commons.sql.abstraction.dml.dialect.SqlBuilder.*;
(...)
engine.executeUpdate(
    update(table("stream"))
        .set(
            eq(column("cd"), k("Double")),
            eq(column("description"), k("Double precision floating point number")))
        .where(eq(column("id"), k(1))));
Expressions that produce changes to the database are executed with the engine's executeUpdate method. There are some predefined static methods that allow you to create SQL queries; update is one of them. In this section we describe queries that make changes to the database, while in the following section selection queries are presented in detail.
Function | Description |
---|---|
update | Creates an update query that will affect the table referred by the given expression. |
set | Expression that defines the values that will be assigned to each given column. |
where | Expression for filtering/selecting the affected entries. |
table | Creates a reference to a table of your choice. |
Maybe you want to delete entries instead. In that case creating a delete query is required.
import static com.feedzai.commons.sql.abstraction.dml.dialect.SqlBuilder.*;
(...)
engine.executeUpdate(
delete(table("stream")));
engine.executeUpdate(
    delete(table("stream"))
        .where(eq(column("id"), k(1))));
Function | Description |
---|---|
delete | Creates a delete query that will affect the table referred by the given expression. |
where | Expression for filtering/selecting the affected entries. |
If what you seek is to delete all table entries at once, it is recommended to use the truncate query.
import static com.feedzai.commons.sql.abstraction.dml.dialect.SqlBuilder.*;
(...)
engine.executeUpdate(truncate(table("stream")));
Function | Description |
---|---|
truncate | Creates a truncate query that will affect the table referred by the given expression. |
Now things will get interesting. In this section we will see how PDB uses SQL to select data from the database. Using the query method we get the result for any given query as a list of entries. These entries are represented as a map of column name to content.
import static com.feedzai.commons.sql.abstraction.dml.dialect.SqlBuilder.*;
(...)
Expression query =
select(all())
.from(table("streams"));
// Fetches everything! Use with care when you know the result set is small.
List<Map<String, ResultColumn>> results = engine.query(query);
for(Map<String, ResultColumn> result : results) {
int id = result.get("id").toInt();
String description = result.get("description").toString();
System.out.println(id + ": "+ description);
}
// If your result set is large consider using the iterator.
ResultIterator it = engine.iterator(select(all()).from(table("streams")));
Map<String, ResultColumn> next;
while ((next = it.next()) != null) {
int id = next.get("id").toInt();
String description = next.get("description").toString();
System.out.println(id + ": "+ description);
}
The iterator closes automatically when it reaches the end of the result set, but you can close it at any time by calling it.close().
Function | Description |
---|---|
query | Processes a given query and computes the corresponding result. It returns a List of results if any. For each column a result is a Map that maps column names to ResultColumn objects. |
iterator | Returns an iterator to cycle through the result set. Preferable when dealing with large result sets. |
toXXX | ResultColumn provides methods to convert the data to the type of your preference. It throws an exception if you try to convert the underlying data to some incompatible type. |
Let's look at this simple query in more detail: it lists all entries in table Streams and returns all columns.
import static com.feedzai.commons.sql.abstraction.dml.dialect.SqlBuilder.*;
(...)
results = engine.query(
select(all())
.from(table("streams")));
Function | Description |
---|---|
select | Expression defining the selection of columns or other manipulations of its values. |
distinct | Filter the query so it only returns distinct values. |
from | Defines what tables or combination of them the data must be fetched from. By default the listed sources will be joined together with an inner join. |
all | Defines a reference to all columns the underlying query might return. |
k | Creates a Constant from obj. |
lit | Creates a Literal from obj. |
column | Defines a reference to a given column. |
with | Provides a way to write auxiliary statements for use in a larger query. |
This is useful but not very interesting. We should proceed by filtering the results with some condition of our choice.
import static com.feedzai.commons.sql.abstraction.dml.dialect.SqlBuilder.*;
(...)
results = engine.query(
select(all())
.from(table("streams"))
.where(eq(column("data_type_id"), k(4)))
.andWhere(like(column("description"), k("match t%xt"))));
Function | Description |
---|---|
where | Defines a series of tests an entry must satisfy in order to be part of the result set. |
andWhere | If there is already a where clause, defines an AND expression with the old clause. |
eq | Applies the equality condition to the expressions. It is also used in insertion queries to represent attribution. |
neq | Negation of the equality condition. |
like | Likelihood comparison between expressions. Those expressions must resolve to String constants or columns of the same type. |
lt | Predicate over numerical or alphanumerical values. |
lteq | Predicate over numerical or alphanumerical values. |
gt | Predicate over numerical or alphanumerical values. |
gteq | Predicate over numerical or alphanumerical values. |
A more complex filter would be one that selects Streams from a given range of data Types and a set of Providers. We manage just that with the following query.
import static com.feedzai.commons.sql.abstraction.dml.dialect.SqlBuilder.*;
(...)
results = engine.query(
select(all())
.from(table("streams"))
.where(
and(between(column("data_type_id"), k(2), k(5)),
notIn(column("provider_id"), L(k(1), k(7), k(42))))));
Function | Description |
---|---|
and | Computes the boolean result of the underlying expressions. |
or | Computes the boolean result of the underlying expressions. |
between | Defines a test condition that asserts if exp1 is part of the range of values from exp2 to exp3. |
notBetween | Defines a test condition that asserts if exp1 is not part of the range of values from exp2 to exp3. |
in | Defines a test condition that asserts if exp1 is part of exp2. Expression exp2 might be a list of constants or the result of a sub-query. |
notIn | Defines a test condition that asserts if exp1 is not part of exp2. Expression exp2 might be a list of constants or the result of a sub-query. |
L | Defines a list of elements represented by the given expressions. |
caseWhen | Defines a test using a list of conditions by going through them and returning a value when the first condition is met. If none are met, it will return the otherwise clause or NULL if not defined. |
cast | Specifies how to perform a conversion between two data types. |
It is widely known that the greater the id, the greater the Stream of data. For this purpose you design a query that selects the maximum Stream id of data Type 4 from Provider 1. You might just get a raise for this.
import static com.feedzai.commons.sql.abstraction.dml.dialect.SqlBuilder.*;
(...)
results = engine.query(
select(max(column("id")).alias("the_best"))
.from(table("streams"))
.where(
and(eq(column("data_type_id"), k(4)),
eq(column("provider_id"), k(1)))));
Function | Description |
---|---|
alias | Assigns an alias to the expression. |
count | Aggregation operator for numeric values. Applicable to expressions involving columns. |
max | Aggregation operator for numeric values. Applicable to expressions involving columns. |
min | Aggregation operator for numeric values. Applicable to expressions involving columns. |
sum | Aggregation operator for numeric values. Applicable to expressions involving columns. |
avg | Aggregation operator for numeric values. Applicable to expressions involving columns. |
stddev | Aggregation operator for numeric values. Applicable to expressions involving columns. |
stringAgg | Aggregation operator that aggregates data of a column into a string. |
[concat](https://feedzai.github.io/pdb/com/feedzai/commons/sql/abstraction/dml/dialect/SqlBuilder.html#concat(com.feedzai.commons.sql.abstraction.dml.Expression, com.feedzai.commons.sql.abstraction.dml.Expression...)) | Concatenates the expressions with a delimiter. |
floor | Operator that returns the largest integer value that is smaller than or equal to a number. Applicable to expressions involving columns. |
ceil | Operator that returns the smallest integer value that is larger than or equal to a number. Applicable to expressions involving columns. |
udf | If you have defined your own sql function you may access it with udf. |
Sometimes it is required to merge the content of more than one table. For that purpose you can use joins, which allow you to merge the content of two or more tables according to some condition. In this example we add a little more flavor to the result by joining the data Type information with the Stream information.
import static com.feedzai.commons.sql.abstraction.dml.dialect.SqlBuilder.*;
(...)
results = engine.query(
    select(all())
        .from(table("stream")
            .innerJoin(table("data_type"),
                join(
                    column("stream", "data_type_id"),
                    column("data_type", "id")))));
Function | Description |
---|---|
innerJoin | Merges the table results of two expressions regarding a condition. |
leftOuterJoin | Merges the table results of two expressions regarding a condition. |
rightOuterJoin | Merges the table results of two expressions regarding a condition. |
fullOuterJoin | Merges the table results of two expressions regarding a condition. |
join | Applies the equality condition to the expressions. |
union | Unions the results of multiple expressions. |
The market is collapsing! The reason, some say, is that some provider messed up. In your contract it is stated that Provider with id 4 provides a given number of streams for each data_type. With the following query you will find out if the actual data in the database matches the contract. By filtering the results to only account for Provider 4 and grouping on the data Type you are able to count the number of streams by Type. Your Boss will be pleased.
import static com.feedzai.commons.sql.abstraction.dml.dialect.SqlBuilder.*;
(...)
results = engine.query(
    select(column("data_type_id"), count(column("id")).alias("count"))
        .from(table("streams"))
        .where(eq(column("provider_id"), k(4)))
        .groupby(column("data_type_id"))
        .orderby(column("data_type_id").asc()));
Function | Description |
---|---|
groupby | Groups the result on some of the table columns. |
orderby | Orders the result according to some expression of the table columns. |
asc | Sets the ordering as ascendant. |
desc | Sets the ordering as descendant. |
Some documents leaked online last week suggest that there are hidden messages in our data. To visualize these hidden messages we need to do some arithmetic with the ids of the provider and data_type in table Streams. Even stranger is the need to filter the description column, where in case of a null value an alternative is presented.
import static com.feedzai.commons.sql.abstraction.dml.dialect.SqlBuilder.*;
(...)
results = engine.query(
select(
plus(
column("data_type_id"),
column("provider_id"),
k(1)
).alias("mess_ids"),
coalesce(
column("description"),
k("Romeo must die")))
.from(table("streams")));
Function | Description |
---|---|
minus | Applies the subtraction operator to the list of values with left precedence. |
mult | Applies the multiplication operator to the list of values with left precedence. |
plus | Applies the addition operator to the list of values with left precedence. |
div | Applies the division operator to the list of values with left precedence. |
mod | Applies the modulo operator to the list of values with left precedence. |
coalesce | Coalesce tests a given expression and returns its value if it is not null. If the primary expression is null, it will return the first alternative that is not. |
For this next example, imagine you want to select all Streams for which the sum of data_type_id and provider_id is greater than 5. It might not be a very useful query, but when you add that you just want 10 rows of the result with an offset of 2, people might wonder what you are up to.
import static com.feedzai.commons.sql.abstraction.dml.dialect.SqlBuilder.*;
(...)
results = engine.query(
select(all())
.from(table("streams"))
.having(
gt(plus(
column("data_type_id"),
column("provider_id")),
k(5)))
.limit(10)
.offset(2));
Function | Description |
---|---|
having | Query will select only the result rows where aggregate values meet the specified conditions. |
limit | Defines the number of rows that the query returns. |
offset | Defines the offset for the start position of the resulting rows. |
PDB also allows the creation of prepared statements. Here are two of the previous example queries rewritten as prepared statements.
import static com.feedzai.commons.sql.abstraction.dml.dialect.SqlBuilder.*;
(...)
Query query = select(all())
.from(table("streams"))
.where(eq(column("data_type_id"), lit("?")))
.andWhere(like(column("description"), k("match t%xt")));
engine.createPreparedStatement("MyPS", query);
// It is always a good policy to clear the parameters
engine.clearParameters("MyPS");
engine.setParameter("MyPS", 1, 10);
engine.executePS("MyPS");
List<Map<String, ResultColumn>> result = engine.getPSResultSet("MyPS");
In PDB, prepared statements are stored internally and are kept if the connection is lost, but the parameters are always lost.
import static com.feedzai.commons.sql.abstraction.dml.dialect.SqlBuilder.*;
(...)
Update update = update(table("stream"))
    .set(
        eq(column("cd"), lit("?")),
        eq(column("description"), lit("?")))
    .where(eq(column("id"), k(1)));
engine.createPreparedStatement("MyPS", update);
engine.clearParameters("MyPS");
engine.setParameter("MyPS", 1, "INT");
engine.setParameter("MyPS", 2, "Your regular integer implementation.");
int affectedEntries = engine.executePSUpdate("MyPS");
Function | Description |
---|---|
createPreparedStatement | Creates a prepared statement and assigns it to a given identifier. |
clearParameters | Clears the parameters of a given prepared statement. |
setParameter | Assigns an object to a given parameter of a prepared statement. |
executePS | Executes a given prepared statement. |
executePSUpdate | Executes a given update prepared statement and returns the number of affected rows. |
getPSResultSet | Returns the result set of the last executed query. |
getPSIterator | Returns an iterator to the result set of the last executed query. |
Sometimes, for security reasons or just for simplicity, it is useful to have a view of the database.
import static com.feedzai.commons.sql.abstraction.dml.dialect.SqlBuilder.*;
(...)
Expression view = createView("simple_stream")
.as(select(column("id"), column("data_type_id"))
.from(table("stream")))
.replace();
engine.executeUpdate(view);
Function | Description |
---|---|
createView | Creates a view with the given name. |
as | Defines the query that provides the data for this view. |
replace | Whether or not the view creation is allowed to overwrite existing views. |
After creating a view, you may also want to delete it.
engine.dropView("view_name");
Function | Description |
---|---|
dropView | Drops a view with the given name. |
For more insight into the available functionality please see the project's javadoc.
For more information please contact [email protected]; we will happily answer your questions.
- Miguel Miranda ([email protected]) for the documentation and the first steps in making this library open source
Copyright 2014 Feedzai
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.