JDBC Driver

Connect to Ignite using standard JDBC driver.

Ignite is shipped with JDBC drivers that allow you to process distributed data using standard SQL statements such as SELECT, INSERT, UPDATE, or DELETE directly from a JDBC application.

Presently, Ignite supports two drivers: the lightweight and easy-to-use JDBC Thin Driver, and the JDBC Client Node Driver that interacts with the cluster by means of a client node. This documentation explains how to set up and use both.

JDBC Thin Driver

The JDBC Thin driver is lightweight and easy to use on the application side. To start using the driver, just add ignite-core-{version}.jar to the application's classpath.

The driver connects to one of the cluster nodes and forwards all the queries to it for final execution. That node takes care of query distribution and result aggregation, and then sends the result back to the client application.

The JDBC connection string has the following pattern:

jdbc:ignite:thin://host[:port][?<params>]
  • host is required and defines the host of the cluster node to connect to.
  • port is the port to connect to; 10800 is used by default if this parameter is omitted.
  • <params> are optional and have the following format:
param1=value1&param2=value2&...&paramN=valueN

The name of the driver's class is org.apache.ignite.IgniteJdbcThinDriver. For instance, this is how you can open a JDBC connection to the Ignite cluster node listening on IP address 192.168.0.50:

// Register JDBC driver.
Class.forName("org.apache.ignite.IgniteJdbcThinDriver");
 
// Open the JDBC connection.
Connection conn = DriverManager.getConnection("jdbc:ignite:thin://192.168.0.50");

The following parameters are supported in the JDBC connection string:

  • distributedJoins: Whether to use distributed joins for non-collocated data. Default: false.
  • enforceJoinOrder: Whether to enforce the join order of tables in the query. If set to true, the query optimizer will not reorder tables in the join. Default: false.
  • collocated: Whether your data is co-located. Whenever Ignite executes a distributed query, it sends sub-queries to individual cluster members. If you know in advance that the elements of your query selection are collocated on the same node, Ignite can make significant performance and network optimizations. Default: false.
  • replicatedOnly: Whether the query contains only replicated tables. This is a hint for potentially more effective execution. Default: false.
  • autoCloseServerCursor: Whether to close the server-side cursor automatically when the last piece of the result set is retrieved. When enabled, a call to ResultSet.close() does not require a network call, which can improve performance. However, if the server-side cursor is already closed, you may get an exception when trying to call the ResultSet.getMetaData() method. This is why it defaults to false. Default: false.
  • socketSendBuffer: Socket send buffer size. When set to 0, the OS default is used. Default: 0.
  • socketReceiveBuffer: Socket receive buffer size. When set to 0, the OS default is used. Default: 0.
  • tcpNoDelay: Whether to use the TCP_NODELAY option. Default: true.

Additional Connection String Examples

  • jdbc:ignite:thin://myHost - connect to myHost on the port 10800 with all defaults
  • jdbc:ignite:thin://myHost:11900 - connect to myHost on custom port 11900 with all defaults
  • jdbc:ignite:thin://myHost:11900?distributedJoins=true&autoCloseServerCursor=true - connect to myHost on custom port 11900 with distributed joins and the autoCloseServerCursor optimization enabled.
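
For instance, the last URL above can be used directly from Java code; myHost and port 11900 are placeholders from the examples, not required values:

// Register JDBC driver.
Class.forName("org.apache.ignite.IgniteJdbcThinDriver");

// Open a connection with distributed joins and autoCloseServerCursor enabled.
Connection conn = DriverManager.getConnection(
    "jdbc:ignite:thin://myHost:11900?distributedJoins=true&autoCloseServerCursor=true");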

Cluster Configuration

In order to accept and process requests from the JDBC Thin Driver, a cluster node binds to a local network interface on port 10800 and listens for incoming requests.

Use ClientConnectorConfiguration, which is set via IgniteConfiguration, to change any of these parameters:

IgniteConfiguration cfg = new IgniteConfiguration()
    .setClientConnectorConfiguration(new ClientConnectorConfiguration());
<bean id="ignite.cfg" class="org.apache.ignite.configuration.IgniteConfiguration">
  <property name="sqlConnectorConfiguration">
    <bean class="org.apache.ignite.configuration.SqlConnectorConfiguration" />
  </property>
</bean>

The following parameters are supported:

  • host: Host name or IP address to bind to. When set to null, IgniteConfiguration.localHost is used. Default: null.
  • port: Port to bind to. If the specified port is already in use, Ignite will try to find another available port using the portRange property. Default: 10800.
  • portRange: Defines the number of ports to try to bind to. E.g., if port is set to 10800 and portRange is 100, Ignite will try to find an available port within the [10800, 10900] range, starting with 10800. Default: 100.
  • maxOpenCursorsPerConnection: Maximum number of open server cursors per connection. If exceeded, an exception is thrown when trying to open another cursor. Default: 128.
  • threadPoolSize: Number of threads dedicated to query execution. Default: MAX(8, number of CPU cores).
  • socketSendBufferSize: Socket send buffer size. When set to 0, the OS default is used. Default: 0.
  • socketReceiveBufferSize: Socket receive buffer size. When set to 0, the OS default is used. Default: 0.
  • tcpNoDelay: Whether to use the TCP_NODELAY option. Default: true.
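
As a sketch of how a few of these parameters can be changed programmatically (assuming the ClientConnectorConfiguration setters mirror the property names above, which is the usual Ignite configuration convention), the connector could be configured like this:

// Adjust the client connector before starting the node; the values are arbitrary examples.
ClientConnectorConfiguration clientConnectorCfg = new ClientConnectorConfiguration()
    .setPort(10800)                       // base port to bind to
    .setPortRange(10)                     // number of ports to try if the base port is busy
    .setMaxOpenCursorsPerConnection(256)  // raise the server-side cursor limit
    .setThreadPoolSize(16);               // threads dedicated to query execution

IgniteConfiguration cfg = new IgniteConfiguration()
    .setClientConnectorConfiguration(clientConnectorCfg);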

JDBC Client Node Driver

The JDBC Client Node Driver connects to the cluster using its own full-fledged client node connection. This requires users to provide a complete Spring XML configuration as part of the JDBC connection string and to copy all of the following JAR files to the classpath of your application or SQL tool:

  • All the jars under {apache_ignite_release}\libs directory.
  • All the jars under {apache_ignite_release}\ignite-indexing and {apache_ignite_release}\ignite-spring directories.

This driver is more heavyweight and might not support the latest SQL features of Ignite, but since it uses a client node connection underneath, it can distribute queries and aggregate their results directly from the application side.

The JDBC connection URL has the following pattern:

jdbc:ignite:cfg://[<params>@]<config_url>
  • <config_url> is required and represents any valid URL that points to an Ignite configuration file for the Ignite client node. This node will be started inside the JDBC Client Node Driver when the driver tries to establish a connection with the cluster.
  • <params> is optional and has the following format:
param1=value1:param2=value2:...:paramN=valueN

The name of the driver's class is org.apache.ignite.IgniteJdbcDriver. For instance, this is how you can open a JDBC connection to the Ignite cluster:

// Register JDBC driver.
Class.forName("org.apache.ignite.IgniteJdbcDriver");
 
// Open JDBC connection (cache name is not specified, which means that we use default cache).
Connection conn = DriverManager.getConnection("jdbc:ignite:cfg://file:///etc/config/ignite-jdbc.xml");

The following parameters are supported:

  • cache: Cache name. If it is not defined, the default cache will be used. Note that the cache name is case sensitive.
  • nodeId: ID of the node where the query will be executed. It can be useful for querying through local caches.
  • local: Query will be executed only on the local node. Use this parameter together with the nodeId parameter to limit the data set to the specified node. Default: false.
  • collocated: Flag used for optimization purposes. Whenever Ignite executes a distributed query, it sends sub-queries to individual cluster members. If you know in advance that the elements of your query selection are collocated on the same node, Ignite can make significant performance and network optimizations. Default: false.
  • distributedJoins: Allows the use of distributed joins for non-collocated data. Default: false.
  • streaming: Turns on bulk data load mode via INSERT statements for this connection. Refer to the Streaming Mode section for more details. Default: false.
  • streamingAllowOverwrite: Tells Ignite to overwrite values for existing keys on duplication instead of skipping them. Refer to the Streaming Mode section for more details. Default: false.
  • streamingFlushFrequency: Timeout, in milliseconds, that the data streamer should use to flush data. By default, the data is flushed on connection close. Refer to the Streaming Mode section for more details. Default: 0.
  • streamingPerNodeBufferSize: Data streamer's per-node buffer size. Refer to the Streaming Mode section for more details. Default: 1024.
  • streamingPerNodeParallelOperations: Data streamer's per-node parallel operations number. Refer to the Streaming Mode section for more details. Default: 16.
  • transactionsAllowed: Presently, ACID transactions are supported, but only at the key-value API level. At the SQL level, Ignite supports atomic, but not yet transactional, consistency. This means that the JDBC driver might throw a "Transactions are not supported" exception if you try to use this functionality. However, some BI tools might force transactional behavior at all times, preventing you from issuing SQL read queries. Set this parameter to true to overcome this limitation if transactions are not important to you. In general, the Ignite community plans to implement SQL transactions in version 2.2. Default: false.
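
As an illustrative sketch, a connection that targets a specific cache and enables distributed joins could be opened as follows; the cache name myCache and the configuration file path are placeholders, not values mandated by the driver:

// Register JDBC driver.
Class.forName("org.apache.ignite.IgniteJdbcDriver");

// Open a connection to the (hypothetical) myCache cache with distributed joins enabled.
// Parameters are colon-separated and placed before the '@' sign.
Connection conn = DriverManager.getConnection(
    "jdbc:ignite:cfg://cache=myCache:distributedJoins=true@file:///etc/config/ignite-jdbc.xml");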

📘

Cross-Cache Queries

The cache that the driver is connected to is treated as the default schema. To query across multiple caches, the Cross-Cache Query functionality can be used.
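
For example, assuming a second cache named OrgCache with an Org table exists (both names are hypothetical), it can be referenced by using the quoted cache name as the schema prefix:

// Join the default cache's Person table with a table from another cache.
ResultSet rs = conn.createStatement().executeQuery(
    "SELECT p.name, o.name FROM Person p JOIN \"OrgCache\".Org o ON p.orgId = o.id");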

Streaming Mode

You can load data into an Ignite cluster in streaming mode (bulk mode) using the JDBC driver. In this mode, the driver instantiates IgniteDataStreamer internally and feeds the data to it. To activate this mode, add the streaming parameter set to true to the JDBC connection string:

// Register JDBC driver.
Class.forName("org.apache.ignite.IgniteJdbcDriver");
 
// Opening connection in the streaming mode.
Connection conn = DriverManager.getConnection("jdbc:ignite:cfg://streaming=true@file:///etc/config/ignite-jdbc.xml");

Presently, the streaming mode is supported only for INSERT operations. This is useful for cases when you want to achieve fast data preloading into a cache. The JDBC driver defines multiple connection parameters that affect the behavior of the streaming mode. These parameters are listed in the parameters table above.

The parameters cover almost all settings of a general IgniteDataStreamer and allow you to fine-tune the streamer according to your needs. Please refer to the Data Streamers section of the Ignite docs for more information on how to configure the streamer.
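
For instance, here is a sketch of a connection string that also tunes the streamer's per-node buffer and parallelism; the specific values are arbitrary examples, not recommendations:

// Open a streaming-mode connection with a larger per-node buffer and more parallel operations.
Connection conn = DriverManager.getConnection(
    "jdbc:ignite:cfg://streaming=true:streamingPerNodeBufferSize=4096:streamingPerNodeParallelOperations=32"
    + "@file:///etc/config/ignite-jdbc.xml");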

📘

Time Based Flushing

By default, the data is flushed when either a connection is closed or streamingPerNodeBufferSize is met. If you need to flush the data in a timely manner, then adjust the streamingFlushFrequency parameter.

// Register JDBC driver.
Class.forName("org.apache.ignite.IgniteJdbcDriver");
 
// Opening a connection in the streaming mode and time based flushing set.
Connection conn = DriverManager.getConnection("jdbc:ignite:cfg://streaming=true:streamingFlushFrequency=1000@file:///etc/config/ignite-jdbc.xml");

PreparedStatement stmt = conn.prepareStatement(
  "INSERT INTO Person(_key, name, age) VALUES(CAST(? as BIGINT), ?, ?)");

// Adding the data.
for (int i = 1; i < 100000; i++) {
    // Inserting a Person object with a Long key.
    stmt.setInt(1, i);
    stmt.setString(2, "John Smith");
    stmt.setInt(3, 25);

    stmt.execute();
}

conn.close();

// Beyond this point, all data is guaranteed to be flushed into the cache.

Example

To start processing the data located in the cluster, you need to create a JDBC Connection object using one of the methods below:

// Register JDBC driver.
Class.forName("org.apache.ignite.IgniteJdbcThinDriver");
 
// Open the JDBC connection.
Connection conn = DriverManager.getConnection("jdbc:ignite:thin://192.168.0.50");

// Register JDBC driver.
Class.forName("org.apache.ignite.IgniteJdbcDriver");
 
// Open JDBC connection (cache name is not specified, which means that we use default cache).
Connection conn = DriverManager.getConnection("jdbc:ignite:cfg://file:///etc/config/ignite-jdbc.xml");

Right after that you can execute SQL SELECT queries of your choice:

// Query names of all people.
ResultSet rs = conn.createStatement().executeQuery("select name from Person");
 
while (rs.next()) {
    String name = rs.getString(1);
    ...
}
 
// Query people with specific age using prepared statement.
PreparedStatement stmt = conn.prepareStatement("select name, age from Person where age = ?");
 
stmt.setInt(1, 30);
 
ResultSet rs = stmt.executeQuery();
 
while (rs.next()) {
    String name = rs.getString("name");
    int age = rs.getInt("age");
    ...
}

Moreover, you can modify the data using DML statements.

INSERT

// Insert a Person with a Long key.
PreparedStatement stmt = conn.prepareStatement("INSERT INTO Person(_key, name, age) VALUES(CAST(? as BIGINT), ?, ?)");
 
stmt.setInt(1, 1);
stmt.setString(2, "John Smith");
stmt.setInt(3, 25);

stmt.execute();

MERGE

// Merge a Person with a Long key.
PreparedStatement stmt = conn.prepareStatement("MERGE INTO Person(_key, name, age) VALUES(CAST(? as BIGINT), ?, ?)");
 
stmt.setInt(1, 1);
stmt.setString(2, "John Smith");
stmt.setInt(3, 25);
 
stmt.executeUpdate();

UPDATE

// Update a Person.
conn.createStatement().
  executeUpdate("UPDATE Person SET age = age + 1 WHERE age = 25");

DELETE

conn.createStatement().execute("DELETE FROM Person WHERE age = 25");
