Apache Ignite SQL Documentation


JDBC Client Driver

This documentation covers the JDBC Client Driver, which interacts with the cluster by means of a client node.

The JDBC Client Driver connects to the Ignite cluster using its own fully established client node connection. This requires users to provide a complete Spring XML configuration as part of the JDBC connection string and to copy all the JAR files listed below to the classpath of the application or SQL tool:

  • All the jars under {apache_ignite_release}\libs directory.
  • All the jars under {apache_ignite_release}\ignite-indexing and {apache_ignite_release}\ignite-spring directories.

The driver is more heavyweight and might not support the latest SQL features of Ignite, but since it uses the client node connection underneath, it can execute and distribute queries, as well as aggregate their results, directly from the application side.

The JDBC connection URL has the following pattern:

jdbc:ignite:cfg://[<params>@]<config_url>
  • <config_url> is required and represents any valid URL that points to an Ignite configuration file for the Ignite client node. This node is started inside the Ignite JDBC Client Driver when the driver tries to establish a connection with the cluster.
  • <params> is optional and has the following format:
param1=value1:param2=value2:...:paramN=valueN
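
Putting the two parts together, a full connection URL can be assembled as in the following sketch. The cache name and parameter choices are illustrative assumptions; the configuration path follows the examples below.

```java
public class JdbcUrlExample {
    public static void main(String[] args) {
        // <params> are colon-separated key=value pairs, joined to the config URL with '@'.
        String params = "cache=personCache:distributedJoins=true"; // hypothetical cache name
        String configUrl = "file:///etc/config/ignite-jdbc.xml";
        String url = "jdbc:ignite:cfg://" + params + "@" + configUrl;
        System.out.println(url);
        // Prints: jdbc:ignite:cfg://cache=personCache:distributedJoins=true@file:///etc/config/ignite-jdbc.xml
    }
}
```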

The name of the driver's class is org.apache.ignite.IgniteJdbcDriver. For instance, this is how you can open a JDBC connection to the Ignite cluster:

// Register JDBC driver.
Class.forName("org.apache.ignite.IgniteJdbcDriver");
 
// Open JDBC connection (cache name is not specified, which means that we use default cache).
Connection conn = DriverManager.getConnection("jdbc:ignite:cfg://file:///etc/config/ignite-jdbc.xml");

📘

Securing Connection

For information on how to secure the JDBC client driver connection, you can refer to the Advanced Security documentation.

The following parameters are supported:

  • cache: Cache name. If it is not defined, the default cache is used. Note that the cache name is case sensitive. Default: not set.
  • nodeId: ID of the node where the query will be executed. It can be useful for querying through local caches. Default: not set.
  • local: The query is executed only on the local node. Use this parameter together with the nodeId parameter in order to limit the data set to the specified node. Default: false.
  • collocated: A flag used for optimization purposes. Whenever Ignite executes a distributed query, it sends sub-queries to individual cluster members. If you know in advance that the elements of your query selection are collocated on the same node, Ignite can make significant performance and network optimizations. Default: false.
  • distributedJoins: Allows the use of distributed joins for non-collocated data. Default: false.
  • streaming: Turns on bulk data load mode via INSERT statements for this connection. Refer to the Streaming Mode section for more details. Default: false.
  • streamingAllowOverwrite: Tells Ignite to overwrite values for existing keys on duplication instead of skipping them. Refer to the Streaming Mode section for more details. Default: false.
  • streamingFlushFrequency: Timeout, in milliseconds, that the data streamer should use to flush data. By default, the data is flushed on connection close. Refer to the Streaming Mode section for more details. Default: 0.
  • streamingPerNodeBufferSize: The data streamer's per-node buffer size. Refer to the Streaming Mode section for more details. Default: 1024.
  • streamingPerNodeParallelOperations: The data streamer's per-node number of parallel operations. Refer to the Streaming Mode section for more details. Default: 16.
  • transactionsAllowed: Presently, ACID transactions are supported, but only at the key-value API level. At the SQL level, Ignite supports atomic, but not yet transactional, consistency. This means that the JDBC driver might throw a "Transactions are not supported" exception if you try to use this functionality. However, in cases when you need transactional syntax to work (even without transactional semantics), e.g. when some BI tools force transactional behavior, set this parameter to true to avoid such exceptions. The Ignite community plans to implement SQL transactions in version 2.5. Default: false.
  • multipleStatementsAllowed: Allows the JDBC driver to process multiple SQL statements at a time, returning multiple ResultSet objects. If the parameter is disabled, a query with multiple statements fails. Default: false.
  • lazy: Lazy query execution. By default, Ignite attempts to fetch the whole query result set into memory and send it to the client. For small and medium result sets, this provides optimal performance and minimizes the duration of internal database locks, thus increasing concurrency. However, if the result set is too big to fit in the available memory, it can lead to excessive GC pauses and even an OutOfMemoryError. Use this flag as a hint for Ignite to fetch the result set lazily, thus minimizing memory consumption at the cost of a moderate performance hit. Default: false.
  • skipReducerOnUpdate: Enables the server-side update feature. When Ignite executes a DML operation, it first fetches all the affected intermediate rows to the query initiator (also known as the reducer) for analysis, and only then prepares batches of updated values to be sent to remote nodes. This approach might affect performance, and can saturate the network if a DML operation has to move many entries over it. Use this flag as a hint for Ignite to perform all intermediate row analysis and updates "in place" on the corresponding remote data nodes. Default: false, meaning that intermediate results are fetched to the query initiator first.
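
As a sketch, several of the parameters above can be combined in a single connection string; the particular flags chosen here are illustrative.

```java
public class JdbcParamsExample {
    public static void main(String[] args) {
        // Hint Ignite to stream the result set lazily and to run DML updates
        // on the data nodes rather than on the query initiator.
        String url = "jdbc:ignite:cfg://lazy=true:skipReducerOnUpdate=true"
                   + "@file:///etc/config/ignite-jdbc.xml";
        System.out.println(url);

        // With the Ignite jars on the classpath and a running cluster:
        // Class.forName("org.apache.ignite.IgniteJdbcDriver");
        // Connection conn = DriverManager.getConnection(url);
    }
}
```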

📘

Cross-Cache Queries

The cache that the driver is connected to is treated as the default schema. To query across multiple caches, Cross-Cache Query functionality can be used.
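
As a sketch of the cross-cache syntax, another cache's name, enclosed in double quotes, can be used as the schema prefix for its tables. The cache name cityCache, the City table, and the join columns below are illustrative assumptions.

```java
public class CrossCacheQueryExample {
    public static void main(String[] args) {
        // Person lives in the default (connected) cache; City lives in "cityCache".
        String sql = "SELECT p.name, c.name "
                   + "FROM Person p, \"cityCache\".City c "
                   + "WHERE p.cityId = c.id";
        System.out.println(sql);
        // With an open connection:
        // ResultSet rs = conn.createStatement().executeQuery(sql);
    }
}
```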

Streaming Mode

You can load data into an Ignite cluster in streaming (bulk) mode using the JDBC driver. In this mode, the driver instantiates IgniteDataStreamer internally and feeds data to it. To activate this mode, add the streaming parameter, set to true, to the JDBC connection string:

// Register JDBC driver.
Class.forName("org.apache.ignite.IgniteJdbcDriver");
 
// Opening connection in the streaming mode.
Connection conn = DriverManager.getConnection("jdbc:ignite:cfg://streaming=true@file:///etc/config/ignite-jdbc.xml");

Presently, the streaming mode is supported only for INSERT operations. This is useful for cases when you want to achieve fast data preloading into a cache. The JDBC driver defines multiple connection parameters that affect the behavior of the streaming mode. These parameters are listed in the parameters table above.

❗️

Cache Name

Make sure you specify the target cache for streaming as the argument to the cache= parameter in the JDBC connection string. If the cache is not specified or does not match the table used in streaming DML statements, updates will be ignored.

The parameters cover almost all settings of a general IgniteDataStreamer and allow you to tune the streamer according to your needs. Please refer to the Data Streamers section of Ignite docs for more information on how to configure the streamer.
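
For example, a streaming connection string that names the target cache and tunes the per-node buffer might look like the following sketch; the cache name and buffer size are assumptions.

```java
public class StreamingUrlExample {
    public static void main(String[] args) {
        // cache= must name the cache that backs the table being streamed into;
        // otherwise the streamed updates are ignored (cache name is hypothetical).
        String url = "jdbc:ignite:cfg://cache=personCache:streaming=true:"
                   + "streamingPerNodeBufferSize=2048"
                   + "@file:///etc/config/ignite-jdbc.xml";
        System.out.println(url);
    }
}
```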

📘

Time Based Flushing

By default, the data is flushed when either the connection is closed or streamingPerNodeBufferSize is reached. If you need to flush the data in a more timely manner, adjust the streamingFlushFrequency parameter.

// Register JDBC driver.
Class.forName("org.apache.ignite.IgniteJdbcDriver");
 
// Opening a connection in the streaming mode and time based flushing set.
Connection conn = DriverManager.getConnection("jdbc:ignite:cfg://streaming=true:streamingFlushFrequency=1000@file:///etc/config/ignite-jdbc.xml");

PreparedStatement stmt = conn.prepareStatement(
  "INSERT INTO Person(_key, name, age) VALUES(CAST(? as BIGINT), ?, ?)");

// Adding the data.
for (int i = 1; i < 100000; i++) {
    // Inserting a Person object with a Long key.
    stmt.setInt(1, i);
    stmt.setString(2, "John Smith");
    stmt.setInt(3, 25);

    stmt.execute();
}

conn.close();

// Beyond this point, all data is guaranteed to be flushed into the cache.

Example

To start processing the data located in the cluster, you need to create a JDBC Connection object using one of the methods below:

// Register JDBC driver.
Class.forName("org.apache.ignite.IgniteJdbcDriver");
 
// Open JDBC connection (cache name is not specified, which means that we use default cache).
Connection conn = DriverManager.getConnection("jdbc:ignite:cfg://file:///etc/config/ignite-jdbc.xml");

Right after that you can execute SQL SELECT queries of your choice:

// Query names of all people.
ResultSet rs = conn.createStatement().executeQuery("select name from Person");
 
while (rs.next()) {
    String name = rs.getString(1);
    ...
}
 
// Query people with specific age using prepared statement.
PreparedStatement stmt = conn.prepareStatement("select name, age from Person where age = ?");
 
stmt.setInt(1, 30);
 
rs = stmt.executeQuery();
 
while (rs.next()) {
    String name = rs.getString("name");
    int age = rs.getInt("age");
    ...
}

Moreover, you can modify the data using DML statements.

INSERT

// Insert a Person with a Long key.
PreparedStatement stmt = conn.prepareStatement("INSERT INTO Person(_key, name, age) VALUES(CAST(? as BIGINT), ?, ?)");
 
stmt.setInt(1, 1);
stmt.setString(2, "John Smith");
stmt.setInt(3, 25);

stmt.execute();

MERGE

// Merge a Person with a Long key.
PreparedStatement stmt = conn.prepareStatement("MERGE INTO Person(_key, name, age) VALUES(CAST(? as BIGINT), ?, ?)");
 
stmt.setInt(1, 1);
stmt.setString(2, "John Smith");
stmt.setInt(3, 25);
 
stmt.executeUpdate();

UPDATE

// Update a Person.
conn.createStatement().
  executeUpdate("UPDATE Person SET age = age + 1 WHERE age = 25");

DELETE

conn.createStatement().execute("DELETE FROM Person WHERE age = 25");
