. . .

JDBC Module

Introduction

The JDBC module enables app developers to access relational SQL databases via the middleware. It is never used directly; instead, it serves as a meta module for concrete, database-specific modules.

Rather than having to integrate multiple endpoints into the app's data access layer, the middleware serves as the single point of communication for the app, and the generated ApiOmat SDK provides all the required functionality.

Both regular tables and table-valued functions are supported.

Configuration

Some settings of the JDBC module can be modified and affect all sub-modules (such as the MySQL module). Other settings are database-specific and are set when starting the respective crawling process.

These settings can be changed via module configuration:

  • Default result limit: The result limit that is applied when no limit is specified while querying all objects of a class. Set it to 0 to disable the default limit.

  • Thread-pool-size: Number of threads used to create the Meta Models and Attributes. Set it to 0 to trigger an automatic computation of suitable values; change it only if you encounter problems with the computed values.

  • Connection Pool related settings are taken over 1:1 from HikariCP (see https://github.com/brettwooldridge/HikariCP); a sketch of how they map to HikariCP properties follows at the end of this section:

    • Connection timeout: This property controls the maximum number of milliseconds that a client (here, the middleware) will wait for a connection from the pool. If this time is exceeded without a connection becoming available, a SQLException will be thrown. The lowest acceptable connection timeout is 250 ms. Default: 30000 (30 seconds)

    • Leak detection threshold: This property controls the amount of time that a connection can be out of the pool before a message is logged indicating a possible connection leak. A value of 0 means leak detection is disabled. Lowest acceptable value for enabling leak detection is 2000 (2 seconds). Default: 0

    • Max lifetime: This property controls the maximum lifetime of a connection in the pool. When a connection reaches this timeout it will be retired from the pool, subject to a maximum variation of +30 seconds. An in-use connection will never be retired, only when it is closed will it then be removed. We strongly recommend setting this value, and it should be at least 30 seconds less than any database-level connection timeout. A value of 0 indicates no maximum lifetime (infinite lifetime), subject of course to the idleTimeout setting. Default: 1800000 (30 minutes)

    • Max pool size: This property controls the maximum size that the pool is allowed to reach, including both idle and in-use connections. Basically this value will determine the maximum number of actual connections to the database backend. A reasonable value for this is best determined by your execution environment. When the pool reaches this size, and no idle connections are available, calls to getConnection() will block for up to connectionTimeout milliseconds before timing out. Default: 10

    • Validation timeout: This property controls the maximum amount of time that a connection will be tested for aliveness. This value must be less than the connectionTimeout. Lowest acceptable validation timeout is 250 ms. Default: 5000

  • Check config change interval: The connection pool is reset as soon as any of its configuration values change. However, the module only checks for configuration changes every x milliseconds, because each check causes a MongoDB read per configuration value, which decreases performance. Default: 10000.

  • Make crawled classes visible via REST: Configures whether the classes in a crawled module are visible. If they are invisible, they are only accessible via other modules, where SQL-injection-safe methods can be called to read data from the database. Note: if set to true, a user may access the database in a malicious way by requesting data models and passing a malicious query. The best practice is therefore to make the classes invisible and create an orchestration or facade module which handles all actions on these data models. Default: true (visible)

The configuration window in the Dashboard also contains some info about the settings.
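For illustration, the following sketch shows roughly how the connection pool settings listed above correspond to HikariCP's configuration properties. This mapping happens inside the middleware, so you never write this code yourself; the class name is made up and the values shown are the documented defaults.

import com.zaxxer.hikari.HikariConfig;

public class PoolSettingsSketch
{
    public static void main(String[] args)
    {
        // Illustrative only: the JDBC module builds its connection pool from the
        // Dashboard settings internally.
        HikariConfig config = new HikariConfig();
        config.setConnectionTimeout(30000);     // "Connection timeout" in ms, default 30 seconds
        config.setLeakDetectionThreshold(0);    // "Leak detection threshold" in ms, 0 = disabled
        config.setMaxLifetime(1800000);         // "Max lifetime" in ms, default 30 minutes
        config.setMaximumPoolSize(10);          // "Max pool size", default 10
        config.setValidationTimeout(5000);      // "Validation timeout" in ms, default 5 seconds
        // The middleware would then create the pool with: new HikariDataSource(config)
    }
}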

Use

The JDBC module performs two major tasks:

  • Crawling a database and creating a new module

  • Performing CRUD operations on a crawled database

Crawl

To crawl a database, refer to the documentation of the corresponding concrete module (for example, MySQL).

Dashboard Data Editor

To check whether the crawl was successful, you can use the Data Editor in the Dashboard. The behaviour differs slightly between classes that are the counterpart of a table and classes that are the counterpart of a table-valued function.

Table

Select the class on the left that is the counterpart of the database table you want to retrieve data from. The Data Editor then shows the data from that table on the SQL server.

You can also add new rows as well as edit and delete existing ones. Please note that due to database constraints it is not possible to edit primary key column values of existing rows.

Table-valued function

For classes that are the counterpart of a table-valued function, you might need to pass parameters. At the moment, parameters are passed within the query string, using the input field at the top right of the Data Editor. For example, entering ["exampleUser"] passes the single string argument "exampleUser" to the function.

Passing arguments can be combined with other where-clauses, such as “price between 0 and 50”.

SDK

Table

Retrieving the list of objects of a class that is the counterpart of a database table works the same way as for regular classes. For example, in the C# SDK you would use:


List<Product> products;
// all objects, unfiltered (the module's default result limit applies when no limit is given)
products = await Product.GetProductsAsync();
// filtered with a standard SQL where-clause
products = await Product.GetProductsAsync("price between 0 and 50");

Table-valued function

For table-valued functions it is the same as in the Data Editor: you have to pass the parameters in the query. Example:


List<UserRecommendation> userRecs;
// the JSON array in the query string contains the arguments for the table-valued function
userRecs = await UserRecommendation.GetUserRecommendationsAsync("[\"exampleUser\"]");


As in the Data Editor, you can also combine this with where-clauses to filter the results.

Internals

As part of the crawling process, JDBC not only creates a new module but also saves database metadata to meta models inside the JDBC module. These meta models have the suffix "Info" and should be handled very carefully.

Currently there are the following limitations, which we are working on:

  • When converting a query from an ApiOmat query to an SQL where-clause, there are some specifics for the different SQL implementations. Currently, the following implementations are handled:

    • MS SQL Server

    • MySQL

    • PostgreSQL

    • Oracle Database

  • Not all data types might be supported, again because of differences between the SQL implementations. If you experience any errors related to this, please tell us so we can fix it.

  • The JDBC types real, double, numeric, decimal and float are treated as Java Double, and Java's Double type loses precision when the value contains more than 14 digits (see the sketch after this list).

  • Database column names are converted to camelCase attribute names. Thus, two columns "user_name" and "UserName" are both mapped to the name "userName".
    Crawling a table containing columns whose names would be mapped to the same string will therefore lead to unexpected results.

  • Instead of ApiOmat queries, standard SQL where-clauses have to be passed as the query.
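
To illustrate the precision limitation mentioned above, this small Java sketch (with a made-up number and class name) shows what happens when a value with more significant digits than a double can represent exactly is mapped to a Java double:

import java.math.BigDecimal;

public class PrecisionLossSketch
{
    public static void main(String[] args)
    {
        // a DECIMAL/NUMERIC value with many significant digits, as it might be stored in the database
        BigDecimal exact = new BigDecimal("12345678901234567.89");
        // mapped to double, which cannot hold this many significant digits exactly
        double asDouble = exact.doubleValue();
        System.out.println(exact);    // 12345678901234567.89
        System.out.println(asDouble); // 1.2345678901234568E16 - the trailing digits are lost
    }
}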