B4J Question JDBC pool connections: how to use them correctly?

peacemaker

Expert
Licensed User
Longtime User
Hi, all

If a B4J server app (running on Host1 of a LAN) uses a big MySQL database on a dedicated Host0 on the same LAN, and other apps on different hosts are planned that will connect to the same Host0 database, how should the connection pool be used correctly to avoid errors?

There are tons of subs inside the app that use the DB. Should a new connection be opened in each sub and closed when the sub finishes?
 

amykonio

Active Member
Licensed User
Longtime User
Speed may be related to many factors: available memory, hard disk speed, processor speed, general workload of your computer, the network, MySQL configuration...

Now that you have seen the pool is working, you can change pooling=false to pooling=true in the connection string.

Also, be aware that each insert takes a connection from the pool and releases it. It was written that way to test pooling extensively. The same happens for each SQL query, so for 1000 inserts you take a connection from the pool and release it 1000 times.
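As a rough sketch of that per-statement pattern (assuming a ConnectionPool object named pool and a hypothetical readings table):

B4X:
    ' Each statement checks a connection out of the pool and returns it afterwards.
    Dim sql As SQL = pool.GetConnection
    sql.ExecNonQuery2("INSERT INTO readings (host, value) VALUES (?, ?)", _
        Array As Object("Host1", 42))
    sql.Close ' returns the connection to the pool; it is not destroyed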

Andreas.
 

OliverA

Expert
Licensed User
Longtime User
pooling=false to pooling=true in connection string.
No. The B4J SQL library does not support JDBC driver-based pooling. Disable all driver-based pooling and use a 3rd-party object/library (such as ConnectionPool) to manage your connection pool.

Links:
This shows that you should go through Java's DataSource class to retrieve connections from the MySQL JDBC driver's internal pool. The internal code of the B4J SQL library does not do this / provide for this (https://github.com/AnywhereSoftware...src/anywheresoftware/b4j/objects/SQL.java#L75).

ConnectionPool, on the other hand, will properly use the provided JDBC URL to create the connections used within ConnectionPool's pool (c3p0). That connection URL must disable any internal JDBC pooling (and it looks like Connector/J defaults to pool=true, so you must actually pass pool=false; I previously erroneously said to remove it from the string).
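As a minimal sketch of that approach (host, port, database name, and credentials are placeholders), initializing the ConnectionPool library with driver-side pooling disabled in the URL might look like:

B4X:
    ' Hypothetical values: replace host/db/credentials with your own.
    ' Driver-side pooling is disabled in the URL; c3p0 manages the pool instead.
    Dim pool As ConnectionPool
    pool.Initialize("com.mysql.jdbc.Driver", _
        "jdbc:mysql://Host0:3306/mydb?pooling=false&characterEncoding=utf8", _
        "user", "password")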
 

alwaysbusy

Expert
Licensed User
Longtime User
I have only been able to skim through the whole post, but I don't seem to find anyone mentioning transactions or background workers (instead of just a timer). We have many concurrent users in our ABM apps inserting/updating tens of thousands of records without a hiccup. We do not use the default connection pool but HikariCP instead. Our pool is set up with 100 connections and can handle many tens of thousands of users. It may be worth spending some time looking at the indexes you've set on your tables. We've seen spectacular results from modifying them, where queries went from over 10 s down to under 100 ms.
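To illustrate the transaction/batching point, here is a sketch only (pool and the readings table are assumptions, and the asynchronous batch must run in a resumable sub). Batching lets one connection and one transaction cover all the inserts, instead of a checkout and commit per row:

B4X:
    ' Check out one connection for the whole batch instead of one per insert.
    Dim sql As SQL = pool.GetConnection
    For i = 1 To 10000
        sql.AddNonQueryToBatch("INSERT INTO readings (value) VALUES (?)", Array As Object(i))
    Next
    Dim SenderFilter As Object = sql.ExecNonQueryBatch("SQL") ' runs asynchronously in a single transaction
    Wait For (SenderFilter) SQL_NonQueryComplete (Success As Boolean)
    Log("Batch insert success: " & Success)
    sql.Close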
 

OliverA

Expert
Licensed User
Longtime User
What version of the MySQL JDBC driver are you using? More current versions of Connector/J (8+) seem to have their own connection pool (if enabled, and supposedly enabled by default). That MAY conflict with a third-party pool (the reason for MAY is that I've not used a current version of Connector/J with third-party pooling, so yes, I'm guessing, but I think it's a good guess based on the current MySQL documentation).
 

alwaysbusy

Expert
Licensed User
Longtime User
And what were these modifications (to be more clear, if possible)?
Depends on your queries. Make sure you have good indexes for your WHERE clauses. We use a tool like dbForge Studio to analyze queries; it gives us pointers on which indexes a query uses and how long it takes. Sometimes you can see that, for some queries, MySQL has to create temporary tables in memory. By changing the structure of the query or adding an extra index, you can avoid that, which results in much faster execution.
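For example (the table, column, and index names here are hypothetical), a composite index matching a frequent WHERE clause can be created once from B4J:

B4X:
    ' Index matching a query like: SELECT ... FROM readings WHERE host = ? AND created_at >= ?
    sql.ExecNonQuery("CREATE INDEX idx_host_created ON readings (host, created_at)")
    ' MySQL's EXPLAIN then shows whether the index is actually used, e.g.:
    ' EXPLAIN SELECT * FROM readings WHERE host = 'Host1' AND created_at >= '2020-01-01'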

What version of the MySQL JDBC driver are you using?
We use mysql-connector-java-5.1.37-bin

HikariCP has some tuning settings you can play with, and some nice 'memory leak' logging that indicates when a connection is not closed:

B4X:
    pool.Initialize("com.mysql.jdbc.Driver", "jdbc:mysql://xxx.xxx.xxx.xxx/PRD_xxxxxxxxxxxxxxxx?characterEncoding=utf8&zeroDateTimeBehavior=convertToNull", login, password)
    pool.MaxLifetime = 1800000
    Dim jocp As JavaObject = pool
    jocp.RunMethod("setMaximumPoolSize", Array(poolSize))
    jocp.RunMethod("addDataSourceProperty", Array("cachePrepStmts", True))
    jocp.RunMethod("addDataSourceProperty", Array("prepStmtCacheSize", 250))
    jocp.RunMethod("addDataSourceProperty", Array("prepStmtCacheSqlLimit", 2048))
    jocp.RunMethod("addDataSourceProperty", Array("useServerPrepStmts", True))
    jocp.RunMethod("addDataSourceProperty", Array("useLocalSessionState", True))
    jocp.RunMethod("addDataSourceProperty", Array("useLocalTransactionState", True))
    jocp.RunMethod("addDataSourceProperty", Array("rewriteBatchedStatements", True))
    jocp.RunMethod("addDataSourceProperty", Array("cacheResultSetMetadata", True))
    jocp.RunMethod("addDataSourceProperty", Array("cacheServerConfiguration", True))
    jocp.RunMethod("addDataSourceProperty", Array("elideSetAutoCommits", True))
    jocp.RunMethod("addDataSourceProperty", Array("maintainTimeStats", True))
    Dim thres As Long = 60000
    jocp.RunMethod("setLeakDetectionThreshold", Array(thres))
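Assuming the HikariCP wrapper hands out pooled SQL objects the same way the standard ConnectionPool library does (an assumption; check your wrapper's actual API), usage then follows the usual checkout/close pattern:

B4X:
    Dim sql As SQL = pool.GetConnection ' assumed method name, as in ConnectionPool
    sql.ExecNonQuery("UPDATE settings SET value = '1' WHERE id = 7") ' hypothetical table
    sql.Close ' closing promptly avoids triggering the leak detector configured above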
 

chams

Member
Licensed User
Longtime User
Maybe using a message broker like Apache Kafka can avoid endless calls to the different hosts.
 

peacemaker

Expert
Licensed User
Longtime User
a message broker
Thanks for the new idea here.
But, as I understand it, a message broker helps hosts exchange data "many-to-many", whereas my situation is just collecting data "many-to-one" into a common DB.
 

chams

Member
Licensed User
Longtime User
@Erel wrote an excellent MQTT tutorial:
https://www.b4x.com/android/forum/threads/iot-mqtt-protocol.59471/
The different clients send messages to the broker under a topic. Your "data collector" connects to the broker and subscribes to that topic. When an event occurs, you perform the collector task, or read the data directly from the message (depending on the task you have to achieve).
Apache Kafka is also a message broker: you write a producer for your different clients and one consumer connector for your "data collector". The messages are stored in Kafka, so if your database breaks, you can replay the stored messages and recreate it. You can also put a second machine on the network running your "data collector" program, and you get a redundant machine without any database-replication mechanism.
I don't know the exact goal of your work, but it seems to me that this architecture is more convenient.
Thanks to Erel, the MQTT protocol is easy to use. On the other hand, Apache Kafka is a heavy-duty broker with a learning curve. https://kafka.apache.org/documentation/#gettingStarted
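As a sketch of that many-to-one flow with B4J's jMQTT library (broker address, topic, and payload are placeholders): each host publishes its data, and the single collector subscribes and writes to the DB.

B4X:
    ' Publisher side, running on each data-producing host:
    Dim client As MqttClient
    client.Initialize("client", "tcp://Host0:51042", "host1-app")
    client.Connect

    Sub client_Connected (Success As Boolean)
        If Success Then client.Publish("collector/data", "42".GetBytes("UTF8"))
    End Sub

    ' The collector side subscribes with client.Subscribe("collector/data", 0) and
    ' inserts each payload into MySQL from its client_MessageArrived event.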
 