Articles & Content

StackOverflow

StackOverflow is a collaborative programming Q&A community venture

Note: The content of this blog is the opinion and thoughts of the blogger and does not necessarily represent the opinions of IDUG.

Select (show) only different columns from almost similar rows

I have a table with many columns (50+). In order to make decisions I need to analyze any data that varies between rows.
Currently my first query is:
SELECT maincol, count(maincol) FROM table where (conditions) group by maincol having count(maincol) > 1
then:
SELECT * FROM table where (conditions) and maincol = (previous result)
The second query displays all the rows, and I have to search them one by one:

col1 | col2 | col3 | col4 | col5 | col6 | manycolumns... | colN
  5  |  7   |  1   |  13  | 341  |  9   | ...            | 123
  5  |  7   |  2   |  13  | 341  |  5   | ...            | 123

I want to get:

col3 | col6
  1  |  9
  2  |  5

because it's difficult to search manually, column by column.
- N columns could be different
- I don't have access to credentials, so I can't use a programming language to process the results.
- Working on DB2
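
One way to narrow this down in SQL itself is to aggregate per maincol and flag only the columns whose values actually vary within the group; MIN and MAX differ exactly when a column holds more than one value. A rough sketch, reusing the placeholder table, column, and condition names from the question (the CASE list would be repeated, or generated, for all 50+ columns; note that MIN/MAX ignore NULLs):

SELECT maincol,
       CASE WHEN MIN(col1) <> MAX(col1) THEN 'differs' ELSE '' END AS col1_chk,
       CASE WHEN MIN(col2) <> MAX(col2) THEN 'differs' ELSE '' END AS col2_chk,
       CASE WHEN MIN(col3) <> MAX(col3) THEN 'differs' ELSE '' END AS col3_chk
       -- ... one CASE expression per remaining column ...
FROM table
WHERE (conditions)
GROUP BY maincol
HAVING COUNT(*) > 1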

Querying DB2 every 15 seconds causing memory leak in NodeJS

I have an application which checks for new entries in DB2 every 15 seconds on the iSeries using IBM's idb-connector. I have async functions which return the result of the query to socket.io which emits an event with the data included to the front end. I've narrowed down the memory leak to the async functions. I've read multiple articles on common memory leak causes and how to diagnose them.

MDN: memory management

Rising Stack: garbage collection explained

Marmelab: Finding And Fixing Node.js Memory Leaks: A Practical Guide

But I'm still not seeing where the problem is. Also, I'm unable to get permission to install node-gyp on the system, which means most memory-management tools are off limits, as memwatch, heapdump, and the like need node-gyp to install. Here's an example of the functions' basic structure.

const { dbconn, dbstmt } = require('idb-connector'); // require idb-connector

async function queryDB() {
  const sSql = `SELECT * FROM LIBNAME.TABLE LIMIT 500`;
  // create new promise
  let promise = new Promise(function (resolve, reject) {
    // create new connection
    const connection = new dbconn();
    connection.conn("*LOCAL");
    const statement = new dbstmt(connection);
    statement.exec(sSql, (rows, err) => {
      if (err) {
        throw err;
      }
      let ticks = rows;
      statement.close();
      connection.disconn();
      connection.close();
      resolve(ticks.length); // resolve promise with varying data
    })
  });
  let result = await promise; // await promise
  return result;
};

async function getNewData() {
  const data = await queryDB(); // get new data
  io.emit('newData', data);     // push to front end
  setTimeout(getNewData, 2000); // check again in 2 seconds
};

Any ideas on where the leak is? Am I using async/await incorrectly? Or am I creating/destroying DB connections improperly? Any help figuring out why this code is leaky would be much appreciated!

Edit: Forgot to mention that I have limited control over the backend processes, as they are handled by another team. I'm only retrieving the data they populate the DB with and adding it to a web page.

Can you create a linked server from SQL Server to DB2 using JDBC?

I'm trying to create a linked server in SQL Server 2014 to a vendor's DB2 database, which is on a remote system. I have found a lot of good instructions for doing this but have not had any luck getting it to work. The vendor doesn't know how to do this and has their people use SQuirreL SQL with a JDBC connection. Since we need this set up pretty quickly, I installed SQuirreL and was able to get it connected and working. I would still prefer to use SSMS on our end for many reasons.

While I'm still trying to figure out the DB2 linked server, I was wondering whether I can create a linked server using a JDBC connection, since we have that working.
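
For what it's worth, SQL Server linked servers are built on OLE DB or ODBC providers rather than JDBC, so the JDBC connection SQuirreL uses cannot be reused directly. A rough sketch of the ODBC route through the generic MSDASQL provider (the linked server name, DSN, and credentials below are hypothetical):

EXEC master.dbo.sp_addlinkedserver
     @server     = N'DB2_VENDOR',    -- name used in four-part queries / OPENQUERY
     @srvproduct = N'',
     @provider   = N'MSDASQL',       -- generic OLE DB provider for ODBC
     @datasrc    = N'DB2_ODBC_DSN';  -- system ODBC DSN pointing at the vendor's DB2

EXEC master.dbo.sp_addlinkedsrvlogin
     @rmtsrvname  = N'DB2_VENDOR',
     @useself     = N'FALSE',
     @rmtuser     = N'db2user',
     @rmtpassword = N'********';

-- then, for example:
SELECT * FROM OPENQUERY(DB2_VENDOR, 'SELECT * FROM SOME_SCHEMA.SOME_TABLE FETCH FIRST 10 ROWS ONLY')

If a dedicated DB2 provider is installed (IBM's IBMDADB2, or Microsoft's DB2OLEDB from the Host Integration Server feature pack), it can take the place of MSDASQL in the sketch.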

Export data from DB2 to Prometheus using query-exporter

I am following this article to achieve my goal (to export DB2 metrics into Prometheus with the help of query-exporter):

https://pypi.org/project/query-exporter/

I am getting the following error while running query-exporter on a Linux machine with DB2 as the database.

[root@jupiter-vm792 ~]# query-exporter config.yaml
2019-06-17 06:36:04,814 - ERROR - query-exporter - error from database "db1": '\x02'
2019-06-17 06:36:04,824 - ERROR - query-exporter - error from database "db2": '\x02'
2019-06-17 06:36:04,857 - ERROR - query-exporter - error from database "db2": '\x02'
^C
[root@jupiter-vm792 ~]#

I searched for this error online and came across the following page, which talks about '\x02', but I didn't understand it:

https://www.ibm.com/support/knowledgecenter/en/SSEPEK_10.0.0/codes/src/tpc/n30073.html

This is what my config.yml file looks like.

config.yml

databases:
  db1:
    dsn: ibm_db_sa://db2inst1:passw0rd@x.x.x.x:50000/
  db2:
    dsn: ibm_db_sa://db2inst1:passw0rd@x.x.x.x:50000/
    keep-connected: true

metrics:
  metric1:
    type: gauge
    description: A sample gauge
  metric2:
    type: summary
    description: A sample summary
    labels: [l1, l2]
  metric3:
    type: histogram
    description: A sample histogram
    buckets: [10, 20, 50, 100, 1000]

queries:
  query1:
    interval: 5
    databases: [db2]
    metrics: [metric1]
    sql: SELECT * FROM HEALTHMETRICS.ALERT_CONFIG_PARAM

Java Spring Hibernate SQL select with order by causes in DB2 big performance impact

I have a table T_LOG, which has a few million rows. Hibernate 3.6.10.Final using DetachedCriteria produces an inefficient SQL query, which causes a big performance impact.

Simplified sample:

select temp.*
from (
  select row_number() over(order by a.dateTime asc) as rownumber, a.dateTime, a.*
  from T_LOG a
  order by a.dateTime asc
) as temp
where rownumber <= 75;

The problem is the order by a.dateTime asc (2), which is redundant, because the whole work is already done by the first clause, row_number() over(order by a.dateTime asc) (1). This additional order by clause makes the statement about 1000 times slower than the same one without it. For example, the statement above needs more than 10 minutes on 25 million rows, but this simplified statement:

select temp.*
from (
  select row_number() over(order by a.dateTime asc) as rownumber, a.dateTime, a.*
  from T_LOG a
) as temp
where rownumber <= 75;

takes only 0.1 seconds. The result set is the same. The column dateTime has a non-unique index.

In Java, the DetachedCriteria (simplified) looks like this:

DetachedCriteria crit = DetachedCriteria.forClass(Log.class);
...
crit.addOrder(org.hibernate.criterion.Order.asc("dateTime"));
// Used for pagination
List logs = logRepository.findByCriteria(crit, from, count);

How can I adjust the DetachedCriteria so that the generated SQL statement avoids this performance impact?
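
If switching this one query to native SQL is an option, the first page can also be expressed without the row_number() wrapper at all, which lets DB2 walk the dateTime index and stop after 75 rows; a minimal sketch (later pages would need a keyset condition such as a.dateTime > :lastSeen, which is an assumption about the pagination scheme):

SELECT a.*
FROM T_LOG a
ORDER BY a.dateTime ASC
FETCH FIRST 75 ROWS ONLY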

Sample of good cost:

[screenshot: Proper cost of SQL]

And bad cost:

[screenshot: Very bad cost of SQL]

Doobie streaming query causes invalid operation

Doobie

Without much experience in either Scala or Doobie, I am trying to select data from a DB2 database. The following query works fine and prints 5 employees, as expected.

import doobie.imports._, scalaz.effect.IO

object ScalaDoobieSelect extends App {
  val urlPrefix = "jdbc:db2:"
  val schema = "SCHEMA"
  val obdcName = "ODBC"
  val url = urlPrefix + obdcName + ":" +
    "currentSchema=" + schema + ";" +
    "currentFunctionPath=" + schema + ";"
  val driver = "com.ibm.db2.jcc.DB2Driver"
  val username = "username"
  val password = "password"

  implicit val han = LogHandler.jdkLogHandler // (ii)

  val xa = DriverManagerTransactor[IO](
    driver, url, username, password
  )

  case class User(id: String, name: String)

  def find(): ConnectionIO[List[User]] =
    sql"SELECT ID, NAME FROM EMPLOYEE FETCH FIRST 10 ROWS ONLY"
      .query[User]
      .process
      .take(5) // (i)
      .list

  find()
    .transact(xa)
    .unsafePerformIO
    .foreach(e => println("ID = %s, NAME = %s".format(e.id, e.name)))
}

Issue

When I want to read all selected rows and remove take(5), so that I have .process.list instead of .process.take(5).list, I get the following error. (i)

com.ibm.db2.jcc.am.SqlException: [jcc][t4][10120][10898][3.64.133] Invalid operation: result set is closed. ERRORCODE=-4470, SQLSTATE=null

I am wondering what take(5) changes so that it does not return an error. To get more information about the invalid operation, I have tried to enable logging. (ii) Unfortunately, logging is not supported for streaming. How can I get more information about which operation causes this error?

Plain JDBC

The plain JDBC query below, which in my opinion is equivalent, works as expected and returns all 10 rows.

import java.sql.{Connection, DriverManager}

object ScalaJdbcConnectSelect extends App {
  val urlPrefix = "jdbc:db2:"
  val schema = "SCHEMA"
  val obdcName = "ODBC"
  val url = urlPrefix + obdcName + ":" +
    "currentSchema=" + schema + ";" +
    "currentFunctionPath=" + schema + ";"
  val driver = "com.ibm.db2.jcc.DB2Driver"
  val username = "username"
  val password = "password"

  var connection: Connection = _
  try {
    Class.forName(driver)
    connection = DriverManager.getConnection(url, username, password)
    val statement = connection.createStatement
    val rs = statement.executeQuery(
      "SELECT ID, NAME FROM EMPLOYEE FETCH FIRST 10 ROWS ONLY"
    )
    while (rs.next) {
      val id = rs.getString("ID")
      val name = rs.getString("NAME")
      println("ID = %s, NAME = %s".format(id, name))
    }
  } catch {
    case e: Exception => e.printStackTrace
  }
  connection.close
}

Environment

As can be seen in the error message, I am using db2jcc.jar version 3.64.133. DB2 is version 11.

How to add explicit WHERE clause in Kafka Connect JDBC Source connector

I am using Kafka Connect to source data from DB2 into a Kafka topic, and I am configuring a SQL query to read the data from DB2. Below is the query:

SELECT SEQ_I AS error_id,
       TRIM(SEND_I) AS sca,
       to_char(CREATE_TS, 'YYYY-MM-DD HH24:MI:SS.FF3') AS create_timestamp,
       CREATE_TS,
       TRIM(ERR_MSG) AS error_message,
       CASE substr(ERR_MSG, 1, locate('-', ERR_MSG) - 1)
            WHEN 'WARNING'   THEN 'W'
            WHEN 'SUSPENDED' THEN 'F'
       END ERROR_TYPE
FROM INTCHG_ERROR_DIR
WHERE TRAN_I = '503' AND PRCS_N = 'GLOBAL'

I am using the setting "timestamp.column.name": "CREATE_TS". The problem is that the query already has a WHERE clause, and Kafka Connect tries to add another WHERE clause with the timestamp column, which creates an issue. There is one more issue: if I remove the WHERE clause from the SQL, like below,

SELECT SEQ_I AS error_id,
       TRIM(SEND_I) AS sca,
       to_char(CREATE_TS, 'YYYY-MM-DD HH24:MI:SS.FF3') AS create_timestamp,
       CREATE_TS,
       TRIM(ERR_MSG) AS error_message,
       CASE substr(ERR_MSG, 1, locate('-', ERR_MSG) - 1)
            WHEN 'WARNING'   THEN 'W'
            WHEN 'SUSPENDED' THEN 'F'
       END ERROR_TYPE
FROM INTCHG_ERROR_DIR

then I get an error with substr, like below:

SQL Error [22011]: THE SECOND OR THIRD ARGUMENT OF THE SUBSTR OR SUBSTRING FUNCTION IS OUT OF RANGE. SQLCODE=-138, SQLSTATE=22011, DRIVER=4.19.26

Can anyone suggest something for both of these issues? I am stuck at this point.
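
One workaround often suggested for the JDBC source connector in query mode is to push the fixed filter into a derived table, so the WHERE clause that the connector appends for timestamp.column.name lands on the outer query instead of colliding with yours. The sketch below also guards the substr call, since locate('-', ERR_MSG) returns 0 for rows whose ERR_MSG contains no '-', which makes the third argument negative and raises SQLCODE -138:

SELECT * FROM (
    SELECT SEQ_I AS error_id,
           TRIM(SEND_I) AS sca,
           to_char(CREATE_TS, 'YYYY-MM-DD HH24:MI:SS.FF3') AS create_timestamp,
           CREATE_TS,
           TRIM(ERR_MSG) AS error_message,
           CASE WHEN locate('-', ERR_MSG) > 1 THEN
                CASE substr(ERR_MSG, 1, locate('-', ERR_MSG) - 1)
                     WHEN 'WARNING'   THEN 'W'
                     WHEN 'SUSPENDED' THEN 'F'
                END
           END AS error_type
    FROM INTCHG_ERROR_DIR
    WHERE TRAN_I = '503' AND PRCS_N = 'GLOBAL'
) AS FILTERED_ERRORS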

Retrieve field offset via recursive query in db2

Assume that I've got a key-value table of field_name/field_len pairs.

As follows:

field_name | field_len
-----------+----------
FIELD_A    | 10
FIELD_B    | 20
...        | ...
FIELD_X    | 2
FIELD_Y    | 100

Then I need an offset of each field to be in third column. Like this:

field_name | field_len | offset
-----------+-----------+-------
FIELD_A    | 10        | 0
FIELD_B    | 20        | 10
...        | ...       | ...
FIELD_X    | 2         | 250
FIELD_Y    | 100       | 252

So I've written this script based on some manuals (1, 2):

with offsets (column_name, length, offset) as (
  (select column_name, length, CAST(0 AS SMALLINT)
   from myschema.sizes a
   start with rrn(a) = 1)
  union all
  (select b.column_name, b.length, offset + o.length
   from offsets o, myschema.sizes b
   where rrn(b) between 2 and 100)
)
select * from offsets;

However, it keeps getting into an infinite loop.

This version also gives the same result:

with offsets (column_name, length, offset) as (
  (select column_name, length, CAST(0 AS SMALLINT)
   from myschema.sizes a
   fetch first row only)
  union all
  (select b.column_name, b.length, offset + o.length
   from offsets o
   join myschema.sizes b on b.column_name = o.column_name
   where o.column_name <> 'LAST_FIELD')
)
select * from offsets;

I guess that I messed up the exit condition somewhere, but I cannot figure out the exact place to fix it.

It would also be great to avoid relying on any table-specific metadata, like the row count.
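
For what it's worth, if the DB2 level in use supports OLAP window functions, the running offset can be computed without recursion at all: each row's offset is the running total of lengths minus its own length. A minimal sketch, assuming the arrival order given by rrn() is the intended field order (substitute whatever column really defines the ordering):

select a.column_name,
       a.length,
       sum(a.length) over (order by rrn(a)) - a.length as offset
from myschema.sizes a;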

DB2 shell run issue

When I run this shell script against the DB2 instance:

VAR=`db2 CONNECT TO ${DBNAME} ${DBUSRSTR}`
if [ ! $? -eq 0 ]; then
  logErr ${MSG002E}
  logErr ${VAR}
  exit 2
fi
logInfo "【2】 DB2 connect " $?
logInfo ${VAR}

db2 set current schema TEST
if [ ! $? -eq 0 ]; then
  logErr ${MSG002E}
  exit 2
fi

2019-06-17 03:32:03,123 INFO [test.sh] 【2】 DB2 connect: 0
2019-06-17 03:32:03,124 INFO [test.sh] Database Connection Information
  Database server       = DB2/LINUXX8664 10.5.5
  SQL authorization ID  = DB2INST1
  Local database alias  = TEST
DB21034E The command was processed as an SQL statement because it was not a
valid Command Line Processor command. During SQL processing it returned:
SQL1024N A database connection does not exist. SQLSTATE=08003
2019-06-17 03:32:04,127 ERR [test.sh]

I don't know why it seems to connect successfully but then can't execute SQL.

I tried it in the interactive db2 command line:

db2 =>

There it works fine. But in the shell script, it fails.

Compare archive and master database content

I have a master database in DB2 9.1, and I created a database in Oracle 12c to archive it. I have done this successfully, handling some datatype changes programmatically.

I want to compare the content of the two databases, master (in DB2) and archive (in Oracle), to ensure the correctness of the content transfer. Because of some limitations, I cannot use any tools to do this, so I must do it programmatically with Java, C#, or something similar.

My first solution is to read each row's content from the master database as bytes, concatenate the columns together, compute a hash code for each master and archive row, and compare these hash codes. This is very time-consuming for big tables (which have CLOB and BLOB columns and row updates). Does anyone have a good idea?
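
A cheaper first pass than hashing every row is to compare per-table aggregates computed on both sides; the same statement runs unchanged on DB2 and Oracle, so the two result sets can be diffed directly. A rough sketch with hypothetical table and column names:

SELECT COUNT(*)               AS row_count,
       COUNT(DISTINCT pk_col) AS distinct_keys,
       SUM(amount_col)        AS amount_checksum,
       MIN(created_ts)        AS min_created,
       MAX(created_ts)        AS max_created
FROM myschema.some_table

Rows that differ only inside CLOB or BLOB columns will not show up in such aggregates, so the expensive per-row hash comparison can then be limited to the tables whose aggregates disagree.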