Introduction
This is a follow-up to a previously published article about configuring the DB2REMOTE: option in a local DB2 instance (“local” in this case meaning “in the local data centre, not in AWS”) to enable direct DB2 backups to AWS S3 storage.
In this article, we will go one step further and examine a new option in DB2 v11.5.7 which, for the first time, allows transaction log archiving directly to AWS S3 storage.
Background
Without going into too many technical details, we are going to use a DB2 database TESTDB created on a local Linux server within a DB2 instance that has just been upgraded to the required level v11.5.7.
(for more details on the actual setup please read the previous article)
The current DB2 instance level:
db2level
DB21085I This instance or install (instance name, where applicable:
"db2dwtst") uses "64" bits and DB2 code release "SQL11057" with level
identifier "0608010F".
Informational tokens are "DB2 v11.5.7.0", "s2110181747", "DYN2110181747AMD64",
and Fix Pack "0".
Product is installed at "/opt/ibm/db2/V11.5".
Configuration
In order to use the AWS S3 storage as the target for our archived DB2 transaction logs (as well as the database backups) we must do a bit of preparation first: install and configure the AWS CLI, then create the local keystore and configure DB2 to use it (again, see the previous article for details on how all this is done).
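For reference, that preparation boils down to something like the following sketch (the keystore path and the password placeholder here are just examples, so adapt them to your environment):
aws configure
gsk8capicmd_64 -keydb -create -db "/home/db2dwtst/keystore/db2keystore.p12" -pw "<keystore-password>" -type pkcs12 -stash
db2 update dbm cfg using KEYSTORE_TYPE PKCS12 KEYSTORE_LOCATION /home/db2dwtst/keystore/db2keystore.p12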
Next, a STORAGE ACCESS ALIAS must be defined in our DB2 instance. The alias specifies the AWS S3 bucket to be used and provides the connection details, i.e. the username/password (the AWS access key ID and secret access key), which are stored in the local keystore in encrypted form.
db2 "CATALOG STORAGE ACCESS ALIAS S3DWRND
VENDOR S3
SERVER s3.us-east-2.amazonaws.com
USER '********************'
PASSWORD '********************'
CONTAINER dwrnd.triton.co.uk"
NOTE: This command configures DB2 to use the S3 bucket dwrnd.triton.co.uk, which I defined beforehand in AWS using my personal AWS account. Obviously, I have omitted the username and the password for security reasons, so if you want to try this out you will have to create your own S3 bucket under your own AWS account 😊
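A quick way to double-check that the alias has been catalogued correctly is to list all storage access aliases defined in the instance:
db2 "LIST STORAGE ACCESS"
S3DWRND should appear in the output together with its vendor, server and container details, while the credentials themselves stay safely in the keystore.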
Lastly, we must update one of the LOGARCHMETH database configuration parameters to use the storage access alias that we have just defined. In this case the alias points to an AWS S3 bucket; according to the DB2 documentation, the only other supported target (for now?) is an IBM SoftLayer container.
We will also specify a subdirectory (i.e. an AWS bucket prefix) “TESTDB_LOGS” where the logs will be saved within the bucket, to keep them separated from the other files:
db2 update db cfg for TESTDB using LOGARCHMETH1 'DB2REMOTE://S3DWRND//TESTDB_LOGS'
DB20000I The UPDATE DATABASE CONFIGURATION command completed successfully.
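If you want to be sure the change has taken effect, the new value should now show up in the database configuration, e.g.:
db2 get db cfg for TESTDB | grep -i "log archive method"
First log archive method (LOGARCHMETH1) = DB2REMOTE://S3DWRND//TESTDB_LOGS
Second log archive method (LOGARCHMETH2) = OFF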
The following entry in the DB2 diagnostic log (db2diag.log) also confirms the configuration change:
2021-12-11-16.01.26.705399+000 I811850E569 LEVEL: Event
PID : 11376 TID : 139858219624192 PROC : db2sysc 0
INSTANCE: db2dwtst NODE : 000 DB :
APPHDL : 0-58 APPID: *LOCAL.db2dwtst.211211160051
AUTHID : DB2DWTST HOSTNAME: itchy
EDUID : 25 EDUNAME: db2agent (instance) 0
FUNCTION: DB2 UDB, config/install, sqlfLogUpdateCfgParam, probe:20
CHANGE : CFG DB TESTDB: "Logarchmeth1" From: "DISK:/home/db2dwtst/logarchive/" To: "DB2REMOTE://S3DWRND//TESTDB_LOGS"
DB2 Restarts
Notice that we didn’t have to stop and restart the DB2 instance at any point during this exercise, because the instance had already been configured to use the local keystore. Keep in mind that if you are setting all this up from scratch, a DB2 instance restart will be required after the keystore configuration.
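In that case the restart is just the standard stop/start sequence, run once no applications are attached to the instance:
db2stop
db2start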
The situation is similar for the database: no restart (deactivation) was required because our test database TESTDB had already been configured for log archiving before this exercise:
First log archive method (LOGARCHMETH1) = DISK:/home/db2dwtst/logarchive/
Second log archive method (LOGARCHMETH2) = OFF
If this weren’t the case, we would have had to restart the database after reconfiguring it for log archiving!
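For completeness, if the database had still been using circular logging, the switch would look roughly like this. Note that enabling log archiving puts the database into BACKUP PENDING state, so a full backup is required before it can be used again (the TESTDB_BACKUPS prefix below is just a hypothetical example, in the spirit of the previous article):
db2 update db cfg for TESTDB using LOGARCHMETH1 'DB2REMOTE://S3DWRND//TESTDB_LOGS'
db2 deactivate db TESTDB
db2 backup db TESTDB to DB2REMOTE://S3DWRND//TESTDB_BACKUPS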
Test
At this point, the local database TESTDB should be completely configured to archive the transaction logs directly to the S3 storage.
To check this, we need to run some transactional activity against the database, so that log space is consumed and the filled log files get archived. [that’s precisely what happens here, but let’s not go into details 😊]
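If you don’t feel like waiting for real workload to fill a log file, DB2 can also be asked to close and archive the current active log on demand:
db2 "ARCHIVE LOG FOR DATABASE TESTDB"
This truncates the current log file and triggers its archival, which is handy for testing a freshly configured archiving target.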
After a while, the following message(s) appear in the DB2 diagnostic log:
2021-12-11-16.26.19.866332+000 E913527E609 LEVEL: Info
PID : 11376 TID : 139858211235584 PROC : db2sysc 0
INSTANCE: db2dwtst NODE : 000 DB : TESTDB
HOSTNAME: itchy
EDUID : 78 EDUNAME: db2logmgr (TESTDB) 0
FUNCTION: DB2 UDB, data protection services, sqlpgArchiveLogFile, probe:3175
MESSAGE : ADM1846I Completed archive for log file "S0000305.LOG" to
"DB2REMOTE://S3DWRND//TESTDB_LOGS/db2dwtst/TESTDB/NODE0000/
LOGSTREAM0000/C0000000/" from
"/home/db2dwtst/db2dwtst/NODE0000/SQL00001/LOGSTREAM0000/".
This confirms the closed DB2 transaction log files are being archived to the (remote) S3 storage.
Just to be completely sure, let’s check the contents of the S3 Bucket (prefix TESTDB_LOGS):
aws s3 ls s3://dwrnd.triton.co.uk/TESTDB_LOGS/db2dwtst/TESTDB/NODE0000/LOGSTREAM0000/C0000000/
...
2021-12-11 16:26:15 4202496 S0000304.LOG
2021-12-11 16:26:18 4202496 S0000305.LOG
2021-12-11 16:27:16 2121728 S0000306.LOG
...
We can clearly see that our log file(s) are archived in the S3 Bucket!
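Note that this also works in the opposite direction: during a recovery, DB2 retrieves the archived logs from S3 automatically. A rough sketch (assuming a suitable backup image is available in the bucket, e.g. under the hypothetical TESTDB_BACKUPS prefix mentioned earlier):
db2 restore db TESTDB from DB2REMOTE://S3DWRND//TESTDB_BACKUPS
db2 "ROLLFORWARD DATABASE TESTDB TO END OF LOGS AND COMPLETE"
The rollforward phase pulls the required S*.LOG files back from the TESTDB_LOGS location without any manual intervention.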
The Ultimate Step
Last but not least, we must now try to convince our clients to upgrade to the latest DB2 level in order to start using this new option.
This is usually not a simple task because of the many levels of bureaucracy involved in the approval process 😬 However, in this case it should be plain sailing, given the benefits reaped from the upgrade 😊
Conclusion
This is an eagerly awaited and very welcome extension of DB2’s cloud capabilities. It complements and extends the existing support for leveraging AWS services, and will considerably improve DB2 manageability in the AWS cloud.