
P6R demonstrates KMIP Client SDK’s Standards Conformance at the 2023 Interop

By Mark Joseph - November 30, 2023 @ 9:52 am

P6R participated in a month-long interop this past October to test its implementation of the KMIP 3.0 standard. Included below are graphs showing the testing results, which demonstrate that P6R’s Secure KMIP Client (SKC) SDK passed all tests that were executed during the Interop. For this Interop, P6R used its KMIP Server Protocol Verification Suite (KVS) product to run the test cases (KVS uses the SKC SDK internally). Approximately 200 test cases were defined for the Interop, each of which had to be executed in all 4 supported KMIP formats (TTLV, TTLV over HTTPS, JSON, and XML) against 5 KMIP servers (i.e., the Cryptsoft C and Java servers, IBM GKLM Server, NetApp, and Hancom MDS KeyManager).





P6R’s SQLiteTDE Page Encrypted Database

By Jim Susoy - April 17, 2023 @ 10:12 pm


In today’s digital age, data is more valuable than ever before, and the security of that data is paramount. One way to protect data is by using encryption to secure it while it is at rest, which means it is stored in a database or on a hard drive. SQLite is a popular open-source database engine used by many applications, including mobile apps, web browsers, and operating systems. In this article, we will explore the mechanisms by which P6R’s SQLiteTDE library secures data with encryption using two popular AEAD ciphers, AES-GCM and ChaCha20Poly1305, and why we chose those specific ciphers.

First, let’s talk about why securing data at rest is important. Data at rest is vulnerable to attacks by hackers, insiders, and malware. Encryption can make it more difficult for an attacker to access sensitive information, even if they manage to access the database or hard drive. Encryption works by scrambling the data so that only authorized users with the correct key can unscramble it. This makes it virtually impossible for an attacker to read the data, even if they have physical access to the storage device.


SQLiteTDE is based on SQLite, and we periodically integrate stable upstream features into it. We’ve minimized our changes to the SQLite core wherever possible. SQLite modifications for SQLiteTDE enable the encryption of all database pages. Only the file header identifying the file as a SQLite database remains unencrypted. Encryption/decryption are performed as database pages are written and read.

In addition to minimizing our changes in SQLite, we also wanted to minimize any changes to your application. Enabling encryption requires adding only a single line of code. The rest is completely transparent to your application.


We use OpenSSL to provide these ciphers for SQLiteTDE. OpenSSL’s AEAD ciphers have undergone extensive review and testing to ensure they provide strong security against known attacks. Building your own AEAD out of other primitives may introduce vulnerabilities that have not been fully analyzed or understood.


We chose to use only AEAD ciphers for encrypting the database pages in our SQLiteTDE library. AEAD stands for Authenticated Encryption with Associated Data. It is a cryptographic mode of operation that provides both confidentiality and authenticity guarantees for data, whether in transit over an insecure channel or at rest on disk.

AEAD achieves both confidentiality and authenticity by combining symmetric-key encryption with message authentication. It encrypts the data using a symmetric encryption algorithm, such as AES or ChaCha20, and then generates an authentication tag using a message authentication code (MAC), such as Poly1305 or HMAC. The authentication tag is then packaged with the encrypted data, and the recipient can use the same symmetric key and MAC algorithm to verify the authenticity of the data and decrypt it.

AEAD also allows additional associated data to be included in the authentication process, such as metadata or headers. This data is authenticated but not encrypted, allowing the recipient to verify its authenticity without needing to decrypt it.


Now, let’s dive into the two ciphers we mentioned earlier: AES-GCM and ChaCha20Poly1305. AES-GCM has been widely adopted as the standard for encryption in many applications. It is fast, can be hardware accelerated, and is resistant to known attacks (such as the BEAST attack, which affected the older CBC mode in TLS). GCM stands for Galois/Counter Mode, a mode of operation that provides authenticated encryption. This means that the data is not only encrypted but also verified to ensure its integrity.


We chose AES-256-GCM over other AES modes such as CBC (Cipher Block Chaining) because it offers several advantages:

  1. Authentication: AES-GCM provides both encryption and authentication in a single operation. This means that data can be encrypted and protected from unauthorized access, while also ensuring that the data has not been tampered with or modified.
  2. Performance: AES-GCM is a fast encryption mode that can provide high-speed data processing and has wide support for hardware acceleration. It can perform encryption and authentication in parallel, which helps to reduce the processing time and improve performance.
  3. Security: AES-GCM is considered to be a secure encryption mode. It uses a counter mode of operation so that each block of data is encrypted with a unique keystream block, which makes it difficult for attackers to find patterns in the encrypted data.
  4. Data integrity: AES-GCM uses a message authentication code (MAC) to ensure data integrity. This means that if an attacker tries to modify the encrypted data, the MAC will fail and the decryption process will not succeed.
  5. Storage: AES-GCM produces ciphertext that is the same size as the original data (plus a small, fixed-size authentication tag), which makes it easy to store and transfer.


ChaCha20Poly1305 is an AEAD construction that is gaining popularity as an alternative to AES-GCM. It is also fast and secure, but it is designed to be more efficient on devices with limited processing power, such as mobile phones and IoT devices. ChaCha20 is the stream cipher used for encryption, and Poly1305 is the message authentication code. Together, they provide authenticated encryption, similar to AES-GCM.

  1. Security: ChaCha20Poly1305 provides high security against attacks, including both encryption and message authentication. It has been selected as a recommended cipher suite for Transport Layer Security (TLS) by the Internet Engineering Task Force (IETF) due to its high security features.
  2. Efficiency: ChaCha20Poly1305 is designed to be highly efficient on a wide range of hardware, including mobile devices and embedded systems. It is lightweight, fast, and consumes less power, making it a great choice for devices with limited resources.
  3. Parallelism: ChaCha20Poly1305 allows for parallelism, which means it can encrypt or authenticate multiple messages simultaneously. This is particularly useful in multi-core systems, where it can take advantage of multiple cores to process multiple messages concurrently.
  4. Nonce handling: as with AES-GCM, each (key, nonce) pair must be used only once; reusing a nonce undermines both confidentiality and authentication. For this reason, a fresh IV must be generated for every encryption, as SQLiteTDE does for each page write.
  5. Flexibility: ChaCha20Poly1305 can be used in a variety of applications, including secure communication protocols, data storage systems, and secure file sharing systems. It is also compatible with many different platforms, including Linux, Windows, macOS, and mobile operating systems.
  6. Storage: ChaCha20Poly1305 produces ciphertext that is the same size as the original data (plus a small, fixed-size authentication tag), which makes it easy to store and transfer.

So, why are these ciphers the best choice for securing data in SQLiteTDE? They both provide authenticated encryption, which means they protect against both eavesdropping and tampering. They are also both fast and efficient. Finally, they are both well-tested and widely adopted, which means that they have been reviewed by security experts and are less likely to have vulnerabilities. In addition, both require only a single key, which reduces the overhead and complexity of managing multiple keys.


Securing data at rest is crucial in today’s digital age, and encryption is an effective way to do so. AES-GCM and ChaCha20Poly1305 are two ciphers that provide authenticated encryption, are fast and efficient, and are widely adopted and well-tested. If you are using SQLite, you can easily switch to using SQLiteTDE with the addition of a single line of code. The rest is completely transparent to your application. With the right precautions, you can protect your sensitive data from unauthorized access and ensure its integrity.

Adding Transparent Data Encryption (TDE) to SQLite

By Mark Joseph - September 2, 2022 @ 3:17 pm

This document was updated on 4 November 2022

SQLite stores a database in a single file on disk. This file is broken up into pages of around 4K bytes (but the size can be changed). Each page has 3 sections: header, data, and footer. The data section contains a database’s information. The SQLite source code already has hooks, in the pager.c code file, to support page encryption (i.e., encrypting the data section of each page). We took the SQLite design and extended it to use the AES-GCM (256-bit key) and ChaCha20Poly1305 authenticated encryption ciphers, thus handling encryption and signing with a single key. P6R has packaged this work into a new standalone product.

Page encryption is transparent to an application (i.e., decryption occurs as a page is read from the file, and encryption is performed as a page is written to disk), and it encrypts all data in the database. In addition to page encryption, an application can also add field-level encryption of selected columns in a defined schema.

P6R has implemented page encryption as follows. Each time a page is to be authenticated-encrypted, we generate a new Initialization Vector (IV). We use the IV concatenated with the page number as the cipher’s associated data when generating the authentication tag. Next we store the IV and authentication tag in the page’s non-encrypted footer. On decryption, we use the IV and authentication tag from the page’s footer to perform the authenticated decryption. The use of the ChaCha20Poly1305 cipher requires OpenSSL 1.1.x or greater.

Some other designs that have implemented page encryption in SQLite use two keys: one for encryption and one for an HMAC over a page’s contents. Our approach is simpler (and more secure) as it requires only a single key and a single cryptographic operation per page.

P6R uses SQLite to implement our Keystore component, which is used in several of our products. For example, we have incorporated a Keystore with TDE into our KMIP Client SDK for a managed object cache (e.g., keys). This provides our customers with 2 levels of encryption: field-level encryption of key material implemented in the Keystore component, and page encryption in the SQLite database used to implement the Keystore.

Another difference between the P6R design and other extensions to SQLite is that our implementation neither generates nor stores the key used for authenticated encryption. Instead, we have added a new API function through which an application can set the key, which it generates and maintains outside of the database (e.g., on a key server or an external drive).

P6R’s SQLite Extended API

Below is the list of P6R-added API calls that provide page encryption functionality on top of the standard SQLite code base.

[1] Set the page encryption key each time a database is opened. 
This function should be called first thing after the call to sqlite3_open_v2().

int sqlite3_p6r_setkeys(
  sqlite3 *db,                   /* Database to be encrypted */
  const char *pDbName,           /* Name of the database */
  const unsigned char *pKey,     /* Encryption Key bytes */
  unsigned int nKeySize,         /* key size in bytes, must be 32 */
  unsigned int encryptFlags      /* mostly control logging: trace, error */
);

[2] Re-key the entire database with a new key and possibly a different cipher.
This function should be called first thing after the call to sqlite3_open_v2()
and then sqlite3_close() afterwards.  All clear data appears only in memory.
Nothing is written unencrypted to the disk.

int sqlite3_p6r_rekeydb(
  sqlite3 *db,
  const char *pDbName,           /* Name of the database */
  const unsigned char *pOldKey,  /* Encryption Key bytes currently in use */
  unsigned int nOldSize,         /* old key size in bytes, must be 32 */
  unsigned int oldCipher,        /* One of the P6RCIPHER constants */
  const unsigned char *pNewKey,  /* Encryption Key bytes to rekey with */
  unsigned int nNewSize,         /* new key size in bytes, must be 32 */
  unsigned int newCipher,        /* One of the P6RCIPHER constants */
  unsigned int encryptFlags      /* mostly control logging: trace, error */
);

[3] Decrypt a previously SQLite3 P6R encrypted database.
This function should be called first thing after the call to sqlite3_open_v2()
and then sqlite3_close() afterwards.

int sqlite3_p6r_decryptdb(
  sqlite3 *db,                   /* Database to be decrypted */
  const char *pDbName,           /* Name of the database */
  const unsigned char *pKey,     /* Encryption Key bytes */
  unsigned int nKeySize,         /* key size in bytes, must be 32 */
  unsigned int encryptFlags      /* mostly control logging: trace, error */
);

[4] Re-encrypt a database that was previously decrypted by a call to 
sqlite3_p6r_decryptdb().  This function should be called first thing after the 
call to sqlite3_open_v2() and then sqlite3_close() afterwards.

int sqlite3_p6r_encryptdb(
  sqlite3 *db,                   /* Database to be encrypted */
  const char *pDbName,           /* Name of the database */
  const unsigned char *pKey,     /* Encryption Key bytes */
  unsigned int nKeySize,         /* key size in bytes, must be 32 */
  unsigned int encryptFlags      /* mostly control logging: trace, error */
);
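For illustration, a typical call sequence for opening an encrypted database might look like the sketch below. The typedef and stub bodies stand in for the real SQLite/SQLiteTDE library so the sketch is self-contained; a real application would include sqlite3.h and link against SQLiteTDE instead.

```c
#include <assert.h>
#include <stddef.h>

/* Stand-ins for the real library so this sketch compiles on its own. */
typedef struct sqlite3 sqlite3;
#define SQLITE_OK             0
#define SQLITE_OPEN_READWRITE 0x02
#define SQLITE_OPEN_CREATE    0x04

static int sqlite3_open_v2(const char *file, sqlite3 **db, int flags,
                           const char *vfs)
{ (void)file; (void)flags; (void)vfs; *db = (sqlite3 *)1; return SQLITE_OK; }

static int sqlite3_close(sqlite3 *db) { (void)db; return SQLITE_OK; }

static int sqlite3_p6r_setkeys(sqlite3 *db, const char *name,
                               const unsigned char *key, unsigned int n,
                               unsigned int flags)
{ (void)db; (void)name; (void)flags; return (key && n == 32) ? SQLITE_OK : 1; }

/* Open an encrypted database: set the 32-byte page key immediately after
 * sqlite3_open_v2(), before any other use of the handle. */
int open_encrypted(const char *path, const unsigned char key[32], sqlite3 **db)
{
    int rc = sqlite3_open_v2(path, db,
                             SQLITE_OPEN_READWRITE | SQLITE_OPEN_CREATE, NULL);
    if (rc != SQLITE_OK) return rc;
    rc = sqlite3_p6r_setkeys(*db, "main", key, 32, 0);
    if (rc != SQLITE_OK) sqlite3_close(*db);   /* do not use an unkeyed handle */
    return rc;
}
```

After this point all reads and writes go through the normal SQLite API; page encryption and decryption happen transparently in the pager.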

A KMIP Managed Object Cache

By Mark Joseph - October 19, 2021 @ 9:34 am

This document was updated on 17 Feb 2022.

To improve performance by reducing network traffic to a KMIP server for frequently used Keys, Certificates, Secret Data, and other KMIP objects, we have added an object cache to our client-side products
(i.e., the Secure KMIP Client SDK (SKC) and our PKCS11 library’s KMIP token). For SKC, a unique cache instance is created and shared for each KMIP server’s FQDN or IP address. For our PKCS11 library, a unique cache instance is created for each slot defined to use a KMIP token.

To implement our object cache we have extended our existing Keystore. We have added a special “mode” to the Keystore that applies standard cache eviction policies to its entries (i.e., time to live, LRU). The benefits of using our Keystore are two-fold. First, key material is already field-encrypted in the Keystore, and so it is encrypted in an object cache instance. Second, the Keystore can use the SQLite database and thus can be created on disk or in memory with no additional work. As a result, our KMIP object cache can be stored on disk or kept only in memory for the current KMIP TLS session.

For SKC, it is also possible to use a Postgres database to hold the cache. This can be set up either local to the KMIP client or on a remote server. An application can configure the Managed Object Cache so that one cache exists per KMIP server, or so that one cache is shared for the Managed objects of multiple KMIP servers.

Both the managed object bytes and many of the object’s associated attributes can be stored in the object cache. Objects in the cache are stored under their KMIP unique identifier attribute. The object cache is disabled by default; a customer has to enable and configure it. The object cache functionality is available for any KMIP protocol version and for both PKCS11 versions 2.40 and 3.0.

The object cache is very easy to use, and once configured its use is transparent to the customer. For SKC, enabling the object cache requires a couple of new API calls. For the PKCS11 library, object cache setup is done entirely through a configuration file. For example, when a customer’s code calls the KMIP Client Get operation, the API implementation first looks in the object cache (if enabled) and returns the required KMIP object if present. Otherwise the client requests the object from the KMIP server and stores the response in the object cache for future reference before returning it to the caller.
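The look-aside flow described above can be sketched as follows. This toy cache (a fixed-size table keyed by the KMIP unique identifier, with a stubbed server fetch) is purely illustrative and is not SKC’s actual cache API.

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

#define CACHE_SLOTS 8

typedef struct {
    char id[64];      /* KMIP unique identifier */
    char object[128]; /* managed object bytes (stand-in) */
    int  used;
} cache_entry;

static cache_entry cache[CACHE_SLOTS];
static int server_fetches;  /* counts simulated network round trips */

/* Stub standing in for a real KMIP Get round trip to the server. */
static void fetch_from_server(const char *id, char *out, size_t n)
{
    server_fetches++;
    snprintf(out, n, "object-for-%s", id);
}

/* Return the object for `id`, consulting the cache before the server. */
const char *cache_get(const char *id)
{
    int i, slot = -1;
    for (i = 0; i < CACHE_SLOTS; i++) {
        if (cache[i].used && strcmp(cache[i].id, id) == 0)
            return cache[i].object;            /* cache hit: no network */
        if (!cache[i].used && slot < 0)
            slot = i;
    }
    if (slot < 0) slot = 0;                    /* trivial eviction policy */
    fetch_from_server(id, cache[slot].object, sizeof cache[slot].object);
    snprintf(cache[slot].id, sizeof cache[slot].id, "%s", id);
    cache[slot].used = 1;
    return cache[slot].object;
}
```

The real cache adds time-to-live and LRU eviction, field encryption of key material, and optional on-disk persistence, but the hit-or-fetch control flow is the same.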

Our object cache maintains a standard set of statistics, which are available via a new cache API and also via a setting that emits them to the log. The new cache API allows the caller to set different logging preferences to see exactly what the cache is doing (e.g., which objects are being evicted or referenced). It also provides a function that forces the cache to run its time-to-live eviction policy, so a caller can force a cache cleanup if desired.

p6pythonkmip – a python extension for KMIP

By Mark Joseph - July 1, 2019 @ 8:20 am

P6R has written a Python C Extension for its Secure KMIP Client (SKC). SKC is a full featured
KMIP client SDK that supports KMIP protocol versions 1.0, 1.1, 1.2, 1.3, 1.4, 2.0, and 2.1 with all message formats TTLV, JSON, XML, and TTLV over HTTPS. p6pythonkmip provides the following benefits:

  1. Exposes the full SKC KMIP feature set via an easy-to-use yet powerful API. This is a more complete and better-tested KMIP implementation than what is available in open source.
  2. Supports both Python versions 2.X and 3.X
  3. Runs on both Linux and Windows operating systems
  4. P6R provides support and product maintenance. KMIP can be a complex protocol and P6R provides KMIP education and expert help with troubleshooting.

P6R demonstrates KMIP Client SDK’s Standards Conformance at RSA 2019

By Mark Joseph - March 8, 2019 @ 1:09 pm

P6R participated in the OASIS KMIP interoperability demonstration at the 2019 RSA conference.
P6R was showcasing the latest release of its SKC product, which contains a full featured KMIP Client with UEFI support, and PKCS11. P6R also demoed its KMIP Server Protocol Verification Suite (KVS).

As part of this demonstration P6R participated in an OASIS-run interoperation test with other vendors in the KMIP Technical Committee. The results of these tests are shown below.


KVS Automated Testing for KMIP Servers

By jsusoy - February 15, 2019 @ 11:33 am

P6R’s KMIP Verification Suite (KVS) automates testing of a KMIP server for compliance with any of the defined OASIS KMIP protocol versions, including those defined in OASIS KMIP profiles. P6R has been using this tool for the past several years in at least 4 separate formal OASIS interops. The result is a very mature and easy-to-use tool. We’ve used it to perform interop testing, with one person running the tests, on as many as 10 servers in only days. It can be used to run single or multiple test cases from the command line, or integrated with a CI system to automate testing for every build.


  • Comprehensive coverage. KVS implements all of the defined OASIS KMIP test cases. This is over 2600 test cases, all message formats (TTLV/XML/JSON), and all KMIP versions. These tests are actually used in formal interop testing.
  • Very detailed logging. KVS logs tests and KMIP protocol interactions in a variety of formats (e.g., test in TTLV but log in XML with a TTLV hex dump). Detail is important for finding out what is going on when a test fails. Supported log formats are TTLV, XML, and JSON. Logs provide a source of proof that a server is or is not working. Separate logs for protocol traffic and test output make finding problems easier.
  • Flexible. You can easily run just one test, a subset, all tests in a category, or all tests. Tests can be excluded if that is more convenient. Multiple output formats make integration much easier.
  • Interoperable. KVS can provide jUnit style output allowing integration with a variety of tools. Runs from the command line allowing it to be scripted, or run from a variety of tools including CI systems.

Reduces QA and Development Costs

Customers that are using it are realizing huge QA cost reductions, enabling them to concentrate their engineering efforts on their server instead of writing, verifying and maintaining test cases.

“The KVS test suite has been a great asset for interoperability testing” stated John Leiseboer, CTO of QuintessenceLabs. “It works seamlessly, delivering both a significant time and resource saving, as well as a more robust process.”

P6R maintains and enhances KVS, adding test cases as they evolve in the standard. This is a significant effort. In the latest interop, around 180 new test cases were added to test the KMIP 2.0 protocol version and this does not yet cover all the new KMIP 2.0 features.

Enables Test-Driven/Test-First Development (TDD)

KVS is not just for QA. KVS enables Test-Driven Development (TDD) by providing the test cases for KMIP features that have yet to be implemented. Developers then write the code to fulfill those test cases. A huge benefit of this practice is the ability to refactor code with confidence. Developers can easily verify refactored code by running the tests periodically as they change code.

SKC Compatibility

P6R’s Secure KMIP Client (SKC) is the foundation of KVS and so another benefit of using KVS is proving compatibility with P6R’s KMIP Client.

KVS is currently shipping and available for purchase. P6R is an active member of the OASIS KMIP Technical Committee.

The KMIP Bug Report: Db2 KMIP Client

By Mark Joseph - May 19, 2018 @ 12:20 pm

Recently a customer using P6R’s KMIP Server Protocol Library (KSL) sent us a KMIP message that our parser flagged as invalid KMIP. This customer was performing integration testing between their KMIP server and IBM’s Db2 database (version 11.1 running on Windows) (https://www.ibm.com/analytics/us/en/db2/). Db2 includes KMIP client code so that it can obtain keys for database encryption from an outside KMIP server. This is exactly what KMIP was designed for. We have no idea of the origin of the KMIP client code but assume that it was written internally.

[1] Incorrect Text String encoding

The KMIP message at issue was attempting to execute a KMIP Activate operation and is encoded in TTLV (a binary format) using the KMIP 1.0 protocol version. The offending KMIP message is as follows:


The broken part of this message comes at the very end in the following bytes encoded in TTLV (which stands for Tag Type Length Value):

420094 07 00000018 41574d6b736570694a54625a49736f5f30586f7200000000

The first part, "420094", is the Tag, which indicates that the following data is a unique identifier. The second part, "07", indicates the Type is a text string. The third part, "00000018", is the Length of the following data, which is 24. The fourth part, "41574d6b736570694a54625a49736f5f30586f7200000000", is the Value: the unique identifier of a key on the KMIP server to activate.

The problem is with the length part. It is set to 24 but should be set to 20 bytes. Text strings in KMIP are padded with zeros to make their length a multiple of 8 so the last 4 bytes are all padded zeros. But this padding must not be included in the length of the text string value. In the above message the Db2 KMIP client did include the padding as part of the text string length and that is incorrect.

Since the above message uses KMIP protocol version 1.0 we are going to reference the KMIP 1.0 specification document (http://docs.oasis-open.org/kmip/spec/v1.0/os/kmip-spec-1.0-os.html). From this specification, Section Item Length,

     "...If the Item Type is Integer, Enumeration, Text String, Byte String, or Interval, then 
     the Item Length is the number of bytes excluding the padding bytes. Text Strings and Byte 
     Strings SHALL be padded with the minimal number of bytes following the Item Value to obtain 
     a multiple of eight bytes." (emphasis added)
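The rule can be sketched in C. The helper below (a name of our choosing, purely for illustration) computes how many bytes a value occupies on the wire; the Item Length field itself must hold the unpadded count: 20 for the 20-byte identifier above, even though it occupies 24 bytes on the wire.

```c
#include <assert.h>
#include <string.h>

/* Bytes the value actually occupies on the wire (value plus zero padding
 * up to a multiple of eight).  The TTLV Item Length field must hold the
 * unpadded item_length, never this padded size. */
unsigned int kmip_padded_size(unsigned int item_length)
{
    return (item_length + 7u) & ~7u;
}
```

Applying this to the Db2 message: the identifier "AWMksepiJTbZIso_0Xor" (the decoded hex value above) is 20 bytes long, so the correct Item Length is 20, while the padded wire size is 24. Db2 wrote the padded size into the length field, which is the bug.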

[2] An Extension Credential Type with an Incorrect Byte String Encoding

While not an error, this message does use an interesting encoding for the message credential.

420024 05 00000004 8000000100000000 
  420025 08 00000018 540001010000001054000407000000034442320000000000

The "420024" Tag indicates "Credential Type", the "05" Type is an enumeration, the "00000004" is the length of the enumeration, and finally the
"8000000100000000" is a Db2-defined KMIP extension type. Now, KMIP does provide several ways to extend the protocol, but our customer has to find the Db2 proprietary definition and implement it.

The next part is the Credential Value itself, encoded as a Byte String. Interestingly, the length of the Byte String value also looks wrong, as the apparent padding is again included in the declared length.

Full featured KMIP clients are not easy to implement and that is why P6R has spent years developing and testing with every commercially available KMIP server it can find. P6R’s Secure KMIP Client (SKC) is a commercial product that has been shipping for over 5 years.

2018 OASIS KMIP Client Interop Results

By Mark Joseph - April 13, 2018 @ 2:40 pm

P6R had a pod in the OASIS booth at the 2018 RSA Conference Expo. This was our 4th year at this conference. We were showing off our KMIP and PKCS#11 products.
P6R at 2018 RSA Conference Expo

P6R participated in its 4th OASIS KMIP Interop. This interop tested KMIP 1.4 and the draft specification of KMIP 2.0. P6R tested its Secure KMIP Client (SKC) with the following list of KMIP servers: Cryptsoft KMIP C and Java Servers, Fornetix Key Orchestration Server, IBM SKLM, Kryptus kNET HSM, Micro Focus ESKM, Q-Labs qCrypt, Thales DSM, and Unbound KMIP server. Below are the results of all clients participating in the Interop. Note that P6R’s SKC results are shown in the full column on the right. P6R SKC passed all test cases conducted in this Interop.

KeyNexus and P6R Announce KMIP Partnership

By Mark Joseph - March 27, 2018 @ 8:57 am

KeyNexus and P6R Announce KMIP Partnership, Delivering a Complete End-to-End Client-Server KMIP Solution