Application Performance Tuning
Job Count
To achieve the best throughput of cryptographic jobs (such as Sign or Decrypt) in your application, arrange for multiple jobs to be outstanding at the same time, rather than submitting them one at a time and waiting for each to complete. This applies even when there is only a single HSM in your system.
For a Solo, Solo+, Connect or Connect+, around 40 outstanding jobs per HSM is a good target for an application coded directly against the nCore API. When using higher-level APIs such as PKCS#11 or CNG, your application may benefit from a higher job count, for example 60 or more outstanding jobs per HSM.
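As a rough illustration of keeping many jobs outstanding, the following C sketch runs 40 worker threads, each performing sign operations in a loop, so that roughly 40 jobs are in progress at any moment. The perform_sign_job function is a hypothetical placeholder for whatever call your application makes into the HSM, and the thread and job counts are assumptions you would tune for your own workload:

    #include <pthread.h>
    #include <stdio.h>

    #define NUM_WORKERS 40        /* roughly one outstanding job per worker */
    #define JOBS_PER_WORKER 1000

    /* Hypothetical placeholder: substitute the call your application makes
     * to the HSM (for example a sign or decrypt operation). */
    static void perform_sign_job(void)
    {
        /* ... submit one job and wait for its reply ... */
    }

    static void *worker(void *arg)
    {
        (void)arg;
        for (int i = 0; i < JOBS_PER_WORKER; i++)
            perform_sign_job();
        return NULL;
    }

    int main(void)
    {
        pthread_t threads[NUM_WORKERS];

        for (int i = 0; i < NUM_WORKERS; i++)
            pthread_create(&threads[i], NULL, worker, NULL);
        for (int i = 0; i < NUM_WORKERS; i++)
            pthread_join(threads[i], NULL);

        printf("completed %d jobs\n", NUM_WORKERS * JOBS_PER_WORKER);
        return 0;
    }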
The ncperftest utility supports performance measurements of a range of cryptographic operations with different job counts and client thread counts. You may find this useful to inform the tuning of your application. Run ncperftest --help to see the available options.
Client Configuration
If your application is coded directly against nCore, you have a choice between sending multiple jobs asynchronously over a single client connection to the hardserver, or running multiple threads, each with its own client connection to the hardserver and a single job sent synchronously at a time in each.
You can use the --threads parameter of the ncperftest utility to experiment with the performance impact of having more threads/connections with fewer jobs outstanding in each, versus fewer threads/connections (or just one) with more jobs outstanding in each.
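To contrast with the thread-per-connection model sketched above, the following C sketch shows the general shape of the single-connection asynchronous pattern: jobs are submitted without blocking until a target depth is reached, then one reply is reaped before the pipeline is topped up again. The submit_job and wait_for_one_reply functions are hypothetical stand-ins for the real nCore asynchronous submit and wait calls, whose actual names and signatures come from the nCore API headers:

    #include <stdio.h>

    #define TARGET_OUTSTANDING 40   /* jobs kept in flight on the one connection */
    #define TOTAL_JOBS 10000

    /* Hypothetical stand-ins for the nCore asynchronous submit/wait calls. */
    static void submit_job(void)          { /* ... send one job, do not block ... */ }
    static void wait_for_one_reply(void)  { /* ... block until any reply arrives ... */ }

    int main(void)
    {
        int submitted = 0, completed = 0, outstanding = 0;

        while (completed < TOTAL_JOBS) {
            /* Top up the pipeline to the target depth. */
            while (outstanding < TARGET_OUTSTANDING && submitted < TOTAL_JOBS) {
                submit_job();
                submitted++;
                outstanding++;
            }
            /* Reap one reply, then loop to top the pipeline up again. */
            wait_for_one_reply();
            completed++;
            outstanding--;
        }

        printf("completed %d jobs\n", completed);
        return 0;
    }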
When using higher-level APIs such as PKCS#11 or CNG, all cryptographic operations are synchronous, so larger numbers of threads must be used to increase the job count and make full use of HSM resources.
These APIs automatically create a hardserver connection for each thread.
If many HSMs are being used, a great many threads may be required to achieve best throughput.
You can adjust the thread counts in the performance test tools for these APIs (e.g. cksigtest for PKCS#11 and cngsoak for CNG) to gauge how much concurrency is required for best throughput in your application.
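The following C sketch shows one way to structure a multi-threaded PKCS#11 signing workload, assuming the slot has already been selected and the signing key handle already located (those steps are omitted here). The library is initialized with CKF_OS_LOCKING_OK so that it can safely be called from many threads, and each thread opens its own session; the thread count, mechanism and buffer sizes are assumptions to be tuned for your application:

    #include <pthread.h>
    #include <stdio.h>
    #include <string.h>
    #include <pkcs11.h>              /* header name varies between PKCS#11 toolkits */

    #define NUM_THREADS 60           /* tune this to your HSM estate */

    static CK_SLOT_ID slot;          /* assumed: slot selected during start-up */
    static CK_OBJECT_HANDLE key;     /* assumed: signing key located during start-up */

    static void *sign_worker(void *arg)
    {
        CK_SESSION_HANDLE session;
        CK_MECHANISM mech = { CKM_SHA256_RSA_PKCS, NULL, 0 };
        CK_BYTE data[32] = { 0 };
        CK_BYTE sig[512];
        CK_ULONG sig_len;

        (void)arg;
        /* Each thread uses its own session; sessions must not be shared. */
        if (C_OpenSession(slot, CKF_SERIAL_SESSION, NULL, NULL, &session) != CKR_OK)
            return NULL;

        for (int i = 0; i < 1000; i++) {
            sig_len = sizeof(sig);
            if (C_SignInit(session, &mech, key) != CKR_OK ||
                C_Sign(session, data, sizeof(data), sig, &sig_len) != CKR_OK)
                break;
        }
        C_CloseSession(session);
        return NULL;
    }

    int main(void)
    {
        /* CKF_OS_LOCKING_OK lets the PKCS#11 library use OS locking so that
         * it can be called from many threads at once. */
        CK_C_INITIALIZE_ARGS args;
        pthread_t threads[NUM_THREADS];

        memset(&args, 0, sizeof(args));
        args.flags = CKF_OS_LOCKING_OK;
        if (C_Initialize(&args) != CKR_OK)
            return 1;

        /* ... select slot, log in, and locate the signing key here ... */

        for (int i = 0; i < NUM_THREADS; i++)
            pthread_create(&threads[i], NULL, sign_worker, NULL);
        for (int i = 0; i < NUM_THREADS; i++)
            pthread_join(threads[i], NULL);

        C_Finalize(NULL);
        return 0;
    }

With this structure, increasing NUM_THREADS is the direct way to raise the number of jobs outstanding across your HSMs.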
Highly Multi-threaded Client Applications
If your application is highly multi-threaded, operating system defaults may not give the best performance:
You may benefit from using a scalable memory allocator designed to be efficient in multi-threaded applications, such as tcmalloc.
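On Linux, such an allocator can usually be substituted without code changes by preloading it into the application's environment, for example with LD_PRELOAD=libtcmalloc.so (the exact library name and path depend on your distribution and on how the allocator was installed).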
On some systems the default operating system scheduling algorithm is also not optimized for highly multi-threaded applications. A real-time scheduling algorithm such as the POSIX round-robin scheduler may yield noticeable performance improvements for your application.
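As a minimal sketch, assuming a Linux system, a process can request the round-robin policy for itself with sched_setscheduler; doing so requires appropriate privileges (root or CAP_SYS_NICE), and the priority value below is only an example:

    #include <sched.h>
    #include <stdio.h>

    int main(void)
    {
        /* Request the POSIX round-robin real-time scheduler for this process.
         * Requires appropriate privileges; the priority value is an example. */
        struct sched_param sp = { .sched_priority = 10 };

        if (sched_setscheduler(0, SCHED_RR, &sp) != 0) {
            perror("sched_setscheduler");
            return 1;
        }
        /* ... start worker threads and submit jobs as usual ... */
        return 0;
    }

Alternatively, on Linux an application can be launched under this policy with the chrt utility. Measure with your own workload before adopting a real-time policy, since it changes how the application competes with other processes on the host.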
File Descriptor Limits
On Linux systems, large numbers of threads each with their own hardserver connection will require your application to make use of large numbers of file descriptors.
It may be necessary to increase the file descriptor limit for your application.
This can be done using ulimit -n NewLimit on most systems, but you may need to increase system-wide hard limits first.
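Alternatively, the application itself can raise its soft file descriptor limit at startup, up to the prevailing hard limit, as in this sketch (Linux assumed); the hard limit must still be raised through system-wide configuration, for example in /etc/security/limits.conf, if it is too low:

    #include <stdio.h>
    #include <sys/resource.h>

    int main(void)
    {
        struct rlimit rl;

        /* Raise this process's soft file-descriptor limit to the hard limit.
         * Raising the hard limit itself needs system-wide configuration or
         * elevated privileges. */
        if (getrlimit(RLIMIT_NOFILE, &rl) == 0) {
            rl.rlim_cur = rl.rlim_max;
            if (setrlimit(RLIMIT_NOFILE, &rl) != 0)
                perror("setrlimit");
        }
        printf("file descriptor limit: %llu\n", (unsigned long long)rl.rlim_cur);
        return 0;
    }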