perfcheck

perfcheck [OPTIONS] [SUITE:TEST_NUMBER ...]
Only supported in FIPS 140-2 Level 2 Security Worlds.

Runs various tests to measure the cryptographic performance of a module.

By default, each test is run 3 times, both with the maximum queue size and with a queue of 1. To run each test just once with the maximum queue only, use the -s or --single option.

perfcheck example command lines

Run default set of tests

perfcheck

Run particular suites of tests

perfcheck kx signing

Run particular tests by ID

perfcheck kx:1 kx:2

Run particular tests by exact name

perfcheck "signing:RSA using RSApPKCS1 with 2048-bit n."

Run particular tests by prefix

perfcheck signing:RSA*

Run a list of tests from file

perfcheck --testlist-file=tests.txt

List default set of tests

perfcheck --list

List the tests within particular suites

perfcheck --list kx signing

List the available suites

perfcheck --help-suites

Compare two results sets

perfcheck --diff --old=OLD_PATH --new=NEW_PATH

perfcheck syntax

Option Description

Output options for test execution and report diffs

--capture-raw-data

Raw data for each test case will be written to the raw_results subdirectory of the output directory (--outputdir), which must be specified. Input parameters and overall results for each run of a test case are stored in JSON files, and the timings of each individual job are stored in CSV files.

-o OUTPUT, --outputdir=OUTPUT

Output destination directory name. A subdirectory named perfcheck.DATE_AND_TIME, based on the current time, will be created within it.

-n, --nosubdir

Does not create a subdirectory of the output directory. Reports are written directly to the specified path, which must not already exist.

-p, --show-progress

Prints a summary of the latest results for a test every second during execution. No interim results will be printed for tests that take less than a second to complete.

Diff options to produce a comparison report between two sets of test results to identify significant regressions or improvements in performance. The -o, --outputdir and -n, --nosubdir output options can be used in conjunction with this.

--diff

Creates a comparison report.

--new=NEW_RESULTS_PATH

Path to the new result set.

--old=OLD_RESULTS_PATH

Path to the old result set.

--threshold=THRESHOLD

Threshold percentage regression to report as error.
The threshold is applied only to results that have a significance rating of at least 50%, and to tests in suites other than Miscellaneous and Key Generation. The error is reported on stderr and reflected in the exit code, but does not otherwise affect report generation.
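The threshold rule described above can be sketched as follows. This is an illustration of the documented behavior, not perfcheck's actual implementation; the function and field names are hypothetical:

```python
# Illustrative sketch of the documented --threshold rule; names and
# data shapes are hypothetical, not perfcheck's real data model.
EXEMPT_SUITES = {"Miscellaneous", "Key Generation"}

def is_regression(suite, old_rate, new_rate, significance_pct, threshold_pct):
    """Return True if the drop from old_rate to new_rate should be
    reported as an error under the documented rules."""
    if suite in EXEMPT_SUITES:
        return False                 # these suites are never flagged
    if significance_pct < 50.0:
        return False                 # only high-significance results count
    drop_pct = (old_rate - new_rate) / old_rate * 100.0
    return drop_pct > threshold_pct

# A 20% drop in a signing test at 90% significance exceeds a 10% threshold:
print(is_regression("Signing", 1000.0, 800.0, 90.0, 10.0))          # True
# The same drop in an exempt suite is not flagged:
print(is_regression("Key Generation", 1000.0, 800.0, 90.0, 10.0))   # False
```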

Testing options to configure the test execution behavior.

--accuracy=base|default|high

Accuracy level.
Options are base, default, and high, in order of increasing test accuracy. More accurate tests take longer to execute. This adjusts the default values of --target-test-rse, --max-test-time, and --runs, but if any of those options are set explicitly, the explicit value takes precedence.

--client-throttle-spins=SPIN_COUNT

Throttles input to the system by spinning for the specified number of instructions before submitting each job.
Default: 0 (no throttling).

-d headline|overview|core|default|full, --depth=headline|overview|core|default|full

Depth of testing.
Options are headline, overview, core, default, and full, in order of increasing test depth. The higher the depth of testing, the more test cases are run. This option is not relevant when individual tests are specified directly by name or test ID, but it controls the depth of testing when all tests, or a whole suite, are requested. It also affects test listing: to see all available tests, specify --depth=full when listing; this shows some additional parametrizations that are not run by default.

-f TESTLIST_FILE, --testlist-file=TESTLIST_FILE

Path to a file containing a list of tests to run.
Each line should contain a test in the same format as is supported on the command line: suite:description to run a single test, suite to run a whole suite, or suite:PREFIX* to run all tests in a suite whose names start with PREFIX. To rerun the same tests as a previous run, pass the path to the testlist.txt file that was automatically written to that run's output directory.
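For example, a testlist file combining the selection forms above might contain the following (the entries are illustrative, and the documentation does not specify whether names containing spaces need quoting inside the file):

```
kx
signing:RSA*
signing:RSA using RSApPKCS1 with 2048-bit n.
```

Run it with perfcheck --testlist-file=tests.txt.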

--max-test-time=TIME_SECONDS

Time limit for individual test cases or 0 for no limit.
Defaults based on accuracy level: base: 30 seconds, default: 60 seconds, high: 150 seconds.

-q QUEUE_SIZE, --queue=QUEUE_SIZE

Specifies the request queue size, or 0 to run with the max queue reported by enquiry.
Default: run operations both with queue of 1 and with the max queue reported by enquiry, that is, both one-off and bandwidth measurements.

-r REPS, --repetitions=REPS

Runs this many repetitions instead of the default. More may be run due to other constraints.

--runs=COUNT

Runs each selected test case the specified number of times. Values above 1 allow variance between runs to be detected.
Defaults based on accuracy level: base: 1, default: 3, high: 5.

-s, --single

Does a single run of each test case.
This is shorthand for --runs=1 --queue=0 (that is, the maximum queue), but if either of those options is specified explicitly, the explicit value takes precedence.

-t TIME_SECONDS, --min-test-time=TIME_SECONDS

Minimum time to run individual test cases.
If set to 10 or above, the --show-progress option will be turned on automatically.
Default: 0 seconds.

--target-test-rse=RSE_PERCENT

Target relative standard error percentage or 0 for none. Each test will keep running until this error target is met or --max-test-time is reached.
Defaults based on accuracy level: base: 1.0%, default/high: 0.1%.
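As a rough illustration of what a relative standard error target means (this is a plain statistical sketch, not perfcheck's code): the RSE is the standard error of the mean expressed as a percentage of the mean, so it falls as repetitions accumulate, and the test can stop once it drops below the target.

```python
import math

def rse_percent(timings):
    """Relative standard error of the mean, as a percentage.
    A plain statistical sketch, not perfcheck's implementation."""
    n = len(timings)
    mean = sum(timings) / n
    var = sum((t - mean) ** 2 for t in timings) / (n - 1)  # sample variance
    stderr = math.sqrt(var / n)                            # std error of mean
    return stderr / mean * 100.0

# With a target of 0.1%, a test keeps running until its RSE drops below
# the target or --max-test-time is reached.
timings = [10.0, 10.1, 9.9, 10.05, 9.95] * 200   # 1000 fake job latencies (ms)
target = 0.1
print(rse_percent(timings) <= target)            # True
```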

--thread-count=THREAD_COUNT

Number of client threads from which to fill the queue. The queue will be split evenly across the threads.
Default: 1.

Option to address HSMs

-m, --module=MODULE

Specifies the number of the module to perform the test with.
If you only have one module, <MODULE> is 1.
Default: 1.

Help options

-h, --help

Displays help for perfcheck.

--help-suites

Displays help for the available test suites.

-l, --list

Lists all tests that will be run in the selected suites.

-u, --usage

Displays a brief usage summary for perfcheck.

-v, --version

Displays the version number of the Security World Software that deploys perfcheck.

perfcheck tests

The available tests are grouped into suites:

  • kx (key exchange)

  • keygen (key generation)

  • signing (signing)

  • verify (verification)

  • enc (encryption)

  • dec (decryption)

  • misc (miscellaneous)

To see the list of tests run by default in a particular suite, run a command of the form:

perfcheck --list suite

To see all available tests in a particular suite, including those not run by default, run a command of the form:

perfcheck --list --depth=full suite

For example, to list all the signing tests, run the command:

perfcheck --list --depth=full signing
>>> Suite `signing' -- Signing (374 tests)
>>>    signing 1 - DSA using RIPEMD160 with 1024-bit p and 160-bit q.
>>>    signing 2 - DSA using RIPEMD160 with 2048-bit p and 160-bit q.
>>>    signing 3 - DSA using RIPEMD160 with 3072-bit p and 160-bit q.
>>>    signing 4 - DSA using SHA1 with 1024-bit p and 160-bit q.
>>>    signing 5 - DSA using SHA1 with 2048-bit p and 160-bit q.
>>>    signing 6 - DSA using SHA1 with 3072-bit p and 160-bit q.

In the output, each listed test in the suite is identified with a number.

You can reference a test either by its number or by its name:

  • by test number:

    perfcheck suite:test_number

    To use test 16 of the signing suite:

    perfcheck signing:16
  • by test name:

    perfcheck "signing:RSA using RSAhSHA3b512pPSS with 4096-bit n."

    Example:

    perfcheck "signing:RSA using RSApPKCS1 with 2048-bit n."

The test numbers change between releases. If you want to rerun tests for comparison, reference the tests by their names.

perfcheck prints the results of individual tests to output as it goes along, and then prints a full report at the end. By default, perfcheck runs each test three times for both minimum and maximum queue sizes, and then collates the results in the final report. See --help for the options to adjust this behavior.

Optionally, perfcheck can write its output to a directory in multiple formats using the --outputdir option to specify a directory name. This will create a new subdirectory under the specified directory to write the output. The --nosubdir option can be added as well to write output to the specified directory directly, in which case that directory must not already exist. The output directory will contain perfcheck.html, perfcheck.txt, perfcheck.csv, and perfcheck.json files that contain the report in HTML, text, CSV, and JSON format respectively. JSON files that contain the detailed results of individual tests will also be written to the output directory.

Output reports from test suites include the following information about each test:

Value Description

CV (%)

This value is the coefficient of variation expressed as a percentage of the mean latency. It gives an indication of the variability in the time it takes individual jobs to complete.
If a test has been rerun, this is the mean of the CV (%) values from each run.

Max latency (ms)

This value is the time in milliseconds that the slowest individual job across all the test runs took to round-trip.

Max rate (tps)

This is the estimated upper bound of the throughput for this queue size in transactions per second.
The value becomes more accurate if more test runs of the same test are done. When it is compared against Min rate (tps) and Mean rate (tps), Max rate (tps) gives an indication of the variability between runs.

Mean latency (ms)

This value is the mean time in milliseconds that jobs took to round-trip.
If a test has been rerun, this is the mean of the mean latency values from each run.

Mean rate (tps)

This is a measure of throughput. Unlike Rate (Units/s), it is expressed in transactions per second, that is, as the number of jobs that round-trip per second.
Mean rate (tps) is included for comparison against the Min rate (tps) and Max rate (tps) figures.

Min latency (ms)

This value is the time in milliseconds that the quickest individual job across all the test runs took to round-trip.

Min rate (tps)

This is the estimated lower bound of the throughput for this queue size in transactions per second.
The value becomes more accurate if more test runs of the same test are done. When it is compared against Mean rate (tps) and Max rate (tps), Min rate (tps) gives an indication of the variability between runs.

Queue

This value is the number of outstanding jobs in the queue when the test was run.
By default, most tests run both with a queue of 1 and with a fully maxed-out module queue, to give an indication of both one-at-a-time performance and the bandwidth for the operation. A different queue size can be set with the --queue option, in which case only that queue length is used, except for some misc suite tests that set their own queue.

Rate (Units/s)

This value is a measure of throughput. It is calculated by dividing the number of repetitions by total time.
If a test has been rerun to improve accuracy, as is the case by default, then this is the mean across all the runs.
Some tests, for example enc, set the Unit to something other than an operation, for example KB, to indicate the amount of data that can be encrypted.

Reps

This value is the number of repetitions that were actually carried out, that is, the number of jobs that were round-tripped over all runs of this operation for this queue size.
If a test was rerun, this is the sum of the repetitions for each run. The target repetitions for an individual run can be set using the --repetitions option, but in most cases more repetitions will be run, depending on the --accuracy setting, provided that the time limit is not reached. To control test accuracy, set --accuracy rather than adjusting --repetitions.
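The per-test values above can be post-processed from the perfcheck.csv report. A minimal sketch, assuming the CSV headers match the column names documented here (the real header row and layout may differ); the sample rows are fabricated for illustration:

```python
import csv
import io

# Fabricated sample in the shape described above; real perfcheck.csv
# headers and rows may differ.
sample = """Test,Queue,Reps,Mean rate (tps),CV (%)
signing:RSA using RSApPKCS1 with 2048-bit n.,1,5000,450.2,1.8
signing:RSA using RSApPKCS1 with 2048-bit n.,128,60000,5200.7,12.4
"""

# Flag rows whose coefficient of variation suggests noisy per-job timings.
noisy = [row["Test"] for row in csv.DictReader(io.StringIO(sample))
         if float(row["CV (%)"]) > 10.0]
print(noisy)
```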

How perfcheck calculates statistics

The perfcheck utility sends multiple simultaneous nCore commands to keep the HSM busy. It sends more commands if the required number of repetitions has not yet been reached.

After sending some initial commands, perfcheck begins marking commands with the time at which they are submitted. When a command comes back with a timestamp, perfcheck checks the amount of time taken to complete the command and updates the values for Std dev and Latency. The value of Total time is the amount of time from sending the first job to receiving the final reply.
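The report statistics described above reduce to simple formulas over the per-job round-trip times. The following is an illustrative sketch of those formulas, not perfcheck's code:

```python
import math

def summarize(latencies_ms, total_time_s):
    """Compute the documented report statistics from per-job round-trip
    times. Illustrative only, not perfcheck's implementation."""
    n = len(latencies_ms)
    mean = sum(latencies_ms) / n
    std = math.sqrt(sum((t - mean) ** 2 for t in latencies_ms) / n)
    return {
        "Reps": n,
        "Mean latency (ms)": mean,
        "Min latency (ms)": min(latencies_ms),
        "Max latency (ms)": max(latencies_ms),
        "CV (%)": std / mean * 100.0,          # std dev as % of mean latency
        "Rate (Units/s)": n / total_time_s,    # repetitions / total time
    }

stats = summarize([10.0, 12.0, 11.0, 9.0], total_time_s=0.02)
print(stats["Rate (Units/s)"])   # 4 jobs in 0.02 s -> 200.0
```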

When an nCore command is submitted to an HSM by a client application, it is processed as follows:

PCIe and USB HSMs

Because an HSM can execute several commands at once, throughput is maximized by ensuring there is always at least one command in the hardserver queue (so that there are always commands available to give to the HSM).

  1. The command is passed to the hardserver.

  2. The hardserver queues the command.

  3. When the HSM is free, the command is submitted from the hardserver queue.

  4. The command is executed by the HSM, and the reply is given to the hardserver.

  5. The hardserver queues the reply.

  6. When the client application is ready, the queued reply is returned to it.

network-attached HSMs

Because an HSM can execute several commands at once, throughput is maximized by ensuring there is always at least one command in the unit hardserver queue (so that there are always commands available to give to the HSM).

  1. The command is passed to the client hardserver.

  2. The client hardserver encrypts the command.

  3. The encrypted command is sent to the unit hardserver over the network.

  4. The unit hardserver decrypts the command and queues it.

  5. When the internal security module is free, the command is submitted from the hardserver queue.

  6. The command is executed by the HSM, and the reply is given to the unit hardserver.

  7. The unit hardserver encrypts the reply.

  8. The unit hardserver sends the reply back to the client hardserver over the network.

  9. The client hardserver decrypts the reply and queues it.

  10. When the client application is ready, the queued reply is returned to it.