Qt Test Overview

Overview of the Qt unit testing framework.

Qt Test is a framework for unit testing Qt based applications and libraries. Qt Test provides all the functionality commonly found in unit testing frameworks as well as extensions for testing graphical user interfaces.

Qt Test is designed to ease the writing of unit tests for Qt based applications and libraries:

  • Lightweight: Qt Test consists of about 6000 lines of code and 60 exported symbols.

  • Self-contained: Qt Test requires only a few symbols from the Qt Core module for non-GUI testing.

  • Rapid testing: Qt Test needs no special test-runners and no special registration for tests.

  • Data-driven testing: a test can be executed multiple times with different test data.

  • Basic GUI testing: Qt Test offers functionality for mouse and keyboard simulation.

  • Benchmarking: Qt Test supports benchmarking and provides several measurement back-ends.

  • IDE friendly: Qt Test outputs messages that can be interpreted by Qt Creator, Visual Studio, and KDevelop.

  • Thread-safety: the error reporting is thread safe and atomic.

  • Type-safety: extensive use of templates prevents errors introduced by implicit type casting.

  • Easily extendable: custom types can easily be added to the test data and test output.

You can use a Qt Creator wizard to create a project that contains Qt tests, and build and run them directly from Qt Creator. For more information, see Running Autotests.

Creating a Test

To create a test, subclass QObject and add one or more private slots to it. Each private slot is a test function in your test. qExec() can be used to execute all test functions in the test object.

In addition, you can define the following private slots that are not treated as test functions. When present, they will be executed by the testing framework and can be used to initialize and clean up either the entire test or the current test function.

  • initTestCase() will be called before the first test function is executed.

  • initTestCase_data() will be called to create a global test data table.

  • cleanupTestCase() will be called after the last test function was executed.

  • init() will be called before each test function is executed.

  • cleanup() will be called after every test function.

Use initTestCase() for preparing the test. Every test should leave the system in a usable state, so it can be run repeatedly. Cleanup operations should be handled in cleanupTestCase(), so they get run even if the test fails.

Use init() for preparing a test function. Every test function should leave the system in a usable state, so it can be run repeatedly. Cleanup operations should be handled in cleanup(), so they get run even if the test function fails and exits early.

Alternatively, you can use RAII (resource acquisition is initialization), with cleanup operations called in destructors, to ensure they happen when the test function returns and the object moves out of scope.

If initTestCase() fails, no test function will be executed. If init() fails, the following test function will not be executed; the test will proceed to the next test function.

Example:
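
A minimal sketch of such a test class (the class and test-function names are illustrative):

  #include <QtTest>

  class MyFirstTest : public QObject
  {
      Q_OBJECT

  private slots:
      void initTestCase()
      {
          qDebug("Called before everything else.");
      }

      void myFirstTest()
      {
          QVERIFY(true);        // check that a condition holds
      }

      void mySecondTest()
      {
          QCOMPARE(1 + 1, 2);   // compare an actual value against an expected value
      }

      void cleanupTestCase()
      {
          qDebug("Called after the last test function.");
      }
  };

In a single-file test, one of the QTEST_MAIN macros (mentioned below) can generate a main() that runs these private slots.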

Finally, if the test class has a static public void initMain() method, it is called by the QTEST_MAIN macros before the QApplication object is instantiated. This was added in Qt 5.14.

For more examples, refer to the Qt Test Tutorial.

Syntax

The syntax to execute an autotest takes the following simple form:
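
  testname [options] [testfunctions[:testdata]]...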

Substitute testname with the name of your executable. testfunctions can contain names of test functions to be executed. If no testfunctions are passed, all tests are run. If you append the name of an entry in testdata, the test function will be run only with that test data.

For example:
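
(The command lines below are reconstructed from the descriptions that follow each of them; the executable names testQString and testMyWidget are illustrative.)

  testQString toUpper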

Runs the test function called toUpper with all available test data.

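  testQString toUpper toInt:zero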

Runs the toUpper test function with all available test data, and the toInt test function with the test data called zero (if the specified test data doesn’t exist, the associated test will fail).
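
  testMyWidget -vs -eventdelay 500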

Runs the testMyWidget function test, outputs every signal emission and waits 500 milliseconds after each simulated mouse/keyboard event.

Logging Options

The following command line options determine how test results are reported:

  • -o filename,format Writes output to the specified file, in the specified format (one of txt, xml, lightxml, junitxml or tap). The special filename - may be used to log to standard output.

  • -o filename Writes output to the specified file.

  • -txt Outputs results in plain text.

  • -xml Outputs results as an XML document.

  • -lightxml Outputs results as a stream of XML tags.

  • -junitxml Outputs results as a JUnit XML document.

  • -csv Outputs results as comma-separated values (CSV). This mode is only suitable for benchmarks, since it suppresses normal pass/fail messages.

  • -teamcity Outputs results in TeamCity format.

  • -tap Outputs results in Test Anything Protocol (TAP) format.

The first version of the -o option may be repeated in order to log test results in multiple formats, but no more than one instance of this option can log test results to standard output.

If the first version of the -o option is used, neither the second version of the -o option nor the -txt, -xml, -lightxml, -teamcity, -junitxml or -tap options should be used.

If neither version of the -o option is used, test results will be logged to standard output. If no format option is used, test results will be logged in plain text.
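
For instance, assuming a test executable named testQString, an invocation along these lines logs XML results to a file and plain-text results to standard output:

  testQString -o results.xml,xml -o -,txt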

Test Log Detail Options

The following command line options control how much detail is reported in test logs:

  • -silent Silent output; only shows fatal errors, test failures and minimal status messages.

  • -v1 Verbose output; shows when each test function is entered. (This option only affects plain text output.)

  • -v2 Extended verbose output; shows each QCOMPARE() and QVERIFY(). (This option affects all output formats and implies -v1 for plain text output.)

  • -vs Shows all signals that get emitted and the slot invocations resulting from those signals. (This option affects all output formats.)

Testing Options

The following command-line options influence how tests are run:

  • -functions Outputs all test functions available in the test, then quits.

  • -datatags Outputs all data tags available in the test. A global data tag is preceded by ‘__global__’.

  • -eventdelay ms If no delay is specified for keyboard or mouse simulation (QTest::keyClick(), QTest::mouseClick() etc.), the value from this parameter (in milliseconds) is substituted.

  • -keydelay ms Like -eventdelay, but only influences keyboard simulation and not mouse simulation.

  • -mousedelay ms Like -eventdelay, but only influences mouse simulation and not keyboard simulation.

  • -maxwarnings number Sets the maximum number of warnings to output. 0 for unlimited; defaults to 2000.

  • -nocrashhandler Disables the crash handler on Unix platforms. On Windows, it re-enables the Windows Error Reporting dialog, which is turned off by default. This is useful for debugging crashes.

  • -platform name This command line argument applies to all Qt applications, but might be especially useful in the context of auto-testing. By using the “offscreen” platform plugin (-platform offscreen) it’s possible to have tests that use QWidget or QWindow run without showing anything on the screen. Currently the offscreen platform plugin is only fully supported on X11.

Benchmarking Options

The following command line options control benchmark testing:

  • -callgrind Uses Callgrind to time benchmarks (Linux only).

  • -tickcounter Uses CPU tick counters to time benchmarks.

  • -eventcounter Counts events received during benchmarks.

  • -minimumvalue n Sets the minimum acceptable measurement value.

  • -minimumtotal n Sets the minimum acceptable total for repeated executions of a test function.

  • -iterations n Sets the number of accumulation iterations.

  • -median n Sets the number of median iterations.

  • -vb Outputs verbose benchmarking information.

Miscellaneous Options

  • -help Outputs the possible command line arguments and gives some useful help.

Qt Test Environment Variables

You can set certain environment variables in order to affect the execution of an autotest:

  • QTEST_DISABLE_CORE_DUMP Setting this variable to a non-zero value will disable the generation of a core dump file.

  • QTEST_DISABLE_STACK_DUMP Setting this variable to a non-zero value will prevent Qt Test from printing a stacktrace in case an autotest times out or crashes.

Creating a Benchmark

To create a benchmark, follow the instructions for creating a test and then add a QBENCHMARK macro or setBenchmarkResult() to the test function that you want to benchmark. In the following code snippet, the macro is used:
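
A minimal sketch (the class and function names are illustrative):

  void TestQString::toUpper_benchmark()
  {
      QString string("hello");
      QBENCHMARK {
          string.toUpper();   // only the code inside the macro is measured
      }
  }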

A test function that measures performance should contain either a single QBENCHMARK macro or a single call to setBenchmarkResult(). Multiple occurrences make no sense, because only one performance result can be reported per test function, or per data tag in a data-driven setup.

Avoid changing the test code that forms (or influences) the body of a QBENCHMARK macro, or the test code that computes the value passed to setBenchmarkResult(). Differences in successive performance results should ideally be caused only by changes to the product you are testing. Changes to the test code can potentially result in a misleading report of a change in performance. If you do need to change the test code, make that clear in the commit message.

In a performance test function, the QBENCHMARK or setBenchmarkResult() should be followed by a verification step using QCOMPARE(), QVERIFY(), and so on. You can then flag a performance result as invalid if another code path than the intended one was measured. A performance analysis tool can use this information to filter out invalid results. For example, an unexpected error condition will typically cause the program to bail out prematurely from the normal program execution, and thus falsely show a dramatic performance increase.
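
Extending the sketch above with such a verification step:

  void TestQString::toUpper_benchmark()
  {
      QString string("hello");
      QString result;
      QBENCHMARK {
          result = string.toUpper();
      }
      // If this comparison fails, the performance result can be treated as
      // invalid: the measured run did not take the intended code path.
      QCOMPARE(result, QString("HELLO"));
  }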

Selecting the Measurement Back-end

The code inside the QBENCHMARK macro will be measured, and possibly also repeated several times in order to get an accurate measurement. This depends on the selected measurement back-end. Several back-ends are available. They can be selected on the command line:

  Name                  Command-line Argument   Availability
  Walltime              (default)               All platforms
  CPU tick counter      -tickcounter            Windows, macOS, Linux, many UNIX-like systems
  Event counter         -eventcounter           All platforms
  Valgrind Callgrind    -callgrind              Linux (if installed)
  Linux Perf            -perf                   Linux

In short, walltime is always available but requires many repetitions to get a useful result. Tick counters are usually available and can provide results with fewer repetitions, but can be susceptible to CPU frequency scaling issues. Valgrind provides exact results, but does not take I/O waits into account, and is only available on a limited number of platforms. Event counting is available on all platforms and it provides the number of events that were received by the event loop before they are sent to their corresponding targets (this might include non-Qt events).

The Linux Performance Monitoring solution is available only on Linux and provides many different counters, which can be selected by passing an additional option -perfcounter countername, such as -perfcounter cache-misses, -perfcounter branch-misses, or -perfcounter l1d-load-misses. The default counter is cpu-cycles. The full list of counters can be obtained by running any benchmark executable with the option -perfcounter list.

  • Using the performance counter may require enabling access to non-privileged applications.

  • Devices that do not support high-resolution timers default to one-millisecond granularity.

See Writing a Benchmark in the Qt Test Tutorial for more benchmarking examples.

Using Global Test Data

You can define initTestCase_data() to set up a global test data table. Each test is run once for each row in the global test data table. When the test function itself is data-driven, it is run for each local data row, for each global data row. So, if there are g rows in the global data table and d rows in the test’s own data table, the number of runs of this test is g times d.

Global data is fetched from the table using the QFETCH_GLOBAL() macro.

The following are typical use cases for global test data:

  • Selecting among the available database backends in QSql tests to run every test against every database.

  • Doing all networking tests with and without SSL (HTTP versus HTTPS) and proxying.

  • Testing a timer with a high precision clock and with a coarse one.

  • Selecting whether a parser shall read from a QByteArray or from a QIODevice .

For example, to test each number provided by roundTripInt_data() with each locale provided by initTestCase_data():
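
A sketch of how this can be wired up (the class name TestLocale and the row tags are illustrative):

  void TestLocale::initTestCase_data()
  {
      QTest::addColumn<QLocale>("locale");
      QTest::newRow("C") << QLocale::c();
      QTest::newRow("French") << QLocale(QLocale::French, QLocale::France);
  }

  void TestLocale::roundTripInt_data()
  {
      QTest::addColumn<int>("number");
      QTest::newRow("zero") << 0;
      QTest::newRow("million") << 1000000;
  }

  void TestLocale::roundTripInt()
  {
      QFETCH_GLOBAL(QLocale, locale);   // one run per row of global data
      QFETCH(int, number);              // times one run per row of local data
      bool ok = false;
      QCOMPARE(locale.toInt(locale.toString(number), &ok), number);
      QVERIFY(ok);
  }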

© 2020 The Qt Company Ltd. Documentation contributions included herein are the copyrights of their respective owners. The documentation provided herein is licensed under the terms of the GNU Free Documentation License version 1.3 as published by the Free Software Foundation. Qt and respective logos are trademarks of The Qt Company Ltd. in Finland and/or other countries worldwide. All other trademarks are property of their respective owners.


Slots

Avoiding Dynamically Created Attributes

The attributes of objects are stored in a dictionary called __dict__. Like any other dictionary, a dictionary used for attribute storage doesn't have a fixed number of elements. In other words, you can add elements to dictionaries after they are defined, as we have seen in our chapter on dictionaries. This is why you can dynamically add attributes to objects of the classes we have created so far:
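
A minimal sketch (the class name A is illustrative):

  class A:
      pass

  a = A()
  a.x = 66      # dynamically attach a new attribute to the instance
  print(a.x)    # 66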

The dictionary containing the attributes of 'a' can be accessed like this:
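
Continuing the sketch above:

  print(a.__dict__)    # {'x': 66}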

You may have noticed that you can dynamically add attributes to instances of the classes we have defined so far, but that you can't do this with built-in classes like 'int' or 'list':
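
A sketch of the failure:

  x = 42
  x.a = "not possible"
  # AttributeError: 'int' object has no attribute 'a'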

Using a dictionary for attribute storage is very convenient, but it can mean a waste of space for objects that have only a small number of instance variables. The space consumption can become critical when creating large numbers of instances. Slots are a nice way to work around this space consumption problem. Instead of having a dynamic __dict__ that allows adding attributes to objects dynamically, slots provide a static structure which prohibits additions after the creation of an instance.

When we design a class, we can use slots to prevent the dynamic creation of attributes. To define slots, you have to define a list with the name __slots__ containing all the attributes you want to use. We demonstrate this in the following class, in which the slots list contains only the name of one attribute, 'val'.
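
A minimal sketch (the class name S is illustrative; the attribute names 'val' and 'new' match the surrounding text):

  class S:
      __slots__ = ['val']

      def __init__(self, v):
          self.val = v

  x = S(42)
  print(x.val)            # 42
  x.new = "not possible"  # raises AttributeError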

If we run this program, we can see that it is not possible to dynamically create a new attribute: the attempt to create an attribute 'new' fails.
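
The attempted assignment fails with an error along these lines:

  AttributeError: 'S' object has no attribute 'new'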

We mentioned in the beginning that slots prevent a waste of space with objects. Since Python 3.3 this advantage is not as impressive any more, because key-sharing dictionaries are used for the storage of objects: the attributes of the instances are capable of sharing part of their internal storage between each other, namely the part which stores the keys and their corresponding hashes. This helps reduce the memory consumption of programs that create many instances of non-builtin types.
