Renderer Test Utility

Macula can utilize GPU for some operations in order to decrease the CPU load. To enable GPU acceleration, run the Renderer Test Utility from the Macula suite on the Macula Monitor workstation.

The two supported GPU operations are:

  • frame decoding on Macula Monitor side

  • rendering on Macula Monitor side

GPU acceleration limitations:

  • Only Windows 10 (and the corresponding Windows Server editions, 2016 and 2019) or Windows 11

  • Only H.264 streams for live view and digital PTZ in live view

  • For fisheye image dewarp, only Fisheye-II is supported (choose the Fisheye lens (6MP and larger resolution) option in Macula Console)

All other cases will use CPU for decoding/rendering operations.

Before configuring GPU acceleration, make sure you have installed the latest official drivers for all your graphics cards. We also recommend allocating as much memory to the GPU as possible (the more, the better). For integrated video cards, you can change this setting in the BIOS. For discrete graphics cards, choose models with more onboard memory (1 GB per display or more).
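As a quick way to review the cards and their onboard memory before running the wizard, you can query WMI on Windows (for example, `wmic path Win32_VideoController get Name,AdapterRAM /format:csv`). The sketch below is only an illustration of checking the "1 GB per display" guideline against such output; the sample CSV is invented, not taken from a real machine, and `check_gpu_memory` is a hypothetical helper, not part of Macula.

```python
# Hypothetical sketch: parse CSV output in the shape produced by
#   wmic path Win32_VideoController get Name,AdapterRAM /format:csv
# and check the "1 GB of video memory per display" recommendation.
# The sample data below is illustrative only.

import csv
import io

SAMPLE_WMIC_CSV = """Node,AdapterRAM,Name
DESKTOP,1073741824,Intel(R) UHD Graphics 630
DESKTOP,8589934592,NVIDIA GeForce RTX 3060
"""

MIN_BYTES_PER_DISPLAY = 1 << 30  # 1 GB recommended per display


def check_gpu_memory(wmic_csv, displays):
    """Return {gpu name: bool} - does each card meet the per-display minimum?"""
    result = {}
    for row in csv.DictReader(io.StringIO(wmic_csv)):
        ram_bytes = int(row["AdapterRAM"])
        result[row["Name"].strip()] = ram_bytes >= MIN_BYTES_PER_DISPLAY * displays
    return result


print(check_gpu_memory(SAMPLE_WMIC_CSV, displays=2))
```

With two displays, the 1 GB integrated card in the sample fails the check while the 8 GB discrete card passes.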

GPU Test and Configuration

On every client workstation where you want to enable GPU usage, launch the Renderer Test Utility by locating it in the Start menu, or simply by typing a part of the name in the Windows search.

The first wizard screen is a summary. Here, you can select which graphics cards will be used for decoding and for 3D rendering. To do so, first run the GPU test so that Macula can learn your GPU capabilities and the maximum possible load. During the test, each GPU is loaded in turn with test videos of different resolutions, starting with the highest. As a result, a value list is created for each GPU, which the Macula Monitor application then uses for load balancing.
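Conceptually, the per-GPU value list can be thought of as a capacity table: how many streams of each resolution a card handled during the test. The sketch below illustrates how a load balancer could use such a table; it is an assumption for illustration only, not Macula's actual internal logic, and the capacity figures are invented.

```python
# Illustrative sketch of load balancing over GPU test results.
# The capacity numbers are hypothetical, not real measurements.

capacity = {  # streams each GPU handled per resolution during the test
    "Intel UHD 630": {"1080p": 16, "4K": 4},
    "GeForce RTX 3060": {"1080p": 30, "4K": 10},
}


def pick_decoder(load, resolution):
    """Pick the GPU with the most spare capacity for this resolution.

    load: {gpu name: streams currently decoded on it}.
    Returns None when no GPU has headroom (fall back to CPU decoding).
    """
    best, best_free = None, 0
    for gpu, caps in capacity.items():
        free = caps.get(resolution, 0) - load.get(gpu, 0)
        if free > best_free:
            best, best_free = gpu, free
    return best


current_load = {"Intel UHD 630": 10, "GeForce RTX 3060": 25}
print(pick_decoder(current_load, "1080p"))
```

In this example the integrated card has 6 free 1080p slots versus 5 on the discrete card, so it receives the next stream; when every card is saturated, the function returns None and decoding would fall back to the CPU.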

We recommend re-running the GPU test after every system change that may affect graphics, as well as after major OS updates (such as feature updates) and Macula software upgrades.

To run the performance test, click the Test GPU button in the bottom left corner. In the list, mark all GPUs that you wish to engage by putting check marks in the Test column, then click Test below.

If you have already launched the test earlier, the Status column will reflect the last test results. For the GPUs that have passed the test, there is no need to re-launch it, unless you have made changes to that video card configuration (e.g., added memory for the integrated card, installed a different device driver etc.). The test may take some time. If your Macula Monitor application is open, the wizard will ask you to close it and re-open later (and offer to do so automatically).

During the test, click the Show log button to see how the test is going. After the test is finished, the wizard will automatically switch back to the previous screen, and you will still be able to view the last test log.

Log colors:

  • red: most important (errors, failures)

  • yellow: warnings

  • blue: information

  • black: default

  • grey: trace, details or low importance

Flags used in the test log indicate GPU capabilities:

  • supported: graphics card is supported

  • unsupported: hardware decoding is not supported by OS for this GPU

  • legacy: video card is old or has old drivers, max resolution will be limited to 1080p

  • canDecode: the GPU is OK to be used for decoding

  • canRender: the GPU is OK to be used for rendering

As a result, GPUs previously marked Not passed will change their status. GPUs that have passed the test will be available for decoding (put a check mark in the Decode column). Below the table, you can choose which GPU will be used for rendering.

If your graphics card can decode both H.264 and H.265, both codecs will be enabled after the test. However, you can deselect H.265 if you do not want it to be decoded by the GPU. Do not forget to restart Macula Monitor if you have opened the wizard only to change the settings (without running the test).

Video output on the Macula Monitor side consists of two stages: decoding frames and rendering them for display. After decoding, the frames are converted and passed on for rendering. If decoding and rendering happen on different GPUs, the CPU is used in between, so its load may grow slightly. Therefore, if only one GPU is used for decoding, it may be wiser to use the same GPU for rendering. The same logic applies when one GPU takes most of the decoding load (this can be deduced from the GPU test log). However, if you have a GPU that does not support decoding, you may want to use it for rendering so that the total load is split between GPUs.
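The placement rule above can be sketched as a small decision function. This is only an illustration of the reasoning in this section, with invented GPU names; it is not how Macula itself assigns the render GPU.

```python
# Sketch of the render-GPU choice described above (illustrative only):
# prefer a card that cannot decode at all (split the total load), otherwise
# render on the card carrying the most decoding work (avoid the CPU copy
# between cards).


def choose_render_gpu(decode_load, can_decode):
    """decode_load: {gpu: decoded streams}; can_decode: {gpu: bool}."""
    non_decoders = [gpu for gpu, ok in can_decode.items() if not ok]
    if non_decoders:
        return non_decoders[0]  # split work between cards
    return max(decode_load, key=decode_load.get)  # keep frames on one card


print(choose_render_gpu({"iGPU": 12, "dGPU": 4}, {"iGPU": True, "dGPU": True}))
# -> iGPU: render where most of the decoding already happens
```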

In general, according to our tests, Intel GPUs have better decoding capabilities, while high-end Nvidia GPUs are good at 3D rendering.

Click OK to save the settings and exit. If you close the wizard by clicking Cancel or X, the GPU settings will not be saved.

You can re-open the wizard at any time to run the test again and/or change the settings.

Usage in Macula Monitor

After you have enabled GPU settings via wizard, the Macula Monitor application on the same machine will be able to use GPU capabilities. Using GPU will significantly decrease the CPU load and will allow you to output more channels simultaneously on the same workstation. By combining GPU acceleration with substream usage you can gain even more, as using lower resolution streams for multichannel output is more efficient.

Macula Monitor will automatically use the GPUs enabled via the wizard; you do not need to enable anything else in the application settings. Limitations:

  • live view and DPTZ

  • fisheye dewarp (supported dewarp mode must be set in Macula Console, as described above)

  • stream codec must be H.264

  • stream resolution must be supported (see GPU test log for details), e.g., legacy GPUs will not be used for resolutions greater than FullHD

If you want to check whether decoding is currently performed by the GPU, enable rendering info in the Macula Monitor application settings: in the main menu, choose Edit > Settings, select the Usability tab, enable the Show decoder information option, and click Save.

After you have enabled this setting, each viewport in the live mode will have a label next to the timestamp (upper right corner):

  • CPU: decoding is performed using CPU (GPU is not configured or overloaded, or stream codec/resolution is not compatible)

  • GPU: the corresponding graphics card type will appear as a label (Intel, Nvidia, AMD, or other GPU)

Macula Monitor will automatically switch to CPU decoding if the configured GPU is overloaded (more than 80% of its decoder, renderer or memory is used).
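The 80% fallback rule can be expressed as a short check. The sketch below is an assumed model of the behavior described above, not Macula's actual implementation; the utilization figures are invented.

```python
# Sketch of the overload fallback rule: switch to CPU decoding when more
# than 80% of the GPU's decoder, renderer, or memory is in use.
# Assumed thresholds per the documentation; sample stats are invented.

OVERLOAD = 0.80


def decoder_for(gpu_stats):
    """gpu_stats: fractions (0..1) of decoder, renderer, and memory in use."""
    if any(gpu_stats[key] > OVERLOAD for key in ("decoder", "renderer", "memory")):
        return "CPU"
    return "GPU"


print(decoder_for({"decoder": 0.85, "renderer": 0.40, "memory": 0.50}))  # CPU
print(decoder_for({"decoder": 0.60, "renderer": 0.40, "memory": 0.50}))  # GPU
```

Note that any single resource crossing the threshold triggers the fallback, so a card with a nearly full memory pool falls back to CPU decoding even if its decoder is idle.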

Make sure you have at least 512MB of dedicated video memory per display (recommended minimum is 1GB per display).

Troubleshooting

If, immediately after enabling hardware acceleration, your Macula Monitor application behaves strangely, crashes, or causes other problems, try running it without GPU decoding. To do so a single time - for troubleshooting - use the Macula Monitor without GPU decoding shortcut from the Start menu (similar icon but in grey colors). This shortcut activates a so-called "safe mode" for the Macula Monitor application, which completely ignores the GPU settings configured via the Renderer Test Utility.

After launching the Macula Monitor application in "safe mode", check whether the issue is gone. If the no-GPU mode helps, disable GPU decoding via the Renderer Test Utility by deselecting GPUs in the list (remove the check mark in the Decode column). If you have multiple graphics cards, the issue may be caused by one of them, so a wise approach is to enable/disable the graphics adapters one by one to find out which one is causing problems.
