Validation

Run-time integration tests

The run-time integration tests are a mechanism for validating the core functionality of Arm Auto Solutions.

The tests are run on the image using the OEQA test framework. Refer to OEQA FVP for more information on this framework.

This section describes the structure, implementation, and debugging of the tests.

OEQA tests in the BSP

The Processing Elements and Components tested by the framework are detailed below. The testing scripts can be found in yocto/meta-zena-css-bsp/lib/oeqa/runtime/cases and meta-arm/lib/oeqa/runtime/cases/.

  • test_00_aspen_boot
    • test_scp

      This validates that the CMN has been configured, the handshake from the RSE has been received and that the SCP-firmware module initialization has completed successfully.

    • test_uboot_boot

      This method monitors the console output for the expected U-Boot message within a defined timeout period, ensuring the U-Boot bootloader has initialized successfully.

    • test_safety_island_cl1

      This validates that the Safety Island CL1 processing element is operational by checking for the expected console output from the safety_island_c1 console.

  • test_00_rse
    • test_normal_boot

      This validates that the SI CL0 is released out of reset and the handshake from the SCP-firmware has been received for Arm Zena CSS.

    • test_measured_boot

      This validates the enhanced trustworthiness provided by the measured boot functionality by reading the slot and sw_type values from the boot logs.

  • Primary Compute
    • FVP devices

      The entry point to these tests is meta-arm/lib/oeqa/runtime/cases/fvp_devices.py. To find out more about the applicable tests, see FVP device tests.

    • FVP boot

      The script that implements the test is meta-arm/lib/oeqa/runtime/cases/fvp_boot.py. The test waits for Linux to boot on the Primary Compute then checks for common error patterns on all consoles.

    • Ping

      The script that implements the test is meta/lib/oeqa/runtime/cases/ping.py. The test verifies network connectivity to the target by sending ICMP echo requests to the target IP address and expects five consecutive successful ping responses. If the target uses localhost-based networking, the test is skipped.

    • SSH

      The script that implements the test is meta/lib/oeqa/runtime/cases/ssh.py. The test depends on the ping test and verifies remote shell access to the target by executing uname -a over SSH. It retries connection attempts for transient SSH failures and passes when the command executes successfully.

    • test_20_aspen_ap_dsu
      • test_dsu_cluster

        This validates that the AP’s DSU-120AE has been configured correctly by checking the L3 cache size, shared CPU list and the DSU-120AE PMU counters.

    • test_01_systemd_boot
      • test_systemd_boot_message

        This test ensures that the RD-Aspen platform is using the UEFI boot manager, systemd-boot. It verifies that the boot message contains the string ‘Boot in’ to confirm systemd-boot is being used.

    • test_30_configurable_pc_cores
      • test_configured_pc_cpus_in_tf_a

        This validates that the TF-A correctly brings up the configured number of Primary Compute CPUs.

      • test_configured_pc_cpus_in_linux

        This validates that the configured number of Primary Compute CPUs is visible in Linux by checking the number of CPUs listed in the device tree and the number of CPUs started at runtime using the nproc command.

    • test_00_secure_partition
      • test_optee_normal

        The test waits for the Primary Compute to log that OP-TEE has loaded the required Secure Partitions (SPs) and that the primary CPU has switched to Normal world boot.
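The CPU-count comparison performed by test_30_configurable_pc_cores can be illustrated with a minimal sketch. The helper names and sample data below are hypothetical, not the real test code; the real test reads the device tree and runs nproc on the target.

```python
# Minimal sketch of the Primary Compute CPU-count check: the number of CPU
# nodes in the device tree must match the number of CPUs Linux started.
# Helper names and sample data are illustrative, not the real OEQA test code.

def count_dt_cpus(dt_cpu_nodes):
    """Count device-tree nodes that describe CPUs (e.g. 'cpu@0', 'cpu@100')."""
    return sum(1 for node in dt_cpu_nodes if node.startswith("cpu@"))

def cpus_match(dt_cpu_nodes, nproc_output):
    """Compare the device-tree CPU count with `nproc` output from the target."""
    return count_dt_cpus(dt_cpu_nodes) == int(nproc_output.strip())

# Example: a 4-CPU configuration seen both in the device tree and by Linux.
nodes = ["cpu@0", "cpu@100", "cpu@200", "cpu@300", "cpu-map"]
print(cpus_match(nodes, "4\n"))  # True when both views agree
```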

FVP device tests

These tests consist of a series of device tests that can be found in meta-arm/lib/oeqa/runtime/cases/fvp_devices.py.

  • networking

    Checks that the network device and its correct driver are available and accessible via the filesystem and that outbound connections work (invoking wget).

  • RTC

    Checks that the Real-Time Clock (RTC) device and its correct driver are available and accessible via the filesystem and verifies that the hwclock command runs successfully.

  • cpu_hotplug

    Checks CPU availability and that basic hotplug functionality works, such as enabling and disabling CPUs and preventing all of them from being disabled at the same time.

  • virtiorng

    Checks that the virtio-rng device is available through the filesystem and that it is able to generate random numbers when required.

  • watchdog

    Checks that the watchdog device and its correct driver are available and accessible via the filesystem.
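The "device and its correct driver are available" checks above rely on the sysfs convention that a bound device exposes a `driver` symlink whose target names the driver. The sketch below simulates that check in a temporary directory; the device and driver names are illustrative, not taken from the real test.

```python
import os
import tempfile

# Minimal sketch of a sysfs device/driver check like those in the FVP device
# tests: a bound device has a 'driver' symlink in sysfs whose target name
# identifies the driver. The paths below are simulated, not a real target.

def bound_driver(device_dir):
    """Return the driver name a device is bound to, or None if unbound."""
    link = os.path.join(device_dir, "driver")
    if not os.path.islink(link):
        return None
    return os.path.basename(os.readlink(link))

# Simulate a sysfs device directory bound to a hypothetical RTC driver.
with tempfile.TemporaryDirectory() as sysfs:
    dev = os.path.join(sysfs, "9010000.rtc")
    os.makedirs(dev)
    os.symlink("../../bus/amba/drivers/rtc-pl031", os.path.join(dev, "driver"))
    print(bound_driver(dev))  # rtc-pl031
```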

PSA APIs test suite integration on Primary Compute

The meta-arm Yocto layer provides Trusted Services OEQA tests, which you can use to run the Trusted Services test executables automatically. The script that implements the test is meta-arm/lib/oeqa/runtime/cases/trusted_services.py.

Currently, the following test cases for psa-api-test (from the PSA Arch Tests project) are supported:

  • ts-psa-crypto-api-test

    Used for conformance testing of the PSA Crypto API.

  • ts-psa-ps-api-test

    Used for PSA Protected Storage API conformance testing, part of the PSA Secure Storage API.

  • ts-psa-its-api-test

    Used for PSA Internal Trusted Storage API conformance testing, part of the PSA Secure Storage API.

  • ts-psa-iat-api-test

    Used for conformance testing of the PSA Initial Attestation API.

Platform Fault Detection Interface (PFDI) Test

The Platform Fault Detection Interface (PFDI) test is designed to validate the correct functioning of the PFDI integration. It does this by verifying the systemd service status of pfdi-app, the execution of the PFDI application, and the validation of the PFDI command-line interface (CLI).

The script that implements the test is yocto/meta-arm-auto-solutions/lib/oeqa/runtime/cases/test_10_pfdi.py.

The following tests are executed to validate PFDI:

  • test_init_systemd_service

    The test_init_systemd_service method verifies that the pfdi-app systemd service starts correctly on boot. It uses journalctl to inspect the logs, ensuring the presence of expected service initialization messages and confirming the absence of error patterns in the log output.

  • test_pfdi_app

    The test_pfdi_app method validates the end-to-end execution of PFDI tool commands. It uses pfdi-tool to generate and pack diagnostic configuration files, then runs those diagnostics using the pfdi-sample-app. The test checks that diagnostics execute successfully across all CPU cores configured in the system.

  • test_pfdi_cli

    The test_pfdi_cli method checks the CLI interface by running commands such as --info, --pfdi_info, and --count. It validates that version information is correctly reported and that each core passes the Out of Reset (OoR) diagnostic check using the --result command.

  • test_pfdi_cli_force_error

    The test_pfdi_cli_force_error method injects a simulated fault on a CPU core using the pfdi-cli -e command. It then checks the systemd journal to verify that the failure was captured correctly, with log entries indicating that the Online (OnL) test failed for a CPU and reporting the appropriate input/output error code.

  • test_pfdi_app_monitoring

    The test_pfdi_app_monitoring test checks that PFDI monitoring starts properly on every CPU core. It looks at the system’s cluster and core layout, then confirms that each one shows the correct Started PFDI monitoring log message. If any core’s log is missing, late, or incorrect, the test will fail.

  • test_pfdi_app_monitoring_error

    The test_pfdi_app_monitoring_error test checks how the system behaves when an error is forced using the pfdi-cli. For each CPU core in every cluster, it triggers an error with the --force_error option and then verifies that the PFDI monitor reports the correct failure message. The test passes if all cores show the expected “Failed, stopping PFDI monitoring” logs.

  • test_pfdi_sbistc

    The test_pfdi_sbistc test validates the system response when PFDI errors are forced on every CPU core. For each (cluster, core), it triggers an error using the pfdi-cli and then checks that the expected FMU non-critical fault and SBISTC failure logs appear. The test passes if all cores report both log messages within the timeout windows; it fails if any expected log is missing, delayed, or incorrect.
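The per-core checks in test_pfdi_app_monitoring can be illustrated with a minimal sketch: collect the (cluster, core) pairs that logged a monitoring-start message and compare against the expected layout. The log line format below is hypothetical; the real PFDI output may differ.

```python
import re

# Minimal sketch of the per-core check in test_pfdi_app_monitoring: every
# (cluster, core) pair must log a 'Started PFDI monitoring' message.
# The log-line format is illustrative, not the exact PFDI output.

LOG_RE = re.compile(r"cluster (\d+) core (\d+): Started PFDI monitoring")

def cores_with_monitoring(journal):
    """Extract the (cluster, core) pairs that reported monitoring start."""
    return {(int(c), int(k)) for c, k in LOG_RE.findall(journal)}

def all_cores_monitored(journal, expected):
    """True if every expected (cluster, core) pair logged a start message."""
    return expected <= cores_with_monitoring(journal)

journal = """\
cluster 0 core 0: Started PFDI monitoring
cluster 0 core 1: Started PFDI monitoring
"""
print(all_cores_monitored(journal, {(0, 0), (0, 1)}))          # True
print(all_cores_monitored(journal, {(0, 0), (0, 1), (1, 0)}))  # False: core missing
```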

PFDI Safety Island CL1 tests

The Safety Island PFDI validation test verifies the correct operation of the Platform Fault Detection Interface (PFDI) on the Safety Island cluster 1. It validates CPU-level control, diagnostic execution, result reporting, error handling, and stability under repeated execution.

The script that implements the test is yocto/meta-arm-auto-solutions/lib/oeqa/runtime/cases/test_992_safety_island_pfdi.py.

  • test_01_pfdi_cluster_status

    The test_01_pfdi_cluster_status method verifies that the status of PFDI monitoring can be queried for each CPU core in the Safety Island cluster 1. It executes the status command and checks that each CPU reports a valid state such as running, stopped, or disabled. The test passes if every configured CPU returns a valid operational state.

  • test_02_pfdi_run_all_tests

    The test_02_pfdi_run_all_tests method validates execution of all diagnostic blocks for each CPU core. It invokes the PFDI run command and verifies that:

    • The return code is rc=0

    • Scheduled, success, and skipped counters are present

    The test ensures full diagnostic execution completes successfully on all CPUs.

  • test_03_pfdi_run_block

    The test_03_pfdi_run_block method validates execution of a specific diagnostic block and verifies proper handling of invalid block IDs:

    • An invalid block ID must return an error message

    • A valid block ID must complete successfully with rc=0

    The test passes if invalid blocks are rejected and valid blocks execute without error.

  • test_04_pfdi_run_invalid_params

    The test_04_pfdi_run_invalid_params method verifies that invalid CLI parameter combinations are properly rejected. It tests scenarios such as:

    • Negative block IDs

    • Invalid part ranges

    • Start > end conditions

    • Incorrect parameter combinations

    The test passes if all invalid commands return appropriate error messages.

  • test_05_pfdi_run_block_valid

    The test_05_pfdi_run_block_valid method validates successful block-level execution. It verifies:

    • Full diagnostic execution for a CPU

    • Execution of a specific block

    • Correct reporting of execution statistics

    The test passes if valid commands execute with rc=0 and correct output formatting.

  • test_06_pfdi_run_range_valid

    The test_06_pfdi_run_range_valid method validates execution of a specific block part range. It ensures:

    • The specified part range executes successfully

    • The CLI reports correct block and range information

    • Execution statistics are displayed

    The test passes if valid part ranges complete successfully.

  • test_07_pfdi_invalid_cpu_value

    The test_07_pfdi_invalid_cpu_value method verifies that non-numeric or malformed CPU identifiers are rejected by the CLI.

  • test_08_pfdi_cpu_out_of_range

    The test_08_pfdi_cpu_out_of_range method validates that CPU indices outside the configured CPU range are rejected.

  • test_09_pfdi_count_blocks

    The test_09_pfdi_count_blocks method verifies that the CLI correctly reports the number of diagnostic blocks available for each CPU.

  • test_10_pfdi_count_block_parts

    The test_10_pfdi_count_block_parts method validates that the CLI correctly reports the number of parts within a specific diagnostic block.

  • test_11_pfdi_result

    The test_11_pfdi_result method verifies result reporting functionality. It checks that each CPU reports a SUCCESS result after execution.

  • test_12_pfdi_set_state_toggle

    The test_12_pfdi_set_state_toggle method verifies that the monitoring state of PFDI can be toggled. The test ensures:

    • Disabling a CPU transitions it to disabled or stopped

    • Enabling a CPU restores it to running

  • test_13_pfdi_force_error_effect

    The test_13_pfdi_force_error_effect method validates forced error injection behavior. The test:

    • Injects a forced error

    • Verifies error acknowledgement

    • Confirms the diagnostic result transitions to FAILED

  • test_14_pfdi_multiple_runs_consistency_3x

    The test_14_pfdi_multiple_runs_consistency_3x method validates stability across repeated diagnostic execution. For each CPU, diagnostics are executed three consecutive times. The test passes if all runs complete successfully.

  • test_15_pfdi_stress_5x

    The test_15_pfdi_stress_5x method performs a stress test by executing diagnostics five consecutive times per CPU. The test passes if no failures occur across repeated execution.

  • test_16_pfdi_info

    The test_16_pfdi_info method verifies firmware identification reporting. Depending on configuration, it validates either:

    • The stub firmware detection message, or

    • Vendor firmware information, including the vendor ID, implementation ID, and version number

    The test passes if the firmware information matches the expected configuration.
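The result parsing implied by test_02_pfdi_run_all_tests (rc=0 plus scheduled, success, and skipped counters) can be sketched as below. The output format is a hypothetical stand-in; the real CLI text may differ.

```python
import re

# Minimal sketch of parsing a PFDI run result: the output must contain rc=0
# plus scheduled/success/skipped counters. The 'scheduled=N success=N
# skipped=N rc=N' format is illustrative, not the exact CLI output.

def parse_run_output(text):
    """Extract rc and the scheduled/success/skipped counters from run output."""
    rc = re.search(r"\brc=(\d+)", text)
    counters = dict(re.findall(r"\b(scheduled|success|skipped)=(\d+)", text))
    return {"rc": int(rc.group(1)) if rc else None,
            "counters": {k: int(v) for k, v in counters.items()}}

def run_passed(text):
    """True if rc=0 and all three counters are present."""
    r = parse_run_output(text)
    return r["rc"] == 0 and {"scheduled", "success", "skipped"} <= r["counters"].keys()

sample = "pfdi run cpu0: scheduled=12 success=12 skipped=0 rc=0"
print(run_passed(sample))  # True
```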

Safety Diagnostics tests

These tests consist of safety island tests that can be found in yocto/meta-zena-css-bsp/lib/oeqa/runtime/cases/test_10_safetydiagnostics_ssu_fmu.py.

  • test_10_safetydiagnostics_ssu_fmu
    • test_safety_island_fmu

      This validates that the FMU collects all faults from upstream fault sources and collates them into a single pair of non-critical (NC) and critical (C) error signals.

    • test_safety_island_ssu

      This validates that the SSU provides a mechanism to validate critical and non-critical state transitions using the SSU SYS_CTRL and SYS_STATUS registers.

Primary Compute CPUs RAS tests

These tests consist of RAS CPU tests that can be found in yocto/meta-arm-auto-solutions/lib/oeqa/runtime/cases/test_10_ras_cpu.py.

  • test_10_ras_cpu
    • test_01_ts_ras_inject_list

      The test_01_ts_ras_inject_list test captures the line that contains the list of RAS errors, such as “CorrectableCpuError, UncorrectableFatalCpuError, DeferredCpuError”, and verifies that it is reported successfully.

    • test_02_ts_ras_inject_invalid_cpu_error

      The test_02_ts_ras_inject_invalid_cpu_error test validates that providing an invalid error name to ts-ras-inject produces a clear error message (for example, “Unknown error type: InvalidErrorType”) and returns to the Linux prompt.

    • test_03_ts_ras_inject_usage

      The test_03_ts_ras_inject_usage validates the CLI usage output when ts-ras-inject is invoked without an error name, and confirms that the usage output lists the supported CPU error types.

    • test_04_ts_ras_inject_correctable_cpu_error

      The test_04_ts_ras_inject_correctable_cpu_error injects a CorrectableCpuError using ts-ras-inject and validates:

      • The CLI indicates the injection started and finished with Success.

      • TF-A reports receiving CPU RAS interrupt and expected status values.

      • Linux dmesg contains the expected corrected error severity and associated context information.

    • test_05_ts_ras_inject_deferred_cpu_error

      The test_05_ts_ras_inject_deferred_cpu_error injects a DeferredCpuError using ts-ras-inject and validates:

      • The CLI indicates the injection started and finished with Success.

      • TF-A reports receiving CPU RAS interrupt and expected status values.

      • Linux dmesg reports a recoverable event severity.

    • test_06_ts_ras_inject_correctable_cpu_error_10x

      The test_06_ts_ras_inject_correctable_cpu_error_10x injects CorrectableCpuError 10 times and validates that each iteration returns to the Linux prompt. To avoid potential kernel log rate-limiting, the test waits before collecting dmesg and then matches the indexed hardware error form (for example “{10}[Hardware Error]: … event severity: corrected”).

    • test_07_ts_ras_inject_uncorrectable_cpu_error

      The test_07_ts_ras_inject_uncorrectable_cpu_error injects an UncorrectableFatalCpuError and validates that the injection is initiated from Linux and that SCP logs report the faulty CPU identification and uncontainable fault handling. Because the platform may enter a hang state, the test transitions the target Off/On and reboots back to Linux.

    • test_08_ts_ras_inject_correctable_deferred_cpu_error

      The test_08_ts_ras_inject_correctable_deferred_cpu_error injects both CorrectableCpuError and DeferredCpuError sequentially in a single shell command and validates that both injections are initiated, at least one injection finishes with Success, TF-A reports receiving a CPU RAS interrupt, and the test returns to the Linux prompt.

    • test_09_journalctl_service

      The test_09_journalctl_service validates rasdaemon service operation by checking the rasdaemon journal for “rasdaemon: ras:arm_event event enabled”. The test also verifies that the error indicator “affinity: -1” does not appear in the rasdaemon journal, as this would indicate an incorrect setup.
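The rasdaemon journal check in test_09_journalctl_service reduces to two string conditions: the event-enabled message must be present and the “affinity: -1” indicator must be absent. A minimal sketch, with invented journal lines:

```python
# Minimal sketch of the rasdaemon journal check in test_09_journalctl_service:
# the journal must contain the event-enabled message and must NOT contain the
# 'affinity: -1' indicator of an incorrect setup. Journal lines are invented.

ENABLED_MSG = "rasdaemon: ras:arm_event event enabled"
BAD_MARKER = "affinity: -1"

def rasdaemon_ok(journal):
    """True if the enable message appears and the bad-affinity marker does not."""
    return ENABLED_MSG in journal and BAD_MARKER not in journal

good = "Jan 01 target rasdaemon[123]: rasdaemon: ras:arm_event event enabled\n"
bad = good + "Jan 01 target rasdaemon[123]: arm_event: affinity: -1\n"
print(rasdaemon_ok(good), rasdaemon_ok(bad))  # True False
```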

Safety Island Cluster 1

This test validates Safety Island Cluster 1 and is implemented in yocto/meta-arm-auto-solutions/lib/oeqa/runtime/cases/test_10_safety_island.py.

  • test_10_safety_island
    • test_cluster1

      Verifies the Safety Island Cluster 1 (Zephyr) boot flow for the Arm Zena CSS platform. The test checks that the Zephyr Hello World demo application boots on the cluster, and also checks that all SMD cores are up and operational.

Arm Cryptographic Extension Performance Tests

The Arm Cryptographic Extension performance test validates the performance benefits of the Arm Cryptographic Extension by comparing HTTPS download times with and without the extension enabled. This test demonstrates real-world performance improvements in cryptographic operations. On the FVP, the Arm Cryptographic Extension is simulated with a cryptography plugin.

The script that implements the test is yocto/meta-arm-auto-solutions/lib/oeqa/runtime/cases/test_50_cryptographic_extension.py.

  • test_50_cryptographic_extension
    • test_cryptographic_extension_performance

      This test validates the performance benefits of the Arm Cryptographic Extension through a comprehensive HTTPS download comparison. The test performs the following operations:

      1. Certificate Generation: Creates a self-signed certificate using OpenSSL with RSA 2048-bit key for secure SSL/TLS connections.

      2. SSL Server Setup: Starts an SSL server that serves 10MB of random data using the generated certificate, simulating real-world encrypted data transfer scenarios.

      3. Performance Measurement with Extension: Downloads data over HTTPS with the Arm Cryptographic Extension enabled, using AES256-GCM-SHA384 cipher suite. The time command measures real time, user time, and system time for the operation.

      4. Performance Measurement without Extension: Downloads the same data with the Arm Cryptographic Extension disabled by setting OPENSSL_armcap=0x0 environment variable, forcing OpenSSL to use software-based cryptographic implementations.

      5. Performance Validation: Compares the timing results to verify that:

        • Real time (wall-clock time) is lower with the extension enabled

        • User time (CPU time in user mode) is significantly reduced with hardware acceleration

        • The cryptographic extension provides measurable performance improvements

      6. Cleanup: Properly terminates the SSL server process and removes generated certificate files to ensure clean test environment.

      The test uses OpenSSL’s capability detection and cipher suite selection to demonstrate hardware-accelerated cryptography versus software-only implementation. Performance improvements are expected due to dedicated cryptographic hardware instructions available in the Arm Cortex-A720AE core.
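The timing comparison in step 5 can be sketched as follows: parse the real and user fields from POSIX `time` output and check that the hardware-accelerated run is faster. The timing values below are invented for illustration.

```python
import re

# Minimal sketch of the timing comparison in the cryptographic-extension
# test: parse 'real'/'user' from `time` output and verify the run with the
# extension enabled is faster. The sample timings are invented.

def parse_time(output):
    """Parse 'real 0m1.234s'-style lines from `time` output into seconds."""
    fields = {}
    for name, mins, secs in re.findall(r"(real|user|sys)\s+(\d+)m([\d.]+)s", output):
        fields[name] = int(mins) * 60 + float(secs)
    return fields

def extension_faster(with_ext, without_ext):
    """True if both real and user time are lower with the extension enabled."""
    a, b = parse_time(with_ext), parse_time(without_ext)
    return a["real"] < b["real"] and a["user"] < b["user"]

accel = "real 0m2.100s\nuser 0m0.400s\nsys 0m0.200s"
soft = "real 0m5.800s\nuser 0m3.900s\nsys 0m0.300s"
print(extension_faster(accel, soft))  # True: hardware path wins
```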

Power Management CPU idle power states (C-states)

The CPU Idle test suite validates the correct functionality of the CPU idle states and transitions on the Primary Compute of the Arm Zena CSS platform. It includes tests for usage, entry and exit latency, residency, and transitions between different CPU idle states and CPU idle governors.

The script that implements the test is yocto/meta-arm-auto-solutions/lib/oeqa/runtime/cases/test_60_cpuidle_cstates.py.

The following tests validate CPU idle functionality:

  • test_ensure_cpuidle_or_skip

    This test checks if the cpuidle sysfs interface is present on the system and loads the C-state information for all CPUs. If no C-states are found, the subsequent tests are skipped. This serves as a prerequisite validation to ensure the CPU idle framework is available.

  • test_cpuidle_c_states

    This test validates that the required CPU idle C-states exist and have the expected names. It verifies the presence of three C-states: WFI (state0), cpu-sleep (state1), and cluster-sleep (state2) for each CPU core by checking the sysfs interface.

  • test_cstates_default_status

    This test verifies that all required CPU idle C-states are enabled by default when the kernel exposes the default_status interface. It ensures that the power management states are properly configured for optimal system operation.

  • test_disable_cstate

    This test validates the ability to disable individual C-states and verifies that usage counters do not increase while a state is disabled. The test also ensures that the original state can be restored, confirming proper runtime control of CPU idle states.

  • test_cstate_residency_latency

    This test checks that the latency and residency values for each C-state match the expected platform-specific values. It also verifies that usage and time counters advance when C-states are entered, confirming that the power management states are actively used.

  • test_cpuidle_governors

    This test validates the CPU idle governor framework by checking that the current governor (read-only interface) is one of the available governors, and if a read-write interface exists, it matches the read-only value. This ensures proper governor configuration and interface consistency.

  • test_cpuidle_governor_switching

    This test validates runtime switching between CPU idle governors when supported. It attempts to switch to each available governor and verifies that the change takes effect in both read-only and read-write interfaces, ensuring dynamic power management policy changes work correctly.

  • test_invalid_cpuidle_governor

    This test ensures that writing an invalid governor name fails appropriately and does not change the current governor setting. It validates the robustness of the governor selection interface against invalid inputs.
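The C-state name check in test_cpuidle_c_states reads each `stateN/name` file under a CPU's cpuidle directory in sysfs. The sketch below simulates that tree in a temporary directory rather than touching real hardware:

```python
import os
import tempfile

# Minimal sketch of the C-state name check in test_cpuidle_c_states: each
# CPU must expose WFI (state0), cpu-sleep (state1), and cluster-sleep
# (state2) under its cpuidle directory. The sysfs tree here is simulated.

EXPECTED = {"state0": "WFI", "state1": "cpu-sleep", "state2": "cluster-sleep"}

def cstate_names(cpuidle_dir):
    """Read the 'name' file of every stateN directory under a cpuidle dir."""
    names = {}
    for state in sorted(os.listdir(cpuidle_dir)):
        name_file = os.path.join(cpuidle_dir, state, "name")
        if os.path.isfile(name_file):
            with open(name_file) as f:
                names[state] = f.read().strip()
    return names

with tempfile.TemporaryDirectory() as sysfs:
    for state, name in EXPECTED.items():
        os.makedirs(os.path.join(sysfs, state))
        with open(os.path.join(sysfs, state, "name"), "w") as f:
            f.write(name + "\n")
    print(cstate_names(sysfs) == EXPECTED)  # True
```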

CPU Frequency Scaling tests

The CPU Frequency Scaling test suite validates the correct functionality of CPU frequency scaling (DVFS - Dynamic Voltage and Frequency Scaling) on the Primary Compute of the Arm Zena CSS platform. It includes comprehensive tests for frequency policies, governors, frequency ranges, and the SCMI-based scaling driver functionality.

The script that implements the test is yocto/meta-arm-auto-solutions/lib/oeqa/runtime/cases/test_60_cpu_frequency.py.

The following tests validate CPU frequency scaling functionality:

  • test_cpu_frequency_policy

    This test validates that CPU frequency policies are available for all online cores and verifies the correct number of policies based on the performance domain configuration. For Arm Zena CSS, it expects one policy per 4-core cluster and confirms that all required governors (ondemand, performance, powersave, and schedutil) are available for each policy.

  • test_cpufreq_default_governors

    This test verifies that the default CPU frequency governor is set to schedutil for all policies. The schedutil governor provides CPU frequency scaling based on scheduler utilization data, offering optimal performance and power balance.

  • test_cpufreq_set_governors

    This test validates that all supported CPU frequency governors can be set for each policy. It iterates through all available governors (ondemand, performance, powersave, schedutil) and verifies that each can be applied and read back correctly. The test restores the default governor after testing.

  • test_cpufreq_scaling_driver

    This test verifies that the CPU frequency scaling driver is configured as scmi for all policies. The SCMI (System Control and Management Interface) driver enables communication with the System Control Processor (SCP) for frequency management operations.

  • test_current_frequency_per_governor

    This test validates that the current frequency is reported correctly for each governor. It sets each governor in turn and verifies that the reported current frequency falls within the expected set of supported frequencies (1.8, 2.0, and 2.5 GHz). This ensures proper frequency reporting and governor functionality.

  • test_cpufreq_affected_cpus_per_policy

    This test verifies that CPU frequency changes apply to the correct set of CPUs within each performance domain. For Arm Zena CSS’s cluster configuration, it validates that each policy affects exactly 4 consecutive CPU cores, confirming proper performance domain mapping.

  • test_update_invalid_governor

    This test ensures system robustness by verifying that attempts to set invalid governor names fail gracefully without changing the current governor setting. It validates proper error handling in the governor selection interface.

  • test_update_scaling_min_frequencies

    This test validates the ability to adjust minimum scaling frequencies for each policy. It tests setting various frequency values within the supported range while ensuring the minimum frequency does not exceed the maximum frequency. The test verifies proper frequency boundary enforcement and restores original settings after testing.

  • test_update_scaling_max_frequencies

    This test validates the ability to adjust maximum scaling frequencies for each policy. It tests setting various frequency values within the supported range while ensuring the maximum frequency is not set below the minimum frequency. The test verifies proper frequency limit management and configuration persistence.

  • test_update_min_max_scaling_frequencies_negative

    This test validates system robustness by ensuring that invalid frequency configurations are rejected. It attempts to set minimum frequencies higher than maximum frequencies and vice versa, verifying that the system prevents invalid configurations and maintains frequency boundary integrity. When invalid values are provided, the system either rejects them entirely or clamps them to valid ranges.
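The min/max boundary logic exercised by the last three tests can be sketched with a simple reject-on-invalid model. Note the kernel may instead clamp invalid values, as the negative test acknowledges; frequencies are in kHz as in the cpufreq sysfs interface, and the values are illustrative.

```python
# Minimal sketch of scaling_min/max boundary enforcement: a new minimum
# must not exceed the maximum and vice versa. This models the reject
# behavior; a real kernel may clamp instead. Frequencies in kHz (sysfs
# convention); the values are illustrative.

def set_min_freq(policy, new_min):
    """Apply a new scaling_min_freq; reject values above scaling_max_freq."""
    if new_min > policy["scaling_max_freq"]:
        return False
    policy["scaling_min_freq"] = new_min
    return True

def set_max_freq(policy, new_max):
    """Apply a new scaling_max_freq; reject values below scaling_min_freq."""
    if new_max < policy["scaling_min_freq"]:
        return False
    policy["scaling_max_freq"] = new_max
    return True

policy = {"scaling_min_freq": 1800000, "scaling_max_freq": 2500000}
print(set_min_freq(policy, 2000000))  # True: 2.0 GHz is within range
print(set_min_freq(policy, 2600000))  # False: above the 2.5 GHz maximum
print(set_max_freq(policy, 1900000))  # False: below the current 2.0 GHz minimum
```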

Integration tests validating Xen

These tests consist of Xen integration tests that can be found in yocto/meta-arm-auto-solutions/lib/oeqa/runtime/cases/test_40_virtualization.py.

  • DomU lifecycle management

    This test verifies DomU lifecycle management, including status checking, destroying, and restarting domains. It uses ptest-runner to execute the 01-xendomains.bats Bash Automated Test System (BATS) tests in yocto/meta-arm-auto-solutions/recipes-test/xen/files/tests/01-xendomains.bats.

  • FVP Guest Devices
    • networking

      Checks that the network device and its correct driver are available and accessible via the filesystem, and that outbound connections work (invoking wget).

    • cpu_hotplug

      Checks CPU availability and that basic hotplug functionality works, such as enabling and disabling CPUs and preventing all of them from being disabled at the same time.

    • RTC, virtiorng, and watchdog

      These devices are not available for the Xen guests and are skipped.

Mission Based Power Profile (MBPP) demonstration tests

These tests validate the Mission Based Power Profile (MBPP) demonstration script.

The script that implements the test is yocto/meta-arm-auto-solutions/lib/oeqa/runtime/cases/test_70_mission_based_profiles.py.

  • test_01_script_exists_and_is_executable

    Verifies that the mbpp.sh script exists in the /root directory and has the correct executable permissions (-r-xr--r--). Ensures the script is available, accessible, and executable for runtime validation.

  • test_02_help_and_list

    Verifies that running mbpp.sh with the -h and -l options displays the correct help information and available power profiles. Ensures that Parking, City and Highway profiles are listed without any console errors or missing details.

  • test_03_dump_initial_then_set_parking_and_verify

    Performs an initial state dump using mbpp.sh -d to verify the current power profile, then sets the system to parking mode using -s parking. Confirms that the mode change is successful and that the current mode dump matches the expected setting.

  • test_04_idempotent_all_profiles

    Verifies idempotent behavior by re-selecting each profile (parking, city and highway). Ensures that when a profile is already active, the script correctly reports “Power profile is already set.” without redundant reconfiguration.

  • test_05_case_insensitive_all_profiles

    Validates that profile names are case-insensitive. Checks variants such as PARKING, ParkIng and parking to ensure consistent behavior and correct application of CPU governor settings for each mode.

  • test_06_invalid_profile_selection

    Ensures proper handling of invalid inputs such as sport, eco and xyz. Verifies that the script returns an appropriate “Invalid profile selection” message and that the previously active profile remains unchanged.

  • test_07_toggle_all_modes

    Cycles through all valid profiles (city, highway and parking) multiple times. Ensures consistent transitions between modes and verifies that the correct CPU governors are applied after each switch without error or inconsistency.

  • test_08_guard_when_not_all_cores_online

    Validates that the MBPP script correctly detects when not all CPU cores are online. Ensures that in such cases, the script aborts the operation and reports “Not all N cores are online.” to maintain system integrity.

  • test_09_set_governor_to_default

    Restores all CPU frequency governors to the default schedutil mode after the MBPP tests are executed. Brings all CPU cores online, and updates each CPU’s governor to schedutil. Ensures that the test environment returns to a clean and consistent state for subsequent test runs or validation cycles.
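The online-core guard in test_08 depends on parsing the kernel's CPU range-list format (as found in /sys/devices/system/cpu/online, e.g. "0-7" or "0-3,6"). A minimal sketch of that parsing, with illustrative sample strings:

```python
# Minimal sketch of the online-core guard in test_08: expand the kernel's
# CPU range-list format (e.g. '0-7' or '0-3,6') and check that every
# expected core is online. Sample strings are illustrative.

def parse_cpu_list(ranges):
    """Expand a kernel CPU list such as '0-3,6' into a set of CPU numbers."""
    cpus = set()
    for part in ranges.strip().split(","):
        if "-" in part:
            lo, hi = part.split("-")
            cpus.update(range(int(lo), int(hi) + 1))
        elif part:
            cpus.add(int(part))
    return cpus

def all_cores_online(online, total):
    """True if the online list covers all CPUs 0..total-1."""
    return parse_cpu_list(online) == set(range(total))

print(all_cores_online("0-7\n", 8))    # True: all 8 cores online
print(all_cores_online("0-3,6\n", 8))  # False: cores 4, 5, and 7 are offline
```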

HIPC Baremetal Network Tests

The HIPC mid baremetal test suite validates end-to-end communication, shared memory layout, Linux enablement, and functional networking between Linux (PC) and Safety Island CL1.

It covers device-tree validation, remoteproc enablement, memory layout, ICMP connectivity, UDP/TCP flows, and boundary conditions.

  • test_01_mid_sanity_dt_and_shared_memory

    Validates CL1 presence and shared memory configuration in the device tree. It:

    • Validates si-cl1 node presence

    • Extracts memory-region information

    • Matches reserved-memory nodes

    • Verifies phandle mapping

    • Calculates total reserved SRAM

    The test passes if the CL1 node is present and the total reserved memory equals 512 KB.

  • test_02_enablement_linux_stack

    Validates Linux-side HIPC enablement and runtime state. It:

    • Checks dmesg logs for mailbox, remoteproc, and rpmsg

    • Verifies required kernel modules are present

    • Validates the remoteproc state

    • Verifies the ethsi1 and brsi1 interfaces are UP

    • Confirms the IP configuration

    The test passes if the required modules are present, remoteproc is active, and the interfaces are configured correctly.

  • test_03_memory_layout

    Validates the reserved-memory layout for IPC. It:

    • Identifies the required reserved-memory nodes

    • Validates that each region size is 128 KB

    • Ensures the regions are contiguous

    • Confirms the total memory equals 512 KB

    • Verifies the remoteproc runtime state

    The test passes if all regions are contiguous and correctly sized, and the total memory is 512 KB.

  • test_04_icmp_bidirectional

    Validates ICMP connectivity between Linux and CL1. It:

    • Executes ping from Zephyr to Linux

    • Executes ping from Linux to Zephyr

    The test passes if both directions complete with 0 percent packet loss.

  • test_05_udp_pc_to_cl1

    Validates UDP transfer from the PC to CL1. It:

    • Starts a UDP server on CL1

    • Runs an iperf UDP client on Linux

    • Verifies packet transmission and session statistics

    The test passes if no packet loss is observed and Zephyr reports valid session statistics.

  • test_06_tcp_pc_to_cl1

    Validates TCP data transfer from the PC to CL1. It:

    • Starts a TCP server on CL1

    • Runs an iperf TCP client on Linux

    • Verifies the bandwidth output and session completion

    The test passes if bandwidth is reported and Zephyr confirms successful session completion.

  • test_07_udp_cl1_to_pc

    Validates UDP transfer from CL1 to the PC. It:

    • Starts a UDP server on Linux

    • Executes a UDP upload from CL1

    • Verifies the packet count, order, and loss

    The test passes if no packets are lost or reordered and Linux reports zero packet loss.

  • test_08_tcp_cl1_to_pc

    Validates TCP transfer from CL1 to the PC. It:

    • Starts a TCP server on Linux

    • Executes a TCP upload from CL1

    • Verifies packet transmission and the server output

    The test passes if Linux reports valid throughput and Zephyr reports zero errors.

  • test_09_boundary_payload_sizes

    Validates UDP payload boundary handling. It:

    • Tests valid payload sizes

    • Tests oversized payload handling

    • Verifies system stability

    The test passes if valid payloads complete with zero packet loss and no kernel crash occurs.

  • test_10_boundary_multistream

    Validates UDP multistream behavior. It:

    • Runs parallel streams with P=2 and P=4

    • Verifies the aggregated transmission statistics

    The test passes if all streams complete successfully with zero packet loss.
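The memory-layout conditions in test_03 (four 128 KB regions, contiguous, 512 KB total) can be expressed as a small validation function. The base addresses below are illustrative, not the real carve-out:

```python
# Minimal sketch of the reserved-memory layout check in test_03: the IPC
# regions must each be 128 KB, be contiguous, and sum to 512 KB. Base
# addresses are illustrative, not the platform's real carve-out.

KIB = 1024

def layout_ok(regions):
    """Check (base, size) regions: each 128 KB, contiguous, 512 KB total."""
    regions = sorted(regions)
    if any(size != 128 * KIB for _, size in regions):
        return False
    contiguous = all(base + size == regions[i + 1][0]
                     for i, (base, size) in enumerate(regions[:-1]))
    total = sum(size for _, size in regions)
    return contiguous and total == 512 * KIB

regions = [(0xA0000000 + i * 128 * KIB, 128 * KIB) for i in range(4)]
print(layout_ok(regions))      # True: 4 contiguous 128 KB regions
print(layout_ok(regions[:3]))  # False: only 384 KB reserved
```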

SMCF Integration Tests

The SMCF test suite validates SCP-side SMCF client functionality, integration execution, and sensor monitoring behavior.

  • test_01_smcf_client_start

    Verifies SMCF client startup via the SCP logs.

    The test passes if the expected startup messages are present in the logs.

  • test_02_execute_smcf_test

    Executes the SMCF integration test via the SCP CLI.

    The test passes if the test start marker, a summary with zero failures, and the completion marker are present.

  • test_03_run_smcf_3x

    Executes the SMCF test three times to validate stability.

    The test passes if all runs complete successfully without failures.

  • test_04_smcf_client_sensor_monitor

    Validates the sensor monitoring output.

    The test passes if sensor values are reported correctly in the expected format.

PFDI Monitoring on Safety Island Tests

The PFDI SI monitoring test validates monitoring behavior of PFDI on the Safety Island.

  • test_si_pfdi_monitoring

    Validates monitoring across all supported clusters and cores.

    The test passes if all cluster and core combinations for pfdi-monitor complete without failure.