diff --git a/Runner/suites/Kernel/RT-tests/CyclicDeadline/CyclicDeadline.yaml b/Runner/suites/Kernel/RT-tests/CyclicDeadline/CyclicDeadline.yaml new file mode 100755 index 00000000..f584ae30 --- /dev/null +++ b/Runner/suites/Kernel/RT-tests/CyclicDeadline/CyclicDeadline.yaml @@ -0,0 +1,30 @@ +metadata: + name: CyclicDeadline + format: "Lava-Test Test Definition 1.0" + description: "Run rt-tests cyclicdeadline in JSON mode and parse results without requiring python3." + os: + - linux + scope: + - performance + - preempt-rt + +params: + INTERVAL: "1000" + STEP: "500" + THREADS: "1" + DURATION: "2m" + BACKGROUND_CMD: "" + ITERATIONS: "1" + USER_BASELINE: "" + BINARY: "" + OUT_DIR: "./logs_CyclicDeadline" + VERBOSE: "0" + PROGRESS_EVERY: "1" + HEARTBEAT_SEC: "10" + +run: + steps: + - REPO_PATH=$PWD + - cd Runner/suites/Kernel/RT-tests/CyclicDeadline + - ./run.sh --interval "${INTERVAL}" --step "${STEP}" --threads "${THREADS}" --duration "${DURATION}" --iterations "${ITERATIONS}" --background-cmd "${BACKGROUND_CMD}" --user-baseline "${USER_BASELINE}" --binary "${BINARY}" --out "${OUT_DIR}" --progress-every "${PROGRESS_EVERY}" --heartbeat-sec "${HEARTBEAT_SEC}" $( [ "${VERBOSE}" = "1" ] && echo "--verbose" ) || true + - $REPO_PATH/Runner/utils/send-to-lava.sh CyclicDeadline.res diff --git a/Runner/suites/Kernel/RT-tests/CyclicDeadline/CyclicDeadline_README.md b/Runner/suites/Kernel/RT-tests/CyclicDeadline/CyclicDeadline_README.md new file mode 100644 index 00000000..3cdd6f2d --- /dev/null +++ b/Runner/suites/Kernel/RT-tests/CyclicDeadline/CyclicDeadline_README.md @@ -0,0 +1,187 @@ +# CyclicDeadline + +## Overview + +`CyclicDeadline` is the qcom-linux-testkit wrapper for the `rt-tests` `cyclicdeadline` binary. + +It is similar to `cyclictest`, but instead of using `SCHED_FIFO` with `nanosleep()` to measure jitter, it uses `SCHED_DEADLINE` and treats the deadline as the wakeup interval. 
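Each iteration ultimately reduces to a single `cyclicdeadline` invocation. The sketch below shows the command line the wrapper assembles from the YAML defaults above (illustrative only: the real assembly lives in `run.sh`, and the command is printed rather than executed here):

```shell
# Illustrative only: mirror run.sh's argument assembly using the YAML defaults.
INTERVAL=1000 STEP=500 THREADS=1 DURATION=2m
jsonfile="./logs_CyclicDeadline/cyclicdeadline-1.json"

# Build the argument list positionally, the same way the wrapper does.
set -- cyclicdeadline -q -i "$INTERVAL" -s "$STEP" -t "$THREADS" -D "$DURATION" --json="$jsonfile"

# run.sh would now run "$@" (with heartbeat logging); here we only print it.
printf '%s\n' "$*"
```

Using `set --` keeps the argument list safe against whitespace without needing arrays, which is why the wrapper can stay POSIX `sh`.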
+
+This wrapper:
+
+- runs `cyclicdeadline` in JSON mode
+- parses KPI using `lib_rt.sh`
+- supports repeated iterations
+- prints per-iteration, aggregate, and per-thread aggregate results
+- can keep partial results on user interrupt when supported by the common RT helpers
+- writes final PASS/FAIL/SKIP summary to `CyclicDeadline.res`
+
+The script is LAVA-friendly and always exits `0`. CI gating should use the `.res` file.
+
+## Defaults
+
+Defaults are aligned to the Linaro test-definition behavior unless explicitly overridden.
+
+- `INTERVAL=1000`
+- `STEP=500`
+- `THREADS=1`
+- `DURATION=5m`
+- `BACKGROUND_CMD=""`
+- `ITERATIONS=1`
+- `USER_BASELINE=""`
+
+## Files generated
+
+By default, the test writes logs under:
+
+`./logs_CyclicDeadline`
+
+Typical outputs:
+
+- `CyclicDeadline.res` - final PASS/FAIL/SKIP summary
+- `logs_CyclicDeadline/result.txt` - parsed KPI output
+- `logs_CyclicDeadline/iter_kpi.txt` - per-iteration KPI
+- `logs_CyclicDeadline/agg_kpi.txt` - overall aggregate KPI
+- `logs_CyclicDeadline/thread_agg_kpi.txt` - per-thread aggregate KPI
+- `logs_CyclicDeadline/cyclicdeadline-<N>.json` - raw JSON for iteration N
+- `logs_CyclicDeadline/cyclicdeadline_stdout_iter<N>.log` - console/stdout capture for iteration N
+- `logs_CyclicDeadline/max_latencies.txt` - extracted max latency values when baseline comparison is used
+
+## Usage
+
+```sh
+./run.sh [OPTIONS]
+```
+
+## Supported wrapper options
+
+### Wrapper control
+
+- `--out DIR`
+  - Output directory
+- `--result FILE`
+  - Result text file path
+- `--duration TIME`
+  - Test duration passed as `-D TIME`
+- `--iterations N`
+  - Number of iterations to run
+- `--background-cmd CMD`
+  - Optional background workload to run during the test
+- `--binary PATH`
+  - Explicit path to the `cyclicdeadline` binary
+- `--progress-every N`
+  - Progress message cadence across iterations
+- `--heartbeat-sec N`
+  - Periodic "still running" heartbeat while a long iteration is executing
+- `--verbose`
+  - Enable
additional wrapper logging
+
+### cyclicdeadline options supported by the wrapper
+
+- `--interval-us USEC`
+  - Base interval in microseconds, maps to `-i`
+- `--step-us USEC`
+  - Step size in microseconds, maps to `-s`
+- `--threads N`
+  - Number of threads, maps to `-t`
+  - If set to `0`, wrapper expands it to `nproc`
+- `--user-baseline VALUE`
+  - Baseline max latency to compare against when iteration count is high enough
+
+## Baseline comparison behavior
+
+When `ITERATIONS` is greater than `2`, the wrapper can evaluate max latency results against a baseline.
+
+Behavior:
+
+- extracts all `max-latency` values from per-iteration parsed output
+- if `USER_BASELINE` is set, that value is used as the baseline
+- if `USER_BASELINE` is not set, the gate is skipped (the extracted values are still written to `max_latencies.txt`)
+- when gating, counts how many max latency values are above the baseline
+- compares that count against `ITERATIONS / 2`; the gate fails when the count reaches that limit
+
+This provides a simple consistency check across repeated runs.
+
+## Examples
+
+Run one default iteration using auto-detected binary:
+
+```sh
+./run.sh
+```
+
+Run with explicit binary, 3 iterations, and 1 minute duration:
+
+```sh
+./run.sh --binary /tmp/cyclicdeadline --duration 1m --iterations 3
+```
+
+Run with one thread per CPU:
+
+```sh
+./run.sh --threads 0 --duration 1m
+```
+
+Run with custom interval and step:
+
+```sh
+./run.sh --interval-us 1000 --step-us 500 --threads 4 --duration 60s
+```
+
+Run with baseline comparison:
+
+```sh
+./run.sh --binary /tmp/cyclicdeadline --iterations 5 --user-baseline 120
+```
+
+Run with heartbeat messages every 10 seconds:
+
+```sh
+./run.sh --binary /tmp/cyclicdeadline --duration 60s --heartbeat-sec 10
+```
+
+## LAVA integration notes
+
+Typical YAML wiring passes parameters into `run.sh` and reports using:
+
+```sh
+$REPO_PATH/Runner/utils/send-to-lava.sh CyclicDeadline.res
+```
+
+Recommended CI behavior:
+
+- rely on `CyclicDeadline.res` for PASS/FAIL/SKIP
+- keep `result.txt` and JSON files as artifacts for debugging
+- use
`--binary` when the binary is staged outside standard PATH + +## Expected console behavior + +The wrapper may print: + +- environment and scheduler context +- selected binary and options +- per-iteration start messages +- optional heartbeat messages for long-running iterations +- per-iteration KPI +- aggregate KPI +- per-thread aggregate KPI +- final PASS/FAIL/SKIP summary + +## Interrupt behavior + +If the shared RT helper functions in `lib_rt.sh` are present and enabled, `Ctrl-C` can preserve partial output and mark the run as `SKIP` instead of `FAIL`. + +This depends on the common RT helper implementation already being present in your tree. + +## Dependencies + +The wrapper expects: + +- `cyclicdeadline` binary available either in `PATH` or via `--binary` +- `functestlib.sh` +- `lib_rt.sh` +- standard shell utilities such as `awk`, `sed`, `grep`, `tee`, `mkdir`, `cat`, `tr`, and `date` + +## Notes + +- Keep changes aligned with existing qcom-linux-testkit conventions. +- For CI, use the `.res` file as the authoritative result. diff --git a/Runner/suites/Kernel/RT-tests/CyclicDeadline/run.sh b/Runner/suites/Kernel/RT-tests/CyclicDeadline/run.sh new file mode 100755 index 00000000..19578870 --- /dev/null +++ b/Runner/suites/Kernel/RT-tests/CyclicDeadline/run.sh @@ -0,0 +1,294 @@ +#!/bin/sh +# Copyright (c) Qualcomm Technologies, Inc. and/or its subsidiaries. +# SPDX-License-Identifier: BSD-3-Clause +# +# CyclicDeadline wrapper for qcom-linux-testkit +# - Runs rt-tests cyclicdeadline ITERATIONS times (JSON output) +# - Parses KPI using lib_rt.sh (no python required) +# - Emits KPI lines to result.txt and summary PASS/FAIL/SKIP to CyclicDeadline.res +# +# Notes: +# - Always exits 0 (LAVA-friendly). Use CyclicDeadline.res for gating. 
+ +SCRIPT_DIR="$( + cd "$(dirname "$0")" || exit 1 + pwd +)" + +INIT_ENV="" +SEARCH="$SCRIPT_DIR" +while [ "$SEARCH" != "/" ]; do + if [ -f "$SEARCH/init_env" ]; then + INIT_ENV="$SEARCH/init_env" + break + fi + SEARCH=$(dirname "$SEARCH") +done + +if [ -z "$INIT_ENV" ]; then + echo "[ERROR] Could not find init_env (starting at $SCRIPT_DIR)" >&2 + exit 1 +fi + +if [ -z "${__INIT_ENV_LOADED:-}" ]; then + # shellcheck disable=SC1090 + . "$INIT_ENV" + __INIT_ENV_LOADED=1 +fi + +# shellcheck disable=SC1091 +. "$TOOLS/functestlib.sh" +# shellcheck disable=SC1091 +. "$TOOLS/lib_rt.sh" + +TESTNAME="CyclicDeadline" +test_path=$(find_test_case_by_name "$TESTNAME") +if [ -n "$test_path" ]; then + : +else + test_path="$SCRIPT_DIR" +fi + +RES_FILE="$test_path/${TESTNAME}.res" +OUT_DIR="${OUT_DIR:-$test_path/logs_${TESTNAME}}" +RESULT_TXT="${RESULT_TXT:-$OUT_DIR/result.txt}" + +INTERVAL="${INTERVAL:-1000}" +STEP="${STEP:-500}" +THREADS="${THREADS:-1}" +DURATION="${DURATION:-5m}" +BACKGROUND_CMD="${BACKGROUND_CMD:-}" +ITERATIONS="${ITERATIONS:-1}" +USER_BASELINE="${USER_BASELINE:-}" +QUIET="${QUIET:-true}" +BINARY="${BINARY:-}" +VERBOSE="${VERBOSE:-0}" +PROGRESS_EVERY="${PROGRESS_EVERY:-1}" +HEARTBEAT_SEC="${HEARTBEAT_SEC:-10}" + +usage() { + cat <"$RES_FILE" + exit 0 + ;; + esac + shift +done + +LOG_PREFIX="$OUT_DIR/cyclicdeadline" +TMP_ONE="$OUT_DIR/tmp_result_one.txt" +ITER_KPI="$OUT_DIR/iter_kpi.txt" +AGG_KPI="$OUT_DIR/agg_kpi.txt" +THREAD_AGG_KPI="$OUT_DIR/thread_agg_kpi.txt" +MAX_LAT_FILE="$OUT_DIR/max_latencies.txt" +GATE_KPI="$OUT_DIR/gate_kpi.txt" + +rt_prepare_output_layout \ + "$OUT_DIR" \ + "$RESULT_TXT" \ + "$TMP_ONE" \ + "$ITER_KPI" \ + "$AGG_KPI" \ + "$THREAD_AGG_KPI" \ + "$MAX_LAT_FILE" \ + "$GATE_KPI" + +rt_check_clock_sanity "$TESTNAME" || true + +log_info "------------------- Starting $TESTNAME -------------------" +log_info "$TESTNAME: Checking for the tools required to run cyclicdeadline" + +if ! 
rt_require_common_tools uname awk sed grep tr head tail mkdir cat sh sleep kill date sort wc; then + log_skip "$TESTNAME: basic tools missing" + echo "$TESTNAME SKIP" >"$RES_FILE" + exit 0 +fi + +if ! rt_require_json_helpers; then + echo "$TESTNAME SKIP" >"$RES_FILE" + exit 0 +fi + +rt_normalize_common_params + +CDL_BIN=$(rt_resolve_binary cyclicdeadline "$BINARY" 2>/dev/null || echo "") +if [ -z "$CDL_BIN" ] || [ ! -x "$CDL_BIN" ]; then + log_skip "$TESTNAME: cyclicdeadline binary not found/executable (${CDL_BIN:-none})" + echo "$TESTNAME SKIP" >"$RES_FILE" + exit 0 +fi + +rt_log_common_runtime_env "$TESTNAME" "$CDL_BIN" +log_info "$TESTNAME: iterations=$ITERATIONS duration=$DURATION interval=$INTERVAL step=$STEP threads=$THREADS" +log_info "$TESTNAME: heartbeat=$HEARTBEAT_SEC seconds" + +RT_INTERRUPTED=0 +export RT_INTERRUPTED + +trap 'rt_handle_int; perf_rt_bg_stop >/dev/null 2>&1 || true' INT TERM +trap 'perf_rt_bg_stop >/dev/null 2>&1 || true' EXIT + +perf_rt_bg_start "$TESTNAME" "$BACKGROUND_CMD" + +overall_fail=0 + +i=1 +while [ "$i" -le "$ITERATIONS" ] 2>/dev/null; do + rt_log_iteration_progress "$TESTNAME" "$i" "$ITERATIONS" "$PROGRESS_EVERY" + + jsonfile="${LOG_PREFIX}-${i}.json" + stdoutlog="${OUT_DIR}/cyclicdeadline_stdout_iter${i}.log" + + set -- "$CDL_BIN" + case "$QUIET" in + true|TRUE|1|yes|YES) + set -- "$@" -q + ;; + esac + set -- "$@" -i "$INTERVAL" -s "$STEP" -t "$THREADS" -D "$DURATION" --json="$jsonfile" + + if rt_run_json_iteration "$TESTNAME" "$HEARTBEAT_SEC" "$stdoutlog" "$jsonfile" "$@"; then + rc=$RT_RUN_RC + else + rc=$RT_RUN_RC + fi + + if [ "${RT_INTERRUPTED:-0}" -eq 1 ] 2>/dev/null; then + log_warn "$TESTNAME: interrupted by user during iteration $i/$ITERATIONS" + break + fi + + if [ "$rc" -ne 0 ] 2>/dev/null; then + log_fail "$TESTNAME: cyclicdeadline exited rc=$rc (iter $i/$ITERATIONS)" + overall_fail=1 + fi + + if [ "${RT_RUN_JSON_OK:-0}" -ne 1 ] 2>/dev/null; then + log_fail "$TESTNAME: missing json output: $jsonfile" + 
overall_fail=1 + i=$((i + 1)) + continue + fi + + if ! rt_parse_and_append_iteration_kpi "cyclicdeadline" "$jsonfile" "$TMP_ONE" "$ITER_KPI" "$RESULT_TXT" "$i"; then + log_fail "$TESTNAME: failed to parse/store KPI (iter $i/$ITERATIONS): $jsonfile" + overall_fail=1 + fi + + i=$((i + 1)) +done + +perf_rt_bg_stop >/dev/null 2>&1 || true + +rt_emit_kpi_block "$TESTNAME" "per-iteration results" "$ITER_KPI" +rt_emit_aggregate_kpi "$TESTNAME" "cyclicdeadline" "$ITER_KPI" "$AGG_KPI" "$RESULT_TXT" || true +rt_emit_thread_aggregate_kpi "$TESTNAME" "cyclicdeadline" "$ITER_KPI" "$THREAD_AGG_KPI" "$RESULT_TXT" || true + +if [ "${RT_INTERRUPTED:-0}" -ne 1 ] 2>/dev/null && [ "$ITERATIONS" -gt 2 ] 2>/dev/null; then + if rt_collect_named_metric_values "$RESULT_TXT" "max-latency" "$MAX_LAT_FILE"; then + if [ -n "$USER_BASELINE" ]; then + if ! rt_evaluate_majority_threshold_gate "$TESTNAME" "$ITERATIONS" "$MAX_LAT_FILE" "$GATE_KPI" "$RESULT_TXT" "$USER_BASELINE" "max-latency" "us"; then + log_fail "$TESTNAME: baseline gate failed (${RT_BASELINE_FAIL_COUNT} >= ${RT_BASELINE_FAIL_LIMIT})" + overall_fail=1 + fi + else + log_info "$TESTNAME: no user baseline provided; skipping baseline gate" + fi + else + log_warn "$TESTNAME: no max-latency values found for baseline comparison" + overall_fail=1 + fi +fi + +if rt_kpi_file_has_fail "cyclicdeadline" "$ITER_KPI"; then + overall_fail=1 +fi + +rt_emit_interrupt_aware_result "$TESTNAME" "$RES_FILE" "$RESULT_TXT" "$OUT_DIR" "${RT_INTERRUPTED:-0}" "$overall_fail" +exit 0 diff --git a/Runner/suites/Kernel/RT-tests/Cyclictest/README_RT_Cyclictest.md b/Runner/suites/Kernel/RT-tests/Cyclictest/README_RT_Cyclictest.md new file mode 100644 index 00000000..e921d79f --- /dev/null +++ b/Runner/suites/Kernel/RT-tests/Cyclictest/README_RT_Cyclictest.md @@ -0,0 +1,238 @@ +# Cyclictest (rt-tests / cyclictest) + +This test is part of **qcom-linux-testkit** and wraps `rt-tests` **cyclictest** to measure timer wakeup latency and report KPIs (min/avg/max) in a 
LAVA-friendly way.
+
+It is designed to work on minimal images **without Python modules** by parsing cyclictest JSON output using POSIX shell helpers in `Runner/utils/lib_rt.sh`.
+
+---
+
+## What this test does
+
+For each iteration, the wrapper:
+
+1. Validates required tools and privileges (must run as **root**).
+2. Optionally starts a background load command (`BACKGROUND_CMD`) to stress the system.
+3. Runs `cyclictest` with `--json=<file>` and captures console output into a per-iteration `.out`.
+4. Parses the JSON and emits KPI lines like:
+   - `t0-min-latency pass <value> us`
+   - `t0-avg-latency pass <value> us`
+   - `t0-max-latency pass <value> us`
+   - … (for additional threads, `t1-…`, `t2-…`, etc., if present in JSON)
+5. Aggregates **t0** latency KPIs across iterations and reports averages.
+6. Emits `.res` as PASS/FAIL/SKIP for CI/LAVA gating.
+
+> Note: The test is **warn-only** if the kernel is not RT-enabled. It will still run and report latencies.
+
+---
+
+## Location
+
+- Test: `Runner/suites/Kernel/RT-tests/Cyclictest/run.sh`
+- Helpers: `Runner/utils/lib_rt.sh` (JSON parsing, progress logging, background load helpers)
+
+---
+
+## Prerequisites
+
+### Permissions
+- Must run as **root** (`id -u == 0`), since cyclictest typically uses RT scheduling and `mlockall()`.
+
+### Tools required
+The wrapper expects the following tools to exist (or it will SKIP):
+- `uname`, `awk`, `sed`, `grep`, `tr`, `head`, `tail`, `mkdir`, `cat`, `sh`, `tee`, `sleep`, `kill`, `date`
+- `cyclictest` executable (either in `$PATH` or provided via `--binary` / `BINARY`)
+
+### Kernel considerations (recommended)
+- RT kernel (PREEMPT_RT) is recommended for meaningful RT KPIs.
+- The script prints a warning if `uname -r` / `uname -v` doesn't look RT-enabled.
+
+Useful runtime knobs (optional):
+- `/proc/sys/kernel/sched_rt_runtime_us` (RT bandwidth)
+- system frequency governor / CPU online state can affect results.
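The shell-only JSON parsing mentioned above can be pictured with a few lines of `awk`. The sample file below is hypothetical and simplified (real cyclictest JSON carries more fields); the actual parser lives in `Runner/utils/lib_rt.sh`:

```shell
# Hypothetical minimal sample of a cyclictest-style "thread" section.
cat <<'EOF' > /tmp/cyclictest_sample.json
{
  "thread": {
    "0": {
      "min": 5,
      "avg": 8,
      "max": 41
    }
  }
}
EOF

# Emit KPI lines in the wrapper's "tN-<metric>-latency pass <value> us" shape.
awk '
  /"[0-9]+": *{/ { tid = $1; gsub(/[":{]/, "", tid) }   # remember thread id
  /"(min|avg|max)":/ {
    key = $1; val = $2
    gsub(/[":]/, "", key)                               # "min": -> min
    gsub(/,/, "", val)                                  # 5, -> 5
    printf "t%s-%s-latency pass %s us\n", tid, key, val
  }
' /tmp/cyclictest_sample.json | tee /tmp/parsed_sample.txt
```

This prints one KPI line per metric (`t0-min-latency pass 5 us`, and so on), which is the same line shape the wrapper appends to `result.txt`.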
+ +--- + +## Basic usage + +From the test directory: + +```sh +cd Runner/suites/Kernel/RT-tests/Cyclictest +sudo ./run.sh +``` + +If cyclictest is not in PATH: + +```sh +sudo ./run.sh --binary /tmp/cyclictest +``` + +Run multiple iterations (example: 5 runs): + +```sh +sudo ./run.sh --binary /tmp/cyclictest --iterations 5 +``` + +Use more threads (example: 8): + +```sh +sudo ./run.sh --binary /tmp/cyclictest --threads 8 +``` + +Change duration (example: 30 seconds): + +```sh +sudo ./run.sh --binary /tmp/cyclictest --duration 30s +``` + +> Note: `THREADS=0` means “auto”: uses `nproc` and sets `AFFINITY=all`. + +--- + +## Parameters (Environment / LAVA params) + +All options can be provided as environment variables (LAVA `params:`) or as CLI options. + +### Output / logging +- `OUT_DIR` (default: `./logs_Cyclictest`) +- `RESULT_TXT` (default: `$OUT_DIR/result.txt`) +- `VERBOSE` (default: `0`) + +### cyclictest control +- `PRIORITY` (default: `98`) → cyclictest `-p` +- `INTERVAL` (default: `1000`) microseconds → cyclictest `-i` +- `THREADS` (default: `1`) → cyclictest `-t` +- `AFFINITY` (default: `0`) CPU id or `all` → cyclictest `-a` +- `DURATION` (default: `1m`) → cyclictest `-D` +- `HISTOGRAM_MAX` (default: empty) → cyclictest `-h` (optional) + +### Iteration / progress +- `ITERATIONS` (default: `1`) number of iterations +- `PROGRESS_STEP` (default: `5`) seconds between “still running…” progress logs + +### Background load +- `BACKGROUND_CMD` (default: empty) command to run during test (stopped afterward) + +### Binary override +- `BINARY` (default: empty) explicit path to cyclictest executable + +--- + +## CLI options + +`run.sh` supports: + +- `--out DIR` +- `--result FILE` +- `--priority N` +- `--interval USEC` +- `--threads N` +- `--affinity CPU|all` +- `--duration DUR` +- `--histogram-max USEC` +- `--iterations N` +- `--progress-step S` +- `--background-cmd CMD` +- `--binary PATH` +- `--verbose` +- `-h, --help` + +--- + +## Output files + +Within `OUT_DIR` 
(default: `logs_Cyclictest`): + +- `cyclictest_iterN.json` : cyclictest JSON output per iteration +- `cyclictest_iterN.out` : cyclictest stdout/stderr per iteration +- `parsed_iterN.txt` : parsed KPI lines per iteration (from JSON) +- `metrics_all.txt` : KPI lines used for averaging (t0 only by default) +- `average_summary.txt` : computed averages across iterations (t0 min/avg/max) +- `result.txt` : concatenated per-iteration KPI lines + averages + final verdict + +At the test root: +- `Cyclictest.res` : single-line PASS/FAIL/SKIP result for LAVA + +--- + +## Console output (what to expect) + +You’ll see: + +- Start banner +- Tool checks +- RT kernel status (INFO or WARN) +- System context (uname, nproc, cpu online, governor, etc.) +- Progress logs every `PROGRESS_STEP` seconds while cyclictest runs +- cyclictest’s own output (the `T:` lines) +- Parsed KPI lines per iteration: + - `t0-min-latency pass ... us` + - `t0-avg-latency pass ... us` + - `t0-max-latency pass ... us` +- Final averages across iterations (t0) +- PASS/FAIL + +--- + +## LAVA integration + +### Do we need to pass variables in the `run:` step? + +Usually **no**. LAVA exports `params:` as environment variables before executing `run.steps`. +So this is typically enough: + +```yaml +run: + steps: + - REPO_PATH=$PWD + - cd Runner/suites/Kernel/RT-tests/Cyclictest + - ./run.sh || true + - $REPO_PATH/Runner/utils/send-to-lava.sh Cyclictest.res +``` + +If you want to override a param only for one step, you can still prefix env vars inline. + +--- + +## Troubleshooting + +### 1) “must run as root” / SKIP +Run via `sudo` or ensure the test is executed as root in LAVA. + +### 2) “cyclictest binary not found” +- Install `rt-tests`, or +- Provide explicit path: `--binary /path/to/cyclictest` or `BINARY=/path/to/cyclictest`. + +### 3) Latency lines not printed +Ensure JSON parsing is working: +- Check `OUT_DIR/parsed_iterN.txt` exists and contains `t*-min/avg/max-latency` lines. 
+- If the JSON format changed, update the parser in `Runner/utils/lib_rt.sh`. + +### 4) Very large max latency spikes (e.g., tens of ms) +Common causes: +- Non-RT kernel or RT throttling (`sched_rt_runtime_us`) +- CPU frequency scaling / idle states +- Interrupt storms / background load +- Thermal throttling +Try: +- Use RT kernel +- Pin affinity (`AFFINITY=0` or isolated CPU) +- Reduce system activity / background load +- Increase priority cautiously + +--- + +## Notes / Design choices + +- This wrapper is POSIX shell and ShellCheck-friendly (avoid python dependency). +- It produces `.res` and always exits `0` (LAVA-friendly), while still gating via `.res`. +- KPI lines are intended to be easy to post-process and trend. + +--- + +## Maintainers / Contribution + +If you update `lib_rt.sh`, please keep: +- POSIX compatibility +- ShellCheck cleanliness (avoid `A && B || C` for control flow, avoid unused vars) +- Robustness on minimal images diff --git a/Runner/suites/Kernel/RT-tests/Cyclictest/RT_Cyclictest.yaml b/Runner/suites/Kernel/RT-tests/Cyclictest/RT_Cyclictest.yaml new file mode 100755 index 00000000..641e9f24 --- /dev/null +++ b/Runner/suites/Kernel/RT-tests/Cyclictest/RT_Cyclictest.yaml @@ -0,0 +1,33 @@ +metadata: + name: Cyclictest + format: "Lava-Test Test Definition 1.0" + description: "Run rt-tests cyclictest and collect latency KPI in JSON; parse results without requiring python3." 
+ os: + - linux + scope: + - performance + - preempt-rt + +params: + INTERVAL: "1000" + THREADS: "1" + DURATION: "2m" + BACKGROUND_CMD: "" + ITERATIONS: "1" + PRIO: "98" + QUIET: "true" + MLOCKALL: "true" + SMP: "false" + BINARY: "" + VERBOSE: "0" + PROGRESS_EVERY: "1" + HEARTBEAT_SEC: "10" + OUT_DIR: "./logs_Cyclictest" + +run: + steps: + - 'REPO_PATH="$PWD"' + - 'cd Runner/suites/Kernel/RT-tests/Cyclictest' + - './run.sh --out "${OUT_DIR}" --interval "${INTERVAL}" --threads "${THREADS}" --duration "${DURATION}" --iterations "${ITERATIONS}" --background-cmd "${BACKGROUND_CMD}" --binary "${BINARY}" --progress-every "${PROGRESS_EVERY}" --heartbeat-sec "${HEARTBEAT_SEC}" $( [ "${VERBOSE}" = "1" ] && echo "--verbose" ) || true' + - 'if [ -f CyclicTest.res ]; then sed "s/^CyclicTest /Cyclictest /" CyclicTest.res > Cyclictest.res; fi' + - '$REPO_PATH/Runner/utils/send-to-lava.sh Cyclictest.res' diff --git a/Runner/suites/Kernel/RT-tests/Cyclictest/run.sh b/Runner/suites/Kernel/RT-tests/Cyclictest/run.sh new file mode 100755 index 00000000..f3287bd8 --- /dev/null +++ b/Runner/suites/Kernel/RT-tests/Cyclictest/run.sh @@ -0,0 +1,262 @@ +#!/bin/sh +# Copyright (c) Qualcomm Technologies, Inc. and/or its subsidiaries. +# SPDX-License-Identifier: BSD-3-Clause +# +# CyclicTest wrapper for qcom-linux-testkit +# - Runs rt-tests cyclictest ITERATIONS times (JSON output) +# - Parses KPI using lib_rt.sh (no python required) +# - Emits KPI lines to result.txt and summary PASS/FAIL/SKIP to CyclicTest.res +# +# Notes: +# - Always exits 0 (LAVA-friendly). Use CyclicTest.res for gating. 
+ +SCRIPT_DIR="$( + cd "$(dirname "$0")" || exit 1 + pwd +)" + +INIT_ENV="" +SEARCH="$SCRIPT_DIR" +while [ "$SEARCH" != "/" ]; do + if [ -f "$SEARCH/init_env" ]; then + INIT_ENV="$SEARCH/init_env" + break + fi + SEARCH=$(dirname "$SEARCH") +done + +if [ -z "$INIT_ENV" ]; then + echo "[ERROR] Could not find init_env (starting at $SCRIPT_DIR)" >&2 + exit 1 +fi + +if [ -z "${__INIT_ENV_LOADED:-}" ]; then + # shellcheck disable=SC1090 + . "$INIT_ENV" + __INIT_ENV_LOADED=1 +fi + +# shellcheck disable=SC1091 +. "$TOOLS/functestlib.sh" +# shellcheck disable=SC1091 +. "$TOOLS/lib_rt.sh" + +TESTNAME="CyclicTest" +test_path=$(find_test_case_by_name "$TESTNAME") +[ -n "$test_path" ] || test_path="$SCRIPT_DIR" + +RES_FILE="$test_path/${TESTNAME}.res" +OUT_DIR="${OUT_DIR:-$test_path/logs_${TESTNAME}}" +RESULT_TXT="${RESULT_TXT:-$OUT_DIR/result.txt}" + +INTERVAL="${INTERVAL:-1000}" +THREADS="${THREADS:-1}" +DURATION="${DURATION:-5m}" +BACKGROUND_CMD="${BACKGROUND_CMD:-}" +ITERATIONS="${ITERATIONS:-1}" +PRIO="${PRIO:-98}" +QUIET="${QUIET:-true}" +MLOCKALL="${MLOCKALL:-true}" +SMP="${SMP:-false}" +BINARY="${BINARY:-}" +VERBOSE="${VERBOSE:-0}" +PROGRESS_EVERY="${PROGRESS_EVERY:-1}" +HEARTBEAT_SEC="${HEARTBEAT_SEC:-10}" + +usage() { + cat <"$RES_FILE" + exit 0 + ;; + esac + shift +done + +LOG_PREFIX="$OUT_DIR/cyclictest" +TMP_ONE="$OUT_DIR/tmp_result_one.txt" +ITER_KPI="$OUT_DIR/iter_kpi.txt" +AGG_KPI="$OUT_DIR/agg_kpi.txt" +THREAD_AGG_KPI="$OUT_DIR/thread_agg_kpi.txt" + +rt_prepare_output_layout \ + "$OUT_DIR" \ + "$RESULT_TXT" \ + "$TMP_ONE" \ + "$ITER_KPI" \ + "$AGG_KPI" \ + "$THREAD_AGG_KPI" + +rt_check_clock_sanity "$TESTNAME" || true + +log_info "------------------- Starting $TESTNAME -------------------" +log_info "$TESTNAME: Checking for the tools required to run cyclictest" + +if ! 
rt_require_common_tools uname awk sed grep tr head tail mkdir cat sh sleep kill date sort wc; then + log_skip "$TESTNAME: basic tools missing" + echo "$TESTNAME SKIP" >"$RES_FILE" + exit 0 +fi + +if ! rt_require_json_helpers; then + echo "$TESTNAME SKIP" >"$RES_FILE" + exit 0 +fi + +rt_normalize_common_params + +CYCLICTEST_BIN=$(rt_resolve_binary cyclictest "$BINARY" 2>/dev/null || echo "") +if [ -z "$CYCLICTEST_BIN" ] || [ ! -x "$CYCLICTEST_BIN" ]; then + log_skip "$TESTNAME: cyclictest binary not found/executable (${CYCLICTEST_BIN:-none})" + echo "$TESTNAME SKIP" >"$RES_FILE" + exit 0 +fi + +rt_log_common_runtime_env "$TESTNAME" "$CYCLICTEST_BIN" +log_info "$TESTNAME: iterations=$ITERATIONS duration=$DURATION interval=$INTERVAL threads=$THREADS prio=$PRIO" +log_info "$TESTNAME: heartbeat=$HEARTBEAT_SEC seconds" + +RT_INTERRUPTED=0 +export RT_INTERRUPTED + +trap 'rt_handle_int; perf_rt_bg_stop >/dev/null 2>&1 || true' INT TERM +trap 'perf_rt_bg_stop >/dev/null 2>&1 || true' EXIT + +perf_rt_bg_start "$TESTNAME" "$BACKGROUND_CMD" + +overall_fail=0 + +i=1 +while [ "$i" -le "$ITERATIONS" ] 2>/dev/null; do + rt_log_iteration_progress "$TESTNAME" "$i" "$ITERATIONS" "$PROGRESS_EVERY" + + jsonfile="${LOG_PREFIX}-${i}.json" + stdoutlog="${OUT_DIR}/cyclictest_stdout_iter${i}.log" + + set -- "$CYCLICTEST_BIN" + case "$QUIET" in + true|TRUE|1|yes|YES) + set -- "$@" -q + ;; + esac + case "$MLOCKALL" in + true|TRUE|1|yes|YES) + set -- "$@" -m + ;; + esac + case "$SMP" in + true|TRUE|1|yes|YES) + set -- "$@" -S + ;; + esac + set -- "$@" -p "$PRIO" -i "$INTERVAL" -t "$THREADS" -D "$DURATION" --json="$jsonfile" + + if rt_run_json_iteration "$TESTNAME" "$HEARTBEAT_SEC" "$stdoutlog" "$jsonfile" "$@"; then + rc=$RT_RUN_RC + else + rc=$RT_RUN_RC + fi + + if [ "${RT_INTERRUPTED:-0}" -eq 1 ] 2>/dev/null; then + log_warn "$TESTNAME: interrupted by user during iteration $i/$ITERATIONS" + break + fi + + if [ "$rc" -ne 0 ] 2>/dev/null; then + log_fail "$TESTNAME: cyclictest exited rc=$rc 
(iter $i/$ITERATIONS)" + overall_fail=1 + fi + + if [ "${RT_RUN_JSON_OK:-0}" -ne 1 ] 2>/dev/null; then + log_fail "$TESTNAME: missing json output: $jsonfile" + overall_fail=1 + i=$((i + 1)) + continue + fi + + if ! rt_parse_and_append_iteration_kpi "cyclictest" "$jsonfile" "$TMP_ONE" "$ITER_KPI" "$RESULT_TXT" "$i"; then + log_fail "$TESTNAME: failed to parse/store KPI (iter $i/$ITERATIONS): $jsonfile" + overall_fail=1 + fi + + i=$((i + 1)) +done + +perf_rt_bg_stop >/dev/null 2>&1 || true + +rt_emit_kpi_block "$TESTNAME" "per-iteration results" "$ITER_KPI" +rt_emit_aggregate_kpi "$TESTNAME" "cyclictest" "$ITER_KPI" "$AGG_KPI" "$RESULT_TXT" || true +rt_emit_thread_aggregate_kpi "$TESTNAME" "cyclictest" "$ITER_KPI" "$THREAD_AGG_KPI" "$RESULT_TXT" || true + +if rt_kpi_file_has_fail "cyclictest" "$ITER_KPI"; then + overall_fail=1 +fi + +rt_emit_interrupt_aware_result "$TESTNAME" "$RES_FILE" "$RESULT_TXT" "$OUT_DIR" "${RT_INTERRUPTED:-0}" "$overall_fail" +exit 0 diff --git a/Runner/suites/Kernel/RT-tests/Hackbench/README_Hackbench.md b/Runner/suites/Kernel/RT-tests/Hackbench/README_Hackbench.md new file mode 100644 index 00000000..ec4910a9 --- /dev/null +++ b/Runner/suites/Kernel/RT-tests/Hackbench/README_Hackbench.md @@ -0,0 +1,229 @@ +# Hackbench (qcom-linux-testkit) + +Hackbench is both a benchmark and a stress test for the Linux kernel scheduler. It creates groups of communicating tasks (threads or processes) via sockets or pipes and measures how long they take to exchange data. + +This test wrapper runs `hackbench` for **N iterations**, captures all output, parses `Time:` samples, and emits KPI lines (mean/min/max and worst-sample), plus a LAVA-friendly `.res` verdict. 
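The `Time:` aggregation can be sketched in a few lines of `awk`. The sample log below is made up; the real parsing is done by `rt_hackbench_parse_times` in `Runner/utils/lib_rt.sh`:

```shell
# Made-up sample of a hackbench log with three Time samples.
cat <<'EOF' > /tmp/hackbench_sample.log
Running in process mode with 10 groups using 40 file descriptors each
Time: 0.185
Time: 0.210
Time: 0.272
EOF

awk '
  /^Time:/ {
    n++; sum += $2
    if (min == "" || $2 + 0 < min + 0) min = $2   # fastest sample
    if ($2 + 0 > max + 0) max = $2                # slowest sample = worst
  }
  END {
    if (n > 0) {
      printf "hackbench-mean pass %.6f s\n", sum / n
      printf "hackbench-min pass %.6f s\n", min
      printf "hackbench-max pass %.6f s\n", max
      printf "hackbench-worst pass %.6f s\n", max
    }
  }
' /tmp/hackbench_sample.log | tee /tmp/parsed_hackbench_sample.txt
```

For this sample the mean is `0.222333` and both max and worst are `0.272000`, matching the "worst-sample = max" convention described above.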
+
+---
+
+## Location
+
+- Test: `Runner/suites/Kernel/RT-tests/Hackbench/run.sh`
+- Shared helpers: `Runner/utils/lib_rt.sh`
+- Logging/helpers: `Runner/utils/functestlib.sh`
+
+---
+
+## What this test produces
+
+### Console (examples)
+
+You will see high-signal context and KPI lines, for example:
+
+- `Hackbench: uname -a: ...`
+- `Hackbench: sched_rt_runtime_us=...`
+- `Hackbench: hackbench opts: -s 100 -l 100 -g 10 -f 20 -T`
+- `hackbench-mean pass 0.220660 s`
+- `hackbench-min pass 0.185000 s`
+- `hackbench-max pass 0.272000 s`
+- `hackbench-worst pass 0.272000 s` *(worst-sample = max for the run)*
+
+> Note: On some hackbench versions, the output lines are `Time: <seconds>`.
+
+### Files
+
+By default, output is written under:
+
+- `logs_Hackbench/` (or `OUT_DIR` if overridden)
+  - `hackbench-output-host.txt` – raw log with all `Time:` samples
+  - `parsed_hackbench.txt` – parsed KPI lines
+  - `result.txt` – same KPI lines used for LAVA result submission
+- `Hackbench.res` – single-line verdict (`Hackbench PASS|FAIL|SKIP`)
+
+---
+
+## Requirements
+
+### Mandatory
+- `hackbench` binary (either in `PATH` or provided via `--binary`)
+- Standard tools: `uname`, `awk`, `sed`, `grep`, `tr`, `head`, `tail`, `mkdir`, `cat`, `sh`, `tee`, `sleep`, `kill`, `date`
+
+The script uses your testkit's `check_dependencies` to validate the above.
+
+### Optional (nice-to-have)
+- `ensure_reasonable_clock()` from `functestlib.sh`
+  If available, it will be used to avoid epoch timestamps (e.g., 1970) in logs.
+- Background workload command (`--background-cmd`) to apply system load while measuring.
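The optional background workload amounts to a fork-and-cleanup pattern. A best-effort sketch, assuming `BACKGROUND_CMD` holds a plain shell command (the busy loop is only an example load, and the wrapper's own helpers do the real start/stop work):

```shell
# Example only: BACKGROUND_CMD here is a stand-in busy loop, not a default.
BACKGROUND_CMD='while :; do :; done'
BG_PID=""

if [ -n "$BACKGROUND_CMD" ]; then
    sh -c "$BACKGROUND_CMD" &   # launch the load in its own shell
    BG_PID=$!
fi

# Best-effort stop when the script exits (wrappers also trap INT/TERM).
trap '[ -z "$BG_PID" ] || kill "$BG_PID" 2>/dev/null' EXIT

# ... the hackbench iterations would run here ...
echo "background load pid: ${BG_PID:-none}"
```

Because the kill is guarded and errors are discarded, a load that already exited does not turn a passing run into a failure.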
+ +--- + +## Usage + +Run from the test folder: + +```sh +cd Runner/suites/Kernel/RT-tests/Hackbench +./run.sh +``` + +### Common examples + +Run 200 iterations with threaded mode: + +```sh +./run.sh --iteration 200 --threads true +``` + +Run with pipes (instead of sockets): + +```sh +./run.sh --iteration 200 --pipe true +``` + +Explicit hackbench binary path: + +```sh +./run.sh --binary /tmp/hackbench --iteration 200 +``` + +Increase message size / loops / groups: + +```sh +./run.sh --datasize 1024 --loops 200 --grps 20 --fds 20 --iteration 100 +``` + +Add background workload: + +```sh +./run.sh --background-cmd "sh -c 'while :; do :; done'" --iteration 200 +``` + +Control progress logging (default: every 50 iterations): + +```sh +./run.sh --iteration 500 --progress-every 25 +``` + +Verbose mode: + +```sh +./run.sh --verbose +``` + +--- + +## Parameters + +The wrapper accepts both **CLI arguments** and **environment variables**. +If both are set, the CLI argument wins. + +### Output control +- `--out DIR` / `OUT_DIR` + Output directory (default: `./logs_Hackbench` under the test path). +- `--result FILE` / `RESULT_TXT` + KPI output file (default: `${OUT_DIR}/result.txt`). +- `--log FILE` / `TEST_LOG` + Raw hackbench log (default: `${OUT_DIR}/hackbench-output-host.txt`). + +### Hackbench workload knobs (Linaro-style) +- `--iteration N` / `ITERATION` (default: `1000`) +- `--target host|kvm` / `TARGET` *(informational label only)* +- `--datasize BYTES` / `DATASIZE` → `-s` +- `--loops N` / `LOOPS` → `-l` +- `--grps N` / `GRPS` → `-g` +- `--fds N` / `FDS` → `-f` +- `--pipe true|false` / `PIPE` + Adds `-p` when true. +- `--threads true|false` / `THREADS` + Adds `-T` when true. (Default is process mode.) + +### Testkit extras +- `--background-cmd CMD` / `BACKGROUND_CMD` + Runs a background workload during the benchmark (best-effort stop on exit). +- `--binary PATH` / `BINARY` + Explicit `hackbench` path. 
+- `--progress-every N` / `PROGRESS_EVERY`
+  Progress log cadence (default: `50`).
+- `--verbose` / `VERBOSE=1`
+
+---
+
+## Result parsing and KPIs
+
+The parsing is done by `rt_hackbench_parse_times` from `Runner/utils/lib_rt.sh`.
+
+It extracts all lines like:
+
+```
+Time: 0.210
+```
+
+…and computes:
+
+- `hackbench-mean pass <value> s`
+- `hackbench-min pass <value> s`
+- `hackbench-max pass <value> s`
+- `hackbench-worst pass <value> s` *(worst-sample = max)*
+
+These are written to:
+- `${OUT_DIR}/parsed_hackbench.txt`
+- `${OUT_DIR}/result.txt`
+
+---
+
+## LAVA integration
+
+A typical test definition YAML can run this via CLI args:
+
+```yaml
+run:
+  steps:
+    - cd Runner/suites/Kernel/RT-tests/Hackbench
+    - >-
+      ./run.sh
+      --out "${OUT_DIR}"
+      --iteration "${ITERATION}"
+      --datasize "${DATASIZE}"
+      --loops "${LOOPS}"
+      --grps "${GRPS}"
+      --fds "${FDS}"
+      --pipe "${PIPE}"
+      --threads "${THREADS}"
+      --background-cmd "${BACKGROUND_CMD}"
+      --binary "${BINARY}"
+      --progress-every "${PROGRESS_EVERY}"
+      $( [ "${VERBOSE}" = "1" ] && echo "--verbose" )
+      || true
+    - ../../../../utils/send-to-lava.sh Hackbench.res
+```
+
+> LAVA exports `params:` variables automatically into the test shell environment.
+> Using CLI args makes the command line explicit and reproducible, matching the Linaro style.
+
+---
+
+## Troubleshooting
+
+### 1) Timestamps show 1970-01-01
+- Your board clock is likely not set.
+- If `ensure_reasonable_clock()` exists in `functestlib.sh`, the script can call it before running.
+- Otherwise, set time via NTP / RTC / manual `date`.
+
+### 2) No KPI lines (mean/min/max)
+- Check that `${OUT_DIR}/hackbench-output-host.txt` contains `Time:` lines.
+- If the hackbench output format differs (some variants print `Time: <seconds>` with different formatting), update the parser in `lib_rt.sh` accordingly.
+
+### 3) Hackbench not found
+- Provide `--binary /path/to/hackbench`, or ensure `hackbench` is in `PATH`.
+ +### 4) High variance / outliers +- Run with a background workload to characterize worst-case scheduling. +- Increase iterations to stabilize mean. +- Pin CPU frequency governor if needed (platform policy dependent). + +--- + +## Notes + +- The `.res` file is always created for LAVA. +- The script is intended to be POSIX `sh` compatible and CI-friendly. diff --git a/Runner/suites/Kernel/RT-tests/Hackbench/hackbench.yaml b/Runner/suites/Kernel/RT-tests/Hackbench/hackbench.yaml new file mode 100755 index 00000000..82d2aa3f --- /dev/null +++ b/Runner/suites/Kernel/RT-tests/Hackbench/hackbench.yaml @@ -0,0 +1,38 @@ +metadata: + name: Hackbench + format: "Lava-Test Test Definition 1.0" + description: > + Hackbench is both a benchmark and a stress test for the Linux kernel scheduler. + It creates groups of communicating tasks via sockets or pipes and measures the + time taken. This wrapper runs hackbench for N iterations, parses Time samples + into mean/min/max and worst-sample, and emits Hackbench.res. 
+ os: + - linux + scope: + - performance + - preempt-rt + +params: + OUT_DIR: "./logs_Hackbench" + VERBOSE: "0" + PROGRESS_EVERY: "1" + HEARTBEAT_SEC: "10" + + ITERATIONS: "5" + TARGET: "host" + DATASIZE: "100" + LOOPS: "100" + GRPS: "10" + FDS: "20" + PIPE: "false" + THREADS: "false" + + BACKGROUND_CMD: "" + BINARY: "" + +run: + steps: + - REPO_PATH=$PWD + - cd Runner/suites/Kernel/RT-tests/Hackbench + - ./run.sh --out "${OUT_DIR}" --iterations "${ITERATIONS}" --target "${TARGET}" --datasize "${DATASIZE}" --loops "${LOOPS}" --grps "${GRPS}" --fds "${FDS}" --pipe "${PIPE}" --threads "${THREADS}" --background-cmd "${BACKGROUND_CMD}" --binary "${BINARY}" --progress-every "${PROGRESS_EVERY}" --heartbeat-sec "${HEARTBEAT_SEC}" $( [ "${VERBOSE}" = "1" ] && echo "--verbose" ) || true + - $REPO_PATH/Runner/utils/send-to-lava.sh Hackbench.res diff --git a/Runner/suites/Kernel/RT-tests/Hackbench/run.sh b/Runner/suites/Kernel/RT-tests/Hackbench/run.sh new file mode 100755 index 00000000..6bad8930 --- /dev/null +++ b/Runner/suites/Kernel/RT-tests/Hackbench/run.sh @@ -0,0 +1,302 @@ +#!/bin/sh +# Copyright (c) Qualcomm Technologies, Inc. and/or its subsidiaries. +# SPDX-License-Identifier: BSD-3-Clause +# +# Hackbench wrapper for qcom-linux-testkit +# - Runs hackbench ITERATIONS times +# - Captures output to a log file +# - Parses Time lines -> min/mean/max via lib_rt.sh +# - Adds worst-sample Time for quick debug visibility +# - Emits Hackbench.res PASS/FAIL/SKIP +# +# Notes: +# - Always exits 0 (LAVA-friendly). Use Hackbench.res for gating. +# - --iteration is kept as a compatibility alias for --iterations. 
+ +SCRIPT_DIR="$( + cd "$(dirname "$0")" || exit 1 + pwd +)" + +INIT_ENV="" +SEARCH="$SCRIPT_DIR" +while [ "$SEARCH" != "/" ]; do + if [ -f "$SEARCH/init_env" ]; then + INIT_ENV="$SEARCH/init_env" + break + fi + SEARCH=$(dirname "$SEARCH") +done + +if [ -z "$INIT_ENV" ]; then + echo "[ERROR] Could not find init_env (starting at $SCRIPT_DIR)" >&2 + exit 1 +fi + +if [ -z "${__INIT_ENV_LOADED:-}" ]; then + # shellcheck disable=SC1090 + . "$INIT_ENV" + __INIT_ENV_LOADED=1 +fi + +# shellcheck disable=SC1091 +. "$TOOLS/functestlib.sh" +# shellcheck disable=SC1091 +. "$TOOLS/lib_rt.sh" + +TESTNAME="Hackbench" +test_path=$(find_test_case_by_name "$TESTNAME") +[ -n "$test_path" ] || test_path="$SCRIPT_DIR" + +RES_FILE="$test_path/${TESTNAME}.res" +OUT_DIR="${OUT_DIR:-$test_path/logs_${TESTNAME}}" +RESULT_TXT="${RESULT_TXT:-$OUT_DIR/result.txt}" +TEST_LOG="${TEST_LOG:-$OUT_DIR/hackbench-output-host.txt}" + +ITERATIONS="${ITERATIONS:-1000}" +TARGET="${TARGET:-host}" +DATASIZE="${DATASIZE:-100}" +LOOPS="${LOOPS:-100}" +GRPS="${GRPS:-10}" +FDS="${FDS:-20}" +PIPE="${PIPE:-false}" +THREADS="${THREADS:-false}" +BACKGROUND_CMD="${BACKGROUND_CMD:-}" +BINARY="${BINARY:-}" +VERBOSE="${VERBOSE:-0}" +PROGRESS_EVERY="${PROGRESS_EVERY:-50}" +HEARTBEAT_SEC="${HEARTBEAT_SEC:-10}" + +usage() { + cat <"$RES_FILE" + exit 0 + ;; + esac + shift +done + +case "$ITERATIONS" in ''|*[!0-9]*|0) ITERATIONS=1 ;; esac +case "$PROGRESS_EVERY" in ''|*[!0-9]*|0) PROGRESS_EVERY=50 ;; esac +case "$HEARTBEAT_SEC" in ''|*[!0-9]*|0) HEARTBEAT_SEC=10 ;; esac +case "$DATASIZE" in ''|*[!0-9]*) DATASIZE=100 ;; esac +case "$LOOPS" in ''|*[!0-9]*) LOOPS=100 ;; esac +case "$GRPS" in ''|*[!0-9]*) GRPS=10 ;; esac +case "$FDS" in ''|*[!0-9]*) FDS=20 ;; esac + +PARSED="$OUT_DIR/parsed_hackbench.txt" + +rt_prepare_output_layout \ + "$OUT_DIR" \ + "$RESULT_TXT" \ + "$TEST_LOG" \ + "$PARSED" + +rt_check_clock_sanity "$TESTNAME" || true + +log_info "------------------- Starting $TESTNAME -------------------" +log_info 
"$TESTNAME: Checking for the tools required to run hackbench" + +if ! rt_require_common_tools uname awk sed grep tr head tail mkdir cat sh sleep kill date sort wc; then + log_skip "$TESTNAME: basic tools missing" + echo "$TESTNAME SKIP" >"$RES_FILE" + exit 0 +fi + +if ! command -v rt_parse_token_numeric_samples >/dev/null 2>&1; then + log_skip "$TESTNAME: rt_parse_token_numeric_samples missing (lib_rt.sh not loaded?)" + echo "$TESTNAME SKIP" >"$RES_FILE" + exit 0 +fi + +HB_BIN=$(rt_resolve_binary hackbench "$BINARY" 2>/dev/null || echo "") +if [ -z "$HB_BIN" ] || [ ! -x "$HB_BIN" ]; then + log_skip "$TESTNAME: hackbench binary not found/executable (${HB_BIN:-none})" + echo "$TESTNAME SKIP" >"$RES_FILE" + exit 0 +fi + +rt_log_common_runtime_env "$TESTNAME" "$HB_BIN" +log_info "$TESTNAME: iterations=$ITERATIONS target=$TARGET" +log_info "$TESTNAME: datasize=$DATASIZE loops=$LOOPS grps=$GRPS fds=$FDS pipe=$PIPE threads=$THREADS" +log_info "$TESTNAME: heartbeat=$HEARTBEAT_SEC seconds" + +RT_INTERRUPTED=0 +export RT_INTERRUPTED + +trap 'rt_handle_int; perf_rt_bg_stop >/dev/null 2>&1 || true' INT TERM +trap 'perf_rt_bg_stop >/dev/null 2>&1 || true' EXIT + +perf_rt_bg_start "$TESTNAME" "$BACKGROUND_CMD" + +overall_fail=0 + +i=1 +while [ "$i" -le "$ITERATIONS" ] 2>/dev/null; do + rt_log_iteration_progress "$TESTNAME" "$i" "$ITERATIONS" "$PROGRESS_EVERY" "running" + + iter_log="${TEST_LOG}.iter${i}" + + set -- "$HB_BIN" -s "$DATASIZE" -l "$LOOPS" -g "$GRPS" -f "$FDS" + case "$PIPE" in + true|TRUE|1|yes|YES) + set -- "$@" -p + ;; + esac + case "$THREADS" in + true|TRUE|1|yes|YES) + set -- "$@" -T + ;; + esac + + if rt_run_and_capture "$TESTNAME" "$HEARTBEAT_SEC" "$iter_log" "$@"; then + rc=$RT_RUN_RC + else + rc=$RT_RUN_RC + fi + + if [ -r "$iter_log" ]; then + cat "$iter_log" >>"$TEST_LOG" 2>/dev/null || true + rm -f "$iter_log" 2>/dev/null || true + fi + + if [ "${RT_INTERRUPTED:-0}" -eq 1 ] 2>/dev/null; then + log_warn "$TESTNAME: interrupted by user during iteration 
$i/$ITERATIONS" + break + fi + + if [ "$rc" -ne 0 ] 2>/dev/null; then + log_fail "$TESTNAME: hackbench failed rc=$rc (iter $i/$ITERATIONS)" + overall_fail=1 + break + fi + + i=$((i + 1)) +done + +perf_rt_bg_stop >/dev/null 2>&1 || true +: >"$PARSED" 2>/dev/null || true + +if [ "$overall_fail" -eq 0 ] 2>/dev/null; then + if [ -s "$TEST_LOG" ]; then + if rt_parse_token_numeric_samples "hackbench-time" "$TEST_LOG" "Time:" "s" >"$PARSED" 2>/dev/null; then + cat "$PARSED" >>"$RESULT_TXT" 2>/dev/null || true + rt_emit_worst_sample_from_log "hackbench-worst-sample" "$TEST_LOG" "Time:" "s" "$PARSED" "$RESULT_TXT" "$TESTNAME" || true + + while IFS= read -r line; do + [ -n "$line" ] || continue + log_info "$TESTNAME: $line" + done <"$PARSED" + else + if [ "${RT_INTERRUPTED:-0}" -eq 1 ] 2>/dev/null; then + log_warn "$TESTNAME: no complete Time samples collected before interrupt" + else + log_fail "$TESTNAME: unable to parse any Time lines from $TEST_LOG" + overall_fail=1 + fi + fi + else + if [ "${RT_INTERRUPTED:-0}" -eq 1 ] 2>/dev/null; then + log_warn "$TESTNAME: no output collected before interrupt" + else + log_fail "$TESTNAME: hackbench output log is empty: $TEST_LOG" + overall_fail=1 + fi + fi +fi + +rt_emit_interrupt_aware_result "$TESTNAME" "$RES_FILE" "$RESULT_TXT" "$OUT_DIR" "${RT_INTERRUPTED:-0}" "$overall_fail" +exit 0 diff --git a/Runner/suites/Kernel/RT-tests/OSLat/OSLAT_README.md b/Runner/suites/Kernel/RT-tests/OSLat/OSLAT_README.md new file mode 100644 index 00000000..186cec9d --- /dev/null +++ b/Runner/suites/Kernel/RT-tests/OSLat/OSLAT_README.md @@ -0,0 +1,228 @@ +# OSLAT + +## Overview + +OSLAT is an OS latency detector from rt-tests. It runs busy loops on selected CPUs and measures operating system induced latency while optionally applying workloads such as `memmove`. In the qcom-linux-testkit wrapper, OSLAT is executed in JSON mode, parsed through `lib_rt.sh`, and summarized into machine-friendly and human-readable result files. 
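Internally, the wrappers in this suite assemble the command line with the POSIX `set --` idiom rather than string concatenation, so option values survive quoting without `eval`. A minimal, self-contained sketch of that pattern (the option values here are illustrative, not wrapper defaults):

```shell
#!/bin/sh
# Sketch of the argv-building pattern used by the RT wrappers:
# "set --" rebuilds the positional parameters one option at a time.
QUIET=true
CPU_LIST="0-3"
DURATION="1m"
JSONFILE="out.json"

set -- oslat
# Boolean knobs append a bare flag only when enabled.
case "$QUIET" in
  true|TRUE|1|yes|YES) set -- "$@" -q ;;
esac
# Value knobs are appended only when non-empty.
[ -n "$CPU_LIST" ] && set -- "$@" -c "$CPU_LIST"
set -- "$@" -D "$DURATION" --json="$JSONFILE"

cmdline="$*"
echo "$cmdline"
# prints: oslat -q -c 0-3 -D 1m --json=out.json
```

The payoff is that the final invocation can simply be `"$@"`, with every argument word-split exactly once.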
+
+This wrapper follows the same structure used for the other RT tests in `Runner/suites/Kernel/RT-tests`, including:
+
+- standard `run.sh` flow
+- `PASS` / `FAIL` / `SKIP` summary in `OSLat.res`
+- detailed KPI in `logs_OSLat/result.txt`
+- aggregate KPI files under `logs_OSLat/`
+- LAVA-friendly behavior with exit code `0`
+- heartbeat logging for long-running executions
+- partial-result preservation on user interrupt
+
+## Default behavior
+
+The wrapper defaults are chosen to be safe and practical for embedded boards while still matching the oslat binary options.
+
+Default wrapper values:
+
+- `DURATION=1m`
+- `ITERATIONS=1`
+- `BACKGROUND_CMD=""`
+- `QUIET=true`
+- `WORKLOAD=""` (oslat's built-in default workload applies)
+- `CPU_MAIN_THREAD=""` (`-C` is not passed unless set)
+- `PROGRESS_EVERY=1`
+- `HEARTBEAT_SEC=10`
+
+Unset binary options are passed only when explicitly requested.
+
+## Files generated
+
+Typical output directory:
+
+`logs_OSLat/`
+
+Generated files include:
+
+- `result.txt` - all parsed KPI lines and summary data
+- `iter_kpi.txt` - per-iteration KPI lines
+- `agg_kpi.txt` - aggregate KPI across iterations
+- `thread_agg_kpi.txt` - per-thread aggregate KPI
+- `oslat-<N>.json` - raw JSON output from iteration N
+- `oslat_stdout_iter<N>.log` - captured console output for iteration N
+- `tmp_result_one.txt` - temporary per-iteration parsed result file
+- `OSLat.res` - final summary result used by LAVA gating
+
+## Supported wrapper options
+
+### Wrapper options
+
+- `--out DIR`
+ Override output directory.
+
+- `--result FILE`
+ Override result file path.
+
+- `--duration TIME`
+ Test duration passed to oslat via `-D`.
+
+- `--iterations N`
+ Number of iterations to run.
+
+- `--background-cmd CMD`
+ Background workload command launched alongside the test.
+
+- `--binary PATH`
+ Explicit path to the `oslat` binary.
+
+- `--progress-every N`
+ Iteration start progress cadence.
+
+- `--heartbeat-sec N`
+ Emit periodic "still running" messages while the binary is executing.
+
+- `--verbose`
+ Enable extra wrapper debug output.
+
+## Supported oslat options in run.sh
+
+The wrapper forwards the commonly useful oslat runtime options.
+
+- `--bucket-size N`
+ Pass `-b N`.
+
+- `--bias BOOL`
+ Pass `-B` when enabled.
+
+- `--cpu-list LIST`
+ Pass `-c LIST`, for example `1,3,5,7-15`.
+
+- `--cpu-main-thread CPU`
+ Pass `-C CPU`. When unset, `-C` is not passed and oslat's built-in default applies.
+
+- `--rtprio N`
+ Pass `-f N`.
+
+- `--workload-mem SIZE`
+ Pass `-m SIZE`, for example `4K`, `1M`.
+
+- `--quiet BOOL`
+ Pass `-q` when enabled.
+
+- `--single-preheat BOOL`
+ Pass `-s` when enabled.
+
+- `--trace-threshold-us USEC`
+ Pass `-T USEC`.
+
+- `--workload TYPE`
+ Pass `-w TYPE`. Supported by oslat: `no`, `memmove`.
+
+- `--bucket-width-ns NS`
+ Pass `-W NS`.
+
+- `--zero-omit BOOL`
+ Pass `-z` when enabled.
+
+## Example commands
+
+Run with defaults using an explicit binary:
+
+```sh
+./run.sh --binary /tmp/oslat
+```
+
+Run on selected CPUs for 1 minute with memmove workload:
+
+```sh
+./run.sh \
+ --binary /tmp/oslat \
+ --duration 1m \
+ --cpu-list 0-3 \
+ --cpu-main-thread 0 \
+ --workload memmove \
+ --workload-mem 1M
+```
+
+Run 3 iterations with heartbeat and FIFO priority:
+
+```sh
+./run.sh \
+ --binary /tmp/oslat \
+ --duration 60s \
+ --iterations 3 \
+ --rtprio 95 \
+ --heartbeat-sec 10
+```
+
+Run with histogram tuning:
+
+```sh
+./run.sh \
+ --binary /tmp/oslat \
+ --bucket-size 128 \
+ --bucket-width-ns 1000 \
+ --bias true \
+ --zero-omit true
+```
+
+## Result interpretation
+
+The parser extracts latency KPI from the JSON output and emits standard lines such as:
+
+- per-thread minimum latency
+- per-thread average latency
+- per-thread maximum latency
+- test return code and verdict
+
+Aggregate summaries typically include:
+
+- all-thread minimum latency min / mean / max
+- all-thread average latency min / mean / max
+- all-thread maximum latency min / mean / max
+- worst thread maximum latency
+- worst thread id
+- per-thread aggregate summaries across
iterations
+
+These results are appended to `logs_OSLat/result.txt` and also echoed to stdout in the standard qcom-linux-testkit format.
+
+## Interrupt behavior
+
+If the user presses `Ctrl-C` during execution:
+
+- the wrapper asks the running binary to exit cleanly
+- partial stdout and any flushed JSON are preserved
+- parsed results collected so far are still printed
+- final status is marked as `SKIP` instead of `FAIL`
+
+This matches the improved handling used in the recent RT test wrappers.
+
+## Expected repository layout
+
+Typical placement inside qcom-linux-testkit:
+
+`Runner/suites/Kernel/RT-tests/OSLat/`
+
+Expected files:
+
+- `run.sh`
+- `oslat.yaml`
+- `OSLAT_README.md`
+
+And supporting utilities:
+
+- `Runner/utils/functestlib.sh`
+- `Runner/utils/lib_rt.sh`
+
+## LAVA integration notes
+
+The wrapper is designed to integrate with the existing RT test YAML style used in this repository:
+
+- repository-relative `cd` into the test directory
+- invoke `./run.sh` with YAML params
+- always call `send-to-lava.sh OSLat.res`
+
+The `.res` file is the gating artifact. The detailed KPI remains under `logs_OSLat/`.
+
+## Notes
+
+- OSLAT should be run as root.
+- CPU list and main-thread CPU should be chosen carefully on small embedded systems.
+- `--single-preheat` should only be used when CPU frequency behavior is understood and controlled.
+- `--trace-threshold-us` is useful only if ftrace is configured and available on the target.
+- `--workload memmove` plus a large `--workload-mem` can significantly increase system pressure.
diff --git a/Runner/suites/Kernel/RT-tests/OSLat/oslat.yaml b/Runner/suites/Kernel/RT-tests/OSLat/oslat.yaml
new file mode 100755
index 00000000..26c985c1
--- /dev/null
+++ b/Runner/suites/Kernel/RT-tests/OSLat/oslat.yaml
@@ -0,0 +1,40 @@
+metadata:
+ name: OSLat
+ format: "Lava-Test Test Definition 1.0"
+ description: "Run rt-tests oslat (OS latency detector) in JSON mode and parse results without requiring python3."
+ os: + - linux + scope: + - performance + - preempt-rt + +params: + DURATION: "1m" + BACKGROUND_CMD: "" + ITERATIONS: "1" + + BUCKET_SIZE: "" + BIAS: "false" + CPU_LIST: "" + CPU_MAIN_THREAD: "" + RTPRIO: "" + WORKLOAD_MEM: "" + QUIET: "true" + SINGLE_PREHEAT: "false" + TRACE_THRESHOLD_US: "" + WORKLOAD: "" + BUCKET_WIDTH_NS: "" + ZERO_OMIT: "false" + + BINARY: "" + OUT_DIR: "./logs_OSLat" + VERBOSE: "0" + PROGRESS_EVERY: "1" + HEARTBEAT_SEC: "10" + +run: + steps: + - REPO_PATH=$PWD + - cd Runner/suites/Kernel/RT-tests/OSLat + - ./run.sh --duration "${DURATION}" --iterations "${ITERATIONS}" --background-cmd "${BACKGROUND_CMD}" --bucket-size "${BUCKET_SIZE}" --bias "${BIAS}" --cpu-list "${CPU_LIST}" --cpu-main-thread "${CPU_MAIN_THREAD}" --rtprio "${RTPRIO}" --workload-mem "${WORKLOAD_MEM}" --quiet "${QUIET}" --single-preheat "${SINGLE_PREHEAT}" --trace-threshold-us "${TRACE_THRESHOLD_US}" --workload "${WORKLOAD}" --bucket-width-ns "${BUCKET_WIDTH_NS}" --zero-omit "${ZERO_OMIT}" --binary "${BINARY}" --out "${OUT_DIR}" --progress-every "${PROGRESS_EVERY}" --heartbeat-sec "${HEARTBEAT_SEC}" $( [ "${VERBOSE}" = "1" ] && echo "--verbose" ) || true + - $REPO_PATH/Runner/utils/send-to-lava.sh OSLat.res diff --git a/Runner/suites/Kernel/RT-tests/OSLat/run.sh b/Runner/suites/Kernel/RT-tests/OSLat/run.sh new file mode 100755 index 00000000..25151de3 --- /dev/null +++ b/Runner/suites/Kernel/RT-tests/OSLat/run.sh @@ -0,0 +1,389 @@ +#!/bin/sh +# Copyright (c) Qualcomm Technologies, Inc. and/or its subsidiaries. +# SPDX-License-Identifier: BSD-3-Clause +# +# OSLat wrapper for qcom-linux-testkit +# - Runs rt-tests oslat ITERATIONS times (JSON output) +# - Parses KPI using lib_rt.sh (no python required) +# - Emits KPI lines to result.txt and summary PASS/FAIL/SKIP to OSLat.res +# +# Notes: +# - Always exits 0 (LAVA-friendly). Use OSLat.res for gating. +# - Ctrl-C/user interrupt is treated as SKIP and partial results are preserved. +# - Heartbeat is enabled by default. 
+ +SCRIPT_DIR="$( + cd "$(dirname "$0")" || exit 1 + pwd +)" + +INIT_ENV="" +SEARCH="$SCRIPT_DIR" +while [ "$SEARCH" != "/" ]; do + if [ -f "$SEARCH/init_env" ]; then + INIT_ENV="$SEARCH/init_env" + break + fi + SEARCH=$(dirname "$SEARCH") +done + +if [ -z "$INIT_ENV" ]; then + echo "[ERROR] Could not find init_env (starting at $SCRIPT_DIR)" >&2 + exit 1 +fi + +if [ -z "${__INIT_ENV_LOADED:-}" ]; then + # shellcheck disable=SC1090 + . "$INIT_ENV" + __INIT_ENV_LOADED=1 +fi + +# shellcheck disable=SC1091 +. "$TOOLS/functestlib.sh" +# shellcheck disable=SC1091 +. "$TOOLS/lib_rt.sh" + +TESTNAME="OSLat" +RT_CUR_TESTNAME="$TESTNAME" +export RT_CUR_TESTNAME + +test_path=$(find_test_case_by_name "$TESTNAME") +if [ -z "$test_path" ]; then + test_path="$SCRIPT_DIR" +fi + +RES_FILE="$test_path/${TESTNAME}.res" +OUT_DIR="${OUT_DIR:-$test_path/logs_${TESTNAME}}" +RESULT_TXT="${RESULT_TXT:-$OUT_DIR/result.txt}" + +DURATION="${DURATION:-1m}" +BACKGROUND_CMD="${BACKGROUND_CMD:-}" +ITERATIONS="${ITERATIONS:-1}" +BUCKET_SIZE="${BUCKET_SIZE:-}" +BIAS="${BIAS:-false}" +CPU_LIST="${CPU_LIST:-}" +CPU_MAIN_THREAD="${CPU_MAIN_THREAD:-}" +RTPRIO="${RTPRIO:-}" +WORKLOAD_MEM="${WORKLOAD_MEM:-}" +QUIET="${QUIET:-true}" +SINGLE_PREHEAT="${SINGLE_PREHEAT:-false}" +TRACE_THRESHOLD_US="${TRACE_THRESHOLD_US:-}" +WORKLOAD="${WORKLOAD:-}" +BUCKET_WIDTH_NS="${BUCKET_WIDTH_NS:-}" +ZERO_OMIT="${ZERO_OMIT:-false}" +BINARY="${BINARY:-}" +VERBOSE="${VERBOSE:-0}" +PROGRESS_EVERY="${PROGRESS_EVERY:-1}" +HEARTBEAT_SEC="${HEARTBEAT_SEC:-10}" + +usage() { + cat <"$RES_FILE" + exit 0 + ;; + esac + shift +done + +LOG_PREFIX="$OUT_DIR/oslat" +TMP_ONE="$OUT_DIR/tmp_result_one.txt" +ITER_KPI="$OUT_DIR/iter_kpi.txt" +AGG_KPI="$OUT_DIR/agg_kpi.txt" +THREAD_AGG_KPI="$OUT_DIR/thread_agg_kpi.txt" + +rt_prepare_output_layout \ + "$OUT_DIR" \ + "$RESULT_TXT" \ + "$TMP_ONE" \ + "$ITER_KPI" \ + "$AGG_KPI" \ + "$THREAD_AGG_KPI" + +rt_check_clock_sanity "$TESTNAME" || true + +log_info "------------------- Starting $TESTNAME 
-------------------" +log_info "$TESTNAME: Checking for the tools required to run oslat" + +if ! rt_require_common_tools uname awk sed grep tr head tail mkdir cat sh sleep kill date mkfifo rm tee sort wc; then + log_skip "$TESTNAME: basic tools missing" + echo "$TESTNAME SKIP" >"$RES_FILE" + exit 0 +fi + +if ! rt_require_json_helpers; then + echo "$TESTNAME SKIP" >"$RES_FILE" + exit 0 +fi + +if ! rt_require_stream_helpers; then + echo "$TESTNAME SKIP" >"$RES_FILE" + exit 0 +fi + +rt_normalize_common_params + +case "$BUCKET_SIZE" in ''|*[!0-9]*) BUCKET_SIZE="" ;; esac +case "$CPU_MAIN_THREAD" in ''|*[!0-9]*) CPU_MAIN_THREAD="" ;; esac +case "$RTPRIO" in ''|*[!0-9]*) RTPRIO="" ;; esac +case "$TRACE_THRESHOLD_US" in ''|*[!0-9]*) TRACE_THRESHOLD_US="" ;; esac +case "$BUCKET_WIDTH_NS" in ''|*[!0-9]*) BUCKET_WIDTH_NS="" ;; esac + +OSLAT_BIN=$(rt_resolve_binary oslat "$BINARY" 2>/dev/null || echo "") +if [ -z "$OSLAT_BIN" ] || [ ! -x "$OSLAT_BIN" ]; then + log_skip "$TESTNAME: oslat binary not found/executable (${OSLAT_BIN:-none})" + echo "$TESTNAME SKIP" >"$RES_FILE" + exit 0 +fi + +rt_log_common_runtime_env "$TESTNAME" "$OSLAT_BIN" +log_info "$TESTNAME: iterations=$ITERATIONS duration=$DURATION" +log_info "$TESTNAME: heartbeat=$HEARTBEAT_SEC seconds" + +RT_INTERRUPTED=0 +export RT_INTERRUPTED + +trap 'rt_handle_int; rt_cleanup_pipes; rt_stop_heartbeat; perf_rt_bg_stop >/dev/null 2>&1 || true' INT TERM +trap 'rt_cleanup_pipes; rt_stop_heartbeat; perf_rt_bg_stop >/dev/null 2>&1 || true' EXIT + +perf_rt_bg_start "$TESTNAME" "$BACKGROUND_CMD" + +overall_fail=0 +i=1 +while [ "$i" -le "$ITERATIONS" ] 2>/dev/null; do + rt_log_iteration_progress "$TESTNAME" "$i" "$ITERATIONS" "$PROGRESS_EVERY" + + jsonfile="${LOG_PREFIX}-${i}.json" + stdoutlog="${OUT_DIR}/oslat_stdout_iter${i}.log" + + set -- "$OSLAT_BIN" + + case "$QUIET" in + true|TRUE|1|yes|YES) + set -- "$@" -q + ;; + esac + + case "$BIAS" in + true|TRUE|1|yes|YES) + set -- "$@" -B + ;; + esac + + case "$SINGLE_PREHEAT" in 
+ true|TRUE|1|yes|YES) + set -- "$@" -s + ;; + esac + + case "$ZERO_OMIT" in + true|TRUE|1|yes|YES) + set -- "$@" -z + ;; + esac + + if [ -n "$BUCKET_SIZE" ]; then + set -- "$@" -b "$BUCKET_SIZE" + fi + + if [ -n "$CPU_LIST" ]; then + set -- "$@" -c "$CPU_LIST" + fi + + if [ -n "$CPU_MAIN_THREAD" ]; then + set -- "$@" -C "$CPU_MAIN_THREAD" + fi + + if [ -n "$RTPRIO" ]; then + set -- "$@" -f "$RTPRIO" + fi + + if [ -n "$WORKLOAD_MEM" ]; then + set -- "$@" -m "$WORKLOAD_MEM" + fi + + if [ -n "$TRACE_THRESHOLD_US" ]; then + set -- "$@" -T "$TRACE_THRESHOLD_US" + fi + + if [ -n "$WORKLOAD" ]; then + set -- "$@" -w "$WORKLOAD" + fi + + if [ -n "$BUCKET_WIDTH_NS" ]; then + set -- "$@" -W "$BUCKET_WIDTH_NS" + fi + + set -- "$@" -D "$DURATION" --json="$jsonfile" + + if rt_run_streaming_iteration "$TESTNAME" "$HEARTBEAT_SEC" "$stdoutlog" "$jsonfile" "$@"; then + rc=$RT_RUN_RC + else + rc=$RT_RUN_RC + fi + + if [ "$rc" -ne 0 ] 2>/dev/null; then + if [ "${RT_INTERRUPTED:-0}" -eq 1 ] 2>/dev/null && [ "$rc" -eq 130 ] 2>/dev/null; then + log_warn "$TESTNAME: oslat interrupted by user (rc=$rc); reporting partial results" + else + log_fail "$TESTNAME: oslat exited rc=$rc (iter $i/$ITERATIONS)" + overall_fail=1 + fi + fi + + if [ "${RT_RUN_JSON_OK:-0}" -ne 1 ] 2>/dev/null; then + if [ "${RT_INTERRUPTED:-0}" -eq 1 ] 2>/dev/null; then + log_warn "$TESTNAME: json output not available after interrupt: $jsonfile" + break + fi + + log_fail "$TESTNAME: missing json output: $jsonfile" + overall_fail=1 + i=$((i + 1)) + continue + fi + + if ! 
rt_parse_and_append_iteration_kpi "oslat" "$jsonfile" "$TMP_ONE" "$ITER_KPI" "$RESULT_TXT" "$i"; then + if [ "${RT_INTERRUPTED:-0}" -eq 1 ] 2>/dev/null; then + log_warn "$TESTNAME: parse incomplete after interrupt (iter $i/$ITERATIONS): $jsonfile" + else + log_fail "$TESTNAME: failed to parse/store KPI (iter $i/$ITERATIONS): $jsonfile" + overall_fail=1 + fi + fi + + if [ "${RT_INTERRUPTED:-0}" -eq 1 ] 2>/dev/null; then + break + fi + + i=$((i + 1)) +done + +perf_rt_bg_stop >/dev/null 2>&1 || true + +rt_emit_kpi_block "$TESTNAME" "per-iteration results" "$ITER_KPI" +rt_emit_aggregate_kpi "$TESTNAME" "oslat" "$ITER_KPI" "$AGG_KPI" "$RESULT_TXT" || true +rt_emit_thread_aggregate_kpi "$TESTNAME" "oslat" "$ITER_KPI" "$THREAD_AGG_KPI" "$RESULT_TXT" || true + +if rt_kpi_file_has_fail "oslat" "$ITER_KPI"; then + overall_fail=1 +fi + +rt_emit_interrupt_aware_result "$TESTNAME" "$RES_FILE" "$RESULT_TXT" "$OUT_DIR" "${RT_INTERRUPTED:-0}" "$overall_fail" +exit 0 diff --git a/Runner/suites/Kernel/RT-tests/PI_Stress/README_PI_Stress.md b/Runner/suites/Kernel/RT-tests/PI_Stress/README_PI_Stress.md new file mode 100644 index 00000000..8f452b4a --- /dev/null +++ b/Runner/suites/Kernel/RT-tests/PI_Stress/README_PI_Stress.md @@ -0,0 +1,176 @@ +# PI_Stress (rt-tests pi_stress) — qcom-linux-testkit + +This test wraps **rt-tests `pi_stress`** (Priority Inheritance stress) for **qcom-linux-testkit** and LAVA. +It runs one or more `pi_stress` iterations, collects JSON output, parses KPIs **without requiring Python**, and emits a `.res` summary for LAVA gating. + +> **What it measures** +> +> `pi_stress` exercises PI mutexes (priority inheritance) by creating intentional priority-inversion scenarios. +> The JSON output includes an **`inversion` counter** (total inversions observed/generated in that run). With `--iterations 1`, +> you’ll often see min/mean/max all equal (one sample). 
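To make the KPI lines above concrete, here is a simplified, stand-alone sketch of the min/mean/max arithmetic over per-iteration inversion counts (the wrapper stores one count per line in `inversion_values.txt` under the output directory and computes the real KPIs via `lib_rt.sh`; the three sample counts below are invented for illustration):

```shell
#!/bin/sh
# Sketch only: reproduce the inversion min/mean/max KPI arithmetic from a
# file holding one pi_stress inversion count per iteration, one per line.
printf '%s\n' 13630990 13712044 13598201 > inversion_values.txt

kpi=$(awk '
  NR == 1 { min = $1; max = $1 }
  { sum += $1; if ($1 < min) min = $1; if ($1 > max) max = $1 }
  END { printf "min=%d mean=%d max=%d\n", min, sum / NR, max }
' inversion_values.txt)

echo "$kpi"
```

With `--iterations 1` the file holds a single sample, so min, mean, and max collapse to the same value, as in the example KPI lines below.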
+ +--- + +## Location + +``` +Runner/suites/Kernel/RT-tests/PI_Stress/ +├── run.sh +├── PI_Stress.res # created at runtime +└── logs_PI_Stress/ # created at runtime (default) + ├── pi_stress_iter1.json + ├── parsed_pi_stress.txt + └── result.txt +``` + +--- + +## Requirements + +- Run as **root** (recommended/required for best behavior; `--mlockall` especially). +- `pi_stress` binary available on target, either: + - in `$PATH` (`command -v pi_stress`), or + - provided via `--binary /path/to/pi_stress` +- Common tools: `uname`, `awk`, `sed`, `grep`, `tr`, `head`, `tail`, `mkdir`, `cat`, `sh`, `tee`, `sleep`, `kill`, `date` + +This test uses helpers from: + +- `Runner/utils/functestlib.sh` (logging, deps, background workload helper, clock sanity helper if available) +- `Runner/utils/lib_rt.sh` (rt-tests JSON parsing helpers) + +--- + +## Quick start + +### Run with a custom binary +```sh +cd Runner/suites/Kernel/RT-tests/PI_Stress +./run.sh --binary /tmp/pi_stress --duration 1m +``` + +### Enable mlockall and SCHED_RR threads +```sh +./run.sh --binary /tmp/pi_stress --duration 1m --mlockall true --rr true +``` + +### Multiple iterations +```sh +./run.sh --binary /tmp/pi_stress --duration 1m --iterations 5 +``` + +### Optional: add background workload +```sh +./run.sh --binary /tmp/pi_stress --duration 1m --background-cmd "stress-ng --cpu 4 --timeout 60s" +``` + +--- + +## Command line options (run.sh) + +```text +--out DIR Output directory (default: ./logs_PI_Stress) +--result FILE Result file path (default: /result.txt) + +--duration D pi_stress runtime per iteration (default: 5m) +--iterations N Number of iterations (default: 1) + +--mlockall true|false Enable --mlockall (default: false) +--rr true|false Enable --rr (SCHED_RR) (default: false) + +--background-cmd CMD Optional background workload command (default: empty) +--binary PATH Explicit pi_stress binary path (default: auto-detect) +--verbose Extra logs +-h, --help Show help +``` + +**Notes** +- `--mlockall 
true` may fail if memlock limits are too low; the wrapper prints memlock (soft/hard) from `/proc/self/limits`.
+- `--rr true` switches to SCHED_RR; the `pi_stress` default is SCHED_FIFO.
+
+---
+
+## Outputs
+
+### Result files
+- `PI_Stress.res`
+ Contains only the **PASS/FAIL/SKIP** summary for LAVA.
+- `logs_PI_Stress/result.txt`
+ Contains parsed KPI lines for LAVA test parsing and artifact collection.
+- `logs_PI_Stress/parsed_pi_stress.txt`
+ Same KPI lines (intermediate), helpful for debugging.
+- `logs_PI_Stress/pi_stress_iterN.json`
+ Raw JSON output from `pi_stress`.
+
+### Example KPI lines
+```text
+pi-stress-inversion-min pass 13630990 count
+pi-stress-inversion-mean pass 13630990 count
+pi-stress-inversion-max pass 13630990 count
+pi-stress pass
+```
+
+> With only one iteration, min/mean/max are identical because there is a single inversion sample.
+
+---
+
+## LAVA integration
+
+1) Ensure the repository is available on the DUT (or fetched by your job).
+2) Use a LAVA test definition YAML to call `run.sh` and then send `.res` to LAVA.
+
+Minimal example (Linaro-style CLI mapping):
+
+```yaml
+metadata:
+ name: pi-stress
+ format: "Lava-Test Test Definition 1.0"
+ description: "Run rt-tests pi_stress and collect inversion KPI in JSON; parse results without requiring python3."
+ os:
+ - linux
+ scope:
+ - functional
+ - preempt-rt
+
+params:
+ OUT_DIR: "./logs_PI_Stress"
+ DURATION: "5m"
+ ITERATIONS: "1"
+ MLOCKALL: "false"
+ RR: "false"
+ BACKGROUND_CMD: ""
+ BINARY: ""
+
+run:
+ steps:
+ - cd Runner/suites/Kernel/RT-tests/PI_Stress
+ - ./run.sh --out "${OUT_DIR}" --duration "${DURATION}" --iterations "${ITERATIONS}" --mlockall "${MLOCKALL}" --rr "${RR}" --background-cmd "${BACKGROUND_CMD}" --binary "${BINARY}" || true
+ - ../../../../utils/send-to-lava.sh PI_Stress.res
+```
+
+---
+
+## Troubleshooting
+
+### Timestamps show 1970-01-01
+If the system clock is invalid at boot, logs may show epoch time.
If `functestlib.sh` provides `ensure_reasonable_clock()`, +the script attempts a **local-only** clock sanity step (RTC / kernel build time) before running. + +### pi_stress prints large inversion counts +The `inversion` KPI is a **total counter** per run; large values can be normal depending on CPU and load. +Use multiple iterations to compare distribution across runs. + +### Missing binary +Provide `--binary /path/to/pi_stress` or ensure `pi_stress` is in `$PATH`. + +--- + +## Exit codes + +- The script always exits `0` (LAVA-friendly). PASS/FAIL/SKIP is communicated via `PI_Stress.res`. + +--- + +## Maintainers / notes + +- Keep the implementation POSIX `sh` compatible and ShellCheck-clean. +- Prefer existing helpers in `functestlib.sh` and `lib_rt.sh` instead of adding new ones, unless necessary. diff --git a/Runner/suites/Kernel/RT-tests/PI_Stress/pi_stress.yaml b/Runner/suites/Kernel/RT-tests/PI_Stress/pi_stress.yaml new file mode 100755 index 00000000..3e0c9b83 --- /dev/null +++ b/Runner/suites/Kernel/RT-tests/PI_Stress/pi_stress.yaml @@ -0,0 +1,31 @@ +metadata: + name: PI_Stress + format: "Lava-Test Test Definition 1.0" + description: "Run rt-tests pi_stress priority inversion stress test and collect inversion KPIs from JSON without requiring python3." 
+ os: + - linux + scope: + - functional + - preempt-rt + +params: + DURATION: "1m" + MLOCKALL: "false" + RR: "false" + BACKGROUND_CMD: "" + ITERATIONS: "1" + USER_BASELINE: "" + + BINARY: "" + OUT_DIR: "./logs_PI_Stress" + + VERBOSE: "0" + PROGRESS_EVERY: "1" + HEARTBEAT_SEC: "10" + +run: + steps: + - REPO_PATH=$PWD + - cd Runner/suites/Kernel/RT-tests/PI_Stress + - ./run.sh --duration "${DURATION}" --mlockall "${MLOCKALL}" --rr "${RR}" --iterations "${ITERATIONS}" --background-cmd "${BACKGROUND_CMD}" --user-baseline "${USER_BASELINE}" --binary "${BINARY}" --out "${OUT_DIR}" --progress-every "${PROGRESS_EVERY}" --heartbeat-sec "${HEARTBEAT_SEC}" $( [ "${VERBOSE}" = "1" ] && echo "--verbose" ) || true + - $REPO_PATH/Runner/utils/send-to-lava.sh PI_Stress.res diff --git a/Runner/suites/Kernel/RT-tests/PI_Stress/run.sh b/Runner/suites/Kernel/RT-tests/PI_Stress/run.sh new file mode 100755 index 00000000..3f85566b --- /dev/null +++ b/Runner/suites/Kernel/RT-tests/PI_Stress/run.sh @@ -0,0 +1,432 @@ +#!/bin/sh +# Copyright (c) Qualcomm Technologies, Inc. and/or its subsidiaries. +# SPDX-License-Identifier: BSD-3-Clause +# +# PI_Stress wrapper for qcom-linux-testkit +# - Runs rt-tests pi_stress ITERATIONS times (JSON output) +# - Parses inversion count + pass/fail using lib_rt.sh (no python required) +# - Emits KPI lines to result.txt and summary PASS/FAIL/SKIP to PI_Stress.res +# +# Notes: +# - pi_stress may send SIGTERM when it detects failures; we ignore TERM so the +# wrapper can continue and still collect logs. +# - Always exits 0 (LAVA-friendly). Use PI_Stress.res for gating. 
+ +SCRIPT_DIR="$( + cd "$(dirname "$0")" || exit 1 + pwd +)" + +INIT_ENV="" +SEARCH="$SCRIPT_DIR" +while [ "$SEARCH" != "/" ]; do + if [ -f "$SEARCH/init_env" ]; then + INIT_ENV="$SEARCH/init_env" + break + fi + SEARCH=$(dirname "$SEARCH") +done + +if [ -z "$INIT_ENV" ]; then + echo "[ERROR] Could not find init_env (starting at $SCRIPT_DIR)" >&2 + exit 1 +fi + +if [ -z "${__INIT_ENV_LOADED:-}" ]; then + # shellcheck disable=SC1090 + . "$INIT_ENV" + __INIT_ENV_LOADED=1 +fi + +# shellcheck disable=SC1091 +. "$TOOLS/functestlib.sh" +# shellcheck disable=SC1091 +. "$TOOLS/lib_rt.sh" + +TESTNAME="PI_Stress" +test_path=$(find_test_case_by_name "$TESTNAME") +[ -n "$test_path" ] || test_path="$SCRIPT_DIR" + +RES_FILE="$test_path/${TESTNAME}.res" +OUT_DIR="${OUT_DIR:-$test_path/logs_${TESTNAME}}" +RESULT_TXT="${RESULT_TXT:-$OUT_DIR/result.txt}" + +# params (env/LAVA can override) +DURATION="${DURATION:-5m}" +MLOCKALL="${MLOCKALL:-false}" +RR="${RR:-false}" +BACKGROUND_CMD="${BACKGROUND_CMD:-}" +ITERATIONS="${ITERATIONS:-1}" +USER_BASELINE="${USER_BASELINE:-}" + +# Optional extras +BINARY="${BINARY:-}" +VERBOSE="${VERBOSE:-0}" +PROGRESS_EVERY="${PROGRESS_EVERY:-1}" +HEARTBEAT_SEC="${HEARTBEAT_SEC:-10}" + +usage() { + cat < add --mlockall (default: $MLOCKALL) + --rr BOOL true|false -> add --rr (default: $RR) + --iterations N Number of iterations (default: $ITERATIONS) + --user-baseline N Optional inversion baseline (count). 
If set, FAIL when + a majority of iterations exceed this baseline + (requires ITERATIONS >= 3) + + --background-cmd CMD Optional background workload + --binary PATH Explicit pi_stress binary path + --progress-every N Log progress every N iterations (default: $PROGRESS_EVERY) + --heartbeat-sec N Heartbeat interval in seconds (default: $HEARTBEAT_SEC) + --verbose Extra logs + -h, --help Help + +Examples: + $0 --binary /tmp/pi_stress --duration 1m --mlockall true --rr false --iterations 3 + $0 --duration 10 --iterations 3 --heartbeat-sec 1 + $0 --iterations 5 --user-baseline 10 +EOF +} + +while [ "$#" -gt 0 ]; do + case "$1" in + -h|--help) + usage + exit 0 + ;; + --out) + shift + OUT_DIR="$1" + ;; + --result) + shift + RESULT_TXT="$1" + ;; + --duration) + shift + DURATION="$1" + ;; + --mlockall) + shift + MLOCKALL="$1" + ;; + --rr) + shift + RR="$1" + ;; + --iterations) + shift + ITERATIONS="$1" + ;; + --user-baseline) + shift + USER_BASELINE="$1" + ;; + --background-cmd) + shift + BACKGROUND_CMD="$1" + ;; + --binary) + shift + BINARY="$1" + ;; + --progress-every) + shift + PROGRESS_EVERY="$1" + ;; + --heartbeat-sec) + shift + HEARTBEAT_SEC="$1" + ;; + --verbose) + VERBOSE=1 + ;; + *) + log_warn "Unknown option: $1" + usage + echo "$TESTNAME FAIL" >"$RES_FILE" + exit 1 + ;; + esac + shift +done + +LOG_PREFIX="$OUT_DIR/pi-stress" +TMP_ONE="$OUT_DIR/tmp_result_one.txt" +ITER_KPI="$OUT_DIR/iter_kpi.txt" +INV_VALUES="$OUT_DIR/inversion_values.txt" +GATE_KPI="$OUT_DIR/gate_kpi.txt" + +rt_prepare_output_layout \ + "$OUT_DIR" \ + "$RESULT_TXT" \ + "$TMP_ONE" \ + "$ITER_KPI" \ + "$INV_VALUES" \ + "$GATE_KPI" + +rt_check_clock_sanity "$TESTNAME" || true + +log_info "------------------- Starting $TESTNAME -------------------" +log_info "$TESTNAME: Checking for the tools required to run pi_stress" + +if ! 
rt_require_common_tools uname awk sed grep tr head tail mkdir cat sh tee sleep kill date sort wc; then + log_skip "$TESTNAME: basic tools missing" + echo "$TESTNAME SKIP" >"$RES_FILE" + exit 0 +fi + +if ! command -v perf_parse_rt_tests_json >/dev/null 2>&1; then + log_skip "$TESTNAME: perf_parse_rt_tests_json missing (lib_rt.sh not loaded?)" + echo "$TESTNAME SKIP" >"$RES_FILE" + exit 0 +fi + +if ! command -v rt_require_duration_seconds >/dev/null 2>&1; then + log_skip "$TESTNAME: rt_require_duration_seconds missing (lib_rt.sh not updated/loaded?)" + echo "$TESTNAME SKIP" >"$RES_FILE" + exit 0 +fi + +case "$ITERATIONS" in + ''|*[!0-9]*|0) + ITERATIONS=1 + ;; +esac + +case "$PROGRESS_EVERY" in + ''|*[!0-9]*|0) + PROGRESS_EVERY=1 + ;; +esac + +case "$HEARTBEAT_SEC" in + ''|*[!0-9]*|0) + HEARTBEAT_SEC=10 + ;; +esac + +PI_BIN=$(rt_resolve_binary pi_stress "$BINARY" 2>/dev/null || echo "") +if [ -z "$PI_BIN" ] || [ ! -x "$PI_BIN" ]; then + log_skip "$TESTNAME: pi_stress binary not found/executable (${PI_BIN:-none})" + echo "$TESTNAME SKIP" >"$RES_FILE" + exit 0 +fi + +PI_DURATION_SECS=$(rt_require_duration_seconds "$TESTNAME" "$DURATION") || { + echo "$TESTNAME FAIL" >"$RES_FILE" + exit 0 +} + +rt_log_common_runtime_env "$TESTNAME" "$PI_BIN" +log_info "$TESTNAME: iterations=$ITERATIONS duration=$DURATION (${PI_DURATION_SECS}s) mlockall=$MLOCKALL rr=$RR" +log_info "$TESTNAME: heartbeat=$HEARTBEAT_SEC seconds" + +if [ "$VERBOSE" -eq 1 ] 2>/dev/null; then + log_info "$TESTNAME: OUT_DIR=$OUT_DIR" + log_info "$TESTNAME: RESULT_TXT=$RESULT_TXT" + log_info "$TESTNAME: BACKGROUND_CMD=${BACKGROUND_CMD:-none}" + log_info "$TESTNAME: USER_BASELINE=${USER_BASELINE:-none}" +fi + +RT_INTERRUPTED=0 +export RT_INTERRUPTED + +trap '' TERM +trap 'rt_handle_int; perf_rt_bg_stop >/dev/null 2>&1 || true' INT +trap 'perf_rt_bg_stop >/dev/null 2>&1 || true' EXIT + +perf_rt_bg_start "$TESTNAME" "$BACKGROUND_CMD" + +overall_fail=0 +fail_count=0 + +baseline_ok=0 +case "$USER_BASELINE" in + '') + 
baseline_ok=0 + ;; + *[!0-9]*) + baseline_ok=0 + ;; + *) + baseline_ok=1 + ;; +esac + +RT_RUN_TARGET_DURATION_SECS="$PI_DURATION_SECS" +export RT_RUN_TARGET_DURATION_SECS + +i=1 +while [ "$i" -le "$ITERATIONS" ] 2>/dev/null; do + rt_log_iteration_progress "$TESTNAME" "$i" "$ITERATIONS" "$PROGRESS_EVERY" + + jsonfile="${LOG_PREFIX}-${i}.json" + stdoutlog="${OUT_DIR}/pi_stress_stdout_iter${i}.log" + + set -- "$PI_BIN" "-q" "-D" "$PI_DURATION_SECS" + + case "$MLOCKALL" in + true|TRUE|1|yes|YES) + set -- "$@" "--mlockall" + ;; + esac + + case "$RR" in + true|TRUE|1|yes|YES) + set -- "$@" "--rr" + ;; + esac + + set -- "$@" "--json=$jsonfile" + + if rt_run_json_iteration "$TESTNAME" "$HEARTBEAT_SEC" "$stdoutlog" "$jsonfile" "$@"; then + rc=$RT_RUN_RC + else + rc=$RT_RUN_RC + fi + + if [ "${RT_INTERRUPTED:-0}" -eq 1 ] 2>/dev/null; then + if [ -r "$jsonfile" ]; then + : >"$TMP_ONE" 2>/dev/null || true + if perf_parse_rt_tests_json "pi-stress" "$jsonfile" >"$TMP_ONE" 2>/dev/null; then + rt_append_iteration_kpi "$i" "$TMP_ONE" "$ITER_KPI" "$RESULT_TXT" || true + + inv=$(awk '/^inversion[[:space:]]+pass[[:space:]]+[0-9]+/ { print $3; exit }' "$TMP_ONE" 2>/dev/null) + if [ -n "$inv" ]; then + printf '%s\n' "$inv" >>"$INV_VALUES" 2>/dev/null || true + fi + fi + fi + + log_warn "$TESTNAME: interrupted by user during iteration $i/$ITERATIONS" + break + fi + + if [ "$rc" -ne 0 ] 2>/dev/null; then + log_fail "$TESTNAME: pi_stress exited rc=$rc (iter $i/$ITERATIONS)" + overall_fail=1 + fi + + if [ ! 
-r "$jsonfile" ]; then
+        log_fail "$TESTNAME: missing json output: $jsonfile"
+        overall_fail=1
+        i=$((i + 1))
+        continue
+    fi
+
+    : >"$TMP_ONE" 2>/dev/null || true
+    if perf_parse_rt_tests_json "pi-stress" "$jsonfile" >"$TMP_ONE" 2>/dev/null; then
+        rt_append_iteration_kpi "$i" "$TMP_ONE" "$ITER_KPI" "$RESULT_TXT" || true
+
+        inv=$(awk '/^inversion[[:space:]]+pass[[:space:]]+[0-9]+/ { print $3; exit }' "$TMP_ONE" 2>/dev/null)
+        if [ -n "$inv" ]; then
+            printf '%s\n' "$inv" >>"$INV_VALUES" 2>/dev/null || true
+        fi
+
+        if [ "$baseline_ok" -eq 1 ] 2>/dev/null && [ -n "$inv" ]; then
+            if [ "$inv" -gt "$USER_BASELINE" ] 2>/dev/null; then
+                fail_count=$((fail_count + 1))
+            fi
+        fi
+    else
+        log_fail "$TESTNAME: failed to parse json (iter $i/$ITERATIONS): $jsonfile"
+        overall_fail=1
+    fi
+
+    i=$((i + 1))
+done
+
+RT_RUN_TARGET_DURATION_SECS=""
+export RT_RUN_TARGET_DURATION_SECS
+
+perf_rt_bg_stop >/dev/null 2>&1 || true
+
+if [ -s "$ITER_KPI" ]; then
+    rt_emit_kpi_block "$TESTNAME" "per-iteration results" "$ITER_KPI"
+else
+    if [ "${RT_INTERRUPTED:-0}" -eq 1 ] 2>/dev/null; then
+        log_warn "$TESTNAME: no completed iteration data collected before interrupt"
+    fi
+fi
+
+if [ -s "$INV_VALUES" ]; then
+    agg=$(
+        awk '
+            BEGIN { min=""; max=""; sum=0; n=0 }
+            /^[0-9]+$/ {
+                v=$1
+                if (min=="" || v<min) min=v
+                if (max=="" || v>max) max=v
+                sum+=v
+                n++
+            }
+            END {
+                if (n>0) {
+                    mean=sum/n
+                    if (mean==int(mean)) printf("%d|%d|%d|%d\n", min, int(mean), max, n)
+                    else printf("%d|%.3f|%d|%d\n", min, mean, max, n)
+                }
+            }
+        ' "$INV_VALUES" 2>/dev/null
+    )
+
+    if [ -n "$agg" ]; then
+        inv_min=$(printf '%s' "$agg" | awk -F'|' '{print $1}')
+        inv_mean=$(printf '%s' "$agg" | awk -F'|' '{print $2}')
+        inv_max=$(printf '%s' "$agg" | awk -F'|' '{print $3}')
+        inv_n=$(printf '%s' "$agg" | awk -F'|' '{print $4}')
+
+        echo "pi-stress-inversion-min pass ${inv_min} count" >>"$RESULT_TXT" 2>/dev/null || true
+        echo "pi-stress-inversion-mean pass ${inv_mean} count" >>"$RESULT_TXT" 2>/dev/null || true
+        echo
"pi-stress-inversion-max pass ${inv_max} count" >>"$RESULT_TXT" 2>/dev/null || true + + log_info "$TESTNAME: pi-stress-inversion-min pass ${inv_min} count" + log_info "$TESTNAME: pi-stress-inversion-mean pass ${inv_mean} count" + log_info "$TESTNAME: pi-stress-inversion-max pass ${inv_max} count" + + if [ "$PI_DURATION_SECS" -gt 0 ] 2>/dev/null; then + inv_rate=$( + awk -v inv="$inv_mean" -v sec="$PI_DURATION_SECS" 'BEGIN { + if (sec > 0) printf("%.6f", inv/sec) + else printf("0.000000") + }' 2>/dev/null + ) + + echo "pi-stress-inversion-rate pass ${inv_rate} inv/s" >>"$RESULT_TXT" 2>/dev/null || true + log_info "$TESTNAME: pi-stress-inversion-rate pass ${inv_rate} inv/s" + fi + + if [ "$baseline_ok" -eq 1 ] 2>/dev/null; then + log_info "$TESTNAME: USER_BASELINE=$USER_BASELINE (fail_count=$fail_count over $inv_n runs)" + fi + fi +fi + +if [ "${RT_INTERRUPTED:-0}" -ne 1 ] 2>/dev/null && \ + [ "$baseline_ok" -eq 1 ] 2>/dev/null && \ + [ "$ITERATIONS" -ge 3 ] 2>/dev/null; then + fail_limit=$(((ITERATIONS + 1) / 2)) + : >"$GATE_KPI" 2>/dev/null || true + + echo "inversion-baseline pass ${USER_BASELINE} count" >"$GATE_KPI" + echo "inversion-fail-limit pass ${fail_limit} count" >>"$GATE_KPI" + echo "inversion-fail-count pass ${fail_count} count" >>"$GATE_KPI" + + cat "$GATE_KPI" >>"$RESULT_TXT" 2>/dev/null || true + rt_emit_kpi_block "$TESTNAME" "baseline comparison results" "$GATE_KPI" + + if [ "$fail_count" -ge "$fail_limit" ] 2>/dev/null; then + overall_fail=1 + fi +fi + +rt_emit_interrupt_aware_result "$TESTNAME" "$RES_FILE" "$RESULT_TXT" "$OUT_DIR" "${RT_INTERRUPTED:-0}" "$overall_fail" +exit 0 diff --git a/Runner/suites/Kernel/RT-tests/PMQTest/PMQTest_README.md b/Runner/suites/Kernel/RT-tests/PMQTest/PMQTest_README.md new file mode 100644 index 00000000..03a9418c --- /dev/null +++ b/Runner/suites/Kernel/RT-tests/PMQTest/PMQTest_README.md @@ -0,0 +1,217 @@ + + +# PMQTest (rt-tests) — Runner integration + +This test runs **pmqtest** (from the `rt-tests` suite) and 
reports latency KPIs per-thread and as aggregates, following the same conventions used by other RT-tests in `qcom-linux-testkit`.
+
+The runner is designed to be:
+- **POSIX + ShellCheck clean**
+- **LAVA-friendly** (writes a `.res` summary, logs to `logs_<TESTNAME>/`, exits 0)
+- **Deterministic KPIs** (parses JSON output from pmqtest)
+
+---
+
+## What PMQTest measures
+
+`pmqtest` is a real-time scheduling latency workload from `rt-tests`. It creates multiple sender/receiver threads and records receiver latencies:
+
+- **min** latency (µs)
+- **avg** latency (µs)
+- **max** latency (µs)
+
+In this runner, KPIs are emitted:
+- **Per-thread** (`t0..tN`) for each iteration
+- **Aggregated across all threads and iterations**
+- **Worst-thread** (thread id with the highest observed `max` latency)
+
+---
+
+## Prerequisites
+
+### On DUT
+- `pmqtest` binary available on the DUT (e.g. from `rt-tests`)
+- Root privileges recommended (RT priority / memlock / scheduling)
+- Tools (typical): `awk`, `sed`, `grep`, `tr`, `head`, `date`, `uname`, `nproc`
+
+### Recommended kernel/runtime settings
+- PREEMPT / RT kernel for meaningful comparison
+- `sched_rt_runtime_us` configured appropriately
+- `ulimit -l` (memlock) sufficiently high (runner prints current values)
+
+> Note: The runner may warn if the kernel does not look RT-enabled. This warning does **not** fail the test.
+
+---
+
+## How it runs
+
+For each iteration, the runner:
+
+1. Runs `pmqtest` with JSON output enabled (one JSON file per iteration)
+2. Parses thread metrics from the JSON and emits standardized KPI lines
+3. Computes aggregates (min/mean/max) across all observed threads and iterations
+
+---
+
+## Usage
+
+From the PMQTest directory:
+
+```sh
+./run.sh --binary /tmp/pmqtest --duration 1m --iterations 3
+```
+
+### Arguments
+
+| Option | Description | Default |
+|---|---|---|
+| `--binary <path>` | Path to pmqtest binary | required (or runner default if set) |
+| `--duration