
fix: remove mutex/io dependency from the dsl#20

Open
nazarhussain wants to merge 1 commit into main from nh/dsl-remove-io-dep

Conversation

@nazarhussain
Contributor

Replace std.Thread.Mutex with an atomic spinlock in the DSL class registry, eliminating the need to thread std.Io through the public API when migrating to Zig 0.16.

Motivation

Zig 0.16 replaces std.Thread.Mutex with std.Io.Mutex, which requires an Io instance for lock(io)/unlock(io). PR #9 addresses this by adding a mandatory .io option to exportModule and threading Io through registerDecls → registerClass → the internal mutex.

This is problematic because:

  • The DSL is ~99% comptime — Io is an unrelated runtime concern
  • The mutex is an internal implementation detail of the class registry, not something users should provide
  • It breaks the clean exportModule API for every DSL consumer

Approach

The class registry mutex protects a short linked list with very low contention:

  • Writes: module init/teardown only (once per class per env)
  • Reads: materializeClassInstance (when a DSL function returns a class)
  • Deletes: environment cleanup hooks

A spinlock using std.atomic.Value(bool) + cmpxchgWeak is ideal here — nanosecond critical sections, near-zero contention, and zero dependency on std.Thread or std.Io.
The public API remains unchanged.
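As a rough sketch of the approach (illustrative only; the actual implementation lives in src/js/class_runtime.zig and may differ in detail):

```zig
const std = @import("std");

// A single flag guards the registry: false = unlocked, true = locked.
var locked: std.atomic.Value(bool) = std.atomic.Value(bool).init(false);

fn lock() void {
    // cmpxchgWeak returns null on success, so loop until we win the race.
    while (locked.cmpxchgWeak(false, true, .acquire, .monotonic) != null) {
        std.atomic.spinLoopHint();
    }
}

fn unlock() void {
    // A release store pairs with the acquire in lock(), publishing all
    // writes made inside the critical section to the next acquirer.
    locked.store(false, .release);
}
```

Because the critical sections are tiny and contention is near zero, the fairness and priority-inversion caveats of spinlocks don't bite here.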

Changes

  • src/js/class_runtime.zig: Replace std.Thread.Mutex with atomic spinlock (lock/unlock using cmpxchgWeak + spinLoopHint)
  • No changes to export_module.zig, examples, or any user-facing API

Comment thread src/js/class_runtime.zig
var locked: std.atomic.Value(bool) = std.atomic.Value(bool).init(false);

fn lock() void {
    while (locked.cmpxchgWeak(false, true, .acquire, .monotonic) != null) {
        std.atomic.spinLoopHint();
    }
}
Contributor Author


For anyone curious about how this code works, here is a short explanation.

CPUs provide a single hardware instruction that does two things atomically (cannot be interrupted):

▎ "Read the value. If it equals X, replace it with Y. Tell me if it worked."

This is cmpxchg — it's a single CPU instruction, not two separate operations. The CPU's cache coherency protocol guarantees that only one core can win the race.

The signature is:

locked.cmpxchgWeak(expected_value, new_value, success_order, fail_order)
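Semantically, the operation behaves like the following pseudocode, except that in hardware the entire sequence is one indivisible instruction (this is an illustrative sketch, not the real std implementation):

```zig
// Illustrative only: what compare-and-exchange does, written as ordinary
// (non-atomic) code. The real operation performs all of this atomically.
fn cmpxchgPseudocode(ptr: *bool, expected: bool, new: bool) ?bool {
    const current = ptr.*;
    if (current == expected) {
        ptr.* = new;
        return null; // success: the swap happened
    }
    return current; // failure: report the value actually observed
}
```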

So with this call we instruct the CPU:

  1. Success case: if the locked value is false, set it to true, i.e. mark the resource locked.
  2. Fail case: if the locked value is not false, keep trying in the loop.
  3. In the success case, use acquire ordering, so no later memory operation can be reordered before the lock acquisition.
  4. In the fail case, monotonic ordering is enough: a failed attempt guards nothing, so it only needs atomicity, not any ordering guarantee.
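The acquire in lock() pairs with a release store in unlock(): writes made while holding the lock are guaranteed visible to the next thread that acquires it. A self-contained sketch of that invariant (protected_counter and increment are illustrative names, not from the PR):

```zig
const std = @import("std");

var locked = std.atomic.Value(bool).init(false);
var protected_counter: u32 = 0; // plain (non-atomic) data guarded by the lock

fn lock() void {
    while (locked.cmpxchgWeak(false, true, .acquire, .monotonic) != null) {
        std.atomic.spinLoopHint();
    }
}

fn unlock() void {
    locked.store(false, .release); // publishes protected_counter's new value
}

fn increment() void {
    lock();
    defer unlock(); // defer keeps lock/unlock paired on every exit path
    protected_counter += 1;
}
```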

Comment thread src/js/class_runtime.zig

fn lock() void {
    while (locked.cmpxchgWeak(false, true, .acquire, .monotonic) != null) {
        std.atomic.spinLoopHint();
    }
}
Contributor Author


We could leave the loop body empty (a busy no-op) or sleep between attempts. An empty body burns CPU at full speed; sleeping adds far more latency than the nanosecond critical sections warrant.

spinLoopHint() emits a PAUSE instruction (x86) or YIELD (ARM). This tells the CPU not to waste power or starve the sibling hyperthread. Without it, the tight loop burns CPU unnecessarily.

@nazarhussain nazarhussain changed the title refactor: remove mutex/io dependency from the dsl fix: remove mutex/io dependency from the dsl Apr 22, 2026
@GrapeBaBa

GrapeBaBa commented Apr 22, 2026

If it doesn't provide Io, user functions also can't use any Io-related functionality (e.g. Timer, sleep, deadline, std.Io.File), such as https://github.com/ChainSafe/zapi/pull/9/changes#diff-73bd664bb3f77929a0c3aa8d7af66c6ed2008813ab8eef8e43030b33f7d8510eR212 — is that a problem?
