# Spawn & Await
Wyn uses real OS threads for spawn — each spawned task runs on a thread pool worker, giving true multi-core parallelism. No green threads, no cooperative scheduling limitations.
## Performance
| Metric | Value |
|---|---|
| Spawn overhead | 6μs per spawn+await |
| CPU scaling (4x fib(38)) | 4x speedup on 4 cores |
| I/O scaling (4x sleep) | 4x speedup |
| 10K spawn+await | 47ms |
| Thread pool | Sized to CPU count, 512KB stacks |
## Basic Spawn

```wyn
fn compute(n: int) -> int {
    var sum = 0
    for i in 0..n { sum = sum + i }
    return sum
}

fn main() -> int {
    var f1 = spawn compute(100000)
    var f2 = spawn compute(200000)
    var total = await f1 + await f2
    println("total = ${total}")
    return 0
}
```

`spawn` starts a function on a thread pool worker and returns a future. `await` blocks until the result is ready.
## CPU Parallelism

```wyn
fn fib(n: int) -> int {
    if n <= 1 { return n }
    return fib(n - 1) + fib(n - 2)
}

fn main() -> int {
    // Each fib(38) takes ~240ms
    // All 4 complete in ~240ms total = 4x speedup
    var a = spawn fib(38)
    var b = spawn fib(38)
    var c = spawn fib(38)
    var d = spawn fib(38)
    println((await a + await b + await c + await d).to_string())
    return 0
}
```

## I/O Parallelism
```wyn
fn fetch(url: string) -> string {
    return http_get(url)
}

fn main() -> int {
    // All 3 requests run simultaneously
    var f1 = spawn fetch("https://api.example.com/users")
    var f2 = spawn fetch("https://api.example.com/posts")
    var f3 = spawn fetch("https://api.example.com/comments")
    println(await f1)
    println(await f2)
    println(await f3)
    return 0
}
```

## Shared Atomic Values
For lock-free shared state between spawns:
```wyn
fn increment(counter: int) -> int {
    Shared.add(counter, 1)
    return 0
}

fn main() -> int {
    var counter = Shared.new(0)
    for i in 0..100 {
        var f = spawn increment(counter)
        await f
    }
    println("counter = ${Shared.get(counter)}") // Always 100
    return 0
}
```

## Channels
```wyn
fn producer(ch: int) -> int {
    for i in 0..10 {
        Task.send(ch, i)
    }
    Task.close(ch)
    return 0
}

fn main() -> int {
    var ch = Task.channel(10)
    spawn producer(ch)
    var val = Task.recv(ch)
    while val >= 0 { // recv returns a negative value once the channel is closed and drained
        println(val.to_string())
        val = Task.recv(ch)
    }
    return 0
}
```

## How It Works
```
spawn f(x) → allocate task → enqueue to thread pool → worker picks up → runs f(x)
await f    → spin briefly → sched_yield() → return result when ready
```

- Thread pool: fixed pool sized to CPU count, created on first spawn
- Lock-free futures: slab-allocated, recycled after await — zero malloc per spawn
- Task queue: mutex-protected ring buffer, condition variable wake
- Fallback: if the queue is full, the task runs inline on the calling thread
## Try It
Press Run or Ctrl+Enter
## See Also
- Channels — communicate between tasks
- Closures — pass functions to spawned tasks
- Benchmarks — spawn/await performance numbers