All benchmarks run on GitHub Actions, using the `ubuntu-latest` runner. We measure various metrics of the following applications:
| Tauri | Wry | Electron |
| --- | --- | --- |
| tauri_cpu_intensive | wry_cpu_intensive | electron_cpu_intensive |
| tauri_hello_world | wry_hello_world | electron_hello_world |
| tauri_3mb_transfer | wry_custom_protocol | electron_3mb_transfer |
```typescript
interface ExecTimeData {
  mean: number;
  stddev: number;
  user: number;
  system: number;
  min: number;
  max: number;
}

interface BenchmarkData {
  created_at: string;
  sha1: string;
  exec_time: {
    [key: string]: ExecTimeData;
  };
  binary_size: {
    [key: string]: number;
  };
  thread_count: {
    [key: string]: number;
  };
  syscall_count: {
    [key: string]: number;
  };
  cargo_deps: {
    [key: string]: number;
  };
}
```
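As an example of consuming this shape, here is a small sketch of a helper that picks the fastest benchmark out of an `exec_time` map. The function name is illustrative, not part of the benchmark tooling; the interface is repeated so the sketch is self-contained.

```typescript
// Repeated from the schema above so this sketch compiles on its own.
interface ExecTimeData {
  mean: number;
  stddev: number;
  user: number;
  system: number;
  min: number;
  max: number;
}

// Hypothetical helper: return the name of the benchmark with the
// lowest mean execution time in one BenchmarkData entry.
function fastestBenchmark(execTime: { [key: string]: ExecTimeData }): string {
  let best = "";
  let bestMean = Infinity;
  for (const [name, data] of Object.entries(execTime)) {
    if (data.mean < bestMean) {
      bestMean = data.mean;
      best = name;
    }
  }
  return best;
}
```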
This shows the total time it takes to initialize the application and wait for the `DOMContentLoaded` event. We use `hyperfine` under the hood and run 3 warm-up sequences; then we run 10 sequences to calculate the average execution time.
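The statistics reported are the usual ones over the measured runs: warm-up runs are discarded, and the mean and (sample) standard deviation are computed over the remaining timings. A minimal sketch, with illustrative function names not taken from the actual tooling:

```typescript
// Mean of the measured run times (warm-up runs already discarded).
function mean(xs: number[]): number {
  return xs.reduce((a, b) => a + b, 0) / xs.length;
}

// Sample standard deviation (divide by n - 1) over the same runs.
function stddev(xs: number[]): number {
  const m = mean(xs);
  const variance = xs.reduce((acc, x) => acc + (x - m) ** 2, 0) / (xs.length - 1);
  return Math.sqrt(variance);
}
```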
We track the size of various files here. All binaries are compiled in release mode.
We use `time -v` to get the max memory usage during execution. Smaller is better.
How many threads the application uses. Smaller is better.
How many total syscalls are performed when executing a given application. Smaller is better.
The CPU intensive benchmark measures how much time it takes to calculate all the prime numbers under XXXX without blocking the UI, reporting how many have been found so far, using web workers.
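The workload itself can be sketched as counting primes by trial division. In the real benchmark this loop runs inside a Web Worker that periodically posts the running count back to the UI thread, so rendering is never blocked; the function below is only an illustration of the computation, not the benchmark's actual code.

```typescript
// Count the primes strictly below `limit` by trial division.
// In the benchmark, this work happens in a Web Worker that
// reports progress to the main thread via postMessage.
function countPrimesBelow(limit: number): number {
  let count = 0;
  for (let n = 2; n < limit; n++) {
    let isPrime = true;
    for (let d = 2; d * d <= n; d++) {
      if (n % d === 0) {
        isPrime = false;
        break;
      }
    }
    if (isPrime) count++;
  }
  return count;
}
```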
This benchmark measures how long it takes to get an application fully started.
Tests WRY with a custom protocol serving local files.
We would like to thank the authors of and contributors to Deno for their groundbreaking work, upon which this benchmarking system is based and leans heavily. Thank you!