bench
Library "bench"
A simple benchmark library for analysing script performance and identifying bottlenecks.
Very useful if you are developing an overly complex application in Pine Script, or trying to optimise a library / function / algorithm...
Supports artificial looping benchmarks (of fast functions)
Supports integrated linear benchmarks (of expensive scripts)
One important thing to note is that the Pine Script compiler will completely ignore any calculations that do not eventually produce chart output. Therefore, if you are performing an artificial benchmark you will need to use the bench.reference(value) function to ensure the calculations are executed.
Please check the examples towards the bottom of the script.
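As a quick illustration, here is a minimal sketch (assuming the library has already been imported under the alias bench) of keeping an otherwise unused calculation alive with bench.reference():
```
// Without the reference call, the compiler may drop this loop entirely,
// because `total` never reaches the chart on its own
float total = 0.0
for i = 0 to 999
    total += math.sqrt(i)
bench.reference(total)
```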
Quick Reference
(Be warned this uses non-standard space characters to get the line indentation to work in the description!)
```
// Looping benchmark style
benchmark = bench.new(samples = 500, loops = 5000)
data = array.new_int()
if bench.start(benchmark)
    while bench.loop(benchmark)
        array.unshift(data, timenow)
    bench.mark(benchmark)
    while bench.loop(benchmark)
        array.unshift(data, timenow)
    bench.mark(benchmark)
    while bench.loop(benchmark)
        array.unshift(data, timenow)
    bench.stop(benchmark)
bench.reference(array.get(data, 0))
bench.report(benchmark, '1x array.unshift()')

// Linear benchmark style
benchmark = bench.new()
data = array.new_int()
bench.start(benchmark)
for i = 0 to 1000
    array.unshift(data, timenow)
bench.mark(benchmark)
for i = 0 to 1000
    array.unshift(data, timenow)
bench.stop(benchmark)
bench.reference(array.get(data, 0))
bench.report(benchmark, '1000x array.unshift()')
```
Detailed Interface
new(samples, loops) Initialises a new benchmark array
Parameters:
samples : int, the number of bars in which to collect samples
loops : int, the number of loops to execute within each sample
Returns: int[], the benchmark array
active(benchmark) Determines if the benchmark's state is active
Parameters:
benchmark : int[], the benchmark array
Returns: bool, true only if the state is active
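A minimal sketch of gating work on the benchmark state, reusing the `benchmark` and `data` variables from the Quick Reference above:
```
// Only run the instrumented code while the benchmark is still collecting samples
if bench.active(benchmark)
    array.unshift(data, timenow)
```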
start(benchmark) Start recording a benchmark from this point
Parameters:
benchmark : int[], the benchmark array
Returns: bool, true only if the benchmark is unfinished
loop(benchmark) Returns true until the call count exceeds the loops value passed to bench.new()
Parameters:
benchmark : int[], the benchmark array
Returns: bool, true while looping
reference(number, string) Add a compiler reference to the chart so the calculations don't get optimised away
Parameters:
number : float, a numeric value to reference
string : string, a string value to reference
mark(benchmark, number, string) Marks the end of one recorded interval and the start of the next
Parameters:
benchmark : int[], the benchmark array
number : float, a numeric value to reference
string : string, a string value to reference
stop(benchmark, number, string) Stop the benchmark, ending the final interval
Parameters:
benchmark : int[], the benchmark array
number : float, a numeric value to reference
string : string, a string value to reference
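Because stop() also accepts the reference parameters, the final compiler reference can be folded into the same call; a sketch reusing the `data` array from the Quick Reference:
```
// End the final interval and register a compiler reference in one call
bench.stop(benchmark, number = array.get(data, 0))
```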
report(benchmark, title, text_size, position) Prints the benchmark's results to the screen
Parameters:
benchmark : int[], the benchmark array
title : string, add a custom title to the report
text_size : string, the text size of the log console (global size vars)
position : string, the position of the log console (global position vars)
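A sketch of a customised report call; the size and position values shown are Pine's built-in constants, which the parameter descriptions appear to refer to:
```
// Larger console text, anchored to the bottom right of the chart
bench.report(benchmark, title = '1000x array.unshift()', text_size = size.large, position = position.bottom_right)
```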
unittest_bench(case) Bench module unit tests, for inclusion in a parent script's test suite. Usage: bench.unittest_bench(__ASSERTS)
Parameters:
case : string[], the current test case and array of previous unit tests (__ASSERTS)
unittest(verbose) Run the bench module unit tests as a stand-alone script. Usage: bench.unittest()
Parameters:
verbose : bool, optionally disable the full report to only display failures
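A sketch of a stand-alone test runner; the import path is a placeholder for the published library:
```
//@version=5
indicator("bench unit tests")
// Placeholder import path: replace `username` with the library author's handle
import username/bench/1 as bench
// Run the module's own tests; passing verbose = false would limit output to failures (assumption)
bench.unittest()
```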