Backtest Strategy Optimizer Adapter
With this library, you can run one or many backtests with different variable combinations. For example, you can run 100 Supertrend backtests at once, incrementing the factor by 0.1 each run, and then pick the most profitable settings to apply to your strategy.
The examples below show how the library's main functions are called.
Single backtest results
= backtest.results(date_start, date_end, long_entry, long_exit, take_profit_percentage, stop_loss_percentage, atr_length, initial_capital, order_size, commission)
Add backtest results to a table
backtest.table(initial_capital, profit_and_loss, open_balance, winrate, entries, exits, wins, losses, backtest_table_position, backtest_table_margin, backtest_table_transparency, backtest_table_cell_color, backtest_table_title_cell_color, backtest_table_text_color)
Backtest result without chart labels
= backtest.run(date_start, date_end, long_entry, long_exit, take_profit_percentage, stop_loss_percentage, atr_length, initial_capital, order_size, commission)
Backtest result profit
profit = backtest.profit(date_start, date_end, long_entry, long_exit, take_profit_percentage, stop_loss_percentage, atr_length, initial_capital, order_size, commission)
Backtest result winrate
winrate = backtest.winrate(date_start, date_end, long_entry, long_exit, take_profit_percentage, stop_loss_percentage, atr_length, initial_capital, order_size, commission)
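A minimal end-to-end sketch of a single run (the import path below is a placeholder rather than the library's real publisher/version, and the signal logic is only an example):
//@version=5
indicator("Single backtest sketch", overlay = true)
// Placeholder import path - replace with the library's actual publisher and version.
import username/BacktestLibrary/1 as backtest

// Inputs mirroring the parameter names documented above.
date_start = input.time(timestamp("1 Jan 2023 00:00 +0000"), "Start date")
date_end   = input.time(timestamp("1 Jan 2024 00:00 +0000"), "End date")

// Example entry/exit signals: a simple moving-average crossover.
fast = ta.sma(close, 20)
slow = ta.sma(close, 50)
long_entry = ta.crossover(fast, slow)
long_exit  = ta.crossunder(fast, slow)

// Argument order follows the signatures above; the exact units of
// order_size and commission depend on the library's documentation.
profit  = backtest.profit(date_start, date_end, long_entry, long_exit, 3.0, 1.5, 14, 10000, 100, 0.075)
winrate = backtest.winrate(date_start, date_end, long_entry, long_exit, 3.0, 1.5, 14, 10000, 100, 0.075)
plot(profit, "Profit")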
Start Date
You can set the start date either with a timestamp or with a plain number, which is read as a number of bars back.
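For example (a small sketch; how a plain number is interpreted follows the library's documentation):
// Option 1: a fixed calendar date, as a timestamp.
date_start = timestamp("1 Jan 2023 00:00 +0000")
// Option 2: a plain number, read as that many bars back from the current bar.
// date_start = 5000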
Stop Loss / Take Profit Issue
Unfortunately, I did not manage to achieve 100% accuracy for the take profit and stop loss. TradingView's native backtester can exit at the exact price within a bar using the strategy.exit stop and limit parameters. However, this seems unachievable with crossunder/crossover functions in Pine Script unless they are evaluated on every tick (which would invalidate the backtest results). So far, I have not found a workaround, and I would be grateful if someone could solve this issue, if it is even possible. If you have any solutions or fixes, please let me know!
Multiple Backtest Results / Optimizer
You can run multiple backtests in a single strategy or indicator, but the code has to be placed and structured in a particular way. For examples of running multiple backtests, see the links in the updates posted below; the samples also explain how to auto-generate the code for your backtest strategy. A simplified sketch of the pattern follows.
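As a rough illustration of the pattern (not the auto-generated code itself; the import path is again a placeholder), each variable combination becomes one call, and the results can then be compared:
//@version=5
indicator("Optimizer sketch")
// Placeholder import path - replace with the library's actual publisher and version.
import username/BacktestLibrary/1 as backtest

date_start = input.time(timestamp("1 Jan 2023 00:00 +0000"), "Start date")
date_end   = input.time(timestamp("1 Jan 2024 00:00 +0000"), "End date")
long_entry = ta.crossover(ta.sma(close, 20), ta.sma(close, 50))
long_exit  = ta.crossunder(ta.sma(close, 20), ta.sma(close, 50))

// One call per take-profit setting - in practice these lines are auto-generated.
profit_tp_10 = backtest.profit(date_start, date_end, long_entry, long_exit, 1.0, 1.5, 14, 10000, 100, 0.075)
profit_tp_20 = backtest.profit(date_start, date_end, long_entry, long_exit, 2.0, 1.5, 14, 10000, 100, 0.075)
profit_tp_30 = backtest.profit(date_start, date_end, long_entry, long_exit, 3.0, 1.5, 14, 10000, 100, 0.075)

// Compare the runs to find the best-performing setting.
best = math.max(profit_tp_10, profit_tp_20, profit_tp_30)
plot(best, "Best profit")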
AI
FunctionNNLayer
Library "FunctionNNLayer"
Generalized Neural Network Layer method.
function(inputs, weights, n_nodes, activation_function, bias, alpha, scale) Generalized Layer.
Parameters:
inputs : float array, input values.
weights : float array, weight values.
n_nodes : int, number of nodes in layer.
activation_function : string, default='sigmoid', name of the activation function used.
bias : float, default=1.0, bias to pass into activation function.
alpha : float, default=na, if required to pass into activation function.
scale : float, default=na, if required to pass into activation function.
Returns: float
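A usage sketch based on the signature above (the import path and the flattened per-node weight layout are assumptions; check the library's own notes):
//@version=5
indicator("NN layer sketch")
// Placeholder import path - replace with the actual publisher and version.
import username/FunctionNNLayer/1 as nn

// 3 inputs feeding a 2-node layer; weights assumed flattened node by node (3 * 2 = 6 values).
inputs  = array.from(0.5, 0.1, -0.3)
weights = array.from(0.2, -0.4, 0.7, 0.5, 0.3, -0.1)
// alpha and scale are omitted here since they default to na.
output = nn.function(inputs, weights, 2, 'sigmoid', 1.0)
plot(output)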
FunctionNNPerceptron
Library "FunctionNNPerceptron"
Perceptron Function for Neural networks.
function(inputs, weights, bias, activation_function, alpha, scale) generalized perceptron node for Neural Networks.
Parameters:
inputs : float array, the inputs of the perceptron.
weights : float array, the weights for inputs.
bias : float, default=1.0, the default bias of the perceptron.
activation_function : string, default='sigmoid', activation function applied to the output.
alpha : float, default=na, if required for activation.
scale : float, default=na, if required for activation.
Returns: float
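Conceptually, the node computes an activation of the weighted input sum plus a bias; a standalone sketch of that math (not the library's source):
//@version=5
indicator("Perceptron sketch")

// Logistic activation.
sigmoid(float x) => 1.0 / (1.0 + math.exp(-x))

// output = sigmoid(sum(inputs * weights) + bias)
perceptron(array<float> inputs, array<float> weights, float bias) =>
    float s = bias
    for i = 0 to array.size(inputs) - 1
        s += array.get(inputs, i) * array.get(weights, i)
    sigmoid(s)

plot(perceptron(array.from(0.5, -0.2), array.from(0.8, 0.3), 1.0))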
MLActivationFunctions
Library "MLActivationFunctions"
Activation functions for Neural networks.
binary_step(value) Basic threshold output classifier to activate/deactivate a neuron.
Parameters:
value : float, value to process.
Returns: float
linear(value) Input is the same as output.
Parameters:
value : float, value to process.
Returns: float
sigmoid(value) Sigmoid or logistic function.
Parameters:
value : float, value to process.
Returns: float
sigmoid_derivative(value) Derivative of sigmoid function.
Parameters:
value : float, value to process.
Returns: float
tanh(value) Hyperbolic tangent function.
Parameters:
value : float, value to process.
Returns: float
tanh_derivative(value) Hyperbolic tangent function derivative.
Parameters:
value : float, value to process.
Returns: float
relu(value) Rectified linear unit (RELU) function.
Parameters:
value : float, value to process.
Returns: float
relu_derivative(value) RELU function derivative.
Parameters:
value : float, value to process.
Returns: float
leaky_relu(value) Leaky RELU function.
Parameters:
value : float, value to process.
Returns: float
leaky_relu_derivative(value) Leaky RELU function derivative.
Parameters:
value : float, value to process.
Returns: float
relu6(value) RELU-6 function.
Parameters:
value : float, value to process.
Returns: float
softmax(value) Softmax function.
Parameters:
value : float array, values to process.
Returns: float
softplus(value) Softplus function.
Parameters:
value : float, value to process.
Returns: float
softsign(value) Softsign function.
Parameters:
value : float, value to process.
Returns: float
elu(value, alpha) Exponential Linear Unit (ELU) function.
Parameters:
value : float, value to process.
alpha : float, default=1.0, predefined constant, controls the value to which an ELU saturates for negative net inputs.
Returns: float
selu(value, alpha, scale) Scaled Exponential Linear Unit (SELU) function.
Parameters:
value : float, value to process.
alpha : float, default=1.67326324, predefined constant, controls the value to which a SELU saturates for negative net inputs.
scale : float, default=1.05070098, predefined constant.
Returns: float
exponential(value) Pointer to math.exp() function.
Parameters:
value : float, value to process.
Returns: float
function(name, value, alpha, scale) Activation function.
Parameters:
name : string, name of activation function.
value : float, value to process.
alpha : float, default=na, if required.
scale : float, default=na, if required.
Returns: float
derivative(name, value, alpha, scale) Derivative Activation function.
Parameters:
name : string, name of activation function.
value : float, value to process.
alpha : float, default=na, if required.
scale : float, default=na, if required.
Returns: float
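For reference, a few of these activations written out directly (standard definitions; a sketch rather than the library's source):
//@version=5
indicator("Activation sketches")

sigmoid(float x) => 1.0 / (1.0 + math.exp(-x))
relu(float x) => math.max(0.0, x)
// 0.01 is a common leak slope; the library's value may differ.
leaky_relu(float x) => x >= 0 ? x : 0.01 * x
elu(float x, float alpha) => x >= 0 ? x : alpha * (math.exp(x) - 1.0)
selu(float x, float alpha, float scale) => scale * (x >= 0 ? x : alpha * (math.exp(x) - 1.0))

// Example: squash a normalized price change into the (0, 1) range.
plot(sigmoid(close / open - 1))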
MLLossFunctions
Library "MLLossFunctions"
Methods for Loss functions.
mse(expects, predicts) Mean Squared Error (MSE): "MSE = 1/N * sum((y - y')^2)".
Parameters:
expects : float array, expected values.
predicts : float array, prediction values.
Returns: float
binary_cross_entropy(expects, predicts) Binary Cross-Entropy Loss (log).
Parameters:
expects : float array, expected values.
predicts : float array, prediction values.
Returns: float
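Written out per the formulas above (a sketch; not the library's source):
//@version=5
indicator("Loss sketches")

// MSE = 1/N * sum((y - y')^2)
mse(array<float> expects, array<float> predicts) =>
    float s = 0.0
    int n = array.size(expects)
    for i = 0 to n - 1
        float d = array.get(expects, i) - array.get(predicts, i)
        s += d * d
    s / n

// Binary cross-entropy: -1/N * sum(y*log(p) + (1-y)*log(1-p))
binary_cross_entropy(array<float> expects, array<float> predicts) =>
    float s = 0.0
    int n = array.size(expects)
    for i = 0 to n - 1
        float y = array.get(expects, i)
        float p = array.get(predicts, i)
        s += y * math.log(p) + (1.0 - y) * math.log(1.0 - p)
    -s / n

plot(mse(array.from(1.0, 0.0), array.from(0.9, 0.2)))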