regressions
Library "regressions"
This library computes least squares regression models for polynomials of any form for a given data set of x and y values.
fit(X, y, reg_type, degrees)
Takes a list of X and y values and the degrees of the polynomial, and returns a least squares regression for the given polynomial on the dataset.
Parameters:
X (array) : (float[]) X inputs for regression fit.
y (array) : (float[]) y outputs for regression fit.
reg_type (string) : (string) The type of regression. If passing a value for degrees, use reg.type_custom.
degrees (array) : (int[]) The degrees of the polynomial which will be fit to the data. ex: passing array.from(0, 3) would be a polynomial of form c1*x^0 + c2*x^3, where c1 and c2 will be the coefficients of the best-fitting polynomial.
Returns: (regression) Returns a regression with the best-fitting coefficients for the selected polynomial.
regress(reg, x)
Regress one x input.
Parameters:
reg (regression) : (regression) The fitted regression which the y_pred will be calculated with.
x (float) : (float) The input value corresponding to the y_pred.
Returns: (float) The best fit y value for the given x input and regression.
predict(reg, X)
Predict a new set of X values with a fitted regression. -1 is one bar ahead of the realtime bar.
Parameters:
reg (regression) : (regression) The fitted regression which the y_pred will be calculated with.
X (array) : (float[]) The x values to predict y values for.
Returns: (float[]) The best fit y values for the given X inputs and regression.
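A minimal usage sketch, assuming the library is imported under the alias reg (the publisher/version in the import path are placeholders, and reg.type_custom is taken from the docs above):
```
//@version=5
indicator("regressions demo", overlay=true)
import username/regressions/1 as reg  // placeholder path

if barstate.islast
    // Build x/y sample arrays from the last 50 bars, oldest first.
    X = array.new<float>()
    y = array.new<float>()
    for i = 0 to 49
        array.push(X, bar_index - 49 + i)
        array.push(y, close[49 - i])
    // Fit c1*x^0 + c2*x^3, mirroring the degrees example above.
    r = reg.fit(X, y, reg.type_custom, array.from(0, 3))
    // Evaluate the fit at the current bar's x.
    y_hat = reg.regress(r, bar_index)
    label.new(bar_index, y_hat, "fit: " + str.tostring(y_hat))
```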
generate_points(reg, x, y, left_index, right_index)
Takes a regression object and creates chart points which can be used for plotting visuals like lines and labels.
Parameters:
reg (regression) : (regression) Regression which has been fitted to a data set.
x (array) : (float[]) x values which correspond to the passed y values
y (array) : (float[]) y values which correspond to the passed x values
left_index (int) : (int) The bar offset of the bar farthest from the realtime bar; should be larger than the right_index value.
right_index (int) : (int) The bar offset of the bar closest to the realtime bar; should be less than the left_index value.
Returns: (chart.point[]) Returns an array of chart points
plot_reg(reg, x, y, left_index, right_index, curved, close, line_color, line_width)
Simple plotting function for a regression. For more custom plotting, use generate_points() to create points, then write your own plotting function.
Parameters:
reg (regression) : (regression) Regression which has been fitted to a data set.
x (array) : (float[]) x values which correspond to the passed y values
y (array) : (float[]) y values which correspond to the passed x values
left_index (int) : (int) The bar offset of the bar farthest from the realtime bar; should be larger than the right_index value.
right_index (int) : (int) The bar offset of the bar closest to the realtime bar; should be less than the left_index value.
curved (bool) : (bool) If the polyline is curved or not.
close (bool) : (bool) If true the polyline will be closed.
line_color (color) : (color) The color of the line.
line_width (int) : (int) The width of the line.
Returns: (polyline) The polyline for the regression.
series_to_list(src, left_index, right_index)
Convert a series to a list. Creates a list of all the corresponding source values
from left_index to right_index. This should be called at the highest scope for consistency.
Parameters:
src (float) : (float) The source the list will be comprised of.
left_index (int) : (int) The leftmost bar (farthest back historical bar) for which the corresponding source value will be taken.
right_index (int) : (int) The rightmost bar (closest to the realtime bar) for which the corresponding source value will be taken.
Returns: (float[]) An array of size left_index - right_index
range_list(start, stop, step)
Creates an array ranging from the start value to the stop value.
Parameters:
start (int) : (int) The first value of the range.
stop (int) : (int) The end value of the range (exclusive).
step (int) : (int) Positive integer. The spacing between the values. ex: start=1, stop=6, step=2 produces [1, 3, 5].
Returns: (float[]) An array spanning from start (inclusive) to stop (exclusive) in increments of step.
regression
Fields:
coeffs (array__float)
degrees (array__float)
type_linear (series__string)
type_quadratic (series__string)
type_cubic (series__string)
type_custom (series__string)
_squared_error (series__float)
X (array__float)
FunctionDiscreteCosineTransform
Library "FunctionDiscreteCosineTransform"
Discrete Cosine Transform (DCT)
The Discrete Cosine Transform (DCT) is a mathematical algorithm that converts a series of samples of a signal, typically in the time domain, into another domain called the frequency or spectral domain. It's commonly used for data compression and image/video coding applications such as JPEG and MPEG standards.
The DCT works by multiplying the input sequence with specific cosine functions that are pre-defined and then summing up these products to obtain a new series of values, which represent the frequency components of the original signal. The main advantage of the DCT over other transforms like Fourier Transform is its ability to handle non-stationary signals (i.e., signals with varying statistical properties) more effectively due to its localized basis functions.
In simple terms, the DCT can be thought of as a way to break down an image or video into different frequency components and then compress them without losing too much information. This compression technique is essential for efficient transmission and storage of digital media files over the internet or on devices with limited memory capacity.
~Mixtral4x7b
___
Reference:
lcamtuf.substack.com
dct(data, len)
Discrete Cosine Transform.
Parameters:
data (array) : Data source.
len (int) : Length of the sampling window.
Returns: List with frequency domain transformed information.
dct(data, len)
Discrete Cosine Transform.
Parameters:
data (float) : Data source.
len (int) : Length of the sampling window.
Returns: List with frequency domain transformed information.
idct(data, len)
Inverse Discrete Cosine Transform.
Parameters:
data (array) : Data source.
len (int) : Length of the sampling window.
Returns: List with time domain transformed information.
idct(data, len)
Inverse Discrete Cosine Transform.
Parameters:
data (float) : Data source.
len (int) : Length of the sampling window.
Returns: List with time domain transformed information.
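A hedged usage sketch (the import path is a placeholder); it transforms the closes into the frequency domain and back:
```
//@version=5
indicator("DCT demo")
import username/FunctionDiscreteCosineTransform/1 as dsp  // placeholder path

len = 32
// Frequency-domain representation of the last `len` closes...
freq = dsp.dct(close, len)
// ...and its reconstruction back into the time domain.
recon = dsp.idct(freq, len)
plot(array.get(recon, 0), "reconstructed")
```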
Kernels
©2024, GoemonYae; copied from @jdehorty's "KernelFunctions" on 2024-03-09 to ensure future dependency compatibility. Will also add more functions to this script.
Library "KernelFunctions"
This library provides non-repainting kernel functions for Nadaraya-Watson estimator implementations. This allows for easy substitution/comparison of different kernel functions for one another in indicators. Furthermore, kernels can easily be combined with other kernels to create newer, more customized kernels.
rationalQuadratic(_src, _lookback, _relativeWeight, startAtBar)
Rational Quadratic Kernel - An infinite sum of Gaussian Kernels of different length scales.
Parameters:
_src (float) : The source series.
_lookback (simple int) : The number of bars used for the estimation. This is a sliding value that represents the most recent historical bars.
_relativeWeight (simple float) : Relative weighting of time frames. Smaller values result in a more stretched-out curve and larger values will result in a more wiggly curve. As this value approaches zero, the longer time frames will exert more influence on the estimation. As this value approaches infinity, the behavior of the Rational Quadratic Kernel will become identical to the Gaussian kernel.
startAtBar (simple int)
Returns: yhat The estimated values according to the Rational Quadratic Kernel.
gaussian(_src, _lookback, startAtBar)
Gaussian Kernel - A weighted average of the source series. The weights are determined by the Radial Basis Function (RBF).
Parameters:
_src (float) : The source series.
_lookback (simple int) : The number of bars used for the estimation. This is a sliding value that represents the most recent historical bars.
startAtBar (simple int)
Returns: yhat The estimated values according to the Gaussian Kernel.
periodic(_src, _lookback, _period, startAtBar)
Periodic Kernel - The periodic kernel (derived by David Mackay) allows one to model functions which repeat themselves exactly.
Parameters:
_src (float) : The source series.
_lookback (simple int) : The number of bars used for the estimation. This is a sliding value that represents the most recent historical bars.
_period (simple int) : The distance between repetitions of the function.
startAtBar (simple int)
Returns: yhat The estimated values according to the Periodic Kernel.
locallyPeriodic(_src, _lookback, _period, startAtBar)
Locally Periodic Kernel - The locally periodic kernel is a periodic function that slowly varies with time. It is the product of the Periodic Kernel and the Gaussian Kernel.
Parameters:
_src (float) : The source series.
_lookback (simple int) : The number of bars used for the estimation. This is a sliding value that represents the most recent historical bars.
_period (simple int) : The distance between repetitions of the function.
startAtBar (simple int)
Returns: yhat The estimated values according to the Locally Periodic Kernel.
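A minimal usage sketch (publisher/version in the import path are placeholders):
```
//@version=5
indicator("Nadaraya-Watson estimates", overlay=true)
import GoemonYae/KernelFunctions/1 as kernels  // placeholder path/version

// Lookback of 8 bars, relative weight 1.0, estimation starting 25 bars in.
yhat_rq = kernels.rationalQuadratic(close, 8, 1.0, 25)
yhat_g = kernels.gaussian(close, 16, 25)
plot(yhat_rq, "Rational Quadratic", color.blue)
plot(yhat_g, "Gaussian", color.orange)
```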
NormalDistributionFunctions
Library "NormalDistributionFunctions"
The NormalDistributionFunctions library encompasses a comprehensive suite of statistical tools for financial market analysis. It provides functions to calculate essential statistical measures such as mean, standard deviation, skewness, and kurtosis, alongside advanced functionalities for computing the probability density function (PDF), cumulative distribution function (CDF), Z-score, and confidence intervals. This library is designed to assist in the assessment of market volatility, distribution characteristics of asset returns, and risk management calculations, making it an invaluable resource for traders and financial analysts.
meanAndStdDev(source, length)
Calculates and returns the mean and standard deviation for a given data series over a specified period.
Parameters:
source (float) : float: The data series to analyze.
length (int) : int: The lookback period for the calculation.
Returns: Returns an array where the first element is the mean and the second element is the standard deviation of the data series for the given period.
skewness(source, mean, stdDev, length)
Calculates and returns skewness for a given data series over a specified period.
Parameters:
source (float) : float: The data series to analyze.
mean (float) : float: The mean of the distribution.
stdDev (float) : float: The standard deviation of the distribution.
length (int) : int: The lookback period for the calculation.
Returns: Returns skewness value
kurtosis(source, mean, stdDev, length)
Calculates and returns kurtosis for a given data series over a specified period.
Parameters:
source (float) : float: The data series to analyze.
mean (float) : float: The mean of the distribution.
stdDev (float) : float: The standard deviation of the distribution.
length (int) : int: The lookback period for the calculation.
Returns: Returns kurtosis value
pdf(x, mean, stdDev)
pdf: Calculates the probability density function for a given value within a normal distribution.
Parameters:
x (float) : float: The value to evaluate the PDF at.
mean (float) : float: The mean of the distribution.
stdDev (float) : float: The standard deviation of the distribution.
Returns: Returns the probability density function value for x.
cdf(x, mean, stdDev)
cdf: Calculates the cumulative distribution function for a given value within a normal distribution.
Parameters:
x (float) : float: The value to evaluate the CDF at.
mean (float) : float: The mean of the distribution.
stdDev (float) : float: The standard deviation of the distribution.
Returns: Returns the cumulative distribution function value for x.
confidenceInterval(mean, stdDev, size, confidenceLevel)
Calculates the confidence interval for a data series mean.
Parameters:
mean (float) : float: The mean of the data series.
stdDev (float) : float: The standard deviation of the data series.
size (int) : int: The sample size.
confidenceLevel (float) : float: The confidence level (e.g., 0.95 for 95% confidence).
Returns: Returns the lower and upper bounds of the confidence interval.
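A minimal usage sketch (import path is a placeholder) combining meanAndStdDev() and cdf():
```
//@version=5
indicator("Normal distribution demo")
import username/NormalDistributionFunctions/1 as nd  // placeholder path

length = input.int(50, "Lookback")
stats = nd.meanAndStdDev(close, length)
m = array.get(stats, 0)   // mean
sd = array.get(stats, 1)  // standard deviation
// Probability that a normally distributed value with this mean/stdev lies below the current close.
plot(nd.cdf(close, m, sd), "CDF(close)")
```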
ApproximateGaussianSmoothing
Library "ApproximateGaussianSmoothing"
This library provides a novel smoothing function for time-series data, serving as an alternative to SMA and EMA. Additionally, it provides some statistical processing, using moving averages as expected values in statistics.
'Approximate Gaussian Smoothing' (AGS) is designed to apply weights to time-series data that closely resemble Gaussian smoothing weights. It is easier to calculate than the similar ALMA.
In case AGS is used as a moving average, I named it 'Approximate Gaussian Weighted Moving Average' (AGWMA).
The formula is:
AGWMA = (EMA + EMA(EMA) + EMA(EMA(EMA)) + EMA(EMA(EMA(EMA)))) / 4
The EMA parameter alpha is 5 / (N + 4), using time period N (or length).
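Since the formula and the alpha are stated above, the moving average can also be sketched directly, without the library (a minimal illustration, not the library's code):
```
//@version=5
indicator("AGWMA sketch", overlay=true)
length = input.int(20, "Length")
alpha = 5.0 / (length + 4)  // EMA parameter as stated above

// Custom EMA so the alpha from the formula can be used.
f_ema(float src, float a) =>
    var float e = na
    e := na(e) ? src : a * src + (1 - a) * e
    e

e1 = f_ema(close, alpha)
e2 = f_ema(e1, alpha)
e3 = f_ema(e2, alpha)
e4 = f_ema(e3, alpha)
plot((e1 + e2 + e3 + e4) / 4, "AGWMA")
```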
ma(src, length)
Calculate moving average using AGS (AGWMA).
Parameters:
src (float) : Series of values to process.
length (simple int) : Number of bars (length).
Returns: Moving average.
analyse(src, length)
Calculate mean and variance using AGS.
Parameters:
src (float) : Series of values to process.
length (simple int) : Number of bars (length).
Returns: Mean and variance.
analyse(dimensions, sources, length)
Calculate mean and variance covariance matrix using AGS.
Parameters:
dimensions (simple int) : Dimensions of sources to process.
sources (array) : Series of values to process.
length (simple int) : Number of bars (length).
Returns: Mean and variance covariance matrix.
trend(src, length)
Calculate intercept (LSMA) and slope using AGS.
Parameters:
src (float) : Series of values to process.
length (simple int) : Number of bars (length).
Returns: Intercept and slope.
FVG Detector Library
Library "FVG Detector Library"
🔵 Introduction
To save time and improve accuracy in your scripts for identifying Fair Value Gaps (FVGs), you can utilize this library. Apart from detecting and plotting FVGs, one of the most significant advantages of this script is the ability to filter FVGs, which you'll learn more about below. Additionally, the plotting of each FVG continues until either a new FVG occurs or the current FVG is mitigated.
🔵 Definition
Fair Value Gap (FVG) refers to a situation where three consecutive candlesticks do not overlap. Based on this definition, the minimum condition for detecting a fair value gap in the ascending scenario is that the minimum price of the last candlestick should be greater than the maximum price of the first candlestick, and in the descending scenario, the maximum price of the last candlestick should be smaller than the minimum price of the first candlestick.
If the filter is turned off, all FVGs that meet at least the minimum conditions are identified. This mode is simplistic and results in a high number of identified FVGs.
If the filter is turned on, you have four options to filter FVGs :
1. Very Aggressive : In addition to the initial condition, another condition is added. For ascending FVGs, the maximum price of the last candlestick should be greater than the maximum price of the middle candlestick. Similarly, for descending FVGs, the minimum price of the last candlestick should be smaller than the minimum price of the middle candlestick. In this mode, a very small number of FVGs are eliminated.
2. Aggressive : In addition to the conditions of the Very Aggressive mode, in this mode, the size of the middle candlestick should not be small. This mode eliminates more FVGs compared to the Very Aggressive mode.
3. Defensive : In addition to the conditions of the Very Aggressive mode, in this mode, the size of the middle candlestick should be relatively large, and most of it should consist of the body. Also, for identifying ascending FVGs, the second and third candlesticks must be positive, and for identifying descending FVGs, the second and third candlesticks must be negative. In this mode, a significant number of FVGs are eliminated, and the remaining FVGs have a decent quality.
4. Very Defensive : In addition to the conditions of the Defensive mode, the first and third candlesticks should not resemble very small-bodied doji candlesticks. In this mode, the majority of FVGs are filtered out, and the remaining ones are of higher quality.
By default, we recommend using the Defensive mode.
🔵 How to Use
🟣 Parameters
To utilize this library, you need to provide four input parameters to the function.
"FVGFilter" determines whether you wish to apply a filter on FVGs or not. The possible inputs for this parameter are "On" and "Off", provided as strings.
"FVGFilterType" determines the type of filter to be applied to the found FVGs. These filters include four modes: "Very Defensive", "Defensive", "Aggressive", and "Very Aggressive", respectively exhibiting decreasing sensitivity and indicating a higher number of Fair Value Gaps (FVG).
The parameter "ShowDeFVG" is a Boolean value defined as either "true" or "false". If this value is "true", FVGs are shown during the Bullish Trend; however, if it is "false", they are not displayed.
The parameter "ShowSuFVG" is a Boolean value defined as either "true" or "false". If this value is "true", FVGs are displayed during the Bearish Trend; however, if it is "false", they are not displayed.
FVGDetector(FVGFilter, FVGFilterType, ShowDeFVG, ShowSuFVG)
Parameters:
FVGFilter (string)
FVGFilterType (string)
ShowDeFVG (bool)
ShowSuFVG (bool)
🟣 Import Library
You can use the "FVG Detector" library in your script using the following expression:
import TFlab/FVGDetectorLibrary/1 as FVG
🟣 Input Parameters
The descriptions related to the input parameters were provided in the "Parameter" section. In this section, for your convenience, the code related to the inputs is also included, and you can copy and paste it into your script.
PFVGFilter = input.string('On', 'FVG Filter', options=['On', 'Off'])
PFVGFilterType = input.string('Defensive', 'FVG Filter Type', options=['Very Defensive', 'Defensive', 'Aggressive', 'Very Aggressive'])
PShowDeFVG = input.bool(true, ' Show Demand FVG')
PShowSuFVG = input.bool(true, ' Show Supply FVG')
🟣 Call Function
You can copy the following code into your script to call the FVG function. This code is based on the naming conventions provided in the "Input Parameter" section, so if you want to use exactly this code, you should have similar parameter names or have copied the "Input Parameter" values.
FVG.FVGDetector(PFVGFilter, PFVGFilterType, PShowDeFVG, PShowSuFVG)
TimeSeriesRecurrencePlot
Library "TimeSeriesRecurrencePlot"
In descriptive statistics and chaos theory, a recurrence plot (RP) is a plot showing, for each moment `i` in time, the times `j` at which the state of a dynamical system returns to the previous state at `i`, i.e., when the phase space trajectory at time `j` visits roughly the same area in the phase space as at time `i`.
```
A recurrence plot (RP) is a graphical representation used in the analysis of time series data and dynamical systems. It visualizes recurring states or events over time by transforming the original time series into a binary matrix, where each element represents whether two consecutive points are above or below a specified threshold. The resulting Recurrence Plot Matrix reveals patterns, structures, and correlations within the data while providing insights into underlying mechanisms of complex systems.
```
~starling7b
___
Reference:
en.wikipedia.org
github.com
github.com
github.com
github.com
juliadynamics.github.io
distance_matrix(series1, series2, max_freq, norm)
Generate distance matrix between two series.
Parameters:
series1 (float) : Source series 1.
series2 (float) : Source series 2.
max_freq (int) : Maximum frequency to inspect, or the size of the generated matrix.
norm (string) : Norm of the distance metric, default=`euclidean`, options=`euclidean`, `manhattan`, `max`.
Returns: Matrix with distance values.
method normalize_distance(M)
Normalizes a matrix within its Min-Max range.
Namespace types: matrix
Parameters:
M (matrix) : Source matrix.
Returns: Normalized matrix.
method threshold(M, threshold)
Updates the matrix with the condition `M(i,j) > threshold ? 1 : 0`.
Namespace types: matrix
Parameters:
M (matrix) : Source matrix.
threshold (float)
Returns: Cross matrix.
rolling_window(a, b, sample_size)
An experimental alternative method to plot a recurrence_plot.
Parameters:
a (array) : Array with data.
b (array) : Array with data.
sample_size (int)
Returns: Recurrence_plot matrix.
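A minimal usage sketch (import path is a placeholder) that builds a small binary recurrence matrix each bar:
```
//@version=5
indicator("Recurrence plot demo")
import username/TimeSeriesRecurrencePlot/1 as rp  // placeholder path

// Distance matrix between close and its 1-bar lag; a small size keeps this cheap.
D = rp.distance_matrix(close, close[1], 16, 'euclidean')
// Min-max normalize, then binarize into the recurrence matrix.
R = rp.threshold(rp.normalize_distance(D), 0.5)
plot(matrix.get(R, 0, 0), "R[0][0]")
```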
TimeSeriesGrammianAngularField
Library "TimeSeriesGrammianAngularField"
Provides the Gramian angular field and associated utility functions.
___
Reference:
*Time Series Classification: A review of Algorithms and Implementations*.
www.researchgate.net
method normalize(data, a, b)
Normalize the sample array to an optional range, usually within `(-1, 1)` or `(0, 1)`.
Namespace types: array
Parameters:
data (array) : Sample data to normalize.
a (float) : Minimum target range value, `default=-1.0`.
b (float) : Maximum target range value, `default= 1.0`.
Returns: Normalized array within new range.
___
Reference:
*Time Series Classification: A review of Algorithms and Implementations*.
normalize_series(source, length, a, b)
Normalize the series to an optional range, usually within `(-1, 1)` or `(0, 1)`.\
*Note that this may provide a different result than the array version due to rolling range*.
Parameters:
source (float) : Series to normalize.
length (int) : Number of bars to sample the range.
a (float) : Minimum target range value, `default=-1.0`.
b (float) : Maximum target range value, `default= 1.0`.
Returns: Normalized series within new range.
method polar(data)
Turns a normalized sample array into polar coordinates.
Namespace types: array
Parameters:
data (array) : Sampled data values.
Returns: Converted array into polar coordinates.
polar_series(source)
Turns a normalized series into polar coordinates.
Parameters:
source (float) : Source series.
Returns: Converted series into polar coordinates.
method gasf(data)
Gramian Angular Summation Field *`GASF`*.
Namespace types: array
Parameters:
data (array) : Sampled data values.
Returns: Matrix with *`GASF`* values.
method gasf_id(data)
Trig. identity of Gramian Angular Summation Field *`GASF`*.
Namespace types: array
Parameters:
data (array) : Sampled data values.
Returns: Matrix with *`GASF`* values.
Reference:
*Time Series Classification: A review of Algorithms and Implementations*.
method gadf(data)
Gramian Angular Difference Field *`GADF`*.
Namespace types: array
Parameters:
data (array) : Sampled data values.
Returns: Matrix with *`GADF`* values.
method gadf_id(data)
Trig. identity of Gramian Angular Difference Field *`GADF`*.
Namespace types: array
Parameters:
data (array) : Sampled data values.
Returns: Matrix with *`GADF`* values.
Reference:
*Time Series Classification: A review of Algorithms and Implementations*.
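A minimal usage sketch (import path is a placeholder) chaining normalize -> polar -> gasf on a 16-bar sample:
```
//@version=5
indicator("GASF demo")
import username/TimeSeriesGrammianAngularField/1 as gaf  // placeholder path

if barstate.islast
    // Sample the last 16 closes, oldest first.
    data = array.new<float>()
    for i = 0 to 15
        array.push(data, close[15 - i])
    // Rescale to (-1, 1), map to polar coordinates, then build the GASF matrix.
    M = gaf.gasf(gaf.polar(gaf.normalize(data, -1.0, 1.0)))
    label.new(bar_index, high, "GASF is " + str.tostring(matrix.rows(M)) + "x" + str.tostring(matrix.columns(M)))
```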
CCOMET_Scanner_Library
Library "CCOMET_Scanner_Library"
- A Trader's Edge (ATE)_Library was created to assist in constructing CCOMET Scanners
Loc_tIDs_Col(_string, _firstLocation)
TickerIDs: You must form this single tickerID input string exactly as described in the script's info panel (the little gray circled 'i' at the
end of the settings/input panel; hover your cursor over this 'i' to read the
details of that particular input). If the string is formed correctly, it will be broken up into
a total of 40 separate strings, which will be all of the tickerIDs that the script is using in your CCOMET Scanner.
Locations: This function is used when there's a desire to print an asset's ALERT LABELS. A set location on the scale is assigned to each asset.
This is created so that if a lot of alerts are triggered, they will stay relatively visible and not overlap each other.
If you set your '_firstLocation' parameter as 1, since there are a max of 40 assets that can be scanned, the 1st asset's location
is assigned the value in the '_firstLocation' parameter, the 2nd asset's location is the (1st asset's location+1)...and so on.
Parameters:
_string (simple string) : (string)
A maximum of 40 Tickers (ALL joined as 1 string for the input parameter) that is formulated EXACTLY as described
within the tooltips of the TickerID inputs in my CCOMET Scanner scripts:
assets = input.text_area(tIDset1, title="TickerID (MUST READ TOOLTIP)", tooltip="Accepts 40 TICKERID's for each
copy of the script on the chart. TEXT FORMATTING RULES FOR TICKERID'S:
(1) To exclude the EXCHANGE NAME in the Labels, de-select the next input option.
(2) MUST have a space (' ') AFTER each TickerID.
(3) Capitalization in the Labels will match cap of these TickerID's.
(4) If your asset has a BaseCurrency & QuoteCurrency (ie. ADAUSDT ) BUT you ONLY want Labels
to show BaseCurrency(ie.'ADA'), include a FORWARD SLASH ('/') between the Base & Quote (ie.'ADA/USDT')", display=display.none)
_firstLocation (simple int) : (simple int)
Optional (starts at 1 if no parameter added).
Location that you want the first asset to print its label if is triggered to do so.
ie. loc2=loc1+1, loc3=loc2+1, etc.
Returns: Returns 40 output variables in the tuple (i.e., between the '[ ]') with the TickerIDs, 40 variables for the locations for alert labels, and 40 colors for labels/plots
TickeridForLabelsAndSecurity(_ticker, _includeExchange)
This function accepts the TickerID Name as its parameter and produces a single string that will be used in all of your labels.
Parameters:
_ticker (simple string) : (string)
For this parameter, input the variable named '_coin' from your 'f_main()' function. It is the raw
Ticker ID name that will be processed.
_includeExchange (simple bool) : (bool)
Optional (if parameter not included in function it defaults to false ).
Used to determine if the Exchange name will be included in all labels/triggers/alerts.
Returns: ( )
Returns 2 output variables:
1st ('_securityTickerid') is to be used in the 'request.security()' function as this string will contain everything
TV needs to pull the correct assets data.
2nd ('lblTicker') is to be used in all of the labels in your CCOMET Scanner as it will only contain what you want your labels
to show as determined by how the tickerID is formulated in the CCOMET Scanner's input.
InvalID_LblSz(_barCnt, _close, _securityTickerid, _invalidArray, _tablePosition, _stackVertical, _lblSzRfrnce)
INVALID TICKERIDs: This is to add a table in the middle right of your chart that prints all the TickerID's that were either not formulated
correctly in the '_source' input or that is not a valid symbol and should be changed.
LABEL SIZES: This function sizes your Alert Trigger Labels according to the amount of Printed Bars the chart has printed within
a set time period, while also keeping in mind the smallest relative reference size you input in the 'lblSzRfrnceInput'
parameter of this function. The HIGHER the % of Printed Bars (aka, the more trades occurring for that asset on the exchange),
the LARGER the Name Label will print, potentially showing you the better opportunities on the exchange to avoid
exchange manipulation liquidations.
*** SHOULD NOT be used as the size of the asset Name Labels next to each asset's Line Plot,
if your CCOMET Scanner includes these, as you want those to be the same size for every asset so the larger ones don't cover the
smaller ones if the plots are all close to each other ***
Parameters:
_barCnt (float) : (float)
Get the 1st variable ('barCnt') from the Security function's tuple and input it as this function's 1st input
parameter which will directly affect the size of the 2nd output variable ('alertTrigLabel') that is also outputted by this function.
_close (float) : (float)
Put your 'close' variable named '_close' from the security function here.
_securityTickerid (string) : (string)
Throughout the entire chart's updates, if a '_close' value is never registered then the logic counts the asset as INVALID.
This will be the 1st TickerID variable (named _securityTickerid) outputted from the tuple of the TickeridForLabels()
function above this one.
_invalidArray (array) : (string[])
Input the array from the original script that houses all of the invalidArray strings.
_tablePosition (simple string) : (string)
Optional (if parameter not included, it defaults to position.middle_right). Location on the chart you want the table printed.
Possible strings include: position.top_center, position.top_left, position.top_right, position.middle_center,
position.middle_left, position.middle_right, position.bottom_center, position.bottom_left, position.bottom_right.
_stackVertical (simple bool) : (bool)
Optional (if parameter not included, it defaults to true). All of the assets that are counted as INVALID will be
created in a list. If you want this list to be printed as a column then input 'true' here, otherwise they will all be in a row.
_lblSzRfrnce (string) : (string)
Optional (if parameter not included, it defaults to size.small). This will be the size of the variable outputted
by this function named 'assetNameLabel' BUT also affects the size of the output variable 'alertTrigLabel' as it uses this parameter's size
as the smallest size for 'alertTrigLabel' then uses the '_barCnt' parameter to determine the next sizes up depending on the "_barCnt" value.
Returns: ( )
Returns 2 variables:
1st output variable ('AssetNameLabel') is assigned to the size of the 'lblSzRfrnceInput' parameter.
2nd output variable ('alertTrigLabel') can be of varying sizes depending on the 'barCnt' parameter...BUT the smallest
size possible for the 2nd output variable ('alertTrigLabel') will be the size set in the 'lblSzRfrnceInput' parameter.
PrintedBarCount(_time, _barCntLength, _barCntPercentMin)
The Printed BarCount Filter looks back a User Defined amount of minutes and calculates the % of bars that have printed
out of the TOTAL amount of bars that COULD HAVE been printed within the same amount of time.
Parameters:
_time (int) : (int)
The time associated with the chart of the particular asset that is being screened at that point.
_barCntLength (int) : (int)
The amount of time (IN MINUTES) that you want the logic to look back at to calculate the % of bars that have actually
printed in the span of time you input into this parameter.
_barCntPercentMin (int) : (int)
The minimum % of Printed Bars of the asset being screened has to be GREATER than the value set in this parameter
for the output variable 'bc_gtg' to be true.
Returns: ( )
Returns 2 outputs:
1st is the % of Printed Bars that have printed within the span of time you input in the '_barCntLength' parameter.
2nd is true/false according to if the Printed BarCount % is above the threshold that you input into the '_barCntPercentMin' parameter.
lib_fvg
Library "lib_fvg"
further expansion of my object-oriented library toolkit. This lib detects Fair Value Gaps and returns them as objects.
Drawing them is a separate step so the lib can be used with securities. It also allows for usage of current/close price to detect fill/invalidation of a gap and to adjust the fill level dynamically. FVGs can be detected while forming and extended indefinitely while they're unfilled.
method draw(this)
Namespace types: FVG
Parameters:
this (FVG)
method draw(fvgs)
Namespace types: FVG
Parameters:
fvgs (FVG[])
is_fvg(mode, precondition, filter_insignificant, filter_insignificant_atr_factor, live)
Parameters:
mode (int) : switch for detection 1 for bullish FVGs, -1 for bearish FVGs
precondition (bool) : allows for other confluences to block/enable detection
filter_insignificant (bool) : allows ignoring small gaps
filter_insignificant_atr_factor (float) : adjusts how small a gap must be (compared to a 50-period ATR) to be ignored
live (bool) : allows detecting FVGs while the third bar is still forming -> will cause repainting
Returns: a tuple of (bar_index of gap bar, gap top, gap bottom)
create_fvg(mode, idx, top, btm, filled_at_pc, config)
Parameters:
mode (int) : switch for detection 1 for bullish FVGs, -1 for bearish FVGs
idx (int) : the bar_index of the FVG gap bar
top (float) : the top level of the FVG
btm (float) : the bottom level of the FVG
filled_at_pc (float) : the ratio (0-1) that the fill source needs to retrace into the gap to consider it filled/invalidated/ready for removal
config (FVGConfig) : the plot configuration/styles for the FVG
Returns: a new FVG object if there was a new FVG, else na
detect_fvg(mode, filled_at_pc, precondition, filter_insignificant, filter_insignificant_atr_factor, live, config)
Parameters:
mode (int) : switch for detection 1 for bullish FVGs, -1 for bearish FVGs
filled_at_pc (float)
precondition (bool) : allows for other confluences to block/enable detection
filter_insignificant (bool) : allows ignoring small gaps
filter_insignificant_atr_factor (float) : adjusts how small a gap must be (compared to a 50-period ATR) to be ignored
live (bool) : allows detecting FVGs while the third bar is still forming -> will cause repainting
config (FVGConfig)
Returns: a new FVG object if there was a new FVG, else na
method update(this, fill_src)
Namespace types: FVG
Parameters:
this (FVG)
fill_src (float) : allows for usage of different fill source series, e.g. high for bearish FVGs, low for bullish FVGs, or close for both
method update(all, fill_src)
Namespace types: FVG
Parameters:
all (FVG[])
fill_src (float)
method remove_filled(unfilled_fvgs)
Namespace types: FVG
Parameters:
unfilled_fvgs (FVG[])
method delete(this)
Namespace types: FVG
Parameters:
this (FVG)
method delete_filled_fvgs_buffered(filled_fvgs, max_keep)
Namespace types: FVG
Parameters:
filled_fvgs (FVG[])
max_keep (int) : the number of filled, latest FVGs to retain on the chart.
FVGConfig
Fields:
box_args (|robbatt/lib_plot_objects/36;BoxArgs|#OBJ)
line_args (|robbatt/lib_plot_objects/36;LineArgs|#OBJ)
box_show (series__bool)
line_show (series__bool)
keep_filled (series__bool)
extend (series__bool)
FVG
Fields:
config (|FVGConfig|#OBJ)
startbar (series__integer)
mode (series__integer)
top (series__float)
btm (series__float)
center (series__float)
size (series__float)
fill_size (series__float)
fill_lvl_target (series__float)
fill_lvl_current (series__float)
fillbar (series__integer)
filled (series__bool)
_fvg_box (|robbatt/lib_plot_objects/36;Box|#OBJ)
_fill_line (|robbatt/lib_plot_objects/36;Line|#OBJ)
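A minimal usage sketch built on is_fvg() (the version in the import path is a placeholder):
```
//@version=5
indicator("lib_fvg demo", overlay=true)
import robbatt/lib_fvg/1 as fvg  // placeholder version

// Scan for bullish FVGs: no extra precondition, no size filter, confirmed bars only.
[gap_idx, gap_top, gap_btm] = fvg.is_fvg(1, true, false, 1.0, false)
if not na(gap_idx)
    box.new(gap_idx, gap_top, bar_index, gap_btm, border_color=color.green, bgcolor=color.new(color.green, 80))
```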
Forecasting
This Forecasting library has a couple of novel and traditional approaches to forecasting stock prices.
Traditionally, it provides a basic ARIMA forecaster using simple autoregression, as well as a linear regression and quadratic regression channel forecaster.
Novel approaches to forecasting include:
1) A Moving Average based Forecaster (modelled after ARIMA), capable of forecasting based on a user-selected SMA.
2) Z-Score Forecast: Forecasting based on Z-Score (example displayed in chart).
Library "Forecasting"
ARIMA_Modeller(src)
Creates a generic autoregressive ARIMA model
Parameters:
src (float)
Returns: arima_result, arima_ucl, arima_lcl, arima_cor, arima_r2, arima_err, y1, y2, y3, y0
machine_learning_regression(output, x1, x2, x3, x4, x5, show_statistics)
Creates an automatic regression-based forecast model (can be used for other regression operations) from a list of possible independent variables.
Parameters:
output (float)
x1 (float)
x2 (float)
x3 (float)
x4 (float)
x5 (float)
show_statistics (bool)
Returns: result, upper bound levels, lower bound levels, and an optional statistics table that displays the model parameters and statistics
time_series_linear_forecast(src, forecast_length, standard_deviation_extension_1, standard_deviation_extension_2)
Creates a simple linear regression time series channel
Parameters:
src (float)
forecast_length (int)
standard_deviation_extension_1 (float)
standard_deviation_extension_2 (float)
Returns: Linreg Channel
quadratic_time_series_forecast(src, forecast_length)
Creates a simple quadratic regression time series channel
Parameters:
src (float)
forecast_length (int)
Returns: Quadratic Regression Channel
moving_average_forecaster(source, train_time, ma_length, forecast_length, forecast_result, upper_bound_result, lower_bound_result)
Creates an ARIMA-style moving average forecaster
Parameters:
source (float)
train_time (int)
ma_length (int)
forecast_length (int)
forecast_result (float[])
upper_bound_result (float[])
lower_bound_result (float[])
Returns: forecast_result, upper_bound_result, lower_bound_result, moving_average, ucl, lcl
zscore_forecast(z_length, z_source, show_alerts, forecast_length, show_forecast_table)
Creates a Z-Score Forecast and is capable of plotting the immediate forecast via a Polyline
Parameters:
z_length (int)
z_source (float)
show_alerts (bool)
forecast_length (int)
show_forecast_table (bool)
Returns: Void; it draws the Polyline forecast and, if enabled, the Z-forecast table.
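A minimal usage sketch (import path is a placeholder); if your version returns the channel objects as a tuple, destructure accordingly:
```
//@version=5
indicator("Forecasting demo", overlay=true)
import username/Forecasting/1 as fcast  // placeholder path

// 20-bar linear regression channel with 1- and 2-sigma extensions.
fcast.time_series_linear_forecast(close, 20, 1.0, 2.0)
```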
common
The "Pineify/common" library presents a specialized toolkit crafted to empower traders and script developers with state-of-the-art time manipulation functions on the TradingView platform. It is a foundational utility aimed at enriching your script's ability to process and interpret time-based data with unparalleled precision.
Key Features
String Splitter:
The 'str_split_into_two' function is a universal string handler that separates any given input into two distinct strings based on a specified delimiter. This function is especially useful in parsing time strings or any scenario where a string needs to be divided into logical parts efficiently.
Example:
[a, b] = str_split_into_two("a:b", ":")
// a = "a"
// b = "b"
Time Parser:
With 'time_to_hour_minute', users can effortlessly convert a time string into numerical hours and minutes. This function is pivotal for those who need to extract specific time series data or wish to schedule their trades down to the minute.
Example:
[time_hour, time_minute] = time_to_hour_minute("02:30")
// time_hour = 2
// time_minute = 30
Unix Time Converter
The 'time_range_to_unix_time' function transcends traditional boundaries by converting a given time range into Unix timestamp format. This integration of date, time, and timezone makes for a comprehensive approach, allowing scripts to make timed decisions, perform historical analyses, and account for international markets across different time zones.
Example:
// Support 'hhmm-hhmm' and 'hh:mm-hh:mm'
[start_time, end_time] = time_range_to_unix_time("09:30-12:00") // returned variable names assumed
Summary:
Each function is meticulously designed to minimize complexity and maximize versatility. Whether you are a programmer seeking to streamline your code, or a trader requiring precise timing for your strategies, our library provides the logical framework that aligns with your needs.
The "Pineify/common" library is the bridge between high-level time concepts and actionable trading insights. It serves a multitude of purposes – from crafting elegant time-based triggers to dissecting complex string data. Embrace the power of precision with "Pineify/common" and elevate your TradingView scripting experience to new heights.
Gauge
Library "Gauge"
The gauge library utilizes a gaugeParams object, encapsulating crucial parameters for gauge creation. Essential attributes include num (the measured value) , min (the minimum value equating to 100% on the gauge's minimum scale) , and max (the maximum value equating to 100% on the gauge's maximum scale) . The size attribute (defaulting to 10) splits the scale into increments, each representing 100% divided by the specified size.
The num value dynamically shifts within the gauge based on its percentage move from the mathematical average between min and max . When num is below the average, the minimum portion of the scale activates, displaying the appropriate percentage based on the distance from the average to the minimum. The same principle applies when num exceeds the average. The 100% scale is reached at either end when num equals min or max .
The library offers full customization, allowing users to configure color schemes, labels, and titles. The gauge can be displayed either vertically (default) or horizontally. The colors employ a gradient, adapting based on the number's movement. Overall, the gauge library provides a flexible and comprehensive tool for visualizing and interpreting numerical values within a specified range.
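The percentage logic described above can be sketched independently of the library (an illustration of the math, not the library's code):
```
//@version=5
indicator("Gauge percentage sketch")
num = input.float(30, "measured value")
mn = input.float(0, "min")
mx = input.float(100, "max")
mid = (mn + mx) / 2
// 0% at the midpoint between min and max, 100% when num reaches either extreme.
pct = 100 * math.abs(num - mid) / (mx - mid)
plot(pct, "gauge %")
```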
StrategyDashboard
Library "StrategyDashboard"
Hey, everybody!
I haven’t done anything here for a long time, I need to get better ^^.
In my strategies, so far private, but not about that, I constantly use dashboards, which clearly show how my strategy is working out.
Of course, you can also find a number of these parameters in the standard strategy window, but I prefer to display everything on the screen, rather than digging through a bunch of boxes and dropdowns.
At the moment I am using 2 dashboards, which I would like to share with you.
1. monthly(isShow)
This is a dashboard with the breakdown of profit by month, in percent. That is, it displays what percentage you made or lost in a particular month, as well as for the year as a whole.
Parameters:
isShow (bool) - determine allowance to display or not.
2. total(isShow)
The second dashboard displays more of the standard strategy information, but in a table format: information such as the number of consecutive losses, the number of consecutive wins, the amount of earnings per day, etc.
Parameters:
isShow (bool) - determine allowance to display or not.
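A minimal usage sketch (import path is a placeholder) on top of a trivial crossover strategy:
```
//@version=5
strategy("Dashboard demo", overlay=true)
import username/StrategyDashboard/1 as dash  // placeholder path

fast = ta.sma(close, 10)
slow = ta.sma(close, 30)
if ta.crossover(fast, slow)
    strategy.entry("L", strategy.long)
if ta.crossunder(fast, slow)
    strategy.close("L")

dash.monthly(true)
dash.total(true)
```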
Since I prefer the dark theme of the interface, the dashboards are currently adapted to it, but in the near future, for general convenience, I will add the ability to adapt to the light theme.
The same goes for the colour scheme: for now it is adapted to the one I use in my strategies (because the library was made by cutting these dashboards out of my strategies), but I will also make it customisable.
If you have any wishes, feel free to write in the comments, maybe I can implement and add them in the next versions.
Statistics Table
Strategy Statistics
This library will add a table with statistics from your strategy. With this library, you won't have to switch to your strategy tester tab to view your results and positions.
Usage:
You can either set the table via input fields by adding the code below to your strategy, or replace the parameters with the values you would like to use manually.
// Statistics table options.
statistics_table_enabled = input.string(title='Show a table with statistics', defval='YES', options=['YES', 'NO'], group='STATISTICS') // options list assumed
statistics_table_position = input.string(title='Position', defval='RIGHT', options=['LEFT', 'RIGHT'], group='STATISTICS') // options list assumed
statistics_table_margin = input.int(title='Table Margin', defval=10, minval=0, maxval=100, step=1, group='STATISTICS')
statistics_table_transparency = input.int(title='Cell Transparency', defval=20, minval=1, maxval=100, step=1, group='STATISTICS')
statistics_table_text_color = input.color(title='Text Color', defval=color.new(color.white, 0), group='STATISTICS')
statistics_table_title_cell_color = input.color(title='Title Cell Color', defval=color.new(color.gray, 80), group='STATISTICS')
statistics_table_cell_color = input.color(title='Cell Color', defval=color.new(color.purple, 0), group='STATISTICS')
// Statistics table init.
statistics.table(strategy.initial_capital, close, statistics_table_enabled, statistics_table_position, statistics_table_margin, statistics_table_transparency, statistics_table_text_color, statistics_table_title_cell_color, statistics_table_cell_color)
Sample:
If you are interested in the strategy used for this statistics table, you can browse the strategies on my profile.
Backtest Strategy Optimizer Adapter
With this library, you will be able to run one or multiple backtests with different variables (combinations). For example, you can run 100 backtests of Supertrend at once with an increment factor of 0.1. This way, you can easily fetch the most profitable settings and apply them to your strategy.
To get a better understanding of the code, you can check the code below.
Single backtest results
[profit_and_loss, open_balance, winrate, entries, exits, wins, losses] = backtest.results(date_start, date_end, long_entry, long_exit, take_profit_percentage, stop_loss_percentage, atr_length, initial_capital, order_size, commission) // tuple variable names assumed from the table() parameters below
Add backtest results to a table
backtest.table(initial_capital, profit_and_loss, open_balance, winrate, entries, exits, wins, losses, backtest_table_position, backtest_table_margin, backtest_table_transparency, backtest_table_cell_color, backtest_table_title_cell_color, backtest_table_text_color)
Backtest result without chart labels
[profit_and_loss, open_balance, winrate, entries, exits, wins, losses] = backtest.run(date_start, date_end, long_entry, long_exit, take_profit_percentage, stop_loss_percentage, atr_length, initial_capital, order_size, commission) // tuple variable names assumed
Backtest result profit
profit = backtest.profit(date_start, date_end, long_entry, long_exit, take_profit_percentage, stop_loss_percentage, atr_length, initial_capital, order_size, commission)
Backtest result winrate
winrate = backtest.winrate(date_start, date_end, long_entry, long_exit, take_profit_percentage, stop_loss_percentage, atr_length, initial_capital, order_size, commission)
Start Date
You can set the start date either by using a timestamp or a number that refers to the number of bars back.
Stop Loss / Take Profit Issue
Unfortunately, I did not manage to achieve 100% accuracy for the take profit and stop loss. The original TradingView backtest can stop at the correct position within a bar using the strategy.exit stop and limit variables. However, it seems unachievable with a crossunder/crossover function in PineScript unless it is calculated on every tick (which would make the backtesting results invalid). So far, I have not found a workaround, and I would be grateful if someone could solve this issue, if it is even possible. If you have any solutions or fixes, please let me know!
Multiple Backtest Results / Optimizer
You can run multiple backtests in a single strategy or indicator, but there are certain requirements for placing the correct code in the right way. To view examples of running multiple backtests, you can refer to the links provided in the updates I posted below. In the samples I have also explained how you can auto-generate code for your backtest strategy.
SPTS_StatsPakLib
Finally getting around to releasing the library component to the SPTS indicator!
This library is packed with a ton of great statistics functions to supplement SPTS; these functions add to the capabilities of SPTS, including a forecast function.
The library includes the following functions
1. Linear Regression (single independent and single dependent)
2. Multiple Regression (2 independent variables, 1 dependent)
3. Standard Error of Residual Assessment
4. Z-Score
5. Effect Size
6. Confidence Interval
7. Paired Sample Test
8. Two Tailed T-Test
9. Qualitative assessment of T-Test
10. T-test table and p value assigner
11. Correlation of two arrays
12. Quadratic correlation (curvilinear)
13. R Squared value of 2 arrays
14. R Squared value of 2 floats
15. Test of normality
16. Forecast function which will push the desired forecasted variables into an array.
One of the biggest added functionalities of this library is the forecasting function.
This function provides an autoregressive, trainable model that will export forecasted values to 3 arrays: one contains the autoregressed forecast results, and the other two contain the upper confidence forecast and the lower confidence forecast.
Hope you enjoy and find use for this!
Library "SPTS_StatsPakLib"
f_linear_regression(independent, dependent, len, variable)
Creates a simple linear regression model between two variables.
Parameters:
independent (float)
dependent (float)
len (int)
variable (float)
Returns: 6 float variables:
result: The result of the regression model
pear_cor: The Pearson correlation of the regression model
rsqrd: the R2 of the regression model
std_err: the error of residuals
slope: the slope of the model (coefficient)
intercept: the intercept of the model (y = mx + b is y = slope * x + intercept)
f_multiple_regression(y, x1, x2, input1, input2, len)
Creates a multiple regression model between two independent variables and 1 dependent variable.
Parameters:
y (float)
x1 (float)
x2 (float)
input1 (float)
input2 (float)
len (int)
Returns: 7 float variables:
result: The result of the regression model
pear_cor: The Pearson correlation of the regression model
rsqrd: the R2 of the regression model
std_err: the error of residuals
b1 & b2: the slopes of the model (coefficients)
intercept: the intercept of the model (y = mx + b is y = b1 * x1 + b2 * x2 + intercept)
f_stanard_error(result, dependent, length)
Performs an assessment on the error of residuals; can be used with any variable for which there are residual values (such as moving averages or more complex models).
result is the output; for example, if you are calculating the residuals of a 200 EMA, the result would be the 200 EMA.
dependent is the dependent variable. In the above example with the 200 EMA, your dependent would be the source for your 200 EMA.
Parameters:
result (float)
dependent (float)
length (int)
Returns: the standard error of the residual, which can then be multiplied by standard deviations or used as is.
f_zscore(variable, length)
Calculates the z-score
Parameters:
variable (float)
length (int)
Returns: the float z-score
f_effect_size(array1, array2)
Calculates the effect size between two arrays of equal scale.
Parameters:
array1 (float[])
array2 (float[])
Returns: the effect size (float)
f_confidence_interval(array1, array2, ci_input)
Calculates the confidence interval between two arrays.
Parameters:
array1 (float[])
array2 (float[])
ci_input (float)
Returns: the upper_bound and lower_bound confidence interval as float values
paired_sample_t(src1, src2, len)
Performs a paired sample t-test
Parameters:
src1 (float)
src2 (float)
len (int)
Returns: the t-statistic and degrees of freedom of a paired sample t-test
two_tail_t_test(array1, array2)
Performs a two-tailed t-test
Parameters:
array1 (float[])
array2 (float[])
Returns: the t-statistic and degrees of freedom of a two-tailed t-test
t_table_analysis(t_stat, df)
Makes a qualitative assessment of your paired and single sample t-tests
Parameters:
t_stat (float)
df (float)
Returns: 2 string variables and 1 colour variable. The 2 string variables indicate whether the results are significant or not, and the colour will
output red for insignificant and green for significant
t_table_p_value(df, t_stat)
Performs a quantitative assessment on your t-tests to determine the statistical significance (p value)
Parameters:
df (float)
t_stat (float)
Returns: 1 float variable, the p value of the t-test.
cor_of_array(array1, array2)
Performs a Pearson correlation assessment of two arrays. They need to be of equal size!
Parameters:
array1 (float[])
array2 (float[])
Returns: the Pearson correlation.
quadratic_correlation(src1, src2, len)
Performs a quadratic (curvilinear) Pearson correlation between two values.
Parameters:
src1 (float)
src2 (float)
len (int)
Returns: the Pearson correlation (quadratic based).
f_r2_array(array1, array2)
Calculates the R2 of two arrays
Parameters:
array1 (float[])
array2 (float[])
Returns: the R2 value
f_rsqrd(src1, src2, len)
Calculates the R2 of two float variables
Parameters:
src1 (float)
src2 (float)
len (int)
Returns: the R2 value
test_of_normality(array, src)
Tests the normal distribution hypothesis
Parameters:
array (float[])
src (float)
Returns: 4 variables, 2 float and 2 string:
Skew: the skewness of the dataset
Kurt: the kurtosis of the dataset
dist = the distribution type (recognizes 7 different distribution types)
implication = a string assessment of the implication of the distribution (qualitative)
f_forecast(output, input, train_len, forecast_length, output_array, upper_array, lower_array)
Performs a simple forecast on a single dependent variable. It will autoregress this based on the train time, to the desired length of output,
then push the forecasted values to 3 float arrays: one that contains the forecasted result, one that contains the upper confidence result, and one with the lower confidence
result.
Parameters:
output (float)
input (float)
train_len (int)
forecast_length (int)
output_array (float[])
upper_array (float[])
lower_array (float[])
Returns: 3 arrays: one with the forecasted results, one with the upper confidence results, and a final one with the lower confidence results. An example is given below.
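A hedged sketch of that example (import path is a placeholder; close is used as both the output and input of the autoregression):
```
//@version=5
indicator("SPTS forecast demo")
import username/SPTS_StatsPakLib/1 as spts  // placeholder path

var float[] fc = array.new<float>()
var float[] up = array.new<float>()
var float[] lo = array.new<float>()
if barstate.islast
    // Train on 500 bars and push a 10-step forecast plus confidence bands into the arrays.
    spts.f_forecast(close, close, 500, 10, fc, up, lo)
    label.new(bar_index, close, "next: " + str.tostring(array.first(fc)))
```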
WIPFunctionLyaponov
Library "WIPFunctionLyaponov"
Lyapunov exponents are mathematical measures used to describe the behavior of a system over
time. They are named after the Russian mathematician Aleksandr Lyapunov, who first introduced the concept in the
late 19th century. The exponent is defined as the rate at which a particular function or variable changes
over time, and can be positive, negative, or zero.
Positive exponents indicate that a system tends to grow or expand over time, while negative exponents
indicate that a system tends to shrink or decay. Zero exponents indicate that the system does not change
significantly over time. Lyapunov exponents are used in various fields of science and engineering, including
physics, economics, and biology, to study the long-term behavior of complex systems.
~ generated description from vicuna13b
---
To calculate the Lyapunov Exponent (LE) of a given Time Series, we need to follow these steps:
1. Firstly, you should have access to your data in some format like CSV or Excel file. If not, then you can collect it manually using tools such as stopwatches and measuring tapes.
2. Once the data is collected, clean it up by removing any outliers that may skew results. This step involves checking for inconsistencies within your dataset (e.g., extremely large or small values) and either discarding them entirely or replacing with more reasonable estimates based on surrounding values.
3. Next, you need to determine the dimension of your time series data. In most cases, this will be equal to the number of variables being measured in each observation period (e.g., temperature, humidity, wind speed).
4. Now that we have a clean dataset with known dimensions, we can calculate the LE for our Time Series using the following formula:
λ = log(||M^T * M - I||)/log(||v||)
where:
λ (Lyapunov Exponent) is the quantity that will be calculated.
||...|| denotes an Euclidean norm of a vector or matrix, which essentially means taking the square root of the sum of squares for each element in the vector/matrix.
M represents our Jacobian Matrix whose elements are given by:
J_ij = (∂fj / ∂xj) where fj is the jth variable and xj is the ith component of the initial condition vector x(t). In other words, each element in this matrix represents how much a small change in one variable affects another.
I denotes an identity matrix whose elements are all equal to 1 (or any constant value if you prefer). This term essentially acts as a baseline for comparison purposes since we want our Jacobian Matrix M^T * M to be close to it when the system is stable and far away from it when the system is unstable.
v represents an arbitrary vector whose Euclidean norm ||v|| will serve as a scaling factor in our calculation. The choice of this particular vector does not matter since we are only interested in its magnitude (i.e., length) for purposes of normalization. However, if you want to ensure that your results are accurate and consistent across different datasets or scenarios, it is recommended to use the same initial condition vector x(t) as used earlier when calculating our Jacobian Matrix M.
5. Finally, once we have calculated λ using the formula above, we can interpret its value in terms of stability/instability for our Time Series data:
- If λ < 0, then this indicates that the system is stable (i.e., nearby trajectories will converge towards each other over time).
- On the other hand, if λ > 0, then this implies that the system is unstable (i.e., nearby trajectories will diverge away from one another over time).
~ generated description from airoboros33b
---
Reference:
en.wikipedia.org
www.collimator.ai
blog.abhranil.net
www.researchgate.net
physics.stackexchange.com
---
This is a work in progress, it may contain errors so use with caution.
If you find flaws or want to suggest something new, please leave a comment below.
_measure_function(i)
helper function to get the name of distance function by a index (0 -> 13).\
Functions: SSD, Euclidean, Manhattan, Minkowski, Chebyshev, Correlation, Cosine, Camberra, MAE, MSE, Lorentzian, Intersection, Penrose Shape, Meehl.
Parameters:
i (int)
_test(L)
Helper function to test the state of the system from the output exponents and output a description string.
Parameters:
L (float )
estimate(X, initial_distance, distance_function)
Estimate the Lyapunov exponents for multiple series in a row matrix.
Parameters:
X (map)
initial_distance (float) : Initial distance limit.
distance_function (string) : Name of the distance function to be used, default:`ssd`.
Returns: List of Lyapunov exponents.
max(L)
Maximal Lyapunov exponent.
Parameters:
L (float ) : List of Lyapunov exponents.
Returns: Highest exponent.
lib_profileLibrary "lib_profile"
a library with functions to calculate a volume profile for either a set of candles within the current chart, or a single candle from its lower timeframe security data. All you need to do is feed the profile functions arrays of top/high, bottom/low and volume (or time) values, e.g. fetched with the history() helper documented below.
method delete(this)
deletes this bucket's plot from the chart
Namespace types: Bucket
Parameters:
this (Bucket)
method delete(this)
Namespace types: Profile
Parameters:
this (Profile)
method delete(this)
Namespace types: Bucket
Parameters:
this (Bucket )
method delete(this)
Namespace types: Profile
Parameters:
this (Profile )
method update(this, top, bottom, value, fraction)
updates this bucket's data
Namespace types: Bucket
Parameters:
this (Bucket)
top (float)
bottom (float)
value (float)
fraction (float)
method update(this, tops, bottoms, values)
update this Profile's data (recalculates the whole profile and applies the result to this object). TODO: optimise this to calculate incrementally, to improve realtime performance on high resolutions
Namespace types: Profile
Parameters:
this (Profile)
tops (float ) : array of range top/high values (either from ltf or chart candles, e.g. using the history() function)
bottoms (float ) : array of range bottom/low values (either from ltf or chart candles, e.g. using the history() function)
values (float ) : array of range volume/1 values (either from ltf or chart candles, e.g. using the history() function; 1s can be used for analysing time spent per bucket/price range)
method tostring(this)
allows debug print of a bucket
Namespace types: Bucket
Parameters:
this (Bucket)
method draw(this, start_t, start_i, end_t, end_i, args, line_color)
allows drawing a line in a Profile, representing this bucket and its value, plus its value's fraction of the Profile's total value
Namespace types: Bucket
Parameters:
this (Bucket)
start_t (int) : the time x coordinate of the line's left end (depends on the Profile box)
start_i (int) : the bar_index x coordinate of the line's left end (depends on the Profile box)
end_t (int) : the time x coordinate of the line's right end (depends on the Profile box)
end_i (int) : the bar_index x coordinate of the line's right end (depends on the Profile box)
args (LineArgs type from robbatt/lib_plot_objects/24) : the default arguments for the line style
line_color (color) : the color override for POC/VAH/VAL lines
method draw(this, forced_width)
draw all components of this Profile (Box, Background, Bucket lines, POC/VAH/VAL overlay levels and labels)
Namespace types: Profile
Parameters:
this (Profile)
forced_width (int) : allows to force width of the Profile Box, overrides the ProfileArgs.default_size and ProfileArgs.extend arguments (default: na)
method init(this)
Namespace types: ProfileArgs
Parameters:
this (ProfileArgs)
method init(this)
Namespace types: Profile
Parameters:
this (Profile)
profile(tops, bottoms, values, resolution, vah_pc, val_pc, bucket_buffer)
split a chart/parent bar into 'resolution' sections and figure out in which section the most volume/time was spent, by analysing a given set of (intra)bars' top/bottom/volume values. Then return the price center of the bucket with the highest volume, essentially marking the point of control / highest volume (POC) in the chart/parent bar.
Parameters:
tops (float ) : array of range top/high values (either from ltf or chart candles using history() function
bottoms (float ) : array of range bottom/low values (either from ltf or chart candles using history() function
values (float ) : array of range volume/1 values (either from ltf or chart candles using history() function (1s can be used for analysing candles in bucket/price range over time)
resolution (int) : amount of buckets/price ranges to sort the candle data into (analyse how much volume / time was spent in a certain bucket/price range) (default: 25)
vah_pc (float) : a threshold percentage (of values' total) for the top end of the value area (default: 80)
val_pc (float) : a threshold percentage (of values' total) for the bottom end of the value area (default: 20)
bucket_buffer (Bucket ) : optional buffer of empty Buckets to fill; if omitted, a new one is created and returned. The buffer's length must match the resolution
Returns: poc (price level), vah (price level), val (price level), poc_index (idx in buckets), vah_index (idx in buckets), val_index (idx in buckets), buckets (filled buffer or new)
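A minimal usage sketch for profile(): the import path and version number are assumptions, and the three hand-built arrays stand in for intrabar data; the tuple layout follows the Returns line above.
//@version=5
indicator("profile() sketch")
import robbatt/lib_profile/1 as prof  // hypothetical version number
// three hypothetical intrabars: highs, lows, volumes
tops = array.from(10.6, 10.9, 10.4)
bottoms = array.from(10.1, 10.3, 9.9)
vols = array.from(120.0, 250.0, 90.0)
[poc, vah, val, poc_i, vah_i, val_i, buckets] = prof.profile(tops, bottoms, vols, 25, 80, 20)
plot(poc)  // price level of the highest-volume bucket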
create_profile(start_idx, tops, bottoms, values, resolution, vah_pc, val_pc, args)
split a chart/parent bar into 'resolution' sections and figure out in which section the most volume/time was spent, by analysing a given set of (intra)bars' top/bottom/volume values. Then return the price center of the bucket with the highest volume, essentially marking the point of control / highest volume (POC) in the chart/parent bar.
Parameters:
start_idx (int) : the bar_index at which the Profile should start drawing
tops (float ) : array of range top/high values (either from ltf or chart candles, e.g. using the history() function)
bottoms (float ) : array of range bottom/low values (either from ltf or chart candles, e.g. using the history() function)
values (float ) : array of range volume/1 values (either from ltf or chart candles, e.g. using the history() function; 1s can be used for analysing time spent per bucket/price range)
resolution (int) : amount of buckets/price ranges to sort the candle data into (analyse how much volume / time was spent in a certain bucket/price range) (default: 25)
vah_pc (float) : a threshold percentage (of values' total) for the top end of the value area (default: 80)
val_pc (float) : a threshold percentage (of values' total) for the bottom end of the value area (default: 20)
args (ProfileArgs)
Returns: poc (price level), vah (price level), val (price level), poc_index (idx in buckets), vah_index (idx in buckets), val_index (idx in buckets), buckets (filled buffer or new)
history(src, len, offset)
allows fetching an array of values from the series' history, with an offset from the current candle
Parameters:
src (int)
len (int)
offset (int)
history(src, len, offset)
allows fetching an array of values from the series' history, with an offset from the current candle
Parameters:
src (float)
len (int)
offset (int)
history(src, len, offset)
allows fetching an array of values from the series' history, with an offset from the current candle
Parameters:
src (bool)
len (int)
offset (int)
history(src, len, offset)
allows fetching an array of values from the series' history, with an offset from the current candle
Parameters:
src (string)
len (int)
offset (int)
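Continuing the sketch above, the history() overloads can replace the hand-built arrays; that offset 0 means the window ends at the current candle is an assumption based on the parameter names. Call it in global scope so the fetched history stays consistent.
// fetch the last 50 chart candles' highs/lows/volumes
tops = prof.history(high, 50, 0)
bottoms = prof.history(low, 50, 0)
vols = prof.history(volume, 50, 0)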
Bucket
Fields:
idx (series int) : the index of this Bucket within the Profile starting with 0 for the lowest Bucket at the bottom of the Profile
value (series float) : the value of this Bucket; can be volume or time (to use time, pass an array of 1s to the update function)
top (series float) : the top of this Bucket's price range (for calculation)
btm (series float) : the bottom of this Bucket's price range (for calculation)
center (series float) : the center of this Bucket's price range (for plotting)
fraction (series float) : the fraction this Bucket's value is compared to the total of the Profile
plot_bucket_line (Line type from robbatt/lib_plot_objects/24) : the line that represents this bucket and its value in the Profile
ProfileArgs
Fields:
show_poc (series bool) : whether to plot a POC line across the Profile Box (default: true)
show_profile (series bool) : whether to plot a line for each Bucket in the Profile Box, indicating the value per Bucket (price range), e.g. the volume that occurred in a certain time and price range (default: false)
show_va (series bool) : whether to plot a VAH/VAL line across the Profile Box (default: false)
show_va_fill (series bool) : whether to fill the 'value' area between VAH/VAL line (default: false)
show_background (series bool) : whether to fill the Profile Box with a background color (default: false)
show_labels (series bool) : whether to add labels to the right end of the POC/VAH/VAL line (default: false)
show_price_levels (series bool) : whether to add price values to the labels at the right end of the POC/VAH/VAL lines (default: false)
extend (series bool) : whether to extend the Profile Box to the current candle (default: false)
default_size (series int) : the default min. width of the Profile Box (default: 30)
args_poc_line (LineArgs type from robbatt/lib_plot_objects/24) : arguments for the poc line plot
args_va_line (LineArgs type from robbatt/lib_plot_objects/24) : arguments for the va line plot
args_poc_label (LabelArgs type from robbatt/lib_plot_objects/24) : arguments for the poc label plot
args_va_label (LabelArgs type from robbatt/lib_plot_objects/24) : arguments for the va label plot
args_profile_line (LineArgs type from robbatt/lib_plot_objects/24) : arguments for the Bucket line plots
args_profile_bg (BoxArgs type from robbatt/lib_plot_objects/24)
va_fill_color (series color) : color for the va area fill plot
Profile
Fields:
start (series int) : left x coordinate for the Profile Box
end (series int) : right x coordinate for the Profile Box
resolution (series int) : the amount of buckets/price ranges the Profile will dissect the data into
vah_threshold_pc (series float) : the percentage of the total data value to mark the upper threshold for the main value area
val_threshold_pc (series float) : the percentage of the total data value to mark the lower threshold for the main value area
args (ProfileArgs) : the style arguments for the Profile Box
h (series float) : the highest price of the data
l (series float) : the lowest price of the data
total (series float) : the total data value (e.g. volume of all candles, or just one each to analyse candle distribution over time)
buckets (Bucket ) : the Bucket objects holding the data for each price range bucket
poc_bucket_index (series int) : the index within buckets of the Bucket holding the POC
vah_bucket_index (series int) : the index within buckets of the Bucket holding the VAH
val_bucket_index (series int) : the index within buckets of the Bucket holding the VAL
poc (series float) : the corresponding price level marking the Point Of Control
vah (series float) : the corresponding price level marking the Value Area High
val (series float) : the corresponding price level marking the Value Area Low
plot_poc (Line type from robbatt/lib_plot_objects/24)
plot_vah (Line type from robbatt/lib_plot_objects/24)
plot_val (Line type from robbatt/lib_plot_objects/24)
plot_poc_label (Label type from robbatt/lib_plot_objects/24)
plot_vah_label (Label type from robbatt/lib_plot_objects/24)
plot_val_label (Label type from robbatt/lib_plot_objects/24)
plot_va_fill (LineFill type from robbatt/lib_plot_objects/24)
plot_profile_bg (Box type from robbatt/lib_plot_objects/24)
SimilarityMeasuresLibrary "SimilarityMeasures"
Similarity measures are statistical methods used to quantify the distance between different data sets
or strings. There are various types of similarity measures, including those that compare:
- data points (SSD, Euclidean, Manhattan, Minkowski, Chebyshev, Correlation, Cosine, Camberra, MAE, MSE, Lorentzian, Intersection, Penrose Shape, Meehl),
- strings (Edit(Levenshtein), Lee, Hamming, Jaro),
- probability distributions (Mahalanobis, Fidelity, Bhattacharyya, Hellinger),
- sets (Kumar Hassebrook, Jaccard, Sorensen, Chi Square).
---
These measures are used in various fields such as data analysis, machine learning, and pattern recognition. They
help to compare and analyze similarities and differences between different data sets or strings, which
can be useful for making predictions, classifications, and decisions.
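A minimal usage sketch; the import path and version are assumptions, the function names follow the docs below.
//@version=5
indicator("similarity sketch")
import user/SimilarityMeasures/1 as sim  // hypothetical path/version
p = array.from(1.0, 2.0, 3.0)
q = array.from(2.0, 4.0, 6.0)
plot(sim.euclidean(p, q))  // straight-line distance, sqrt(14) ~ 3.742
plot(sim.cosine(p, q))     // 0 here, since q is a scaled copy of p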
---
References:
en.wikipedia.org
cran.r-project.org
numerics.mathdotnet.com
github.com
github.com
github.com
Encyclopedia of Distances, doi.org
ssd(p, q)
Sum of squared difference for N dimensions.
Parameters:
p (float ) : `array` Vector with first numeric distribution.
q (float ) : `array` Vector with second numeric distribution.
Returns: Measure of distance given by the squared Euclidean distance (sum of squared differences).
euclidean(p, q)
Euclidean distance for N dimensions.
Parameters:
p (float ) : `array` Vector with first numeric distribution.
q (float ) : `array` Vector with second numeric distribution.
Returns: Measure of the straight-line (Euclidean) distance.
manhattan(p, q)
Manhattan distance for N dimensions.
Parameters:
p (float ) : `array` Vector with first numeric distribution.
q (float ) : `array` Vector with second numeric distribution.
Returns: Measure of absolute differences between both points.
minkowski(p, q, p_value)
Minkowski distance for N dimensions.
Parameters:
p (float ) : `array` Vector with first numeric distribution.
q (float ) : `array` Vector with second numeric distribution.
p_value (float) : `float` P value, default=1.0 (1: manhattan, 2: euclidean); does not support chebyshev.
Returns: Measure of similarity in the normed vector space.
chebyshev(p, q)
Chebyshev distance for N dimensions.
Parameters:
p (float ) : `array` Vector with first numeric distribution.
q (float ) : `array` Vector with second numeric distribution.
Returns: Measure of maximum absolute difference.
correlation(p, q)
Correlation distance for N dimensions.
Parameters:
p (float ) : `array` Vector with first numeric distribution.
q (float ) : `array` Vector with second numeric distribution.
Returns: Correlation distance, a measure of dissimilarity based on the correlation between the vectors.
cosine(p, q)
Cosine distance between provided vectors.
Parameters:
p (float ) : `array` 1D Vector.
q (float ) : `array` 1D Vector.
Returns: The Cosine distance between vectors `p` and `q`.
---
angiogenesis.dkfz.de
camberra(p, q)
Camberra distance for N dimensions.
Parameters:
p (float ) : `array` Vector with first numeric distribution.
q (float ) : `array` Vector with second numeric distribution.
Returns: Weighted measure of absolute differences between both points.
mae(p, q)
Mean absolute error is a normalized version of the sum of absolute differences (Manhattan).
Parameters:
p (float ) : `array` Vector with first numeric distribution.
q (float ) : `array` Vector with second numeric distribution.
Returns: Mean absolute error of vectors `p` and `q`.
mse(p, q)
Mean squared error is a normalized version of the sum of squared differences (SSD).
Parameters:
p (float ) : `array` Vector with first numeric distribution.
q (float ) : `array` Vector with second numeric distribution.
Returns: Mean squared error of vectors `p` and `q`.
lorentzian(p, q)
Lorentzian distance between provided vectors.
Parameters:
p (float ) : `array` Vector with first numeric distribution.
q (float ) : `array` Vector with second numeric distribution.
Returns: Lorentzian distance of vectors `p` and `q`.
---
angiogenesis.dkfz.de
intersection(p, q)
Intersection distance between provided vectors.
Parameters:
p (float ) : `array` Vector with first numeric distribution.
q (float ) : `array` Vector with second numeric distribution.
Returns: Intersection distance of vectors `p` and `q`.
---
angiogenesis.dkfz.de
penrose(p, q)
Penrose Shape distance between provided vectors.
Parameters:
p (float ) : `array` Vector with first numeric distribution.
q (float ) : `array` Vector with second numeric distribution.
Returns: Penrose shape distance of vectors `p` and `q`.
---
angiogenesis.dkfz.de
meehl(p, q)
Meehl distance between provided vectors.
Parameters:
p (float ) : `array` Vector with first numeric distribution.
q (float ) : `array` Vector with second numeric distribution.
Returns: Meehl distance of vectors `p` and `q`.
---
angiogenesis.dkfz.de
edit(x, y)
Edit (aka Levenshtein) distance for indexed strings.
Parameters:
x (int ) : `array` Indexed array.
y (int ) : `array` Indexed array.
Returns: Number of deletions, insertions, or substitutions required to transform source string into target string.
---
generated description:
The Edit distance is a measure of similarity used to compare two strings. It is defined as the minimum number of
operations (insertions, deletions, or substitutions) required to transform one string into another. The operations
are performed on the characters of the strings, and the cost of each operation depends on the specific algorithm
used.
The Edit distance is widely used in various applications such as spell checking, text similarity, and machine
translation. It can also be used for other purposes like finding the closest match between two strings or
identifying the common prefixes or suffixes between them.
---
github.com
www.red-gate.com
planetcalc.com
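Since edit() takes indexed arrays rather than strings, a small helper is needed to map characters to dictionary indices first. A sketch continuing with the `sim` alias from above; the helper f_indexed and its dictionary string are hypothetical, not part of the library.
f_indexed(string s) =>
    dict = "abcdefghijklmnopqrstuvwxyz"  // hypothetical dictionary
    ids = array.new<int>()
    for i = 0 to str.length(s) - 1
        ids.push(str.pos(dict, str.substring(s, i, i + 1)))
    ids
d = sim.edit(f_indexed("kitten"), f_indexed("sitting"))  // classic example, distance 3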
lee(x, y, dsize)
Distance between two indexed strings of equal length.
Parameters:
x (int ) : `array` Indexed array.
y (int ) : `array` Indexed array.
dsize (int) : `int` Dictionary size.
Returns: Distance between two strings by accounting for dictionary size.
---
www.johndcook.com
hamming(x, y)
Distance between two indexed strings of equal length.
Parameters:
x (int ) : `array` Indexed array.
y (int ) : `array` Indexed array.
Returns: Number of positions at which the two sequences differ.
---
en.wikipedia.org
jaro(x, y)
Distance between two indexed strings.
Parameters:
x (int ) : `array` Indexed array.
y (int ) : `array` Indexed array.
Returns: Measure of two strings' similarity: the higher the value, the more similar the strings are.
The score is normalized such that `0` equates to no similarities and `1` is an exact match.
---
rosettacode.org
mahalanobis(p, q, VI)
Mahalanobis distance between two vectors with population inverse covariance matrix.
Parameters:
p (float ) : `array` 1D Vector.
q (float ) : `array` 1D Vector.
VI (matrix) : `matrix` Inverse of the covariance matrix.
Returns: The Mahalanobis distance between vectors `p` and `q`.
---
people.revoledu.com
stat.ethz.ch
docs.scipy.org
fidelity(p, q)
Fidelity distance between provided vectors.
Parameters:
p (float ) : `array` 1D Vector.
q (float ) : `array` 1D Vector.
Returns: The Bhattacharyya Coefficient between vectors `p` and `q`.
---
en.wikipedia.org
bhattacharyya(p, q)
Bhattacharyya distance between provided vectors.
Parameters:
p (float ) : `array` 1D Vector.
q (float ) : `array` 1D Vector.
Returns: The Bhattacharyya distance between vectors `p` and `q`.
---
en.wikipedia.org
hellinger(p, q)
Hellinger distance between provided vectors.
Parameters:
p (float ) : `array` 1D Vector.
q (float ) : `array` 1D Vector.
Returns: The Hellinger distance between vectors `p` and `q`.
---
en.wikipedia.org
jamesmccaffrey.wordpress.com
kumar_hassebrook(p, q)
Kumar Hassebrook distance between provided vectors.
Parameters:
p (float ) : `array` 1D Vector.
q (float ) : `array` 1D Vector.
Returns: The Kumar Hassebrook distance between vectors `p` and `q`.
---
github.com
jaccard(p, q)
Jaccard distance between provided vectors.
Parameters:
p (float ) : `array` 1D Vector.
q (float ) : `array` 1D Vector.
Returns: The Jaccard distance between vectors `p` and `q`.
---
github.com
sorensen(p, q)
Sorensen distance between provided vectors.
Parameters:
p (float ) : `array` 1D Vector.
q (float ) : `array` 1D Vector.
Returns: The Sorensen distance between vectors `p` and `q`.
---
people.revoledu.com
chi_square(p, q, eps)
Chi Square distance between provided vectors.
Parameters:
p (float ) : `array` 1D Vector.
q (float ) : `array` 1D Vector.
eps (float)
Returns: The Chi Square distance between vectors `p` and `q`.
---
uw.pressbooks.pub
stats.stackexchange.com
www.itl.nist.gov
kulczynsky(p, q, eps)
Kulczynsky distance between provided vectors.
Parameters:
p (float ) : `array` 1D Vector.
q (float ) : `array` 1D Vector.
eps (float)
Returns: The Kulczynsky distance between vectors `p` and `q`.
---
github.com
FunctionMatrixCovarianceLibrary "FunctionMatrixCovariance"
In probability theory and statistics, a covariance matrix (also known as auto-covariance matrix, dispersion matrix, variance matrix, or variance–covariance matrix) is a square matrix giving the covariance between each pair of elements of a given random vector.
Intuitively, the covariance matrix generalizes the notion of variance to multiple dimensions. As an example, the variation in a collection of random points in two-dimensional space cannot be characterized fully by a single number, nor would the variances in the `x` and `y` directions contain all of the necessary information; a `2 × 2` matrix would be necessary to fully characterize the two-dimensional variation.
Any covariance matrix is symmetric and positive semi-definite and its main diagonal contains variances (i.e., the covariance of each element with itself).
The covariance matrix of a random vector `X` is typically denoted by `Kxx`, `Σ` or `S`.
~wikipedia.
method cov(M, bias)
Estimate the covariance matrix of the provided data.
Namespace types: matrix
Parameters:
M (matrix) : `matrix` Matrix with vectors in column order.
bias (bool)
Returns: Covariance matrix of provided vectors.
---
en.wikipedia.org
numpy.org
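A minimal sketch combining cov() with the Mahalanobis distance documented above; import paths/versions are assumptions, and bias = false is assumed to select N-1 (sample) normalisation.
//@version=5
indicator("covariance sketch")
import user/FunctionMatrixCovariance/1 as fmc  // hypothetical path/version
import user/SimilarityMeasures/1 as sim       // hypothetical path/version
// two 4-sample vectors in column order
m = matrix.new<float>(4, 2)
vals = array.from(1.0, 2.0, 3.0, 4.0, 2.0, 1.0, 4.0, 3.0)
for r = 0 to 3
    m.set(r, 0, vals.get(r))
    m.set(r, 1, vals.get(r + 4))
cv = m.cov(false)    // sample covariance matrix
vi = matrix.inv(cv)  // inverse covariance, as required by mahalanobis()
plot(sim.mahalanobis(array.from(1.0, 2.0), array.from(3.0, 4.0), vi))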
TradeTrackerv2Library "TradeTrackerv2"
This library can be used to track (hypothetical) trades on the chart. Enter the Open, SL, and TP prices (or TP in R to have it calculated) and then call Trade.TrackTrade(barIndex). Keep track of your trades in an array and then simply call TradeTracker.UpdateAllTrades(close) to update all trades based on the current close price.
How to use:
1. Import the library, as always. I'm assuming the alias of "Tracker" below.
2. The type Trade is exported, so generate a Trade object like `newTrade = Tracker.Trade.new()`.
3. Set the values for Open, SL, and TP. TP can be set either by price or by R, which will calculate the R based on the Open->SL range:
newTrade.priceOpen = 1.0
newTrade.priceSl = 0.5
newTrade.priceTp = 2.0
-- or in place of the third line above --
newTrade.rTp = 2
4. On each interval you want to update (whether that's per tick/close or on each bar), call `trades.UpdateAllTrades(close)`. This snippet assumes you have an array named trades (`var trades = array.new<Tracker.Trade>()`).
In future updates, more customization options will be created. This is the initial prototype.
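Putting the steps above together, a minimal sketch; the import path/version and the entry trigger are assumptions, while field and method names follow the docs below.
//@version=5
indicator("TradeTrackerv2 sketch", overlay=true)
import user/TradeTrackerv2/1 as Tracker  // hypothetical path/version
var trades = array.new<Tracker.Trade>()
if bar_index == last_bar_index - 10  // hypothetical entry trigger
    newTrade = Tracker.Trade.new()
    newTrade.priceOpen := close
    newTrade.priceSl := close * 0.99
    newTrade.rTp := 2  // TP derived from the Open->SL range
    newTrade.TrackTrade(bar_index)
    trades.push(newTrade)
trades.UpdateAllTrades(close)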
method MakeTradeLines(t, barIdx)
Namespace types: Trade
Parameters:
t (Trade)
barIdx (int)
method UpdateLabel(t)
Namespace types: Trade
Parameters:
t (Trade)
method MakeLabel(t, barIdx)
Namespace types: Trade
Parameters:
t (Trade)
barIdx (int)
method CloseTrade(t)
Namespace types: Trade
Parameters:
t (Trade)
method OpenTrade(t)
Namespace types: Trade
Parameters:
t (Trade)
method OpenCloseTrade(t, _close)
Namespace types: Trade
Parameters:
t (Trade)
_close (float)
method CalculateProfits(t, _close)
Calculates profits/losses for the Trade, given _close price
Namespace types: Trade
Parameters:
t (Trade)
_close (float)
method UpdateTrade(t, _close)
Namespace types: Trade
Parameters:
t (Trade)
_close (float)
method SetInitialValues(t, barIdx)
Namespace types: Trade
Parameters:
t (Trade)
barIdx (int)
method UpdateAllTrades(trades, _close)
Namespace types: Trade
Parameters:
trades (Trade )
_close (float)
method TrackTrade(t, barIdx)
Namespace types: Trade
Parameters:
t (Trade)
barIdx (int)
Trade
Fields:
id (series int)
isOpen (series bool)
isClosed (series bool)
isBuy (series bool)
priceOpen (series float)
priceTp (series float)
priceSl (series float)
rTP (series float)
profit (series float)
r (series float)
resultR (series float)
lineOpen (series line)
lineTp (series line)
lineSl (series line)
labelStats (series label)
Extended Moving Average (MA) Library
This Extended Moving Average Library is a sophisticated and comprehensive tool for traders seeking to expand their arsenal of moving averages for more nuanced and detailed technical analysis.
The library contains various types of moving averages, each with two versions - one that accepts a simple constant length parameter and another that accepts a series or changing length parameter.
This makes the library highly versatile and suitable for a wide range of strategies and trading styles.
Moving Averages Included:
Simple Moving Average (SMA): This is the most basic type of moving average. It calculates the average of a selected range of prices, typically closing prices, by the number of periods in that range.
Exponential Moving Average (EMA): This type of moving average gives more weight to the latest data and is thus more responsive to new price information. This can help traders to react faster to recent price changes.
Double Exponential Moving Average (DEMA): This is calculated as twice the EMA minus the EMA of the EMA (DEMA = 2*EMA - EMA(EMA)). It aims to eliminate lag, which is a key drawback of using moving averages (see the sketch at the end of this description).
Jurik Moving Average (JMA): This is a versatile and responsive moving average that can be adjusted for market speed. It is designed to stay balanced and responsive regardless of the length used.
Kaufman's Adaptive Moving Average (KAMA): This moving average is designed to account for market noise or volatility. KAMA will closely follow prices when the price swings are relatively small and the noise is low.
Smoothed Moving Average (SMMA): This is an exponentially weighted average with a smoothing factor of 1/length (equivalent to Wilder's moving average); it reacts more slowly than an EMA of the same length and smooths out the data.
Triangular Moving Average (TMA): This is a double smoothed simple moving average, calculated by averaging the simple moving averages of a dataset.
True Strength Force (TSF): This is a moving average of the linear regression line, a statistical tool used to predict future values from past values.
Volume Moving Average (VMA): This is a simple moving average of a volume, which can help to identify trends in volume.
Volume Adjusted Moving Average (VAMA): This moving average adjusts for volume and can be more responsive to volume changes.
Zero Lag Exponential Moving Average (ZLEMA): This type of moving average aims to eliminate the lag in traditional EMAs by applying the EMA to de-lagged prices, making it more responsive to recent price changes (see the sketch at the end of this description).
Selector: The selector function allows users to easily select and apply any of the moving averages included in the library inside their strategy.
This library provides a broad selection of moving averages to choose from, allowing you to experiment with different types and find the one that best suits your trading strategy.
By providing both simple and series versions for each moving average, this library offers great flexibility, enabling users to pass both constant and changing length parameters as needed.
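As an illustration of two of the formulas above, here is a generic sketch using only Pine built-ins (not the library's own functions): DEMA = 2*EMA - EMA(EMA), and ZLEMA applies an EMA to de-lagged prices.
//@version=5
indicator("DEMA / ZLEMA sketch")
len = input.int(20, "Length")
e1 = ta.ema(close, len)
e2 = ta.ema(e1, len)
dema = 2 * e1 - e2  // cancels much of the single EMA's lag
lag = (len - 1) / 2
zlema = ta.ema(close + (close - close[lag]), len)  // EMA of de-lagged price
plot(dema, "DEMA")
plot(zlema, "ZLEMA")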