Central Bank Dark Energy Tracer (CBDE Tracer)
What makes The Universe grow at an accelerating pace?
Dark Energy.
What makes The Economy grow at an accelerating pace?
Debt.
Debt is the Dark Energy of The Economy.
The CBDE Tracer is a tool that tracks central bank currency assets in US dollars and can be scaled to fit other assets on TradingView.
The example provided is QQQ, with scale factors and offsets applied to best curve-fit the most recent price action.
The white line is non-US assets from the following central banks:
-JPY (Japan)
-CNY (China)
-GBP (United Kingdom)
-SNB (Swiss National Bank)
-ECB (European Central Bank)
The lime green line is US Federal Reserve data: the midpoint of WRESBAL and the Fed liquidity calculation (WALCL - WTREGEN - RRPONTSYD).
The purple line is the average of the two: US assets and non-US assets.
The settings can be configured so that only the average shows, which should be the closest aggregate of all liquidity data.
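For reference, a minimal Pine sketch of how the lime green line could be assembled as described above - assuming TradingView's FRED symbols (FRED:WALCL, FRED:WTREGEN, FRED:RRPONTSYD, FRED:WRESBAL); the published script's exact scaling and offsets may differ:
//@version=5
indicator("Fed liquidity sketch")
// Note: FRED series use different units (WALCL is in millions; the others in
// billions), so additional scaling may be needed to match the published script.
walcl   = request.security("FRED:WALCL", "D", close)      // Fed total assets
tga     = request.security("FRED:WTREGEN", "D", close)    // Treasury General Account
rrp     = request.security("FRED:RRPONTSYD", "D", close)  // overnight reverse repo
wresbal = request.security("FRED:WRESBAL", "D", close)    // reserve balances
fedLiq  = walcl - tga - rrp                               // WALCL - WTREGEN - RRPONTSYD
usLine  = (wresbal + fedLiq) / 2                          // midpoint with WRESBAL
plot(usLine, "US assets", color = color.lime)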
TICK - Custom Tickers [Pt]
Traditionally, the TICK index is a technical analysis indicator that shows the difference between the number of stocks trading on an uptick and those on a downtick in a particular period of time. This indicator allows users to choose up to 40 tickers to calculate TICK.
By default, it uses the SPY Top 40 stocks, but can be changed to any tickers.
There are options to show:
- Top 7, i.e. can be used to show TICK for just FAANGMT => $FB + $AMZN + $AAPL + $NFLX + $GOOG + $MSFT + $TSLA
- Top 10
- Top 20
- Top 30
- Top 40
Data can be displayed in candle bars, line, or both.
Enjoy~
RSI Impact Heat Map [Trendoscope]
Here is a simple tool to measure and display the outcome of a certain RSI event on a heat map.
🎲 Process
🎯Event
The event can be either a crossover or a crossunder of RSI at a certain value.
🎯Measuring Impact
The impact of the event after N bars is measured in terms of the highest and lowest displacement from the last close price. Impact can be collected either as a multiple of ATR or as a percentage of price. The impact for each trigger is recorded separately and stored in an array of a custom type.
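As a rough illustration of this measurement step, here is a hedged sketch (not the published implementation) that records the highest and lowest displacement, in ATR multiples, over the N bars following an RSI crossover:
//@version=5
indicator("RSI event impact sketch")
n      = input.int(20, "Impact duration (bars)")
r      = ta.rsi(close, 14)
event  = ta.crossover(r, 30)                  // example event: RSI crossing over 30
atrVal = ta.atr(14)
// Evaluate retrospectively: the event fired n bars ago, so the last n bars
// form the window over which its impact unfolded.
refClose = close[n]
hiDisp   = (ta.highest(high, n) - refClose) / atrVal   // highest displacement, in ATRs
loDisp   = (refClose - ta.lowest(low, n)) / atrVal     // lowest displacement, in ATRs
plot(event[n] ? hiDisp : na, "High impact", style = plot.style_histogram, color = color.green)
plot(event[n] ? -loDisp : na, "Low impact", style = plot.style_histogram, color = color.red)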
🎯Plotting Heat Map
The heat map is displayed using Pine tables. Users can select the heat map size, which can vary from 10 to 90. Selecting the optimal size is important in order to interpret the data correctly. A higher number of cells gives more granular data, but the chart may not fit into the window. A lower size means stats are combined together into less granular data, which may not give the right picture of the results. The default size is 50 - meaning data is displayed in 51x51 cells.
The range of the heat map is adjusted automatically based on the min and max values of the displacement. In order to filter out or merge extreme values, the range is calculated based on a certain percentile of the values. This avoids displaying lots of empty cells, which can obscure the actual impact.
🎲 Settings
Settings allow users to define their event, impact duration and reference, and a few display-related properties. The description of these parameters is as below:
🎲 Use Cases
In this script, we have taken RSI as an example to measure impact. But, we can do this for any event. This can be price crossing over/under upper/lower bollinger bands, moving average crossovers or even complex entry or exit conditions. Overall, we can use this to plot and evaluate our trade criteria.
🎲 Interpretation
Q1 - If more coloured dots appear in the top right corner of the table, the event is considered to trigger a high-volatility, high-risk environment.
Q2 - If more coloured dots appear in the top left corner, the events are considered to trigger a bearish environment.
Q3 - If more coloured dots appear in the bottom left corner of the chart, the events are considered insignificant, as they generate higher displacement on neither the positive nor the negative side. You can further alter the outlier percentage to reduce the bracket and hence have a higher distribution move towards
Q4 - If more coloured dots appear in the bottom right corner, the events are considered to trigger a bullish environment.
I will also look into implementing this as a library so that any condition or event can be plugged into it.
Historical Federal Fund Futures Curve
Use this indicator to plot the federal funds futures implied-rates term structure against historical curves.
Based upon the work of @BarefootJoey, @longfiat, @OpptionsOnly
BTC / DXY, BTC / US10Y
The combination of the DXY and US02Y can be used to gauge market sentiment and assess the state of the global economy.
When the DXY is rising, it indicates that the U.S. dollar is strengthening relative to other currencies, which can lead to increased risk aversion among investors as the U.S. dollar is often seen as a safe-haven currency.
When the US02Y is rising, it suggests that market expectations for future inflation and interest rate increases are increasing, which can lead to a decrease in the value of riskier assets such as stocks.
In general, the combination of the DXY and US02Y can provide important information on the direction of global market trends and the state of the economy, and as such, they are important indicators to consider when making investment decisions.
RISK MANAGEMENT
Hi dear Traders,
Here I present my 'Risk Management' tool, which is based on your buy orders, trading fees, and your desired benefit.
Simply fill in the price and volume of each entry. In addition, you can prepare the info for your next proposed entries: leave them unchecked at first, and once their prices are met, activate the check-box beside each one.
Two lines will be presented on your candle plot: one indicates the price at which you can exit without any loss/win, and the other the desired exit level based on your user-defined benefit.
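For intuition, here is a hedged sketch of the break-even math with fees for a single entry (the published tool aggregates multiple entries; the input names here are illustrative, not the script's actual parameters):
//@version=5
indicator("Break-even & target sketch", overlay = true)
avgEntry  = input.float(100.0, "Average entry price")     // illustrative single entry
feePct    = input.float(0.1, "Fee per side (%)") / 100
benefit   = input.float(5.0, "Desired benefit (%)") / 100
// Break-even: exit price where proceeds after the sell fee cover cost including the buy fee
breakEven = avgEntry * (1 + feePct) / (1 - feePct)
target    = breakEven * (1 + benefit)
plot(breakEven, "No lose/win exit", color = color.gray)
plot(target, "Desired exit", color = color.green)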
Use it for free but please do not forget to boost the script.
Best regards and happy trading.
Shakib
Liquidation_lines
Library "Liquidationline"
f_calculateLeverage(_leverage, _maintainance, _value, _direction)
Parameters:
_leverage
_maintainance
_value
_direction
f_liqline_update(_Liqui_Line, _killonlowhigh)
Parameters:
_Liqui_Line
_killonlowhigh
f_liqline_draw(_Liqui_Line, _priceorliq)
Parameters:
_Liqui_Line
_priceorliq
f_liqline_add(_Liqui_Line, linetoadd, _limit)
Parameters:
_Liqui_Line
linetoadd
_limit
Liquidationline
Fields:
creationtime
stoptime
price
leverage
maintainance
line_active
line_color
line_thickness
line_style
line_direction
line_finished
text_active
text_size
text_color
This library can draw typical liquidation lines, which can be triggered e.g. by indicator signals.
You can see the default implementation in the lower part of the code, starting with RUNTIME.
Don't forget to increase max lines to 500 in your script.
It can look like this screenshot here, with only minor changes to your executing script.
The base is the same.
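As a hedged illustration of what such a line represents - a rough isolated-margin approximation, not necessarily this library's exact formula - a long position's liquidation price can be sketched like this:
//@version=5
indicator("Liquidation line sketch", overlay = true, max_lines_count = 500)
lev = input.float(10.0, "Leverage")
mmr = input.float(0.005, "Maintenance margin rate")
// Example trigger: draw a liquidation line whenever RSI crosses over 30
if ta.crossover(ta.rsi(close, 14), 30)
    liqPrice = close * (1 - 1 / lev + mmr)   // rough long-liquidation estimate
    line.new(bar_index, liqPrice, bar_index + 50, liqPrice, color = color.red, style = line.style_dashed)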
Global Liquidity
Plots the sum of the balance sheets of the world's major central banks - FED, ECB, BoE, PBoC, BOJ, India and Switzerland - in a currency of your choice. Defaults to USD.
Also shows FED net liquidity (balance sheet - tga - rrp) for comparison. Uses a configurable multiplier to make the two lines viewable on the same price scale.
Distributions
Library "Distributions"
Library with price distribution zones calculation helpers.
Based on research from "Trading Systems and Methods, 5th Edition" by Perry J. Kaufman
getZones(h, l, c, window)
Returns price distribution zones based on HLC and for some period
Parameters:
h : high price
l : low price
c : close price
window : period to calculate distributions
Returns: tuple of 5 price zones in descending order, from highest to lowest
Balance of Force Day of the Week (BOFDW)
The script is a custom technical indicator for TradingView based on an analysis of a financial instrument's price movements over the course of a week. The indicator uses a variety of inputs, including the open and close prices for each day of the week, to determine the Balance of Force (BOF) for each day.
The BOF is calculated from the relative magnitude of bullish and bearish price movements and is then used to determine the average BOF over a moving window of data points. This average BOF is displayed on the chart as an overlay, providing a measure of the average bullishness or bearishness of the instrument over the course of a week.
The indicator also allows users to specify the location of the overlay on the chart and to customize its appearance with options for text and box colors. The script provides a number of built-in options for chart position, including the top-left, top-middle, top-right, middle-left, middle-center, middle-right, bottom-left, bottom-middle, and bottom-right corners of the chart.
Overall, this custom technical indicator is a useful tool for traders and investors looking to gain a deeper understanding of an instrument's price trends over the course of a week. By providing a clear and concise measure of the average BOF over time, the indicator can help users identify key patterns in the market and make more informed trading decisions.
PerformanceTable
Library "PerformanceTable"
This was packaged as a library because adding a performance table directly to an existing strategy script made the script lengthy and inconvenient to manage.
The monthly table script referenced @QuantNomad's code.
The performance table script referenced @myncrypto's code.
To use, copy and paste the code below at the bottom of the strategy script you are using, and the table for strategy performance will be displayed on a chart.
//------------Copy & Paste --------------------------------------//
import Cube_Lee/PerformanceTable/1 as PT
showPerf = input.bool(true, "Show Performance Table", tooltip = "Displays the strategy's performance in a table at the top right.", group = "Performance Table")
showMonthly = input.bool(true, "Show Monthly Table", tooltip = "Displays the strategy's monthly returns in a table at the bottom right.", group = "Performance Table")
if showPerf
    PT.PerformanceTable()
if showMonthly
    PT.MonthlyTable()
//------------Copy & Paste---------------------------------------//
PerformanceTable()
MonthlyTable()
VIX Oscillator
This is my VIX Oscillator indicator.
About it:
This indicator takes the Z-Score of the VIX and of the current ticker you are on and presents them in the format of an oscillator.
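A minimal sketch of that core calculation - assuming a simple rolling z-score and the CBOE:VIX symbol; the published indicator's length and smoothing may differ:
//@version=5
indicator("Z-score oscillator sketch")
len = input.int(50, "Z-score length")
zscore(src) =>
    (src - ta.sma(src, len)) / ta.stdev(src, len)
tickerZ = zscore(close)
vixZ    = zscore(request.security("CBOE:VIX", timeframe.period, close))
plot(tickerZ, "Ticker Z-score", color = color.purple)
plot(vixZ, "VIX Z-score", color = color.blue)
hline(2.5, "Overbought")
hline(0, "Neutral")
hline(-2.5, "Oversold")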
Key parts of the indicator:
A diagram of the key elements of the indicator is displayed above.
Purple Line: Represents the Z-Score of the current Ticker.
Blue Line: Represents the Z-Score of the VIX
Green fill line: Represents bullish divergence
Red fill line: Represents bearish divergence
How to use it:
Characteristics for long entries:
- Look for recent bullish divergence (green fill line)
- Look for the ticker line (purple line) to be holding above 0 (neutrality)
- Look for a bullish cross (purple line (ticker) crossing over blue line (VIX))
Characteristics for short entries:
- Look for recent Bearish divergence
- Look for the VIX line (blue line) to be holding above 0 and above the ticker line
- Look for the ticker line to be holding below 0
- Look for a bearish cross (blue crossing above purple)
Some principles:
The bands represent oversold, overbought and neutral.
0 is absolute neutrality. No bias here.
Anything towards +2.5 is considered normal, moving towards overbought (2.5 or higher).
Anything towards -2.5 is considered normal, moving towards oversold (-2.5 or lower).
+2.5 or higher is overbought.
-2.5 or lower is oversold.
As always, I have prepared a quick tutorial video for your reference of this indicator:
Please let me know your questions, comments or suggestions about this indicator below.
Thank you for checking it out!
Delta Ladder [Kioseff Trading]
Hello!
This script presents volume delta data in various forms!
Features
Classic mode: Volume delta boxes oriented to the right of the bar (sell closer / buy further)
On Bar mode: Volume delta boxes oriented on the bar (sell left / buy right)
Pure Ladder mode: Pure volume delta ladder
PoC highlighting
Color-coordinated delta boxes. Marginal volume differences are substantially shaded while large volume differences are lightly shaded.
Volume delta boxes can be merged and delta values removed to generate a color-only canvas reflecting vol. delta differences in price blocks.
Price bars can be split up to 497 times - allowing for greater precision.
Total volume delta for the bar and timestamp included
The image above shows Classic mode - delta blocks are oriented left/right contingent on positive/negative values!
The image above shows the same price sequence; however, delta blocks are superimposed on the price bar. Left-side blocks reflect negative delta while right-side blocks reflect positive delta! To apply this display method - select "On Bar" for the "Data Display Method" setting!
The image above shows "Pure Ladder" mode. Delta blocks remain color-coordinated; however, all delta blocks retain the same x-axis as the price bar they were calculated for!
Additionally, you can select to remove the delta values and merge the delta boxes to generate a color-based canvas indicative of volume delta at traded price levels!
The image above shows the same price sequence; however, the "Volume Assumption" setting is activated.
When active, the indicator assumes a 60/40 split when a level is traded and only one metric - "buy volume" or "sell volume" - is recorded. This means there shouldn't be any levels recorded where "buy volume" is greater than 0 and "sell volume" equals 0, or vice versa. While this split was chosen arbitrarily, it may help better replicate the volume delta and OI delta calculations seen on other charting platforms.
This option is configurable; you can have the script not assume a 60/40 split and instead record volume "as is" at the corresponding price level!
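A hedged sketch of the described assumption, using simple bar-based up/down volume for brevity (the actual indicator applies it per intrabar price level):
//@version=5
indicator("Volume assumption sketch")
upVol = close > open ? volume : 0.0
dnVol = close < open ? volume : 0.0
// When only one side recorded volume, assume a 60/40 split in its favor
oneSided = (upVol > 0 and dnVol == 0) or (dnVol > 0 and upVol == 0)
total    = upVol + dnVol
adjBuy   = oneSided ? (upVol > 0 ? total * 0.6 : total * 0.4) : upVol
adjSell  = oneSided ? (upVol > 0 ? total * 0.4 : total * 0.6) : dnVol
plot(adjBuy - adjSell, "Adjusted delta", style = plot.style_columns)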
I plan to roll out additional features for the indicator - particularly tick-based price blocks! Stay tuned (:
Thank you!
Reinforced RSI - The Quant Science
This strategy was designed and written with the goal of showing and motivating the community to integrate our 'Probabilities' module with their own scripts.
We have recreated one of the simplest strategies used by many traders. The strategy only trades long and uses the overbought and oversold levels on the RSI indicator.
We added stop losses and take profits to make the strategy more dynamic. Then the 'Probabilities' module was integrated to create a probabilistic reinforcement on each trade.
Specifically, each trade is executed only if the past probability of making a profitable trade is greater than or equal to 51%. This greatly increases the performance of the strategy by avoiding potentially bad trades.
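The 'Probabilities' module itself is not reproduced here, but a hedged sketch of the idea - gating entries on the historical win rate of the same signal, using a simple higher-close-after-N-bars proxy for 'profitable' - could look like this:
//@version=5
strategy("RSI + probability gate sketch", overlay = true)
len      = input.int(13, "Length")
oversold = input.int(40, "Oversold")
lookback = input.int(50, "Lookback")
horizon  = 10                                  // illustrative profit-evaluation horizon
r        = ta.rsi(close, len)
signal   = ta.crossover(r, oversold)
// Of the signals fired `horizon` bars ago within the lookback window, how many
// were followed by a higher close? Trade only if that past win rate is >= 51%.
win   = signal[horizon] and close > close[horizon]
wins  = math.sum(win ? 1 : 0, lookback)
total = math.sum(signal[horizon] ? 1 : 0, lookback)
prob  = total > 0 ? wins * 1.0 / total : 0.0
if signal and prob >= 0.51
    strategy.entry("Long", strategy.long)
    strategy.exit("TP/SL", "Long", profit = close * 0.03 / syminfo.mintick, loss = close * 0.03 / syminfo.mintick)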
The backtesting was calculated on NASDAQ:TSLA, on the 15-minute timeframe.
The strategy works on Tesla using the following parameters:
1. Length: 13
2. Oversold: 40
3. Overbought: 70
4. Lookback: 50
5. Take profit: 3%
6. Stop loss: 3%
Time period: January 2021 to date.
Our Probabilities Module, used in the strategy example:
Machine Learning: Lorentzian Classification
█ OVERVIEW
A Lorentzian Distance Classifier (LDC) is a Machine Learning classification algorithm capable of categorizing historical data from a multi-dimensional feature space. This indicator demonstrates how Lorentzian Classification can also be used to predict the direction of future price movements when used as the distance metric for a novel implementation of an Approximate Nearest Neighbors (ANN) algorithm.
█ BACKGROUND
In physics, Lorentzian space is perhaps best known for its role in describing the curvature of space-time in Einstein's theory of General Relativity (2). Interestingly, however, this abstract concept from theoretical physics also has tangible real-world applications in trading.
Recently, it was hypothesized that Lorentzian space was also well-suited for analyzing time-series data (4), (5). This hypothesis has been supported by several empirical studies that demonstrate that Lorentzian distance is more robust to outliers and noise than the more commonly used Euclidean distance (1), (3), (6). Furthermore, Lorentzian distance was also shown to outperform dozens of other highly regarded distance metrics, including Manhattan distance, Bhattacharyya similarity, and Cosine similarity (1), (3). Outside of Dynamic Time Warping based approaches, which are unfortunately too computationally intensive for PineScript at this time, the Lorentzian Distance metric consistently scores the highest mean accuracy over a wide variety of time series data sets (1).
Euclidean distance is commonly used as the default distance metric for NN-based search algorithms, but it may not always be the best choice when dealing with financial market data. This is because financial market data can be significantly impacted by proximity to major world events such as FOMC Meetings and Black Swan events. This event-based distortion of market data can be framed as similar to the gravitational warping caused by a massive object on the space-time continuum. For financial markets, the analogous continuum that experiences warping can be referred to as "price-time".
Below is a side-by-side comparison of how neighborhoods of similar historical points appear in three-dimensional Euclidean Space and Lorentzian Space:
This figure demonstrates how Lorentzian space can better accommodate the warping of price-time since the Lorentzian distance function compresses the Euclidean neighborhood in such a way that the new neighborhood distribution in Lorentzian space tends to cluster around each of the major feature axes in addition to the origin itself. This means that, even though some nearest neighbors will be the same regardless of the distance metric used, Lorentzian space will also allow for the consideration of historical points that would otherwise never be considered with a Euclidean distance metric.
Intuitively, the advantage inherent in the Lorentzian distance metric makes sense. For example, it is logical that the price action that occurs in the hours after Chairman Powell finishes delivering a speech would resemble at least some of the previous times when he finished delivering a speech. This may be true regardless of other factors, such as whether or not the market was overbought or oversold at the time or if the macro conditions were more bullish or bearish overall. These historical reference points are extremely valuable for predictive models, yet the Euclidean distance metric would miss these neighbors entirely, often in favor of irrelevant data points from the day before the event. By using Lorentzian distance as a metric, the ML model is instead able to consider the warping of price-time caused by the event and, ultimately, transcend the temporal bias imposed on it by the time series.
For more information on the implementation details of the Approximate Nearest Neighbors (ANN) algorithm used in this indicator, please refer to the detailed comments in the source code.
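For readers who just want the core metric, here is a hedged two-feature sketch of a Lorentzian distance comparison (the full script normalizes its features and scans many historical bars via the ANN search; the feature choices below are arbitrary):
//@version=5
indicator("Lorentzian distance sketch")
f1 = ta.rsi(close, 14)   // feature 1
f2 = ta.cci(close, 20)   // feature 2
// Lorentzian distance between the current bar's features and those of the bar
// i bars ago: the sum over features of log(1 + |difference|)
lorentzian(int i) =>
    math.log(1 + math.abs(f1 - f1[i])) + math.log(1 + math.abs(f2 - f2[i]))
plot(lorentzian(50), "Distance to the bar 50 bars ago")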
█ HOW TO USE
Below is an explanatory breakdown of the different parts of this indicator as it appears in the interface:
Below is an explanation of the different settings for this indicator:
General Settings:
Source - This has a default value of "hlc3" and is used to control the input data source.
Neighbors Count - This has a default value of 8, a minimum value of 1, a maximum value of 100, and a step of 1. It is used to control the number of neighbors to consider.
Max Bars Back - This has a default value of 2000.
Feature Count - This has a default value of 5, a minimum value of 2, and a maximum value of 5. It controls the number of features to use for ML predictions.
Color Compression - This has a default value of 1, a minimum value of 1, and a maximum value of 10. It is used to control the compression factor for adjusting the intensity of the color scale.
Show Exits - This has a default value of false. It controls whether to show the exit threshold on the chart.
Use Dynamic Exits - This has a default value of false. It is used to control whether to attempt to let profits ride by dynamically adjusting the exit threshold based on kernel regression.
Feature Engineering Settings:
Note: The Feature Engineering section is for fine-tuning the features used for ML predictions. The default values are optimized for the 4H to 12H timeframes for most charts, but they should also work reasonably well for other timeframes. By default, the model can support features that accept two parameters (Parameter A and Parameter B, respectively). Even though there are only 4 features provided by default, the same feature with different settings counts as two separate features. If the feature only accepts one parameter, then the second parameter will default to EMA-based smoothing with a default value of 1. These features represent the most effective combination I have encountered in my testing, but additional features may be added as additional options in the future.
Feature 1 - This has a default value of "RSI" and options are: "RSI", "WT", "CCI", "ADX".
Feature 2 - This has a default value of "WT" and options are: "RSI", "WT", "CCI", "ADX".
Feature 3 - This has a default value of "CCI" and options are: "RSI", "WT", "CCI", "ADX".
Feature 4 - This has a default value of "ADX" and options are: "RSI", "WT", "CCI", "ADX".
Feature 5 - This has a default value of "RSI" and options are: "RSI", "WT", "CCI", "ADX".
Filters Settings:
Use Volatility Filter - This has a default value of true. It is used to control whether to use the volatility filter.
Use Regime Filter - This has a default value of true. It is used to control whether to use the trend detection filter.
Use ADX Filter - This has a default value of false. It is used to control whether to use the ADX filter.
Regime Threshold - This has a default value of -0.1, a minimum value of -10, a maximum value of 10, and a step of 0.1. It is used to control the Regime Detection filter for detecting Trending/Ranging markets.
ADX Threshold - This has a default value of 20, a minimum value of 0, a maximum value of 100, and a step of 1. It is used to control the threshold for detecting Trending/Ranging markets.
Kernel Regression Settings:
Trade with Kernel - This has a default value of true. It is used to control whether to trade with the kernel.
Show Kernel Estimate - This has a default value of true. It is used to control whether to show the kernel estimate.
Lookback Window - This has a default value of 8 and a minimum value of 3. It is used to control the number of bars used for the estimation. Recommended range: 3-50
Relative Weighting - This has a default value of 8 and a step size of 0.25. It is used to control the relative weighting of time frames. Recommended range: 0.25-25
Start Regression at Bar - This has a default value of 25. It is used to control the bar index on which to start regression. Recommended range: 0-25
Display Settings:
Show Bar Colors - This has a default value of true. It is used to control whether to show the bar colors.
Show Bar Prediction Values - This has a default value of true. It controls whether to show the ML model's evaluation of each bar as an integer.
Use ATR Offset - This has a default value of false. It controls whether to use the ATR offset instead of the bar prediction offset.
Bar Prediction Offset - This has a default value of 0 and a minimum value of 0. It is used to control the offset of the bar predictions as a percentage from the bar high or close.
Backtesting Settings:
Show Backtest Results - This has a default value of true. It is used to control whether to display the win rate of the given configuration.
█ WORKS CITED
(1) R. Giusti and G. E. A. P. A. Batista, "An Empirical Comparison of Dissimilarity Measures for Time Series Classification," 2013 Brazilian Conference on Intelligent Systems, Oct. 2013, DOI: 10.1109/bracis.2013.22.
(2) Y. Kerimbekov, H. Ş. Bilge, and H. H. Uğurlu, "The use of Lorentzian distance metric in classification problems," Pattern Recognition Letters, vol. 84, 170–176, Dec. 2016, DOI: 10.1016/j.patrec.2016.09.006.
(3) A. Bagnall, A. Bostrom, J. Large, and J. Lines, "The Great Time Series Classification Bake Off: An Experimental Evaluation of Recently Proposed Algorithms." ResearchGate, Feb. 04, 2016.
(4) H. Ş. Bilge, Yerzhan Kerimbekov, and Hasan Hüseyin Uğurlu, "A new classification method by using Lorentzian distance metric," ResearchGate, Sep. 02, 2015.
(5) Y. Kerimbekov and H. Şakir Bilge, "Lorentzian Distance Classifier for Multiple Features," Proceedings of the 6th International Conference on Pattern Recognition Applications and Methods, 2017, DOI: 10.5220/0006197004930501.
(6) V. Surya Prasath et al., "Effects of Distance Measure Choice on KNN Classifier Performance - A Review."
█ ACKNOWLEDGEMENTS
@veryfid - For many invaluable insights, discussions, and advice that helped to shape this project.
@capissimo - For open sourcing his interesting ideas regarding various KNN implementations in PineScript, several of which helped inspire my original undertaking of this project.
@RikkiTavi - For many invaluable physics-related conversations and for helping me develop a mechanism for visualizing various distance algorithms in 3D using JavaScript.
@jlaurel - For invaluable literature recommendations that helped me to understand the underlying subject matter of this project.
@annutara - For help in beta-testing this indicator and for sharing many helpful ideas and insights early on in its development.
@jasontaylor7 - For helping to beta-test this indicator and for many helpful conversations that helped to shape my backtesting workflow
@meddymarkusvanhala - For helping to beta-test this indicator
@dlbnext - For incredibly detailed backtesting testing of this indicator and for sharing numerous ideas on how the user experience could be improved.
MLExtensions
Library "MLExtensions"
normalizeDeriv(src, quadraticMeanLength)
Returns the normalized derivative of the input series.
Parameters:
src : The input series (i.e., the first-order derivative for price).
quadraticMeanLength : The length of the quadratic mean (RMS).
Returns: nDeriv The normalized derivative of the input series.
normalize(src, min, max)
Rescales a source value with an unbounded range to a target range.
Parameters:
src : The input series
min : The minimum value of the unbounded range
max : The maximum value of the unbounded range
Returns: The normalized series
rescale(src, oldMin, oldMax, newMin, newMax)
Rescales a source value with a bounded range to another bounded range
Parameters:
src : The input series
oldMin : The minimum value of the range to rescale from
oldMax : The maximum value of the range to rescale from
newMin : The minimum value of the range to rescale to
newMax : The maximum value of the range to rescale to
Returns: The rescaled series
color_green(prediction)
Assigns varying shades of the color green based on the KNN classification
Parameters:
prediction : Value (int|float) of the prediction
Returns: color
color_red(prediction)
Assigns varying shades of the color red based on the KNN classification
Parameters:
prediction : Value of the prediction
Returns: color
tanh(src)
Returns the hyperbolic tangent of the input series. The sigmoid-like hyperbolic tangent function is used to compress the input to a value between -1 and 1.
Parameters:
src : The input series (i.e., the normalized derivative).
Returns: tanh The hyperbolic tangent of the input series.
dualPoleFilter(src, lookback)
Returns the smoothed hyperbolic tangent of the input series.
Parameters:
src : The input series (i.e., the hyperbolic tangent).
lookback : The lookback window for the smoothing.
Returns: filter The smoothed hyperbolic tangent of the input series.
tanhTransform(src, smoothingFrequency, quadraticMeanLength)
Returns the tanh transform of the input series.
Parameters:
src : The input series (i.e., the result of the tanh calculation).
smoothingFrequency
quadraticMeanLength
Returns: signal The smoothed hyperbolic tangent transform of the input series.
n_rsi(src, n1, n2)
Returns the normalized RSI ideal for use in ML algorithms.
Parameters:
src : The input series (i.e., the result of the RSI calculation).
n1 : The length of the RSI.
n2 : The smoothing length of the RSI.
Returns: signal The normalized RSI.
n_cci(src, n1, n2)
Returns the normalized CCI ideal for use in ML algorithms.
Parameters:
src : The input series (i.e., the result of the CCI calculation).
n1 : The length of the CCI.
n2 : The smoothing length of the CCI.
Returns: signal The normalized CCI.
n_wt(src, n1, n2)
Returns the normalized WaveTrend Classic series ideal for use in ML algorithms.
Parameters:
src : The input series (i.e., the result of the WaveTrend Classic calculation).
n1
n2
Returns: signal The normalized WaveTrend Classic series.
n_adx(highSrc, lowSrc, closeSrc, n1)
Returns the normalized ADX ideal for use in ML algorithms.
Parameters:
highSrc : The input series for the high price.
lowSrc : The input series for the low price.
closeSrc : The input series for the close price.
n1 : The length of the ADX.
regime_filter(src, threshold, useRegimeFilter)
Parameters:
src
threshold
useRegimeFilter
filter_adx(src, length, adxThreshold, useAdxFilter)
filter_adx
Parameters:
src : The source series.
length : The length of the ADX.
adxThreshold : The ADX threshold.
useAdxFilter : Whether to use the ADX filter.
Returns: The ADX.
filter_volatility(minLength, maxLength, useVolatilityFilter)
filter_volatility
Parameters:
minLength : The minimum length of the ATR.
maxLength : The maximum length of the ATR.
useVolatilityFilter : Whether to use the volatility filter.
Returns: Boolean indicating whether or not to let the signal pass through the filter.
backtest(high, low, open, startLongTrade, endLongTrade, startShortTrade, endShortTrade, isStopLossHit, maxBarsBackIndex, thisBarIndex)
Performs a basic backtest using the specified parameters and conditions.
Parameters:
high : The input series for the high price.
low : The input series for the low price.
open : The input series for the open price.
startLongTrade : The series of conditions that indicate the start of a long trade.
endLongTrade : The series of conditions that indicate the end of a long trade.
startShortTrade : The series of conditions that indicate the start of a short trade.
endShortTrade : The series of conditions that indicate the end of a short trade.
isStopLossHit : The stop loss hit indicator.
maxBarsBackIndex : The maximum number of bars to go back in the backtest.
thisBarIndex : The current bar index.
Returns: A tuple containing backtest values
init_table()
init_table()
Returns: tbl The backtest results.
update_table(tbl, tradeStatsHeader, totalTrades, totalWins, totalLosses, winLossRatio, winrate, stopLosses)
update_table(tbl, tradeStats)
Parameters:
tbl : The backtest results table.
tradeStatsHeader : The trade stats header.
totalTrades : The total number of trades.
totalWins : The total number of wins.
totalLosses : The total number of losses.
winLossRatio : The win loss ratio.
winrate : The winrate.
stopLosses : The total number of stop losses.
Returns: Updated backtest results table.
USD Liquidity Index
This USD Liquidity Index is composed of two parts: total assets and major liabilities of the Federal Reserve.
There is a certain positive historical correlation between USD liquidity and risk-asset price changes.
On the premise that USD liquidity is mostly determined by the Federal Reserve's balance sheet (without leverage), this index deducts three major liabilities from the Fed's total assets (green line): currency in circulation (WCURCIR) in gold, the Treasury General Account (WTREGEN) in blue, and the Reverse Repo (RRPONTSYD) in red.
The grey line is the resulting USD Liquidity Index. When it goes up, liquidity increases, and vice versa.
GAVAD - Selling after a Strong Moviment
This strategy looks for moments when the market makes two consistently strong candles, then opens a sell, seeking the immediate correction on the next candle. It's easy to see the bars on the histogram chart: purple bars represent the candle variation. When one candle crosses over the signal line, the chart plots a yellow circle; if the second bar also crosses over the signal, a green circle is plotted and the operation starts at the open of the next candle.
This strategy can be used on many stocks and other charts. You will often want a small timeframe, maybe 1 or 5 minutes, because the gain should be planned for around the middle of the second candle. Study the stocks you intend to use.
Stocks above 100 dollars aren't great, and neither are extremely volatile markets; but stocks with consistent development are very interesting. Look for markets where you can target maybe 0.5% or 1%.
For now, I developed it for the Brazilian Real x American Dollar, on 15 minutes.
If you use a smaller timeframe, the results can be better.
Over this period it made more than 500 trades with a small lot of contracts - not a big percent profitable, but a small profit on each operation; maybe you can aim for more. To present a realistic trading system, I added a spread to give a correct view of the results.
Each stock, index, or crypto has its own specific configuration.
My suggestion for new stocks:
Choose a stock and, using the setup, search for settings with over 70% profitability (percent profitable), using a 1% gain and a loss between 1-2%,
as in the example (WDO).
By default, I prepared it for a Brazilian index:
6 - signal (6% is the variation of a candle relative to the last candle)
10000 - multiplier (important for configuring differences between a stock and an index)
gain 3 (set this proportion according to your target; as I said, 1% can be good)
loss 8 (set this proportion according to your bankroll management; as I said, maybe 2%; you need to evaluate)
To maximize operations, I use the 1 or 5 minute chart. Larger timeframes give slower results
(but nothing stops you from using a 1 hour or 1 day chart).
I made this script from zero. Maybe the code isn't so organized, but it's very easy to understand. If you have any doubts, leave a comment.
I hope it helps you.
occ3
aka weighted fair price
The ultimate price source for all your stuff, unless you go completely nuts.
The ultimate way to build line charts & do pattern trading, unless you go completely nuts.
Why occ3?
You need a one-point estimate for every bar, a typical price of every bar aye? But then you see that every bar has a different distribution of prices. You can drop a stat test on every bar and pick median, mean, or whatever. But that's still prone to error (imagine borderline cases).
Instead, you can transform the task into a geometric one and say, "I wanna find the center of mass of all dem ticks within a particular interval (a day, a week, a century)". But lol ofc you won't do it, so let's estimate it:
1) a straight line from Open to Close more/less estimates a regression line if you woulda dropped regression on all the ticks within a given interval;
2) centroid always lies on regression line, so it's always in between the endpoints of regression line. So that's why (open + close) / 2;
3) Then, you remember that sequence matters, + generally the volume is higher near the close, so...;
4) Voila, (open + close + close) / 3
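In Pine that estimate is a one-liner (a minimal sketch):
//@version=5
indicator("occ3 sketch", overlay = true)
occ3 = (open + close + close) / 3
plot(occ3, "occ3", color = color.orange)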
Why "fair" price?
Take a daily bar:
1) High & low were the best prices to sell & buy;
2) Opening & closing auctions had acceptable prices, in exchange for the biggest potential to transact serious volume;
3) "Fair" price, logically, is somewhere in between the acceptable prices;
4) Market is fractal => the same principles propagate everywhere;
5) No, POCs and VPOCs don't make much sense as fair prices.
Nothing else to say, really advise to use it as a line chart if you trade price patterns.
Hulk Grid Algorithm - The Quant Science
Grid-based intraday algorithm that works 50% in trend following and 50% in swing trading. Orders are executed on a grid of 10 levels. The grid levels are dynamic and calculated from the difference between the previous day's open and close. The algorithm makes only long trades, based on the following logic:
1. The previous day's daily close is analyzed; the first condition is met if the previous day was bullish, closing higher than the opening.
2. A number 'x' of bars must pass before placing market orders.
3. The range, as the difference between the previous day's close and open, must be greater than 'x'.
If these three conditions are met then the algorithm will proceed to place long orders. On a total of 10 grid levels, up to five trades are executed per day.
If the current close is above level 1 of the grid (previous day's close) then trend following trading will take place, working on the upper 5 levels. In this case each order is placed starting at level 1 and closed at each level above.
If the current close is below level 1 of the grid (previous day's open) then swing trading will be carried out, working on the lower 5 levels. In this case each order is placed starting at level 2 and closed at the upper level.
If at the time of order execution the price is above or below the stop loss and take profit levels, the algorithm will cancel the orders and prevent trading.
All orders are closed exclusively for two reasons:
1. If the stop loss or take profit level is confirmed.
2. If the daily session is ended.
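A hedged sketch of how such a daily grid can be derived from the previous day's open and close - the level spacing below is an assumption for illustration, not the published algorithm's exact scheme:
//@version=5
indicator("Daily grid sketch", overlay = true)
prevOpen  = request.security(syminfo.tickerid, "D", open[1], lookahead = barmerge.lookahead_on)
prevClose = request.security(syminfo.tickerid, "D", close[1], lookahead = barmerge.lookahead_on)
dayRange  = prevClose - prevOpen      // grid anchor and size come from yesterday's body
bullish   = dayRange > 0              // condition 1: previous day closed higher
step      = math.abs(dayRange) / 5    // assumed spacing: 5 levels per side
plot(bullish ? prevClose : na, "Level 1", color = color.gray, style = plot.style_linebr)
plot(bullish ? prevClose + step : na, "Upper 1", color = color.green, style = plot.style_linebr)
plot(bullish ? prevClose - step : na, "Lower 1", color = color.red, style = plot.style_linebr)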
UI Interface
You can adjust:
1. Backtesting period
2. 'x' number of bars before placing orders at the market (remember to always add 2 to the number you enter in the user interface: if you enter 2, execution will occur at the market opening after the fourth bar).
3. Intercepted price range between close and open of the previous day, avoiding trading on days when the range is too low.
4. Stop loss, the level calculated from the last lower grid; if the market breaks this level, the grid is destroyed and all open positions are closed.
5. Take profit, the level calculated from the last upper grid; if the market breaks this level, the grid is destroyed and all open positions are closed.
The backtesting you see in the example was generated on:
BINANCE:BTCUSDT
Timeframe 15 min
Stop loss 2%
Take profit 2%
Minimum bars 3
Size grid range 500
This algorithm can be used only on intraday timeframes.
Forex Strength Indicator
This indicator will display the strength of 8 currencies: EUR, AUD, NZD, JPY, USD, GBP, CHF, and CAD. Each line represents one currency. Alongside that, Fibonacci levels will be plotted based on a standard deviation from linear regression, with customizable lengths.
For more steady Fibonacci levels, use higher lengths for both Standard Deviations and Linear Regression. All currency lines come from moving averages with options like EMA, SMA, WMA, RMA, HMA, SWMA, and Linear Regression.
When lines of the active pair are far from each other, it means higher divergence in those currency strengths among the other pairs. The closer the lines are, the lower the divergence.
You can use the Fibonacci levels as points for the reversal or end of the current trend. Line crosses can be used as a parameter for a more accurate signal of the next movement.
All 28 pairs are loaded from the same timeframe, and the same moving average is used for all of them.
Alerts from the line crossing are available.
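A hedged sketch of one way to build a single currency's strength line - averaging its rate of change across several pairs (the published indicator uses 28 pairs and selectable moving averages; the FX symbols and ROC choice here are assumptions):
//@version=5
indicator("EUR strength sketch")
len = input.int(20, "ROC length")
pairRoc(simple string sym) =>
    request.security(sym, timeframe.period, ta.roc(close, len))
// EUR is the base currency in all four pairs, so rising ROC = EUR strength
eurStrength = (pairRoc("FX:EURUSD") + pairRoc("FX:EURGBP") + pairRoc("FX:EURJPY") + pairRoc("FX:EURCHF")) / 4
plot(eurStrength, "EUR strength", color = color.blue)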
DataCorrelation
Library "DataCorrelation"
Implementation of functions related to data correlation calculations. Formulas have been transformed in such a way that we avoid running loops and instead make use of time series to gradually build the data we need to perform calculation. This allows the calculations to run on unbound series, and/or higher number of samples
🎲 Simplifying Covariance
Original Formula
//For Sample
Covₓᵧ = ∑ ((xᵢ-x̄)(yᵢ-ȳ)) / (n-1)
//For Population
Covₓᵧ = ∑ ((xᵢ-x̄)(yᵢ-ȳ)) / n
Now, if we look at the numerator, this can be simplified as follows
∑ ((xᵢ-x̄)(yᵢ-ȳ))
=> (x₁-x̄)(y₁-ȳ) + (x₂-x̄)(y₂-ȳ) + (x₃-x̄)(y₃-ȳ) ... + (xₙ-x̄)(yₙ-ȳ)
=> (x₁y₁ + x̄ȳ - x₁ȳ - y₁x̄) + (x₂y₂ + x̄ȳ - x₂ȳ - y₂x̄) + (x₃y₃ + x̄ȳ - x₃ȳ - y₃x̄) ... + (xₙyₙ + x̄ȳ - xₙȳ - yₙx̄)
=> (x₁y₁ + x₂y₂ + x₃y₃ ... + xₙyₙ) + (x̄ȳ + x̄ȳ + x̄ȳ ... + x̄ȳ) - (x₁ȳ + x₂ȳ + x₃ȳ ... xₙȳ) - (y₁x̄ + y₂x̄ + y₃x̄ + yₙx̄)
=> ∑xᵢyᵢ + n(x̄ȳ) - ȳ∑xᵢ - x̄∑yᵢ
So, the overall formula can be simplified for use in Pine as
//For Sample
Covₓᵧ = (∑xᵢyᵢ + n(x̄ȳ) - ȳ∑xᵢ - x̄∑yᵢ) / (n-1)
//For Population
Covₓᵧ = (∑xᵢyᵢ + n(x̄ȳ) - ȳ∑xᵢ - x̄∑yᵢ) / n
🎲 Simplifying Standard Deviation
Original Formula
//For Sample
σ = √(∑(xᵢ-x̄)² / (n-1))
//For Population
σ = √(∑(xᵢ-x̄)² / n)
Now, if we look at the numerator within the square root
∑(xᵢ-x̄)²
=> (x₁² + x̄² - 2x₁x̄) + (x₂² + x̄² - 2x₂x̄) + (x₃² + x̄² - 2x₃x̄) ... + (xₙ² + x̄² - 2xₙx̄)
=> (x₁² + x₂² + x₃² ... + xₙ²) + (x̄² + x̄² + x̄² ... + x̄²) - (2x₁x̄ + 2x₂x̄ + 2x₃x̄ ... + 2xₙx̄)
=> ∑xᵢ² + nx̄² - 2x̄∑xᵢ
=> ∑xᵢ² + x̄(nx̄ - 2∑xᵢ)
So, the overall formula can be simplified for use in Pine as
//For Sample
σ = √((∑xᵢ² + x̄(nx̄ - 2∑xᵢ)) / (n-1))
//For Population
σ = √((∑xᵢ² + x̄(nx̄ - 2∑xᵢ)) / n)
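A minimal Pine sketch of why this simplification matters - the population form computed on an unbound series with cumulative sums and no loops (illustrative only; the library adds selective sampling on top):
//@version=5
indicator("Loop-free stddev sketch")
n     = bar_index + 1           // number of samples so far (unbound)
sumX  = ta.cum(close)           // ∑xᵢ
sumX2 = ta.cum(close * close)   // ∑xᵢ²
mean  = sumX / n
// Population form: σ = √((∑xᵢ² + x̄(nx̄ - 2∑xᵢ)) / n)
sigma = math.sqrt((sumX2 + mean * (n * mean - 2 * sumX)) / n)
plot(sigma, "Population stddev")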
🎲 Using BinaryInsertionSort library
Chatterjee Correlation and Spearman Correlation functions make use of the BinaryInsertionSort library to speed up sorting. The library inserts values directly into sorted order, greatly reducing the sorting load and allowing the functions to work on larger sample sizes.
🎲 Function Documentation
chatterjeeCorrelation(x, y, sampleSize, plotSize)
Calculates chatterjee correlation between two series. Formula is - ξnₓᵧ = 1 - (3 * ∑ |rᵢ₊₁ - rᵢ|)/ (n²-1)
Parameters:
x : First series for which correlation need to be calculated
y : Second series for which correlation need to be calculated
sampleSize : number of samples to be considered for calculation of correlation. Default is 20000
plotSize : How many historical values need to be plotted on chart.
Returns: float correlation - Chatterjee correlation value if falls within plotSize, else returns na
spearmanCorrelation(x, y, sampleSize, plotSize)
Calculates spearman correlation between two series. Formula is - ρ = 1 - (6∑dᵢ²/n(n²-1))
Parameters:
x : First series for which correlation need to be calculated
y : Second series for which correlation need to be calculated
sampleSize : number of samples to be considered for calculation of correlation. Default is 20000
plotSize : How many historical values need to be plotted on chart.
Returns: float correlation - Spearman correlation value if falls within plotSize, else returns na
covariance(x, y, include, biased)
Calculates covariance between two series of unbound length. Formula is Covₓᵧ = ∑ ((xᵢ-x̄)(yᵢ-ȳ)) / (n-1) for sample and Covₓᵧ = ∑ ((xᵢ-x̄)(yᵢ-ȳ)) / n for population
Parameters:
x : First series for which covariance need to be calculated
y : Second series for which covariance need to be calculated
include : boolean flag used for selectively including sample
biased : boolean flag representing population covariance instead of sample covariance
Returns: float covariance - covariance of selective samples of two series x, y
stddev(x, include, biased)
Calculates Standard Deviation of a series. Formula is σ = √( ∑(xᵢ-x̄)² / (n-1) ) for sample and σ = √( ∑(xᵢ-x̄)² / n ) for population
Parameters:
x : Series for which Standard Deviation need to be calculated
include : boolean flag used for selectively including sample
biased : boolean flag representing population standard deviation instead of sample standard deviation
Returns: float stddev - standard deviation of selective samples of series x
correlation(x, y, include)
Calculates Pearson correlation between two series of unbound length. Formula is r = Covₓᵧ / σₓσᵧ
Parameters:
x : First series for which correlation need to be calculated
y : Second series for which correlation need to be calculated
include : boolean flag used for selectively including sample
Returns: float correlation - correlation between selective samples of two series x, y
JeeSauceScripts
Library "JeeSauceScripts"
getupdnvol()
GetTotalUpVolume(upvolume)
Parameters:
upvolume
GetTotalDnVolume(downvolume)
Parameters:
downvolume
GetDelta(totalupvolume, totaldownvolume)
Parameters:
totalupvolume
totaldownvolume
GetMaxUpVolume(upvolume)
Parameters:
upvolume
GetMaxDnVolume(downvolume)
Parameters:
downvolume
Getcvd()
Getcvdopen(cvd)
Parameters:
cvd
Getcvdhigh(cvd, maxvolumeup)
Parameters:
cvd
maxvolumeup
Getcvdlow(cvd, maxvolumedown)
Parameters:
cvd
maxvolumedown
Getcvdclose(cvd, delta)
Parameters:
cvd
delta
CombineData(data1, data2, data3, data4, data5, data6)
Parameters:
data1
data2
data3
data4
data5
data6
FindData(data, find)
Parameters:
data
find