TF Segmented Polynomial Regression [LuxAlgo]
This indicator displays polynomial regression channels fitted using data within a user-selected time interval.
The model is fitted using the same method described in our previous script:
Settings
Degree: Degree of the fitted polynomial
Width: Multiplicative factor of the model RMSE. Controls the width of the polynomial regression's channels
Timeframe: Fits the polynomial regression using data within the selected timeframe interval
Show fit for new bars: If selected, will fit the regression model for newly generated bars, else the previous fitted value is displayed.
Src: Input source
Usage
Segmented (or piecewise) models yield multiple fits by first partitioning the data into multiple intervals based on specific partitioning conditions. In this script, the partitioning condition is a change of the user-selected timeframe.
Segmented models can be particularly pertinent for market prices, which often describe a series of local trends.
Segmented polynomial regressions can describe the nature of underlying trends in the price from their fit, such as whether an underlying trend is more linear (trending) or constant (ranging), and whether a trend is monotonic.
The above chart shows a monthly partitioning on SPX 15m, using a polynomial regression of degree 3. Channel extremities allow highlighting of local tops/bottoms.
For real-time applications, users can choose to fit the current model to incoming price data using the Show fit for new bars setting.
Details
The script does not make use of line.new to display the segmented regressions, which allows showing a higher number of historical fits. Each channel extremity, as well as the model fit, is displayed from the plot function, so users can more easily set alerts on them.
It is important to note that achieving this requires accessing future price data; as such, this script is subject to lookahead bias, and historical results differ from the results one could have obtained in real time.
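To make the mechanics concrete, here is a minimal Python sketch (not the script's Pine code) of the segmented fit described above: one polynomial is fitted per timeframe segment and the channel extremities are placed a multiple of the segment RMSE away from the fit. The function name and the way segment boundaries are passed in are illustrative assumptions.

```python
import numpy as np

def segmented_poly_channels(prices, boundaries, degree=3, width=2.0):
    """Fit one polynomial per segment and return (fit, upper, lower) arrays.

    prices     : 1-D array of source values
    boundaries : indices where a new segment (e.g. a new month) starts
    degree     : polynomial degree, as in the 'Degree' setting
    width      : RMSE multiplier, as in the 'Width' setting
    """
    fit = np.full(len(prices), np.nan)
    upper = np.full(len(prices), np.nan)
    lower = np.full(len(prices), np.nan)
    edges = list(boundaries) + [len(prices)]
    for start, end in zip(edges[:-1], edges[1:]):
        if end <= start:
            continue
        y = np.asarray(prices[start:end], dtype=float)
        x = np.arange(len(y), dtype=float)
        coefs = np.polyfit(x, y, deg=min(degree, len(y) - 1))
        yhat = np.polyval(coefs, x)
        rmse = np.sqrt(np.mean((y - yhat) ** 2))
        fit[start:end] = yhat
        upper[start:end] = yhat + width * rmse   # channel extremities
        lower[start:end] = yhat - width * rmse
    return fit, upper, lower
```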
Adaptive-LB, Jurik-Filtered, Triangular MA w/ Price Zones [Loxx]
Adaptive-LB, Jurik-Filtered, Triangular MA w/ Price Zones is a moving average indicator that takes as its input an adaptive lookback period. This is an experimental indicator and I wouldn't use this for trading. It's more to explore different adaptive calculation methods and their applications to moving averages and channels. Unlike the traditional Triangular Moving Average, this one uses Jurik smoothing.
What is the Triangular Moving Average
The Triangular Moving Average is basically a double-smoothed Simple Moving Average that gives more weight to the middle section of the data interval. The TMA has a significant lag to current prices and is not well-suited to fast moving markets. TMA = SUM(SMA values) / N, where N = the number of periods.
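As a rough illustration of that formula, here is a small Python sketch of one common TMA formulation (an SMA of an SMA); it is not this indicator's Jurik-smoothed variant, and the helper names are illustrative.

```python
import numpy as np

def sma(values, n):
    """Simple moving average (NaN until n values are available)."""
    out = np.full(len(values), np.nan)
    for i in range(n - 1, len(values)):
        out[i] = np.mean(values[i - n + 1:i + 1])
    return out

def tma(values, n):
    """Triangular MA: an SMA of an SMA, weighting the middle of the window."""
    return sma(sma(values, n), n)
```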
What is Jurik Volty used in the Jurik Filter?
One of the lesser-known qualities of Jurik smoothing is that the Jurik smoothing process is adaptive. "Jurik Volty" (a sort of market volatility) is what makes Jurik smoothing adaptive. The Jurik Volty calculation can be used as both a standalone indicator and to smooth other indicators that you wish to make adaptive.
What is the Jurik Moving Average?
Have you noticed how moving averages add some lag (delay) to your signals? ... especially when price gaps up or down in a big move, and you are waiting for your moving average to catch up? Wait no more! JMA eliminates this problem forever and gives you the best of both worlds: low lag and smooth lines.
Included:
Bar coloring
Signals
Alerts
HDT Clouds
HDT Clouds combines custom clouds such as the 200EMA/MA cloud indicator to create high confluence bounce zones when combined with VWAP. The HDT indicator combines various clouds with the Volume Weighted Average Price indicator and Standard Deviations which allow users to identify areas on the chart where the stock may reverse.
On smaller time frames, like the 5/15/30-minute, the 200ema/ma cloud and VWAP (when sitting in the same relative area) create pockets of supply or demand.
In addition, the various moving average clouds, such as the 8/9ema cloud and the 34/50ema cloud, create areas of supply and demand depending on the overall trend. If the stock is trending very strongly to the upside, the 8/9ema can be used as a potential bounce area, whereas if the stock is trending but not quite as strongly, the stock may have demand at the 34-50ema, where it could see a potential bounce to the upside. What sets this indicator apart from other moving average clouds is the incorporation of VWAP/Standard Deviation and the combining of a 200EMA/MA indicator, which creates a strong pocket of demand even on lower time frames such as the 5- or 15-minute.
SMA VWAP BANDS [qrsq]
Description
This indicator is used to find support and resistance utilizing both SMA and VWAP. It can be used on lower and higher time frames to understand where price is likely to reject or bounce.
How it works
Rather than using the usual VWAP calculation, this script first smooths the volume with an SMA and then smooths the products of the high, low and close prices with volume individually. These values are then divided by the smoothed volume to find an individual VWAP for each source. The standard deviations of these are calculated, resulting in an upper, lower and middle band. It is essentially VWAP bands with some smoothed calculations in the middle.
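The description above leaves some room for interpretation, so here is one plausible Python sketch of the idea (smoothed volume, per-source smoothed VWAPs, then standard-deviation bands); column and parameter names are assumptions, not the script's actual inputs.

```python
import pandas as pd

def sma_vwap_bands(df, length=20, mult=2.0):
    """One plausible reading of the smoothed-VWAP band idea described above.

    df needs 'high', 'low', 'close' and 'volume' columns; 'length' and 'mult'
    are illustrative parameter names.
    """
    vol_s = df['volume'].rolling(length).mean()                        # SMA-smoothed volume
    vwap_h = (df['high'] * df['volume']).rolling(length).mean() / vol_s
    vwap_l = (df['low'] * df['volume']).rolling(length).mean() / vol_s
    vwap_c = (df['close'] * df['volume']).rolling(length).mean() / vol_s
    middle = vwap_c
    upper = vwap_h + mult * vwap_h.rolling(length).std()               # std-based band above
    lower = vwap_l - mult * vwap_l.rolling(length).std()               # std-based band below
    return upper, middle, lower
```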
How to use it
I like to use the bands for LTF scalping as well as HTF swings.
For scalping:
I tend to use either the 5m or 15m TF
I then set the indicator's TF to 1m
I will take a scalp based on the bands' confluence with other PA methods, if price is being either supported or rejected.
For swings:
I tend to use a variety of TFs, including: 30m, 1H, 4H, D
I then set the indicator's TF to "Chart"
I will take a swing based on the bands' confluence with other PA methods, if price is being either supported or rejected.
I also tend to use them on perpetual contracts as the volume seems to be more consistent and hence results in more accurate support and resistance.
Gate Signal by Market yogi
This indicator is made by Nischay Rana (Market Yogi).
How to use this Indicator
This is a simple group of 8 moving averages, which can be configured in various ways according to your trading requirements:
1. Moving average ribbon
2. Moving average channel
3. Moving average gate signal
4. A bonus built-in Bollinger Bands indicator
Logic:
Price has a tendency to return toward its moving averages. The logic behind this indicator is to use the contraction and expansion of moving averages to find the best entry and exit points.
This nature of price action is used to capture the big move after the convergence of all moving averages.
CAUTION: Do not blindly trade the gates, as a gate has a tendency to break out on either side. Use this indicator in confluence with price action and other technical analysis to capture bigger moves.
The higher the gate width, the more gates are found; the lower the gate width, the fewer gates are found (i.e., a tighter squeeze of all the moving averages).
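For illustration, here is a small Python sketch of a gate/convergence check in the spirit described above: a gate is flagged when all the moving averages squeeze within a chosen width. The MA lengths and the percentage width are illustrative assumptions, not the indicator's settings.

```python
import pandas as pd

def ma_gate(close, lengths=(5, 10, 20, 30, 40, 50, 100, 200), gate_width_pct=1.0):
    """Flag bars where a group of moving averages has converged into a 'gate'.

    close is a pandas Series; lengths and gate_width_pct are illustrative.
    """
    mas = pd.DataFrame({n: close.rolling(n).mean() for n in lengths})
    spread = (mas.max(axis=1) - mas.min(axis=1)) / close * 100   # MA spread as % of price
    return spread < gate_width_pct                               # True = tight squeeze (gate)
```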
"ENJOY HAPPY TRADING.."
Truly Yours Market Yogi
Relative Andean Scalping
This is an experimental signal-providing script for scalpers that uses two open-source indicators.
The first one, the Andean Oscillator by @alexgrover, provides the signals. We use it to create long signals when the bull line crosses over the signal line while being above the bear line. The reverse is true for shorts, where the bear line crosses over the signal line while being above the bull line.
The second one, the Relative Bandwidth Filter, a great idea by @HeWhoMustNotBeNamed, is used for filtering out low-volatility areas. We use it to filter signals and create them only when the Relative Bandwidth line is below the middle line.
The default values for both indicators are changed a bit; in particular, linreg values are used to create relatively better signals. These can be changed in the settings. Please be aware that I did not do extensive testing with this indicator in different market conditions, so it should be used with caution.
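As a sketch of how the two indicators are combined, the following Python snippet encodes the entry rules described above, taking the already-computed oscillator and filter series as inputs; computing those series is left to the respective open-source indicators, and the function names here are illustrative.

```python
import pandas as pd

def crossover(a, b):
    """True on bars where series a crosses above series b."""
    return (a > b) & (a.shift(1) <= b.shift(1))

def andean_rbw_signals(bull, bear, signal, rel_bw, rel_bw_mid):
    """Combine the Andean Oscillator lines with the Relative Bandwidth filter.

    bull, bear and signal are the oscillator's component series; rel_bw and
    rel_bw_mid are the Relative Bandwidth line and its middle band.
    """
    vol_ok = rel_bw < rel_bw_mid  # below the middle band = comparatively high volatility
    longs = crossover(bull, signal) & (bull > bear) & vol_ok
    shorts = crossover(bear, signal) & (bear > bull) & vol_ok
    return longs, shorts
```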
Relative Bandwidth Filter
This is a very simple script which can be used as a measure to define your trading zones based on volatility.
Concept
This script tries to identify areas of low and high volatility based on a comparison between a Bandwidth of higher length and an ATR of lower length.
Relative Bandwidth = Bandwidth / ATR
Bandwidth can be based on either a Bollinger Band, Keltner Channel or Donchian Channel. The bandwidth length should ideally be higher.
ATR is calculated using the built-in ATR method, and the ATR length should ideally be lower than that used for calculating the Bandwidth.
Once we have the Relative Bandwidth, the next step is to apply a Bollinger Band to it to measure how relatively high/low this value is.
Overall - If relative bandwidth is higher, then volatility is comparatively low. If relative bandwidth is lower, then volatility is comparatively high.
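A minimal Python sketch of the concept, assuming Bollinger Bands are chosen for the bandwidth and a simple rolling mean stands in for the built-in ATR smoothing; lengths and multiplier are illustrative.

```python
import pandas as pd

def relative_bandwidth(close, high, low, bb_len=100, bb_mult=2.0, atr_len=14):
    """Relative Bandwidth concept: band width divided by ATR."""
    basis = close.rolling(bb_len).mean()
    dev = bb_mult * close.rolling(bb_len).std()
    bandwidth = 2.0 * dev                                  # upper band minus lower band
    tr = pd.concat([high - low,
                    (high - close.shift(1)).abs(),
                    (low - close.shift(1)).abs()], axis=1).max(axis=1)
    atr = tr.rolling(atr_len).mean()                       # simplified ATR smoothing
    return bandwidth / atr                                 # higher value -> comparatively low volatility
```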
Usage
This can be used with your own strategy to filter out your non-trading zones based on volatility. The script plots a variable called "Signal", which is not shown on the chart pane but is available in the data window. It can be used as an external input in another script to apply your own logic.
Signal values can be
1 : Allow only Long
-1 : Allow only short
0 : Do not allow any trades
2 : Allow both Long and Short
R-squared Adaptive T3 w/ DSL [Loxx]
R-squared Adaptive T3 w/ DSL is the following T3 indicator but with Discontinued Signal Lines added to reduce noise and thereby increase signal accuracy. This adaptation makes this indicator lower TF scalp friendly.
What is R-squared Adaptive?
One tool available in forecasting the trendiness of the breakout is the coefficient of determination ( R-squared ), a statistical measurement.
The R-squared indicates linear strength between the security's price (the Y - axis) and time (the X - axis). The R-squared is the percentage of squared error that the linear regression can eliminate if it were used as the predictor instead of the mean value. If the R-squared were 0.99, then the linear regression would eliminate 99% of the error for prediction versus predicting closing prices using a simple moving average .
R-squared is used here to derive a T3 factor used to modify price before passing price through a six-pole non-linear Kalman filter.
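For reference, here is a small Python sketch of a rolling R-squared of price against time, as described above; the lookback length is illustrative and the T3-factor mapping used by the indicator is not shown.

```python
import numpy as np

def rolling_r_squared(prices, length=32):
    """Rolling R-squared of a linear regression of price (Y) on time (X)."""
    prices = np.asarray(prices, dtype=float)
    out = np.full(len(prices), np.nan)
    x = np.arange(length, dtype=float)
    for i in range(length - 1, len(prices)):
        y = prices[i - length + 1:i + 1]
        r = np.corrcoef(x, y)[0, 1]
        out[i] = r * r                      # R-squared = squared correlation with time
    return out
```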
What is the T3 moving average?
Better Moving Averages Tim Tillson
November 1, 1998
Tim Tillson is a software project manager at Hewlett-Packard, with degrees in Mathematics and Computer Science. He has privately traded options and equities for 15 years.
Introduction
"Digital filtering includes the process of smoothing, predicting, differentiating, integrating, separation of signals, and removal of noise from a signal. Thus many people who do such things are actually using digital filters without realizing that they are; being unacquainted with the theory, they neither understand what they have done nor the possibilities of what they might have done."
This quote from R. W. Hamming applies to the vast majority of indicators in technical analysis . Moving averages, be they simple, weighted, or exponential, are lowpass filters; low frequency components in the signal pass through with little attenuation, while high frequencies are severely reduced.
"Oscillator" type indicators (such as MACD , Momentum, Relative Strength Index ) are another type of digital filter called a differentiator.
Tushar Chande has observed that many popular oscillators are highly correlated, which is sensible because they are trying to measure the rate of change of the underlying time series, i.e., are trying to be the first and second derivatives we all learned about in Calculus.
We use moving averages (lowpass filters) in technical analysis to remove the random noise from a time series, to discern the underlying trend or to determine prices at which we will take action. A perfect moving average would have two attributes:
It would be smooth, not sensitive to random noise in the underlying time series. Another way of saying this is that its derivative would not spuriously alternate between positive and negative values.
It would not lag behind the time series it is computed from. Lag, of course, produces late buy or sell signals that kill profits.
The only way one can compute a perfect moving average is to have knowledge of the future, and if we had that, we would buy one lottery ticket a week rather than trade!
Having said this, we can still improve on the conventional simple, weighted, or exponential moving averages. Here's how:
Two Interesting Moving Averages
We will examine two benchmark moving averages based on Linear Regression analysis.
In both cases, a Linear Regression line of length n is fitted to price data.
I call the first moving average ILRS, which stands for Integral of Linear Regression Slope. One simply integrates the slope of a linear regression line as it is successively fitted in a moving window of length n across the data, with the constant of integration being a simple moving average of the first n points. Put another way, the derivative of ILRS is the linear regression slope. Note that ILRS is not the same as a SMA ( simple moving average ) of length n, which is actually the midpoint of the linear regression line as it moves across the data.
We can measure the lag of moving averages with respect to a linear trend by computing how they behave when the input is a line with unit slope. Both SMA (n) and ILRS(n) have lag of n/2, but ILRS is much smoother than SMA .
Our second benchmark moving average is well known, called EPMA or End Point Moving Average. It is the endpoint of the linear regression line of length n as it is fitted across the data. EPMA hugs the data more closely than a simple or exponential moving average of the same length. The price we pay for this is that it is much noisier (less smooth) than ILRS, and it also has the annoying property that it overshoots the data when linear trends are present.
However, EPMA has a lag of 0 with respect to linear input! This makes sense because a linear regression line will fit linear input perfectly, and the endpoint of the LR line will be on the input line.
These two moving averages frame the tradeoffs that we are facing. On one extreme we have ILRS, which is very smooth and has considerable phase lag. EPMA has 0 phase lag, but is too noisy and overshoots. We would like to construct a better moving average which is as smooth as ILRS, but runs closer to where EPMA lies, without the overshoot.
An easy way to attempt this is to split the difference, i.e. use (ILRS(n)+EPMA(n))/2. This will give us a moving average (call it IE/2) which runs in between the two, has phase lag of n/4 but still inherits considerable noise from EPMA. IE/2 is inspirational, however. Can we build something that is comparable, but smoother? Figure 1 shows ILRS, EPMA, and IE/2.
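Here is a short Python sketch of the two benchmark averages and their IE/2 blend, following the definitions above; it is a naive window-by-window implementation for clarity, not optimized code.

```python
import numpy as np

def linreg_slope_endpoint(y):
    """Slope and endpoint value of a least-squares line fitted to window y."""
    x = np.arange(len(y), dtype=float)
    slope, intercept = np.polyfit(x, y, 1)
    return slope, intercept + slope * (len(y) - 1)

def ilrs_epma_ie2(prices, n=14):
    """ILRS, EPMA and their average IE/2, as described by Tillson."""
    prices = np.asarray(prices, dtype=float)
    ilrs = np.full(len(prices), np.nan)
    epma = np.full(len(prices), np.nan)
    ilrs_val = np.mean(prices[:n])            # constant of integration: SMA of the first n points
    for i in range(n - 1, len(prices)):
        slope, endpoint = linreg_slope_endpoint(prices[i - n + 1:i + 1])
        ilrs_val += slope                     # integrate the linear-regression slope
        ilrs[i] = ilrs_val
        epma[i] = endpoint                    # endpoint of the regression line
    ie2 = (ilrs + epma) / 2.0                 # split the difference: IE/2
    return ilrs, epma, ie2
```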
Filter Techniques
Any thoughtful student of filter theory (or resolute experimenter) will have noticed that you can improve the smoothness of a filter by running it through itself multiple times, at the cost of increasing phase lag.
There is a complementary technique (called twicing by J.W. Tukey) which can be used to improve phase lag. If L stands for the operation of running data through a low pass filter, then twicing can be described by:
L' = L(time series) + L(time series - L(time series))
That is, we add a moving average of the difference between the input and the moving average to the moving average. This is algebraically equivalent to:
2L-L(L)
This is the Double Exponential Moving Average or DEMA , popularized by Patrick Mulloy in TASAC (January/February 1994).
In our taxonomy, DEMA has some phase lag (although it exponentially approaches 0) and is somewhat noisy, comparable to IE /2 indicator.
We will use these two techniques to construct our better moving average, after we explore the first one a little more closely.
Fixing Overshoot
An n-day EMA has smoothing constant alpha=2/(n+1) and a lag of (n-1)/2.
Thus EMA (3) has lag 1, and EMA (11) has lag 5. Figure 2 shows that, if I am willing to incur 5 days of lag, I get a smoother moving average if I run EMA (3) through itself 5 times than if I just take EMA (11) once.
This suggests that if EPMA and DEMA have 0 or low lag, why not run fast versions (e.g. DEMA(3)) through themselves many times to achieve a smooth result? The problem is that multiple runs through these filters increase their tendency to overshoot the data, giving an unusable result. This is because the amplitude response of DEMA and EPMA is greater than 1 at certain frequencies, giving a gain of much greater than 1 at these frequencies when run through themselves multiple times. Figure 3 shows DEMA(7) and EPMA(7) run through themselves 3 times. DEMA^3 has serious overshoot, and EPMA^3 is terrible.
The solution to the overshoot problem is to recall what we are doing with twicing:
DEMA (n) = EMA (n) + EMA (time series - EMA (n))
The second term is adding, in effect, a smooth version of the derivative to the EMA to achieve DEMA . The derivative term determines how hot the moving average's response to linear trends will be. We need to simply turn down the volume to achieve our basic building block:
EMA (n) + EMA (time series - EMA (n))*.7;
This is algebraically the same as:
EMA (n)*1.7-EMA( EMA (n))*.7;
I have chosen .7 as my volume factor, but the general formula (which I call "Generalized Dema") is:
GD (n,v) = EMA (n)*(1+v)-EMA( EMA (n))*v,
Where v ranges between 0 and 1. When v=0, GD is just an EMA , and when v=1, GD is DEMA . In between, GD is a cooler DEMA . By using a value for v less than 1 (I like .7), we cure the multiple DEMA overshoot problem, at the cost of accepting some additional phase delay. Now we can run GD through itself multiple times to define a new, smoother moving average T3 that does not overshoot the data:
T3(n) = GD ( GD ( GD (n)))
In filter theory parlance, T3 is a six-pole non-linear Kalman filter. Kalman filters are ones which use the error (in this case, time series - EMA(n)) to correct themselves. In Technical Analysis, these are called Adaptive Moving Averages; they track the time series more aggressively when it is making large moves.
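The GD and T3 definitions above translate directly into code. Below is a minimal Python sketch using a basic recursive EMA; initialization details differ between platforms, so treat it as illustrative.

```python
import numpy as np

def ema(values, n):
    """Exponential moving average with alpha = 2 / (n + 1)."""
    alpha = 2.0 / (n + 1.0)
    out = np.empty(len(values))
    out[0] = values[0]
    for i in range(1, len(values)):
        out[i] = alpha * values[i] + (1.0 - alpha) * out[i - 1]
    return out

def gd(values, n, v=0.7):
    """Generalized DEMA: GD(n, v) = EMA(n)*(1+v) - EMA(EMA(n))*v."""
    e = ema(values, n)
    return e * (1.0 + v) - ema(e, n) * v

def t3(values, n, v=0.7):
    """Tillson T3: GD run through itself three times."""
    return gd(gd(gd(values, n, v), n, v), n, v)
```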
Included:
Bar coloring
Signals
Alerts
EMA and FEMA Signal/DSL smoothing
Loxx's Expanded Source Types
PPO w/ Discontinued Signal Lines [Loxx]
PPO w/ Discontinued Signal Lines is a Percentage Price Oscillator with some upgrades. This indicator has 33 source types and 35+ moving average types as well as Discontinued Signal Lines and divergences. These additions reduce noise and increase hit rate.
What is the Percentage Price Oscillator?
The percentage price oscillator (PPO) is a technical momentum indicator that shows the relationship between two moving averages in percentage terms. The moving averages are a 26-period and 12-period exponential moving average (EMA).
The PPO is used to compare asset performance and volatility, spot divergence that could lead to price reversals, generate trade signals, and help confirm trend direction.
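As a reference for the plain PPO definition in the paragraph above (not the script's DSL, variety-MA or divergence additions), here is a short Python sketch; the 9-period signal line is a conventional assumption.

```python
import pandas as pd

def ppo(close, fast=12, slow=26, signal=9):
    """Percentage Price Oscillator: (fast EMA - slow EMA) / slow EMA * 100."""
    ema_fast = close.ewm(span=fast, adjust=False).mean()
    ema_slow = close.ewm(span=slow, adjust=False).mean()
    ppo_line = (ema_fast - ema_slow) / ema_slow * 100.0
    signal_line = ppo_line.ewm(span=signal, adjust=False).mean()
    return ppo_line, signal_line, ppo_line - signal_line   # line, signal, histogram
```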
Included:
Bar coloring
3 signal variations w/ alerts
Divergences w/ alerts
Loxx's Expanded Source Types
Loxx's Moving Averages
Buy/Sell Signal Template/Boilerplate Strategy [MyTradingCoder]
This script allows the user to connect an external indicator output/plot value to allow for a no-code solution to set up a simple buy/sell signal strategy. For those of you who do not know how to program, do not be intimidated, as this is a very easy setup process.
Maybe you want to buy when the 'RSI' value drops below '30' and then sell when the 'RSI' value climbs above '70', but you don't want to code it. You can do that with this indicator along with thousands of others found on the free TradingView indicator library.
Step #1:
Put the strategy on the chart.
Step #2:
Apply a secondary indicator onto the chart, such as an RSI .
Step #3:
Open the strategy settings and change the source to the RSI
Step #4:
Change the 'Signal Settings' to match when you want a buy, or a sell. For example, if you want to get a buy signal when the RSI crosses above 50, and get a sell when it crosses below 50, set the 'buy value' to 50, and the 'buy type' to greater than, then set the 'sell value' to 50 and the 'sell type' to less than. BOOM! It works :)
Buy/Sell Signal Template/Boilerplate [MyTradingCoder]
This script allows the user to connect an external indicator output/plot value to allow for a no-code solution to set up a simple buy/sell signal indicator. For those of you who do not know how to program, do not be intimidated, as this is a very easy setup process.
Maybe you want to buy when the 'RSI' value drops below '30' and then sell when the 'RSI' value climbs above '70', but you don't want to code it. You can do that with this indicator along with thousands of others found on the free TradingView indicator library.
Step #1:
Put the indicator on the chart.
Step #2:
Apply a secondary indicator onto the chart, such as an RSI.
Step #3:
Open the indicator settings and change the source to the RSI
Step #4:
Change the 'Signal Settings' to match when you want a buy, or a sell. For example, if you want to get a buy signal when the RSI crosses above 50, and get a sell when it crosses below 50, set the 'buy value' to 50, and the 'buy type' to greater than, then set the 'sell value' to 50 and the 'sell type' to less than. BOOM! It works :)
LabTrend SSL [Loxx]
LabTrend SSL is based on the popular ASCTrend. This version adds an SSL channel to show the macro trend.
What is Lab Trend?
LabTrend is a complete trading indicator developed by Igorad and is based on the popular ASCTrend. LabTrend is a market "direction" indicator. It shows trend direction by colors: when the market changes to or stays in an up-trend, the bars become or remain GREEN. When the market changes to or stays in a downtrend, the bars become or remain RED. When the market goes sideways or is not strong on either side, the bars become or remain WHITE. This version adds an SSL channel to show the core major trend while bar coloring changes on the micro trend.
Included:
Bar coloring
Signals
Alerts
Hitokiri rsi and bb
This indicator combines a standard 14-period RSI with an oversold limit of 32 and an overbought limit of 70 (these values can be changed from the Inputs and Style tabs of the indicator settings) with a Bollinger Band of standard deviation 2 and period 20. In addition, where the RSI is oversold (i.e. below 32) and price simultaneously falls below the lower Bollinger Band, an upward green triangle is drawn; where the RSI is overbought (i.e. above 70) and price simultaneously rises above the upper Bollinger Band, a downward red triangle is drawn. Alert conditions are built on these conditions. The source code is published openly and is written in clear language with explanations added, so that beginners can learn Pine.
Variety-Filtered, Squeeze Moving Averages [Loxx]
Variety-Filtered, Squeeze Moving Averages is a chop zone indicator that identifies when price is below a specific volatility threshold, calculated as the difference between a fast and slow moving average and filtered using an ATR- or pips-based threshold. This indicator can be used as both an entry and exit indicator. It identifies both chop zones and breakouts/breakdowns.
How to use
When the candles turn white and the threshold bands appear on the chart, this is indicative of low volatility
When price exits the threshold bands, price will usually explode up or down giving a long or short signal. This acts as a sort of squeeze momentum.
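To illustrate the squeeze idea described above, here is a rough Python sketch using simple moving averages and an ATR-based threshold; the MA type, lengths and multiplier are placeholders for the script's variety-MA and filter settings.

```python
import pandas as pd

def squeeze_zones(close, high, low, fast=10, slow=30, atr_len=14, atr_mult=1.0):
    """Flag low-volatility 'chop' bars and breakouts from the threshold bands."""
    fast_ma = close.rolling(fast).mean()
    slow_ma = close.rolling(slow).mean()
    tr = pd.concat([high - low,
                    (high - close.shift(1)).abs(),
                    (low - close.shift(1)).abs()], axis=1).max(axis=1)
    atr = tr.rolling(atr_len).mean()
    in_squeeze = (fast_ma - slow_ma).abs() < atr * atr_mult   # chop zone
    upper = slow_ma + atr * atr_mult                          # threshold bands while squeezed
    lower = slow_ma - atr * atr_mult
    long_break = in_squeeze.shift(1, fill_value=False) & (close > upper)
    short_break = in_squeeze.shift(1, fill_value=False) & (close < lower)
    return in_squeeze, long_break, short_break
```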
Included:
Bar coloring
Signals
Alerts, 4 types of alerts: Squeeze started, Squeeze ended, long, and short
Loxx's Expanded Source Types
35+ Loxx's Moving Averages
STD-Filtered, R-squared Adaptive T3 w/ Dynamic Zones [Loxx]
STD-Filtered, R-squared Adaptive T3 w/ Dynamic Zones is a standard-deviation-filtered R-squared Adaptive T3 moving average with dynamic zones.
What is the T3 moving average?
Better Moving Averages Tim Tillson
November 1, 1998
Tim Tillson is a software project manager at Hewlett-Packard, with degrees in Mathematics and Computer Science. He has privately traded options and equities for 15 years.
Introduction
"Digital filtering includes the process of smoothing, predicting, differentiating, integrating, separation of signals, and removal of noise from a signal. Thus many people who do such things are actually using digital filters without realizing that they are; being unacquainted with the theory, they neither understand what they have done nor the possibilities of what they might have done."
This quote from R. W. Hamming applies to the vast majority of indicators in technical analysis . Moving averages, be they simple, weighted, or exponential, are lowpass filters; low frequency components in the signal pass through with little attenuation, while high frequencies are severely reduced.
"Oscillator" type indicators (such as MACD , Momentum, Relative Strength Index ) are another type of digital filter called a differentiator.
Tushar Chande has observed that many popular oscillators are highly correlated, which is sensible because they are trying to measure the rate of change of the underlying time series, i.e., are trying to be the first and second derivatives we all learned about in Calculus.
We use moving averages (lowpass filters) in technical analysis to remove the random noise from a time series, to discern the underlying trend or to determine prices at which we will take action. A perfect moving average would have two attributes:
It would be smooth, not sensitive to random noise in the underlying time series. Another way of saying this is that its derivative would not spuriously alternate between positive and negative values.
It would not lag behind the time series it is computed from. Lag, of course, produces late buy or sell signals that kill profits.
The only way one can compute a perfect moving average is to have knowledge of the future, and if we had that, we would buy one lottery ticket a week rather than trade!
Having said this, we can still improve on the conventional simple, weighted, or exponential moving averages. Here's how:
Two Interesting Moving Averages
We will examine two benchmark moving averages based on Linear Regression analysis.
In both cases, a Linear Regression line of length n is fitted to price data.
I call the first moving average ILRS, which stands for Integral of Linear Regression Slope. One simply integrates the slope of a linear regression line as it is successively fitted in a moving window of length n across the data, with the constant of integration being a simple moving average of the first n points. Put another way, the derivative of ILRS is the linear regression slope. Note that ILRS is not the same as a SMA ( simple moving average ) of length n, which is actually the midpoint of the linear regression line as it moves across the data.
We can measure the lag of moving averages with respect to a linear trend by computing how they behave when the input is a line with unit slope. Both SMA (n) and ILRS(n) have lag of n/2, but ILRS is much smoother than SMA .
Our second benchmark moving average is well known, called EPMA or End Point Moving Average. It is the endpoint of the linear regression line of length n as it is fitted across the data. EPMA hugs the data more closely than a simple or exponential moving average of the same length. The price we pay for this is that it is much noisier (less smooth) than ILRS, and it also has the annoying property that it overshoots the data when linear trends are present.
However, EPMA has a lag of 0 with respect to linear input! This makes sense because a linear regression line will fit linear input perfectly, and the endpoint of the LR line will be on the input line.
These two moving averages frame the tradeoffs that we are facing. On one extreme we have ILRS, which is very smooth and has considerable phase lag. EPMA has 0 phase lag, but is too noisy and overshoots. We would like to construct a better moving average which is as smooth as ILRS, but runs closer to where EPMA lies, without the overshoot.
An easy way to attempt this is to split the difference, i.e. use (ILRS(n)+EPMA(n))/2. This will give us a moving average (call it IE/2) which runs in between the two, has phase lag of n/4 but still inherits considerable noise from EPMA. IE/2 is inspirational, however. Can we build something that is comparable, but smoother? Figure 1 shows ILRS, EPMA, and IE/2.
Filter Techniques
Any thoughtful student of filter theory (or resolute experimenter) will have noticed that you can improve the smoothness of a filter by running it through itself multiple times, at the cost of increasing phase lag.
There is a complementary technique (called twicing by J.W. Tukey) which can be used to improve phase lag. If L stands for the operation of running data through a low pass filter, then twicing can be described by:
L' = L(time series) + L(time series - L(time series))
That is, we add a moving average of the difference between the input and the moving average to the moving average. This is algebraically equivalent to:
2L-L(L)
This is the Double Exponential Moving Average or DEMA , popularized by Patrick Mulloy in TASAC (January/February 1994).
In our taxonomy, DEMA has some phase lag (although it exponentially approaches 0) and is somewhat noisy, comparable to IE /2 indicator.
We will use these two techniques to construct our better moving average, after we explore the first one a little more closely.
Fixing Overshoot
An n-day EMA has smoothing constant alpha=2/(n+1) and a lag of (n-1)/2.
Thus EMA (3) has lag 1, and EMA (11) has lag 5. Figure 2 shows that, if I am willing to incur 5 days of lag, I get a smoother moving average if I run EMA (3) through itself 5 times than if I just take EMA (11) once.
This suggests that if EPMA and DEMA have 0 or low lag, why not run fast versions (e.g. DEMA(3)) through themselves many times to achieve a smooth result? The problem is that multiple runs through these filters increase their tendency to overshoot the data, giving an unusable result. This is because the amplitude response of DEMA and EPMA is greater than 1 at certain frequencies, giving a gain of much greater than 1 at these frequencies when run through themselves multiple times. Figure 3 shows DEMA(7) and EPMA(7) run through themselves 3 times. DEMA^3 has serious overshoot, and EPMA^3 is terrible.
The solution to the overshoot problem is to recall what we are doing with twicing:
DEMA (n) = EMA (n) + EMA (time series - EMA (n))
The second term is adding, in effect, a smooth version of the derivative to the EMA to achieve DEMA . The derivative term determines how hot the moving average's response to linear trends will be. We need to simply turn down the volume to achieve our basic building block:
EMA (n) + EMA (time series - EMA (n))*.7;
This is algebraically the same as:
EMA (n)*1.7-EMA( EMA (n))*.7;
I have chosen .7 as my volume factor, but the general formula (which I call "Generalized Dema") is:
GD (n,v) = EMA (n)*(1+v)-EMA( EMA (n))*v,
Where v ranges between 0 and 1. When v=0, GD is just an EMA , and when v=1, GD is DEMA . In between, GD is a cooler DEMA . By using a value for v less than 1 (I like .7), we cure the multiple DEMA overshoot problem, at the cost of accepting some additional phase delay. Now we can run GD through itself multiple times to define a new, smoother moving average T3 that does not overshoot the data:
T3(n) = GD ( GD ( GD (n)))
In filter theory parlance, T3 is a six-pole non-linear Kalman filter. Kalman filters are ones which use the error (in this case, time series - EMA(n)) to correct themselves. In Technical Analysis, these are called Adaptive Moving Averages; they track the time series more aggressively when it is making large moves.
What is R-squared Adaptive?
One tool available in forecasting the trendiness of the breakout is the coefficient of determination ( R-squared ), a statistical measurement.
The R-squared indicates linear strength between the security's price (the Y - axis) and time (the X - axis). The R-squared is the percentage of squared error that the linear regression can eliminate if it were used as the predictor instead of the mean value. If the R-squared were 0.99, then the linear regression would eliminate 99% of the error for prediction versus predicting closing prices using a simple moving average .
R-squared is used here to derive a T3 factor used to modify price before passing price through a six-pole non-linear Kalman filter.
What are Dynamic Zones?
As explained in "Stocks & Commodities V15:7 (306-310): Dynamic Zones by Leo Zamansky, Ph .D., and David Stendahl"
Most indicators use a fixed zone for buy and sell signals. Here’ s a concept based on zones that are responsive to past levels of the indicator.
One approach to active investing employs the use of oscillators to exploit tradable market trends. This investing style follows a very simple form of logic: Enter the market only when an oscillator has moved far above or below traditional trading lev- els. However, these oscillator- driven systems lack the ability to evolve with the market because they use fixed buy and sell zones. Traders typically use one set of buy and sell zones for a bull market and substantially different zones for a bear market. And therein lies the problem.
Once traders begin introducing their market opinions into trading equations, by changing the zones, they negate the system’s mechanical nature. The objective is to have a system automatically define its own buy and sell zones and thereby profitably trade in any market — bull or bear. Dynamic zones offer a solution to the problem of fixed buy and sell zones for any oscillator-driven system.
An indicator’s extreme levels can be quantified using statistical methods. These extreme levels are calculated for a certain period and serve as the buy and sell zones for a trading system. The repetition of this statistical process for every value of the indicator creates values that become the dynamic zones. The zones are calculated in such a way that the probability of the indicator value rising above, or falling below, the dynamic zones is equal to a given probability input set by the trader.
To better understand dynamic zones, let's first describe them mathematically and then explain their use. The dynamic zones definition:
Find V such that:
For dynamic zone buy: P{X <= V}=P1
For dynamic zone sell: P{X >= V}=P2
where P1 and P2 are the probabilities set by the trader, X is the value of the indicator for the selected period and V represents the value of the dynamic zone.
The probability input P1 and P2 can be adjusted by the trader to encompass as much or as little data as the trader would like. The smaller the probability, the fewer data values above and below the dynamic zones. This translates into a wider range between the buy and sell zones. If a 10% probability is used for P1 and P2, only those data values that make up the top 10% and bottom 10% for an indicator are used in the construction of the zones. Of the values, 80% will fall between the two extreme levels. Because dynamic zone levels are penetrated so infrequently, when this happens, traders know that the market has truly moved into overbought or oversold territory.
Calculating the Dynamic Zones
The algorithm for the dynamic zones is a series of steps. First, decide the value of the lookback period t. Next, decide the value of the probability Pbuy for buy zone and value of the probability Psell for the sell zone.
For i=1, to the last lookback period, build the distribution f(x) of the price during the lookback period i. Then find the value Vi1 such that the probability of the price less than or equal to Vi1 during the lookback period i is equal to Pbuy. Find the value Vi2 such that the probability of the price greater or equal to Vi2 during the lookback period i is equal to Psell. The sequence of Vi1 for all periods gives the buy zone. The sequence of Vi2 for all periods gives the sell zone.
In the algorithm description, we have: Build the distribution f(x) of the price during the lookback period i. The distribution here is empirical namely, how many times a given value of x appeared during the lookback period. The problem is to find such x that the probability of a price being greater or equal to x will be equal to a probability selected by the user. Probability is the area under the distribution curve. The task is to find such value of x that the area under the distribution curve to the right of x will be equal to the probability selected by the user. That x is the dynamic zone.
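Read as rolling quantiles, the algorithm above can be sketched in a few lines of Python (pandas); the lookback and probabilities are illustrative values.

```python
import pandas as pd

def dynamic_zones(indicator, lookback=70, p_buy=0.10, p_sell=0.10):
    """Dynamic buy/sell zones as rolling quantiles of the indicator itself.

    indicator is a pandas Series. For each bar the buy zone is the value the
    indicator fell below with probability p_buy over the lookback, and the
    sell zone the value it rose above with probability p_sell.
    """
    buy_zone = indicator.rolling(lookback).quantile(p_buy)
    sell_zone = indicator.rolling(lookback).quantile(1.0 - p_sell)
    return buy_zone, sell_zone
```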
Included:
Bar coloring
Signals
Alerts
Loxx's Expanded Source Types
Natural Market Mirror (NMM) and NMAs w/ Dynamic Zones [Loxx]
Natural Market Mirror (NMM) and NMAs w/ Dynamic Zones is a very complex indicator derived from Sloman's Ocean Theory. This indicator contains 3 core outputs, and the output you select to create a long/short signal will be highlighted and bound by Dynamic Zones. Pre-smoothing of the source input is available; you only need to increase the period length to greater than 1. The smoothing algorithm used here is Ehlers' Two-pole Super Smoother. This indicator should be used as you would use the popular QQE, the difference being that this indicator is multi-level momentum adaptive, while QQE is fixed RSI-based. This indicator is multilayer adaptive.
The three core indicators calculations are as follows:
NMM = Natural Market Mirror, solid line
NMF = Natural Moving Average Fast, dashed line (white when off)
NMA = Natural Moving Average Regular, dashed line (yellow when off)
Whichever one you select to be used as the signal output base, that line will increase in width and change color to match the trend of the inputted price. The Dynamic Zones will then readjust around that selected output and form a new bounding zone for signal output.
What is the Ocean Natural Market Mirror?
Created by Jim Sloman, the NMM is a momentum indicator that automatically adjusts to volatility without being programmed to do so. For more info, read his guide "Ocean Theory, an Introduction".
What is the Ocean Natural Moving Average?
Also created by Jim Sloman, the NMA is a moving average that automatically adjusts to volatility.
What are Dynamic Zones?
As explained in "Stocks & Commodities V15:7 (306-310): Dynamic Zones by Leo Zamansky, Ph.D., and David Stendahl"
Most indicators use a fixed zone for buy and sell signals. Here's a concept based on zones that are responsive to past levels of the indicator.
One approach to active investing employs the use of oscillators to exploit tradable market trends. This investing style follows a very simple form of logic: Enter the market only when an oscillator has moved far above or below traditional trading levels. However, these oscillator-driven systems lack the ability to evolve with the market because they use fixed buy and sell zones. Traders typically use one set of buy and sell zones for a bull market and substantially different zones for a bear market. And therein lies the problem.
Once traders begin introducing their market opinions into trading equations, by changing the zones, they negate the system’s mechanical nature. The objective is to have a system automatically define its own buy and sell zones and thereby profitably trade in any market — bull or bear. Dynamic zones offer a solution to the problem of fixed buy and sell zones for any oscillator-driven system.
An indicator’s extreme levels can be quantified using statistical methods. These extreme levels are calculated for a certain period and serve as the buy and sell zones for a trading system. The repetition of this statistical process for every value of the indicator creates values that become the dynamic zones. The zones are calculated in such a way that the probability of the indicator value rising above, or falling below, the dynamic zones is equal to a given probability input set by the trader.
To better understand dynamic zones, let's first describe them mathematically and then explain their use. The dynamic zones definition:
Find V such that:
For dynamic zone buy: P{X <= V}=P1
For dynamic zone sell: P{X >= V}=P2
where P1 and P2 are the probabilities set by the trader, X is the value of the indicator for the selected period and V represents the value of the dynamic zone.
The probability input P1 and P2 can be adjusted by the trader to encompass as much or as little data as the trader would like. The smaller the probability, the fewer data values above and below the dynamic zones. This translates into a wider range between the buy and sell zones. If a 10% probability is used for P1 and P2, only those data values that make up the top 10% and bottom 10% for an indicator are used in the construction of the zones. Of the values, 80% will fall between the two extreme levels. Because dynamic zone levels are penetrated so infrequently, when this happens, traders know that the market has truly moved into overbought or oversold territory.
Calculating the Dynamic Zones
The algorithm for the dynamic zones is a series of steps. First, decide the value of the lookback period t. Next, decide the value of the probability Pbuy for buy zone and value of the probability Psell for the sell zone.
For i=1, to the last lookback period, build the distribution f(x) of the price during the lookback period i. Then find the value Vi1 such that the probability of the price less than or equal to Vi1 during the lookback period i is equal to Pbuy. Find the value Vi2 such that the probability of the price greater or equal to Vi2 during the lookback period i is equal to Psell. The sequence of Vi1 for all periods gives the buy zone. The sequence of Vi2 for all periods gives the sell zone.
In the algorithm description, we have: Build the distribution f(x) of the price during the lookback period i. The distribution here is empirical namely, how many times a given value of x appeared during the lookback period. The problem is to find such x that the probability of a price being greater or equal to x will be equal to a probability selected by the user. Probability is the area under the distribution curve. The task is to find such value of x that the area under the distribution curve to the right of x will be equal to the probability selected by the user. That x is the dynamic zone.
Included
Bar coloring
3 types of signal output options
Alerts
Loxx's Expanded Source Types
Price-Filtered Spearman Rank Correl. w/ Floating Levels [Loxx]
Price-Filtered Spearman Rank Correl. w/ Floating Levels is a Spearman Rank Correlation indicator with optional source filtering and floating levels.
What is Spearman rank correlation?
Spearman rank correlation, also known as Spearman coefficient is a formula used to identify the strength of the link between two datasets. This coefficient is a method that can be used to assess the strength of a relationship apart from the direction it takes. The formula, named after Charles Spearman, a mathematician, can only be used in circumstances where data can be categorized or put in order, for instance, the highest to the lowest.
For a better understanding of the Spearman coefficient, it helps to get a sense of what a monotonic function means. There's a monotonic relationship under these circumstances:
– When the variable values rise together.
– When one variable value rises while the other variable value falls.
– The rate of movement of the variables need not necessarily be constant.
The Spearman correlation coefficient, or rs, ranges between +1 and -1, where +1 indicates a perfect positive association between the variables, zero shows no association, and -1 shows a perfect negative association.
Spearman rank correlation theory:
A nonparametric (distribution-free) rank statistic proposed by Spearman in 1904 as a measure of the strength of the associations between two variables (Lehmann and D'Abrera 1998). The Spearman rank correlation coefficient can be used to give an R-estimate, and is a measure of monotone association that is used when the distribution of the data make Pearson's correlation coefficient undesirable or misleading.
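For reference, here is a minimal Python sketch of the Spearman coefficient as the Pearson correlation of ranks (ignoring tie correction); in this indicator's context, one series would be the optionally filtered price over the lookback window and the other the bar index.

```python
import numpy as np

def spearman_rho(x, y):
    """Spearman rank correlation: Pearson correlation applied to the ranks."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    rx = np.argsort(np.argsort(x)).astype(float)   # ranks 0..n-1 (no tie correction)
    ry = np.argsort(np.argsort(y)).astype(float)
    return np.corrcoef(rx, ry)[0, 1]
```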
Included:
Zero-line and signal cross options for bar coloring, signals, and alerts
Alerts
3 Signal types
Loxx's Expanded Source Types
Dynamic Zone Range on OMA [Loxx]
Dynamic Zone Range on OMA is a One More Moving Average (OMA) oscillator with Dynamic Zones.
What is the One More Moving Average (OMA)?
The usual story goes something like this: which is the best moving average? Everyone that ever started to do any kind of technical analysis was pulled into this "game". Comparing, testing, looking for new ones, testing ...
The idea of this one is simple: it should not be itself, but it should be a kind of a chameleon - it should "imitate" as many other moving averages as it can, so the need for a zillion different moving averages would diminish. And it should have some extras, of course:
The extras:
it has to be smooth
it has to be able to "change speed" without length change
it has to be able to adapt or not (since it has to "imitate" the non-adaptive as well as the adaptive ones)
The steps:
Smoothing - compared are the simple moving average (that is the basis and the first step of this indicator - a smoothed simple moving average with as little lag added as it is possible and as close to the original as it is possible) Speed 1 and non-adaptive are the reference for this basic setup.
Speed changing - same chart only added one more average with "speeds" 2 and 3 (for comparison purposes only here)
Finally - adapting : same chart with SMA compared to one more average with speed 1 but adaptive (so this parameters would make it a "smoothed adaptive simple average") Adapting part is a modified Kaufman adapting way and this part (the adapting part) may be a subject for changes in the future (it is giving satisfactory results, but if or when I find a better way, it will be implemented here)
Some comparisons for different speed settings (all the comparisons are without adaptive turned on, and are approximate. Approximation comes from a fact that it is impossible to get exactly the same values from only one way of calculation, and frankly, I even did not try to get those same values).
speed 0.5 - T3 (0.618 Tilson)
speed 2.5 - T3 (0.618 Fulks/Matulich)
speed 1 - SMA , harmonic mean
speed 2 - LWMA
speed 7 - very similar to Hull and TEMA
speed 8 - very similar to LSMA and Linear regression value
Parameters:
Length - length (period) for averaging
Source - price to use for averaging
Speed - desired speed (I limited it to -1.5 on the lower side, but it does not even need that limit - some interesting results can be achieved with speeds less than 0)
Adaptive - does it adapt or not
What are Dynamic Zones?
As explained in "Stocks & Commodities V15:7 (306-310): Dynamic Zones by Leo Zamansky, Ph.D., and David Stendahl"
Most indicators use a fixed zone for buy and sell signals. Here's a concept based on zones that are responsive to past levels of the indicator.
One approach to active investing employs the use of oscillators to exploit tradable market trends. This investing style follows a very simple form of logic: Enter the market only when an oscillator has moved far above or below traditional trading levels. However, these oscillator-driven systems lack the ability to evolve with the market because they use fixed buy and sell zones. Traders typically use one set of buy and sell zones for a bull market and substantially different zones for a bear market. And therein lies the problem.
Once traders begin introducing their market opinions into trading equations, by changing the zones, they negate the system’s mechanical nature. The objective is to have a system automatically define its own buy and sell zones and thereby profitably trade in any market — bull or bear. Dynamic zones offer a solution to the problem of fixed buy and sell zones for any oscillator-driven system.
An indicator’s extreme levels can be quantified using statistical methods. These extreme levels are calculated for a certain period and serve as the buy and sell zones for a trading system. The repetition of this statistical process for every value of the indicator creates values that become the dynamic zones. The zones are calculated in such a way that the probability of the indicator value rising above, or falling below, the dynamic zones is equal to a given probability input set by the trader.
To better understand dynamic zones, let's first describe them mathematically and then explain their use. The dynamic zones definition:
Find V such that:
For dynamic zone buy: P{X <= V}=P1
For dynamic zone sell: P{X >= V}=P2
where P1 and P2 are the probabilities set by the trader, X is the value of the indicator for the selected period and V represents the value of the dynamic zone.
The probability input P1 and P2 can be adjusted by the trader to encompass as much or as little data as the trader would like. The smaller the probability, the fewer data values above and below the dynamic zones. This translates into a wider range between the buy and sell zones. If a 10% probability is used for P1 and P2, only those data values that make up the top 10% and bottom 10% for an indicator are used in the construction of the zones. Of the values, 80% will fall between the two extreme levels. Because dynamic zone levels are penetrated so infrequently, when this happens, traders know that the market has truly moved into overbought or oversold territory.
Calculating the Dynamic Zones
The algorithm for the dynamic zones is a series of steps. First, decide the value of the lookback period t. Next, decide the value of the probability Pbuy for buy zone and value of the probability Psell for the sell zone.
For i=1, to the last lookback period, build the distribution f(x) of the price during the lookback period i. Then find the value Vi1 such that the probability of the price less than or equal to Vi1 during the lookback period i is equal to Pbuy. Find the value Vi2 such that the probability of the price greater or equal to Vi2 during the lookback period i is equal to Psell. The sequence of Vi1 for all periods gives the buy zone. The sequence of Vi2 for all periods gives the sell zone.
In the algorithm description, we have: Build the distribution f(x) of the price during the lookback period i. The distribution here is empirical namely, how many times a given value of x appeared during the lookback period. The problem is to find such x that the probability of a price being greater or equal to x will be equal to a probability selected by the user. Probability is the area under the distribution curve. The task is to find such value of x that the area under the distribution curve to the right of x will be equal to the probability selected by the user. That x is the dynamic zone.
Included
4 signal types
Bar coloring
Alerts
Channels fill
Variety Moving Averages w/ Dynamic Zones [Loxx]
Variety Moving Averages w/ Dynamic Zones contains 33 source types and 35+ moving averages with double dynamic zones levels.
What are Dynamic Zones?
As explained in "Stocks & Commodities V15:7 (306-310): Dynamic Zones by Leo Zamansky, Ph.D., and David Stendahl"
Most indicators use a fixed zone for buy and sell signals. Here's a concept based on zones that are responsive to past levels of the indicator.
One approach to active investing employs the use of oscillators to exploit tradable market trends. This investing style follows a very simple form of logic: Enter the market only when an oscillator has moved far above or below traditional trading levels. However, these oscillator-driven systems lack the ability to evolve with the market because they use fixed buy and sell zones. Traders typically use one set of buy and sell zones for a bull market and substantially different zones for a bear market. And therein lies the problem.
Once traders begin introducing their market opinions into trading equations, by changing the zones, they negate the system’s mechanical nature. The objective is to have a system automatically define its own buy and sell zones and thereby profitably trade in any market — bull or bear. Dynamic zones offer a solution to the problem of fixed buy and sell zones for any oscillator-driven system.
An indicator’s extreme levels can be quantified using statistical methods. These extreme levels are calculated for a certain period and serve as the buy and sell zones for a trading system. The repetition of this statistical process for every value of the indicator creates values that become the dynamic zones. The zones are calculated in such a way that the probability of the indicator value rising above, or falling below, the dynamic zones is equal to a given probability input set by the trader.
To better understand dynamic zones, let's first describe them mathematically and then explain their use. The dynamic zones definition:
Find V such that:
For dynamic zone buy: P{X <= V}=P1
For dynamic zone sell: P{X >= V}=P2
where P1 and P2 are the probabilities set by the trader, X is the value of the indicator for the selected period and V represents the value of the dynamic zone.
The probability input P1 and P2 can be adjusted by the trader to encompass as much or as little data as the trader would like. The smaller the probability, the fewer data values above and below the dynamic zones. This translates into a wider range between the buy and sell zones. If a 10% probability is used for P1 and P2, only those data values that make up the top 10% and bottom 10% for an indicator are used in the construction of the zones. Of the values, 80% will fall between the two extreme levels. Because dynamic zone levels are penetrated so infrequently, when this happens, traders know that the market has truly moved into overbought or oversold territory.
Calculating the Dynamic Zones
The algorithm for the dynamic zones is a series of steps. First, decide the value of the lookback period t. Next, decide the value of the probability Pbuy for buy zone and value of the probability Psell for the sell zone.
For i=1, to the last lookback period, build the distribution f(x) of the price during the lookback period i. Then find the value Vi1 such that the probability of the price less than or equal to Vi1 during the lookback period i is equal to Pbuy. Find the value Vi2 such that the probability of the price greater or equal to Vi2 during the lookback period i is equal to Psell. The sequence of Vi1 for all periods gives the buy zone. The sequence of Vi2 for all periods gives the sell zone.
In the algorithm description, we have: Build the distribution f(x) of the price during the lookback period i. The distribution here is empirical namely, how many times a given value of x appeared during the lookback period. The problem is to find such x that the probability of a price being greater or equal to x will be equal to a probability selected by the user. Probability is the area under the distribution curve. The task is to find such value of x that the area under the distribution curve to the right of x will be equal to the probability selected by the user. That x is the dynamic zone.
Included
Bar coloring
Alerts
Channels fill
Loxx's Expanded Source Types
35+ moving average types
PA-Adaptive MACD w/ Variety Levels [Loxx]
PA-Adaptive MACD w/ Variety Levels is a Phase Accumulation Adaptive MACD with both floating and quantile levels. This is tuned for Forex. You'll have to adjust the Phase Accumulation Cycle settings to work for crypto and stock markets.
What is MACD?
Moving average convergence divergence ( MACD ) is a trend-following momentum indicator that shows the relationship between two moving averages of a security’s price. The MACD is calculated by subtracting the 26-period exponential moving average ( EMA ) from the 12-period EMA .
What is the Phase Accumulation Cycle?
The phase accumulation method of computing the dominant cycle is perhaps the easiest to comprehend. In this technique, we measure the phase at each sample by taking the arctangent of the ratio of the quadrature component to the in-phase component. A delta phase is generated by taking the difference of the phase between successive samples. At each sample we can then look backwards, adding up the delta phases. When the sum of the delta phases reaches 360 degrees, we must have passed through one full cycle, on average. The process is repeated for each new sample.
The phase accumulation method of cycle measurement always uses one full cycle’s worth of historical data. This is both an advantage and a disadvantage. The advantage is that the lag in obtaining the answer scales directly with the cycle period. That is, the measurement of a short cycle period has less lag than the measurement of a longer cycle period. However, the number of samples used in making the measurement means the averaging period is variable with the cycle period. Longer averaging reduces the noise level compared to the signal. Therefore, shorter cycle periods necessarily have a higher output signal-to-noise ratio.
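A rough Pine v4 sketch of that accumulation step follows. The in-phase and quadrature components are approximated with the commonly published Hilbert-transform FIR taps, the period-dependent gain adjustment and smoothing of full implementations are omitted, and the 60-bar search cap is arbitrary, so treat this as an outline of the idea rather than this script's actual calculation.
//@version=4
study("Phase Accumulation Dominant Cycle (sketch)", max_bars_back=120)
rad2deg = 180.0 / 3.14159265358979
price = hl2
// Rough in-phase / quadrature approximation (Hilbert-transform FIR taps only)
smoothPx = (4 * price + 3 * price[1] + 2 * price[2] + price[3]) / 10
detrender = 0.0962 * smoothPx + 0.5769 * smoothPx[2] - 0.5769 * smoothPx[4] - 0.0962 * smoothPx[6]
inPhase = detrender[3]
quadrature = 0.0962 * detrender + 0.5769 * detrender[2] - 0.5769 * detrender[4] - 0.0962 * detrender[6]
// Phase at each sample, in degrees
phase = inPhase != 0 ? rad2deg * atan(quadrature / inPhase) : 0.0
// Delta phase between successive samples, clamped so the running sum always advances
deltaPhase = phase[1] - phase
deltaPhase := deltaPhase < 1 ? 1.0 : deltaPhase
// Look backwards, adding delta phases until one full 360-degree cycle is covered;
// the number of bars needed is the dominant cycle estimate (capped at 60 here)
domCycle = 0
accum = 0.0
for i = 0 to 60
    if accum < 360
        accum := accum + nz(deltaPhase[i])
        domCycle := i + 1
plot(domCycle, title="Dominant cycle (bars)")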
Included:
Zero-line and signal cross options for bar coloring, signals, and alerts
Alerts
Signals
Loxx's Expanded Source Types
4 moving average types
Williams %R on Chart w/ Dynamic Zones [Loxx]Williams %R on Chart w/ Dynamic Zones is a Williams %R indicator that, instead of being displayed as an oscillator, appears on the chart. The WPR calculation used here leverages a T3 moving average. In addition, the WPR is bounded by Dynamic Zones.
What is Williams %R?
Williams %R, also known as the Williams Percent Range, is a type of momentum indicator that moves between 0 and -100 and measures overbought and oversold levels. The Williams %R may be used to find entry and exit points in the market. The indicator is very similar to the Stochastic oscillator and is used in the same way. It was developed by Larry Williams and it compares a stock’s closing price to the high-low range over a specific period, typically 14 days or periods.
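For reference, a minimal Pine v4 sketch of the standard %R calculation (a plain 14-bar length on raw high/low/close, without the T3 smoothing or on-chart transform this script adds):
//@version=4
study("Williams %R (sketch)")
len = input(14, title="Length")
highestHigh = highest(high, len)
lowestLow = lowest(low, len)
// %R compares the close to the recent high-low range: 0 at the range high, -100 at the range low
wpr = -100 * (highestHigh - close) / (highestHigh - lowestLow)
plot(wpr, title="%R")
hline(-20, title="Overbought")
hline(-80, title="Oversold")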
What is T3 moving average?
Developed by Tim Tillson, the T3 Moving Average is considered superior to traditional moving averages as it is smoother, more responsive and thus performs better in ranging market conditions as well.
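For context, Tillson's T3 is commonly implemented as a weighted combination of six cascaded EMAs; the sketch below uses the conventional 0.7 volume factor, which is an assumption and may differ from the factor this script exposes.
//@version=4
study("T3 Moving Average (sketch)", overlay=true)
len = input(14, title="Length")
a = input(0.7, title="Volume factor")  // conventional Tillson value; assumed here
// Six cascaded EMAs
e1 = ema(close, len)
e2 = ema(e1, len)
e3 = ema(e2, len)
e4 = ema(e3, len)
e5 = ema(e4, len)
e6 = ema(e5, len)
// Tillson's weighting coefficients
c1 = -a*a*a
c2 = 3*a*a + 3*a*a*a
c3 = -6*a*a - 3*a - 3*a*a*a
c4 = 1 + 3*a + a*a*a + 3*a*a
t3 = c1*e6 + c2*e5 + c3*e4 + c4*e3
plot(t3, title="T3", color=color.teal)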
What are Dynamic Zones?
As explained in "Stocks & Commodities V15:7 (306-310): Dynamic Zones by Leo Zamansky, Ph.D., and David Stendahl"
Most indicators use a fixed zone for buy and sell signals. Here's a concept based on zones that are responsive to past levels of the indicator.
One approach to active investing employs the use of oscillators to exploit tradable market trends. This investing style follows a very simple form of logic: Enter the market only when an oscillator has moved far above or below traditional trading levels. However, these oscillator-driven systems lack the ability to evolve with the market because they use fixed buy and sell zones. Traders typically use one set of buy and sell zones for a bull market and substantially different zones for a bear market. And therein lies the problem.
Once traders begin introducing their market opinions into trading equations, by changing the zones, they negate the system’s mechanical nature. The objective is to have a system automatically define its own buy and sell zones and thereby profitably trade in any market — bull or bear. Dynamic zones offer a solution to the problem of fixed buy and sell zones for any oscillator-driven system.
An indicator’s extreme levels can be quantified using statistical methods. These extreme levels are calculated for a certain period and serve as the buy and sell zones for a trading system. The repetition of this statistical process for every value of the indicator creates values that become the dynamic zones. The zones are calculated in such a way that the probability of the indicator value rising above, or falling below, the dynamic zones is equal to a given probability input set by the trader.
To better understand dynamic zones, let's first describe them mathematically and then explain their use. The dynamic zones definition:
Find V such that:
For dynamic zone buy: P{X <= V}=P1
For dynamic zone sell: P{X >= V}=P2
where P1 and P2 are the probabilities set by the trader, X is the value of the indicator for the selected period and V represents the value of the dynamic zone.
The probability inputs P1 and P2 can be adjusted by the trader to encompass as much or as little data as the trader would like. The smaller the probability, the fewer data values fall above and below the dynamic zones, which translates into a wider range between the buy and sell zones. If a 10% probability is used for P1 and P2, only those data values that make up the top 10% and bottom 10% for an indicator are used in the construction of the zones; the remaining 80% of values fall between the two extreme levels. Because dynamic zone levels are penetrated so infrequently, when this happens, traders know that the market has truly moved into overbought or oversold territory.
Calculating the Dynamic Zones
The algorithm for the dynamic zones is a series of steps. First, decide the value of the lookback period t. Next, decide the values of the probability Pbuy for the buy zone and the probability Psell for the sell zone.
For i = 1 to the last lookback period, build the distribution f(x) of the price during the lookback period i. Then find the value Vi1 such that the probability of the price being less than or equal to Vi1 during the lookback period i is equal to Pbuy. Find the value Vi2 such that the probability of the price being greater than or equal to Vi2 during the lookback period i is equal to Psell. The sequence of Vi1 for all periods gives the buy zone. The sequence of Vi2 for all periods gives the sell zone.
In the algorithm description, we have: build the distribution f(x) of the price during the lookback period i. The distribution here is empirical, namely how many times a given value of x appeared during the lookback period. The problem is to find the value of x such that the probability of a price being greater than or equal to x equals a probability selected by the user. Probability is the area under the distribution curve, so the task is to find the value of x such that the area under the distribution curve to the right of x equals the probability selected by the user. That x is the dynamic zone.
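Putting the two ideas together, here is a minimal Pine v4 sketch of a %R series bounded by empirical-percentile dynamic zones. The 14/70 lengths and 10% probabilities are placeholder values, and the built-in percentile call stands in for the article's distribution-building step; it illustrates the technique rather than reproducing this script's code.
//@version=4
study("Williams %R w/ Dynamic Zones (sketch)")
wprLen = input(14, title="%R length")
zoneLen = input(70, title="Dynamic zone lookback t")
pbuy = input(10, title="Pbuy (%)")
psell = input(10, title="Psell (%)")
hh = highest(high, wprLen)
ll = lowest(low, wprLen)
wpr = -100 * (hh - close) / (hh - ll)
// Empirical percentiles of recent %R values play the role of the distribution f(x):
// buyZone is the value below which Pbuy% of readings fell, sellZone the value
// above which Psell% of readings fell, over the last zoneLen bars
buyZone = percentile_nearest_rank(wpr, zoneLen, pbuy)
sellZone = percentile_nearest_rank(wpr, zoneLen, 100 - psell)
plot(wpr, title="%R")
plot(buyZone, title="Dynamic buy zone", color=color.green)
plot(sellZone, title="Dynamic sell zone", color=color.red)
In this form both zones recompute on every bar, so they adapt to the most recent distribution of %R values instead of staying at fixed levels.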
Included
Bar coloring
Channels fill
Loxx's Expanded Source Types
35+ moving average types
TM
//@version=4
study(title="TM", shorttitle="TM 15M", overlay=true)
// Toggle the projected daily opening-range levels
up15on = input(true, title="15 Minute Opening Range High")
down15on = input(true, title="15 Minute Opening Range Low")
// True on the first bar of a new period at the given resolution
is_newbar(res) => change(time(res)) != 0
// Evaluate an expression at another resolution
adopt(r, s) => security(syminfo.tickerid, r, s)
// High and low of the first bar of the current daily session
high_rangeL = valuewhen(is_newbar('D'), high, 0)
low_rangeL = valuewhen(is_newbar('D'), low, 0)
// Extension distance derived from the session range
diff = (high_rangeL - low_rangeL) / 2.35
// Opening-range high/low and their extended bands, evaluated on the 15-minute resolution
plot(up15on ? adopt('15', high_rangeL) : na, color=#009900, linewidth=1, style=plot.style_line)
plot(down15on ? adopt('15', low_rangeL) : na, color=#ff0000, linewidth=1, style=plot.style_line)
plot(up15on ? adopt('15', high_rangeL + diff) : na, color=#009900, linewidth=1, style=plot.style_line)
plot(down15on ? adopt('15', low_rangeL - diff) : na, color=#ff0000, linewidth=1, style=plot.style_line)