Tops & Bottoms - Time of Day Report
█ OVERVIEW
The indicator tracks and reports the percentage of occurrence of daily tops and bottoms by the time of the day.
█ CONCEPTS
At certain times during the trading day, the market reverses and marks the high or low of the day. Tops and bottoms are vital when entering a trade, as they will decide if you are catching the train or being straight offside. They are equally crucial when exiting a position, as they will determine if you are closing at the optimal price or seeing your unrealized profits vanish.
This indicator is above all for educational purposes. It aims to make the knowledge available to all traders, facilitate understanding of the various markets, and ultimately get to know your trading pairs by heart.
Tops and bottoms percentage of occurrence on EURGBP (London time).
Up days versus down days on EURUSD (London time).
█ FEATURES
Selectable time zones
Present the column chart in your local time zone (or that of other market participants).
Configurable time range filter
Select the period to report from.
Day type filter
Analyze all days, or filter only up days or down days.
█ HOW TO USE
Add the indicator to your chart and switch to the 1-hour or 30-minute timeframe.
█ NOTES
Timeframe choice
The 1-hour timeframe produces a higher number of days sampled. Prefer the 30-minute timeframe when your market starts at 9:30 AM.
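For illustration, here is a minimal Pine sketch of the kind of hourly tallying such a report performs (an assumed reconstruction, not the published source), run on a 1-hour chart:
//@version=5
indicator("Top-hour tally (sketch)")
var int[] topCount = array.new_int(24, 0) // one counter per hour of the day
var int days = 0
var float dayHigh = high
var int dayTopHour = hour(time)
if ta.change(time("D")) != 0 // a new day begins: tally the finished day's top hour
    array.set(topCount, dayTopHour, array.get(topCount, dayTopHour) + 1)
    days += 1
    dayHigh := high
    dayTopHour := hour(time)
else if high > dayHigh // a new high of the day: remember the hour it printed
    dayHigh := high
    dayTopHour := hour(time)
// e.g. percentage of sampled days whose high printed during the 09:00 hour
plot(days > 0 ? 100.0 * array.get(topCount, 9) / days : na)
The bottom-of-day tally is symmetric, tracking the lowest low instead.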
Daylight Saving Time (DST)
The exchange time and geographical time zone options may observe Daylight Saving Time, unlike UTC+0.
Statistics
Monte Carlo Simulation - Your Strategy [Kioseff Trading]
Hello!
This script “Monte Carlo Simulation - Your Strategy” uses Monte Carlo simulations for your inputted strategy returns or the asset on your chart!
Features
Monte Carlo Simulation: Performs Monte Carlo simulation to generate multiple future paths.
Asset Price or Strategy: Can simulate either future asset prices based on historical log returns or a specific trading strategy's future performance.
User-Defined Input: Allows you to input your own historical returns for simulation.
Statistical Methods: Offers two simulation methods—Gaussian (Normal) distribution and Bootstrapping.
Graphical Display: Provides options for graphical representation, including line plots and histograms.
Cumulative Probability Target: Enables setting a user-defined cumulative probability target to quantify simulation results.
Adjustable Parameters: Offers numerous user-adjustable settings like number of simulations, forecast length, and more.
Historical Data Points: Option to specify the amount of historical data to be used in the simulation (price).
Custom Binning: Allows you to select the binning method for histograms, with options like Sturges, Rice, and Square Root.
Best/Worst Case: Allows you to show only the best case / worst case outcome (range) for all simulations!
Scatterplot: allows you to show up to 1000 potential outcomes for a specified trade number (or bars forward price endpoint) using a scatter plot.
The image above shows the primary components of the indicator!
The image above shows the best/worst case outcome feature in action!
The image above shows a "fun feature" where 1000 simulated end points for a 15-bar price trajectory are shown as a scatter plot!
How To Perform a Monte Carlo Simulation On Your Strategy
Really, you can input any data into the indicator and it will perform a Monte Carlo Simulation on it :D
The following instructions show how to export your strategy results from TradingView to an Excel File, copy the data, and input it into the indicator.
However , you are not limited to following this method!
Wherever your strategy results are stored, simply copy and paste them into the indicator text area in the settings and simulations will begin.
Returns Should Follow This Format
1
3
-3
2
-5
The numbers are presented as a single column. No commas or separators used.
The numbers above are in sequential order. A return of "1" for the first trade and a return of "-5" for the last trade. Your strategy returns will likely be in sequential order already so don't worry too much about this (:
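As a hedged sketch of how such a vertical list can be consumed in Pine (an assumption; the indicator's actual parsing code is not shown here):
//@version=5
indicator("Returns parser (sketch)")
raw = input.text_area("1\n3\n-3\n2\n-5", "Strategy returns") // one return per line
var float[] rets = array.new_float()
if barstate.isfirst
    parts = str.split(raw, "\n") // split the pasted text on newlines
    for p in parts
        v = str.tonumber(p)
        if not na(v)
            array.push(rets, v)
plot(array.size(rets), "Number of parsed trades")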
How To Perform a Monte Carlo Simulation On Your TradingView Strategy With Excel Data
Export your strategy returns to an excel file using TradingView
Navigate to your downloads folder, open the exported file, and locate column G, "Profit"
Click the column and press CTRL + SPACE to highlight the entire column
Press CTRL + C to copy the entire column
Open this indicator's settings and paste the returns into the text area
The image above illustrates the process!
Notes on Inputting Returns
*You must input your returns as a vertical list, without separators
*The initial text area can only hold so many return values. If your list of trades is large you can input additional returns into two additional text areas at the bottom of the indicator settings.
That should be it; thank you for checking this out!
Xeeder - US Government Bonds Analysis
Xeeder - US Government Bonds Analysis (USBA)
The "Xeeder - US Government Bonds Analysis" (USBA) is a comprehensive tool designed to assist traders in analyzing the spread, historical volatility, and correlation between two different U.S. Government Bonds. This indicator is crucial for understanding the relative performance and risk factors between two bond assets.
Details of the Indicator:
Bond Input Settings: This feature allows traders to select two different U.S. Government Bonds from a dropdown list. The bonds range from 1-month to 30-year maturities.
Timeframe Settings: Traders can choose the timeframe for the analysis, such as Daily, Weekly, etc.
Moving Average (MA) Settings: The indicator offers various types of moving averages like SMA, EMA, WMA, etc., for calculating the spread's moving average. Traders can also specify the length of the moving average.
Spread Calculation: The indicator calculates the spread between the selected bonds and plots it on the chart.
Historical Volatility: The indicator calculates and plots the historical volatility of the spread, which is useful for risk assessment.
Correlation Coefficient: This feature calculates the correlation between the two selected bonds over a specified period.
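A hedged sketch of those three calculations (the bond symbols and the 20-bar length are assumptions):
//@version=5
indicator("Bond spread sketch")
bondA = request.security(input.symbol("TVC:US02Y", "Bond A"), timeframe.period, close)
bondB = request.security(input.symbol("TVC:US10Y", "Bond B"), timeframe.period, close)
maLen = input.int(20, "MA length")
spread = bondA - bondB // spread between the selected bonds
plot(spread, "Spread")
plot(ta.sma(spread, maLen), "Spread MA")
plot(ta.stdev(ta.change(spread), maLen), "Historical volatility of the spread")
plot(ta.correlation(bondA, bondB, maLen), "Correlation coefficient")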
How to Use the Indicator:
Select Bonds: Choose two U.S. Government Bonds from the dropdown list that you are interested in analyzing.
Choose Timeframe: Select the timeframe that aligns with your trading or investment strategy.
Configure MA Settings: Adjust the type and length of the moving average according to your needs.
Analyze Plots: Observe the plotted spread, its moving average, historical volatility, and correlation coefficient to gain insights into the bonds' relative performance and risk factors.
Interpret Data: Use the plotted data to make informed decisions about bond trading or hedging strategies.
Example of Usage:
As a trader focused on swing trading and strategy development, you can use the USBA indicator to:
Select Bonds: Choose bonds that you believe will show significant spread changes based on your macroeconomic and geopolitical analysis.
Adjust Settings: Configure the MA settings to suit your trading strategy.
Analysis and Comparison: Examine the spread, historical volatility, and correlation to identify potential trading opportunities or hedging strategies.
Content Creation: Use the insights gained to write compelling articles on bond market trends, risks, and opportunities, enriching your financial journalism portfolio.
Remember, the USBA indicator is a versatile tool that provides a multi-faceted analysis of U.S. Government Bonds. Always consider your broader trading strategy and market conditions when using this tool.
Seasonal Trend by LogReturn
Seasonal trend in terms of stocks refers to typical and recurring patterns in stock prices that happen at a specific time of the year. There are many theories and beliefs regarding seasonal trends in the financial markets, and some traders use these patterns to guide their investment decisions.
This indicator calculates the trend by "Daily" logarithmic returns of the past years.
So, you should use this indicator with a "Daily" main chart.
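A minimal sketch of that calculation (assuming a Daily chart and a simple per-calendar-day average of log returns across the loaded history):
//@version=5
indicator("Seasonal log-return sketch")
var float[] sumRet = array.new_float(366, 0.0)
var int[] cnt = array.new_int(366, 0)
logRet = math.log(close / close[1]) // daily logarithmic return
idx = dayofyear - 1
if not na(logRet)
    array.set(sumRet, idx, array.get(sumRet, idx) + logRet)
    array.set(cnt, idx, array.get(cnt, idx) + 1)
// average log return observed on this calendar day over the loaded years
plot(array.get(cnt, idx) > 0 ? array.get(sumRet, idx) / array.get(cnt, idx) : na)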
Note: If you select more years in the past than data is available, the line turns red.
Statistical Package for the Trading Sciences [SS]
This is SPTS.
It stands for Statistical Package for the Trading Sciences.
It's a play on SPSS (Statistical Package for the Social Sciences) by IBM (software that, prior to Pinescript, I would use on a daily basis for trading).
Let's preface this indicator first:
This isn't so much an indicator as it is a project. A passion project really.
This has been in the works for months and I still feel like it's incomplete. But the plan here is to continue to add functionality to it and actually have the Pinecoding and Tradingview community contribute to it.
As a math based trader, I relied on Excel, SPSS and R constantly to plan my trades. Since learning a functional amount of Pinescript and coding a lot of what I do and what I relied on SPSS, Excel and R for, I now use them maybe a few times a week.
This indicator, or package, has some of the key things I used Excel and SPSS for on a daily and weekly basis. This also adds a lot of, I would say, fairly complex math functionality to Pinescript. Because this is adding functionality not necessarily native to Pinescript, I have placed most, if not all, of the functionality into actual exportable functions. I have also set it up as a kind of library, with explanations and tips on how other coders can take these functions and implement them into other scripts.
The hope here is that other coders will take it, build upon it, improve it and hopefully share additional functionality that can be added into this package. Hence why I call it a project. Okay, let's get into an overview:
Current Functions of SPTS:
SPTS currently has the following functionality (further explanations will be offered below):
Ability to Perform a One-Tailed, Two-Tailed and Paired Sample T-Test, with corresponding P value.
Standard Pearson Correlation (with functionality to be able to calculate the Pearson Correlation between 2 arrays).
Quadratic (or Curvilinear) correlation assessments.
R squared Assessments.
Standard Linear Regression.
Multiple Regression of 2 independent variables.
Tests of Normality (with Kurtosis and Skewness) and recognition of up to 7 Different Distributions.
ARIMA Modeller (Sort of, more details below)
Okay, so let's go over each of them!
T-Tests
So traditionally, most correlation assessments on Pinescript are done with a generic Pearson Correlation using the "ta.correlation" function. However, this is not always the best test for assessing correlations and determining effects. One approach to correlation assessment used frequently in economics is the T-Test.
The t-test is a statistical hypothesis test used to determine if there is a significant difference between the means of two groups. It assesses whether the sample means are likely to have come from populations with the same mean. The test produces a t-statistic, which is then compared to a critical value from the t-distribution to determine statistical significance. Lower p-values indicate stronger evidence against the null hypothesis of equal means.
A significant t-test result, indicating the rejection of the null hypothesis, suggests that there is statistical evidence to support that there is a significant difference between the means of the two groups being compared. In practical terms, it means that the observed difference in sample means is unlikely to have occurred by random chance alone. Researchers typically interpret this as evidence that there is a real, meaningful difference between the groups being studied.
Some uses of the T-Test in finance include:
Risk Assessment: The t-test can be used to compare the risk profiles of different financial assets or portfolios. It helps investors assess whether the differences in returns or volatility are statistically significant.
Pairs Trading: Traders often apply the t-test when engaging in pairs trading, a strategy that involves trading two correlated securities. It helps determine when the price spread between the two assets is statistically significant and may revert to the mean.
Volatility Analysis: Traders and risk managers use t-tests to compare the volatility of different assets or portfolios, assessing whether one is significantly more or less volatile than another.
Market Efficiency Tests: Financial researchers use t-tests to test the Efficient Market Hypothesis by assessing whether stock price movements follow a random walk or if there are statistically significant deviations from it.
Value at Risk (VaR) Calculation: Risk managers use t-tests to calculate VaR, a measure of potential losses in a portfolio. It helps assess whether a portfolio's value is likely to fall below a certain threshold.
There are many other applications, but these are a few of the highlights. SPTS permits 3 different types of T-Test analyses, these being the One Tailed T-Test (if you want to test a single direction), two tailed T-Test (if you are unsure of which direction is significant) and a paired sample t-test.
Which T is the Right T?
Generally, a one-tailed t-test is used to determine if a sample mean is significantly greater than or less than a specified population mean, whereas a two-tailed t-test assesses if the sample mean is significantly different (either greater or less) from the population mean. In contrast, a paired sample t-test compares two sets of paired observations (e.g., before and after treatment) to assess if there's a significant difference in their means, typically used when the data points in each pair are related or dependent.
So which do you use? Well, it depends on what you want to know. As a general rule a one tailed t-test is sufficient and will help you pinpoint directionality of the relationship (that one ticker or economic indicator has a significant effect on another in a linear way).
A two tailed is more broad and looks for significance in either direction.
A paired sample t-test usually looks at identical groups to see if one group has a statistically different outcome. This is usually used in clinical trials to compare treatment interventions in identical groups. Its use in finance is somewhat limited, but it is invaluable when you want to compare equities that track the same thing (for example SPX vs SPY vs ES1!) or you want to test a hypothesis about an index and a leveraged share (for example, the relationship between FNGU and, say, MSFT or NVDA).
Statistical Significance
In general, with a t-test you would need to reference a T-Table, using the degrees of freedom and the T-Statistic, to determine statistical significance.
However, because I wanted Pinescript to fully replace SPSS and Excel, I went ahead and threw the T-Table into an array, so that Pinescript can determine the actual P value for a t-test itself, no cross referencing required :-).
Left tail (Significant):
Both tails (Significant):
Distributed throughout (insignificant):
As you can see in the images above, the t-test will also display a bell-curve analysis of where the significance falls (left tail, both tails or insignificant, distributed throughout).
That said, I have not included this function for the paired sample t-test because that is a bit more nuanced. But for the one and two tailed assessments, the indicator will provide you the P value.
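For reference, here is a sketch of the core computation, an equal-variance two-sample t-statistic (the package's exact implementation may differ, and the comparison symbol is an assumption):
//@version=5
indicator("t-statistic sketch")
// pooled-variance two-sample t-statistic for two arrays of observations
f_tstat(float[] a, float[] b) =>
    n1 = array.size(a)
    n2 = array.size(b)
    v1 = math.pow(array.stdev(a, false), 2) // sample variances
    v2 = math.pow(array.stdev(b, false), 2)
    pooled = ((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)
    (array.avg(a) - array.avg(b)) / math.sqrt(pooled * (1.0 / n1 + 1.0 / n2))
var float[] s1 = array.new_float()
var float[] s2 = array.new_float()
chg = ta.change(close)
other = request.security("AMEX:XLU", timeframe.period, ta.change(close))
if not na(chg) and not na(other)
    array.push(s1, chg)
    array.push(s2, other)
plot(array.size(s1) > 30 ? f_tstat(s1, s2) : na, "t-statistic")
The resulting t-statistic, with n1 + n2 - 2 degrees of freedom, is what gets looked up against the t-table.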
Pearson Correlation Assessment
I don't think I need to go into too much detail on this one.
I have put in functionality to quickly calculate the Pearson Correlation of two arrays, which is not currently possible with the "ta.correlation" function.
Quadratic (Curvilinear) Correlation
Not everything in life is linear, sometimes things are curved!
The Pearson Correlation is great for linear assessments, but tends to under-estimate the degree of the relationship in curved relationships. There is currently no native function to test for quadratic/curvilinear relationships, so I went ahead and created one.
You can see an example of how Quadratic and Pearson Correlations vary when you look at CME_MINI:ES1! against AMEX:DIA for the past 10 ish months:
Pearson Correlation:
Quadratic Correlation:
One or the other is not always the best, so it is important to check both!
R-Squared Assessments:
The R-squared value, or the square of the Pearson correlation coefficient (r), is used to measure the proportion of variance in one variable that can be explained by the linear relationship with another variable. It represents the goodness-of-fit of a linear regression model with a single predictor variable.
R-Squared is offered in 3 separate forms within this indicator. First, there is the generic R squared, which is obtained by squaring a Pearson Correlation assessment to assess the variance.
The next is the R-Squared which is calculated from an actual linear regression model done within the indicator.
The third is the R-Squared which is calculated from a multiple regression model done within the indicator.
Regardless of which R-Squared value you are using, the meaning is the same. R-Square assesses the variance between the variables under assessment and can offer an insight into the goodness of fit and the ability of the model to account for the degree of variance.
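As a sketch, the generic form is essentially a one-liner in Pine (the FRED:M2SL symbol below is an assumed stand-in for a money-supply series):
//@version=5
indicator("R-squared sketch")
len = input.int(50, "Lookback")
m2 = request.security("FRED:M2SL", "M", close) // assumed money-supply series
r = ta.correlation(close, m2, len) // Pearson r
plot(r * r, "R-squared") // square it to get the proportion of explained variance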
Here is the R Squared assessment of the SPX against the US Money Supply:
Standard Linear Regression
The indicator contains the ability to do a standard linear regression model. You can regress one stock, ticker, or economic indicator on another. The indicator will provide you with all of the expected information from a linear regression model, including the coefficients, intercept, error assessments, correlation and R2 value.
Here is AAPL and MSFT as an example:
Multiple Regression
Oh man, this was something I really wanted in Pinescript, and now we have it!
I have created a function for multiple regression, which, if you export the function, will permit you to perform multiple regression on any variables available in Pinescript!
Using this functionality in the indicator, you will need to select 2 independent variables and a single dependent variable.
Here is an example of multiple regression for NASDAQ:AAPL using NASDAQ:MSFT and NASDAQ:NVDA :
And an example of SPX using the US Money Supply (M2) and AMEX:GLD :
Tests of Normality:
Many indicators perform a lot of functions on the assumption of normality, yet there are no indicators that actually test that assumption!
So, I have inputted a function to assess for normality. It uses the Kurtosis and Skewness to determine up to 7 different distribution types and it will explain the implication of the distribution. Here is an example of SP:SPX on the Monthly Perspective since 2010:
And NYSE:BA since the 60s:
And NVDA since 2015:
ARIMA Modeller
Okay, so let me disclose, this isn't a full-fledged ARIMA modeller. I took some shortcuts.
True ARIMA modelling would involve decomposing the seasonality from the trend. I omitted this step for simplicity's sake. Instead, you can select between using an EMA or SMA based approach, and it will perform an autoregressive-type analysis on the EMA or SMA.
I have tested it on lookback with results provided by SPSS and this actually works better than SPSS' ARIMA function. So I am actually kind of impressed.
You will need to input your parameters for the ARIMA model, I usually would do a 14, 21 and 50 day EMA of the close price, and it will forecast out that range over the length of the EMA.
So for example, if you select the EMA 50 on the daily, it will plot out the forecast for the next 50 days based on an autoregressive model created on the EMA 50. Here is how it looks on AMEX:SPY :
You can also elect to plot the upper and lower confidence bands:
Closing Remarks
So that is the indicator/package.
I do hope to continue expanding its functionality, but as of now, it does already have quite a lot of functionality.
I really hope you enjoy it and find it helpful. This. Has. Taken. AGES! No joke. Between referencing my old statistics textbooks, trying to remember how to calculate some of these things, and wanting to throw my computer against the wall because of errors in the code, this was a task, that's for sure. So I really hope you find some usefulness in it all and enjoy the ability to be able to do functions that previously could really only be done in external software.
As always, leave your comments, suggestions and feedback below!
Take care!
Correlation Coefficient based on Log Returns
Measuring correlations based on log returns, rather than raw prices or simple returns, offers several advantages:
- stationarity: Log returns are more stationary, resulting in more meaningful and reliable results
- volatility: Log returns give a consistent measure of relative changes of assets with different volatility
Log returns are time-additive and often more stationary than simple returns, making them statistically more reliable for analyses in financial contexts. Additionally, they provide a consistent measure of relative price changes and align more closely with the assumptions of many statistical models, including normal distribution.
K's Reversal Indicator III
K's Reversal Indicator III is based on the concept of autocorrelation of returns. The main theory is that extreme autocorrelation (trending) that coincides with a technical signal, such as one from the RSI, may result in a powerful short-term signal that can be exploited.
The indicator is calculated as follows:
1. Calculate the price differential (returns) as the current price minus the previous price.
2. Calculate the correlation between the current return and the return from 14 periods ago, using a lookback of 14 periods.
3. Calculate a 14-period RSI on the close prices.
To generate the signals, use the following rules:
* A bullish signal is generated whenever the correlation is above 0.60 while the RSI is below 40.
* A bearish signal is generated whenever the correlation is above 0.60 while the RSI is above 60.
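Those three steps and two rules translate almost directly into Pine; here is a sketch (a reconstruction from the description, not the published source):
//@version=5
indicator("K's Reversal III sketch", overlay=true)
ret = close - close[1] // 1. price differential
corrRet = ta.correlation(ret, ret[14], 14) // 2. correlation with the return from 14 periods ago
rsi = ta.rsi(close, 14) // 3. 14-period RSI
bull = corrRet > 0.60 and rsi < 40
bear = corrRet > 0.60 and rsi > 60
plotshape(bull, style=shape.triangleup, location=location.belowbar, color=color.green)
plotshape(bear, style=shape.triangledown, location=location.abovebar, color=color.red)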
Position Cost Distribution
The Position Cost Distribution indicator (also known as the Market Position Overview, Chip Distribution, or CYQ Algorithm) provides an estimate of how shares are distributed across different price levels. Visually, it resembles the Volume Profile indicator, though they rely on distinct computational approaches.
🟠 Principle
The Position Cost Distribution algorithm is based on the principle that a security's total shares outstanding usually remains constant, except under conditions like stock splits, reverse splits, or new share issuance. It views all trading activity as simply exchanging share positions between holders at different price points.
By analyzing daily trade volume and the prior day's distribution, the algorithm infers the resulting share distribution after each day. By tracking these inferred transpositions over time, the indicator builds up an aggregate view of the estimated share concentration at each price level. This provides insights into potential buying and selling pressure zones that could form support or resistance areas.
Together with the Volume Profile, the Position Cost Distribution gives traders multiple lenses for examining market structure from both a volume and positional standpoint. Both can help identify meaningful technical price levels.
🟠 Algorithm
The algorithm initializes by allocating all shares to the price range encompassed by the first bar displayed on the chart. Preferably, the chart window should include the stock's IPO date, allowing the model to distribute shares specifically to the IPO price.
For subsequent trading sessions, the indicator performs the following calculations:
1. The daily turnover ratio is calculated by dividing the bar's trading volume by total outstanding shares.
2. For each price level (bucket), the number of shares is reduced by the turnover amount to represent shares transferring from existing holders.
3. The bar's total volume is then added to buckets corresponding to that period's price range.
Currently, the model assumes each share has an equal probability of being exchanged, regardless of how long ago it was acquired or at what price. Potential optimizations could incorporate factors like making shares held longer face a smaller chance of transfer compared to more recently purchased shares.
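A simplified sketch of that daily update (the share count and the fixed bucket bounds are assumptions; the real model would read shares outstanding from the instrument and handle changing price ranges):
//@version=5
indicator("Position cost sketch")
sharesOut = input.float(1e9, "Shares outstanding")
int nBuckets = 100
var float pLo = low * 0.5 // assumed fixed price bounds for the buckets
var float pHi = high * 2.0
var float[] buckets = array.new_float(nBuckets, sharesOut / nBuckets) // start uniform
bucketOf(p) =>
    b = int(math.floor((p - pLo) / (pHi - pLo) * nBuckets))
    math.min(nBuckets - 1, math.max(0, b))
turnover = volume / sharesOut // 1. daily turnover ratio
for i = 0 to nBuckets - 1 // 2. every bucket gives up shares pro-rata
    array.set(buckets, i, array.get(buckets, i) * (1 - turnover))
loB = bucketOf(low)
hiB = bucketOf(high)
add = volume / (hiB - loB + 1) // 3. today's volume lands in today's price range
for i = loB to hiB
    array.set(buckets, i, array.get(buckets, i) + add)
plot(array.get(buckets, bucketOf(close)), "Estimated shares held near the current price")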
────────────────────────────────────────────
Chinese note: this indicator is a TradingView implementation of "Chip Distribution" (筹码分布) :)
Median of Means Estimator
Median of Means (MoM) is a measure of central tendency like the mean (average) and median. However, it can be a better and more robust estimator of central tendency when the data is not normal, is asymmetric, has fat tails (like stock price data) and has outliers. The MoM can be used as a robust trend following tool and in other derived indicators.
Median of means (MoM) is calculated as follows: the MoM estimator shuffles the "n" data points and then splits them into k groups of m data points (n = k*m). It then computes the Arithmetic Mean of each group (k). Finally, it calculates the median over the resulting k Arithmetic Means. This technique diminishes the effect that outliers have on the final estimation by splitting the data and only considering the median of the resulting sub-estimations. This preserves the overall trend despite the data shuffle.
Below is an example to illustrate the advantages of MoM
Set A Set B Set C
3 4 4
3 4 4
3 5 5
3 5 5
4 5 5
4 5 5
5 5 5
5 5 5
6 6 8
6 6 8
7 7 10
7 7 15
8 8 40
9 9 50
10 100 100
Median 5 5 5
Mean 5.5 12.1 17.9
MoM 5.7 6.0 17.3
For all three sets the median is the same. Sets A and B are the same except for one outlier in set B (100); that outlier skews the mean, but the median is resilient. However, in set C the group has several high values; despite that, the median is not responsive and still gives 5 as the central tendency of the group, while the median of means is 17.3, which is very close to the group mean of 17.9. In all three cases (sets A, B and C) the MoM provides a better snapshot of the central tendency of the group. Note: The MoM depends on how the data is initially split, so the value might vary slightly when the randomization is done several times; the resulting values can give a confidence interval for the MoM estimator.
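Here is a sketch of the estimator itself (the initial shuffle is omitted for brevity, and leftover values that do not fill a complete group are ignored):
//@version=5
indicator("Median of Means sketch")
// split n values into k groups of m, average each group, take the median of the averages
f_mom(float[] data, int k) =>
    m = math.floor(array.size(data) / k)
    means = array.new_float()
    for g = 0 to k - 1
        array.push(means, array.avg(array.slice(data, g * m, g * m + m)))
    array.median(means)
var float[] window = array.new_float()
array.push(window, close)
if array.size(window) > 60
    array.shift(window)
plot(array.size(window) == 60 ? f_mom(window, 5) : na, "MoM of the last 60 closes")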
Cumulative Distribution of a Dataset [SS]
This is the Cumulative Distribution of a Dataset indicator that also calculates the Kurtosis and Skewness for a selected dataset and determines the normality and distribution type.
What does it do, in pragmatic terms?
In the simplest terms, it calculates the cumulative distribution function (or CDF) of a user-defined dataset.
The cumulative distribution function (CDF) is a concept used in statistics and probability to describe how the probability of a random variable taking on a certain value or less is distributed across the entire range of possible values. In simpler terms, you can conceptualize the CDF as this:
Imagine you have a list of data, such as test scores of students in a class. The CDF helps you answer questions like, "What's the probability that a randomly chosen student scored 80 or less on the test?"
Or in our case, say we are in a strong up or downtrend on a stock. The CDF can help us answer questions like "Based on this current xyz trend, what is the probability that a ticker will fall above X price or below Y price".
Within the indicator, you can manually assess a price of interest. Let's say, for NVDA, we want to know the probability NVDA goes above or below $450. We can enter $450 into the indicator and get this result:
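Conceptually, that query is just an empirical count over the dataset; a minimal sketch (the $450 price and 252-bar window are assumptions):
//@version=5
indicator("Empirical CDF sketch")
px = input.float(450.0, "Price of interest")
n = input.int(252, "Dataset length")
cnt = 0
for i = 0 to n - 1
    if close[i] <= px
        cnt += 1
probBelow = 100.0 * cnt / n
plot(bar_index >= n ? probBelow : na, "P(close <= price), %")
plot(bar_index >= n ? 100 - probBelow : na, "P(close > price), %")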
Other functions:
Kurtosis and Skewness Functions:
In addition to calculating and plotting the CDF, we can also plot the kurtosis & Skewness.
This can help you look for outlier periods where the distribution of your dataset changed. It can potentially alert you to when a stock is behaving abnormally and when it is more stable and evenly distributed.
Tests of normality
The indicator will use the kurtosis and skewness to determine the normality of the dataset. The indicator is programmed to recognize up to 7 different distribution types and alert you to them and the implications they have in your overall assessment.
e.g. #1 AMC during short squeeze:
e.g. #2: BA during the COVID crash:
Plotting the standardized Z-Score of the Distribution Dataset
You can also standardize the dataset by converting it into Z-Score format:
Plot the raw, CDF results
Two values are plotted, the green and the red. The green represents the probability of a ticker going higher than the current value. The red represents the probability of a ticker going lower than the current value.
Limitations
There are some limitations of the indicator which I think are important to point out. They are:
The indicator cannot tell you timelines, it can only tell you the general probability that data within the dataset will fall above or below a certain value.
The indicator cannot take into account projected periods of consolidation. It is possible a ticker can remain in a consolidation phase for a very long time. This would have the effect of stabilizing the probability in one direction (if there was a lot of downside room, it can normalize the data out so that the extent of the downside probability is mitigated). Thus, it's important to use judgement and other methods to assess the likelihood that a stock will pull back or continue up, based on the overall probability.
The indicator is only looking at an individual dataset.
Using this indicator, you have to omit a large amount of data and look at solely a confined dataset. In a way, this actually improves the accuracy, but can also be misleading, depending on the size and strength of the dataset being chosen. It is important to balance your choice of dataset time with such things as:
a) The strength of the uptrend or downtrend.
b) The length of the uptrend or downtrend.
c) The overall performance of the stock leading into the dataset time period
And that is the indicator in a nutshell.
Hopefully you find it helpful and interesting. Feel free to leave questions, comments and suggestions below.
Safe trades everyone and take care!
Returns Model by Tenozen
Hey there! I've been diving into the book "Paul Wilmott on Quantitative Finance," and I stumbled upon this cool model for calculating and modeling returns. Basically, it helps us figure out how much a price has changed over a set number of periods—I like to use 20 periods as a default. Once we get that rate of change value, we crunch some numbers to find the standard deviation and mean using all the historical data we have. That's the foundation of this model.
Now, let's talk about how to use it. This model shows us how returns and price behavior are connected. When returns hang out in the +1 to +2 standard deviation range, it usually means returns are about to drop, and vice versa. Often, this leads to corresponding price moves. But here's the thing: sometimes prices don't do what we expect. Why? It's because there's another hidden factor at play—I like to call it "power."
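A sketch of the model as described (a 20-period rate of change standardized by its full-history mean and standard deviation; the running-moment bookkeeping is my assumption):
//@version=5
indicator("Returns model sketch")
len = input.int(20, "Periods")
ret = ta.roc(close, len) // % change over the last 20 periods
// running mean and stdev over all historical data seen so far
var float sum = 0.0
var float sumSq = 0.0
var int n = 0
if not na(ret)
    sum += ret
    sumSq += ret * ret
    n += 1
mean = sum / n
sd = math.sqrt(math.max(sumSq / n - mean * mean, 0))
plot((ret - mean) / sd, "Return, in standard deviations")
hline(1)
hline(2)
hline(-1)
hline(-2)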
This "power" isn't something we can see directly, but it's there. Basically, when returns are within that standard deviation range, the market faces resistance when trying to move in its preferred direction, whether bullish or bearish. The strength of this "power" determines if the market will snap back to the average or go for a wild ride. It can show up as small price wiggles, big price jumps, or lightning-fast moves. By understanding this "power," we can get a better handle on what the market might do next and avoid getting blindsided. In the meantime, I couldn't explain "power" yet, but In the future, when I've learned enough, I'd love to share the model with you guys!
So... I'm planning to explore and share more models from this book as I learn, even if those pesky math formulas can be tough to crack. I hope you find this indicator as helpful as I do, and if you've got any suggestions or feedback, please feel free to share! Ciao!
TTP PNR filter
PNR filter uses the "percentile nearest rank" method to produce signals from any source including oscillator indicators and price bars.
Features:
* Length - how many candles back in time to use for calculating PNR
* % low and high - what range of the spread of values captured will form the PNR band. Use 99&100 to create a band on the 1% highest percentile or 0&1 to create a band in the lowest percentile. It accepts float numbers so you can find very rare occurrences.
* src - by default it will use the close price but PNR filter can be used with any source. It's particularly useful when working with oscillators like RSI, MACD, ADX, etc.
* Signal direction - The indicator will print 1 when the selected conditions are met. Once the PNR band is plotted you can choose from cross over, cross under, above and below conditions to trigger a signal.
* Signal source - the band consists of a % low and a % high; this option allows you to pick which band will be used with the "signal direction" parameter.
Example configuration:
1) Select 200 as the length
2) Select % low 0 and % high 1
3) Add RSI to the chart and select it as the source parameter
4) Select signal direction cross over
5) Select signal source % high which corresponds to the 1% band
In this setup you are finding values of RSI that in the past 200 candles have been that low only 1% of the time. With each new candle the calculation window will move as well leaving the oldest candle out.
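A sketch of that example configuration using Pine's built-in percentile-nearest-rank function (a reconstruction; the published filter's internals may differ):
//@version=5
indicator("PNR filter sketch")
len = input.int(200, "Length") // 1) 200 candles back
src = ta.rsi(close, 14) // 3) RSI as the source
bandHigh = ta.percentile_nearest_rank(src, len, 1) // 2) upper edge of the 0-1% band
signal = ta.crossover(src, bandHigh) // 4-5) cross over, using the % high line
plot(bandHigh, "1% PNR band")
plot(signal ? 1 : 0, "Signal", display=display.data_window)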
Percentile Based Trend Strength
The "Percentile Based Trend Strength" (PBTS) calculates trend strength based on percentile values of high and low prices for various length periods and then identifies the current trend as either Bullish, Bearish, or N/A (No Trend). Here's a step-by-step explanation of the code:
Percentile Calculations:
For each specified length period (13, 21, 34, 55, 89, and 144 - Fibonacci numbers), the code calculates the 75th percentile of high prices (e.g., percentile_13H) and the 25th percentile of low prices (e.g., percentile_13L). These percentiles represent levels that prices need to exceed or fall below to indicate a strong trend.
Calculate Highest High and Lowest Low:
The highest high (75th percentile high price of the longest length) and lowest low (25th percentile low price of the longest length) for the longest length period (144) are calculated as highest_high and lowest_low. These values represent threshold price levels.
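A sketch of those percentile inputs for one of the six lengths (the indicator repeats this for 13, 21, 34, 55, 89 and 144):
//@version=5
indicator("PBTS percentile sketch", overlay=true)
percentile_13H = ta.percentile_nearest_rank(high, 13, 75)
percentile_13L = ta.percentile_nearest_rank(low, 13, 25)
highest_high = ta.percentile_nearest_rank(high, 144, 75) // threshold levels
lowest_low = ta.percentile_nearest_rank(low, 144, 25)
trendBull_13 = percentile_13H > highest_high // strong bull condition for length 13
trendBear_13 = percentile_13L < lowest_low // strong bear condition for length 13
plot(highest_high, color=color.green)
plot(lowest_low, color=color.red)
bgcolor(trendBull_13 ? color.new(color.green, 85) : trendBear_13 ? color.new(color.red, 85) : na)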
Trend Strength Conditions:
The code calculates various conditions to determine trend strength. For each percentile value and each length period, it checks if the percentile value is greater than the highest high (trendBull) or less than the lowest low (trendBear). These conditions are used to assess the strength of the bullish and bearish trends.
Count Bull and Count Bear:
The countBull and countBear variables count the number of bullish and bearish conditions met, respectively. These counts help evaluate trend strength.
Weak Bull and Weak Bear Count:
The code calculates the number of weak bullish and bearish conditions. Weak conditions occur when a percentile value falls within the range defined by the highest high and lowest low but doesn't meet the strong trend criteria.
Bull Strength and Bear Strength:
bullStrength and bearStrength are calculated based on the counts of bullish, bearish, weak bullish, and weak bearish conditions. These values represent the overall strength of the bullish and bearish trends.
Strong Bull and Bear Conditions:
These conditions occur when the 75th percentile of high prices (for bull conditions) or the 25th percentile of low prices (for bear conditions) exceeds or falls below the highest high or lowest low, respectively, for the specified length period.
Strong bull conditions indicate a strong upward trend, while strong bear conditions indicate a strong downward trend.
Strong conditions are indicative of more significant price movements and are considered as primary signals of trend strength.
Weak Bull and Bear Conditions:
Weak bull and bear conditions are more nuanced. They occur when the 75th percentile of high prices (for weak bull conditions) or the 25th percentile of low prices (for weak bear conditions) falls within the range defined by the highest high and lowest low for the specified length period.
In other words, prices are not strong enough to reach the extreme levels represented by the highest high or lowest low, but they still exhibit some bullish or bearish tendencies within that range.
Weak conditions suggest a less robust trend. They may indicate that while there is some bias toward a bullish or bearish trend, it is not as strong or decisive as in the case of strong conditions.
Current Trend Identification:
The current trend is determined by comparing bullStrength and bearStrength. If bullStrength is greater, it's considered a Bull trend; if bearStrength is greater, it's a Bear trend. If they are equal, the trend is identified as N/A (No Trend).
Displaying Trend Information:
The code creates a table to display the current trend, reversal probability (strength), count of bullish and bearish conditions, weak bullish and weak bearish counts, and colors the text accordingly.
Plotting Percentiles:
Finally, the code plots the percentile lines for visualization, with 20% transparency. It also plots the highest high and lowest low lines (75th and 25th percentile of the longest length 144) using their original colors.
In summary, this indicator calculates trend strength based on percentile levels of high and low prices for different length periods. It then counts the number of bullish and bearish conditions, factors in weak conditions, and compares the strengths to identify the current trend as Bullish, Bearish, or No Trend. It provides a table with trend information and visualizes percentile lines on the chart.
Paytience Distribution
Paytience Distribution Indicator User Guide
Overview:
The Paytience Distribution indicator is designed to visualize the distribution of any chosen data source. By default, it visualizes the distribution of a built-in Relative Strength Index (RSI). This guide provides details on its functionality and settings.
Distribution Explanation:
A distribution in statistics and data analysis represents the way values or a set of data are spread out or distributed over a range. The distribution can show where values are concentrated, values are absent or infrequent, or any other patterns. Visualizing distributions helps users understand underlying patterns and tendencies in the data.
Settings and Parameters:
Main Settings:
Window Size
- Description: This dictates the amount of data used to calculate the distribution.
- Options: A whole number (integer).
- Tooltip: A window size of 0 means it uses all the available data.
Scale
- Description: Adjusts the height of the distribution visualization.
- Options: Any integer between 20 and 499.
Round Source
- Description: Rounds the chosen data source to a specified number of decimal places.
- Options: Any whole number (integer).
Minimum Value
- Description: Specifies the minimum value you wish to account for in the distribution.
- Options: Any integer from 0 to 100.
- Tooltip: 0 being the lowest and 100 being the highest.
Smoothing
- Description: Applies a smoothing function to the distribution visualization to simplify its appearance.
- Options: Any integer between 1 and 20.
Include 0
- Description: Dictates whether zero should be included in the distribution visualization.
- Options: True (include) or False (exclude).
Standard Deviation
- Description: Enables the visualization of standard deviation, which measures the amount of variation or dispersion in the chosen data set.
- Tooltip: This is best suited for a source that has a vaguely Gaussian (bell-curved) distribution.
- Options: True (enable) or False (disable).
Color Options
- High Color and Low Color: Specifies colors for high and low data points.
- Standard Deviation Color: Designates a color for the standard deviation lines.
Example Settings:
Example Usage RSI
- Description: Enables the use of RSI as the data source.
- Options: True (enable) or False (disable).
RSI Length
- Description: Determines the period over which the RSI is calculated.
- Options: Any integer greater than 1.
Using an External Source:
To visualize the distribution of an external source:
Select the "Move to" option in the dropdown menu for the Paytience Distribution indicator on your chart.
Set it to the existing panel where your external data source is placed.
Navigate to "Pin to Scale" and pin the indicator to the same scale as your external source.
Indicator Logic and Functions:
Sinc Function: Used in signal processing, the sinc function ensures the elimination of aliasing effects.
Sinc Filter: A filtering mechanism which uses sinc function to provide estimates on the data.
Weighted Mean & Standard Deviation: These are statistical measures used to capture the central tendency and variability in the data, respectively.
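As a sketch of the first of these, here is the normalized sinc function and a sinc-weighted mean, one common way such a filter is built (the script's exact kernel and cutoff are assumptions):
//@version=5
indicator("Sinc sketch", overlay=true)
f_sinc(float x) =>
    x == 0 ? 1.0 : math.sin(math.pi * x) / (math.pi * x)
cutoff = input.float(10.0, "Cutoff (bars)")
len = 50
num = 0.0
den = 0.0
for i = 0 to len - 1
    w = f_sinc(i / cutoff) // sinc weight for the bar i bars back
    num += w * close[i]
    den += w
plot(num / den, "Sinc-weighted mean")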
Output and Visualization:
The indicator visualizes the distribution as a series of colored boxes, with the intensity of the color indicating the frequency of the data points in that range. Additionally, lines representing the standard deviation from the mean can be displayed if the "Standard Deviation" setting is enabled.
The example RSI, if enabled, is plotted along with its common threshold lines at 70 (upper) and 30 (lower).
Understanding the Paytience Distribution Indicator
1. What is a Distribution?
A distribution represents the spread of data points across different values, showing how frequently each value occurs. For instance, if you're looking at a stock's closing prices over a month, you may find that the stock closed most frequently around $100, occasionally around $105, and rarely around $110. Graphically visualizing this distribution can help you see the central tendencies, variability, and shape of your data distribution. This visualization can be essential in determining key trading points, understanding volatility, and getting an overview of the market sentiment.
2. The Rounding Mechanism
Every asset and dataset is unique. Some assets, especially cryptocurrencies or forex pairs, might have values that go up to many decimal places. Rounding these values is essential to generate a more readable and manageable distribution.
Why is Rounding Needed? If every unique value from a high-precision dataset was treated distinctly, the resulting distribution would be sparse and less informative. By rounding off, the values are grouped, making the distribution more consolidated and understandable.
Adjusting Rounding: The `Round Source` input allows users to determine the number of decimal places they'd like to consider. If you're working with an asset with many decimal places, adjust this setting to get a meaningful distribution. If the rounding is set too low for high precision assets, the distribution could lose its utility.
3. Standard Deviation and Oscillators
Standard deviation is a measure of the amount of variation or dispersion of a set of values. In the context of this indicator:
Use with Oscillators: When using oscillators like RSI, the standard deviation can provide insights into the oscillator's range. This means you can determine how much the oscillator typically deviates from its average value.
Setting Bounds: By understanding this deviation, traders can better set reasonable upper and lower bounds, identifying overbought or oversold conditions in relation to the oscillator's historical behavior.
4. Resampling
Resampling is the process of adjusting the time frame or value buckets of your data. In the context of this indicator, resampling ensures that the distribution is manageable and visually informative.
Resample Size vs. Window Size: The `Resample Resolution` dictates the number of bins or buckets the distribution will be divided into. On the other hand, the `Window Size` determines how much of the recent data will be considered. It's crucial to ensure that the resample size is smaller than the window size, or else the distribution will not accurately reflect the data's behavior.
Why Use Resampling? Especially for price-based sources, setting the window size around 500 (instead of 0) ensures that the distribution doesn't become too overloaded with data. When set to 0, the window size uses all available data, which may not always provide an actionable insight.
5. Uneven Sample Bins and Gaps
You might notice that the width of sample bins in the distribution is not uniform, and there can be gaps.
Reason for Uneven Widths: This happens because the indicator uses a 'resampled' distribution. The width represents the range of values in each bin, which might not be constant across bins. Some value ranges might have more data points, while others might have fewer.
Gaps in Distribution: Sometimes, there might be no data points in certain value ranges, leading to gaps in the distribution. These gaps are not flaws but indicate ranges where no values were observed.
In conclusion, the Paytience Distribution indicator offers a robust mechanism to visualize the distribution of data from various sources. By understanding its intricacies, users can make better-informed trading decisions based on the distribution and behavior of their chosen data source.
Bursa Malaysia Index Series
Bursa Malaysia Index Series. The index computation is as follows:
Current aggregate Market Capitalisation/Base Aggregate Market Capitalisation x 100.
The Bursa Malaysia Index Series is calculated and disseminated on a real-time basis at 60-second intervals during Bursa’s trading hours.
Label_Trades - Enter your trade information to display on chart
This indicator is an overlay for your main chart. It will display your trade entry and trade close positions on your chart.
After you place the indicator on your chart you will need to enter the trade information that you want to display.
You can open the input settings by clicking on the gear sprocket that appears when you hover your mouse over the indicator name. There are 7 settings you will want to fill in.
Date and Time Bought
Date and Time Sold
Trade Lot Size
Select whether the trades was 'long' or 'short'
The price for buying the Trade
The price for selling the Trade
On the third tab
The code is straightforward. A conditional based on whether the trade was 'long' or 'short' determines where the labels will be placed and whether they show a long trade or short trade. It also displays a tooltip when you hover over the label. The tooltip will display the number of lots bought or sold and the price.
The label.new() function is the meat of the indicator. I will go over it line by line to explain the options available.
Pine Script manual (www.tradingview.com)
The function parameters can be called out as in the example below, or the values can be placed comma-separated. If you do the latter you must enter the parameters in order. I like naming the parameters as I place them so I can easily see what I did.
label.new(
x=t_bot, // x is the time the transaction occurred
y=na, // y is the y-axis value; it is not used here, so 'na' tells Pine Script to ignore the parameter
xloc=xloc.bar_time, // xloc specifies that x is a time value
yloc=yloc.belowbar, // yloc places the label below the bar. There are other locations to use. See the language reference (www.tradingview.com)
style=label.style_triangleup, // This parameter selects the label style. There are many other styles to use, see the manual.
color=color.green, // the label fill color
size=size.small, // the label size
tooltip=str.tostring(lot_size) + " lots bought at $" + str.tostring(bot_val)) // Some parameters are tricky. This one needs to be a string, but we are using an integer value (lot_size) and a float value (bot_val), concatenated via the "+" sign. To do this, the numeric values need to be cast, or converted, into strings. The string function str.tostring() does this.
Z-Score Based Momentum Zones with Advanced Volatility Channels
The indicator "Z-Score Based Momentum Zones with Advanced Volatility Channels" combines various technical analysis components, including volatility, price changes, and volume correction, to calculate Z-Scores, determine momentum zones, and provide a visual representation of price movements and volatility based on multi-timeframe highest-high and lowest-low values.
Note: THIS IS AN IMPROVEMENT OF THE "Multi Time Frame Composite Bands" INDICATOR OF MINE WITH MORE EMPHASIS ON MOMENTUM ZONES CALCULATED BASED ON Z-SCORES
Input Options
look_back_length: This input specifies the look-back period for calculating the intraday volatility correction. It is set to a default value of 5.
lookback_period: This input sets the look-back period for calculating relative price change. The default value is 5.
zscore_period: This input determines the look-back period for calculating the Z-Score. The default value is 500.
avgZscore_length: This input defines the length of the momentum block used in calculations, with a default value of 14.
include_vc: This is a boolean input that, if set to true, enables volume correction in the calculations. By default, it is set to false.
1. Volatility Bands (Composite High and Low):
Composite High and Low: These are calculated by combining different moving averages of the high prices (high) and low prices (low). Specifically:
a_high and a_low are calculated as the average of the highest (ta.highest) and lowest (ta.lowest) high and low prices over various look-back periods (5, 8, 13, 21, 34) to capture short and long-term trends.
b_high and b_low are calculated as the simple moving average (SMA) of the high and low prices over different look-back periods (5, 8, 13) to smooth out the trends.
high_c and low_c are obtained by averaging a_high with b_high and a_low with b_low respectively.
IDV Correction Calculation: In this script the Intraday Volatility (IDV) is calculated as the simple moving average (SMA) of the daily high-low price range divided by the closing price. This measures how much the price fluctuates in a given period.
Composite High and Low with Volatility Correction: The final c_high and c_low values are obtained by adjusting high_c and low_c with the calculated intraday volatility (IDV). These values are used to create the "Composite High" and "Composite Low" plots.
2. Momentum Blocks Based on Z-Score:
Relative Price Change (RPC):
The Relative Price Change (rpdev) is calculated as the difference between the current high-low-close average (hlc3) and the previous simple moving average (psma_hlc3) of the same quantity. This measures the change in price over time.
Additionally, std_hlc3 is calculated as the standard deviation of the hlc3 values over a specified look-back period. The standard deviation quantifies the dispersion or volatility in the price data.
The rpdev is then divided by the std_hlc3 to normalize the price change by the volatility. This normalization ensures that the price change is expressed in terms of standard deviations, which is a common practice in quantitative analysis.
Essentially, the rpdev represents how many standard deviations the current price is away from the previous moving average.
Volume Correction (VC): If the include_vc input is set to true, volume correction is applied by dividing the trading volume by the previous simple moving average of the volume (psma_volume). This accounts for changes in trading activity.
Volume Corrected Relative Price Change (VCRPD): The vcrpd is calculated by multiplying the rpdev by the volume correction factor (vc). This incorporates both price changes and volume data.
Z-Scores: The Z-scores are calculated by taking the difference between the vcrpd and the mean (mean_vcrpd) and then dividing it by the standard deviation (stddev_vcrpd). Z-scores measure how many standard deviations a value is away from the mean. They help identify whether a value is unusually high or low compared to its historical distribution.
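Putting that chain together as a sketch (lengths are the defaults quoted above):
//@version=5
indicator("VCRPD z-score sketch")
lookback = input.int(5, "Lookback")
zlen = input.int(500, "Z-score period")
include_vc = input.bool(false, "Include volume correction")
psma_hlc3 = ta.sma(hlc3, lookback)[1] // previous SMA of hlc3
std_hlc3 = ta.stdev(hlc3, lookback)
rpdev = (hlc3 - psma_hlc3) / std_hlc3 // price change, in standard deviations
psma_volume = ta.sma(volume, lookback)[1]
vc = include_vc ? volume / psma_volume : 1.0
vcrpd = rpdev * vc // volume-corrected relative price change
avgZScore = (vcrpd - ta.sma(vcrpd, zlen)) / ta.stdev(vcrpd, zlen)
plot(avgZScore, "Z-score")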
Momentum Blocks: The "Momentum Blocks" are essentially derived from the Z-scores (avgZScore). The script assigns different colors to the "Fill Area" based on predefined Z-score ranges. These colored areas represent different momentum zones:
Positive Z-scores indicate bullish momentum, and different shades of green are used to fill the area.
Negative Z-scores indicate bearish momentum, and different shades of red are used.
Z-scores near zero (between -0.25 and 0.25) suggest neutrality, and a yellow color is used.
Robust Bollinger Bands with Trend Strength
The "Robust Bollinger Bands with Trend Strength" indicator is a technical analysis tool designed to assess price volatility, identify potential trading opportunities, and gauge trend strength. It combines several robust statistical methods and percentile-based calculations to provide valuable information about price movements with improved resilience to noise while mitigating the impact of outliers and non-normality in price data.
Here's a breakdown of how this indicator works and the information it provides:
Bollinger Bands Calculation: Similar to traditional Bollinger Bands, this indicator calculates the upper and lower bands that envelop the median (centerline) of the price data. These bands represent the potential upper and lower boundaries of price movements.
Robust Statistics: Instead of using standard deviation, this indicator employs robust statistical measures to calculate the bands (spread). Specifically, it uses the Interquartile Range (IQR), which is the range between the 25th percentile (low price) and the 75th percentile (high price). Robust statistics are less affected by extreme values (outliers) and data distributions that may not be perfectly normal. This makes the bands more resistant to unusual price spikes.
Median as Centerline: The indicator utilizes the median of the chosen price source (either HLC3 or VWMA) as the central reference point for the bands. The median is less affected by outliers than the mean (average), making it a robust choice. This can help identify the center of price action, which is useful for understanding whether prices are trending or ranging.
Trend Strength Assessment: The indicator goes beyond the standard Bollinger Bands by incorporating a measure of trend strength. It uses a robust rank-based correlation coefficient to assess the relationship between the price source and the bar index (time). This correlation coefficient, calculated over a specified length, helps determine whether a trend is strong, positive (uptrend), negative (downtrend), weak, or non-existent. When the rank-based correlation coefficient shifts, it indicates exhaustion of a prevailing trend. The trend-strength component is designed to provide statistically valid information while minimizing the impact of outliers and data distribution characteristics. The parameter choices, including a length of 14 and a correlation threshold of +/-0.7, are considered to offer meaningful insights into market conditions and statistical validity (p-value < 0.05, statistically significant). The use of rank-based correlation is a robust alternative to traditional Pearson correlation, especially in the context of financial markets.
Trend Fill: Based on the robust rank-based correlation coefficient, the indicator fills the area between the upper and lower Bollinger Bands with different colors to visually represent the trend strength. For example, it may use green for an uptrend, red for a down trend, and a neutral color for a weak or ranging market. This visual representation can help traders quickly identify potential trend opportunities. In addition the middle line also informs about the overall trend direction of the median.
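A sketch of the band construction described above (the 1.5 multiplier is an assumption):
//@version=5
indicator("Robust band sketch", overlay=true)
len = input.int(20, "Length")
mult = input.float(1.5, "IQR multiplier")
mid = ta.percentile_nearest_rank(close, len, 50) // median centerline
q3 = ta.percentile_nearest_rank(high, len, 75) // 75th percentile of highs
q1 = ta.percentile_nearest_rank(low, len, 25) // 25th percentile of lows
iqr = q3 - q1 // interquartile range as the robust spread
plot(mid, "Median", color=color.gray)
plot(mid + mult * iqr, "Upper", color=color.green)
plot(mid - mult * iqr, "Lower", color=color.red)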
Cross Correlation [Kioseff Trading]
Hello!
This script "Cross Correlation" calculates up to ~10,000 lag-symbol pair cross correlation values simultaneously!
Cross correlation calculation for 20 symbols simultaneously
+/- Lag Range is theoretically infinite (configurable min/max)
Practically, calculate up to 10000 lag-symbol pairs
Results can be sorted by greatest absolute difference or greatest sum
Ability to "isolate" the symbol on your chart and check for cross correlation against a list of symbols
Script defaults to stock pairs when on a stock, Forex pairs when on a Forex pair, crypto when on a crypto coin, futures when on a futures contract.
A custom symbol list can be used for cross correlation checking
Can check any number of available historical data points for cross correlation
Practical Assessment
Ideally, we can calculate cross correlation to determine if, in a list of assets, any of the assets frequently lead or lag one another.
Example
Say we are comparing the log returns for the previous 10 days for SPY and XLU.
*A single time interval corresponds to the timeframe of your chart, i.e., a 1-minute chart = a 1-minute time interval. We're using days for this example.
(Example Results)
A lag value (k) +/-3 is used.
The cross correlation (normalized) for k = +3 is -0.787
The cross correlation (normalized) for k = -3 is 0.216
A positive "k" value indicates the correlation when Asset A (SPY) leads Asset B (XLU)
A negative "k" value indicates the correlation when Asset B (XLU) leads Asset A (SPY)
A normalized cross correlation of -0.787 for k = +3 indicates an "adequately strong" negative relationship when SPY leads XLU by 3 days.
When SPY increases or decreases - XLU frequently moves in the opposite direction 3 days later.
A cross correlation value of 0.216 at k = −3 indicates a "weak" positive correlation when XLU leads SPY by 3 days.
There's a slight tendency for SPY to move in the same direction as XLU 3 days later.
After the cross-correlation score is normalized it will fall between -1 and 1.
A cross-correlation score of 1 indicates a perfect directional relationship between asset A and asset B at the corresponding lag (k).
A cross correlation of -1 indicates a perfect inverse relationship between asset A and asset B at the corresponding lag (k).
A cross correlation of 0 indicates no correlation at the corresponding lag (k).
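For the arithmetic-minded, the sketch below computes the normalized cross correlation of two return series at lag k, with positive k meaning asset A leads asset B (matching the convention above). The toy price series are made up; this is a generic textbook formulation, not the script's exact internals.

```python
import numpy as np

def norm_xcorr(a, b, k):
    """Normalized cross correlation at lag k (k > 0: 'a' leads 'b')."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    if k > 0:
        x, y = a[:-k], b[k:]     # pair a[t] with b[t+k]
    elif k < 0:
        x, y = a[-k:], b[:k]     # pair a[t] with b[t+k], k negative
    else:
        x, y = a, b
    x, y = x - x.mean(), y - y.mean()
    # Result is bounded in [-1, 1] after normalization.
    return float(np.sum(x * y) / np.sqrt(np.sum(x**2) * np.sum(y**2)))

# Toy example: 10 days of log returns for two assets.
spy = np.diff(np.log([100, 101, 102, 101, 103, 104, 103, 105, 106, 105, 107]))
xlu = np.diff(np.log([50, 50.5, 50, 50.8, 51, 50.6, 51.2, 51.5, 51.1, 51.8, 52]))
for k in range(-3, 4):
    print(k, round(norm_xcorr(spy, xlu, k), 3))
```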
The image above shows the primary usage for the script!
The image above further explains the data points located in the table!
The image above shows the script "isolating" the symbol on my chart and checking the cross correlation between the symbol and a list of symbols!
Wrapping Up
With this information, hopefully you can find some meaningful lead-lag relationships amongst assets!
Thank you for checking this out (:
Z-Score Support & Resistance [SS]
Hello everyone,
This is the Z-Score Support and Resistance (S/R) indicator.
How it works:
The trouble with most indicators and strategies that rely on distributions is that they are constantly moving targets.
To combat this, what I have done is anchored the assessment of the normal distribution to the period open price and dropped the data from the current day.
This provides us with a static assessment of the current distribution and static target levels.
It then plots out an assessment of what would be neutral (0 Standard Deviations) all the way up to +3 Standard Deviations and all the way down to -3 Standard Deviations.
It can plot out this assessment on any timeframe, from minutes to months to years; simply select the desired timeframe in the settings menu (the default lookback is 9 periods, which seems to work well for most generic tickers and indices).
The indicator will also count the number of times the ticker has closed within each designated standard-deviation range. To do this, please make sure you have the assessment timeframe open on the chart. So if you want to look at the instances on the daily timeframe, ensure you have the daily timeframe open; if you want to look on the monthly, ensure you have the monthly open, etc. (See below):
How to Use:
To use the indicator, it's pretty simple.
Simply select the desired timeframe you want to use as S/R and use it!
You can adjust the lookback from the default 9 periods based on:
a) The degree of normality in the dataset (you can use a kurtosis indicator to help you ascertain this); or
b) The back-test results of closes within a desired range.
For the latter, you can see an example below:
This is TSLA with a 9 period lookback:
We can see that 50% of closes are happening within 0.5 and -0.5 standard deviations. If we extend this to a 15 period lookback:
Now over 60% of closes are happening in this area.
Why does this matter? Well, because now we know our prime short and long entries (see below):
The green arrows represent prime long setups and the red prime short setups.
This is because we know that, 61% of the time, the ticker will close between +0.5 and -0.5 standard deviations, so we can trade the ticker back to this area.
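To make the math concrete, here is a simplified Python sketch of the level construction and the close-count backtest: a plain mean and standard deviation over the lookback turned into price levels, plus the share of closes landing inside the +/-0.5 SD band. The real indicator anchors the distribution to the period open and drops current-day data as described above; this sketch glosses over that and uses made-up closes.

```python
import numpy as np

def zscore_levels(closes, lookback=9):
    """Static standard-deviation levels from the last `lookback` closed periods."""
    window = np.asarray(closes[-lookback:], dtype=float)  # prior periods only
    mu, sigma = window.mean(), window.std(ddof=1)
    # Levels from -3 to +3 standard deviations in 0.5 steps.
    levels = {z: mu + z * sigma for z in np.arange(-3.0, 3.5, 0.5)}
    # Share of closes that landed inside the +/-0.5 SD band.
    z_scores = (window - mu) / sigma
    inside = float(np.mean(np.abs(z_scores) <= 0.5))
    return levels, inside

closes = [250, 255, 248, 260, 258, 263, 259, 265, 262]
levels, inside = zscore_levels(closes)
print(f"{inside:.0%} of closes inside +/-0.5 SD")
for z, price in levels.items():
    print(f"{z:+.1f} SD -> {price:.2f}")
```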
Further instructions:
Because it is a somewhat complex indicator, I have made a tutorial video that I will link below:
And that is the indicator my friends! Hopefully you enjoy :-).
As always, leave your comments and suggestions / Questions below!
Safe trades!
VWMA/SMA Delta Volatility (Statistical Anomaly Detector)
The "VWMA/SMA Delta Volatility (Statistical Anomaly Detector)" indicator is a tool designed to detect and visualize volatility in a financial market's price data. The indicator calculates the difference (delta) between two moving averages (VWMA/SMA) and uses statistical analysis to identify anomalies or extreme price movements. Here's a breakdown of its components:
Hypothesis:
The hypothesis behind this indicator is that extreme price movements or anomalies in the market can be detected by analyzing the difference between two moving averages and comparing it to a statistically derived normal distribution. When the MA delta (the difference between two MAs: VWMA/SMA) exceeds a certain threshold based on standard deviation and the Z-score coefficient, it may indicate increased market volatility or potential trading opportunities.
Calculation of MA Delta:
The indicator calculates the MA delta by subtracting a simple moving average (SMA) from a volume-weighted moving average (VWMA) of a selected price source. This calculation represents the difference in the market's short-term and long-term trends.
Statistical Analysis:
To detect anomalies, the indicator performs statistical analysis on the MA delta. It calculates a moving average (MA) of the MA delta and its standard deviation over a specified sample size. This MA acts as a baseline, and the standard deviation is used to measure how much the MA delta deviates from the mean.
Delta Normalization:
The MA delta, lower filter, and upper filter are normalized using a function that scales them to a specific range, typically from -100 to 100. Normalization helps in comparing these values on a consistent scale and enhances their visual representation.
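As a reference for the pipeline described above, here is a rough Python sketch: VWMA minus SMA, a rolling baseline and standard deviation of that delta, z-score filters, and min-max normalization onto the -100..100 range. The window lengths and z coefficient here are placeholder assumptions, not the script's defaults.

```python
import numpy as np
import pandas as pd

def delta_volatility(close, volume, ma_len=20, sample=100, z=2.0):
    """VWMA-SMA delta with z-score filters, min-max normalized to -100..100."""
    close, volume = pd.Series(close, dtype=float), pd.Series(volume, dtype=float)

    vwma = (close * volume).rolling(ma_len).sum() / volume.rolling(ma_len).sum()
    sma = close.rolling(ma_len).mean()
    delta = vwma - sma                         # gap between the two moving averages

    mu = delta.rolling(sample).mean()          # baseline of the delta
    sd = delta.rolling(sample).std()
    upper, lower = mu + z * sd, mu - z * sd    # statistically "extreme" thresholds

    lo, hi = delta.rolling(sample).min(), delta.rolling(sample).max()
    def norm(s):                               # scale onto -100..100, relative to the delta's range
        return 200 * (s - lo) / (hi - lo) - 100
    return norm(delta), norm(lower), norm(upper)

rng = np.random.default_rng(0)
close = 100 + np.cumsum(rng.normal(0, 1, 300))
volume = rng.integers(1_000, 10_000, 300)
hist, lo_filter, up_filter = delta_volatility(close, volume)
print(hist.dropna().tail(3))
```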
Visual Representation:
The indicator visualizes the results through histograms and channels:
The histogram bars represent the normalized MA delta. Red bars indicate negative and below-lower-filter values, green bars indicate positive and above-upper-filter values, and silver bars indicate values within the normal range.
It also displays a Z-score channel, which represents the upper and lower filters after normalization. This channel helps traders identify price levels that are statistically significant and potentially indicative of market volatility.
In summary, the "VWMA/SMA Delta Volatility (Statistical Anomaly Detector)" indicator aims to help traders identify abnormal price movements in the market by analyzing the difference between two moving averages and applying statistical measures. It can be a valuable tool for traders looking to spot potential opportunities during periods of increased volatility or to identify potential market anomalies.
Trade Warehouse (SPOT trades)
Hello there!
Let's imagine you are trading spot, buying more and more on every new dump, but the bear market is not going to stop... and your first trade was 3 YEARS AGO!!!
Hard to believe, but it's true.
The problem is that exchanges only let you see your trades from the last 6 months (Binance). But I want to see all of them! How do I know my AVG price?
This script is my solution. Use it to track and store your trades, so you can see your AVG price without uploading old trades every time and reaching for a calculator.
Script description:
Here you can see the "Trade" variable type. A Python script using pandas converts the trades from a .csv file into strings that you can input as trade(price, pair, amount, date..). Each trade is then appended to trades_array and pushed into the loop.
If the trade date is beyond the current candle, the loop pushes the new trade to other arrays such as "pair", "avg_tot", etc., to compute it later.
If the trade was a buy, it increases the invested capital and the owned amount (the opposite for a sell) and recomputes the AVG price.
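A rough sketch of what that pandas conversion step could look like - the column names are hypothetical and the trade(...) string merely mimics the format described above, so adjust both to your exchange's export and the script's actual input:

```python
import pandas as pd

# Hypothetical columns - adjust to your exchange's .csv export.
df = pd.read_csv("trades.csv")  # e.g. columns: date, pair, side, price, amount

lines = [
    f'trade({row.price}, "{row.pair}", '
    f'{row.amount if row.side == "BUY" else -row.amount}, "{row.date}")'
    for row in df.itertuples()
]
print(",\n".join(lines))  # paste the output into the indicator
```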
Once the script has at least one trade, it starts plotting the AVG price.
There are two AVG prices:
1. The total-invested AVG (you can get a negative value if you traded successfully)
2. The current AVG price since the last time the currency amount hit zero (a dust setting defines how many USD we treat as dust)
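The AVG logic itself boils down to a running cost basis. Below is my simplified interpretation of the two averages, with a hypothetical dust threshold; the actual script's bookkeeping may differ:

```python
def running_avg(trades, dust_usd=1.0):
    """trades: list of (side, price, amount) tuples, oldest first."""
    invested = owned = 0.0      # lifetime totals (AVG #1 - can go negative)
    cur_cost = cur_owned = 0.0  # since the balance last dropped to "dust" (AVG #2)
    for side, price, amount in trades:
        if side == "BUY":
            invested += price * amount
            cur_cost += price * amount
            owned += amount
            cur_owned += amount
        else:  # SELL: lifetime AVG books the sale price; current AVG books its own cost
            invested -= price * amount
            if cur_owned > 0:
                cur_cost -= (cur_cost / cur_owned) * amount
            owned -= amount
            cur_owned -= amount
        if cur_owned * price < dust_usd:  # position effectively closed -> reset AVG #2
            cur_cost = cur_owned = 0.0
    total_avg = invested / owned if owned else 0.0
    cur_avg = cur_cost / cur_owned if cur_owned else 0.0
    return total_avg, cur_avg

# Buy 1 @ 100, buy 1 @ 50, sell 1.5 @ 120 -> lifetime AVG is negative, current AVG is 75.
print(running_avg([("BUY", 100, 1.0), ("BUY", 50, 1.0), ("SELL", 120, 1.5)]))
```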
The table presents statistics for all assets.
Upload your trades only once, use the script to convert them into Pine code, and use the result as an indicator. This script allows you to see ALL trades, from the oldest to the newest.
github.com/Arivadis/...w_Tradings_warehouse
If this script helped you - press Star (on GitHub) or Like (on TradingView).
Warning -
It does not count free/earn/withdraw/deposit transactions - only buys and sells =>
This script has no idea about your side-currency deposits, so if you received your BTC, EUR, etc. from another wallet and sold it later, that can break your statistics. Add such transfers manually (see the examples inside the script).
Use my GitHub manual to get this script working.
Installation takes around 3 minutes and involves 3-5 steps.
[GTH decimals heatmap] (wide screen advised)
Preface
I share my personal general view on indicators below; skip ahead to the Description below if you are not interested.
It is my personal conviction that most - if not all - indicators rely mainly on traders' belief that they work. In a feedback system like free markets, they might become a self-fulfilling prophecy as a result, if (!) a large share of traders believes in them - say, because some famous trader releases an indicator, or such a person's public statement goes viral.
One of those voodoo indicators is the famous "follow-through day". There is zero statistical evidence for its validity, beyond the validity of a statement like "If it's bright at day it's usually the sun shining". The uselessness was proven exactly on its inventor's YT channel, Investors Business Daily. According to the examiner, its inventor William J. O'Neil himself could not explain the values used for this indicator. It might have been an incidental observation at some point without general validity. A.k.a "curve fitting". Still, it's being used by many today.
Another one of those indicators is the three points reversal on the S&P 500 Volatility Index (VIX) which allegedly might potentially maybe indicate a possible shift in trend. Both indicators share an immediately problematic feature: They use absolute values. Nothing is ever absolute in a highly subjective and emotionally driven game like the markets where a lot of money can be made and lost.
Most indicators cannot produce additional information since they only re-pack price/volume action. Many times, an interpretation of the distance between price and a moving average and/or the slope of a moving average delivers very similar - if not better - results than MACD, RSI, etc., especially with standard settings, the origins of which are usually unknown (always a warning sign). Very few indicators deliver information that is otherwise hard to quantify, e.g. market noise (Kaufman's Efficiency Ratio or Price Density) or volatility, standard deviation, etc.
It is common knowledge that trading the markets is a game of probability. No indicator works all the time (or at all, see above). In order to make decisions based on any indicator, the probability for its validity and the conditions under which validity seemed to have occurred, must be known. Otherwise it is just coffee grounds reading under the illusion of adding to the edge, when in fact it is only adding to the trees, making it even harder to see the forest.
Description
A common belief is that whole or half-dollar prices tend to be attraction points in price action, so a number of traders include those into decision making. But are they really...?
Spoiler Alert:
Generally, it is safe to say that for the big majority of stocks there is very thin evidence for it. It depends vastly on the asset, the timeframe used and the market period (pre/post/main trading times). If at all, there seems to be an above random but still thin evidence for whole prices being significant attraction points. Interesting/surprising patterns are visible on many stocks/timeframes/session periods, though.
The screenshot shows TSLA, 30m timeframe, two heatmaps added. The top one shows pre/post-market data only, the bottom one main market data only. The cyan fields indicate the strongest occurrence, the dark blue fields indicate the weakest occurrence of open/high/low/close prices at the respective decimal. The red field indicates the current/last price decimal.
Clearly, TSLA displays a strong pre-market attraction for .00, followed by .33 and .67 and .50. This pattern of thirds seems to be a unique feature of TSLA. In the main trading session it is being diluted by a more random distribution.
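The underlying counting is simple enough to sketch: bucket the cent part of each price print and look at the frequency distribution. A toy Python version follows (illustrative data and bucketing, not the indicator's code):

```python
from collections import Counter

def decimal_heatmap(prices):
    """Share of prices landing on each cent value (.00 .. .99)."""
    cents = (int(round((p % 1) * 100)) % 100 for p in prices)
    counts = Counter(cents)
    total = sum(counts.values())
    return {c: n / total for c, n in counts.most_common()}

# Made-up closes echoing TSLA's pattern of thirds (.00, .33, .67).
closes = [241.00, 242.33, 240.67, 241.50, 243.00, 242.33, 244.00, 241.67]
for c, share in decimal_heatmap(closes).items():
    print(f".{c:02d}: {share:.0%}")
```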
Other interesting equities to examine:
SPY: No significant pattern on any timeframe!
META: Generally weak patterns on all timeframes, but interestingly on the 1D there is evidence for less randomness on O and H, more on L and most on C.
AAPL: 1D, foggy attraction areas around .35 and .12. Whole price is no attraction area at all! Very weak attraction around .73.
AMD: Strong pattern on D, W, M, attraction areas around 1/16th intervals. No patterns on lower timeframes.
AMZN: Significant differences between pre/post and main session. Strong 1/16th pattern below D in pre/post.
TAOP: Strong 1/5th pattern on all timeframes.
Read the tool tips and go explore!