ETF-Benchmark Analyzer
Have you ever wondered which ETF performs the best? Which one is the most volatile, or which one has the smallest drawdown?
This Pine Script™ "ETF-Benchmark Analyzer" compares the performance of an ETF (such as SPY, the S&P 500 ETF) against a benchmark, which can also be adjusted by the user. It provides several key financial metrics, such as:
Performance (%): Displays the total return over a specified lookback period (e.g., 1 year). It compares the performance of the ETF against the benchmark and shows the difference.
Alpha (%): Measures the excess return of the ETF over the expected return, which is calculated using the benchmark’s return. Positive alpha indicates that the ETF has outperformed the benchmark, while negative alpha suggests underperformance. This metric is important because it isolates performance that cannot be explained by exposure to the benchmark's movements.
Sharpe Ratio: A risk-adjusted measure of return. It is calculated by dividing the excess return of the ETF (above the risk-free rate) by its standard deviation (volatility). A higher Sharpe ratio indicates better risk-adjusted returns. The Sharpe ratio is calculated for both the ETF and the benchmark, and their difference is displayed as well.
Drawdown: The percentage decrease from the highest price to the lowest price over the lookback period. This is a critical measure of risk, as it shows the largest potential loss an investor might face during a specific period.
Beta: Measures the ETF’s sensitivity to movements in the benchmark. A beta of 1 means the ETF moves in line with the benchmark; greater than 1 means it is more volatile, while less than 1 means it is less volatile.
These metrics provide a holistic view of the ETF’s performance compared to the benchmark, allowing traders to assess the risk and return profile more effectively.
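As an illustration, here is a minimal Pine Script™ sketch of how such metrics can be computed. The chart symbol is taken as the ETF; the benchmark input default and the exact formulas are assumptions for the sketch, not the published script's code.
//@version=5
indicator("ETF vs Benchmark (sketch)")
benchSym = input.symbol("SP:SPX", "Benchmark")  // hypothetical default
lookback = input.int(252, "Lookback (bars)")
rf       = input.float(0.0, "Risk-free rate per bar")
bench    = request.security(benchSym, timeframe.period, close)

// Total return (%) over the lookback: ETF minus benchmark
perfDiff = (close / close[lookback] - 1) * 100 - (bench / bench[lookback] - 1) * 100

// Per-bar returns for beta, alpha and Sharpe
rEtf   = ta.change(close) / close[1]
rBench = ta.change(bench) / bench[1]

// Beta = Cov(ETF, benchmark) / Var(benchmark)
beta = (ta.sma(rEtf * rBench, lookback) - ta.sma(rEtf, lookback) * ta.sma(rBench, lookback)) / ta.variance(rBench, lookback)

// Jensen's alpha per bar (annualization omitted for brevity)
alpha = ta.sma(rEtf, lookback) - (rf + beta * (ta.sma(rBench, lookback) - rf))

// Sharpe ratio and maximum drawdown (%) of the ETF
sharpe   = (ta.sma(rEtf, lookback) - rf) / ta.stdev(rEtf, lookback)
drawdown = (ta.lowest(low, lookback) / ta.highest(high, lookback) - 1) * 100

plot(perfDiff, "Performance difference (%)")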
Scientific Sources
Sharpe Ratio: Sharpe, W. F. (1994). The Sharpe Ratio. Journal of Portfolio Management, 21(1), 49-58. This paper defines and develops the Sharpe ratio as a measure of risk-adjusted return.
Alpha and Beta: Jensen, M. C. (1968). The Performance of Mutual Funds in the Period 1945–1964. The Journal of Finance, 23(2), 389-416. This paper discusses the concepts of alpha and beta in the context of mutual fund performance.
Kalman Predictor
The **Kalman Predictor** indicator is a powerful tool designed for traders looking to enhance their market analysis by smoothing price data and projecting future price movements. This script implements a Kalman filter, a statistical method for noise reduction, to dynamically estimate price trends and velocity. Combined with ATR-based confidence bands, it provides actionable insights into potential price movement, while offering clear trend and momentum visualization.
---
#### **Key Features**:
1. **Kalman Filter Smoothing**:
- Dynamically estimates the current price state and velocity to filter out market noise.
- Projects three future price levels (`Next Bar`, `Next +2`, `Next +3`) based on velocity.
2. **Dynamic Confidence Bands**:
- Confidence bands are calculated using ATR (Average True Range) to reflect market volatility.
- Visualizes potential price deviation from projected levels.
3. **Trend Visualization**:
- Color-coded prediction dots:
- **Green**: Indicates an upward trend (positive velocity).
- **Red**: Indicates a downward trend (negative velocity).
- Dynamically updated label displaying the current trend and velocity value.
4. **User Customization**:
- Inputs to adjust the process and measurement noise for the Kalman filter (`q` and `r`).
- Configurable ATR multiplier for confidence bands.
- Toggleable trend label with adjustable positioning.
---
#### **How It Works**:
1. **Kalman Filter Core**:
- The Kalman filter continuously updates the estimated price state and velocity based on real-time price changes.
- Projections are based on the current price trend (velocity) and extend into the future (Next Bar, +2, +3).
2. **Confidence Bands**:
- Calculated using ATR to provide a dynamic range around the projected future prices.
- Indicates potential volatility and helps traders assess risk-reward scenarios.
3. **Trend Label**:
- Updates dynamically on the last bar to show:
- Current trend direction (Up/Down).
- Velocity value, providing insight into the expected magnitude of the price movement.
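Below is a simplified sketch of the predict/update loop described above. It uses a scalar g-h (alpha-beta) approximation of the Kalman recursion with a constant-velocity state; the published script's filter internals may differ.
//@version=5
indicator("Kalman predictor (sketch)", overlay = true)
q = input.float(0.05, "Process noise (q)")
r = input.float(1.0,  "Measurement noise (r)")

var float state = na   // estimated price
var float vel   = 0.0  // estimated velocity
var float p     = 1.0  // error covariance (scalar simplification)

if na(state)
    state := close
else
    pred = state + vel            // predict: advance the state by the velocity
    p += q
    k = p / (p + r)               // Kalman gain
    residual = close - pred
    state := pred + k * residual  // update: blend prediction and measurement
    vel := vel + (k * k / (2 - k)) * residual  // critically damped g-h relation
    p := (1 - k) * p

plot(state, "Kalman estimate", color.orange)
plot(state + vel, "Next bar projection", vel >= 0 ? color.green : color.red, style = plot.style_circles, offset = 1)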
---
#### **How to Use**:
- **Trend Analysis**:
- Observe the direction and spacing of the prediction dots relative to current candles.
- Larger spacing indicates a potential strong move, while clustering suggests consolidation.
- **Risk Management**:
- Use the confidence bands to gauge potential price volatility and set stop-loss or take-profit levels accordingly.
- **Pullback Detection**:
- Look for flattening or clustering of dots during trends as a signal of potential pullbacks or reversals.
---
#### **Customizable Inputs**:
- **Kalman Filter Parameters**:
- `lookback`: Adjusts the smoothing window.
- `q`: Process noise (higher values make the filter more reactive to changes).
- `r`: Measurement noise (controls sensitivity to price deviations).
- **Confidence Bands**:
- `band_multiplier`: Multiplies ATR to define the range of confidence bands.
- **Visualization**:
- `show_label`: Option to toggle the trend label.
- `label_offset`: Adjusts the label’s distance from the price for better visibility.
---
#### **Examples of Use**:
- **Scalping**: Use on lower timeframes (e.g., 1-minute, 5-minute) to detect short-term price trends and reversals.
- **Swing Trading**: Identify pullbacks or continuations on higher timeframes (e.g., 4-hour, daily) by observing the prediction dots and confidence bands.
- **Risk Assessment**: Confidence bands help visualize potential price volatility, aiding in the placement of stops and targets.
---
#### **Notes for Traders**:
- The **Kalman Predictor** does not predict the future with certainty but provides a statistically informed estimate of price movement.
- Confidence bands are based on historical volatility and should be used as guidelines, not guarantees.
- Always combine this tool with other analysis techniques for optimal results.
---
This script is open-source, and the Kalman filter logic has been implemented uniquely to integrate noise reduction with dynamic confidence band visualization. If you find this indicator useful, feel free to share your feedback and experiences!
---
#### **Credits**:
This script was developed leveraging the statistical principles of Kalman filtering and is entirely original. It incorporates ATR for dynamic confidence band calculations to enhance trader usability and market adaptability.
BTC Price Percentage Difference (Bitfinex - Coinbase)
Introduction:
The BTC Price Percentage Difference Histogram Indicator is a powerful tool designed to help traders visualize and capitalize on the price discrepancies of Bitcoin (BTC) between two major exchanges: Bitfinex and Coinbase. By calculating the real-time percentage difference of BTC-USD prices and displaying it as a color-coded histogram, this indicator enables you to quickly spot potential arbitrage opportunities and gain deeper insights into market dynamics.
Features:
• Real-Time Percentage Difference Calculation:
• Computes the percentage difference between BTC-USD prices on Bitfinex and Coinbase.
• Color-Coded Histogram Visualization:
• Green Bars: Indicate that the BTC price on Bitfinex is higher than on Coinbase.
• Red Bars: Indicate that the BTC price on Bitfinex is lower than on Coinbase.
• User-Friendly and Intuitive:
• Simple setup with no additional inputs required.
• Automatically adapts to the chart’s timeframe for seamless integration.
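The core computation can be sketched in a few lines of Pine Script™, using the exchanges' BTCUSD feeds as listed on TradingView:
//@version=5
indicator("BTC % Difference (sketch)")
bitfinex = request.security("BITFINEX:BTCUSD", timeframe.period, close)
coinbase = request.security("COINBASE:BTCUSD", timeframe.period, close)
diffPct  = (bitfinex - coinbase) / coinbase * 100
plot(diffPct, "Bitfinex vs Coinbase (%)", style = plot.style_histogram, color = diffPct >= 0 ? color.green : color.red)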
Why Bitfinex Whales Matter:
Bitfinex is renowned for hosting some of the largest Bitcoin traders, often referred to as “whales.” These influential players have the capacity to move the market, and historically, they’ve demonstrated a high success rate in buying at market bottoms and selling at market tops. By tracking the price discrepancies between Bitfinex and other exchanges like Coinbase, you can gain valuable insights into the sentiment and actions of these key market participants.
lib_kernel
Library "lib_kernel"
This is a tool/library for developers that contains several common and adapted kernel functions, as well as a kernel regression function and an enum to easily select kernels and embed the list into the settings dialog.
How to Choose and Modify Kernels in Practice
Compact Support Kernels (e.g., Epanechnikov, Triangular): Use for localized smoothing and emphasizing nearby data.
Oscillatory Kernels (e.g., Wave, Cosine): Ideal for detecting periodic patterns or mean-reverting behavior.
Smooth Tapering Kernels (e.g., Gaussian, Logistic): Use for smoothing long-term trends or identifying global price behavior.
kernel_Epanechnikov(u)
Parameters:
u (float)
kernel_Epanechnikov_alt(u, sensitivity)
Parameters:
u (float)
sensitivity (float)
kernel_Triangular(u)
Parameters:
u (float)
kernel_Triangular_alt(u, sensitivity)
Parameters:
u (float)
sensitivity (float)
kernel_Rectangular(u)
Parameters:
u (float)
kernel_Uniform(u)
Parameters:
u (float)
kernel_Uniform_alt(u, sensitivity)
Parameters:
u (float)
sensitivity (float)
kernel_Logistic(u)
Parameters:
u (float)
kernel_Logistic_alt(u)
Parameters:
u (float)
kernel_Logistic_alt2(u, sigmoid_steepness)
Parameters:
u (float)
sigmoid_steepness (float)
kernel_Gaussian(u)
Parameters:
u (float)
kernel_Gaussian_alt(u, sensitivity)
Parameters:
u (float)
sensitivity (float)
kernel_Silverman(u)
Parameters:
u (float)
kernel_Quartic(u)
Parameters:
u (float)
kernel_Quartic_alt(u, sensitivity)
Parameters:
u (float)
sensitivity (float)
kernel_Biweight(u)
Parameters:
u (float)
kernel_Triweight(u)
Parameters:
u (float)
kernel_Sinc(u)
Parameters:
u (float)
kernel_Wave(u)
Parameters:
u (float)
kernel_Wave_alt(u)
Parameters:
u (float)
kernel_Cosine(u)
Parameters:
u (float)
kernel_Cosine_alt(u, sensitivity)
Parameters:
u (float)
sensitivity (float)
kernel(u, select, alt_modificator)
wrapper for all standard kernel functions; see enum Kernel comments and function descriptions for usage scenarios and parameters
Parameters:
u (float)
select (series Kernel)
alt_modificator (float)
kernel_regression(src, bandwidth, kernel, exponential_distance, alt_modificator)
wrapper for kernel regression with all standard kernel functions; see enum Kernel comments for usage scenarios. Performance-optimized version using fixed bandwidth and target
Parameters:
src (float) : input data series
bandwidth (simple int) : sample window of nearest neighbours for the kernel to process
kernel (simple Kernel) : type of Kernel to use for processing, see Kernel enum or respective functions for more details
exponential_distance (simple bool) : if true this puts more emphasis on local / more recent values
alt_modificator (float) : see kernel functions for parameter descriptions. Mostly used to pronounce emphasis on local values or introduce a decay/dampening to the kernel output
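As a self-contained illustration of what these building blocks do (a sketch of the concept, not the library's exact code), here is a minimal Epanechnikov kernel and a Nadaraya-Watson style regression over the nearest bars:
//@version=5
indicator("Kernel regression (sketch)", overlay = true)
bandwidth = input.int(50, "Bandwidth")

// Epanechnikov kernel: K(u) = 0.75 * (1 - u^2) for |u| <= 1, else 0
kernelEpanechnikov(float u) =>
    math.abs(u) <= 1 ? 0.75 * (1 - u * u) : 0.0

float num = 0.0
float den = 0.0
for i = 0 to bandwidth - 1
    w = kernelEpanechnikov(i * 1.0 / bandwidth)  // distance scaled to [0, 1)
    num += close[i] * w
    den += w
plot(num / den, "Kernel regression", color.orange)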
GapDetect
Gap Severity Analysis Library
This library, GapDetect, simplifies the identification and evaluation of overnight gaps by leveraging statistical metrics such as standard deviation and percentage moves. It is ideal for detecting large abnormal gaps, which can be used to modify how strategies decide to enter or exit.
Key Features:
Overnight Gap Detection
Provides two core functions:
today : Computes the value of today's overnight gap.
todayPercent : Computes the percentage change for today's overnight gap.
Volatility Analysis
Includes functions for statistical gap analysis:
normal : Calculates the normal daily standard deviation of the overnight gap, filtering outliers using customizable thresholds.
normalPercent : Similar to normal , but for percentage-based gap moves.
Gap Severity Metric
severity : a positive or negative value that represents the ratio of the current overnight move compared to the standard deviation of previous ones.
Customizable Parameters
Supports custom session specifications, resolutions, and outlier thresholds.
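To make the severity idea concrete, here is a tiny self-contained sketch that reimplements it directly (the library's actual functions, signatures and outlier filtering are richer):
//@version=5
indicator("Gap severity (sketch)")
len = input.int(60, "Gap stdev lookback (days)")

// today's overnight gap in percent (analogous to todayPercent)
gapPct = (open - close[1]) / close[1] * 100

// ratio of today's gap to the stdev of previous gaps (analogous to severity)
severity = gapPct / ta.stdev(gapPct, len)[1]
plot(severity, "Gap severity", style = plot.style_histogram, color = math.abs(severity) > 2 ? color.red : color.gray)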
Gold Friday Anomaly Strategy
This script implements the "Gold Friday Anomaly Strategy," a well-known historical trading strategy that leverages the gold market's behavior from Thursday evening to Friday close. It is a backtesting-focused strategy designed to assess the historical performance of this pattern. Traders use this anomaly as it captures a recurring market tendency observed over the years.
What It Does:
Entry Condition: The strategy enters a long position at the beginning of the Friday trading session (Thursday evening close) within the defined backtesting period.
Exit Condition: Friday evening close.
Backtesting Controls: Allows users to set custom backtesting periods to evaluate strategy performance over specific date ranges.
Key Features:
Custom Backtest Periods: Easily configurable inputs to set the start and end date of the backtesting range.
Fixed Slippage and Commission Settings: Ensures realistic simulation of trading conditions.
Process Orders on Close: Backtesting is optimized by processing orders at the bar's close.
Important Notes:
Backtesting Only: This script is intended purely for backtesting purposes. Past performance is not indicative of future results.
Live Trading Recommendations: For live trading, it is highly recommended to use limit orders instead of market orders, especially during evening sessions, as market order slippage can be significant.
Default Settings:
Entry size: 10% of equity per trade.
Slippage: 1 tick.
Commission: 0.05% per trade.
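A minimal strategy sketch matching these settings. Session boundaries are approximated with calendar days in exchange time; the published script may define the Friday session differently.
//@version=5
strategy("Gold Friday Anomaly (sketch)", overlay = true, default_qty_type = strategy.percent_of_equity, default_qty_value = 10, commission_type = strategy.commission.percent, commission_value = 0.05, slippage = 1, process_orders_on_close = true)

startDate = input.time(timestamp("1 Jan 2010"), "Backtest start")
endDate   = input.time(timestamp("1 Jan 2030"), "Backtest end")
inRange   = time >= startDate and time <= endDate

isFriday     = dayofweek == dayofweek.friday
fridayStarts = isFriday and not isFriday[1]   // first bar of the Friday session
fridayEnds   = not isFriday and isFriday[1]   // Friday session has just closed

if inRange and fridayStarts
    strategy.entry("Long", strategy.long)
if fridayEnds
    strategy.close("Long")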
Volatility and Tick Size Data
This indicator, titled "Tick Information & Standard Deviation Table," provides detailed insights into market microstructure, including tick size, point value, and standard deviation values calculated based on the True Range. It helps visualize essential trading parameters that influence transaction costs, risk management, and portfolio performance, including volatility measures that can guide investment strategies.
Why These Data Points Are Important for Portfolio Management
Tick Size and Point Value:
Tick size refers to the smallest possible price movement in a given asset. It defines the granularity of the price changes, affecting how precise the market price can be at any moment. Point value reflects the monetary value of a single price movement (one tick). These two data points are essential for understanding transaction costs and for evaluating how much capital is at risk per price movement. Smaller tick sizes may lead to more efficient pricing in high-frequency trading strategies (Hasbrouck, 2009).
Reference: Hasbrouck, J. (2009). Empirical Market Microstructure. Foundations and Trends® in Finance, 3(4), 169-272.
Standard Deviations and Volatility:
Standard deviation measures the variability or volatility of an asset's price over a set period. This data point is critical for portfolio management, as it helps to quantify risk and predict potential price movements. True Range and its standard deviations provide a more comprehensive measure of market volatility than just price fluctuations, as they include gaps and extreme price changes. Investors use volatility data to assess the potential risk and adjust portfolio allocations accordingly (Ang, 2006).
Reference: Ang, A. (2006). Asset Management: A Systematic Approach to Factor Investing. Oxford University Press.
Risk Management:
The ability to quantify risk through metrics like the 1st, 2nd, and 3rd standard deviations of the true range is essential for implementing risk controls within a portfolio. By incorporating volatility data, portfolio managers can adjust their strategies for different market conditions, potentially reducing exposure to high-risk environments. These volatility measures help in setting stop-loss levels, optimizing position sizes, and managing the portfolio’s overall risk-return profile (Black & Scholes, 1973).
Reference: Black, F., & Scholes, M. (1973). The Pricing of Options and Corporate Liabilities. Journal of Political Economy, 81(3), 637-654.
Portfolio Diversification and Hedging:
Understanding asset volatility and transaction costs is critical when constructing a diversified portfolio. By using the standard deviations from this indicator, investors can better identify assets that may provide diversification benefits, potentially reducing the overall portfolio risk. Moreover, the point values and tick sizes help assess the cost-effectiveness of various assets, enabling portfolio managers to implement more efficient hedging strategies (Markowitz, 1952).
Reference: Markowitz, H. (1952). Portfolio Selection. The Journal of Finance, 7(1), 77-91.
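In Pine Script™ terms, the table's inputs reduce to a few symbol built-ins plus a standard deviation of the True Range. A condensed sketch follows; the published indicator's layout and settings may differ.
//@version=5
indicator("Tick Info & TR StdDev (sketch)", overlay = true)
len  = input.int(14, "True Range window")
trSd = ta.stdev(ta.tr, len)

var table t = table.new(position.top_right, 2, 5, border_width = 1)
if barstate.islast
    table.cell(t, 0, 0, "Tick size")
    table.cell(t, 1, 0, str.tostring(syminfo.mintick))
    table.cell(t, 0, 1, "Point value")
    table.cell(t, 1, 1, str.tostring(syminfo.pointvalue))
    table.cell(t, 0, 2, "1st stdev of TR")
    table.cell(t, 1, 2, str.tostring(trSd, format.mintick))
    table.cell(t, 0, 3, "2nd stdev of TR")
    table.cell(t, 1, 3, str.tostring(2 * trSd, format.mintick))
    table.cell(t, 0, 4, "3rd stdev of TR")
    table.cell(t, 1, 4, str.tostring(3 * trSd, format.mintick))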
Conclusion
The Tick Information & Standard Deviation Table provides critical market data that informs the risk management, diversification, and pricing strategies used in portfolio management. By incorporating tick size, point value, and volatility metrics, investors can make more informed decisions, better manage risk, and optimize the returns on their portfolios. The data serves as an essential tool for aligning asset selection and portfolio allocations with the investor's risk tolerance and market conditions.
M2 Money Shift for Bitcoin [SAKANE]
M2 Money Shift for Bitcoin was developed to visualize the impact of M2 Money, a macroeconomic indicator, on the Bitcoin market and to support trade analysis.
Bitcoin price fluctuations have a certain correlation with cycles in the M2 money supply. In particular, it has been noted that changes in M2 supply can affect the Bitcoin price 70 days in advance. Very high correlations have been observed in recent years in particular, making it useful as a supplemental analytical tool for trading.
Support for M2 data from multiple countries
M2 supply data from the U.S., Europe, China, Japan, the U.K., Canada, Australia, and India are integrated and all are displayed in U.S. dollar equivalents.
Slide function
Using the "Slide Days Forward" setting, M2 data can be slid up to 500 days, allowing for flexible analysis that takes into account the time difference from the bitcoin price.
Plotting Total Liquidity
Plot total liquidity (in trillions of dollars) by summing the M2 supply of multiple countries.
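A minimal sketch of the data request and slide mechanics, assuming TradingView's ECONOMICS feed. Only the U.S. series is shown here, whereas the indicator sums several regions converted to USD.
//@version=5
indicator("M2 shift (sketch)")
slideDays = input.int(70, "Slide Days Forward", minval = 0, maxval = 500)
usM2 = request.security("ECONOMICS:USM2", "M", close)  // U.S. M2, reported in USD
plot(usM2 / 1e12, "US M2 (USD trillions)", offset = slideDays)  // shift forward on the chart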
How to use
1. After applying the indicator to the chart, activate the M2 data for the required countries from the settings screen.
2. Adjust "Slide Days Forward" to analyze the relationship between changes in M2 supply and the Bitcoin price.
3. Refer to the Total Liquidity plot to build a trading strategy that takes macroeconomic influences into account.
Notes
This indicator is an auxiliary tool for trade analysis and does not guarantee future price trends.
The relationship between M2 supply and bitcoin price depends on many factors and should be used in conjunction with other analysis methods.
N-Degree Moment-Based Adaptive Detection
🙏🏻 The N-Degree Moment-Based Adaptive Detection (NDMBAD) method is a generalization of MBAD, since the horizontal line fit passing through the data's mean can simply be treated as zero-degree polynomial regression. We can extend the MBAD logic to higher-degree polynomial regression.
I don't think I need to talk a lot about the thing there; the logic is really the same as in MBAD, just hit the link above and read if you want. The only difference is now we can gather cumulants not only from the horizontal mean fit (degree = 0) but also from higher-order polynomial regression fit, including linear regression (degree = 1).
Why?
Simply because residuals from the 0-degree model don't contain trend information, and while in some cases that's exactly what you need, in other cases, you want to model your trend explicitly. Imagine your underlying process trends in a steady manner, and you want to control the extreme deviations from the process's core. If you're going to use 0-degree, you'll be treating this beautiful steady trend as a residual itself, which "constantly deviates from the process mean." It doesn't make much sense.
How?
First, if you set the length to 0, you will end up with the function incrementally applied to all your data starting from bar_index 0. This can be called the expanding window mode. That's the functionality I include in all my scripts lately (where it makes sense). As I said in the MBAD description, choosing length is a matter of doing business & applied use of my work, but I think I'm open to talk about it.
I don't see much sense in using degree > 1 though (still in research on it). If you have dem curves, you can use Fourier transform -> spectral filtering / harmonic regression (regression with Fourier terms). The job of a degree > 0 is to model the direction in data, and degree 1 gets it done. In mean reversion strategies, it means that you don't wanna put 0-degree polynomial regression (i.e., the mean) on non-stationary trending data in moving window mode because, this way, your residuals will be contaminated with the trend component.
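A short Pine comparison makes the point (illustrative only): residuals around a mean retain the trend, while residuals around a degree-1 fit do not.
//@version=5
indicator("Residuals: degree 0 vs degree 1 (sketch)")
len  = input.int(100, "Length")
res0 = close - ta.sma(close, len)        // degree 0: the trend leaks into the residual
res1 = close - ta.linreg(close, len, 0)  // degree 1: the residual is de-trended
plot(res0, "Residual vs mean", color.red)
plot(res1, "Residual vs linreg", color.teal)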
By the way, you can send thanks to @aaron294c , he said like mane MBAD is dope, and it's gonna really complement his work, so I decided to drop NDMBAD now, gonna be more useful since it covers more types of data.
I wanted to call it N-Order Moment Adaptive Detection because it abbreviates to NOMAD, which sounds cool and suits me well, because when I perform as a fire dancer, nomad style is one of my outfits. Burning Man stuff vibe, you know. But the problem is degree and order really mean two different things in the polynomial context, so gotta stay right & precise—that's the priority.
∞
AadTrend [InvestorUnknown]
The AadTrend indicator is an experimental trading tool that combines a user-selected moving average with the Average Absolute Deviation (AAD) from this moving average. This combination works similarly to the Supertrend indicator but offers additional flexibility and insights. In addition to generating Long and Short signals, the AadTrend indicator identifies RISK-ON and RISK-OFF states for each trade direction, highlighting areas where taking on more risk may be considered.
Core Concepts and Features
Moving Average (User-Selected Type)
The indicator allows users to select from various types of moving averages to suit different trading styles and market conditions:
Simple Moving Average (SMA)
Exponential Moving Average (EMA)
Hull Moving Average (HMA)
Double Exponential Moving Average (DEMA)
Triple Exponential Moving Average (TEMA)
Relative Moving Average (RMA)
Fractal Adaptive Moving Average (FRAMA)
Average Absolute Deviation (AAD)
The Average Absolute Deviation measures the average distance between each data point and the mean, providing a robust estimation of volatility.
aad(series float src, simple int length, simple string avg_type) =>
    avg = switch avg_type  // moving average as selected by the user
        "SMA" => ta.sma(src, length)
        "EMA" => ta.ema(src, length)
        => ta.rma(src, length)  // other types omitted here; see the full switch below
    abs_deviations = math.abs(src - avg)
    ta.sma(abs_deviations, length)
This provides a volatility measure that adapts to recent market conditions.
Combining Moving Average and AAD
The indicator creates upper and lower bands around the moving average using the AAD, similar to how the Supertrend indicator uses Average True Range (ATR) for its bands.
AadTrend(series float src, simple int length, simple float aad_mult, simple string avg_type) =>
    // Calculate AAD (volatility measure)
    aad_value = aad(src, length, avg_type)
    // Moving average as selected by the user (note: ta.dema, ta.tema and
    // ta.frama are not Pine built-ins; the script supplies its own versions)
    avg = switch avg_type
        "SMA" => ta.sma(src, length)
        "EMA" => ta.ema(src, length)
        "HMA" => ta.hma(src, length)
        "DEMA" => ta.dema(src, length)
        "TEMA" => ta.tema(src, length)
        "RMA" => ta.rma(src, length)
        "FRAMA" => ta.frama(src, length)
    // Upper and lower bands around the moving average
    avg_p = avg + (aad_value * aad_mult)
    avg_m = avg - (aad_value * aad_mult)
    var direction = 0
    if ta.crossover(src, avg_p)
        direction := 1
    else if ta.crossunder(src, avg_m)
        direction := -1
A chart displaying the moving average with upper and lower AAD bands enveloping the price action.
Signals and Trade States
1. Long and Short Signals
Long Signal: Generated when the price crosses above the upper AAD band.
Short Signal: Generated when the price crosses below the lower AAD band.
2. RISK-ON and RISK-OFF States
These states provide additional insight into the strength of the current trend and potential opportunities for taking on more risk.
RISK-ON Long: When the price moves significantly above the upper AAD band after a Long signal.
RISK-OFF Long: When the price moves back below the upper AAD band, suggesting caution.
RISK-ON Short: When the price moves significantly below the lower AAD band after a Short signal.
RISK-OFF Short: When the price moves back above the lower AAD band.
Highlighted areas on the chart representing RISK-ON and RISK-OFF zones for both Long and Short positions.
A chart showing the filled areas corresponding to trend directions and RISK-ON zones
Backtesting and Performance Metrics
While the AadTrend indicator focuses on generating signals and highlighting risk areas, it can be integrated with backtesting frameworks to evaluate performance over historical data.
Integration with Backtest Library:
import InvestorUnknown/BacktestLibrary/1 as backtestlib
Customization and Calibration
1. Importance of Calibration
Default Settings Are Experimental: The default parameters are not optimized for any specific market condition or asset.
User Calibration: Traders should adjust the length, aad_mult, and avg_type parameters to align the indicator with their trading strategy and the characteristics of the asset being analyzed.
2. Factors to Consider
Market Volatility: Higher volatility may require adjustments to the aad_mult to avoid false signals.
Trading Style: Short-term traders might prefer faster-moving averages like EMA or HMA, while long-term traders might opt for SMA or FRAMA.
Alerts and Notifications
The AadTrend indicator includes built-in alert conditions to notify traders of significant market events:
Long and Short Alerts:
alertcondition(long_alert, "LONG (AadTrend)", "AadTrend flipped ⬆LONG⬆")
alertcondition(short_alert, "SHORT (AadTrend)", "AadTrend flipped ⬇Short⬇")
RISK-ON and RISK-OFF Alerts:
alertcondition(risk_on_long, "RISK-ON LONG (AadTrend)", "RISK-ON LONG (AadTrend)")
alertcondition(risk_off_long, "RISK-OFF LONG (AadTrend)", "RISK-OFF LONG (AadTrend)")
alertcondition(risk_on_short, "RISK-ON SHORT (AadTrend)", "RISK-ON SHORT (AadTrend)")
alertcondition(risk_off_short, "RISK-OFF SHORT (AadTrend)", "RISK-OFF SHORT (AadTrend)")
Important Notes and Disclaimer
Experimental Nature: The AadTrend indicator is experimental and should be used with caution.
No Guaranteed Performance: Past performance is not indicative of future results. Backtesting results may not reflect real trading conditions.
User Responsibility: Traders and investors should thoroughly test and calibrate the indicator settings before applying it to live trading.
Risk Management: Always use proper risk management techniques, including stop-loss orders and position sizing.
COT Report Indicator with Selectable Data Type
Overview
The COT Report Indicator with Selectable Data Types is a powerful tool for traders who want to gain deeper insights into market sentiment using the Commitment of Traders (COT) data. This indicator allows you to visualize the net positions of different participant categories—Commercial, Noncommercial, and Nonreportable—directly on your chart.
The indicator is fully customizable, allowing you to select the type of data to display, sync with your chart's timeframe, or choose a custom timeframe. Whether you're analyzing gold, crude oil, indices, or forex pairs, this indicator adapts seamlessly to your trading needs.
Features
Dynamic Data Selection:
Choose between Commercial, Noncommercial, or Nonreportable data types.
Analyze the net positions of market participants for more informed decision-making.
Flexible Timeframes:
Sync with the chart's timeframe for quick analysis.
Select a custom timeframe to view COT data at your preferred granularity.
Wide Asset Coverage:
Supports various assets, including gold, silver, crude oil, indices, and forex pairs.
Automatically adjusts to the ticker you're analyzing.
Clear Visual Representation:
Displays Net Long, Net Short, and Net Difference (Long - Short) positions with distinct colors for easy interpretation.
Error Handling:
Alerts you if the symbol is unsupported, ensuring you know when COT data isn't available for a specific asset.
How to Use
Add the Indicator:
Click "Indicators" in TradingView and search for "COT Report Indicator with Selectable Data Types."
Add it to your chart.
Customize the Settings:
Data Type: Choose between Commercial, Noncommercial, or Nonreportable positions.
Data Source: Select "Futures Only" or "Futures and Options."
Timeframe: Sync with the chart's timeframe or specify a custom one (e.g., weekly, monthly).
Interpret the Data:
Green Line: Net Long Positions.
Red Line: Net Short Positions.
Black Line: Net Difference (Long - Short).
Supported Symbols:
Gold, Silver, Crude Oil, Natural Gas, Forex Pairs, S&P 500, US30, NAS100, and more.
Who Can Benefit
Trend Followers: Identify the buying/selling trends of Commercial and Noncommercial participants.
Sentiment Analysts: Understand shifts in sentiment among major market players.
Long-Term Traders: Use COT data to confirm or contradict your fundamental analysis.
Example Use Case
For example, if you're trading gold (XAUUSD) and select Noncommercial Positions, you’ll see the long and short positions of speculators. An increase in net long positions may signal bullish sentiment, while an increase in net short positions may indicate bearish sentiment.
If you switch to Commercial Positions, you'll get insights into how hedgers and institutions are positioning themselves, helping you confirm or counterbalance your current trading strategy.
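For orientation, the underlying request looks roughly like the sketch below. The COT ticker strings are placeholders: 088691 is the CFTC code for COMEX gold, but the exact metric suffixes must be taken from TradingView's COT documentation (for example via the LibraryCOT helper).
//@version=5
indicator("COT net positions (sketch)")
// placeholder tickers: verify the exact COT ticker format before use
longs  = request.security("COT:088691_F_NoncommercialLong_All", "W", close, ignore_invalid_symbol = true)
shorts = request.security("COT:088691_F_NoncommercialShort_All", "W", close, ignore_invalid_symbol = true)
plot(longs, "Net Long", color.green)
plot(shorts, "Net Short", color.red)
plot(longs - shorts, "Net Difference", color.black)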
Limitations
The indicator only works with supported symbols (COT data availability is limited to specific assets).
The COT data is updated weekly, so it is not suitable for short-term intraday trading.
Weekly H/L DOTW
The Weekly High/Low Day Breakdown indicator provides a detailed statistical analysis of the days of the week (Monday to Sunday) on which weekly highs and lows occur for a given timeframe. It helps traders identify recurring patterns, correlations, and tendencies in price behavior across different days of the week. This can assist in planning trading strategies by leveraging day-specific patterns.
The indicator visually displays the statistical distribution of weekly highs and lows in an easy-to-read tabular format on your chart. Users can customize how the data is displayed, including whether the table is horizontal or vertical, the size of the text, and the position of the table on the chart.
Key Features:
Weekly Highs and Lows Identification:
Tracks the highest and lowest price of each trading week.
Records the day of the week on which these events occur.
Customizable Table Layout:
Option to display the table horizontally or vertically.
Text size can be adjusted (Small, Normal, or Large).
Table position is customizable (top-right, top-left, bottom-right, or bottom-left of the chart).
Flexible Value Representation:
Allows the display of values as percentages or as occurrences.
Default setting is occurrences, but users can toggle to percentages as needed.
Day-Specific Display:
Option to hide Saturday or Sunday if these days are not relevant to your trading strategy.
Visible Date Range:
Users can define a start and end date for the analysis, focusing the results on a specific period of interest.
User-Friendly Interface:
The table dynamically updates based on the selected timeframe and visibility of the chart, ensuring the displayed data is always relevant to the current context.
Adaptable to Custom Needs:
Includes all-day names from Monday to Sunday, but allows for specific days to be excluded based on the user’s preferences.
Indicator Logic:
Data Collection:
The indicator collects daily high, low, day of the week, and time data from the selected ticker using the request.security() function with a daily timeframe ('D').
Weekly Tracking:
Tracks the start and end times of each week.
During each week, it monitors the highest and lowest prices and the days they occurred.
Weekly Closure:
When a week ends (detected by Sunday’s daily candle), the indicator:
Updates the statistics for the respective days of the week where the weekly high and low occurred.
Resets tracking variables for the next week.
Visible Range Filter:
Only processes data for weeks that fall within the visible range of the chart, ensuring the table reflects only the visible portion of the chart.
Statistical Calculations:
Counts the number of weekly highs and lows for each day.
Calculates percentages relative to the total number of weeks in the visible range.
Dynamic Table Display:
Depending on user preferences, displays the data either horizontally or vertically.
Formats the table with proper alignment, colors, and text sizes for easy readability.
Custom Value Representation:
If set to "percentages," displays the percentage of weeks a high/low occurred on each day.
If set to "occurrences," displays the raw count of weekly highs/lows for each day.
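The week-tracking core can be sketched as follows. This simplified version only counts weekly highs per weekday on a daily chart; the published indicator also tracks lows, percentages and the visible range.
//@version=5
indicator("Weekly high day (sketch)")
var int   highDay = na               // weekday of the running weekly high
var float weekHi  = na
var counts = array.new_int(7, 0)     // Sunday..Saturday occurrence counts

if timeframe.change("W")             // a new week begins: record last week's result
    if not na(highDay)
        array.set(counts, highDay - 1, array.get(counts, highDay - 1) + 1)
    weekHi  := high
    highDay := dayofweek
else if na(weekHi) or high > weekHi
    weekHi  := high
    highDay := dayofweek
plot(array.get(counts, dayofweek.monday - 1), "Weeks whose high printed on Monday")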
Input Parameters:
High Text Color:
Color for the text in the "Weekly High" row or column.
Low Text Color:
Color for the text in the "Weekly Low" row or column.
High Background Color:
Background color for the "Weekly High" row or column.
Low Background Color:
Background color for the "Weekly Low" row or column.
Table Background Color:
General background color for the table.
Hide Saturday:
Option to exclude Saturday from the analysis and table.
Hide Sunday:
Option to exclude Sunday from the analysis and table.
Values Format:
Dropdown menu to select "percentages" or "occurrences."
Default value: "occurrences."
Table Position:
Dropdown menu to select the table position on the chart: "top_right," "top_left," "bottom_right," "bottom_left."
Default value: "top_right."
Text Size:
Dropdown menu to select text size: "Small," "Normal," "Large."
Default value: "Normal."
Vertical Table Format:
Checkbox to toggle the table layout:
Checked: Table displays days vertically, with Monday at the top.
Unchecked: Table displays days horizontally.
Start Date:
Allows users to specify the starting date for the analysis.
End Date:
Allows users to specify the ending date for the analysis.
Use Cases:
Day-Specific Pattern Recognition:
Identify if specific days, such as Monday or Friday, are more likely to form weekly highs or lows.
Seasonal Analysis:
Use the start and end date filters to analyze patterns during specific trading seasons.
Strategy Development:
Plan day-based entry and exit strategies by identifying recurring patterns in weekly highs/lows.
Historical Review:
Study historical data to understand how market behavior has changed over time.
TradingView TOS Compliance Notes:
Originality:
This script is uniquely designed to provide day-based statistics for weekly highs and lows, which is not a common feature in other publicly available indicators.
Usefulness:
Offers practical insights for traders interested in understanding day-specific price behavior.
Detailed Description:
Fully explains the purpose, features, logic, input settings, and use cases of the indicator.
Includes clear and concise details on how each input works.
Clear Input Descriptions:
All input parameters are clearly named and explained in the script and this description.
No Redundant Functionality:
Focused specifically on tracking weekly highs and lows, ensuring the indicator serves a distinct purpose without unnecessary features.
Simple Decision Matrix Classification Algorithm [SS]
Hello everyone,
It has been a while since I posted an indicator, so thought I would share this project I did for fun.
This indicator is an attempt to develop a pseudo Random Forest classification decision matrix model for Pinescript.
This is not a full, robust Random Forest model by any stretch of the imagination, but it is a good way to showcase how decision matrices can be applied to trading and within Pinescript.
As to not market this as something it is not, I am simply calling it the "Simple Decision Matrix Classification Algorithm". However, I have stolen most of the aspects of this machine learning algo from concepts of Random Forest modelling.
How it works:
With models like Support Vector Machines (SVM), Random Forest (RF) and Gradient Boosted Machine Learning (GBM), which are commonly used in Machine Learning Classification Tasks (MLCTs), this model operates similarly to the basic concepts shared amongst those modelling types. While it is not very similar to SVM, it is very similar to RF and GBM, in that it uses a "voting" system.
What do I mean by voting system?
How most classification MLAs work is by feeding an input dataset to an algorithm. The algorithm sorts this data, categorizes it, then introduces something called a confusion matrix (essentially sorting the data in no apparent order, so as to prevent over-fitting and introduce "confusion" to the algorithm, ensuring that it is not just following a trend).
From there, the data is called upon based on current data inputs (so, say we are using RSI and Z-Score, the current RSI and Z-Score are compared against other RSI and Z-Score values that the model has saved). The model will process this information and each "tree" or "node" will vote. Then a cumulative overall vote is cast.
How does this MLA work?
This model accepts 2 independent variables. In order to keep things simple, this model was kept as a three-node model. This means that there are 3 separate votes that go into the result. A vote is cast for each of the two independent variables and then a cumulative vote is cast for the overall verdict (the result of the model's prediction).
The model actually displays this system diagrammatically and it will likely be easier to understand if we look at the diagram to ground the example:
In the diagram, at the very top we have the classification variable that we are trying to predict. In this case, we are trying to predict whether there will be a breakout/breakdown outside of the normal ATR range (this is a yes-or-no question, hence a classification task).
So the question forms the basis of the input. The model will track at which points the ATR range is exceeded to the upside or downside, as well as the other variables that we wish to use to predict these exceedances. The ATR range forms the basis of all the data flowing into the model.
Then, at the second level, you will see we are using Z-Score and RSI to predict these breaks. The circle will change colour according to "feature importance". Feature importance basically just means that the indicator has a strong impact on the outcome. The stronger the importance, the more green it will be, the weaker, the more red it will be.
We can see both RSI and Z-Score are green and thus we can say they are strong options for predicting a breakout/breakdown.
So then we move down to the actual voting mechanisms. You will see the 2 pink boxes. These are the first lines of voting. What is happening here is the model is identifying the instances that are most similar and whether the classification task we have assigned (remember our ATR exceedance classifier) was either true or false based on RSI and Z-Score.
These are our 2 nodes. They both cast an individual vote. You will see in this case, both cast a vote of 1. The options are either 1 or 0. A vote of 1 means "Yes" or "Breakout likely".
However, this is not the only voting the model does. The model does one final vote based on the 2 votes. This is shown in the purple box. We can see the final vote and result at the end with the orange circle. It is 1, which means a range exceedance is anticipated as the most likely outcome.
The Data Table Component
The model has many moving parts. I have tried to represent the pivotal functions diagrammatically, but some other important aspects and background information must be obtained from the companion data table.
If we bring back our diagram from above:
We can see the data table to the left.
The data table contains 2 sections, one for each independent variable. In this case, our independent variables are RSI and Z-Score.
The data table will provide you with specifics about the independent variables, as well as about the model accuracy and outcome.
If we take a look at the first row, it simply indicates which independent variable it is looking at. If we go down to the next row, where it reads "Weighted Impact", we can see a corresponding percent. The "weighted impact" is the amount of representation each independent variable has within the voting scheme. So in this case, we can see it's pretty equal, 45% and 55%. This tells us that there is a slightly higher representation of Z-Score than RSI, but nothing to worry about.
If there were a major over-representation of greater than 30 or 40%, then the model would risk being skewed and voting too heavily in favour of one variable over the other.
If we move down from there, we will see that the next row reads "independent accuracy". The voting of each independent variable's accuracy is considered separately. This is one way we can determine feature importance, by seeing how well one feature augments the accuracy. In this case, we can see that RSI has the greatest importance, with an accuracy of around 87% at predicting breakouts. That makes sense, as RSI is a momentum-based oscillator.
Then if we move down one more, we will see what each independent feature (node) has voted for. In this case, both RSI and Z-Score voted for 1 (Breakout in our case).
You can weigh these in collaboration, but it's always important to look at the final verdict of the model: if we move down, we can see the "Model prediction," which is "Bullish."
If you are using the ATR breakout, the model cannot distinguish between "Bullish" and "Bearish", just that a "Breakout" is likely, either bearish or bullish. However, for the other classification tasks this model can do, the results are either Bullish or Bearish.
Using the Function:
Okay so now that all that technical stuff is out of the way, let's get into using the function. First of all this function innately provides you with 3 possible classification tasks. These include:
1. Predicting Red or Green Candle
2. Predicting Bullish / Bearish ATR
3. Predicting a Breakout from the ATR range
The possible independent variables include:
1. Stochastics,
2. MFI,
3. RSI,
4. Z-Score,
5. EMAs,
6. SMAs,
7. Volume
The model can only accept 2 independent variables, to operate within the computation time limits for pine execution.
Let's quickly go over what the numbers in the diagram mean:
The numbers being pointed at with the yellow arrows represent the cases the model is sorting and voting on. These are the most identical cases and are serving as the voting foundation for the model.
The numbers being pointed at with the pink candle are the voting results.
Extrapolating the functions (for Pine developers):
This is more of a feature application, so feel free to customize it to your liking and add additional inputs. But here are some key considerations if you wish to apply this within your own code:
1. This is a BINARY classification task. The prediction must either be 0 or 1.
2. The function consists of 3 separate functions, the 2 first functions serve to build the confusion matrix and then the final "random_forest" function serves to perform the computations. You will need all 3 functions for implementation.
3. The model can only accept 2 independent variables.
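To make the voting idea tangible, here is a toy two-node voter (illustrative only, and far simpler than the script's confusion-matrix logic): each node finds the most similar historical reading of its feature and votes with the outcome that followed that case.
//@version=5
indicator("Two-node voting (sketch)")
window = input.int(500, "Training window")
rsi = ta.rsi(close, 14)
z   = (close - ta.sma(close, 75)) / ta.stdev(close, 75)

// binary label: did this bar exceed the prior bar's ATR range?
breakout = math.abs(close - close[1]) > ta.atr(14)[1] ? 1 : 0

// one node: vote with the label that followed the most similar past case
vote(series float feature, series int outcome, simple int length) =>
    int v = 0
    float best = na
    for i = 2 to length  // start at 2 so the current bar's own label is never used
        d = math.abs(feature - feature[i])
        if na(best) or d < best
            best := d
            v := outcome[i - 1]  // the label printed right after the similar case
    v

// cumulative vote: one node voting 1 is enough here; a stricter rule could require both
finalVote = vote(rsi, breakout, window) + vote(z, breakout, window) >= 1 ? 1 : 0
plot(finalVote, "Cumulative vote", style = plot.style_stepline)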
I believe that covers the function. Hopefully this wasn't too confusing; it is very statsy, but it's a fun function for me! I use Random Forest extensively in R and always like to try to convert R things to Pine Script.
Hope you enjoy!
Safe trades everyone!
Moment-Based Adaptive Detection
MBAD (Moment-Based Adaptive Detection): a method applicable to a wide range of purposes, like outlier or novelty detection, that requires building a sensible interval/set of thresholds. Unlike other methods that are static and rely on optimizations that inevitably lead to underfitting/overfitting, it dynamically adapts to your data distribution without any optimizations, MLE, or stuff, and provides a set of data-driven adaptive thresholds, based on a closed-form solution with O(n) algo complexity.
1.5 years ago, when I was still living in Versailles at my friend's house not knowing what was gonna happen in my life tomorrow, I made a damn right decision not to give up on one idea and to actually R&D it and see what’s up. It allowed me to create this one.
The Method Explained
I've been wondering about z-values: why exactly 6 sigmas, why 95%? Who decided that? Why would you supersede your opinion on data? Based on what? Your ego?
Then I consciously noticed a couple of things:
1) In control theory & anomaly detection, the popular threshold is 3 sigmas (yet nobody can firmly say why xD). If your data is Laplace, 3 sigmas is not enough; you’re gonna catch too many values, so it needs a higher sigma.
2) Yet strangely, the normal distribution has kurtosis of 3, and 6 for Laplace.
3) Kurtosis is a standardized moment, a moment scaled by stdev, so it means "X amount of something measured in stdevs."
4) You generate synthetic data, you check on real data (market data in my case, I am a quant after all), and you see on both that:
lower extension = mean - standard deviation * kurtosis ≈ data minimum
upper extension = mean + standard deviation * kurtosis ≈ data maximum
Why not simply use max/min?
- Lower info gain: We're not using all info available in all data points to estimate max/min; we just pick the current higher and lower values. Lol, it’s the same as dropping exponential smoothing with alpha = 0 on stationary data & calling it a day.
You can’t update the estimates of min and max when new data arrives containing info about the matter. All you can do is just extend min and max horizontally, so you're not using new info arriving inside new data.
- Mixing order and non-order statistics is a bad idea; we're losing integrity and coherence. That's why I don't like the Hurst exponent btw (and yes, I came up with better metrics of my own).
- Max & min are not even true order statistics, unlike a percentile (finding which requires sorting, which requires multiple passes over your data). To find min or max, you just need to do one traversal over your data. Then with or without any weighting, 100th percentile will equal max. So unlike a weighted percentile, you can’t do weighted max. Then while you can always check max and min of a geometric shape, now try to calculate the 56th percentile of a pentagram hehe.
TL;DR max & min are rather topological characteristics of data, just as the difference between starting and ending points. Not much to do with statistics.
Now the second part of the ballet is to work with data asymmetry:
1) Skewness is also scaled by stdev -> so it must represent a shift from the data midrange measured in stdevs -> given asymmetric data, we can include this info in our models. Unlike kurtosis, skewness has a sign, so we add it to both thresholds:
lower extension = mean - standard deviation * kurtosis + standard deviation * skewness
upper extension = mean + standard deviation * kurtosis + standard deviation * skewness
2) Now our method will work with skewed data as well, omg, ain’t it cool?
3) Hold up, but what about 5th and 6th moments (hyperskewness & hyperkurtosis)? They should represent something meaningful as well.
4) Perhaps if extensions represent current estimated extremums, what goes beyond? Limits, beyond which we expect data not to be able to pass given the current underlying process generating the data?
When you extend this logic to higher-order moments, i.e., hyperskewness & hyperkurtosis (5th and 6th moments), they measure asymmetry and shape of distribution tails, not its core as previous moments -> makes no sense to mix 4th and 3rd moments (skewness and kurtosis) with 5th & 6th, so we get:
lower limit = mean - standard deviation * hyperkurtosis + standard deviation * hyperskewness
upper limit = mean + standard deviation * hyperkurtosis + standard deviation * hyperskewness
While extensions model your data’s natural extremums based on current info residing in the data without relying on order statistics, limits model your data's maximum possible and minimum possible values based on current info residing in your data. If a new data point trespasses limits, it means that a significant change in the data-generating process has happened, for sure, not probably—a confirmed structural break.
And finally we use time and volume weighting to include order & process intensity information in our model.
I can't stress it enough: despite the popularity of these non-weighted methods applied in mainstream open-access time series modeling, it doesn’t make ANY sense to use non-weighted calculations on time series data . Time = sequence, it matters. If you reverse your time series horizontally, your means, percentiles, whatever, will stay the same. Basically, your calculations will give the same results on different data. When you do it, you disregard the order of data that does have order naturally. Does it make any sense to you? It also concerns regressions applied on time series as well, because even despite the slope being opposite on your reversed data, the centroid (through which your regression line always comes through) will be the same. It also might concern Fourier (yes, you can do weighted Fourier) and even MA and AR models—might, because I ain’t researched it extensively yet.
I still can’t believe it’s nowhere online in open access. No chance I’m the first one who got it. It’s literally in front of everyone’s eyes for centuries—why no one tells about it?
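A minimal, non-weighted sketch of the resulting thresholds (the time and volume weighting described above is omitted for brevity):
//@version=5
indicator("MBAD thresholds (sketch)", overlay = true)
len = input.int(200, "Length")
m   = ta.sma(close, len)
sd  = ta.stdev(close, len)

// standardized central moments of order 3..6
// (approximation: m and sd are the current window's values)
skew  = math.sum(math.pow((close - m) / sd, 3), len) / len
kurt  = math.sum(math.pow((close - m) / sd, 4), len) / len
hskew = math.sum(math.pow((close - m) / sd, 5), len) / len
hkurt = math.sum(math.pow((close - m) / sd, 6), len) / len

plot(m + sd * kurt + sd * skew, "Upper extension", color.teal)
plot(m - sd * kurt + sd * skew, "Lower extension", color.teal)
plot(m + sd * hkurt + sd * hskew, "Upper limit", color.red)
plot(m - sd * hkurt + sd * hskew, "Lower limit", color.red)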
How to use
That's easy: it can be applied to any, even non-stationary and/or heteroscedastic, time series to automatically detect novelties, outliers, anomalies, structural breaks, etc. In terms of quant trading, you can try using extensions for mean reversion trades and limits for emergency exits, for example. The market-making application is kinda obvious as well.
The only parameter the model has is length, and it should NOT be optimized but picked consciously based on the process/system you’re applying it to and based on the task. However, this part is not about sharing info & an open-access instrument with the world. This is about using dem instruments to do actual business, and we can’t talk about it.
∞
IU Price Density (Market Noise)
This Price Density indicator will help you understand what market noise is and how it is calculated and treated.
Market noise = when the market is moving up and down without any clear direction
The Price Density Indicator is a technical analysis tool used to measure the concentration or "density" of price movements within a specific range. It helps traders differentiate between noisy, choppy markets and trending ones.
I’ve developed a custom Pine Script indicator, "IU Price Density," designed to help traders distinguish between noisy, indecisive markets and clear trading opportunities. It can be applied across multiple markets.
How this works:
Formula = (Σ (High𝑖 - Low𝑖)) / (Max(High) - Min(Low))
Where,
High𝑖 = the high price at the 𝑖-th data point.
Low𝑖 = the low price at the 𝑖-th data point.
Max(High) = highest price over the data set.
Min(Low) = lowest price over the data set.
How to use it:
This indicator ranges from 0 to 10
Green(0-3) = Trending Market
Orange(3-6) = Market is normal
Red(6-10) = Noise market
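The formula translates almost directly into Pine; note that the published indicator additionally smooths and scales the value into the 0-10 range:
//@version=5
indicator("Price density (sketch)")
len = input.int(20, "Length")
density = math.sum(high - low, len) / (ta.highest(high, len) - ta.lowest(low, len))
plot(density, "Price density", style = plot.style_columns, color = density <= 3 ? color.green : density <= 6 ? color.orange : color.red)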
💡 Key Features:
Dynamic Visuals: The indicator uses color-coded signals—green for trending markets and red for noisy, volatile conditions—making it easy to identify optimal trading periods at a glance.
Background Shading: With background colors highlighting significant market conditions, traders can quickly assess when to engage or avoid certain trades.
Customizable Parameters: The length and smoothing factors allow for flexibility in adapting the indicator to various assets and timeframes.
Whether you're a swing trader or an intraday strategist, this tool provides valuable insights to improve your market analysis. I’m excited to bring this indicator to the community!
Position Size Using Manual Stop Loss [odnac]
This indicator calculates the risk per position based on user-defined settings.
Two Calculation Methods
1. Manual Stop Loss (%) & Manual Leverage
2. Manual Stop Loss (%) & Optimized Leverage
Settings
1. init_capital
Enter your current total capital.
2. Maximum Risk (%) per Position of Total Capital
Specify the percentage of your total funds to be risked for a single position.
3. manual_SL(%)
Enter the stop-loss percentage.
Range: 0.01 ~ 100
4. manual_leverage
Enter the leverage you wish to use.
Range: 1 ~ 100
Used in the first method (Manual Stop Loss (%) & Manual Leverage).
5. Safety Margin
Specify the safety margin for optimized leverage.
Range: 0.01 ~ 1
Used in the second method (Manual Stop Loss (%) & Optimized Leverage). Details are explained below.
Indicator Colors
Black: Indicates which method is being used.
White: Leverage.
First Green: Funds to be invested.
Second Green: Funds to be invested * Leverage.
First Red: Stop-loss (%).
Second Red: Stop-loss (%) * Leverage.
Details for Each Method:
1. Manual Stop Loss (%) & Manual Leverage
This method calculates the size of the funds based on user-defined stop-loss (%) and leverage settings.
White: manual_leverage.
First Green: Investment = Maximum Risk / (manual_SL / 100) / manual_leverage
Second Green: Maximum Risk / (manual_SL / 100)
First Red: manual_SL.
Second Red: manual_SL * manual_leverage
Ensure that the product of manual_SL and manual_leverage does not exceed 100.
If it does, there is a risk of liquidation.
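The first method's arithmetic as a sketch (variable names follow the settings above):
//@version=5
indicator("Position size (sketch)")
capital  = input.float(10000, "init_capital")
riskPct  = input.float(1.0,   "Maximum Risk (%) per Position")
slPct    = input.float(2.0,   "manual_SL (%)", minval = 0.01, maxval = 100)
leverage = input.float(5.0,   "manual_leverage", minval = 1, maxval = 100)

riskAmt    = capital * riskPct / 100             // money allowed to be lost
investment = riskAmt / (slPct / 100) / leverage  // first green: margin to commit
exposure   = investment * leverage               // second green: position size
effLossPct = slPct * leverage                    // second red: must stay below 100
plot(investment, "Funds to invest")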
2. Manual Stop Loss (%) & Optimized Leverage
This method calculates optimized leverage based on the user-defined stop-loss (%) and determines the size of the funds.
Optimization_LEVER = auto_leverage * safety_margin
auto_leverage = 100 / stop-loss (%), rounded down to the nearest whole number.
(Exception: If the stop-loss (%) is in the range of 0 ~ 1%, auto_leverage is always 100.)
Example:
If the stop-loss is 4%, auto_leverage = 25 (100 / 4 = 25).
However, 4% × 25 leverage equals 100%, meaning liquidation occurs even with a stop-loss.
To reduce this risk, the safety_margin value is applied.
White: auto_leverage * safety_margin
First Green: Investment = Maximum Risk / (manual_SL / 100) / optimization_LEVER
Second Green: Maximum Risk / (manual_SL / 100)
First Red: manual_SL.
Second Red: manual_SL * optimization_LEVER
lib_smc
Library "lib_smc"
This is an adaptation of LuxAlgo's Smart Money Concepts indicator with numerous changes. Main changes include integration of object based plotting, plenty of performance improvements, live tracking of Order Blocks, integration of volume profiles to refine Order Blocks, and many more.
This is a library for developers, if you want this converted into a working strategy, let me know.
buffer(item, len, force_rotate)
Parameters:
item (float)
len (int)
force_rotate (bool)
buffer(item, len, force_rotate)
Parameters:
item (int)
len (int)
force_rotate (bool)
buffer(item, len, force_rotate)
Parameters:
item (Profile type from robbatt/lib_profile/32)
len (int)
force_rotate (bool)
swings(len)
INTERNAL: detect swing points (HH and LL) in given range
Parameters:
len (simple int) : range to check for new swing points
Returns: the price levels where a new HH or LL was detected, else na
method init(this)
Namespace types: OrderBlockConfig
Parameters:
this (OrderBlockConfig)
method delete(this)
Namespace types: OrderBlock
Parameters:
this (OrderBlock)
method clear_broken(this, broken_buffer)
INTERNAL: delete internal order blocks box coordinates if top/bottom is broken
Namespace types: map
Parameters:
this (map)
broken_buffer (map)
Returns: any_bull_ob_broken, any_bear_ob_broken; these signals are true if a corresponding order block was broken/mitigated, and broken contains the broken block(s)
create_ob(id, mode, start_t, start_i, top, end_t, end_i, bottom, break_price, early_confirmation_price, config, init_plot, force_overlay)
INTERNAL: set internal order block coordinates
Parameters:
id (int)
mode (int) : 1: bullish, -1 bearish block
start_t (int)
start_i (int)
top (float)
end_t (int)
end_i (int)
bottom (float)
break_price (float)
early_confirmation_price (float)
config (OrderBlockConfig)
init_plot (bool)
force_overlay (bool)
Returns: signals are true if a corresponding order block was broken/mitigated
method align_to_profile(block, align_edge, align_break_price)
Namespace types: OrderBlock
Parameters:
block (OrderBlock)
align_edge (bool)
align_break_price (bool)
method create_profile(block, opens, tops, bottoms, closes, values, resolution, vah_pc, val_pc, args, init_calculated, init_plot, force_overlay)
Namespace types: OrderBlock
Parameters:
block (OrderBlock)
opens (array)
tops (array)
bottoms (array)
closes (array)
values (array)
resolution (int)
vah_pc (float)
val_pc (float)
args (ProfileArgs type from robbatt/lib_profile/32)
init_calculated (bool)
init_plot (bool)
force_overlay (bool)
method create_profile(block, resolution, vah_pc, val_pc, args, init_calculated, init_plot, force_overlay)
Namespace types: OrderBlock
Parameters:
block (OrderBlock)
resolution (int)
vah_pc (float)
val_pc (float)
args (ProfileArgs type from robbatt/lib_profile/32)
init_calculated (bool)
init_plot (bool)
force_overlay (bool)
track_obs(swing_len, hh, ll, top, btm, bull_bos_alert, bull_choch_alert, bear_bos_alert, bear_choch_alert, min_block_size, max_block_size, config_bull, config_bear, init_plot, force_overlay, enabled, extend_blocks, clear_broken_buffer_before, align_edge_to_value_area, align_break_price_to_poc, profile_args_bull, profile_args_bear, use_soft_confirm, soft_confirm_offset, use_retracements_with_FVG_out)
Parameters:
swing_len (int)
hh (float)
ll (float)
top (float)
btm (float)
bull_bos_alert (bool)
bull_choch_alert (bool)
bear_bos_alert (bool)
bear_choch_alert (bool)
min_block_size (float)
max_block_size (float)
config_bull (OrderBlockConfig)
config_bear (OrderBlockConfig)
init_plot (bool)
force_overlay (bool)
enabled (bool)
extend_blocks (simple bool)
clear_broken_buffer_before (simple bool)
align_edge_to_value_area (simple bool)
align_break_price_to_poc (simple bool)
profile_args_bull (ProfileArgs type from robbatt/lib_profile/32)
profile_args_bear (ProfileArgs type from robbatt/lib_profile/32)
use_soft_confirm (simple bool)
soft_confirm_offset (float)
use_retracements_with_FVG_out (simple bool)
method draw(this, config, extend_only)
Namespace types: OrderBlock
Parameters:
this (OrderBlock)
config (OrderBlockConfig)
extend_only (bool)
method draw(blocks, config)
INTERNAL: plot order blocks
Namespace types: array
Parameters:
blocks (array)
config (OrderBlockConfig)
method draw(blocks, config)
INTERNAL: plot order blocks
Namespace types: map
Parameters:
blocks (map)
config (OrderBlockConfig)
method cleanup(this, ob_bull, ob_bear)
removes all Profiles that are older than the latest OrderBlock from this profile buffer
Namespace types: array
Parameters:
this (array type from robbatt/lib_profile/32)
ob_bull (OrderBlock)
ob_bear (OrderBlock)
_plot_swing_points(mode, x, y, show_swing_points, linecolor_swings, keep_history, show_latest_swings_levels, trail_x, trail_y, trend)
INTERNAL: plot swing points
Parameters:
mode (int) : 1: bullish, -1 bearish block
x (int) : x-coordinate of swing point to plot (bar_index)
y (float) : y-coordinate of swing point to plot (price)
show_swing_points (bool) : switch to enable/disable plotting of swing point labels
linecolor_swings (color) : color for swing point labels and latest level lines
keep_history (bool) : whether to remove older swing point labels and keep only the most recent
show_latest_swings_levels (bool)
trail_x (int) : x-coordinate for latest swing point (bar_index)
trail_y (float) : y-coordinate for latest swing point (price)
trend (int) : the current trend 1: bullish, -1: bearish, to determine Strong/Weak Low/Highs
_pivot_lvl(mode, trend, hhll_x, hhll, super_hhll, filter_insignificant_internal_breaks)
INTERNAL: detect whether a structural level has been broken and if it was in trend direction (BoS) or against trend direction (ChoCh), also track the latest high and low swing points
Parameters:
mode (simple int) : detect 1: bullish, -1 bearish pivot points
trend (int) : current trend direction
hhll_x (int) : x-coordinate of newly detected hh/ll (bar_index)
hhll (float) : y-coordinate of newly detected hh/ll (price)
super_hhll (float) : level/y-coordinate of superior hhll (if this is an internal structure pivot level)
filter_insignificant_internal_breaks (bool) : if true, pivot points / internal structure breaks are ignored when the wick in trend direction is longer than the opposite wick (price is likely to push further in the direction of the main trend)
Returns: coordinates of internal structure that has been broken (x,y): start of structure, (trail_x, trail_y): tracking hh/ll after structure break, (bos_alert, choch_alert): signal whether a structural level has been broken
_plot_structure(x, y, is_bos, is_choch, line_color, line_style, label_style, label_size, keep_history)
INTERNAL: plot structural breaks (BoS/ChoCh)
Parameters:
x (int) : x-coordinate of newly broken structure (bar_index)
y (float) : y-coordinate of newly broken structure (price)
is_bos (bool) : whether this structural break was in trend direction
is_choch (bool) : whether this structural break was against trend direction
line_color (color) : color for the line connecting the structural level and the breaking candle
line_style (string) : style (line.style_dashed/solid) for the line connecting the structural level and the breaking candle
label_style (string) : style (label.style_label_down/up) for the label above/below the line connecting the structural level and the breaking candle
label_size (string) : size (size.small/tiny) for the label above/below the line connecting the structural level and the breaking candle
keep_history (bool) : whether to remove older swing point labels and keep only the most recent
structure_values(length, super_hh, super_ll, filter_insignificant_internal_breaks)
detect (and plot) structural breaks and the resulting new trend
Parameters:
length (simple int) : lookback period for swing point detection
super_hh (float) : level/y-coordinate of superior hh (for internal structure detection)
super_ll (float) : level/y-coordinate of superior ll (for internal structure detection)
filter_insignificant_internal_breaks (bool) : if true, pivot points / internal structure breaks are ignored when the wick in trend direction is longer than the opposite wick (price is likely to push further in the direction of the main trend)
Returns: trend: direction 1: bullish, -1: bearish, (bull_bos_alert, bull_choch_alert, top_x, top_y, trail_up_x, trail_up): whether and which level broke in a bullish direction, plus the trailing high, (bear_bos_alert, bear_choch_alert, btm_x, btm_y, trail_dn_x, trail_dn): same in bearish direction
structure_plot(trend, bull_bos_alert, bull_choch_alert, top_x, top_y, trail_up_x, trail_up, hh, bear_bos_alert, bear_choch_alert, btm_x, btm_y, trail_dn_x, trail_dn, ll, color_bull, color_bear, show_swing_points, show_latest_swings_levels, show_bos, show_choch, line_style, label_size, keep_history)
detect (and plot) structural breaks and the resulting new trend
Parameters:
trend (int) : current trend 1: bullish, -1: bearish
bull_bos_alert (bool) : if there was a bullish bos alert -> plot it
bull_choch_alert (bool) : if there was a bullish choch alert -> plot it
top_x (int) : latest swing high x
top_y (float) : latest swing high y
trail_up_x (int) : trailing high x
trail_up (float) : trailing high y
hh (float) : if there was a higher high
bear_bos_alert (bool) : if there was a bearish bos alert -> plot it
bear_choch_alert (bool) : if there was a bearish choch alert -> plot it
btm_x (int) : latest swing low x
btm_y (float) : latest swing low y
trail_dn_x (int) : trailing low x
trail_dn (float) : trailing low y
ll (float) : if there was a lower low
color_bull (color) : color for bullish BoS/ChoCh levels
color_bear (color) : color for bearish BoS/ChoCh levels
show_swing_points (bool) : whether to plot swing point labels
show_latest_swings_levels (bool) : whether to track and plot latest swing point levels with lines
show_bos (bool) : whether to plot BoS levels
show_choch (bool) : whether to plot ChoCh levels
line_style (string) : line style (line.style_dashed/solid) for plotted BoS/ChoCh levels
label_size (string) : label size of plotted BoS/ChoCh levels
keep_history (bool) : whether to remove older swing point labels and keep only the most recent
structure(length, color_bull, color_bear, super_hh, super_ll, filter_insignificant_internal_breaks, show_swing_points, show_latest_swings_levels, show_bos, show_choch, line_style, label_size, keep_history, enabled)
detect (and plot) structural breaks and the resulting new trend
Parameters:
length (simple int) : lookback period for swing point detection
color_bull (color) : color for bullish BoS/ChoCh levels
color_bear (color) : color for bearish BoS/ChoCh levels
super_hh (float) : level/y-coordinate of superior hh (for internal structure detection)
super_ll (float) : level/y-coordinate of superior ll (for internal structure detection)
filter_insignificant_internal_breaks (bool) : if true, pivot points / internal structure breaks are ignored when the wick in trend direction is longer than the opposite wick (price is likely to push further in the direction of the main trend)
show_swing_points (bool) : whether to plot swing point labels
show_latest_swings_levels (bool) : whether to track and plot latest swing point levels with lines
show_bos (bool) : whether to plot BoS levels
show_choch (bool) : whether to plot ChoCh levels
line_style (string) : line style (line.style_dashed/solid) for plotted BoS/ChoCh levels
label_size (string) : label size of plotted BoS/ChoCh levels
keep_history (bool) : whether to remove older swing point labels and keep only the most recent
enabled (bool)
_check_equal_level(mode, len, eq_threshold, enabled)
INTERNAL: detect equal levels (double top/bottom)
Parameters:
mode (int) : detect 1: bullish/high, -1 bearish/low pivot points
len (int) : lookback period for equal level (swing point) detection
eq_threshold (float) : maximum price offset for a level to be considered equal
enabled (bool)
Returns: eq_alert whether an equal level was detected and coordinates of the first and the second level/swing point
_plot_equal_level(show_eq, x1, y1, x2, y2, label_txt, label_style, label_size, line_color, line_style, keep_history)
INTERNAL: plot equal levels (double top/bottom)
Parameters:
show_eq (bool) : whether to plot the level or not
x1 (int) : x-coordinate of the first level / swing point
y1 (float) : y-coordinate of the first level / swing point
x2 (int) : x-coordinate of the second level / swing point
y2 (float) : y-coordinate of the second level / swing point
label_txt (string) : text for the label above/below the line connecting the equal levels
label_style (string) : style (label.style_label_down/up) for the label above/below the line connecting the equal levels
label_size (string) : size (size.tiny) for the label above/below the line connecting the equal levels
line_color (color) : color for the line connecting the equal levels (and its label)
line_style (string) : style (line.style_dotted) for the line connecting the equal levels
keep_history (bool) : whether to remove older swing point labels and keep only the most recent
equal_levels_values(len, threshold, enabled)
detect (and plot) equal levels (double top/bottom), returns coordinates
Parameters:
len (int) : lookback period for equal level (swing point) detection
threshold (float) : maximum price offset for a level to be considered equal
enabled (bool) : whether detection is enabled
Returns: (eqh_alert, eqh_x1, eqh_y1, eqh_x2, eqh_y2) whether an equal high was detected and coordinates of the first and the second level/swing point, (eql_alert, eql_x1, eql_y1, eql_x2, eql_y2) same for equal lows
equal_levels_plot(eqh_x1, eqh_y1, eqh_x2, eqh_y2, eql_x1, eql_y1, eql_x2, eql_y2, color_eqh, color_eql, show, keep_history)
detect (and plot) equal levels (double top/bottom), returns coordinates
Parameters:
eqh_x1 (int) : coordinates of first point of equal high
eqh_y1 (float) : coordinates of first point of equal high
eqh_x2 (int) : coordinates of second point of equal high
eqh_y2 (float) : coordinates of second point of equal high
eql_x1 (int) : coordinates of first point of equal low
eql_y1 (float) : coordinates of first point of equal low
eql_x2 (int) : coordinates of second point of equal low
eql_y2 (float) : coordinates of second point of equal low
color_eqh (color) : color for the line connecting the equal highs (and its label)
color_eql (color) : color for the line connecting the equal lows (and its label)
show (bool) : whether plotting is enabled
keep_history (bool) : whether to remove older swing point labels and keep only the most recent
Returns: (eqh_alert, eqh_x1, eqh_y1, eqh_x2, eqh_y2) whether an equal high was detected and coordinates of the first and the second level/swing point, (eql_alert, eql_x1, eql_y1, eql_x2, eql_y2) same for equal lows
equal_levels(len, threshold, color_eqh, color_eql, enabled, show, keep_history)
detect (and plot) equal levels (double top/bottom)
Parameters:
len (int) : lookback period for equal level (swing point) detection
threshold (float) : maximum price offset for a level to be considered equal
color_eqh (color) : color for the line connecting the equal highs (and its label)
color_eql (color) : color for the line connecting the equal lows (and its label)
enabled (bool) : whether detection is enabled
show (bool) : whether plotting is enabled
keep_history (bool) : whether to remove older swing point labels and keep only the most recent
Returns: (eqh_alert) whether an equal high was detected, (eql_alert) same for equal lows
_detect_fvg(mode, enabled, o, h, l, c, filter_insignificant_fvgs, change_tf)
INTERNAL: detect FVG (fair value gap)
Parameters:
mode (int) : detect 1: bullish, -1 bearish gaps
enabled (bool) : whether detection is enabled
o (float) : reference source open
h (float) : reference source high
l (float) : reference source low
c (float) : reference source close
filter_insignificant_fvgs (bool) : whether to calculate and filter small/insignificant gaps
change_tf (bool) : signal when the previous reference timeframe closed, triggers new calculation
Returns: whether a new FVG was detected and its top/mid/bottom levels
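For orientation, the classic three-candle FVG condition that detectors like this build on can be sketched as follows. This is not lib_smc's actual source; its internal rules add significance filtering and reference-timeframe handling on top:

```pine
//@version=5
indicator("FVG sketch", overlay = true)

// Classic three-candle fair value gap: the middle candle leaves a gap
// between bar[2]'s extreme and the current bar's opposite extreme.
bullish_fvg = low > high[2] and close[1] > high[2]
bearish_fvg = high < low[2] and close[1] < low[2]

bull_mid = math.avg(low, high[2])  // mid level of a bullish gap
bear_mid = math.avg(high, low[2]) // mid level of a bearish gap

plotshape(bullish_fvg, "Bull FVG", shape.triangleup, location.belowbar, color.green)
plotshape(bearish_fvg, "Bear FVG", shape.triangledown, location.abovebar, color.red)
```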
_clear_broken_fvg(mode, upper_boxes, lower_boxes)
INTERNAL: clear mitigated FVGs (fair value gaps)
Parameters:
mode (int) : detect 1: bullish, -1 bearish gaps
upper_boxes (array) : array that stores the upper parts of the FVG boxes
lower_boxes (array) : array that stores the lower parts of the FVG boxes
_plot_fvg(mode, show, top, mid, btm, border_color, extend_box)
INTERNAL: plot (and clear broken) FVG (fair value gap)
Parameters:
mode (int) : plot 1: bullish, -1 bearish gap
show (bool) : whether plotting is enabled
top (float) : top level of fvg
mid (float) : center level of fvg
btm (float) : bottom level of fvg
border_color (color) : color for the FVG box
extend_box (int) : how many bars into the future the FVG box should be extended after detection
fvgs_values(o, h, l, c, filter_insignificant_fvgs, change_tf, enabled)
detect (and plot / clear broken) FVGs (fair value gaps), and return alerts and level values
Parameters:
o (float) : reference source open
h (float) : reference source high
l (float) : reference source low
c (float) : reference source close
filter_insignificant_fvgs (bool) : whether to calculate and filter small/insignificant gaps
change_tf (bool) : signal when the previous reference timeframe closed, triggers new calculation
enabled (bool) : whether detection is enabled
Returns: (bullish_fvg_alert, bull_top, bull_mid, bull_btm): whether a new bullish FVG was detected and its top/mid/bottom levels, (bearish_fvg_alert, bear_top, bear_mid, bear_btm): same for bearish FVGs
fvgs_plot(bullish_fvg_alert, bull_top, bull_mid, bull_btm, bearish_fvg_alert, bear_top, bear_mid, bear_btm, color_bull, color_bear, extend_box, show)
Parameters:
bullish_fvg_alert (bool)
bull_top (float)
bull_mid (float)
bull_btm (float)
bearish_fvg_alert (bool)
bear_top (float)
bear_mid (float)
bear_btm (float)
color_bull (color) : color for bullish FVG boxes
color_bear (color) : color for bearish FVG boxes
extend_box (int) : how many bars into the future the FVG box should be extended after detection
show (bool) : whether plotting is enabled
Returns: (bullish_fvg_alert, bull_top, bull_mid, bull_btm): whether a new bullish FVG was detected and its top/mid/bottom levels, (bearish_fvg_alert, bear_top, bear_mid, bear_btm): same for bearish FVGs
fvgs(o, h, l, c, filter_insignificant_fvgs, change_tf, color_bull, color_bear, extend_box, enabled, show)
detect (and plot / clear broken) FVGs (fair value gaps)
Parameters:
o (float) : reference source open
h (float) : reference source high
l (float) : reference source low
c (float) : reference source close
filter_insignificant_fvgs (bool) : whether to calculate and filter small/insignificant gaps
change_tf (bool) : signal when the previous reference timeframe closed, triggers new calculation
color_bull (color) : color for bullish FVG boxes
color_bear (color) : color for bearish FVG boxes
extend_box (int) : how many bars into the future the FVG box should be extended after detection
enabled (bool) : whether detection is enabled
show (bool) : whether plotting is enabled
Returns: (bullish_fvg_alert): whether a new bullish FVG was detected, (bearish_fvg_alert): same for bearish FVGs
OrderBlock
Fields:
id (series int)
dir (series int)
left_top (chart.point)
right_bottom (chart.point)
break_price (series float)
early_confirmation_price (series float)
ltf_high (array)
ltf_low (array)
ltf_volume (array)
plot (Box type from robbatt/lib_plot_objects/49)
profile (Profile type from robbatt/lib_profile/32)
trailing (series bool)
extending (series bool)
awaiting_confirmation (series bool)
touched_break_price_before_confirmation (series bool)
soft_confirmed (series bool)
has_fvg_out (series bool)
hidden (series bool)
broken (series bool)
OrderBlockConfig
Fields:
show (series bool)
show_last (series int)
show_id (series bool)
show_profile (series bool)
args (BoxArgs type from robbatt/lib_plot_objects/49)
txt (series string)
txt_args (BoxTextArgs type from robbatt/lib_plot_objects/49)
delete_when_broken (series bool)
broken_args (BoxArgs type from robbatt/lib_plot_objects/49)
broken_txt (series string)
broken_txt_args (BoxTextArgs type from robbatt/lib_plot_objects/49)
broken_profile_args (ProfileArgs type from robbatt/lib_profile/32)
use_profile (series bool)
profile_args (ProfileArgs type from robbatt/lib_profile/32)
Candle Thermals
This indicator colors candles based on their percentage price change, relative to the average, maximum, and minimum changes over the last 100 candles.
- It calculates the percentage change of every candle
- Calculates the minimum, maximum, and average percentage change over the last 100 bars
- Changes the color of each candle based on where its percentage change sits between the min/max values
- The brightest candles mark the moves with the highest compounding effect on your account if you act on them at the open
- Candles with a percentage change close to the average are barely visible = lowest compounding effect on your account
This indicator functions like a "heatmap" for candles, highlighting the relative volatility of price movements in both directions. Strong bullish candles are brighter green, and strong bearish candles are brighter red. It's particularly useful for traders wanting quick visual feedback on price volatility and strength trends within the last 100 bars.
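This is not the published source, but a simplified Pine Script™ reconstruction of the described heatmap logic (scaling each candle's move against the largest move in the window) might look like this:

```pine
//@version=5
indicator("Candle heatmap sketch", overlay = true)

len = 100
pct = (close - open) / open * 100                  // percentage change per candle

pct_abs   = math.abs(pct)
pct_max   = ta.highest(pct_abs, len)               // largest move in the window
intensity = pct_max == 0 ? 0.0 : pct_abs / pct_max // 0 = faint, 1 = brightest

// Brighter = bigger move relative to the last 100 bars; near-average moves fade out
bull_col = color.from_gradient(intensity, 0, 1, color.new(color.green, 90), color.new(color.green, 0))
bear_col = color.from_gradient(intensity, 0, 1, color.new(color.red, 90), color.new(color.red, 0))
barcolor(close >= open ? bull_col : bear_col)
```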
Volatility-Adjusted Trend Deviation Statistics (C-Ratios)
The Pine Script logic provided generates and displays a table with key information derived from VWMA, EMA, and ATR-based "C Ratios," alongside stochastic oscillators, correlation coefficients, Z-scores, and bias indicators. Here's an explanation of the logic and what the table output conveys:
Key Calculations and Their Purpose
VWMA and EMA (Smoothing Lengths):
Multiple EMAs are calculated using VWMA as the source, with lengths spanning short-term (13) to long-term (233).
These EMAs provide a hierarchy of smoothed price levels to assess trends over various time horizons.
ATR-Based "C Ratios":
The C Ratios measure deviations of smoothed prices (a_1 to a_7) from the source price relative to ATR at corresponding lengths.
These values normalize deviations, giving insight into the price's relative movement strength and direction over various periods (a minimal sketch follows this list).
Stochastic Oscillator for C Ratios:
Calculates normalized stochastic values for each C Ratio to assess overbought/oversold conditions dynamically over a rolling window.
Helps identify short-term momentum trends within the broader context of C Ratios.
Weighted and Mean C Ratio:
The script computes both an arithmetic mean (c_mean) and a weighted mean (c_mean_w) for all C Ratios.
Weighted mean emphasizes short-term values using predefined weights.
Trend Bias and Reversal Detection:
The script calculates Z-scores for c_mean to identify statistically significant deviations.
It combines Z-scores and weighted C Ratio values to determine:
Bias (Bullish/Bearish based on Z-score thresholds and mean values).
Reversals (based on relative positioning and how the weighted and un-weighted c_mean move).
Correlation Coefficient:
Correlation of mean C Ratios (c_mean) with bar indices over the short-term length (sl) assesses the strength and direction of trend consistency.
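The exact weights and the seven lengths are not reproduced here, but one ATR-normalized deviation of this kind, with its rolling stochastic, can be sketched as:

```pine
//@version=5
indicator("C-Ratio sketch")

len = 13                                    // one of the smoothing lengths
sl  = 50                                    // short-term stochastic window

smoothed = ta.ema(ta.vwma(close, len), len) // EMA with VWMA as source
c_ratio  = (close - smoothed) / ta.atr(len) // deviation measured in ATR units

// Rolling stochastic of the ratio for overbought/oversold reads
lo    = ta.lowest(c_ratio, sl)
hi    = ta.highest(c_ratio, sl)
stoch = hi == lo ? 50.0 : 100 * (c_ratio - lo) / (hi - lo)

plot(c_ratio, "C Ratio")
plot(stoch, "Stochastic of C Ratio", color.orange)
```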
Table Output and Its Meaning
Stochastic Strength:
Displays the average stochastic value derived from all C Ratios.
Text: Shows overbought/oversold conditions (Overbought, Oversold, or ---).
Color: Green for strong upward momentum, red for downward, and white for neutral.
Long-term Correlation:
List of Lengths: Define the list of EMA and ATR lengths explicitly.
Calculate Mean C Ratios: For each length in the list, calculate the mean C Ratio, averaging these values over the entire dataset.
Store Lengths and Mean C Ratios: Maintain arrays for the lengths and their corresponding mean C Ratios.
Correlation: Compute the Pearson correlation between the list of lengths and the mean C Ratios.
Text: Indicates Uptrend, Downtrend, or neutral (---).
Color: Green for positive (uptrend), red for negative (downtrend), and white for neutral.
Z-Score Bias:
Assesses the statistical deviation of C Ratios from their historical mean.
Text: Bullish Bias, Bearish Bias, or --- (neutral).
Color: Green or red based on the direction and significance of the Z-score.
C-Ratio Mean:
Displays the weighted average C Ratio (c_mean_w) or a reversal condition.
Text: If no reversal is detected, shows c_mean_w; otherwise, a reversal condition (Bullish Reversal, Bearish Reversal).
Color: Indicates the strength and direction of the bias or reversal.
Practical Insights
Trend Identification: Correlation coefficients, Z-scores, and stochastic values collectively highlight whether the market is trending and the trend's direction.
Momentum and Volatility: Stochastic and ATR-normalized C Ratios provide insights into the momentum and price movement consistency across different timeframes.
Bias and Reversal Detection: The script highlights potential shifts in market sentiment or direction (bias or reversal) using statistical measures.
Customization: Users can toggle plots and analyze specific EMA lengths or focus on combined metrics like the weighted C Ratio.
BacktestLibrary
Library "BacktestLibrary"
A library providing functions for equity calculation and performance metrics.
since(date, active)
: Calculates the number of candles since a specified date.
Parameters:
date (simple float) : (simple float): The starting date in timestamp format (e.g., input.time(timestamp()))
active (simple bool) : (simple bool): If true, counts the number of candles since the date; if false, returns 0.
Returns: (int): The number of candles since the specified date.
buy_and_hold(r, startDate)
: Calculates the Buy and Hold Equity from a specified date.
Parameters:
r (float) : (series float): Daily returns of the asset (e.g., 0.02 for 2% move).
startDate (simple float) : (simple float): Timestamp of the starting date for the equity calculation.
Returns: (float): Buy and Hold Equity of the asset from the specified date.
equity(sig, threshold, r, startDate, signals)
: Calculates the strategy's equity on a candle-by-candle basis.
Parameters:
sig (float) : (series float): Signal values; positive for long, negative for short.
threshold (simple float) : (simple float): Signal threshold for entering trades.
r (float) : (series float): Daily returns of the asset (e.g., 0.02 for 2% move).
startDate (simple float) : (simple float): Timestamp of the starting date for the equity calculation.
signals (simple string) : (simple string): Type of signals to backtest ("Long & Short", "Long Only", "Short Only").
Returns: (float): Strategy equity on a candle-by-candle basis.
PerformanceMetrics(base, Lookback, startDate)
: Calculates performance metrics of a strategy from a specified date.
Parameters:
base (float) : (series float): Equity values of the strategy or Buy and Hold equity.
Lookback (int) : (series int): Number of periods since the start date; recommended to use the 'since' function.
startDate (simple float) : (simple float): Timestamp of the starting date for the equity calculation.
Returns: (array&lt;float&gt;): Array of performance metrics.
PerfMetricTable(buy_and_hold, strategy)
: Plots a table comparing performance metrics of Buy and Hold and Strategy equity.
Parameters:
buy_and_hold (array) : (array&lt;float&gt;): Metrics from the PerformanceMetrics() function for Buy and Hold.
strategy (array) : (array&lt;float&gt;): Metrics from the PerformanceMetrics() function for the strategy.
Returns: Table displaying the performance metrics comparison.
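A hypothetical usage sketch of the library follows. The import path and the example signal are placeholders; substitute the actual publisher and version:

```pine
//@version=5
indicator("BacktestLibrary usage sketch")
// Placeholder import path: replace USER/1 with the actual publisher/version
import USER/BacktestLibrary/1 as bt

startDate = input.time(timestamp("1 Jan 2023 00:00"), "Start Date")
r   = close / close[1] - 1                   // daily returns (e.g., 0.02 for 2%)
sig = ta.ema(close, 20) - ta.ema(close, 100) // toy signal: positive = long

strat = bt.equity(sig, 0.0, r, startDate, "Long & Short")
hold  = bt.buy_and_hold(r, startDate)

plot(strat, "Strategy Equity", color.teal)
plot(hold, "Buy & Hold", color.gray)
```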
Portfolio [Afnan]
🚀 Portfolio - Advanced Portfolio Management Indicator 📊
A game-changing portfolio management tool designed to help traders stay on top of their positions and manage risk efficiently. This indicator combines detailed tracking, real-time analytics, and visual clarity to ensure traders are well-equipped for the dynamic world of financial markets.
📈 Key Features 💡
Track up to 14 positions with ease
Real-time Profit & Loss (P&L) updates and risk metrics
Visual representation of entry, stop-loss (SL), and target levels
Alerts for stop-loss breaches and target achievements
Comprehensive portfolio summaries for quick analysis
Customizable options to suit individual trading styles
🔍 Main Components ⚙️
📊 1. Position Tracking
Detailed position data: entry, stop-loss, target levels, and more
Real-time risk-reward ratios
Insights into position size and exposure percentages
Continuous updates on P&L in real-time
📉 2. Visual Indicators
Clear visual markers for entry, SL, and target prices
Price labels with detailed percentage changes
Indicators that show the current position's market status
💼 3. Portfolio Summary
Aggregate account values and exposure
Summarized P&L metrics across all positions
Risk management insights for better decision-making
Daily performance tracking to evaluate strategies
⚠️ 4. Alert System
Instant notifications for stop-loss breaches
Alerts when target prices are hit
Alerts operate for the current chart symbol
⚡ Customization Options 🎨
Show or hide specific data columns
Adjust the table's position and size for better visibility
Personalize color schemes and text styles
Switch between full portfolio view and single symbol focus
📱 How to Use 📝
Input your positions in the indicator's settings
Enable or disable specific positions dynamically
Customize display preferences to your liking
Set up alerts for proactive risk management
Monitor all your trading activities in one comprehensive dashboard
📌 Important Notes ℹ️
Compatible with any trading symbol
Updates seamlessly during market hours
Alerts are specific to the currently active chart symbol
Maximum capacity: 14 simultaneous positions
Created by: @AfnanTAjuddin
⚠️ Disclaimer ⚠️
This indicator is a tool for informational purposes only. Ensure all calculations are verified and consult a financial professional before making investment decisions.
🎯 "Stay disciplined, trade smart, and let data guide your decisions." 📊
ArrayMovingAverages
Library "ArrayMovingAverages"
This library adds several moving average methods to arrays, so you can call, e.g.:
myArray.ema(3)
method emaArray(id, length)
Calculate Exponential Moving Average (EMA) for Arrays
Namespace types: array
Parameters:
id (array) : (array) Input array
length (int) : (int) Length of the EMA
Returns: (array) Array of EMA values
method ema(id, length)
Get the last value of the EMA array
Namespace types: array
Parameters:
id (array) : (array) Input array
length (int) : (int) Length of the EMA
Returns: (float) Last EMA value or na if empty
method rmaArray(id, length)
Calculate Rolling Moving Average (RMA) for Arrays
Namespace types: array
Parameters:
id (array) : (array) Input array
length (int) : (int) Length of the RMA
Returns: (array) Array of RMA values
method rma(id, length)
Get the last value of the RMA array
Namespace types: array
Parameters:
id (array) : (array) Input array
length (int) : (int) Length of the RMA
Returns: (float) Last RMA value or na if empty
method smaArray(id, windowSize)
Calculate Simple Moving Average (SMA) for Arrays
Namespace types: array
Parameters:
id (array) : (array) Input array
windowSize (int) : (int) Window size for calculation, defaults to array size
Returns: (array) Array of SMA values
method sma(id, windowSize)
Get the last value of the SMA array
Namespace types: array
Parameters:
id (array) : (array) Input array
windowSize (int) : (int) Window size for calculation, defaults to array size
Returns: (float) Last SMA value or na if empty
method wmaArray(id, windowSize)
Calculate Weighted Moving Average (WMA) for Arrays
Namespace types: array
Parameters:
id (array) : (array) Input array
windowSize (int) : (int) Window size for calculation, defaults to array size
Returns: (array) Array of WMA values
method wma(id, windowSize)
Get the last value of the WMA array
Namespace types: array
Parameters:
id (array) : (array) Input array
windowSize (int) : (int) Window size for calculation, defaults to array size
Returns: (float) Last WMA value or na if empty
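A hypothetical usage sketch (the import path is a placeholder; substitute the actual publisher and version):

```pine
//@version=5
indicator("ArrayMovingAverages usage sketch")
// Placeholder import path: replace USER/1 with the actual publisher/version
import USER/ArrayMovingAverages/1 as ama

// Keep a rolling window of the last 10 closes in an array
var myArray = array.new<float>()
array.push(myArray, close)
if array.size(myArray) > 10
    array.shift(myArray)

plot(myArray.ema(3), "EMA(3) of array")
plot(myArray.sma(5), "SMA(5) of array")
```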
Quick scan for signal
🙏🏻 Hey TV, this is QSFS, following:
^^ Quick scan for drift (QSFD)
^^ Quick scan for cycles (QSFC)
As mentioned before, ML trading is all about spotting any kind of non-randomness, and this metric (along with 2 previously posted) gonna help ya'll do it fast. This one will show you whether your time series possibly exhibits mean-reverting / consistent / noisy behavior, that can be later confirmed or denied by more sophisticated tools. This metric is O(n) in windowed mode and O(1) if calculated incrementally on each data update, so you can scan Ks of datasets w/o worrying about melting da ice.
^^ windowed mode
Now the post will be divided into several sections, and a couple of things I guess you’ve never seen or thought about in your life:
1) About Efficiency Ratios posted there on TV;
Some of you might say this is the Efficiency Ratio you’ve seen in Perry's book. Firstly, I can assure you that neither me nor Perry, just as X amount of quants all over the world and who knows who else, would say smth like, "I invented it," lol. This is just a thing you R&D when you need it. Secondly, I invite you (and mods & admin as well) to take a lil glimpse at the following screenshot:
^^ not cool...
So basically, all the Efficiency Ratios that were copypasted to our platform suffer the same bug: dudes don’t know how indexing works in Pine Script. I mean, it’s ok, I been doing the same mistakes as well, but loxx, cmon bro, you... If you guys ever read it, the lines 20 and 22 in da code are dedicated to you xD
2) About the metric;
This supports both moving window mode when Length > 0 and all-data expanding window mode when Length < 1, calculating incrementally from the very first data point in the series: O(n) on history, O(1) on live updates.
Now, why do I SQRT transform the result? This is a natural action since the metric (being a ratio in essence) is bounded between 0 and 1, so it can be modeled with a beta distribution. When you SQRT transform it, it still stays beta (think what happens when you apply a square root to 0.01 or 0.99), but it becomes symmetric around its typical value and starts to follow a bell-shaped curve. This can be easily checked with a normality test or by applying a set of percentiles and seeing the distances between them are almost equal.
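The post doesn't publish the formula, but a Kaufman-style efficiency ratio with the square-root transform, indexed consistently (net change over len bars against the path length over exactly those len changes), would look like:

```pine
//@version=5
indicator("QSFS-style metric sketch")

len = input.int(50, "Length (moving window)")

// Net displacement over len bars vs. total path length over the same span:
// math.sum over len bars covers exactly the changes from close[len] to close.
net  = math.abs(close - close[len])
path = math.sum(math.abs(ta.change(close)), len)
er   = path == 0 ? 0.0 : net / path // bounded in [0, 1]

// Square-root transform symmetrizes the bounded ratio (see note above)
plot(math.sqrt(er), "sqrt(ER)")
```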
Then I noticed that on different moving window sizes, the typical value of the metric seems to slide: higher window sizes lead to lower typical values across the moving windows. Turned out this can be modeled the same way confidence intervals are made. Lines 34 and 35 explain it all, I guess. You can see smth alike on an autocorrelogram. These two match the mean & mean + 1 stdev applied to the metric. This way, we’ve just magically received data to estimate alpha and beta parameters of the beta distribution using the method of moments. Having alpha and beta, we can now estimate everything further. Btw, there’s an alternative parameterization for beta distributions based on data length.
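For reference, the textbook method-of-moments fit for a Beta(α, β) distribution, given a sample mean $m$ and variance $v$ with $0 < m < 1$ and $v < m(1-m)$, is:

$$\alpha = m\left(\frac{m(1-m)}{v} - 1\right), \qquad \beta = (1-m)\left(\frac{m(1-m)}{v} - 1\right)$$

With α and β in hand, any percentile or moment of the fitted distribution follows.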
Now what you’ll see next is... u guys actually have no idea how deep and unrealistically minimalistic the underlying math principles are here.
I’m sure I’m not the only one in the universe who figured it out, but the thing is, it’s nowhere online or offline. By calculating higher-order moments & combining them, you can find natural adaptive thresholds that can later be used for anomaly detection/control applications for any data. No hardcoded thresholds, purely data-driven. Imma come back to this in one of the next drops, but the truest ones can already see it in this code. This way we get dem thresholds.
Your main thresholds are: basis, upper, and lower deviations. You can follow the common logic I’ve described in my previous scripts on how to use them. You just register an event when the metric goes higher/lower than a certain threshold based on what you’re looking for. Then you take the time series and confirm a certain behavior you were looking for by using an appropriate stat test. Or just run a certain strategy.
To avoid numerous triggers when the metric jitters around a threshold, you can follow this logic: forget about one threshold if touched, until another threshold is touched.
In general, when the metric gets higher than certain thresholds, like upper deviation, it means the signal is stronger than noise. You confirm it with a more sophisticated tool & run momentum strategies if drift is in place, or volatility strategies if there’s no drift in place. Otherwise, you confirm & run ~ mean-reverting strategies, regardless of whether there’s drift or not. Just don’t operate against the trend—hedge otherwise.
3) Flex;
Extension and limit thresholds based on distribution moments gonna be discussed properly later, but now you can see this:
^^ magic
Look at the thresholds—adaptive and dynamic. Do you see any optimizations? No ML, no DL, closed-form solution, but how? Just a formula based on a couple of variables? Maybe it’s just how the Universe works, but how can you know if you don’t understand how fundamentally numbers 3 and 15 are related to the normal distribution? Hm, why do they always say 3 sigmas but can’t say why? Maybe you can be different and say why?
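If the hint is about the even moments of the Gaussian: for $X \sim N(0, \sigma^2)$ they follow the double factorial,

$$\mathbb{E}\left[X^{2k}\right] = (2k-1)!!\,\sigma^{2k}, \quad \text{e.g.} \quad \mathbb{E}\left[X^{4}\right] = 3\sigma^{4}, \quad \mathbb{E}\left[X^{6}\right] = 15\sigma^{6}$$

which is presumably where 3 and 15 come from.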
This is the primordial power of statistical modeling.
4) Thanks;
I really wanna dedicate this to Charlotte de Witte & Marion Di Napoli, and their new track "Sanctum." It really gets you connected to the Source—I had it in my soul when I was doing all this ∞