Recessions & crises shading (custom dates & stats)
Shades your chart background to flag events such as crises or recessions, in a similar fashion to what you see on FRED charts. The advantage of this indicator over others is that you can quickly input custom event dates as text in the menu to analyse their impact on your specific symbol. The script automatically labels, calculates and displays the peak-to-trough percentage corrections on your current chart.
By default the indicator is configured to show the last 6 US recessions. If you have custom events that will benefit others, just paste the input string in the comments below so others can simply copy/paste it into their indicator.
Example event input (No spaces allowed except for the label name. Enter dates as YYYY-MM-DD.)
2020-02-01,2020-03-31,COVID-19
2007-12-01,2009-05-31,Subprime mortgages
2001-03-01,2001-10-30,Dot-com bubble
1990-07-01,1991-03-01,Oil shock
1981-07-01,1982-11-01,US unemployment
1980-01-01,1980-07-01,Volcker
1973-11-01,1975-03-01,OPEC
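For anyone curious about the mechanics, here is a minimal Python sketch of how event strings in this format could be parsed and a peak-to-trough correction computed. This is an illustration only, not the indicator's actual Pine Script; the sample price list is a made-up placeholder.

```python
from datetime import date

def parse_event(line):
    # "YYYY-MM-DD,YYYY-MM-DD,Label" -> (start, end, label); label may contain spaces.
    start, end, label = line.split(",", 2)
    return date.fromisoformat(start), date.fromisoformat(end), label

def peak_to_trough(prices):
    # prices: ordered list of closes inside the event window.
    peak, worst = float("-inf"), 0.0
    for close in prices:
        peak = max(peak, close)
        worst = min(worst, (close - peak) / peak * 100.0)
    return worst  # most negative % correction from the running peak

start, end, label = parse_event("2020-02-01,2020-03-31,COVID-19")
print(label, peak_to_trough([340.0, 310.0, 255.0, 280.0]))  # COVID-19 -25.0
```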
Statistics
Trend Reversal Probability Calculator
The "Trend Reversal Probability Calculator" is a TradingView indicator that calculates the probability of a trend reversal based on the crossover of multiple moving averages and the rate of change (ROC) of their slopes. This indicator is designed to help traders identify potential trend reversals by providing signals when the short-term moving averages start to slope in the opposite direction of the long-term moving average.
To use the indicator, simply add it to your TradingView chart and adjust the input parameters according to your preferences. The input parameters include the length of the moving averages, the ROC length (trend sensitivity), and the reversal sensitivity (signal percentage).
The indicator calculates the ROC of each moving average's slope and determines whether the short-term moving averages are sloping in the opposite direction of the long-term moving average. It then counts how many short-term moving averages meet this condition and derives the reversal probability from that percentage.
When the probability of a trend reversal is high, a bullish or bearish signal is generated, depending on the direction of the reversal. The bullish signal is generated when the short-term moving averages start to slope upward, and the bearish signal is generated when the short-term moving averages start to slope downward.
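For illustration, here is a hedged Python sketch of the counting logic described above. The moving-average lengths, ROC window, and signal threshold are illustrative assumptions, not the indicator's actual defaults.

```python
import numpy as np

def reversal_probability(closes, short_lengths=(10, 20, 30, 40), long_length=100,
                         roc_len=3, signal_pct=70.0):
    closes = np.asarray(closes, dtype=float)
    sma = lambda n: np.convolve(closes, np.ones(n) / n, mode="valid")
    # Slope proxy: change of each moving average over roc_len bars.
    slope = lambda ma: ma[-1] - ma[-1 - roc_len]
    long_slope = slope(sma(long_length))
    # Count short MAs whose slope opposes the long MA's slope.
    opposing = sum(1 for n in short_lengths if slope(sma(n)) * long_slope < 0)
    prob = 100.0 * opposing / len(short_lengths)
    bullish = prob >= signal_pct and long_slope < 0  # shorts turning up in a downtrend
    bearish = prob >= signal_pct and long_slope > 0  # shorts turning down in an uptrend
    return prob, bullish, bearish
```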
Traders can use the "Trend Reversal Probability Calculator" to identify potential trend reversals and adjust their trading strategies accordingly. It is important to note that this indicator is not a guarantee of a trend reversal and should be used in conjunction with other technical analysis tools to make informed trading decisions.
relative performance
This indicator is built to measure the performance of a stock vs the index of your choice. It is best used for the intraday session because it doesn't take gaps into account when doing the calculation. Here is how I do the math (using AAPL compared to SPY for simplicity):
(change AAPL / ATR AAPL) - (change SPY / ATR SPY) * beta factor * volume factor
Change is calculated open to close for each candle instead of close to close; this is why gaps do not affect the calculation.
Blue columns are an instant snapshot of the RP.
Red and green columns are the moving average of the blue columns.
Limit is the max value for the blue columns when plotting them on the chart, but it doesn't affect the calculation.
Options:
Index: defaults to SPY, but you can use any symbol.
Moving average choice: lets you choose between EMA and SMA for the green and red columns.
Rolling average length: number of bars for the moving average.
I made an auto-adjust for the 5 min and 2 min charts so you can switch between both charts and keep the same average (default values are 6x 5 min and 15x 2 min, giving you the average of the last 30 min).
Volume weighting lets you choose whether to apply a volume factor or not. The volume factor only multiplies the result of the price move; it cannot flip it from positive to negative.
This is the calculation:
(volume AAPL / volume SMA AAPL) / (volume SPY / volume sma SPY)
meaning that a higher volume on the ticker compared to its SMA, while having a lower volume on SPY, will give you a big relative performance.
You can choose the number of bars in the average for the volume.
The beta factor works the same way the volume factor does. You have to enter your beta manually; the default is set to 1.5.
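To make the formulas above concrete, here is a minimal Python sketch of the full RP calculation as I read it from this description; the array names and the use of a simple SMA for the volume baseline are illustrative assumptions.

```python
import numpy as np

def relative_performance(stock_o, stock_c, stock_atr, stock_vol,
                         idx_o, idx_c, idx_atr, idx_vol,
                         beta=1.5, vol_len=30, use_volume=True):
    # Open-to-close change, so overnight gaps never enter the calculation.
    stock_move = (stock_c - stock_o) / stock_atr
    idx_move = (idx_c - idx_o) / idx_atr
    rp = (stock_move - idx_move) * beta
    if use_volume:
        sma = lambda v: np.convolve(v, np.ones(vol_len) / vol_len, mode="same")
        # (volume / its SMA) for the stock, relative to the same ratio for the index.
        rp *= (stock_vol / sma(stock_vol)) / (idx_vol / sma(idx_vol))
    return rp  # one value per bar; plotted as the blue columns
```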
Table
Top line: the blue square is your RP value (same as the blue columns) and your reference ticker.
Middle line: percentage move from the open (9:30 open) for your stock on the left and the reference on the right.
Bottom line: beta on the left and volume factor on the right.
Feel free to ask questions or suggest modification ideas!
Uptrend Downtrend Loopback Candle Identification Lib
This library identifies uptrends and downtrends using a lookback candle analysis method. It contains two functions: uptrendLoopbackCandleIdentification() and downtrendLoopbackCandleIdentification(). These functions check whether the current candle is part of an uptrend or downtrend, respectively, based on the specified lookback period.
The uptrendLoopbackCandleIdentification() function takes two arguments: index, which is the index of the current bar, and lookbackPeriod, which is the number of previous candles to check for an uptrend. The function returns false if the index is less than the lookback period. Otherwise, it initializes a boolean variable isHigherHigh as true and loops through the previous candles. If any of the previous candles has a higher high than the current candle, isHigherHigh is set to false, and the loop breaks. Finally, the function returns the value of isHigherHigh.
The downtrendLoopbackCandleIdentification() function takes the same arguments and returns false if the index is less than the lookback period. The function initializes a boolean variable isHigherLow as true and loops through the previous candles. If any of the previous candles has a higher low than the current candle, isHigherLow is set to false, and the loop breaks. The function returns the value of isHigherLow.
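Here is a small Python rendering of the two checks, transcribed exactly as the description states them (the library itself is Pine Script):

```python
def uptrend_lookback(highs, index, lookback_period):
    # True when no candle in the lookback window has a higher high than the current one.
    if index < lookback_period:
        return False
    is_higher_high = True
    for i in range(1, lookback_period + 1):
        if highs[index - i] > highs[index]:
            is_higher_high = False
            break
    return is_higher_high

def downtrend_lookback(lows, index, lookback_period):
    # Transcribed as described above: flags when no prior candle has a higher low.
    if index < lookback_period:
        return False
    is_higher_low = True
    for i in range(1, lookback_period + 1):
        if lows[index - i] > lows[index]:
            is_higher_low = False
            break
    return is_higher_low
```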
Endpointed SSA of Price [Loxx]
The Endpointed SSA of Price: A Comprehensive Tool for Market Analysis and Decision-Making
The financial markets present sophisticated challenges for traders and investors as they navigate the complexities of market behavior. To effectively interpret and capitalize on these complexities, it is crucial to employ powerful analytical tools that can reveal hidden patterns and trends. One such tool is the Endpointed SSA of Price, which combines the strengths of Caterpillar Singular Spectrum Analysis, a sophisticated time series decomposition method, with insights from the fields of economics, artificial intelligence, and machine learning.
The Endpointed SSA of Price has its roots in the interdisciplinary fusion of mathematical techniques, economic understanding, and advancements in artificial intelligence. This unique combination allows for a versatile and reliable tool that can aid traders and investors in making informed decisions based on comprehensive market analysis.
The Endpointed SSA of Price is not only valuable for experienced traders but also serves as a useful resource for those new to the financial markets. By providing a deeper understanding of market forces, this innovative indicator equips users with the knowledge and confidence to better assess risks and opportunities in their financial pursuits.
█ Exploring Caterpillar SSA: Applications in AI, Machine Learning, and Finance
Caterpillar SSA (Singular Spectrum Analysis) is a non-parametric method for time series analysis and signal processing. It is based on a combination of principles from classical time series analysis, multivariate statistics, and the theory of random processes. The method was initially developed in the early 1990s by a group of Russian mathematicians, including Golyandina, Nekrutkin, and Zhigljavsky.
Background Information:
SSA is an advanced technique for decomposing time series data into a sum of interpretable components, such as trend, seasonality, and noise. This decomposition allows for a better understanding of the underlying structure of the data and facilitates forecasting, smoothing, and anomaly detection. Caterpillar SSA is a particular implementation of SSA that has proven to be computationally efficient and effective for handling large datasets.
Uses in AI and Machine Learning:
In recent years, Caterpillar SSA has found applications in various fields of artificial intelligence (AI) and machine learning. Some of these applications include:
1. Feature extraction: Caterpillar SSA can be used to extract meaningful features from time series data, which can then serve as inputs for machine learning models. These features can help improve the performance of various models, such as regression, classification, and clustering algorithms.
2. Dimensionality reduction: Caterpillar SSA can be employed as a dimensionality reduction technique, similar to Principal Component Analysis (PCA). It helps identify the most significant components of a high-dimensional dataset, reducing the computational complexity and mitigating the "curse of dimensionality" in machine learning tasks.
3. Anomaly detection: The decomposition of a time series into interpretable components through Caterpillar SSA can help in identifying unusual patterns or outliers in the data. Machine learning models trained on these decomposed components can detect anomalies more effectively, as the noise component is separated from the signal.
4. Forecasting: Caterpillar SSA has been used in combination with machine learning techniques, such as neural networks, to improve forecasting accuracy. By decomposing a time series into its underlying components, machine learning models can better capture the trends and seasonality in the data, resulting in more accurate predictions.
Application in Financial Markets and Economics:
Caterpillar SSA has been employed in various domains within financial markets and economics. Some notable applications include:
1. Stock price analysis: Caterpillar SSA can be used to analyze and forecast stock prices by decomposing them into trend, seasonal, and noise components. This decomposition can help traders and investors better understand market dynamics, detect potential turning points, and make more informed decisions.
2. Economic indicators: Caterpillar SSA has been used to analyze and forecast economic indicators, such as GDP, inflation, and unemployment rates. By decomposing these time series, researchers can better understand the underlying factors driving economic fluctuations and develop more accurate forecasting models.
3. Portfolio optimization: By applying Caterpillar SSA to financial time series data, portfolio managers can better understand the relationships between different assets and make more informed decisions regarding asset allocation and risk management.
Application in the Indicator:
In the given indicator, Caterpillar SSA is applied to a financial time series (price data) to smooth the series and detect significant trends or turning points. The method is used to decompose the price data into a set number of components, which are then combined to generate a smoothed signal. This signal can help traders and investors identify potential entry and exit points for their trades.
The indicator applies the Caterpillar SSA method by first constructing the trajectory matrix using the price data, then computing the singular value decomposition (SVD) of the matrix, and finally reconstructing the time series using a selected number of components. The reconstructed series serves as a smoothed version of the original price data, highlighting significant trends and turning points. The indicator can be customized by adjusting the lag, number of computations, and number of components used in the reconstruction process. By fine-tuning these parameters, traders and investors can optimize the indicator to better match their specific trading style and risk tolerance.
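For the technically curious, here is a compact Python sketch of the endpointed SSA reconstruction described above (trajectory matrix, SVD, diagonal averaging, keep the top components). The lag and component counts are illustrative defaults; the indicator's actual Pine Script implementation differs in detail.

```python
import numpy as np

def ssa_endpoint(prices, lag=30, ncomp=2):
    x = np.asarray(prices, dtype=float)
    n = len(x)
    k = n - lag + 1
    # 1. Trajectory (Hankel) matrix: lagged windows of the series as columns.
    X = np.column_stack([x[i:i + lag] for i in range(k)])
    # 2. Singular value decomposition.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    # 3. Reconstruct from the top ncomp components, then diagonal-average.
    Xr = (U[:, :ncomp] * s[:ncomp]) @ Vt[:ncomp]
    recon = np.zeros(n)
    counts = np.zeros(n)
    for i in range(lag):
        for j in range(k):
            recon[i + j] += Xr[i, j]
            counts[i + j] += 1
    recon /= counts
    return recon[-1]  # endpointed: only the latest value is kept per bar
```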
Caterpillar SSA is versatile and can be applied to various types of financial instruments, such as stocks, bonds, commodities, and currencies. It can also be combined with other technical analysis tools or indicators to create a comprehensive trading system. For example, a trader might use Caterpillar SSA to identify the primary trend in a market and then employ additional indicators, such as moving averages or RSI, to confirm the trend and generate trading signals.
In summary, Caterpillar SSA is a powerful time series analysis technique that has found applications in AI and machine learning, as well as financial markets and economics. By decomposing a time series into interpretable components, Caterpillar SSA enables better understanding of the underlying structure of the data, facilitating forecasting, smoothing, and anomaly detection. In the context of financial trading, the technique is used to analyze price data, detect significant trends or turning points, and inform trading decisions.
█ Input Parameters
This indicator takes several inputs that affect its signal output. These inputs can be classified into three categories: Basic Settings, UI Options, and Computation Parameters.
Source: This input represents the source of price data, which is typically the closing price of an asset. The user can select other price data, such as opening price, high price, or low price. The selected price data is then utilized in the Caterpillar SSA calculation process.
Lag: The lag input determines the window size used for the time series decomposition. A higher lag value implies that the SSA algorithm will consider a longer range of historical data when extracting the underlying trend and components. This parameter is crucial, as it directly impacts the resulting smoothed series and the quality of extracted components.
Number of Computations: This input, denoted as 'ncomp,' specifies the number of eigencomponents to be considered in the reconstruction of the time series. A smaller value results in a smoother output signal, while a higher value retains more details in the series, potentially capturing short-term fluctuations.
SSA Period Normalization: This input is used to normalize the SSA period, which adjusts the significance of each eigencomponent to the overall signal. It helps in making the algorithm adaptive to different timeframes and market conditions.
Number of Bars: This input specifies the number of bars to be processed by the algorithm. It controls the range of data used for calculations and directly affects the computation time and the output signal.
Number of Bars to Render: This input sets the number of bars to be plotted on the chart. A higher value slows down the computation but provides a more comprehensive view of the indicator's performance over a longer period. This value controls how far back the indicator is rendered.
Color bars: This boolean input determines whether the bars should be colored according to the signal's direction. If set to true, the bars are colored using the defined colors, which visually indicate the trend direction.
Show signals: This boolean input controls the display of buy and sell signals on the chart. If set to true, the indicator plots shapes (triangles) to represent long and short trade signals.
Static Computation Parameters:
The indicator also includes several internal parameters that affect the Caterpillar SSA algorithm, such as Maxncomp, MaxLag, and MaxArrayLength. These parameters set the maximum allowed values for the number of computations, the lag, and the array length, ensuring that the calculations remain within reasonable limits and do not consume excessive computational resources.
█ A Note on Endpointed, Non-repainting Indicators
An endpointed indicator is one that does not recalculate or repaint its past values based on new incoming data. In other words, the indicator's previous signals remain the same even as new price data is added. This is an important feature because it ensures that the signals generated by the indicator are reliable and accurate, even after the fact.
When an indicator is non-repainting or endpointed, it means that the trader can have confidence in the signals being generated, knowing that they will not change as new data comes in. This allows traders to make informed decisions based on historical signals, without the fear of the signals being invalidated in the future.
In the case of the Endpointed SSA of Price, this non-repainting property is particularly valuable because it allows traders to identify trend changes and reversals with a high degree of accuracy, which can be used to inform trading decisions. This can be especially important in volatile markets where quick decisions need to be made.
Quinn-Fernandes Fourier Transform of Filtered Price [Loxx]
Down the Rabbit Hole We Go: A Deep Dive into the Mysteries of Quinn-Fernandes Fast Fourier Transform and Hodrick-Prescott Filtering
In the ever-evolving landscape of financial markets, the ability to accurately identify and exploit underlying market patterns is of paramount importance. As market participants continuously search for innovative tools to gain an edge in their trading and investment strategies, advanced mathematical techniques, such as the Quinn-Fernandes Fourier Transform and the Hodrick-Prescott Filter, have emerged as powerful analytical tools. This comprehensive analysis aims to delve into the rich history and theoretical foundations of these techniques, exploring their applications in financial time series analysis, particularly in the context of a sophisticated trading indicator. Furthermore, we will critically assess the limitations and challenges associated with these transformative tools, while offering practical insights and recommendations for overcoming these hurdles to maximize their potential in the financial domain.
Our investigation will begin with a comprehensive examination of the origins and development of both the Quinn-Fernandes Fourier Transform and the Hodrick-Prescott Filter. We will trace their roots from classical Fourier analysis and time series smoothing to their modern-day adaptive iterations. We will elucidate the key concepts and mathematical underpinnings of these techniques and demonstrate how they are synergistically used in the context of the trading indicator under study.
As we progress, we will carefully consider the potential drawbacks and challenges associated with using the Quinn-Fernandes Fourier Transform and the Hodrick-Prescott Filter as integral components of a trading indicator. By providing a critical evaluation of their computational complexity, sensitivity to input parameters, assumptions about data stationarity, performance in noisy environments, and their nature as lagging indicators, we aim to offer a balanced and comprehensive understanding of these powerful analytical tools.
In conclusion, this in-depth analysis of the Quinn-Fernandes Fourier Transform and the Hodrick-Prescott Filter aims to provide a solid foundation for financial market participants seeking to harness the potential of these advanced techniques in their trading and investment strategies. By shedding light on their history, applications, and limitations, we hope to equip traders and investors with the knowledge and insights necessary to make informed decisions and, ultimately, achieve greater success in the highly competitive world of finance.
█ Fourier Transform and Hodrick-Prescott Filter in Financial Time Series Analysis
Financial time series analysis plays a crucial role in making informed decisions about investments and trading strategies. Among the various methods used in this domain, the Fourier Transform and the Hodrick-Prescott (HP) Filter have emerged as powerful techniques for processing and analyzing financial data. This section aims to provide a comprehensive understanding of these two methodologies, their significance in financial time series analysis, and their combined application to enhance trading strategies.
█ The Quinn-Fernandes Fourier Transform: History, Applications, and Use in Financial Time Series Analysis
The Quinn-Fernandes Fourier Transform is an advanced spectral estimation technique developed by B. G. Quinn and J. M. Fernandes in the early 1990s. It builds upon the classical Fourier Transform by introducing an adaptive approach that improves the identification of dominant frequencies in noisy signals. This section will explore the history of the Quinn-Fernandes Fourier Transform, its applications in various domains, and its specific use in financial time series analysis.
History of the Quinn-Fernandes Fourier Transform
The method traces back to Quinn and Fernandes' 1991 Biometrika paper, "A fast efficient technique for the estimation of frequency," in which they developed an adaptive spectral estimation algorithm to address the limitations of the classical Fourier Transform when analyzing noisy signals.
The classical Fourier Transform is a powerful mathematical tool that decomposes a function or a time series into a sum of sinusoids, making it easier to identify underlying patterns and trends. However, its performance can be negatively impacted by noise and missing data points, leading to inaccurate frequency identification.
Quinn and Fernandes sought to address these issues by developing an adaptive algorithm that could more accurately identify the dominant frequencies in a noisy signal, even when data points were missing. This adaptive algorithm, now known as the Quinn-Fernandes Fourier Transform, employs an iterative approach to refine the frequency estimates, ultimately resulting in improved spectral estimation.
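To give a flavor of what iteratively refining a frequency estimate means in code, here is a deliberately simplified dominant-frequency search in Python: it scans candidate frequencies, least-squares-fits a sinusoid at each, and keeps the best fit. This is a conceptual stand-in, not the actual Quinn-Fernandes recursion.

```python
import numpy as np

def dominant_frequency(x, n_candidates=200):
    x = np.asarray(x, dtype=float) - np.mean(x)
    t = np.arange(len(x))
    best_w, best_err = 0.0, np.inf
    for w in np.linspace(0.01, np.pi, n_candidates):
        # Least-squares fit of a*cos(w t) + b*sin(w t) at this candidate frequency.
        A = np.column_stack([np.cos(w * t), np.sin(w * t)])
        coef, *_ = np.linalg.lstsq(A, x, rcond=None)
        err = np.sum((x - A @ coef) ** 2)
        if err < best_err:
            best_w, best_err = w, err
    return best_w  # angular frequency; period in bars = 2*pi / best_w
```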
Applications of the Quinn-Fernandes Fourier Transform
The Quinn-Fernandes Fourier Transform has found applications in various fields, including signal processing, telecommunications, geophysics, and biomedical engineering. Its ability to accurately identify dominant frequencies in noisy signals makes it a valuable tool for analyzing and interpreting data in these domains.
For example, in telecommunications, the Quinn-Fernandes Fourier Transform can be used to analyze the performance of communication systems and identify interference patterns. In geophysics, it can help detect and analyze seismic signals and vibrations, leading to improved understanding of geological processes. In biomedical engineering, the technique can be employed to analyze physiological signals, such as electrocardiograms, leading to more accurate diagnoses and better patient care.
Use of the Quinn-Fernandes Fourier Transform in Financial Time Series Analysis
In financial time series analysis, the Quinn-Fernandes Fourier Transform can be a powerful tool for isolating the dominant cycles and frequencies in asset price data. By more accurately identifying these critical cycles, traders can better understand the underlying dynamics of financial markets and develop more effective trading strategies.
The Quinn-Fernandes Fourier Transform is used in conjunction with the Hodrick-Prescott Filter, a technique that separates the underlying trend from the cyclical component in a time series. By first applying the Hodrick-Prescott Filter to the financial data, short-term fluctuations and noise are removed, resulting in a smoothed representation of the underlying trend. This smoothed data is then subjected to the Quinn-Fernandes Fourier Transform, allowing for more accurate identification of the dominant cycles and frequencies in the asset price data.
By employing the Quinn-Fernandes Fourier Transform in this manner, traders can gain a deeper understanding of the underlying dynamics of financial time series and develop more effective trading strategies. The enhanced knowledge of market cycles and frequencies can lead to improved risk management and ultimately, better investment performance.
The Quinn-Fernandes Fourier Transform is an advanced spectral estimation technique that has proven valuable in various domains, including financial time series analysis. Its adaptive approach to frequency identification addresses the limitations of the classical Fourier Transform when analyzing noisy signals, leading to more accurate and reliable analysis. By employing the Quinn-Fernandes Fourier Transform in financial time series analysis, traders can gain a deeper understanding of the underlying financial instrument.
Drawbacks to the Quinn-Fernandes algorithm
While the Quinn-Fernandes Fourier Transform is an effective tool for identifying dominant cycles and frequencies in financial time series, it is not without its drawbacks. Some of the limitations and challenges associated with this indicator include:
1. Computational complexity: The adaptive nature of the Quinn-Fernandes Fourier Transform requires iterative calculations, which can lead to increased computational complexity. This can be particularly challenging when analyzing large datasets or when the indicator is used in real-time trading environments.
2. Sensitivity to input parameters: The performance of the Quinn-Fernandes Fourier Transform is dependent on the choice of input parameters, such as the number of harmonic periods, frequency tolerance, and Hodrick-Prescott filter settings. Choosing inappropriate parameter values can lead to inaccurate frequency identification or reduced performance. Finding the optimal parameter settings can be challenging, and may require trial and error or a more sophisticated optimization process.
3. Assumption of stationary data: The Quinn-Fernandes Fourier Transform assumes that the underlying data is stationary, meaning that its statistical properties do not change over time. However, financial time series data is often non-stationary, with changing trends and volatility. This can limit the effectiveness of the indicator and may require additional preprocessing steps, such as detrending or differencing, to ensure the data meets the assumptions of the algorithm.
4. Limitations in noisy environments: Although the Quinn-Fernandes Fourier Transform is designed to handle noisy signals, its performance may still be negatively impacted by significant noise levels. In such cases, the identification of dominant frequencies may become less reliable, leading to suboptimal trading signals or strategies.
5. Lagging indicator: As with many technical analysis tools, the Quinn-Fernandes Fourier Transform is a lagging indicator, meaning that it is based on past data. While it can provide valuable insights into historical market dynamics, its ability to predict future price movements may be limited. This can result in false signals or late entries and exits, potentially reducing the effectiveness of trading strategies based on this indicator.
Despite these drawbacks, the Quinn-Fernandes Fourier Transform remains a valuable tool for financial time series analysis when used appropriately. By being aware of its limitations and adjusting input parameters or preprocessing steps as needed, traders can still benefit from its ability to identify dominant cycles and frequencies in financial data, and use this information to inform their trading strategies.
█ Deep-dive into the Hodrick-Prescott Filter
The Hodrick-Prescott (HP) filter is a statistical tool used in economics and finance to separate a time series into two components: a trend component and a cyclical component. It is a powerful tool for identifying long-term trends in economic and financial data and is widely used by economists, central banks, and financial institutions around the world.
The HP filter was proposed by economists Robert Hodrick and Edward Prescott in a 1981 working paper and formally published in 1997. It is a simple, single-parameter filter that separates a time series into a trend component and a cyclical component. The trend component represents the long-term behavior of the data, while the cyclical component captures the shorter-term fluctuations around the trend.
The HP filter works by minimizing the following objective function:
Minimize: (Sum of Squared Deviations) + λ (Sum of Squared Second Differences)
Where:
1. The first term represents the deviation of the data from the trend.
2. The second term represents the smoothness of the trend.
3. λ is a smoothing parameter that determines the degree of smoothness of the trend.
The smoothing parameter λ is typically set to a value between 100 and 1600, depending on the frequency of the data. Higher values of λ lead to a smoother trend, while lower values lead to a more volatile trend.
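To make the objective concrete, here is a minimal Python sketch of the HP filter using its closed-form solution, trend = (I + λDᵀD)⁻¹y, where D is the second-difference operator. A dense solve like this is fine for short series; production implementations use sparse solvers.

```python
import numpy as np

def hp_filter(y, lam=1600.0):
    # Solves min_t sum((y - t)^2) + lam * sum((t[i+1] - 2*t[i] + t[i-1])^2).
    y = np.asarray(y, dtype=float)
    n = len(y)
    D = np.diff(np.eye(n), n=2, axis=0)   # (n-2) x n second-difference matrix
    trend = np.linalg.solve(np.eye(n) + lam * D.T @ D, y)
    return trend, y - trend               # trend and cyclical components
```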
The HP filter has several advantages over other smoothing techniques. It is a non-parametric method, meaning that it does not make any assumptions about the underlying distribution of the data. It also allows for easy comparison of trends across different time series and can be used with data of any frequency.
Another significant advantage of the HP Filter is its ability to adapt to changes in the underlying trend. This feature makes it particularly well-suited for analyzing financial time series, which often exhibit non-stationary behavior. By employing the HP Filter to smooth financial data, traders can more accurately identify and analyze the long-term trends that drive asset prices, ultimately leading to better-informed investment decisions.
However, the HP filter also has some limitations. It assumes that the trend is a smooth function, which may not be the case in some situations. It can also be sensitive to changes in the smoothing parameter λ, which may result in different trends for the same data. Additionally, the filter may produce unrealistic trends for very short time series.
Despite these limitations, the HP filter remains a valuable tool for analyzing economic and financial data. It is widely used by central banks and financial institutions to monitor long-term trends in the economy, and it can be used to identify turning points in the business cycle. The filter can also be used to analyze asset prices, exchange rates, and other financial variables.
The Hodrick-Prescott filter is a powerful tool for analyzing economic and financial data. It separates a time series into a trend component and a cyclical component, allowing for easy identification of long-term trends and turning points in the business cycle. While it has some limitations, it remains a valuable tool for economists, central banks, and financial institutions around the world.
█ Combined Application of Fourier Transform and Hodrick-Prescott Filter
The integration of the Fourier Transform and the Hodrick-Prescott Filter in financial time series analysis can offer several benefits. By first applying the HP Filter to the financial data, traders can remove short-term fluctuations and noise, effectively isolating the underlying trend. This smoothed data can then be subjected to the Fourier Transform, allowing for the identification of dominant cycles and frequencies with greater precision.
By combining these two powerful techniques, traders can gain a more comprehensive understanding of the underlying dynamics of financial time series. This enhanced knowledge can lead to the development of more effective trading strategies, better risk management, and ultimately, improved investment performance.
The Fourier Transform and the Hodrick-Prescott Filter are powerful tools for financial time series analysis. Each technique offers unique benefits, with the Fourier Transform being adept at identifying dominant cycles and frequencies, and the HP Filter excelling at isolating long-term trends from short-term noise. By combining these methodologies, traders can develop a deeper understanding of the underlying dynamics of financial time series, leading to more informed investment decisions and improved trading strategies. As the financial markets continue to evolve, the combined application of these techniques will undoubtedly remain an essential aspect of modern financial analysis.
█ Features
Endpointed and Non-repainting
This is an endpointed and non-repainting indicator. These are crucial factors that contribute to its usefulness and reliability in trading and investment strategies. Let us break down these concepts and discuss why they matter in the context of a financial indicator.
1. Endpoint nature: An endpoint indicator uses the most recent data points to calculate its values, ensuring that the output is timely and reflective of the current market conditions. This is in contrast to non-endpoint indicators, which may use earlier data points in their calculations, potentially leading to less timely or less relevant results. By utilizing the most recent data available, the endpoint nature of this indicator ensures that it remains up-to-date and relevant, providing traders and investors with valuable and actionable insights into the market dynamics.
2. Non-repainting characteristic: A non-repainting indicator is one that does not change its values or signals after they have been generated. This means that once a signal or a value has been plotted on the chart, it will remain there, and future data will not affect it. This is crucial for traders and investors, as it offers a sense of consistency and certainty when making decisions based on the indicator's output.
Repainting indicators, on the other hand, can change their values or signals as new data comes in, effectively "repainting" the past. This can be problematic for several reasons:
a. Misleading results: Repainting indicators can create the illusion of a highly accurate or successful trading system when backtesting, as the indicator may adapt its past signals to fit the historical price data. This can lead to overly optimistic performance results that may not hold up in real-time trading.
b. Decision-making uncertainty: When an indicator repaints, it becomes challenging for traders and investors to trust its signals, as the signal that prompted a trade may change or disappear after the fact. This can create confusion and indecision, making it difficult to execute a consistent trading strategy.
The endpoint and non-repainting characteristics of this indicator contribute to its overall reliability and effectiveness as a tool for trading and investment decision-making. By providing timely and consistent information, this indicator helps traders and investors make well-informed decisions that are less likely to be influenced by misleading or shifting data.
Inputs
Source: This input determines the source of the price data to be used for the calculations. Users can select from options like closing price, opening price, high, low, etc., based on their preferences. Changing the source of the price data (e.g., from closing price to opening price) will alter the base data used for calculations, which may lead to different patterns and cycles being identified.
Calculation Bars: This input represents the number of past bars used for the calculation. A higher value will use more historical data for the analysis, while a lower value will focus on more recent price data. Increasing the number of past bars used for calculation will incorporate more historical data into the analysis. This may lead to a more comprehensive understanding of long-term trends but could also result in a slower response to recent price changes. Decreasing this value will focus more on recent data, potentially making the indicator more responsive to short-term fluctuations.
Harmonic Period: This input represents the harmonic period, which is the number of harmonics used in the Fourier Transform. A higher value will result in more harmonics being used, potentially capturing more complex cycles in the price data. Increasing the harmonic period will include more harmonics in the Fourier Transform, potentially capturing more complex cycles in the price data. However, this may also introduce more noise and make it harder to identify clear patterns. Decreasing this value will focus on simpler cycles and may make the analysis clearer, but it might miss out on more complex patterns.
Frequency Tolerance: This input represents the frequency tolerance, which determines how close the frequencies of the harmonics must be to be considered part of the same cycle. A higher value will allow for more variation between harmonics, while a lower value will require the frequencies to be more similar. Increasing the frequency tolerance will allow for more variation between harmonics, potentially capturing a broader range of cycles. However, this may also introduce noise and make it more difficult to identify clear patterns. Decreasing this value will require the frequencies to be more similar, potentially making the analysis clearer, but it might miss out on some cycles.
Number of Bars to Render: This input determines the number of bars to render on the chart. A higher value will result in more historical data being displayed, but it may also slow down the computation due to the increased amount of data being processed. Increasing the number of bars to render on the chart will display more historical data, providing a broader context for the analysis. However, this may also slow down the computation due to the increased amount of data being processed. Decreasing this value will speed up the computation, but it will provide less historical context for the analysis.
Smoothing Mode: This input allows the user to choose between two smoothing modes for the source price data: no smoothing or Hodrick-Prescott (HP) smoothing. The choice depends on the user's preference for how the price data should be processed before the Fourier Transform is applied. Choosing between no smoothing and Hodrick-Prescott (HP) smoothing will affect the preprocessing of the price data. Using HP smoothing will remove some of the short-term fluctuations from the data, potentially making the analysis clearer and more focused on longer-term trends. Not using smoothing will retain the original price fluctuations, which may provide more detail but also introduce noise into the analysis.
Hodrick-Prescott Filter Period: This input represents the Hodrick-Prescott filter period, which is used if the user chooses to apply HP smoothing to the price data. A higher value will result in a smoother curve, while a lower value will retain more of the original price fluctuations. Increasing the Hodrick-Prescott filter period will result in a smoother curve for the price data, emphasizing longer-term trends and minimizing short-term fluctuations. Decreasing this value will retain more of the original price fluctuations, potentially providing more detail but also introducing noise into the analysis.
Alerts and signals
This indicator features alerts, signals and bar coloring. You have the option to turn these on/off in the settings menu.
Maximum Bars Restriction
This indicator requires a large amount of processing power to render on the chart. To reduce overhead, the setting "Number of Bars to Render" is set to 500 bars. You can adjust this to your liking.
█ Related Indicators and Libraries
Goertzel Cycle Composite Wave
Goertzel Browser
Fourier Spectrometer of Price w/ Extrapolation Forecast
Fourier Extrapolator of 'Caterpillar' SSA of Price
Normalized, Variety, Fast Fourier Transform Explorer
Real-Fast Fourier Transform of Price Oscillator
Real-Fast Fourier Transform of Price w/ Linear Regression
Fourier Extrapolation of Variety Moving Averages
Fourier Extrapolator of Variety RSI w/ Bollinger Bands
Fourier Extrapolator of Price w/ Projection Forecast
Fourier Extrapolator of Price
STD-Stepped Fast Cosine Transform Moving Average
Variety RSI of Fast Discrete Cosine Transform
loxfft
Probability Envelopes (PBE)
Introduction
In the world of trading, technical analysis is vital for making informed decisions about the future direction of an asset's price. One such tool is the use of indicators, mathematical calculations that can help traders predict market trends. This article delves into an innovative indicator called the Probability Envelopes Indicator, which offers valuable insights into the potential price levels an asset may reach based on historical data. This in-depth look explores the statistical foundations of the indicator, highlighting its key components and benefits.
Section 1: Calculating Price Movements with Log Returns and Percentages
The Probability Envelopes Indicator provides the option to use either log returns or percentage changes when calculating price movements. Each method has its advantages:
Log Returns: These are calculated as the natural logarithm of the ratio of the current price to the previous price. Log returns are considered more stable and less sensitive to extreme price fluctuations.
Percentage Changes: These are calculated as the percentage difference between the current price and the previous price. They are simpler to interpret and easier to understand for most traders.
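As a two-line illustration of the difference between the options (generic Python, nothing indicator-specific):

```python
import numpy as np

closes = np.array([100.0, 101.5, 100.8, 102.3])
log_returns = np.log(closes[1:] / closes[:-1])          # ln(P_t / P_{t-1})
pct_changes = (closes[1:] - closes[:-1]) / closes[:-1]  # (P_t - P_{t-1}) / P_{t-1}
```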
Section 2: Understanding Mean, Variance, and Standard Deviation
The Probability Envelopes Indicator utilizes various statistical measures to analyze historical price movements:
Mean: This is the average of a set of numbers. In the context of this indicator, it represents the average price movement for bullish (green) and bearish (red) scenarios.
Variance: This measure represents the dispersion of data points in a dataset. A higher variance indicates a greater spread of data points from the mean. Variance is calculated as the average of the squared differences from the mean.
Standard Deviation: This is the square root of the variance. It is a measure of the amount of variation or dispersion in a dataset. In the context of this indicator, standard deviations are used to calculate the width of the bands around the expected mean.
Section 3: Analyzing Historical Price Movements and Probabilities
The Probability Envelopes Indicator examines historical price movements and calculates probabilities based on their frequency:
The indicator first identifies and categorizes price movements into bullish (green) and bearish (red) scenarios.
It then calculates the probability of each price movement occurring by dividing the frequency of the movement by the total number of occurrences in each category (bullish or bearish).
The expected green and red movements are calculated by multiplying the probabilities by their respective price movements and summing the results.
The total expected movement, or weighted average, is calculated by combining the expected green and red movements and dividing by the total number of occurrences.
Section 4: Constructing the Probability Envelopes
The Probability Envelopes Indicator utilizes the calculated statistics to construct its bands:
The expected mean is calculated using the total expected movement and applied to the current open price.
An exponential moving average (EMA) is used to smooth the expected mean, with the smoothing length determining the degree of responsiveness.
The upper and lower bands are calculated by adding and subtracting the mean green and red movements, respectively, along with their standard deviations multiplied by a user-defined multiplier.
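Putting Sections 3 and 4 together, here is a loose Python rendering of the pipeline: classify moves, form the expectations, and build the bands. It anchors at the last close as a stand-in for the current open, omits the EMA smoothing step for brevity, and treats the multiplier as an illustrative input; it is not the published script.

```python
import numpy as np

def probability_envelopes(closes, mult=1.0):
    closes = np.asarray(closes, dtype=float)
    moves = np.diff(np.log(closes))                  # log returns (Section 1)
    green, red = moves[moves > 0], moves[moves < 0]  # bullish / bearish moves
    # Section 3: expected movement within each category (frequency-weighted mean).
    exp_green, exp_red = green.mean(), red.mean()
    # Total expected movement: both sides combined, weighted by occurrence.
    expected = (green.sum() + red.sum()) / len(moves)
    # Section 4: center on the anchor price, band out by mean +/- mult * stdev.
    center = closes[-1] * np.exp(expected)
    upper = closes[-1] * np.exp(exp_green + mult * green.std())
    lower = closes[-1] * np.exp(exp_red - mult * red.std())
    return center, upper, lower
```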
Section 5: Benefits of the Probability Envelopes Indicator
The Probability Envelopes Indicator offers numerous advantages to traders:
Enhanced Decision-Making: By providing probability-based estimations of future price levels, the indicator can help traders make more informed decisions and potentially improve their trading strategies.
Versatility: The indicator is applicable to various financial instruments, such as stocks, forex, commodities, and cryptocurrencies, making it a valuable tool for traders in different markets.
Customization: The indicator's parameters, including the use of log returns, multiplier values, and smoothing length, can be adjusted according to the user's preferences and trading style. This flexibility allows traders to fine-tune the Probability Envelopes Indicator to better suit their needs and goals.
Risk Management: The Probability Envelopes Indicator can be used as a component of a risk management strategy by providing insight into potential price movements. By identifying potential areas of support and resistance, traders can set stop-loss and take-profit levels more effectively.
Visualization: The graphical representation of the indicator, with its clear upper and lower bands, makes it easy for traders to quickly assess the market and potential price levels.
Section 6: Integrating the Probability Envelopes Indicator into Your Trading Strategy
When incorporating the Probability Envelopes Indicator into your trading strategy, consider the following tips:
Confirmation Signals: Use the indicator in conjunction with other technical analysis tools, such as trend lines, moving averages, or oscillators, to confirm the strength and direction of the market trend.
Timeframes: Experiment with different timeframes to find the optimal settings for your trading strategy. Keep in mind that shorter timeframes may generate more frequent signals but may also increase the likelihood of false signals.
Risk Management: Always establish a proper risk management strategy that includes setting stop-loss and take-profit levels, as well as managing your position sizes.
Backtesting: Test the Probability Envelopes Indicator on historical data to evaluate its effectiveness and fine-tune its parameters to optimize your trading strategy.
Section 7: Cons and Limitations of the Probability Envelopes Indicator
While the Probability Envelopes Indicator offers several advantages to traders, it is essential to be aware of its potential cons and limitations. Understanding these can help you make better-informed decisions when incorporating the indicator into your trading strategy.
Lagging Nature: The Probability Envelopes Indicator is primarily based on historical data and price movements. As a result, it may be less responsive to real-time changes in market conditions, and the predicted price levels may not always accurately reflect the market's current state. This lagging nature can lead to late entry and exit signals.
False Signals: As with any technical analysis tool, the Probability Envelopes Indicator can generate false signals. These occur when the indicator suggests a potential price movement, but the market does not follow through. It is crucial to use other technical analysis tools to confirm the signals and minimize the impact of false signals on your trading decisions.
Complex Statistical Concepts: The Probability Envelopes Indicator relies on complex statistical concepts and calculations, which may be challenging to grasp for some traders, particularly beginners. This complexity can lead to misunderstandings and misuse of the indicator if not adequately understood.
Overemphasis on Past Data: While historical data can be informative, relying too heavily on past performance to predict future movements can be limiting. Market conditions can change rapidly, and relying solely on past data may not provide an accurate representation of the current market environment.
No Guarantees: The Probability Envelopes Indicator, like all technical analysis tools, cannot guarantee success. It is essential to approach trading with realistic expectations and understand that no indicator or strategy can provide foolproof results.
To overcome these limitations, it is crucial to combine the Probability Envelopes Indicator with other technical analysis tools and utilize a comprehensive risk management strategy. By doing so, you can better understand the market and increase your chances of success in the ever-changing financial markets.
Section 8: Probability Envelopes Indicator vs. Bollinger Bands
Bollinger Bands and the Probability Envelopes Indicator are both technical analysis tools designed to identify potential support and resistance levels, as well as potential trend reversals. However, they differ in their underlying concepts, calculations, and applications. This section will provide a deep dive into the differences between these two indicators and how they can complement each other in a trading strategy.
Underlying Concepts and Calculations:
Bollinger Bands:
Bollinger Bands are based on a simple moving average (SMA) of the price data, with upper and lower bands plotted at a specified number of standard deviations away from the SMA.
The distance between the bands widens during periods of increased price volatility and narrows during periods of low volatility, indicating potential trend reversals or breakouts.
The standard settings for Bollinger Bands typically involve a 20-period SMA and a 2 standard deviation distance for the upper and lower bands.
Probability Envelopes Indicator:
The Probability Envelopes Indicator calculates the expected price movements based on historical data and probabilities, utilizing mean and standard deviation calculations for both upward and downward price movements.
It generates upper and lower bands based on the calculated expected mean movement and the standard deviation of historical price changes, multiplied by a user-defined multiplier.
The Probability Envelopes Indicator also allows users to choose between using log returns or percentage changes for the calculations, adding flexibility to the indicator.
Key Differences:
Calculation Method: Bollinger Bands are based on a simple moving average and standard deviations, while the Probability Envelopes Indicator uses statistical probability calculations derived from historical price changes.
Flexibility: The Probability Envelopes Indicator allows users to choose between log returns or percentage changes and adjust the multiplier, offering more customization options compared to Bollinger Bands.
Risk Management: Bollinger Bands primarily focus on volatility, while the Probability Envelopes Indicator incorporates probability calculations to provide additional insights into potential price movements, which can be helpful for risk management purposes.
Complementary Use:
Using both Bollinger Bands and the Probability Envelopes Indicator in your trading strategy can offer valuable insights into market conditions and potential price levels.
Bollinger Bands can provide insights into market volatility and potential breakouts or trend reversals based on the widening or narrowing of the bands.
The Probability Envelopes Indicator can offer additional information on the expected price movements based on historical data and probabilities, which can be helpful in anticipating potential support and resistance levels.
Combining these two indicators can help traders to better understand market dynamics and increase their chances of identifying profitable trading opportunities.
In conclusion, while both Bollinger Bands and the Probability Envelopes Indicator aim to identify potential support and resistance levels, they differ significantly in their underlying concepts, calculations, and applications. By understanding these differences and incorporating both tools into your trading strategy, you can gain a more comprehensive understanding of the market and make more informed trading decisions.
In conclusion, the Probability Envelopes Indicator is a powerful and versatile technical analysis tool that offers unique insights into expected price movements based on historical data and probability calculations. It provides traders with the ability to identify potential support and resistance levels, as well as potential trend reversals. When compared to Bollinger Bands, the Probability Envelopes Indicator offers more customization options and incorporates probability-based calculations for a different perspective on market dynamics.
Although the Probability Envelopes Indicator has its limitations and potential cons, such as the reliance on historical data and the assumption that past performance is indicative of future results, it remains a valuable addition to any trader's toolkit. By using the Probability Envelopes Indicator in conjunction with other technical analysis tools, such as Bollinger Bands, traders can gain a more comprehensive understanding of the market and make more informed trading decisions.
Ultimately, the success of any trading strategy relies on the ability to interpret and apply multiple indicators effectively. The Probability Envelopes Indicator serves as a unique and valuable tool in this regard, providing traders with a deeper understanding of the market and its potential price movements. By utilizing this indicator in combination with other tools and techniques, traders can increase their chances of success and optimize their trading strategies.
Goertzel Cycle Composite Wave [Loxx]
As the financial markets become increasingly complex and data-driven, traders and analysts must leverage powerful tools to gain insights and make informed decisions. One such tool is the Goertzel Cycle Composite Wave indicator, a sophisticated technical analysis indicator that helps identify cyclical patterns in financial data. This powerful tool is capable of detecting cyclical patterns in financial data, helping traders to make better predictions and optimize their trading strategies. With its unique combination of mathematical algorithms and advanced charting capabilities, this indicator has the potential to revolutionize the way we approach financial modeling and trading.
*** To decrease the load time of this indicator, only XX many bars back will render to the chart. You can control this value with the setting "Number of Bars to Render". This doesn't have anything to do with repainting or the indicator being endpointed***
█ Brief Overview of the Goertzel Cycle Composite Wave
The Goertzel Cycle Composite Wave is a sophisticated technical analysis tool that utilizes the Goertzel algorithm to analyze and visualize cyclical components within a financial time series. By identifying these cycles and their characteristics, the indicator aims to provide valuable insights into the market's underlying price movements, which could potentially be used for making informed trading decisions.
The Goertzel Cycle Composite Wave is considered a non-repainting and endpointed indicator. This means that once a value has been calculated for a specific bar, that value will not change in subsequent bars, and the indicator is designed to have a clear start and end point. This is an important characteristic for indicators used in technical analysis, as it allows traders to make informed decisions based on historical data without the risk of hindsight bias or future changes in the indicator's values. This means traders can use this indicator for trading purposes.
The repainting version of this indicator with forecasting, cycle selection/elimination options, and data output table can be found here:
Goertzel Browser
The primary purpose of this indicator is to:
1. Detect and analyze the dominant cycles present in the price data.
2. Reconstruct and visualize the composite wave based on the detected cycles.
To achieve this, the indicator performs several tasks:
1. Detrending the price data: The indicator preprocesses the price data using various detrending techniques, such as Hodrick-Prescott filters, zero-lag moving averages, and linear regression, to remove the underlying trend and focus on the cyclical components.
2. Applying the Goertzel algorithm: The indicator applies the Goertzel algorithm to the detrended price data, identifying the dominant cycles and their characteristics, such as amplitude, phase, and cycle strength.
3. Constructing the composite wave: The indicator reconstructs the composite wave by combining the detected cycles, either by using a user-defined list of cycles or by selecting the top N cycles based on their amplitude or cycle strength.
4. Visualizing the composite wave: The indicator plots the composite wave, using solid lines for the cycles. The color of the lines indicates whether the wave is increasing or decreasing.
This indicator is a powerful tool that employs the Goertzel algorithm to analyze and visualize the cyclical components within a financial time series. By providing insights into the underlying price movements, the indicator aims to assist traders in making more informed decisions.
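As a sketch of task 3 above (combining detected cycles into a composite wave), assume each detected cycle is summarized by an amplitude, period, and phase; the composite is then just a sum of sinusoids. The field layout here is illustrative:

```python
import math

def composite_wave(cycles, bar):
    # cycles: list of (amplitude, period_in_bars, phase) for the top-N detected cycles.
    return sum(a * math.cos(2.0 * math.pi * bar / p + phi) for a, p, phi in cycles)

# Rising vs falling (the line coloring): compare the wave at this bar and the last.
cycles = [(1.2, 40.0, 0.3), (0.5, 15.0, 1.1)]
rising = composite_wave(cycles, 100) > composite_wave(cycles, 99)
```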
█ What is the Goertzel Algorithm?
The Goertzel algorithm, named after Gerald Goertzel, is a digital signal processing technique that is used to efficiently compute individual terms of the Discrete Fourier Transform (DFT). It was first introduced in 1958, and since then, it has found various applications in the fields of engineering, mathematics, and physics.
The Goertzel algorithm is primarily used to detect specific frequency components within a digital signal, making it particularly useful in applications where only a few frequency components are of interest. The algorithm is computationally efficient, as it requires fewer calculations than the Fast Fourier Transform (FFT) when detecting a small number of frequency components. This efficiency makes the Goertzel algorithm a popular choice in applications such as:
1. Telecommunications: The Goertzel algorithm is used for decoding Dual-Tone Multi-Frequency (DTMF) signals, which are the tones generated when pressing buttons on a telephone keypad. By identifying specific frequency components, the algorithm can accurately determine which button has been pressed.
2. Audio processing: The algorithm can be used to detect specific pitches or harmonics in an audio signal, making it useful in applications like pitch detection and tuning musical instruments.
3. Vibration analysis: In the field of mechanical engineering, the Goertzel algorithm can be applied to analyze vibrations in rotating machinery, helping to identify faulty components or signs of wear.
4. Power system analysis: The algorithm can be used to measure harmonic content in power systems, allowing engineers to assess power quality and detect potential issues.
The Goertzel algorithm is used in these applications because it offers several advantages over other methods, such as the FFT:
1. Computational efficiency: The Goertzel algorithm requires fewer calculations when detecting a small number of frequency components, making it more computationally efficient than the FFT in these cases.
2. Real-time analysis: The algorithm can be implemented in a streaming fashion, allowing for real-time analysis of signals, which is crucial in applications like telecommunications and audio processing.
3. Memory efficiency: The Goertzel algorithm requires less memory than the FFT, as it only computes the frequency components of interest.
4. Precision: The algorithm is less susceptible to numerical errors compared to the FFT, ensuring more accurate results in applications where precision is essential.
The Goertzel algorithm is an efficient digital signal processing technique that is primarily used to detect specific frequency components within a signal. Its computational efficiency, real-time capabilities, and precision make it an attractive choice for various applications, including telecommunications, audio processing, vibration analysis, and power system analysis. The algorithm has been widely adopted since its introduction in 1958 and continues to be an essential tool in the fields of engineering, mathematics, and physics.
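To make the mechanics concrete, here is a minimal Pine Script v5 sketch of the single-frequency Goertzel recurrence (an illustration, not this indicator's published code): it measures the amplitude of one candidate cycle period over the last 2*per bars.
//@version=5
indicator("Goertzel single-bin sketch", max_bars_back = 1000)
per = input.int(20, "Cycle period to test", minval = 2)
n = 2 * per  // sample window: two full cycles of the tested period
float amp = na
if bar_index >= n
    w = 2.0 * math.pi / per
    coeff = 2.0 * math.cos(w)
    float s1 = 0.0
    float s2 = 0.0
    for i = 0 to n - 1
        s0 = close[n - 1 - i] + coeff * s1 - s2  // Goertzel recurrence
        s2 := s1
        s1 := s0
    re = s1 - s2 * math.cos(w)  // real part of the frequency bin
    im = s2 * math.sin(w)       // imaginary part
    amp := 2.0 * math.sqrt(re * re + im * im) / n
plot(amp, "Amplitude at tested period")
Scanning this computation across a range of candidate periods, as the indicator does, yields an amplitude per cycle from which the dominant periods can be ranked.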
█ Goertzel Algorithm in Quantitative Finance: In-Depth Analysis and Applications
The Goertzel algorithm, initially designed for signal processing in telecommunications, has gained significant traction in the financial industry due to its efficient frequency detection capabilities. In quantitative finance, the Goertzel algorithm has been utilized for uncovering hidden market cycles, developing data-driven trading strategies, and optimizing risk management. This section delves deeper into the applications of the Goertzel algorithm in finance, particularly within the context of quantitative trading and analysis.
Unveiling Hidden Market Cycles:
Market cycles are prevalent in financial markets and arise from various factors, such as economic conditions, investor psychology, and market participant behavior. The Goertzel algorithm's ability to detect and isolate specific frequencies in price data helps traders and analysts identify hidden market cycles that may otherwise go unnoticed. By examining the amplitude, phase, and periodicity of each cycle, traders can better understand the underlying market structure and dynamics, enabling them to develop more informed and effective trading strategies.
Developing Quantitative Trading Strategies:
The Goertzel algorithm's versatility allows traders to incorporate its insights into a wide range of trading strategies. By identifying the dominant market cycles in a financial instrument's price data, traders can create data-driven strategies that capitalize on the cyclical nature of markets.
For instance, a trader may develop a mean-reversion strategy that takes advantage of the identified cycles. By establishing positions when the price deviates from the predicted cycle, the trader can profit from the subsequent reversion to the cycle's mean. Similarly, a momentum-based strategy could be designed to exploit the persistence of a dominant cycle by entering positions that align with the cycle's direction.
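As an illustration of the mean-reversion idea only (the wave below is a simple detrended EMA spread standing in for the real Goertzel composite wave), such a rule might be sketched as:
//@version=5
strategy("Cycle mean-reversion sketch", overlay = true)
// stand-in for the Goertzel composite wave; the real wave comes from the
// detected cycles
wave = ta.ema(close, 10) - ta.ema(close, 50)
band = ta.stdev(wave, 50)
if ta.crossunder(wave, -band)  // price stretched below its cyclical mean
    strategy.entry("Long", strategy.long)
if ta.crossover(wave, band)    // stretched above: exit the reversion trade
    strategy.close("Long")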
Enhancing Risk Management:
The Goertzel algorithm plays a vital role in risk management for quantitative strategies. By analyzing the cyclical components of a financial instrument's price data, traders can gain insights into the potential risks associated with their trading strategies.
By monitoring the amplitude and phase of dominant cycles, a trader can detect changes in market dynamics that may pose risks to their positions. For example, a sudden increase in amplitude may indicate heightened volatility, prompting the trader to adjust position sizing or employ hedging techniques to protect their portfolio. Additionally, changes in phase alignment could signal a potential shift in market sentiment, necessitating adjustments to the trading strategy.
Expanding Quantitative Toolkits:
Traders can augment the Goertzel algorithm's insights by combining it with other quantitative techniques, creating a more comprehensive and sophisticated analysis framework. For example, machine learning algorithms, such as neural networks or support vector machines, could be trained on features extracted from the Goertzel algorithm to predict future price movements more accurately.
Furthermore, the Goertzel algorithm can be integrated with other technical analysis tools, such as moving averages or oscillators, to enhance their effectiveness. By applying these tools to the identified cycles, traders can generate more robust and reliable trading signals.
The Goertzel algorithm offers invaluable benefits to quantitative finance practitioners by uncovering hidden market cycles, aiding in the development of data-driven trading strategies, and improving risk management. By leveraging the insights provided by the Goertzel algorithm and integrating it with other quantitative techniques, traders can gain a deeper understanding of market dynamics and devise more effective trading strategies.
█ Indicator Inputs
src: This is the source data for the analysis, typically the closing price of the financial instrument.
detrendornot: This input determines the method used for detrending the source data. Detrending is the process of removing the underlying trend from the data to focus on the cyclical components.
The available options are:
hpsmthdt: Detrend using Hodrick-Prescott filter centered moving average.
zlagsmthdt: Detrend using zero-lag moving average centered moving average.
logZlagRegression: Detrend using logarithmic zero-lag linear regression.
hpsmth: Detrend using Hodrick-Prescott filter.
zlagsmth: Detrend using zero-lag moving average.
DT_HPper1 and DT_HPper2: These inputs define the period range for the Hodrick-Prescott filter centered moving average when detrendornot is set to hpsmthdt.
DT_ZLper1 and DT_ZLper2: These inputs define the period range for the zero-lag moving average centered moving average when detrendornot is set to zlagsmthdt.
DT_RegZLsmoothPer: This input defines the period for the zero-lag moving average used in logarithmic zero-lag linear regression when detrendornot is set to logZlagRegression.
HPsmoothPer: This input defines the period for the Hodrick-Prescott filter when detrendornot is set to hpsmth.
ZLMAsmoothPer: This input defines the period for the zero-lag moving average when detrendornot is set to zlagsmth.
MaxPer: This input sets the maximum period for the Goertzel algorithm to search for cycles.
squaredAmp: This boolean input determines whether the amplitude should be squared in the Goertzel algorithm.
useAddition: This boolean input determines whether the Goertzel algorithm should use addition for combining the cycles.
useCosine: This boolean input determines whether the Goertzel algorithm should use cosine waves instead of sine waves.
UseCycleStrength: This boolean input determines whether the Goertzel algorithm should compute the cycle strength, which is a normalized measure of the cycle's amplitude.
WindowSizePast: This input defines the window size for the composite wave.
FilterBartels: This boolean input determines whether the Bartels test should be applied to filter out non-significant cycles.
BartNoCycles: This input sets the number of cycles to be used in the Bartels test.
BartSmoothPer: This input sets the period for the moving average used in the Bartels test.
BartSigLimit: This input sets the significance limit for the Bartels test, below which cycles are considered insignificant.
SortBartels: This boolean input determines whether the cycles should be sorted by their Bartels test results.
StartAtCycle: This input determines the starting index for selecting the top N cycles when UseCycleList is set to false. This allows you to skip a certain number of cycles from the top before selecting the desired number of cycles.
UseTopCycles: This input sets the number of top cycles to use for constructing the composite wave when UseCycleList is set to false. The cycles are ranked based on their amplitudes or cycle strengths, depending on the UseCycleStrength input.
SubtractNoise: This boolean input determines whether to subtract the noise (remaining cycles) from the composite wave. If set to true, the composite wave will only include the top N cycles specified by UseTopCycles.
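To make the mapping from these names to script inputs concrete, declarations along the following lines (a sketch with assumed default values, not the published source) would sit at the top of the Pine script:
//@version=5
indicator("Goertzel inputs sketch")
src              = input.source(close, "Source")
detrendornot     = input.string("hpsmthdt", "Detrend method", options = ["hpsmthdt", "zlagsmthdt", "logZlagRegression", "hpsmth", "zlagsmth"])
MaxPer           = input.int(120, "MaxPer", minval = 2)
UseCycleStrength = input.bool(false, "UseCycleStrength")
WindowSizePast   = input.int(240, "WindowSizePast", minval = 2)
FilterBartels    = input.bool(true, "FilterBartels")
StartAtCycle     = input.int(1, "StartAtCycle", minval = 1)
UseTopCycles     = input.int(3, "UseTopCycles", minval = 1)
SubtractNoise    = input.bool(false, "SubtractNoise")
plot(src)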
█ Exploring Auxiliary Functions
The following functions demonstrate advanced techniques for analyzing financial markets, including zero-lag moving averages, Bartels probability, detrending, and Hodrick-Prescott filtering. This section examines each function in detail, explaining its purpose, methodology, and applications in finance, and shows how these functions work together to make the indicator a powerful analytical tool.
Zero-Lag Moving Average:
The zero-lag moving average function is designed to minimize the lag typically associated with moving averages. This is achieved through a two-step weighted linear regression process that emphasizes more recent data points. The function calculates a linearly weighted moving average (LWMA) on the input data and then applies another LWMA on the result. By doing this, the function creates a moving average that closely follows the price action, reducing the lag and improving the responsiveness of the indicator.
The zero-lag moving average function is used in the indicator to provide a responsive, low-lag smoothing of the input data. This function helps reduce the noise and fluctuations in the data, making it easier to identify and analyze underlying trends and patterns. By minimizing the lag associated with traditional moving averages, this function allows the indicator to react more quickly to changes in market conditions, providing timely signals and improving the overall effectiveness of the indicator.
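Read literally, the two-pass LWMA described above can be sketched in a few lines of Pine; the published code's exact weighting scheme may differ:
//@version=5
indicator("Zero-lag MA sketch", overlay = true)
len = input.int(20, "Length", minval = 1)
// two-pass linearly weighted moving average, per the description above
zlma(s, l) =>
    ta.wma(ta.wma(s, l), l)
plot(zlma(close, len), "Smoothed")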
Bartels Probability:
The Bartels probability function calculates the probability of a given cycle being significant in a time series. It uses a mathematical test called the Bartels test to assess the significance of cycles detected in the data. The function calculates coefficients for each detected cycle and computes an average amplitude and an expected amplitude. By comparing these values, the Bartels probability is derived, indicating the likelihood of a cycle's significance. This information can help in identifying and analyzing dominant cycles in financial markets.
The Bartels probability function is incorporated into the indicator to assess the significance of detected cycles in the input data. By calculating the Bartels probability for each cycle, the indicator can prioritize the most significant cycles and focus on the market dynamics that are most relevant to the current trading environment. This function enhances the indicator's ability to identify dominant market cycles, improving its predictive power and aiding in the development of effective trading strategies.
Detrend Logarithmic Zero-Lag Regression:
The detrend logarithmic zero-lag regression function is used for detrending data while minimizing lag. It combines a zero-lag moving average with a linear regression detrending method. The function first calculates the zero-lag moving average of the logarithm of input data and then applies a linear regression to remove the trend. By detrending the data, the function isolates the cyclical components, making it easier to analyze and interpret the underlying market dynamics.
The detrend logarithmic zero-lag regression function is used in the indicator to isolate the cyclical components of the input data. By detrending the data, the function enables the indicator to focus on the cyclical movements in the market, making it easier to analyze and interpret market dynamics. This function is essential for identifying cyclical patterns and understanding the interactions between different market cycles, which can inform trading decisions and enhance overall market understanding.
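Combining the pieces named above (log transform, zero-lag smoothing, linear regression), the detrending step could be sketched as follows, with ta.linreg used as an endpoint least-squares stand-in for the function's regression fit:
//@version=5
indicator("Log zero-lag detrend sketch")
len = input.int(30, "Length", minval = 2)
zlma(s, l) =>
    ta.wma(ta.wma(s, l), l)  // same two-pass LWMA as above
smoothedLog = zlma(math.log(close), len)
trendLine   = ta.linreg(smoothedLog, len, 0)  // endpoint least-squares line
plot(smoothedLog - trendLine, "Detrended cyclical component")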
Bartels Cycle Significance Test:
The Bartels cycle significance test is a function that combines the Bartels probability function and the detrend logarithmic zero-lag regression function to assess the significance of detected cycles. The function calculates the Bartels probability for each cycle and stores the results in an array. By analyzing the probability values, traders and analysts can identify the most significant cycles in the data, which can be used to develop trading strategies and improve market understanding.
The Bartels cycle significance test function is integrated into the indicator to provide a comprehensive analysis of the significance of detected cycles. By combining the Bartels probability function and the detrend logarithmic zero-lag regression function, this test evaluates the significance of each cycle and stores the results in an array. The indicator can then use this information to prioritize the most significant cycles and focus on the most relevant market dynamics. This function enhances the indicator's ability to identify and analyze dominant market cycles, providing valuable insights for trading and market analysis.
Hodrick-Prescott Filter:
The Hodrick-Prescott filter is a popular technique used to separate the trend and cyclical components of a time series. The function applies a smoothing parameter to the input data and calculates a smoothed series using a two-sided filter. This smoothed series represents the trend component, which can be subtracted from the original data to obtain the cyclical component. The Hodrick-Prescott filter is commonly used in economics and finance to analyze economic data and financial market trends.
The Hodrick-Prescott filter is incorporated into the indicator to separate the trend and cyclical components of the input data. By applying the filter to the data, the indicator can isolate the trend component, which can be used to analyze long-term market trends and inform trading decisions. Additionally, the cyclical component can be used to identify shorter-term market dynamics and provide insights into potential trading opportunities. The inclusion of the Hodrick-Prescott filter adds another layer of analysis to the indicator, making it more versatile and comprehensive.
Detrending Options: Detrend Centered Moving Average:
The detrend centered moving average function provides different detrending methods, including the Hodrick-Prescott filter and the zero-lag moving average, based on the selected detrending method. The function calculates two sets of smoothed values using the chosen method and subtracts one set from the other to obtain a detrended series. By offering multiple detrending options, this function allows traders and analysts to select the most appropriate method for their specific needs and preferences.
The detrend centered moving average function is integrated into the indicator to provide users with multiple detrending options, including the Hodrick-Prescott filter and the zero-lag moving average. By offering multiple detrending methods, the indicator allows users to customize the analysis to their specific needs and preferences, enhancing the indicator's overall utility and adaptability. This function ensures that the indicator can cater to a wide range of trading styles and objectives, making it a valuable tool for a diverse group of market participants.
The auxiliary functions discussed in this section demonstrate the power and versatility of mathematical techniques in analyzing financial markets. By understanding and implementing these functions, traders and analysts can gain valuable insights into market dynamics, improve their trading strategies, and make more informed decisions. By combining the zero-lag moving average, Bartels probability, the detrending methods, and the Hodrick-Prescott filter, the indicator provides a comprehensive toolkit for analyzing and interpreting financial data.
█ In-Depth Analysis of the Goertzel Cycle Composite Wave Code
The Goertzel Cycle Composite Wave code is an implementation of the Goertzel Algorithm, an efficient technique to perform spectral analysis on a signal. The code is designed to detect and analyze dominant cycles within a given financial market data set. This section will provide an extremely detailed explanation of the code, its structure, functions, and intended purpose.
Function signature and input parameters:
The Goertzel Cycle Composite Wave function accepts numerous input parameters for customization, including source data (src), the current bar (forBar), sample size (samplesize), period (per), squared amplitude flag (squaredAmp), addition flag (useAddition), cosine flag (useCosine), cycle strength flag (UseCycleStrength), past sizes (WindowSizePast), Bartels filter flag (FilterBartels), Bartels-related parameters (BartNoCycles, BartSmoothPer, BartSigLimit), sorting flag (SortBartels), and output buffers (goeWorkPast, cyclebuffer, amplitudebuffer, phasebuffer, cycleBartelsBuffer).
Initializing variables and arrays:
The code initializes several float arrays (goeWork1, goeWork2, goeWork3, goeWork4) with the same length as twice the period (2 * per). These arrays store intermediate results during the execution of the algorithm.
Preprocessing input data:
The input data (src) undergoes preprocessing to remove linear trends. This step enhances the algorithm's ability to focus on cyclical components in the data. The linear trend is calculated by finding the slope between the first and last values of the input data within the sample.
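In array form, that first/last-point detrend is only a few lines; the names below are illustrative, not the published ones:
//@version=5
indicator("First/last-point detrend sketch", max_bars_back = 500)
n = 50  // sample size, illustrative
srcVal = array.new_float()
float lastDetrended = na
if bar_index >= n
    for i = 0 to n - 1
        array.push(srcVal, close[n - 1 - i])  // oldest value first
    // slope of the straight line through the first and last samples
    slope = (array.get(srcVal, n - 1) - array.get(srcVal, 0)) / (n - 1)
    for i = 0 to n - 1
        array.set(srcVal, i, array.get(srcVal, i) - slope * i)
    lastDetrended := array.get(srcVal, n - 1)
plot(lastDetrended, "Last detrended value")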
Iterative calculation of Goertzel coefficients:
The core of the Goertzel Cycle Composite Wave algorithm lies in the iterative calculation of Goertzel coefficients for each frequency bin. These coefficients represent the spectral content of the input data at different frequencies. The code iterates through the range of frequencies, calculating the Goertzel coefficients using a nested loop structure.
Cycle strength computation:
The code calculates the cycle strength based on the Goertzel coefficients. This is an optional step, controlled by the UseCycleStrength flag. The cycle strength provides information on the relative influence of each cycle on the data per bar, considering both amplitude and cycle length. The algorithm computes the cycle strength either by squaring the amplitude (controlled by squaredAmp flag) or using the actual amplitude values.
Phase calculation:
The Goertzel Cycle Composite Wave code computes the phase of each cycle, which represents the position of the cycle within the input data. The phase is calculated using the arctangent function (math.atan) based on the ratio of the imaginary and real components of the Goertzel coefficients.
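Symbolically, for cycle k with real and imaginary Goertzel components Re(G_k) and Im(G_k), the phase is
\varphi_k = \arctan\left(\frac{\operatorname{Im}(G_k)}{\operatorname{Re}(G_k)}\right)
with the usual caveat that the arctangent alone is two-quadrant, so the signs of the two components are typically inspected to recover the full angle.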
Peak detection and cycle extraction:
The algorithm performs peak detection on the computed amplitudes or cycle strengths to identify dominant cycles. It stores the detected cycles in the cyclebuffer array, along with their corresponding amplitudes and phases in the amplitudebuffer and phasebuffer arrays, respectively.
Sorting cycles by amplitude or cycle strength:
The code sorts the detected cycles based on their amplitude or cycle strength in descending order. This allows the algorithm to prioritize cycles with the most significant impact on the input data.
Bartels cycle significance test:
If the FilterBartels flag is set, the code performs a Bartels cycle significance test on the detected cycles. This test determines the statistical significance of each cycle and filters out the insignificant cycles. The significant cycles are stored in the cycleBartelsBuffer array. If the SortBartels flag is set, the code sorts the significant cycles based on their Bartels significance values.
Waveform calculation:
The Goertzel Cycle Composite Wave code calculates the waveform of the significant cycles for the specified time window, which is defined by the WindowSizePast parameter. The algorithm uses either cosine or sine functions (controlled by the useCosine flag) to calculate the waveforms for each cycle. The useAddition flag determines whether the waveforms should be added or subtracted.
Storing waveforms in a matrix:
The calculated waveform for each cycle is stored in the goeWorkPast matrix. This matrix holds the waveforms for the specified time window: each row represents a window position, and each column corresponds to a cycle.
Returning the number of cycles:
The Goertzel Cycle Composite Wave function returns the total number of detected cycles (number_of_cycles) after processing the input data. This information can be used to further analyze the results or to visualize the detected cycles.
The Goertzel Cycle Composite Wave code is a comprehensive implementation of the Goertzel Algorithm, specifically designed for detecting and analyzing dominant cycles within financial market data. The code offers a high level of customization, allowing users to fine-tune the algorithm based on their specific needs. The Goertzel Cycle Composite Wave's combination of preprocessing, iterative calculations, cycle extraction, sorting, significance testing, and waveform calculation makes it a powerful tool for understanding cyclical components in financial data.
█ Generating and Visualizing Composite Waveform
The indicator calculates and visualizes the composite waveform for specified time windows based on the detected cycles. Here's a detailed explanation of this process:
Updating WindowSizePast:
WindowSizePast is updated to ensure it is at least twice MaxPer (the maximum period), as sketched below.
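In code this guard is essentially a one-liner; a sketch using the documented names:
//@version=5
indicator("Window clamp sketch")
MaxPer           = input.int(120, "MaxPer", minval = 2)
WindowSizePastIn = input.int(100, "WindowSizePast", minval = 2)
// the window must cover at least two full cycles of the longest period searched
WindowSizePast   = math.max(WindowSizePastIn, 2 * MaxPer)
plot(WindowSizePast)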
Initializing matrices and arrays:
The matrix goeWorkPast is initialized to store the Goertzel results for specified time windows. Multiple arrays are also initialized to store cycle, amplitude, phase, and Bartels information.
Preparing the source data (srcVal) array:
The source data is copied into an array, srcVal, and detrended using one of the selected methods (hpsmthdt, zlagsmthdt, logZlagRegression, hpsmth, or zlagsmth).
Goertzel function call:
The Goertzel function is called to analyze the detrended source data and extract cycle information. The output, number_of_cycles, contains the number of detected cycles.
Initializing arrays for waveforms:
The goertzel array is initialized to store the endpoint values of the composite wave.
Calculating composite waveform (goertzel array):
The composite waveform is calculated by summing the selected cycles (either from the user-defined cycle list or the top cycles) and optionally subtracting the noise component.
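A sketch of that summation (illustrative only: here goeWorkPast is pre-filled with dummy sine values rather than real Goertzel output):
//@version=5
indicator("Composite sum sketch")
// goeWorkPast holds one column per detected cycle and one row per window position
rows = 50
cols = 5
goeWorkPast = matrix.new<float>(rows, cols, 0.0)
for r = 0 to rows - 1
    for c = 0 to cols - 1
        matrix.set(goeWorkPast, r, c, math.sin(2.0 * math.pi * r / (10.0 * (c + 1))))
StartAtCycle = 1
UseTopCycles = 3
// composite value at one window position: sum of the selected top cycles
float goertzel = 0.0
for c = StartAtCycle - 1 to StartAtCycle + UseTopCycles - 2
    goertzel += matrix.get(goeWorkPast, 0, c)
plot(goertzel, "Composite value at window position 0")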
Drawing composite waveform (pvlines):
The composite waveform is drawn on the chart using solid lines. The color of the lines is determined by the direction of the waveform (green for upward, red for downward).
To summarize, this indicator generates a composite waveform based on the detected cycles in the financial data. It calculates the composite waveforms and visualizes them on the chart using colored lines.
█ Enhancing the Goertzel Algorithm-Based Script for Financial Modeling and Trading
The Goertzel algorithm-based script for detecting dominant cycles in financial data is a powerful tool for financial modeling and trading. It provides valuable insights into the past behavior of these cycles. However, as with any algorithm, there is always room for improvement. This section discusses potential enhancements to the existing script to make it even more robust and versatile for financial modeling, general trading, advanced trading, and high-frequency finance trading.
Enhancements for Financial Modeling
Data preprocessing: One way to improve the script's performance for financial modeling is to introduce more advanced data preprocessing techniques. This could include removing outliers, handling missing data, and normalizing the data to ensure consistent and accurate results.
Additional detrending and smoothing methods: Incorporating more sophisticated detrending and smoothing techniques, such as wavelet transform or empirical mode decomposition, can help improve the script's ability to accurately identify cycles and trends in the data.
Machine learning integration: Integrating machine learning techniques, such as artificial neural networks or support vector machines, can help enhance the script's predictive capabilities, leading to more accurate financial models.
Enhancements for General and Advanced Trading
Customizable indicator integration: Allowing users to integrate their own technical indicators can help improve the script's effectiveness for both general and advanced trading. By enabling the combination of the dominant cycle information with other technical analysis tools, traders can develop more comprehensive trading strategies.
Risk management and position sizing: Incorporating risk management and position sizing functionality into the script can help traders better manage their trades and control potential losses. This can be achieved by calculating the optimal position size based on the user's risk tolerance and account size.
Multi-timeframe analysis: Enhancing the script to perform multi-timeframe analysis can provide traders with a more holistic view of market trends and cycles. By identifying dominant cycles on different timeframes, traders can gain insights into the potential confluence of cycles and make better-informed trading decisions.
Enhancements for High-Frequency Finance Trading
Algorithm optimization: To ensure the script's suitability for high-frequency finance trading, optimizing the algorithm for faster execution is crucial. This can be achieved by employing efficient data structures and refining the calculation methods to minimize computational complexity.
Real-time data streaming: Integrating real-time data streaming capabilities into the script can help high-frequency traders react to market changes more quickly. By continuously updating the cycle information based on real-time market data, traders can adapt their strategies accordingly and capitalize on short-term market fluctuations.
Order execution and trade management: To fully leverage the script's capabilities for high-frequency trading, implementing functionality for automated order execution and trade management is essential. This can include features such as stop-loss and take-profit orders, trailing stops, and automated trade exit strategies.
While the existing Goertzel algorithm-based script is a valuable tool for detecting dominant cycles in financial data, there are several potential enhancements that can make it even more powerful for financial modeling, general trading, advanced trading, and high-frequency finance trading. By incorporating these improvements, the script can become a more versatile and effective tool for traders and financial analysts alike.
█ Understanding the Limitations of the Goertzel Algorithm
While the Goertzel algorithm-based script for detecting dominant cycles in financial data provides valuable insights, it is important to be aware of its limitations and drawbacks. Some of the key drawbacks of this indicator are:
Lagging nature:
As with many other technical indicators, the Goertzel algorithm-based script can suffer from lagging effects, meaning that it may not immediately react to real-time market changes. This lag can lead to late entries and exits, potentially resulting in reduced profitability or increased losses.
Parameter sensitivity:
The performance of the script can be sensitive to the chosen parameters, such as the detrending methods, smoothing techniques, and cycle detection settings. Improper parameter selection may lead to inaccurate cycle detection or increased false signals, which can negatively impact trading performance.
Complexity:
The Goertzel algorithm itself is relatively complex, making it difficult for novice traders or those unfamiliar with the concept of cycle analysis to fully understand and effectively utilize the script. This complexity can also make it challenging to optimize the script for specific trading styles or market conditions.
Overfitting risk:
As with any data-driven approach, there is a risk of overfitting when using the Goertzel algorithm-based script. Overfitting occurs when a model becomes too specific to the historical data it was trained on, leading to poor performance on new, unseen data. This can result in misleading signals and reduced trading performance.
Limited applicability:
The Goertzel algorithm-based script may not be suitable for all markets, trading styles, or timeframes. Its effectiveness in detecting cycles may be limited in certain market conditions, such as during periods of extreme volatility or low liquidity.
While the Goertzel algorithm-based script offers valuable insights into dominant cycles in financial data, it is essential to consider its drawbacks and limitations when incorporating it into a trading strategy. Traders should always use the script in conjunction with other technical and fundamental analysis tools, as well as proper risk management, to make well-informed trading decisions.
█ Interpreting Results
The Goertzel Cycle Composite Wave indicator is interpreted by analyzing its plotted line: the composite wave, which represents the combined cyclical component extracted from the price data.
The composite wave is drawn as a solid line, colored green when the wave is rising (bullish) and red when it is falling (bearish).
Interpreting the indicator therefore comes down to reading the direction of the composite wave and its corresponding bullish or bearish color.
█ Conclusion
The Goertzel Cycle Composite Wave indicator is a powerful tool for identifying and analyzing cyclical patterns in financial markets. Its ability to detect multiple cycles of varying frequencies and strengths makes it a valuable addition to any trader's technical analysis toolkit. However, it is important to keep in mind that the Goertzel Cycle Composite Wave indicator should be used in conjunction with other technical analysis tools and fundamental analysis to achieve the best results. With continued refinement and development, the Goertzel Cycle Composite Wave indicator has the potential to become a highly effective tool for financial modeling, general trading, advanced trading, and high-frequency finance trading. Its accuracy and versatility make it a promising candidate for further research and development.
█ Footnotes
What is the Bartels Test for Cycle Significance?
The Bartels Cycle Significance Test is a statistical method that determines whether the peaks and troughs of a time series are statistically significant. The test is named after its inventor, the geophysicist Julius Bartels, who developed it in the first half of the 20th century.
The Bartels test is designed to analyze the cyclical components of a time series, which can help traders and analysts identify trends and cycles in financial markets. The test calculates a Bartels statistic, which measures the degree of non-randomness or autocorrelation in the time series.
The Bartels statistic is calculated by first splitting the time series into two halves and calculating the range of the peaks and troughs in each half. The test then compares these ranges using a t-test, which measures the significance of the difference between the two ranges.
If the Bartels statistic is greater than a critical value, it indicates that the peaks and troughs in the time series are non-random and that there is a significant cyclical component to the data. Conversely, if the Bartels statistic is less than the critical value, it suggests that the peaks and troughs are random and that there is no significant cyclical component.
The Bartels Cycle Significance Test is particularly useful in financial analysis because it can help traders and analysts identify significant cycles in asset prices, which can in turn inform investment decisions. However, it is important to note that the test is not perfect and can produce false signals in certain situations, particularly in noisy or volatile markets. Therefore, it is always recommended to use the test in conjunction with other technical and fundamental indicators to confirm trends and cycles.
Deep-dive into the Hodrick-Prescott Filter
The Hodrick-Prescott (HP) filter is a statistical tool used in economics and finance to separate a time series into two components: a trend component and a cyclical component. It is a powerful tool for identifying long-term trends in economic and financial data and is widely used by economists, central banks, and financial institutions around the world.
The HP filter was developed by economists Robert Hodrick and Edward Prescott; it circulated as a working paper in the early 1980s and was formally published in 1997. It is a simple filter with a single smoothing parameter that separates a time series into a trend component and a cyclical component. The trend component represents the long-term behavior of the data, while the cyclical component captures the shorter-term fluctuations around the trend.
The HP filter works by minimizing the following objective function:
Minimize: (Sum of Squared Deviations) + λ (Sum of Squared Second Differences)
Where:
1. The first term represents the deviation of the data from the trend.
2. The second term represents the smoothness of the trend.
3. λ is a smoothing parameter that determines the degree of smoothness of the trend.
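Written out explicitly for data y_t and trend τ_t, the objective the filter minimizes is
\min_{\{\tau_t\}} \sum_{t=1}^{T} (y_t - \tau_t)^2 + \lambda \sum_{t=2}^{T-1} \big[ (\tau_{t+1} - \tau_t) - (\tau_t - \tau_{t-1}) \big]^2
where the first sum penalizes the cyclical deviations from the trend and the second penalizes curvature in the trend itself.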
The smoothing parameter λ is chosen according to the frequency of the data: common conventions are 100 for annual data, 1600 for quarterly data, and 14,400 for monthly data. Higher values of λ produce a smoother trend, while lower values let the trend follow the data more closely.
The HP filter has several advantages over other smoothing techniques. It is a non-parametric method, meaning that it does not make any assumptions about the underlying distribution of the data. It also allows for easy comparison of trends across different time series and can be used with data of any frequency.
However, the HP filter also has some limitations. It assumes that the trend is a smooth function, which may not be the case in some situations. It can also be sensitive to changes in the smoothing parameter λ, which may result in different trends for the same data. Additionally, the filter may produce unrealistic trends for very short time series.
Despite these limitations, the HP filter remains a valuable tool for analyzing economic and financial data. It is widely used by central banks and financial institutions to monitor long-term trends in the economy, and it can be used to identify turning points in the business cycle. The filter can also be used to analyze asset prices, exchange rates, and other financial variables.
The Hodrick-Prescott filter is a powerful tool for analyzing economic and financial data. It separates a time series into a trend component and a cyclical component, allowing for easy identification of long-term trends and turning points in the business cycle. While it has some limitations, it remains a valuable tool for economists, central banks, and financial institutions around the world.
Goertzel Browser [Loxx]
As the financial markets become increasingly complex and data-driven, traders and analysts must leverage powerful tools to gain insights and make informed decisions. One such tool is the Goertzel Browser, a sophisticated technical analysis indicator that detects cyclical patterns in financial data, helping traders make better predictions and optimize their trading strategies. With its unique combination of mathematical algorithms and advanced charting capabilities, this indicator has the potential to revolutionize the way we approach financial modeling and trading.
█ Brief Overview of the Goertzel Browser
The Goertzel Browser is a sophisticated technical analysis tool that utilizes the Goertzel algorithm to analyze and visualize cyclical components within a financial time series. By identifying these cycles and their characteristics, the indicator aims to provide valuable insights into the market's underlying price movements, which could potentially be used for making informed trading decisions.
The primary purpose of this indicator is to:
1. Detect and analyze the dominant cycles present in the price data.
2. Reconstruct and visualize the composite wave based on the detected cycles.
3. Project the composite wave into the future, providing a potential roadmap for upcoming price movements.
To achieve this, the indicator performs several tasks:
1. Detrending the price data: The indicator preprocesses the price data using various detrending techniques, such as Hodrick-Prescott filters, zero-lag moving averages, and linear regression, to remove the underlying trend and focus on the cyclical components.
2. Applying the Goertzel algorithm: The indicator applies the Goertzel algorithm to the detrended price data, identifying the dominant cycles and their characteristics, such as amplitude, phase, and cycle strength.
3. Constructing the composite wave: The indicator reconstructs the composite wave by combining the detected cycles, either by using a user-defined list of cycles or by selecting the top N cycles based on their amplitude or cycle strength.
4. Visualizing the composite wave: The indicator plots the composite wave, using solid lines for the past and dotted lines for the future projections. The color of the lines indicates whether the wave is increasing or decreasing.
5. Displaying cycle information: The indicator provides a table that displays detailed information about the detected cycles, including their rank, period, Bartels test results, amplitude, and phase.
This indicator is a powerful tool that employs the Goertzel algorithm to analyze and visualize the cyclical components within a financial time series. By providing insights into the underlying price movements and their potential future trajectory, the indicator aims to assist traders in making more informed decisions.
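To illustrate the future-projection idea in isolation (a sketch with hypothetical cycle parameters; in the Browser these come from the Goertzel step), a single detected cycle can be extended past the last bar with dotted line segments:
//@version=5
indicator("Cycle projection sketch", overlay = true)
A   = input.float(5.0, "Amplitude")     // hypothetical cycle amplitude
per = input.int(20, "Period", minval = 2)
ph  = input.float(0.0, "Phase (radians)")
if barstate.islast
    float prevY = close
    for i = 1 to per
        y = close + A * math.cos(2.0 * math.pi * i / per + ph)
        // dotted segments drawn into the future; Pine drops the oldest lines
        // automatically once the drawing limit is reached
        line.new(bar_index + i - 1, prevY, bar_index + i, y, xloc = xloc.bar_index, color = color.orange, style = line.style_dotted)
        prevY := y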
█ What is the Goertzel Algorithm?
The Goertzel algorithm, named after Gerald Goertzel, is a digital signal processing technique that is used to efficiently compute individual terms of the Discrete Fourier Transform (DFT). It was first introduced in 1958, and since then, it has found various applications in the fields of engineering, mathematics, and physics.
The Goertzel algorithm is primarily used to detect specific frequency components within a digital signal, making it particularly useful in applications where only a few frequency components are of interest. The algorithm is computationally efficient, as it requires fewer calculations than the Fast Fourier Transform (FFT) when detecting a small number of frequency components. This efficiency makes the Goertzel algorithm a popular choice in applications such as:
1. Telecommunications: The Goertzel algorithm is used for decoding Dual-Tone Multi-Frequency (DTMF) signals, which are the tones generated when pressing buttons on a telephone keypad. By identifying specific frequency components, the algorithm can accurately determine which button has been pressed.
2. Audio processing: The algorithm can be used to detect specific pitches or harmonics in an audio signal, making it useful in applications like pitch detection and tuning musical instruments.
3. Vibration analysis: In the field of mechanical engineering, the Goertzel algorithm can be applied to analyze vibrations in rotating machinery, helping to identify faulty components or signs of wear.
4. Power system analysis: The algorithm can be used to measure harmonic content in power systems, allowing engineers to assess power quality and detect potential issues.
The Goertzel algorithm is used in these applications because it offers several advantages over other methods, such as the FFT:
1. Computational efficiency: The Goertzel algorithm requires fewer calculations when detecting a small number of frequency components, making it more computationally efficient than the FFT in these cases.
2. Real-time analysis: The algorithm can be implemented in a streaming fashion, allowing for real-time analysis of signals, which is crucial in applications like telecommunications and audio processing.
3. Memory efficiency: The Goertzel algorithm requires less memory than the FFT, as it only computes the frequency components of interest.
4. Precision: The algorithm is less susceptible to numerical errors compared to the FFT, ensuring more accurate results in applications where precision is essential.
The Goertzel algorithm is an efficient digital signal processing technique that is primarily used to detect specific frequency components within a signal. Its computational efficiency, real-time capabilities, and precision make it an attractive choice for various applications, including telecommunications, audio processing, vibration analysis, and power system analysis. The algorithm has been widely adopted since its introduction in 1958 and continues to be an essential tool in the fields of engineering, mathematics, and physics.
█ Goertzel Algorithm in Quantitative Finance: In-Depth Analysis and Applications
The Goertzel algorithm, initially designed for signal processing in telecommunications, has gained significant traction in the financial industry due to its efficient frequency detection capabilities. In quantitative finance, the Goertzel algorithm has been utilized for uncovering hidden market cycles, developing data-driven trading strategies, and optimizing risk management. This section delves deeper into the applications of the Goertzel algorithm in finance, particularly within the context of quantitative trading and analysis.
Unveiling Hidden Market Cycles:
Market cycles are prevalent in financial markets and arise from various factors, such as economic conditions, investor psychology, and market participant behavior. The Goertzel algorithm's ability to detect and isolate specific frequencies in price data helps traders and analysts identify hidden market cycles that may otherwise go unnoticed. By examining the amplitude, phase, and periodicity of each cycle, traders can better understand the underlying market structure and dynamics, enabling them to develop more informed and effective trading strategies.
Developing Quantitative Trading Strategies:
The Goertzel algorithm's versatility allows traders to incorporate its insights into a wide range of trading strategies. By identifying the dominant market cycles in a financial instrument's price data, traders can create data-driven strategies that capitalize on the cyclical nature of markets.
For instance, a trader may develop a mean-reversion strategy that takes advantage of the identified cycles. By establishing positions when the price deviates from the predicted cycle, the trader can profit from the subsequent reversion to the cycle's mean. Similarly, a momentum-based strategy could be designed to exploit the persistence of a dominant cycle by entering positions that align with the cycle's direction.
Enhancing Risk Management:
The Goertzel algorithm plays a vital role in risk management for quantitative strategies. By analyzing the cyclical components of a financial instrument's price data, traders can gain insights into the potential risks associated with their trading strategies.
By monitoring the amplitude and phase of dominant cycles, a trader can detect changes in market dynamics that may pose risks to their positions. For example, a sudden increase in amplitude may indicate heightened volatility, prompting the trader to adjust position sizing or employ hedging techniques to protect their portfolio. Additionally, changes in phase alignment could signal a potential shift in market sentiment, necessitating adjustments to the trading strategy.
Expanding Quantitative Toolkits:
Traders can augment the Goertzel algorithm's insights by combining it with other quantitative techniques, creating a more comprehensive and sophisticated analysis framework. For example, machine learning algorithms, such as neural networks or support vector machines, could be trained on features extracted from the Goertzel algorithm to predict future price movements more accurately.
Furthermore, the Goertzel algorithm can be integrated with other technical analysis tools, such as moving averages or oscillators, to enhance their effectiveness. By applying these tools to the identified cycles, traders can generate more robust and reliable trading signals.
The Goertzel algorithm offers invaluable benefits to quantitative finance practitioners by uncovering hidden market cycles, aiding in the development of data-driven trading strategies, and improving risk management. By leveraging the insights provided by the Goertzel algorithm and integrating it with other quantitative techniques, traders can gain a deeper understanding of market dynamics and devise more effective trading strategies.
█ Indicator Inputs
src: This is the source data for the analysis, typically the closing price of the financial instrument.
detrendornot: This input determines the method used for detrending the source data. Detrending is the process of removing the underlying trend from the data to focus on the cyclical components.
The available options are:
hpsmthdt: Detrend using Hodrick-Prescott filter centered moving average.
zlagsmthdt: Detrend using zero-lag moving average centered moving average.
logZlagRegression: Detrend using logarithmic zero-lag linear regression.
hpsmth: Detrend using Hodrick-Prescott filter.
zlagsmth: Detrend using zero-lag moving average.
DT_HPper1 and DT_HPper2: These inputs define the period range for the Hodrick-Prescott filter centered moving average when detrendornot is set to hpsmthdt.
DT_ZLper1 and DT_ZLper2: These inputs define the period range for the zero-lag moving average centered moving average when detrendornot is set to zlagsmthdt.
DT_RegZLsmoothPer: This input defines the period for the zero-lag moving average used in logarithmic zero-lag linear regression when detrendornot is set to logZlagRegression.
HPsmoothPer: This input defines the period for the Hodrick-Prescott filter when detrendornot is set to hpsmth.
ZLMAsmoothPer: This input defines the period for the zero-lag moving average when detrendornot is set to zlagsmth.
MaxPer: This input sets the maximum period for the Goertzel algorithm to search for cycles.
squaredAmp: This boolean input determines whether the amplitude should be squared in the Goertzel algorithm.
useAddition: This boolean input determines whether the Goertzel algorithm should use addition for combining the cycles.
useCosine: This boolean input determines whether the Goertzel algorithm should use cosine waves instead of sine waves.
UseCycleStrength: This boolean input determines whether the Goertzel algorithm should compute the cycle strength, which is a normalized measure of the cycle's amplitude.
WindowSizePast and WindowSizeFuture: These inputs define the window size for past and future projections of the composite wave.
FilterBartels: This boolean input determines whether the Bartels test should be applied to filter out non-significant cycles.
BartNoCycles: This input sets the number of cycles to be used in the Bartels test.
BartSmoothPer: This input sets the period for the moving average used in the Bartels test.
BartSigLimit: This input sets the significance limit for the Bartels test, below which cycles are considered insignificant.
SortBartels: This boolean input determines whether the cycles should be sorted by their Bartels test results.
UseCycleList: This boolean input determines whether a user-defined list of cycles should be used for constructing the composite wave. If set to false, the top N cycles will be used.
Cycle1, Cycle2, Cycle3, Cycle4, and Cycle5: These inputs define the user-defined list of cycles when UseCycleList is set to true. If using a user-defined list, each of these inputs represents the period of a specific cycle to include in the composite wave.
StartAtCycle: This input determines the starting index for selecting the top N cycles when UseCycleList is set to false. This allows you to skip a certain number of cycles from the top before selecting the desired number of cycles.
UseTopCycles: This input sets the number of top cycles to use for constructing the composite wave when UseCycleList is set to false. The cycles are ranked based on their amplitudes or cycle strengths, depending on the UseCycleStrength input.
SubtractNoise: This boolean input determines whether to subtract the noise (remaining cycles) from the composite wave. If set to true, the composite wave will only include the top N cycles specified by UseTopCycles.
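A sketch of how the cycle-list switch might be resolved (dummy data, and only the first three of the five Cycle inputs for brevity; names follow the documentation but the logic is illustrative):
//@version=5
indicator("Cycle selection sketch")
UseCycleList = input.bool(false, "UseCycleList")
Cycle1       = input.int(20, "Cycle1")
Cycle2       = input.int(40, "Cycle2")
Cycle3       = input.int(80, "Cycle3")
StartAtCycle = input.int(1, "StartAtCycle", minval = 1)
UseTopCycles = input.int(3, "UseTopCycles", minval = 1)
// cyclebuffer would normally be filled by the Goertzel step; dummy values here,
// already ranked by amplitude or cycle strength
cyclebuffer = array.from(60.0, 40.0, 25.0, 15.0, 10.0)
periods  = array.new_float()
userList = array.from(Cycle1, Cycle2, Cycle3)
if UseCycleList
    for p in userList  // take the user-specified periods directly
        if p > 0
            array.push(periods, p)
else
    // take UseTopCycles entries starting at StartAtCycle (1-based)
    for i = StartAtCycle - 1 to StartAtCycle + UseTopCycles - 2
        if i < array.size(cyclebuffer)
            array.push(periods, array.get(cyclebuffer, i))
plot(array.size(periods), "Number of cycles used")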
█ Exploring Auxiliary Functions
The following functions demonstrate advanced techniques for analyzing financial markets, including zero-lag moving averages, Bartels probability, detrending, and Hodrick-Prescott filtering. This section examines each function in detail, explaining its purpose, methodology, and applications in finance, and shows how these functions work together to make the indicator a powerful analytical tool.
Zero-Lag Moving Average:
The zero-lag moving average function is designed to minimize the lag typically associated with moving averages. This is achieved through a two-step weighted linear regression process that emphasizes more recent data points. The function calculates a linearly weighted moving average (LWMA) on the input data and then applies another LWMA on the result. By doing this, the function creates a moving average that closely follows the price action, reducing the lag and improving the responsiveness of the indicator.
The zero-lag moving average function is used in the indicator to provide a responsive, low-lag smoothing of the input data. This function helps reduce the noise and fluctuations in the data, making it easier to identify and analyze underlying trends and patterns. By minimizing the lag associated with traditional moving averages, this function allows the indicator to react more quickly to changes in market conditions, providing timely signals and improving the overall effectiveness of the indicator.
Bartels Probability:
The Bartels probability function calculates the probability of a given cycle being significant in a time series. It uses a mathematical test called the Bartels test to assess the significance of cycles detected in the data. The function calculates coefficients for each detected cycle and computes an average amplitude and an expected amplitude. By comparing these values, the Bartels probability is derived, indicating the likelihood of a cycle's significance. This information can help in identifying and analyzing dominant cycles in financial markets.
The Bartels probability function is incorporated into the indicator to assess the significance of detected cycles in the input data. By calculating the Bartels probability for each cycle, the indicator can prioritize the most significant cycles and focus on the market dynamics that are most relevant to the current trading environment. This function enhances the indicator's ability to identify dominant market cycles, improving its predictive power and aiding in the development of effective trading strategies.
Detrend Logarithmic Zero-Lag Regression:
The detrend logarithmic zero-lag regression function is used for detrending data while minimizing lag. It combines a zero-lag moving average with a linear regression detrending method. The function first calculates the zero-lag moving average of the logarithm of input data and then applies a linear regression to remove the trend. By detrending the data, the function isolates the cyclical components, making it easier to analyze and interpret the underlying market dynamics.
The detrend logarithmic zero-lag regression function is used in the indicator to isolate the cyclical components of the input data. By detrending the data, the function enables the indicator to focus on the cyclical movements in the market, making it easier to analyze and interpret market dynamics. This function is essential for identifying cyclical patterns and understanding the interactions between different market cycles, which can inform trading decisions and enhance overall market understanding.
Bartels Cycle Significance Test:
The Bartels cycle significance test is a function that combines the Bartels probability function and the detrend logarithmic zero-lag regression function to assess the significance of detected cycles. The function calculates the Bartels probability for each cycle and stores the results in an array. By analyzing the probability values, traders and analysts can identify the most significant cycles in the data, which can be used to develop trading strategies and improve market understanding.
The Bartels cycle significance test function is integrated into the indicator to provide a comprehensive analysis of the significance of detected cycles. By combining the Bartels probability function and the detrend logarithmic zero-lag regression function, this test evaluates the significance of each cycle and stores the results in an array. The indicator can then use this information to prioritize the most significant cycles and focus on the most relevant market dynamics. This function enhances the indicator's ability to identify and analyze dominant market cycles, providing valuable insights for trading and market analysis.
Hodrick-Prescott Filter:
The Hodrick-Prescott filter is a popular technique used to separate the trend and cyclical components of a time series. The function applies a smoothing parameter to the input data and calculates a smoothed series using a two-sided filter. This smoothed series represents the trend component, which can be subtracted from the original data to obtain the cyclical component. The Hodrick-Prescott filter is commonly used in economics and finance to analyze economic data and financial market trends.
The Hodrick-Prescott filter is incorporated into the indicator to separate the trend and cyclical components of the input data. By applying the filter to the data, the indicator can isolate the trend component, which can be used to analyze long-term market trends and inform trading decisions. Additionally, the cyclical component can be used to identify shorter-term market dynamics and provide insights into potential trading opportunities. The inclusion of the Hodrick-Prescott filter adds another layer of analysis to the indicator, making it more versatile and comprehensive.
Detrending Options: Detrend Centered Moving Average:
The detrend centered moving average function provides different detrending methods, including the Hodrick-Prescott filter and the zero-lag moving average, based on the selected detrending method. The function calculates two sets of smoothed values using the chosen method and subtracts one set from the other to obtain a detrended series. By offering multiple detrending options, this function allows traders and analysts to select the most appropriate method for their specific needs and preferences.
The detrend centered moving average function is integrated into the indicator to provide users with multiple detrending options, including the Hodrick-Prescott filter and the zero-lag moving average. By offering multiple detrending methods, the indicator allows users to customize the analysis to their specific needs and preferences, enhancing the indicator's overall utility and adaptability. This function ensures that the indicator can cater to a wide range of trading styles and objectives, making it a valuable tool for a diverse group of market participants.
The auxiliary functions discussed in this section demonstrate the power and versatility of mathematical techniques in analyzing financial markets. By understanding and implementing these functions, traders and analysts can gain valuable insights into market dynamics, improve their trading strategies, and make more informed decisions. The combination of zero-lag moving averages, Bartels probability, detrending methods, and the Hodrick-Prescott filter provides a comprehensive toolkit for analyzing and interpreting financial data, and their integration into a single indicator creates a powerful and versatile analytical tool that can provide valuable insights into financial markets.
█ In-Depth Analysis of the Goertzel Browser Code
The Goertzel Browser code is an implementation of the Goertzel Algorithm, an efficient technique to perform spectral analysis on a signal. The code is designed to detect and analyze dominant cycles within a given financial market data set. This section will provide an extremely detailed explanation of the code, its structure, functions, and intended purpose.
Function signature and input parameters:
The Goertzel Browser function accepts numerous input parameters for customization, including source data (src), the current bar (forBar), sample size (samplesize), period (per), squared amplitude flag (squaredAmp), addition flag (useAddition), cosine flag (useCosine), cycle strength flag (UseCycleStrength), past and future window sizes (WindowSizePast, WindowSizeFuture), Bartels filter flag (FilterBartels), Bartels-related parameters (BartNoCycles, BartSmoothPer, BartSigLimit), sorting flag (SortBartels), and output buffers (goeWorkPast, goeWorkFuture, cyclebuffer, amplitudebuffer, phasebuffer, cycleBartelsBuffer).
Initializing variables and arrays:
The code initializes several float arrays (goeWork1, goeWork2, goeWork3, goeWork4) with the same length as twice the period (2 * per). These arrays store intermediate results during the execution of the algorithm.
Preprocessing input data:
The input data (src) undergoes preprocessing to remove linear trends. This step enhances the algorithm's ability to focus on cyclical components in the data. The linear trend is calculated by finding the slope between the first and last values of the input data within the sample.
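For illustration, below is a minimal Pine Script v5 sketch of this style of endpoint-based linear detrending. The function name and array-based interface are assumptions for the example, not the script's actual code:

```pine
//@version=5
indicator("Linear detrend sketch")

// Subtract the straight line running from the first sample to the last,
// leaving only the fluctuations around that line.
detrendLinear(array<float> src) =>
    sz = array.size(src)
    first = array.get(src, 0)
    slope = (array.get(src, sz - 1) - first) / (sz - 1)
    out = array.new_float(sz)
    for i = 0 to sz - 1
        array.set(out, i, array.get(src, i) - (first + slope * i))
    out

// Usage: detrend the last 50 closes and plot the newest residual.
lookback = 50
prices = array.new_float()
for i = 0 to lookback - 1
    array.push(prices, close[lookback - 1 - i])
residuals = detrendLinear(prices)
plot(array.get(residuals, lookback - 1))
```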
Iterative calculation of Goertzel coefficients:
The core of the Goertzel Browser algorithm lies in the iterative calculation of Goertzel coefficients for each frequency bin. These coefficients represent the spectral content of the input data at different frequencies. The code iterates through the range of frequencies, calculating the Goertzel coefficients using a nested loop structure.
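The recurrence at the heart of this step is the textbook Goertzel update, s[n] = x[n] + 2·cos(w)·s[n-1] − s[n-2]. Below is a hedged Pine v5 sketch of that recurrence for a single candidate period; all names are illustrative, and the full indicator repeats this for every period it scans:

```pine
//@version=5
indicator("Goertzel bin sketch")

// Classic Goertzel recurrence for one target period:
//   s[n] = x[n] + 2*cos(w)*s[n-1] - s[n-2]
goertzelBin(array<float> x, float period) =>
    w = 2.0 * math.pi / period
    coeff = 2.0 * math.cos(w)
    s1 = 0.0
    s2 = 0.0
    for i = 0 to array.size(x) - 1
        s0 = array.get(x, i) + coeff * s1 - s2
        s2 := s1
        s1 := s0
    realPart = s1 - s2 * math.cos(w)  // real component at this frequency
    imagPart = s2 * math.sin(w)       // imaginary component
    amp = math.sqrt(realPart * realPart + imagPart * imagPart)
    phase = math.atan(imagPart / realPart)  // mirrors the phase step below
    [amp, phase]

// Usage: amplitude of a hypothetical 20-bar cycle in the last 100 closes.
src = array.new_float()
for i = 0 to 99
    array.push(src, close[99 - i])
[amp, phase] = goertzelBin(src, 20.0)
plot(amp)
```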
Cycle strength computation:
The code calculates the cycle strength based on the Goertzel coefficients. This is an optional step, controlled by the UseCycleStrength flag. The cycle strength provides information on the relative influence of each cycle on the data per bar, considering both amplitude and cycle length. The algorithm computes the cycle strength either by squaring the amplitude (controlled by squaredAmp flag) or using the actual amplitude values.
Phase calculation:
The Goertzel Browser code computes the phase of each cycle, which represents the position of the cycle within the input data. The phase is calculated using the arctangent function (math.atan) based on the ratio of the imaginary and real components of the Goertzel coefficients.
Peak detection and cycle extraction:
The algorithm performs peak detection on the computed amplitudes or cycle strengths to identify dominant cycles. It stores the detected cycles in the cyclebuffer array, along with their corresponding amplitudes and phases in the amplitudebuffer and phasebuffer arrays, respectively.
Sorting cycles by amplitude or cycle strength:
The code sorts the detected cycles based on their amplitude or cycle strength in descending order. This allows the algorithm to prioritize cycles with the most significant impact on the input data.
Bartels cycle significance test:
If the FilterBartels flag is set, the code performs a Bartels cycle significance test on the detected cycles. This test determines the statistical significance of each cycle and filters out the insignificant cycles. The significant cycles are stored in the cycleBartelsBuffer array. If the SortBartels flag is set, the code sorts the significant cycles based on their Bartels significance values.
Waveform calculation:
The Goertzel Browser code calculates the waveform of the significant cycles for both past and future time windows. The past and future windows are defined by the WindowSizePast and WindowSizeFuture parameters, respectively. The algorithm uses either cosine or sine functions (controlled by the useCosine flag) to calculate the waveforms for each cycle. The useAddition flag determines whether the waveforms should be added or subtracted.
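In sketch form, a single cycle's contribution under the useCosine and useAddition switches might look like the following (Pine v5; the names and exact phase convention are assumptions, not the author's code):

```pine
//@version=5
indicator("Cycle waveform sketch")

// One cycle's contribution at window offset t, following the useCosine /
// useAddition switches described above.
cycleWave(float amp, float phase, float period, int t, bool useCosine, bool useAddition) =>
    w = 2.0 * math.pi / period
    contrib = useCosine ? amp * math.cos(w * t + phase) : amp * math.sin(w * t + phase)
    useAddition ? contrib : -contrib

// Usage: a single 20-bar cosine cycle of unit amplitude.
plot(cycleWave(1.0, 0.0, 20.0, bar_index % 20, true, true))
```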
Storing waveforms in matrices:
The calculated waveforms for each cycle are stored in two matrices - goeWorkPast and goeWorkFuture. These matrices hold the waveforms for the past and future time windows, respectively. Each row in the matrices represents a time window position, and each column corresponds to a cycle.
Returning the number of cycles:
The Goertzel Browser function returns the total number of detected cycles (number_of_cycles) after processing the input data. This information can be used to further analyze the results or to visualize the detected cycles.
The Goertzel Browser code is a comprehensive implementation of the Goertzel Algorithm, specifically designed for detecting and analyzing dominant cycles within financial market data. The code offers a high level of customization, allowing users to fine-tune the algorithm based on their specific needs. The Goertzel Browser's combination of preprocessing, iterative calculations, cycle extraction, sorting, significance testing, and waveform calculation makes it a powerful tool for understanding cyclical components in financial data.
█ Generating and Visualizing Composite Waveform
The indicator calculates and visualizes the composite waveform for both past and future time windows based on the detected cycles. Here's a detailed explanation of this process:
Updating WindowSizePast and WindowSizeFuture:
The WindowSizePast and WindowSizeFuture are updated to ensure they are at least twice the MaxPer (maximum period).
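A plausible Pine v5 rendering of that clamp, assuming the inputs shown here:

```pine
//@version=5
indicator("Window size clamp sketch")

MaxPer = input.int(120, "Max period")
int WindowSizePast = input.int(100, "Past window")
int WindowSizeFuture = input.int(100, "Future window")

// Ensure each reconstruction window can hold at least two full
// periods of the longest cycle considered.
WindowSizePast := math.max(WindowSizePast, 2 * MaxPer)
WindowSizeFuture := math.max(WindowSizeFuture, 2 * MaxPer)

plot(WindowSizePast)
```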
Initializing matrices and arrays:
Two matrices, goeWorkPast and goeWorkFuture, are initialized to store the Goertzel results for past and future time windows. Multiple arrays are also initialized to store cycle, amplitude, phase, and Bartels information.
Preparing the source data (srcVal) array:
The source data is copied into an array, srcVal, and detrended using one of the selected methods (hpsmthdt, zlagsmthdt, logZlagRegression, hpsmth, or zlagsmth).
Goertzel function call:
The Goertzel function is called to analyze the detrended source data and extract cycle information. The output, number_of_cycles, contains the number of detected cycles.
Initializing arrays for past and future waveforms:
Three arrays, epgoertzel, goertzel, and goertzelFuture, are initialized to store the endpoint Goertzel, non-endpoint Goertzel, and future Goertzel projections, respectively.
Calculating composite waveform for past bars (goertzel array):
The past composite waveform is calculated by summing the selected cycles (either from the user-defined cycle list or the top cycles) and optionally subtracting the noise component.
Calculating composite waveform for future bars (goertzelFuture array):
The future composite waveform is calculated in a similar way as the past composite waveform.
Drawing past composite waveform (pvlines):
The past composite waveform is drawn on the chart using solid lines. The color of the lines is determined by the direction of the waveform (green for upward, red for downward).
Drawing future composite waveform (fvlines):
The future composite waveform is drawn on the chart using dotted lines. The color of the lines is determined by the direction of the waveform (fuchsia for upward, yellow for downward).
Displaying cycle information in a table (table3):
A table is created to display the cycle information, including the rank, period, Bartel value, amplitude (or cycle strength), and phase of each detected cycle.
Filling the table with cycle information:
The indicator iterates through the detected cycles and retrieves the relevant information (period, amplitude, phase, and Bartel value) from the corresponding arrays. It then fills the table with this information, displaying the values up to six decimal places.
To summarize, this indicator generates a composite waveform based on the detected cycles in the financial data. It calculates the composite waveforms for both past and future time windows and visualizes them on the chart using colored lines. Additionally, it displays detailed cycle information in a table, including the rank, period, Bartel value, amplitude (or cycle strength), and phase of each detected cycle.
█ Enhancing the Goertzel Algorithm-Based Script for Financial Modeling and Trading
The Goertzel algorithm-based script for detecting dominant cycles in financial data is a powerful tool for financial modeling and trading. It provides valuable insights into the past behavior of these cycles and potential future impact. However, as with any algorithm, there is always room for improvement. This section discusses potential enhancements to the existing script to make it even more robust and versatile for financial modeling, general trading, advanced trading, and high-frequency finance trading.
Enhancements for Financial Modeling
Data preprocessing: One way to improve the script's performance for financial modeling is to introduce more advanced data preprocessing techniques. This could include removing outliers, handling missing data, and normalizing the data to ensure consistent and accurate results.
Additional detrending and smoothing methods: Incorporating more sophisticated detrending and smoothing techniques, such as wavelet transform or empirical mode decomposition, can help improve the script's ability to accurately identify cycles and trends in the data.
Machine learning integration: Integrating machine learning techniques, such as artificial neural networks or support vector machines, can help enhance the script's predictive capabilities, leading to more accurate financial models.
Enhancements for General and Advanced Trading
Customizable indicator integration: Allowing users to integrate their own technical indicators can help improve the script's effectiveness for both general and advanced trading. By enabling the combination of the dominant cycle information with other technical analysis tools, traders can develop more comprehensive trading strategies.
Risk management and position sizing: Incorporating risk management and position sizing functionality into the script can help traders better manage their trades and control potential losses. This can be achieved by calculating the optimal position size based on the user's risk tolerance and account size.
Multi-timeframe analysis: Enhancing the script to perform multi-timeframe analysis can provide traders with a more holistic view of market trends and cycles. By identifying dominant cycles on different timeframes, traders can gain insights into the potential confluence of cycles and make better-informed trading decisions.
Enhancements for High-Frequency Finance Trading
Algorithm optimization: To ensure the script's suitability for high-frequency finance trading, optimizing the algorithm for faster execution is crucial. This can be achieved by employing efficient data structures and refining the calculation methods to minimize computational complexity.
Real-time data streaming: Integrating real-time data streaming capabilities into the script can help high-frequency traders react to market changes more quickly. By continuously updating the cycle information based on real-time market data, traders can adapt their strategies accordingly and capitalize on short-term market fluctuations.
Order execution and trade management: To fully leverage the script's capabilities for high-frequency trading, implementing functionality for automated order execution and trade management is essential. This can include features such as stop-loss and take-profit orders, trailing stops, and automated trade exit strategies.
While the existing Goertzel algorithm-based script is a valuable tool for detecting dominant cycles in financial data, there are several potential enhancements that can make it even more powerful for financial modeling, general trading, advanced trading, and high-frequency finance trading. By incorporating these improvements, the script can become a more versatile and effective tool for traders and financial analysts alike.
█ Understanding the Limitations of the Goertzel Algorithm
While the Goertzel algorithm-based script for detecting dominant cycles in financial data provides valuable insights, it is important to be aware of its limitations and drawbacks. Some of the key drawbacks of this indicator are:
Lagging nature:
As with many other technical indicators, the Goertzel algorithm-based script can suffer from lagging effects, meaning that it may not immediately react to real-time market changes. This lag can lead to late entries and exits, potentially resulting in reduced profitability or increased losses.
Parameter sensitivity:
The performance of the script can be sensitive to the chosen parameters, such as the detrending methods, smoothing techniques, and cycle detection settings. Improper parameter selection may lead to inaccurate cycle detection or increased false signals, which can negatively impact trading performance.
Complexity:
The Goertzel algorithm itself is relatively complex, making it difficult for novice traders or those unfamiliar with the concept of cycle analysis to fully understand and effectively utilize the script. This complexity can also make it challenging to optimize the script for specific trading styles or market conditions.
Overfitting risk:
As with any data-driven approach, there is a risk of overfitting when using the Goertzel algorithm-based script. Overfitting occurs when a model becomes too specific to the historical data it was trained on, leading to poor performance on new, unseen data. This can result in misleading signals and reduced trading performance.
No guarantee of future performance:
While the script can provide insights into past cycles and potential future trends, it is important to remember that past performance does not guarantee future results. Market conditions can change, and relying solely on the script's predictions without considering other factors may lead to poor trading decisions.
Limited applicability:
The Goertzel algorithm-based script may not be suitable for all markets, trading styles, or timeframes. Its effectiveness in detecting cycles may be limited in certain market conditions, such as during periods of extreme volatility or low liquidity.
While the Goertzel algorithm-based script offers valuable insights into dominant cycles in financial data, it is essential to consider its drawbacks and limitations when incorporating it into a trading strategy. Traders should always use the script in conjunction with other technical and fundamental analysis tools, as well as proper risk management, to make well-informed trading decisions.
█ Interpreting Results
The Goertzel Browser indicator can be interpreted by analyzing the plotted lines and the table presented alongside them. The indicator plots two lines: past and future composite waves. The past composite wave represents the composite wave of the past price data, and the future composite wave represents the projected composite wave for the next period.
The past composite wave line displays a solid line, with green indicating a bullish trend and red indicating a bearish trend. On the other hand, the future composite wave line is a dotted line with fuchsia indicating a bullish trend and yellow indicating a bearish trend.
The table presented alongside the indicator shows the top cycles with their corresponding rank, period, Bartels, amplitude or cycle strength, and phase. The amplitude is a measure of the strength of the cycle, while the phase is the position of the cycle within the data series.
Interpreting the Goertzel Browser indicator involves identifying the trend of the past and future composite wave lines and matching them with the corresponding bullish or bearish color. Additionally, traders can identify the top cycles with the highest amplitude or cycle strength and utilize them in conjunction with other technical indicators and fundamental analysis for trading decisions.
This indicator is considered a repainting indicator because the value of the indicator is calculated based on the past price data. As new price data becomes available, the indicator's value is recalculated, potentially causing the indicator's past values to change. This can create a false impression of the indicator's performance, as it may appear to have provided a profitable trading signal in the past when, in fact, that signal did not exist at the time.
The Goertzel indicator is also non-endpointed, meaning that it is not calculated up to the current bar or candle. Instead, it uses a fixed amount of historical data to calculate its values, which can make it difficult to use for real-time trading decisions. For example, if the indicator uses 100 bars of historical data to make its calculations, it cannot provide a signal until the current bar has closed and become part of the historical data. This can result in missed trading opportunities or delayed signals.
█ Conclusion
The Goertzel Browser indicator is a powerful tool for identifying and analyzing cyclical patterns in financial markets. Its ability to detect multiple cycles of varying frequencies and strengths makes it a valuable addition to any trader's technical analysis toolkit. However, it is important to keep in mind that the Goertzel Browser indicator should be used in conjunction with other technical analysis tools and fundamental analysis to achieve the best results. With continued refinement and development, the Goertzel Browser indicator has the potential to become a highly effective tool for financial modeling, general trading, advanced trading, and high-frequency finance trading. Its accuracy and versatility make it a promising candidate for further research and development.
█ Footnotes
What is the Bartels Test for Cycle Significance?
The Bartels Cycle Significance Test is a statistical method that determines whether the peaks and troughs of a time series are statistically significant. The test is named after its inventor, the geophysicist Julius Bartels, who developed it in the 1930s.
The Bartels test is designed to analyze the cyclical components of a time series, which can help traders and analysts identify trends and cycles in financial markets. The test calculates a Bartels statistic, which measures the degree of non-randomness or autocorrelation in the time series.
The Bartels statistic is calculated by first splitting the time series into two halves and calculating the range of the peaks and troughs in each half. The test then compares these ranges using a t-test, which measures the significance of the difference between the two ranges.
If the Bartels statistic is greater than a critical value, it indicates that the peaks and troughs in the time series are non-random and that there is a significant cyclical component to the data. Conversely, if the Bartels statistic is less than the critical value, it suggests that the peaks and troughs are random and that there is no significant cyclical component.
The Bartels Cycle Significance Test is particularly useful in financial analysis because it can help traders and analysts identify significant cycles in asset prices, which can in turn inform investment decisions. However, it is important to note that the test is not perfect and can produce false signals in certain situations, particularly in noisy or volatile markets. Therefore, it is always recommended to use the test in conjunction with other technical and fundamental indicators to confirm trends and cycles.
Deep-dive into the Hodrick-Prescott Filter
The Hodrick-Prescott (HP) filter is a statistical tool used in economics and finance to separate a time series into two components: a trend component and a cyclical component. It is a powerful tool for identifying long-term trends in economic and financial data and is widely used by economists, central banks, and financial institutions around the world.
The HP filter was developed by economists Robert Hodrick and Edward Prescott, who circulated it in a working paper in the early 1980s and published it in 1997. It is a simple filter with a single smoothing parameter that separates a time series into a trend component and a cyclical component. The trend component represents the long-term behavior of the data, while the cyclical component captures the shorter-term fluctuations around the trend.
The HP filter works by minimizing the following objective function:
Minimize: Σ_t (y_t − τ_t)² + λ · Σ_t [(τ_{t+1} − τ_t) − (τ_t − τ_{t−1})]²
Where:
y_t is the observed series and τ_t is the trend component.
The first term measures the deviation of the data from the trend.
The second term measures the smoothness of the trend via its squared second differences.
λ is a smoothing parameter that determines the degree of smoothness of the trend.
The smoothing parameter λ is chosen according to the frequency of the data: common choices are 100 for annual data, 1600 for quarterly data, and 14,400 for monthly data. Higher values of λ produce a smoother trend, while lower values allow the trend to track the data more closely.
The HP filter has several advantages over other smoothing techniques. It is a non-parametric method, meaning that it does not make any assumptions about the underlying distribution of the data. It also allows for easy comparison of trends across different time series and can be used with data of any frequency.
However, the HP filter also has some limitations. It assumes that the trend is a smooth function, which may not be the case in some situations. It can also be sensitive to changes in the smoothing parameter λ, which may result in different trends for the same data. Additionally, the filter may produce unrealistic trends for very short time series.
Despite these limitations, the HP filter remains a valuable tool for analyzing economic and financial data. It is widely used by central banks and financial institutions to monitor long-term trends in the economy, and it can be used to identify turning points in the business cycle. The filter can also be used to analyze asset prices, exchange rates, and other financial variables.
The Hodrick-Prescott filter is a powerful tool for analyzing economic and financial data. It separates a time series into a trend component and a cyclical component, allowing for easy identification of long-term trends and turning points in the business cycle. While it has some limitations, it remains a valuable tool for economists, central banks, and financial institutions around the world.
MathEasingFunctionsLibrary "MathEasingFunctions"
A collection of Easing functions.
Easing functions are commonly used for smoothing actions over time. They smooth out the sharp edges
of a function and make it more pleasing to the eye, as with, for example, the motion of an object through time.
Easing functions can be used in a variety of applications, including animation, video games, and scientific
simulations. They are a powerful tool for creating realistic visual effects and can help to make your work more
engaging and enjoyable to the eye.
---
Includes functions for ease in, ease out, and, ease in and out, for the following constructs:
sine, quadratic, cubic, quartic, quintic, exponential, elastic, circle, back, bounce.
---
Reference:
easings.net
learn.microsoft.com
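To make the listing below more concrete, here is what a couple of these easings look like when written out in Pine v5, using the standard formulas from easings.net; this is a sketch rather than the library's exact source:

```pine
//@version=5
indicator("Easing function sketch")

// Standard easings.net formulas; the bound variants clamp the
// elapsed-time ratio to [0, 1] before applying the curve.
ease_in_sine_unbound(float v) =>
    1.0 - math.cos(v * math.pi / 2.0)

ease_in_sine(float v) =>
    vb = math.max(0.0, math.min(1.0, v))
    1.0 - math.cos(vb * math.pi / 2.0)

ease_out_quad(float v) =>
    vb = math.max(0.0, math.min(1.0, v))
    1.0 - (1.0 - vb) * (1.0 - vb)

// Usage: ease a repeating 0..1 ramp over 100-bar windows.
t = (bar_index % 100) / 100.0
plot(ease_in_sine(t), "ease in sine")
plot(ease_out_quad(t), "ease out quad")
```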
ease_in_sine_unbound(v)
Sinusoidal function, the position over elapsed time (unbound).
Parameters:
v (float) : `float` Elapsed time.
Returns: Ratio of change.
ease_in_sine(v)
Sinusoidal function, the position over elapsed time (bound).
Parameters:
v (float) : `float` Elapsed time.
Returns: Ratio of change.
ease_out_sine_unbound(v)
Sinusoidal function, the position over elapsed time (unbound).
Parameters:
v (float) : `float` Elapsed time.
Returns: Ratio of change.
ease_out_sine(v)
Sinusoidal function, the position over elapsed time (bound).
Parameters:
v (float) : `float` Elapsed time.
Returns: Ratio of change.
ease_in_out_sine_unbound(v)
Sinusoidal function, the position over elapsed time (unbound).
Parameters:
v (float) : `float` Elapsed time.
Returns: Ratio of change.
ease_in_out_sine(v)
Sinusoidal function, the position over elapsed time (bound).
Parameters:
v (float) : `float` Elapsed time.
Returns: Ratio of change.
ease_in_quad_unbound(v)
Quadratic function, the position equals the square of elapsed time (unbound).
Parameters:
v (float) : `float` Elapsed time.
Returns: Ratio of change.
ease_in_quad(v)
Quadratic function, the position equals the square of elapsed time (bound).
Parameters:
v (float) : `float` Elapsed time.
Returns: Ratio of change.
ease_out_quad_unbound(v)
Quadratic function, the position equals the square of elapsed time (unbound).
Parameters:
v (float) : `float` Elapsed time.
Returns: Ratio of change.
ease_out_quad(v)
Quadratic function, the position equals the square of elapsed time (bound).
Parameters:
v (float) : `float` Elapsed time.
Returns: Ratio of change.
ease_in_out_quad_unbound(v)
Quadratic function, the position equals the square of elapsed time (unbound).
Parameters:
v (float) : `float` Elapsed time.
Returns: Ratio of change.
ease_in_out_quad(v)
Quadratic function, the position equals the square of elapsed time (bound).
Parameters:
v (float) : `float` Elapsed time.
Returns: Ratio of change.
ease_in_cubic_unbound(v)
Cubic function, the position equals the cube of elapsed time (unbound).
Parameters:
v (float) : `float` Elapsed time.
Returns: Ratio of change.
ease_in_cubic(v)
Cubic function, the position equals the cube of elapsed time (bound).
Parameters:
v (float) : `float` Elapsed time.
Returns: Ratio of change.
ease_out_cubic_unbound(v)
Cubic function, the position equals the cube of elapsed time (unbound).
Parameters:
v (float) : `float` Elapsed time.
Returns: Ratio of change.
ease_out_cubic(v)
Cubic function, the position equals the cube of elapsed time (bound).
Parameters:
v (float) : `float` Elapsed time.
Returns: Ratio of change.
ease_in_out_cubic_unbound(v)
Cubic function, the position equals the cube of elapsed time (unbound).
Parameters:
v (float) : `float` Elapsed time.
Returns: Ratio of change.
ease_in_out_cubic(v)
Cubic function, the position equals the cube of elapsed time (bound).
Parameters:
v (float) : `float` Elapsed time.
Returns: Ratio of change.
ease_in_quart_unbound(v)
Quartic function, the position equals the formula `f(t)=t^4` of elapsed time (unbound).
Parameters:
v (float) : `float` Elapsed time.
Returns: Ratio of change.
ease_in_quart(v)
Quartic function, the position equals the formula `f(t)=t^4` of elapsed time (bound).
Parameters:
v (float) : `float` Elapsed time.
Returns: Ratio of change.
ease_out_quart_unbound(v)
Quartic function, the position equals the formula `f(t)=t^4` of elapsed time (unbound).
Parameters:
v (float) : `float` Elapsed time.
Returns: Ratio of change.
ease_out_quart(v)
Quartic function, the position equals the formula `f(t)=t^4` of elapsed time (bound).
Parameters:
v (float) : `float` Elapsed time.
Returns: Ratio of change.
ease_in_out_quart_unbound(v)
Quartic function, the position equals the formula `f(t)=t^4` of elapsed time (unbound).
Parameters:
v (float) : `float` Elapsed time.
Returns: Ratio of change.
ease_in_out_quart(v)
Quartic function, the position equals the formula `f(t)=t^4` of elapsed time (bound).
Parameters:
v (float) : `float` Elapsed time.
Returns: Ratio of change.
ease_in_quint_unbound(v)
Quintic function, the position equals the formula `f(t)=t^5` of elapsed time (unbound).
Parameters:
v (float) : `float` Elapsed time.
Returns: Ratio of change.
ease_in_quint(v)
Quintic function, the position equals the formula `f(t)=t^5` of elapsed time (bound).
Parameters:
v (float) : `float` Elapsed time.
Returns: Ratio of change.
ease_out_quint_unbound(v)
Quintic function, the position equals the formula `f(t)=t^5` of elapsed time (unbound).
Parameters:
v (float) : `float` Elapsed time.
Returns: Ratio of change.
ease_out_quint(v)
Quintic function, the position equals the formula `f(t)=t^5` of elapsed time (bound).
Parameters:
v (float) : `float` Elapsed time.
Returns: Ratio of change.
ease_in_out_quint_unbound(v)
Quintic function, the position equals the formula `f(t)=t^5` of elapsed time (unbound).
Parameters:
v (float) : `float` Elapsed time.
Returns: Ratio of change.
ease_in_out_quint(v)
Quintic function, the position equals the formula `f(t)=t^5` of elapsed time (bound).
Parameters:
v (float) : `float` Elapsed time.
Returns: Ratio of change.
ease_in_expo_unbound(v)
Exponential function, the position equals the exponential formula of elapsed time (unbound).
Parameters:
v (float) : `float` Elapsed time.
Returns: Ratio of change.
ease_in_expo(v)
Exponential function, the position equals the exponential formula of elapsed time (bound).
Parameters:
v (float) : `float` Elapsed time.
Returns: Ratio of change.
ease_out_expo_unbound(v)
Exponential function, the position equals the exponential formula of elapsed time (unbound).
Parameters:
v (float) : `float` Elapsed time.
Returns: Ratio of change.
ease_out_expo(v)
Exponential function, the position equals the exponential formula of elapsed time (bound).
Parameters:
v (float) : `float` Elapsed time.
Returns: Ratio of change.
ease_in_out_expo_unbound(v)
Exponential function, the position equals the exponential formula of elapsed time (unbound).
Parameters:
v (float) : `float` Elapsed time.
Returns: Ratio of change.
ease_in_out_expo(v)
Exponential function, the position equals the exponential formula of elapsed time (bound).
Parameters:
v (float) : `float` Elapsed time.
Returns: Ratio of change.
ease_in_circ_unbound(v)
Circular function, the position equals the circular formula of elapsed time (unbound).
Parameters:
v (float) : `float` Elapsed time.
Returns: Ratio of change.
ease_in_circ(v)
Circular function, the position equals the circular formula of elapsed time (bound).
Parameters:
v (float) : `float` Elapsed time.
Returns: Ratio of change.
ease_out_circ_unbound(v)
Circular function, the position equals the circular formula of elapsed time (unbound).
Parameters:
v (float) : `float` Elapsed time.
Returns: Ratio of change.
ease_out_circ(v)
Circular function, the position equals the circular formula of elapsed time (bound).
Parameters:
v (float) : `float` Elapsed time.
Returns: Ratio of change.
ease_in_out_circ_unbound(v)
Circular function, the position equals the circular formula of elapsed time (unbound).
Parameters:
v (float) : `float` Elapsed time.
Returns: Ratio of change.
ease_in_out_circ(v)
Circular function, the position equals the circular formula of elapsed time (bound).
Parameters:
v (float) : `float` Elapsed time.
Returns: Ratio of change.
ease_in_back_unbound(v)
Back function, the position retreats a bit before resuming (unbound).
Parameters:
v (float) : `float` Elapsed time.
Returns: Ratio of change.
ease_in_back(v)
Back function, the position retreats a bit before resuming (bound).
Parameters:
v (float) : `float` Elapsed time.
Returns: Ratio of change.
ease_out_back_unbound(v)
Back function, the position retreats a bit before resuming (unbound).
Parameters:
v (float) : `float` Elapsed time.
Returns: Ratio of change.
ease_out_back(v)
Back function, the position retreats a bit before resuming (bound).
Parameters:
v (float) : `float` Elapsed time.
Returns: Ratio of change.
ease_in_out_back_unbound(v)
Back function, the position retreats a bit before resuming (unbound).
Parameters:
v (float) : `float` Elapsed time.
Returns: Ratio of change.
ease_in_out_back(v)
Back function, the position retreats a bit before resuming (bound).
Parameters:
v (float) : `float` Elapsed time.
Returns: Ratio of change.
ease_in_elastic_unbound(v)
Elastic function, the position oscillates back and forth like a spring (unbound).
Parameters:
v (float) : `float` Elapsed time.
Returns: Ratio of change.
ease_in_elastic(v)
Elastic function, the position oscillates back and forth like a spring (bound).
Parameters:
v (float) : `float` Elapsed time.
Returns: Ratio of change.
ease_out_elastic_unbound(v)
Elastic function, the position oscillates back and forth like a spring (unbound).
Parameters:
v (float) : `float` Elapsed time.
Returns: Ratio of change.
ease_out_elastic(v)
Elastic function, the position oscillates back and forth like a spring (bound).
Parameters:
v (float) : `float` Elapsed time.
Returns: Ratio of change.
ease_in_out_elastic_unbound(v)
Elastic function, the position oscillates back and forth like a spring (unbound).
Parameters:
v (float) : `float` Elapsed time.
Returns: Ratio of change.
ease_in_out_elastic(v)
Elastic function, the position oscillates back and forth like a spring (bound).
Parameters:
v (float) : `float` Elapsed time.
Returns: Ratio of change.
ease_in_bounce_unbound(v)
Bounce function, the position bounces from the boundary (unbound).
Parameters:
v (float) : `float` Elapsed time.
Returns: Ratio of change.
ease_in_bounce(v)
Bounce function, the position bounces from the boundary (bound).
Parameters:
v (float) : `float` Elapsed time.
Returns: Ratio of change.
ease_out_bounce_unbound(v)
Bounce function, the position bounces from the boundary (unbound).
Parameters:
v (float) : `float` Elapsed time.
Returns: Ratio of change.
ease_out_bounce(v)
Bounce function, the position bounces from the boundary (bound).
Parameters:
v (float) : `float` Elapsed time.
Returns: Ratio of change.
ease_in_out_bounce_unbound(v)
Bounce function, the position bounces from the boundary (unbound).
Parameters:
v (float) : `float` Elapsed time.
Returns: Ratio of change.
ease_in_out_bounce(v)
Bounce function, the position bounces from the boundary (bound).
Parameters:
v (float) : `float` Elapsed time.
Returns: Ratio of change.
select(v, formula, effect, bounded)
Selects and applies the easing function matching the given formula and effect.
Parameters:
v (float) : `float` Elapsed time.
formula (string) : `string` Easing formula to use (one of the constructs listed above).
effect (string) : `string` Easing effect to use (in, out, or in-and-out).
bounded (bool) : `bool` Whether to use the bound variant of the function.
Returns: Ratio of change.
Ticker Correlation Reference IndicatorHello,
I am super excited to be releasing this Ticker Correlation assessment indicator. This is a big one so let us get right into it!
Inspiration:
The inspiration for this indicator came from a similar indicator by Balipour called the Correlation with P-Value and Confidence Interval. It’s a great indicator, you should check it out!
I used it quite a lot when looking for correlations; however, there were some limitations to this indicator's functionality that I wanted to address. So I decided to make my own indicator with the functionality I wanted. I have been using this for some time but decided to actually spruce it up a bit and make it user friendly so that I could share it publicly. So let me get into what this indicator does and, most importantly, its expanded functionality.
What it does:
This indicator determines the correlation between 2 separate tickers. The user selects the two tickers they wish to compare, and the indicator performs a correlation assessment over a default 14-period length and displays the results. However, the indicator takes this much further. The complete functionality of this indicator includes the following:
1. Assesses the correlation of all 4 ticker variables (Open, High, Low and Close) over a user defined period of time (defaulted to 14);
2. Converts both tickers to a Z-Score in order to standardize the data and provide a side by side comparison;
3. Displays areas of high and low correlation between all 4 variables;
4. Looks back over the consistency of the relationship (is the correlation between the two tickers consistent, or infrequent?);
5. Displays the variance in the correlation (there may be a statistically significant relationship, but if there is a high variance, it means the relationship is unstable);
6. Permits manual conversion between prices; and
7. Determines the degree of statistical significance (be it stable, unstable or non-existent).
I will discuss each of these functions below.
Function 1: Assesses the correlation of all 4 variables.
The only other indicator that does this determines the correlation of the close price only. However, the correlations between open prices, high prices, low prices and close prices can differ in statistically significant ways. As such, this indicator plots the correlation of all 4 ticker variables and displays each correlation.
Assessing this matters because sometimes a stock may not have the same magnitude in highs and lows as another stock (one stock may be more bullish, i.e. attain higher highs in comparison to another stock). Close price is helpful but does not paint the full picture. As such, the indicator displays the correlation relationship between all 4 variables (image below):
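A minimal Pine v5 sketch of this four-variable comparison (the symbol, length, and plot styling are placeholder assumptions, not the published indicator's internals):

```pine
//@version=5
indicator("Four-variable correlation sketch")

sym2 = input.symbol("AMEX:SPY", "Comparison ticker")
len = input.int(14, "Correlation length")

// Pull all four series from the comparison ticker in one call.
[o2, h2, l2, c2] = request.security(sym2, timeframe.period, [open, high, low, close])

plot(ta.correlation(open,  o2, len), "Open corr",  color.blue)
plot(ta.correlation(high,  h2, len), "High corr",  color.green)
plot(ta.correlation(low,   l2, len), "Low corr",   color.red)
plot(ta.correlation(close, c2, len), "Close corr", color.orange)
```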
Function 2: Converts both tickers to Z-Score
Z-Score is a way of standardizing data. It simply measures how far a stock is trading in relation to its mean. As such, it is a way to express both tickers on a level playing field. Z-Score was also chosen because the Z-Score Values (0 – 4) also provide an appropriate scale to plot correlation lines (which range from 0 to 1).
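The z-score itself is a one-line calculation; a sketch in Pine v5, using the document's default length of 14:

```pine
//@version=5
indicator("Z-score sketch")

len = input.int(14, "Z-score length")

// How many standard deviations the series sits from its rolling mean.
zscore(float s, int l) =>
    (s - ta.sma(s, l)) / ta.stdev(s, l)

plot(zscore(close, len))
```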
The primary ticker (Ticker 1) is plotted in blue, the secondary comparison ticker (Ticker 2) is plotted in a colour changing format (which will be discussed below). See the image below:
Function 3: Displays areas of high and low correlation
While Ticker 1 is plotted in a static blue, Ticker 2 (the comparison ticker) is plotted in a dynamic, colour changing format. It will display areas of high correlation (i.e. areas with a P value greater than or equal to 0.9 or less than or equal to -0.9) in green, and areas of moderate correlation in white. Areas of low correlation (between 0.4 and 0 or -0.4 and 0) are in red. (see image below):
Function 4: Checks consistency of relationship
While at the time of assessing a stock there very well maybe a high correlation, whether that correlation is consistent or not is the question. The indicator employs the use of the SMA function to plot the average correlation over a defined period of time. If the correlation is consistently high, the SMA should be within an area of statistical significance (over 0.5 or under -0.5). If the relationship is inconsistent, the SMA will read a lower value than the actual correlation.
You can see an example of this when you compare ETH to Tezos in the image below:
You can see that the correlation between ETH and Tezos on the high level seems to be inconsistent. While the current correlation is significant, the SMA is showing that the average correlation between the highs is actually less than 0.5.
The indicator also tells the user narratively the degree of consistency in the statistical relationship. This will be discussed later.
Function 5: Displays the variance
When it comes to correlation, variance is important. Variance simply means the distance between the highest and lowest value. The indicator assesses the variance. A high degree of variance (i.e. a value of 0.5 or greater) generally means the consistency and stability of the relationship are in issue. If there is a high variance, it means that the two tickers, while seemingly significantly correlated, tend to deviate from each other quite extensively.
The indicator will tell the user the variance in the narrative bar at the bottom of the chart (see image below):
Function 6: Permits manual conversion of price
One thing that I frequently want and like to do is convert prices between tickers. If I am looking at SPX and I want to calculate a price on SPY, I want to be able to do that quickly. This indicator permits you to do that by employing a regression based formula to convert Ticker 1 to Ticker 2.
The user can actually input which variable they would like to convert, whether they want to convert Ticker 1 Close to Ticker 2 Close, or Ticker 1 High to Ticker 2 High, or low or open.
To do this, open the settings and click “Permit Manual Conversion”. This will then take the current Ticker 1 Close price and convert it to Ticker 2 based on the regression calculations.
If you want to know what a specific price on Ticker 1 is on Ticker 2, simply click the “Allow Manual Price Input” variable and type in the price of Ticker 1 you want to know on Ticker 2. It will perform the calculation for you and will also list the standard error of the calculation.
Below is an example of calculating a SPY price using SPX data:
Above, the indicator was asked to convert an SPX price of 4,100 to a SPY price. The result was 408.83 with a standard error of 4.31, meaning we can expect 4,100 to fall within 408.83 +/- 4.31 on SPY.
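That conversion can be reproduced with an ordinary least-squares fit over a short lookback (the notes below mention a 10-candle regression). A hedged sketch with illustrative symbols:

```pine
//@version=5
indicator("Regression conversion sketch")

sym1 = input.symbol("SP:SPX", "Source ticker")
sym2 = input.symbol("AMEX:SPY", "Target ticker")
manualPrice = input.float(4100.0, "Ticker 1 price to convert")
len = 10  // regression lookback, per the indicator's notes

x = request.security(sym1, timeframe.period, close)
y = request.security(sym2, timeframe.period, close)

// OLS slope and intercept from correlation and standard deviations.
b = ta.correlation(x, y, len) * ta.stdev(y, len) / ta.stdev(x, len)
a = ta.sma(y, len) - b * ta.sma(x, len)

converted = a + b * manualPrice  // estimated Ticker 2 price
plot(converted)
```

The slope b and intercept a here come from the standard identities b = r·s_y/s_x and a = ȳ − b·x̄.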
Function 7: Determines the degree of statistical significance
The indicator will provide the user with a narrative output of the degree of statistical significance. The indicator looks beyond simply what the correlation is at the time of the assessment. It uses the SMA and the highest and lowest function to make an assessment of the stability of the statistical relationship and then indicates this to the user. Below is an example of IWM compared to SPY:
You will see that the indicator reports a statistically significant positive relationship, but one that is somewhat unstable and inconsistent. Not only does it tell you this, it also quantifies the degree of inconsistency by listing the variance and the range of the correlation.
And below is SPY to DIA:
SPY to BTCUSD:
And finally SPY to USDCAD Currency:
Other functions:
The indicator will also plot the raw or smoothed correlation result for the Open, High, Low or Close price. The default is to close price and smoothed. Smoothed just means it is displaying the SMA over the raw correlation score. Unsmoothing it will show you the raw correlation score.
The user also has the ability to toggle on and off the correlation table and the narrative table so that they can just review the chart (the side by side comparison of the 2 tickers).
Customizability
All of the functions are customizable for the most part. The user can determine the length of lookback, etc. The default parameters for all are 14. The only thing not customizable is the assessment used for determining the stability of a statistical relationship (set at 100 candle lookback) and the regression analysis used to convert price (10 candle lookback).
User Notes and important application tips:
#1: If using the manual calculation function to convert price, it is recommended to use this on the hourly or daily chart.
#2: Leaving pre-market data on can cause some errors. It is recommended to use the indicator with regular market hours enabled and extended market hours disabled.
#3: No ticker is off limits. You can compare anything against anything! Have fun with it and experiment!
Non-Indicator Specific Discussions:
Why does correlation between stocks mater?
This can matter for a number of reasons. For investors, it is good to diversify your portfolio and have a good array of stocks that operate somewhat independently of each other. This will allow you to see how your investments compare to each other and the degree of the relationship.
Another function may be getting exposure to more expensive tickers. I am guilty of trading IWM to gain exposure to SPY at a reduced cost basis :-).
What is a statistically significant correlation?
The rule of thumb is anything 0.5 or greater is considered statistically significant. The ideal setup is 0.9 or more as the effect is almost identical. That said, a lot of factors play into statistical significance. For example, the consistency and variance are 2 important factors most do not consider when ascertaining significance. Perhaps IWM and SPY are significantly correlated today, but is that a reliable relationship and can that be counted on as a rule?
These are things that should be considered when trading one ticker against another and these are things that I have attempted to address with this indicator!
Final notes:
I know I usually do tutorial videos. I have not done one here, but I will. Check back later for this.
I hope you enjoy the indicator and please feel free to share your thoughts and suggestions!
Safe trades all!
Price Level Stats (PLS)Hello traders! In today's post, we're going to delve into a powerful custom indicator called Price Level Stats (PLS). This indicator combines the functionalities of Arbitrary Price Point Probability (APPP) and Bar Movement Probability (BMP) to create an easy-to-use tool that displays price levels and their corresponding probabilities based on percentage steps away from the current price. Let's explore how PLS works and how you can effectively utilize it in your trading strategy.
Overview of Price Level Stats (PLS)
The PLS indicator combines the APPP and BMP indicators, leveraging both their strengths to create a more comprehensive and versatile tool. The indicator calculates the probabilities of different price levels being reached, based on historical price data, and displays them on your chart. This tool allows you to analyze various price points with different percentage steps away from the current price, providing valuable insights into potential market movements.
Key Components of PLS
EMA Calculation: The PLS indicator uses the Exponential Moving Average (EMA) to calculate the mean of the price data. This calculation is necessary for determining the probabilities associated with various price levels.
Price Movement Probability (T-Dist) Function: This function calculates the price movement probability using the Student's T-distribution. This statistical method is advantageous for small sample sizes and allows for more accurate probability estimations.
Step Size and Steps: The indicator allows you to define the step size (percentage away from the current price) and the number of steps to analyze. This customization enables you to explore various price levels and their associated probabilities.
Drawing Probability Labels: PLS displays the calculated probabilities as labels on your chart, providing you with an easy-to-understand visual representation of the likelihood of specific price levels being reached.
Using PLS in Your Trading Strategy
Setting the Source and Step Size: Start by configuring the source (typically set to open) and the step size. The step size determines the percentage distance between the price levels you want to analyze. For instance, a step size of 0.2 means you will analyze price levels at 0.2%, 0.4%, 0.6%, and so on, away from the current price.
Configuring the Steps: Next, set the number of steps you want the indicator to analyze. This setting determines how many price levels the indicator will evaluate on both the bullish and bearish sides.
Choosing the Style: The PLS indicator offers three different styles: Bar Estimate, Log Bar Estimate, and Student's T. Bar Estimate and Log Bar Estimate utilize the BMP method, while the Student's T style uses the T-distribution function for probability calculations. Choose the style that best suits your trading strategy and preferences.
Interpreting the Probability Labels: Once you have configured the indicator settings, PLS will display the calculated probabilities as labels on your chart. These labels represent the likelihood of the associated price levels being reached. Use these probabilities to make informed trading decisions and manage your trades more effectively.
Benefits of the PLS Indicator
Comprehensive Analysis: By combining the functionalities of APPP and BMP indicators, PLS provides a more comprehensive analysis of potential price movements, helping you to make better-informed trading decisions.
Customization and Flexibility: PLS offers a high level of customization, allowing you to analyze various price levels and choose from different styles to suit your trading strategy and preferences.
Easy-to-Understand Visual Representation: The PLS indicator displays the calculated probabilities as labels on your chart, providing you with an easy-to-understand visual representation of the likelihood of specific price levels being reached. This visual representation allows you to quickly assess the market situation and make more informed decisions.
Versatility: The PLS indicator is versatile and can be used in conjunction with other technical analysis tools or as a standalone tool for assessing the probabilities of price movements. This versatility makes it a valuable addition to any trader's toolbox.
Improved Risk Management: By providing insight into the probabilities of various price levels being reached, the PLS indicator helps traders improve their risk management strategies. You can use these probabilities to set more accurate stop-loss and take-profit levels or to better time your entries and exits.
Customizing the PLS Indicator: Controls and Their Effects
The PLS indicator offers several controls that allow you to customize the output to better suit your trading needs. Understanding how these controls affect the output is essential for making the most out of this powerful tool. Below, we will discuss the two main controls - Step and Step Size - and explain their impact on the PLS indicator's output.
Step: The 'Step' control determines the number of steps taken away from the current price when calculating price levels and their associated probabilities. By increasing or decreasing the number of steps, you can adjust the range of price levels considered by the PLS indicator. For example, if you set the step value to 5, the PLS indicator will calculate probabilities for five price levels above and five price levels below the current price. Adjusting the number of steps allows you to focus on price levels that are more relevant to your trading strategy, helping you make better-informed decisions.
Step Size: The 'Step Size' control dictates the percentage distance between each step, which directly impacts the price levels being analyzed. For instance, if you set the step size to 0.5%, each price level considered by the PLS indicator will be 0.5% away from the previous one. A larger step size will result in a broader range of price levels being assessed, while a smaller step size will provide a more granular view of the price levels surrounding the current price. Adjusting the step size enables you to fine-tune the PLS indicator according to your preferred level of detail, allowing you to better analyze the probabilities associated with specific price movements.
By customizing the 'Step' and 'Step Size' controls, you can tailor the PLS indicator to your specific trading needs and preferences. These controls enable you to analyze a wide range of price levels or focus on a narrow range, depending on your strategy and risk tolerance. Experimenting with different combinations of step and step size values will help you find the optimal settings for your trading style and goals.
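To see how the two controls interact, here is a sketch of the level grid they generate in Pine v5; the probability calculation itself is omitted and all names are illustrative:

```pine
//@version=5
indicator("PLS level grid sketch", overlay = true)

stepSize = input.float(0.5, "Step size (%)")
steps = input.int(5, "Steps")

// Draw the grid of levels k * stepSize percent above and below price.
if barstate.islast
    for k = 1 to steps
        up = close * (1.0 + k * stepSize / 100.0)
        dn = close * (1.0 - k * stepSize / 100.0)
        line.new(bar_index - 10, up, bar_index, up, color = color.green)
        line.new(bar_index - 10, dn, bar_index, dn, color = color.red)
```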
Conclusion
The Price Level Stats (PLS) indicator is a powerful tool that combines the strengths of the Arbitrary Price Point Probability (APPP) and Bar Movement Probability (BMP) indicators to provide traders with an easy-to-use system for analyzing potential price movements. By offering comprehensive analysis, customization, and easy-to-understand visual representation, PLS is an invaluable tool for traders looking to improve their trading strategies and risk management. Be sure to incorporate this versatile indicator into your trading arsenal to gain valuable insights into the probabilities of price levels being reached and make more informed trading decisions.
Bar Move Probability Price Levels (BMPPL)Hello fellow traders! I am thrilled to present my latest creation, the Bar Move Probability Price Levels (BMPPL) indicator. This powerful tool offers a statistical edge in your trading by helping you understand the likelihood of price movements at multiple levels based on historical data. In this post, I'll provide an overview of the indicator, its features, and how it can enhance your trading experience. Let's dive in!
What is the Bar Move Probability Price Levels Indicator?
The Bar Move Probability Price Levels (BMPPL) indicator is a versatile tool that calculates the probability of a bar's price movement at multiple levels, either up or down, based on past occurrences of similar price movements. This comprehensive approach can provide valuable insights into the potential direction of the market, allowing you to make better-informed trading decisions.
One of the standout features of the BMPPL indicator is its flexibility. You can choose to see the probabilities of reaching various price levels, or you can focus on the highest probability move by adjusting the "Max Number of Elements" and "Step Size" settings. This flexibility ensures that the indicator caters to your specific trading style and requirements.
Max Number of Elements and Step Size: Fine-Tuning Your BMPPL Indicator
The BMPPL indicator allows you to customize its output to suit your trading style and requirements through two key settings: Max Number of Elements and Step Size.
Max Number of Elements: This setting determines the maximum number of price levels displayed by the indicator. By default, it is set to 1000, meaning the indicator will show probabilities for up to 1000 price levels. You can adjust this setting to limit the number of price levels displayed, depending on your preference and trading strategy.
Step Size: The Step Size setting determines the increment between displayed price levels. By default, it is set to 100, which means the indicator will display probabilities for every 100th price level. Adjusting the Step Size allows you to control the granularity of the displayed probabilities, enabling you to focus on specific price movements.
By adjusting the Max Number of Elements and Step Size settings, you can fine-tune the BMPPL indicator to focus on the most relevant price levels for your trading strategy. For example, if you want to concentrate on the highest probability move, you can set the Max Number of Elements to 1 and the Step Size to 1. This will cause the indicator to display only the price level with the highest probability, simplifying your trading decisions.
Probability Calculation: Understanding the Core Concept
The BMPPL indicator calculates the probability of a bar's price movement by analyzing historical price changes and comparing them to the current price change (in percentage). The indicator maintains separate arrays for green (bullish) and red (bearish) price movements and their corresponding counts.
When a new bar is formed, the indicator checks whether the price movement (in percentage) is already present in the respective array. If it is, the corresponding count is updated. Otherwise, a new entry is added to the array, with an initial count of 1.
Once the historical data has been analyzed, the BMPPL indicator calculates the probability of each price movement by dividing the count of each movement by the sum of all counts. These probabilities are then stored in separate arrays for green and red movements.
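A stripped-down Pine v5 sketch of this counting scheme; the 0.1% bucketing is an assumption added here so that repeat moves can match exactly:

```pine
//@version=5
indicator("Bar move frequency sketch")

var moves = array.new_float()  // distinct % moves seen so far
var counts = array.new_int()   // occurrences of each move

// Percent move of this bar, bucketed to 0.1% steps.
chg = math.round((close - open) / open * 1000.0) / 10.0

if barstate.isconfirmed
    idx = array.indexof(moves, chg)
    if idx == -1
        array.push(moves, chg)
        array.push(counts, 1)
    else
        array.set(counts, idx, array.get(counts, idx) + 1)

// Probability of this bar's move = its count / total observations.
total = array.sum(counts)
i = array.indexof(moves, chg)
float prob = na
if i != -1 and total > 0
    prob := array.get(counts, i) / total
plot(prob)
```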
Utilizing BMPPL Indicator Settings Effectively
To make the most of the BMPPL indicator, it's essential to understand how to use the Max Number of Elements and Step Size settings effectively:
Identify your trading objectives: Before adjusting the settings, it's crucial to know what you want to achieve with your trades. Are you targeting specific price levels or focusing on high-probability moves? Identifying your objectives will help you determine the appropriate settings.
Start with the default settings: The default settings provide a broad overview of price movement probabilities. Start by analyzing these settings to gain a general understanding of the market behavior.
Adjust the settings according to your objectives: Once you have a clear understanding of your trading objectives, adjust the Max Number of Elements and Step Size settings accordingly. For example, if you want to focus on the highest probability move, set both settings to 1.
Experiment and refine: As you gain experience with the BMPPL indicator, continue to experiment with different combinations of Max Number of Elements and Step Size settings. This will help you find the optimal configuration that aligns with your trading strategy and risk tolerance. Remember to continually evaluate your trading results and refine your settings as needed.
Combine with other technical analysis tools: While the BMPPL indicator provides valuable insights on its own, combining it with other technical analysis tools can further enhance your trading strategy. Use additional indicators and chart patterns to confirm your analysis and improve the accuracy of your trades.
Monitor and adjust: Market conditions are constantly changing, and it's crucial to stay adaptive. Keep monitoring the market and adjust your BMPPL settings as necessary to ensure they remain relevant and effective in the current market environment.
By understanding and effectively utilizing the Max Number of Elements and Step Size settings in the BMPPL indicator, you can gain a deeper insight into the potential direction of the market, allowing you to make more informed trading decisions. Experimenting with different settings and combining the BMPPL indicator with other technical analysis tools will ultimately help you develop a robust trading strategy that maximizes your potential profits.
How Can the BMPPL Indicator Benefit Your Trading?
The primary benefit of the BMPPL indicator is its ability to provide you with a statistical edge in your trading by displaying probabilities for various price movements. By analyzing historical price data, the indicator helps you understand the likelihood of certain price movements occurring, allowing you to make more informed decisions about your trades.
The customizable nature of the BMPPL indicator makes it a valuable tool for traders with specific price targets or risk management strategies in mind. By understanding the probability of reaching your target price or the likelihood of encountering a significant price movement, you can better manage your risk and optimize your trading strategy.
Additionally, the BMPPL indicator can be used in conjunction with other technical analysis tools and indicators to further strengthen your trading strategy. For example, you can combine the BMPPL indicator with support and resistance levels, trend lines, and moving averages to better time your entries and exits.
Wrapping Up
In conclusion, the Bar Move Probability Price Levels (BMPPL) indicator is a powerful and customizable tool that can help you gain a statistical edge in your trading. By analyzing historical price data and displaying probabilities for various price movements, the BMPPL indicator allows you to make more informed decisions about your trades, ultimately leading to more successful outcomes.
The customizable settings of the BMPPL indicator make it an adaptable tool for traders with diverse trading styles and risk management preferences. With its ability to provide valuable insights into the potential direction of the market, the BMPPL indicator is an essential addition to any trader's toolbox.
Moreover, when combined with other technical analysis tools and indicators, the BMPPL indicator can further enhance your trading strategy, allowing you to better time your entries and exits and maximize your potential profits. So, if you're looking to gain an edge in your trading and improve your decision-making process, the Bar Move Probability Price Levels (BMPPL) indicator is definitely worth exploring.
Bar Move Probability (BMP)
Hello fellow traders! I am excited to share with you my latest creation, the Bar Move Probability (BMP) indicator. This powerful tool is designed to give you a statistical edge in your trading by helping you understand the likelihood of price movements based on historical data. In this blog post, I'll give you an overview of the indicator, its features, and how it can help you make more informed trading decisions. Let's dive in!
What is the Bar Move Probability Indicator?
The Bar Move Probability (BMP) indicator is a unique tool that calculates the probability of a bar's price movement, either up or down, based on past occurrences of similar price movements. This can give you valuable insights into the potential direction of the market, allowing you to make better-informed trading decisions.
One of the key features of the BMP indicator is that it allows you to choose the price you want to determine the probability of. By inputting your desired price, the indicator will analyze historical data and provide you with the likelihood of reaching that price, offering a more personalized approach to trading.
How Does the BMP Indicator Work?
The BMP indicator calculates the probability of a bar's price movement by comparing the current price change (in percentage) to historical price changes. It does this by maintaining separate arrays for green (bullish) and red (bearish) price movements, as well as corresponding arrays for the count of each movement.
Whenever a new bar is formed, the indicator checks whether the price movement (in percentage) is already present in the respective array. If it is, the corresponding count is updated. Otherwise, a new entry is added to the array, with an initial count of 1.
After analyzing the historical data, the BMP indicator calculates the probability of each price movement by dividing the count of each movement by the sum of all counts. These probabilities are then stored in separate arrays for green and red movements.
Finally, the indicator displays the probability of the current price movement as a label on the chart. The label is color-coded, with green indicating a bullish price movement and red indicating a bearish price movement.
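As a rough illustration of the label display, here is a self-contained Pine v5 sketch. The probability value is a percent-rank placeholder, since the real number comes from the historical green/red counts described above.

//@version=5
indicator("BMP label sketch", overlay=true)

// Placeholder probability; the actual indicator derives this from its
// historical move counts, not from a percent rank.
prob = ta.percentrank(math.abs(close - open) / open, 200) / 100

if barstate.islast
    // Green label for a bullish bar, red for a bearish bar.
    label.new(bar_index, high, "P = " + str.tostring(prob * 100, "#.##") + "%", color = close >= open ? color.green : color.red, textcolor = color.white, style = label.style_label_down)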
How Can the BMP Indicator Benefit Your Trading?
The primary benefit of the BMP indicator is its ability to provide you with a statistical edge in your trading. By analyzing historical price data, the indicator helps you understand the likelihood of a certain price movement occurring, allowing you to make more informed decisions about your trades.
The customizable nature of the BMP indicator, allowing you to input your desired price, makes it a valuable tool for traders with specific price targets in mind. By understanding the probability of reaching your target price, you can better manage your risk and optimize your trading strategy.
For example, suppose the BMP indicator shows a high probability of a bullish price movement towards your target price. In that case, you may consider entering a long position or tightening your stop loss on an existing short position. Conversely, if the indicator displays a high probability of a bearish price movement away from your target price, you may consider entering a short position or taking profit on an existing long position.
The BMP indicator can be used in conjunction with other technical analysis tools and indicators to further strengthen your trading strategy. For example, you can combine the BMP indicator with support and resistance levels, trend lines, and moving averages to better time your entries and exits.
Wrapping Up
In conclusion, the Bar Move Probability (BMP) indicator is a powerful and customizable tool that can help you gain a statistical edge in your trading. By analyzing historical price data and allowing you to input your desired price, the indicator provides valuable insights into the likelihood of price movements, enabling you to make better-informed trading decisions.
I hope you find the BMP indicator useful!
Arbitrary Price Point Probability (APPP)
The "Arbitrary Price Point Probability" indicator is designed to calculate the probability of a given price point occurring within a certain range of prices. The indicator uses statistical analysis to determine the likelihood of a specific price point appearing based on the market data.
The indicator works by taking the input price, which is the price point for which the probability is being calculated. It then calculates the mean and standard deviation of prices over a look-back period whose length is specified by the user.
Once the mean and standard deviation have been calculated, the indicator uses them to calculate the probability of the input price point occurring within the range of prices over the specified period. The indicator does this by calculating the z-score, which is the number of standard deviations between the input price point and the mean price. The z-score is then used to calculate the probability using a t-distribution probability density function.
The t-distribution probability density function used by the indicator is a mathematical function that describes the likelihood of obtaining a particular value from a t-distribution. A t-distribution is a statistical distribution used when the sample size is small, and the population standard deviation is unknown.
The indicator also uses a binary search algorithm to find the t-value for a given confidence level. The t-value is the number of standard deviations from the mean at which the confidence interval is set. The confidence level is set by the user, and the default value is 99%.
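A minimal sketch of the z-score step, assuming a simple moving average and standard deviation of closes. A logistic approximation of the standard normal CDF stands in for the t-distribution density and binary search that the actual script uses.

//@version=5
indicator("APPP z-score sketch")

targetPrice = input.price(100.0, "Target price")
length      = input.int(50, "Lookback")

mean = ta.sma(close, length)
sd   = ta.stdev(close, length)
z    = (targetPrice - mean) / sd  // standard deviations from the mean

// Logistic approximation of the standard normal CDF (not the t-distribution).
cdf(float x) =>
    1.0 / (1.0 + math.exp(-1.702 * x))

// Two-sided probability of a close at least as far from the mean as the target.
probBeyond = 2.0 * (1.0 - cdf(math.abs(z)))
plot(probBeyond, "P(beyond target)")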
Overall, the "Arbitrary Price Point Probability" indicator is a useful tool for traders who want to determine the probability of a particular price point occurring within a certain range of prices. The indicator can be used in conjunction with other technical analysis tools to make more informed trading decisions.
Intrabar Run Count Indicator [tbiktag]
• OVERVIEW
Introducing the Intrabar Run Count Indicator, a tool designed to detect potential non-randomness in intrabar price data. It utilizes the statistical runs test to examine the number of sequences (runs) of positive and negative returns in the analyzed price series. As deviations from random-walk behavior of returns may indicate market inefficiencies, the Intrabar Run Count Indicator can help traders gain a better understanding of the price dynamics inside each chart bar and make more informed trading decisions.
• USAGE
The indicator line expresses the deviation between the number of runs observed in the dataset and the expected number of runs under the hypothesis of randomness. Thus, it gauges the degree of deviation from random-walk behavior. If, for a given chart bar, it crosses above the critical value or crosses below the negative critical value, this may indicate non-randomness in the underlying intrabar returns. These instances are highlighted by on-chart signals and bar coloring. The confidence level that defines the critical value, as well as the number of intrabars used for analysis, are selected in the input settings.
It is important to note that the readings of the Intrabar Run Count Indicator do not convey directional information and cannot predict future asset performance. Rather, they help distinguish between random and potentially tradable price movements, such as breakouts, reversals, and gap fillings.
• DETAILS
The efficient-market hypothesis implies that the distribution of returns should be random, reflecting the idea that all available information is already priced into the asset. However, in practice, financial markets may not always be perfectly efficient due to factors such as market frictions, information asymmetry, and irrational behavior of market participants. As a result, inefficiency (non-randomness) can occur, potentially creating opportunities for trading strategies.
To search for potential inefficiencies, the Intrabar Run Count Indicator analyzes the distribution of the signs of returns. The central assumption underlying the indicator's logic is that if the asset price follows a random-walk pattern, then the probability of the next return being positive or negative (i.e., the next price value being larger or smaller than the current value) follows a binomial distribution. In this case, the number of runs is also a random variable, and, for a large sample, its conditional distribution is approximately normal with a well-defined mean and variance (both have well-known closed-form expressions). Thus, the observed number of runs in the price series is indicative of whether or not the time series can be regarded as random. In simple words, if there are too few runs or too many runs, it is unlikely to be a random time series. A trivial example is a series with all returns of the same sign.
Quantitatively, the deviation from randomness can be gauged by calculating the test statistic of the runs test (which serves as the indicator line). It is defined as the difference between the observed number of runs and the expected number of runs under the null hypothesis of randomness, divided by the standard deviation of the expected number of runs. If the test statistic is negative and exceeds the negative critical value (at a given confidence level), it suggests that there are fewer runs than expected for a random-walking time series. Likewise, if the test statistic exceeds the positive critical value, it is indicative of more runs than expected for a random series. The sign of the test statistic can also be informative, as too few runs can sometimes be indicative of mean-reverting behavior.
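For reference, here is a compact Pine v5 sketch of the runs-test statistic applied to chart-bar returns (the published indicator applies it to intrabar data requested from a lower timeframe, which this sketch omits). The mean and variance are the standard Wald-Wolfowitz expressions.

//@version=5
indicator("Runs test sketch", max_bars_back=500)

n = input.int(100, "Sample size", minval=10)

// Count positive returns, negative returns, and runs over the last n returns.
int n1 = 0
int n2 = 0
int runs = 0
float prevSign = na
for i = 0 to n - 1
    r = close[i] - close[i + 1]
    if r == 0
        continue
    s = r > 0 ? 1.0 : -1.0
    n1 += s > 0 ? 1 : 0
    n2 += s < 0 ? 1 : 0
    if na(prevSign) or s != prevSign
        runs += 1
    prevSign := s

// Expected run count and its variance under the null hypothesis of randomness.
m = 2.0 * n1 * n2 / (n1 + n2) + 1.0
v = 2.0 * n1 * n2 * (2.0 * n1 * n2 - n1 - n2) / (math.pow(n1 + n2, 2) * (n1 + n2 - 1))
z = v > 0 ? (runs - m) / math.sqrt(v) : na

plot(z, "Run-count z", color=color.teal)
hline(1.96, "+95% critical")
hline(-1.96, "-95% critical")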
• CONCLUSION
The Intrabar Run Count Indicator can be a useful tool for traders seeking to exploit market inefficiencies and gain a better understanding of price action within each chart bar. However, it is important to note that the runs test only evaluates the distributional properties of the data and does not provide any information on the underlying causes of the non-randomness detected. Additionally, like any statistical test, it can sometimes produce false-positive signals. Therefore, this indicator should be used in conjunction with other analytical techniques as part of a trading strategy.
TradeBee Percent Gap Alert
A simple script that lets you add a trigger when a stock's gap reaches 'X' percent; the 'X' is configurable.
The script also displays the current ticker's gap %, which is particularly helpful when you have two or more panels on a chart.
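A minimal Pine v5 sketch of such a trigger, assuming the gap is measured from the prior close to the current open:

//@version=5
indicator("Percent gap alert sketch", overlay=true)

threshold = input.float(2.0, "Gap threshold %", minval=0.0)

// Gap between this bar's open and the prior close, in percent.
gapPct = (open - close[1]) / close[1] * 100

plot(gapPct, "Gap %", display=display.data_window)
alertcondition(math.abs(gapPct) >= threshold, "Gap alert", "Gap exceeded the configured threshold")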
Strategy Designer
Hello traders.
With the tool I have published, anyone, whether they know how to code or not, can create strategies and see the results instantly on the screen. Yes, it looks very nice :)
What does this script do?
Thanks to this tool, even if you don't know any coding, you will be able to create your own strategies. You can add and remove indicators.
Entrance
The first thing you need to do is settle on a strategy in your mind.
Then you need to adjust the settings of the indicators built into the system. Please configure the indicators first, because they are easy to forget later.
The screen for entering the parameters of the indicators will be as follows.
After entering the parameters comes an important part. In this section, we can adjust the strategy settings.
First we choose the dates between which we want the strategy to run. We then choose whether or not the strategy results should be displayed in a table.
Next, we choose how the conditions should be linked together. For example, if your setup requires more than one signal before entering, select whether those conditions are joined with the AND or the OR connector.
In this way, you can determine whether all of the rules in your strategy must apply, or just any one of them.
Next, we choose whether our strategy will work in the spot market or in a bidirectional market. Yes, you can design a strategy for both spot and bidirectional trades :)
At the bottom of the image above, we see a screen where we can adjust the stop level and take-profit level. By default, both are calculated from the percentage value you enter. However, if you remove the tick next to the percent sign, the stop level and the take-profit level are instead offset by the absolute value you entered.
Below that is the trailing stop. When you switch it on, the trailing stop becomes active in your strategy.
Very important: when the trailing stop and the fixed stop are active at the same time, the trailing stop value takes precedence (see the sketch below).
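For illustration, a hedged Pine v5 sketch of how percent-based stop, take-profit, and trailing-stop exits can coexist with the trailing stop taking precedence. The entry condition and input names are placeholders, not the published script's logic.

//@version=5
strategy("Stop/TP precedence sketch", overlay=true)

stopPct  = input.float(2.0, "Stop %")
tpPct    = input.float(4.0, "Take profit %")
useTrail = input.bool(false, "Use trailing stop")
trailPct = input.float(1.5, "Trail %")

// Illustrative entry condition only.
longCond = ta.crossover(ta.ema(close, 9), ta.ema(close, 21))
if longCond
    strategy.entry("L", strategy.long)

avg = strategy.position_avg_price
if strategy.position_size > 0
    if useTrail
        // When trailing is on, it overrides the fixed stop.
        strategy.exit("X", "L", trail_price = avg * (1 + trailPct / 100), trail_offset = avg * trailPct / 100 / syminfo.mintick, limit = avg * (1 + tpPct / 100))
    else
        strategy.exit("X", "L", stop = avg * (1 - stopPct / 100), limit = avg * (1 + tpPct / 100))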
It's time to design our strategy. Each section that begins with an exclamation mark is a separate setup.
If you do not mark the Active button, that condition will not be included in the calculation.
Direction = the direction for which the setup in this section applies.
Next comes the indicator selection screen, where we can choose two different indicators, one on the left and one on the right.
We choose the first indicator starting with 1.
Then we choose from the middle region how we want these two indicators to interact.
We choose our second indicator from the place starting with 2.
If you want an indicator to be compared against a fixed value instead, tick the Value box and fill in the blank. When Value is ticked, the second indicator is ignored.
COT-index range
A graph showing the commercials' positioning (part of COT data) relative to its own range over the past X periods. I usually choose the look-back period to equal approximately one year: around 52 on a weekly chart and 250 on a daily chart.
In my opinion a high reading for the commercials is bullish and vice versa. But instead of only looking at absolute values, I now look more at how the commercials are positioned compared to the previous 12 and 6 months.
Example:
a) if COT-index range = 0.8, the commercials are in the 80th percentile for the chosen look-back period, i.e. they have been more bullish only 20% of the time and more bearish 80% of the time.
b) if COT-index range = 0.5, the commercials are in the 50th percentile, i.e. they have been more bullish 50% of the time and more bearish 50% of the time.
c) if COT-index range = 0.2, the commercials are in the 20th percentile, i.e. they have been more bullish 80% of the time and more bearish 20% of the time.
In other words, a high reading is bullish and a low reading is bearish.
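The calculation behind such a COT index is a stochastic-style normalization over the look-back window. A minimal Pine v5 sketch, with a stand-in series where the commercials' net position would go:

//@version=5
indicator("COT index range sketch")

lookback = input.int(52, "Lookback (bars)")

// Stand-in series: replace with the commercials' net position from your
// COT data source; using the chart's close here is purely illustrative.
commNet = close

lo  = ta.lowest(commNet, lookback)
hi  = ta.highest(commNet, lookback)
idx = (commNet - lo) / (hi - lo)

plot(idx, "COT index", color=color.blue)
hline(0.8, "Bullish zone")
hline(0.2, "Bearish zone")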
Global Liquidity
The "Global Liquidity" script is an indicator that calculates and displays the global liquidity value using a formula that takes into account the money supply of several major economies. The script utilizes data from various sources, such as the Federal Reserve Economic Data (FRED), Economics, and FX_IDC.
The indicator plots the global liquidity value as a candlestick chart and breaks it down into two categories: the Euro-Atlantic region (West) and the rest of the world (East). The values are denominated both in inflation-adjusted dollars and in trillions of dollars. The script also calculates the spread between the Euro-Atlantic region and the rest of the world.
Traders and investors can use this indicator to gauge the overall liquidity of the global economy and to identify potential investment opportunities or risks. By breaking down the liquidity value into different regions, traders can also gain insights into regional economic trends and dynamics.
Note that this script is subject to the terms of the Mozilla Public License 2.0 and was created by rodopacapital.
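As a rough sketch of this kind of aggregation, the snippet below sums a few M2 series converted to dollars. Every symbol is an assumption; verify the exact ECONOMICS and FX_IDC tickers available on your data plan before relying on it.

//@version=5
indicator("Global liquidity sketch")

// Money-supply series (assumed tickers).
usM2 = request.security("ECONOMICS:USM2", "M", close)
euM2 = request.security("ECONOMICS:EUM2", "M", close)
cnM2 = request.security("ECONOMICS:CNM2", "M", close)
jpM2 = request.security("ECONOMICS:JPM2", "M", close)

// FX rates used to convert everything to USD (assumed tickers).
eurusd = request.security("FX_IDC:EURUSD", "M", close)
usdcny = request.security("FX_IDC:USDCNY", "M", close)
usdjpy = request.security("FX_IDC:USDJPY", "M", close)

total = usM2 + euM2 * eurusd + cnM2 / usdcny + jpM2 / usdjpy
plot(total, "Global M2 (USD)")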
Utility
Library "Utility"
dema(src, length): Double Exponential Moving Average.
Parameters: src (float), length (simple int)
tema(src, length): Triple Exponential Moving Average.
Parameters: src (float), length (simple int)
hma(src, length): Hull Moving Average.
Parameters: src (float), length (int)
zlema(src, length): Zero-Lag Exponential Moving Average.
Parameters: src (float), length (simple int)
stochRSI(src, lengthRSI, lengthStoch, smoothK, smoothD): Stochastic RSI.
Parameters: src (float), lengthRSI (simple int), lengthStoch (int), smoothK (int), smoothD (int)
slope(src, length): Slope of the source over the given length.
Parameters: src (float), length (int)
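As an example of what one of these helpers typically computes, here is a minimal ZLEMA sketch using the common de-lagged definition; the library's exact implementation may differ.

//@version=5
indicator("ZLEMA sketch", overlay=true)

length = input.int(21, "Length")
lag    = math.floor((length - 1) / 2)

// De-lag the source by projecting it forward, then apply a regular EMA.
zlema = ta.ema(close + (close - close[lag]), length)
plot(zlema, "ZLEMA", color=color.orange)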
Days in range
This script is a little widget that I made to do some homework on the VIX.
As you can see in the chart I was analyzing the 2008 market crash and the stats that followed it after until the market started to recover.
You can see that theory in my "Ideas" tab.
This is an interactive set of lines that you can use to count the bars inside and outside of your chosen range, and the percentage outside that range.
You should initially enter the price range of your product in the menu and set some arbitrary dates that you can easily see on your chart.
Drag and drop the lines around to suit what price and the dates you are analyzing.
The table will display the bar count inside and outside of the range, the total bars, and the percentage outside that range.
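A bare-bones Pine v5 sketch of the counting logic, assuming the price bounds and the date window come from inputs rather than draggable lines:

//@version=5
indicator("Days in range sketch", overlay=true)

upper = input.price(25.0, "Upper bound")
lower = input.price(15.0, "Lower bound")
t1    = input.time(timestamp(2008, 1, 1, 0, 0), "Start")
t2    = input.time(timestamp(2010, 1, 1, 0, 0), "End")

var int inside  = 0
var int outside = 0
if time >= t1 and time <= t2 and barstate.isconfirmed
    if close >= lower and close <= upper
        inside += 1
    else
        outside += 1

total  = inside + outside
pctOut = total > 0 ? 100.0 * outside / total : na
plot(pctOut, "% outside range", display=display.data_window)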
I personally used this as a tool to study the overall average of the product, compared with the behavior during major market events.
It is currently my opinion that post-2020 analysis needs to take into account the behavior of any given product prior to 2020, when the
VIX was in its comfort zone. Not to say that a price valuation hasn't been set, but the movement to that price was outside of "Normal Market Conditions,"
and the time factor to return to that value might be skewed. Other factors would need to be considered at that point pertaining to your specific product or correlating indicator.
I could see this tool being useful to Forex and commodities traders. But that isn't my field, so take that for what it is. I do think it would perform best on something that is more
pegged to a price range. I personally would use it on products, like the VIX, that I use as indicator products. That is what it was designed for.
But I suppose it could be used for mean price and time-related analysis, maybe with a VWAP, SMA or other breakout-style indicators.
Volume analysis might be pretty sporty. Possibly time patterns... the possibilities could be endless. Or... limited.
I am publishing this for my trade group so that it can be tinkered with to find other helpful ways to use it.
If anyone finds something interesting with other indicators, please drop a comment below and I could consider creating a script to integrate with this tool.
loxxfft
Library "loxxfft"
This code is a library for performing Fast Fourier Transform (FFT) operations. FFT is an algorithm that can quickly compute the discrete Fourier transform (DFT) of a sequence. The library includes functions for performing FFTs on both real and complex data. It also includes functions for fast correlation and convolution, which are operations that can be performed efficiently using FFTs. Additionally, the library includes functions for fast sine and cosine transforms.
Reference:
www.alglib.net
fastfouriertransform(a, nn, inversefft)
Returns Fast Fourier Transform
Parameters:
a (float[]): An array of real and imaginary parts of the function values. The real part is stored at even indices, and the imaginary part is stored at odd indices.
nn (int): The number of function values. It must be a power of two, but the algorithm does not validate this.
inversefft (bool): A boolean value that indicates the direction of the transformation. If True, it performs the inverse FFT; if False, it performs the direct FFT.
Returns: float[], Modifies the input array a in-place, which means that the transformed data (the FFT result for direct transformation or the inverse FFT result for inverse transformation) will be stored in the same array a after the function execution. The transformed data will have real and imaginary parts interleaved, with the real parts at even indices and the imaginary parts at odd indices.
realfastfouriertransform(a, tnn, inversefft)
Returns Real Fast Fourier Transform
Parameters:
a (float[]): A float array containing the real-valued function samples.
tnn (int): The number of function values (must be a power of 2, but the algorithm does not validate this condition).
inversefft (bool): A boolean flag that indicates the direction of the transformation (True for inverse, False for direct).
Returns: float[], Modifies the input array a in-place, meaning that the transformed data (the FFT result for direct transformation or the inverse FFT result for inverse transformation) will be stored in the same array a after the function execution.
fastsinetransform(a, tnn, inversefst)
Returns Fast Discrete Sine Conversion
Parameters:
a (float[]): An array of real numbers representing the function values.
tnn (int): Number of function values (must be a power of two, but the code doesn't validate this).
inversefst (bool): A boolean flag indicating the direction of the transformation. If True, it performs the inverse FST, and if False, it performs the direct FST.
Returns: float[], The output is the transformed array 'a', which will contain the result of the transformation.
fastcosinetransform(a, tnn, inversefct)
Returns Fast Discrete Cosine Transform
Parameters:
a (float[]): This is a floating-point array representing the sequence of values (time-domain) that you want to transform. The function will perform the Fast Cosine Transform (FCT) or the inverse FCT on this input array, depending on the value of the inversefct parameter. The transformed result will also be stored in this same array, which means the function modifies the input array in-place.
tnn (int): This is an integer value representing the number of data points in the input array a. It is used to determine the size of the input array and control the loops in the algorithm. Note that the size of the input array should be a power of 2 for the Fast Cosine Transform algorithm to work correctly.
inversefct (bool): This is a boolean value that controls whether the function performs the regular Fast Cosine Transform or the inverse FCT. If inversefct is set to true, the function will perform the inverse FCT, and if set to false, the regular FCT will be performed. The inverse FCT can be used to transform data back into its original form (time-domain) after the regular FCT has been applied.
Returns: float[], The resulting transformed array is stored in the input array a. This means that the function modifies the input array in-place and does not return a new array.
fastconvolution(signal, signallen, response, negativelen, positivelen)
Convolution using FFT
Parameters:
signal (float[]): This is an array of real numbers representing the input signal that will be convolved with the response function. The elements are numbered from 0 to SignalLen-1.
signallen (int): This is an integer representing the length of the input signal array. It specifies the number of elements in the signal array.
response (float[]): This is an array of real numbers representing the response function used for convolution. The response function consists of two parts: one corresponding to positive argument values and the other to negative argument values. Array elements with numbers from 0 to NegativeLen match the response values at points from -NegativeLen to 0, respectively. Array elements with numbers from NegativeLen+1 to NegativeLen+PositiveLen correspond to the response values in points from 1 to PositiveLen, respectively.
negativelen (int): This is an integer representing the "negative length" of the response function. It indicates the number of elements in the response function array that correspond to negative argument values. Outside the range [-NegativeLen, PositiveLen], the response function is considered zero.
positivelen (int): This is an integer representing the "positive length" of the response function. It indicates the number of elements in the response function array that correspond to positive argument values. Similar to negativelen, outside the range [-NegativeLen, PositiveLen], the response function is considered zero.
Returns: float[], The resulting convolved values are stored back in the input signal array.
fastcorrelation(signal, signallen, pattern, patternlen)
Returns Correlation using FFT
Parameters:
signal (float[]): This is an array of real numbers representing the signal to be correlated with the pattern. The elements are numbered from 0 to SignalLen-1.
signallen (int): This is an integer representing the length of the input signal array.
pattern (float[]): This is an array of real numbers representing the pattern to be correlated with the signal. The elements are numbered from 0 to PatternLen-1.
patternlen (int): This is an integer representing the length of the pattern array.
Returns: float[], The signal array containing the correlation values at points from 0 to SignalLen-1.
tworealffts(a1, a2, a, b, tn)
Returns Fast Fourier Transform of Two Real Functions
Parameters:
a1 (float[]): An array of real numbers representing the values of the first function.
a2 (float[]): An array of real numbers representing the values of the second function.
a (float[]): An output array to store the Fourier transform of the first function.
b (float[]): An output array to store the Fourier transform of the second function.
tn (int): An integer representing the number of function values. It must be a power of two, but the algorithm doesn't validate this condition.
Returns: float[], The a and b arrays will contain the Fourier transform of the first and second functions, respectively. Note that the function overwrites the input arrays a and b.
█ Detailed explanation of each function
Fast Fourier Transform
The fastfouriertransform() function takes three input parameters:
1. a: An array of real and imaginary parts of the function values. The real part is stored at even indices, and the imaginary part is stored at odd indices.
2. nn: The number of function values. It must be a power of two, but the algorithm does not validate this.
3. inversefft: A boolean value that indicates the direction of the transformation. If True, it performs the inverse FFT; if False, it performs the direct FFT.
The function performs the FFT using the Cooley-Tukey algorithm, which is an efficient algorithm for computing the discrete Fourier transform (DFT) and its inverse. The Cooley-Tukey algorithm recursively breaks down the DFT of a sequence into smaller DFTs of subsequences, leading to a significant reduction in computational complexity. The algorithm's time complexity is O(n log n), where n is the number of samples.
The fastfouriertransform() function first initializes variables and determines the direction of the transformation based on the inversefft parameter. If inversefft is True, the isign variable is set to -1; otherwise, it is set to 1.
Next, the function performs the bit-reversal operation. This is a necessary step before calculating the FFT, as it rearranges the input data in a specific order required by the Cooley-Tukey algorithm. The bit-reversal is performed using a loop that iterates through the nn samples, swapping the data elements according to their bit-reversed index.
After the bit-reversal operation, the function iteratively computes the FFT using the Cooley-Tukey algorithm. It performs calculations in a loop that goes through different stages, doubling the size of the sub-FFT at each stage. Within each stage, the Cooley-Tukey algorithm calculates the butterfly operations, which are mathematical operations that combine the results of smaller DFTs into the final DFT. The butterfly operations involve complex number multiplication and addition, updating the input array a with the computed values.
The loop also calculates the twiddle factors, which are complex exponential factors used in the butterfly operations. The twiddle factors are calculated using trigonometric functions, such as sine and cosine, based on the angle theta. The variables wpr, wpi, wr, and wi are used to store intermediate values of the twiddle factors, which are updated in each iteration of the loop.
Finally, if the inversefft parameter is True, the function divides the result by the number of samples nn to obtain the correct inverse FFT result. This normalization step is performed using a loop that iterates through the array a and divides each element by nn.
In summary, the fastfouriertransform() function is an implementation of the Cooley-Tukey FFT algorithm, which is an efficient algorithm for computing the DFT and its inverse. This FFT library can be used for a variety of applications, such as signal processing, image processing, audio processing, and more.
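A hypothetical usage example of the complex FFT is shown below. The import path and version are assumptions, and the buffer layout follows the interleaved real/imaginary convention documented above.

//@version=5
indicator("FFT usage sketch", max_bars_back=256)
// The import path/version is an assumption; adjust it to the published library.
import loxx/loxxfft/1 as fft

int N = 64  // number of complex samples; must be a power of two

if barstate.islast
    // Interleaved complex buffer: real parts at even indices, imaginary at odd.
    a = array.new_float(2 * N, 0.0)
    for i = 0 to N - 1
        array.set(a, 2 * i, close[N - 1 - i])  // real part = price sample
        array.set(a, 2 * i + 1, 0.0)           // imaginary part = 0
    fft.fastfouriertransform(a, N, false)      // in-place forward FFT
    // Magnitude of frequency bin 1: sqrt(re^2 + im^2).
    re1 = array.get(a, 2)
    im1 = array.get(a, 3)
    label.new(bar_index, high, "bin 1 magnitude: " + str.tostring(math.sqrt(re1 * re1 + im1 * im1)))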
Real Fast Fourier Transform
The realfastfouriertransform() function performs a fast Fourier transform (FFT) specifically for real-valued functions. The FFT is an efficient algorithm used to compute the discrete Fourier transform (DFT) and its inverse, which are fundamental tools in signal processing, image processing, and other related fields.
This function takes three input parameters:
1. a - A float array containing the real-valued function samples.
2. tnn - The number of function values (must be a power of 2, but the algorithm does not validate this condition).
3. inversefft - A boolean flag that indicates the direction of the transformation (True for inverse, False for direct).
The function modifies the input array a in-place, meaning that the transformed data (the FFT result for direct transformation or the inverse FFT result for inverse transformation) will be stored in the same array a after the function execution.
The algorithm uses a combination of complex-to-complex FFT and additional transformations specific to real-valued data to optimize the computation. It takes into account the symmetry properties of the real-valued input data to reduce the computational complexity.
Here's a detailed walkthrough of the algorithm:
1. Depending on the inversefft flag, the initial values for ttheta, c1, and c2 are determined. These values are used for the initial data preprocessing and post-processing steps specific to the real-valued FFT.
2. The preprocessing step computes the initial real and imaginary parts of the data using a combination of sine and cosine terms with the input data. This step effectively converts the real-valued input data into complex-valued data suitable for the complex-to-complex FFT.
3. The complex-to-complex FFT is then performed on the preprocessed complex data. This involves bit-reversal reordering, followed by the Cooley-Tukey radix-2 decimation-in-time algorithm. This part of the code is similar to the fastfouriertransform() function you provided earlier.
4. After the complex-to-complex FFT, a post-processing step is performed to obtain the final real-valued output data. This involves updating the real and imaginary parts of the transformed data using sine and cosine terms, as well as the values c1 and c2.
5. Finally, if the inversefft flag is True, the output data is divided by the number of samples (nn) to obtain the inverse DFT.
The function does not return a value explicitly. Instead, the transformed data is stored in the input array a. After the function execution, you can access the transformed data in the a array, which will have the real part at even indices and the imaginary part at odd indices.
Fast Sine Transform
This code defines a function called fastsinetransform that performs a Fast Discrete Sine Transform (FST) on an array of real numbers. The function takes three input parameters:
1. a (float array): An array of real numbers representing the function values.
2. tnn (int): Number of function values (must be a power of two, but the code doesn't validate this).
3. inversefst (bool): A boolean flag indicating the direction of the transformation. If True, it performs the inverse FST, and if False, it performs the direct FST.
The output is the transformed array 'a', which will contain the result of the transformation.
The code starts by initializing several variables, including trigonometric constants for the sine transform. It then sets the first value of the array 'a' to 0 and calculates the initial values of 'y1' and 'y2', which are used to update the input array 'a' in the following loop.
The first loop (with index 'jx') iterates from 2 to (tm + 1), where 'tm' is half of the number of input samples 'tnn'. This loop is responsible for calculating the initial sine transform of the input data.
The second loop (with index 'ii') is a bit-reversal loop. It reorders the elements in the array 'a' based on the bit-reversed indices of the original order.
The third loop (with index 'ii') iterates while 'n' is greater than 'mmax', which starts at 2 and doubles each iteration. This loop performs the actual Fast Discrete Sine Transform. It calculates the sine transform using the Danielson-Lanczos lemma, which is a divide-and-conquer strategy for calculating Discrete Fourier Transforms (DFTs) efficiently.
The fourth loop (with index 'ix') is responsible for the final phase adjustments needed for the sine transform, updating the array 'a' accordingly.
The fifth loop (with index 'jj') updates the array 'a' one more time by dividing each element by 2 and calculating the sum of the even-indexed elements.
Finally, if the 'inversefst' flag is True, the code scales the transformed data by a factor of 2/tnn to get the inverse Fast Sine Transform.
In summary, the code performs a Fast Discrete Sine Transform on an input array of real numbers, either in the direct or inverse direction, and returns the transformed array. The algorithm is based on the Danielson-Lanczos lemma and uses a divide-and-conquer strategy for efficient computation.
Fast Cosine Transform
This code defines a function called fastcosinetransform that takes three parameters: a floating-point array a, an integer tnn, and a boolean inversefct. The function calculates the Fast Cosine Transform (FCT) or the inverse FCT of the input array, depending on the value of the inversefct parameter.
The Fast Cosine Transform is an algorithm that converts a sequence of values (time-domain) into a frequency domain representation. It is closely related to the Fast Fourier Transform (FFT) and can be used in various applications, such as signal processing and image compression.
Here's a detailed explanation of the code:
1. The function starts by initializing a number of variables, including counters, intermediate values, and constants.
2. The initial steps of the algorithm are performed. This includes calculating some trigonometric values and updating the input array a with the help of intermediate variables.
3. The code then enters a loop (from jx = 2 to tnn / 2). Within this loop, the algorithm computes and updates the elements of the input array a.
4. After the loop, the function prepares some variables for the next stage of the algorithm.
5. The next part of the algorithm is a series of nested loops that perform the bit-reversal permutation and apply the FCT to the input array a.
6. The code then calculates some additional trigonometric values, which are used in the next loop.
7. The following loop (from ix = 2 to tnn / 4 + 1) computes and updates the elements of the input array a using the previously calculated trigonometric values.
8. The input array a is further updated with the final calculations.
9. In the last loop (from j = 4 to tnn), the algorithm computes and updates the sum of elements in the input array a.
10. Finally, if the inversefct parameter is set to true, the function scales the input array a to obtain the inverse FCT.
The resulting transformed array is stored in the input array a. This means that the function modifies the input array in-place and does not return a new array.
Fast Convolution
This code defines a function called fastconvolution that performs the convolution of a given signal with a response function using the Fast Fourier Transform (FFT) technique. Convolution is a mathematical operation used in signal processing to combine two signals, producing a third signal representing how the shape of one signal is modified by the other.
The fastconvolution function takes the following input parameters:
1. float[] signal: This is an array of real numbers representing the input signal that will be convolved with the response function. The elements are numbered from 0 to SignalLen-1.
2. int signallen: This is an integer representing the length of the input signal array. It specifies the number of elements in the signal array.
3. float[] response: This is an array of real numbers representing the response function used for convolution. The response function consists of two parts: one corresponding to positive argument values and the other to negative argument values. Array elements with numbers from 0 to NegativeLen match the response values at points from -NegativeLen to 0, respectively. Array elements with numbers from NegativeLen+1 to NegativeLen+PositiveLen correspond to the response values in points from 1 to PositiveLen, respectively.
4. int negativelen: This is an integer representing the "negative length" of the response function. It indicates the number of elements in the response function array that correspond to negative argument values. Outside the range [-NegativeLen, PositiveLen], the response function is considered zero.
5. int positivelen: This is an integer representing the "positive length" of the response function. It indicates the number of elements in the response function array that correspond to positive argument values. Similar to negativelen, outside the range [-NegativeLen, PositiveLen], the response function is considered zero.
The function works by:
1. Calculating the length nl of the arrays used for FFT, ensuring it's a power of 2 and large enough to hold the signal and response.
2. Creating two new arrays, a1 and a2, of length nl and initializing them with the input signal and response function, respectively.
3. Applying the forward FFT (realfastfouriertransform) to both arrays, a1 and a2.
4. Performing element-wise multiplication of the FFT results in the frequency domain.
5. Applying the inverse FFT (realfastfouriertransform) to the multiplied results in a1.
6. Updating the original signal array with the convolution result, which is stored in the a1 array.
The result of the convolution is stored in the input signal array at the function exit.
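A hypothetical usage sketch of fastconvolution, smoothing recent closes with a 5-tap box kernel (the import path and version are assumptions):

//@version=5
indicator("FFT convolution sketch", max_bars_back=256)
// The import path/version is an assumption; adjust it to the published library.
import loxx/loxxfft/1 as fft

int n = 128  // number of signal samples

if barstate.islast
    // Signal: the last n closes, oldest first.
    sig = array.new_float(n, 0.0)
    for i = 0 to n - 1
        array.set(sig, i, close[n - 1 - i])
    // Response: 5 equal weights at argument points 0..4 (NegativeLen = 0,
    // PositiveLen = 4), i.e. a causal moving-average kernel.
    resp = array.new_float(5, 0.2)
    fft.fastconvolution(sig, n, resp, 0, 4)
    // sig now holds the smoothed signal in-place.
    label.new(bar_index, high, "last smoothed value: " + str.tostring(array.get(sig, n - 1)))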
Fast Correlation
This code defines a function called fastcorrelation that computes the correlation between a signal and a pattern using the Fast Fourier Transform (FFT) method. The function takes four input arguments and modifies the input signal array to store the correlation values.
Input arguments:
1. float[] signal: This is an array of real numbers representing the signal to be correlated with the pattern. The elements are numbered from 0 to SignalLen-1.
2. int signallen: This is an integer representing the length of the input signal array.
3. float[] pattern: This is an array of real numbers representing the pattern to be correlated with the signal. The elements are numbered from 0 to PatternLen-1.
4. int patternlen: This is an integer representing the length of the pattern array.
The function performs the following steps:
1. Calculate the required size nl for the FFT by finding the smallest power of 2 that is greater than or equal to the sum of the lengths of the signal and the pattern.
2. Create two new arrays a1 and a2 with the length nl and initialize them to 0.
3. Copy the signal array into a1 and pad it with zeros up to the length nl.
4. Copy the pattern array into a2 and pad it with zeros up to the length nl.
5. Compute the FFT of both a1 and a2.
6. Perform element-wise multiplication of the frequency-domain representation of a1 and the complex conjugate of the frequency-domain representation of a2.
7. Compute the inverse FFT of the result obtained in step 6.
8. Store the resulting correlation values in the original signal array.
At the end of the function, the signal array contains the correlation values at points from 0 to SignalLen-1.
Fast Fourier Transform of Two Real Functions
This code defines a function called tworealffts that computes the Fast Fourier Transform (FFT) of two real-valued functions (a1 and a2) using a Cooley-Tukey-based radix-2 Decimation in Time (DIT) algorithm. The FFT is a widely used algorithm for computing the discrete Fourier transform (DFT) and its inverse.
Input parameters:
1. float[] a1: an array of real numbers representing the values of the first function.
2. float[] a2: an array of real numbers representing the values of the second function.
3. float[] a: an output array to store the Fourier transform of the first function.
4. float[] b: an output array to store the Fourier transform of the second function.
5. int tn: an integer representing the number of function values. It must be a power of two, but the algorithm doesn't validate this condition.
The function performs the following steps:
1. Combine the two input arrays, a1 and a2, into a single array a by interleaving their elements.
2. Perform a 1D FFT on the combined array a using the radix-2 DIT algorithm.
3. Separate the FFT results of the two input functions from the combined array a and store them in output arrays a and b.
Here is a detailed breakdown of the radix-2 DIT algorithm used in this code:
1. Bit-reverse the order of the elements in the combined array a.
2. Initialize the loop variables mmax, istep, and theta.
3. Enter the main loop that iterates through different stages of the FFT.
a. Compute the sine and cosine values for the current stage using the theta variable.
b. Initialize the loop variables wr and wi for the current stage.
c. Enter the inner loop that iterates through the butterfly operations within each stage.
i. Perform the butterfly operation on the elements of array a.
ii. Update the loop variables wr and wi for the next butterfly operation.
d. Update the loop variables mmax, istep, and theta for the next stage.
4. Separate the FFT results of the two input functions from the combined array a and store them in output arrays a and b.
At the end of the function, the a and b arrays will contain the Fourier transform of the first and second functions, respectively. Note that the function overwrites the input arrays a and b.
█ Example scripts using functions contained in loxxfft
Real-Fast Fourier Transform of Price w/ Linear Regression
Real-Fast Fourier Transform of Price Oscillator
Normalized, Variety, Fast Fourier Transform Explorer
Variety RSI of Fast Discrete Cosine Transform
STD-Stepped Fast Cosine Transform Moving Average