$SFRX CEO Update: Juno Beach Progress and Tech Breakthroughs
December 9, 2024
This CEO Update shares exciting news on advancements at Juno Beach, offshore discoveries, and breakthroughs in SeaSearcher technology.
Dear Shareholders,
While adverse weather conditions posed significant challenges this summer, our team has continued to make steady advancements in exploration and artifact recovery, reinforcing the long-term potential of our work.
At the Juno Beach archaeological site, we have made significant strides in mapping a prominent area of debris concentration using transect mapping. This has enabled us to locate new areas containing large ballast stones and a substantial wooden element believed to be part of the ship’s stern. Our recovery efforts have been fruitful, adding dozens of artifacts to our collection, which now exceeds 1,000 items, excluding lead sheathing. We have established a 300’x300’ grid, allowing us to focus our SeaSearcher scans on a refined area where debris is concentrated within a 100’ north-south by 250’ east-west zone. A harpoon recovered during this phase may also be associated with this wreck. These findings are supported by detailed documentation, including hundreds of pages of reports and hours of video records of our archaeological processes.
At Melbourne Beach, activity has been limited by weather, but we completed test dives and laid the groundwork for a grid system over two target areas, the Ring Site and HTQ. This will guide SeaSearcher scans to evaluate the potential for continued exploration.
At Cape Canaveral, a historically significant area known for colonial-era shipwrecks, magnetometer scans and test dives have yielded promising data, and we are preparing to bring in the SeaSearcher as soon as conditions allow.
Offshore, we have identified a promising area with evidence pointing to the possible presence of two or three vessels from the 1715 fleet. Initial dives uncovered a colonial-era anchor, further validating this area as a strong prospect for recovery operations.
The SeaSearcher technology continues to advance, with improvements in metal discrimination allowing for greater confidence in distinguishing ferrous, non-ferrous, and even precious metals. The platform’s stability has been a key factor in its routine deployment, and our second-generation metal discriminator has shown improved sensitivity and reduced noise. Progress toward a handheld unit also continues, promising further flexibility in our recovery operations.
Our archives and historical research team has also been making significant strides, particularly in analyzing records from Seville, Spain. These documents are beginning to yield useful insights into additional shipwrecks, and our new cataloging method has improved our ability to extract and organize data efficiently.
While weather has been a limiting factor, particularly as we transition from search to recovery operations, we remain committed to maintaining the highest levels of safety and precision. The rough seas of summer have presented challenges for both diver safety and accurate positioning of recovery grids, but we are well-prepared to capitalize on more favorable conditions as they arise.
We continue to make meaningful progress toward our mission of uncovering and preserving historical shipwrecks, and the work completed this year lays a strong foundation for the year ahead. Thank you for your continued support as we pursue opportunities to create long-term value through our exploration efforts.
Sincerely,
Kyle Kennedy
CEO, Seafarer Exploration
Can Computation Transcend Its Own Limits?
In the vast, unexplored terrain of technological innovation, D-Wave Quantum Inc. emerges as a pioneering navigator, challenging the fundamental constraints of computational science. Their groundbreaking 4,400+ qubit Advantage2™ processor represents more than a technological milestone—it is a quantum leap that promises to redefine the very boundaries of problem-solving across complex domains like materials science, artificial intelligence, and optimization.
The true marvel of this quantum revolution lies not merely in processing speed but in a fundamental reimagining of computational potential. Where classical computers navigate problems sequentially, quantum computing exploits the bizarre, counterintuitive properties of quantum mechanics—enabling simultaneous multiple-state calculations that can solve intricate challenges up to 25,000 times faster than traditional systems. This isn't incremental improvement; it's a paradigm shift that transforms computational impossibility into potential reality.
Backed by visionary investors like Jeff Bezos and strategic partners including NASA and Google, D-Wave is not simply developing a technology—it is architecting the future's computational infrastructure. By doubling qubit coherence time, increasing energy scale, and expanding quantum connectivity, the company is methodically dismantling the barriers that have historically confined computational thinking. Each breakthrough represents a portal to unexplored intellectual territories, where problems once deemed unsolvable become navigable landscapes of potential insight.
The quantum frontier beckons not just as a technological challenge, but as an intellectual invitation—a profound question of how far human knowledge can stretch when we liberate ourselves from conventional computational thinking. D-Wave's Advantage2 processor is more than a machine; it is a testament to human imagination, a bridge between what is known and what remains tantalizingly unexplored.
Can a Tech Giant Rewrite Its Future While Racing Against Time?
In a remarkable display of corporate resilience, Super Micro Computer stands at the intersection of crisis and opportunity, navigating regulatory challenges while simultaneously revolutionizing the AI infrastructure landscape. As the company addresses its Nasdaq compliance requirements through comprehensive reforms, including the strategic appointment of BDO USA as its new independent auditor, it hasn't missed a beat in its technological innovation trajectory - a feat that has left critics and supporters watching intently.
The numbers tell a compelling story of growth amidst adversity: a staggering 110% revenue surge to $15 billion in FY2024, coupled with a nearly 90% increase in adjusted earnings. But, perhaps more impressive is Supermicro's technical leadership, maintaining an 18-24 month advantage over competitors in liquid-cooled AI rack technology and demonstrating the capability to deploy 100,000-GPU liquid-cooled AI data centers. This technical prowess, combined with strategic partnerships with industry giants like NVIDIA, positions Supermicro at the forefront of the AI infrastructure revolution.
Looking ahead, Supermicro's journey represents more than just a corporate turnaround story - it's a masterclass in organizational agility and strategic focus. While many companies might have faltered under the weight of regulatory scrutiny, Supermicro has instead used this moment as a catalyst for transformation, strengthening its corporate governance while accelerating its innovation pipeline. With analyst projections indicating 40%+ earnings growth for FY2025 and revenue expected to surge over 70%, the company's trajectory suggests that sometimes, the most significant opportunities for growth emerge from the crucible of challenge.
Can a Tech Giant Redefine the Future of Enterprise Computing?
In an era where technology companies rise and fall with stunning rapidity, Dell Technologies has orchestrated a remarkable transformation that challenges conventional wisdom about legacy tech companies. The company's strategic positioning in the hybrid cloud market, coupled with recent market disruptions affecting competitors like Super Micro Computer, has created an unprecedented opportunity for Dell to reshape the enterprise computing landscape.
Dell's masterful execution of its hybrid cloud strategy, particularly through its groundbreaking partnership with Nutanix, demonstrates the power of strategic evolution. The integration of PowerFlex software-defined storage and the introduction of the XC Plus appliance represent more than mere product innovations—they exemplify a deeper understanding of how enterprise computing needs are fundamentally changing. This transformation is particularly evident in regions like Saudi Arabia, where Dell's two-decade presence has evolved into a catalyst for technological advancement and digital transformation.
The financial markets have begun to recognize this shifting dynamic, as reflected in Dell's impressive 38% year-over-year growth in infrastructure solutions revenue. However, the true significance lies not in the numbers alone, but in what they represent: a traditional hardware company successfully pivoting to meet the complex demands of the AI era while maintaining its core strengths in enterprise computing. For investors and industry observers alike, Dell's journey presents a compelling case study in how established tech giants can not only survive but thrive in an era of rapid technological change.
Is AI Just Hype?
In the whirlwind of AI's rapid ascent, a critical question emerges: Is the hype surrounding AI justified, or are we witnessing a bubble fueled by inflated valuations and limited innovation? Let's delve deep into the AI industry, separating the signal from the noise and providing a sobering reality check.
The Super Micro Cautionary Tale
The financial woes of Super Micro Computer serve as a stark warning. Despite the soaring demand for AI hardware, the company's internal challenges highlight the risks of investing on market enthusiasm alone. This case underscores the importance of **corporate transparency** and **due diligence** in the face of AI's allure.
A Landscape of Contrasts
The broader AI landscape is a tapestry of contrasting narratives. While pioneers like DeepMind and Tesla are pushing the boundaries of AI applications, a multitude of companies are capitalizing on the hype with products lacking substance. This proliferation of **AI hype** has created a toxic environment characterized by inflated valuations and a lack of substantive innovation.
Market Dynamics and Future Prospects
As the market for AI hardware matures, saturation and potential price drops loom. NVIDIA's dominance may be challenged by competitors, reshaping the industry landscape. The future of AI, however, lies in the development of more sophisticated systems capable of collaboration and learning. The integration of **quantum computing** could revolutionize AI, unlocking solutions to complex problems that are currently beyond our reach.
Conclusion
The AI industry is a complex landscape, filled with both promise and peril. While the hype surrounding AI may be tempting, it's imperative to scrutinize each company's core innovation and value. As the market matures and competition intensifies, those who can deliver **real value** and **technological advancements** will ultimately prevail. The Super Micro case serves as a stark reminder that in the realm of AI, substance, not hype, is the true currency of success.
Can AI Revolutionize Healthcare?
The convergence of artificial intelligence (AI) and healthcare is ushering in a new era of medical innovation. As AI models continue to evolve, their potential to revolutionize patient care becomes increasingly evident. Google's Med-Gemini, a family of AI models specifically tailored for medical applications, represents a significant leap forward in this direction.
Med-Gemini's advanced capabilities, including its ability to process complex medical data, reason effectively, and understand long-form text, have the potential to transform various aspects of healthcare. From generating radiology reports to analyzing pathology slides and predicting disease risk, Med-Gemini's applications are vast and far-reaching.
However, the integration of AI into healthcare raises important ethical considerations. As AI models become more sophisticated, it is crucial to address concerns related to bias, privacy, and the potential for job displacement. A balanced approach that emphasizes human-AI collaboration is essential to ensure that AI is used to augment rather than replace human expertise.
The future of healthcare is undoubtedly intertwined with the advancement of AI. By harnessing the power of AI, we can unlock new possibilities for improving patient outcomes, enhancing medical research, and revolutionizing the way we deliver healthcare. As we continue to explore the potential of AI in medicine, it is imperative to approach this journey with a sense of both excitement and responsibility.
Why Large Language Models Struggle with Financial Analysis
Large language models have revolutionized areas built on text generation, analysis, and interpretation. They excel at working through large volumes of textual data and drawing logical, interesting inferences from it. But when these models are tasked with analyzing numerical data, or the more complex mathematical relationships that are unavoidable in financial analysis, obvious limitations start to appear.
Let's break it down in simpler terms.
The Problem with Math and Numerical Data
Imagine a very complicated mathematical formula with hundreds of variables. If you asked ChatGPT to solve it, what it would do is not really a calculation in the truest sense; it would be an educated guess based on the patterns it learned during training.
After reading through several thousand tokens, it might predict that the most probable digit after the equals sign is 4, based on statistical likelihood, not on any serious mathematical reasoning. This is, in short, a consequence of the fact that LLMs are built to predict patterns in language rather than to solve equations or reason logically through problems. To put it another way, consider the difference between an English major and a math major: the English major can read and understand text very well, but hand him a complicated derivative problem and he is likely to make an educated guess and check it with a numerical solver rather than actually solve it step by step.
That is precisely how ChatGPT and similar models tackle a math problem. They simply have not had the underlying training to reason through numbers the way a mathematics major would.
Applying This to Financial Analysis
Okay, so why does this matter for financial analysis? Suppose you were analyzing a stock's performance based on two major data sets: 1) a corpus of tweets about the company and 2) the stock's price movements. ChatGPT would be great at running sentiment analysis on the tweets.
It can scan through thousands of tweets and produce a sentiment score telling you whether public opinion about the company is positive, negative, or neutral. Text understanding is one of the core strengths of LLMs, so this task is well within their reach.
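To make the division of labor concrete, here is a minimal sketch of how tweet sentiment might be scored with an LLM. It assumes the OpenAI Python client with an API key in the environment; the model name, prompt, and the -2 to +2 scale are illustrative choices, not part of any particular workflow.

```python
# Minimal sketch: scoring tweet sentiment with an LLM.
# Assumes the OpenAI Python client (pip install openai) and an API key in
# the OPENAI_API_KEY environment variable; model name is illustrative.
from openai import OpenAI

client = OpenAI()

def sentiment_score(tweet: str) -> int:
    """Return a sentiment score from -2 (very negative) to +2 (very positive)."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any chat-capable model would do
        messages=[
            {"role": "system",
             "content": "Rate the sentiment of the tweet toward the company "
                        "on an integer scale from -2 to 2. Reply with the number only."},
            {"role": "user", "content": tweet},
        ],
    )
    # Assumes the model complies with the numeric-only instruction.
    return int(response.choices[0].message.content.strip())

tweets = ["Loving the new product line!", "Management has no idea what it's doing."]
print(sum(sentiment_score(t) for t in tweets) / len(tweets))  # average sentiment
```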
It gets more challenging when you want the model to make a decision based on numerical data. For example, you might ask, "Given these sentiment scores across tweets and additional data on stock prices, should I buy or sell the stock right now?" This is where ChatGPT lets you down. Interpreting raw numbers, such as price data or correlations between sentiment scores and prices, simply is not what LLMs were built for.
In this case, ChatGPT cannot reliably estimate the relationship between the sentiment scores and the prices. If it guesses, the answer could be entirely random. Such an unreliable prediction is not just unhelpful but actively dangerous, because in financial markets real monetary decisions may ride on it.
Why Correlation and Causation Are Problematic for LLMs
Beyond raw math, much of financial analysis is about figuring out how one set of data relates to another, say, market sentiment versus stock prices. But if A and B move together, that does not automatically mean that A causes B, because correlation is not causation. Determining causality requires a kind of logical reasoning that LLMs are not capable of.
One recent paper asked whether LLMs can separate causation from correlation. The researchers built a dataset of 400,000 samples with known causal relationships injected into it and tested 17 pre-trained language models, including ChatGPT, on whether they could determine what is cause and what is effect. The results were striking: the LLMs performed close to random at inferring causation, meaning they often could not distinguish mere correlation from true cause-and-effect relationships. Translated back into our stock market example, the problem becomes clear. If sentiment toward a stock is bullish and the stock's price goes up, an LLM simply cannot tell what, if anything, the two have to do with each other, let alone whether the stock will keep rising. Its advice to buy or sell would be no better than a coin flip.
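To see why this is hard, consider a small worked example with synthetic data (purely illustrative): a hidden "market mood" factor drives both tweet sentiment and price moves, so the two observed series correlate strongly even though neither causes the other. Nothing in the observed pair reveals that structure.

```python
# Illustrative only: a hidden confounder makes two series correlate
# strongly even though neither causes the other.
import numpy as np

rng = np.random.default_rng(42)
n = 1000

market_mood = rng.normal(size=n)                     # hidden confounder
sentiment = market_mood + 0.5 * rng.normal(size=n)   # driven by mood
price_move = market_mood + 0.5 * rng.normal(size=n)  # also driven by mood

r = np.corrcoef(sentiment, price_move)[0, 1]
print(f"correlation: {r:.2f}")  # roughly 0.8

# The correlation is strong, yet intervening on sentiment (say, posting
# fake bullish tweets) would not move the price: the association runs
# entirely through market_mood. The causal structure has to come from
# outside the data, which is exactly what pattern-matching LLMs lack.
```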
Will Fine-Tuning Be the Answer?
Fine-tuning might look like a way out. Retraining a model on a specific dataset can make it better at handling that kind of data; a model fine-tuned on paired tweet sentiment and stock prices should, in principle, learn the relationship between those two features.
However, there's a catch.
As the same research shows, however, the improvement holds only for data similar to what the model was trained on. Faced with completely new data, such as different sentiment sources or changed market conditions, the model's performance drops off.
In other words, even fine-tuned models do not generalize: they can work with data they have already seen, but they cannot adapt to new or evolving datasets.
Plug-ins and External Tools: One Potential Answer
Integrating LLMs with domain-specific tooling is one way to overcome this weakness. This is much like the way ChatGPT now integrates Wolfram Alpha for math problems: since ChatGPT cannot solve the math itself, it forwards the problem to Wolfram Alpha, a system built exclusively for complex calculations, and then relays the answer back to the user.
The same approach could be replicated for financial analysis: once the LLM recognizes that it is working with numerical data, or that it needs to infer causality, it can hand the problem off to models or algorithms developed for those particular tasks. Once those analyses are done, the LLM can synthesize the results and provide an enhanced recommendation or insight. Such a hybrid approach, combining LLMs with specialized analytical tools, holds the key to better performance in financial decision-making contexts.

What does this mean for financial analysts and traders? If you plan to use ChatGPT or other LLMs in your analysis workflow, these limitations should not be ignored. Powerful as the models are for sentiment analysis, news analysis, and other textual work, they should not be relied on for numerical analysis or for correlation and causality inference, at least not without additional tools or techniques. If you want to build quantitative analysis or trading strategies around LLMs, be prepared for plenty of fine-tuning and for integrating third-party tools that can handle numerical data and more sophisticated logical reasoning. That said, one of the most exciting prospects is that as research continues to sharpen LLMs' facility with numbers, causality, and correlation, their robust use within financial analysis may well improve.
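As a rough sketch of this hybrid pattern (hypothetical routing logic, not any particular plug-in framework), the idea is to let the LLM handle text while a conventional statistical routine handles the numbers:

```python
# Hypothetical sketch of the hybrid LLM-plus-tools pattern: text goes to
# the LLM, numerical questions go to a conventional statistical routine.
import numpy as np

def correlation_tool(sentiment: list[float], returns: list[float]) -> str:
    """Specialized numeric tool: a plain Pearson correlation, not an LLM guess."""
    r = np.corrcoef(sentiment, returns)[0, 1]
    return f"Pearson correlation between sentiment and returns: {r:.2f}"

def llm_summarize(text: str) -> str:
    """Placeholder for an LLM call (see the sentiment sketch above)."""
    return f"[LLM summary of: {text}]"

def answer(question: str, sentiment: list[float], returns: list[float]) -> str:
    # Naive keyword router; real systems use function calling or an
    # agent framework for this dispatch step.
    if any(k in question.lower() for k in ("correlat", "price", "return")):
        numeric_result = correlation_tool(sentiment, returns)
        return llm_summarize(numeric_result)  # LLM synthesizes the tool output
    return llm_summarize(question)

print(answer("How correlated are sentiment and returns?",
             [0.2, -0.1, 0.5, 0.3], [0.01, -0.02, 0.03, 0.02]))
```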
CAT, BUT NOT THE MEOW KIND, MORE THE PLANET KILLIN' KIND
At some point climate might matter with this stock, idk.
Are they doing EV?
I haven't done much fundamental research.
Trends marked. All support. All end up taking the price in a bearish direction.
Meaning we may have seen the top, or will see the top on earnings.
There is a potential scenario where it breaks out on the underside of a trend and moves to 400, but that's the least likely of the possibilities.
More likely, we break down into that 250 range and maybe even 150.
Everything is pretty clearly marked.
As trends break, they will likely become rejection trends.
Sell targets in blue
Buy targets in red
SOL Bearish Continuation According to Deep Learning
This post is a continuation of my ongoing efforts to fine-tune a predictive algorithm based on deep learning methods; I am recording the results as ideas for future reference.
Brief Background:
This algorithm is based on a custom CNN-LSTM implementation I have developed for multivariate financial time series forecasting using the PyTorch framework in Python. If you are familiar with some of my indicators, the features I'm using are similar to the ones in the Lorentzian Distance Classifier script that I published recently, except they are normalized and filtered in a slightly different way. The most critical features I've found are WT3D, CCI, ADX, and RSI.
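For readers who want a concrete picture, here is a stripped-down PyTorch sketch of a CNN-LSTM of this kind. The layer sizes and the four input features are placeholders standing in for the private implementation described above, which is not published.

```python
# Stripped-down CNN-LSTM for multivariate time series forecasting.
# Layer sizes and the 4 input features (standing in for WT3D, CCI, ADX,
# RSI) are placeholders; the actual implementation is not public.
import torch
import torch.nn as nn

class CNNLSTM(nn.Module):
    def __init__(self, n_features: int = 4, hidden: int = 64, horizon: int = 1):
        super().__init__()
        # The 1-D convolution extracts local patterns along the time axis.
        self.conv = nn.Sequential(
            nn.Conv1d(n_features, 32, kernel_size=3, padding=1),
            nn.ReLU(),
        )
        # The LSTM models longer-range temporal dependencies.
        self.lstm = nn.LSTM(input_size=32, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, horizon)  # predicted future value(s)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, n_features)
        z = self.conv(x.transpose(1, 2))       # -> (batch, 32, seq_len)
        out, _ = self.lstm(z.transpose(1, 2))  # -> (batch, seq_len, hidden)
        return self.head(out[:, -1])           # predict from the last time step

model = CNNLSTM()
dummy = torch.randn(8, 120, 4)  # 8 windows of 120 bars, 4 features each
print(model(dummy).shape)       # torch.Size([8, 1])
```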
The previous post in this series:
As always, it is important to keep in perspective that while these predictions have the potential to be helpful, they are not guaranteed, and the cryptocurrency market, in particular, can be highly volatile. This post is not financial advice, and as with any investment decision, conducting thorough research and analysis is essential before entering a position. As in the case of any ML-based technique, it is most useful when used as a source of confluence for traditional TA.
Notes:
- Remember that the CCI release is tomorrow and that this model does not account for the additional volatility from that particular event.
- The new DTW (Dynamic Time Warping) metric is an experimental feature geared towards assessing how reliable the model's prediction is. The closer this number is to 0, the more accurate the prediction; a minimal sketch of the metric follows below.
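For the curious, here is a minimal dynamic-programming implementation of the DTW distance between a predicted and a realized path. It is a sketch of the standard algorithm, not the exact metric the model reports.

```python
# Minimal DTW (Dynamic Time Warping) distance between a predicted and a
# realized price path; a standard textbook sketch, not the model's exact metric.
import numpy as np

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            # Extend the cheapest of the three allowed alignment moves.
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return float(cost[n, m])

predicted = np.array([20.0, 20.5, 21.2, 21.0, 20.7])
realized = np.array([20.1, 20.4, 21.0, 21.1, 20.6])
print(dtw_distance(predicted, realized))  # closer to 0 = closer match
```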
SOL Next Leg according to Deep Learning
This post is a continuation of my ongoing efforts to fine-tune a predictive algorithm based on deep learning methods.
Last post in this series:
Previously, the algorithm correctly projected SOL's breakout to the upside following its consolidation around the $16 mark.
As a next leg, the algorithm predicts that a noticeable continuation to the upside is likely in the coming days, and I am posting this prediction here for future reference.
As always, it is important to keep in perspective that while these predictions have the potential to be helpful, they are not guaranteed, and the cryptocurrency market, in particular, can be highly volatile. This post is not financial advice, and as with any investment decision, conducting thorough research and analysis is essential before entering a position.
SOL Breakout according to Deep Learning
A deep learning algorithm that I am currently working on predicts that the price of SOL (Solana) will experience a breakout to the upside in the coming days. I am posting this prediction to have it recorded for future reference.
Deep learning algorithms are a type of Machine Learning algorithm designed to learn and improve their performance over time through training on large datasets. In the case of predicting the price of SOL, the algorithm has analyzed historical feature data, which I have spent a considerable amount of time selecting/wrangling. Using this data, the algorithm has identified patterns/trends that suggest an upward breakout is likely to occur, as shown in the included screenshot.
It is worth noting that while these predictions can be helpful, they are not guaranteed, and the cryptocurrency market, in particular, is highly volatile. As with any investment, conducting thorough research and traditional technical analysis is critical before opening a position.
AI's Broadening Wedge, Bearish Target
Despite all the up spikes, it's not out of the trap.
Wait for the bearish response.
Technical indicators support: Relative Strength Index (RSI, bearish divergences)
AI painted the chart using TradingView's native charting tools.
Analysis: we used the Google ML "Firebase" Toolkit and OXYBITS Space Invariant Artificial Neural Networks.
100% bots, zero humans. DYOR before investing.
BTCUSDT Support/resistance levels, Fri Feb 25, 2022, Bigdata
BTC is in an uptrend after yesterday's dip. It has strong support in the range 36867.36 – 38244.38 USDT.
There is a 75% chance of a return to 37615.65 USDT and a 93% chance of reaching the level 38862.59 USDT.
Current support/resistance levels:
– 34952.33 USDT
– 35680.78 USDT
– 36867.36 USDT
– 37615.65 USDT
– 38244.38 USDT
– 38862.59 USDT
* Calculation is based on 23.72M trades; a sketch of one way such levels can be derived follows below.
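The post does not disclose how the "Bigdata" levels are computed, but one plausible approach (a hedged sketch, purely an illustration) is to histogram the traded prices by volume and read the local peaks as support/resistance:

```python
# Hedged sketch: estimating support/resistance levels from a large batch of
# trades by finding local peaks in the volume-weighted price histogram.
# The post's actual "Bigdata" method is not disclosed.
import numpy as np

def sr_levels(prices, volumes, bins=200, n_levels=6):
    hist, edges = np.histogram(prices, bins=bins, weights=volumes)
    centers = (edges[:-1] + edges[1:]) / 2
    # A bin is a local peak if it beats both of its neighbors.
    peaks = [i for i in range(1, bins - 1)
             if hist[i] > hist[i - 1] and hist[i] > hist[i + 1]]
    # Keep the n_levels highest-volume peaks, sorted by price.
    top = sorted(peaks, key=lambda i: hist[i], reverse=True)[:n_levels]
    return sorted(round(float(centers[i]), 2) for i in top)

# Synthetic example: trades clustered around a few price magnets.
rng = np.random.default_rng(0)
prices = np.concatenate([rng.normal(mu, 150, 50_000)
                         for mu in (36900, 37600, 38250, 38850)])
volumes = rng.exponential(0.5, len(prices))
print(sr_levels(prices, volumes))  # levels near the four cluster centers
```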
BTCUSDT Support/resistance levels, Thu Feb 24, 2022, Bigdata
BTC is in a steep downtrend as Russia invades Ukraine.
There is only a 50% chance to return to the level 36886.14 USDT
No to war!
Current support/resistance levels:
– 35128.0 USDT
– 36886.14 USDT
– 37599.47 USDT
– 38191.63 USDT
– 38866.38 USDT
– 39894.71 USDT
* Calculation is based on 21.21M trades
BTCUSDT Support/resistance levels, Wed Feb 23, 2022, Bigdata
BTC is in a neutral position now; there is about an 87% chance of reaching the level 39851.52 USDT and an 81% probability of reaching 40269.13 USDT. Selling is outpacing buying.
Current support/resistance levels:
– 36902.88 USDT
– 37609.52 USDT
– 38190.32 USDT
– 38890.92 USDT
– 39851.52 USDT
– 40269.13 USDT
* Calculation is based on 18.33M trades
BTCUSDT Support/resistance levels, Tue Feb 22, 2022, Bigdata
BTC touched its lowest point; there is about an 80% chance of reaching the 38156.63 USDT level and a 58% chance of reaching 38918.78 USDT.
Current support/resistance levels:
– 37147.52 USDT
– 38156.63 USDT
– 38918.78 USDT
– 39989.73 USDT
– 40707.3 USDT
– 42207.45 USDT
* Calculation is based on 18.45M trades
BTCUSDT Support/resistance levels, Mon Feb 21, 2022, Bigdata
BTC is in a downtrend and selling is outpacing buying. There is a 75% chance of a return to the level 38297.8 USDT and around a 70% chance of reaching 39993.54 USDT.
Current support/resistance levels:
– 38297.8 USDT
– 39035.79 USDT
– 39993.54 USDT
– 40698.92 USDT
– 42072.23 USDT
– 43714.28 USDT
* Calculation is based on 15.25M trades
BTCUSDT Support/resistance levels, Sun Feb 20, 2022, Bigdata
BTC is in a steep downtrend. There is about a 30% probability of reaching the level 39974.26 USDT.
Current support/resistance levels:
– 38676.83 USDT
– 39974.26 USDT
– 40688.08 USDT
– 42052.63 USDT
– 43435.36 USDT
– 44096.91 USDT
* Calculation is based on 15M trades
BTCUSDT Support/resistance levels, Fri Feb 18, 2022, Bigdata
BTC broke the last support/resistance level (see related idea) and moved to the dip. Statistically, that's the best point to enter a long position and capture a high reward.
Current support/resistance levels:
– 40551.41 USDT
– 41050.17 USDT
– 42005.0 USDT
– 42503.3 USDT
– 43483.93 USDT
– 44096.24 USDT
* Calculation is based on 14.67M trades