
Forecasting future price levels in financial markets isn’t about predicting a single outcome; a point forecast carries too much error.
Instead, we want to understand the distribution of possible future outcomes. This article outlines a method to do that in a parametric way.
Here, we estimate future return distributions using rolling statistical moments: mean, volatility, skewness, and kurtosis.
We then feed those into a Normal Inverse Gaussian (NIG) distribution, which lets us generate realistic, fat-tailed, forward-looking price bands.
The complete Python notebook for the analysis is provided below.

1. Modeling NIG Forward Return Distributions
Financial returns are rarely symmetric and do not move in straight lines. They spike, cluster, and skew.
The model presented here is built on precisely that structure: the shape of recent return distributions, captured through rolling statistics.
We then project those statistical characteristics forward using a fat-tailed distribution.
1.1. Log Returns at the Base
We begin with log returns:

r_t = \ln\left(\frac{P_t}{P_{t-1}}\right)

where P_t is the closing price at time t.
This ensures additive behavior over time horizons and simplifies the treatment of compounded returns.
1.2. Rolling Moment Estimation
From the rolling window, we estimate four key statistics:
Mean μ
Standard deviation σ
Skewness γ
Kurtosis κ
Formally, over a window of n returns:

\mu = \frac{1}{n}\sum_{i=1}^{n} r_i, \qquad
\sigma = \sqrt{\frac{1}{n}\sum_{i=1}^{n}(r_i-\mu)^2}, \qquad
\gamma = \frac{1}{n\,\sigma^3}\sum_{i=1}^{n}(r_i-\mu)^3, \qquad
\kappa = \frac{1}{n\,\sigma^4}\sum_{i=1}^{n}(r_i-\mu)^4
These higher-order moments capture asymmetry and fat tails, traits that are essential for describing real market returns.
1.3. Parameterizing the NIG Distribution
To approximate the return distribution, we use the Normal Inverse Gaussian (NIG) distribution.
Formally, the NIG PDF for a random variable x is:

f(x;\alpha,\beta,\delta,\mu) = \frac{\alpha\,\delta\,K_1\!\left(\alpha\sqrt{\delta^2+(x-\mu)^2}\right)}{\pi\sqrt{\delta^2+(x-\mu)^2}}\; e^{\,\delta\sqrt{\alpha^2-\beta^2}+\beta(x-\mu)}

where:
α > 0 → tail heaviness (larger = thinner tails)
β ∈ (−α,α) → skewness (positive = right-skewed)
δ > 0 → scale (volatility-like term)
μ ∈ ℝ → location (mean- or median-like center)
K₁ → modified Bessel function of the second kind, order 1 (common in heavy-tailed models)
The formula may look complex, but its purpose is simple: it captures heavy tails and skewness, and, through the rolling window, partially reflects volatility clustering, all without simulation or complex fitting.
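As a quick cross-check (our addition, not part of the original pipeline), SciPy ships a reference NIG implementation, scipy.stats.norminvgauss, parameterized with a = αδ and b = βδ. A minimal sketch with illustrative, assumed parameter values:

import numpy as np
from scipy.stats import norminvgauss

# Illustrative (assumed) parameter values, not fitted to any data
alpha, beta, delta, mu = 5.0, -0.5, 0.02, 0.001
x = np.linspace(-0.10, 0.10, 5)
# SciPy's parameterization: a = alpha * delta, b = beta * delta, loc = mu, scale = delta
print(norminvgauss.pdf(x, alpha * delta, beta * delta, loc=mu, scale=delta))

This gives a reference density against which the analytic Bessel approximation used in Section 2.4 can be compared.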
1.4. Estimating Parameters from Rolling Moments
We then use rolling log-return windows (e.g. 100 days) to estimate the key moments: mean, standard deviation, skewness, and kurtosis.
These map heuristically into the NIG parameters as follows:

\hat{\alpha} = 0.75\,\kappa + 1, \qquad \hat{\beta} = 0.5\,\gamma, \qquad \hat{\delta} = \sigma, \qquad \hat{\mu} = \mu
These mappings are empirical approximations that link the intuitive moments of the data to the corresponding dimensions of the NIG.
The scale δ is set directly to the observed standard deviation, while the shape and skew parameters scale the observed kurtosis and skewness into NIG values. For example, σ = 0.03, γ = −0.4, and κ = 5 yield α̂ = 4.75, β̂ = −0.2, and δ̂ = 0.03.
1.5. Scaling by Forecast Horizon
For each forecast step h (measured in trading days), we scale the parameters:

\mu_h = h\,\hat{\mu}, \qquad \delta_h = \sqrt{h}\,\hat{\delta}
This assumes independent returns across time steps, so the location grows linearly with the horizon and the scale with its square root, consistent with standard stochastic modeling.
1.6. Converting to Price Probabilities
To translate the return distribution into price space, we:
1. Define a symmetric return range around the current price P_0 using the ATR:

R \in \left[-\frac{m\cdot\mathrm{ATR}}{P_0},\; +\frac{m\cdot\mathrm{ATR}}{P_0}\right], \quad \text{with multiplier } m \; (\texttt{ATR\_MULT})

2. Discretize this range into N equal-width bins (BIN_SIZE):

\Delta R = \frac{R_{\max}-R_{\min}}{N}

3. Compute the midpoint simple return R_j of each bin and convert to log-space:

r_j = \ln(1+R_j)

4. Evaluate the NIG PDF at each midpoint and normalize, so the bin probabilities sum to one:

p_j \propto f\!\left(r_j;\hat{\alpha},\hat{\beta},\delta_h,\mu_h\right)
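To make the four steps concrete, here is a minimal, self-contained sketch. The function and argument names are ours for illustration; it mirrors the logic implemented in Section 2.6 but substitutes SciPy's NIG for the hand-rolled PDF:

import numpy as np
from scipy.stats import norminvgauss

def price_bin_probs(price0, atr, alpha, beta, delta_h, mu_h, atr_mult=3.0, n_bins=19):
    # 1. Symmetric simple-return range around the current price, sized by ATR
    max_r = atr_mult * atr / price0
    edges = np.linspace(-max_r, max_r, n_bins + 1)  # 2. discretize into bins
    mids = 0.5 * (edges[:-1] + edges[1:])           # bin-midpoint simple returns
    log_mids = np.log1p(mids)                       # 3. convert to log-return space
    # 4. evaluate the NIG density at each midpoint and normalize to probabilities
    pdf = norminvgauss.pdf(log_mids, alpha * delta_h, beta * delta_h, loc=mu_h, scale=delta_h)
    probs = pdf / pdf.sum()
    prices = price0 * (1.0 + mids)                  # price level at each bin midpoint
    return prices, probs

# Example with the worked numbers from Section 1.4 (price 300, ATR 9; both assumed)
prices, probs = price_bin_probs(300.0, 9.0, alpha=4.75, beta=-0.2, delta_h=0.03, mu_h=0.001)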
2. Python Implementation
The process below is structured to match the methodology just outlined.
2.1. Parameters
We begin by specifying key inputs for the analysis.
These parameters control the modeling horizon, granularity, smoothing windows, and visualization scale.
Adjust as needed. Each parameter is explained as a comment in the snippet.
import numpy as np
import pandas as pd
import yfinance as yf
import matplotlib.pyplot as plt
from matplotlib.dates import AutoDateLocator, AutoDateFormatter
### PARAMETERS
TICKER = "TSLA"
START_DATE = "2020-01-01"
END_DATE = "2025-07-13"
INTERVAL = "1d" # Data granularity ("1d" = daily candles); used by yfinance
FREQUENCY = "monthly" # Time horizon unit for forward projections: "daily", "weekly", or "monthly"
NUM_HORIZONS = 5 # Number of forward steps; increasing adds longer-term projections
WINDOW_SIZE = 100 # Rolling window length; increasing smooths moment estimates, decreasing makes it more reactive
ATR_LENGTH = 14 # ATR window size; larger values reduce noise in range estimation
ATR_MULT = 3.0 # Range multiplier; higher values widen forecast bands, lower values narrow them
BIN_SIZE = 19 # Number of return bins; higher = finer granularity, lower = faster but coarser
BOX_WIDTH = 1.0 # Visual width of histogram bars; increase for wider boxes
MIN_ALPHA = 0.30 # Minimum opacity for low-probability bins; lower = more transparent tails
MAX_ALPHA = 0.90 # Maximum opacity for high-probability bins; higher = more emphasis on mode
PRICE_WINDOW_DAYS = 180 # Number of recent days shown in price chart; higher = longer. This is only for visual context
unit_map = {"daily": (1, "d"), "weekly": (5, "w"), "monthly": (21, "m")}
unit_days, unit_label = unit_map[FREQUENCY]
2.2. Data Retrieval and Log Returns
We download historical OHLC data using the yfinance library and compute daily log returns.
### LOG RETURNS AT THE BASE
df = yf.download(
    TICKER, start=START_DATE, end=END_DATE,
    interval=INTERVAL, auto_adjust=True, progress=False
)
# yfinance may return MultiIndex columns; flatten to plain OHLC names
if isinstance(df.columns, pd.MultiIndex):
    df.columns = df.columns.get_level_values(0)
df["ret"] = np.log(df["Close"] / df["Close"].shift(1))
rets = df["ret"].dropna()
2.3. Rolling Moment Estimation
We estimate the mean, standard deviation, skewness, and kurtosis from a rolling window of historical returns.
### ROLLING MOMENT ESTIMATION
mu_hat = rets.rolling(WINDOW_SIZE).mean().iloc[-1]
sigma_hat = rets.rolling(WINDOW_SIZE).std(ddof=0).iloc[-1]
m3 = (rets - mu_hat)**3
m4 = (rets - mu_hat)**4
skewness = m3.rolling(WINDOW_SIZE).mean().iloc[-1] / sigma_hat**3
kurtosis = m4.rolling(WINDOW_SIZE).mean().iloc[-1] / sigma_hat**4
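As an optional sanity check (our addition, not part of the original pipeline), pandas' rolling built-ins should land close to the manual estimates above. Note that rolling().skew() applies a small-sample bias correction and rolling().kurt() returns excess (Fisher) kurtosis, hence the +3 before comparing:

skew_pd = rets.rolling(WINDOW_SIZE).skew().iloc[-1]
kurt_pd = rets.rolling(WINDOW_SIZE).kurt().iloc[-1] + 3  # excess -> raw kurtosis
print(f"skewness: manual={skewness:.3f}, pandas={skew_pd:.3f}")
print(f"kurtosis: manual={kurtosis:.3f}, pandas={kurt_pd:.3f}")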
2.4. NIG PDF and Parameters
We define the NIG PDF using an asymptotic approximation of the modified Bessel function K₁.
The parameters are then mapped directly from the empirical moments:
### NIG DISTRIBUTION PDF
def nig_pdf(x, a, b, d, m):
    dx = np.sqrt(d**2 + (x - m)**2)
    z = a * dx
    k1 = np.sqrt(np.pi / (2 * z)) * np.exp(-z)  # asymptotic approximation of the Bessel K1
    return (a * d * k1 / (np.pi * dx)) * np.exp(d * np.sqrt(a**2 - b**2) + b * (x - m))

### PARAMETERIZING THE NIG DISTRIBUTION
alpha_hat = 0.75 * kurtosis + 1  # tail heaviness from rolling kurtosis
beta_hat = 0.5 * skewness        # asymmetry from rolling skewness
delta_hat = sigma_hat            # scale from rolling standard deviation
mu_unit = mu_hat                 # per-day location from rolling mean
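One guard worth adding (our suggestion, anticipating the regularization caveat in Section 3): the NIG density requires α > |β|, and an extreme rolling skew estimate could violate that, making np.sqrt(a**2 - b**2) undefined. A simple clip keeps the parameters in the valid region:

# Keep beta strictly inside (-alpha, alpha) so sqrt(alpha**2 - beta**2) stays real
eps = 1e-6
beta_hat = float(np.clip(beta_hat, -(alpha_hat - eps), alpha_hat - eps))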
2.5. ATR for Return Bounds
We compute the Average True Range (ATR) and define a symmetric return interval around the current price.
This range is discretized into bins for probability estimation.
tr = pd.concat([
    df["High"] - df["Low"],
    (df["High"] - df["Close"].shift(1)).abs(),
    (df["Low"] - df["Close"].shift(1)).abs()
], axis=1).max(axis=1)
atr = tr.rolling(ATR_LENGTH).mean().iloc[-1]
price0 = df["Close"].iloc[-1]
p_range = ATR_MULT * atr
minR = -p_range / price0
maxR = p_range / price0
stepR = (maxR - minR) / BIN_SIZE
y_lower = price0 * (1 + minR)
y_upper = price0 * (1 + maxR)

# Restrict the plotted history to the forecast band when possible
recent = df.tail(PRICE_WINDOW_DAYS)
if not recent["Close"].between(y_lower, y_upper).any():
    recent_in = recent
else:
    recent_in = recent[recent["Close"].between(y_lower, y_upper)]
2.6. Plotting Future Price Probabilities
Finally, we visualize the actual price path alongside forward-looking probabilistic bands across multiple horizons.
For each horizon t+h, we:
Scale the NIG distribution
Evaluate it across return bins
Convert those to price levels
Color boxes based on probability and direction
Compute total bullish vs. bearish probability mass
Each panel includes:
A histogram of probability bins (green for bullish, red for bearish)
A label showing the total bullish/bearish split
A time horizon indicator
### PLOT SETUP
plt.style.use("dark_background")
fig, axes = plt.subplots(
    1, NUM_HORIZONS + 1,
    figsize=(2 * (NUM_HORIZONS + 1), 6),
    sharey=True,
    gridspec_kw={"width_ratios": [3] + [1] * NUM_HORIZONS}
)

# Main price panel
ax_price = axes[0]
ax_price.plot(recent_in.index, recent_in["Close"], color="white", lw=1.2)
ax_price.set_ylim(y_lower, y_upper)
ax_price.set_ylabel("Price", color="white")
ax_price.set_title(f"Price (last {PRICE_WINDOW_DAYS} d)", color="white", pad=8)
loc = AutoDateLocator(minticks=6, maxticks=12)
ax_price.xaxis.set_major_locator(loc)
ax_price.xaxis.set_major_formatter(AutoDateFormatter(loc))
ax_price.tick_params(axis="x", rotation=45, colors="white")
ax_price.tick_params(axis="y", colors="white")

# PROBABILITY PANELS
for i in range(1, NUM_HORIZONS + 1):
    ax = axes[i]
    h = unit_days * i
    mu_h = mu_unit * h                # location scales linearly with the horizon
    delta_h = delta_hat * np.sqrt(h)  # scale grows with the square root of time

    # Evaluate the NIG PDF at bin midpoints in log-return space
    probs = []
    for j in range(BIN_SIZE):
        loR = minR + j * stepR
        hiR = loR + stepR
        midR = loR + 0.5 * stepR
        log_mid = np.log1p(midR)
        # constant bin-width factor; normalized away below
        p = nig_pdf(log_mid, alpha_hat, beta_hat, delta_h, mu_h) * np.log1p(stepR)
        probs.append(p)
    probs = np.array(probs)
    probs /= probs.sum()
    max_p = probs.max()
    cdf = probs.cumsum()

    # Confidence intervals (68% and 95%)
    lo68 = np.searchsorted(cdf, 0.16)
    hi68 = np.searchsorted(cdf, 0.84)
    lo68P = price0 * (1 + (minR + lo68 * stepR))
    hi68P = price0 * (1 + (minR + (hi68 + 1) * stepR))
    ax.axhspan(lo68P, hi68P, color="gray", alpha=0.25, zorder=0)
    lo95 = np.searchsorted(cdf, 0.025)
    hi95 = np.searchsorted(cdf, 0.975)
    lo95P = price0 * (1 + (minR + lo95 * stepR))
    hi95P = price0 * (1 + (minR + (hi95 + 1) * stepR))
    ax.axhspan(lo95P, hi95P, color="gray", alpha=0.15, zorder=0)

    # Draw probability boxes and accumulate bullish/bearish mass
    bullish_prob = 0.0
    bearish_prob = 0.0
    for j, p in enumerate(probs):
        loP = price0 * (1 + (minR + j * stepR))
        hiP = price0 * (1 + (minR + (j + 1) * stepR))
        midP = 0.5 * (loP + hiP)
        if midP >= price0:
            bullish_prob += p
            color = (0, 1, 0)  # green
        else:
            bearish_prob += p
            color = (1, 0, 0)  # red
        alpha = MIN_ALPHA + (MAX_ALPHA - MIN_ALPHA) * (p / max_p)
        rgba = (*color, alpha)
        ax.add_patch(
            plt.Rectangle((0, loP), BOX_WIDTH, hiP - loP, color=rgba, ec="none")
        )
        ax.text(
            BOX_WIDTH / 2, midP, f"{p * 100:4.1f}%",
            ha="center", va="center", fontsize=6, color="black"
        )

    # Add bullish vs. bearish summary at the top
    label = f"↑ {bullish_prob:.1%} ↓ {bearish_prob:.1%}"
    ax.text(
        BOX_WIDTH / 2, y_upper, label,
        ha="center", va="bottom", fontsize=8, color="white"
    )

    ax.set_xlim(0, BOX_WIDTH * 1.1)
    ax.set_ylim(y_lower, y_upper)
    ax.axis("off")

    # Add time horizon label at bottom center
    ax.text(
        BOX_WIDTH / 2, y_lower * 0.99,
        f"t+{i}{unit_label}", ha="center", va="top", fontsize=12, color="white"
    )

plt.tight_layout()
plt.show()

Figure 1. Forward-looking price probability bands for TSLA over five monthly horizons. Green and red bins represent upward and downward price moves, respectively, with shading proportional to probability mass. Confidence intervals (68% and 95%) are shown in gray.
The full end-to-end workflow is available as a Google Colab notebook at the end of the article 👇
3. Applications and Limitations
This model offers a fast and interpretable way to visualize forward-looking return distributions.
It is particularly useful in contexts where understanding the shape of future uncertainty matters more than a single point forecast.
Applications
The model gives a visual and statistical interpretation of the forward return distribution.
It is a useful sanity check alongside implied volatility surfaces or option pricing models.
It offers a return-based framing complementary to Greeks.
In risk management or portfolio modeling, this method can serve as a lightweight simulation alternative.
Instead of running Monte Carlo paths, it produces horizon-specific return bands, useful for conditional VaR-style frameworks; a minimal readout is sketched below.
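As a sketch of that use case (our illustration, assuming probs, minR, stepR, and price0 are available from the Section 2.6 code for a chosen horizon):

cdf = np.cumsum(probs)                # cumulative probability across bins
var_idx = np.searchsorted(cdf, 0.05)  # first bin where the CDF reaches 5%
var_price = price0 * (1 + minR + var_idx * stepR)
print(f"Approximate 5% VaR price level: {var_price:.2f}")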
Limitations
The model assumes the most recent return distribution is a valid proxy for the near future.
It does not adapt to structural regime shifts or volatility jumps unless those changes are reflected in the rolling window.
Furthermore, returns are assumed to be independent across time steps. This ignores autocorrelation, momentum effects, and mean reversion.
The parameter-to-NIG mapping is heuristic, not fitted. While fast and functional, it may not be statistically optimal.
Extreme values in skew or kurtosis can distort the shape if not regularized.