Build a volatility factor strategy with 0.84 Sharpe

October 4, 2025


I was reviewing my own portfolio allocations this weekend and realized how much I still lean on low volatility names, even when markets get noisy.

A lot of beginners get stuck thinking that safer stocks mean lower returns. This comes from old ideas about risk, which break down when you look closer at the data.

Students in Getting Started With Python for Quant Finance learn to use Python to screen for low volatility stocks, build risk-adjusted portfolios, and backtest these strategies so they see how compounding and risk management really work.

Today’s newsletter walks through a piece of the end-to-end process my students master in full.

By reading today’s newsletter, you’ll get Python code to build and backtest a low volatility equity portfolio with cleaner drawdowns and improved risk-adjusted returns.

Let's go!

Build a volatility factor strategy with 0.84 Sharpe

The low volatility factor is a systematic way to prioritize stocks with lower price fluctuations and lower beta compared to the broader market. It allows investors to seek higher risk-adjusted returns through steady compounding and lower drawdowns over market cycles.

The history of the low volatility factor began with empirical anomalies in finance theory.

Decades ago, researchers noticed that low volatility stocks delivered better risk-adjusted returns than more volatile names. Academic evidence, notably the work of Baker and Haugen, challenged the classic risk-return tradeoff, showing "safer" stocks could outperform in the real world. Now, minimum volatility indices like those from S&P and MSCI formally track and publish performance data to demonstrate the effect globally.

Today, the low volatility factor is integral for practitioners looking to temper portfolio swings without sacrificing long-term returns.

Practitioners use systematic screens based on historical volatility or beta to select these stocks, often relying on ETFs like SPLV and USMV for broad exposure. They rebalance portfolios regularly to maintain risk levels, carefully monitor sector exposure, and frequently blend factors like value or quality for stronger risk-adjusted returns.
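As a sketch of such a screen outside any backtester, here is trailing volatility and beta computed with plain pandas on synthetic prices. The tickers, window length, and random-walk data below are placeholders for illustration, not real market data:

```python
import numpy as np
import pandas as pd

def trailing_vol_and_beta(prices, market, window=252):
    """Annualized trailing volatility and CAPM-style beta for each column of `prices`."""
    rets = prices.pct_change().dropna().tail(window)
    mkt = market.pct_change().dropna().tail(window)
    vol = rets.std() * np.sqrt(252)                        # annualize daily std
    beta = rets.apply(lambda col: col.cov(mkt) / mkt.var())
    return vol, beta

# Synthetic random-walk prices; "AAA" is built to be calmer than "BBB"
rng = np.random.default_rng(0)
idx = pd.bdate_range("2016-01-04", periods=300)
market = pd.Series(100 * np.exp(np.cumsum(rng.normal(0, 0.01, 300))), index=idx)
prices = pd.DataFrame({
    "AAA": 50 * np.exp(np.cumsum(rng.normal(0, 0.005, 300))),
    "BBB": 50 * np.exp(np.cumsum(rng.normal(0, 0.02, 300))),
}, index=idx)

vol, beta = trailing_vol_and_beta(prices, market)
low_vol_names = vol.nsmallest(1).index.tolist()  # screen: keep the calmest name(s)
```

A real screen would swap in actual price history and typically rank the full universe rather than pick a single name, but the mechanics are the same.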

Let's see how it works with Python.

Imports and setup

This example uses the free Quandl data bundle for backtesting. The pro: it's free. The cons: the universe is limited to about 3,000 US stocks, and the data ends in 2018. Despite those limits, it's a great way to get started.

To use it, create a free API key at Nasdaq Data Link. Then run the following code.

import os
from zipline.data import bundles


os.environ["QUANDL_API_KEY"] = "YOUR_API_KEY"
bundles.ingest("quandl")

We use warnings to filter messages, numpy and pandas for fast calculations, and Zipline libraries for running backtests. We also set a few key parameters we’ll use for the strategy.

import warnings

import numpy as np
import pandas as pd
from zipline import run_algorithm
from zipline.api import (
    attach_pipeline,
    cancel_order,
    date_rules,
    get_datetime,
    get_open_orders,
    order_target_percent,
    pipeline_output,
    schedule_function,
    set_commission,
    set_slippage,
    time_rules,
)
from zipline.finance import commission, slippage
from zipline.pipeline import Pipeline
from zipline.pipeline.data import USEquityPricing
from zipline.pipeline.factors import AverageDollarVolume, CustomFactor

warnings.filterwarnings("ignore")


LOOKBACK_WEEKS = 151
LOOKBACK_DAYS = LOOKBACK_WEEKS * 5
UNIVERSE_SIZE = 3000
QUANTILE = 4

Build custom stock factors

Here, we define a custom factor to measure weekly volatility and build a pipeline that selects stocks with the lowest volatility from higher-volume names.

class WeeklyVolatility(CustomFactor):
    """
    Computes standard deviation of weekly returns over a 3-year window.
    Weekly return is defined over 5 consecutive trading days:
        r_week = (close[t] / close[t-4]) - 1
    We take non-overlapping 5-day chunks across the window for stability.
    """

    inputs = [USEquityPricing.close]
    window_length = LOOKBACK_DAYS

    def compute(self, today, assets, out, closes):
        n_weeks = closes.shape[0] // 5
        if n_weeks < 2:  # not enough weeks to compute a standard deviation
            out[:] = np.nan
            return

        trimmed = closes[-n_weeks * 5 :, :]  # (n_weeks*5, n_assets)
        weekly_open = trimmed[::5, :]  # first close in each 5-day block
        weekly_close = trimmed[4::5, :]  # last close in each 5-day block
        weekly_rets = (weekly_close / weekly_open) - 1  # shape (n_weeks, n_assets)

        out[:] = np.nanstd(weekly_rets, axis=0, ddof=1)


def make_pipeline():
    adv20 = AverageDollarVolume(window_length=20)
    base_universe = adv20.top(UNIVERSE_SIZE)

    vol = WeeklyVolatility(mask=base_universe)
    # Select lowest-volatility quartile by taking the bottom N within the ADV-filtered universe.
    target_count = max(1, UNIVERSE_SIZE // QUANTILE)
    lows = vol.bottom(target_count, mask=base_universe)

    pipe = Pipeline(
        columns={
            "adv20": adv20,
            "vol": vol,
            "low_vol_long": lows,
        },
        screen=base_universe,
    )
    return pipe

In this block, we create a special stock ranking that calculates how much a stock’s price jumps around from week to week over the last three years. We only consider stocks with good trading volume. We then sort these stocks and filter down to those with the most stable (least volatile) prices, forming our candidate list.

By defining a custom calculation for weekly volatility, we’re able to focus on stocks with lower risk. The pipeline groups our target stocks by filtering out those that don’t trade enough and then scoring what’s left, so each month we can focus on the most stable performers. This process uses Zipline’s factor modeling tools, making it easy to plug into the rest of our analysis. We return a data pipeline that tags each qualifying stock with its volume, its volatility, and a flag for inclusion in our portfolio.
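To make the chunking concrete, here is the same math applied to a toy close-price array outside of Zipline. Pure NumPy, no pipeline machinery, with made-up prices of 1 through 15:

```python
import numpy as np

# 15 trading days of closes for one asset: prices 1, 2, ..., 15
closes = np.arange(1.0, 16.0).reshape(15, 1)

n_weeks = closes.shape[0] // 5               # 3 complete 5-day "weeks"
trimmed = closes[-n_weeks * 5:, :]
weekly_open = trimmed[::5, :]                # closes on days 1, 6, 11
weekly_close = trimmed[4::5, :]              # closes on days 5, 10, 15
weekly_rets = (weekly_close / weekly_open) - 1
vol = np.nanstd(weekly_rets, axis=0, ddof=1)
# weekly_rets holds 4/1, 4/6, and 4/11: equal price moves, shrinking percentage returns
```

Because the 5-day chunks don't overlap, each weekly return is an independent observation, which keeps the standard deviation estimate stable over the 3-year window.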

Configure and execute the trading strategy

Next, we set up our strategy to use the custom data pipeline, schedule when we rebalance, and decide which stocks to hold at each rebalance.

def initialize(context):
    # Attach pipeline
    attach_pipeline(make_pipeline(), "lowvol_pipe")

    # Monthly rebalance at market open (first trading day)
    schedule_function(
        rebalance, date_rules.month_start(), time_rules.market_open(minutes=1)
    )


def before_trading_start(context, data):
    # Pull latest factor data
    context.pipe = pipeline_output("lowvol_pipe")

    # Determine long basket (lowest-vol quartile)
    longs_frame = context.pipe[context.pipe["low_vol_long"]]
    context.long_assets = list(longs_frame.index)


def rebalance(context, data):

    # Filter tradable and priced today
    tradable_longs = [a for a in context.long_assets if data.can_trade(a)]

    if len(tradable_longs) == 0:
        # Nothing tradable; flatten all positions
        for asset in list(context.portfolio.positions.keys()):
            if data.can_trade(asset):
                order_target_percent(asset, 0.0)
        return

    # Target equal weights across long basket (long-only)
    w = 1.0 / float(len(tradable_longs))

    # Cancel any open orders to avoid drift
    oo = get_open_orders()
    for asset, orders in oo.items():
        for o in orders:
            cancel_order(o)

    # Close positions that are no longer in the target basket
    current_pos = list(context.portfolio.positions.keys())
    for asset in current_pos:
        if asset not in tradable_longs and data.can_trade(asset):
            order_target_percent(asset, 0.0)

    # Set target weights for longs
    for asset in tradable_longs:
        order_target_percent(asset, w)

This code ties the whole backtest together. Each month, we update our list of low-volatility stocks and assign equal weights to each, selling anything that’s no longer in our target group.

When nothing in our group can be traded that day, we exit all positions. We also make sure to clean up any outstanding orders before placing new ones to avoid unnecessary trading. Scheduling and pipeline hooks make the whole process hands-off once started, keeping our portfolio focused and up-to-date.
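The rebalance bookkeeping boils down to a small piece of set logic. Here it is isolated as a plain function with made-up asset names, a toy rather than Zipline's API:

```python
def rebalance_targets(current_positions, basket):
    """Equal-weight targets for `basket`; close anything no longer in it."""
    if not basket:
        # Nothing tradable: flatten every existing position
        return {asset: 0.0 for asset in current_positions}
    weight = 1.0 / len(basket)
    targets = {asset: 0.0 for asset in current_positions if asset not in basket}
    targets.update({asset: weight for asset in basket})
    return targets

# "AAA" falls out of the basket and is closed; the rest each get one-third
targets = rebalance_targets(current_positions=["AAA", "BBB"],
                            basket=["BBB", "CCC", "DDD"])
```

In the Zipline version, each entry of a dictionary like this becomes one `order_target_percent` call.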

Run backtest and plot results

Now, we set the backtest period, run the simulation using Quandl US price data, and graph how the strategy’s value grows over time.

start = pd.Timestamp("2015")
end = pd.Timestamp("2018")

perf = run_algorithm(
    start=start,
    end=end,
    initialize=initialize,
    before_trading_start=before_trading_start,
    capital_base=100_000,
    bundle="quandl",
)

perf.portfolio_value.plot(
    title="Low Volatility Factor Equity Curve",
    ylabel="Strategy Equity"
)

Here, we run a full portfolio simulation over three years, tracking how a $100,000 account grows if we always hold the least volatile, high-volume US stocks. Zipline drives the simulation with actual historical pricing from Quandl, so the results reflect real market data rather than synthetic prices.

The plot gives us a clear picture of how steady or bumpy our approach might feel in the real world. By checking this chart, we quickly see if our low-volatility approach actually produced smoother and stronger returns than just holding the market.

The equity curve will look something like this.

The average Sharpe ratio of this strategy over the last year of the analysis is 0.84. A promising result!
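You can reproduce numbers like these from the `perf` frame that `run_algorithm` returns. The helpers below are generic pandas, not part of Zipline; a zero risk-free rate is assumed, and slicing the last 252 rows is one reasonable reading of "the last year":

```python
import numpy as np
import pandas as pd

def annualized_sharpe(daily_returns: pd.Series, periods: int = 252) -> float:
    """Mean over std of daily returns, scaled to annual terms (risk-free rate assumed 0)."""
    return float(daily_returns.mean() / daily_returns.std() * np.sqrt(periods))

def max_drawdown(equity: pd.Series) -> float:
    """Largest peak-to-trough decline, as a negative fraction."""
    drawdown = equity / equity.cummax() - 1.0
    return float(drawdown.min())

# With the backtest above you would run:
#   annualized_sharpe(perf.returns.iloc[-252:])
#   max_drawdown(perf.portfolio_value)

# Toy check on a made-up equity curve: trough of 90 after a peak of 120 is -25%
toy = pd.Series([100.0, 110.0, 99.0, 120.0, 90.0])
dd = max_drawdown(toy)
```

Checking drawdown alongside Sharpe matters for a low-volatility strategy, since smoother equity curves are the whole point of the factor.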

Your next steps

You’ve now got a live backtest tracking low-volatility stock performance. The most obvious next step is to run the backtest through today. Check out Norgate Data to learn how to get up-to-date data.
