Replace scipy.signal.fftconvolve with faster numpy.convolve #272

@seanlaw

Description

While comparing the timing for computing sliding dot products with convolutions:

#!/usr/bin/env python

import time

import numpy as np
import numpy.testing as npt
from scipy.signal import fftconvolve


def np_QT(Qr, T):
    # Sliding dot product via NumPy's convolution (m and n are set in the loop below)
    QT = np.convolve(Qr, T)

    return QT.real[m - 1 : n]


def scipy_QT(Qr, T):
    # Sliding dot product via SciPy's FFT-based convolution
    QT = fftconvolve(Qr, T)

    return QT.real[m - 1 : n]


Q = np.random.rand(50)
for i in range(6, 28):
    T = np.random.rand(2**i + 11)
    n = T.shape[0]
    m = Q.shape[0]
    Qr = np.flipud(Q)  # Reverse/flip Q
    start = time.time()
    QT_np = np_QT(Qr, T)
    np_time = time.time() - start
    start = time.time()
    QT_scipy = scipy_QT(Qr, T)
    scipy_time = time.time() - start
    print(2**i, np.round(np_time, 2), np.round(scipy_time, 2), np.round(scipy_time / np_time, 2))
    npt.assert_almost_equal(QT_np, QT_scipy)

It was found that numpy.convolve has become faster than scipy.signal.fftconvolve:

Length        NumPy (s)  SciPy (s)  Factor (SciPy/NumPy)
64            0.0        0.0        11.42x
128           0.0        0.0        7.91x
256           0.0        0.0        6.89x
512           0.0        0.0        4.96x
1024          0.0        0.0        3.76x
2048          0.0        0.0        2.88x
4096          0.0        0.0        3.06x
8192          0.0        0.0        2.94x
16384         0.0        0.0        2.18x
32768         0.0        0.0        2.02x
65536         0.0        0.0        2.42x
131072        0.0        0.01       2.49x
262144        0.01       0.02       1.86x
524288        0.02       0.04       2.14x
1048576       0.03       0.09       3.27x
2097152       0.06       0.18       3.11x
4194304       0.11       0.38       3.35x
8388608       0.22       0.81       3.72x
16777216      0.44       1.95       4.46x
33554432      0.94       4.07       4.33x
67108864      1.87       8.57       4.59x
134217728     3.65       25.85      7.09x

In most cases, NumPy is roughly 2x faster, and around 4x faster for larger arrays. We should switch from SciPy to NumPy for our convolutions.
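If we do make the switch, a minimal sketch of what it might look like is below, assuming the sliding dot product is currently computed by a helper built around fftconvolve (the sliding_dot_product name here is illustrative, not necessarily the actual function name):

import numpy as np


def sliding_dot_product(Q, T):
    # Illustrative sketch only: dot product of the query Q against every
    # length-m window of T, using np.convolve instead of scipy.signal.fftconvolve
    n = T.shape[0]
    m = Q.shape[0]
    Qr = np.flipud(Q)  # Reverse/flip Q so that convolution acts as correlation
    QT = np.convolve(Qr, T)

    return QT.real[m - 1 : n]

For a query of length m and a time series of length n this returns the n - m + 1 sliding dot products, so the only change relative to the SciPy-based version is the convolution call itself.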
