When Mathematics Faced Infinity: The Legacy of Fourier (part 1)
Over two centuries ago, a French mathematician named Joseph Fourier dared to propose a revolutionary idea: that any periodic function—even those with discontinuities or without derivatives—could be represented as an infinite sum of sines and cosines. This proposal not only transformed physics and mathematics forever but also triggered a storm of criticism from the scientific community of his time.
Fourier was suggesting something that many of his contemporaries considered mathematical heresy: that “ugly” functions could be decomposed into “beautiful” ones—smooth, infinitely differentiable sines and cosines. Even if a function had a jump or a sharp corner, it could still be “drawn” as an infinite sum of gentle waves.
Today, these ideas are fundamental to physics, engineering, data science, and even digital music. But figuring out how to make sense of an infinite sum of oscillatory functions was anything but simple.
The central question: How do we represent complicated functions?
The Fourier series takes the form:
$$ f(t) = \sum_{k=-\infty}^{\infty} \hat{f}(k) e^{2\pi i k t} $$
Pause for a moment. How can this infinite sum of complex exponentials (or sines and cosines, if you prefer) faithfully represent a general function? What does such a sum even mean? Does it converge? To what? In what sense?
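Before the theory, it helps to see the sum in action. Below is a minimal numerical sketch (Python with NumPy; the helpers `fhat` and `partial_sum` are names I made up for illustration): we approximate each coefficient \(\hat{f}(k)\) by a Riemann sum and watch the mean-square error of the truncated series shrink as we include more harmonics.

```python
import numpy as np

def fhat(f, k, n=4096):
    """Approximate fhat(k) = integral_0^1 f(t) e^{-2 pi i k t} dt by a Riemann sum."""
    t = np.arange(n) / n
    return np.mean(f(t) * np.exp(-2j * np.pi * k * t))

def partial_sum(f, t, N):
    """Truncated series S_N f(t) = sum over k = -N..N of fhat(k) e^{2 pi i k t}."""
    return sum(fhat(f, k) * np.exp(2j * np.pi * k * t) for k in range(-N, N + 1))

def triangle(t):
    # A triangle wave: continuous everywhere, but with corners where it has no derivative.
    return np.abs(t - 0.5)

t = np.linspace(0, 1, 1000, endpoint=False)
for N in (2, 8, 32):
    err = np.sqrt(np.mean(np.abs(partial_sum(triangle, t, N) - triangle(t)) ** 2))
    print(f"N = {N:2d}  mean-square error = {err:.1e}")
```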
Making that observation rigorous is where the story gets rich. Mathematicians such as Dirichlet, Riemann, Lebesgue, and Hilbert spent decades building the mathematical scaffolding needed to validate Fourier’s intuition. A particularly powerful idea emerged: convergence in the \(L^2\) norm, which justifies saying that a Fourier series converges in the mean-square sense even when it fails to converge pointwise.
Thanks to this concept, we came to understand that a function is “well represented” when its energy (the integral of its squared magnitude) matches the total energy of its harmonic components. This is the essence of Parseval’s theorem:
$$ \int_0^1 |f(t)|^2\, dt = \sum_k |\hat{f}(k)|^2 $$
Each component contributes energy equal to the square of the modulus of its coefficient. A simple yet powerful idea: a function is the energetic sum of its parts.
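This identity is easy to check numerically. In the sampled, discrete setting it is in fact exact: here is a quick sketch (again Python with NumPy) that compares the energy computed in time with the energy computed from the FFT coefficients.

```python
import numpy as np

n = 1024
t = np.arange(n) / n
f = np.exp(np.sin(2 * np.pi * t))           # any smooth 1-periodic test function

coeffs = np.fft.fft(f) / n                  # fhat(k), approximated on n samples

energy_time = np.mean(np.abs(f) ** 2)       # approximates integral_0^1 |f(t)|^2 dt
energy_freq = np.sum(np.abs(coeffs) ** 2)   # sum over k of |fhat(k)|^2

print(energy_time, energy_freq)             # agree to floating-point precision
```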
The heated ring problem: where the magic began
So where did all of this begin? Fourier was trying to solve a very concrete problem: to understand how heat flows through a body—specifically, a heated metallic ring. Imagine a ring with an initial temperature distribution \(f(x)\). As time passes, heat spreads and the temperature smooths out. How do we model this process?
Here, the heat equation comes into play:
$$ \frac{\partial u}{\partial t} = \frac{1}{2} \frac{\partial^2 u}{\partial x^2} $$
Assuming that the solution \(u(x,t)\) is periodic (as it would be on a ring), Fourier proposed expressing the solution as:
$$ u(x,t) = \sum_k c_k(t) e^{2\pi i k x} $$
Substituting this into the equation and differentiating term by term (each derivative in \(x\) brings down a factor of \(2\pi i k\)), then matching the coefficients of each \(e^{2\pi i k x}\), he found that every coefficient \(c_k(t)\) must satisfy a very simple ordinary differential equation:
$$ c_k'(t) = -2\pi^2 k^2 c_k(t) \Rightarrow c_k(t) = c_k(0) e^{-2\pi^2 k^2 t} $$
Each harmonic mode decays exponentially over time. The higher the frequency (the “sharper” the mode), the faster it fades. Heat, in a sense, filters out the high-frequency noise. It smooths everything.
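Fourier’s whole strategy fits in a few lines of code. The sketch below (NumPy again; the setup is mine, chosen for illustration) decomposes an initial temperature profile into harmonics with the FFT, lets each coefficient decay at its own rate \(e^{-2\pi^2 k^2 t}\), and transforms back:

```python
import numpy as np

n = 256
x = np.arange(n) / n
u0 = (x < 0.5).astype(float)     # initial temperature: one half hot, one half cold

k = np.fft.fftfreq(n, d=1 / n)   # the integer frequency of each FFT bin
c0 = np.fft.fft(u0)              # coefficients c_k(0) (unnormalised is fine here)

def u(t):
    """Temperature profile at time t: every mode decays as exp(-2 pi^2 k^2 t)."""
    return np.real(np.fft.ifft(c0 * np.exp(-2 * np.pi**2 * k**2 * t)))

for time in (0.0, 0.001, 0.01, 0.1):
    profile = u(time)
    print(f"t = {time:5.3f}  spread = {profile.max() - profile.min():.3f}")
```

The spread between the hottest and coldest points shrinks as \(t\) grows: the high harmonics vanish almost instantly, and the profile flattens toward its average temperature.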
The modern impact: from physics to Spotify
This idea, born from heat flow and partial differential equations, ended up revolutionising the digital world. The same mathematical tools used to study how heat dissipates are now used to compress audio files, process images, and even train neural networks.
Even functions with discontinuities can be reasonably approximated by Fourier series. They may not be differentiable, but their global structure can still be captured: a truncated Fourier series reconstructs sharp edges and jumps surprisingly well. This is deeply related to the Fourier convergence theorem and to convergence in \(L^2\). It is also why we observe the famous Gibbs phenomenon: a persistent overshoot near each discontinuity, roughly 9% of the jump height, which never disappears pointwise even though the \(L^2\) error tends to zero.
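The square wave makes this concrete. Its Fourier series uses only odd sine harmonics, and the sketch below (NumPy assumed; `S_N` is an illustrative helper) shows the overshoot refusing to shrink even as the \(L^2\) error vanishes:

```python
import numpy as np

t = np.linspace(0, 1, 20001)
square = np.where(t < 0.5, 1.0, -1.0)       # a jump of height 2 at t = 0.5

def S_N(N):
    """Partial Fourier sum of the square wave: (4/pi) * sum over odd k of sin(2 pi k t)/k."""
    s = np.zeros_like(t)
    for k in range(1, N + 1, 2):            # only odd harmonics contribute
        s += (4 / (np.pi * k)) * np.sin(2 * np.pi * k * t)
    return s

for N in (9, 99, 999):
    sn = S_N(N)
    overshoot = sn.max() - 1.0              # how far the sum pokes above the true maximum
    l2_error = np.sqrt(np.mean((sn - square) ** 2))
    print(f"N = {N:3d}  overshoot = {overshoot:.3f}  L2 error = {l2_error:.3f}")
# The overshoot settles near 0.18 (about 9% of the jump of size 2) and never
# vanishes, while the L2 error still tends to zero.
```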
A revolution as powerful as artificial intelligence
Yes, you read that correctly. This conceptual revolution—the idea of breaking down general functions into harmonic components—was to applied mathematics what artificial intelligence is to today’s technology. A tool that reshaped entire disciplines, transformed how we think, and enabled new technologies once thought impossible.
But there was still one more missing piece: a way to compute these series efficiently.
Over a century later, James Cooley and John Tukey published an algorithm that made Fourier’s vision even more powerful: the Fast Fourier Transform (FFT). By cutting the cost of computing a discrete Fourier transform from \(O(N^2)\) to \(O(N \log N)\) operations, it opened the floodgates for real-time signal processing, data compression, and digital innovation.
That story—the rise of Tukey, the FFT, and the digital revolution—will be the subject of a future blog post.
Final thoughts
From a heated ring to Spotify, from infinite sums to a mathematical revolution, Fourier series show us how mathematics can tame even the most chaotic and discontinuous of functions.
In part 2, we’ll bring back the initial distribution \(f(x)\), manipulate the solution a bit further, and witness something utterly magical.
Stay tuned!
Cesar!