Oversimplified: Signals and Systems (7) – What are Transforms? See your signal from a different angle: as a linear combination of a family of functions (a basis)

Transforms are a big topic in signal processing, but the way they are traditionally taught mirrors the way they were discovered: through calculus. However, this approach does not reveal the true purpose of transforms and why they are desirable. It only misleads beginners

  • to think that there are 5~7 separate topics to learn: DFT/DFS/FS + FT + LT + DTFT + ZT
  • to treat the Fourier transform as a formula that magically gives you ‘frequency’, without clearly stating what that really means.
  • to not really understand why we later look into Laplace/z-transforms.
  • to give wavelets more ‘worship’ than they deserve
  • to not immediately see why sinusoids appear as simple spikes in their Fourier transforms
  • to not immediately understand why we can probe the impulse response of a system by sweeping sinusoidal frequencies.

The main reason we need transforms is to get at least one different view of our time-domain signals, highlighting characteristics that cannot easily be seen with the naked eye. It’s very much like switching from the familiar rectangular coordinates to polar coordinates when the problem screams length and direction.


STOP!

Everything in this article requires a thorough understanding of many concepts from a first college linear algebra class. Do NOT even think about skipping the linear algebra prerequisite. It’s dead easy for those who know linear algebra and impossible for those who don’t yet.

The modern approach here requires you to learn inner products. Don’t worry about Gaussian elimination/determinants/computational stuff. Focus on change of basis and orthonormality.

Based on my experience taking my first linear algebra class at UCSD over the summer, I believe anybody with a decent high school math background who can integrate simple trigonometric functions can master the required material (just enough to understand the modern signal processing approach here) in 2 weeks.

Don’t be scared of linear algebra! Sometimes it looks hard because the professor tried to smuggle abstract algebra material into the class without telling you anything about groups, rings and fields.

Once you get linear algebra, most of the material in the latter 2/3 of basic signal processing can be condensed into one or two sessions. It is definitely worth the investment of taking a 2-week detour to learn inner products first!


Before I start, I’d like to relentlessly emphasize the importance of linear combinations:

\alpha_1{\bf f_1} + \alpha_2{\bf f_2} + \alpha_3{\bf f_3} + ... + \alpha_N{\bf f_N}

because:

  • convolution is a linear combination of echoes, where the impulse response h provides the scaling coefficients (one per shift/center).
  • a linear transform is a way to decompose your time-domain signal into a linear combination over a family of functions you strategically pick, which we call basis functions*

All transforms taught in a basic signals and systems class are linear transforms (maps), which amount to a change of basis/coordinates.
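To make this concrete, here is a minimal numpy sketch (my illustration, not taken from the original post) of a change of basis in plain 2D: inner products extract the coefficients, and the linear combination of basis vectors rebuilds the signal:

import numpy as np

# Express the same vector in a rotated orthonormal basis: inner products
# extract the coefficients, and the linear combination of basis vectors
# rebuilds the signal exactly.
x = np.array([3.0, 4.0])                  # signal in the standard basis

theta = np.pi / 6                         # any rotated orthonormal basis will do
f1 = np.array([np.cos(theta), np.sin(theta)])
f2 = np.array([-np.sin(theta), np.cos(theta)])

a1, a2 = f1 @ x, f2 @ x                   # coefficients via inner products
x_rebuilt = a1 * f1 + a2 * f2             # linear combination in the new basis

print(np.allclose(x, x_rebuilt))          # True: same vector, different view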


The most popular transform you’ll learn in class is the Fourier transform. Instructors can choose to go light on Laplace/z-transforms or even Fourier series, or to move some of the material to the next class, but at the end of the day, nobody should walk away without mastering the Fourier transform if the instructor succeeded.

The Fourier transform is named in honor of Joseph Fourier for choosing sinusoids of different frequencies as the family of basis functions used to re-express our time-domain signals as if they came from a linear combination of these sinusoids.

Each point in the Fourier (colloquially, frequency) domain carries ONE (complex) sinusoidal function ringing at that frequency in the time domain. The ‘frequency’ in the Fourier transform is shorthand for ‘a complex sinusoid oscillating at that frequency’.

More generally, for all linear transforms, each point in the transform domain represents one basis function buried as an additive component in the original domain.

For brevity, we do not explicitly write down these sinusoids and work with the coefficients/spectrum instead. Since we are not constantly reminded of which basis functions the Fourier transform is using, people gradually forget that the Fourier transform is all about adding sinusoids together!

More generally, linear transforms are all about adding basis functions/vectors together!


Why is the family of sinusoids (or complex exponentials) such a popular basis?

  • It’s the solution form of the most popular type of ordinary differential equation: linear constant coefficient differential equations (LCCDE), which appear naturally in tons of physical systems.
    This also means that you can solve these differential equations as easily as high-school algebra (without calculus) by transforming them first and undoing it later, as in the sketch below.
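As a minimal worked example (my sketch, not from the original post), take the first-order LCCDE \frac{dy(t)}{dt} + a\,y(t) = x(t). Since differentiating e^{j\omega t} just multiplies it by j\omega, transforming both sides turns calculus into algebra:

(j\omega)Y(\omega) + aY(\omega) = X(\omega) \quad\Rightarrow\quad Y(\omega) = \dfrac{X(\omega)}{j\omega + a}

Undoing the transform then recovers y(t) without ever integrating.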

Sinusoids also have the desirable properties of an orthogonal basis, shared by the bases of all the linear transforms taught in the class:

  • It’s orthogonal: the inner product between two sinusoids at different frequencies is zero. That means the inner product of your input with a sinusoid extracts only the buried (additive) sinusoidal component at the same frequency and nothing else. If sinusoids of different frequencies (basis functions) were to interact with each other, you would have to account for these interactions and would likely end up with more than one way to express your time-domain signal in the new basis, which is messy.
  • It’s unitary (therefore also orthonormal): the transform is not going to make your time-domain signal stronger/weaker than it really is once you undo it, so you don’t have to worry about scaling things properly each time you use a transform. This also leads to Parseval’s theorem, which states that the energy is the same whether you sum it over time (in the time domain) or over frequency (in the Fourier domain). Both properties are easy to check numerically, as in the sketch right after this list.
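A quick numerical check (my sketch, not from the original post), using the length-N unitary DFT basis f_k(t) = e^{j2\pi kt/N}/\sqrt{N}:

import numpy as np

N = 8
t = np.arange(N)
F = np.exp(2j * np.pi * np.outer(t, t) / N) / np.sqrt(N)  # column F[:, k] is basis vector f_k

# Orthonormal: <f_k, f_m> is 1 when k == m and 0 otherwise, i.e. F^H F = I
print(np.allclose(F.conj().T @ F, np.eye(N)))                  # True

# Parseval: the energy is the same in either domain
x = np.random.default_rng(1).standard_normal(N)
a = F.conj().T @ x                                             # coefficients via inner products
print(np.isclose(np.sum(np.abs(x)**2), np.sum(np.abs(a)**2)))  # True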

Most practically useful bases (even the ones outside “signals & systems” class) are orthogonal bases.

Despite their popularity, sinusoids have their own character when acting as a basis. Simple sinusoids span from the infinite past to the infinite future, never just a small chunk of our mortal time. They stay flat (bounded between -1 and 1), and all they do is repeat with regularity. This means:

  • They are highly sensitive at picking up weak repeating patterns buried in a chaos of events
  • There is no way you can tell, even vaguely, when a one-off event/transient happened with them.
  • They cannot properly capture exponential growth or decay

By tailoring the family of basis functions (using something other than simple sinusoids), we can trade some of the ability to pinpoint the frequency of repetitions for the ability to tell roughly when an event happened. This is our version of the uncertainty principle. You’ll get to study it later in time-frequency analysis (STFT, wavelets, etc.), which is outside the scope of this post.

By learning signal processing through the modern approach here, you’ve already done the hard work of making the intellectual leap required in advanced signal processing classes: thinking in terms of linear combinations and inner products!


Back to the mechanics. Let’s take the discrete-time Fourier transform (DTFT) as an example; the textbook definition is:

A(\omega) = \displaystyle\sum_{t} a(t) e^{-j\omega t}

See what happens if we use subscripts for t and replace e^{-j\omega t} with \overline{f_{t}(\omega)}, where f_{t}(\omega) = e^{j\omega t}:

A(\omega) = \displaystyle\sum_{t} a_{t} \overline{f_{t}(\omega)}

It’s the linear combination of a family of functions ..., {\bf f}_{-1}, {\bf f}_0, {\bf f}_1, ... scaled/weighted by ..., a_{-1}, a_0, a_1, .... Note that the boldface \bf f abstracts the ideas of function/signal/vector into one concept: vectors. E.g. \bf f_1 is the vector that covers all points over \omega in f_1(\omega).

I chose the DTFT for illustration because a linear combination is more intuitive when t comes in discrete steps, so I can write out each of the terms instead of relying on summation/integration notation.

However, this is also the definition of the inner product between a_t and f_t(\omega), for whichever \omega we are examining at the moment:

\langle a_t, f_t(\omega)\rangle \equiv \displaystyle\sum_{t} a_{t} \overline{f_{t}(\omega)}

It’s exactly the same when you replace the summation with an integral for the continuous-time case. Note that the second argument of an inner product is conjugated, which flips the sign in the complex exponent.
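Here’s the same idea in code: a small numpy sketch (my illustration, not from the original post) that evaluates the DTFT at a single \omega as exactly this inner product, using a finite chunk of a cosine as the input:

import numpy as np

t = np.arange(64)
omega0 = 0.7
a = np.cos(omega0 * t)                 # a finite chunk of a cosine

def dtft_point(a, t, omega):
    f = np.exp(1j * omega * t)         # basis function at this omega
    return np.sum(a * np.conj(f))      # inner product: conjugate the second term

print(abs(dtft_point(a, t, omega0)))   # large (~len(t)/2): this component is buried in a
print(abs(dtft_point(a, t, 2.0)))      # small: nothing buried at this frequency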

Laplace transforms and z-transforms are just the same inner product with a different family of basis functions \bf f (kernels), which accounts for signals with an exponential growth/decay component (which also occurs a lot in nature), as we’ll discuss in later posts.
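For a taste (my sketch, not from the original post): writing z = re^{j\omega} in the z-transform

X(z) = \displaystyle\sum_{t} a_t z^{-t} = \displaystyle\sum_{t} a_t \, r^{-t} e^{-j\omega t}

shows that the probing function is no longer a flat sinusoid but a sinusoid riding on an exponential envelope r^{-t}; choosing r \neq 1 lets the transform lock onto growing or decaying components that a plain sinusoid cannot properly capture.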


Basically, this article is about ‘basis’, ‘basis’ and more ‘basis’. This is the concept that most first-time takers of signal processing classes are still missing even if they aced the exam. Why is this idea so important?

Now that we know the Fourier transform uses sinusoids (complex exponentials) as its basis, it becomes immediately obvious that each point (frequency spot) in a Fourier transform graph represents a complex sinusoid at that spot’s frequency.

Therefore, if we have a simple signal like a real sinusoid, which can easily be written as two complex exponential terms, e.g.

\cos(\omega_0 t) \equiv \frac{e^{j\omega_0 t} +e^{-j\omega_0 t} }{2}

, the Fourier transform is obviously two points with half the amplitude at -\omega_0 and +\omega_0, or mathematically 0.5\delta(\omega-\omega_0)+0.5\delta(\omega+\omega_0) (up to the 2\pi scaling constant of your chosen transform convention). There’s no need to torture yourself with the Fourier integral for something this simple. A good memory of complex number properties can give you the pair from scratch in half a second if you know what the Fourier transform really stands for.
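You can confirm this numerically in one line of FFT. A quick sketch (mine, not from the original post) using a cosine whose frequency sits exactly on a DFT bin:

import numpy as np

N = 64
k0 = 5                                 # cosine frequency, in DFT bins
t = np.arange(N)
x = np.cos(2 * np.pi * k0 * t / N)

X = np.fft.fft(x) / N                  # normalize so amplitudes read off directly
peaks = np.nonzero(np.abs(X) > 1e-9)[0]
print(peaks, X[peaks].real)            # bins [5 59] (i.e. +k0 and -k0), each 0.5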

So it’s not surprising that sinusoids and unit impulses are among the most common signal building blocks mentioned in my earlier article.

This is an important concept that leads to eigensystems and probing LTI systems with sinusoids, which we’ll talk about later after mastering transform properties such as the convolution<->multiplication pair.


So far I’ve covered all the essential topics that cannot be easily acquired on your own through traditional teaching methods. These should be enough for you to work out the later material on your own, to the point that you’ll even find it obvious (that’s how I felt when I took EE367B/Music421).

In the future, depending on the feedback I receive through comments, I am considering writing about the following topics:

  • Sampling
  • Windowing/Spectral leakage
  • Summary of relationships between transforms taught in class

These are natural extensions of the basic materials studied here. Unfortunately, they are frequently poorly understood because it’s too tempting to substitute dogma for a strong foundation in the essentials (Nyquist is a good example).

As with all the modules in this series, the material presented is meant to complement your regular studies, guiding you along the right/smooth path and steering you away from century-old traps/misconceptions. It is never meant to be self-contained.

Let me know in the comments section which ones you’d like to see first!


* The family of basis functions is also called the kernel in integral-transform speak
