MATLAB Techniques: Self-identifying (by type) methods

We all know that MATLAB by default fills in numbers with 0 if we haven’t specified them (such as when expanding a matrix by assigning values beyond its original size). Cell arrays are filled with {[]} by default, even if you meant to have cellstr() {''} across the board. Sometimes that’s not what we want: 0 can be a legitimate value, so we want NaN to denote undefined entries. Same with cellstr(): we don’t want to choke the usual string-processing routines because one stupid {[]} turns the entire cell array into a non-cellstr array.

For compatibility reasons (and because it’s hard to do anyway), it’s not a good idea to simply modify the factory behavior. I have something similar to table() objects that requires the same kind of expansion on arbitrary data types, and MATLAB’s defaults prove to be clumsy.

Instead of writing switch-case statements, or a bunch of if statements that rely on type information like this:

function x = makeUndefined(x)
  switch class(x)
    case {'double', 'single'}
      x = NaN(size(x));
    case 'cell'
      if( iscellstr(x) )
        x = repmat({''}, size(x));
      end
    % ...
  end

I found a slick way to do it so I don’t have to keep track of it again whenever I need the same defaults elsewhere: take advantage of the fact that MATLAB will selectively load the right method depending on the class of the first input argument(s)*:

Create a commonly named method (e.g. makeUndefined()) for the PODs and put it under the POD’s @folder (e.g. /@double/makeUndefined.m, /@cell/makeUndefined.m). The functions look something like this:

function y = makeUndefined(x)
% This function must be put under /@double
  y = NaN(size(x));

function x = makeUndefined(x)
% This function must be put under /@cell
  if( iscellstr(x) )
    x = repmat({''}, size(x));
  end
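
With those two files on the path, the same call site handles both types. A minimal sketch (the sample values are mine):

x = makeUndefined([1 2; 3 4])    % dispatches to @double method ==> 2x2 NaN
s = makeUndefined({'a', 'b'})    % dispatches to @cell method   ==> {'', ''}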

Similarly, you can make your own isundefined() implementation for @double, @cell, etc., just like the factory-shipped @categorical/isundefined(), following the same rules you set for makeUndefined().
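
Here is a minimal sketch of what they could look like, assuming the rules are “NaN means undefined” for doubles and “empty means undefined” for cells (the exact rules are up to you):

function tf = isundefined(x)
% This function must be put under /@double
  tf = isnan(x);

function tf = isundefined(x)
% This function must be put under /@cell
  tf = cellfun(@isempty, x);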

Actually, the switch-case approach is analogous to the common abuses of RTTI in C++: every time a new type is added, you have to open up every method that depends on the type info and update it, instead of having the classes implement those methods themselves (with proper object hierarchy and overloading).

[Scott Meyers, “Effective C++”] Anytime you find yourself writing code of the form “if the object is of type T1, then do something, but if it’s of type T2, then do something else,” slap yourself.


This technique is especially valuable when you and TMW (or other users) have different ideas of what an English word (e.g. empty, defined, numeric) means. For example, do you consider boolean (logical) values numeric? TMW says no, through isnumeric().

To give you an example: I made a tool that nicely plots arbitrary features in my @table over time (the equivalent of @timetable before TMW introduced it). It only makes sense if the associated dependent variable (y-axis) can be quantified, so what I mean is broader than isnumeric(): it’s isConvertibleToDouble(), since I cast my dependent variables with double() in between.

Boolean (logical) and categorical variables have quantifiable levels, so double() can be applied to them; they should return true for isConvertibleToDouble() even though isnumeric() returns false. The two functions agree on basic types like double(), single(), char(), cellstr(), struct(), etc.
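
A minimal sketch of the per-class implementations, assuming your environment lets you attach methods to these classes the same way as makeUndefined() above:

function tf = isConvertibleToDouble(x)
% This function must be put under /@logical
% double(x) yields 0/1, so logicals are quantifiable
  tf = true;

function tf = isConvertibleToDouble(x)
% This function must be put under /@categorical
% double(x) yields the level indices
  tf = true;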

In summary,

  1. You say what you really mean (by introducing nomenclature), NOT what the code typically does.
    – this is like adding another layer of indirection: half(x) instead of directly writing x/2 or x>>1.
    – spend 90% of your time coming up with a very intuitive yet precise name. ‘Misspoke’ == Bug!
  2. The new data types manage themselves by implementing the methods your code calls.
    – assume nothing about the input type other than the interfaces it is accessed through
    (the traditional approach knows exactly what inputs it is going to see)
    – if you did #1 correctly, there’s no reason to foresee/prepare for new input types
    (just implement the methods for the input data types you want it to run on for now)
    – no sweep (switch-otherwise) case to mishandle** unexpected new input data types
    (because the code won’t run on an input data type until all the methods it calls are implemented)
    – introducing new input data types won’t break the core code for existing types.
    (new input data types can only break themselves if they implement the methods wrong)

* This is tricky business. MATLAB doesn’t have function overloading (by signature), but it will look at the type/class of the dominant input argument and dispatch to that class’s method. Usually that’s the first argument, but for non-POD classes you can specify the dominance relationship (the InferiorClasses attribute). The official documentation says little about such relationships among PODs.

I called support and found that there’s no dominance relationship among PODs, so it’s pretty much the first argument. That means this trick does not work if you want to overload bsxfun() for, say, nominal() objects (which don’t have a bsxfun() implementation) while keeping the same input argument order, because the first argument is a function handle for both the factory version and the user method. Bummer!

This is why, for the new ‘*_fun‘ functions I write, I always put the object to operate on as the first argument whenever possible. It gets a little ugly when I want to support multiple inputs like cellfun() does, so I have to weigh whether the overloading capability is worth the confusion.
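
For instance, a hypothetical map_fun() (the name and signature are mine) with the operand first:

function y = map_fun(x, fcn)
% Hypothetical sketch: operand first, function handle second, so a user
% class (e.g. nominal) can shadow this with its own @nominal/map_fun.m
% and MATLAB can still dispatch on the first argument.
  y = arrayfun(fcn, x);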

** Unless you want to torture yourself by listing all recognized types and making sure the ‘switch-otherwise‘ case fails.


Quote of the day: You can’t learn too much linear algebra

Benedict Gross, Professor of Mathematics at Harvard, said in his E-222 lecture: ‘You can’t learn too much linear algebra’.

If you read my WordPress pages on Signals and Systems, you’ll see I’m trying a new approach: explaining the material traditionally taught the classical way in a much more compact form, using basic ideas from linear algebra.

Linear algebra is fun and easy to learn (at intro levels), once you get used to the fact that it’s a carefully picked set of restrictions that most physical problems boil down to. In high school algebra, it’s disguised as the ‘method of detached coefficients’ when you solve systems of simultaneous equations.

Once you model your problem as linear equations (the hardest part), you will see the equivalence (or mathematically equivalent analogs) of different problems ranging from economics to circuits and mechanics. The beauty of the ‘detached coefficients’ approach is that it separates the context of the problem from its mathematical structure, since the application-specific variables are often grouped into a vector (or matrix) that you deal with as a unit. In fact, most problems boil down to this:

    \[ \mathbf{Ax} = \mathbf{b} \]

It’s your job to construct \( \mathbf{A} \), \( \mathbf{x} \), and \( \mathbf{b} \) and tell people what they mean.
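
A tiny MATLAB illustration (the numbers are made up): once A and b are built, the solve itself is one line:

A = [2 -1; 1 3];   % coefficients constructed from your model
b = [0; 7];        % right-hand side
x = A \ b          % solves Ax = b; here x = [1; 2]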

I agree with Gilbert Strang, Professor of Mathematics at MIT, that it’s time to make linear algebra the standard curriculum ahead of calculus. Digital control used to be a very advanced graduate class, but after a few decades it’s undergraduate material. Linear algebra has very few knowledge dependencies (hard prerequisites), so it’s great material to teach at any level.
