MATLAB Quirks: structs with no fields are not empty

As far as struct() is concerned, I’m more inclined to use Struct of Arrays (SoA) over Array of Structs (AoS), unless all the use cases scream for AoS. Performance and memory overhead are the obvious reasons, but the true motivation for me to use SoA is that I’m thinking in terms of table-oriented programming (which I’ll discuss in later posts; see table() objects): each field of a struct is a column in a table (a heterogeneous array).

Since a table() is considered empty (by isempty()) if it has 0 rows OR 0 columns/variables (an inclusive or), and the default constructor creates a 0×0 table, I thought struct() would do the same. NOT TRUE!

First of all, the default constructor struct() gives ONE struct with NO fields (so it’s supposed to correspond to a 1×0 table). What’s even harder to remember is that struct2table(struct()) gives a 0×0 table.

The second thing I missed is that a struct() with NO fields is NOT empty. You can have 3 structs with NO fields! So isempty(struct()) is always false!
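A quick sanity check at the command line makes the trap obvious (the comments reflect the behavior described above):

x = struct();                    % ONE struct with NO fields
size(x)                          % 1 1  -- it's a 1x1 struct array
isempty(x)                       % false!
isempty(fieldnames(x))           % true -- no fields, yet not "empty"
size(struct2table(struct()))     % 0 0  -- yet the table version IS 0x0

e = struct([]);                  % the truly empty struct
isempty(e)                       % true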

I usually run into this problem when I seed the execution with an empty struct() and let the loop add fields if the file has contents, then check whether the seeded struct was left untouched to tell whether I got any data out of the file. Next time I will remember to call struct([]) instead of struct(). What a trap!
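For the record, the pattern I mean looks something like this (a minimal sketch; the file name and the line-by-line parsing are hypothetical placeholders):

s = struct([]);               % seed with a TRULY empty struct, not struct()
fid = fopen('data.txt', 'r');
line = fgetl(fid);
while ischar(line)
  if ~isempty(strtrim(line))
    s(end+1).raw = line;      % fields get created as content shows up
  end
  line = fgetl(fid);
end
fclose(fid);
if isempty(s)                 % with struct() as the seed, this is NEVER true
  disp('file had no usable content');
end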

At the end of the day, while struct is powerful, I rarely find AoS necessary for what I want once table() is out. AoS has pretty much the same restriction as table(): you cannot put different types in the same field across the AoS. But table lets you index by variables (struct fields) or by rows (struct array index) without changing the data structure (AoS <-> SoA). So unless it’s a performance-critical piece of code, I’ll stick with table() for most of my struct() needs.
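To illustrate the indexing point, a toy example:

T = table([1;2;3], {'a';'b';'c'}, 'VariableNames', {'x','name'});
T.x       % column access, like a field in SoA
T(2,:)    % row access, like one element of an AoS -- no conversion needed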

 


MATLAB Techniques: Self-identifying (by type) methods

We all know MATLAB by default fills numbers with 0 if we haven’t specified them (such as when expanding a matrix by assigning values beyond the original matrix size). Cell arrays are filled with {[]} by default, even if you meant to have cellstr() {''} across the board. Sometimes that’s not what we want: 0 can be a legitimate value, so we want NaN to denote undefined values. Same with cellstr(): we don’t want to choke the default string-processing programs because one stupid {[]} turns the entire cell array into a non-cellstr array.
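A quick demonstration of both defaults:

x = eye(2);
x(3,4) = 1;    % growing the matrix zero-fills the new entries

c = {'a'};
c{1,3} = 'b';  % growing the cell fills with [] ...
iscellstr(c)   % ... so this is now false, breaking cellstr assumptions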

For compatibility reasons (and because it’s also hard to do), it’s not a good idea to simply modify the factory behavior. I have something similar to table() objects that requires similar expansion on arbitrary data types, but MATLAB’s defaults prove to be clumsy.

Instead of writing switch-case statements or a bunch of if statements that rely on type information like this:

function x = makeUndefined(x)
  switch class(x)
    case {'double', 'single'}
      x = NaN(size(x));
    case 'cell'
      if( iscellstr(x) )
        x = repmat({''}, size(x));
      end
    % ...
  end

I found a slick way to do it so I don’t have to keep track of it again if I need the same defaults in other places: take advantage of the fact that MATLAB will selectively load the right method depending on the first input argument(s)*:

Create a commonly named method (e.g. makeUndefined()) for the PODs and put it under each POD’s @folder (e.g. /@double/makeUndefined.m, /@cell/makeUndefined.m). The functions look something like this:

function y = makeUndefined(x)
% This function must be put under /@double
  y = NaN(size(x));

function x = makeUndefined(x)
% This function must be put under /@cell
  if( iscellstr(x) )
    x = repmat({''}, size(x));
  end
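With those two files on the path, a plain call dispatches by the class of the argument:

makeUndefined(zeros(2,3))    % hits @double/makeUndefined -> 2x3 NaN
makeUndefined({'a','b'})     % hits @cell/makeUndefined   -> {'', ''}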

Similarly, you can make your own isundefined() implementation for @double, @cell, etc., just like the factory-shipped @categorical/isundefined(), matching the same rules you set for makeUndefined().
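For instance, the implementations could look like this (a sketch mirroring the rules above; the @cell rule treats non-char content, like the default-filled [], as undefined):

function tf = isundefined(x)
% This function must be put under /@double
  tf = isnan(x);

function tf = isundefined(x)
% This function must be put under /@cell
  tf = ~cellfun(@ischar, x);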

Actually, the switch-case approach is analogous to the common abuse of RTTI in C++: every time a new type is added, you have to open up all the methods that depend on the type info and update them, instead of having the classes implement those methods themselves (with a proper object hierarchy and overloading).

[Scott Meyers, “Effective C++”] Anytime you find yourself writing code of the form “if the object is of type T1, then do something, but if it’s of type T2, then do something else,” slap yourself.


This technique is especially valuable when you and TMW (or other users) have different ideas of what an English word (e.g. empty, defined, numeric) means. For instance, do you consider boolean (logical) numeric? TMW says no, through isnumeric().

To give you an example, I made a tool that nicely plots arbitrary features in my @table over time (the equivalent of @timetable before TMW introduced it). It only makes sense if the associated dependent variable (y-axis) can be quantified, so what I meant is broader than isnumeric(): it’s isConvertibleToDouble(), since I cast my dependent variables with double() in between.

Boolean (logical) and categorical variables have quantifiable levels, so double() can be applied to them; they should return TRUE for isConvertibleToDouble() even though isnumeric() returns FALSE. For basic types like double(), single(), char(), cellstr(), struct(), etc., the two predicates behave the same.
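One possible layout (a sketch; the trivial bodies differ only by the @folder they live in, and a plain isConvertibleToDouble.m on the path can act as the catch-all default since @folder methods shadow it — or skip the default entirely and let unimplemented types error out, per the no-sweep philosophy below):

function tf = isConvertibleToDouble(~)
% Plain function on the path: the default answer for unlisted types
  tf = false;

function tf = isConvertibleToDouble(~)
% Put under /@logical (and likewise /@double, /@single, /@categorical)
  tf = true;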

In summary,

  1. You say what you really mean (by introducing nomenclature), NOT what the code typically does
    – this is like adding a layer of indirection, e.g. writing half(x) instead of x/2 or x>>1 directly.
    – spend 90% of your time coming up with a very intuitive yet precise name. ‘Misspoke’ == Bug!
  2. The new data types self-manage by implementing the methods used by your code.
    – assume nothing about the input type other than the interfaces it is accessed through
    (the traditional approach knows exactly what inputs it’s going to see)
    – if you did #1 correctly, there’s no reason to foresee/prepare for new input types
    (just implement the methods for the input data types you want it to run on, for now)
    – no sweep (switch-otherwise) case to mishandle** unexpected new input data types
    (the code won’t run on an input data type until all the called methods are implemented)
    – introducing new input data types won’t break the core code for existing types
    (new input data types can only break themselves if they implement the methods wrong)

* This is tricky business. MATLAB doesn’t have function overloading (by signature), but it will look at the type/class of the FIRST dominant input argument and load the appropriate class’s method. Usually it’s the first argument, but for non-POD classes you can specify the dominance relationship (see “Inferior classes” in the documentation). Little has been said about such relationships for PODs in the official documentation, though.

I called support and found that there is no dominance relationship among PODs, so it’s pretty much the first argument. That means this trick does not work if you want to overload bsxfun() for, say, nominal() objects (which don’t have a bsxfun() implementation) while keeping the same input argument order, because the first argument is a function handle for both the factory version and the user method. Bummer!

This is why, for the new ‘*_fun‘ functions I write, I always put the object to operate on as the first argument whenever possible. It gets a little ugly when I want to support multiple inputs like cellfun() does, so I have to weigh whether the overloading capability is worth the confusion.
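For example (a hypothetical mapfun(); the point is only the argument order):

function out = mapfun(x, fcn)
% Put under /@nominal -- the object comes FIRST so MATLAB dispatches here,
% unlike bsxfun's fun-first order
  out = fcn(double(x));   % e.g. operate on the underlying level codes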

** Unless you want to torture yourself by listing all recognized types and making sure the ‘switch-otherwise‘ case fails.
