Matlab has had support for full object oriented programming (OOP) since R2008a. Those with an OOP background (say, in Java, C# or Python) will find Matlab’s OOP features to be fairly complete. These include:
- Pass-by-reference semantics via subclassing the handle class
- Instance methods and static methods
- A full selection of access control attributes for methods and properties (public, protected, private, constant, sealed)
- Getters and setters for properties (unfortunately these cannot be subclassed, so they are fairly limited)
- Interfaces via abstract methods
- Exceptions and try…catch
- Dynamic properties
- Dynamic methods via subsref
- Operator overloading (!)
- Packages (but unfortunately no global import statement)
- Multiple inheritance and mixins (!)
- Destructors and a wake-up-from-load mechanism (delete and loadobj)
- Reflection via the ? operator
- A unique mechanism for arrays of objects (haven’t tried this yet)
- A mechanism for automatic documentation generation (similar to docstrings and Javadoc)
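To make a few of these features concrete, here is a minimal sketch of a hypothetical Counter class (the class itself is my own example, not a Matlab built-in) showing handle semantics, property access control and a static method:

```matlab
% Counter.m -- each classdef goes in its own file named after the class
classdef Counter < handle
    properties (Access = private)
        count = 0   % hidden internal state; not visible to callers
    end
    methods
        function increment(obj)
            % Because Counter subclasses handle, this mutates the
            % caller's object in place (pass-by-reference semantics).
            obj.count = obj.count + 1;
        end
        function n = value(obj)
            n = obj.count;
        end
    end
    methods (Static)
        function c = fresh()
            c = Counter();  % a static factory method
        end
    end
end
```

Without the handle superclass, Counter would be a value class and increment would have to return the modified object.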
These features are sufficiently complete to build fairly complex systems in Matlab. Unfortunately they’re little known or appreciated, in large part because the core functionality in Matlab is, with few exceptions, implemented in a procedural (function-call) style.
So what is Matlab OOP good for? Let me give you a few examples highlighting some features:
- You have a function with a changeable component that is expected to behave in a certain way. For example, optimization routines that work with matrices, such as lsqr, alternatively accept a function handle as the first argument; that function is expected to compute X*y or X'*y depending on the value of a flag argument. An alternative way of getting this behaviour is to define a parent PseudoMatrix class with two abstract methods: matrixVectorProduct and matrixTransposeVectorProduct. The function can then require its first argument to be a subclass of PseudoMatrix and call those methods as needed. Using an interface (abstract methods) enforces that the class respects the expected contract, avoids anonymous functions as a way of smuggling in extra data (since class instances carry their own data), and still allows the system to grow.
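One possible sketch of the PseudoMatrix idea (the class and method names are from the example above, but the concrete DenseMatrix subclass is illustrative; each classdef lives in its own file):

```matlab
% PseudoMatrix.m -- the interface: two abstract methods, no data
classdef (Abstract) PseudoMatrix
    methods (Abstract)
        z = matrixVectorProduct(obj, y)           % computes X*y
        z = matrixTransposeVectorProduct(obj, y)  % computes X'*y
    end
end

% DenseMatrix.m -- a trivial concrete subclass wrapping an explicit matrix
classdef DenseMatrix < PseudoMatrix
    properties
        A   % the underlying matrix
    end
    methods
        function obj = DenseMatrix(A)
            obj.A = A;
        end
        function z = matrixVectorProduct(obj, y)
            z = obj.A * y;
        end
        function z = matrixTransposeVectorProduct(obj, y)
            z = obj.A' * y;
        end
    end
end
```

A solver could then validate its input with isa(X, 'PseudoMatrix') and call the two methods without caring how the products are actually computed.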
- You have a complex data structure that you want to be able to manipulate naturally. For example, you might have a fitModel function which returns the best parameters for the model, the quality of the fit as a function of the regularization parameters, information about the structure of the model, and so on. In that case you might have a FittedModel class. Model parameters would be stored as instance properties. You could implement methods such as evaluate and plot which require use of the model weights and knowledge about the structure of the model. Furthermore, you could have several different types of models which all implement the same interface, and thus manipulate an SVM, ANN or GLM in the same basic way, while allowing specialization (for example, implementing a computeMarginalLikelihood method for the GLM).
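A hedged sketch of what that interface might look like; the names follow the example above but are otherwise hypothetical:

```matlab
% FittedModel.m -- common interface for all fitted model types
classdef (Abstract) FittedModel
    properties
        weights   % fitted parameters, set by the fitting routine
    end
    methods (Abstract)
        y = evaluate(obj, X)   % predict responses for new data X
        plot(obj)              % visualize the fit
    end
end
```

A GLM subclass would implement evaluate and plot and could additionally define computeMarginalLikelihood, while calling code that only needs predictions can treat every FittedModel uniformly.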
- You’re working with a funky type of matrix that isn’t sparse, and is too large to fit into memory as a full matrix. Yet you would still like the convenience of using standard operators like + and * rather than function calls. For example, you might want to work with circulant matrices, which have the property that the product of a circulant matrix on the left-hand side with a vector on the right-hand side is equivalent to a convolution with circular edge conditions. In that case, you could implement a CirculantMatrix class, which could overload * (mtimes, matrix multiplication), + (plus, matrix addition) and ' (ctranspose, conjugate transposition). In the matrix multiplication case, you could implement the function such that it understands that the product of two circulant matrices is another circulant matrix, while the product of a circulant matrix and a vector can be efficiently computed in the Fourier domain. You could then keep adding methods as your needs grow, for example, eig, or plot, or subsref.
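An illustrative CirculantMatrix sketch along those lines; the class and property names are my own. It relies on the fact that a circulant matrix is fully determined by its first column c, and that C*x equals the circular convolution ifft(fft(c) .* fft(x)):

```matlab
% CirculantMatrix.m -- stores only the first column, never the full matrix
classdef CirculantMatrix
    properties
        c   % first column; defines the entire matrix
    end
    methods
        function obj = CirculantMatrix(c)
            obj.c = c(:);
        end
        function r = mtimes(a, b)
            if isa(b, 'CirculantMatrix')
                % circulant * circulant is circulant: circularly
                % convolve the defining columns in the Fourier domain
                r = CirculantMatrix(ifft(fft(a.c) .* fft(b.c)));
            else
                % circulant * vector: O(n log n) via the FFT instead
                % of O(n^2) for an explicit matrix-vector product
                r = ifft(fft(a.c) .* fft(b(:)));
            end
        end
        function r = plus(a, b)
            % the sum of two circulant matrices is circulant
            r = CirculantMatrix(a.c + b.c);
        end
        function r = ctranspose(a)
            % C' is circulant with first column conj([c(1); flipud(c(2:end))])
            r = CirculantMatrix(conj([a.c(1); flipud(a.c(2:end))]));
        end
    end
end
```

With these overloads in place, expressions like (C + D)' * x work on CirculantMatrix instances exactly as they would on full matrices, while never materializing an n-by-n array.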