Explorable explanations are interactive web essays that explain challenging technical ideas. This elegant distill.pub article explains matrix convolution and related ideas like receptive field, important notions in CNNs that also have applications in image processing. Educational efforts like these are valuable but labour-intensive, especially when it comes to the kind of interactive graphics we might want to use to show how an algorithm like convolution works.

How could a language like Fluid help? For an interactive explanation of an algorithm, one possibility is to use Fluid's built-in provenance-tracking infrastructure to allow a user to explore the relationships between the stages of the convolution pipeline, using interactions like the ones shown below. This moves a real implementation closer to being a self-explanatory artifact, reducing the need for separate, custom-crafted explanations. Enriched with integrated documentation, “open implementations” like these could form the basis of a kind of literate execution, and a way of authoring explorable explanations with less effort.

An infrastructure for explorable explanations

As a simple illustration, consider the following Fluid implementation of convolution. The program takes an input matrix and transforms it using a small matrix called a filter (or kernel), as might be used in image processing to apply an effect like blurring or embossing. Toggle the data pane on the left to reveal the inputImage and filter; then mouse over the output to see how the inputs are being used. This implementation can automatically reveal how different cells in the output demand different cells in the input matrix and filter.
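To make the algorithm concrete outside the interactive setting, here is a rough sketch in Haskell rather than Fluid (the Fluid sources appear at the end of this section; the names convolveZero, at and embossK are stand-ins of mine, and the actual implementation may differ in detail):

```haskell
type Matrix = [[Double]]

-- Read a cell, treating out-of-range indices as zero (zero padding).
at :: Matrix -> Int -> Int -> Double
at img i j
  | i >= 0 && i < length img && j >= 0 && j < length (head img) = img !! i !! j
  | otherwise = 0

-- Convolve an image with a (2k+1) x (2k+1) kernel under zero padding.
-- (The kernel is flipped, as convolution proper requires; for symmetric
-- kernels this coincides with cross-correlation.)
convolveZero :: Matrix -> Matrix -> Matrix
convolveZero kernel image =
  [ [ sum [ kernel !! (k - di) !! (k - dj) * at image (i + di) (j + dj)
          | di <- [-k .. k], dj <- [-k .. k] ]
    | j <- [0 .. cols - 1] ]
  | i <- [0 .. rows - 1] ]
  where
    k    = length kernel `div` 2
    rows = length image
    cols = length (head image)

-- A common emboss kernel (illustrative; not necessarily the one used
-- in emboss.fld):
embossK :: Matrix
embossK = [ [-2, -1, 0]
          , [-1,  1, 1]
          , [ 0,  1, 2] ]
```

Each output cell is just a weighted sum of the input neighbourhood under the kernel, which is what makes the input/output dependencies so regular and so amenable to automatic highlighting.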

[interactive figure: convolution of inputImage by filter, with linked input/output highlighting]

Notice how only certain parts of the input are relevant to a given output cell. Can some of the irrelevant inputs be attributed to zeros in the filter? Which ones? Also notice how the demand varies as we approach the edge of the output: because this implementation treats the input as though it were padded with zeros at the boundary, parts of the filter are irrelevant to output cells near the edge. Interactions like these would be more useful if we could show the actual computation involved for a given element, rather than just the IO relationships, but even this simple extensional view already reveals interesting things about the implementation, without the need for a bespoke visualisation.

What happens when you mouse over a corner of the output?
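If the interactive figure isn't to hand, a small non-Fluid sketch can answer this by brute force (Haskell again; tapsInside is a hypothetical helper, not part of the Fluid implementation):

```haskell
-- Count how many taps of a 3x3 filter land inside an m x n image when
-- centred on cell (i, j); the remaining taps read padded zeros, so the
-- corresponding filter cells are irrelevant at that position.
tapsInside :: Int -> Int -> (Int, Int) -> Int
tapsInside m n (i, j) =
  length [ () | di <- [-1 .. 1], dj <- [-1 .. 1]
              , let (i', j') = (i + di, j + dj)
              , i' >= 0, i' < m, j' >= 0, j' < n ]

main :: IO ()
main = mapM_ (print . tapsInside 5 5) [(0, 0), (0, 2), (2, 2)]
-- prints 4 (corner), 6 (edge), 9 (interior)
```

At a corner, only four of the nine filter taps fall inside the image, so most of the filter is irrelevant there.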

Relations of cognacy

The idea of related inputs introduced earlier can also be informative. Try interacting with the inputImage instead. The highlighted output now shows the elements that consume the data point under your mouse; the highlighted inputImage region includes all the cognates of that data point: all the inputs that have one of those outputs as an ancestor in the dependence graph. The highlighted region is a kind of “light cone”, picking out a causally closed region of the dependence graph.
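The cognate computation itself is simple to express once the dependence relation is in hand. The following Haskell sketch (again not Fluid, where the relation is derived automatically by the runtime; deps, consumers and cognates are hypothetical names) hard-codes the relation for a 3×3 filter over a 5×5 image and computes the light cone of an input cell:

```haskell
import qualified Data.Set as Set
import           Data.Set (Set)

type Cell = (Int, Int)

-- (input, output) pairs such that the output cell reads the input cell:
-- for a 3x3 filter, outputs read inputs within Chebyshev distance 1,
-- clipped to the 5x5 image.
deps :: Set (Cell, Cell)
deps = Set.fromList
  [ ((i', j'), (i, j))
  | i <- [0 .. 4], j <- [0 .. 4]
  , i' <- [i - 1 .. i + 1], j' <- [j - 1 .. j + 1]
  , i' >= 0, i' < 5, j' >= 0, j' < 5 ]

-- Outputs that consume a given input.
consumers :: Cell -> Set Cell
consumers x = Set.fromList [ o | (x', o) <- Set.toList deps, x' == x ]

-- Cognates of an input: every input feeding any of those outputs.
cognates :: Cell -> Set Cell
cognates x =
  let os = consumers x
  in  Set.fromList [ x' | (x', o) <- Set.toList deps, o `Set.member` os ]

-- cognates (0, 0) is the 3x3 corner block {0..2} x {0..2};
-- cognates (2, 2) is the whole 5x5 image.
```

Under a 3×3 filter, the cognates of an interior input cell form (up to clipping) the 5×5 neighbourhood around it: one filter radius out to the consuming outputs, and one radius back to their inputs.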

The key takeaway here is that the author can simply express convolution as a pure functional algorithm; the Fluid runtime and visualisation front-end take care of providing the interactions. The library function convolve below implements the convolution algorithm, and helper functions zero, wrap and extend implement specific policies (“methods”) for dealing with the boundary.

[code listing: convolution.fld]
[code listing: emboss.fld]
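As a rough, non-authoritative sketch of what such boundary policies amount to (in Haskell rather than Fluid; zeroP, wrapP, extendP and the Policy type are stand-ins for zero, wrap and extend, whose definitions in convolution.fld may differ):

```haskell
-- A boundary policy resolves an index against a dimension of size n;
-- Nothing means "this tap contributes zero".
type Policy = Int -> Int -> Maybe Int

zeroP, wrapP, extendP :: Policy
zeroP   n i = if i >= 0 && i < n then Just i else Nothing  -- pad with zeros
wrapP   n i = Just (i `mod` n)                             -- wrap around (torus)
extendP n i = Just (max 0 (min (n - 1) i))                 -- clamp to nearest edge

-- Reading a cell under a policy; a convolve parameterised on Policy
-- would use this in place of a fixed zero-padding lookup.
atWith :: Policy -> [[Double]] -> Int -> Int -> Double
atWith p img i j =
  case (p (length img) i, p (length (head img)) j) of
    (Just i', Just j') -> img !! i' !! j'
    _                  -> 0
```

A convolve parameterised on a Policy would then recover zero padding, toroidal wrapping and edge extension as three instances of a single algorithm.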
