BOBBY AND STILLMAN: THE COLLABORATION WITH UCL’s BIG DATA INSTITUTE
In the applied mathematics of fluid dynamics, a ‘Eulerian approach’ describes fluid motion as it passes a fixed point.
The Swiss mathematician Leonhard Euler (fig.1) formulated equations for such motion in 1757.
In the 1770s his successor at the Berlin Academy, the Italian-French mathematician Joseph-Louis Lagrange (fig.2), described phenomena recorded by a device that moves with the water.
These two perspectives, fixed or moving, are called ‘Eulerian’ and ‘Lagrangian’ approaches.
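The difference between the two perspectives can be sketched numerically. Below, a toy one-dimensional velocity field (a hypothetical travelling wave, not ocean data) is sampled both ways: once at a fixed ‘mooring’ position, and once from a ‘drifter’ carried along by the flow.

```python
import math

# Illustrative 1-D velocity field (hypothetical, not ocean data):
# u(x, t) gives the water speed at position x and time t.
def u(x, t):
    return 1.0 + 0.5 * math.sin(x - 0.8 * t)

dt = 0.01
steps = 1000

# Eulerian record: velocity sampled at a fixed mooring position x0.
x0 = 2.0
eulerian = [u(x0, i * dt) for i in range(steps)]

# Lagrangian record: velocity experienced by a drifter carried with the
# flow, advected here with a simple forward-Euler step.
x = x0
lagrangian = []
for i in range(steps):
    lagrangian.append(u(x, i * dt))
    x += u(x, i * dt) * dt  # the drifter moves with the water

# The two records share a starting sample but diverge as the drifter
# moves away from the mooring.
print(eulerian[0] == lagrangian[0])   # True
print(eulerian == lagrangian)         # False
```

The same underlying flow yields two different, complementary time series, which is the tension the installation plays with.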
Amongst many other projects, Swedish mathematician Sofia Olhede (UCL Big Data Institute) works on global oceanography and climate change, using data of both kinds. Moored arrays of apparatus record ‘Eulerian’ data about circulation that is different from, but complementary to, ‘Lagrangian’ recordings from moving devices.
(For example, these devices might track with a current according to temperature or salinity. But their motion can distort the data; Sofia has been developing mathematical techniques to compensate or ‘undo’ this distortion.)
Sofia introduced Susi and Crispin to Eulerian/Lagrangian frames of reference. This clarified tensions in their earlier works between an immersed, active participant (as in the film ‘Stone Hole’) and an objective, separate observer (e.g. the film ‘The Moon & the City’, part of the ‘Thames Tides’ exhibition).
In turn, the very immersive films of ‘Thames Tides’ made Sofia hungry for the original sound and image data.
Together they planned an explicitly Eulerian/Lagrangian set of new recordings. Simultaneous image sequences and sound recordings were made during a tidal cycle in central London, from two sets of apparatus with different perspectives: close together, and observing one another.
‘Bobby’, a buoyant camera+audio recorder assembly, was free to float with the rising and falling water (but limited by its tether). ‘Stillman’, a second assembly, was fixed at a height above Bobby’s starting position, but still below the extent of that day’s high tide.
Different analyses of the resultant data sets are now juxtaposed on screen.
Emerging themes include stereo and binaural hearing; human ear-brain sensitivities compared to ‘objective’ analysis of sound; synchronicity, ‘simultaneity’ and the speed of sound; frequencies, time-scales and intervals; and the whole problem of ‘sampling’.
The artists heard from US mathematician Patrick Wolfe (UCL Big Data Institute) that the Heisenberg-Gabor uncertainty principle is even involved: the more exactly you want to know the frequencies in a signal, the longer the time-window needed to observe it. Sofia explained that our notion of a signal depends on how quickly we see it.
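This trade-off is easy to demonstrate. In the sketch below (illustrative numbers, not the installation's recordings), two tones 5 Hz apart are analysed with a naive DFT: a 0.1-second window has only 10 Hz resolution and shows a single peak, while a 1-second window resolves the pair.

```python
import math

# Two close tones, 440 Hz and 445 Hz, sampled at 8 kHz (toy values).
FS = 8000
def signal(n):
    t = n / FS
    return math.sin(2 * math.pi * 440 * t) + math.sin(2 * math.pi * 445 * t)

def dft_mag(samples, k):
    """Magnitude of DFT bin k (frequency k * FS / len(samples) Hz)."""
    N = len(samples)
    re = sum(s * math.cos(2 * math.pi * k * n / N) for n, s in enumerate(samples))
    im = sum(s * math.sin(2 * math.pi * k * n / N) for n, s in enumerate(samples))
    return math.hypot(re, im)

def count_peaks(window_len, f_lo, f_hi):
    """Count significant local spectral maxima between f_lo and f_hi Hz."""
    samples = [signal(n) for n in range(window_len)]
    df = FS / window_len                      # frequency resolution in Hz
    ks = range(int(f_lo / df), int(f_hi / df) + 1)
    mags = [dft_mag(samples, k) for k in ks]
    peak_floor = 0.1 * max(mags)              # ignore numerical-noise wiggles
    return sum(1 for i in range(1, len(mags) - 1)
               if mags[i] > peak_floor
               and mags[i] > mags[i - 1] and mags[i] > mags[i + 1])

# 0.1 s window: 10 Hz resolution, the 5 Hz gap is invisible.
print(count_peaks(800, 420, 465))    # -> 1
# 1.0 s window: 1 Hz resolution, the two tones separate.
print(count_peaks(8000, 420, 465))   # -> 2
```

Longer observation buys sharper frequency knowledge at the cost of blurring *when* things happened, which is exactly the principle Wolfe described.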
The two artists were encouraged to be more scientific about their work; in turn, the two mathematicians have produced alluring and aesthetically engaging movies.
BOBBY AND STILLMAN: THE INSTALLATION AS PART OF ‘CREATIVE REACTIONS’ LONDON 2017
Timelapse Twinset (x80 normal speed; 6min.)
Simultaneous timelapse recordings of a tidal cycle in central London.
Bobby’s point-of-view (PoV) is on the left, Stillman’s PoV on the right.
They are close together, and observe each other; ‘Bobby’ can move, while ‘Stillman’ is fixed.
Each also recorded stereo sound, synchronized with clapper-boards.
The human ear-brain apparatus can’t make use of sound at x80.
Instead, passages of ‘real-time’ audio relating to the events on-screen were edited together.
Stereo headphones feed Bobby’s audio to the Left ear, Stillman’s to the Right.
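The quoted figures imply some simple back-of-envelope arithmetic: 6 minutes at x80 covers 8 hours of real time, and (assuming the 2-second capture interval quoted for the Sub-Sample Sequences below also applies here, which is an assumption, not a production note) each still is shown for 2/80 s, i.e. a 40 fps playback rate.

```python
# Back-of-envelope figures for the Timelapse Twinset, derived from the
# numbers quoted in the text (the 2 s capture interval is assumed).

SPEEDUP = 80          # playback speed relative to real time
SCREEN_MIN = 6        # on-screen duration, minutes
CAPTURE_INTERVAL = 2  # seconds between time-lapse frames (assumption)

real_minutes = SCREEN_MIN * SPEEDUP
print(real_minutes / 60)          # hours of real time covered -> 8.0

# At x80, a frame captured every 2 s is shown for 2/80 s:
fps = SPEEDUP / CAPTURE_INTERVAL
print(fps)                        # playback frame rate -> 40.0
```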
Sub-Sample Sequences (‘real-time’; 17min.33sec.)
Sub-samples of real-time sound are sequenced into a common soundtrack for all four movies.
Again, Bobby’s perspective is audible in the left earpiece, Stillman’s in the right.
And imagery of Bobby’s PoV is projected onto the left screen, Stillman’s to the right.
Above, moving spectrograms of each sound perspective are visualized, with frequency shown on the Y axis and volume by colour and intensity.
Below these are sequences of photographs taken from the two perspectives at the times of these audio sub-samples. Time-lapse images taken at 2-second intervals are displayed in ‘real-time’, to match the audio.
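A spectrogram of the kind shown on the upper screens can be sketched in a few lines: slice the signal into short frames, take a DFT of each, and keep the magnitudes, with frequency along one axis, time along the other, and magnitude driving the colour. (This is a generic illustration; the installation's actual analysis software is not documented here, and all values are toy numbers.)

```python
import math

FS = 1000        # sample rate, Hz (toy value)
FRAME = 100      # samples per frame -> 10 Hz frequency resolution

def spectrogram(samples):
    """spec[t][k]: magnitude at frame t, frequency k * FS / FRAME Hz."""
    frames = [samples[i:i + FRAME]
              for i in range(0, len(samples) - FRAME + 1, FRAME)]
    spec = []
    for frame in frames:
        mags = []
        for k in range(FRAME // 2):   # bins up to the Nyquist frequency
            re = sum(s * math.cos(2 * math.pi * k * n / FRAME)
                     for n, s in enumerate(frame))
            im = sum(s * math.sin(2 * math.pi * k * n / FRAME)
                     for n, s in enumerate(frame))
            mags.append(math.hypot(re, im))
        spec.append(mags)
    return spec

# A tone that steps from 100 Hz to 200 Hz halfway through:
sig = [math.sin(2 * math.pi * (100 if n < 500 else 200) * n / FS)
       for n in range(1000)]
spec = spectrogram(sig)

# The dominant frequency of the first and last frames:
print(max(range(FRAME // 2), key=lambda k: spec[0][k]) * FS / FRAME)    # -> 100.0
print(max(range(FRAME // 2), key=lambda k: spec[-1][k]) * FS / FRAME)   # -> 200.0
```

The frame length here is the same time-window trade-off discussed earlier: longer frames sharpen the frequency axis but smear the time axis.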