Acoustic waves can levitate particles of a wide range of sizes and materials through air (Foresti et al., PNAS 2013), water and biological tissue. To date, the levitated particles had to be surrounded by acoustic elements (Foresti et al., PNAS 2013; Seah et al. 2014, from Subramanian's lab at UOS; Ochi et al., SIGGRAPH 2014), as single-sided approaches only exerted lateral trapping forces or pulling forces (Zhang et al., Nature 2014; Demore et al. 2014). Further, translation and rotation of the trap were limited. The UOS group was the first in the world to demonstrate (Marzo Perez et al., Nature Communications 2015) full acoustic trapping, translation and rotation of levitated particles in real time using a single-sided array. Our approach creates optimal traps at arbitrary positions for any spatial arrangement of transducers and significantly improves on previous manipulators. We also introduce the concept of Holographic Acoustic Elements (HAEs), which interpret the phase modulations of the transducers as a continuous holographic surface that inherently encodes identifiable acoustic elements. Because the transducers in an array need not be considered individually, HAEs allow us to analyse and efficiently generate acoustic traps, and to relate them to optical traps. Both our per-transducer optimisation and the holographic approach reach similar performance; however, it is the HAE interpretation that enables us to leverage high-speed hardware such as Texas Instruments' digital micromirror devices (TI DMD) to implement acoustic phase manipulation at high speed.
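The underlying principle of driving an array so that waves converge at an arbitrary point can be sketched as follows. This is a minimal, illustrative phased-array focusing computation, not the project's trap optimiser or HAE pipeline; all names, the array geometry and the 40 kHz operating frequency are our assumptions for the example.

```python
import numpy as np

# Illustrative sketch (not the project's optimiser): focus an ultrasonic
# phased array at a point by giving each transducer a phase delay that
# cancels its propagation delay, so all waves arrive in phase.

SPEED_OF_SOUND = 343.0      # m/s in air
FREQ = 40_000.0             # 40 kHz, a common frequency for levitation arrays
WAVELENGTH = SPEED_OF_SOUND / FREQ
K = 2 * np.pi / WAVELENGTH  # wavenumber

def focus_phases(transducer_positions, focal_point):
    """Per-transducer phase (radians) that focuses the array at focal_point."""
    distances = np.linalg.norm(transducer_positions - focal_point, axis=1)
    return (-K * distances) % (2 * np.pi)

# Example geometry: an 8x8 grid of transducers, 10 mm pitch, in the z=0 plane.
xs, ys = np.meshgrid(np.arange(8) * 0.01, np.arange(8) * 0.01)
positions = np.column_stack([xs.ravel(), ys.ravel(), np.zeros(64)])

# Focus 5 cm above the centre of the array.
phases = focus_phases(positions, np.array([0.035, 0.035, 0.05]))
```

At the focal point, each transducer's propagation phase K·d plus its assigned phase is a multiple of 2π, so the contributions add coherently; moving the focal point and recomputing the phases steers the focus electronically.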

Parametric sound is created by appropriately pre-distorting and modulating an audio signal onto an ultrasonic carrier (Pompei, JAES, 1999). Non-linearities in air cause a demodulation of the compound signal. The theoretical model of the demodulation (Berktay, Journal of Sound and Vibration, 1965) applies a number of simplifications to the Khokhlov-Zabolotskaya-Kuznetsov (KZK) equation, and these simplifications cause performance limitations in existing parametric loudspeakers, especially with respect to the maximum output volume. We will take the new approach of employing signal-processing models, such as the Hammerstein model, for the demodulating non-linearity of air. The model parameters are identified from measurements and are not restricted by assumptions about operating points. Furthermore, most existing systems use an array of transducers whose radiation properties are static. We intend to electronically steer the ultrasonic beam, and thus the audible signal, so that it either reflects off the levitating objects or reaches the user's ears directly.
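To make the pre-distortion step concrete, here is a minimal sketch of the classic square-root pre-distortion that follows from Berktay's far-field model, where the demodulated audible pressure is proportional to the second time derivative of the squared envelope. The function names, sample rate, carrier frequency and modulation depth are our assumptions for illustration, not parameters of the project's system.

```python
import numpy as np

# Illustrative sketch of Berktay-style square-root pre-distortion
# followed by amplitude modulation onto an ultrasonic carrier.

FS = 192_000           # sample rate, high enough to represent a 40 kHz carrier
CARRIER_HZ = 40_000.0  # ultrasonic carrier frequency
DEPTH = 0.8            # modulation depth m, kept < 1 to avoid overmodulation

def parametric_modulate(audio):
    """Pre-distort `audio` (floats in [-1, 1]) and AM it onto the carrier."""
    envelope = np.sqrt(1.0 + DEPTH * audio)   # square-root pre-distortion
    t = np.arange(audio.size) / FS
    return envelope * np.sin(2 * np.pi * CARRIER_HZ * t)

# Example: modulate a 1 kHz tone, 10 ms long.
t = np.arange(int(0.01 * FS)) / FS
signal = parametric_modulate(np.sin(2 * np.pi * 1000 * t))
```

Because air demodulates something proportional to the envelope squared, the square root cancels that non-linearity to first order; the measurement-identified Hammerstein approach described above aims to replace exactly this fixed, assumption-laden pre-distortion with one fitted to the real behaviour of air.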

One of the longest-standing visions of interaction with computers is Ivan Sutherland's "ultimate display", in which computers can "control the existence of matter". Both Virtual Reality and Augmented Reality were inspired by this vision. More recently, Ishii proposed the vision of Radical Atoms, in which we might directly interact with computer-controlled matter for input and output. These concepts have remained largely conceptual, as no clear path to a technological implementation existed. Only partial steps towards a realisation have been achieved, for example systems that evoke haptic, visual or auditory feedback in isolation. Potential application areas are therefore limited, and large-scale deployment of these technologies has not occurred. In Levitate, we go beyond Sutherland's and Ishii's initial (and previously unachieved) visions by enabling users to interact with computer-controlled levitating particles in a multimodal way, using the full range of human capabilities, with graphics projected onto the surface of the particles.
