Touch: See: Hear

The current LEVEL Research & Development Project: Dr Lewis Sykes (Manchester Metropolitan University).

Supported by an AHRC Cultural Engagement Fellowship, Dr Lewis Sykes aims to prototype a bespoke, interactive environment for users of the LEVEL Centre. More than just an immersive multi-sensory experience, this is a distinct artwork with a purpose – a tool for individual and group audiovisual composition.

While LEVEL has a wealth of experience and expertise in ‘guided’ creative activities, they also have a remit to develop ‘autonomous’ multimedia and creative technologies. Accordingly, they’ve recently initiated the ‘Inter-ACT + Re-ACT’ programme – simple, interactive installations (a current example is a digital ‘hall of mirrors’) that engage people through their journey around the building. No instructions are given and staff just observe the level of reaction and engagement to try and work out who does what and why. Touch: See: Hear responds to key questions raised through this emerging programme – “How might learning disabled adults engage with playful multi-media and multi-sensory environments embedded into the fabric of the building?” and “What unique benefits might this type of activity realise?”

Lewis is running a blog site, highlighting: research and theory that informed its conception; issues of interaction and usability; its emerging aesthetics; and key stages within an iterative design process that applies a User Centred Design approach (including significant user testing and analysis) to the development of the installation – all informed through LEVEL’s unique appreciation of the nature of learning disabled adults’ ‘usual’ engagement with interactive multimedia and creative technologies.


Touch: See: Hear – A Multi-Sensory Instrument for Learning Disabled Adults – Project Concepts

Effective playful, interactive digital installations are frequently ‘toylike’ – they’re designed to be immediate, easy to understand and quick to learn. As a result they’re often ‘shallow’ – they do one thing well but once their process is understood they offer limited scope for ongoing engagement (although particularly effective examples allow users to ‘usurp’ the interaction process and use it in ways that were not intended). Instruments differ in that while they’re also designed to be (relatively) immediate and easy to understand – a child can play a piano and enjoy the experience – they’re not necessarily quick to learn – an accomplished pianist will have taken years to develop their musicianship. Accordingly, instruments have ‘depth’ – the more they’re played with and practised the more adept the user becomes at creating increasingly sophisticated outputs. This inherent nature of the instrument provides a useful mechanism for responding to the question: “How do people start playing with things, interpret them on their own and learn how to use them?”

A key axiom that draws on an appreciation of how adults with learning disabilities respond best to these types of interactive environments is ‘agency’ – that a particular input should have a direct and unambiguous output that is repeatable and consistent. This is essential to discovering how something works and to learning how to use it through play and experimentation – and is the basis of many musical instruments. Yet while we’ll certainly draw upon established HCI principles to develop the instrument’s interface, more important here will be following a User Centred Design approach – evolving the way the instrument behaves through user testing and an iterative design process that responds to the needs and requirements of its end users.
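The ‘agency’ axiom above – a direct, unambiguous, repeatable output for a given input – can be sketched as a pure, stateless mapping from gesture to sound. This is an illustrative sketch only; the function name, touch parameters and pitch range are assumptions, not part of the project:

```python
# A minimal sketch of the 'agency' principle: the same input always
# produces the same output, with no hidden state or randomness.
# The parameters and ranges here are illustrative assumptions.

def touch_to_tone(x: float, pressure: float) -> tuple[float, float]:
    """Map a touch position (0.0-1.0) and pressure (0.0-1.0)
    to a pitch in Hz and an amplitude (0.0-1.0).

    The mapping is pure and deterministic: repeating a gesture
    repeats the sound exactly, which is what lets a player learn
    the instrument through play and experimentation.
    """
    pitch_hz = 110.0 + x * (880.0 - 110.0)  # low A to high A
    amplitude = pressure                     # direct, unambiguous
    return pitch_hz, amplitude
```

Because the function holds no state, a user repeating the same gesture always hears the same result – the consistency the paragraph above identifies as essential.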

The concept of developing a new musical interface – a minimal glowing sphere – in part responds to an understanding of how the learning disabled react to conventional musical interfaces like piano keyboards. A frequent behaviour is to run a finger from top to bottom and so play a crescendo of notes from high to low. Since their behaviour is very pattern based – they tend to repetition – it’s then difficult for them not to do this every time. The LEVEL Centre has implemented strategies that use the senses to try and change this sequential behaviour, by adjusting the way their keyboards behave – for example playing a sound of the same pitch but from quiet to loud. This makes designing a custom controller which is responsive to their touch and movement, but unlike any conventional musical interface they may have encountered before, a significant element within the project.
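LEVEL’s keyboard-adjustment strategy – every key plays the same pitch, but at rising loudness – might be sketched like this (the key count and fixed pitch are illustrative assumptions, not the Centre’s actual settings):

```python
# A sketch of remapped keyboard behaviour: instead of a top-to-bottom
# finger-run producing a high-to-low scale, every key plays the same
# pitch at increasing loudness, so the habitual run produces a
# crescendo rather than a descending sweep.
# NUM_KEYS and FIXED_PITCH_HZ are illustrative assumptions.

NUM_KEYS = 24
FIXED_PITCH_HZ = 261.63  # middle C, chosen arbitrarily here

def key_to_note(key_index: int) -> tuple[float, float]:
    """Map key 0 (top) .. NUM_KEYS-1 (bottom) to (pitch_hz, amplitude)."""
    amplitude = (key_index + 1) / NUM_KEYS  # quiet at the top, loud at the bottom
    return FIXED_PITCH_HZ, amplitude
```

The sequential gesture is preserved – the player still runs a finger down the keys – but the sonic consequence is changed, which is the point of the strategy described above.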

While we don’t want to preempt the emerging aesthetic of the sounds generated we know that adults with learning disabilities generally respond to rhythm and the quality of the sound – they engage with ‘pure’ sound very well. This may well result in the instrument’s sonic output being less about conventional notions of pitch, timbre, melody and harmony and more about creating abstract sonic ‘objects’ – sounds, drones and pulses that move around the space and can be combined and layered to create complex aural textures and rhythms.

In developing the visuals we plan to draw on an understanding of how the learning disabled react to visual based installations – particularly those that use cameras to capture their image and allow them to see and hear themselves back. While they undoubtedly enjoy the experience it’s often difficult for them to get over the ‘that’s me’ novelty and move beyond it. So we’re unlikely to feature them within the projections as more than a silhouette – although we do intend to integrate image and motion capture devices such as the Xbox 360 Kinect for supplemental control and for documenting their engagement.

Our emphasis here will be on trying to create strong perceptual connections between what’s heard and what’s seen – using abstract lines, curves, shapes and textures that look, behave and move like the sound sounds. For example, lower frequencies may move slowly and wobble while higher frequencies may move faster and be more sharply defined. Moreover, by using positional audio and linking the four projectors together we’ll be able to move sound and image around the space in 3D and in unison, further emphasising their interconnectedness.
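The frequency-to-motion mapping described above – low sounds moving slowly and wobbling, high sounds moving faster with sharper edges – can be sketched as a simple parameter mapping. The ranges and formulae here are assumptions for illustration, not the project’s actual mapping:

```python
# Map a sound's frequency to visual motion parameters: lower
# frequencies move slowly and 'wobble'; higher frequencies move
# faster and are more sharply defined.
# The frequency range and scaling factors are illustrative assumptions.

import math

LOW_HZ, HIGH_HZ = 55.0, 3520.0  # assumed working range

def shape_params(freq_hz: float) -> dict:
    # Normalise frequency to 0.0 (low) .. 1.0 (high) on a log scale,
    # since pitch perception is roughly logarithmic.
    t = (math.log2(freq_hz) - math.log2(LOW_HZ)) / (math.log2(HIGH_HZ) - math.log2(LOW_HZ))
    t = min(max(t, 0.0), 1.0)
    return {
        "speed": 0.2 + 1.8 * t,   # slow for low sounds, fast for high
        "wobble": 1.0 - t,        # strong wobble at low frequencies
        "edge_sharpness": t,      # blurry at low, crisp at high
    }
```

Driving every visual parameter from the same audio feature is one way of building the ‘look, behave and move like the sound sounds’ connection the paragraph describes.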

Despite seeming a little too literal, this intuitive practical approach is actually grounded in an understanding of current theories around visual perception – that in its early stages we see an image composed of forms, lines, colours, motions, etc. but lacking specific meaning and that in order to create a more instantaneous visual perception of the world around us, we all have, built within us, a set of archetypal shapes that we constantly reference. Additionally, our colour palettes will most likely reflect current cognitive neuroscience research that evidences perceptual connections between pitch and colour – the brighter a colour the higher the pitch we associate with it.
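The pitch-brightness correspondence mentioned above – the brighter the colour, the higher the pitch we associate with it – could be sketched as a palette function that holds hue constant and scales brightness with pitch. The HSV-style mapping, hue and ranges are illustrative assumptions:

```python
# Map pitch to colour brightness: keep hue and saturation fixed and
# scale the HSV 'value' (brightness) with the pitch, so higher notes
# appear brighter. All constants are illustrative assumptions.

import colorsys
import math

LOW_HZ, HIGH_HZ = 55.0, 3520.0  # assumed working range

def pitch_to_rgb(freq_hz: float, hue: float = 0.6) -> tuple[int, int, int]:
    """Return an (R, G, B) colour, 0-255 per channel, for a pitch in Hz."""
    t = (math.log2(freq_hz) - math.log2(LOW_HZ)) / (math.log2(HIGH_HZ) - math.log2(LOW_HZ))
    value = min(max(t, 0.0), 1.0)            # brightness tracks pitch
    r, g, b = colorsys.hsv_to_rgb(hue, 0.8, value)
    return int(r * 255), int(g * 255), int(b * 255)
```

Varying only brightness keeps the mapping unambiguous – a player hearing a rising pitch sees the same shape simply getting brighter, consistent with the ‘agency’ principle described earlier.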

While this approach certainly has to be carefully tailored for a group of people who don’t sequence in the same way as able-bodied people and have differing abilities in tracking multiple objects with varying movement and speed, we’re confident it has the potential to engage their multiple senses in ways that are far more meaningful than a simple synchronisation of sound and image. This position is further informed by contemporary audiovisual theory and recent cognitive neuroscience research that argues and provides evidence for a unique quality to combined audiovisual perception – the technique of merging sounds and images in order to generate a third audiovisual form, a type of experience which is distinct from the experience of images or the experience of sounds in isolation from one another.



Lewis Sykes is a visual musician, creative technologist, researcher/educator and digital media producer/curator based in Manchester, UK.

A veteran bass player of the underground dub-dance scene of the 90s he performed and recorded with Emperor Sly and Radical Dance Faction and was a partner in Zip Dog Records.

Co-ordinator of the ‘digital futures think tank’ Cybersalon (2002-2007) – founding Artists-in-Residence at the Science Museum’s Dana Centre – he was also Director of Cybersonica, an annual celebration of music, sound art and technology launched at the Institute of Contemporary Arts (ICA), London, UK (2002-11).

Honing an interest in mixed media through an MA in Hypermedia Studies at the University of Westminster (2000) he continued to fuse music, visuals and technology through creative collaborations – most notably as musician with the progressive audiovisual collective The Sancho Plan (2005-2008) – performing and exhibiting interactive audiovisual sets and sonic installations at numerous UK and European festivals. Currently, as a member of Monomatic, he explores sound and interaction and the interplay between music and image through physical works, creative software and audiovisual performances.

A doctoral graduate from MIRIAD, Manchester Metropolitan University since February 2015, his Practice as Research Ph.D. – The Augmented Tonoscope – explored the aesthetics of sound and vibration. His research interests in multi-sensory perception and audiovisual composition techniques which attempt to engage our senses in a way which is not discretely seen and heard – but is instead ‘co-sensed’ or ‘seenheard’ – have led to a current focus on ‘designing for difference’ – the development of interactive environments and assistive technologies for people with disabilities.