Vision-independent technologies and interfaces are an important topic in interaction design. Such technologies are of interest to visually impaired users and to elderly people who have difficulty reading. Furthermore, they offer new pathways for communication in cases where visual overload is a problem, where the user needs vision for a more critical task, or where vision-free interaction is simply more convenient for the user. Commonly used alternatives to vision are audio and tactile communication. A more direct means of communication is offered by brain computer technologies, which use the user’s brain activity to control devices. Interaction designers play a critical role in the realization of products, devices, and systems that involve the vision-independent technologies reviewed in this paper. Their efforts will include the seamless adaptation of these technologies into easy-to-use products, as well as the exploration and integration of the sociocultural context into such innovations.
1. Breaking loose from vision-dependent interfaces
Most consumer electronic products on the market involve touch screen or flat panel interfaces that require some kind of visual interaction. For example, while using a smart phone, the user needs to read a text message, choose an item from a menu, press an icon, make a specific finger gesture to open a different screen, or interpret a graphic display such as a map. Such products are almost impossible for visually impaired people (VIP) to use without modification, additional technology, or help from a person with adequate vision. Why? Because a flat screen has no topography that can be discovered by touch, unlike a pushbutton on a conventional phone: there is no tactile difference between two points on the screen. Thanks to researchers, designers, and engineers, various solutions are already available to overcome this problem (Bengisu, 2010; Brewster et al., 2003; Jayant et al., 2010; Qian et al., 2011). The general strategy is to make use of sensorial feedback other than vision, such as sound. For example, screen readers are an established technology that aids VIP by reading aloud the information displayed on the device screen. Vision-independent technologies and products are not developed solely for VIP. They also offer the following benefits:
• increased accessibility for people with motor disabilities,
• increased accessibility for elderly users,
• improved security (by not displaying confidential information on the screen),
• easier and more accurate data input since the user is not limited by a small screen size.
Here, motor disabilities refer to various disabilities that may prevent or impair access to the interface of a device, such as the keyboard or smart screen. The development of voice recognition and voice command has already become a useful alternative for people with motor disabilities. For example, distal muscular dystrophy may affect the muscles of the hands or arms, limiting the ability to use the fingers for data input into a device. However, most patients with such impairment can talk (Emery, 2008) and can thus use a voice-based interface. In many adults, aging causes at least minor impairment in vision, hearing, or dexterity. Arthritis or tremor can make fine motor movements difficult to control (Fisk et al., 2012). Inclusive design or universal design philosophies aim to take into account the needs of the largest possible number of users in product, service, and system design. Thus, problems of vision loss or impaired dexterity need to be addressed by designers of products such as tablet computers or, say, digital blood pressure monitors that will be used by senior adults in addition to younger users. New ways of communicating with such products give designers fresh opportunities to consider during the design phase. Furthermore, an increased number of data input modes will make many products more accessible, increase the accuracy of data input and output, and provide the user with different alternatives, thereby increasing convenience and efficiency. Take, for example, data input for computers. Today, there are various alternatives for entering text, numbers, or other types of data into a computer, including keyboards, mice, voice input, touchscreens, light pens, trackballs, and joysticks (Wickens et al., 2004). Each of these technologies imposes a different degree of cognitive, perceptual, and motor load as well as fatigue.
A user who wants to avoid strain or injury due to repetitive movements or prolonged static postures may benefit from using two or three alternative interfaces at different times. For instance, instead of using the mouse continuously, it would be healthier to alternate between the touch pad and the mouse. Therefore, it is conceivable that the inclusion of new ways of communicating with electronic devices will benefit all types of customers, regardless of age, sex, nationality, or the presence of disability. “Interaction design is making technology fit people” according to David Kelley, one of the pioneers of this field. In this simple definition, technology represents software, hardware, screen graphics, displays, and input devices. Kelley believes that interaction design should make technology useful for people, delight them, and excite them (Moggridge, 2007). Following this definition, one could say that interaction design is at the core of vision-independent devices and applications. The role of interaction designers would be to adapt vision-independent technologies, develop easy-to-use devices, and create pleasurable experiences. Examples of vision-independent devices are voice-activated mobile phones and tactile computer screens. Designers and design researchers could contribute to this emerging field through the exploration of the social and cultural environment of potential users of vision-free devices. A simple example helps to clarify this point. Söderström and Ytterhus’s research (2010) on the use of assistive technologies by young people demonstrated that many disabled teenagers do not want to be different from their friends and do not want to be seen as dependent on assistive technologies. Therefore, some of them prefer not to use an assistive device if it is obvious, restrictive, or slow, or if it overtly exposes their impairment.
In other words, they prefer not to use an assistive device if it makes them seem less ordinary than their peers. Such information would obviously be very useful for interaction designers during the development of a device for young disabled users.
2. Vision-independent technologies
Various technologies are under development that aim to offer reliable alternatives to vision-guided manual input and vision-based data output. Currently, voice-based technologies are the most developed of these, already offered as alternatives in many devices, although their owners may be unaware of it. Speech recognition became available to consumers in the 1990s. One of the first commercial products was Dragon’s speech recognition software, launched in 1990. An improved version was introduced in 1997 at a much lower price ($695 instead of the initial $9000), recognizing about 100 words per minute (Pinola, 2011). Windows Speech Recognition was released with the new Windows Vista in 2006. Unfortunately, the software failed to function correctly during a demonstration at a Microsoft financial analyst meeting, causing much embarrassment and a loss of public trust in this technology (Wikipedia, 2013). Today, speech recognition and voice command are standard in certain Windows and Mac systems. Furthermore, Google introduced a free application for the iPhone in 2008 that uses voice recognition for web searches. A similar application is now available for other smart phones. Although speech recognition technology is rapidly diffusing across various platforms, it is not necessarily a vision-independent technology, since the user still needs to see the screen and read the results of a web search or speech-to-text transformation. An additional program is needed to completely bypass vision: text-to-speech software that synthesizes speech and reads the text aloud. Such programs are known as screen readers. Common commercial screen readers for mobile phones include TALKS for the Symbian 3 operating system (OS), Mobile Speak for Symbian and Windows Mobile OS, and TalkBack and Spiel for Android OS (Bengisu, 2010).
3. Vision-independent brain computer interfaces
Brain computer interfaces (BCIs) are devices that translate brain signals into commands used to control equipment or computer devices (van Erp & Brouwer, 2014; Kim et al., 2011). In principle, any brain mapping technique, such as functional magnetic resonance imaging, near infrared spectroscopy, or electroencephalography (EEG), can be used for this purpose. EEG, which reflects the electrical activity of neuron ensembles, is the most popular mode because it is noninvasive, affordable, harmless, and well developed for practical applications (Kim et al., 2011). Three types of BCI approaches exist, namely active, reactive, and passive. Active BCIs rely on the active generation of brain patterns. For example, the user imagines a left hand movement to move a cursor to the left on a screen, or a right hand movement to move it to the right. In the reactive mode, the brain reacts to specific stimuli. For example, different environmental sounds are created and the user is required to concentrate on a certain one and ignore the others. Passive BCIs detect cognitive or emotional states such as workload, frustration, attention, and drowsiness. This information can be used to improve human-machine interaction, but the user has no direct control over the output in this case (van Erp & Brouwer, 2014). Until now, most BCI research and product development has focused on vision-based active or reactive brain signals. This approach uses visual stimuli or visual feedback. For example, the visual P300 speller uses a 6 x 6 symbol matrix that contains letters and symbols within rows and columns. The user is asked to focus on the desired symbol and mentally count the number of times it flashes. The computer identifies the symbol attended by the user as the intersection of a row and a column (Riccio et al., 2012).
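The selection logic of the P300 speller can be sketched in a few lines. In the sketch below, the matrix layout and the classifier scores are hypothetical placeholders standing in for real EEG classifier outputs; the point is only to illustrate how the attended symbol is recovered as a row-column intersection.

```python
# Sketch of the visual P300 speller's selection step: the 6 x 6 matrix
# flashes one row or column at a time, and the attended symbol sits at the
# intersection of the row and column that evoked the strongest P300-like
# response. The scores below are illustrative, not recorded data.

MATRIX = [
    "ABCDEF",
    "GHIJKL",
    "MNOPQR",
    "STUVWX",
    "YZ1234",
    "56789_",
]

def spell(row_scores, col_scores):
    """Return the symbol at the intersection of the best row and column."""
    row = max(range(6), key=lambda i: row_scores[i])
    col = max(range(6), key=lambda j: col_scores[j])
    return MATRIX[row][col]

# The classifier responds most strongly to row 1 and column 2,
# which intersect at the letter "I".
print(spell([0.1, 0.9, 0.2, 0.1, 0.3, 0.2],
            [0.2, 0.1, 0.8, 0.3, 0.1, 0.2]))  # → I
```

In a real system, each row and column score would be an average over many flashes, since a single-trial P300 is buried in noise.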
Vision-free BCI has recently become a subject of research in order to explore alternative modes of communication for disabled users who have difficulty fixing their gaze on specific visual stimuli, or for patients with vision impairment. BCI technology is also interesting for non-medical applications such as the game industry. There are already commercially available games, such as Uncle Milton’s Force Trainer and Mattel’s Mindflex, that use EEG caps (Fig. 1) as the sole interface to control the game (van Erp & Brouwer, 2014). Such products rely on visual feedback, but vision-free BCI could be used as an additional means of control in such applications. Two vision-free BCI technologies being studied are auditory and tactile BCIs. Studies on auditory BCIs mostly use event-related potentials (ERPs). Participants are subjected to acoustic signals that are termed events. Some of these events are target events and the rest are non-target events; both can be tones, words, or environmental sounds. Participants are asked to focus on the target events. Auditory streams are presented with changes in the position, frequency, sequence, pitch, or loudness of tones. The target event is rare, while non-target events are frequently encountered. This approach in ERP-based BCI is called an oddball paradigm, and the target stimuli are called oddball sequences (Riccio et al., 2012). Based on such methodologies, auditory spellers have been developed. These interfaces are used for basic communication with the user: simple words or sentences are communicated just by concentrating on oddball events. For example, participants are asked to select a letter or a whole word, such as yes, no, stop, or pass, corresponding to the location of one of six loudspeakers surrounding them. The average accuracy with healthy participants was in the range of 65-75% across various trials (Riccio et al., 2012). Tactile BCIs employ the sensation of touch for communication.
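The six-loudspeaker selection scheme can be illustrated with a minimal sketch. The word list follows the examples in the text (yes, no, stop, pass, plus two hypothetical additions to fill six positions), and the per-oddball response values are made-up stand-ins for the ERP classifier scores a real system would produce.

```python
# Sketch of an auditory oddball speller: each of six loudspeaker positions
# is bound to a command word. Oddball (target) stimuli are presented from
# each position, and the position whose oddballs evoke the largest average
# ERP-like response is taken as the user's choice. Scores are illustrative.

SPEAKER_WORDS = ["yes", "no", "stop", "pass", "back", "next"]  # last two hypothetical

def select_word(oddball_responses):
    """oddball_responses: six lists of per-oddball classifier scores."""
    means = [sum(r) / len(r) for r in oddball_responses]
    best = max(range(len(means)), key=lambda i: means[i])
    return SPEAKER_WORDS[best]

responses = [
    [0.2, 0.3], [0.1, 0.2], [0.9, 0.8],  # speaker 2 ("stop") is attended
    [0.3, 0.2], [0.2, 0.1], [0.3, 0.3],
]
print(select_word(responses))  # → stop
```

The 65-75% accuracy reported above implies that such a system would misclassify roughly one selection in three, which is why error-correction and repetition strategies matter in practical spellers.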
At the moment, BCI research is focused on tactile interfaces as a route for stimulation, but studies in other fields (not related to BCI research) also use touch for feedback. Tactile stimulation can include Braille letters, vibration delivered directly to the fingertips, vibration delivered to other parts of the body through vibrating elements (tactors), and vibration delivered to the fingers through a flat screen. In one experiment, tactors were placed on a vest in order to deliver vibration through the torso (van Erp & Brouwer, 2014). The user was required to focus on one of these tactors while ignoring the others. This system is an interesting alternative for VIP or for situations where visual overload creates a risk.
4. Vision-independent interfaces for mobile devices
Another frontier of research is the field of mobile assistive technologies for VIP. Portable devices such as mobile phones, digital music players, organizers, and handheld computers represent an important market both for people with good vision and for VIP. There are many product development opportunities related to vision-independent technologies that may interest designers, engineers, and R&D managers. As mentioned before, any research aiming to solve problems faced by VIP could actually benefit the whole sector. A recent review describes current research in the field of mobile assistive technologies for VIP (Hakobyan et al., 2013). Here, only some of the more interesting and relevant studies are highlighted. As is the case for BCI technologies, two alternative sensorial paths are considered in mobile devices as substitutes for vision. These are the auditory and tactile paths, which are sometimes used together as well. One example of an auditory solution is the gesture-driven 3D audio wearable computer developed by Brewster et al. (2003). The aim of this research team was to create interfaces that use as little of users’ visual attention as possible and that are independent of the limited screen space causing input/output problems. Such an interface could be useful, for instance, when someone is trying to take an important note on an electronic organizer or handheld computer while walking, or switching from one song to another in the menu of a digital music player while jogging. Brewster et al.’s wearable device uses a spatial audio interface for data input. The system interprets head gestures to choose items from menus. The user nods in the direction of a sound or speech source, arranged around the head like slices of a pie, in order to choose the desired item. An evaluation of this device with and without audio feedback indicated that dynamic guidance by audio feedback results in more accurate interaction.
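The pie-slice selection idea can be sketched as a simple mapping from head heading to menu item. The item names and slice geometry below are illustrative assumptions, not details taken from Brewster et al.’s device, which used real spatial audio and gesture sensing.

```python
# Sketch of head-gesture menu selection in a spatial audio "pie" layout:
# menu items are placed around the user's head, each occupying one slice,
# and a nod toward a sound source selects the item in that slice.
# Item names and slice geometry are hypothetical.

ITEMS = ["notes", "calendar", "music", "contacts"]

def item_for_heading(heading_deg):
    """Map a head heading (0 deg = straight ahead, clockwise) to an item."""
    slice_size = 360 / len(ITEMS)
    # Centre each slice on its sound source: item 0 at 0 deg, item 1 at
    # 90 deg, and so on, so a heading near a source falls in its slice.
    index = int(((heading_deg + slice_size / 2) % 360) // slice_size)
    return ITEMS[index]

print(item_for_heading(85))   # a nod toward 90 deg selects → calendar
print(item_for_heading(350))  # wraps around to the front slice → notes
```

The audio feedback evaluated in the study would correspond to confirming, by sound, which slice the current heading falls in before the selection is committed.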
The efficiency of such a system depends on cognitive load (Vazquez-Alvarez & Brewster, 2011), but in general it seems to be a very promising alternative to current vision-based devices. Tactile icons, or tactons, are tactile stimulations with various dimensions such as intensity, rhythm, and spatial location. Qian et al. (2011) used pairs of tactons in order to test and explore new tactile feedback mechanisms. For example, one tacton involved a pulse duration of 200 ms and an interval between pulses of 500 ms; another consisted of a pulse duration of 400 ms and an interval of 2000 ms. These tactons were tested under a set of distracting conditions, showing that music and street noise reduce the chance of tactile recognition. Some guidelines were developed for interface designers who will use tactile feedback for mobile devices. Various possibilities exist for tactile input. Braille has been used directly on mobile phones for tactile input. The Spice Braille Phone (Fig. 2) is a low-cost mobile phone for VIP introduced in 2008 by the Indian company Spice. It is a simple phone without a screen; the keypad provides audio feedback to inform the user about the number being dialed. Other means of tactile input include talking touch-sensitive interfaces such as the Slide Rule and vibration-based Braille script on a touch screen such as V-Braille. Slide Rule is an interface that uses touch input and speech output. As the user navigates through the screen, a list of on-screen objects appears. The user listens to the items on the menu while brushing her fingers down the screen and uses certain gestures, such as tapping, to select the desired item (Hakobyan et al., 2013). V-Braille is a free application for smart phones. It converts the mobile phone screen into a screen with six dots (Fig. 3). These dots correspond to Braille letters made of six raised dots on paper or other materials.
Dots on the screen vibrate when touched and the rest of the dots remain still. The user can identify the letter through this interface. Both input (through the VBWriter application) and output (through VBReader) are possible (Jayant et al., 2010).
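The core of the V-Braille idea can be sketched as a lookup from Braille dot patterns to vibrating screen regions. The dot patterns below follow standard Braille cell numbering (dots 1-3 in the left column, 4-6 in the right); the screen-layout details of the actual application are simplified away.

```python
# Sketch of the V-Braille principle: the touch screen is divided into six
# regions matching the six-dot Braille cell. Regions whose dot is raised
# in the current letter vibrate when touched; the rest stay still.
# Only a few letters are included for illustration.

BRAILLE = {
    "a": {1},
    "b": {1, 2},
    "c": {1, 4},
    "l": {1, 2, 3},
}

def should_vibrate(letter, touched_dot):
    """Return True if touching this screen region should trigger vibration."""
    return touched_dot in BRAILLE[letter]

# Reading "b": dots 1 and 2 vibrate, the other four regions do not.
print([d for d in range(1, 7) if should_vibrate("b", d)])  # → [1, 2]
```

By sweeping a finger across the six regions and noting which ones vibrate, the user reconstructs the dot pattern and hence the letter, mirroring how raised dots are read on paper.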
5. Conclusion and future research
Vision is an essential element of communication with electronic devices. However, there are many situations that necessitate the use of new modes of interaction. The major driving force behind research in the field of vision-independent technologies is loss of vision. VIP and elderly people, who commonly develop certain problems with vision, may benefit from new ways of communicating with devices and their environment. Furthermore, research in vision-independent interaction design will also help to develop new, easier to use, and pleasurable interfaces and devices. Auditory and tactile paths are being studied as alternatives to visual communication. An interesting possibility is to make use of brain computer interfaces. In certain cases, the remaining two senses, smell and taste, could also be used to replace vision. New fields of research, design, and product development include, but are not limited to, the following:
• vision-free interfaces for devices with very small screens or with no screens at all,
• vision-free technologies for products that have to be used in the dark (for example for military applications),
• alternative paths of communication in situations where visual overload becomes a problem (for example in complex tasks or multitasking such as walking/driving/jogging and interacting with a device at the same time),
• alternative paths of communication that allow one to switch off the screen (for example for security reasons),
• exploration of the social and cultural context of users and the integration of this information into new products.
References
Bengisu, M. (2010). Assistive technologies for visually impaired individuals in Turkey. Assistive Technology, 22(3), 163-171.
Brewster, S., Lumsden, J., Bell, M., Hall, M., & Tasker, S. (2003). Multimodal “eyes-free” interaction techniques for wearable devices. In proceedings of the SIGCHI conference on human factors in computing systems. ACM. pp. 473-480.
Emery, A.E.H. (2008). Muscular dystrophy. New York: Oxford University Press.
Fisk, A. D., Rogers, W. A., Charness, N., Czaja, S. J., & Sharit, J. (2012). Designing for older adults: principles and creative human factors approaches. New York: Taylor & Francis.
Hakobyan, L., Lumsden, J., O’Sullivan, D., & Bartlett, H. (2013). Mobile assistive technologies for the visually impaired. Survey of ophthalmology, 58(6), 513-528.
Jayant, C., Acuario, C., Johnson, W., Hollier, J., & Ladner, R. (2010). V-braille: haptic braille perception using a touch-screen and vibration on mobile phones. In proceedings of the 12th international ACM SIGACCESS conference on computers and accessibility. ACM. pp. 295-296.
Kim, D. W., Hwang, H. J., Lim, J. H., Lee, Y. H., Jung, K. Y., & Im, C. H. (2011). Classification of selective attention to auditory stimuli: toward vision-free brain–computer interfacing. Journal of neuroscience methods, 197(1), 180-185.
Moggridge, B. (2007). Designing interactions. Cambridge MA: MIT Press.
Pinola, M. (2011). Speech recognition through the decades: how we ended up with Siri. PCWorld, Nov 2. http://www.techhive.com/article/243060/speech_recognition_through_the_decades_how_we_ended_up_with_siri.html [19-08-2014].
Qian, H., Kuber, R., & Sears, A. (2011). Towards developing perceivable tactile feedback for mobile devices. International Journal of Human-Computer Studies, 69(11), 705-719.
Riccio, A., Mattia, D., Simione, L., Olivetti, M., & Cincotti, F. (2012). Eye-gaze independent EEG-based brain–computer interfaces for communication. Journal of neural engineering, 9(4), 1-15.
Söderström, S., & Ytterhus, B. (2010). The use and non-use of assistive technologies from the world of information and communication technology by visually impaired young people: a walk on the tightrope of peer inclusion. Disability & Society, 25(3), 303-315.
Van Erp, J. B. F., & Brouwer, A.-M. (2014). Touch-based brain computer interfaces: state of the art. IEEE Haptics Symposium, 23-26 February, Houston, TX.
Vazquez-Alvarez, Y., & Brewster, S. A. (2011). Eyes-free multitasking: the effect of cognitive load on mobile spatial audio interfaces. In proceedings of the SIGCHI conference on human factors in computing systems. ACM. pp. 2173-2176.
Wickens, C. D., Lee, J. D., Liu, Y., & Gordon Becker, S. E. (2004). An introduction to human factors engineering. Upper Saddle River, NJ: Pearson.
Wikipedia (2013). Windows Speech Recognition. http://en.wikipedia.org/wiki/Windows_Speech_Recognition [19-08-2014].
From screen readers to tactons: vision-independent technologies for accessible products