Franz Dotter: Computer for the Deaf (and Hearing-Impaired): Towards an Integrated Solution from a Linguistic Standpoint
Published in: Klaus, Joachim/Auff, Eduard/Kremser, Willibald/Zagler, Wolfgang L. (eds.): Interdisciplinary aspects on computers helping people with special needs. Wien/München: Oldenbourg 1996 (= Schriftenreihe der Österreichischen Computergesellschaft 87), Bd. 1, S. 205-210
There are massive conflicts about the 'correct' use of terms like 'deaf' or 'hearing-impaired', resulting from political and social motives. I avoid them by using paraphrases from the linguistic standpoint: there are people whose hearing loss is so severe that they cannot perceive spoken language sufficiently to develop an individual language system without additional support. In such cases it is irrelevant whether 'residual hearing' is detectable or not. This group is normally called 'deaf' in English. We can estimate the number of deaf people in a country at 1 to 1.5 per mille of the inhabitants (that is, 7,000-10,000 for Austria).
In addition there is a large number of 'hearing-impaired' people (up to 4-6 percent of the inhabitants). They can perceive spoken language well enough to develop an individual language system, or at least a sufficient approximation to it. The same applies to practically all who have acquired the impairment at a later age (the problem of delimiting these groups has to be stressed). Nevertheless, some members of this second group also have to live with severe communicative restrictions. Take for example a person who can fully perceive spoken language only in a noise-reduced face-to-face dialogue (including lipreading). Visual communication systems could probably be an important help for such people (as a free choice).
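The prevalence arithmetic above can be sketched briefly; the function name and the population figure used for Austria (roughly 7 million in the mid-1990s) are illustrative assumptions, not figures from this paper:

```python
# Prevalence arithmetic from the text: deaf people are estimated at
# 1 to 1.5 per mille of the inhabitants, hearing-impaired people at
# up to 4-6 percent. Population figure for Austria is an assumption.
def estimate_groups(inhabitants):
    deaf = (inhabitants * 0.001, inhabitants * 0.0015)           # 1-1.5 per mille
    hearing_impaired = (inhabitants * 0.04, inhabitants * 0.06)  # 4-6 percent
    return deaf, hearing_impaired

deaf, hearing_impaired = estimate_groups(7_000_000)
print(deaf)  # (7000.0, 10500.0) - matches the 7,000-10,000 given for Austria
```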
Whoever joins the scientific discussion concerning deafness and educational answers to it will find himself or herself in several conflicts and will feel forced to identify with one of the struggling parties. One of the most important borderlines is that between supporters and opponents of sign language as an instrument of deaf education in a special bilingual context. I have to confess that I am a supporter, and everything I say is to be understood from this - well arguable - position (see ,  for this position,  for the opposing one).
Generally the following phenomena manifest themselves to various extents as a function of the degree of impairment.
Early acoustic communication between parents and a deaf child is strongly or partially restricted; spoken language is especially affected (exception: the parents have the same impairment as the child and use adequate communication strategies). Consequences for the individual's psyche as well as social consequences are therefore probable.
With exclusively 'oral' education (that is: using spoken language alone or predominantly, with spoken language as the only target language system), many gaps in the individual's information system result. One of the most severe consequences is the complete lack of a fully developed language system, either oral or visual.
The educational and professional opportunities, as well as the opportunities for self-development, of persons who are deaf or severely hearing-impaired from birth are reduced significantly.
Whoever signs instead of speaking or combines signing with speaking, manifests his/her need for a visually coded language.
See , , . Traditional hearing aids and the cochlear implant are topics too comprehensive to be discussed extensively in this paper; see ,  for 'pro-CI' and ,  for 'CI criticism'.
Due to the massive social and communicative pressure towards oral language use we can hypothesize: it is no failure to bring up a child with a special 'deaf-bilingual' method if the child is suspected to be deaf. Should the child turn out to hear sufficiently to develop the oral language system (a goal that might eventually be reached with a cochlear implant), he/she will give up signing as soon as the oral language fully works.
The main hypotheses are:
The more channels are used for cognition and information processing (that is, the more multimodally information is presented), the better the learning result.
If a communicative channel is less accessible or blocked for the development of language, this has to be compensated by the use of another, fully accessible channel (this holds for any impaired sensory or motor dimension).
From these hypotheses the following demands can be derived:
to acknowledge optical means for language and communication from the point of diagnosis of a severe hearing impairment in a child onwards;
to use such means systematically, and not only in cases where (normally restricted) acoustic communication fails totally;
to use these means in a well-understood, special bilingual context in which both visual and auditory communication possibilities are offered. The balance of visual stimuli (sign language and written variants of spoken language) and acoustic stimuli has to be planned individually to secure the best results. Nevertheless, it follows from the hypotheses that the visual channel is to 'lead' the initial development. Developing competence in written and spoken (oral) language remains an important goal, however, because deaf people live within hearing majorities.
In analogy to the ethical rules for working with 'exotic' oral languages and cultures, deaf people, as native speakers of their sign language, must get their full 'language rights', including acknowledgment of the national sign languages in the same way as oral minority languages. Deaf people are the primary candidates for giving courses in 'their' language (this follows from the 'native speakers first' principle). Interpreters have to be educated like those for oral languages. The deaf communities must have the possibility to (co-)determine and work on plans for the scientific analysis of sign languages, deaf education, and the development of their facilities.
This proposal is, in our opinion, highly practical. It is a compromise between somewhat more 'extreme' proposals (see , , 2-5; , 2-5).
The general logic of dedicated experts for the sense-impaired is: compensate the loss of one sense by letting another sense take over the functions of the lost one. Out of this logic came braille and computers for the blind. Computer technology for the deaf and hearing-impaired should mainly serve to improve communication and information processing. The analogy for the deaf, starting from the example of the blind, is therefore: take spoken language out of the acoustic channel and put it into the visual channel; that is: write it.
We know, of course, that we get a lot of information for everyday life from written language, but this logic is not appropriate for all groups of the hearing-impaired. An improvement of information processing by using written instead of spoken language can only be guaranteed if (a) the person in question has sufficient knowledge of written language to understand many of its functions and variations, and (b) functional contexts for written language are present. These conditions are often not fulfilled for those deaf people who have been educated by 'oral' methods: because in most of these cases no sufficient spoken language system develops, it is easy to predict that written language (which normally builds on spoken language) cannot develop either.
So we have to deal with the phenomenon that many members of the deaf group are unable to understand written texts of normal complexity. This results in frustrations on both sides: computer experts are frustrated because the well-intentioned presentation of spoken language in visual form does not work. When they want to communicate with deaf persons to overcome the problems, they are confronted with a partially unfamiliar deaf culture (see , , , ) and with severely restricted communication via spoken language. In many cases the communication ends because of these shortcomings. The deaf, on the other side, get another proof of the hearing community's ignorance concerning their life and their needs.
A computer aid for hearing-impaired persons must allow many individual strategies of approaching information and language. Classifying these strategies roughly, we get one group of hearing-impaired people (mostly those with a milder impairment from birth, or with an impairment acquired after approx. age 6-10) who can be helped by optimizing auditory perception via hearing aids and by improving the visual channel for information processing: by learning techniques of perception like lipreading, and by using written together with spoken language. For this group the visual channel has mainly a helping, additional function in language acquisition.
The other group comprises those people with a severe impairment for whom a special bilingual way is preferable (or should at least be possible). Members of this group get access to information and language(s) by combining sign language or other visual communication systems such as 'Signed English' (in German: 'Lautsprachbegleitende Gebärde') with written or spoken language (in variants depending on the needs, wishes, and starting points of the child and the parents). For them the visual channel is essential in language learning and communication.
To support improving the education of this second group, a lot of work is urgently required: providing sign language items and sign communication, as well as the interconnection of sign language with written and spoken language. We have to take into account that a language system is best 'anchored' in the cognitive system by bringing everyday activities into close connection with language. On the other hand, every deaf person should be able to decide whether he/she wants to identify with a signing or a speaking community (this includes free choice of the primary language).
We have to develop computer programs which allow
parents to learn visual communication systems including sign language;
parents to look for the adequate combinations of visual and acoustic means of communication for/with their child;
parents and children to do adequate exercises for the development of visual and oral communication systems.
Naturally, computer aids cannot substitute for everyday communication, but they are invaluable tools for strengthening and broadening language competence and information processing. It has to be stressed that communication skills are learnt by communicating with human beings. That means we have to provide an education system in which the computer aids for the deaf are embedded in a course system which secures the use of those aids.
I can only mention a few features which an adequate computer aid should display:
The computer system itself must be able to run digitized videos at a sufficient size and good quality (e.g. at least 20 frames/sec). The possibility of videoconferencing at a similar quality should be developed as soon as possible. If such a quality can be obtained, telecommunication within the deaf community will improve significantly.
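The 20 frames/sec minimum is the only hard figure in the text; the frame size and colour depth in the following back-of-the-envelope estimate are assumed values, chosen only to illustrate why video of this quality is demanding for hardware and networks:

```python
# Illustrative data-rate estimate for digitized signing video.
# Only the 20 frames/sec minimum comes from the text; frame size
# and colour depth are assumptions for the sake of the example.
FPS = 20                  # minimum frame rate named in the text
WIDTH, HEIGHT = 320, 240  # assumed frame size
BYTES_PER_PIXEL = 3       # assumed 24-bit colour

bytes_per_second = WIDTH * HEIGHT * BYTES_PER_PIXEL * FPS
print(f"{bytes_per_second / 1_000_000:.1f} MB/s uncompressed")  # 4.6 MB/s
```

Even under these modest assumptions the uncompressed stream runs to several megabytes per second, which is why good compression is a precondition for deaf telecommunication of acceptable quality.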
Learning programs have to include versions in sign language, signed oral language, and written and spoken language for every item. That means: all information which is now available to hearing people has to be transposed into visual communication systems step by step (this is a program for many years!). We have to start at two or three points at once, namely producing materials for early (preschool) education, school education, and adult education.
Those 'signed (oral) languages' are systems which (in a strict sense) provide a morpheme-to-morpheme transposition of an oral language into the visual channel, preserving all structures of the respective oral language. From this point of view, 'Signed' English, French, German, etc. can serve two important 'bridging' functions: they can make the beginning of a visual communication system less difficult for hearing parents of deaf children, and they can be used to show deaf children in bilingual education the structures of the specific oral language. 'Signed language' has been found to be an operative means for initializing visual communication, but it shows no features of an adequate use of the visual channel (including economic factors), so that it remains relatively slow and complicated. Therefore, if a visual language of its own is to develop, it has to be replaced by sign language as soon as possible.
Of course, we have to take into account age, family structure, purpose of the computer aid, etc.
 BAUMGARTNER, P., DOTTER, F., HOLZINGER, D., PAYR, S., Interaktionsformen multimedialen Lernens (am Beispiel eines Kurses zur Österreichischen Gebärdensprache), Projektentwurf. Klagenfurt 1993 (= WISL, Technical Report 14)
 CALCAGNINI STILLHARD, E., Das Cochlear-Implant. Eine Herausforderung für die Hörgeschädigten-Pädagogik, Luzern 1994
 DOTTER, F., Gebärdensprache in der Gehörlosenbildung: Zu den Argumenten und Einstellungen ihrer Gegner, in: Das Zeichen 5 (1991), Heft 17, 321-332
DOTTER, F., HOLZINGER, D., Vorschlag zur Frühförderung gehörloser und schwer hörbehinderter Kinder in Österreich, to appear in: Der Sprachheilpädagoge
 ERTING, C., JOHNSON, R., Deaf way: The international celebration of the language, culture, history and arts of deaf people, 1994
 HOLZINGER, D., Forschungsbericht 'Linguistische Analyse der Gebärdensprache', Innsbruck 1993
 HOLZINGER, D., Gebärden in der Kommunikation mit gehörlosen Kindern, in: Hörgeschädigtenpädagogik (1995), 81-100 and 163-180
 LANE, H., The mask of benevolence, 1993
 LENARZ, T., LEHNHARDT, E., BERTRAM, B. (eds): Cochlear Implant bei Kindern, Stuttgart 1994
 LUCAS, C., The sociolinguistics of the deaf community, San Diego 1989
 PADDEN, C., HUMPHRIES, T., Deaf in America - Voices from a culture, Cambridge, MA, London 1988
 UDEN, A. VAN, Gebärdensprachen von Gehörlosen und Psycholinguistik, Heidelberg 1987
WESEMANN, J., Gehörlosenpädagogik und Technologie - "Der maßgeschneiderte (gehörlose) Mensch", in: Das Zeichen 8 (1994), 186-192
 WILCOX, S., American deaf culture, Silver Spring, MD 1989
 WISCH, F.-H., Lautsprache und Gebärdensprache. Die Wende zur Zweisprachigkeit in Erziehung und Bildung Gehörloser, Hamburg 1990
This article comes from work within the project "Sprachwissenschaftliche Arbeiten zur Österreichischen Gebärdensprache" at Klagenfurt University (Linguistics Department), which is funded by the following institutions: Fonds zur Förderung der wissenschaftlichen Forschung, Arbeitsmarktservice Kärnten, and Europäischer Sozialfonds. Members of staff are: Jean Ellis (interpreter), Marlene Hilzensauer, Manuela Hobel (deaf), Klaudia Kramer, Ingeborg Okorn (deaf) and Andrea Skant.