Adapting a training model for Inclusive Disaster Risk Reduction
Çaglar Akgungor (AKUT)
Developing a Disability Included Community Disaster Preparedness Training
Disabled people have special needs that are often neglected, or classified under those of a broad category of “disadvantaged groups”, when emergency management plans are prepared. Moreover, the resources dedicated to the “disadvantaged” are usually disproportionately low for fulfilling those individuals’ needs and demands. Concerning Istanbul, some limited action has been taken since the 1999 earthquakes, but it is far from having overcome these shortfalls. Consequently, additional action is needed to include the disabled in disaster preparedness processes and to increase their autonomy (and resilience) vis-à-vis catastrophes. Taking these needs into account, EDUCEN’s Istanbul Case Study aimed to reduce this specific group of urban citizens’ vulnerability to disasters by developing a disability-included community disaster preparedness model, in collaboration with disability organizations. For that purpose, both “soft” (based on human interaction) and “hard” (based on technology) tools have been used, often in combination. The main concern was to rely on tools and techniques within reach of virtually any NGO with modest financial capacity, in the hope of producing a model that can be replicated in other parts of the world.
“Extended” Focus Group Meetings in Istanbul Case Study
We made use of focus group-like interactive sessions during the EDUCEN Istanbul Case Study in order to collectively review AKUT Association’s community disaster training materials with our disability partners. The sessions differed from the focus group concept as it is often described in the social sciences; hence the name “extended focus groups”. The focal point of the meetings was AKUT’s disaster preparedness presentation, a slide show that consists of 3 modules (earthquake, fire and flood) and 120 slides in total. Two questions were asked to the participants in relation to this content: “What can be modified or added in order to make our content disability-adapted?” and “How can this content be made accessible?” The resulting discussions, combined with communication requirements (orally describing each slide for participants with sight impairments; sign-language translation for participants with hearing impairments), often doubled or even tripled the meeting duration (up to 6 hours instead of 2 hours and 30 minutes), and the need for additional meetings arose.
In fact, each focus group meeting in the EDUCEN Istanbul Case Study was a series of successive meetings in itself, separated by convivial breaks to ensure an adequate comfort level. As for the participants’ motivation, it proved important to emphasize that the focus group meetings were collaborative efforts among equals, not “feedback collection” sessions led by (and benefiting) a host organization, making the participants themselves actors in the solution development process.
The meetings were documented mostly through hand-written notes. Only a few images were recorded, in order to prevent any feeling of stigmatization among participants.
Regarding the organizational and logistical aspects, there is a non-negligible number of details to take into account; we suggest consulting the EDUCEN Istanbul Case Study Manual.
Contrary to popular belief, speech and language are not the same thing, and people with hearing impairments who have never been exposed to sound can communicate perfectly through sign language. The latter is a natural language with its own grammatical structure and vocabulary, created spontaneously by the community of people with hearing impairments. Deaf people feel about sign language as users of any spoken language do about theirs: it is “their natural vehicle for clear description, reasoning, expression of opinions and feelings, humor, story telling”.[^1] Sign language “creates the deaf community, and not the lack of hearing or the lack of speech. In other words, the deaf community is defined by what it is, and not by ‘what it lacks’”.[^2] Unfortunately, sign language is surrounded by many false beliefs. Contrary to such beliefs, no two sign languages are the same (usually every country has its own) and sign languages are not more iconic than spoken languages (there is not necessarily an analogy or similarity between the form of a sign and its meaning). To these facts we might add another: never having been exposed to sounds can make learning a spoken language very difficult for someone with hearing impairments, and consequently their reading skills often remain low.[^3] These realities should clearly be taken into account in disaster risk management, where conveying information correctly is of the highest importance.
People with hearing impairments need to receive the information necessary for their own protection in their native language, which requires translation services. As with all disadvantaged groups, they have the right to benefit from all information, knowledge and services offered to the rest of the population. The need for accurate translation is therefore evident throughout the whole disaster management cycle, first and foremost as an ethical obligation.
Translation alone is not sufficient; the content also has to be as complete as the informative material intended for the general population. In some cases, simplification of vocabulary derived from the spoken language may be necessary, yet this kind of adaptation does not justify deciding, on behalf of people with hearing impairments, which information will or will not be useful to them.
In the EDUCEN Istanbul Case Study, video clips of sign language translation have been inserted into the core training material (a slide presentation) to make it accessible and to allow deaf people to receive the training together with non-deaf people, which also prevents social isolation. The sign language video clips start automatically every time a slide is shown and present the same narrative the trainer gives orally, while respecting its timing.
[^1] MEIR Irit and SANDLER Wendy, A Language in Space: The Story of Israeli Sign Language, Lawrence Erlbaum Associates, New York, 2008, p. 2
[^2] Ibid., p. 8
[^3] AZBEL Lyuba, “How Do The Deaf Read: The Paradox of Performing a Phonemic Task Without Sound”, New York, 2004, pp. 3-4 (unpublished research paper). Retrieved from: http://psych.nyu.edu/pelli/docs/azbel2004intel.pdf
Audio description is the technical name of the auditory narration activity that aims to describe visual elements in artistic or media productions for people with sight impairments. It can be used to make static objects accessible, such as pieces displayed in museums or art exhibitions, as well as scenes, settings, people and actions in live art performances, sports events, films, television shows and similar productions. Also called “video description” or “descriptive narration”, it consists of informing the spectator with sight impairments about the visual content, focusing first on the elements essential to comprehension. The narration is usually placed in the pauses between dialogues or in the rests in pieces of music; thus it has to be as concise as possible while conveying, in an objective and intelligible manner, the maximum amount of information. There are various ways to provide audio description depending on the type of production and the medium. Currently, most television broadcasters offer audio description as an integrated soundtrack selectable from the user menu, in the same way it is done with distributed video (DVD and later technologies). For live or recorded performances in public venues such as theaters, live or pre-recorded (but synchronized) audio description narration can be provided to users through dedicated radio receivers. A more recent application is to make the audio description available online so that users can download it and listen to it on their mobile devices.
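The core scheduling constraint described above (narration must fit into pauses between dialogues, which is what forces conciseness) can be illustrated with a small sketch. The data and function names below are entirely hypothetical, not part of any EDUCEN tool: dialogue is modeled as a list of (start, end) intervals in seconds, and a description script of a given spoken duration is assigned to the first pause long enough to hold it.

```python
# Hypothetical sketch: where can an audio description narration fit?
# Dialogue is modeled as (start, end) intervals in seconds.

def find_gaps(dialogue, total_length):
    """Return the (start, end) pauses between dialogue intervals."""
    gaps = []
    cursor = 0.0
    for start, end in sorted(dialogue):
        if start > cursor:
            gaps.append((cursor, start))  # silence before this dialogue line
        cursor = max(cursor, end)
    if cursor < total_length:
        gaps.append((cursor, total_length))  # trailing silence
    return gaps

def place_description(gaps, duration):
    """Return the start time of the first pause long enough for the
    narration, or None if the script must be shortened."""
    for start, end in gaps:
        if end - start >= duration:
            return start
    return None

dialogue = [(2.0, 10.0), (12.5, 20.0)]
gaps = find_gaps(dialogue, total_length=30.0)
# gaps: [(0.0, 2.0), (10.0, 12.5), (20.0, 30.0)]
slot = place_description(gaps, duration=4.0)  # -> 20.0
```

A `None` result corresponds to the editorial situation mentioned above: when no pause is long enough, the description script itself has to be made more concise.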
Audio description is a vital tool for educational activities and became one of the tools we used to improve the accessibility of AKUT’s disaster preparedness training materials. Descriptive narration was added to the slide presentation in two ways: first, to the video clips integrated in the slideshow, describing what the existing narration did not mention; secondly, descriptions of functional visuals were slipped into the trainer’s narrative.
For a training module designed for “mixed” audiences, it is important that the description work be carried out subtly, without compromising its informative value, in order not to distract participants who have sight. This tends to result in shorter description narration than usual; however, this limitation can be tolerated for the sake of preserving the diversity of the audience.
Audio description also became necessary for the “audio book” based on the narration of AKUT’s disaster preparedness booklet, since the booklet contains many didactic visuals that should be made accessible, such as maps, tables and photographs.
Suggested reading on audio description
- US Department of Health and Human Services’ Provisional Guidance for Audio Description
- American Council of the Blind’s “Audio Description Project” web pages
For people with significant sight impairments, touch is a primary way of perceiving and collecting information, hence the enhanced tactile acuity found in most blind people. In this context, a “tactile tool” is any method that provides access to information and supports the learning process through the haptic channel. The Braille alphabet is probably the best known of tactile tools. Our experience is that it is almost automatically brought to the table by sighted people when it comes to designing any pedagogical activity involving blind people. Nevertheless, Braille’s prevalence is significantly lower than often thought, and new digital technologies involving audio or text-to-speech interfaces will likely decrease this prevalence further.[^1] Braille also has the disadvantage of being a text-only tool that cannot be used to make visual elements accessible, and its production and distribution costs are relatively high compared to ink prints. Yet the use of touch is not limited to reading: it is possible to devise various methods to convey information through the sense of touch, often without having to resort to sophisticated technology.
Simple, readily available objects can be transformed into effective accessibility tools in consultation with the potential end users. For example, the EDUCEN Project Istanbul Case Study added an interesting tactile tool to AKUT’s existing training materials: a gas tap, two water taps of different styles and an electrical fuse box mounted on a small piece of plywood. This object is expected to facilitate the understanding of the utility shutoff procedure during building evacuations.
AKUT’s partners with sight impairments also suggested that a toy or small-scale model can be useful when teaching earthquake protection postures; this was taken into account as well. Yet we discovered that using a “live” model (for example, the trainer himself/herself) whom the blind participants could touch would also work.
[^1] See for example the US National Federation of the Blind Jernigan Institute’s report: “The Braille Literacy Crisis in America: Facing the Truth, Reversing the Trend, Empowering the Blind” (26 March 2009). Retrieved from: https://nfb.org/images/nfb/documents/pdf/braille_literacy_report_web.pdf