Does Visual and Auditory Word Perception Have a Language-Selective Input? Evidence from Word Processing in Semitic Languages


August 2008. Volume 3 Issue 2


Raphiq Ibrahim
University of Haifa

Raphiq Ibrahim is a lecturer in the Learning Disabilities Department at the University of Haifa and a neuropsychologist in the Cognitive Neurology Department at Rambam Medical Center in Haifa. His research interests are psycholinguistics (including visual and auditory word perception and bilingualism) and the organization of higher cognitive functions in the cerebral hemispheres of the brain.

The goal of this study was to compare the performance of native Arabic speakers in identifying spoken and written words in Arabic (L1) and Hebrew (L2), and to examine whether differences in performance patterns are related to factors such as the type of language and the frequency of exposure to each modality (visual, auditory). Two lexical decision experiments were performed, in which reaction times (RTs) and error rates were compared. In addition, each subject completed a structured questionnaire that examined the level of exposure to speech and reading in each language. The results showed a frequency effect within each language (Arabic and Hebrew) and within each presentation form (spoken or written), with longer reaction times in lexical decision when the stimuli were presented orally than when they were presented visually. A significant interaction was found between perceptual modality and the language in which the stimuli were presented: reaction times to Hebrew words were faster when the words were presented visually, while reaction times to Literary Arabic words were faster when they were presented orally. The results of the language exposure questionnaire revealed that in both languages, students with greater exposure to a particular modality performed faster and more accurately in that modality. These findings can be explained by the fact that Arab students read more in Hebrew at school and hear more Literary Arabic. Consequently, Arab students' linguistic experience in the second language (L2) relies more on the visual modality, and this significantly affects language processing in that modality.

Keywords: bilingualism, auditory, visual, word identification, Arabic, Hebrew

Research on bilingualism has focused on two questions. The first concerns the lexical organization of the two languages in the cognitive system of bilinguals (see Kroll & de Groot, 1997) and the relationship between semantically related words and translation equivalents across languages (e.g., de Groot, 1995; Grainger & Frenck-Mestre, 1998). The second concerns the difficulties that second language learners encounter in both visual and auditory perception (see Piske et al., 2001, for a review). Of particular interest have been questions addressing spoken words (Flege et al., 1999) and written words (Johnson & Newport, 1989) in the native and second language. In this work I investigate issues related to the second question (differences between visual and auditory perception). I asked whether the two word identification systems (visual and auditory) are independent and have a language-selective input. Specifically, I am interested in how the information provided by printed or spoken words interacts during word perception (e.g., Taraban & McClelland, 1987) and whether word perception in different modalities is differentially influenced by the degree of exposure to these modalities. The mechanism by which experience with these modalities affects second language processing is unclear (e.g., Best & Strange, 1992). Such a mechanism might involve phonetic (segmental and supra-segmental), phonological, lexical, and/or other linguistic and extra-linguistic processes (Guiora, 1994).
The language situation in Israel represents a particularly complicated case, involving the coexistence of two official state languages (Hebrew and Arabic). As a result, the majority of Arab students in Israel are bilingual. Operationally, I tried to examine whether word perception in the auditory and visual modalities is differentially influenced by the degree of exposure to these modalities in L1 and L2. To this end, a lexical decision paradigm with accuracy measures was used, in which participants were presented with Arabic and Hebrew words auditorily and visually and were asked to identify the stimuli. The target population was adult native Arabic speakers with learning experience of Hebrew. The question asked in this study is: how does the degree of exposure to the different modalities affect the cognitive system? Given that learning experience in the two languages differs across modalities, the question is how this affects the performance pattern in word perception.

For a long time, bilingualism was believed to require two separate word processing systems that function independently and can be accessed selectively. These assumptions are intuitively appealing and have guided empirical research on multiple language proficiency for decades. Kroll and colleagues (Kroll & Stewart, 1994) proposed the “Revised Hierarchical Model,” which assumes separate lexicons for L1 and L2. These two lexicons are connected to one another and to a common semantic system where word meanings are stored. The model is asymmetric: for unbalanced bilinguals, the connections from L2 word forms to their L1 translations are stronger than the other way around, because L2 words are often learned by associating them with their L1 translations. Other researchers assumed that even if L1 and L2 are activated simultaneously, at the functional level they can still be considered two independent language systems, at least as far as word form identification is concerned (Paradis, 1997). Paradis put forward a three-store hypothesis of word perception, in which a distinction is made between word forms (orthographic and phonological forms with their syntactic properties), word meanings (which are often language-dependent), and conceptual features (the nonlinguistic mental representations underlying human thought). In Paradis’ model, the first two types of representation are language-specific, whereas the last is shared by the two languages.

Spoken word perception
In spoken word perception, the input reaches the listener sequentially over time. Many words take a few hundred milliseconds to pronounce, and very often these words are recognized before the complete signal has been presented. The “Cohort Model” of spoken word perception assumes that the selection of a lexical candidate depends on a process of mutual differentiation between the activation levels of the target word and its competitors (Marslen-Wilson, 1987). This assumption predicts that a word with competitors of higher usage frequency should be recognized more slowly than a word matched on frequency but with lower-frequency competitors. However, there is no consensus regarding the nature of the competitor set for a spoken word. The competitors are defined as all the words that may be generated by the addition, deletion or substitution of a single segment, and competition between candidates can potentially start at different points in time. In the first stage, implemented as a simple recurrent network, all potential lexical candidates beginning at every phoneme in the input are activated in a strictly bottom-up fashion. Another important characteristic of the Cohort Model is that the activation of neighbors takes place in parallel, at no cost. As a consequence, there should be no effect of the number of neighbors with which the target word has to compete. Elman and McClelland (1984) developed the TRACE Model of speech perception, which depicts speech perception as a process in which speech units are arranged into levels and interact with each other. The three levels are: features, phonemes, and words. The levels are comprised of processing units, or nodes; for example, within the feature level, there are individual nodes that detect voicing. To perceive speech, the feature nodes are activated initially, followed in time by the phoneme and then word nodes. Thus, activation is bottom-up. Activation can also spread top-down, however, and the TRACE Model describes how context can influence the perception of individual phonemes.

Visual word perception
In several models of visual word perception, researchers have proposed that fluent readers do not use the phonological information conveyed by printed words until after their meaning has been identified (e.g., Jared & Seidenberg, 1991). In their extreme forms, such models assume that, although orthographic units may automatically activate phonological units in parallel with the activation of meaning, lexical access and the identification of printed words may be mediated exclusively by orthographic word-unit attractors in a parallel distributed network (if one takes a connectionist approach, e.g., Hinton & Shallice, 1991) or by a visual logogen system (if one prefers a more traditional view, e.g., Morton & Patterson, 1980). Much of the empirical evidence supporting the orthographic-semantic models of word perception comes from the neuropsychological laboratory. For example, patients with an acquired alexia labeled deep dyslexia apparently cannot use grapheme-to-phoneme translation, yet they are able to identify printed high-frequency words (Patterson, 1981).

Furthermore, the reading errors made by such patients are predominantly semantic paralexias and visual confusions (for a review, see Coltheart, 1980). These data were therefore interpreted as reflecting identification of printed words by their whole-word visual-orthographic (rather than phonological) structure. The propriety of generalizing these data to normal reading is questionable, but additional support for the orthographic-semantic view can also be found in studies of normal word perception. For example, in Hebrew (as in Arabic), letters represent mostly consonants, whereas vowels may be represented in print by a set of diacritical marks (points). These points are frequently not printed, and under these circumstances, isolated words are phonologically and semantically ambiguous. Nevertheless, it has been found that in both Hebrew (Bentin, Bargai, & Katz, 1984) and Arabic (Roman & Pavard, 1987; Bentin & Ibrahim, 1996) the addition of phonologically disambiguating vowel points inhibits (rather than facilitates) lexical decision. On the basis of such results, it has been suggested that, at least in Hebrew, correct lexical decisions may be initiated on the basis of orthographic codes, before a particular phonological unit has been accessed (Bentin & Frost, 1987). In English, a distinction has been made between frequent and infrequent words.

Whereas it is usually accepted that phonological processing is required to identify infrequent words, frequent words are presumed to be identified on the basis of their familiar orthographic pattern (Seidenberg, 1995). Advocates of phonological mediation, on the other hand, claim that access to semantic memory is necessarily mediated by phonology (e.g., Frost, 1995). In a “weaker” form of the phonological-mediation view, it is suggested that although the phonological structure may not necessarily be a vehicle for semantic access, it is automatically activated and integrated in the process of word perception (Van Orden et al., 1988). Such models assume that phonological entries in the lexicon can be either accessed by assembling the phonological structures at a prelexical level or addressed directly from print, using whole-word orthographic patterns. The problem of orthographic-phonemic irregularity is thus solved by acceptance of the concept of addressed phonology. Indeed, cross-language comparisons indicate that addressed phonology is the preferred strategy for naming printed words in deep orthographies (Frost, Katz, & Bentin, 1987; but see Frost, 1995). The assumption that words can be represented by orthographic, phonological, and semantic components is not new. Distributed representation triangle models (Plaut, 1996) explicitly represent these three levels without having a level of lexical representation. Symbolic dual-route models (Coltheart et al., 2001) also represent these components. Although the constituency framework is theoretically neutral among various possible implementations, our description of it relies on a symbolic system. Thus, we emphasize that word representations comprise constituent identities. We also emphasize that the constituency framework is universal, not language- or writing-system dependent.

In any writing system, it is the representation of the word that is at issue, and the specification of a value on each of the variables (the constituents) provides the identity of the word as it is represented in an idealized mental lexicon. The process of identification becomes one of specifying constituents. In processing terms, written word perception entails the retrieval of a phonological form and meaning information from a graphic form. Given that all of the above strategies are in principle possible, the focus of most contemporary studies of word perception has shifted from attempting to determine which of the above theories is better supported by empirical evidence to understanding how the different kinds of information provided by printed or spoken words interact during word perception (e.g., Taraban & McClelland, 1987). To achieve this aim, I took advantage of a specific property of Arabic and Hebrew: the two languages are related in their spoken forms but unrelated in their printed forms.

Arabic and Hebrew: Background and Characteristics
As Semitic languages, Arabic and Hebrew have similar morphological structures. Regardless of whether words are based on inflectional or derivational forms, the morpheme-based lexicons of these languages imply the existence of roots and templates. Early studies such as Harris (1951) recognized roots as autonomous morphemes expressing the basic meaning of the word. Roots are abstract entities that are interleaved with vowels adding morphological information (e.g., in Arabic, the perfective /a-a/ in daraba ‘hit’, or the passive /u-i/ in duriba ‘was hit’; in Hebrew, the perfective /a-a/ in lakah ‘took’, or the passive /ni-a/ in nilkah ‘was taken’). Other researchers have characterized both Semitic languages as having a non-concatenative, highly productive derivational morphology (Berman, 1978). According to this approach, most words are derived by embedding a root (generally triliteral) into a morpho-phonological word pattern, with various derivatives formed by the addition of affixes and vowels. In addition, in both languages there are letters which specify long vowels as well as signifying specific consonants (four in Hebrew; in Arabic there are only three – ا و ي, representing a, u, y). However, in some cases it is difficult for the reader to determine whether these dual-function letters represent a vowel or a consonant. When vowels do appear (in poetry, children’s books and liturgical texts), they are signified by diacritical marks above, below or within the body of the word. Inclusion of these marks specifies the phonological form of the orthographic string, making it completely transparent in terms of orthographic/phonological relations.
In regard to semantics, the root conveys the core meaning, while the phonological pattern conveys word-class information. For example, in Arabic the word TAKREEM consists of the root KRM (whose semantic space includes things having to do with respect) and the phonological pattern TA—I; the combination results in the word ‘honor’. In Hebrew, the word SIFRA consists of the root SFR (whose semantic space includes things having to do with counting) and the phonological pattern –I—A, which tends to occur in words denoting singular feminine nouns, resulting in the word ‘numeral’. As the majority of written materials do not include the diacritical marks, a single printed word is often not only ambiguous between different lexical items (this ambiguity is normally resolved by semantic and syntactic processes in text comprehension), but also does not specify the phonological form of the letter string. Thus, in their unpointed form, Hebrew and Arabic orthographies contain a limited amount of vowel information and include a large number of homographs. Compared to Hebrew, Arabic includes a much larger number of homographs and is thus considerably more ambiguous.
Despite the similarities between the two languages, there are major differences between Arabic and Hebrew. First, Arabic presents a special case of diglossia that does not exist in Hebrew. Literary Arabic, also known as “written Arabic” or “Modern Standard Arabic” (MSA), is used universally in the Arab world for formal communication, whereas Spoken Arabic, the language of everyday communication, appears partly or entirely in colloquial dialect and has no written form. Although they share a limited subgroup of words, the two forms of Arabic are phonologically, morphologically, and syntactically different. A second difference concerns the orthography, which includes letters, diacritics and dots; the added complexity appears in several characteristics that occur in both orthographies, but to a much larger extent in Arabic than in Hebrew. In both orthographies some letters are represented by different shapes, depending on their placement in the word. Again, this is much less extensive in Hebrew than in Arabic: in Hebrew, five letters change shape when they are word-final ( מ-ם, כ-ך, פ-ף, צ-ץ, נ-ן ), whereas in Arabic, 22 of the 28 letters of the alphabet have four shapes each (for example, the phoneme /h/ is represented as: ه٬ ـه٬ ﻬ ﻪ ).

Thus, grapheme-phoneme relations are quite complex in Arabic, with similar graphemes representing quite different phonemes, and different graphemes representing the same phoneme. In Hebrew, dots occur only as diacritics to mark vowels and as a stress-marking device (dagesh). In the case of three letters, this stress-marking device (which does not appear in vowelless scripts) changes the phonemic representation of the letters from fricatives (v, x, f) to stops (b, k, p for the letters ב כ פ respectively). In the vowelless form of the script, these letters can be disambiguated by their place in the word, as only word- or syllable-initial placement indicates the stop consonant. In Arabic, the use of dots is more extensive: many letters have a similar or even identical structure and are distinguished only by the existence, location and number of dots (e.g., the Arabic letters representing /t/ and /n/ ( ت , ن ) become the graphemes representing /th/ and /b/ ( ث , ب ) by adding or changing the number or location of dots).
Many studies have demonstrated that bilinguals do not recognize written words in exactly the same way as monolinguals. For example, it has been shown that visual word perception in L2 is affected by the native language of the reader (e.g., Wang, Koda, & Perfetti, 2003). The opposite is true as well: evidence that knowledge of L2 may have an impact on the identification of printed L1 words was published by Bijeljac-Babic, Biardeau, and Grainger (1997). In comparative studies of different languages, there are two points of comparison: the speech system and the writing system. Thus, in comparing Arabic and Hebrew reading, we compare two related languages of the same (Semitic) family that are similar in their morphological structure but differ radically in their orthographic and phonetic systems. Studies on Hebrew (Frost, Deutsch & Forster, 1997) and Arabic (Mahadin, 1982) support the assumption that roots can be accessed as independent morphological units. Research in the area of speech perception has suggested that there are differences in the phonetic perception of the speech signal between native and nonnative speakers (for a review see Flege, 1992). These findings suggest that adult second language learners perform an assimilation process by which they perceive and produce L2 sounds via their L1 phonological system, at least at some stage of L2 acquisition (e.g., Major, 1999). It must be noted, however, that adaptation of the phonetic features (categories) of L2 is a necessary component of second language (speech) acquisition; consequently, bilinguals who attain a high level of proficiency in their L2 are able to exploit the phonetic categories of that language in speech production and perception (Goetry & Kolinsky, 2000).
Further evidence for the assimilation process comes from a case study we described (Eviatar, Leikin, & Ibrahim, 1999), in which a Russian-Hebrew bilingual aphasic woman showed a dissociation between her ability to perceive her second language (learned in adulthood) when it was spoken by a native speaker and when it was spoken by a speaker with an accent like her own. We interpreted this as supporting the hypothesis that perception of second language (L2) sounds is affected by the phonological categories of the native language (L1), and that this assimilation procedure can be differentially damaged, such that L2 speech that conforms to L1 phonology (accented speech) is better perceived than phonemically correct L2 speech. This interpretation is also supported by an interesting dissociation in the writing abilities of the patient.

Method

The participants were 48 high school seniors (24 boys and 24 girls). Half of them were instructed to make lexical decisions for visual stimuli, and the other half made lexical decisions for the same stimuli presented orally. In addition, the participants were asked to fill out a 12-item questionnaire assessing their learning experience and degree of exposure to Arabic and Hebrew in both modalities. Responses were given on a Likert scale, with (a) in general indicating maximum exposure to one language, (e) maximum exposure to the other, and (b), (c) and (d) intermediate levels of both. The questionnaire is presented in the Appendix. Subjects with neurological deficits or learning disabilities were excluded from the study.

Ninety-six Arabic stimuli were used: 48 frequent words and 48 non-frequent words, drawn from the subset of words used in both Spoken and Literary Arabic; among them, 24 were high frequency and 24 were low frequency. Ninety-six Hebrew stimuli were used: 48 frequent words and 48 non-frequent words. All pseudowords (96 in Arabic and 96 in Hebrew) were constructed from real words by replacing one or two letters while keeping the pronunciation of the stimuli acceptable.
The auditory stimuli were recorded by a male native speaker of the local dialect and were presented to the participants through earphones. The recordings underwent computer processing designed to equalize their volume and length as much as possible (mean duration 700 ms). A computer was used to present the stimuli.

Procedure

The participants were requested to perform a lexical decision task. The stimuli were presented at a steady rate, with a Stimulus Onset Asynchrony (SOA) of 2000 milliseconds. Participants pressed one key with their dominant-hand index finger for positive answers and another key with their non-dominant-hand index finger for negative answers. In Arabic, because of the poor quality of computer fonts, calligraphically written stimuli were used. In the visual task, the stimuli remained on the screen until a response was given. Because the same stimuli were used for both the auditory and the visual task, different participants were tested in each task. Half of the participants began the session with Arabic, and the other half began with Hebrew.
In the auditory task, it was explained to the participants that they were about to hear words and pseudowords in different languages, and that they were to indicate, by pressing a button, whether the phonological string presented was a word. The dominant hand was used for the affirmative detection of a word and the other hand for the negative detection of a pseudoword. Accuracy and speed were equally stressed. The experiments were conducted at the school in a relatively quiet classroom. Experimental instructions were given in Spoken Arabic at the beginning of the session. Afterwards, the participants were asked to fill out the questionnaire assessing their learning experience and degree of exposure to Arabic and Hebrew in both modalities.

Results

Mean RTs for correct responses and percentages of errors were calculated separately for each participant for high- and low-frequency words and for pseudowords. RTs that were more than two standard deviations above or below the participant’s mean in each condition were excluded, and the mean was recalculated. About 2% of the trials were excluded by this procedure. The data presented in Table 1 show mean RTs and percentages of errors for the 12 categories. All analyses were conducted twice: once with the pseudowords included among the stimuli and once for words only. For each task, the stimulus-type effect was analyzed within subjects (F1) and between stimulus types (F2).
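As a minimal illustration of this trimming procedure (a sketch only; the function name and the sample RTs below are hypothetical, not the study's data), the per-condition exclusion of RTs beyond two standard deviations can be written as:

```python
import numpy as np

def trimmed_mean_rt(rts, n_sd=2.0):
    """Mean RT after excluding trials more than n_sd standard deviations
    above or below the condition mean, as described in the text.
    Returns the recalculated mean and the percentage of excluded trials."""
    rts = np.asarray(rts, dtype=float)
    mean, sd = rts.mean(), rts.std(ddof=1)  # sample standard deviation
    kept = rts[np.abs(rts - mean) <= n_sd * sd]
    excluded_pct = 100.0 * (rts.size - kept.size) / rts.size
    return kept.mean(), excluded_pct

# Hypothetical RTs (ms) for one participant in one condition;
# the 1900 ms outlier is excluded and the mean is recalculated.
clean_mean, pct_excluded = trimmed_mean_rt([650, 700, 720, 680, 1900, 690, 710])
```

Applied once per participant and per condition, such a rule yields a small, data-dependent exclusion rate (about 2% of trials in the study reported here).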

Table 1: Lexical decision performance for spoken and written words in Hebrew and Literary Arabic. Each cell shows mean reaction time in milliseconds (standard error in parentheses) and error rate (%).

                        Auditory presentation                 Visual presentation
Stimulus type           Hebrew            Literary Arabic     Hebrew            Literary Arabic
High-frequency words    1027 (18)  3.0%   1053 (19)  3.9%     753 (18)   2.9%   784 (19)
Low-frequency words     1452 (33)  20.7%  1354 (31)  17.2%    1007 (33)  20.2%  1100 (31)  19.8%
Pseudowords             1432 (27)  11.0%  1352 (29)  8.5%     1053 (27)  12.1%  1096 (29)  8.9%

A 4-way ANOVA of 2 (Modality: visual, auditory) x 2 (Language: Arabic, Hebrew) x 3 (Lexicality: high-frequency words, low-frequency words, pseudowords) x 24 subjects was conducted on the six means for every subject; the effects of language and lexicality were tested within subjects and the modality effect was tested between subjects. This analysis showed that lexical decisions for all stimuli (words and pseudowords) were faster when the stimuli were presented visually than when they were presented orally (F1(1,46) = 96.08, MSe = 73264, p < 0.001; F2(1,756) = 1376.7, MSe = 13150, p < 0.001). The difference between the modalities was also significant in the analysis of words only (F1(1,46) = 99.97, MSe = 46264, p < 0.001). In contrast, the language of the stimuli did not significantly influence lexical decisions, either when the analysis included pseudowords (F1(1,46) < 1; F2(1,756) < 1) or when it included words only (F1(1,46) = 1.061, MSe = 7919, p = 0.308).
In addition, decisions were significantly faster for high-frequency words than for low-frequency words, both when the analysis included pseudowords (F1(1,46) = 466, MSe = 7328, p < 0.001; F2(2,756) = 609, MSe = 13150, p < 0.001) and when it included words only (F1(1,46) = 544.5, MSe = 9251, p < 0.001). The interaction between frequency and language differed between the two modalities: the frequency effect in Hebrew was significantly larger in the auditory modality (by 171 ms) than in the visual modality, while in Arabic the frequency effect was nearly equal across modalities (a difference of 14 ms). The 4-way ANOVA revealed a significant three-way interaction (F1(1,46) = 42.759, MSe = 2403, p < 0.001; F2(1,756) = 8.6, MSe = 13150, p < 0.001). This interaction was further elaborated by analyzing the simple two-way interactions.
As can be seen, reaction times were faster for Hebrew than for Arabic when words were presented visually, and slightly slower for Hebrew than for Arabic when the words were presented orally (F1(1,46) = 14.7, MSe = 7920, p < 0.001; F2(1,756) = 40.4, MSe = 13150, p < 0.001). Post-hoc comparisons (t-tests) revealed that the difference was significant within each modality: in the visual modality [t(23) = 2.91, p < 0.01] and in the auditory modality [t(23) = 2.543, p < 0.025]. A second interaction, between frequency and language, was significant in the subject analysis (F1(1,46) = 5.04, MSe = 2403, p < 0.05) but not in the stimulus analysis (F2(1,756) = 1.84, MSe = 13150, p = 0.158). A third significant interaction was between frequency and modality (F1(1,46) = 8.09, MSe = 9252, p < 0.01; F2(1,756) = 8.6, MSe = 13150, p < 0.001), reflecting the fact that the frequency effect was larger in the auditory modality than in the visual modality.
A different pattern emerged in the error rate analysis. The three-way interaction between Language, Lexicality and Modality was not significant (F1(1,46) < 1; F2(2,754) < 1). The main effects of Language and Modality were not significant [F1(1,46) < 1; F2(1,754) = 1.1, MSe = 105, p = 0.294 and F1(1,46) < 1; F2(1,754) = 2.74, MSe = 105, p = 0.098, respectively], but the main effect of Lexicality was significant [F1(1,46) = 89.34, MSe = 134, p < 0.001; F2(1,754) = 117.7, MSe = 105, p < 0.001]. The interaction between Language and Modality was also not significant [F1(1,46) = 2.377, MSe = 22.4, p = 0.13; F2(1,754) < 1], suggesting that the effect of modality was similar in Arabic and Hebrew. Likewise, the interaction between Lexicality and Modality was not significant [F1(1,46) < 1; F2(2,754) < 1], suggesting that the frequency effect was similar for visual and auditory presentation.
To investigate how the degree of exposure to each modality interacts with language (L1, L2), the scores on the 12 questions of the exposure questionnaire were entered into a correlation analysis with the reaction time measures of the 48 Arab participants. These relationships are presented in Table 2.

Table 2: Correlation coefficients between measures of word perception (RTs) and level of exposure in the different modalities. Only significant correlations are shown (p < .05); n.s. = not significant.

The analyses of the visual modality revealed a significant relationship between exposure to Hebrew and the speed of word perception, with significant correlations for questions 10, 11 and 12. In Arabic, by contrast, no such correlation was found between exposure to the language in this modality and speed of word perception. In the auditory modality, the opposite pattern was obtained: the correlation analyses revealed a significant relationship for Hebrew, as can be seen for question 7, but no significant effect was found in Arabic.
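The exposure-RT correlations above can be sketched as follows. This is an illustrative computation only: Pearson's r is assumed as the correlation measure, and the exposure scores and RTs below are invented, not the study's data. Since greater exposure goes with faster (smaller) RTs, the expected coefficient is negative.

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    xd, yd = x - x.mean(), y - y.mean()
    return float((xd * yd).sum() / np.sqrt((xd ** 2).sum() * (yd ** 2).sum()))

# Invented example: per-participant questionnaire scores for one item
# (higher = more exposure to a modality) paired with mean RTs (ms) in
# that modality; faster RTs with more exposure yield a negative r.
exposure = [1, 2, 2, 3, 4, 4, 5]
rt_ms = [980, 940, 950, 900, 860, 870, 820]
r = pearson_r(exposure, rt_ms)
```

Repeating this for each questionnaire item and modality, and testing each coefficient at p < .05, would produce a table of the form reported as Table 2.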

Discussion

The goal of this study was to examine the level of competence, in terms of reaction times and error rates, in different languages, and the impact of linguistic experience on the level of competence in those languages. The level of competence of native Arabic speakers in lexical decisions for words presented orally and visually in Literary Arabic and Hebrew was compared. The results showed that the processing of the two languages followed different patterns. Reaction times in lexical decisions were longer for stimuli presented orally than for stimuli presented visually, and this factor interacted with the language of the stimuli. Analysis of the interaction showed that, while responses were generally faster in Hebrew in the visual form (by 62 milliseconds), they were faster in Literary Arabic in the oral form (by 36 milliseconds). The implication of this result is that students read more Hebrew and hear more Literary Arabic.
This explanation is supported by the results of the questionnaire distributed to the subjects, which related the degree of exposure to the two languages in the two forms to the speed of identification (reaction time) of words in the corresponding languages and forms. These results indicated different patterns of impact for the two languages with respect to speed of performance. Examination of the usage of these languages makes clear that speaking Literary Arabic within the school, particularly during lessons (as seen in the questionnaire results), is more common than speaking Hebrew, whereas Hebrew is used more broadly in written form. In science-based classes, for example (from which most of the study population comes), the textbooks that teachers use for instruction and students for reading (and as part of their preparation for university) are in Hebrew, although the lessons are given in Arabic. In addition, the students are exposed to written Hebrew media, which is more widespread than media in Arabic. In contrast, Arabic is more common in oral form during class hours, as well as in street conversation, at home, and in the primary electronic media (radio and television). Regarding usage habits in Arabic, there is a deliberate and established trend in school policy to expand the use of the literary language in speech in order to strengthen the roots of Literary Arabic among the population. This desire is emphasized in light of the continuing decline in the use of the literary language among the Arab population, at least in our country. The interaction can thus be explained by relating the level of exposure to, and usage of, the two languages to the level of competence (reaction times and accuracy) in the subjects' lexical decisions.
From the analysis, it appears that students with greater spoken exposure to, and usage of, Hebrew had better oral competence (in reaction time and accuracy), while students who used Literary Arabic more in spoken form performed faster and more accurately in oral identification. A similar pattern of results was obtained in the visual form.
This result corresponds with previous findings on the acquisition of a second language, which show clearly that use of the visual form is broader in the second language, since that language is learned more through reading than through speech. In the case of the acquisition of Hebrew by Arabic speakers, this reliance on the visual form is supported by the testimony of teachers and students from the same population. This different pattern of cognitive performance in the second language is also reflected at the level of hemispheric performance. According to the hypothesis of stages of second language acquisition, the later the second language is learned (as is true for our population), the greater the involvement of the right hemisphere; only as expertise in the language increases is involvement of the left hemisphere observed (Albert & Obler, 1978). A study that examined children in 7th, 9th and 11th grades, whose mother tongue was Hebrew and who had learned English as a second language from grade 5, showed a right visual field advantage (RVFA) for Hebrew. For English, there was a left visual field advantage (LVFA), principally among 7th grade students, while this advantage completely disappeared by grade 11 (Silverberg, 1979). These findings support the stage theory but are not sufficient to negate the theory that the right hemisphere is important for the initial learning of a second language.
Another result that sheds light on the impact of the usage of these languages on their cognitive standing relates to the frequency effect. The participants' performance with infrequent words was especially slow, with longer reaction times to infrequent words than to non-words in the oral presentation in Hebrew and the visual presentation in Literary Arabic. This is an unusual result in lexical studies, which was also found in our previous study (Bentin & Ibrahim, 1996). Furthermore, the error rate was larger for infrequent words than for non-words. A possible explanation for these findings is that the participants' mastery of the languages was not sufficient to recognize the infrequent words under rapid presentation and time pressure. It is worth noting that judges drawn from the same population were familiar with the infrequent words, which suggests that the problem lies in the level of command rather than in knowledge itself. In addition, this study did not find an identical frequency effect across the two languages and the two forms; that is, the pattern of the frequency effect in the visual and oral forms differs between the languages. In Hebrew, there is a significant disparity between the frequency effect in the oral form and in the visual form, the oral frequency effect being the greater. Activation models of oral word recognition, such as the Cohort Model (Marslen-Wilson, 1987), assume that lexical activation increases as more of the word is heard. Given that the initial activation level of a frequent word is greater than that of an infrequent one, and that activation grows with the passage of time, maximum frequency effects are expected in cases of slower reaction times. Support for this assumption of the model comes from a study by Connine, Titone and Wang (1993), who examined the word frequency effect in the identification of orally presented words.
In their study, they showed that in lists biased toward infrequent words (where overall reaction times were slow), the frequency effect was greater than for words that appeared in mixed lists. This evidence supports the view that the frequency effect occurs at later stages of spoken word identification. The fact that the frequency effect was greater in Hebrew than in Literary Arabic, however, appears to be related to the participants' lower initial level of recognition of spoken Hebrew words. This result strengthens the conclusion that the study participants were less accustomed to exposure to Hebrew in oral form.
From the analysis of error rates, it appears that the language effect is not significant: across presentation forms, there was no difference in error rate between the languages. Contrary to the significant effect of form on reaction times, no significant impact of form was found on error rates. As in the analysis of reaction times, however, a significant effect of the frequency factor was found. Despite these differences, the error analysis does not contradict the direction of the reaction-time results and supports some of them. Furthermore, the study results, as expressed in average reaction times and error rates, show that the participants had similar mastery of Literary Arabic and Hebrew, which strengthens earlier findings that Literary Arabic behaves like a second language for these speakers, much as Hebrew does (Ibrahim & Aharon-Peretz, 2005). By itself, independence of the lexicons for Literary Arabic and Hebrew does not imply language-selective access: it is possible that both lexicons of a bilingual are activated simultaneously to the extent that the input matches representations within each lexicon (Van Heuven, Dijkstra, & Grainger, 1998).
In regard to reading, the written symbols perceived and processed by the visual system must eventually be translated by the brain into sounds. Thus the auditory system, which is responsible for receiving, filtering and storing sounds, becomes a likely contributor to both normal and abnormal reading processes. Because of this contribution, it is possible that deficits in dyslexia and other reading problems (such as reading fluency) are related directly to auditory deficits. There is substantial empirical support for this contribution in the literature. An early study that investigated the development of grapheme-phoneme conversion ability in normal readers and reading-age-matched dyslexic readers concluded that dyslexics have a specific difficulty in grapheme-phoneme conversion (Snowling, 1980). In a more recent study, second and sixth grade poor and normal readers attempted to retain orally and visually presented target letters for 0, 4, or 10 seconds while shadowing letter names presented orally in rapid succession (Huba, Vellutino & Scanlon, 1990). Target letters sounded either similar or dissimilar to the shadowing letters. Since shadowing presumably disrupted phonological encoding of the target stimuli, it was possible to evaluate the reader groups' auditory and visual retention when they could not rely on such encoding. Consistent with the phonological coding interpretation of individual differences in reading ability, normal readers were less accurate than poor readers in auditory target letter recall when phonological encoding was disrupted; normal readers were inferior to poor readers in visual target recall as well. As expected, differences between reader groups in phonological encoding were more strongly indicated in second than in sixth grade, suggesting that older poor readers' sensitivity to phonological information is more like that of normal readers.
In a series of studies using behavioral (reaction time, RT) and electrophysiological (event-related potential, ERP) measures, Breznitz and her colleagues examined differences between regular and dyslexic adult bilingual readers processing reading and reading-related skills in their first (L1, Hebrew) and second (L2, English) languages (e.g., Breznitz & Meyler, 2003; Breznitz, 2003a; Oren & Breznitz, 2005). In the first study (Breznitz & Meyler, 2003), they investigated speed of processing (SOP) among college-level adult dyslexic and normal readers in nonlinguistic and sublexical linguistic auditory and visual tasks, and in a nonlinguistic cross-modal choice reaction task. The results revealed that RT and ERP latencies were longest in the cross-modal task. Moreover, the gap between ERP latencies in the visual versus the auditory modality for each component was larger among dyslexic than among normal readers, and was particularly evident at the linguistic level. These results support the hypothesis of an amodal, basic SOP deficit among dyslexic readers. The slower cross-modal SOP is attributed to slower information processing in general and to a disproportionate "asynchrony" between SOP in the visual and the auditory system. It is suggested that excessive asynchrony in the SOP of the two systems may be one of the underlying causes of the impaired reading skills of poor readers and dyslexics. Our data are in line with those reported by Breznitz and her colleagues and support the 'script dependent' hypothesis by demonstrating universal deficits in L1 and L2 among regular and dyslexic readers, along with differential manifestations of these deficits as a function of specific orthographic features. According to our results, the automatization deficit that underlies dyslexics' and poor readers' reading performance is due, at least partially, to the unique features of the languages and orthographies involved (Oren & Breznitz, 2005).

Taken together, the findings of this study concerning Arabic, in addition to findings from our recent studies (see Eviatar, Leiken & Ibrahim, 1999; Ibrahim, Eviatar & Aharon-Peretz, 2002; Eviatar, Ibrahim & Ganayim, 2004), support the notion that Arabic has unique features that contribute to the inhibition and slowing of the reading process. In that regard, the findings do not allow us to ignore the fact that a normal Arab child (and, to an even greater extent, a dyslexic child) who encounters special difficulties in reading acquisition needs special pedagogical methods and systematic professional intervention to overcome the difficulties that the Arabic language imposes.
In addition, this study sheds light on the relationship between visual and auditory language mechanisms during reading in regular readers, and offers new psycholinguistic evidence for understanding the dynamics of processing two languages in bilinguals. Specifically, it contributes to our understanding of the roles of modality, language status (L1, L2) and language characteristics in the processing of a specific language. Although our focus was on children with normal abilities (or poor readers) and not on children with abnormal abilities (dyslexics), the pattern of results obtained could be considered applicable, in some limited respects, to dyslexic populations, since verbal information mechanisms are universal; this, however, awaits clarification in future studies.


Bentin, S., Bargai, N., & Katz, L. (1984). Graphemic and phonemic coding for lexical access: Evidence from Hebrew. Journal of Experimental Psychology: Learning, Memory and Cognition, 10, 353-368.

Bentin, S., & Frost, R. (1987). Processing lexical ambiguity and visual word recognition in a deep orthography. Memory & Cognition, 15, 13-23.

Bentin, S., & Ibrahim, R. (1996). Reading a language without orthography: New evidence for access to phonology during visual word perception: The case of Arabic. Journal of Experimental Psychology: Learning, Memory, and Cognition, 22(2), 309-323.

Berman, R.A. (1978). Modern Hebrew structure. Tel Aviv, Israel: University Publishing.

Best, C.T. & Strange, W. (1992). Effects of phonological and phonetic factors on cross-language perception of approximates. Journal of Phonetics, 20, 305-330.

Bijeljac-Babic, R., Biardeau, A., & Grainger, J. (1997). Masked orthographic priming in bilingual word recognition. Memory & Cognition, 25, 447-457.

Breznitz, Z. (2003a). Speed of phonological and orthographic processing as factors in dyslexia: Electrophysiological evidence. Genetic, Social and General Psychology Monographs, 129(2): 183-206.

Breznitz, Z., & Meyler, A. (2003). Speed of lower-level auditory and visual processing as a basic factor in dyslexia: Electrophysiological evidence. Brain & Language, 85(1): 166-184.

Coltheart, M. (1980). Deep dyslexia: A right-hemisphere hypothesis. In M. Coltheart, K. E. Patterson, & J. C. Marshall (Eds.), Deep dyslexia (pp. 326-380). Boston, MA: Routledge & Kegan Paul.

Coltheart, M., Rastle, K., Perry, C., Langdon, R., & Ziegler, J. (2001). DRC: A dual route cascaded model of visual word recognition and reading aloud. Psychological Review, 108, 204-256.

Connine, C. M., Titone, D., & Wang, J. (1993). Auditory word recognition: Extrinsic and intrinsic effects of word frequency. Journal of Experimental Psychology: Learning, Memory, and Cognition, 19(1), 81-94.

Elman, J. L., & McClelland, J. L. (1984). Speech perception as a cognitive process: The interactive activation model. In N. Lass (Ed.), Speech and language (Vol. 10). New York: Academic Press.

Eviatar, Z., Ibrahim, R., & Ganayim, D. (2004). Orthography and the hemispheres: Visual and linguistic aspects of letter processing. Neuropsychology, 18(1), 174-184.

Eviatar, Z., Leiken, M., & Ibrahim, R. (1999). Phonological processing of second language phonemes: A selective deficit in a bilingual aphasic. Language Learning, 49(1), 121-141.

Flege, J. E. (1992). Speech learning in a second language. In C. A. Ferguson, L. Menn, & C. Stoel-Gammon (Eds.), Phonological development: Models, research, and implications (pp. 565-604). Timonium, MD: York Press.

Flege, J. E., Yeni-Komshian, G. H., & Liu, S. (1999). Age constraints on second language acquisition. Journal of Memory and Language, 41, 78-104.

Frost, R., Forster, K. I., & Deutsch, A. (1997). What can we learn from the morphology of Hebrew? A masked-priming investigation of morphological representation. Journal of Experimental Psychology: Learning, Memory, and Cognition, 23, 829-856.

Frost, R. (1995). Phonological computation and missing vowels: Mapping lexical involvement in reading. Journal of Experimental Psychology: Learning, Memory, and Cognition, 21, 398–408.

Frost, R., Katz, L., & Bentin, S. (1987). Strategies for visual word recognition and orthographical depth: A multilingual comparison. Journal of Experimental Psychology: Human Perception and Performance, 13, 104–115.

Goetry, V., & Kolinsky, R. (2000). The role of rhythmic cues for speech segmentation in monolingual and bilingual listeners. Psychologica Belgica, 40(3), 115-152.

de Groot, A.M.B. (1995). Determinants of bilingual lexico-semantic organization. Computer Assisted Language Learning, 8, 151-180.

Grainger, J., & Frenck-Mestre, C. (1998). Masked priming by translation equivalents in proficient bilinguals. Language and Cognitive Processes, 13, 601-623.

Guiora, A. Z. (1994). The two faces of language ego. Psychologica Belgica, 34(2-3): 83-97.

Hinton, G. E., & Shallice, T. (1991). Lesioning an attractor network: Investigations of acquired dyslexia. Psychological Review, 98(1), 74-95.

Huba, M. E., Vellutino, F. R., & Scanlon, D. M. (1990). Auditory and visual retention in poor and normal readers when verbal encoding is disrupted. Learning and Individual Differences, 2(1), 95-112.

Ibrahim, R., & Aharon-Peretz, J. (2005). Is literary Arabic a second language for native Arab speakers? Evidence from a semantic priming study. Journal of Psycholinguistic Research, 34(1), 51-70.

Ibrahim, R., Eviatar, Z., & Aharon-Peretz, J. (2002). The characteristics of the Arabic orthography slow its cognitive processing. Neuropsychology, 16(3), 322-326.

Jared, D., & Seidenberg, M. S. (1991). Does word perception proceed from spelling to sound to meaning? Journal of Experimental Psychology: General, 120, 358-394.

Johnson, J. S., & Newport, E. L. (1989). Critical period effects in second language learning: The influence of maturational state on the acquisition of English as a second language. Cognitive Psychology, 21, 60-99.

Kroll, J. F., & Stewart, E. (1994). Category interference in translation and picture naming: Evidence for asymmetric connections between bilingual memory representations. Journal of Memory and Language, 33, 149-174.

Kroll, J. F., & de Groot, A. M. B. (1997). Lexical and conceptual memory in the bilingual: Mapping form to meaning in two languages. In A. M. B. de Groot & J. F. Kroll (Eds.), Tutorials in bilingualism: Psycholinguistic perspectives (pp. 201-224). Mahwah, NJ: Lawrence Erlbaum.

Marslen-Wilson, W. (1987). Functional parallelism in spoken word perception. Cognition, 25, 71-102.

Major, R.C. (1999). Chronological and stylistic aspects of second language acquisition of consonant clusters. Language Learning Supplement 1, 123-150.

Mahadin, R. S. (1982). The morphophonemics of the Standard Arabic triconsonantal verbs. Doctoral dissertation, University of Pennsylvania, Philadelphia.

Morton, J., and Patterson, K. (1980). A new attempt at an interpretation, or, an attempt at a new interpretation. In Coltheart, M., Patterson, K., and Marshall, J.C. (Eds.) Deep Dyslexia. London: Routledge and Kegan Paul.

Oren, R., & Breznitz, Z. (2005). Reading processes in L1 and L2 among dyslexic as compared to regular bilingual readers: Behavioral and electrophysiological evidence. Journal of Neurolinguistics, 18(2), 127-151.

Paradis, M. (1997). The cognitive neuropsychology of bilingualism. In A. M. B. de Groot & J. F. Kroll (Eds.), Tutorials in bilingualism: Psycholinguistic perspectives (pp. 331-354). Mahwah, NJ: Lawrence Erlbaum.

Piske, T., MacKay, I., & Flege, J. (2001). Factors affecting degree of foreign accent in an L2: A review. Journal of Phonetics, 29, 191-215.

Plaut, D. C., McClelland, J. L., Seidenberg, M. S., & Patterson, K. E. (1996). Understanding normal and impaired word reading: Computational principles in quasi-regular domains. Psychological Review, 103, 56-115.

Roman, G., & Pavard, B. (1987). A comparative study: How we read Arabic and French. In J. K. O’Regan & A. Levy-Schoen (Eds.), Eye movement: From physiology to cognition (pp. 431-440). Amsterdam, The Netherlands: North Holland Elsevier.

Seidenberg, M. S. (1995). Visual word perception: An overview. In P. Eimas & J. L. Miller (Eds.), Handbook of perception and cognition: Language. New York: Academic Press.

Snowling, M. J. (1980). The development of grapheme-phoneme correspondence in normal and dyslexic readers. Journal of Experimental Child Psychology, 29(2), 294-305.

Taraban, R., & McClelland, J. L. (1987). Conspiracy effects in word recognition. Journal of Memory and Language, 26, 608-631.

Van Orden, G. C., Johnston, J. C., & Hale, B. L. (1988). Word perception in reading proceeds from spelling to sound to meaning. Journal of Experimental Psychology: Learning, Memory, and Cognition, 14, 371-386.

Van Heuven, W. J. B., Dijkstra, T., & Grainger, J. (1998). Orthographic Neighborhood Effects in Bilingual word perception. Journal of Memory and Language, 39(3), 458-483.


Language questionnaire

1. What is the official language at school?
a. literary Arabic b. spoken Arabic c. Hebrew
Answer question number 2 only if your answer to question 1 is a. or b.
2. What is the language that is spoken in lessons that are not language lessons?
a. only spoken Arabic b. mainly spoken Arabic c. spoken Arabic and literary Arabic to the same extent d. mainly literary Arabic
e. only literary Arabic
3. To what extent do teachers speak Hebrew in lessons that are not Hebrew lessons?
a. not at all b. very little c. moderately d. very much e. to a very great extent
4. To what extent do teachers require students to speak literary Arabic during lessons?
a. not at all b. very little c. moderately d. very much e. to a very great extent
5. To what extent do you use Hebrew in your speech?
a. not at all b. very little c. moderately d. very much e. to a very great extent
6. To what extent do you use literary Arabic in your speech?
a. not at all b. very little c. moderately d. very much e. to a very great extent
7. In which language do you usually watch TV programs (entertainment, news, etc.)?
a. spoken Arabic b. literary Arabic c. both forms of Arabic d. only Hebrew
e. both Hebrew and Arabic
8. In which language are the textbooks for subjects other than languages written (mathematics, physics, chemistry, biology, etc.)?
a. only in literary Arabic b. mainly in literary Arabic c. in Hebrew and Arabic at the same extent d. mainly in Hebrew e. only in Hebrew
9. In which language do you read academic material?
a. only in literary Arabic b. mainly in literary Arabic c. in Hebrew and Arabic at the same extent d. mainly in Hebrew e. only in Hebrew
10. In which language do you read non-academic materials (newspapers, magazines, books)?
a. only in literary Arabic b. mainly in literary Arabic c. in Hebrew and Arabic at the same extent d. mainly in Hebrew e. only in Hebrew
11. In which language do you write letters to friends, or messages and notes in your diary?
a. only in literary Arabic b. mainly in literary Arabic c. in Hebrew and Arabic at the same extent d. mainly in Hebrew e. only in Hebrew
12. In which language do you read the subtitles that are shown in foreign movies in the Israeli broadcast stations?
a. only in literary Arabic b. mainly in literary Arabic c. in Hebrew and Arabic at the same extent d. mainly in Hebrew e. only in Hebrew
