(2024). Speech Analysis of Teaching Assistant Interventions in Small Group Collaborative Problem Solving with Undergraduate Engineering Students. British Journal of Educational Technology, v55 n4 p1583-1601. This descriptive study focuses on using voice activity detection (VAD) algorithms to extract student speech data in order to better understand collaboration in small group work and the impact of teaching assistant (TA) interventions in undergraduate engineering discussion sections. Audio data were recorded from individual students wearing head-mounted noise-cancelling microphones. Video data of each student group were manually coded for the collaborative behaviours (eg, group task relatedness, group verbal interaction and group talk content) of students and for TA-student interactions. The analysis includes information about turn-taking, overall speech duration patterns and the amount of overlapping speech observed both when TAs were intervening with groups and when they were not. We found that TAs very rarely provided explicit support regarding collaboration. Key speech metrics, such as amount of turn overlap and maximum turn duration, revealed important information about the nature of… [Direct]
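The speech metrics this record names (overall speech duration, maximum turn duration and amount of turn overlap) can be derived from per-speaker VAD segments. The sketch below assumes a simple interval representation (speaker → list of `(start, end)` times in seconds); the data format and function names are illustrative assumptions, not the study's implementation.

```python
# Sketch: turn-level speech metrics from per-speaker VAD output.
# The (start, end) interval format is an assumption for illustration.
from itertools import combinations

def total_speech(segments):
    """Total voiced time for one speaker (seconds)."""
    return sum(end - start for start, end in segments)

def max_turn(segments):
    """Longest single turn for one speaker (seconds)."""
    return max(end - start for start, end in segments)

def overlap_duration(a, b):
    """Summed time where two speakers' segments overlap."""
    total = 0.0
    for s1, e1 in a:
        for s2, e2 in b:
            total += max(0.0, min(e1, e2) - max(s1, s2))
    return total

# Hypothetical two-student group for demonstration.
vad = {
    "student_1": [(0.0, 2.5), (5.0, 9.0)],
    "student_2": [(2.0, 4.0), (8.0, 10.0)],
}
for name, segs in vad.items():
    print(name, total_speech(segs), max_turn(segs))
for (n1, s1), (n2, s2) in combinations(vad.items(), 2):
    print(n1, n2, overlap_duration(s1, s2))
```

Comparing these metrics with TA presence intervals (not shown) would mirror the study's contrast between intervention and non-intervention periods.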
(2024). Conversation Disruptions in Early Childhood Predict Executive Functioning Development: A Longitudinal Study. Developmental Science, v27 n1 e13414. Conversational turn-taking is a complex communicative skill that requires both linguistic and executive functioning (EF) skills, including processing input while simultaneously forming and inhibiting responses until one's turn. Adult-child turn-taking predicts children's linguistic, cognitive, and socioemotional development. However, little is understood about how disruptions to temporal contingency in turn-taking, such as interruptions and overlapping speech, relate to cognitive outcomes, and how these relationships may vary across developmental contexts. In a longitudinal sample of 275 socioeconomically diverse mother-child dyads (children 50% male, 65% White), we conducted pre-registered examinations of whether the frequency of dyads' conversational disruption during free play when children were 3 years old related to children's EF (9 months later), self-regulation skills (18 months later), and externalizing psychopathology in early adolescence (age 10-12… [Direct]
(2022). A Tool for Differential Diagnosis of Childhood Apraxia of Speech and Dysarthria in Children: A Tutorial. Language, Speech, and Hearing Services in Schools, v53 n4 p926-946 Oct. Purpose: While there has been mounting research centered on the diagnosis of childhood apraxia of speech (CAS), little has focused on differentiating CAS from pediatric dysarthria. Because CAS and dysarthria share overlapping speech symptoms and some children have both motor speech disorders, differential diagnosis can be challenging. There is a need for clinical tools that facilitate assessment of both CAS and dysarthria symptoms in children. The goals of this tutorial are to (a) determine confidence levels of clinicians in differentially diagnosing dysarthria and CAS and (b) provide a systematic procedure for differentiating CAS and pediatric dysarthria in children. Method: Evidence related to differential diagnosis of CAS and dysarthria is reviewed. Next, a web-based survey of 359 pediatric speech-language pathologists is used to determine clinical confidence levels in diagnosing CAS and dysarthria. Finally, a checklist of pediatric auditory-perceptual motor speech features is… [Direct]
(2021). Many Are the Ways to Learn: Identifying Multi-Modal Behavioral Profiles of Collaborative Learning in Constructivist Activities. International Journal of Computer-Supported Collaborative Learning, v16 n4 p485-523 Dec. Understanding the way learners engage with learning technologies, and its relation with their learning, is crucial for informing the design of effective learning interventions. Assessing the learners' state of engagement, however, is non-trivial. Research suggests that performance is not always a good indicator of learning, especially with open-ended constructivist activities. In this paper, we describe a combined multi-modal learning analytics and interaction analysis method that uses video, audio and log data to identify multi-modal collaborative learning behavioral profiles of 32 dyads as they work on an open-ended task around interactive tabletops with a robot mediator. These profiles, which we name "Expressive Explorers," "Calm Tinkerers," and "Silent Wanderers," confirm previous collaborative learning findings. In particular, the amount of speech interaction and the overlap of speech between a pair of learners are behavior patterns that strongly… [Direct]
(2016). Meaning-Making in Online Language Learner Interactions via Desktop Videoconferencing. ReCALL, v28 spec iss 3 p305-325 Sep. Online language learning and teaching in multimodal contexts has been identified as one of the key research areas in computer-assisted language learning (CALL) (Lamy, 2013; White, 2014). This paper aims to explore meaning-making in online language learner interactions via desktop videoconferencing (DVC) and in doing so illustrate multimodal transcription and analysis as well as the application of theoretical frameworks from other fields. Recordings of learner DVC interactions and interviews are qualitatively analysed within a case study methodology. The analysis focuses on how semiotic resources available in DVC are used for meaning-making, drawing on semiotics, interactional sociolinguistics, nonverbal communication, multimodal interaction analysis and conversation analysis. The findings demonstrate the use of contextualization cues, five codes of the body, paralinguistic elements for emotional expression, gestures and overlapping speech in meaning-making. The paper concludes with… [Direct]
(2017). The IATH ELAN Text-Sync Tool: A Simple System for Mobilizing ELAN Transcripts On- or Off-Line. Language Documentation & Conservation, v11 p94-102. In this article we present the IATH ELAN Text-Sync Tool (ETST; see community.village.virginia.edu/etst), a series of scripts and a workflow for playing ELAN files and associated audiovisual media in a web browser either on- or off-line. ELAN has become an indispensable part of the documentary linguist's toolkit, but it is less than ideal for mobilizing the transcribed media it allows linguists to create when these materials must be displayed in non-research settings where linguists are not the primary audience. In conjunction with display of a video or audio file, ETST plays tiers of transcript for overlapping speech, along with optional glosses, and distinguishes speakers with participant codes. Using ETST requires no programming knowledge, but with some such knowledge the tool can be readily customized to suit users' needs. To that extent, ETST is a simple browser-based transcript player that can be used either as is, "out of the box," or as a basis for further… [Direct]
(2018). Using Sensor Technology to Capture the Structure and Content of Team Interactions in Medical Emergency Teams during Stressful Moments. Frontline Learning Research, v6 n3 p123-147. In healthcare, action teams carry out complex medical procedures in intense and unpredictable situations to save lives. Previous research has shown that efficient communication, high-quality coordination, and coping with stress are particularly essential for high performance. However, precisely and objectively capturing these team interactions during stressful moments remains a challenge. In this study, we used a multimodal design to capture the structure and content of the team interactions of medical teams at moments of high arousal during a simulated crisis situation. Sociometric badges were used to measure the structure of team interactions, including speaking time, overlapping speech and conversational imbalance. Video coding was used to reveal the content of the team interactions. Furthermore, the Empatica E4 was used to unobtrusively measure the team leader's skin conductance to identify moments of high arousal. In total, 21 four-person teams of technical medicine students… [PDF]
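Of the structural metrics this record lists, "conversational imbalance" is the least self-explanatory. The sketch below operationalizes it as the population standard deviation of speaking-time shares, which is zero for perfectly balanced talk; this is one plausible definition for illustration, not necessarily the one used by the badges in the study.

```python
# Sketch: interaction-structure metrics from speaking-time totals.
# "Imbalance" here = population std. dev. of speaking-time shares,
# an assumed operationalization chosen for this illustration.
from statistics import pstdev

def speaking_shares(totals):
    """Fraction of total speech contributed by each team member."""
    whole = sum(totals.values())
    return {name: t / whole for name, t in totals.items()}

def imbalance(totals):
    """0.0 for perfectly balanced talk; grows as one member dominates."""
    return pstdev(speaking_shares(totals).values())

# Hypothetical four-person team, speaking time in seconds.
team = {"leader": 120.0, "member_a": 60.0, "member_b": 60.0, "member_c": 60.0}
print(speaking_shares(team))
print(round(imbalance(team), 3))
```

Time-slicing the totals by skin-conductance peaks (as with the Empatica E4 in the study) would let imbalance be compared between high- and low-arousal moments.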
(2020). Age Norms for Auditory-Perceptual Neurophonetic Parameters: A Prerequisite for the Assessment of Childhood Dysarthria. Journal of Speech, Language, and Hearing Research, v63 n4 p1071-1082 Apr. Purpose: The aim of this study was to collect auditory-perceptual data on established symptom categories of dysarthria from typically developing children between 3 and 9 years of age, for the purpose of creating age norms for dysarthria assessment. Method: One hundred forty-four typically developing children (3;0-9;11 [years;months], 72 girls and 72 boys) participated. We used a computer-based game specifically designed for this study to elicit sentence repetitions and spontaneous speech samples. Speech recordings were analyzed using the auditory-perceptual criteria of the Bogenhausen Dysarthria Scales, a standardized German assessment tool for dysarthria in adults. The Bogenhausen Dysarthria Scales (scales and features) cover clinically relevant dimensions of speech and allow for an evaluation of well-established symptom categories of dysarthria. Results: The typically developing children exhibited a number of speech characteristics overlapping with established symptom categories of… [Direct]
(2012). The Role of Instructors' Sociolinguistic Language Awareness in College Writing Courses: A Discourse Analytic/Ethnographic Approach. ProQuest LLC, Ph.D. Dissertation, Georgetown University. Grounded in literature on the miseducation of students whose native varieties of English differ most noticeably from the standard academic variety (Delpit 2006; Labov 1972a; Rickford 1999; Smitherman 1999; Wolfram, Adger, and Christian 1999; Wolfram and Schilling-Estes 2006), this dissertation examines the links between the sociolinguistic language awareness of college writing instructors and their discursive interactions with students. Using a case study approach that is at once broadly ethnographic and closely focused on unfolding discourse, the study concentrates on the language awareness of three European American teachers, two of whom teach basic (developmental) writing and one who teaches a more advanced technical writing class. After determining the three instructors' respective levels of language awareness through analysis of the pejorative or affirmative lexical choices they make when discussing the varieties of English their students speak, this study analyzes the… [Direct]
(1990). Repair of Overlapping Speech in the Conversations of Specifically Language-Impaired and Normally Developing Children. Applied Psycholinguistics, v11 n2 p201-15 Jun. A study examined the manner in which 10 specifically language-impaired children and their linguistically normal chronological age-matched peers repaired overlapping speech. Conversational samples from each student were elicited by an adult examiner. (26 references) (Author/CB)…
(1994). Can You See Whose Speech Is Overlapping?. Visible Language, v28 n2 p110-33 Spr. Discusses the types of overlapping speech, a common characteristic of speech that any annotation system must deal with. Critiques two types of current systems for marking overlaps. Describes software developed by the authors that not only accurately marks the boundaries of overlaps but presents them to the user in a readable format. (SR)…
(1992). A Sociolinguistic Analysis of the Interpreter's Role in Simultaneous Talk in Face-to-Face Interpreted Dialogue. Sign Language Studies, n74 p21-61 Spr. Explores the active role of the sign language interpreter in resolving simultaneous and overlapping speech, guided by social and linguistic knowledge of the entire communicative situation in making linguistic choices about what to interpret. (29 references) (Author/CB)…
(2013). Recognition of Amodal Language Identity Emerges in Infancy. International Journal of Behavioral Development, v37 n2 p90-94 Mar. Audiovisual speech consists of overlapping and invariant patterns of dynamic acoustic and optic articulatory information. Research has shown that infants can perceive a variety of basic auditory-visual (A-V) relations, but no studies have investigated whether and when infants begin to perceive the higher order A-V relations inherent in speech. Here, we asked whether and when infants become capable of recognizing amodal language identity, a critical perceptual skill that is necessary for the development of multisensory communication. Because, at a minimum, such a skill requires the ability to perceive suprasegmental auditory and visual linguistic information, we predicted that this skill would not emerge before higher-level speech processing and multisensory perceptual skills emerge. Consistent with this prediction, we found that recognition of the amodal identity of language emerges at 10-12 months of age but that when it emerges it is restricted to infants' native language…. [Direct]
(2024). Computational Modeling of the Segmentation of Sentence Stimuli from an Infant Word-Finding Study. Cognitive Science, v48 n3 e13427. Computational models of infant word-finding typically operate over transcriptions of infant-directed speech corpora. It is now possible to test models of word segmentation on speech materials, rather than transcriptions of speech. We propose that such modeling efforts be conducted over the speech of the experimental stimuli used in studies measuring infants' capacity for learning from spoken sentences. Correspondence with infant outcomes in such experiments is an appropriate benchmark for models of infants. We demonstrate such an analysis by applying the DP-Parser model of Algayres and colleagues to auditory stimuli used in infant psycholinguistic experiments by Pelucchi and colleagues. The DP-Parser model takes speech as input, and creates multiple overlapping embeddings from each utterance. Prospective words are identified as clusters of similar embedded segments. This allows segmentation of each utterance into possible words, using a dynamic programming method that maximizes the… [Direct]
(2024). Iconicity and Gesture Jointly Facilitate Learning of Second Language Signs at First Exposure in Hearing Nonsigners. Language Learning, v74 n4 p781-813. When learning a spoken second language (L2), words overlapping in form and meaning with one's native language (L1) help learners break into the new language. When nonsigning speakers learn a sign language as an L2, such overlaps are absent because of the modality differences (L1: speech, L2: sign). In such cases, nonsigning speakers might use iconic form-meaning mappings in signs or their own gestural experience as gateways into the to-be-acquired sign language. In this study, we investigated how both these phenomena may contribute jointly to the acquisition of sign language vocabulary by hearing nonsigners. Participants were presented with three types of signs in the Sign Language of the Netherlands (NGT): arbitrary signs, and iconic signs with high or low gesture overlap. Signs that were both iconic and highly overlapping with gestures boosted learning most at first exposure, and this effect remained the day after. Findings highlight the influence of modality-specific attributes supporting the… [Direct]