Development, Deployment and Evaluation of Personalized Learning Companion Robots for Early Literacy and Language Learning

pi_name

Abeer Alwan

pi_email

alwan@ucla.edu

pi_phone

(310) 206-2231

pi_department

Engineering

pi_title

Professor

ucla_faculty_sponsor

other_key_personnel

Co-PI Professor Alison Bailey (Education) 5-1731

abstract

This project is an NSF-sponsored collaboration between UCLA Engineering, UCLA Education, and the MIT Media Lab. We will develop, deploy, and evaluate personalized companion robots to help pre-K through 1st grade children learn language and vocabulary skills. The aim is to accelerate the impact of social robotics on early education in schools and at home.
The multi-year project will generate new insights into how to develop expressive, socially responsive robots that provide more effective, engaging, and empathetic educational experiences for young children. We will use Jibo, a state-of-the-art social robot (see http://time.com/5023212/best-inventions-of-2017).

project_summary

This research and development project will be implemented in two phases: an initial phase of short pilot deployments to train and iteratively refine the project's technologies and systems, followed by a longer-term deployment of the robot to examine autonomous interactions with social robots in school. During the development of the individual components (automatic reading and language assessment tools, an automatic question-generation algorithm, automatic speech recognition and spoken language understanding system models, and activities with the autonomous social robot learning companion), the project team will collect and analyze data with practical and performance measures, and refine and iterate each component of the system being developed.
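To make the question-generation component concrete, the following is a minimal illustrative sketch of template-based dialogic question generation in Python. The templates, function name, and character list are hypothetical assumptions for illustration only; the project's actual algorithm is under development and is not shown here.

    # Hypothetical sketch of template-based dialogic question generation;
    # not the project's actual algorithm.

    QUESTION_TEMPLATES = [
        "What do you think {character} will do next?",
        "Why do you think {character} felt that way?",
        "What would you do if you were {character}?",
    ]

    def generate_dialogic_questions(story_characters, max_questions=3):
        """Pair characters detected in a child's story with open-ended templates."""
        questions = []
        for character in story_characters:
            for template in QUESTION_TEMPLATES:
                questions.append(template.format(character=character))
                if len(questions) >= max_questions:
                    return questions
        return questions

    # Example: characters extracted from a child's retelling of a story.
    print(generate_dialogic_questions(["the rabbit", "the fox"]))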

To evaluate the impact of long-term interactions on educational outcomes, the UCLA project team will conduct a longitudinal study starting in Pre-Kindergarten and Kindergarten classrooms in Year 1 of the project and follow these students through Years 2 and 3.

In the final year of the project, two 4-month studies will be conducted in new Pre-Kindergarten and Kindergarten classrooms using the finalized, fully developed version of the robot (all tasks integrated within the robot). Observations of classroom practices with the robot may also be made. This project is expected to result in five key contributions: (1) automatic speech recognition (ASR) and spoken language understanding systems for young children's speech; (2) multi-modal automatic assessment algorithms for Pre-K through 1st grade children's spoken language and early reading skills; (3) automatic personalization algorithms for story content customization and dialogic question generation in the context of young children's verbal storytelling; (4) a fully autonomous, collaborative, peer-like social robot system with effective educational activities; and (5) long-term studies of deployed social robots in schools and homes, spanning several months and demonstrating sustained engagement and positive learning outcomes.

goals

As noted, the overarching aim is to accelerate the impact of social robotics on early education in schools and at home. The four-year project will pursue this aim by advancing knowledge in three key areas: (1) automatic speech recognition models for young children; (2) multi-modal student assessment algorithms for early language and literacy skills; and (3) personalization of activities, content, and dialogic question generation to boost language and literacy learning outcomes.

benefits_of_research

Fewer than half of Kindergarten-aged children have access to the resources necessary to prepare them for developing literacy skills (NIEER, 2013). Educational robots (desktop, child-appealing units) can help fill this gap. Our research focuses on the design and implementation of such robots for teaching reading and literacy skills to young children. The robots are programmed to understand child input, provide teachers with evaluative information on children's reading and literacy levels, and adapt their questions and teaching approaches to promote steady learning and improvement.
One of the most limiting factors in the development of such robots is the lack of reliable automatic speech recognition (ASR) technology for young children. While ASR for adults has improved vastly in recent years, ASR for children still lags far behind because of the large variability in children's acoustics and pronunciation, as well as the greater number of disfluencies in their speech. UCLA is currently researching children's ASR technologies and aims to improve the ability of machines to understand children's speech. The intended benefit of the research is improved assessment and support of young children's early language and literacy.
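As an illustration of the baseline gap described above, here is a minimal sketch of transcribing a recording with an off-the-shelf, adult-trained ASR model via the HuggingFace transformers pipeline. The model choice and the file name child_session.wav are assumptions for illustration; the project's child-adapted models are not shown.

    # Minimal baseline sketch (assumed setup; not the project's child-adapted
    # models). Requires: pip install transformers torch
    from transformers import pipeline

    # wav2vec2-base-960h is a publicly available model trained on adult read
    # speech; child speech typically yields much higher word error rates.
    asr = pipeline("automatic-speech-recognition",
                   model="facebook/wav2vec2-base-960h")

    # "child_session.wav" is a hypothetical 16 kHz mono recording of a child.
    result = asr("child_session.wav")
    print(result["text"])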
Direct benefit to the students of the Lab School is minimal because of the limited time spent with each student. However, working with researchers periodically and becoming familiar with the functionality of the newly developed social robots over time (2-3 years) could positively influence the young children's experiences with classroom robotics designed to enhance their oral language and early literacy.

dissemination/publications

We will submit research findings to both education and technology conferences and publications, with the intention of informing early childhood educators, language/reading experts, and developers in robotics, ASR, and the computing sciences.

number_of_subjects

100

selection_criteria

Approximately 50 students in Pre-K and K (EC I & II) will be recruited in Year 1 and followed for an additional two years. In Year 4 of the project, we will recruit approximately 50 additional new Pre-K and K students to implement the fully automated social robot and observe its use in classrooms (iSTEAM lab). Native speakers of English or Spanish and speakers of English as a second language are included; the selection criteria therefore include students in the dual-language program as well as students in English-medium classrooms. Complex learners, including students with speech, developmental, or learning disabilities, will also be included, because these students can be a target of the educational initiative of this application of social robotics research. However, students will need sufficient receptive proficiency in either English or Spanish to follow the researchers' directions and respond to the tasks. Teachers will be asked to identify students representing a range of English language proficiency at the start of the study and to advise which students may not be suitable because the researchers' directions are given only in English or Spanish and no other languages.

methods

Data will be collected through individual student interviews. Each interview will last 20-30 minutes and consist of a battery of early literacy and language tasks. If these young students become fatigued after 10-15 minutes, we will break the session into two 10-15 minute sessions and return on a separate day to complete the language and literacy protocol (see attached).
Students will work one-on-one with a researcher and the social robot (used to record the data in the early stages of development; later it will be fully interactive) in a quiet corner of the iSTEAM student lab at the Lab School. Other students may be engaged in their regular curriculum in other parts of the room, but all students will be introduced to the robot and encouraged to become familiar with its functionality at the start of the study.
Analyses:
1. Speech data will be used to develop ASR technologies.
2. Audio and video recordings of children's interactions with the robot will be evaluated for aspects such as contingency (i.e., the relevancy of child responses to robot-elicited language) and sustained attention, to characterize child-robot interaction behaviors and assess the feasibility of classroom use.
3. Children's performance on the various tasks will be scored and/or coded for language and literacy features. Relationships between oral language task performance and children's early literacy abilities will be examined using inferential statistical procedures (e.g., multiple regression techniques), as sketched below.
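As an illustration of the kind of analysis intended, here is a minimal sketch of a multiple regression in Python with statsmodels. The variable names and scores are hypothetical and do not represent project data or the final analysis plan.

    # Illustrative multiple regression sketch with hypothetical scores;
    # not project data. Requires: pip install pandas statsmodels
    import pandas as pd
    import statsmodels.api as sm

    # Hypothetical per-child scores: oral language task performance as
    # predictors, an early literacy composite as the outcome.
    df = pd.DataFrame({
        "picture_naming":   [12, 18, 9, 15, 20, 11],
        "story_generation": [3.5, 4.2, 2.8, 3.9, 4.8, 3.1],
        "literacy_score":   [41, 55, 33, 48, 60, 38],
    })

    # Regress the literacy composite on the oral language measures.
    X = sm.add_constant(df[["picture_naming", "story_generation"]])
    model = sm.OLS(df["literacy_score"], X).fit()
    print(model.summary())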

instruments



instruments_other

instrument_explanations

The battery of early literacy and language tasks comprises:
1. letter names and sounds (GFTA),
2. picture naming,
3. story generation, and
4. extended discourse responses to explanation prompts.

Please see the attached language and literacy protocol for all the task items.

justification_of_methods

These methods are necessary for collecting direct measures of young children's early language and literacy competencies, which the social robot will eventually be programmed to deliver. The current robot will be used to deliver the letter name and sound tasks and to record all student responses. The picture naming task will be conducted with an iPad, and the story generation and explanation tasks will be prompted by researchers in the first data collection efforts. The Year 4 implementations with the additional new Pre-K and K students will use a fully automated version of the social robot, with Jibo capable of asking contingent follow-up questions, etc. Field notes and video recordings made during observation of students' interactions with the fully automated robot in Year 4 will be necessary to evaluate how effective Jibo is at eliciting student responses. If we decide to additionally observe in the home, an addendum requesting additional parental consent will be created and submitted to the IRBs (UCLA and Lab School) at the start of Year 4.

separate_informed_consent

A separate informed consent will be necessary in Year 4 if the UCLA team decides to include home observations of social robot use, with parents and the longitudinal cohort students interacting with the fully automated Jibo together.

risk_minimization

As a result of completing the language and early literacy tasks with a relative stranger (i.e., the researcher), the student may experience minimal psychological discomfort.
However, if s/he feels uncomfortable at any time during the research session, s/he may terminate it with no penalty.

deception_debriefing

No deception is used in this study. In terms of debriefing, we will not give students, teachers, or parents individual score reports of their task performances. We can inform teachers in the aggregate of how their classes are performing on the tasks, and we will provide teachers with copies of the language and literacy protocols if they choose to use this information in their instructional decision-making (e.g., which letter names are known by what percentage of students).
Furthermore, students in the longitudinal cohort will continue to interact with Jibo as we integrate more technology into the robot, so they will have an annual opportunity to learn how Jibo is evolving with their assistance.

confidentiality_data_storage

Once eligible student participants have been identified and selected, we will assign a unique numerical identifier to each student. We will use the numerical identifiers on all individual student data to ensure confidentiality. Additionally, audio/video files will be assigned unique numerical identifiers. A roster containing the matched teacher/student names and numerical identifiers will be stored under lock and key, separately from individual data. Only key research personnel will have access to the roster with names and matching identifiers. All audio/video files will be stored on a secure UCLA server. Only Dr. Abeer Alwan and Dr. Alison Bailey will have access to the videos once the current study is complete, and only they will have access to the identifiers and/or codes at the close of the study. Any information in the recordings that might identify the students will be redacted in the transcribed data and removed from any recordings used in training and professional presentations.
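To illustrate the de-identification step described above, here is a minimal sketch that assigns each participant a random numeric identifier and writes the name-to-identifier roster to a separate, restricted file. The file name and ID format are hypothetical assumptions, not the project's actual procedure.

    # Illustrative pseudonymization sketch; file name and ID format are
    # hypothetical, not the project's actual procedure.
    import csv
    import secrets

    def assign_identifiers(names):
        """Map each participant name to a unique random 6-digit numeric ID."""
        roster = {}
        used = set()
        for name in names:
            new_id = secrets.randbelow(900000) + 100000
            while new_id in used:
                new_id = secrets.randbelow(900000) + 100000
            used.add(new_id)
            roster[name] = new_id
        return roster

    roster = assign_identifiers(["Student A", "Student B"])

    # The roster linking names to IDs is stored separately from all
    # de-identified data, with access restricted to key personnel.
    with open("roster_restricted.csv", "w", newline="") as f:
        csv.writer(f).writerows(roster.items())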

Only UC researchers/graduate students and research partners at MIT (our collaborators, funded by the same NSF grant initiative) approved by Dr. Alwan and Dr. Bailey will be able to obtain data collected in the current study. Their students may be allowed to use the collected data for future research studies.
Excerpts/samples of recordings and transcripts of the students' language, without identifying information, will be used in digital format for training and presentations.

other_notes

N/A

relationship_prior_contact

EC I & II teachers have met with the project team during a visit to the school by the MIT team to be introduced to Jibo and to be briefed on the work.

Primary teachers will be briefed before the start of Year 2 for the project continuation with the longitudinal cohort of students who transition from Kindergarten to 1st grade (Primary I).

teachers_staff_consent

Teacher consent for observations in Year 4 will be requested at the start of Year 4 if their interactions are likely to be part of the research on students' interactions with the fully automated Jibo robot.

ucla_lab_school_personnel_involved

EC I & II teachers.

academic_topic

iSTEAM lab time is requested so the children can explore the functionality of the robot in the context of science and technology design.

information_from_ucla_lab_school_database

Student names, gender, home language(s), DOB, teacher/classroom.

special_requirements_at_ucla_lab_school

N/A

estimated_start_date

2018-03-16

estimated_end_date

2021-08-31

irb

IRB#17-001939

irb_approval

Approved pending review of responses

attachments

