DEVELOPMENT OF A SENSOR-INTEGRATED INFANT MANNEQUIN AND VISUALIZATION SYSTEM FOR INFANT CARE PRACTICE
T. Toriyama, A. Urashima, J. Wakase, T. Terai, S. Takagi, Y. Matsumoto, Y. Minagawa, T. Kobayashi
Toyama Prefectural University (JAPAN)
One of the objectives of nursing education is the acquisition of infant care skills. To care for infants whose necks are not yet stable, caregivers must master techniques for handling their delicate joints as well as communication techniques, such as speaking softly to the baby. It has been reported that, beyond classroom learning, simulation education using model mannequins and the Humanitude technique (a care approach that combines "seeing," "speaking," and "touching" in daily life and has attracted attention in elderly care) can be effective for acquiring these skills. However, most of the model mannequins developed for infant care training are designed for practice in specific situations, and none can be used to teach everyday infant care.

We therefore formed a group of nursing education specialists and engineering faculty members. The nursing faculty drew up the functional specifications required for infant care education based on the Humanitude technique, and the engineering faculty designed and implemented those functions. The specifications comprised 12 items covering realism, detection abilities, and expressive abilities. To implement them, the engineering group designed a system consisting of a model mannequin that collects data and a visualization system that supports both real-time observation of infant care and review of the recorded data after a care session ends.

The mannequin's body is made entirely of silicone resin to replicate the texture of an infant's skin. It has the same head-to-body weight ratio as a real infant and contains an internal skeletal model with an unstable neck. Embedded sensors detect exactly where the mannequin is touched, and 3-axis accelerometers monitor the tilt of the head and body.
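As an illustration of how head and body tilt can be derived from such accelerometers, the gravity vector reported by a stationary 3-axis sensor can be converted to pitch and roll angles. This is a minimal sketch; the abstract does not describe the actual firmware, and the function name and axis convention are assumptions:

```python
import math

def tilt_from_accel(ax: float, ay: float, az: float) -> tuple[float, float]:
    """Estimate pitch and roll (degrees) from a static 3-axis
    accelerometer reading, using gravity as the reference vector.
    Valid only when the sensor is near-stationary."""
    pitch = math.degrees(math.atan2(-ax, math.hypot(ay, az)))
    roll = math.degrees(math.atan2(ay, az))
    return pitch, roll

# Sensor lying flat (gravity entirely on the z-axis): no tilt.
print(tilt_from_accel(0.0, 0.0, 1.0))  # (0.0, 0.0) up to sign of zero
```

A real system would low-pass filter the readings first, since hand movements during care add linear acceleration on top of gravity.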
A camera inside the right eye confirms whether the caregiver is making eye contact, and microphones inside the ears detect whether the caregiver is speaking to the infant. In addition, a speaker inside the mouth simulates crying to express emotions, and LEDs embedded in the cheeks change the complexion to reflect emotional states.

To verify the effectiveness of the system, we conducted two evaluation tasks: one assessed the realism of the prototype infant mannequin and its fulfillment of the Humanitude requirements; the other evaluated whether the system could guide users in correcting improper infant care techniques. For each task, one member of the nursing group played the role of a student while the remaining members acted as instructors; all members took turns in the student role. The results of the experiment were generally positive. However, low evaluations were given to the color changes used to represent sadness or distress, which we attribute to the LEDs' lighting duration and installation position. Scores were also lower for the fulfillment of the Humanitude "touch" component and for the system's ability to detect incorrect care. These issues stemmed from two factors: the interface displayed the mannequin's surface sensor data in 2D, which evaluators found visually inconsistent with the mannequin itself, and evaluators mistakenly expected the mannequin's surface capacitive sensors to provide pressure data.
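The last point reflects a property of capacitive sensing: each sensor reports only binary contact, not pressure. A touch log for the 2D review interface could therefore be as simple as a map from sensor IDs to body regions with timestamped on/off events. This is a hypothetical sketch; the sensor layout and region names are assumptions, not details from the abstract:

```python
from dataclasses import dataclass, field

# Hypothetical sensor-to-region map; the real mannequin's layout
# is not specified in the abstract.
SENSOR_REGIONS = {0: "head", 1: "neck", 2: "left_arm", 3: "right_arm",
                  4: "torso", 5: "left_leg", 6: "right_leg"}

@dataclass
class TouchLog:
    """Records binary contact events (capacitive sensors report touch,
    not pressure) for real-time display and post-session review."""
    events: list = field(default_factory=list)  # (time_s, region, touched)

    def record(self, sensor_id: int, touched: bool, t: float) -> None:
        self.events.append((t, SENSOR_REGIONS[sensor_id], touched))

    def regions_touched(self) -> set:
        """Regions that registered at least one contact event."""
        return {region for _, region, touched in self.events if touched}

log = TouchLog()
log.record(0, True, t=0.0)   # caregiver supports the head
log.record(4, True, t=1.5)   # and the torso
print(sorted(log.regions_touched()))  # ['head', 'torso']
```

Replaying such a log over a 2D body outline gives the review view described above; rendering it on a 3D model instead might reduce the visual inconsistency the evaluators reported.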

Keywords: Infant care practice, sensor-based system, infant mannequin, care visualization.

Event: INTED2025
Session: Emerging Technologies in Education
Session time: Tuesday, 4th of March from 08:30 to 13:45
Session type: POSTER