The Now and Future of Rion

I'd like to begin this interview by asking you about the research and development you're currently working on.

My work is basically focused on hearing aid development research. I've recently been working on developing a function that reduces noise to make speech easier to hear, using deep neural networks as an application of AI technology. I'm in charge of everything from simulation to product implementation. Difficulty discerning speech in noisy environments can significantly affect a person's life: they might miss important announcements at airports and train stations, or have a hard time talking to friends at cafes and pubs. So improving speech perception in noisy environments has always been a major challenge for hearing aids. An abundance of recent research shows that we can enhance speech perception using deep neural networks. Hearing aids, however, are compact devices worn throughout the day, and their dimensions limit the computational power available inside them, so research results can't always be incorporated directly. We are working on ways to overcome these physical limitations.

I'm mainly involved in research on the audiometers used by otolaryngologists (ear, nose, and throat doctors). There's a part of the ear called the cochlea; it's the first part of the ear to sense incoming sound. Otoacoustic emission (OAE) tests have long been used in clinical practice to assess the health of the cochlea, but we are working with outside universities and hospitals to research and develop new methods of evaluating pathologies and characteristics of the cochlea that cannot be determined by existing tests.

What would you like to ask each other about your research fields?

This may sound like an amateurish question, but I think it's amazing that when you make a sound, the sound is reflected from inside the ear. What's the underlying mechanism?

The cochlea converts the sound that enters the ear as vibrations into electrical signals, which are then transmitted to the neural pathways connected to the brain. The cochlea also has a function that amplifies the sound to transmit clearer signals. Some of the energy generated in the process of amplifying the sound flows back and escapes out of the ear as sound. If the cochlea isn't in good health, this amplification function is compromised, and no sound comes out of the ear. It's quite fascinating, isn't it? [laughs] As for me, I have no direct experience with hearing aid design, but I'm curious how other hearing aid manufacturers, including foreign manufacturers, are currently incorporating AI technologies.

Almost all companies, including Rion, have already started incorporating AI technologies into their products, but only a small number of products based on deep learning have been introduced to date. These are mainly used for adjusting and optimizing hearing aids or for improving speech perception in noisy environments. I believe we'll see dramatic improvements in hearing aid performance with the application of deep learning in the near future, and Rion is working daily to develop AI technologies that provide new value to users. Are AI technologies in use in your research field, Mr. Ebina?

There's been little progress in that direction in the field of otolaryngology. I think the fundamental value provided by AI is that it can constantly adapt products to their environments by incorporating new data and continue to improve performance. But with medical devices, unfortunately, this can pose risks.
For example, overfitting can degrade performance, or the AI's output can be overly complicated, leading to misdiagnoses and even harm to the patient. AI offers many prospective advantages, including the discovery of undiagnosed diseases and the provision of useful information for diagnosis by mining medical records, test data, and other big data sources. However, guaranteeing safety and performance after shipment and service launch remains an issue, and I think this has been an obstacle to its widespread application.

What do you foresee for your research and development? What are your visions for the near and far future?

In the near future, I think we'll have less trouble with speech perception in noisy environments. This will contribute to society: people who used to have difficulty engaging in conversation and were reluctant to communicate with others will become more active, strengthening social bonds. Active social participation is known to help extend healthy life expectancy, including by preventing dementia, so I think our research results can be fruitful in this domain. As for the future 100 years from now, that's hard to imagine. Maybe we'll live in a telepathic world, with chips implanted in our bodies that feed sound directly to our brains. I doubt we'll have any trouble conveying our thoughts in words.

What I'm interested in is applying AI to the neurophysiological modeling of the auditory system using deep neural networks. This will require gathering vast volumes of training data and clarifying the neurophysiological correspondence between the learned models and the auditory system, but it has potential applications in many areas, including understanding the hearing of people with hearing impairments, visualizing their pathological conditions, and predicting prognoses. I think a tool like this would allow even physicians and nurses who are unfamiliar with the field to make accurate diagnoses and consider treatment plans.

Masatoshi Osawa
Hearing Care, Sound and Vibration Measurement Technology Development Group, R&D Department, Technical Development Center. Since joining Rion in 2012, he has devoted himself to research and development aimed at improving hearing aid performance and technologies.