Google has introduced an AI model named Health Acoustic Representations (HeAR) that can help identify diseases such as tuberculosis (TB) and chronic obstructive pulmonary disease (COPD) by analyzing cough sounds. This bioacoustic foundation model, trained on roughly 300 million audio clips, including about 100 million cough sounds, can recognize six non-speech health events, such as coughing and breathing. By picking up subtle patterns in coughs, HeAR aims to make disease detection faster and more accessible.
The model performed strongly across a range of health acoustic tasks, including tuberculosis classification from cough sounds and lung-function monitoring from smartphone audio. Its self-supervised training makes it more versatile than task-specific deep learning models, which often generalize poorly beyond the data they were trained on. Google has made HeAR available to researchers to encourage further work on sound-based health diagnostics.
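The workflow this describes is the standard foundation-model recipe: a large self-supervised encoder produces embeddings, and a small classifier (a "linear probe") is trained on top for each downstream task, such as TB classification. The sketch below illustrates that second step only, using synthetic vectors as stand-ins for HeAR embeddings; it is a minimal illustration, not HeAR's actual API, and the data, dimensions, and hyperparameters are all hypothetical.

```python
import numpy as np

def train_linear_probe(X, y, lr=0.1, epochs=500):
    """Fit a logistic-regression probe on frozen embeddings by gradient descent."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        z = np.clip(X @ w + b, -30, 30)      # logits, clipped for numeric safety
        p = 1.0 / (1.0 + np.exp(-z))          # sigmoid probabilities
        grad_w = X.T @ (p - y) / len(y)       # gradient of the log loss
        grad_b = np.mean(p - y)
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

def predict(w, b, X):
    """Classify embeddings with the trained probe."""
    return (X @ w + b > 0).astype(int)

# Synthetic "embeddings": two well-separated classes standing in for
# cough representations from a pretrained encoder (hypothetical data).
rng = np.random.default_rng(0)
pos = rng.normal(loc=+1.0, scale=0.5, size=(100, 16))  # e.g. "TB-positive"
neg = rng.normal(loc=-1.0, scale=0.5, size=(100, 16))  # e.g. "TB-negative"
X = np.vstack([pos, neg])
y = np.array([1] * 100 + [0] * 100)

w, b = train_linear_probe(X, y)
acc = np.mean(predict(w, b, X) == y)
```

Because the encoder stays frozen, each new task needs only a small labeled dataset to train its probe, which is what makes this setup attractive for screening in low-resource settings.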
By detecting early warning signs through cough analysis, HeAR promises to make healthcare more proactive and accessible, supporting timely diagnosis and treatment. It illustrates the growing role of AI in advancing healthcare technology.