An app that can detect coronavirus in your voice has been developed in a major scientific breakthrough.
The AI-powered technology is easier to use and more accurate than a lateral flow test, scientists say.
The mobile app takes less than a minute to produce a result, correctly identifying positive cases 89 percent of the time and negative cases 83 percent of the time.
In contrast, the accuracy of lateral flow tests varies widely depending on the brand, and the nose and throat swabs are less reliable at picking up infectious people who have no symptoms.
The new app could be used to very quickly screen people for the bug before they attend mass events such as concerts and big sports matches.
It could also be deployed in poorer countries where gold-standard PCR tests are very expensive and often difficult to distribute.
The Dutch researchers say coronavirus usually affects the upper respiratory tract and vocal cords, leading to changes in a person's voice.
Users asked to record respiratory sounds
The team decided to investigate whether it was possible to detect the novel virus in people’s voices.
To develop the app, they used data from the University of Cambridge's crowdsourced Covid-19 Sounds App, which contains 893 audio samples from 4,352 participants, 308 of whom had tested positive for the virus.
The app is installed on the user’s mobile phone and the participants report some basic details about demographics, medical history and smoking status.
They are then asked to record some respiratory sounds, which include coughing three times, breathing deeply through their mouth three to five times, and reading a short sentence on the screen three times.
The researchers used a voice analysis technique called Mel-spectrogram analysis, which identifies different voice features such as loudness, power and variation over time.
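The article does not publish the team's code, but the idea behind a Mel-spectrogram can be sketched with plain NumPy: frame the audio, take the power spectrum of each frame, and project it onto a bank of triangular filters spaced on the mel scale (a perceptual frequency scale). All parameter values below (sample rate, FFT size, number of mel bands) are illustrative assumptions, not the researchers' settings.

```python
import numpy as np

def hz_to_mel(f):
    """Convert frequency in Hz to the mel scale."""
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    """Convert mel values back to Hz."""
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_spectrogram(signal, sr=16000, n_fft=512, hop=256, n_mels=40):
    """Magnitude STFT of the signal projected onto a triangular mel filterbank."""
    # Slice the signal into overlapping Hann-windowed frames
    window = np.hanning(n_fft)
    n_frames = 1 + (len(signal) - n_fft) // hop
    frames = np.stack([signal[i * hop : i * hop + n_fft] * window
                       for i in range(n_frames)])
    # Power spectrum of each frame
    power = np.abs(np.fft.rfft(frames, axis=1)) ** 2
    # Build triangular filters with centers equally spaced in mel
    mel_pts = np.linspace(hz_to_mel(0), hz_to_mel(sr / 2), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fbank = np.zeros((n_mels, n_fft // 2 + 1))
    for m in range(1, n_mels + 1):
        lo, center, hi = bins[m - 1], bins[m], bins[m + 1]
        for k in range(lo, center):
            fbank[m - 1, k] = (k - lo) / max(center - lo, 1)
        for k in range(center, hi):
            fbank[m - 1, k] = (hi - k) / max(hi - center, 1)
    # Log-compress so changes in loudness and power become additive features
    return np.log(power @ fbank.T + 1e-10)

# One second of a 440 Hz tone stands in for a voice recording
sr = 16000
t = np.arange(sr) / sr
spec = mel_spectrogram(np.sin(2 * np.pi * 440 * t), sr=sr)
print(spec.shape)  # (time frames, mel bands)
```

The resulting grid of log-energies over time is what lets a model track features such as loudness, power and variation over time, as described above.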
To distinguish the voices of Covid-19 patients from those who did not have the disease, the team built different artificial intelligence models and evaluated which one worked best at classifying positive cases.
One model, called Long Short-Term Memory (LSTM), outperformed the others.
It is based on neural networks, which mimic the way the human brain operates by recognizing underlying relationships in data.
Because it can store information in its memory, the LSTM works with sequences, making it suitable for modeling signals collected over time, such as the voice.
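The "memory" the text describes lives in the LSTM's cell state, which gates update at each time step. The following is a minimal sketch of a single LSTM cell in NumPy, not the researchers' model; the dimensions and random weights are purely illustrative.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, U, b):
    """One LSTM step: gates decide what to forget, store, and output."""
    z = W @ x + U @ h + b            # all four gates from one affine map
    n = h.shape[0]
    i = sigmoid(z[0 * n : 1 * n])    # input gate: how much new info to store
    f = sigmoid(z[1 * n : 2 * n])    # forget gate: how much memory to keep
    g = np.tanh(z[2 * n : 3 * n])    # candidate cell update
    o = sigmoid(z[3 * n : 4 * n])    # output gate: how much memory to reveal
    c_new = f * c + i * g            # cell state carries long-range memory
    h_new = o * np.tanh(c_new)       # hidden state exposed to the next layer
    return h_new, c_new

# Feed a toy 3-dimensional feature sequence (e.g. per-frame voice features)
rng = np.random.default_rng(0)
n_in, n_hid, T = 3, 8, 20
W = rng.normal(0, 0.1, (4 * n_hid, n_in))
U = rng.normal(0, 0.1, (4 * n_hid, n_hid))
b = np.zeros(4 * n_hid)
h, c = np.zeros(n_hid), np.zeros(n_hid)
for t in range(T):
    h, c = lstm_step(rng.normal(size=n_in), h, c, W, U, b)
print(h.shape)  # final hidden state summarizes the whole sequence
```

In a classifier like the one described, the final hidden state would feed a small output layer that predicts positive or negative.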
Tests can be provided at no cost
Wafaa Aljbawi, a researcher from the University of Maastricht, said: “These promising results suggest that simple voice recordings and fine-tuned AI algorithms can potentially achieve high precision in determining which patients have Covid-19 infection.
“Such tests can be provided at no cost and are simple to interpret. Moreover, they enable remote, virtual testing and have a turnaround time of less than a minute.
“They could be used, for example, at the entry points for large gatherings, enabling rapid screening of the population.
“These results show a significant improvement in the accuracy of diagnosing Covid-19 compared to state-of-the-art tests such as the lateral flow test.
“The lateral flow test has a sensitivity of only 56 percent, but a higher specificity rate of 99.5 percent.
“This is important as it means that the lateral flow test is misclassifying infected people as Covid-19 negative more often than our test.
“In other words, with the AI LSTM model, we could miss 11 out of 100 cases who would go on to spread the infection, while the lateral flow test would miss 44 out of 100 cases.
“The high specificity of the lateral flow test means that only one in 100 people would be wrongly told they were Covid-19 positive when, in fact, they were not infected, while the LSTM test would wrongly diagnose 17 in 100 non-infected people as positive.
“However, since this test is virtually free, it is possible to invite people for PCR tests if the LSTM tests show they are positive.”
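The trade-off in the quotes above follows directly from the quoted sensitivity and specificity figures. As a quick check, treating those percentages as exact:

```python
def screen_outcomes(sensitivity, specificity, n=100):
    """Expected errors per n infected and per n non-infected people."""
    missed_infected = round((1 - sensitivity) * n)   # false negatives
    false_alarms = round((1 - specificity) * n)      # false positives
    return missed_infected, false_alarms

lstm = screen_outcomes(sensitivity=0.89, specificity=0.83)
lateral_flow = screen_outcomes(sensitivity=0.56, specificity=0.995)
print(lstm)          # the AI model: (11, 17) - misses 11, 17 false alarms
print(lateral_flow)  # lateral flow: (44, 1) - misses 44, 1 false alarm
```

This is why the researchers suggest the cheap LSTM screen first, with PCR confirming any positives: it misses far fewer infectious people, at the cost of more false alarms.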
Further research is needed before the app can be used
The team says further research with more participants needs to be done before the app can start appearing on people’s phones.
Since the start of the project, 53,449 audio samples from 36,116 participants have been collected, which can be used to improve and validate the accuracy of the model.
The team is also carrying out more analysis to understand which parameters in the voice are influencing the AI model.
The findings will be presented at the European Respiratory Society International Congress in Barcelona, Spain.