Search Articles

Search Results (1 to 6 of 6 Results)

Acoustic and Natural Language Markers for Bipolar Disorder: A Pilot, mHealth Cross-Sectional Study

Recordings used a single audio channel capturing the participant’s voice in a controlled environment with minimal acoustic interference. Both the raw audio and the transcribed text were processed to extract acoustic and NLP-based features from the speech output. The NLP and acoustic signal models were embedded in the back end of the mobile app. Consistent with recent evidence, we treated speech as verbal behavior, that is, the spoken output of the mental system underlying language [39].
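
As an illustration only, and not the models embedded in the study’s app, the sketch below shows one way to derive basic acoustic features from a single-channel recording and simple NLP features from its transcript; the libraries (librosa, NumPy) and helper names are assumptions.

```python
# Minimal sketch (assumed libraries, not the authors' pipeline):
# acoustic features from one audio channel plus simple NLP features
# from the transcript.
import librosa
import numpy as np

def acoustic_features(wav_path: str) -> dict:
    y, sr = librosa.load(wav_path, sr=16000, mono=True)   # single channel
    f0, voiced_flag, _ = librosa.pyin(y, fmin=65, fmax=400, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    return {
        "f0_mean": float(np.nanmean(f0)),             # mean pitch, voiced frames only
        "voiced_ratio": float(np.mean(voiced_flag)),  # share of voiced frames
        "mfcc_mean": mfcc.mean(axis=1).tolist(),      # spectral envelope summary
    }

def nlp_features(transcript: str) -> dict:
    tokens = transcript.lower().split()
    return {
        "n_tokens": len(tokens),
        "type_token_ratio": len(set(tokens)) / max(len(tokens), 1),  # lexical diversity
    }
```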

Cristina Crocamo, Riccardo Matteo Cioni, Aurelia Canestro, Christian Nasti, Dario Palpella, Susanna Piacenti, Alessandra Bartoccetti, Martina Re, Valentina Simonetti, Chiara Barattieri di San Pietro, Maria Bulgheroni, Francesco Bartoli, Giuseppe Carrà

JMIR Form Res 2025;9:e65555

Investigating Acoustic and Psycholinguistic Predictors of Cognitive Impairment in Older Adults: Modeling Study

To reduce the number of parameters, we used the “eGeMAPSv02” feature set, which is based on the extended Geneva Minimalistic Acoustic Parameter Set for voice research and affective computing and identifies a basic set of acoustic features commonly used in clinical speech analysis [26]. The full set of 88 eGeMAPS acoustic features is computed with the Python openSMILE library, which has been previously validated for this purpose [27-29].
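
The openSMILE toolkit named above also ships a Python package that exposes eGeMAPSv02 directly; a minimal sketch follows, with a placeholder file name, and it may not match the authors’ exact configuration.

```python
# Sketch: extract the 88 eGeMAPSv02 functionals with the Python openSMILE package.
import opensmile

smile = opensmile.Smile(
    feature_set=opensmile.FeatureSet.eGeMAPSv02,       # Geneva minimalistic set, v02
    feature_level=opensmile.FeatureLevel.Functionals,  # one 88-dimension vector per file
)

features = smile.process_file("speech_sample.wav")  # placeholder path; returns a pandas DataFrame
print(features.shape)  # (1, 88)
```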

Varsha D Badal, Jenna M Reinen, Elizabeth W Twamley, Ellen E Lee, Robert P Fellows, Erhan Bilal, Colin A Depp

JMIR Aging 2024;7:e54655

Accurate Modeling of Ejection Fraction and Stroke Volume With Mobile Phone Auscultation: Prospective Case-Control Study

Prior research suggests that both mobile phones and acoustic recording can assist in HF diagnosis or monitoring; however, no current technology uses the basic microphone capability of a cellular phone to obtain acoustic data from which EF or SV can be estimated. This novel, previously unpublished proprietary technology has far-reaching potential for the screening and management of patients with HF, including those who remain undiagnosed.

Martin Huecker, Craig Schutzman, Joshua French, Karim El-Kersh, Shahab Ghafghazi, Ravi Desai, Daniel Frick, Jarred Jeremy Thomas

JMIR Cardio 2024;8:e57111

Impact of Audio Data Compression on Feature Extraction for Vocal Biomarker Detection: Validation Study

In our previous work, “Acoustic Analysis and Prediction of Type 2 Diabetes Mellitus Using Smartphone-Recorded Voice Segments” [7], smartphone-recorded speech was used to predict type 2 diabetes mellitus through a comprehensive acoustic analysis. That study demonstrated the feasibility of using acoustic features from smartphone-recorded voice data to predict the presence of this disorder, highlighting the diagnostic potential of vocal biomarkers for a specific health condition.
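
Because this article examines how lossy compression affects feature extraction, a minimal sketch of that kind of check is shown below; it assumes librosa and soundfile, uses Ogg Vorbis as an example lossy codec, and relies on placeholder file names, so it is not the authors’ validation protocol.

```python
# Sketch: compare a cepstral feature summary before and after lossy compression.
import librosa
import numpy as np
import soundfile as sf

y, sr = librosa.load("voice_segment.wav", sr=16000)  # placeholder recording

# Write a lossy Ogg Vorbis copy via libsndfile, then reload it.
sf.write("voice_segment.ogg", y, sr, format="OGG", subtype="VORBIS")
y_lossy, _ = librosa.load("voice_segment.ogg", sr=16000)

def mfcc_summary(signal: np.ndarray) -> np.ndarray:
    return librosa.feature.mfcc(y=signal, sr=16000, n_mfcc=13).mean(axis=1)

drift = np.abs(mfcc_summary(y) - mfcc_summary(y_lossy))
print(drift)  # per-coefficient change introduced by compression
```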

Jessica Oreskovic, Jaycee Kaufman, Yan Fossat

JMIR Biomed Eng 2024;9:e56246

Automatic Depression Detection Using Smartphone-Based Text-Dependent Speech Signals: Deep Convolutional Neural Network Approach

Among speech-based methods, previous studies have focused mainly on handcrafted acoustic features, such as prosodic [13], formant [22], and cepstral [23] features, and on classifying the resulting patterns with machine learning (ML) algorithms such as support vector machines (SVMs) [24], logistic regression [25], and random forests (RFs) [26]. These studies have suggested that acoustic features are closely related to depression.
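
As an illustration of that earlier feature-plus-classifier approach, and not this paper’s deep convolutional network, the sketch below pairs cepstral features with an SVM; the file names, labels, and libraries (librosa, scikit-learn) are assumptions.

```python
# Sketch: handcrafted cepstral features classified with an SVM (placeholder data).
import librosa
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def cepstral_features(wav_path: str) -> np.ndarray:
    y, sr = librosa.load(wav_path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)            # cepstral features
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])  # per-recording summary

wav_paths = ["subj01.wav", "subj02.wav", "subj03.wav", "subj04.wav"]  # placeholder recordings
labels = np.array([0, 1, 0, 1])                                       # 0 = control, 1 = depressed

X = np.stack([cepstral_features(p) for p in wav_paths])
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))  # scale, then SVM
clf.fit(X, labels)
print(clf.predict(X[:1]))
```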

Ah Young Kim, Eun Hye Jang, Seung-Hwan Lee, Kwang-Yeon Choi, Jeon Gue Park, Hyun-Chool Shin

J Med Internet Res 2023;25:e34474