Search Results (1 to 6 of 6 Results)
Results by journal:
- 1 JMIR Aging
- 1 JMIR Biomedical Engineering
- 1 JMIR Cardio
- 1 JMIR Formative Research
- 1 JMIR Research Protocols
- 1 Journal of Medical Internet Research

Acoustic and Natural Language Markers for Bipolar Disorder: A Pilot, mHealth Cross-Sectional Study
Recordings used a single audio channel capturing the participant's voice in a controlled environment with minimal ambient noise. Both the raw audio and the transcribed text were processed to extract acoustic and NLP-based features from the speech output. The NLP and acoustic signal models were embedded in the backend of the mobile app.
Consistent with recent evidence, we treated speech as verbal behavior, that is, the spoken output of the mental system underlying language [39].
JMIR Form Res 2025;9:e65555
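As a rough illustration of the two-stream pipeline this excerpt describes, the sketch below pulls a few acoustic descriptors from a mono recording and simple NLP-style statistics from its transcript. It is a minimal sketch under assumptions: librosa as the acoustic toolkit, placeholder file and transcript inputs, and illustrative feature choices; the study's actual backend models are not public.

```python
# Minimal two-stream sketch: acoustic features from one audio channel,
# text features from the transcript. File name, transcript, and feature
# choices are illustrative assumptions, not the study's actual pipeline.
import numpy as np
import librosa  # pip install librosa

def acoustic_features(wav_path):
    """A few common acoustic descriptors from a mono recording."""
    y, sr = librosa.load(wav_path, sr=16000, mono=True)  # one audio channel
    f0, voiced, _ = librosa.pyin(y, fmin=65, fmax=400, sr=sr)  # pitch track
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)  # cepstral features
    return {
        "f0_mean": float(np.nanmean(f0)),
        "f0_std": float(np.nanstd(f0)),
        "voiced_ratio": float(np.mean(voiced)),
        **{f"mfcc{i}_mean": float(m) for i, m in enumerate(mfcc.mean(axis=1))},
    }

def text_features(transcript):
    """Simple NLP-style descriptors of the transcribed speech."""
    tokens = transcript.lower().split()
    n = max(len(tokens), 1)
    return {
        "n_tokens": len(tokens),
        "type_token_ratio": len(set(tokens)) / n,
        "mean_word_len": sum(map(len, tokens)) / n,
    }

features = {**acoustic_features("session.wav"),
            **text_features("I slept well last night")}
```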

To reduce the number of parameters, we used the “eGeMAPSv02” feature set, based on the extended Geneva Minimalistic Acoustic Parameter Set for voice research and affective computing, which defines a basic set of acoustic features commonly used in clinical speech analysis [26]. A total of 88 Geneva Minimalistic Acoustic Parameter Set features are extracted with the Python openSMILE library, which has been previously validated for this purpose [27-29].
JMIR Aging 2024;7:e54655
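For concreteness, extracting those 88 functionals takes a few lines with the opensmile Python package; the input file name below is a placeholder.

```python
# Sketch: extract the 88 eGeMAPSv02 functionals with openSMILE's
# Python wrapper (pip install opensmile). Input path is a placeholder.
import opensmile

smile = opensmile.Smile(
    feature_set=opensmile.FeatureSet.eGeMAPSv02,
    feature_level=opensmile.FeatureLevel.Functionals,
)
features = smile.process_file("participant_voice.wav")
print(features.shape)  # (1, 88): one row of 88 acoustic parameters
```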

Prior research suggests that both mobile phones and acoustic recording can assist in heart failure (HF) diagnosis or monitoring; however, no current technology uses basic cellular microphone capability to obtain acoustic data that can estimate ejection fraction (EF) or stroke volume (SV). This novel, proprietary, as-yet-unpublished technology has far-reaching potential for the screening and management of patients with HF, including those who are undiagnosed.
JMIR Cardio 2024;8:e57111

In our previous work, “Acoustic Analysis and Prediction of Type 2 Diabetes Mellitus Using Smartphone-Recorded Voice Segments” [7], smartphone-recorded speech was used to predict type 2 diabetes mellitus through a comprehensive acoustic analysis. That study demonstrated the feasibility of using acoustic features from smartphone-recorded voice data to predict the presence of this disorder, highlighting the diagnostic potential of vocal biomarkers in the context of a specific health condition [7].
JMIR Biomed Eng 2024;9:e56246

Among speech-based methods, previous studies have focused mainly on handcrafted acoustic features, such as prosodic [13], formant [22], and cepstral [23] features, with patterns then classified by machine learning (ML) algorithms such as support vector machines (SVMs) [24], logistic regression [25], and random forests (RFs) [26]. These studies suggest that acoustic features are closely related to depression.
J Med Internet Res 2023;25:e34474
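The recipe in that excerpt (handcrafted acoustic features fed to a classical classifier) can be sketched with scikit-learn; the feature matrix and labels below are synthetic placeholders standing in for per-clip cepstral summaries, not a depression data set.

```python
# Handcrafted-features-plus-classifier sketch: SVM, logistic regression,
# and random forest compared by cross-validation on placeholder data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 26))    # e.g., 13 MFCC means + 13 MFCC stds per clip
y = rng.integers(0, 2, size=200)  # 0 = control, 1 = case (placeholder labels)

for name, clf in [
    ("SVM", SVC(kernel="rbf")),
    ("LogReg", LogisticRegression(max_iter=1000)),
    ("RF", RandomForestClassifier(n_estimators=200, random_state=0)),
]:
    model = make_pipeline(StandardScaler(), clf)
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: {scores.mean():.2f} +/- {scores.std():.2f}")
```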

Multimodal Assessment of Schizophrenia and Depression Utilizing Video, Acoustic, Locomotor, Electroencephalographic …
Reference 40: Acoustic patterns in schizophrenia: a systematic review and meta-analysis
JMIR Res Protoc 2022;11(7):e36417