
Designing Quality Speech Assessment


Abstract

Speeching is a mobile application (app) that supports the self-monitoring and self-management of speech and voice issues for people with Parkinson’s (PwP) by using crowdsourcing to monitor voice data. PwP participants record audio of themselves practicing voice tasks in the Speeching app. Crowd workers, who are not familiar with the PwP’s voice patterns, then assess and rate the voice tasks, and the PwP user receives this feedback via the Speeching app. Feeding the results back to PwP users gives them concrete examples of how their speech is perceived, allowing them to better understand their progress as they practice speech tasks. The study was conducted in two phases, the first to assess feasibility and the second to evaluate feedback. The feasibility phase assessed the variance, if any, between crowd workers and clinical experts in rating the speech tasks of PwPs. In the second phase, a trial was conducted to evaluate how PwP participants valued the feedback provided through the Speeching app. The study highlights how Speeching, and similar applications, can provide users with new opportunities for self-monitoring health and wellbeing. Digital applications like Speeching can improve the means by which participants without regular clinical access receive feedback to better self-manage therapeutic interventions in speech and voice training tasks.

Introduction

Crowdsourcing has emerged as a research tool to collect and analyse large sets of raw data [48, 13]. As a research tool, its benefits for making connections with participants and gathering information are well acknowledged. Crowdsourcing may be able to make other contributions to everyday healthcare, but this arena has not been explored at length. One area of everyday healthcare in which crowdsourcing could be used successfully is personal health. Personal health is important to individual medical participants because concepts like self-care, self-management, personal motivation, and constant monitoring of health conditions and changes can create marked improvements in an individual’s health [4, 42].

One role that crowdsourcing could have in personal healthcare is through its application to speech and language therapy (SLT). SLT is the training, practice, and use of specific skills related to conditions that impact the ability to speak and use one’s voice. Acute conditions such as a stroke or traumatic brain injury occur suddenly and alter the way in which a person speaks. Degenerative conditions develop over time as the vocal capacity of the patient lessens, and are caused by a number of conditions such as Parkinson’s disease, motor neuron disease, and dementia. The SLT practitioner teaches a series of exercises that require the patient to practice repetitively and build fluency, strength, and voice capability. These exercises are commonly taught and practiced in the clinical setting, and the speech and language therapist monitors and maintains information about the patient’s progress, gains, changes, and challenges. Repetitive practice cannot always occur in a clinical setting, and often there is SLT practice work that the patient completes at home. Practicing at home comes with motivational barriers to the self-directed practice of speech, and treatment gains may not persist over the long term after therapy has ended [41, 23, 53]. Speech and language therapists (SLTs) acknowledge concern that clinical and therapeutic demands extend beyond capacity in both developed and developing countries [33, 35]. SLT patients may therefore benefit from new approaches to self-directed therapeutic practice.

Speeching is a crowdsourcing app that channels feedback from crowd workers to persons completing voice tasks or exercises. The goal is to facilitate self-management and self-care and to motivate persons to complete SLT exercises outside of the clinic. The case focuses on persons with Parkinson’s disease (PwP). PwPs are likely to experience speech difficulties as a result of neuromuscular degeneration [24, 34]. Speeching is a system comprising a smartphone application that allows participants to record their self-practice of a series of speech tasks and to upload these to a remote server. The recordings are then rated by crowd workers based on ease of listening, speaking rate, pitch variability, and volume. The ratings from the crowd workers are then delivered to the participants, who access the ratings to support their practice of at-home SLT tasks and improve progress towards SLT targets. The case first demonstrates the feasibility of using crowd workers to judge recorded speech compared to expert judgements. Based on these results, Speeching was developed and deployed in a real-world pilot study with PwP to establish its acceptability.
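To make the data flow concrete, the following is a minimal sketch of how a recorded speech task and the crowd ratings returned for it could be represented. The source does not describe Speeching at this level of detail, so all names (Recording, CrowdRating, summarise_feedback) and the averaging step are illustrative assumptions only.

```python
# Hypothetical sketch of the record -> rate -> feed-back loop described above.
from dataclasses import dataclass
from statistics import mean

@dataclass
class CrowdRating:
    worker_id: str
    ease_of_listening: int    # 1-5 EOL scale
    speaking_rate: float      # 0-100, relative to a mid-range exemplar
    pitch_variability: float  # 0-100, relative to a mid-range exemplar
    volume: float             # 0-100, relative to a mid-range exemplar

@dataclass
class Recording:
    participant_id: str
    task: str                 # e.g. "connected" or "unconnected" speech
    audio_path: str
    ratings: list[CrowdRating]

def summarise_feedback(recording: Recording) -> dict:
    """Average the crowd ratings so they can be fed back to the PwP user."""
    return {
        "ease_of_listening": mean(r.ease_of_listening for r in recording.ratings),
        "speaking_rate": mean(r.speaking_rate for r in recording.ratings),
        "pitch_variability": mean(r.pitch_variability for r in recording.ratings),
        "volume": mean(r.volume for r in recording.ratings),
    }
```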

The potential for crowdsourcing to offer support in the self-care practices of clinical patients is highlighted. This case provides several contributions to HCI. The first contribution is the demonstrated feasibility of crowdsourcing to produce quantitative ratings of PwP speech comparable to expert judgements. The second contribution is the exemplification of real-world crowdsourcing as a method that has the capacity to present data from crowd workers directly back to patients, and the benefits and challenges that occur within this chosen method. Third, the case provides an analysis of how PwP participants are impacted by crowdsourced ratings. The analysis seeks to discover the participants’ valuation of Speeching as a system that promotes the self-care practice of therapeutic tasks. Lastly, the case offers insights for future researchers who seek to further explore crowdsourcing apps for personal health.

Background

Crowdsourcing Health

Crowdsourcing research in healthcare has focused on the collection of raw data: examining whether large online health communities are representative of wider populations [7]; utilizing the personal data that health communities collect about themselves [48]; gaining new understandings of preventative medicine [49]; providing new sources of patient data for research; and understanding how online communities function in a supportive role among specific patient groups [52]. Crowdsourcing has also been used to facilitate the analysis of patient data. Crowdmed allows people to post medical conditions and have them solved by medical experts [55]. Non-expert crowds have also been used in the analysis of clinical data; examples include crowd workers who view images of blood samples to find parasites, identify genome protein structures, and identify polyps [13, 14, 40]. Crowdsourcing has therefore been successfully used in medical research, medical diagnosis, and medical imaging.

Crowdsourcing has also been used beyond the healthcare context in interactive, user-supported systems and human-powered assistive technologies that are influential in modern work [5].

VizWiz is a smartphone application that provides near real-time feedback on visual information to blind people [6, 9]. The ASL-STEM Forum is an online portal for contributing sign language describing scientific terminology for deaf or hard of hearing people [12]. VizWiz and ASL-STEM are examples of human-powered assistive technology that leverages crowdsourcing to support the participation and motivation of persons with impairments. A gap can be identified in the current use of crowdsourcing: there has not been a focus on the use of non-expert crowd workers to support patients’ self-management of at-home healthcare practices, exercises, and therapeutic tasks. The case study explores this gap by employing crowd workers to provide feedback ratings that support, promote, and motivate PwPs’ self-monitoring and personal healthcare management.

Crowdsourcing for Speech Data

Researchers have applied crowdsourcing to speech analysis problems in the collection and transcription of speech data [3, 31, 32, 43, 54]. Crowdsourcing of speech data has also been used to refine speech recognition systems by measuring the quality of speech samples [22]. Amazon Mechanical Turk (AMT) workers have been recruited as crowdsourced participants to transcribe and classify utterances produced by users of a transport information system, which highlighted the value of reductive measurements of intelligibility [43].

AMT crowdsourced research participants have also transcribed spontaneous speech samples and been gauged on the reliability of the transcription [30]. Researchers who examined the reliability of the transcriptions found that crowdsourced participants’ accuracy approached that of experts. The study also discovered that shorter segments of speech were more likely to have faster turnaround times and higher rates of transcription accuracy [30]. Crowdsourced participants have additionally been studied as raters of the perceptual aspects of speech [19]. That study examined the viability of using crowdsourcing to annotate prosodic stress and boundary tones on a corpus of spontaneous speech, comparing crowdsourced annotations of non-native speech with expert annotations. The researchers discovered that crowdsourced participants had high levels of agreement with experts [19].

The current research studies provide a range of examples of crowdsourcing for speech data and highlight the methodological considerations in this area of research. Clinical populations of persons seeking therapeutic interventions in speech have specific complexities that arise when rating speech data. The SLT clinical literature can promote an understanding of the complexities of the clinical methods and practices used in speech and voice measurement for PwP.

Parkinson’s Speech

Approximately 90% of PwP experience speech and voice degeneration through the progression of the disease [24]. Changes that occur in speech and voice include reduced volume, altered prosody (stress and intonation patterns), reduced variation in loudness (monoloudness), and reduced variation in pitch (monopitch). PwP may experience a hoarse, rough, breathy, or trembling speaking voice as the perceptual vocal quality becomes impaired [25, 50]. For the PwP, these characteristics can cause feelings of lowered confidence, embarrassment, and increased difficulty speaking with strangers [34, 36, 37]. The result is that PwP may avoid social situations, which indicates the importance of speech therapy for PwP to retain social interactions, confidence, and self-esteem [34, 36, 37].

Qualified SLTs diagnose, measure, assess, and plan therapeutic interventions for PwPs with voice difficulties. A clinical interview with an SLT involves collecting speech samples from the client. These speech samples undergo testing in which the SLT listens for impaired speech performance. One issue with the clinical assessment is that SLTs are familiar with impaired speech patterns, and this familiarity can predispose them to score speakers higher during the assessment [39]. Best practice guidelines suggest that SLTs use naïve listeners to create a representative rating for the SLT to use comparatively. In the clinical setting it is difficult to use naïve listeners due to time, resources, and clinical constraints [56]. Furthermore, PwPs are limited by clinical access, services, and time constraints, increasing the difficulty of implementing the best practice of using both naïve and SLT assessments [38].

Measuring intelligibility

The challenges of speech intelligibility testing are the access to clinical facilities, the predisposition caused by familiarity, and the feasibility of the dual assessments recommended by best practice guidelines [38, 39, 56]. Researchers have responded to these challenges by studying how online digital platforms could conduct speech intelligibility testing remotely, which would reduce familiarity while increasing both accessibility and assessment capacity. The Munich Intelligibility Profile (MVP) is an online system that provides SLTs with remote access to intelligibility assessments for dysarthric speech [57]. In the MVP study, speech samples were collected in a clinical setting and submitted for analysis under the review of an SLT, which created an external level of control. Moderators assigned speech samples to listeners, then collated and reviewed listeners’ responses. The MVP online study found that the mean deviation of ratings decreased as the mean number of listeners increased [57].

Crowdsourced workforces, or crowd workers, create an abundant, affordable pool of listeners accessible through apps and online platforms in the context of speech intelligibility testing. There is no significant established work that has previously examined the potential for crowd workers to provide speech analysis within a program of speech therapy, but the use of pre-existing crowdsourcing platforms is emerging in research. Untrained listeners crowdsourced through AMT were asked to classify speech samples from children with articulation difficulties as correct or incorrect [10]. The classifications from untrained listeners were compared to the judgements of experienced listeners, and the research found an extremely high (0.98) agreement between non-experienced and experienced listeners [10]. This highlights the potential for crowdsourcing to have a role in SLT practice, and for researchers to examine how crowdsourcing can be used in measures of intelligibility.
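As an illustration of the kind of comparison reported in [10], the sketch below shows two generic ways to quantify agreement between non-expert and expert correct/incorrect judgements. The exact statistic used in that study is not specified here, so treat the choice of percent agreement and Cohen's kappa, and the toy labels, as assumptions for illustration.

```python
# Hypothetical sketch: quantifying agreement between crowd (non-expert)
# and expert correct/incorrect judgements of the same speech samples.
from collections import Counter

def percent_agreement(crowd: list[int], expert: list[int]) -> float:
    """Proportion of samples on which both raters gave the same label."""
    return sum(c == e for c, e in zip(crowd, expert)) / len(crowd)

def cohens_kappa(crowd: list[int], expert: list[int]) -> float:
    """Chance-corrected agreement for two raters over categorical labels."""
    n = len(crowd)
    p_observed = percent_agreement(crowd, expert)
    crowd_counts, expert_counts = Counter(crowd), Counter(expert)
    p_chance = sum((crowd_counts[k] / n) * (expert_counts[k] / n)
                   for k in set(crowd) | set(expert))
    return (p_observed - p_chance) / (1 - p_chance)

# Toy example: 1 = "correct", 0 = "incorrect"
crowd_labels  = [1, 0, 1, 1, 0, 1, 1, 0]
expert_labels = [1, 0, 1, 1, 0, 1, 0, 0]
print(percent_agreement(crowd_labels, expert_labels))  # 0.875
print(cohens_kappa(crowd_labels, expert_labels))       # 0.75
```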

The Speeching case study addresses the gaps in the relationship between crowdsourcing and speech assessment by: (1) exploring novel methods for both eliciting and collecting real-world speech samples; and (2) exploring the potential for crowdsourcing to provide feedback on PwP speech. The study occurred in two phases. Phase one demonstrated the feasibility of anonymous crowdsourced workers rating impaired speech. The second phase deployed the Speeching app to collect samples from, and provide feedback to, PwP participants in their home environment.

Selecting the sample dataset

The first phase’s main aim was to develop crowdsourcing tasks that might elicit ratings of Parkinson’s speech equivalent to expert ratings. The specific elements of impairment selected for investigation were rate, pitch, and volume, as these are the most common variables affected in speech degenerated by Parkinson’s [25].

Twelve speakers were selected from a pre-existing data set of 125 PwP collected in a lab setting [34]. The selection was solicited from an SLT with experience in Parkinson’s speech, who reviewed the 125 samples and chose a representative subsample of twelve. The speakers were categorized as having mild, moderate, or severe intelligibility impairment, with two male and two female speakers in each category. Each speaker provided ten single-word reading samples of unconnected speech and nine sentences of connected speech from a reading sample, the Grandfather Passage [17].

Designing the mini-tasks

The Speeching tasks were designed with an expert in Parkinson’s speech to simulate a standard SLT assessment. In the standard SLT assessment the therapist hears a range of single unconnected words, connected sentences, and longer samples of speech. These are produced by asking the PwP to read, describe, and engage in open-ended discussion about a topic. Often the SLT will make an initial recording of the PwP’s pre-therapy speech. The SLT then makes a clinical assessment and diagnosis of the volume, rate, and vocal qualities of the PwP’s speaking patterns and voice. The SLT uses a range of standardized assessments to objectively measure the PwP’s speaking capabilities, but may also use nonstandardized methods depending on his or her expertise.

Two categories of speech samples were used in the case study. The first was unconnected speech, a range of single random words that are not related to one another. Unconnected words provide a measurement of intelligibility in isolation by removing the context and flow that may add to the listener’s ability to hear and relate the words together in an intelligible message. Unconnected speech was selected as the first category of speech samples because it is widely used in SLT assessments and, as an assessment category, allows for a finer analysis of speech rate, patterns, and vocal quality.

The unconnected speech tasks were included for crowdsourced analysis to broaden the scope of the system’s future potential. Crowd workers selected the target word from ten similar words, for example: coop, cup, cape, cope. The ten single words in the word recognition task of each assessment were drawn from an assessment designed to target specific sound contrasts [34].
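To illustrate how such a closed-set word recognition task could be scored, here is a minimal sketch in which a sample's intelligibility score is the proportion of crowd workers who chose the target word. The function name and the scoring rule are assumptions for illustration; the source does not specify how the selections were aggregated.

```python
# Hypothetical sketch of scoring a closed-set word recognition task.
# Each crowd worker hears one recorded word and picks the word they heard
# from a set of phonetically similar options.
def word_recognition_score(target: str, worker_choices: list[str]) -> float:
    """Proportion of crowd workers who correctly identified the target word."""
    return sum(choice == target for choice in worker_choices) / len(worker_choices)

options = ["coop", "cup", "cape", "cope"]        # part of the closed set shown to workers
choices = ["cup", "cup", "cope", "cup", "cup"]   # one selection per worker
print(word_recognition_score("cup", choices))    # 0.8
```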

The second category of speech samples was connected speech. Connected speech consists of sentences that allow for an analysis of acuity and flow. To rate this category, two types of measurement were applied: Ease of Listening (EOL) and perceptual measures. EOL measurements were subjective ratings based on the effort the crowd worker needed to understand the PwP speaker. The measurement used a five-point rating scale, which has previously been used with novice listeners unfamiliar with dysarthric speech and was found to have a strong correlation with intelligibility scores [27, 34]. Perceptual measurements of speech were also used: the listener’s perception of rate, pitch variance, and volume required a more complex appraisal. To provide a more multifaceted approach to the scaled responses, a continuous scaling system was used to improve the sensitivity and accuracy of responses [15, 39]. Direct Magnitude Estimation (DME) was used for the perceptual intelligibility measures [39, 51]: an anchor, or mid-range exemplar, of impaired speech was played to the listener so that the crowd worker could estimate the magnitude of difference in the connected speech tasks relative to it [39, 51].

Crowd workers were not experienced listeners of disordered speech, and thus were likely to exhibit variability in judgments of volume, pitch variance, and rate. To mitigate this possibility the study deployed a continuous scale of 0-100. A continuous numerical scale accommodated a range of variability among listeners without the loss of rating sensitivity that might have occurred with a discrete scale.

The mid-range exemplar samples were selected by the experienced SLT clinician, who chose one male and one female speaker. Each exemplar speaker was representative of a moderate speech impairment in pitch, rate, and volume variance, drawn from the originating sample of 15 speakers; these mid-range speakers were not among the twelve speakers selected for the analysis subsample. Mid-range exemplar samples were gender matched to the participant samples and used in the final dataset. Crowd workers were asked to rate the speech, out of 100, for volume, rate, and pitch variance using the mid-range exemplar as a reference point for a score of 50.
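The following is a minimal sketch of how a DME-style rating against a mid-range exemplar can be read: the exemplar is anchored at 50, and a worker's 0-100 rating is interpreted as a magnitude relative to that anchor. Reading the rating as a simple ratio is an assumption for illustration; the source does not describe the post-processing of the ratings.

```python
# Hypothetical sketch of interpreting a DME-style rating for one perceptual
# dimension (e.g. volume). The mid-range exemplar is anchored at 50, so a
# rating of 100 is read as "twice the exemplar" and 25 as "half the exemplar".
ANCHOR = 50.0

def relative_magnitude(rating: float) -> float:
    """Convert a 0-100 rating into a magnitude relative to the exemplar."""
    if not 0 <= rating <= 100:
        raise ValueError("ratings are collected on a 0-100 scale")
    return rating / ANCHOR

print(relative_magnitude(50))   # 1.0 -> same as the exemplar
print(relative_magnitude(75))   # 1.5 -> one and a half times the exemplar
print(relative_magnitude(25))   # 0.5 -> half the exemplar
```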

Participants

The crowd workers who listened to and rated the speech were 33 crowdsourced participants living in the UK, recruited from AMT. The crowd workers were asked to complete the tasks, and ratings were also obtained from two highly experienced experts in Parkinson’s speech to act as a gold standard. The experts completed a data set of 282 speech samples, from which the crowd workers were randomly assigned crowdsourced tasks. Each speech sample was crowdsourced with a minimum of three ratings from the crowd workers. To ensure that the same listener did not rate a sample more than once, the listening tasks were assigned so that the number of tasks completed varied between crowd workers. Samples that had received three different ratings were indexed and the remaining tasks randomized again until the data set was complete. Listeners were only required to complete 70 tasks, or 25%, to receive payment for their time. Crowd workers were paid UK minimum wage based on the estimated time to complete each task.
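A minimal sketch of the assignment logic described above, under the assumption that tasks are drawn at random, a worker never receives a sample they have already rated, and a sample is retired once it has three ratings from distinct workers. The actual scheduling code is not given in the source, so every name here is illustrative.

```python
# Hypothetical sketch of assigning listening tasks so that every sample
# receives at least three ratings and no worker rates the same sample twice.
import random
from collections import defaultdict
from typing import Optional

MIN_RATINGS = 3

def assign_task(worker_id: str,
                ratings: dict[str, set[str]],
                samples: list[str]) -> Optional[str]:
    """Pick a random sample this worker has not yet rated and that still
    needs ratings; return None if no such sample exists for this worker."""
    candidates = [s for s in samples
                  if len(ratings[s]) < MIN_RATINGS and worker_id not in ratings[s]]
    if not candidates:
        return None
    sample = random.choice(candidates)
    ratings[sample].add(worker_id)  # record that this worker has rated the sample
    return sample

# Example: 282 samples; workers request tasks until every sample has 3 ratings.
samples = [f"sample_{i}" for i in range(282)]
ratings: dict[str, set[str]] = defaultdict(set)
workers = [f"worker_{i}" for i in range(33)]
while any(len(ratings[s]) < MIN_RATINGS for s in samples):
    assign_task(random.choice(workers), ratings, samples)
```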

References

Melissa M Ahern and Michael S Hendryx. 2003. Social capital and trust in providers. Social Science & Medicine 57, 7: 1195–1203. http://doi.org/10.1016/S0277-9536(02)00494-X

S Arora, V Venkataraman, A Zhan, et al. 2015. Detecting and monitoring the symptoms of Parkinson’s disease using smartphones: A pilot study. Parkinsonism & Related Disorders 21, 6: 650–653. http://doi.org/10.1016/j.parkreldis.2015.02.026

Kartik Audhkhasi, Panayiotis G. Georgiou, and Shrikanth S. Narayanan. 2011. Reliability-weighted acoustic model adaptation using crowd sourced transcriptions. Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH, 3045–3048.

Julie Barlow, Chris Wright, Janice Sheasby, Andy Turner, and Jenny Hainsworth. 2002. Self-management approaches for people with chronic conditions: A review. Patient Education and Counseling 48, 177–187. http://doi.org/10.1016/S0738-3991(02)00032-0

Jeffrey P. Bigham, Richard E. Ladner, and Yevgen Borodin. 2011. The design of human-powered access technology. The proceedings of the 13th international ACM SIGACCESS conference on Computers and accessibility – ASSETS ’11, ACM Press, 3. http://doi.org/10.1145/2049536.2049540

Jeffrey P. Bigham, Samuel White, Tom Yeh, et al. 2010. VizWiz. Proceedings of the 23rd annual ACM symposium on User interface software and technology – UIST ’10, ACM Press, 333. http://doi.org/10.1145/1866029.1866080

Riley Bove, Elizabeth Secor, Brian C Healy, et al. 2013. Evaluation of an online platform for multiple sclerosis research: patient description, validation of severity scale, and exploration of BMI effects on disease course. PloS one 8, 3: e59707. http://doi.org/10.1371/journal.pone.0059707

V. Braun and V. Clarke. 2006. Using thematic analysis in psychology. Qualitative Research in Psychology 3: 77–101. http://doi.org/10.1191/1478088706qp063oa

Michele A. Burton, Erin Brady, Robin Brewer, Callie Neylan, Jeffrey P. Bigham, and Amy Hurst. 2012. Crowdsourcing subjective fashion advice using VizWiz. Proceedings of the 14th international ACM SIGACCESS conference on Computers and accessibility – ASSETS ’12, ACM Press, 135. http://doi.org/10.1145/2384916.2384941

Tara McAllister Byun, Peter F Halpin, and Daniel Szeredi. 2015. Online crowdsourcing for efficient rating of speech: A validation study. Journal of Communication Disorders 53: 70–83. http://doi.org/10.1016/j.jcomdis.2014.11.003

Gerald J. Canter. 1963. Speech Characteristics of Patients with Parkinson’s Disease: I. Intensity, Pitch, and Duration. Journal of Speech and Hearing Disorders 28, 3: 221. http://doi.org/10.1044/jshd.2803.221

Anna C. Cavender, Daniel S. Otero, Jeffrey P. Bigham, and Richard E. Ladner. 2010. Asl-stem forum. Proceedings of the 28th international conference on Human factors in computing systems – CHI ’10, ACM Press, 2075. http://doi.org/10.1145/1753326.1753642

Rumi Chunara, Vina Chhaya, Sunetra Bane, et al. 2012. Online reporting for malaria surveillance using micro-monetary incentives, in urban India 2010-2011. Malaria Journal 11, 1: 43. http://doi.org/10.1186/1475-2875-11-43

Seth Cooper, Firas Khatib, Adrien Treuille, et al. 2010. Predicting protein structures with a multiplayer online game. Nature 466, 7307: 756–60. http://doi.org/10.1038/nature09304

Nicolas Côté. 2011. Integral and Diagnostic Intrusive Prediction of Speech Quality. Springer Science & Business Media. Retrieved September 12, 2015 from https://books.google.com/books?id=-utLeUB2H34C&pgis=1

F Darley, A Aronson, and J Brown. 1969. Differential Diagnostic Patterns of Dysarthria. Journal of Speech Language and Hearing Research 12, 2: 246. http://doi.org/10.1044/jshr.1202.246

F Darley, A Aronson, and J Brown. 1975. Motor speech disorders. W.B. Saunders Company., Philadelphia, PA.

Julie McDonough Dolmaya. 2011. The ethics of crowdsourcing. Linguistica Antverpiensia, New Series – Themes in Translation Studies. Retrieved September 25, 2015 from https://lans-tts.ua.ac.be/index.php/LANS-TTS/article/view/279

Keelan Evanini and Klaus Zechner. 2011. Using crowdsourcing to provide prosodic annotations for non-native speech. Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH, 3069–3072.

James Evans. 1996. Straightforward statistics for the behavioral sciences. Brooks/Cole Pub. Co., Pacific Grove.

C Fox, C Morrison, L Ramig, and S Sapir. 2002. Current Perspectives on the Lee Silverman Voice Treatment (LSVT) for Individuals With Idiopathic Parkinson Disease. American Journal of Speech-Language Pathology 11: 111–123.

Masataka Goto and Jun Ogata. 2011. PodCastle: Recent advances of a spoken document retrieval service improved by anonymous user contributions. Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH, 3073–3076.

John Green, Anne Forster, Sue Bogle, and John Young. 2002. Physiotherapy for patients with mobility problems more than 1 year after stroke: A randomised controlled trial. Lancet 359, 9302: 199–203. http://doi.org/10.1016/S0140-6736(02)07443-3

Aileen K. Ho, Robert Iansek, Caterina Marigliani, John L. Bradshaw, and Sandra Gates. 1998. Speech impairment in a large sample of patients with Parkinson’s disease. Behavioural neurology 11: 131– 137. http://doi.org/10.1155/1999/327643

R Holmes, J Oates, D Phyland, and A Hughes. 2000. Voice characteristics in the progression of Parkinson’s disease. International Journal of Language & Communication Disorders 35, 3: 407–418. http://doi.org/10.1080/136828200410654

I Kawachi, B P Kennedy, and R Glass. 1999. Social capital and self-rated health: a contextual analysis. American Journal of Public Health 89, 8: 1187–1193. http://doi.org/10.2105/AJPH.89.8.1187

Sophie Landa, Lindsay Pennington, Nick Miller, Sheila Robson, Vicki Thompson, and Nick Steen. 2014. Association between objective measurement of the speech intelligibility of young people with dysarthria and listener ratings of ease of understanding. International journal of speech-language pathology 16, 4: 408–16. http://doi.org/10.3109/17549507.2014.927922

J R Landis and G G Koch. 1977. The measurement of observer agreement for categorical data. Biometrics 33, 1: 159–174. http://doi.org/10.2307/2529310

Walter S. Lasecki, Jaime Teevan, and Ece Kamar. 2014. Information extraction and manipulation threats in crowd-powered systems. 17th ACM conference on Computer supported cooperative work & social computing, 248–256. http://doi.org/10.1145/2531602.2531733

Matthew Marge, Satanjeev Banerjee, and Alexander I Rudnicky. 2010. Using the Amazon Mechanical Turk to Transcribe and Annotate Meeting Speech for Extractive Summarization. Proceedings of the NAACL HLT 2010 Workshop on Creating Speech and Language Data with Amazon’s Mechanical Turk, Association for Computational Linguistics, 99–107.

Matthew Marge, Satanjeev Banerjee, and Alexander I. Rudnicky. 2010. Using the Amazon Mechanical Turk for transcription of spoken language. 2010 IEEE International Conference on Acoustics, Speech and Signal Processing, IEEE, 5270–5273. http://doi.org/10.1109/ICASSP.2010.5494979

Ian McGraw, Alexander Gruenstein, and Andrew Sutherland. 2009. A self-labeling speech corpus: Collecting spoken words with an online educational game. Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH, 3031–3034.

J A McKenzie. 1992. The provision of speech, language and hearing services in a rural district of South Africa. The South African journal of communication disorders = Die Suid-Afrikaanse tydskrif vir Kommunikasieafwykings 39: 50–4. Retrieved September 21, 2015 from http://europepmc.org/abstract/med/1345506

Nick Miller, Liesl Allcock, Diana Jones, Emma Noble, Anthony J Hildreth, and David J Burn. 2007. Prevalence and pattern of perceived intelligibility changes in Parkinson’s disease. Journal of neurology, neurosurgery, and psychiatry 78, 11: 1188–1190. http://doi.org/10.1136/jnnp.2006.110171

Nick Miller, Katherine H O Deane, Diana Jones, Emma Noble, and Catherine Gibb. 2011. National survey of speech and language therapy provision for people with Parkinson’s disease in the United Kingdom: therapists’ practices. International Journal of Language & Communication Disorders 46, 2: 189–201. http://doi.org/10.3109/13682822.2010.484849

Nick Miller, Emma Noble, Diana Jones, Liesl Allcock, and David J Burn. 2008. How do I sound to me? Perceived changes in communication in Parkinson’s disease. Clinical rehabilitation 22, 1: 14–22. http://doi.org/10.1177/0269215507079096

Nick Miller, Emma Noble, Diana Jones, and David Burn. 2006. Life with communication changes in Parkinson’s disease. Age and Ageing 35, 3: 235–239. http://doi.org/10.1093/ageing/afj053

Nick Miller, Emma Noble, Diana Jones, Katherine H O Deane, and Catherine Gibb. 2011. Survey of speech and language therapy provision for people with Parkinson’s disease in the United Kingdom: patients’ and carers’ perspectives. International journal of language & communication disorders / Royal College of Speech & Language Therapists 46, 2: 179–188. http://doi.org/10.3109/13682822.2010.484850

Nick Miller. 2013. Measuring up to speech intelligibility. International Journal of Language and Communication Disorders 48, 601–612. http://doi.org/10.1111/1460-6984.12061

Tan B Nguyen, Shijun Wang, Vishal Anugu, et al. 2012. Distributed human intelligence for colonic polyp classification in computer-aided detection for CT colonography. Radiology 262, 3: 824–33. http://doi.org/10.1148/radiol.11110938

M J Nijkrake, S H J Keus, J G Kalf, et al. 2007. Allied health care interventions and complementary therapies in Parkinson’s disease. Parkinsonism & related disorders 13 Suppl 3: S488–S494. http://doi.org/10.1016/S1353-8020(08)70054-3

Francisco Nunes and Geraldine Fitzpatrick. 2015. Self-care technologies and collaboration. International Journal of Human-Computer Interaction: 150730080814008. http://doi.org/10.1080/10447318.2015.1067498

Gabriel Parent and Maxine Eskenazi. 2011. Speaking to the Crowd: Looking at past achievements in using crowdsourcing for speech and predicting future challenges. Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH, 3037–3040.

Patientslikeme. 2015. Live Better, Together! Retrieved from https://www.patientslikeme.com/

Megan Perry, Robert L Williams, Nina Wallerstein, and Howard Waitzkin. 2008. Social capital and health care experiences among low-income individuals. American journal of public health 98, 2: 330–6. http://doi.org/10.2105/AJPH.2006.086306

R Putnam. 2001. Bowling Alone: The Collapse and Revival of American Community. Simon & Schuster. Retrieved September 25, 2015 from http://bowlingalone.com/

L O Ramig, S Sapir, S Countryman, et al. 2001. Intensive voice treatment (LSVT) for patients with Parkinson’s disease: a 2 year follow up. http://doi.org/10.1136/jnnp.71.4.493

M Swan, K Hathaway, C Hogg, R McCauley, and A Vollrath. 2010. Citizen science genomics as a model for crowdsourced preventive medicine research. J Participat Med 2: e20. Retrieved from http://www.jopm.org/evidence/research/2010/12/23/citizen-science-genomics-as-a-model-for-crowdsourced-preventive-medicine-research

M Swan. 2012. Health 2050: The Realization of Personalized Medicine through Crowdsourcing, the Quantified Self, and the Participatory Biocitizen. Journal of personalized medicine 2, 3: 93–118. http://doi.org/10.3390/jpm2030093

Kris Tjaden. 2008. Speech and Swallowing in Parkinson’s Disease. Topics in geriatric rehabilitation 24, 2: 115–126. http://doi.org/10.1097/01.TGR.0000318899.87690.44

Gary Weismer and Jacqueline S Laures. 2002. Direct magnitude estimates of speech intelligibility in dysarthria: effects of a chosen standard. Journal of speech, language, and hearing research : JSLHR 45, 3: 421–433. http://doi.org/10.1044/1092-4388(2002/033)

Paul Wicks, Dorothy L Keininger, Michael P Massagli, et al. 2012. Perceived benefits of sharing health data between people with epilepsy on an online platform. Epilepsy & Behavior: E&B 23, 1: 16–23. http://doi.org/10.1016/j.yebeh.2011.09.026

Sheila Wight and Nick Miller. 2015. Lee Silverman Voice Treatment for people with Parkinson’s: audit of outcomes in a routine clinic. International journal of language & communication disorders / Royal College of Speech & Language Therapists 50, 2: 215–25. http://doi.org/10.1111/1460-6984.12132

Maria K. Wolters, Karl B. Isaac, and Steve Renals. Evaluating speech synthesis intelligibility using Amazon Mechanical Turk. Retrieved August 27, 2015 from https://www.era.lib.ed.ac.uk/handle/1842/4660

Xian-Hong Xiang, Xiao-Yu Huang, Xiao-Ling Zhang, Chun-Fang Cai, Jian-Yong Yang, and Lei Li. 2014. Many Can Work Better than the Best: Diagnosing with Medical Images via Crowdsourcing. Entropy 16, 7: 3866–3877. http://doi.org/10.3390/e16073866

Wolfram Ziegler and Andreas Zierdt. 2008. Telediagnostic assessment of intelligibility in dysarthria: A pilot investigation of MVP-online. Journal of Communication Disorders 41, 6: 553–577. http://doi.org/10.1016/j.jcomdis.2008.05.001

Crowdmed. Retrieved from https://www.crowdmed.com/
