Speaking and articulation duration have been shown to be important biomarkers of disorder severity in amyotrophic lateral sclerosis (ALS) and other motor speech disorders. Signal processing algorithms offer significant advantages over manual annotation, enabling these and other biomarkers to be computed automatically and at scale, but their accuracy and reliability must first be verified. This work investigates the parameters of two automated algorithms used to compute speaking and articulation duration from speech data collected via a conversational AI agent and, through simulated tuning experiments, estimates optimal settings for patients with bulbar-symptomatic ALS in comparison to healthy controls. We also uncover non-intuitive differences between the optimal parameter settings required for robust computation of articulation duration and those required for speaking duration. Overall, we found that the optimal settings for automatic prediction of articulation and speaking duration depended on both task and cohort type.
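To make the distinction between the two measures concrete, the following is a minimal illustrative sketch (not the algorithms evaluated in this work) of how speaking and articulation duration might be derived from hypothetical voice-activity-detection (VAD) speech segments; the segment values and the `min_pause` parameter are assumptions chosen for illustration.

```python
# Illustrative sketch, not the paper's algorithms: speaking duration spans
# first speech onset to last offset (pauses included), while articulation
# duration sums only the time spent actually speaking.

def speaking_and_articulation_duration(segments, min_pause=0.25):
    """segments: list of (start, end) speech intervals in seconds,
    sorted by start time. min_pause: gaps shorter than this (seconds)
    are bridged into continuous speech, a typical tunable parameter.
    Returns (speaking_duration, articulation_duration)."""
    if not segments:
        return 0.0, 0.0
    # Merge segments separated by gaps shorter than min_pause.
    merged = [list(segments[0])]
    for start, end in segments[1:]:
        if start - merged[-1][1] < min_pause:
            merged[-1][1] = max(merged[-1][1], end)
        else:
            merged.append([start, end])
    speaking = merged[-1][1] - merged[0][0]          # includes pauses
    articulation = sum(e - s for s, e in merged)     # voiced time only
    return speaking, articulation


# Example: three VAD segments with a 0.1 s gap (bridged) and a 1.0 s pause.
sp, ar = speaking_and_articulation_duration(
    [(0.0, 1.0), (1.1, 2.0), (3.0, 4.0)], min_pause=0.25
)
```

Because the pause-bridging threshold directly changes which gaps count as pauses, its optimal value can differ between the two measures, which is the kind of parameter dependence this work examines.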