

Markerless 2D kinematic analysis of underwater running: A deep learning approach

Neil J. Cronin, Benjamin Waller, Timo Rantalainen, Esa Hynynen, Juha P. Ahtiainen

subject

QA75; GV557 Sports; QP301.H75 Physiology. Sport; Motion analysis; Motion; Kinematics; Sports biomechanics; Running; Deep water running; Stride; Immersion; Computer science; Computer vision; Image processing; Image Processing, Computer-Assisted; Video Recording; Artificial intelligence; Artificial neural network; Neural Networks, Computer; Deep learning; Pixel; Humans; Lower Extremity; Biomechanical Phenomena; Reproducibility of Results; Biomedical Engineering; Biophysics; Orthopedics and Sports Medicine; Rehabilitation; tekoäly (artificial intelligence); liikeanalyysi (motion analysis); liikeoppi (kinematics); vesijuoksu (deep water running); ta315; 02 engineering and technology; 0206 medical engineering; 020601 biomedical engineering; 03 medical and health sciences; 0302 clinical medicine; 030217 neurology & neurosurgery

description

Kinematic analysis is often performed with a camera system combined with reflective markers placed over bony landmarks. This method is restrictive (and often expensive) and limits the ability to perform analyses outside the lab. In the present study, we used a markerless, deep learning-based method to perform 2D kinematic analysis of deep water running, a task that poses several challenges to image processing methods. A single GoPro camera recorded sagittal plane lower limb motion. A deep neural network was trained using data from 17 individuals and then used to predict the locations of markers that approximated joint centres. We found that 300–400 labelled images were sufficient to train the network to position joint markers with an accuracy similar to that of a human labeller (mean difference < 3 pixels, around 1 cm). This level of accuracy is sufficient for many 2D applications, such as sports biomechanics, coaching/training, and rehabilitation. The method was sensitive enough to differentiate between closely spaced running cadences (45–85 strides per minute in increments of 5). We also found high test–retest reliability of mean stride data, with between-session correlation coefficients of 0.90–0.97. Our approach represents a low-cost, adaptable solution for kinematic analysis, and it could easily be modified for use in other movements and settings. With additional cameras, the approach could also be used to perform 3D analyses. The method presented here may have broad applications in different fields, for example by enabling markerless motion analysis to be performed during rehabilitation, training or even competition environments.
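To make the analysis pipeline concrete, the sketch below (Python, using numpy and scipy; not the authors' code) illustrates how two of the reported quantities could be computed from a trained network's per-frame 2D joint-centre predictions: the mean pixel error against human-labelled reference points, and stride cadence from the vertical ankle trajectory. The synthetic data, the 25 Hz frame rate, and all function names are assumptions made for illustration only.

"""
Illustrative sketch (not the authors' code): given 2D joint-centre
coordinates predicted for each video frame, compute
(a) the pixel error against human-labelled reference points and
(b) stride cadence from the vertical ankle trajectory.
The frame rate, array shapes and function names are assumptions.
"""
import numpy as np
from scipy.signal import find_peaks

FRAME_RATE_HZ = 25.0  # assumed video capture rate

def pixel_error(predicted_xy: np.ndarray, labelled_xy: np.ndarray) -> float:
    """Mean Euclidean distance (pixels) between predicted and human-labelled points."""
    return float(np.mean(np.linalg.norm(predicted_xy - labelled_xy, axis=1)))

def cadence_strides_per_min(ankle_y: np.ndarray, frame_rate: float = FRAME_RATE_HZ) -> float:
    """Estimate stride cadence from the vertical ankle trajectory.

    Each stride produces one dominant peak in the ankle's vertical position,
    so the mean inter-peak interval gives the stride period.
    """
    # Require peaks to be at least a quarter of a second apart to suppress noise.
    peaks, _ = find_peaks(ankle_y, distance=int(0.25 * frame_rate))
    if len(peaks) < 2:
        return float("nan")
    stride_period_s = np.mean(np.diff(peaks)) / frame_rate
    return 60.0 / stride_period_s

if __name__ == "__main__":
    rng = np.random.default_rng(0)

    # Synthetic stand-in for network predictions vs. human labels (pixels).
    labelled = rng.uniform(0, 1080, size=(300, 2))
    predicted = labelled + rng.normal(0, 2, size=labelled.shape)
    print(f"mean marker error: {pixel_error(predicted, labelled):.1f} px")

    # Synthetic ankle trajectory at 60 strides/min (1 Hz) over 30 s.
    t = np.arange(0, 30, 1 / FRAME_RATE_HZ)
    ankle_y = 50 * np.sin(2 * np.pi * 1.0 * t) + rng.normal(0, 2, t.size)
    print(f"estimated cadence: {cadence_strides_per_min(ankle_y):.1f} strides/min")

Between-session (test–retest) agreement of mean stride data could then be summarised with an ordinary Pearson correlation across sessions, for example via numpy.corrcoef on the two sessions' stride-averaged trajectories.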

10.1016/j.jbiomech.2019.02.021
http://juuli.fi/Record/0339609019