<li>In addition to using RAPIDS to extract behavioral features and create plots, you can structure your data analysis within RAPIDS (i.e. cleaning your features and creating ML/statistical models)</li>
<li>We include an analysis example in RAPIDS that covers raw data processing, cleaning, feature extraction, machine learning modeling, and evaluation</li>
<li>Use this example as a guide to structure your own analysis within RAPIDS</li>
<li>RAPIDS analysis workflows are compatible with your favorite data science tools and libraries</li>
<li>RAPIDS analysis workflows are reproducible and we encourage you to publish them along with your research papers</li>
</ul>
</div>
<h2 id="why-should-i-integrate-my-analysis-in-rapids">Why should I integrate my analysis in RAPIDS?</h2>
<p>Even though the bulk of RAPIDS' current functionality relates to the computation of behavioral features, we recommend RAPIDS as a complementary tool to create a mobile data analysis workflow. The cookiecutter data science file organization, the use of Snakemake, the provided behavioral features, and the reproducible R and Python development environments allow researchers to divide an analysis workflow into small parts that can be audited, shared in an online repository, reproduced on other computers, and understood by other people because they follow a familiar and consistent structure. We believe these advantages outweigh the time needed to learn how to create these workflows in RAPIDS.</p>
<p>To be clear, researchers creating analysis workflows in RAPIDS can still use any data manipulation tools, editors, libraries, or languages they are already familiar with. RAPIDS is meant to be the final destination of analysis code that was developed in interactive notebooks or stand-alone scripts. For example, a user can compute call and location features with RAPIDS, then use Jupyter notebooks to explore feature cleaning approaches; once the cleaning code is final, it can be moved into RAPIDS as a new step in the pipeline. In turn, the output of this cleaning step can be used to explore machine learning models, and once a model is finished, it can also be transferred to RAPIDS as a step of its own. The idea is that when it is time to publish a piece of research, a RAPIDS workflow can be shared in a public repository as is.</p>
<p>To accurately reflect the complexity of a real-world modeling scenario, we decided not to oversimplify this example. Importantly, every step in this example follows a basic structure: an input file and parameters are manipulated by an R or Python script that saves the results to an output file. Input files, parameters, output files, and scripts are grouped into Snakemake rules that are described in <code>.smk</code> files in the rules folder (we point the reader to the relevant rule(s) of each step).</p>
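<p>To make this structure concrete, below is a minimal, hypothetical sketch of such a Python script as Snakemake would run it through a rule's <code>script:</code> directive (the <code>snakemake</code> object is injected by Snakemake at runtime). This is not an actual RAPIDS script: the column name and the <code>threshold</code> parameter are placeholders.</p>
<pre><code>import pandas as pd

# "snakemake" is injected by Snakemake when this file is run via a rule's script: directive.
data = pd.read_csv(snakemake.input[0])        # the rule's input file
threshold = snakemake.params.threshold        # a hypothetical rule parameter

# Some manipulation of the input; "value" is a placeholder column name.
cleaned = data[data["value"].ge(threshold)]

cleaned.to_csv(snakemake.output[0], index=False)  # the rule's output file
</code></pre>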
<p>Researchers can use these rules and scripts as a guide to create their own, since every modeling project will have different requirements, data, and goals, but ultimately most follow a similar chained pattern.</p>
<p>The example’s config file is <code>example_profile/example_config.yaml</code> and its Snakefile is in <code>example_profile/Snakefile</code>. The config file is already configured to process the sensor data as explained in <a href="#analysis-workflow-modules">Analysis workflow modules</a>.</p>
<h2 id="description-of-the-study-modeled-in-our-analysis-workflow-example">Description of the study modeled in our analysis workflow example</h2>
<p>Our example is based on a hypothetical study that recruited two participants who underwent surgery and collected mobile data for at least one week before and one week after the procedure. Participants wore a Fitbit device and installed the AWARE client on their personal Android and iOS smartphones to collect mobile data 24/7. In addition, participants completed daily severity ratings of 12 common symptoms on a scale from 0 to 10 that we summed into a daily symptom burden score.</p>
<p>The goal of this workflow is to find out whether we can predict the daily symptom burden score of a participant. Thus, we framed this question as a binary classification problem with two classes, high and low symptom burden, defined by whether a day's score is above or below each participant's average. We also want to compare the performance of individual (personalized) models against a population model.</p>
<p>In total, our example workflow has nine steps covering sensor data preprocessing, feature extraction, feature cleaning, machine learning model training, and model evaluation (see figure below). We ship this workflow with RAPIDS and share a database with <a href="https://osf.io/skqfv/files/">test data</a> in an Open Science Framework repository.</p>
<h2 id="configure-and-run-the-analysis-workflow-example">Configure and run the analysis workflow example</h2>
<li>Configure the <a href="../../setup/configuration/#database-credentials">user credentials</a> of a local or remote MySQL server with write permissions in your <code>.env</code> file. </li>
<li>Unzip the <a href="https://osf.io/skqfv/files/">test database</a> to <code>data/external/rapids_example.sql</code> and run:
<details class="info"><summary>1. Feature extraction</summary><p>We extract daily behavioral features for data yield; received and sent messages; missed, incoming, and outgoing calls; fused location data resampled with the Doryab provider; activity recognition; battery; Bluetooth; screen; light; applications foreground; conversations; Wi-Fi connected; Wi-Fi visible; Fitbit heart rate summary and intraday data; Fitbit sleep summary data; and Fitbit step summary and intraday data (without excluding sleep periods and with an active bout threshold of 10 steps). In total, we obtained 237 daily sensor features over 12 days per participant.</p>
</details>
<details class="info"><summary>2. Extract demographic data.</summary><p>It is common to have demographic data in addition to mobile and target (ground truth) data. In this example we include participants’ age, gender, and the number of days they spent in hospital after their surgery as features in our model. We extract these three columns from the participant_info table of our test database. As these three features remain the same within participants, they are used only in the population model. Refer to the <code>demographic_features</code> rule in <code>rules/models.smk</code>.</p>
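<p>As a hedged illustration (not the actual RAPIDS script behind this rule), pulling these three columns could look like the following; the connection URL and the column names are assumptions, since in RAPIDS the credentials come from the <code>.env</code> file.</p>
<pre><code>import pandas as pd
from sqlalchemy import create_engine

# Placeholder connection string; RAPIDS reads real credentials from the .env file.
engine = create_engine("mysql+pymysql://user:password@localhost/rapids_example")

# Column names are assumed for this sketch.
demographics = pd.read_sql(
    "SELECT pid, age, gender, inpatientdays FROM participant_info", engine
)
demographics.to_csv("data/processed/demographic_features.csv", index=False)
</code></pre>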
</details>
<details class="info"><summary>3. Create target labels.</summary><p>The two classes for our machine learning binary classification problem are high and low symptom burden. Target values are already stored in the <code>participant_target</code> table of our test database and transferred to a CSV file. A new rule/script can be created if further manipulation is necessary. Refer to the <code>parse_targets</code> rule in <code>rules/models.smk</code>.</p>
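<p>For reference, labeling each day as high or low burden relative to a participant's own average score (as described above) could be sketched as follows; the file path and column names are hypothetical, not those of the RAPIDS rule.</p>
<pre><code>import pandas as pd

# Hypothetical columns: pid, local_date, symptom_burden (the summed daily severity score).
targets = pd.read_csv("data/processed/targets.csv")

# Label a day as high burden (1) when its score exceeds that participant's own average.
participant_mean = targets.groupby("pid")["symptom_burden"].transform("mean")
targets["target"] = targets["symptom_burden"].gt(participant_mean).astype(int)

targets.to_csv("data/processed/parsed_targets.csv", index=False)
</code></pre>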
</details>
<details class="info"><summary>4. Feature merging.</summary><p>These daily features are stored in a CSV file per sensor, a CSV file per participant, and a CSV file including all features from all participants (in every case each column represents a feature and each row represents a day). Refer to the <code>merge_sensor_features_for_individual_participants</code> and <code>merge_features_for_population_model</code> rules in <code>rules/features.smk</code>.</p>
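<p>A minimal sketch of the per-participant merge, assuming every per-sensor file shares a date column (here called <code>local_date</code>); the folder and file names are placeholders rather than the paths used by the RAPIDS rules.</p>
<pre><code>from functools import reduce
from pathlib import Path
import pandas as pd

# Hypothetical location of one participant's per-sensor daily feature files.
sensor_files = sorted(Path("data/processed/p01").glob("*_features.csv"))
sensor_frames = [pd.read_csv(f) for f in sensor_files]

# Outer-join every sensor file on the shared date column so each row stays one day.
merged = reduce(lambda left, right: left.merge(right, on="local_date", how="outer"),
                sensor_frames)
merged.to_csv("data/processed/p01/all_sensor_features.csv", index=False)
</code></pre>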
</details>
<details class="info"><summary>5. Data visualization.</summary><p>At this point the user can use the five plots RAPIDS provides (or implement new ones) to explore and understand the quality of the raw data and extracted features and decide what sensors, days, or participants to include and exclude. Refer to <code>rules/reports.smk</code> to find the rules that generate these plots.</p>
</details>
<details class="info"><summary>6. Feature cleaning.</summary><p>In this stage we perform four steps to clean our sensor feature file. First, we discard days with a data yield hour ratio less than or equal to 0.75, i.e. we include days with at least 18 hours of data. Second, we drop columns (features) with more than 30% of missing rows. Third, we drop columns with zero variance. Fourth, we drop rows (days) with more than 30% of missing columns (features). The parameters for this cleaning stage are exposed in <code>example_profile/example_config.yaml</code>.</p>
<p>After this step, we kept 162 features over 11 days for the individual model of p01, 107 features over 12 days for the individual model of p02, and 101 features over 20 days for the population model. Note that the difference in the number of features between p01 and p02 is mostly due to iOS restrictions that prevent researchers from collecting data from as many sensors as on Android phones.</p>
<p>Feature cleaning for the individual models is done in the <code>clean_sensor_features_for_individual_participants</code> rule and for the population model in the <code>clean_sensor_features_for_all_participants</code> rule in <code>rules/models.smk</code>.</p>
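<p>The four cleaning steps map naturally onto a few pandas operations. The sketch below illustrates them under assumed file paths and an assumed data yield column name; it is not the exact RAPIDS cleaning script.</p>
<pre><code>import pandas as pd

features = pd.read_csv("data/processed/p01/all_sensor_features.csv")  # hypothetical path

# 1. Keep only days whose data yield hour ratio is above 0.75 (column name assumed).
features = features[features["ratiovalidyieldedhours"].gt(0.75)]

# 2. Drop columns (features) with more than 30% missing rows.
features = features.loc[:, features.isna().mean().le(0.30)]

# 3. Drop zero-variance columns (approximated here as columns with a single unique value).
features = features.loc[:, features.nunique(dropna=True).gt(1)]

# 4. Drop rows (days) with more than 30% missing columns.
features = features.loc[features.isna().mean(axis=1).le(0.30)]

features.to_csv("data/processed/p01/clean_sensor_features.csv", index=False)
</code></pre>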
</details>
<details class="info"><summary>7. Merge features and targets.</summary><p>In this step we merge the cleaned features and target labels for our individual models in the <code>merge_features_and_targets_for_individual_model</code> rule in <code>rules/models.smk</code>. Additionally, we merge the cleaned features, target labels, and demographic features of our two participants for the population model in the <code>merge_features_and_targets_for_population_model</code> rule in <code>rules/models.smk</code>. These two merged files are the input for our individual and population models.</p>
</details>
<details class="info"><summary>8. Modeling.</summary><p>This stage has three phases: model building, training, and evaluation.</p>
<p>In the building phase we impute, normalize, and oversample our dataset. Missing numeric values in each column are imputed with their mean and missing categorical values with their mode. We normalize each numeric column with one of three strategies (min-max, z-score, or scikit-learn's robust scaler) and we one-hot encode each categorical feature as a numerical array. We oversample our imbalanced dataset using SMOTE (Synthetic Minority Over-sampling Technique) or a random oversampler, both from the imbalanced-learn package. All these parameters are exposed in <code>example_profile/example_config.yaml</code>.</p>
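<p>The following is a hedged sketch of these building choices with scikit-learn and imbalanced-learn (mean/mode imputation, min-max scaling, one-hot encoding, and SMOTE); the input file and the <code>target</code> column name are assumptions, and the RAPIDS scripts expose these choices as parameters instead of hard-coding them.</p>
<pre><code>import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.impute import SimpleImputer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler, OneHotEncoder
from imblearn.over_sampling import SMOTE

features = pd.read_csv("data/processed/features_and_targets.csv")  # hypothetical path
target = features.pop("target")

numeric_cols = features.select_dtypes(include="number").columns
categorical_cols = features.columns.difference(numeric_cols)

preprocess = ColumnTransformer([
    # Numeric columns: mean imputation followed by min-max normalization.
    ("num", Pipeline([("impute", SimpleImputer(strategy="mean")),
                      ("scale", MinMaxScaler())]), numeric_cols),
    # Categorical columns: mode imputation followed by one-hot encoding.
    ("cat", Pipeline([("impute", SimpleImputer(strategy="most_frequent")),
                      ("onehot", OneHotEncoder(handle_unknown="ignore"))]), categorical_cols),
])

X = preprocess.fit_transform(features)
X_resampled, y_resampled = SMOTE(random_state=0).fit_resample(X, target)
</code></pre>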
<p>In the training phase, we create eight models: logistic regression, k-nearest neighbors, support vector machine, decision tree, random forest, gradient boosting classifier, extreme gradient boosting classifier, and a light gradient boosting machine. We cross-validate each model with an inner cycle to tune hyper-parameters based on the macro F1 score and an outer cycle to evaluate the model with the best hyper-parameters on the held-out test set. Both cross-validation cycles use a leave-one-participant-out strategy. Parameters for each model, such as class weights and learning rates, are exposed in <code>example_profile/example_config.yaml</code>.</p>
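<p>A minimal sketch of the nested cross-validation idea with scikit-learn, using a random forest as one of the eight model types: an inner <code>GridSearchCV</code> tunes hyper-parameters on macro F1 and an outer leave-one-group-out loop predicts the held-out fold. <code>X</code>, <code>y</code>, and <code>groups</code> (participant ids) are assumed NumPy arrays, and the parameter grid is illustrative only.</p>
<pre><code>from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, LeaveOneGroupOut

outer_cv = LeaveOneGroupOut()
param_grid = {"n_estimators": [10, 100], "max_depth": [None, 5]}  # illustrative values

for train_idx, test_idx in outer_cv.split(X, y, groups=groups):
    # Inner cycle: tune hyper-parameters on macro F1 within the training folds.
    inner_search = GridSearchCV(RandomForestClassifier(random_state=0), param_grid,
                                scoring="f1_macro", cv=LeaveOneGroupOut())
    inner_search.fit(X[train_idx], y[train_idx], groups=groups[train_idx])

    # Outer cycle: predict the held-out fold with the best hyper-parameters.
    fold_predictions = inner_search.best_estimator_.predict(X[test_idx])
</code></pre>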
<p>Finally, in the evaluation phase we compute the accuracy, macro F1, kappa, area under the curve (AUC), and per-class precision, recall, and F1 score over all folds of the outer cross-validation cycle.</p>
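<p>These metrics are all available in scikit-learn; the sketch below assumes <code>y_true</code>, <code>y_pred</code>, and <code>y_proba</code> (the predicted probability of the high-burden class) have been collected across the outer folds.</p>
<pre><code>from sklearn.metrics import (accuracy_score, cohen_kappa_score, f1_score,
                             precision_recall_fscore_support, roc_auc_score)

accuracy = accuracy_score(y_true, y_pred)
macro_f1 = f1_score(y_true, y_pred, average="macro")
kappa = cohen_kappa_score(y_true, y_pred)
auc = roc_auc_score(y_true, y_proba)  # probability of the high-burden class

# Per-class precision, recall, and F1 for the low (0) and high (1) burden classes.
precision, recall, f1, _ = precision_recall_fscore_support(y_true, y_pred, labels=[0, 1])
</code></pre>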
<p>Refer to the <code>modelling_for_individual_participants</code> rule for the individual modeling and to the <code>modelling_for_all_participants</code> rule for the population modeling, both in <code>rules/models.smk</code>.</p>
</details>
<details class="info"><summary>9. Compute model baselines.</summary><p>We create three baselines to evaluate our classification models.</p>
<p>First, a majority classifier that labels each test sample with the majority class of our training data. Second, a random weighted classifier that predicts each test observation by sampling at random from a binomial distribution based on the ratio of our target labels. Third, a decision tree classifier based solely on the demographic features of each participant. As we do not have demographic features for the individual models, this baseline is only available for the population model.</p>
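<p>The first two baselines roughly correspond to scikit-learn's <code>DummyClassifier</code> strategies, and the third to a plain decision tree, as sketched below; <code>X_train</code>, <code>y_train</code>, and <code>X_train_demographics</code> are assumed training arrays, not names from the RAPIDS scripts.</p>
<pre><code>from sklearn.dummy import DummyClassifier
from sklearn.tree import DecisionTreeClassifier

# Majority baseline: always predicts the most frequent class seen in training.
majority = DummyClassifier(strategy="most_frequent").fit(X_train, y_train)

# Random weighted baseline: samples predictions according to the training label ratio.
random_weighted = DummyClassifier(strategy="stratified", random_state=0).fit(X_train, y_train)

# Demographic baseline (population model only): a decision tree on age, gender, etc.
demographic_tree = DecisionTreeClassifier(random_state=0).fit(X_train_demographics, y_train)
</code></pre>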
<p>Our baseline metrics (e.g. accuracy, precision, etc.) are saved into a CSV file, ready to be compared to our modeling results. Refer to the <code>baselines_for_individual_model</code> rule for the individual model baselines and to the <code>baselines_for_population_model</code> rule for the population model baselines, both in <code>rules/models.smk</code>.</p>